Sample records for values springer series

  1. Official portrait of astronaut Robert C. Springer

    NASA Technical Reports Server (NTRS)

    1988-01-01

    Official portrait of astronaut Robert C. Springer, United States Marine Corps (USMC) Colonel, member of Astronaut Class 9 (1980), and mission specialist. Springer wears a launch and entry suit (LES) while holding his helmet.

  2. Living in a Jerry Springer World

    ERIC Educational Resources Information Center

    Houston, Paul D.

    2005-01-01

    The author admits that he has watched Jerry Springer on occasion. It is a guilty pleasure. The Springer show has come to represent the extremes in society: perversion, unlikely pairings, lying, and cheating. Liberal Hollywood has been roundly criticized, and justifiably so, over the direction it has taken with much of the entertainment to the…

  3. STS-38 MS Springer climbs through CCT side hatch prior to egress training

    NASA Image and Video Library

    1990-03-05

    STS-38 Mission Specialist (MS) Robert C. Springer, wearing launch and entry suit (LES), climbs through the side hatch of the crew compartment trainer (CCT) located in JSC's Mockup and Integration Laboratory (MAIL) Bldg 9A. Springer will practice emergency egress through the side hatch using the crew escape system (CES) pole (at Springer's left). The inflated safety cushion under Springer will break his fall as he rolls out of the side hatch.

  4. STS-38 MS Springer climbs through CCT side hatch prior to egress training

    NASA Technical Reports Server (NTRS)

    1990-01-01

    STS-38 Mission Specialist (MS) Robert C. Springer, wearing launch and entry suit (LES), climbs through the side hatch of the crew compartment trainer (CCT) located in JSC's Mockup and Integration Laboratory (MAIL) Bldg 9A. Springer will practice emergency egress through the side hatch using the crew escape system (CES) pole (at Springer's left). The inflated safety cushion under Springer will break his fall as he rolls out of the side hatch.

  5. Filtrations on Springer fiber cohomology and Kostka polynomials

    NASA Astrophysics Data System (ADS)

    Bellamy, Gwyn; Schedler, Travis

    2018-03-01

    We prove a conjecture which expresses the bigraded Poisson-de Rham homology of the nilpotent cone of a semisimple Lie algebra in terms of the generalized (one-variable) Kostka polynomials, via a formula suggested by Lusztig. This allows us to construct a canonical family of filtrations on the flag variety cohomology, and hence on irreducible representations of the Weyl group, whose Hilbert series are given by the generalized Kostka polynomials. We deduce consequences for the cohomology of all Springer fibers. In particular, this computes the grading on the zeroth Poisson homology of all classical finite W-algebras, as well as the filtration on the zeroth Hochschild homology of all quantum finite W-algebras, and we generalize to all homology degrees. As a consequence, we deduce a conjecture of Proudfoot on symplectic duality, relating in type A the Poisson homology of Slodowy slices to the intersection cohomology of nilpotent orbit closures. In the last section, we give an analogue of our main theorem in the setting of mirabolic D-modules.

  6. Heritability of lenticular myopia in English Springer spaniels.

    PubMed

    Kubai, Melissa A; Labelle, Amber L; Hamor, Ralph E; Mutti, Donald O; Famula, Thomas R; Murphy, Christopher J

    2013-11-08

    We determined whether naturally-occurring lenticular myopia in English Springer spaniels (ESS) has a genetic component. Streak retinoscopy was performed on 226 related ESS 30 minutes after the onset of pharmacologic mydriasis and cycloplegia. A pedigree was constructed to determine relationships between affected offspring and parents. Estimation of heritability was done in a Bayesian analysis (facilitated by the MCMCglmm package of R) of refractive error in a model, including terms for sex and coat color. Myopia was defined as ≤-0.5 diopters (D) spherical equivalent. The median refractive error for ESS was 0.25 D (range, -3.5 to +4.5 D). Median age was 0.2 years (range, 0.1-15 years). The prevalence of myopia in related ESS was 19% (42/226). The ESS had a strong correlation (r = 0.95) for refractive error between the two eyes. Moderate heritability was present for refractive error with a mean value of 0.29 (95% highest probability density, 0.07-0.50). The distribution of refractive error, and subsequently lenticular myopia, has a moderate genetic component in ESS. Further investigation of genes responsible for regulation of the development of refractive ocular components in canines is warranted.
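The classification rule stated above (myopia defined as a spherical equivalent of ≤ -0.5 D) can be sketched as a small script; the function names and the sample refractive errors below are illustrative, not from the study's data.

```python
# Sketch of the study's classification rule: an eye is myopic when the
# spherical-equivalent refractive error is <= -0.5 diopters (D).
# Sample values are invented for illustration.

def is_myopic(spherical_equivalent_d: float, cutoff_d: float = -0.5) -> bool:
    """Classify a refractive error (in diopters) as myopic."""
    return spherical_equivalent_d <= cutoff_d

def prevalence(refractive_errors_d):
    """Fraction of subjects classified as myopic."""
    flags = [is_myopic(re) for re in refractive_errors_d]
    return sum(flags) / len(flags)

# Illustrative cohort: -3.5 D (myopic), +0.25 D, +4.5 D, -0.5 D (boundary, myopic)
sample = [-3.5, 0.25, 4.5, -0.5]
print(prevalence(sample))  # -> 0.5
```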

  7. Recurrent Neural Networks for Multivariate Time Series with Missing Values.

    PubMed

    Che, Zhengping; Purushotham, Sanjay; Cho, Kyunghyun; Sontag, David; Liu, Yan

    2018-04-17

    Multivariate time series data in practical applications, such as health care, geoscience, and biology, are characterized by a variety of missing values. In time series prediction and other related tasks, it has been noted that missing values and their missing patterns are often correlated with the target labels, a.k.a. informative missingness. There is very limited work on exploiting the missing patterns for effective imputation and improving prediction performance. In this paper, we develop novel deep learning models, namely GRU-D, as one of the early attempts. GRU-D is based on Gated Recurrent Unit (GRU), a state-of-the-art recurrent neural network. It takes two representations of missing patterns, i.e., masking and time interval, and effectively incorporates them into a deep model architecture so that it not only captures the long-term temporal dependencies in time series, but also utilizes the missing patterns to achieve better prediction results. Experiments on time series classification tasks on real-world clinical datasets (MIMIC-III, PhysioNet) and synthetic datasets demonstrate that our models achieve state-of-the-art performance and provide useful insights for better understanding and utilization of missing values in time series analysis.
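The two missingness representations named above (masking and time interval) can be sketched directly; the decay form and the fixed weights below follow the spirit of the paper's trainable decay but are illustrative assumptions, not the published parameterization.

```python
import numpy as np

# Hedged sketch of the two inputs GRU-D consumes: a binary mask m_t
# (1 = observed) and the time interval delta_t since the last observation.
# gamma = exp(-max(0, w*delta + b)) mirrors the paper's decay idea;
# w and b here are fixed illustrative values, not learned parameters.

def mask_and_interval(x):
    """x: (T, D) array with np.nan marking missing entries."""
    m = (~np.isnan(x)).astype(float)          # mask: 1 if observed
    T, D = x.shape
    delta = np.zeros((T, D))
    for t in range(1, T):
        # interval grows by one step, resetting after an observation
        delta[t] = np.where(m[t - 1] == 1, 1.0, delta[t - 1] + 1.0)
    return m, delta

def decay(delta, w=0.5, b=0.0):
    """Exponential decay in (0, 1]; larger gaps get smaller weight."""
    return np.exp(-np.maximum(0.0, w * delta + b))

x = np.array([[1.0], [np.nan], [np.nan], [2.0]])
m, d = mask_and_interval(x)
print(m.ravel())  # [1. 0. 0. 1.]
print(d.ravel())  # [0. 1. 2. 3.]
```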

  8. [A series about the value of physical examination].

    PubMed

    de Jongh, T O H; Zaat, J O M

    2010-01-01

    This article is the introduction to a new series in the Nederlands Tijdschrift voor Geneeskunde about the value of physical examination. Associated with this series, on the website (www.ntvg.nl) there are chapters of the new textbook on physical examination and films about carrying out physical examinations. Although physical examination is an essential part of the diagnostic process, often little attention is paid to the correct execution of the examination and there is insufficient knowledge of the value of the findings. The diagnostic process usually involves analysing all the information from the patient's history and a physical examination. However, research has only been done on the value of specific tests and even that is very limited. The most important measure we use for the results of a physical examination is the likelihood ratio, which shows how the likelihood of presence or absence of a disease changes depending on the examination results.
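The likelihood ratio described above can be made concrete with a short worked example: the positive likelihood ratio is sensitivity / (1 - specificity), and it converts pre-test odds into post-test odds. The sensitivity, specificity, and pre-test probability below are invented for illustration.

```python
# Minimal worked example of the likelihood ratio (LR) for a physical
# examination finding. All numbers are illustrative assumptions.

def positive_lr(sensitivity, specificity):
    """LR+ = sensitivity / (1 - specificity)."""
    return sensitivity / (1.0 - specificity)

def post_test_probability(pre_test_prob, lr):
    """Convert probability -> odds, multiply by LR, convert back."""
    odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = odds * lr
    return post_odds / (1.0 + post_odds)

lr_pos = positive_lr(0.9, 0.8)           # 0.9 / 0.2 = 4.5
p = post_test_probability(0.2, lr_pos)   # odds 0.25 -> 1.125 -> prob ~0.53
print(round(lr_pos, 2), round(p, 3))
```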

  9. Invited Speaker Support for SBP Conference Series (SBP 2014) held in April, 2014 in Washington, DC.

    DTIC Science & Technology

    2014-07-23

    Experts in Social Media: Wenhui Liao, Sameena Shah and Masoud Makrehchi. Talk 2: Predicting Social Ties in Massively Multiplayer Online Games, Jina... SBP Poster: The Needs of Metaphor, David Bracewell. SBP Poster: Predicting Guild Membership in Massively Multiplayer Online Games. The proceedings appeared in a Springer Lecture Notes in Computer Science series volume and can be accessed online through the following link: http://link.springer.com/book/10.1007%2F978-3-319-05579-4 (Springer

  10. A time series model: First-order integer-valued autoregressive (INAR(1))

    NASA Astrophysics Data System (ADS)

    Simarmata, D. M.; Novkaniza, F.; Widyaningsih, Y.

    2017-07-01

    Nonnegative integer-valued time series arise in many applications. The first-order Integer-valued AutoRegressive model, INAR(1), is constructed with the binomial thinning operator to model nonnegative integer-valued time series. INAR(1) depends on the process one period before. The parameter of the model can be estimated by Conditional Least Squares (CLS). The specification of INAR(1) follows that of AR(1). Forecasting in INAR(1) uses the median or the Bayesian forecasting methodology. The median forecasting methodology takes the least integer s at which the cumulative distribution function (CDF) is greater than or equal to 0.5. The Bayesian forecasting methodology forecasts h steps ahead by generating the parameter of the model and the parameter of the innovation term using Adaptive Rejection Metropolis Sampling within Gibbs sampling (ARMS), then finding the least integer s at which the CDF is greater than or equal to u, where u is a value drawn from the Uniform(0,1) distribution. INAR(1) is applied to monthly pneumonia cases in Penjaringan, Jakarta Utara, from January 2008 to April 2016.
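The binomial thinning step and the median forecast described above can be sketched in a few lines. The parameter values and the Poisson choice for the innovation term are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

# Sketch of one INAR(1) transition, X_t = alpha ∘ X_{t-1} + eps_t, where
# ∘ is binomial thinning, plus the median forecast rule: the least integer s
# with CDF(s) >= 0.5. alpha, lambda and the starting count are illustrative.

rng = np.random.default_rng(0)

def inar1_step(x_prev, alpha, lam):
    survivors = rng.binomial(x_prev, alpha)   # binomial thinning alpha ∘ X_{t-1}
    innovation = rng.poisson(lam)             # new arrivals (assumed Poisson here)
    return survivors + innovation

def median_forecast(samples):
    """Least integer s with empirical CDF(s) >= 0.5."""
    samples = np.asarray(samples)
    for s in range(int(samples.max()) + 1):
        if np.mean(samples <= s) >= 0.5:
            return s
    return int(samples.max())

# One-step-ahead simulated forecast distribution from X_{t-1} = 10
sims = [inar1_step(10, alpha=0.6, lam=2.0) for _ in range(2000)]
print(median_forecast(sims))  # near E[X_t] = 0.6*10 + 2 = 8
```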

  11. Complex-valued time-series correlation increases sensitivity in FMRI analysis.

    PubMed

    Kociuba, Mary C; Rowe, Daniel B

    2016-07-01

    To develop a linear matrix representation of correlation between complex-valued (CV) time-series in the temporal Fourier frequency domain, and demonstrate its increased sensitivity over correlation between magnitude-only (MO) time-series in functional MRI (fMRI) analysis. The standard in fMRI is to discard the phase before the statistical analysis of the data, despite evidence of task related change in the phase time-series. With a real-valued isomorphism representation of Fourier reconstruction, correlation is computed in the temporal frequency domain with CV time-series data, rather than with the standard of MO data. A MATLAB simulation compares the Fisher-z transform of MO and CV correlations for varying degrees of task related magnitude and phase amplitude change in the time-series. The increased sensitivity of the complex-valued Fourier representation of correlation is also demonstrated with experimental human data. Since the correlation description in the temporal frequency domain is represented as a summation of second order temporal frequencies, the correlation is easily divided into experimentally relevant frequency bands for each voxel's temporal frequency spectrum. The MO and CV correlations for the experimental human data are analyzed for four voxels of interest (VOIs) to show the framework with high and low contrast-to-noise ratios in the motor cortex and the supplementary motor cortex. The simulation demonstrates the increased strength of CV correlations over MO correlations for low magnitude contrast-to-noise time-series. In the experimental human data, the MO correlation maps are noisier than the CV maps, and it is more difficult to distinguish the motor cortex in the MO correlation maps after spatial processing. Including both magnitude and phase in the spatial correlation computations more accurately defines the correlated left and right motor cortices. Sensitivity in correlation analysis is important to preserve the signal of interest in fMRI analysis.
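The magnitude-only versus complex-valued contrast above can be demonstrated with a toy signal: if task-related change lives almost entirely in the phase, MO correlation misses it while CV correlation does not. The synthetic signals, noise levels, and the simple complex correlation coefficient below are illustrative stand-ins for the paper's Fourier-domain framework.

```python
import numpy as np

# Toy contrast between magnitude-only (MO) and complex-valued (CV)
# correlation. Task-related change is placed in the phase only, with pure
# noise in the magnitude; all signal parameters are invented assumptions.

rng = np.random.default_rng(1)
t = np.arange(200)
task = (np.sin(2 * np.pi * t / 40) > 0).astype(float)    # block-design reference

phase = 0.3 * task + 0.02 * rng.standard_normal(t.size)  # task lives in the phase
mag = 1.0 + 0.02 * rng.standard_normal(t.size)           # magnitude: noise only
z = mag * np.exp(1j * phase)

def mo_correlation(z, ref):
    """Pearson correlation of the magnitude time-series with the reference."""
    return np.corrcoef(np.abs(z), ref)[0, 1]

def cv_correlation(z, ref):
    """Magnitude of the complex correlation coefficient with a real reference."""
    zc, rc = z - z.mean(), ref - ref.mean()
    return np.abs(np.vdot(zc, rc)) / (np.linalg.norm(zc) * np.linalg.norm(rc))

mo, cv = mo_correlation(z, task), cv_correlation(z, task)
print(round(abs(mo), 3), round(cv, 3))  # CV picks up the phase-borne signal
```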

  12. Values in Higher Education. The Wilson Lecture Series.

    ERIC Educational Resources Information Center

    Wilson, O. Meredith

    The text of a lecture in the University of Arizona Wilson Lecture Series on values in higher education is presented, with responses by Richard H. Gallagher, Jeanne McRae McCarthy, and Raymond H. Thompson. The theme of the talk is that man is by evolution and by necessity a thinking animal, who now finds himself in a technologically dependent…

  13. Large scale variability, long-term trends and extreme events in total ozone over the northern mid-latitudes based on satellite time series

    NASA Astrophysics Data System (ADS)

    Rieder, H. E.; Staehelin, J.; Maeder, J. A.; Ribatet, M.; Davison, A. C.

    2009-04-01

    Various generations of satellites (e.g. TOMS, GOME, OMI) made spatial datasets of column ozone available to the scientific community. This study has a special focus on column ozone over the northern mid-latitudes. Tools from geostatistics and extreme value theory are applied to analyze variability, long-term trends and frequency distributions of extreme events in total ozone. In a recent case study (Rieder et al., 2009) new tools from extreme value theory (Coles, 2001; Ribatet, 2007) have been applied to the world's longest total ozone record from Arosa, Switzerland (e.g. Staehelin 1998a,b), in order to describe extreme events in low and high total ozone. Within the current study this analysis is extended to satellite datasets for the northern mid-latitudes. Further, special emphasis is given to patterns and spatial correlations and the influence of changes in atmospheric dynamics (e.g. tropospheric and lower stratospheric pressure systems) on column ozone. References: Coles, S.: An Introduction to Statistical Modeling of Extreme Values, Springer Series in Statistics, ISBN:1852334592, Springer, Berlin, 2001. Ribatet, M.: POT: Modelling peaks over a threshold, R News, 7, 34-36, 2007. Rieder, H.E., Staehelin, J., Maeder, J.A., Ribatet, M., Stübi, R., Weihs, P., Holawe, F., Peter, T., and Davison, A.C.: From ozone mini holes and mini highs towards extreme value theory: New insights from extreme events and non stationarity, submitted to J. Geophys. Res., 2009. Staehelin, J., Kegel, R., and Harris, N. R.: Trend analysis of the homogenized total ozone series of Arosa (Switzerland), 1929-1996, J. Geophys. Res., 103(D7), 8389-8400, doi:10.1029/97JD03650, 1998a. Staehelin, J., Renaud, A., Bader, J., McPeters, R., Viatte, P., Hoegger, B., Bugnion, V., Giroud, M., and Schill, H.: Total ozone series at Arosa (Switzerland): Homogenization and data comparison, J. Geophys. Res., 103(D5), 5827-5842, doi:10.1029/97JD02402, 1998b.

  14. Hierarchical time series bottom-up approach for forecast the export value in Central Java

    NASA Astrophysics Data System (ADS)

    Mahkya, D. A.; Ulama, B. S.; Suhartono

    2017-10-01

    The purpose of this study is to obtain the best model for predicting the export value of Central Java using a hierarchical time series approach. The export value is an injection variable in a country's economy: if the export value increases, the country's economy grows further. Appropriate modeling is therefore needed to predict the export value, especially in Central Java. Export values in Central Java are grouped into 21 commodities, each with a different pattern. One hierarchical approach that can be used for time series is the bottom-up method, which is adopted here. To forecast the individual series at all levels, Autoregressive Integrated Moving Average (ARIMA), Radial Basis Function Neural Network (RBFNN), and hybrid ARIMA-RBFNN models are used, and the symmetric Mean Absolute Percentage Error (sMAPE) is used to select the best model. The analysis shows that, for the export value of Central Java, the bottom-up approach with hybrid ARIMA-RBFNN modeling can be used for long-term predictions, while for short- and medium-term predictions the bottom-up approach with RBFNN modeling can be used. Overall, the bottom-up approach with RBFNN modeling gives the best results.
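The bottom-up idea and the sMAPE criterion above can be sketched compactly: forecast each commodity series separately, then sum the forecasts to get the aggregate. The naive "last value" forecasts below are placeholders for the ARIMA/RBFNN models; all numbers are invented.

```python
import numpy as np

# Bottom-up hierarchical forecasting sketch: the aggregate forecast is the
# sum of the bottom-level (per-commodity) forecasts at each horizon.
# sMAPE is the selection criterion named in the abstract.

def bottom_up(bottom_forecasts):
    """Aggregate forecast = sum of bottom-level forecasts, per horizon."""
    return np.sum(np.asarray(bottom_forecasts), axis=0)

def smape(actual, forecast):
    actual = np.asarray(actual, float)
    forecast = np.asarray(forecast, float)
    return 100.0 * np.mean(2.0 * np.abs(forecast - actual)
                           / (np.abs(actual) + np.abs(forecast)))

# Two toy commodity series; naive "last observed value" forecasts, 3 steps ahead
commodity_a = [10.0, 12.0, 11.0]
commodity_b = [5.0, 4.0, 6.0]
fa = np.full(3, commodity_a[-1])
fb = np.full(3, commodity_b[-1])
total_forecast = bottom_up([fa, fb])
print(total_forecast)  # [17. 17. 17.]
print(round(smape([17, 18, 16], total_forecast), 2))
```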

  15. Two-pass imputation algorithm for missing value estimation in gene expression time series.

    PubMed

    Tsiporkova, Elena; Boeva, Veselka

    2007-10-01

    Gene expression microarray experiments frequently generate datasets with multiple values missing. However, most of the analysis, mining, and classification methods for gene expression data require a complete matrix of gene array values. Therefore, the accurate estimation of missing values in such datasets has been recognized as an important issue, and several imputation algorithms have already been proposed to the biological community. Most of these approaches, however, are not particularly suitable for time series expression profiles. In view of this, we propose a novel imputation algorithm, which is specially suited for the estimation of missing values in gene expression time series data. The algorithm utilizes Dynamic Time Warping (DTW) distance in order to measure the similarity between time expression profiles, and subsequently selects for each gene expression profile with missing values a dedicated set of candidate profiles for estimation. Three different DTW-based imputation (DTWimpute) algorithms have been considered: position-wise, neighborhood-wise, and two-pass imputation. These have initially been prototyped in Perl, and their accuracy has been evaluated on yeast expression time series data using several different parameter settings. The experiments have shown that the two-pass algorithm consistently outperforms, in particular for datasets with a higher level of missing entries, the neighborhood-wise and the position-wise algorithms. The performance of the two-pass DTWimpute algorithm has further been benchmarked against the weighted K-Nearest Neighbors algorithm, which is widely used in the biological community; the former algorithm has appeared superior to the latter one. Motivated by these findings, indicating clearly the added value of the DTW techniques for missing value estimation in time series data, we have built an optimized C++ implementation of the two-pass DTWimpute algorithm. 
The software also provides for a choice between three different
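The DTW distance that the imputation algorithm above relies on is a classic dynamic program; a compact implementation follows. The function name and toy profiles are illustrative, and the paper's DTWimpute pipeline adds candidate-profile selection and imputation on top of this distance.

```python
# Classic dynamic-programming DTW distance with absolute-difference local
# cost, as used to measure similarity between time expression profiles.

def dtw_distance(a, b):
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

# Time-shifted but similarly shaped profiles score well under DTW
print(dtw_distance([0, 1, 2, 1, 0], [0, 0, 1, 2, 1]))  # -> 1.0
```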

  16. On vector-valued Poincaré series of weight 2

    NASA Astrophysics Data System (ADS)

    Meneses, Claudio

    2017-10-01

    Given a pair (Γ, ρ) of a Fuchsian group of the first kind and a unitary representation ρ of Γ of arbitrary rank, the problem of constructing vector-valued Poincaré series of weight 2 is considered. Implications in the theory of parabolic bundles are discussed. When the genus of the group is zero, it is shown how an explicit basis for the space of these functions can be constructed.
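For orientation, the classical scalar Poincaré series of weight k attached to the cusp at infinity can be written as follows (a standard textbook form; normalizations vary, and the vector-valued case twists each term by ρ):

```latex
P_{k,n}(z) \;=\; \sum_{\gamma \in \Gamma_{\infty} \backslash \Gamma}
\frac{e^{2\pi i n\, \gamma(z)}}{(c_{\gamma} z + d_{\gamma})^{k}},
\qquad
\gamma(z) = \frac{a_{\gamma} z + b_{\gamma}}{c_{\gamma} z + d_{\gamma}}.
```

This sum converges absolutely only for k > 2, which is why weight 2, the case treated above, is the delicate borderline requiring a more careful construction.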

  17. 31 CFR 359.55 - How are redemption values calculated for book-entry Series I savings bonds?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... for book-entry Series I savings bonds? 359.55 Section 359.55 Money and Finance: Treasury Regulations... DEBT OFFERING OF UNITED STATES SAVINGS BONDS, SERIES I Book-Entry Series I Savings Bonds § 359.55 How are redemption values calculated for book-entry Series I savings bonds? We base current redemption...

  18. 31 CFR 351.70 - How are redemption values calculated for book-entry Series EE savings bonds?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... for book-entry Series EE savings bonds? 351.70 Section 351.70 Money and Finance: Treasury Regulations... DEBT OFFERING OF UNITED STATES SAVINGS BONDS, SERIES EE Book-Entry Series EE Savings Bonds § 351.70 How are redemption values calculated for book-entry Series EE savings bonds? We base current redemption...

  19. 31 CFR 351.70 - How are redemption values calculated for book-entry Series EE savings bonds?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... for book-entry Series EE savings bonds? 351.70 Section 351.70 Money and Finance: Treasury Regulations... DEBT OFFERING OF UNITED STATES SAVINGS BONDS, SERIES EE Book-Entry Series EE Savings Bonds § 351.70 How are redemption values calculated for book-entry Series EE savings bonds? We base current redemption...

  20. Occurrence of CPPopt Values in Uncorrelated ICP and ABP Time Series.

    PubMed

    Cabeleira, M; Czosnyka, M; Liu, X; Donnelly, J; Smielewski, P

    2018-01-01

    Optimal cerebral perfusion pressure (CPPopt) is a concept that uses the pressure reactivity (PRx)-CPP relationship over a given period to find a value of CPP at which PRx shows best autoregulation. It has been proposed that this relationship be modelled by a U-shaped curve, where the minimum is interpreted as being the CPP value that corresponds to the strongest autoregulation. Owing to the nature of the calculation and the signals involved in it, the occurrence of CPPopt curves generated by non-physiological variations of intracranial pressure (ICP) and arterial blood pressure (ABP), termed here "false positives", is possible. Such random occurrences would artificially increase the yield of CPPopt values and decrease the reliability of the methodology. In this work, we studied the probability of the random occurrence of false positives and we compared the effect of the parameters used for CPPopt calculation on this probability. To simulate the occurrence of false positives, uncorrelated ICP and ABP time series were generated by destroying the relationship between the waves in real recordings. The CPPopt algorithm was then applied to these new series and the number of false positives was counted for different values of the algorithm's parameters. The percentage of CPPopt curves generated from uncorrelated data was demonstrated to be 11.5%. This value can be minimised by tuning some of the calculation parameters, such as increasing the calculation window and increasing the minimum PRx span accepted on the curve.
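The U-shaped PRx-CPP fit behind CPPopt can be sketched simply: bin PRx by CPP, fit a parabola to the bin means, and take the vertex as CPPopt, rejecting fits with no upward curvature. The bin count, synthetic data, and plain quadratic fit are illustrative simplifications of the clinical algorithm discussed above.

```python
import numpy as np

# Toy CPPopt estimator: bin PRx by CPP, fit a parabola to bin means, and
# return the vertex; a non-convex fit returns None (no U-shape found).
# The synthetic "true" optimum and noise levels are invented assumptions.

def cppopt(cpp, prx, n_bins=8):
    edges = np.linspace(cpp.min(), cpp.max(), n_bins + 1)
    centers, means = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (cpp >= lo) & (cpp < hi)
        if in_bin.any():
            centers.append((lo + hi) / 2.0)
            means.append(prx[in_bin].mean())
    a, b, c = np.polyfit(centers, means, deg=2)
    if a <= 0:
        return None          # no U-shape -> no CPPopt (guards false positives)
    return -b / (2.0 * a)    # vertex of the fitted parabola

rng = np.random.default_rng(2)
cpp = rng.uniform(55.0, 95.0, 600)                 # mmHg
true_opt = 75.0
prx = 0.002 * (cpp - true_opt) ** 2 - 0.3 + 0.05 * rng.standard_normal(cpp.size)
print(round(cppopt(cpp, prx), 1))  # near 75
```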

  1. Extreme events in total ozone over Arosa: Application of extreme value theory and fingerprints of atmospheric dynamics and chemistry and their effects on mean values and long-term changes

    NASA Astrophysics Data System (ADS)

    Rieder, Harald E.; Staehelin, Johannes; Maeder, Jörg A.; Peter, Thomas; Ribatet, Mathieu; Davison, Anthony C.; Stübi, Rene; Weihs, Philipp; Holawe, Franz

    2010-05-01

    ón, Mt. Pinatubo). Furthermore, the atmospheric loading of ozone depleting substances leads to a continuous modification of column ozone in the northern hemisphere also with respect to extreme values (partly again in connection with polar vortex contributions). It is shown that application of extreme value theory allows the identification of many more such fingerprints than conventional time series analysis of annual and seasonal mean values. In particular, the analysis shows the strong influence of dynamics, revealing that even moderate ENSO and NAO events have a discernible effect on total ozone (Rieder et al., 2010b). Overall the presented new extremes concept provides new information on time series properties, variability, trends and the influence of dynamics and chemistry, complementing earlier analyses focusing only on monthly (or annual) mean values. References: Coles, S.: An Introduction to Statistical Modeling of Extreme Values, Springer Series in Statistics, ISBN:1852334592, Springer, Berlin, 2001. Ribatet, M.: POT: Modelling peaks over a threshold, R News, 7, 34-36, 2007. Rieder, H.E., Staehelin, J., Maeder, J.A., Ribatet, M., Stübi, R., Weihs, P., Holawe, F., Peter, T., and Davison, A.C. (2010): Extreme events in total ozone over Arosa - Part I: Application of extreme value theory, to be submitted to ACPD. Rieder, H.E., Staehelin, J., Maeder, J.A., Ribatet, M., Stübi, R., Weihs, P., Holawe, F., Peter, T., and Davison, A.C. (2010): Extreme events in total ozone over Arosa - Part II: Fingerprints of atmospheric dynamics and chemistry and effects on mean values and long-term changes, to be submitted to ACPD. Staehelin, J., Renaud, A., Bader, J., McPeters, R., Viatte, P., Hoegger, B., Bugnion, V., Giroud, M., and Schill, H.: Total ozone series at Arosa (Switzerland): Homogenization and data comparison, J. Geophys. Res., 103(D5), 5827-5842, doi:10.1029/97JD02402, 1998a. Staehelin, J., Kegel, R., and Harris, N.
R.: Trend analysis of the homogenized total ozone series of Arosa (Switzerland), 1929-1996, J. Geophys. Res., 103(D7), 8389-8400, doi:10.1029/97JD03650, 1998b.

  2. Solving ODE Initial Value Problems With Implicit Taylor Series Methods

    NASA Technical Reports Server (NTRS)

    Scott, James R.

    2000-01-01

    In this paper we introduce a new class of numerical methods for integrating ODE initial value problems. Specifically, we propose an extension of the Taylor series method which significantly improves its accuracy and stability while also increasing its range of applicability. To advance the solution from t_n to t_(n+1), we expand a series about the intermediate point t_(n+mu) := t_n + mu*h, where h is the stepsize and mu is an arbitrary parameter called an expansion coefficient. We show that, in general, a Taylor series of degree k has exactly k expansion coefficients which raise its order of accuracy. The accuracy is raised by one order if k is odd, and by two orders if k is even. In addition, if k is three or greater, local extrapolation can be used to raise the accuracy two additional orders. We also examine stability for the problem y' = lambda*y, Re(lambda) < 0, and identify several A-stable schemes. Numerical results are presented for both fixed and variable stepsizes. It is shown that implicit Taylor series methods provide an effective integration tool for most problems, including stiff systems and ODEs with a singular point.
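A plain fixed-step Taylor series integrator for the test problem y' = lambda*y can be sketched as follows; for this linear problem the degree-k update about t_n is y_(n+1) = y_n * sum_{j=0..k} (lambda*h)^j / j!. This is the classical explicit baseline only; the implicit expansion about t_n + mu*h introduced in the paper is not reproduced here.

```python
import math

# Classical explicit Taylor series method for y' = lam*y. For this linear
# problem all derivatives are lam^j * y, so the degree-k step is a truncated
# exponential. The problem parameters below are illustrative.

def taylor_step(y, lam, h, k):
    z = lam * h
    return y * sum(z ** j / math.factorial(j) for j in range(k + 1))

def integrate(y0, lam, t_end, n_steps, k):
    h = t_end / n_steps
    y = y0
    for _ in range(n_steps):
        y = taylor_step(y, lam, h, k)
    return y

approx = integrate(1.0, -1.0, 1.0, n_steps=20, k=4)
exact = math.exp(-1.0)
print(abs(approx - exact))  # small truncation error; shrinks as k grows
```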

  3. The Ethics of Biomedical Big Data : Brent Daniel Mittelstadt and Luciano Floridi, eds. 2016, Springer International Publishing (Cham, Switzerland, 978-3-319-33523-0, 480 pp.).

    PubMed

    Mason, Paul H

    2017-12-01

    The availability of diverse sources of data related to health and illness from various types of modern communication technology presents the possibility of augmenting medical knowledge, clinical care, and the patient experience. New forms of data collection and analysis will undoubtedly transform epidemiology, public health, and clinical practice, but what ethical considerations come in to play? With a view to analysing the ethical and regulatory dimensions of burgeoning forms of biomedical big data, Brent Daniel Mittelstadt and Luciano Floridi have brought together thirty scholars in an edited volume that forms part of Springer's Law, Governance and Technology book series in a collection titled The Ethics of Biomedical Big Data. With eighteen chapters partitioned into six carefully devised sections, this volume engages with core theoretical, ethical, and regulatory challenges posed by biomedical big data.

  4. Extreme Value Theory and the New Sunspot Number Series

    NASA Astrophysics Data System (ADS)

    Acero, F. J.; Carrasco, V. M. S.; Gallego, M. C.; García, J. A.; Vaquero, J. M.

    2017-04-01

    Extreme value theory was employed to study solar activity using the new sunspot number index. The block maxima approach was used at yearly (1700-2015), monthly (1749-2016), and daily (1818-2016) scales, selecting the maximum sunspot number value for each solar cycle, and the peaks-over-threshold (POT) technique was used after a declustering process only for the daily data. Both techniques led to negative values for the shape parameters. This implies that the extreme sunspot number value distribution has an upper bound. The return level (RL) values obtained from the POT approach were greater than when using the block maxima technique. Regarding the POT approach, the 110 year (550 and 1100 year) RLs were lower (higher) than the daily maximum observed sunspot number value of 528. Furthermore, according to the block maxima approach, the 10-cycle RL lay within the block maxima daily sunspot number range, as expected, but it was striking that the 50- and 100-cycle RLs were also within that range. Thus, it would seem that the RL is reaching a plateau, and, although one must be cautious, it would be difficult to attain sunspot number values greater than 550. The extreme value trends from the four series (yearly, monthly, and daily maxima per solar cycle, and POT after declustering the daily data) were analyzed with the Mann-Kendall test and Sen’s method. Only the negative trend of the daily data with the POT technique was statistically significant.
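The block maxima selection described above (one maximum per solar cycle) is easy to sketch; cycle boundaries and the synthetic daily counts below are invented for illustration, and fitting a GEV to the resulting maxima (then checking the sign of its shape parameter) would be the next step.

```python
import numpy as np

# Block maxima extraction: keep the maximum of the series within each block
# (here, each "solar cycle" label). Data and labels are illustrative.

def block_maxima(values, block_ids):
    """Maximum of `values` within each distinct block label."""
    values = np.asarray(values, float)
    block_ids = np.asarray(block_ids)
    return np.array([values[block_ids == b].max()
                     for b in np.unique(block_ids)])

daily = np.array([10, 250, 40, 310, 528, 90, 120, 480, 60])
cycle = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
print(block_maxima(daily, cycle))  # [250. 528. 480.]
```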

  5. Echocardiographic assessments of longitudinal left ventricular function in healthy English Springer spaniels.

    PubMed

    Dickson, D; Shave, R; Rishniw, M; Patteson, M

    2017-08-01

    To establish reference intervals for echocardiographic measures of longitudinal left ventricular function in adult English Springer spaniel (ESS) dogs. This study involved 42 healthy adult ESS. Animals were prospectively recruited from a general practice population in the United Kingdom. Dogs were examined twice, at least 12 months apart, to exclude dogs with progressive cardiac disease. Mitral annular plane systolic excursion, tissue Doppler imaging mitral annular velocities and two-dimensional speckle-tracking echocardiographic left ventricular longitudinal strain and strain rate were measured. Intraoperator and intraobserver variability were examined and reference intervals were calculated. The potential effects of body weight, age and heart rate on these variables were examined. Intraoperator and intraobserver variability was <10% for all parameters except tissue Doppler imaging E' (the peak velocity of early diastolic mitral annular motion as determined by pulsed wave Doppler) and two-dimensional speckle-tracking echocardiographic variables, which were all <20%. Thirty-nine dogs were used to create reference intervals. Significant (but mostly weak) effects of age, heart rate and body weight were detected. Reference intervals were similar to previously published values in different breeds. Breed specific reference intervals for measures of longitudinal left ventricular function in the ESS are presented. Copyright © 2017 Elsevier B.V. All rights reserved.
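One common parametric convention for a reference interval is mean ± 1.96 SD, covering the central 95% under normality; the abstract does not specify the study's exact interval method, so treat the sketch below (and its invented measurement values) purely as an illustration of the convention.

```python
import statistics

# Parametric 95% reference interval sketch: mean +/- 1.96 * sample SD.
# The measurement values are invented, not from the study.

def reference_interval(values, z=1.96):
    mu = statistics.fmean(values)
    sd = statistics.stdev(values)   # sample standard deviation
    return mu - z * sd, mu + z * sd

mapse_mm = [8.0, 9.5, 10.0, 11.0, 9.0, 10.5, 8.5, 9.8]  # hypothetical MAPSE values
lo, hi = reference_interval(mapse_mm)
print(round(lo, 2), round(hi, 2))
```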

  6. The World's Approach toward Publishing in Springer and Elsevier's APC-Funded Open Access Journals

    ERIC Educational Resources Information Center

    Sotudeh, Hajar; Ghasempour, Zahra

    2018-01-01

    Purpose: The present study explored tendencies of the world's countries--at individual and scientific development levels--toward publishing in APC-funded open access journals. Design/Methodology/Approach: Using a bibliometric method, it studied OA and NOA articles issued in Springer and Elsevier's APC journals during 2007-2011. The data were…

  7. A cluster merging method for time series microarray with production values.

    PubMed

    Chira, Camelia; Sedano, Javier; Camara, Monica; Prieto, Carlos; Villar, Jose R; Corchado, Emilio

    2014-09-01

    A challenging task in time-course microarray data analysis is to cluster genes meaningfully combining the information provided by multiple replicates covering the same key time points. This paper proposes a novel cluster merging method to accomplish this goal obtaining groups with highly correlated genes. The main idea behind the proposed method is to generate a clustering starting from groups created based on individual temporal series (representing different biological replicates measured in the same time points) and merging them by taking into account the frequency by which two genes are assembled together in each clustering. The gene groups at the level of individual time series are generated using several shape-based clustering methods. This study is focused on a real-world time series microarray task with the aim to find co-expressed genes related to the production and growth of a certain bacteria. The shape-based clustering methods used at the level of individual time series rely on identifying similar gene expression patterns over time which, in some models, are further matched to the pattern of production/growth. The proposed cluster merging method is able to produce meaningful gene groups which can be naturally ranked by the level of agreement on the clustering among individual time series. The list of clusters and genes is further sorted based on the information correlation coefficient and new problem-specific relevant measures. Computational experiments and results of the cluster merging method are analyzed from a biological perspective and further compared with the clustering generated based on the mean value of time series and the same shape-based algorithm.
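The co-assembly idea above (count how often two genes land in the same cluster across replicate clusterings, then merge) can be sketched in a few lines. The threshold-and-connected-components rule used here is an illustrative stand-in for the paper's merging procedure.

```python
import numpy as np

# Sketch of cluster merging by co-occurrence frequency: genes that are
# assembled together in most replicate clusterings end up in one group.
# The replicate labelings and the threshold are illustrative.

def cooccurrence(clusterings):
    """clusterings: list of label arrays (one per replicate) -> (n, n) frequency."""
    labels = np.asarray(clusterings)
    r, n = labels.shape
    freq = np.zeros((n, n))
    for row in labels:
        freq += (row[:, None] == row[None, :]).astype(float)
    return freq / r

def merge_groups(freq, threshold=0.5):
    """Connected components of the 'co-clustered >= threshold' graph."""
    n = freq.shape[0]
    group = [-1] * n
    g = 0
    for i in range(n):
        if group[i] == -1:
            stack = [i]
            while stack:
                u = stack.pop()
                if group[u] == -1:
                    group[u] = g
                    stack.extend(v for v in range(n)
                                 if group[v] == -1 and freq[u, v] >= threshold)
            g += 1
    return group

# Three replicates, four genes: genes 0 and 1 always co-cluster; gene 3 drifts
reps = [[0, 0, 1, 1], [0, 0, 1, 0], [2, 2, 0, 1]]
freq = cooccurrence(reps)
print(merge_groups(freq, threshold=0.9))  # [0, 0, 1, 2]
```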

  8. 31 CFR 351.35 - What do I need to know about interest rates, penalties, and redemption values for Series EE bonds...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... rates, penalties, and redemption values for Series EE bonds with issue dates of May 1, 2005, or... SAVINGS BONDS, SERIES EE Maturities, Redemption Values, and Investment Yields of Series EE Savings Bonds Series Ee Savings Bonds with Issue Dates of May 1, 2005, Or Thereafter § 351.35 What do I need to know...

  9. 31 CFR 351.35 - What do I need to know about interest rates, penalties, and redemption values for Series EE bonds...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... rates, penalties, and redemption values for Series EE bonds with issue dates of May 1, 2005, or... SAVINGS BONDS, SERIES EE Maturities, Redemption Values, and Investment Yields of Series EE Savings Bonds Series Ee Savings Bonds with Issue Dates of May 1, 2005, Or Thereafter § 351.35 What do I need to know...

  10. 31 CFR 351.35 - What do I need to know about interest rates, penalties, and redemption values for Series EE bonds...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... rates, penalties, and redemption values for Series EE bonds with issue dates of May 1, 2005, or... SAVINGS BONDS, SERIES EE Maturities, Redemption Values, and Investment Yields of Series EE Savings Bonds Series Ee Savings Bonds with Issue Dates of May 1, 2005, Or Thereafter § 351.35 What do I need to know...

  11. 31 CFR 351.35 - What do I need to know about interest rates, penalties, and redemption values for Series EE bonds...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... rates, penalties, and redemption values for Series EE bonds with issue dates of May 1, 2005, or... SAVINGS BONDS, SERIES EE Maturities, Redemption Values, and Investment Yields of Series EE Savings Bonds Series Ee Savings Bonds with Issue Dates of May 1, 2005, Or Thereafter § 351.35 What do I need to know...

  12. 31 CFR 351.35 - What do I need to know about interest rates, penalties, and redemption values for Series EE bonds...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... rates, penalties, and redemption values for Series EE bonds with issue dates of May 1, 2005, or... SAVINGS BONDS, SERIES EE Maturities, Redemption Values, and Investment Yields of Series EE Savings Bonds Series Ee Savings Bonds with Issue Dates of May 1, 2005, Or Thereafter § 351.35 What do I need to know...

  13. Autoregressive-model-based missing value estimation for DNA microarray time series data.

    PubMed

    Choong, Miew Keen; Charbit, Maurice; Yan, Hong

    2009-01-01

    Missing value estimation is important in DNA microarray data analysis. A number of algorithms have been developed to solve this problem, but they have several limitations. Most existing algorithms are not able to deal with the situation where a particular time point (column) of the data is missing entirely. In this paper, we present an autoregressive-model-based missing value estimation method (ARLSimpute) that takes into account the dynamic property of microarray temporal data and the local similarity structures in the data. ARLSimpute is especially effective for the situation where a particular time point contains many missing values or where the entire time point is missing. Experimental results suggest that our proposed algorithm is an accurate missing value estimator in comparison with other imputation methods on simulated as well as real microarray time series datasets.
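    As a rough illustration of the autoregressive idea (not the published ARLSimpute algorithm), the sketch below fits AR(p) coefficients by ordinary least squares and forecasts each missing point from the p preceding values. The function name and the simplistic handling of gaps are assumptions.

```python
import numpy as np

def ar_impute(series, p=2):
    """Fill NaNs in a 1-D series with an AR(p) forecast (toy sketch).

    Coefficients are fitted by least squares on the observed values
    (gaps are simply skipped when building the design matrix, a
    simplification the real method does not make), then each missing
    value is predicted from its p preceding values.
    """
    x = np.asarray(series, dtype=float)
    observed = x[~np.isnan(x)]
    # Lagged design matrix: x_t regressed on [x_{t-1}, ..., x_{t-p}].
    rows = [observed[i - p:i][::-1] for i in range(p, len(observed))]
    A = np.array(rows)
    b = observed[p:]
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    for t in np.flatnonzero(np.isnan(x)):
        if t >= p and not np.isnan(x[t - p:t]).any():
            # Predict from the p values immediately before position t.
            x[t] = coeffs @ x[t - p:t][::-1]
    return x
```

    On a perfectly autoregressive toy series such as 1, 2, 4, 8, …, an AR(1) fit recovers the doubling rule and fills a trailing gap exactly; real microarray data would of course be noisier.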

  14. Rational approximations from power series of vector-valued meromorphic functions

    NASA Technical Reports Server (NTRS)

    Sidi, Avram

    1992-01-01

    Let F(z) be a vector-valued function, F: C → C^N, which is analytic at z = 0 and meromorphic in a neighborhood of z = 0, and let its Maclaurin series be given. In this work we developed vector-valued rational approximation procedures for F(z) by applying vector extrapolation methods to the sequence of partial sums of its Maclaurin series. We analyzed some of the algebraic and analytic properties of the rational approximations thus obtained, and showed that they are akin to Pade approximations. In particular, we proved a Koenig-type theorem concerning their poles and a de Montessus-type theorem concerning their uniform convergence. We showed how optimal approximations to multiple poles and to Laurent expansions about these poles can be constructed. Extensions of the above procedures and the accompanying theoretical results to functions defined in arbitrary linear spaces were also considered. One of the most interesting and immediate applications of the results of this work is to the matrix eigenvalue problem. In a forthcoming paper we exploit the developments of the present work to devise bona fide generalizations of the classical power method that are especially suitable for very large and sparse matrices. These generalizations can be used to approximate simultaneously several of the largest distinct eigenvalues and corresponding eigenvectors and invariant subspaces of arbitrary matrices, which may or may not be diagonalizable, and are very closely related to known Krylov subspace methods.

  15. Comparison of missing value imputation methods in time series: the case of Turkish meteorological data

    NASA Astrophysics Data System (ADS)

    Yozgatligil, Ceylan; Aslan, Sipan; Iyigun, Cem; Batmaz, Inci

    2013-04-01

    This study aims to compare several imputation methods for completing the missing values of spatio-temporal meteorological time series. To this end, six imputation methods are assessed with respect to various criteria, including accuracy, robustness, precision, and efficiency, for artificially created missing data in monthly total precipitation and mean temperature series obtained from the Turkish State Meteorological Service. Of these methods, simple arithmetic average, normal ratio (NR), and NR weighted by correlations are the simple ones, whereas the multilayer-perceptron-type neural network and the multiple imputation strategy based on Markov Chain Monte Carlo with expectation-maximization (EM-MCMC) are computationally intensive. In addition, we propose a modification of the EM-MCMC method. Besides using a conventional accuracy measure based on squared errors, we also suggest the correlation dimension (CD) technique from nonlinear dynamic time series analysis, which takes spatio-temporal dependencies into account when evaluating imputation performance. Based on detailed graphical and quantitative analysis, it can be said that although the computational methods, particularly the EM-MCMC method, are computationally expensive, they seem favorable for imputation of meteorological time series across different missingness periods, considering both measures and both series studied. To conclude, using the EM-MCMC algorithm to impute missing values before conducting any statistical analyses of meteorological data will decrease the amount of uncertainty and give more robust results. Moreover, the CD measure can be suggested for the performance evaluation of missing data imputation, particularly with computational methods, since it gives more precise results for meteorological time series.
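    The two simplest methods compared in this abstract can be sketched as follows. The function names are illustrative, and the normal-ratio formula shown is the textbook weighting of neighbor-station values by long-term station means, which may differ in detail from the study's variant.

```python
import numpy as np

def arithmetic_average(neighbor_values):
    """Simplest imputation: mean of same-date values at neighbor stations."""
    return float(np.mean(neighbor_values))

def normal_ratio(neighbor_values, neighbor_normals, target_normal):
    """Normal-ratio (NR) imputation (textbook form, assumed here).

    Each neighbor's value is rescaled by the ratio of the target station's
    long-term mean to that neighbor's long-term mean before averaging, so
    systematically wetter or drier neighbors do not bias the estimate.
    """
    vals = np.asarray(neighbor_values, dtype=float)
    norms = np.asarray(neighbor_normals, dtype=float)
    return float(np.mean(target_normal / norms * vals))
```

    For example, with neighbor observations of 10 and 20 mm at stations whose long-term means are 100 and 200 mm, a target station with a 150 mm mean gets the estimate 15 mm from the normal-ratio method, whereas the raw arithmetic average would give the same 15 mm only by coincidence of these numbers.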

  16. The Total Ozone Series of Arosa: History, Homogenization and new results using statistical extreme value theory

    NASA Astrophysics Data System (ADS)

    Staehelin, J.; Rieder, H. E.; Maeder, J. A.; Ribatet, M.; Davison, A. C.; Stübi, R.

    2009-04-01

    Atmospheric ozone protects the biota living at the Earth's surface from harmful solar UV-B and UV-C radiation. The global ozone shield is expected to gradually recover from the anthropogenic disturbance of ozone depleting substances (ODS) in the coming decades. The stratospheric ozone layer at extratropical latitudes might increase significantly above the thickness of the chemically undisturbed atmosphere, which would enhance ozone concentrations at tropopause altitude, where ozone is an important greenhouse gas. At Arosa, a resort village in the Swiss Alps, total ozone measurements started in 1926, leading to the longest total ozone series in the world. One Fery spectrograph and seven Dobson spectrophotometers were operated at Arosa, and the method used to homogenize the series will be presented. Due to its unique length, the series allows studying total ozone in the chemically undisturbed as well as in the ODS-loaded stratosphere. The series is particularly valuable for studying natural variability in the period prior to 1970, when ODS started to affect stratospheric ozone. Concepts developed in extreme value statistics allow objective definitions of "ozone extreme high" and "ozone extreme low" values by fitting the (daily mean) time series using the Generalized Pareto Distribution (GPD). Extreme high ozone events can be attributed to effects of El Niño and/or the NAO, whereas in the chemically disturbed stratosphere high frequencies of extreme low total ozone values occur simultaneously with periods of strong polar ozone depletion (identified by statistical modeling with Equivalent Stratospheric Chlorine times the volume of polar stratospheric clouds) and volcanic eruptions (such as El Chichón and Pinatubo).
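    A minimal peaks-over-threshold sketch of the GPD fitting described above, using SciPy's `genpareto`. The simulated series, the 95th-percentile threshold choice, and all numbers are invented for illustration; the study's actual threshold selection and data differ.

```python
import numpy as np
from scipy.stats import genpareto

# Fake daily total-ozone means (Dobson units) standing in for the Arosa record.
rng = np.random.default_rng(42)
ozone = rng.normal(330, 30, size=5000)

# Peaks-over-threshold: keep exceedances above a high empirical quantile.
threshold = np.quantile(ozone, 0.95)
exceedances = ozone[ozone > threshold] - threshold

# Fit the Generalized Pareto Distribution to the exceedances, with the
# location pinned at zero (exceedances start exactly at the threshold).
shape, loc, scale = genpareto.fit(exceedances, floc=0)
```

    The fitted shape and scale then characterize the tail of "extreme high" ozone; the analogous fit on negated data below a low quantile characterizes "extreme low" events.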

  17. 31 CFR 351.9 - When will I receive the redemption value of my Series EE savings bonds?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... DEBT OFFERING OF UNITED STATES SAVINGS BONDS, SERIES EE Maturities, Redemption Values, and Investment... will be paid the redemption value of your book-entry bond when it reaches final maturity, if you have...

  18. Empowering Grandparents Raising Grandchildren: A Training Manual for Group Leaders. Springer Series on Life Styles and Issues in Aging.

    ERIC Educational Resources Information Center

    Cox, Carole B.

    Noting that grandparents in the parenting role are often overwhelmed by the problems of their children, grandchildren, and the social milieu in which they live, this manual presents a 14-session workshop series designed to empower grandparents who are raising their grandchildren alone. Designed to complement "To Grandmother's House We Go and…

  19. Primary seborrhoea in English springer spaniels: a retrospective study of 14 cases.

    PubMed

    Scott, D W; Miller, W H

    1996-04-01

    Primary seborrhoea was diagnosed in 14 English springer spaniels over a 17-year period. Seven of the dogs developed clinical signs by two years of age. The dermatosis began as a generalised non-pruritic dry scaling which gradually worsened. Some dogs remained in this dry (seborrhoea sicca) stage, but in most cases the dermatosis became greasy and inflamed (seborrhoea oleosa and seborrhoeic dermatitis). Eight of the dogs suffered from recurrent episodes of superficial or deep bacterial pyoderma. Histological findings in skin biopsy specimens included marked orthokeratotic hyperkeratosis of surface and infundibular epithelium, papillomatosis, parakeratotic capping of the papillae, and superficial perivascular dermatitis in which lymphocytes and mast cells were prominent. The dogs with seborrhoea sicca responded more satisfactorily to therapy with topical emollient-humectant agents or oral omega-3/omega-6 fatty acid supplementation. Dogs with seborrhoea oleosa and seborrhoeic dermatitis did not respond satisfactorily to topical therapy. One dog, however, responded well to etretinate and omega-3/omega-6 fatty acid administration. No dog was cured.

  20. 31 CFR 359.39 - How are redemption values calculated for definitive Series I savings bonds?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 31 Money and Finance: Treasury 2 2014-07-01 2014-07-01 false How are redemption values calculated for definitive Series I savings bonds? 359.39 Section 359.39 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FISCAL SERVICE, DEPARTMENT OF THE TREASURY BUREAU OF THE FISCAL...

  1. 31 CFR 359.39 - How are redemption values calculated for definitive Series I savings bonds?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 31 Money and Finance:Treasury 2 2012-07-01 2012-07-01 false How are redemption values calculated for definitive Series I savings bonds? 359.39 Section 359.39 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FISCAL SERVICE, DEPARTMENT OF THE TREASURY BUREAU OF THE PUBLIC...

  2. 31 CFR 359.39 - How are redemption values calculated for definitive Series I savings bonds?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 31 Money and Finance:Treasury 2 2013-07-01 2013-07-01 false How are redemption values calculated for definitive Series I savings bonds? 359.39 Section 359.39 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FISCAL SERVICE, DEPARTMENT OF THE TREASURY BUREAU OF THE PUBLIC...

  3. 31 CFR 359.39 - How are redemption values calculated for definitive Series I savings bonds?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 31 Money and Finance:Treasury 2 2011-07-01 2011-07-01 false How are redemption values calculated for definitive Series I savings bonds? 359.39 Section 359.39 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FISCAL SERVICE, DEPARTMENT OF THE TREASURY BUREAU OF THE PUBLIC...

  4. 31 CFR 359.39 - How are redemption values calculated for definitive Series I savings bonds?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false How are redemption values calculated for definitive Series I savings bonds? 359.39 Section 359.39 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FISCAL SERVICE, DEPARTMENT OF THE TREASURY BUREAU OF THE PUBLIC...

  5. Chronic hepatitis in the English springer spaniel: clinical presentation, histological description and outcome.

    PubMed

    Bexfield, N H; Andres-Abdo, C; Scase, T J; Constantino-Casas, F; Watson, P J

    2011-10-15

    Medical records and liver histology of 68 English springer spaniels (ESS) with a histological diagnosis of chronic hepatitis (CH) were reviewed retrospectively. PCR was performed on liver tissue for canine adenovirus-1 (CAV-1), canine parvovirus, canine herpesvirus and pathogenic Leptospira species. Follow-up information was obtained to calculate survival times. Median age at presentation was three years seven months (range, seven months to eight years five months), and there were 48 female and 20 male dogs. Clinical signs were non-specific, and five dogs were asymptomatic. All dogs had an increase in serum activity of one or more hepatobiliary enzymes. Histopathology demonstrated hepatocyte necrosis and apoptosis with varying amounts of fibrosis. A predominantly lymphoplasmacytic infiltrate throughout the hepatic parenchyma was found in all 68 dogs, but 45 of these dogs also had a neutrophilic component to the inflammatory infiltrate. There was no significant copper accumulation, and no aetiological agent was identified by PCR. The median survival time was 189 days (range, 1 to 1211 days); 38 dogs died within three months, and 12 dogs survived more than a year following diagnosis.

  6. From ozone mini-holes and mini-highs towards extreme value theory: New insights from extreme events and non-stationarity

    NASA Astrophysics Data System (ADS)

    Rieder, H. E.; Staehelin, J.; Maeder, J. A.; Ribatet, M.; Stübi, R.; Weihs, P.; Holawe, F.; Peter, T.; Davison, A. C.

    2009-04-01

    The "extremes concept" provides new information on the data distribution and variability within the Arosa record, as well as on the influence of ELOs and EHOs on the long-term trends of the ozone time series. References: Bojkov, R. D., and Balis, D. S.: Characteristics of episodes with extremely low ozone values in the northern middle latitudes 1975-2000, Ann. Geophys., 19, 797-807, 2001. Coles, S.: An Introduction to Statistical Modeling of Extreme Values, Springer Series in Statistics, ISBN:1852334592, Springer, Berlin, 2001. Pickands, J.: Statistical inference using extreme order statistics, Ann. Stat., 3, 1, 119-131, 1975. Ribatet, M.: POT: Modelling peaks over a threshold, R News, 7, 34-36, 2007. Rieder, H. E., Staehelin, J., Maeder, J. A., Stübi, R., Weihs, P., Holawe, F., and M. Ribatet: From ozone mini holes and mini highs towards extreme value theory: New insights from extreme events and non stationarity, submitted to J. Geophys. Res., 2009. Staehelin, J., Kegel, R., and Harris, N. R.: Trend analysis of the homogenized total ozone series of Arosa (Switzerland), 1929-1996, J. Geophys. Res., 103(D7), 8389-8400, doi:10.1029/97JD03650, 1998a. Staehelin, J., Renaud, A., Bader, J., McPeters, R., Viatte, P., Hoegger, B., Bugnion, V., Giroud, M., and Schill, H.: Total ozone series at Arosa (Switzerland): Homogenization and data comparison, J. Geophys. Res., 103(D5), 5827-5842, doi:10.1029/97JD02402, 1998b.

  7. Absolute continuity for operator valued completely positive maps on C∗-algebras

    NASA Astrophysics Data System (ADS)

    Gheondea, Aurelian; Kavruk, Ali Şamil

    2009-02-01

    Motivated by applicability to quantum operations, quantum information, and quantum probability, we investigate the notion of absolute continuity for operator valued completely positive maps on C∗-algebras, previously introduced by Parthasarathy [in Athens Conference on Applied Probability and Time Series Analysis I (Springer-Verlag, Berlin, 1996), pp. 34-54]. We obtain an intrinsic definition of absolute continuity, we show that the Lebesgue decomposition defined by Parthasarathy is the maximal one among all other Lebesgue-type decompositions and that this maximal Lebesgue decomposition does not depend on the jointly dominating completely positive map, we obtain more flexible formulas for calculating the maximal Lebesgue decomposition, and we point out the nonuniqueness of the Lebesgue decomposition as well as a sufficient condition for uniqueness. In addition, we consider Radon-Nikodym derivatives for absolutely continuous completely positive maps that, in general, are unbounded positive self-adjoint operators affiliated to a certain von Neumann algebra, and we obtain a spectral approximation by bounded Radon-Nikodym derivatives. An application to the existence of the infimum of two completely positive maps is indicated, and formulas in terms of Choi's matrices for the Lebesgue decomposition of completely positive maps in matrix algebras are obtained.

  8. 31 CFR 359.55 - How are redemption values calculated for book-entry Series I savings bonds?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... prorated to the book-entry par investment amount for the corresponding issue and redemption dates... to $25.04; calculated value of $25.045 rounds to $25.05. [Book-entry par investment ÷ 100] × [CRV... for book-entry Series I savings bonds? 359.55 Section 359.55 Money and Finance: Treasury Regulations...
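    One plausible reading of the proration and rounding rule quoted in this excerpt can be sketched as follows. The function name is hypothetical, and half-up rounding to the cent is inferred from the "$25.045 rounds to $25.05" example (Python's default `round` uses round-half-to-even, which would get that case wrong).

```python
from decimal import Decimal, ROUND_HALF_UP

def book_entry_value(par_investment, crv_per_100):
    """Prorate a per-$100 CRV to a book-entry par amount (illustrative).

    value = (par / 100) * CRV, rounded to the nearest cent with halves
    rounded up, matching the $25.044 -> $25.04 and $25.045 -> $25.05
    examples in the regulation text.
    """
    value = Decimal(str(par_investment)) / Decimal("100") * Decimal(str(crv_per_100))
    return value.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
```

    Decimal arithmetic is used instead of floats so that amounts like 25.045 are represented exactly before rounding.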

  9. 31 CFR 351.33 - What are interest rates and redemption values for Series EE bonds issued May 1, 1997, through...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false What are interest rates and... Series Ee Savings Bonds with Issue Dates of May 1, 1997, Through April 1, 2005 § 351.33 What are interest rates and redemption values for Series EE bonds issued May 1, 1997, through April 1, 2005, during an...

  10. Time-Critical Cooperative Path Following of Multiple UAVs over Time-Varying Networks

    DTIC Science & Technology

    2011-01-01

    Notes in Control and Information Systems Series (K. Y. Pettersen, T. Gravdahl, and H. Nijmeijer, Eds.). Springer-Verlag, 2006. 29M. Breivik, V...Information Systems Series (K. Y. Pettersen, T. Gravdahl, and H. Nijmeijer, Eds.). Springer-Verlag, 2006. 31M. Breivik, E. Hovstein, and T. I. Fossen. Ship

  11. The Value of Interrupted Time-Series Experiments for Community Intervention Research

    PubMed Central

    Biglan, Anthony; Ary, Dennis; Wagenaar, Alexander C.

    2015-01-01

    Greater use of interrupted time-series experiments is advocated for community intervention research. Time-series designs enable the development of knowledge about the effects of community interventions and policies in circumstances in which randomized controlled trials are too expensive, premature, or simply impractical. The multiple baseline time-series design typically involves two or more communities that are repeatedly assessed, with the intervention introduced into one community at a time. It is particularly well suited to initial evaluations of community interventions and the refinement of those interventions. This paper describes the main features of multiple baseline designs and related repeated-measures time-series experiments, discusses the threats to internal validity in multiple baseline designs, and outlines techniques for statistical analyses of time-series data. Examples are given of the use of multiple baseline designs in evaluating community interventions and policy changes. PMID:11507793

  12. 31 CFR 351.21 - How are redemption values determined during any extended maturity period of Series EE savings...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false How are redemption values determined during any extended maturity period of Series EE savings bonds with issue dates prior to May 1, 1995? 351.21 Section 351.21 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued...

  13. A GA based penalty function technique for solving constrained redundancy allocation problem of series system with interval valued reliability of components

    NASA Astrophysics Data System (ADS)

    Gupta, R. K.; Bhunia, A. K.; Roy, D.

    2009-10-01

    In this paper, we have considered the problem of constrained redundancy allocation of series system with interval valued reliability of components. For maximizing the overall system reliability under limited resource constraints, the problem is formulated as an unconstrained integer programming problem with interval coefficients by penalty function technique and solved by an advanced GA for integer variables with interval fitness function, tournament selection, uniform crossover, uniform mutation and elitism. As a special case, considering the lower and upper bounds of the interval valued reliabilities of the components to be the same, the corresponding problem has been solved. The model has been illustrated with some numerical examples and the results of the series redundancy allocation problem with fixed value of reliability of the components have been compared with the existing results available in the literature. Finally, sensitivity analyses have been shown graphically to study the stability of our developed GA with respect to the different GA parameters.
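    The reliability model and penalty-function idea from this abstract can be sketched as follows (the GA itself is omitted for brevity). The interval bounds exploit the monotonicity of system reliability in each component reliability, and taking the interval midpoint as fitness is just one of several possible interval-ordering choices; function names are illustrative.

```python
def interval_system_reliability(x, r_lo, r_hi):
    """Interval reliability of a series system with x[i] redundant
    components in subsystem i, given interval component reliabilities.

    System reliability prod(1 - (1 - r_i)**x_i) is monotone increasing
    in each r_i, so the interval is obtained by evaluating it at the
    lower and upper bounds of the component reliabilities.
    """
    lo = hi = 1.0
    for xi, a, b in zip(x, r_lo, r_hi):
        lo *= 1.0 - (1.0 - a) ** xi
        hi *= 1.0 - (1.0 - b) ** xi
    return lo, hi

def penalized_fitness(x, r_lo, r_hi, cost, budget, penalty=1.0):
    """Penalty-function fitness for a GA individual x (sketch).

    Fitness = interval midpoint of system reliability, minus a penalty
    proportional to the violation of the resource constraint
    sum(cost[i] * x[i]) <= budget, turning the constrained problem
    into an unconstrained one as described in the abstract.
    """
    lo, hi = interval_system_reliability(x, r_lo, r_hi)
    violation = max(0.0, sum(c * xi for c, xi in zip(cost, x)) - budget)
    return 0.5 * (lo + hi) - penalty * violation
```

    When the lower and upper bounds of every component reliability coincide, the interval collapses to a point and the formulation reduces to the fixed-reliability special case mentioned in the abstract.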

  14. Hematocrit and plasma osmolality values of young-of-year shortnose sturgeon following acute exposures to combinations of salinity and temperature

    USGS Publications Warehouse

    Ziegeweid, J.R.; Black, M.C.

    2010-01-01

    Little is known about the physiological capabilities of young-of-year (YOY) shortnose sturgeon. In this study, plasma osmolality and hematocrit values were measured for YOY shortnose sturgeon following 48-h exposures to 12 different combinations of salinity and temperature. Hematocrit levels varied significantly with temperature and age, and plasma osmolalities varied significantly with salinity and age. Plasma osmolality and hematocrit values were similar to previously published values for other sturgeons of similar age and size in similar treatment conditions. © 2010 Springer Science+Business Media B.V.

  15. Restoring the Savanna to the Savannah River Site.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harrington, Timothy B.

    2006-07-01

    The Longleaf Pine Ecosystem - Ecology, Silviculture, and Restoration. Shibu Jose, Eric J. Jokela, and Deborah L. Miller, (eds.) Springer Series on Environmental Management. Springer Science and Business Media publisher. Chapter 5. Pp 135-156. Chapter 5 of the book.

  16. Sourcing in the Air Force: An Optimization Approach

    DTIC Science & Technology

    2009-09-01

    quality supplies and services at the lowest cost (Gabbard, 2004). The commodity sourcing strategy focuses on developing a specific sourcing strategy...Springer Series in Operations Research. New York: Springer-Verlag. Gabbard, E.G. (2004, April). Strategic sourcing: Critical elements and keys to success

  17. Plant competition, facilitation, and other overstory-understory interactions in longleaf pine ecosystems.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Imm, Donald; Blake, John I

    2006-07-01

    The Longleaf Pine Ecosystem - Ecology, Silviculture, and Restoration. Shibu Jose, Eric J. Jokela, and Deborah L. Miller, (eds.) Springer Series on Environmental Management. Springer Science and Business Media publisher. Box 10.2 Pp 330-333. An insert on overstory-understory interactions in longleaf pine ecosystems.

  18. Climatic interpretation of tree-ring methoxyl d2H time-series from a central alpine larch forest

    NASA Astrophysics Data System (ADS)

    Riechelmann, Dana F. C.; Greule, Markus; Siegwolf, Rolf T. W.; Esper, Jan; Keppler, Frank

    2017-04-01

    We measured stable hydrogen isotope ratios of lignin methoxyl groups (d2HLM) in high-elevation larch trees (Larix decidua Mill.) from the Simplon Valley in southern Switzerland. Thirty-seven larch trees were sampled and five individuals analysed for their d2HLM values at annual (1971-2009) and pentadal resolution (1746-2009). Testing the climate response of the d2HLM series, the annually resolved series show a positive correlation of r = 0.60 with June/July precipitation and a weaker but negative correlation with June/July temperature. In addition, a negative correlation with June-August d2H in precipitation at the nearby GNIP station in Locarno is observed. The pentadally resolved d2HLM series show no significant correlation with climate parameters. The positive correlation of the annually resolved data with summer precipitation is uncommon for d2H measurements from tree-rings (Feakins et al., 2013; Helle and Schleser, 2004; McCarroll and Loader, 2004; Mischel et al., 2015; White et al., 1994). However, we explain the positive association with warm-season hydroclimate as follows: methoxyl groups of lignin are formed directly from tissues in the xylem water. More precipitation during June and July, which are on average relatively dry months, results in higher d2H values of the xylem water and therefore higher d2H values in the lignin methoxyl groups. We therefore suggest that d2HLM values of high-elevation larch trees might serve as a summer precipitation proxy. References: Feakins, S.J., Ellsworth, P.V., Sternberg, L.d.S.L., 2013. Lignin methoxyl hydrogen isotope ratios in a coastal ecosystem. Geochimica et Cosmochimica Acta, 121: 54-66. Helle, G., Schleser, G.H., 2004. Interpreting Climate Proxies from Tree-rings. In: Fischer, H., Floeser, G., Kumke, T., Lohmann, G., Miller, H., Negendank, J.F.W., et al., editors. The Climate in Historical Times. Springer Berlin Heidelberg, pp. 129-148. McCarroll, D., Loader, N.J., 2004. Stable isotopes in tree rings. Quaternary

  19. [Sensitivity, specificity and prognostic value of CEA in colorectal cancer: results of a Tunisian series and literature review].

    PubMed

    Bel Hadj Hmida, Y; Tahri, N; Sellami, A; Yangui, N; Jlidi, R; Beyrouti, M I; Krichen, M S; Masmoudi, H

    2001-01-01

    In order to determine the sensitivity of CEA in the diagnosis of colorectal carcinoma, we studied a series of 48 patients with colorectal carcinoma (1992-1996). The sensitivity was 52% with a reference value of 5 ng/ml and 68.7% with a reference value of 2.5 ng/ml. With a reference value of 5 ng/ml, the sensitivity of CEA was only 37% for patients with colorectal carcinoma at Dukes stage B, 66.6% for patients at stage C and 75% for patients at stage D. The CEA assay was carried out with a sandwich immunoenzymatic tube technique. There was no statistically significant correlation between the pre-operative CEA level and the localisation of the tumour or its histological type; in contrast, it was significantly correlated with lymph node metastasis. A significant relationship between the pre-operative CEA level and the Dukes stage was found for a reference value of 10 ng/ml but not for a reference value of 5 ng/ml. We calculated the specificity of CEA for cancers of the colon and rectum, which was 76.98% with a reference value of 5 ng/ml and 86% with a reference value of 10 ng/ml.

  20. Harmonic Series Meets Fibonacci Sequence

    ERIC Educational Resources Information Center

    Chen, Hongwei; Kennedy, Chris

    2012-01-01

    The terms of a conditionally convergent series may be rearranged to converge to any prescribed real value. What if the harmonic series is grouped into Fibonacci length blocks? Or the harmonic series is arranged in alternating Fibonacci length blocks? Or rearranged and alternated into separate blocks of even and odd terms of Fibonacci length?
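    The Fibonacci-length grouping this abstract asks about can be computed directly. The sketch below (function name assumed) returns the exact sum of each consecutive block of the harmonic series, where the block lengths follow 1, 1, 2, 3, 5, ….

```python
from fractions import Fraction

def fibonacci_block_sums(num_blocks):
    """Group the harmonic series 1 + 1/2 + 1/3 + ... into consecutive
    blocks of Fibonacci lengths 1, 1, 2, 3, 5, ... and return each
    block's exact sum as a Fraction."""
    sums, n, a, b = [], 1, 1, 1
    for _ in range(num_blocks):
        # Sum the next `a` harmonic terms, starting at 1/n.
        sums.append(sum(Fraction(1, k) for k in range(n, n + a)))
        n += a
        a, b = b, a + b  # advance the Fibonacci block length
    return sums
```

    The first few block sums are 1, 1/2, 1/3 + 1/4 = 7/12, and 1/5 + 1/6 + 1/7 = 107/210; alternating the signs of such blocks produces the rearranged series studied in the article.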

  1. 31 CFR 351.32 - How are redemption values calculated for Series EE bonds with issue dates of May 1, 1997, through...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false How are redemption values calculated... Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FISCAL SERVICE, DEPARTMENT OF THE TREASURY BUREAU OF THE PUBLIC DEBT OFFERING OF UNITED STATES SAVINGS BONDS, SERIES EE...

  2. Values, Valuing, and Evaluation. Research on Evaluation Program, Paper and Report Series. Interim Draft.

    ERIC Educational Resources Information Center

    Gephart, William J.

    The paper discusses the meaning of value and valuing, their roles in evaluation, and the potency of value systems in problem solving logic. Evaluation is defined as a process for facilitating decision making. A decision making situation occurs when there are options which are impossible to treat equivalently, and there is an impact in the…

  3. Li Metal Anodes and Rechargeable Lithium Metal Batteries. Springer Series in Materials Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jiguang; Xu, Wu; Henderson, Wesley A.

    Lithium (Li) metal is an ideal anode material for rechargeable batteries. With the urgent need for "next generation" rechargeable batteries, such as Li-S and Li-air batteries as well as rechargeable Li metal batteries using Li intercalation compounds as the cathode, the Li metal anode has attracted significant interest in recent years. Unfortunately, rechargeable batteries based on a Li metal anode have not yet been commercialized, mainly due to two barriers: one is the growth of Li dendrites and the associated safety hazard, and the other is the low Coulombic efficiency (CE) of Li cycling and the associated early battery failure due to Li powdering and increasing cell impedance. To achieve a high CE, side reactions between freshly deposited Li and the electrolyte have to be minimized. These reactions are proportional to the chemical and electrochemical activity of native Li in direct contact with the surrounding electrolyte, and to the surface area of the deposited Li. This means that a high CE of Li deposition/stripping is always associated with low-surface-area Li deposition and suppressed Li dendrite growth. Therefore, the enhancement of CE is the more fundamental factor controlling long-term, stable cycling of a Li metal anode. In this book, we first review the general models of the dendrite growth mechanism, including the effect of the SEI layer on the modeling of Li dendrite growth. We then discuss various instruments/tools that are critical for the investigation of Li dendrite growth. In Chapter 3, the various factors which affect the CE of Li cycling and dendrite growth are discussed, with an emphasis on the enhancement of CE. Chapter 4 discusses the specific application of the Li metal anode in several key rechargeable Li metal batteries, including Li-air batteries, Li-S batteries, and Li metal batteries using intercalation compounds as the cathode. Finally, perspectives on the future development and application of Li metal batteries are discussed in Chapter 5.

  4. A multi-centennial time series of well-constrained ΔR values for the Irish Sea derived using absolutely-dated shell samples from the mollusc Arctica islandica

    NASA Astrophysics Data System (ADS)

    Butler, P. G.; Scourse, J. D.; Richardson, C. A.; Wanamaker, A. D., Jr.

    2009-04-01

    Determinations of the local correction (ΔR) to the globally averaged marine radiocarbon reservoir age are often isolated in space and time, derived from heterogeneous sources and constrained by significant uncertainties. Although time series of ΔR at single sites can be obtained from sediment cores, these are subject to multiple uncertainties related to sedimentation rates, bioturbation and interspecific variations in the source of radiocarbon in the analysed samples. Coral records provide better resolution, but these are available only for tropical locations. It is shown here that it is possible to use the shell of the long-lived bivalve mollusc Arctica islandica as a source of high-resolution time series of absolutely-dated marine radiocarbon determinations for the shelf seas surrounding the North Atlantic Ocean. Annual growth increments in the shell can be crossdated and chronologies can be constructed in precise analogy with the use of tree-rings. Because the calendar dates of the samples are known, ΔR can be determined with high precision and accuracy, and because all the samples are from the same species, the time series of ΔR values possesses a high degree of internal consistency. Presented here is a multi-centennial (AD 1593 - AD 1933) time series of 31 ΔR values for a site in the Irish Sea close to the Isle of Man. The mean value of ΔR (-62 14C yrs) does not change significantly during this period, but increased variability is apparent before AD 1750.

  5. Estimating return periods of extreme values from relatively short time series of winds

    NASA Astrophysics Data System (ADS)

    Jonasson, Kristjan; Agustsson, Halfdan; Rognvaldsson, Olafur; Arfeuille, Gilles

    2013-04-01

    An important factor in determining the prospects of individual wind farm sites is the frequency of extreme winds at hub height. Here, extreme winds are defined as the highest 10-minute averaged wind speed with a 50-year return period, i.e., an annual exceedance probability of 2% (Rodrigo, 2010). A frequently applied method to estimate winds in the lowest few hundred meters above ground is to extrapolate observed 10-meter winds logarithmically to higher altitudes. A recent study by Drechsel et al. (2012) showed, however, that this methodology is not as accurate as interpolating simulated results from the global ECMWF numerical weather prediction (NWP) model to the desired height. Observations of persistent low-level jets near Colima in SW Mexico also show that the logarithmic approach can give highly inaccurate results for some regions (Arfeuille et al., 2012). To address these shortcomings of limited, and/or poorly representative, observations and extrapolations of winds, one can use NWP models to dynamically downscale relatively coarse-resolution atmospheric analyses. In the case of limited computing resources one typically has to make a compromise between spatial resolution and the duration of the simulated period, both of which can limit the quality of the wind farm siting. A common method to estimate maximum winds is to fit an extreme value distribution (e.g., Gumbel, GEV or Pareto) to the maximum values of each year of available data, or to the tail of these values. If data are only available for a short period, e.g., 10 or 15 years, this will give a rather inaccurate estimate. It is possible to deal with this problem by utilizing monthly or weekly maxima, but this introduces new problems: seasonal variation, autocorrelation of neighboring values, and increased discrepancy between data and the fitted distribution.
We introduce a new method to estimate return periods of extreme values of winds at hub height from relatively short time series of winds, simulated
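The annual-maxima approach described above can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' new method: it uses a method-of-moments Gumbel fit (rather than maximum likelihood) on synthetic annual maxima, and the 15-year record length mirrors the short series the abstract warns about.

```python
import numpy as np

EULER_GAMMA = 0.5772156649

def gumbel_fit_moments(annual_maxima):
    """Method-of-moments fit of a Gumbel distribution to annual maxima."""
    x = np.asarray(annual_maxima, dtype=float)
    beta = x.std(ddof=1) * np.sqrt(6.0) / np.pi   # scale parameter
    mu = x.mean() - EULER_GAMMA * beta            # location parameter
    return mu, beta

def return_level(mu, beta, T):
    """Value exceeded on average once every T years
    (annual exceedance probability 1/T)."""
    return mu - beta * np.log(-np.log(1.0 - 1.0 / T))

# 15 years of synthetic, illustrative annual maximum 10-min wind speeds [m/s]
rng = np.random.default_rng(0)
maxima = 25.0 + 3.0 * rng.gumbel(size=15)
mu, beta = gumbel_fit_moments(maxima)
v50 = return_level(mu, beta, 50)   # 50-year return level
```

With only 15 annual maxima the sampling uncertainty of `v50` is large, which is exactly the motivation for the authors' alternative approach.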

  6. A 280-Year Long Series of Phenological Observations of Cherry Tree Blossoming Dates for Switzerland

    NASA Astrophysics Data System (ADS)

    Rutishauser, T.; Luterbacher, J.; Wanner, H.

    2003-04-01

    long instrumental data from Europe. In addition, the series is one of the few historical phenological records to assess past climate and ecological changes. Lieth, H. (1974). Phenology and Seasonality Modeling. Berlin, Heidelberg, New York, Springer.

  7. Sun Series program for the REEDA System. [predicting orbital lifetime using sunspot values

    NASA Technical Reports Server (NTRS)

    Shankle, R. W.

    1980-01-01

    Modifications made to data bases and to four programs in a series of computer programs (Sun Series), which run on the REEDA HP minicomputer system to aid NASA's solar activity predictions used in orbital lifetime predictions, are described. These programs utilize various mathematical smoothing techniques and perform statistical and graphical analyses of various solar activity data bases residing on the REEDA System.

  8. Values and Society.

    ERIC Educational Resources Information Center

    Nelson, Jack L.

    The idea of a democratic society based on human rights and social justice is the social issue examined in this book which is one of a series on challenges and choices in American values. The format followed in the series includes the following for secondary students: case studies illustrating the issue by focusing on human institutions, factual…

  9. Long series of geomagnetic measurements - unique at satellite era

    NASA Astrophysics Data System (ADS)

    Mandea, Mioara; Balasis, Georgios

    2017-04-01

    We have long appreciated that magnetic measurements obtained at Earth's surface are of great value in characterizing geomagnetic field behavior and probing the deep interior of our planet. Data from the new magnetic satellite missions offer a new, detailed global understanding of the geomagnetic field. However, when our interest moves to long time scales, very long series of measurements play an important role. Here, we first provide an updated series of geomagnetic declination in Paris, shortly after a very special occasion: its value reached zero after some 350 years of westerly values. We take this occasion to emphasize the importance of long series of continuous measurements, particularly when various techniques are used to detect abrupt changes in the geomagnetic field, the geomagnetic jerks. Many novel concepts originating in dynamical systems or information theory have been developed, partly motivated by specific research questions from the geosciences. This continuously extending toolbox of nonlinear time series analysis is a key to understanding the complexity of the geomagnetic field. Motivated by these efforts, a series of entropy analyses is applied to geomagnetic field time series, aiming to detect complex dynamical changes associated with geomagnetic jerks.

  10. Uranium-series ages of marine terraces, La Paz Peninsula, Baja California Sur, Mexico

    USGS Publications Warehouse

    Sirkin, L.; Szabo, B. J.; Padilla, G.A.; Pedrin, S.A.; Diaz, E.R.

    1990-01-01

    Uranium-series dating of coral samples from raised marine terrace deposits between 1.5 and 10 m above sea level in the La Paz Peninsula area, Baja California Sur, yielded ages between 123 ka and 138 ka that are in agreement with previously reported results. The stratigraphy and ages of marine units near the El Coyote Arroyo indicate the presence of two high stands of the sea during the last interglacial or oxygen isotope substage 5e, at about 140 ka and 123 ka. Accepting 5 m for the sea level during the last interglacial transgression, we calculate average uplift rates for the marine terraces of about 70 mm/ka and 40 mm/ka. These slow rates of uplift indicate a relative stability of the La Paz Peninsula area for the past 140,000 years. In contrast, areas of Baja California affected by major faults experienced higher rates of uplift. Rockwell et al. (1987) reported vertical uplift rates of 180 to 300 mm/ka at Punta Banda within the Agua Blanca fault zone in northern Baja California. © 1990 Springer-Verlag.

  11. Counseling Female Offenders and Victims: A Strengths-Restorative Approach. Springer Series on Family Violence.

    ERIC Educational Resources Information Center

    van Wormer, Katherine

    This book considers the many aspects of how the criminal justice system can be reshaped to address the needs of victims of violence and offenders who themselves are often the victims of abuse. It presents a new model that offers an integrated framework to combine tenets of social work's strengths framework with the restorative justice model. It…

  12. Efficient Algorithms for Segmentation of Item-Set Time Series

    NASA Astrophysics Data System (ADS)

    Chundi, Parvathi; Rosenkrantz, Daniel J.

    We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
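The dynamic-programming scheme outlined above can be illustrated on a plain numeric series. The sketch below substitutes a squared-deviation cost for the paper's item-set segment difference (the measure functions themselves are not reproduced here), but the recurrence over segment boundaries is the same.

```python
import numpy as np

def segment_cost(x, i, j):
    """Cost of grouping x[i:j] into one segment: squared deviation from the
    segment mean (a stand-in for the paper's segment-difference values)."""
    seg = x[i:j]
    return float(((seg - seg.mean()) ** 2).sum())

def optimal_segmentation(x, k):
    """Partition x into k contiguous segments minimizing total cost, via DP."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    INF = float("inf")
    # best[s][j] = minimum cost of splitting x[:j] into s segments
    best = [[INF] * (n + 1) for _ in range(k + 1)]
    cut = [[0] * (n + 1) for _ in range(k + 1)]
    best[0][0] = 0.0
    for s in range(1, k + 1):
        for j in range(s, n + 1):
            for i in range(s - 1, j):
                c = best[s - 1][i] + segment_cost(x, i, j)
                if c < best[s][j]:
                    best[s][j], cut[s][j] = c, i
    # recover the segment boundaries by backtracking
    bounds, j = [], n
    for s in range(k, 0, -1):
        bounds.append((cut[s][j], j))
        j = cut[s][j]
    return bounds[::-1], best[k][n]

segs, cost = optimal_segmentation([1, 1, 1, 9, 9, 9], 2)
```

Precomputing all pairwise segment costs, as the paper's algorithms do for its measure functions, is what keeps the overall scheme efficient.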

  13. Forecasting Enrollments with Fuzzy Time Series.

    ERIC Educational Resources Information Center

    Song, Qiang; Chissom, Brad S.

    The concept of fuzzy time series is introduced and used to forecast the enrollment of a university. Fuzzy time series, an aspect of fuzzy set theory, forecasts enrollment using a first-order time-invariant model. To evaluate the model, the conventional linear regression technique is applied and the predicted values obtained are compared to the…

  14. Dollar$ & $en$e. Part V: What is your added value?

    PubMed

    Wilkinson, I

    2001-01-01

    In Part I of this series, I introduced the concept of memes (1). Memes are ideas or concepts--the information world equivalent of genes. The goal of this series of articles is to infect you with memes, so that you will assimilate, translate, and express them. No matter what our area of expertise or "-ology," we all are in the information business. Our goal is to be in the wisdom business. In the previous papers in this series, I showed that when we convert raw data into wisdom we are moving along a value chain. Each step in the chain adds a different amount of value to the final product: timely, relevant, accurate, and precise knowledge that can be applied to create the ultimate product in the value chain: wisdom. In Part II of this series, I introduced a set of memes for measuring the cost of adding value (2). In Part III of this series, I presented a new set of memes for measuring the added value of knowledge, i.e., intellectual capital (3). In Part IV of this series, I discussed practical knowledge management tools for measuring the value of people, structural, and customer capital (4). In Part V of this series, I will apply intellectual capital and knowledge management concepts at the individual level, to help answer a fundamental question: What is my added value?

  15. Impact of Complex-Valued Energy Function Singularities on the Behaviour of Rayleigh-Schrödinger Perturbation Series. H_2CO Molecule Vibrational Energy Spectrum.

    NASA Astrophysics Data System (ADS)

    Duchko, Andrey; Bykov, Alexandr

    2015-06-01

    Nowadays the task of spectra processing is as relevant as ever in molecular spectroscopy. Nevertheless, existing techniques for computing vibrational energy levels and wave functions often reach a dead end. Application of standard quantum-mechanical approaches often faces inextricable difficulties: the variational method requires enormous computational resources, while perturbational approaches suffer from divergent series. Hence there is an urgent need for specific resummation techniques. In this research, Rayleigh-Schrödinger perturbation theory is applied to the calculation of excited vibrational energy levels of H_2CO. It is known that perturbation series diverge in the case of anharmonic resonance coupling between vibrational states [1]. Nevertheless, application of advanced divergent-series summation techniques makes it possible to calculate the energy with high precision (more than 10 correct digits) even for highly excited states of the molecule [2]. For this purpose we have applied several summation techniques based on high-order Pade-Hermite approximations. Our research shows that the series behaviour depends entirely on the singularities of the complex energy function inside the unit circle. Choosing an approximating function that models these singularities therefore allows the sum of the divergent series to be calculated. Our calculations for the formaldehyde molecule show that the efficiency of each summation technique depends on the resonance type. REFERENCES 1. J. Cizek, V. Spirko, and O. Bludsky, ON THE USE OF DIVERGENT SERIES IN VIBRATIONAL SPECTROSCOPY. TWO- AND THREE-DIMENSIONAL OSCILLATORS, J. Chem. Phys. 99, 7331 (1993). 2. A. V. Sergeev and D. Z. Goodson, SINGULARITY ANALYSIS OF FOURTH-ORDER MØLLER-PLESSET PERTURBATION THEORY, J. Chem. Phys. 124, 4111 (2006).
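The resummation idea can be demonstrated on a textbook case. The sketch below is not the authors' Pade-Hermite procedure: it builds an ordinary diagonal Pade approximant from series coefficients and applies it to Euler's classic divergent series, whose approximants are known to converge to the corresponding Stieltjes integral.

```python
import numpy as np
from math import factorial

def pade(c, n):
    """Diagonal [n/n] Pade approximant from series coefficients c[0..2n].
    Returns numerator and denominator polynomial coefficients."""
    c = np.asarray(c, dtype=float)
    # Denominator b (with b0 = 1) from: sum_j b_j c_{k-j} = 0, k = n+1..2n
    A = np.array([[c[k - j] for j in range(1, n + 1)]
                  for k in range(n + 1, 2 * n + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(A, -c[n + 1:2 * n + 1])))
    # Numerator a from the Cauchy product of b with the series
    a = np.array([sum(b[j] * c[i - j] for j in range(i + 1))
                  for i in range(n + 1)])
    return a, b

def evaluate(a, b, x):
    num = sum(ai * x ** i for i, ai in enumerate(a))
    den = sum(bj * x ** j for j, bj in enumerate(b))
    return num / den

# Euler's divergent series sum (-1)^k k! x^k: its Pade approximants
# converge to F(x) = int_0^inf exp(-t) / (1 + x t) dt.
c = [(-1) ** k * factorial(k) for k in range(7)]   # enough for [3/3]
a, b = pade(c, 3)
approx = evaluate(a, b, 0.1)
```

Even though the partial sums of the series diverge, the [3/3] approximant at x = 0.1 already agrees with the integral to several digits.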

  16. Collateral missing value imputation: a new robust missing value estimation algorithm for microarray data.

    PubMed

    Sehgal, Muhammad Shoaib B; Gondal, Iqbal; Dooley, Laurence S

    2005-05-15

    Microarray data are used in a range of application areas in biology, although they often contain considerable numbers of missing values. These missing values can significantly affect subsequent statistical analysis and machine learning algorithms, so there is a strong motivation to estimate these values as accurately as possible before using these algorithms. While many imputation algorithms have been proposed, more robust techniques need to be developed so that further analysis of biological data can be accurately undertaken. In this paper, an innovative missing value imputation algorithm called collateral missing value estimation (CMVE) is presented, which uses multiple covariance-based imputation matrices for the final prediction of missing values. The matrices are computed and optimized using least square regression and linear programming methods. The new CMVE algorithm has been compared with existing estimation techniques including Bayesian principal component analysis imputation (BPCA), least square impute (LSImpute) and K-nearest neighbour (KNN). All these methods were rigorously tested to estimate missing values in three separate non-time series (ovarian cancer based) datasets and one time series (yeast sporulation) dataset. Each method was quantitatively analyzed using the normalized root mean square (NRMS) error measure, covering a wide range of randomly introduced missing value probabilities from 0.01 to 0.2. Experiments were also undertaken on the yeast dataset, which comprised 1.7% actual missing values, to test the hypothesis that CMVE performed better not only for randomly occurring but also for a real distribution of missing values. The results confirmed that CMVE consistently demonstrated superior and robust estimation capability of missing values compared with other methods for both series types of data, for the same order of computational complexity. A concise theoretical framework has also been formulated to validate the improved performance of the CMVE
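CMVE itself combines several covariance-based estimates via least-squares regression and linear programming, which the abstract does not specify in enough detail to reproduce. As a hedged illustration, here is the KNN baseline it is compared against: each missing entry is replaced by the mean of the k nearest rows, with distances computed over mutually observed columns.

```python
import numpy as np

def knn_impute(X, k=1):
    """Impute NaNs row-wise with the mean of the k nearest complete rows
    (Euclidean distance over mutually observed columns).
    A sketch of the KNN baseline, not of CMVE itself."""
    X = np.array(X, dtype=float)
    out = X.copy()
    for r in range(X.shape[0]):
        miss = np.isnan(X[r])
        if not miss.any():
            continue
        obs = ~miss
        dists = []
        for s in range(X.shape[0]):
            # skip the row itself and rows missing the values we need
            if s == r or np.isnan(X[s][miss]).any():
                continue
            both = obs & ~np.isnan(X[s])
            if both.any():
                dists.append((np.linalg.norm(X[r, both] - X[s, both]), s))
        dists.sort()
        neighbours = [s for _, s in dists[:k]]
        out[r, miss] = X[neighbours][:, miss].mean(axis=0)
    return out

X = [[1.0, 2.0, 3.0],
     [1.0, 2.0, np.nan],
     [10.0, 20.0, 30.0],
     [10.0, 20.0, 29.0]]
filled = knn_impute(X, k=1)
```

Row 1 is closest to row 0 on its observed columns, so its missing entry is filled with 3.0.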

  17. Visibility graph approach to exchange rate series

    NASA Astrophysics Data System (ADS)

    Yang, Yue; Wang, Jianbo; Yang, Huijie; Mang, Jingshi

    2009-10-01

    By means of a visibility graph, we investigate six important exchange rate series. It is found that the series convert into scale-free and hierarchically structured networks. The relationship between the scaling exponents of the degree distributions and the Hurst exponents obeys the analytical prediction for fractal Brownian motions. The visibility graph can be used to obtain reliable values of Hurst exponents of the series. The characteristics are explained by using the multifractal structures of the series. The exchange rate of EURO to Japanese Yen is widely used to evaluate risk and to estimate trends in speculative investments. Interestingly, the hierarchies of the visibility graphs for the exchange rate series of these two currencies are significantly weak compared with that of the other series.
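A natural visibility graph is straightforward to construct: node i sees node j when every intermediate sample lies strictly below the straight line joining them. A minimal O(n^2) sketch (not the authors' implementation):

```python
import numpy as np

def visibility_edges(y):
    """Natural visibility graph of a time series: nodes i < j are linked
    when every intermediate point lies strictly below the line of sight."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if all(y[k] < y[j] + (y[i] - y[j]) * (j - k) / (j - i)
                   for k in range(i + 1, j)):
                edges.add((i, j))
    return edges

def degree_distribution(edges, n):
    """Node degrees, from which the scaling exponent is estimated."""
    deg = np.zeros(n, dtype=int)
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    return deg
```

For the valley [3, 1, 4] the endpoints see each other over the dip, whereas for the ramp [1, 2, 3] the middle point blocks the line of sight, so (0, 2) is not an edge there.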

  18. On the Prony series representation of stretched exponential relaxation

    NASA Astrophysics Data System (ADS)

    Mauro, John C.; Mauro, Yihong Z.

    2018-09-01

    Stretched exponential relaxation is a ubiquitous feature of homogeneous glasses. The stretched exponential decay function can be derived from the diffusion-trap model, which predicts certain critical values of the fractional stretching exponent, β. In practical implementations of glass relaxation models, it is computationally convenient to represent the stretched exponential function as a Prony series of simple exponentials. Here, we perform a comprehensive mathematical analysis of the Prony series approximation of stretched exponential relaxation, including optimized coefficients for certain critical values of β. The fitting quality of the Prony series is analyzed as a function of the number of terms in the series. With a sufficient number of terms, the Prony series can accurately capture the time evolution of the stretched exponential function, including its "fat tail" at long times. However, it is unable to capture the divergence of the first derivative of the stretched exponential function in the limit of zero time. We also present a frequency-domain analysis of the Prony series representation of the stretched exponential function and discuss its physical implications for the modeling of glass relaxation behavior.
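The Prony-series representation discussed above amounts to approximating exp(-(t/tau)^beta) by a finite sum of simple exponentials. The sketch below fixes log-spaced relaxation times and solves for the weights by unconstrained linear least squares; the optimized coefficients reported in the paper would differ.

```python
import numpy as np

def prony_fit(beta, tau, n_terms, t):
    """Least-squares Prony fit: exp(-(t/tau)^beta) ~ sum_i w_i exp(-t/tau_i)
    with fixed log-spaced relaxation times tau_i.  A linear-algebra sketch;
    published fits typically also optimize or constrain the tau_i and w_i."""
    taus = np.logspace(np.log10(t.min()) - 0.5, np.log10(t.max()) + 0.5, n_terms)
    A = np.exp(-t[:, None] / taus[None, :])   # basis of simple exponentials
    target = np.exp(-(t / tau) ** beta)       # stretched exponential
    w, *_ = np.linalg.lstsq(A, target, rcond=None)
    return taus, w, A @ w, target

t = np.logspace(-2, 2, 200)
taus, w, fit, target = prony_fit(beta=0.5, tau=1.0, n_terms=12, t=t)
max_err = np.abs(fit - target).max()
```

With a dozen terms spanning the sampled decades the pointwise error is already small, consistent with the paper's observation that enough terms capture the "fat tail"; the zero-time derivative divergence is still missed by any finite sum.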

  19. Trajectories for Locomotion Systems: A Geometric and Computational Approach via Series Expansions

    DTIC Science & Technology

    2004-10-11

    Fragments recovered from the garbled report text: the vehicle uses a speed controller, the model is endowed with a 100 count per revolution optical encoder for odometry, and (2) on-board computation is performed by a single... Cited works include a July 2003 Automatica submission on switching networks and [17] K. M. Passino, Biomimicry for Optimization, Control, and Automation (New York: Springer).

  20. State of the art review: the data revolution in critical care.

    PubMed

    Ghassemi, Marzyeh; Celi, Leo Anthony; Stone, David J

    2015-03-16

    This article is one of ten reviews selected from the Annual Update in Intensive Care and Emergency Medicine 2015 and co-published as a series in Critical Care. Other articles in the series can be found online at http://ccforum.com/series/annualupdate2015. Further information about the Annual Update in Intensive Care and Emergency Medicine is available from http://www.springer.com/series/8901.

  1. Temporal Dynamics of Two Beam Coupling and the Origin of Compensation Photorefractive Gratings in Sn2P2S6:Sb (Postprint)

    DTIC Science & Technology

    2017-03-29

    Reference fragments recovered from the garbled extract: (1) ...A. Grabar, and I. Stoyka, “Photorefraction in tin hypothiodiphosphate in the near infrared,” J. Opt. Soc. Am. B 13(10), 2352–2360 (1996); (2) S. Odoulov, A. Shumelyuk, U. Hellwig, R. Rupp, A. Grabar, and I. Stoyka, “Photorefractive beam coupling in tin hypothiodiphosphate in the near infrared,” Opt. ...; and a chapter in P. Günter and J.-P. Huignard, eds., Photorefractive Materials, Vol. 113 of Springer Series in Optical Sciences (Springer, 2006), pp. 119–162. The extract breaks off at the start of Section 1 (Introduction).

  2. 31 CFR 321.12 - Redemption value of securities.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... value of each savings security is determined by the terms of its offering and the length of time it has been outstanding. The Bureau of the Fiscal Service determines redemption values for Series A-E bonds, eligible Series EE and I bonds, and savings notes, that should be used in redeeming savings securities. [63...

  3. Complexity quantification of cardiac variability time series using improved sample entropy (I-SampEn).

    PubMed

    Marwaha, Puneeta; Sunkaria, Ramesh Kumar

    2016-09-01

    The sample entropy (SampEn) has been widely used to quantify the complexity of RR-interval time series. It is a fact that higher complexity, and hence entropy, is associated with the RR-interval time series of healthy subjects. But SampEn suffers from the disadvantage that it assigns higher entropy to randomized surrogate time series as well as to certain pathological time series, which is a misleading observation. This wrong estimation of the complexity of a time series may be due to the fact that the existing SampEn technique updates the threshold value as a function of the long-term standard deviation (SD) of the time series. However, the time series of certain pathologies exhibit substantial variability in beat-to-beat fluctuations. So the SD of the first-order differences (short-term SD) of the time series should be considered when updating the threshold value, to account for period-to-period variations inherent in a time series. In the present work, improved sample entropy (I-SampEn), a new methodology, has been proposed in which the threshold value is updated by considering the period-to-period variations of a time series. The I-SampEn technique results in assigning higher entropy values to age-matched healthy subjects than to patients suffering from atrial fibrillation (AF) and diabetes mellitus (DM). Our results are in agreement with the theory of reduced complexity of RR-interval time series in patients suffering from chronic cardiovascular and non-cardiovascular diseases.
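The threshold mechanics discussed above are easy to see in code. Below is a standard SampEn sketch with an optional flag that scales the tolerance by the SD of the first differences, in the spirit of (but not necessarily identical to) the I-SampEn modification:

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2, use_diff_sd=False):
    """SampEn(m, r) of a 1-D series.  With use_diff_sd=True the tolerance r
    is scaled by the SD of the first differences (short-term SD) instead of
    the long-term SD, following the idea behind I-SampEn."""
    x = np.asarray(x, dtype=float)
    sd = np.diff(x).std() if use_diff_sd else x.std()
    r = r_factor * sd

    def count_matches(m):
        templates = np.array([x[i:i + m] for i in range(len(x) - m)])
        count = 0
        for i in range(len(templates)):
            d = np.abs(templates - templates[i]).max(axis=1)  # Chebyshev distance
            count += int((d <= r).sum()) - 1                  # exclude self-match
        return count

    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B)

rng = np.random.default_rng(1)
regular = np.sin(np.linspace(0, 20 * np.pi, 400))   # highly regular series
noisy = rng.standard_normal(400)                    # irregular series
```

A regular series scores much lower than noise, which is the behavior whose breakdown on certain pathological series motivates the modified tolerance.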

  4. Causal Inference and the Comparative Interrupted Time Series Design: Findings from Within-Study Comparisons

    ERIC Educational Resources Information Center

    St. Clair, Travis; Hallberg, Kelly; Cook, Thomas D.

    2014-01-01

    Researchers are increasingly using comparative interrupted time series (CITS) designs to estimate the effects of programs and policies when randomized controlled trials are not feasible. In a simple interrupted time series design, researchers compare the pre-treatment values of a treatment group time series to post-treatment values in order to…

  5. The Divergence of Balanced Harmonic-Like Series

    ERIC Educational Resources Information Center

    Lutzer, Carl V.; Marengo, James E.

    2006-01-01

    Consider the series [image omitted] where the value of each a[subscript n] is determined by the flip of a coin: heads on the "n"th toss will mean that a[subscript n] =1 and tails that a[subscript n] = -1. Assuming that the coin is "fair," what is the probability that this "harmonic-like" series converges? After a moment's thought, many people…
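The series in question is the harmonic series with randomly chosen signs, the sum of a_n/n; since the term variances 1/n^2 are summable, it converges with probability 1 (by Kolmogorov's three-series theorem). A small simulation makes the stabilizing partial sums visible; the seed and sizes here are arbitrary.

```python
import numpy as np

def random_harmonic_partial_sums(n_terms, seed=0):
    """Partial sums of sum_n a_n / n with a_n = +/-1 from fair coin flips."""
    rng = np.random.default_rng(seed)
    signs = rng.choice([-1.0, 1.0], size=n_terms)
    terms = signs / np.arange(1, n_terms + 1)
    return np.cumsum(terms)

s = random_harmonic_partial_sums(10000)
# after 5000 terms, the remaining tail barely moves the partial sums
tail_wiggle = np.abs(s[5000:] - s[4999]).max()
```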

  6. Layered Ensemble Architecture for Time Series Forecasting.

    PubMed

    Rahman, Md Mustafizur; Islam, Md Monirul; Murase, Kazuyuki; Yao, Xin

    2016-01-01

    Time series forecasting (TSF) has been widely used in many application areas such as science, engineering, and finance. The phenomena generating time series are usually unknown, and the information available for forecasting is limited to the past values of the series. It is, therefore, necessary to use an appropriate number of past values, termed the lag, for forecasting. This paper proposes a layered ensemble architecture (LEA) for TSF problems. Our LEA consists of two layers, each of which uses an ensemble of multilayer perceptron (MLP) networks. While the first ensemble layer tries to find an appropriate lag, the second ensemble layer employs the obtained lag for forecasting. Unlike most previous work on TSF, the proposed architecture considers both the accuracy and the diversity of the individual networks in constructing an ensemble. LEA trains different networks in the ensemble using different training sets, with the aim of maintaining diversity among the networks. However, it uses the appropriate lag and combines the best trained networks to construct the ensemble. This reflects LEA's emphasis on the accuracy of the networks. The proposed architecture has been tested extensively on time series data from the NN3 and NN5 competitions. It has also been tested on several standard benchmark time series data sets. In terms of forecasting accuracy, our experimental results have revealed clearly that LEA is better than other ensemble and nonensemble methods.

  7. Reviving Graduate Seminar Series through Non-Technical Presentations

    ERIC Educational Resources Information Center

    Madihally, Sundararajan V.

    2011-01-01

    Most chemical engineering programs that offer M.S. and Ph.D. degrees have a common seminar series for all the graduate students. Many would agree that seminars lack student interest, leading to ineffectiveness. We questioned the possibility of adding value to the seminar series by incorporating non-technical topics that may be more important to…

  8. Characterising experimental time series using local intrinsic dimension

    NASA Astrophysics Data System (ADS)

    Buzug, Thorsten M.; von Stamm, Jens; Pfister, Gerd

    1995-02-01

    Experimental strange attractors are analysed with the averaged local intrinsic dimension proposed by A. Passamante et al. [Phys. Rev. A 39 (1989) 3640] which is based on singular value decomposition of local trajectory matrices. The results are compared to the values of Kaplan-Yorke and the correlation dimension. The attractors, reconstructed with Takens' delay time coordinates from scalar velocity time series, are measured in the hydrodynamic Taylor-Couette system. A period doubling route towards chaos obtained from a very short Taylor-Couette cylinder yields a sequence of experimental time series where the local intrinsic dimension is applied.
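The local-SVD construction can be sketched directly: delay-embed the scalar series, gather the nearest neighbours of a reference point, and inspect the singular values of the centred local trajectory matrix. The example below uses a pure sine wave, whose delay vectors span exactly two dimensions, so only two singular values are significant; the Taylor-Couette data themselves are not reproduced here.

```python
import numpy as np

def delay_embed(x, dim, tau=1):
    """Takens delay-coordinate embedding of a scalar series."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * tau
    return np.array([x[i:i + dim * tau:tau] for i in range(n)])

def local_singular_values(X, center, n_neighbours=40):
    """Singular values of a local trajectory matrix around one point;
    the number of values above the noise floor estimates the local
    intrinsic dimension."""
    d = np.linalg.norm(X - X[center], axis=1)
    idx = np.argsort(d)[:n_neighbours]          # nearest neighbours
    local = X[idx] - X[idx].mean(axis=0)        # centre the local matrix
    return np.linalg.svd(local, compute_uv=False)

t = np.linspace(0, 20 * np.pi, 2000)
X = delay_embed(np.sin(t), dim=5, tau=3)
sv = local_singular_values(X, center=100)
```

For the sine wave every delay vector is a combination of sin(t) and cos(t) directions, so the third and later singular values sit at the numerical noise floor.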

  9. Forbidden patterns in financial time series

    NASA Astrophysics Data System (ADS)

    Zanin, Massimiliano

    2008-03-01

    The existence of forbidden patterns, i.e., certain missing sequences in a given time series, is a recently proposed instrument of potential application in the study of time series. Forbidden patterns are related to the permutation entropy, which has the basic properties of classic chaos indicators, such as Lyapunov exponent or Kolmogorov entropy, thus allowing to separate deterministic (usually chaotic) from random series; however, it requires fewer values of the series to be calculated, and it is suitable for using with small datasets. In this paper, the appearance of forbidden patterns is studied in different economical indicators such as stock indices (Dow Jones Industrial Average and Nasdaq Composite), NYSE stocks (IBM and Boeing), and others (ten year Bond interest rate), to find evidence of deterministic behavior in their evolutions. Moreover, the rate of appearance of the forbidden patterns is calculated, and some considerations about the underlying dynamics are suggested.
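Forbidden patterns are simple to detect: map each length-L window to its ordinal (rank) pattern and look for permutations with zero count. The sketch below illustrates this on the fully chaotic logistic map, for which the descending pattern of length 3 is provably forbidden; the financial series studied in the paper are not reproduced here.

```python
import numpy as np
from itertools import permutations

def ordinal_pattern_counts(x, order=3):
    """Count ordinal patterns of the given order in a series (no ties assumed)."""
    x = np.asarray(x, dtype=float)
    counts = {p: 0 for p in permutations(range(order))}
    for i in range(len(x) - order + 1):
        counts[tuple(np.argsort(x[i:i + order]))] += 1
    return counts

def forbidden_patterns(x, order=3):
    """Patterns that never occur in the series."""
    return [p for p, c in ordinal_pattern_counts(x, order).items() if c == 0]

# Deterministic (chaotic) series: logistic map at r = 4.  A point can only
# decrease when x > 3/4, and its image is then below 3/4, so two consecutive
# decreases are impossible: the descending pattern (2, 1, 0) never occurs.
x = 0.3
orbit = []
for _ in range(2000):
    x = 4.0 * x * (1.0 - x)
    orbit.append(x)
```

A random series of the same length exhibits all six length-3 patterns, which is the contrast the forbidden-pattern test exploits.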

  10. Measuring Complexity and Predictability of Time Series with Flexible Multiscale Entropy for Sensor Networks

    PubMed Central

    Zhou, Renjie; Yang, Chen; Wan, Jian; Zhang, Wei; Guan, Bo; Xiong, Naixue

    2017-01-01

    Measurement of time series complexity and predictability is sometimes the cornerstone for proposing solutions to topology and congestion control problems in sensor networks. As a method of measuring time series complexity and predictability, multiscale entropy (MSE) has been widely applied in many fields. However, sample entropy, the fundamental component of MSE, scores the similarity of two subsequences of a time series as either zero or one, with no in-between values, which causes sudden changes in entropy values even when the time series changes only slightly. This problem becomes especially severe when the time series is short. To solve this problem, we propose flexible multiscale entropy (FMSE), which introduces a novel similarity function measuring the similarity of two subsequences with full-range values from zero to one, and thus increases the reliability and stability of measuring time series complexity. The proposed method is evaluated on both synthetic and real time series, including white noise, 1/f noise and real vibration signals. The evaluation results demonstrate that FMSE significantly improves the reliability and stability of measuring the complexity of time series, especially when the time series is short, compared to MSE and composite multiscale entropy (CMSE). FMSE is thus capable of improving the performance of time series analysis-based topology and traffic congestion control techniques. PMID:28383496

  11. Measuring Complexity and Predictability of Time Series with Flexible Multiscale Entropy for Sensor Networks.

    PubMed

    Zhou, Renjie; Yang, Chen; Wan, Jian; Zhang, Wei; Guan, Bo; Xiong, Naixue

    2017-04-06

    Measurement of time series complexity and predictability is sometimes the cornerstone for proposing solutions to topology and congestion control problems in sensor networks. As a method of measuring time series complexity and predictability, multiscale entropy (MSE) has been widely applied in many fields. However, sample entropy, the fundamental component of MSE, scores the similarity of two subsequences of a time series as either zero or one, with no in-between values, which causes sudden changes in entropy values even when the time series changes only slightly. This problem becomes especially severe when the time series is short. To solve this problem, we propose flexible multiscale entropy (FMSE), which introduces a novel similarity function measuring the similarity of two subsequences with full-range values from zero to one, and thus increases the reliability and stability of measuring time series complexity. The proposed method is evaluated on both synthetic and real time series, including white noise, 1/f noise and real vibration signals. The evaluation results demonstrate that FMSE significantly improves the reliability and stability of measuring the complexity of time series, especially when the time series is short, compared to MSE and composite multiscale entropy (CMSE). FMSE is thus capable of improving the performance of time series analysis-based topology and traffic congestion control techniques.
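The two ingredients discussed above, coarse-graining across scales and the graded similarity function, can be sketched as follows. The logistic ramp used here is only an illustrative stand-in: the abstract does not give FMSE's actual similarity function.

```python
import numpy as np

def coarse_grain(x, scale):
    """MSE coarse-graining: average consecutive non-overlapping windows."""
    x = np.asarray(x, dtype=float)
    n = (len(x) // scale) * scale
    return x[:n].reshape(-1, scale).mean(axis=1)

def hard_similarity(d, r):
    """Classic SampEn match: 1 if distance <= r, else 0."""
    return (d <= r).astype(float)

def soft_similarity(d, r, k=10.0):
    """Graded match in [0, 1]: a logistic ramp around the tolerance r,
    an illustrative stand-in for the FMSE similarity function."""
    return 1.0 / (1.0 + np.exp(k * (d - r) / r))

cg = coarse_grain(np.array([1.0, 2.0, 3.0, 4.0, 5.0]), 2)
```

Unlike the hard threshold, the soft function changes continuously as a distance crosses r, which is what removes the sudden entropy jumps on short series.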

  12. 31 CFR 321.12 - Redemption value of securities.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... each savings security is determined by the terms of its offering and the length of time it has been outstanding. The Bureau of the Public Debt determines redemption values for Series A-E bonds, eligible Series...

  13. 31 CFR 321.12 - Redemption value of securities.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... savings security is determined by the terms of its offering and the length of time it has been outstanding. The Bureau of the Public Debt determines redemption values for Series A-E bonds, eligible Series EE...

  14. 31 CFR 321.12 - Redemption value of securities.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... savings security is determined by the terms of its offering and the length of time it has been outstanding. The Bureau of the Public Debt determines redemption values for Series A-E bonds, eligible Series EE...

  15. Pseudo-random bit generator based on lag time series

    NASA Astrophysics Data System (ADS)

    García-Martínez, M.; Campos-Cantón, E.

    2014-12-01

    In this paper, we present a pseudo-random bit generator (PRBG) based on two lag time series of the logistic map using positive and negative values of the bifurcation parameter. In order to hide the map used to build the pseudo-random series, we introduce a delay in the generation of the time series. When these new series are mapped as xn against xn+1, they present a cloud of points unrelated to the logistic map. Finally, the pseudo-random sequences have been tested with the NIST suite, giving satisfactory results for use in stream ciphers.
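
    A minimal sketch of the idea: iterate the logistic map, emit a lagged (delayed) state to hide the underlying map, and threshold it into bits. The parameter values, lag, and thresholding rule here are illustrative assumptions, not the authors' exact construction.

```python
def logistic(x, mu):
    # Logistic map iteration: x_{n+1} = mu * x_n * (1 - x_n)
    return mu * x * (1.0 - x)

def prbg_bits(n_bits, x0=0.3, mu=3.99, lag=5):
    # Emit one bit per step, taken from the state `lag` steps back,
    # so a plot of consecutive outputs no longer traces the map.
    history = [x0]
    x = x0
    for _ in range(lag):          # warm up so lagged values exist
        x = logistic(x, mu)
        history.append(x)
    bits = []
    for _ in range(n_bits):
        x = logistic(x, mu)
        history.append(x)
        lagged = history[-1 - lag]  # delayed sample hides the map
        bits.append(1 if lagged >= 0.5 else 0)
    return bits
```

    A real PRBG would of course be validated with the NIST statistical test suite, as the authors did.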

  16. 31 CFR 321.12 - Redemption value of securities.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... security is determined by the terms of its offering and the length of time it has been outstanding. The Bureau of the Public Debt determines redemption values for Series A-E bonds, eligible Series EE and I...

  17. Permutation entropy of finite-length white-noise time series.

    PubMed

    Little, Douglas J; Kane, Deb M

    2016-08-01

    Permutation entropy (PE) is commonly used to discriminate complex structure from white noise in a time series. While the PE of white noise is well understood in the long time-series limit, analysis in the general case is currently lacking. Here the expectation value and variance of white-noise PE are derived as functions of the number of ordinal pattern trials, N, and the embedding dimension, D. It is demonstrated that the probability distribution of the white-noise PE converges to a χ^{2} distribution with D!-1 degrees of freedom as N becomes large. It is further demonstrated that the PE variance for an arbitrary time series can be estimated as the variance of a related metric, the Kullback-Leibler entropy (KLE), allowing the qualitative N≫D! condition to be recast as a quantitative estimate of the N required to achieve a desired PE calculation precision. Application of this theory to statistical inference is demonstrated in the case of an experimentally obtained noise series, where the probability of obtaining the observed PE value was calculated assuming a white-noise time series. Standard statistical inference can be used to draw conclusions whether the white-noise null hypothesis can be accepted or rejected. This methodology can be applied to other null hypotheses, such as discriminating whether two time series are generated from different complex system states.
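
    The convergence of white-noise PE toward its long-series limit is easy to check numerically. A small sketch (unnormalized Shannon entropy over ordinal patterns, in nats): for white noise with N much larger than D!, the D! patterns become equally likely and the PE approaches log(D!).

```python
import math
import random
from collections import Counter

def permutation_entropy(series, D):
    # Count ordinal patterns of length D and return the Shannon
    # entropy of their empirical distribution (in nats).
    counts = Counter()
    for i in range(len(series) - D + 1):
        window = series[i:i + D]
        pattern = tuple(sorted(range(D), key=lambda k: window[k]))
        counts[pattern] += 1
    n = sum(counts.values())
    return -sum((c / n) * math.log(c / n) for c in counts.values())

random.seed(0)
noise = [random.gauss(0, 1) for _ in range(20000)]
pe = permutation_entropy(noise, D=3)
# With N >> D!, pe approaches log(D!) = log 6 ~ 1.79 nats.
```
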

  18. Process fault detection and nonlinear time series analysis for anomaly detection in safeguards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burr, T.L.; Mullen, M.F.; Wangen, L.E.

    In this paper we discuss two advanced techniques, process fault detection and nonlinear time series analysis, and apply them to the analysis of vector-valued and single-valued time-series data. We investigate model-based process fault detection methods for analyzing simulated, multivariate, time-series data from a three-tank system. The model predictions are compared with simulated measurements of the same variables to form residual vectors that are tested for the presence of faults (possible diversions in safeguards terminology). We evaluate two methods, testing all individual residuals with a univariate z-score and testing all variables simultaneously with the Mahalanobis distance, for their ability to detect loss of material from two different leak scenarios from the three-tank system: a leak without and with replacement of the lost volume. Nonlinear time-series analysis tools were compared with the linear methods popularized by Box and Jenkins. We compare prediction results using three nonlinear and two linear modeling methods on each of six simulated time series: two nonlinear and four linear. The nonlinear methods performed better at predicting the nonlinear time series and did as well as the linear methods at predicting the linear values.
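
    The two residual tests can be sketched as follows. For simplicity this assumes a diagonal residual covariance, so the squared Mahalanobis distance reduces to a sum of squared z-scores; the tank numbers and thresholds are illustrative, not from the study.

```python
def univariate_z_alarms(residuals, sigmas, z_crit=3.0):
    # Test each residual separately: flag any variable whose
    # standardized residual exceeds the critical z-score.
    return [abs(r) / s > z_crit for r, s in zip(residuals, sigmas)]

def mahalanobis_alarm(residuals, sigmas, chi2_crit):
    # Test all variables at once. With a diagonal covariance
    # (an assumption for this sketch) the squared Mahalanobis
    # distance is a sum of squared z-scores, compared against a
    # chi-squared critical value.
    d2 = sum((r / s) ** 2 for r, s in zip(residuals, sigmas))
    return d2 > chi2_crit

# A modest deviation on every tank can trip the joint test even
# though no single residual looks alarming on its own.
residuals = [2.0, 2.0, 2.0]   # one residual per tank
sigmas = [1.0, 1.0, 1.0]
print(univariate_z_alarms(residuals, sigmas))              # [False, False, False]
print(mahalanobis_alarm(residuals, sigmas, chi2_crit=7.81))  # True (12 > 7.81)
```

    This is why the multivariate test can catch a slow, distributed loss that each univariate test misses.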

  19. Effects of a homologous series of linear alcohol ethoxylate surfactants on fathead minnow early life stages.

    PubMed

    Lizotte, R E; Wong, D C; Dorn, P B; Rodgers, J H

    1999-11-01

    Effects of a homologous series of three primarily linear alcohol ethoxylate surfactants were studied in laboratory flow-through 28-day early-life-stage tests with fathead minnow (Pimephales promelas Rafinesque). Surfactants were a C(9-11), C(12-13), and C(14-15) with an average of 6, 6.5, and 7 ethylene oxide units per mole of alcohol, respectively. Average measured surfactant recoveries were 103%, 81%, and 79% of nominal concentrations for the C(9-11) EO 6, C(12-13) EO 6.5, and C(14-15) EO 7 studies, respectively. Embryo survival at 48 h was not adversely affected at any of the concentrations tested. Impaired hatching and deformed fry were observed only in the C(12-13) EO 6.5 study. The 28-day LC50 values were 4.87, 2.39, and 1.02 mg/L for the C(9-11) EO 6, C(12-13) EO 6.5, and C(14-15) EO 7 surfactants, respectively. The corresponding NOECs for survival were 1.01, 1.76, and 0.74 mg/L. Posthatch fry growth was more sensitive than survival for the C(12-13) EO 6.5 and C(14-15) EO 7 surfactants. Survival of posthatch fry decreased with increasing surfactant alkyl chain length. Twenty-eight-day laboratory data were compared to 96-h laboratory, 10-day laboratory and 30-day stream mesocosm data for fathead minnow previously determined for these surfactants. Survival endpoints from the different exposures were comparable and only varied within a factor of two. Similarity of results suggests that it is possible to effectively use 96-h, 10-day, or 28-day laboratory data to predict environmental effects concentrations of these surfactants for fish. http://link.springer-ny.com/link/service/journals/00244/bibs/37n4p536.html

  20. Research in Stochastic Processes

    DTIC Science & Technology

    1988-10-10

    To appear in Proceedings Volume, Oberwolfach Conf. on Extremal Value Theory, Ed. J. Hüsler and R. Reiss, Springer. 4. M.R. Leadbetter. The exceedance...Hsing, J. Hüsler and M.R. Leadbetter, On the exceedance point process for a stationary sequence, Probability Theor. Rel. Fields, 20, 1988, 97-112 Z.J...Oberwolfach Conf. on Extreme Value Theory. J. Hüsler and R. Reiss, eds., Springer, to appear V. Mandrekar, On a limit theorem and invariance

  1. 31 CFR 351.6 - When may I redeem my Series EE savings bond?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... before January 1, 2003. You may redeem your Series EE savings bond at any time beginning six months after... 31 Money and Finance: Treasury 2 2014-07-01 2014-07-01 false When may I redeem my Series EE... SAVINGS BONDS, SERIES EE Maturities, Redemption Values, and Investment Yields of Series EE Savings Bonds...

  2. 31 CFR 351.6 - When may I redeem my Series EE savings bond?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... before January 1, 2003. You may redeem your Series EE savings bond at any time beginning six months after... 31 Money and Finance: Treasury 2 2012-07-01 2012-07-01 false When may I redeem my Series EE savings... SAVINGS BONDS, SERIES EE Maturities, Redemption Values, and Investment Yields of Series EE Savings Bonds...

  3. 31 CFR 351.6 - When may I redeem my Series EE savings bond?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... before January 1, 2003. You may redeem your Series EE savings bond at any time beginning six months after... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false When may I redeem my Series EE... SAVINGS BONDS, SERIES EE Maturities, Redemption Values, and Investment Yields of Series EE Savings Bonds...

  4. 31 CFR 351.6 - When may I redeem my Series EE savings bond?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... before January 1, 2003. You may redeem your Series EE savings bond at any time beginning six months after... 31 Money and Finance: Treasury 2 2013-07-01 2013-07-01 false When may I redeem my Series EE savings... SAVINGS BONDS, SERIES EE Maturities, Redemption Values, and Investment Yields of Series EE Savings Bonds...

  5. 31 CFR 351.6 - When may I redeem my Series EE savings bond?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... before January 1, 2003. You may redeem your Series EE savings bond at any time beginning six months after... 31 Money and Finance: Treasury 2 2011-07-01 2011-07-01 false When may I redeem my Series EE savings... SAVINGS BONDS, SERIES EE Maturities, Redemption Values, and Investment Yields of Series EE Savings Bonds...

  6. Time series of the northeast Pacific

    NASA Astrophysics Data System (ADS)

    Peña, M. Angelica; Bograd, Steven J.

    2007-10-01

    In July 2006, the North Pacific Marine Science Organization (PICES) and Fisheries & Oceans Canada sponsored the symposium “Time Series of the Northeast Pacific: A symposium to mark the 50th anniversary of Line P”. The symposium, which celebrated 50 years of oceanography along Line P and at Ocean Station Papa (OSP), explored the scientific value of the Line P and other long oceanographic time series of the northeast Pacific (NEP). Overviews of the principal NEP time-series were presented, which facilitated regional comparisons and promoted interaction and exchange of information among investigators working in the NEP. More than 80 scientists from 8 countries attended the symposium. This introductory essay is a brief overview of the symposium and the 10 papers that were selected for this special issue of Progress in Oceanography.

  7. Russian State Time and Earth Rotation Service: Observations, Eop Series, Prediction

    NASA Astrophysics Data System (ADS)

    Kaufman, M.; Pasynok, S.

    2010-01-01

    The Russian State Time, Frequency and Earth Rotation Service provides the official EOP data and time for scientific, technical and metrological use in Russia. Observations of GLONASS and GPS at 30 stations in Russia are now used, along with Russian and worldwide VLBI (35 stations) and SLR (20 stations) observation data. To these three EOP series are added the series calculated at two other Russian analysis centers: IAA (VLBI, GPS and SLR series) and MCC (SLR). Joint processing of these seven series is carried out every day (operational EOP data for the last day and predicted values for 50 days). The EOP values are refined weekly and the systematic errors of each individual series are corrected. The combined results become accessible on the VNIIFTRI server (ftp.imvp.ru) at approximately 6h UT daily.

  8. Fast Algorithms for Mining Co-evolving Time Series

    DTIC Science & Technology

    2011-09-01

    Keogh et al., 2001, 2004] and (b) forecasting, like an autoregressive integrated moving average model (ARIMA) and related methods [Box et al., 1994...computing hardware? We develop models to mine time series with missing values, to extract compact representation from time sequences, to segment the...sequences, and to do forecasting. For large scale data, we propose algorithms for learning time series models, in particular, including Linear Dynamical

  9. Time Series Imputation via L1 Norm-Based Singular Spectrum Analysis

    NASA Astrophysics Data System (ADS)

    Kalantari, Mahdi; Yarmohammadi, Masoud; Hassani, Hossein; Silva, Emmanuel Sirimal

    Missing values in time series data are a well-known and important problem that many researchers have studied extensively in various fields. In this paper, a new nonparametric approach for missing value imputation in time series is proposed. The main novelty of this research is applying the L1 norm-based version of Singular Spectrum Analysis (SSA), namely L1-SSA, which is robust against outliers. The performance of the new imputation method has been compared with many other established methods. The comparison is done by applying them to various real and simulated time series. The obtained results confirm that the SSA-based methods, especially L1-SSA, can provide better imputation in comparison to other methods.

  10. Diagnostic Value of Run Chart Analysis: Using Likelihood Ratios to Compare Run Chart Rules on Simulated Data Series

    PubMed Central

    Anhøj, Jacob

    2015-01-01

    Run charts are widely used in healthcare improvement, but there is little consensus on how to interpret them. The primary aim of this study was to evaluate and compare the diagnostic properties of different sets of run chart rules. A run chart is a line graph of a quality measure over time. The main purpose of the run chart is to detect process improvement or process degradation, which will turn up as non-random patterns in the distribution of data points around the median. Non-random variation may be identified by simple statistical tests including the presence of unusually long runs of data points on one side of the median or if the graph crosses the median unusually few times. However, there is no general agreement on what defines “unusually long” or “unusually few”. Other tests of questionable value are frequently used as well. Three sets of run chart rules (Anhoej, Perla, and Carey rules) have been published in peer reviewed healthcare journals, but these sets differ significantly in their sensitivity and specificity to non-random variation. In this study I investigate the diagnostic values expressed by likelihood ratios of three sets of run chart rules for detection of shifts in process performance using random data series. The study concludes that the Anhoej rules have good diagnostic properties and are superior to the Perla and the Carey rules. PMID:25799549
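
    The two classic run chart tests mentioned above (an unusually long run on one side of the median, unusually few median crossings) can be sketched as follows. The default thresholds here are illustrative only and do not correspond to the Anhoej, Perla, or Carey rule sets.

```python
def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def run_chart_signals(data, max_run=7, min_crossings=None):
    # Classify each point as above/below the median (points on the
    # median are excluded), then compute the longest run and the
    # number of median crossings.
    med = median(data)
    sides = [x > med for x in data if x != med]
    longest, current = (1, 1) if sides else (0, 0)
    crossings = 0
    for prev, cur in zip(sides, sides[1:]):
        if cur == prev:
            current += 1
            longest = max(longest, current)
        else:
            crossings += 1
            current = 1
    if min_crossings is None:
        min_crossings = len(sides) // 3  # crude illustrative floor
    return {"longest_run": longest,
            "crossings": crossings,
            "shift_signal": longest > max_run or crossings < min_crossings}
```

    For example, a series whose second half sits entirely above the median produces one long run and a single crossing, tripping the shift signal; an oscillating series does not.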

  11. Emerging Perspectives on Values in Organizations. Research in Social Issues in Management Series.

    ERIC Educational Resources Information Center

    Gilliland, Stephen W., Ed.; Steiner, Dirk D., Ed.; Skarlicki, Daniel P., Ed.

    This volume considers the central role of values inherent in fairness perceptions and offers new ways to view values related to fairness, as well as work-related values, their antecedents, and consequences. Values are important because they have been shown to predict preferences, attitudes, perceptions, and behavior in organizations. The first…

  12. Heart rate time series characteristics for early detection of infections in critically ill patients.

    PubMed

    Tambuyzer, T; Guiza, F; Boonen, E; Meersseman, P; Vervenne, H; Hansen, T K; Bjerre, M; Van den Berghe, G; Berckmans, D; Aerts, J M; Meyfroidt, G

    2017-04-01

    It is difficult to make a distinction between inflammation and infection. Therefore, new strategies are required to allow accurate detection of infection. Here, we hypothesize that we can distinguish infected from non-infected ICU patients based on dynamic features of serum cytokine concentrations and heart rate time series. Serum cytokine profiles and heart rate time series of 39 patients were available for this study. The serum concentrations of ten cytokines were measured using blood sampled every 10 min between 2100 and 0600 hours. Heart rate was recorded every minute. Ten metrics were used to extract features from these time series to obtain an accurate classification of infected patients. The predictive power of the metrics derived from the heart rate time series was investigated using decision tree analysis. Finally, logistic regression methods were used to examine whether classification performance improved with the inclusion of features derived from the cytokine time series. The AUC of a decision tree based on two heart rate features was 0.88. The model had good calibration, with a Hosmer-Lemeshow p value of 0.09. There was no significant additional value of adding static cytokine levels or cytokine time series information to the generated decision tree model. The results suggest that heart rate is a better marker for infection than information captured by cytokine time series when the exact stage of infection is not known. The predictive value of (expensive) biomarkers should always be weighed against the routinely monitored data, and such biomarkers have to demonstrate added value.

  13. Multifractal analysis of visibility graph-based Ito-related connectivity time series.

    PubMed

    Czechowski, Zbigniew; Lovallo, Michele; Telesca, Luciano

    2016-02-01

    In this study, we investigate multifractal properties of connectivity time series resulting from the visibility graph applied to normally distributed time series generated by the Ito equations with multiplicative power-law noise. We show that multifractality of the connectivity time series (i.e., the series of the number of links leaving each node) increases with the exponent of the power-law noise. The multifractality of the connectivity time series could be due to the width of the connectivity degree distribution, which can be related to the exit time of the associated Ito time series. Furthermore, the connectivity time series are characterized by persistence, although the original Ito time series are random; this is due to the visibility graph procedure which, by connecting the values of the time series, generates persistence but destroys most of the nonlinear correlations. Moreover, the visibility graph is sensitive in detecting wide "depressions" in input time series.

  14. DETERMINATION OF KOW VALUES FOR A SERIES OF ARYL GLUCURONIDES

    EPA Science Inventory

    An important parameter in toxicokinetic modeling is the octanol/water partition coefficient (Kow). This parameter has often been used to predict the accumulation of contaminants from water to fish (Klamer and Beekman 1995); however, few Kow values are available for modeling the b...

  15. Appropriate use of the increment entropy for electrophysiological time series.

    PubMed

    Liu, Xiaofeng; Wang, Xue; Zhou, Xu; Jiang, Aimin

    2018-04-01

    The increment entropy (IncrEn) is a new measure for quantifying the complexity of a time series. There are three critical parameters in the IncrEn calculation: N (length of the time series), m (dimensionality), and q (quantifying precision). However, the question of how to choose the most appropriate combination of IncrEn parameters for short datasets has not been extensively explored. The purpose of this research was to provide guidance on choosing suitable IncrEn parameters for short datasets by exploring the effects of varying the parameter values. We used simulated data, epileptic EEG data and cardiac interbeat (RR) data to investigate the effects of the parameters on the calculated IncrEn values. The results reveal that IncrEn is sensitive to changes in m, q and N for short datasets (N≤500). However, IncrEn reaches stability at a data length of N=1000 with m=2 and q=2, and for short datasets (N=100), it shows better relative consistency with 2≤m≤6 and 2≤q≤8. We suggest that the value of N should be no less than 100. To enable a clear distinction between different classes based on IncrEn, we recommend that m and q take values between 2 and 4. With appropriate parameters, IncrEn enables the effective detection of complexity variations in physiological time series, suggesting that IncrEn should be useful for the analysis of physiological time series in clinical applications. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. Characterizing artifacts in RR stress test time series.

    PubMed

    Astudillo-Salinas, Fabian; Palacio-Baus, Kenneth; Solano-Quinde, Lizandro; Medina, Ruben; Wong, Sara

    2016-08-01

    Electrocardiographic stress test records contain many artifacts. In this paper we explore a simple method to characterize the amount of artifacts present in unprocessed RR stress test time series. Four time series classes were defined: Very good lead, Good lead, Low quality lead and Useless lead. Sixty-five eight-lead ECG stress test records were analyzed. Firstly, the RR time series were annotated by two experts. The automatic methodology is based on dividing the RR time series into non-overlapping windows. Each window is marked as noisy whenever it exceeds an established standard deviation threshold (SDT). Series are classified according to the percentage of windows that exceeds a given value, based upon the first manual annotation. Different SDTs were explored. Results show that an SDT close to 20% (as a percentage of the mean) provides the best results. The coincidence between the annotators' classifications is 70.77%, whereas the coincidence between the second annotator and the best-matching automatic method is larger than 63%. Leads classified as Very good leads and Good leads could be combined to improve automatic heartbeat labeling.
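
    The windowing scheme described above can be sketched as follows. The window length and the class cut-offs are illustrative assumptions; the abstract does not give the exact percentages that define each class.

```python
import statistics

def noisy_window_fraction(rr, window=30, sdt_pct=20.0):
    # Divide the RR series into non-overlapping windows and mark a
    # window as noisy when its standard deviation exceeds a threshold
    # expressed as a percentage of the overall series mean (the paper
    # found a threshold near 20% to work best).
    threshold = statistics.mean(rr) * sdt_pct / 100.0
    windows = [rr[i:i + window] for i in range(0, len(rr) - window + 1, window)]
    noisy = sum(1 for w in windows if statistics.stdev(w) > threshold)
    return noisy / len(windows)

def classify_lead(fraction):
    # Hypothetical class cut-offs for illustration only.
    if fraction < 0.05:
        return "Very good lead"
    if fraction < 0.20:
        return "Good lead"
    if fraction < 0.50:
        return "Low quality lead"
    return "Useless lead"
```
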

  17. Association mining of dependency between time series

    NASA Astrophysics Data System (ADS)

    Hafez, Alaaeldin

    2001-03-01

    Time series analysis is considered a crucial component of strategic control over a broad variety of disciplines in business, science and engineering. Time series data is a sequence of observations collected over intervals of time. Each time series describes a phenomenon as a function of time. Analysis of time series data includes discovering trends (or patterns) in a time series sequence. In the last few years, data mining has emerged and been recognized as a new technology for data analysis. Data mining is the process of discovering potentially valuable patterns, associations, trends, sequences and dependencies in data. Data mining techniques can discover information that many traditional business analysis and statistical techniques fail to deliver. In this paper, we adapt and innovate data mining techniques to analyze time series data. By using data mining techniques, maximal frequent patterns are discovered and used in predicting future sequences or trends, where trends describe the behavior of a sequence. In order to include different types of time series (e.g. irregular and non-systematic), we consider past frequent patterns of the same time sequences (local patterns) and of other dependent time sequences (global patterns). We use the word 'dependent' instead of the word 'similar' for emphasis on real-life time series where two time series sequences could be completely different (in values, shapes, etc.), but still react to the same conditions in a dependent way. In this paper, we propose the Dependence Mining Technique, which could be used in predicting time series sequences. The proposed technique consists of three phases: (a) for all time series sequences, generate their trend sequences; (b) discover maximal frequent trend patterns and generate pattern vectors (to keep information on frequent trend patterns); (c) use trend pattern vectors to predict future time series sequences.

  18. Dollar$ & $en$e. Part IV: Measuring the value of people, structural, and customer capital.

    PubMed

    Wilkinson, I

    2001-01-01

    In Part I of this series, I introduced the concept of memes (1). Memes are ideas or concepts, the information world equivalent of genes. The goal of this series of articles is to infect you with my memes, so that you will assimilate, translate, and express them. We discovered that no matter what our area of expertise or "-ology," we all are in the information business. Our goal is to be in the wisdom business. We saw that when we convert raw data into wisdom we are moving along a value chain. Each step in the chain adds a different amount of value to the final product: timely, relevant, accurate, and precise knowledge which can then be applied to create the ultimate product in the value chain: wisdom. In Part II of this series, I infected you with a set of memes for measuring the cost of adding value (2). In Part III of this series, I infected you with a new set of memes for measuring the added value of knowledge, i.e., intellectual capital (3). In Part IV of this series, I will infect you with memes for measuring the value of people, structural, and customer capital.

  19. Value and Opportunity: Comparable Pay for Comparable Worth. Series on Public Issues No. 10.

    ERIC Educational Resources Information Center

    Walker, Deborah

    In this booklet, one of a series intended to apply economic principles to major social and political issues, an argument is presented against comparable pay for comparable worth policies for women. Separate subsections present opposing viewpoints on this controversial issue as well as an examination of whether legislation has been a…

  20. Complex network approach to fractional time series

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manshour, Pouya

    In order to extract correlation information inherited in stochastic time series, the visibility graph algorithm has been recently proposed, by which a time series can be mapped onto a complex network. We demonstrate that the visibility algorithm is not an appropriate one to study the correlation aspects of a time series. We then employ the horizontal visibility algorithm, as a much simpler one, to map fractional processes onto complex networks. The degree distributions are shown to have parabolic exponential forms with a Hurst-dependent fitting parameter. Further, we take into account other topological properties such as the maximum eigenvalue of the adjacency matrix and the degree assortativity, and show that such topological quantities can also be used to predict the Hurst exponent, with an exception for anti-persistent fractional Gaussian noises. To solve this problem, we take into account the Spearman correlation coefficient between nodes' degrees and their corresponding data values in the original time series.
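
    A minimal implementation of the horizontal visibility algorithm used above: two samples are linked when every sample strictly between them is lower than both. The O(n^2) scan below is fine for short series and returns the degree of each node.

```python
def horizontal_visibility_degrees(series):
    # Horizontal visibility graph: samples i and j (i < j) are linked
    # when every sample strictly between them is lower than both.
    # Returns the number of links per node (the degree sequence).
    n = len(series)
    degree = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            between = series[i + 1:j]
            if all(x < min(series[i], series[j]) for x in between):
                degree[i] += 1
                degree[j] += 1
    return degree

print(horizontal_visibility_degrees([3, 1, 2, 4]))  # [3, 2, 3, 2]
```

    Note that consecutive samples are always linked (the "between" slice is empty), so every interior node has degree at least 2.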

  1. Three-in-one resonance tube for harmonic series sound wave experiments

    NASA Astrophysics Data System (ADS)

    Jaafar, Rosly; Nazihah Mat Daud, Anis; Ali, Shaharudin; Kadri Ayop, Shahrul

    2017-07-01

    In this study we constructed a special three-in-one resonance tube for a harmonic series sound wave experiment. It is designed for three different experiments: both-open-end, one-closed-end and both-closed-end tubes. The resonance tube consists of a PVC conduit with a rectangular hole, a rubber tube, a plastic stopper with an embedded microphone and a plastic stopper. The resonance tube is utilized with visual analyser freeware to detect, display and measure the resonance frequencies for each harmonic series. The speed of sound in air, v, is determined from the gradient of the 2(L+e) versus n/fn, 4(L+e) versus n/fn and 2L versus n/fn graphs for both-open-end, one-closed-end and both-closed-end tubes, respectively. The compatibility of the resonance tube for a harmonic series experiment is determined by comparing the experimental and standard values of v. The resonance tube produces accurate results for v, within a 1.91% error compared to the standard value. It can also be used to determine the values of the end correction, e, in both-open-end and one-closed-end tubes. The special resonance tube can also be used to investigate the values of n for a harmonic series experiment in the three types of resonance tubes: both-open-end, one-closed-end and both-closed-end.
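
    The gradient method follows from f_n = nv/2(L+e) for a both-open-end tube, so 2(L+e) = v * (n/f_n) and a plot of 2(L+e) against n/f_n has gradient v. A sketch with synthetic data (the tube length, end correction, and speed of sound are chosen for illustration, not taken from the paper):

```python
def gradient_through_origin(xs, ys):
    # Least-squares slope of y = v * x (a line through the origin):
    # v = sum(x*y) / sum(x*x).
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Synthetic both-open-end data: L = 0.50 m, end correction e = 0.015 m,
# v = 343 m/s, so the resonances are f_n = n * v / (2 * (L + e)).
L, e, v_true = 0.50, 0.015, 343.0
harmonics = [1, 2, 3, 4, 5]
freqs = [n * v_true / (2 * (L + e)) for n in harmonics]

# Plot 2(L + e) against n / f_n for each harmonic; the gradient is v.
ys = [2 * (L + e)] * len(harmonics)
xs = [n / f for n, f in zip(harmonics, freqs)]
v_est = gradient_through_origin(xs, ys)
```

    The one-closed-end and both-closed-end cases work the same way with 4(L+e) and 2L as the ordinate, respectively.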

  2. Refined composite multiscale weighted-permutation entropy of financial time series

    NASA Astrophysics Data System (ADS)

    Zhang, Yongping; Shang, Pengjian

    2018-04-01

    For quantifying the complexity of nonlinear systems, multiscale weighted-permutation entropy (MWPE) has recently been proposed. MWPE has incorporated amplitude information and been applied to account for the multiple inherent dynamics of time series. However, MWPE may be unreliable, because its estimated values fluctuate strongly under slight variation of the data locations, and show a significant distinction only for different lengths of time series. Therefore, we propose the refined composite multiscale weighted-permutation entropy (RCMWPE). Comparing the RCMWPE results with those of other methods on both synthetic data and financial time series, the RCMWPE method shows not only the advantages inherited from MWPE but also lower sensitivity to the data locations, greater stability, and much weaker dependence on the length of the time series. Moreover, we present and discuss the results of the RCMWPE method on the daily price return series from Asian and European stock markets. There are significant differences between Asian markets and European markets, and the entropy values of the Hang Seng Index (HSI) are close to but higher than those of the European markets. The reliability of the proposed RCMWPE method has been supported by simulations on generated and real data. It could be applied to a variety of fields to quantify the complexity of systems over multiple scales more accurately.

  3. Depletions in winter total ozone values over southern England

    NASA Technical Reports Server (NTRS)

    Lapworth, A.

    1994-01-01

    A study has been made of the recently re-evaluated time series of daily total ozone values for the period 1979 to 1992 for southern England. The series consists of measurements made at two stations, Bracknell and Camborne. The series shows a steady decline in ozone values in the spring months over the period, and this is consistent with data from an earlier decade that has been published but not re-evaluated. Of exceptional note is the monthly mean for January 1992, which was very significantly reduced from the normal value and was the lowest so far measured for this month. This winter was also noteworthy for a prolonged period during which a blocking anticyclone dominated the region, and the possibility existed that this was related to the ozone anomaly. It was possible to determine whether the origin of the low ozone value lay in ascending stratospheric motions. A linear regression analysis of ozone value deviations against 100 hPa temperature deviations was used to reduce ozone values to those expected in the absence of high pressure. The assumption was made that the normal regression relation was not affected by atmospheric anomalies during the winter. This showed that vertical motions in the stratosphere only accounted for part of the ozone anomaly and that the main cause of the ozone deficit lay either in a reduced stratospheric circulation, to which the anticyclone may be related, or in chemical effects at the reduced stratospheric temperatures above the high pressure area. A study of the ozone time series, adjusted to remove variations correlated with meteorological quantities, showed that during the period since 1979 one other winter, that of 1982/3, showed a similar although less well defined deficit in total ozone values.

  4. Water Splitting with Series-Connected Polymer Solar Cells.

    PubMed

    Esiner, Serkan; van Eersel, Harm; van Pruissen, Gijs W P; Turbiez, Mathieu; Wienk, Martijn M; Janssen, René A J

    2016-10-12

    We investigate light-driven electrochemical water splitting with series-connected polymer solar cells using a combined experimental and modeling approach. The expected maximum solar-to-hydrogen conversion efficiency (η_STH) for light-driven water splitting is modeled for two, three, and four series-connected polymer solar cells. In the modeling, we assume an electrochemical water splitting potential of 1.50 V and a polymer solar cell for which the external quantum efficiency and fill factor are both 0.65. The minimum photon energy loss (E_loss), defined as the energy difference between the optical band gap (E_g) and the open-circuit voltage (V_oc), is set to 0.8 eV, which we consider a realistic value for polymer solar cells. Within these approximations, two series-connected single junction cells with E_g = 1.73 eV or three series-connected cells with E_g = 1.44 eV are both expected to give an η_STH of 6.9%. For four series-connected cells, the maximum η_STH is slightly less at 6.2% at an optimal E_g = 1.33 eV. Water splitting was performed with series-connected polymer solar cells using polymers with different band gaps. PTPTIBDT-OD (E_g = 1.89 eV), PTB7-Th (E_g = 1.56 eV), and PDPP5T-2 (E_g = 1.44 eV) were blended with [70]PCBM as absorber layer for two, three, and four series-connected configurations, respectively, and provide η_STH values of 4.1, 6.1, and 4.9% when using a retroreflective foil on top of the cell to enhance light absorption. The reasons for deviations with experiments are analyzed and found to be due to differences in E_g and E_loss. Light-driven electrochemical water splitting was also modeled for multijunction polymer solar cells with vertically stacked photoactive layers. Under identical assumptions, an η_STH of 10.0% is predicted for multijunction cells.

  5. Complexity analysis of the turbulent environmental fluid flow time series

    NASA Astrophysics Data System (ADS)

    Mihailović, D. T.; Nikolić-Đorić, E.; Drešković, N.; Mimić, G.

    2014-02-01

    We have used the Kolmogorov complexities, sample and permutation entropies to quantify the degree of randomness in the river flow time series of two mountain rivers in Bosnia and Herzegovina, representing the turbulent environmental fluid, for the period 1926-1990. In particular, we have examined the monthly river flow time series from two rivers (the Miljacka and the Bosnia) in the mountain part of their flow and then calculated the Kolmogorov complexity (KL) based on the Lempel-Ziv Algorithm (LZA) (lower, KLL, and upper, KLU), sample entropy (SE) and permutation entropy (PE) values for each time series. The results indicate that the KLL, KLU, SE and PE values in the two rivers are close to each other regardless of the amplitude differences in their monthly flow rates. We have illustrated the changes in mountain river flow complexity by experiments using (i) the data set for the Bosnia River and (ii) anticipated human activities and projected climate changes. We have explored the sensitivity of the considered measures to the length of the time series. In addition, we have divided the period 1926-1990 into three subintervals: (a) 1926-1945, (b) 1946-1965, (c) 1966-1990, and calculated the KLL, KLU, SE, PE values for the various time series in these subintervals. It is found that during the period 1946-1965 there is a decrease in their complexities, with corresponding changes in the SE and PE, in comparison to the period 1926-1990. This complexity loss may be primarily attributed to (i) human interventions on these two rivers after the Second World War, because of their use for water consumption, and (ii) climate change in recent times.
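
As an illustration of the Lempel-Ziv-based Kolmogorov complexity estimate used above, the sketch below binarizes a series around its mean (one common convention; the threshold choice is an assumption of this illustration, not necessarily the authors' procedure) and counts LZ76 phrases.

```python
import numpy as np

def lz76_phrase_count(s):
    """Number of phrases in the Lempel-Ziv (1976) parsing of a symbol string."""
    n, c, i = len(s), 1, 1
    while i < n:
        k = 1
        # extend the current phrase while it already occurs in the preceding text
        while i + k <= n and s[i:i + k] in s[:i + k - 1]:
            k += 1
        c += 1
        i += k
    return c

def kolmogorov_complexity(x):
    """Normalized LZ complexity of a series binarized around its mean."""
    x = np.asarray(x, float)
    s = "".join("1" if v >= x.mean() else "0" for v in x)
    n = len(s)
    return lz76_phrase_count(s) * np.log2(n) / n   # near 1 for random bits
```

The paper reports lower (KLL) and upper (KLU) variants; this sketch gives a single normalized estimate, which approaches 1 for random sequences and falls toward 0 for regular ones.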

  6. Rethinking Value in the Bio-economy: Finance, Assetization, and the Management of Value.

    PubMed

    Birch, Kean

    2017-05-01

    Current debates in science and technology studies emphasize that the bio-economy, or the articulation of capitalism and biotechnology, is built on notions of commodity production, commodification, and materiality, stressing that it is possible to derive value from body parts, molecular and cellular tissues, biological processes, and so on. What is missing from these perspectives, however, is consideration of the political-economic actors, knowledges, and practices involved in the creation and management of value. As part of a rethinking of value in the bio-economy, this article analyzes three key political-economic processes: financialization, capitalization, and assetization. In doing so, it argues that value is managed as part of a series of valuation practices; it is not inherent in biological materialities.

  7. Predicting long-term catchment nutrient export: the use of nonlinear time series models

    NASA Astrophysics Data System (ADS)

    Valent, Peter; Howden, Nicholas J. K.; Szolgay, Jan; Komornikova, Magda

    2010-05-01

    After the Second World War, nitrate concentrations in European water bodies changed significantly as a result of increased nitrogen fertilizer use and changes in land use. In recent decades, however, as a consequence of the implementation of nitrate-reducing measures in Europe, nitrate concentrations in water bodies have been slowly decreasing. As a result, the mean and variance of the observed time series also change with time (nonstationarity and heteroscedasticity). In order to detect changes and properly describe the behaviour of such time series, linear models (such as autoregressive (AR), moving average (MA) and autoregressive moving average (ARMA) models) are no longer suitable. Time series with sudden changes in statistical characteristics can cause various problems in the calibration of traditional water quality models and thus give biased predictions. Proper statistical analysis of these non-stationary and heteroscedastic time series, with the aim of detecting and subsequently explaining the variations in their statistical characteristics, requires the use of nonlinear time series models. This information can then be used to improve the building and calibration of conceptual water quality models, or to select the right calibration periods in order to produce reliable predictions. The objective of this contribution is to analyse two long time series of nitrate concentrations of the rivers Ouse and Stour with advanced nonlinear statistical modelling techniques and to compare their performance with traditional linear models of the ARMA class, in order to identify changes in the time series characteristics. The time series were analysed with nonlinear models with multiple regimes, represented by self-exciting threshold autoregressive (SETAR) and Markov-switching (MSW) models.
    The analysis showed that, based on the value of the residual sum of squares (RSS) in both datasets, SETAR and MSW models described the time series better than models of the

  8. Fractional Flow Reserve: Does a Cut-off Value add Value?

    PubMed Central

    Mohdnazri, Shah R; Keeble, Thomas R

    2016-01-01

    Fractional flow reserve (FFR) has been shown to improve outcomes when used to guide percutaneous coronary intervention (PCI). There have been two proposed cut-off points for FFR. The first was derived by comparing FFR against a series of non-invasive tests, with a value of ≤0.75 shown to predict a positive ischaemia test. It was then shown in the DEFER study that a vessel FFR value of ≥0.75 was associated with safe deferral of PCI. During the validation phase, a ‘grey zone’ for FFR values of between 0.76 and 0.80 was demonstrated, where a positive non-invasive test may still occur, but sensitivity and specificity were sub-optimal. Clinical judgement was therefore advised for values in this range. The FAME studies then moved the FFR cut-off point to ≤0.80, with a view to predicting outcomes. The ≤0.80 cut-off point has been adopted into clinical practice guidelines, whereas the lower value of ≤0.75 is no longer widely used. Here, the authors discuss the data underpinning these cut-off values and the practical implications for their use when using FFR guidance in PCI. PMID:29588700

  9. Entropy of electromyography time series

    NASA Astrophysics Data System (ADS)

    Kaufman, Miron; Zurcher, Ulrich; Sung, Paul S.

    2007-12-01

    A nonlinear analysis based on Renyi entropy is applied to electromyography (EMG) time series from back muscles. The time dependence of the entropy of the EMG signal exhibits a crossover from a subdiffusive regime at short times to a plateau at longer times. We argue that this behavior characterizes complex biological systems. The plateau value of the entropy can be used to differentiate between healthy and low back pain individuals.
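
A histogram-based Renyi entropy estimate along the lines discussed above can be sketched as follows; the bin count and the amplitude-histogram estimator are assumptions of this illustration, and the paper's time-resolved (windowed) analysis is not reproduced.

```python
import numpy as np

def renyi_entropy(signal, q=2.0, bins=32):
    """Order-q Renyi entropy from a histogram estimate of the amplitude distribution."""
    counts, _ = np.histogram(signal, bins=bins)
    p = counts[counts > 0] / counts.sum()
    if np.isclose(q, 1.0):
        return float(-np.sum(p * np.log(p)))        # Shannon limit as q -> 1
    return float(np.log(np.sum(p ** q)) / (1.0 - q))
```

For a uniform amplitude distribution the entropy equals log(bins) for every order q, which is a convenient sanity check.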

  10. Local sample thickness determination via scanning transmission electron microscopy defocus series.

    PubMed

    Beyer, A; Straubinger, R; Belz, J; Volz, K

    2016-05-01

    The usable aperture sizes in (scanning) transmission electron microscopy ((S)TEM) have significantly increased in the past decade due to the introduction of aberration correction. With the consequent increase in convergence angle, the depth of focus has decreased severely and optical sectioning in the STEM has become feasible. Here we apply STEM defocus series to derive the local sample thickness of a TEM sample. To this end, experimental as well as simulated defocus series of thin Si foils were acquired. The systematic blurring of high-resolution high-angle annular dark field images is quantified by evaluating the standard deviation of the image intensity for each image of a defocus series. The derived dependencies exhibit a pronounced maximum at the optimum defocus and drop to a background value for higher or lower values. The full width at half maximum (FWHM) of the curve is equal to the sample thickness, above a minimum thickness given by the size of the used aperture and the chromatic aberration of the microscope. The thicknesses obtained from experimental defocus series applying the proposed method are in good agreement with the values derived from other established methods. The key advantages of this method compared to others are its high spatial resolution and that it does not involve any time-consuming simulations.
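
The thickness measure described above (FWHM of the contrast-versus-defocus curve) can be sketched as follows; the linear interpolation of the half-maximum crossings and the use of the curve minimum as the background level are assumptions of this illustration.

```python
import numpy as np

def thickness_from_defocus_series(defocus_nm, images):
    """Estimate foil thickness as the FWHM of the contrast-versus-defocus curve.

    Contrast is the standard deviation of each image's intensity; the curve
    peaks at optimum defocus and falls to a background level far from it.
    """
    defocus_nm = np.asarray(defocus_nm, float)
    contrast = np.array([np.std(im) for im in images])
    background = contrast.min()
    half = background + 0.5 * (contrast.max() - background)
    above = np.where(contrast >= half)[0]

    def crossing(i0, i1):
        # linear interpolation of the defocus where contrast crosses `half`
        c0, c1 = contrast[i0], contrast[i1]
        return defocus_nm[i0] + (half - c0) * (defocus_nm[i1] - defocus_nm[i0]) / (c1 - c0)

    left = crossing(above[0] - 1, above[0]) if above[0] > 0 else defocus_nm[0]
    right = crossing(above[-1], above[-1] + 1) if above[-1] < len(contrast) - 1 else defocus_nm[-1]
    return right - left
```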

  11. Neural network versus classical time series forecasting models

    NASA Astrophysics Data System (ADS)

    Nor, Maria Elena; Safuan, Hamizah Mohd; Shab, Noorzehan Fazahiyah Md; Asrul, Mohd; Abdullah, Affendi; Mohamad, Nurul Asmaa Izzati; Lee, Muhammad Hisyam

    2017-05-01

    Artificial neural networks (ANN) have an advantage in time series forecasting, as they have the potential to solve complex forecasting problems. This is because an ANN is a data-driven approach that can be trained to map past values of a time series. In this study, the forecast performance of a neural network and a classical time series forecasting method, namely the seasonal autoregressive integrated moving average model, was compared using gold price data. Moreover, the effect of different data preprocessing on the forecast performance of the neural network was examined. The forecast accuracy was evaluated using mean absolute deviation, root mean square error and mean absolute percentage error. It was found that the ANN produced the most accurate forecast when the Box-Cox transformation was used as data preprocessing.
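
The three accuracy measures used above can be sketched as follows (MAPE assumes nonzero actual values):

```python
import numpy as np

def forecast_errors(actual, forecast):
    """Mean absolute deviation, root mean square error and mean absolute
    percentage error of a forecast against actual values."""
    actual = np.asarray(actual, float)
    e = actual - np.asarray(forecast, float)
    mad = np.mean(np.abs(e))
    rmse = np.sqrt(np.mean(e ** 2))
    mape = 100.0 * np.mean(np.abs(e / actual))   # actuals must be nonzero
    return mad, rmse, mape
```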

  12. Approximate Entropies for Stochastic Time Series and EKG Time Series of Patients with Epilepsy and Pseudoseizures

    NASA Astrophysics Data System (ADS)

    Vyhnalek, Brian; Zurcher, Ulrich; O'Dwyer, Rebecca; Kaufman, Miron

    2009-10-01

    A wide range of heart rate irregularities have been reported in small studies of patients with temporal lobe epilepsy [TLE]. We hypothesize that patients with TLE display cardiac dysautonomia in either a subclinical or clinical manner. In a small study, we have retrospectively identified (2003-8) two groups of patients from the epilepsy monitoring unit [EMU] at the Cleveland Clinic. No patients were diagnosed with cardiovascular morbidities. The control group consisted of patients with confirmed pseudoseizures and the experimental group had confirmed right temporal lobe epilepsy through a seizure-free outcome after temporal lobectomy. We quantified the heart rate variability using the approximate entropy [ApEn]. We found similar values of the ApEn in all three states of consciousness (awake, sleep, and preceding seizure onset). In the TLE group, there is some evidence for greater variability in the awake state than in either sleep or the period preceding seizure onset. Here we present results for mathematically generated time series: the heart rate fluctuations ξ follow gamma statistics, i.e., p(ξ) = Γ(k)^{-1} ξ^{k-1} exp(-ξ). This probability function has well-known properties and its Shannon entropy can be expressed in terms of the Γ-function. The parameter k allows us to generate a family of heart rate time series with different statistics. The ApEn calculated for the generated time series for different values of k mimics the properties found for the TLE and pseudoseizure groups. Our results suggest that the ApEn is an effective tool to probe differences in statistics of heart rate fluctuations.
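
A minimal approximate entropy implementation following Pincus' definition (template self-matches included) might look like this; it is a generic sketch, not the study's code.

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """Approximate entropy ApEn(m, r) of a 1-D series (Pincus' definition)."""
    x = np.asarray(x, float)
    n = len(x)
    if r is None:
        r = 0.2 * np.std(x)          # a common tolerance choice

    def phi(m):
        # all overlapping templates of length m
        templ = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between every pair of templates
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        c = np.mean(d <= r, axis=1)  # self-matches included, as in ApEn
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)
```

A perfectly regular series gives ApEn near zero, while noise gives a clearly positive value; gamma-distributed surrogates as in the abstract can be generated with NumPy's `rng.gamma(k, size=n)`.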

  13. Model-based Clustering of Categorical Time Series with Multinomial Logit Classification

    NASA Astrophysics Data System (ADS)

    Frühwirth-Schnatter, Sylvia; Pamminger, Christoph; Winter-Ebmer, Rudolf; Weber, Andrea

    2010-09-01

    A common problem in many areas of applied statistics is to identify groups of similar time series in a panel of time series. However, distance-based clustering methods cannot easily be extended to time series data, where an appropriate distance measure is rather difficult to define, particularly for discrete-valued time series. Markov chain clustering, proposed by Pamminger and Frühwirth-Schnatter [6], is an approach for clustering discrete-valued time series obtained by observing a categorical variable with several states. This model-based clustering method is based on finite mixtures of first-order time-homogeneous Markov chain models. In order to further explain group membership we present an extension to the approach of Pamminger and Frühwirth-Schnatter [6] by formulating a probabilistic model for the latent group indicators within the Bayesian classification rule, using a multinomial logit model. The parameters are estimated for a fixed number of clusters within a Bayesian framework using a Markov chain Monte Carlo (MCMC) sampling scheme representing a (full) Gibbs-type sampler which involves only draws from standard distributions. Finally, an application to a panel of Austrian wage mobility data is presented which leads to an interesting segmentation of the Austrian labour market.
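
The first-order Markov chain building block of this clustering approach can be sketched as a per-series transition-matrix maximum-likelihood estimate; the finite-mixture and MCMC machinery of the paper is not reproduced here.

```python
import numpy as np

def transition_matrix(seq, n_states):
    """Maximum-likelihood transition matrix of a first-order Markov chain
    estimated from one categorical series (states coded 0..n_states-1)."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1.0
    rows = counts.sum(axis=1, keepdims=True)
    rows[rows == 0.0] = 1.0            # leave never-visited states as zero rows
    return counts / rows
```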

  14. Participatory public health systems research: value of community involvement in a study series in mental health emergency preparedness.

    PubMed

    McCabe, O Lee; Marum, Felicity; Semon, Natalie; Mosley, Adrian; Gwon, Howard; Perry, Charlene; Moore, Suzanne Straub; Links, Jonathan M

    2012-01-01

    Concerns have arisen over recent years about the absence of empirically derived evidence on which to base policy and practice in the public health system in general, and to meet the challenge of public health emergency preparedness in particular. Related issues include the challenge of disaster-caused behavioral health surge, and the frequent exclusion of populations from studies that the research is meant to aid. Objective: To characterize the contributions of nonacademic collaborators to a series of projects validating a set of interventions to enhance capacity and competency of public mental health preparedness planning and response. Setting: Urban, suburban, and rural communities of the state of Maryland and rural communities of the state of Iowa. Participants: Study partners and participants (both of this project and the studies examined) were representatives of academic health centers (AHCs), local health departments (LHDs), and faith-based organizations (FBOs) and their communities. Methods: A multiple-project case study analysis was conducted of four research projects implemented by the authors from 2005 through 2011, to determine the types and impact of contributions made by nonacademic collaborators to those projects. The analysis involved reviewing research records and conceptualizing contributions (with examples) for government, faith, and (nonacademic) institutional collaborators. Results: Ten areas were identified where partners made valuable contributions to the study series; these "value-areas" were as follows: 1) leadership and management of the projects; 2) formulation and refinement of research topics, aims, etc; 3) recruitment and retention of participants; 4) design and enhancement of interventions; 5) delivery of interventions; 6) collection, analysis, and interpretation of data; 7) dissemination of findings; 8) ensuring sustainability of faith/government preparedness planning relationships; 9) optimizing scalability and portability of the model; and 10) facilitating

  15. RADON CONCENTRATION TIME SERIES MODELING AND APPLICATION DISCUSSION.

    PubMed

    Stránský, V; Thinová, L

    2017-11-01

    In the year 2010, continual radon measurement was established at Mladeč Caves in the Czech Republic using a continual radon monitor RADIM3A. In order to model the radon time series in the years 2010-15, the Box-Jenkins methodology, often used in econometrics, was applied. Because of the behaviour of radon concentrations (RCs), a seasonal integrated autoregressive moving average model with exogenous variables (SARIMAX) was chosen to model the measured time series. This model uses the time series seasonality, previously acquired values and delayed atmospheric parameters to forecast RC. The developed model for the RC time series is called regARIMA(5,1,3). Model residuals could be retrospectively compared with seismic evidence of local or global earthquakes that occurred during the RC measurement. This technique enables us to assess whether continuously measured RC could serve as an earthquake precursor.
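
A full SARIMAX fit would normally be done with a statistics package; as a hand-rolled sketch of two of its ingredients, the snippet below seasonally differences a series and fits an AR(p) model by ordinary least squares. The synthetic data and the small model order are assumptions of this illustration, not the paper's regARIMA(5,1,3).

```python
import numpy as np

def seasonal_difference(x, s):
    """Remove a period-s seasonal component: y_t = x_t - x_{t-s}."""
    x = np.asarray(x, float)
    return x[s:] - x[:-s]

def fit_ar(x, p):
    """Least-squares fit of x_t = c + a_1 x_{t-1} + ... + a_p x_{t-p} + e_t."""
    x = np.asarray(x, float)
    cols = [x[p - i - 1:len(x) - i - 1] for i in range(p)]   # lags 1..p
    X = np.column_stack([np.ones(len(x) - p)] + cols)
    coef, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return coef                                              # [c, a_1, ..., a_p]
```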

  16. A Laboratory Exercise in Physics: Determining Single Capacitances and Series and Parallel Combinations of Capacitance.

    ERIC Educational Resources Information Center

    Schlenker, Richard M.

    This document presents a series of physics experiments which allow students to determine the value of unknown electrical capacitors. The exercises include both parallel and series connected capacitors. (SL)

  17. On Stabilizing the Variance of Dynamic Functional Brain Connectivity Time Series.

    PubMed

    Thompson, William Hedley; Fransson, Peter

    2016-12-01

    Assessment of dynamic functional brain connectivity based on functional magnetic resonance imaging (fMRI) data is an increasingly popular strategy to investigate temporal dynamics of the brain's large-scale network architecture. Current practice when deriving connectivity estimates over time is to use the Fisher transformation, which aims to stabilize the variance of correlation values that fluctuate around varying true correlation values. It is, however, unclear how well the stabilization of signal variance performed by the Fisher transformation works for each connectivity time series, when the true correlation is assumed to be fluctuating. This is of importance because many subsequent analyses either assume or perform better when the time series have stable variance or adhere to an approximate Gaussian distribution. In this article, using simulations and analysis of resting-state fMRI data, we analyze the effect of applying different variance stabilization strategies on connectivity time series. We focus our investigation on the Fisher transformation, the Box-Cox (BC) transformation and an approach that combines both transformations. Our results show that, if the intention of stabilizing the variance is to use metrics on the time series, where stable variance or a Gaussian distribution is desired (e.g., clustering), the Fisher transformation is not optimal and may even skew connectivity time series away from being Gaussian. Furthermore, we show that the suboptimal performance of the Fisher transformation can be substantially improved by including an additional BC transformation after the dynamic functional connectivity time series has been Fisher transformed.
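
The transformations discussed above can be sketched as follows; the shift applied before the Box-Cox step (which requires positive data) and the fixed λ are assumptions of this illustration, not the paper's estimation procedure.

```python
import numpy as np

def fisher_z(r):
    """Fisher r-to-z transformation: z = arctanh(r)."""
    return np.arctanh(r)

def boxcox(x, lam):
    """One-parameter Box-Cox transform for positive data."""
    x = np.asarray(x, float)
    if np.isclose(lam, 0.0):
        return np.log(x)
    return (x ** lam - 1.0) / lam

def stabilize(r_series, lam=0.5):
    """Fisher transform, shift onto positive support, then Box-Cox
    (the combined approach discussed in the abstract)."""
    z = fisher_z(np.asarray(r_series, float))
    return boxcox(z - z.min() + 1.0, lam)
```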

  18. Adjusted monthly temperature and precipitation values for Guinea Conakry (1941-2010) using HOMER.

    NASA Astrophysics Data System (ADS)

    Aguilar, Enric; Aziz Barry, Abdoul; Mestre, Olivier

    2013-04-01

    Africa is a data-sparse region and there are very few studies presenting homogenized monthly records. In this work, we introduce a dataset consisting of 12 stations spread over Guinea Conakry containing daily values of maximum and minimum temperature and accumulated rainfall for the period 1941-2010. The daily values have been quality controlled using R-Climdex routines, plus other interactive quality control applications coded by the authors. After applying the different tests, more than 200 daily values were flagged as doubtful and carefully checked against the statistical distribution of the series and the rest of the dataset. Finally, 40 values were modified or set to missing and the rest were validated. The quality-controlled daily dataset was used to produce monthly means and homogenized with HOMER, a new R-package which includes the relative methods that performed best in the experiments conducted in the framework of the COST-HOME action. A total number of 38 inhomogeneities were found for temperature. As a total of 788 years of data were analyzed, the average ratio was one break every 20.7 years. The station with the largest number of inhomogeneities was Conakry (5 breaks) and one station, Kissidougou, was identified as homogeneous. The average number of breaks per station was 3.2. The mean value of the monthly factors applied to maximum (minimum) temperature was 0.17 °C (-1.08 °C). For precipitation, due to the demand of a denser network to correctly homogenize this variable, only two major inhomogeneities in Conakry (1941-1961, -12%) and Kindia (1941-1976, -10%) were corrected. The adjusted dataset was used to compute regional series for the three variables and trends for the 1941-2010 period. The regional mean has been computed by simply averaging anomalies to 1971-2000 of the 12 time series. Two different versions have been obtained: a first one (A) makes use of the missing values interpolation made by HOMER (so all annual values in the regional series
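
The regional series construction described above (averaging station anomalies relative to 1971-2000) can be sketched as below; the array layout is an assumption of this illustration.

```python
import numpy as np

def regional_series(data, years, ref=(1971, 2000)):
    """Regional mean as the average of station anomalies w.r.t. a reference period.

    data: (n_stations, n_years) array of annual values, NaN where missing.
    """
    years = np.asarray(years)
    in_ref = (years >= ref[0]) & (years <= ref[1])
    clim = np.nanmean(data[:, in_ref], axis=1, keepdims=True)   # station climatology
    return np.nanmean(data - clim, axis=0)                      # mean anomaly per year
```

Working in anomalies makes the regional mean insensitive to constant offsets between stations, which is why it is preferred over averaging raw values.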

  19. Evaluation of Scaling Invariance Embedded in Short Time Series

    PubMed Central

    Pan, Xue; Hou, Lei; Stephen, Mutua; Yang, Huijie; Zhu, Chenping

    2014-01-01

    Scaling invariance of time series has been making great contributions in diverse research fields. But how to evaluate the scaling exponent from a real-world series is still an open problem. The finite length of a time series may induce unacceptable fluctuation and bias in statistical quantities, and consequent invalidation of currently used standard methods. In this paper a new concept called correlation-dependent balanced estimation of diffusion entropy is developed to evaluate scale-invariance in very short time series with length . Calculations with specified Hurst exponent values of show that by using the standard central moving average de-trending procedure this method can evaluate the scaling exponents for short time series with ignorable bias () and sharp confidence interval (standard deviation ). Considering the stride series from ten volunteers along an approximate oval path of a specified length, we observe that though the averages and deviations of scaling exponents are close, their evolutionary behaviors display rich patterns. It has potential use in analyzing physiological signals, detecting early warning signals, and so on. We emphasize that our core contribution is that, by means of the proposed method, one can precisely estimate Shannon entropy from limited records. PMID:25549356

  20. Dynamical complexity of short and noisy time series. Compression-Complexity vs. Shannon entropy

    NASA Astrophysics Data System (ADS)

    Nagaraj, Nithin; Balasubramanian, Karthi

    2017-07-01

    Shannon entropy has been extensively used for characterizing the complexity of time series arising from chaotic dynamical systems and stochastic processes such as Markov chains. However, for short and noisy time series, Shannon entropy performs poorly. Complexity measures which are based on lossless compression algorithms are a good substitute in such scenarios. We evaluate the performance of two such compression-complexity measures, namely Lempel-Ziv complexity (LZ) and Effort-To-Compress (ETC), on short time series from chaotic dynamical systems in the presence of noise. Both LZ and ETC outperform Shannon entropy (H) in accurately characterizing the dynamical complexity of such systems. For very short binary sequences (which arise in neuroscience applications), ETC has a higher number of distinct complexity values than LZ and H, thus enabling a finer resolution. For two-state ergodic Markov chains, we empirically show that ETC converges to a steady-state value faster than LZ. Compression-complexity measures are promising for applications which involve short and noisy time series.
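
A minimal Effort-To-Compress sketch using non-sequential recursive pair substitution: ETC counts the passes needed until the sequence becomes constant. Ties between equally frequent pairs are broken by first occurrence here, which is an implementation choice of this illustration.

```python
def effort_to_compress(seq):
    """Effort-To-Compress: number of pair-substitution passes until the
    sequence is constant (or of length 1)."""
    seq = list(seq)
    steps = 0
    while len(seq) > 1 and len(set(seq)) > 1:
        # count adjacent pairs (overlapping count is fine for picking a winner)
        counts = {}
        for a, b in zip(seq, seq[1:]):
            counts[(a, b)] = counts.get((a, b), 0) + 1
        best = max(counts, key=counts.get)
        new = ("pair", steps)          # fresh symbol not present in seq
        out, i = [], 0
        while i < len(seq):
            if i < len(seq) - 1 and (seq[i], seq[i + 1]) == best:
                out.append(new)        # replace a non-overlapping occurrence
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
        steps += 1
    return steps
```

A constant sequence needs zero passes, while irregular sequences need more, which is the sense in which ETC measures dynamical complexity.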

  1. On Sums of Numerical Series and Fourier Series

    ERIC Educational Resources Information Center

    Pavao, H. Germano; de Oliveira, E. Capelas

    2008-01-01

    We discuss a class of trigonometric functions whose corresponding Fourier series, on a conveniently chosen interval, can be used to calculate several numerical series. Particular cases are presented and two recent results involving numerical series are recovered. (Contains 1 note.)

  2. A comment on measuring the Hurst exponent of financial time series

    NASA Astrophysics Data System (ADS)

    Couillard, Michel; Davison, Matt

    2005-03-01

    A fundamental hypothesis of quantitative finance is that stock price variations are independent and can be modeled using Brownian motion. In recent years, it was proposed to use rescaled range analysis and its characteristic value, the Hurst exponent, to test for independence in financial time series. Theoretically, independent time series should be characterized by a Hurst exponent of 1/2. However, finite Brownian motion data sets will always give a value of the Hurst exponent larger than 1/2, and without an appropriate statistical test such a value can mistakenly be interpreted as evidence of long-term memory. We obtain a more precise statistical significance test for the Hurst exponent and apply it to real financial data sets. Our empirical analysis shows no long-term memory in some financial returns, suggesting that Brownian motion cannot be rejected as a model for price dynamics.
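
Rescaled range (R/S) analysis can be sketched as follows; the dyadic chunk-size schedule and the minimum chunk size are implementation choices of this illustration. Note that, as the abstract warns, white noise comes out slightly above 1/2 on finite samples.

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Estimate the Hurst exponent by rescaled-range (R/S) analysis."""
    x = np.asarray(x, float)
    n = len(x)
    sizes, rs_means = [], []
    size = min_chunk
    while size <= n // 2:
        rs = []
        for start in range(0, n - size + 1, size):
            chunk = x[start:start + size]
            y = np.cumsum(chunk - chunk.mean())   # cumulative deviations
            s = chunk.std()
            if s > 0:
                rs.append((y.max() - y.min()) / s)
        sizes.append(size)
        rs_means.append(np.mean(rs))
        size *= 2                                  # dyadic chunk sizes
    # H is the slope of log(R/S) against log(chunk size)
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_means), 1)
    return slope
```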

  3. 31 CFR 351.14 - When are rate announcements that apply to Series EE savings bonds announced?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... apply to Series EE savings bonds announced? 351.14 Section 351.14 Money and Finance: Treasury... PUBLIC DEBT OFFERING OF UNITED STATES SAVINGS BONDS, SERIES EE Maturities, Redemption Values, and Investment Yields of Series EE Savings Bonds General Provisions § 351.14 When are rate announcements that...

  4. 31 CFR 351.14 - When are rate announcements that apply to Series EE savings bonds announced?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... apply to Series EE savings bonds announced? 351.14 Section 351.14 Money and Finance: Treasury... FISCAL SERVICE OFFERING OF UNITED STATES SAVINGS BONDS, SERIES EE Maturities, Redemption Values, and Investment Yields of Series EE Savings Bonds General Provisions § 351.14 When are rate announcements that...

  5. Evaluation of physiologic complexity in time series using generalized sample entropy and surrogate data analysis

    NASA Astrophysics Data System (ADS)

    Eduardo Virgilio Silva, Luiz; Otavio Murta, Luiz

    2012-12-01

    Complexity in time series is an intriguing feature of living dynamical systems, with potential use for identification of system state. Although various methods have been proposed for measuring physiologic complexity, uncorrelated time series are often assigned high values of complexity, erroneously classifying them as complex physiological signals. Here, we propose and discuss a method for complex system analysis based on generalized statistical formalism and surrogate time series. Sample entropy (SampEn) was rewritten, inspired by Tsallis generalized entropy, as a function of the q parameter (qSampEn). qSDiff curves were calculated, which consist of the differences between the qSampEn of original and surrogate series. We evaluated qSDiff for 125 real heart rate variability (HRV) dynamics, divided into groups of 70 healthy, 44 congestive heart failure (CHF), and 11 atrial fibrillation (AF) subjects, and for simulated series of stochastic and chaotic processes. The evaluations showed that, for nonperiodic signals, qSDiff curves have a maximum point (qSDiffmax) for q ≠ 1. Values of q where the maximum point occurs and where qSDiff is zero were also evaluated. Only qSDiffmax values were capable of distinguishing the HRV groups (p-values 5.10×10^-3, 1.11×10^-7, and 5.50×10^-7 for healthy vs. CHF, healthy vs. AF, and CHF vs. AF, respectively), consistently with the concept of physiologic complexity, which suggests a potential use for chaotic system analysis.
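
qSampEn generalizes the standard sample entropy; the base SampEn on which the q-generalization builds can be sketched as below. The Tsallis generalization and the surrogate construction of the paper are not reproduced here.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy SampEn(m, r) = -ln(A/B), template self-matches excluded."""
    x = np.asarray(x, float)
    n = len(x)
    if r is None:
        r = 0.2 * np.std(x)

    def match_pairs(m):
        # N - m templates of length m, Chebyshev distances between all pairs
        templ = np.array([x[i:i + m] for i in range(n - m)])
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        return (np.sum(d <= r) - len(templ)) / 2   # unordered pairs, i != j

    return float(-np.log(match_pairs(m + 1) / match_pairs(m)))
```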

  6. Adaptive time-variant models for fuzzy-time-series forecasting.

    PubMed

    Wong, Wai-Keung; Bai, Enjian; Chu, Alice Wai-Ching

    2010-12-01

    A fuzzy time series has been applied to the prediction of enrollment, temperature, stock indices, and other domains. Related studies mainly focus on three factors, namely, the partition of discourse, the content of forecasting rules, and the methods of defuzzification, all of which greatly influence the prediction accuracy of forecasting models. These studies use fixed analysis window sizes for forecasting. In this paper, an adaptive time-variant fuzzy-time-series forecasting model (ATVF) is proposed to improve forecasting accuracy. The proposed model automatically adapts the analysis window size of fuzzy time series based on the prediction accuracy in the training phase and uses heuristic rules to generate forecasting values in the testing phase. The performance of the ATVF model is tested using both simulated and actual time series including the enrollments at the University of Alabama, Tuscaloosa, and the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX). The experiment results show that the proposed ATVF model achieves a significant improvement in forecasting accuracy as compared to other fuzzy-time-series forecasting models.

  7. Application of Taylor's series to trajectory propagation

    NASA Technical Reports Server (NTRS)

    Stanford, R. H.; Berryman, K. W.; Breckheimer, P. J.

    1986-01-01

    This paper describes the propagation of trajectories by the application of the preprocessor ATOMCC, which uses Taylor's series to solve initial value problems in ordinary differential equations. A comparison of the results obtained with those from other methods is presented. The current studies indicate that the ATOMCC preprocessor is an easy, yet fast and accurate, method for generating trajectories.
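
ATOMCC generates Taylor-series recurrences automatically for a given ODE system; as a hand-written illustration of the idea for the single equation y' = y (where every derivative equals y), a fixed-order Taylor stepper looks like this. The step size, order, and the exponential test problem are assumptions of this sketch.

```python
def taylor_step_exp(y, h, order=10):
    """One Taylor-series step for the ODE y' = y, where every derivative equals y."""
    # y(t + h) = sum_k y^(k)(t) h^k / k!, accumulated term by term
    term, total = y, y
    for k in range(1, order + 1):
        term *= h / k
        total += term
    return total

def integrate(y0, t_end, h, order=10):
    """March the Taylor stepper from t = 0 to t_end."""
    y, t = y0, 0.0
    while t < t_end - 1e-12:
        step = min(h, t_end - t)
        y = taylor_step_exp(y, step, order)
        t += step
    return y
```

With y(0) = 1 the exact solution at t = 1 is e, and a 10th-order stepper recovers it essentially to machine precision.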

  8. 31 CFR 351.14 - When are rate announcements that apply to Series EE savings bonds announced?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... to Series EE savings bonds announced? 351.14 Section 351.14 Money and Finance: Treasury Regulations... DEBT OFFERING OF UNITED STATES SAVINGS BONDS, SERIES EE Maturities, Redemption Values, and Investment Yields of Series EE Savings Bonds General Provisions § 351.14 When are rate announcements that apply to...

  9. 31 CFR 351.14 - When are rate announcements that apply to Series EE savings bonds announced?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... to Series EE savings bonds announced? 351.14 Section 351.14 Money and Finance: Treasury Regulations... DEBT OFFERING OF UNITED STATES SAVINGS BONDS, SERIES EE Maturities, Redemption Values, and Investment Yields of Series EE Savings Bonds General Provisions § 351.14 When are rate announcements that apply to...

  10. 31 CFR 351.14 - When are rate announcements that apply to Series EE savings bonds announced?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... to Series EE savings bonds announced? 351.14 Section 351.14 Money and Finance: Treasury Regulations... DEBT OFFERING OF UNITED STATES SAVINGS BONDS, SERIES EE Maturities, Redemption Values, and Investment Yields of Series EE Savings Bonds General Provisions § 351.14 When are rate announcements that apply to...

  11. Transformational principles for NEON sampling of mammalian parasites and pathogens: a response to Springer et al. (2016)

    USDA-ARS?s Scientific Manuscript database

    The National Environmental Observatory Network (NEON) has recently released a series of protocols presented with apparently broad community support for studies of small mammals and parasites. Sampling designs were outlined, collectively aimed at understanding how changing environmental cond...

  12. On Stabilizing the Variance of Dynamic Functional Brain Connectivity Time Series

    PubMed Central

    Fransson, Peter

    2016-01-01

    Assessment of dynamic functional brain connectivity based on functional magnetic resonance imaging (fMRI) data is an increasingly popular strategy to investigate temporal dynamics of the brain's large-scale network architecture. Current practice when deriving connectivity estimates over time is to use the Fisher transformation, which aims to stabilize the variance of correlation values that fluctuate around varying true correlation values. It is, however, unclear how well the stabilization of signal variance performed by the Fisher transformation works for each connectivity time series, when the true correlation is assumed to be fluctuating. This is of importance because many subsequent analyses either assume or perform better when the time series have stable variance or adhere to an approximate Gaussian distribution. In this article, using simulations and analysis of resting-state fMRI data, we analyze the effect of applying different variance stabilization strategies on connectivity time series. We focus our investigation on the Fisher transformation, the Box–Cox (BC) transformation and an approach that combines both transformations. Our results show that, if the intention of stabilizing the variance is to use metrics on the time series, where stable variance or a Gaussian distribution is desired (e.g., clustering), the Fisher transformation is not optimal and may even skew connectivity time series away from being Gaussian. Furthermore, we show that the suboptimal performance of the Fisher transformation can be substantially improved by including an additional BC transformation after the dynamic functional connectivity time series has been Fisher transformed. PMID:27784176

  13. Multifractal Value at Risk model

    NASA Astrophysics Data System (ADS)

    Lee, Hojin; Song, Jae Wook; Chang, Woojin

    2016-06-01

    In this paper, a new Value at Risk (VaR) model is proposed and investigated. We consider the multifractal property of financial time series and develop a multifractal Value at Risk (MFVaR). The MFVaR introduced in this paper is analytically tractable and not based on simulation. An empirical study showed that MFVaR provides more stable and accurate forecasting performance in volatile financial markets where large losses can be incurred. This implies that our multifractal VaR works well for the risk measurement of extreme credit events.
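    The paper's analytic MFVaR is not reproduced here; as a point of reference, the standard historical-simulation VaR that such models improve upon is essentially a quantile of observed returns. A minimal sketch, with heavy-tailed synthetic returns standing in for market data:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical daily returns with heavy tails (Student-t), standing
    # in for a volatile market; not the paper's data or its MFVaR model.
    returns = rng.standard_t(df=3, size=2000) * 0.01

    def historical_var(returns, alpha=0.05):
        """Historical-simulation VaR: the loss threshold exceeded with
        probability alpha under the empirical distribution."""
        return -np.quantile(returns, alpha)

    var_95 = historical_var(returns, alpha=0.05)
    ```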

  14. A simple and fast representation space for classifying complex time series

    NASA Astrophysics Data System (ADS)

    Zunino, Luciano; Olivares, Felipe; Bariviera, Aurelio F.; Rosso, Osvaldo A.

    2017-03-01

    In the context of time series analysis, considerable effort has been directed towards the implementation of efficient discriminating statistical quantifiers. Very recently, a simple and fast representation space has been introduced, namely the number of turning points versus the Abbe value. It is able to separate time series generated by stationary and non-stationary processes with long-range dependence. In this work we show that this bidimensional approach is useful for distinguishing complex time series: different sets of financial and physiological data are efficiently discriminated. Additionally, a multiscale generalization that takes into account the multiple time scales often involved in complex systems has also been proposed. This multiscale analysis is essential to reach a higher discriminative power between physiological time series in health and disease.
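    Both coordinates of this representation space are simple to compute: the turning-point count tallies local extrema, and the Abbe value is half the mean squared successive difference divided by the variance (near 1 for uncorrelated noise, near 0 for strongly persistent series). A sketch on two synthetic processes:

    ```python
    import numpy as np

    def turning_points(x):
        """Count local extrema: points where the series changes direction."""
        d = np.diff(x)
        return int(np.sum(d[:-1] * d[1:] < 0))

    def abbe_value(x):
        """Abbe value: half the mean squared successive difference
        divided by the variance; close to 1 for uncorrelated noise."""
        return 0.5 * np.mean(np.diff(x) ** 2) / np.var(x)

    rng = np.random.default_rng(2)
    noise = rng.standard_normal(10_000)   # uncorrelated process
    walk = np.cumsum(noise)               # strongly correlated process

    # In the (turning points, Abbe value) plane, white noise sits near
    # (2N/3, 1) while a random walk sits near (N/2, ~0), which is the
    # separation the abstract describes.
    ```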

  15. 31 CFR Appendix A to Part 359 - Redemption Value Calculations

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... book-entry Series I savings bonds redemption value calculation? Assume a New Treasury Direct par investment amount in a book-entry Series I savings bonds of $34.59, with an issue date of May, 2001, and a..., 2001 and redeemed December, 2001 = $101.96. Calculation: [(Book-entry par investment) ÷ (100)] × CRV...

  16. The Black Man on Film: Racial Stereotyping. Hayden Film Attitudes and Issues Series.

    ERIC Educational Resources Information Center

    Maynard, Richard A.

    Motion pictures have long been recognized as a mirror of society's values and attitudes, and they are unsurpassed in motivational and impression-making impact. The Hayden Film Attitudes and Issues Series is based on the teacher's source book, The Celluloid Curriculum: How to Use Movies in the Classroom. This series presents written sources…

  17. Synthesis and anti-parasitic activity of a novel quinolinone-chalcone series.

    PubMed

    Roussaki, Marina; Hall, Belinda; Lima, Sofia Costa; da Silva, Anabela Cordeiro; Wilkinson, Shane; Detsi, Anastasia

    2013-12-01

    A series of novel quinolinone-chalcone hybrids and analogues were designed, synthesized and their biological activity against the mammalian stages of Trypanosoma brucei and Leishmania infantum evaluated. Promising molecular scaffolds with significant microbicidal activity and low cytotoxicity were identified. Quinolinone-chalcone 10 exhibited anti-parasitic properties against both organisms, being the most potent anti-L. infantum agent of the entire series (IC50 value of 1.3±0.1 μM). Compounds 4 and 11 showed potency toward the intracellular, amastigote stage of L. infantum (IC50 values of 2.1±0.6 and 3.1±1.05 μM, respectively). Promising trypanocidal compounds include 5 and 10 (IC50 values of 2.6±0.1 and 3.3±0.1 μM, respectively) as well as 6 and 9 (both having IC50 values of <5 μM). Chemical modifications on the quinolinone-chalcone scaffold were performed on selected compounds in order to investigate the influence of these structural features on antiparasitic activity. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. Changes in ecosystem service values in Zhoushan Island using remote sensing time series data

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoping; Qin, Yanpei; Lv, Ying; Zhen, Guangwei; Gong, Fang; Li, Chaokui

    2017-10-01

    The largest inhabited island, Zhoushan Island, is the center of economy, culture, shipping, and fishing in the Zhoushan Archipelago New Area. Its coastal wetland and tidal flats offer significant ecological services including floodwater storage, wildlife habitat, and buffers against tidal surges. Yet, large-scale land reclamation and new land development may dramatically change ecosystem services. In this research, we assess changes in ecosystem service values in Zhoushan Island in 1990, 2000, and 2011. Three LANDSAT TM and/or ETM data sets were used to determine the spatial pattern of land use, and previously published value coefficients were used to calculate the ecosystem service values delivered by each land category. The results show that the total value of ecosystem services in Zhoushan Island declined by 11%, from 2920.07 billion Yuan to 2609.77 billion Yuan per year, between 1990 and 2011. This decrease is largely attributable to the 51% loss of tidal flats. The combined ecosystem service values of woodland, paddy land and tidal flats were over 90% of the total values. The result indicates that future land-use policy should prioritize the conservation of these ecosystems over uncontrolled reclamation and coastal industrial development, and that further coastal reclamation should be based on rigorous environmental impact analyses.
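    The benefit-transfer calculation behind such estimates is simple: multiply each land category's area by a published per-area value coefficient and sum. A minimal sketch; every number below is a placeholder, not a value from the study.

    ```python
    # Benefit-transfer estimate of total ecosystem service value (ESV):
    # sum over land categories of (area x published value coefficient).
    # All numbers are illustrative placeholders, not the paper's data.
    areas_km2 = {"woodland": 280.0, "paddy land": 90.0, "tidal flat": 40.0}
    coeff_per_km2 = {"woodland": 3.0, "paddy land": 1.2, "tidal flat": 5.5}

    esv_total = sum(areas_km2[k] * coeff_per_km2[k] for k in areas_km2)

    # Repeating this for each mapped date and differencing the totals
    # is how a decline like the 11% figure above is obtained.
    ```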

  19. Evaluation of scaling invariance embedded in short time series.

    PubMed

    Pan, Xue; Hou, Lei; Stephen, Mutua; Yang, Huijie; Zhu, Chenping

    2014-01-01

    Scaling invariance of time series has been making great contributions in diverse research fields. But how to evaluate the scaling exponent from a real-world series is still an open problem. The finite length of a time series may induce unacceptable fluctuation and bias in statistical quantities, and the consequent invalidation of currently used standard methods. In this paper a new concept called correlation-dependent balanced estimation of diffusion entropy is developed to evaluate scale invariance in very short time series with length ~10^2. Calculations with specified Hurst exponent values of 0.2, 0.3, ..., 0.9 show that, by using the standard central moving average de-trending procedure, this method can evaluate the scaling exponents for short time series with negligible bias (≤0.03) and a narrow confidence interval (standard deviation ≤0.05). Considering the stride series from ten volunteers along an approximate oval path of a specified length, we observe that though the averages and deviations of scaling exponents are close, their evolutionary behaviors display rich patterns. It has potential use in analyzing physiological signals, detecting early warning signals, and so on. We emphasize that our core contribution is that, by means of the proposed method, one can precisely estimate the Shannon entropy from limited records.
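    The paper's balanced diffusion-entropy estimator is not reproduced here, but the quantity it targets can be illustrated with a standard structure-function estimate of the scaling exponent, which works well on long series; the short-series regime of ~10^2 points is exactly where such baselines become unreliable.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    x = np.cumsum(rng.standard_normal(4096))   # random walk: H = 0.5 by construction

    # Structure-function baseline: the std of displacements over lag tau
    # grows as tau**H, so H is the slope in log-log coordinates.
    taus = np.array([1, 2, 4, 8, 16, 32])
    sds = np.array([np.std(x[tau:] - x[:-tau]) for tau in taus])
    H_hat, _ = np.polyfit(np.log(taus), np.log(sds), 1)
    ```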

  20. An algorithm of Saxena-Easo on fuzzy time series forecasting

    NASA Astrophysics Data System (ADS)

    Ramadhani, L. C.; Anggraeni, D.; Kamsyakawuni, A.; Hadi, A. F.

    2018-04-01

    This paper presents the Saxena-Easo fuzzy time series forecasting model, applied to predicting the Indonesian inflation rate over 1970-2016. The method was implemented in MATLAB. Unlike conventional forecasting methods, the Saxena-Easo fuzzy time series algorithm does not require stationarity; it can deal with time series values that are linguistic, and it has the advantage of reducing and simplifying the calculation process. It generally focuses on percentage change as the universe of discourse, interval partitioning, and defuzzification. The results indicate that the forecast data are close to the actual data, with a Root Mean Square Error (RMSE) of 1.5289.

  1. Long-term changes (1980-2003) in total ozone time series over Northern Hemisphere midlatitudes

    NASA Astrophysics Data System (ADS)

    Białek, Małgorzata

    2006-03-01

    Long-term changes in total ozone time series for the Arosa, Belsk, Boulder and Sapporo stations are examined. For each station we analyze time series of the following statistical characteristics of the distribution of daily ozone data: seasonal mean, standard deviation, maximum and minimum of total daily ozone values for all seasons. An iterative statistical model is proposed to estimate trends and long-term changes in the statistical distribution of the daily total ozone data. The trends are calculated for the period 1980-2003. We observe a weakening of the negative trends in the seasonal means as compared to those calculated by WMO for 1980-2000. We discuss the possibility of a change in the distribution shape of daily ozone data using the Kolmogorov-Smirnov test and by comparing trend values in the seasonal mean, standard deviation, maximum and minimum time series for the selected stations and seasons. A shift of the distribution toward lower values without a change in the distribution shape is suggested, with the following exceptions: a spreading of the distribution toward lower values for Belsk during winter, and no decisive result for Sapporo and Boulder in summer.

  2. Statistical inference for classification of RRIM clone series using near IR reflectance properties

    NASA Astrophysics Data System (ADS)

    Ismail, Faridatul Aima; Madzhi, Nina Korlina; Hashim, Hadzli; Abdullah, Noor Ezan; Khairuzzaman, Noor Aishah; Azmi, Azrie Faris Mohd; Sampian, Ahmad Faiz Mohd; Harun, Muhammad Hafiz

    2015-08-01

    RRIM clones are a rubber breeding series produced by the RRIM (Rubber Research Institute of Malaysia) through its rubber breeding program to improve latex yield and produce clones attractive to farmers. The objective of this work is to analyse measurements from an optical sensing device on latex from selected clone series. The device transmits NIR light, and its reflectance is converted to a voltage. The reflectance index values obtained via voltage were analyzed using statistical techniques in order to find out the discrimination among the clones. From the statistical results using error plots and a one-way ANOVA test, there is overwhelming evidence of discrimination among the RRIM 2002, RRIM 2007 and RRIM 3001 clone series, with p value = 0.000. RRIM 2008 cannot be discriminated from RRIM 2014; however, both of these groups are distinct from the other clones.
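    The discrimination test described above can be sketched with SciPy's one-way ANOVA on synthetic reflectance-voltage readings; the group means, spreads, and sample sizes below are invented for illustration, not RRIM measurements.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(10)

    # Hypothetical reflectance-voltage readings for three clone groups
    # with clearly separated means (placeholder values, not real data).
    rrim2002 = rng.normal(2.0, 0.05, 30)
    rrim2007 = rng.normal(2.3, 0.05, 30)
    rrim3001 = rng.normal(2.6, 0.05, 30)

    # One-way ANOVA: a tiny p-value indicates at least one group mean
    # differs, i.e., the clones are discriminable from the voltages.
    f_stat, p_value = stats.f_oneway(rrim2002, rrim2007, rrim3001)
    ```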

  3. Visibility graph analysis on quarterly macroeconomic series of China based on complex network theory

    NASA Astrophysics Data System (ADS)

    Wang, Na; Li, Dong; Wang, Qiwen

    2012-12-01

    The visibility graph approach and complex network theory provide a new insight into time series analysis. The inheritance of the visibility graph from the original time series is further explored in this paper. We found that degree distributions of visibility graphs extracted from Pseudo Brownian Motion series obtained by the Frequency Domain algorithm exhibit exponential behaviors, in which the exponential exponent is a binomial function of the Hurst index inherited in the time series. Our simulations show that the quantitative relations between the Hurst indexes and the exponents of the degree distribution function are different for different series and that the visibility graph inherits some important features of the original time series. Further, we convert some quarterly macroeconomic series, including the growth rates of value-added of three industry series and the growth rates of Gross Domestic Product series of China, to graphs by the visibility algorithm and explore the topological properties of the graphs derived from the four macroeconomic series, namely, the degree distribution and correlations, the clustering coefficient, the average path length, and community structure. Based on complex network analysis we find that the degree distributions of the networks derived from the growth rates of value-added of the three industry series are almost exponential and the degree distributions of the networks derived from the growth rates of the GDP series are scale free. We also discuss the assortativity and disassortativity of the four networks as they relate to the evolutionary process of the original macroeconomic series. All the constructed networks have “small-world” features. The community structures of the networks suggest dynamic changes in the original macroeconomic series. We also detected the relationship among government policy changes, community structures of associated networks and macroeconomic dynamics. We find great influences of government
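    The natural visibility criterion that converts a series into a graph can be sketched directly: two samples are connected when the straight line between them clears every intermediate sample. A minimal O(n²) implementation (the degree distribution of the resulting graph is then analyzed as in the abstract):

    ```python
    def visibility_edges(y):
        """Natural visibility graph of a series: one node per time
        point; an edge (a, b) exists when every intermediate sample
        lies strictly below the line of sight between (a, y[a]) and
        (b, y[b])."""
        n = len(y)
        edges = set()
        for a in range(n):
            for b in range(a + 1, n):
                if all(y[c] < y[b] + (y[a] - y[b]) * (b - c) / (b - a)
                       for c in range(a + 1, b)):
                    edges.add((a, b))
        return edges

    # On a monotone series each point sees only its neighbours, while a
    # dip opens a line of sight across it:
    assert visibility_edges([1, 2, 3]) == {(0, 1), (1, 2)}
    assert visibility_edges([3, 1, 2]) == {(0, 1), (1, 2), (0, 2)}
    ```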

  4. Ionic Liquids as Solvent, Catalyst Support Chemical Agent Decontamination and Detoxification

    DTIC Science & Technology

    2004-12-15

    agents. 8 3.2 Reactions in surfactant systems Currie studied the reaction between 3-bromo-1-propanol and phenol and a series of phenols carrying...Liquids; Knoche, W., Schomacker, R., Eds.; Springer-Verlag: New York, 1998, pp 1-10. (52) Gonzaga , F.; Perez, E.; Rico-Lattes, I.; Lattes, A. New Journal

  5. Water quality management using statistical analysis and time-series prediction model

    NASA Astrophysics Data System (ADS)

    Parmar, Kulwinder Singh; Bhardwaj, Rashmi

    2014-12-01

    This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness, and coefficient of variation at the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, normalized Bayesian information criterion, Ljung-Box analysis, predicted values and confidence limits. Using an autoregressive integrated moving average model, future water quality parameter values were estimated. It is observed that the predictive model is useful at the 95 % confidence limit, and that the curve is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen, and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. The predicted series is close to the original series, providing a close fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agricultural or industrial use.
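    A minimal stand-in for the ARIMA machinery used above is fitting the simplest autoregressive member of that family by least squares and issuing a one-step forecast with 95 % limits. The series below is simulated, not Yamuna data, and the AR(1) fit is a sketch of the idea, not the paper's full ARIMA model.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical monthly water-quality series: an AR(1) process
    # stands in for a parameter like dissolved oxygen (an assumption).
    n, phi = 240, 0.7
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = phi * y[t - 1] + rng.standard_normal()

    # Least-squares estimate of the AR(1) coefficient.
    phi_hat = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)

    # One-step forecast with a 95 % confidence limit (+-1.96 sigma of
    # the residuals), mirroring the confidence limits discussed above.
    resid = y[1:] - phi_hat * y[:-1]
    sigma = resid.std()
    forecast = phi_hat * y[-1]
    lo, hi = forecast - 1.96 * sigma, forecast + 1.96 * sigma
    ```

    A production analysis would use a full ARIMA implementation (e.g., a statistics package) with order selection by information criteria, as in the paper.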

  6. Data imputation analysis for Cosmic Rays time series

    NASA Astrophysics Data System (ADS)

    Fernandes, R. C.; Lucio, P. S.; Fernandez, J. H.

    2017-05-01

    The occurrence of missing data in Galactic Cosmic Ray (GCR) time series is inevitable, since loss of data is due to mechanical or human failure, technical problems, and the different periods of operation of GCR stations. The aim of this study was to perform multiple dataset imputation in order to reconstruct the observational dataset. The study used the monthly time series of GCR Climax (CLMX) and Roma (ROME) from 1960 to 2004 to simulate scenarios of 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80% and 90% missing data relative to the observed ROME series, with 50 replicates. The CLMX station was then used as a proxy for allocating these scenarios. Three different methods for monthly dataset imputation were selected: Amelia II, which runs the bootstrap Expectation Maximization algorithm; MICE, which runs an algorithm based on Multivariate Imputation by Chained Equations; and MTSDI, an Expectation Maximization algorithm-based method for imputation of missing values in multivariate normal time series. The synthetic time series were also compared with the observed ROME series using several skill measures, such as RMSE, NRMSE, Agreement Index, R, R2, F-test and t-test. The results showed that for CLMX and ROME, the R2 and R statistics were equal to 0.98 and 0.96, respectively. It was observed that increases in the number of gaps generate a loss of quality in the time series. Data imputation was most efficient with the MTSDI method, with negligible errors and the best skill coefficients. The results suggest a practical limit of about 60% missing data for imputation of monthly averages. It is noteworthy that the CLMX, ROME and KIEL stations present no missing data in the target period. This methodology allowed reconstructing 43 time series.
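    The evaluation loop above (simulate gaps, impute, score against the truth) can be made concrete with a deliberately simple baseline: linear interpolation instead of Amelia II / MICE / MTSDI, on a synthetic series standing in for the CLMX/ROME data. Everything below is an illustrative assumption.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Hypothetical monthly cosmic-ray counts: a smooth cycle plus noise
    # (placeholder for the real station series, which are not shown).
    t = np.arange(540)                                   # 45 years, monthly
    series = 100 + 10 * np.sin(2 * np.pi * t / 132) + rng.normal(0, 1, t.size)

    # Simulate 30% missing data, as in one of the scenarios above.
    mask = rng.random(t.size) < 0.30
    observed = series.copy()
    observed[mask] = np.nan

    # Baseline imputation: linear interpolation across the gaps.
    good = ~np.isnan(observed)
    imputed = observed.copy()
    imputed[~good] = np.interp(t[~good], t[good], observed[good])

    # Skill measure: RMSE between imputed and true values at the gaps.
    rmse = np.sqrt(np.mean((imputed[mask] - series[mask]) ** 2))
    ```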

  7. A case series study on complications after breast augmentation with Macrolane™.

    PubMed

    Becchere, M P; Farace, F; Dessena, L; Marongiu, Francesco; Bulla, A; Simbula, L; Meloni, G B; Rubino, C

    2013-04-01

    The use of Macrolane™ seems to have several advantages compared to the other standard methods for breast augmentation: it is faster, less invasive, and requires only local anesthesia. Nevertheless, various complications associated with the use of Macrolane™ have been described, e.g., encapsulated lumps in breast tissue, infection, and parenchymal fibrosis. We report the results of our case series study on the clinical and imaging evaluations of patients who came to our attention after breast augmentation with Macrolane™ injection and evaluate the effect of this treatment on breast cancer screening procedures. Between September 2009 and July 2010, seven patients, treated elsewhere with intramammary Macrolane™ injection for cosmetic purposes, presented to our institution complaining of breast pain. In all patients, Macrolane™ had been injected under local anesthesia in the retromammary space through a surgical cannula. On mammography, nodules appeared as gross lobulated radiopacities with polycyclic contours. On breast ultrasound, the nodules showed hypo-anechogenic, cyst-like features. In all cases, image analysis by the radiologist was hindered by the presence of the implanted substance, which did not allow the complete inspection of the whole breast tissue. From our experience, although safe in other areas, injection of Macrolane™ into breast tissue cannot be recommended at this time. Our study, along with other reports, supports the need to start a clinical trial on the use of injectable fillers in the breast to validate their safety and effectiveness. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.

  8. Evolution of the Sunspot Number and Solar Wind B Time Series

    NASA Astrophysics Data System (ADS)

    Cliver, Edward W.; Herbst, Konstantin

    2018-03-01

    The past two decades have witnessed significant changes in our knowledge of long-term solar and solar wind activity. The sunspot number time series (1700-present) developed by Rudolf Wolf during the second half of the 19th century was revised and extended by the group sunspot number series (1610-1995) of Hoyt and Schatten during the 1990s. The group sunspot number is significantly lower than the Wolf series before ~1885. An effort from 2011-2015 to understand and remove differences between these two series via a series of workshops had the unintended consequence of prompting several alternative constructions of the sunspot number. Thus it has been necessary to expand and extend the sunspot number reconciliation process. On the solar wind side, after a decade of controversy, an ISSI International Team used geomagnetic and sunspot data to obtain a high-confidence time series of the solar wind magnetic field strength (B) from 1750-present that can be compared with two independent long-term (> ~600 year) series of annual B-values based on cosmogenic nuclides. In this paper, we trace the twists and turns leading to our current understanding of long-term solar and solar wind activity.

  9. Rethinking Value in the Bio-economy

    PubMed Central

    2016-01-01

    Current debates in science and technology studies emphasize that the bio-economy—or the articulation of capitalism and biotechnology—is built on notions of commodity production, commodification, and materiality, emphasizing that it is possible to derive value from body parts, molecular and cellular tissues, biological processes, and so on. What is missing from these perspectives, however, is consideration of the political-economic actors, knowledges, and practices involved in the creation and management of value. As part of a rethinking of value in the bio-economy, this article analyzes three key political-economic processes: financialization, capitalization, and assetization. In doing so, it argues that value is managed as part of a series of valuation practices; it is not inherent in biological materialities. PMID:28458406

  10. Computations of Eisenstein series on Fuchsian groups

    NASA Astrophysics Data System (ADS)

    Avelin, Helen

    2008-09-01

    We present numerical investigations of the value distribution and the distribution of Fourier coefficients of the Eisenstein series E(z;s) on arithmetic and non-arithmetic Fuchsian groups. Our numerics indicate a Gaussian limit value distribution for a real-valued rotation of E(z;s) as Re(s) = 1/2, Im(s) → ∞, and also, on non-arithmetic groups, a complex Gaussian limit distribution for E(z;s) when Re(s) > 1/2 near 1/2 and Im(s) → ∞, at least if we allow Re(s) → 1/2 at some rate. Furthermore, on non-arithmetic groups and for fixed s with Re(s) ≥ 1/2 near 1/2, our numerics indicate a Gaussian limit distribution for the appropriately normalized Fourier coefficients.

  11. Multiresolution analysis of Bursa Malaysia KLCI time series

    NASA Astrophysics Data System (ADS)

    Ismail, Mohd Tahir; Dghais, Amel Abdoullah Ahmed

    2017-05-01

    In general, a time series is simply a sequence of numbers collected at regular intervals over a period. Financial time series data processing is concerned with the theory and practice of processing asset prices over time, such as currency, commodity, and stock market data. The primary aim of this study is to understand the fundamental characteristics of selected financial time series using both time domain and frequency domain analysis. Prediction can then be executed for the desired system for in-sample forecasting. In this study, multiresolution analysis with the aid of the discrete wavelet transform (DWT) and the maximal overlap discrete wavelet transform (MODWT) is used to pinpoint special characteristics of the Bursa Malaysia KLCI (Kuala Lumpur Composite Index) daily closing prices and return values. In addition, further case study discussions include the modeling of the Bursa Malaysia KLCI using linear ARIMA with wavelets to address how the multiresolution approach improves fitting and forecasting results.
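    The multiresolution split at the heart of the DWT can be illustrated with a one-level Haar transform, which separates a series into a smooth approximation and a detail band while preserving energy. Synthetic prices stand in for the KLCI data here, and a production analysis would use a full wavelet library with MODWT, as in the paper.

    ```python
    import numpy as np

    def haar_dwt(x):
        """One level of the Haar discrete wavelet transform: pairwise
        scaled sums give the smooth approximation, pairwise scaled
        differences give the detail band."""
        x = np.asarray(x, dtype=float)
        approx = (x[0::2] + x[1::2]) / np.sqrt(2)
        detail = (x[0::2] - x[1::2]) / np.sqrt(2)
        return approx, detail

    # Hypothetical stand-in for daily closing prices (random walk).
    rng = np.random.default_rng(5)
    prices = 1600 + np.cumsum(rng.normal(0, 5, 256))

    approx, detail = haar_dwt(prices)
    # Energy is preserved across the two bands (Parseval), which is what
    # lets multiresolution analysis attribute variance to time scales.
    ```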

  12. Off-line tracking of series parameters in distribution systems using AMI data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Tess L.; Sun, Yannan; Schneider, Kevin

    2016-05-01

    Electric distribution systems have historically lacked measurement points, and equipment is often operated to its failure point, resulting in customer outages. The widespread deployment of sensors at the distribution level is enabling observability. This paper presents an off-line parameter value tracking procedure that takes advantage of the increasing number of measurement devices being deployed at the distribution level to estimate changes in series impedance parameter values over time. The tracking of parameter values enables non-diurnal and non-seasonal change to be flagged for investigation. The presented method uses an unbalanced Distribution System State Estimation (DSSE) and a measurement residual-based parameter estimation procedure. Measurement residuals from multiple measurement snapshots are combined in order to increase the effective local redundancy and improve the robustness of the calculations in the presence of measurement noise. Data from devices on the primary distribution system and from customer meters, via an AMI system, form the input data set. Results of simulations on the IEEE 13-Node Test Feeder are presented to illustrate the proposed approach applied to changes in series impedance parameters. A 5% change in series resistance elements can be detected in the presence of 2% measurement error when combining less than 1 day of measurement snapshots into a single estimate.
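    The key idea (combining many noisy snapshots so that a 5% parameter change stands out above 2% measurement error) can be sketched with a toy single-branch version: estimate a line resistance from voltage-drop/current snapshots by least squares, before and after a change. This is a drastic simplification of the paper's DSSE-based procedure, with invented numbers throughout.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    # Toy stand-in for series-impedance tracking: a single branch with
    # a 5% resistance change and 2% multiplicative measurement noise.
    R_before, R_after = 0.50, 0.525

    def snapshots(R, n):
        """Simulate n measurement snapshots of (current, voltage drop)."""
        i = rng.uniform(50, 200, n)                  # branch currents (A)
        v = R * i * (1 + rng.normal(0, 0.02, n))     # 2% noise on V
        return i, v

    i0, v0 = snapshots(R_before, 500)
    i1, v1 = snapshots(R_after, 500)

    # Least-squares resistance estimates: combining 500 snapshots drives
    # the estimate's noise far below the 5% change being detected.
    R0 = np.sum(v0 * i0) / np.sum(i0 * i0)
    R1 = np.sum(v1 * i1) / np.sum(i1 * i1)
    change = (R1 - R0) / R0
    ```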

  13. Design and Implementation of a Professional Development Course Series.

    PubMed

    Welch, Beth; Spooner, Joshua J; Tanzer, Kim; Dintzner, Matthew R

    2017-12-01

    Objective. To design and implement a longitudinal course series focused on professional development and professional identity formation in pharmacy students at Western New England University. Methods. A four-year, theme-based course series was designed to sequentially and longitudinally impart the values, attributes, and characteristics of a professional pharmacist. Requirements of the course include: goal planning and reflective assignments, submission of "Best Works," attendance at professional meetings, completion of service hours, annual completion of a Pharmacy Professionalism Instrument, attendance at Dean's Seminar, participation in roundtable discussions, and maintenance of an electronic portfolio. Though the Professional Development course series carries no credit, these courses are progression requirements and students are assessed on a pass/fail basis. Results. Course pass rates in the 2015-2016 academic year for all four classes were 99% to 100%, suggesting the majority of students take professional development seriously and are achieving the intended outcomes of the courses. Conclusion. A professional development course series was designed and implemented in the new Doctor of Pharmacy program at Western New England University to enhance the professional identity formation of students.

  14. RankExplorer: Visualization of Ranking Changes in Large Time Series Data.

    PubMed

    Shi, Conglei; Cui, Weiwei; Liu, Shixia; Xu, Panpan; Chen, Wei; Qu, Huamin

    2012-12-01

    For many applications involving time series data, people are often interested in the changes of item values over time as well as their ranking changes. For example, people search many words via search engines like Google and Bing every day. Analysts are interested in both the absolute searching number for each word as well as their relative rankings. Both sets of statistics may change over time. For very large time series data with thousands of items, how to visually present ranking changes is an interesting challenge. In this paper, we propose RankExplorer, a novel visualization method based on ThemeRiver to reveal the ranking changes. Our method consists of four major components: 1) a segmentation method which partitions a large set of time series curves into a manageable number of ranking categories; 2) an extended ThemeRiver view with embedded color bars and changing glyphs to show the evolution of aggregation values related to each ranking category over time as well as the content changes in each ranking category; 3) a trend curve to show the degree of ranking changes over time; 4) rich user interactions to support interactive exploration of ranking changes. We have applied our method to some real time series data and the case studies demonstrate that our method can reveal the underlying patterns related to ranking changes which might otherwise be obscured in traditional visualizations.

  15. Algorithm for Compressing Time-Series Data

    NASA Technical Reports Server (NTRS)

    Hawkins, S. Edward, III; Darlington, Edward Hugo

    2012-01-01

    An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
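    The core of the compression step can be sketched with NumPy's Chebyshev utilities: fit a short coefficient series over one fitting interval and keep only the coefficients. The signal, noise level, and degree below are illustrative choices, not the flight algorithm's parameters.

    ```python
    import numpy as np
    from numpy.polynomial import chebyshev as C

    rng = np.random.default_rng(6)

    # A smooth signal plus mild noise over one "fitting interval",
    # mapped to [-1, 1] as Chebyshev approximation requires.
    t = np.linspace(-1, 1, 512)
    signal = np.sin(3 * t) + 0.3 * t ** 2 + rng.normal(0, 0.01, t.size)

    # Compress: keep 12 Chebyshev coefficients instead of 512 samples
    # (a >40x reduction; the degree is an assumption).
    coeffs = C.chebfit(t, signal, deg=11)
    reconstructed = C.chebval(t, coeffs)

    # The residual stays small and roughly uniform across the interval,
    # reflecting the near-equal-error property described above.
    max_err = np.max(np.abs(reconstructed - signal))
    ```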

  16. The Value of Children: A Cross-National Study, Volume Two. Philippines.

    ERIC Educational Resources Information Center

    Bulatao, Rodolfo A.

    This volume, second in a series of seven reports of the Value of Children Project, discusses results of the survey in the Philippines. The study identifies major values and disvalues that Filipino parents attach to children. It also examines characteristics of parents that are related to values and disvalues. The document is presented in seven…

  17. Surveying Borders, Boundaries, and Contested Spaces in Curriculum and Pedagogy. Curriculum and Pedagogy Series

    ERIC Educational Resources Information Center

    Reilly, Cole, Ed.; Russell, Victoria, Ed.; Chehayl, Laurel K., Ed.; McDermott, Morna M., Ed.

    2011-01-01

    The Curriculum and Pedagogy book series is an enactment of the mission and values espoused by the Curriculum and Pedagogy Group, an international educational organization serving those who share a common faith in democracy and a commitment to public moral leadership in schools and society. Accordingly, the mission of this series is to advance…

  18. Measuring information interactions on the ordinal pattern of stock time series

    NASA Astrophysics Data System (ADS)

    Zhao, Xiaojun; Shang, Pengjian; Wang, Jing

    2013-02-01

    The interactions among time series as individual components of complex systems can be quantified by measuring to what extent they exchange information with each other. In many applications, one focuses not on the original series but on its ordinal pattern. In such cases, trivial noise is more likely to be filtered out and the abrupt influence of extreme values can be weakened. Cross-sample entropy and inner composition alignment have been introduced as prominent methods to estimate the information interactions of complex systems. In this paper, we modify both methods to detect the interactions among the ordinal patterns of stock return and volatility series, and we try to uncover the information exchanges across sectors in Chinese stock markets.
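    The ordinal-pattern representation is easy to compute: each length-m window is replaced by the permutation that sorts it, so only the rank order (not the magnitudes) survives, which is the robustness to extreme values noted above. A sketch on two hypothetical, related series:

    ```python
    import numpy as np
    from itertools import permutations

    def ordinal_pattern_series(x, m=3):
        """Map each length-m window to the permutation sorting it,
        encoded as an integer symbol in 0..m!-1."""
        codes = {p: i for i, p in enumerate(permutations(range(m)))}
        return np.array([codes[tuple(np.argsort(x[i:i + m]))]
                         for i in range(len(x) - m + 1)])

    # Two hypothetical "return" series; b is a noisy copy of a, so
    # their ordinal patterns frequently agree.
    rng = np.random.default_rng(7)
    a = rng.standard_normal(1000)
    b = a + 0.1 * rng.standard_normal(1000)

    pa, pb = ordinal_pattern_series(a), ordinal_pattern_series(b)
    agreement = np.mean(pa == pb)   # crude coupling proxy, not the
                                    # paper's cross-sample entropy
    ```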

  19. Estimation of value at risk and conditional value at risk using normal mixture distributions model

    NASA Astrophysics Data System (ADS)

    Kamaruzzaman, Zetty Ain; Isa, Zaidi

    2013-04-01

    The normal mixture distributions model has been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of return for the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010 using a two-component univariate normal mixture distributions model. First, we present the application of the model in empirical finance, where we fit it to our real data. Second, we present its application in risk analysis, where we use it to evaluate VaR and CVaR, with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture distributions model fits the data well and performs better in estimating VaR and CVaR, as it can capture the stylized facts of non-normality and leptokurtosis in the returns distribution.
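
    A minimal sketch of how VaR and CVaR follow from fitted mixture parameters. The fitted FBMKLCI parameters are not reproduced here; the function, the bisection bounds, and the example values are illustrative assumptions, and only the standard library is used.

```python
import math

def norm_cdf(x): return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
def norm_pdf(x): return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def mixture_var_cvar(weights, means, sigmas, alpha=0.05):
    """VaR and CVaR of a return whose distribution is a normal mixture.

    VaR is the alpha-quantile of the mixture (found by bisection on the
    mixture CDF); CVaR is the expected return conditional on falling
    below the VaR, via the closed-form normal tail expectation
    E[X * 1{X <= q}] = mu*Phi(z) - sigma*phi(z), z = (q - mu)/sigma.
    """
    cdf = lambda q: sum(w * norm_cdf((q - m) / s)
                        for w, m, s in zip(weights, means, sigmas))
    lo, hi = -10.0, 10.0
    for _ in range(200):                      # bisection for the quantile
        mid = 0.5 * (lo + hi)
        if cdf(mid) < alpha:
            lo = mid
        else:
            hi = mid
    var = 0.5 * (lo + hi)
    tail = sum(w * (m * norm_cdf((var - m) / s) - s * norm_pdf((var - m) / s))
               for w, m, s in zip(weights, means, sigmas))
    return var, tail / alpha                  # CVaR = E[X | X <= VaR]
```

    For a single standard normal component this recovers the textbook values (VaR ≈ -1.645, CVaR ≈ -2.063 at alpha = 0.05); a heavy-tailed second component pushes both further into the loss region.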

  20. Family Values through Children's Literature: Grades K-3. School Library Media Series, No. 20.

    ERIC Educational Resources Information Center

    Roberts, Patricia L.

    This book gives teachers, librarians, parents, and others who work with children an annotated bibliography of children's books that contain characters who display positive family-oriented values in their relationships to others. Each chapter begins with a definition of a specific value, followed by a summary, sample activities and lessons for each…

  1. Campus Ecology, Part 2 of a Series: Pond Water

    ERIC Educational Resources Information Center

    Bryan, R. C.

    1974-01-01

    Presents a series of activities which focus on the study of the physical characteristics of water, including temperatures, opacity, pH-values, oxygen concentrations, reagents, and free CO2 concentrations. Indicates that ponds can provide the students with opportunities to learn chemistry, geology, biology, botany, and the effects of weather. (CC)

  2. A taxonomic index, with names of descriptive authorities of termite genera and species: An accompaniment to Biology of Termites: A Modern Synthesis (Bignell DE, Roisin Y, Lo N, Editors. 2011. Springer, Dordrecht. 576 pp.)

    PubMed Central

    Bignell, D. E.; Jones, D. T.

    2014-01-01

    Abstract Biology of Termites: A Modern Synthesis (Bignell DE, Roisin Y, Lo N, (Editors), Springer, Dordrecht, 576pp, ISBN 978-90-481-3976-7, e-ISBN 978-90-481-3977-4, DOI 10.1007/978-90-481-3977-4) was published in 2011. With the agreement of the publishers, we give a taxonomic index of the book comprising 494 termite entries, 103 entries of other multicellular animal species mentioned as associates or predators of termites, with 9 fungal, 60 protist, and 64 prokaryote identities, which are listed as termite symbionts ( sensu stricto ). In addition, we add descriptive authorities for living (and some fossil) termite genera and species. Higher taxonomic groupings for termites are indicated by 25 code numbers. Microorganisms (prokaryotes, protists, and fungi) are listed separately, using broad modern taxonomic affiliations from the contemporary literature of bacteriology, protozoology, and mycology. PMID:25368037

  3. Power estimation using simulations for air pollution time-series studies

    PubMed Central

    2012-01-01

    Background Estimation of power to assess associations of interest can be challenging for time-series studies of the acute health effects of air pollution because there are two dimensions of sample size (time-series length and daily outcome counts), and because these studies often use generalized linear models to control for complex patterns of covariation between pollutants and time trends, meteorology and possibly other pollutants. In general, statistical software packages for power estimation rely on simplifying assumptions that may not adequately capture this complexity. Here we examine the impact of various factors affecting power using simulations, with comparison of power estimates obtained from simulations with those obtained using statistical software. Methods Power was estimated for various analyses within a time-series study of air pollution and emergency department visits using simulations for specified scenarios. Mean daily emergency department visit counts, model parameter value estimates and daily values for air pollution and meteorological variables from actual data (8/1/98 to 7/31/99 in Atlanta) were used to generate simulated daily outcome counts with specified temporal associations with air pollutants and randomly generated error based on a Poisson distribution. Power was estimated by conducting analyses of the association between simulated daily outcome counts and air pollution in 2000 data sets for each scenario. Power estimates from simulations and statistical software (G*Power and PASS) were compared. Results In the simulation results, increasing time-series length and average daily outcome counts both increased power to a similar extent. Our results also illustrate the low power that can result from using outcomes with low daily counts or short time series, and the reduction in power that can accompany use of multipollutant models. Power estimates obtained using standard statistical software were very similar to those from the simulations

  4. Power estimation using simulations for air pollution time-series studies.

    PubMed

    Winquist, Andrea; Klein, Mitchel; Tolbert, Paige; Sarnat, Stefanie Ebelt

    2012-09-20

    Estimation of power to assess associations of interest can be challenging for time-series studies of the acute health effects of air pollution because there are two dimensions of sample size (time-series length and daily outcome counts), and because these studies often use generalized linear models to control for complex patterns of covariation between pollutants and time trends, meteorology and possibly other pollutants. In general, statistical software packages for power estimation rely on simplifying assumptions that may not adequately capture this complexity. Here we examine the impact of various factors affecting power using simulations, with comparison of power estimates obtained from simulations with those obtained using statistical software. Power was estimated for various analyses within a time-series study of air pollution and emergency department visits using simulations for specified scenarios. Mean daily emergency department visit counts, model parameter value estimates and daily values for air pollution and meteorological variables from actual data (8/1/98 to 7/31/99 in Atlanta) were used to generate simulated daily outcome counts with specified temporal associations with air pollutants and randomly generated error based on a Poisson distribution. Power was estimated by conducting analyses of the association between simulated daily outcome counts and air pollution in 2000 data sets for each scenario. Power estimates from simulations and statistical software (G*Power and PASS) were compared. In the simulation results, increasing time-series length and average daily outcome counts both increased power to a similar extent. Our results also illustrate the low power that can result from using outcomes with low daily counts or short time series, and the reduction in power that can accompany use of multipollutant models. Power estimates obtained using standard statistical software were very similar to those from the simulations when properly implemented
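
    The simulation approach described above can be sketched roughly as follows. This is a toy version under stated assumptions: a score test of the pollutant coefficient stands in for the paper's generalized linear models, covariates are reduced to a single standardized pollutant, and all names are illustrative.

```python
import math
import random

def poisson(rng, lam):
    """Draw a Poisson variate by Knuth's product-of-uniforms method."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulated_power(n_days, base_rate, beta, n_sims=500, z_crit=1.96, seed=1):
    """Monte-Carlo power for detecting a pollutant effect on daily counts.

    Each simulation draws a standardized pollutant series x, generates
    Poisson daily counts with rate base_rate * exp(beta * x), and applies
    a score test of beta = 0; power is the fraction of rejections.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        x = [rng.gauss(0, 1) for _ in range(n_days)]
        y = [poisson(rng, base_rate * math.exp(beta * xi)) for xi in x]
        ybar = sum(y) / n_days
        xbar = sum(x) / n_days
        score = sum(xi * (yi - ybar) for xi, yi in zip(x, y))
        var = ybar * sum((xi - xbar) ** 2 for xi in x)
        if var > 0 and abs(score) / math.sqrt(var) > z_crit:
            hits += 1
    return hits / n_sims
```

    Increasing either `n_days` (time-series length) or `base_rate` (mean daily counts) raises the power, mirroring the paper's finding that the two sample-size dimensions act to a similar extent.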

  5. 78 FR 37623 - Transparent Value Trust, et al.; Notice of Application

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-21

    ...] Transparent Value Trust, et al.; Notice of Application June 14, 2013. AGENCY: Securities and Exchange... (e) certain registered management investment companies and unit investment trusts outside of the same group of investment companies as the series to acquire Shares. Applicants: Transparent Value Trust...

  6. Graphic analysis and multifractal on percolation-based return interval series

    NASA Astrophysics Data System (ADS)

    Pei, A. Q.; Wang, J.

    2015-05-01

    A financial time series model is developed and investigated using the oriented percolation system (one of the statistical physics systems). The nonlinear and statistical behaviors of the return interval time series are studied for the proposed model and the real stock market by applying the visibility graph (VG) and multifractal detrended fluctuation analysis (MF-DFA). We investigate the fluctuation behaviors of return intervals of the model for different parameter settings, and also comparatively study these fluctuation patterns with those of the real financial data for different threshold values. The empirical research of this work exhibits the multifractal features of the corresponding financial time series. Further, the VGs derived from both the simulated data and the real data show small-world, hierarchical, and highly clustered behavior, with power-law tails in the degree distributions.

  7. Tropospheric ozone in the Nineteenth Century: The Moncalieri series

    NASA Astrophysics Data System (ADS)

    Anfossi, D.; Sandroni, S.; Viarengo, S.

    1991-09-01

    A 26-year (1868-1893) data series of daily ozone readings performed at Moncalieri, northern Italy, by the Schönbein test paper technique has been analyzed. The availability of a series of simultaneous readings by the Schönbein and a quantitative technique (Levy, 1877) and the conversion chart for humidity by Linvill et al. (1980) allowed us to develop a procedure to convert the Moncalieri data into parts-per-billion-by-volume values. The results suggest that, in comparison with one century ago, ozone levels in Europe have more than doubled, not only at the surface but also in the free troposphere.

  8. Values, Morality, and Religion in the School. Education Guidelines Series Monograph #1.

    ERIC Educational Resources Information Center

    Flinders, Neil J.

    Many people are anxious about values, morality, and religion in the schools. Business, political, religious, and educational leaders are concerned; confusion is widespread. This document aims at assisting interested parties to understand better the source of some of the difficulties faced by parents, school board members, teachers, legislators,…

  9. Effect of temperature on series resistance of organic/inorganic semiconductor junction diode

    NASA Astrophysics Data System (ADS)

    Tripathi, Udbhav; Kaur, Ramneek; Bharti, Shivani

    2016-05-01

    The paper reports the fabrication and characterization of a CuPc/n-Si organic/inorganic semiconductor diode. Copper phthalocyanine, a p-type organic semiconductor, has been deposited on a Si substrate by the thermal evaporation technique. A detailed analysis of the forward and reverse bias current-voltage characteristics is provided. The temperature dependence of the Schottky diode parameters has been studied and discussed over the range 303 K to 353 K. The series resistance of the diode has been determined using Cheung's function method; it decreases as temperature increases. The large value of series resistance at low temperature has been explained on the basis of barrier inhomogeneities in the diode.

  10. THE NUTRITIONAL VALUE OF OCD RATIONS.

    DTIC Science & Technology

    Three series of studies of the nutritional value of Office of Civil Defense shelter rations were carried out. In the first, nitrogen balance and...none of the OCD rations (biscuits, crackers or wafers) supported satisfactory reproduction in the rat when fed during pregnancy and lactation; both the survival and growth of the progeny were seriously impaired. (Author)

  11. Causal judgments about empirical information in an interrupted time series design.

    PubMed

    White, Peter A

    2016-07-19

    Empirical information available for causal judgment in everyday life tends to take the form of quasi-experimental designs, lacking control groups, more than the form of contingency information that is usually presented in experiments. Stimuli were presented in which values of an outcome variable for a single individual were recorded over six time periods, and an intervention was introduced between the fifth and sixth time periods. Participants judged whether and how much the intervention affected the outcome. With numerical stimulus information, judgments were higher for a pre-intervention profile in which all values were the same than for pre-intervention profiles with any other kind of trend. With graphical stimulus information, judgments were more sensitive to trends, tending to be higher when an increase after the intervention was preceded by a decreasing series than when it was preceded by an increasing series ending on the same value at the fifth time period. It is suggested that a feature-analytic model, in which the salience of different features of information varies between presentation formats, may provide the best prospect of explaining the results.

  12. A novel encoding Lempel-Ziv complexity algorithm for quantifying the irregularity of physiological time series.

    PubMed

    Zhang, Yatao; Wei, Shoushui; Liu, Hai; Zhao, Lina; Liu, Chengyu

    2016-09-01

    The Lempel-Ziv (LZ) complexity and its variants have been extensively used to analyze the irregularity of physiological time series. To date, these measures cannot explicitly discern between the irregularity and the chaotic characteristics of physiological time series. Our study compared the performance of an encoding LZ (ELZ) complexity algorithm, a novel variant of the LZ complexity algorithm, with those of the classic LZ (CLZ) and multistate LZ (MLZ) complexity algorithms. Simulation experiments on Gaussian noise, logistic chaotic, and periodic time series showed that only the ELZ algorithm monotonically declined with the reduction in irregularity in time series, whereas the CLZ and MLZ approaches yielded overlapped values for chaotic time series and time series mixed with Gaussian noise, demonstrating the accuracy of the proposed ELZ algorithm in capturing the irregularity, rather than the complexity, of physiological time series. In addition, the effect of sequence length on the ELZ algorithm was more stable compared with those on CLZ and MLZ, especially when the sequence length was longer than 300. A sensitivity analysis for all three LZ algorithms revealed that both the MLZ and the ELZ algorithms could respond to the change in time sequences, whereas the CLZ approach could not. Cardiac interbeat (RR) interval time series from the MIT-BIH database were also evaluated, and the results showed that the ELZ algorithm could accurately measure the inherent irregularity of the RR interval time series, as indicated by lower LZ values yielded from a congestive heart failure group versus those yielded from a normal sinus rhythm group (p < 0.01). Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
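
    The coarse-graining-plus-parsing pipeline that LZ-family measures share can be sketched as follows. This is a common LZ78-style incremental parse with median binarization, an illustrative stand-in and not the authors' CLZ/MLZ/ELZ implementations.

```python
def binarize(series):
    """Coarse-grain a series by its median, as is common before LZ."""
    med = sorted(series)[len(series) // 2]
    return [1 if v > med else 0 for v in series]

def lz_complexity(symbols):
    """Number of distinct phrases in an incremental (LZ78-style) parse.

    The sequence is scanned left to right; the current phrase grows
    until it is no longer in the dictionary, then it is added and a
    new phrase starts. More phrases means a more irregular sequence.
    """
    phrases, current = set(), ""
    for s in symbols:
        current += str(s)
        if current not in phrases:
            phrases.add(current)
            current = ""
    return len(phrases) + (1 if current else 0)
```

    A constant or strictly periodic sequence parses into few phrases, while an irregular one keeps producing new phrases; the ELZ variant's contribution is an encoding step that keeps this count from conflating chaos with irregularity.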

  13. Dobson total ozone series of Oxford: Reevaluation and applications

    NASA Astrophysics Data System (ADS)

    Vogler, C.; BröNnimann, S.; Staehelin, J.; Griffin, R. E. M.

    2007-10-01

    We have reevaluated the original total ozone measurements made in Oxford between 1924 and 1957, with a view to extending backward in time the existing total ozone series from 1957 to 1975. The Oxford measurements are the oldest Dobson observations in the world. Their prime importance, when coupled with the series from Arosa (since 1926) and Tromsø (since 1935), is for increasing basic understanding of stratospheric ozone and dynamics, while in relation to studies of the recent ozone depletion they constitute a baseline of considerable (and unique) significance and value. However, the reevaluation was made difficult on account of changes to the instruments and wavelengths as the early data collection methods evolved, while unknowns due to the influence of aerosols and the possible presence of dioxides of sulphur and nitrogen created additional problems. Our reevaluation was based on statistical procedures (comparisons with meteorological upper air data and ozone series from Arosa) and also on corrections suggested by Dobson himself. The comparisons demonstrate that the data are internally consistent and of good quality. Nevertheless, as post-1957 data were not assessed in this study, the series cannot be recommended at present for trend analysis, though the series can be used for climatological studies. By supplementing the Oxford data with other existing series, we present a European total ozone climatology for 1924-1939, 1950-1965, and 1988-2000 and analyze the data with respect to variables measuring the strength and the temperature of the polar vortex.

  14. Pearson correlation estimation for irregularly sampled time series

    NASA Astrophysics Data System (ADS)

    Rehfeld, K.; Marwan, N.; Heitzig, J.; Kurths, J.

    2012-04-01

    Many applications in the geosciences call for the joint and objective analysis of irregular time series. For automated processing, robust measures of linear and nonlinear association are needed. Up to now, the standard approach would have been to reconstruct the time series on a regular grid, using linear or spline interpolation. Interpolation, however, comes with systematic side-effects, as it increases the auto-correlation in the time series. We have searched for the best method to estimate Pearson correlation for irregular time series, i.e. the one with the lowest estimation bias and variance. We adapted a kernel-based approach, using Gaussian weights. Pearson correlation is calculated, in principle, as a mean over products of previously centralized observations. In the regularly sampled case, observations in both time series were observed at the same time and thus the allocation of measurement values into pairs of products is straightforward. In the irregularly sampled case, however, measurements were not necessarily observed at the same time. Now, the key idea of the kernel-based method is to calculate weighted means of products, with the weight depending on the time separation between the observations. If the lagged correlation function is desired, the weights depend on the absolute difference between observation time separation and the estimation lag. To assess the applicability of the approach we used extensive simulations to determine the extent of interpolation side-effects with increasing irregularity of time series. We compared different approaches, based on (linear) interpolation, the Lomb-Scargle Fourier Transform, the sinc kernel and the Gaussian kernel. We investigated the role of kernel bandwidth and signal-to-noise ratio in the simulations. We found that the Gaussian kernel approach offers significant advantages and low Root-Mean Square Errors for regular, slightly irregular and very irregular time series. 
    We therefore conclude that it is a good choice for estimating correlation in irregularly sampled time series.
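
    The kernel idea described above can be sketched compactly. This is a simplified illustration under stated assumptions: the standardization step ignores sampling density, the zero-lag case is shown, and the function name is arbitrary.

```python
import math

def kernel_correlation(tx, x, ty, y, h):
    """Gaussian-kernel estimate of Pearson correlation for two series
    sampled at (possibly different) irregular times tx and ty.

    Observations are centred and scaled, then all cross products
    x_i * y_j are averaged with weights that decay with the time gap
    |tx_i - ty_j|, the bandwidth h setting the decay scale.
    """
    def standardize(v):
        m = sum(v) / len(v)
        s = math.sqrt(sum((u - m) ** 2 for u in v) / len(v))
        return [(u - m) / s for u in v]
    xs, ys = standardize(x), standardize(y)
    num = den = 0.0
    for ti, xi in zip(tx, xs):
        for tj, yj in zip(ty, ys):
            w = math.exp(-((ti - tj) ** 2) / (2 * h * h))
            num += w * xi * yj
            den += w
    return num / den
```

    For a lagged correlation function, the weight would instead depend on the difference between the observation time separation and the estimation lag, as the abstract notes.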

  15. Improving cluster-based missing value estimation of DNA microarray data.

    PubMed

    Brás, Lígia P; Menezes, José C

    2007-06-01

    We present a modification of the weighted K-nearest neighbours imputation method (KNNimpute) for missing value (MV) estimation in microarray data based on the reuse of estimated data. The method was called iterative KNN imputation (IKNNimpute) as the estimation is performed iteratively using the recently estimated values. The estimation efficiency of IKNNimpute was assessed under different conditions (data type, fraction and structure of missing data) by the normalized root mean squared error (NRMSE) and the correlation coefficients between estimated and true values, and compared with that of other cluster-based estimation methods (KNNimpute and sequential KNN). We further investigated the influence of imputation on the detection of differentially expressed genes using SAM by examining the differentially expressed genes that are lost after MV estimation. The performance measures give consistent results, indicating that the iterative procedure of IKNNimpute can enhance the prediction ability of cluster-based methods in the presence of high missing rates, in non-time series experiments and in data sets comprising both time series and non-time series data, because the information from genes with MVs is used more efficiently and the iterative procedure allows refining of the MV estimates. More importantly, IKNNimpute has a smaller detrimental effect on the detection of differentially expressed genes.
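
    The baseline KNNimpute that IKNNimpute iterates on can be sketched as below. This is a simplified, non-iterative version; the distance normalisation over shared columns and the 1e-9 smoothing term are illustrative choices, not the published algorithm's exact details.

```python
def knn_impute(data, k=2):
    """Fill None entries with a distance-weighted mean over the k rows
    (genes) most similar on the jointly observed columns.

    IKNNimpute differs in that freshly estimated values are fed back
    into subsequent distance computations; here each estimate uses
    only the originally observed data.
    """
    filled = [row[:] for row in data]
    for i, row in enumerate(data):
        for j, v in enumerate(row):
            if v is not None:
                continue
            cands = []
            for r, other in enumerate(data):
                if r == i or other[j] is None:
                    continue
                shared = [(a, b) for a, b in zip(row, other)
                          if a is not None and b is not None]
                if not shared:
                    continue
                d = sum((a - b) ** 2 for a, b in shared) ** 0.5 / len(shared)
                cands.append((d, other[j]))
            cands.sort(key=lambda c: c[0])
            nearest = cands[:k]
            if nearest:
                wts = [1.0 / (d + 1e-9) for d, _ in nearest]
                filled[i][j] = (sum(w * val for w, (_, val) in zip(wts, nearest))
                                / sum(wts))
    return filled
```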

  16. Special Course on Stability and Transition of Laminar Flow

    DTIC Science & Technology

    1984-06-01

    10"^ ; the high values of T, such as those used by HALL and HISLOP , are achieved by installing grids just upstream of the test section. Figure 16...1979, Springer Verlag ( 1980 ) "On the secondary motion induced by oscillations in a shear flow Phys. Fluids, 3, (1960) 656-657 "A non linear theory...SCHLICHTING wave by a sound wave" lUTAM Symposium on Laminar-Turbulent Transition, SUTTGART 1979, Springer Verlag ( 1980 ) "The influence of sound upon

  17. Improvements to surrogate data methods for nonstationary time series.

    PubMed

    Lucio, J H; Valdés, R; Rodríguez, L R

    2012-05-01

    The method of surrogate data has been extensively applied to hypothesis testing of system linearity, when only one realization of the system, a time series, is known. Normally, surrogate data should preserve the linear stochastic structure and the amplitude distribution of the original series. Classical surrogate data methods (such as random permutation, amplitude adjusted Fourier transform, or iterative amplitude adjusted Fourier transform) are successful at preserving one or both of these features in stationary cases. However, they always produce stationary surrogates, hence existing nonstationarity could be interpreted as dynamic nonlinearity. Certain modifications have been proposed that additionally preserve some nonstationarity, at the expense of reproducing a great deal of nonlinearity. However, even those methods generally fail to preserve the trend (i.e., global nonstationarity in the mean) of the original series. This is the case of time series with unit roots in their autoregressive structure. Additionally, those methods, based on Fourier transform, either need first and last values in the original series to match, or they need to select a piece of the original series with matching ends. These conditions are often inapplicable and the resulting surrogates are adversely affected by the well-known artefact problem. In this study, we propose a simple technique that, applied within existing Fourier-transform-based methods, generates surrogate data that jointly preserve the aforementioned characteristics of the original series, including (even strong) trends. Moreover, our technique avoids the negative effects of end mismatch. Several artificial and real, stationary and nonstationary, linear and nonlinear time series are examined, in order to demonstrate the advantages of the methods. Corresponding surrogate data are produced with the classical and with the proposed methods, and the results are compared.
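
    One of the classical Fourier-transform-based surrogates named above (the phase-randomised surrogate) can be sketched as follows; a plain O(n²) DFT keeps the sketch dependency-free. This illustrates the baseline the paper improves on, not the proposed trend-preserving technique itself.

```python
import cmath
import random

def fourier_surrogate(series, seed=None):
    """Phase-randomised surrogate of a real-valued series.

    The amplitude spectrum (hence the linear autocorrelation) is kept,
    the Fourier phases are randomised in conjugate pairs so that the
    inverse transform is real, and the series is reconstructed.
    """
    rng = random.Random(seed)
    n = len(series)
    spec = [sum(series[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]
    for k in range(1, (n + 1) // 2):   # leave DC (and Nyquist) untouched
        phi = rng.uniform(0.0, 2.0 * cmath.pi)
        spec[k] = abs(spec[k]) * cmath.exp(1j * phi)
        spec[n - k] = spec[k].conjugate()
    return [sum(spec[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]
```

    Because only phases change, the surrogate preserves the mean and variance of the original series exactly, but, as the abstract points out, it is stationary by construction and suffers from end-mismatch artefacts that the proposed technique avoids.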

  18. Interglacial climate dynamics and advanced time series analysis

    NASA Astrophysics Data System (ADS)

    Mudelsee, Manfred; Bermejo, Miguel; Köhler, Peter; Lohmann, Gerrit

    2013-04-01

    , Fischer H, Joos F, Knutti R, Lohmann G, Masson-Delmotte V (2010) What caused Earth's temperature variations during the last 800,000 years? Data-based evidence on radiative forcing and constraints on climate sensitivity. Quaternary Science Reviews 29:129. Loulergue L, Schilt A, Spahni R, Masson-Delmotte V, Blunier T, Lemieux B, Barnola J-M, Raynaud D, Stocker TF, Chappellaz J (2008) Orbital and millennial-scale features of atmospheric CH4 over the past 800,000 years. Nature 453:383. Lüthi D, Le Floch M, Bereiter B, Blunier T, Barnola J-M, Siegenthaler U, Raynaud D, Jouzel J, Fischer H, Kawamura K, Stocker TF (2008) High-resolution carbon dioxide concentration record 650,000-800,000 years before present. Nature 453:379. Mudelsee M (2000) Ramp function regression: A tool for quantifying climate transitions. Computers and Geosciences 26:293. Mudelsee M (2002) TAUEST: A computer program for estimating persistence in unevenly spaced weather/climate time series. Computers and Geosciences 28:69. Mudelsee M (2010) Climate Time Series Analysis: Classical Statistical and Bootstrap Methods. Springer, Dordrecht, 474 pp. [www.manfredmudelsee.com/book] Siegenthaler U, Stocker TF, Monnin E, Lüthi D, Schwander J, Stauffer B, Raynaud D, Barnola J-M, Fischer H, Masson-Delmotte V, Jouzel J (2005) Stable carbon cycle-climate relationship during the late Pleistocene. Science 310:1313.

  19. 31 CFR Appendix A to Part 359 - Redemption Value Calculations

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 31 Money and Finance:Treasury 2 2012-07-01 2012-07-01 false Redemption Value Calculations A... of a State, except for estate or inheritance taxes. (See 31 U.S.C. 3124.) 2. What is an example of a book-entry Series I savings bonds redemption value calculation? Assume a New Treasury Direct par...

  20. 31 CFR Appendix A to Part 359 - Redemption Value Calculations

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 31 Money and Finance:Treasury 2 2011-07-01 2011-07-01 false Redemption Value Calculations A... of a State, except for estate or inheritance taxes. (See 31 U.S.C. 3124.) 2. What is an example of a book-entry Series I savings bonds redemption value calculation? Assume a New Treasury Direct par...

  1. 31 CFR Appendix A to Part 359 - Redemption Value Calculations

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 31 Money and Finance:Treasury 2 2013-07-01 2013-07-01 false Redemption Value Calculations A... of a State, except for estate or inheritance taxes. (See 31 U.S.C. 3124.) 2. What is an example of a book-entry Series I savings bonds redemption value calculation? Assume a New Treasury Direct par...

  2. 31 CFR Appendix A to Part 359 - Redemption Value Calculations

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 31 Money and Finance: Treasury 2 2014-07-01 2014-07-01 false Redemption Value Calculations A... of a State, except for estate or inheritance taxes. (See 31 U.S.C. 3124.) 2. What is an example of a book-entry Series I savings bonds redemption value calculation? Assume a New Treasury Direct par...

  3. Rational trigonometric approximations using Fourier series partial sums

    NASA Technical Reports Server (NTRS)

    Geer, James F.

    1993-01-01

    A class of approximations S_(N,M) to a periodic function f, which uses the ideas of Padé (rational-function) approximation based on the Fourier series representation of f rather than on its Taylor series representation, is introduced and studied. Each approximation S_(N,M) is the quotient of a trigonometric polynomial of degree N and a trigonometric polynomial of degree M. The coefficients in these polynomials are determined by requiring that an appropriate number of the Fourier coefficients of S_(N,M) agree with those of f. Explicit expressions are derived for these coefficients in terms of the Fourier coefficients of f. It is proven that these 'Fourier-Padé' approximations converge pointwise to (f(x^+) + f(x^-))/2 more rapidly (in some cases by a factor of 1/k^(2M)) than the Fourier series partial sums on which they are based. The approximations are illustrated by several examples, and an application to the solution of an initial-boundary value problem for the simple heat equation is presented.

  4. How Can Value-Added Measures Be Used for Teacher Improvement? What We Know Series: Value-Added Methods and Applications. Knowledge Brief 13

    ERIC Educational Resources Information Center

    Loeb, Susanna

    2013-01-01

    The question for this brief is whether education leaders can use value-added measures as tools for improving schooling and, if so, how to do this. Districts, states, and schools can, at least in theory, generate gains in educational outcomes for students using value-added measures in three ways: creating information on effective programs, making…

  5. Estimating trends in atmospheric water vapor and temperature time series over Germany

    NASA Astrophysics Data System (ADS)

    Alshawaf, Fadwa; Balidakis, Kyriakos; Dick, Galina; Heise, Stefan; Wickert, Jens

    2017-08-01

    Ground-based GNSS (Global Navigation Satellite System) has been used efficiently since the 1990s as a meteorological observing system. Recently, scientists have used GNSS time series of precipitable water vapor (PWV) for climate research. In this work, we compare the temporal trends estimated from GNSS time series with those estimated from European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis (ERA-Interim) data and meteorological measurements. We aim to evaluate climate evolution in Germany by monitoring different atmospheric variables such as temperature and PWV. PWV time series were obtained by three methods: (1) estimated from ground-based GNSS observations using the method of precise point positioning, (2) inferred from ERA-Interim reanalysis data, and (3) determined based on daily in situ measurements of temperature and relative humidity. The other relevant atmospheric parameters are available from surface measurements of meteorological stations or derived from ERA-Interim. The trends are estimated using two methods: the first applies least squares to deseasonalized time series and the second uses the Theil-Sen estimator. The trends estimated at 113 GNSS sites, with 10 to 19 years of temporal coverage, vary between -1.5 and 2.3 mm decade-1 with standard deviations below 0.25 mm decade-1. These results were validated by estimating the trends from ERA-Interim data over the same time windows, which show similar values. The values of the trend depend on the length and the variations of the time series. Therefore, to give a mean value of the PWV trend over Germany, we estimated the trends using ERA-Interim data spanning from 1991 to 2016 (26 years) at 227 synoptic stations over Germany. The ERA-Interim data show positive PWV trends of 0.33 ± 0.06 mm decade-1 with standard errors below 0.03 mm decade-1. The increment in PWV varies between 4.5 and 6.5 % per degree Celsius rise in temperature, which is comparable to the theoretical rate given by the Clausius-Clapeyron relation.
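
    The second trend method named above, the Theil-Sen estimator, is simple enough to sketch directly. This is a generic textbook implementation, not the authors' code; times and values are assumed already deseasonalized.

```python
def theil_sen_slope(t, y):
    """Theil-Sen trend: the median of the slopes over all pairs of
    observations, robust to outliers in a deseasonalized series."""
    slopes = sorted((y[j] - y[i]) / (t[j] - t[i])
                    for i in range(len(t)) for j in range(i + 1, len(t)))
    m = len(slopes)
    return (slopes[m // 2] if m % 2 else
            0.5 * (slopes[m // 2 - 1] + slopes[m // 2]))
```

    Unlike an ordinary least-squares slope, the median of pairwise slopes is essentially unaffected by a few anomalous observations, which matters for PWV series with occasional data gaps or spikes.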

  6. Adaptive Anchoring Model: How Static and Dynamic Presentations of Time Series Influence Judgments and Predictions.

    PubMed

    Kusev, Petko; van Schaik, Paul; Tsaneva-Atanasova, Krasimira; Juliusson, Asgeir; Chater, Nick

    2018-01-01

    When attempting to predict future events, people commonly rely on historical data. One psychological characteristic of judgmental forecasting of time series, established by research, is that when people make forecasts from series, they tend to underestimate future values for upward trends and overestimate them for downward ones, so-called trend-damping (modeled by anchoring on, and insufficient adjustment from, the average of recent time series values). Events in a time series can be experienced sequentially (dynamic mode), or they can also be retrospectively viewed simultaneously (static mode), not experienced individually in real time. In one experiment, we studied the influence of presentation mode (dynamic and static) on two sorts of judgment: (a) predictions of the next event (forecast) and (b) estimation of the average value of all the events in the presented series (average estimation). Participants' responses in dynamic mode were anchored on more recent events than in static mode for all types of judgment but with different consequences; hence, dynamic presentation improved prediction accuracy, but not estimation. These results are not anticipated by existing theoretical accounts; we develop and present an agent-based model-the adaptive anchoring model (ADAM)-to account for the difference between processing sequences of dynamically and statically presented stimuli (visually presented data). ADAM captures how variation in presentation mode produces variation in responses (and the accuracy of these responses) in both forecasting and judgment tasks. ADAM's model predictions for the forecasting and judgment tasks fit better with the response data than a linear-regression time series model. Moreover, ADAM outperformed autoregressive-integrated-moving-average (ARIMA) and exponential-smoothing models, while neither of these models accounts for people's responses on the average estimation task. Copyright © 2017 The Authors. Cognitive Science published by Wiley

  7. Does Value-Added Work Better in Elementary than in Secondary Grades? What We Know Series: Value-Added Methods Applications. Knowledge Brief 7

    ERIC Educational Resources Information Center

    Harris, Douglas N.; Anderson, Andrew

    2013-01-01

    There is a growing body of research on the validity and reliability of value-added measures, but most of this research has focused on elementary grades. Driven by several federal initiatives such as Race to the Top, Teacher Incentive Fund, and ESEA waivers, however, many states have incorporated value-added measures into the evaluations not only…

  8. Diagnostic value of MR imaging in the Lewis-Sumner syndrome: a case series.

    PubMed

    Rajabally, Yusuf A; Knopp, Michael J; Martin-Lamb, Darren; Morlese, John

    2014-07-15

    Lewis-Sumner syndrome (LSS) is considered a variant of chronic inflammatory demyelinating polyneuropathy (CIDP) that is most frequently described with exclusive upper limb involvement. The diagnosis of LSS is clinical and electrophysiological. However, these features are not always obvious and, in view of the syndrome's rarity, the diagnosis may be missed and patients denied effective immunomodulatory therapy. We herein describe the magnetic resonance imaging (MRI) findings in a series of five consecutive patients with a clinical diagnosis of LSS, using T2 STIR (short tau inversion recovery) images without contrast. We demonstrated hyperintensity, with or without hypertrophy, of cervical roots and/or brachial plexus on the affected side and/or contralaterally, which aided diagnostic confirmation. This helped therapeutic decision making regarding immunotherapy in all cases. MR imaging of the cervical spine/brachial plexus with T2 STIR may be helpful in suspected cases of LSS, as it represents a very useful additional diagnostic tool. Copyright © 2014 Elsevier B.V. All rights reserved.

  9. Clinical time series prediction: towards a hierarchical dynamical system framework

    PubMed Central

    Liu, Zitao; Hauskrecht, Milos

    2014-01-01

    Objective Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding the patient condition, the dynamics of a disease, the effect of various patient management interventions, and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Materials and methods Our hierarchical dynamical system framework for modeling clinical time series combines the advantages of two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. Results We tested our framework by first learning the time series model from data for the patients in the training set, and then applying the model to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered. Conclusion A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive

  10. Clinical time series prediction: Toward a hierarchical dynamical system framework.

    PubMed

    Liu, Zitao; Hauskrecht, Milos

    2015-09-01

    Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding the patient condition, the dynamics of a disease, the effect of various patient management interventions, and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Our hierarchical dynamical system framework for modeling clinical time series combines the advantages of two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. We tested our framework by first learning the time series model from data for the patients in the training set, and then using it to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered. A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance. Copyright © 2014 Elsevier B.V. All rights reserved.
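The lower-level Gaussian process idea, posterior-mean interpolation of irregularly sampled values, can be sketched with plain numpy. The RBF kernel, length-scale, and noise level below are illustrative assumptions, not the parameters of the paper's framework, and the linear dynamical system layer is omitted:

```python
import numpy as np

def gp_posterior_mean(t_obs, y_obs, t_new, length=2.0, noise=0.1):
    """Posterior mean of a zero-mean Gaussian process with an RBF kernel,
    evaluated at new time points given irregularly spaced observations."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(t_obs, t_obs) + noise ** 2 * np.eye(len(t_obs))
    return k(t_new, t_obs) @ np.linalg.solve(K, y_obs)

# Irregularly sampled observations interpolated onto a regular grid.
t = np.array([0.0, 0.7, 1.1, 2.9, 4.0])
y = np.sin(t)
grid = np.linspace(0.0, 4.0, 9)
print(np.round(gp_posterior_mean(t, y, grid), 2))
```

Because the kernel handles arbitrary time gaps, no resampling or imputation of the irregular series is needed before modeling.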

  11. Demands, values, and burnout

    PubMed Central

    Leiter, Michael P.; Frank, Erica; Matheson, Timothy J.

    2009-01-01

    OBJECTIVE T o explore the interaction between workload and values congruence (personal values with health care system values) in the context of burnout and physician engagement and to explore the relative importance of these factors by sex, given the distinct work patterns of male and female physicians. DESIGN National mailed survey. SETTING Canada. PARTICIPANTS A random sample of 8100 Canadian physicians (response rate 40%, N = 3213); 2536 responses (from physicians working more than 35 hours per week) were analyzed. MAIN OUTCOME MEASURES Levels of burnout, values congruence, and workload, by sex, measured by the Maslach Burnout Inventory—General Scale and the Areas of Worklife Scale. RESULTS Results showed a moderate level of burnout among Canadian physicians, with relatively positive scores on exhaustion, average scores on cynicism, and mildly negative scores on professional efficacy. A series of multiple regression analyses confirmed parallel main effect contributions from manageable workload and values congruence. Both workload and values congruence predicted exhaustion and cynicism for men and women (P = .001). Only values congruence provided a significant prediction of professional efficacy for both men and women (P = .001) These predictors interacted for women on all 3 aspects of burnout (exhaustion, cynicism, and diminished efficacy). Howevever, overall levels of the burnout indicators departed only modestly from normative levels. CONCLUSION W orkload and values congruence make distinct contributions to physician burnout. Work overload contributes to predicting exhaustion and cynicism; professional values crises contribute to predicting exhaustion, cynicism, and low professional efficacy. The interaction of values and workload for women in particular has implications for the distinct work-life patterns of male and female physicians. Specifically, the congruence of individual values with values inherent in the health care system appeared to be of greater

  12. The incorrect usage of singular spectral analysis and discrete wavelet transform in hybrid models to predict hydrological time series

    NASA Astrophysics Data System (ADS)

    Du, Kongchang; Zhao, Ying; Lei, Jiaqiang

    2017-09-01

    In hydrological time series prediction, singular spectrum analysis (SSA) and discrete wavelet transform (DWT) are widely used as preprocessing techniques for artificial neural network (ANN) and support vector machine (SVM) predictors. These hybrid or ensemble models seem to largely reduce the prediction error. In the current literature, researchers apply these techniques to the whole observed time series and then use the resulting set of reconstructed or decomposed time series as inputs to the ANN or SVM. However, through two comparative experiments and mathematical deduction, we found that this usage of SSA and DWT in building hybrid models is incorrect. Because SSA and DWT use 'future' values to perform the calculation, the series generated by SSA reconstruction or DWT decomposition contain information about those 'future' values. Such hybrid models therefore report spuriously 'high' prediction performance and may cause large errors in practice.
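The leakage described here is easy to demonstrate: any transform with a centered (non-causal) window, applied to the whole series before the train/test split, lets future observations contaminate earlier inputs. The sketch below uses a centered moving average as a simple stand-in for SSA reconstruction or DWT decomposition:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(200)

# Centered moving average over the WHOLE series, as in the criticized
# hybrid models: each smoothed point mixes in future observations.
w = 5
smoothed_full = np.convolve(x, np.ones(w) / w, mode="same")

# Perturb one "future" value and re-smooth: points BEFORE index 150
# also change, which is exactly the information leak.
x2 = x.copy()
x2[150] += 10.0
smoothed_leaky = np.convolve(x2, np.ones(w) / w, mode="same")
print(np.nonzero(smoothed_full != smoothed_leaky)[0])  # [148 149 150 151 152]
```

A model trained on `smoothed_full[:150]` has therefore already "seen" part of the test period; the honest alternative is to recompute the transform using only data available up to each forecast origin.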

  13. Evaluation of random errors in Williams’ series coefficients obtained with digital image correlation

    NASA Astrophysics Data System (ADS)

    Lychak, Oleh V.; Holyns'kiy, Ivan S.

    2016-03-01

    The use of the Williams’ series parameters for fracture analysis requires valid information about their error values. The aim of this investigation is the development of a method for estimating the standard deviation of random errors of the Williams’ series parameters obtained from measured components of the stress field. Criteria for choosing the optimal number of terms in the truncated Williams’ series, so that the parameters are derived with minimal errors, are also proposed. The method was used to evaluate the Williams’ parameters obtained from data measured by the digital image correlation technique on a three-point bending specimen.

  14. New insights into soil temperature time series modeling: linear or nonlinear?

    NASA Astrophysics Data System (ADS)

    Bonakdari, Hossein; Moeeni, Hamid; Ebtehaj, Isa; Zeynoddin, Mohammad; Mahoammadian, Abdolmajid; Gharabaghi, Bahram

    2018-03-01

    Soil temperature (ST) is an important dynamic parameter, whose prediction is a major research topic in various fields, including agriculture, because ST has a critical role in hydrological processes at the soil surface. In this study, a new linear methodology is proposed based on stochastic methods for modeling daily soil temperature (DST). With this approach, the ST series components are determined to carry out modeling and spectral analysis. The results of this process are compared with two linear methods based on seasonal standardization and seasonal differencing in terms of four DST series. The series used in this study were measured at two stations, Champaign and Springfield, at depths of 10 and 20 cm. The results indicate that in all ST series reviewed, the periodic term is the most robust among all components. According to a comparison of the three methods applied to analyze the various series components, it appears that spectral analysis combined with stochastic methods outperformed the seasonal standardization and seasonal differencing methods. In addition to comparing the proposed methodology with linear methods, the ST modeling results were compared with two nonlinear methods in two forms: considering hydrological variables (HV) as input variables and DST modeling as a time series. In a previous study at the mentioned sites, Kim and Singh (Theor Appl Climatol 118:465-479, 2014) applied the popular multilayer perceptron (MLP) neural network and adaptive neuro-fuzzy inference system (ANFIS) nonlinear methods, considering HV as input variables. The comparison results signify that the relative error in estimating DST by the proposed methodology was about 6%, while this value with MLP and ANFIS was over 15%. Moreover, MLP and ANFIS models were employed for DST time series modeling. 
Due to these models' relatively inferior performance to the proposed methodology, two hybrid models were implemented: the weights and membership function of MLP and

  15. Extended space expectation values in quantum dynamical system evolutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demiralp, Metin

    2014-10-06

    The time-variant power series expansion for the expectation value of a given quantum dynamical operator is a well-known and well-investigated issue in quantum dynamics. However, depending on the operator and Hamiltonian singularities, this expansion either may not exist or may not converge for all time instances except the beginning of the evolution. This work focuses on this issue and seeks certain cures for these negativities. We work in the extended space obtained by adding all images of the initial wave function under the system Hamiltonian’s positive integer powers. This requires the introduction of certain appropriately defined weight operators. The resulting better convergence in the temporal power series urges us to call the newly defined entities “extended space expectation values,” even though they are constructed over certain weight operators and are somehow pseudo expectation values.

  16. Algorithm for predicting the evolution of series of dynamics of complex systems in solving information problems

    NASA Astrophysics Data System (ADS)

    Kasatkina, T. I.; Dushkin, A. V.; Pavlov, V. A.; Shatovkin, R. R.

    2018-03-01

    In the development of information systems and software for predicting dynamics series, neural network methods have recently been applied. They are more flexible than existing analogues and are capable of taking the nonlinearities of a series into account. In this paper, we propose a modified algorithm for predicting dynamics series that includes a method for training neural networks and an approach to describing and presenting the input data, based on prediction by the multilayer perceptron method. To construct the neural network, the values of the series at its extremum points and the corresponding time values, formed using the sliding window method, are used as input data. The proposed algorithm can act as an independent approach to predicting dynamics series or serve as one part of a forecasting system. The efficiency of predicting the evolution of dynamics series for a short-term one-step and a long-term multi-step forecast by the classical multilayer perceptron method and by the modified algorithm is compared using synthetic and real data. The result of this modification is the minimization of the iterative error that arises from feeding previously predicted values back into the neural network inputs, as well as an increase in the accuracy of the network's iterative predictions.
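The sliding-window construction of training pairs can be sketched as follows; for simplicity this uses raw consecutive values rather than the extremum-point encoding the paper describes:

```python
import numpy as np

def sliding_windows(series, width):
    """Build one-step-ahead (input, target) pairs: each row of X holds
    `width` consecutive values and y holds the value that follows."""
    s = np.asarray(series, dtype=float)
    X = np.stack([s[i:i + width] for i in range(len(s) - width)])
    y = s[width:]
    return X, y

X, y = sliding_windows([1, 2, 3, 4, 5, 6], width=3)
print(X.tolist())  # [[1.0, 2.0, 3.0], [2.0, 3.0, 4.0], [3.0, 4.0, 5.0]]
print(y.tolist())  # [4.0, 5.0, 6.0]
```

In multi-step (iterative) forecasting, each prediction is appended to the window to form the next input, which is exactly how the iterative error the authors minimize accumulates.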

  17. Cosmological coherent state expectation values in loop quantum gravity I. Isotropic kinematics

    NASA Astrophysics Data System (ADS)

    Dapor, Andrea; Liegener, Klaus

    2018-07-01

    This is the first paper of a series dedicated to loop quantum gravity (LQG) coherent states and cosmology. The concept is based on the effective dynamics program of Loop Quantum Cosmology, where the classical dynamics generated by the expectation value of the Hamiltonian on semiclassical states is found to be in agreement with the quantum evolution of such states. We ask the question of whether this expectation value agrees with the one obtained in the full theory. The answer is in the negative, Dapor and Liegener (2017 arXiv:1706.09833). This series of papers is dedicated to detailing the computations that lead to that surprising result. In the current paper, we construct the family of coherent states in LQG which represent flat (k  =  0) Robertson–Walker spacetimes, and present the tools needed to compute expectation values of polynomial operators in holonomy and flux on such states. These tools will be applied to the LQG Hamiltonian operator (in Thiemann regularization) in the second paper of the series. The third paper will present an extension to cosmologies and a comparison with alternative regularizations of the Hamiltonian.

  18. A taxonomic index, with names of descriptive authorities of termite genera and species: an accompaniment to Biology of Termites: A Modern Synthesis (Bignell DE, Roisin Y, Lo N, Editors. 2011. Springer, Dordrecht. 576 pp.).

    PubMed

    Bignell, D E; Jones, D T

    2014-01-01

    Biology of Termites: A Modern Synthesis (Bignell DE, Roisin Y, Lo N, (Editors), Springer, Dordrecht, 576pp, ISBN 978-90-481-3976-7, e-ISBN 978-90-481-3977-4, DOI 10.1007/978-90-481-3977-4) was published in 2011. With the agreement of the publishers, we give a taxonomic index of the book comprising 494 termite entries, 103 entries of other multicellular animal species mentioned as associates or predators of termites, with 9 fungal, 60 protist, and 64 prokaryote identities, which are listed as termite symbionts (sensu stricto). In addition, we add descriptive authorities for living (and some fossil) termite genera and species. Higher taxonomic groupings for termites are indicated by 25 code numbers. Microorganisms (prokaryotes, protists, and fungi) are listed separately, using broad modern taxonomic affiliations from the contemporary literature of bacteriology, protozoology, and mycology. This is an open access paper. We use the Creative Commons Attribution 3.0 license that permits unrestricted use, provided that the paper is properly attributed.

  19. Evaluation of a series hybrid thrust bearing at DN values to three million. 2: Fabrication and testing

    NASA Technical Reports Server (NTRS)

    Eusepi, M.; Winn, L. W.

    1975-01-01

    Results of tests made to determine the experimental performance of a series hybrid bearing are reported. The bearing consists of a 150 mm ball bearing and a centrifugally actuated, conical, fluid film bearing fitting an envelope with an outer radius of 86.4 mm (3.4 in.) and inner radius of 71 mm (2.8 in.). Tests were conducted up to 16,500 rpm, at which speed an axial load of 15,568 N (3500 lb) was safely supported by the hybrid bearing system. Through the employment of the series hybrid bearing principle, it was possible to reduce the effective ball bearing speed to approximately one-half of the shaft speed. A reduction of this magnitude should result in a tenfold increase in the ball bearing fatigue life. A successful simulation of fluid film bearing lubricant supply failure, performed repeatedly at an operating speed of 10,000 rpm, resulted in complete and smooth change over to full scale ball bearing operation when the oil supply to the fluid film bearing was discontinued. Reactivation of the fluid film supply system produced a flawless return to the original mode of hybrid operation.

  20. Confidence interval or p-value?: part 4 of a series on evaluation of scientific publications.

    PubMed

    du Prel, Jean-Baptist; Hommel, Gerhard; Röhrig, Bernd; Blettner, Maria

    2009-05-01

    An understanding of p-values and confidence intervals is necessary for the evaluation of scientific articles. This article will inform the reader of the meaning and interpretation of these two statistical concepts. The uses of these two statistical concepts and the differences between them are discussed on the basis of a selective literature search concerning the methods employed in scientific articles. P-values in scientific studies are used to determine whether a null hypothesis formulated before the performance of the study is to be accepted or rejected. In exploratory studies, p-values enable the recognition of any statistically noteworthy findings. Confidence intervals provide information about a range in which the true value lies with a certain degree of probability, as well as about the direction and strength of the demonstrated effect. This enables conclusions to be drawn about the statistical plausibility and clinical relevance of the study findings. It is often useful for both statistical measures to be reported in scientific articles, because they provide complementary types of information.
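The complementary nature of the two measures can be seen by computing both for the same data. The sketch below uses a one-sample t-test on synthetic values; it illustrates the general point rather than any study cited in the article:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample = rng.normal(loc=0.8, scale=2.0, size=50)

# p-value: should the null hypothesis "true mean = 0" be rejected?
t_stat, p_value = stats.ttest_1samp(sample, popmean=0.0)

# 95% confidence interval: a range for the true mean, which also
# conveys the direction and magnitude of the effect.
ci = stats.t.interval(0.95, df=len(sample) - 1,
                      loc=sample.mean(), scale=stats.sem(sample))
print(f"p = {p_value:.4f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

For a two-sided t-test the two reports agree (p < 0.05 exactly when the 95% CI excludes zero), but only the interval shows how large the effect plausibly is.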

  1. The Value Added National Project. Technical Report: Primary 4. Value-Added Key Stage 1 to Key Stage 2.

    ERIC Educational Resources Information Center

    Tymms, Peter

    This is the fourth in a series of technical reports that have dealt with issues surrounding the possibility of national value-added systems for primary schools in England. The main focus has been on the relative progress made by students between the ends of Key Stage 1 (KS1) and Key Stage 2 (KS2). The analysis has indicated that the strength of…

  2. Cross-sample entropy of foreign exchange time series

    NASA Astrophysics Data System (ADS)

    Liu, Li-Zhi; Qian, Xi-Yuan; Lu, Heng-Yao

    2010-11-01

    The correlation of foreign exchange rates in currency markets is investigated based on empirical data of DKK/USD, NOK/USD, CAD/USD, JPY/USD, KRW/USD, SGD/USD, THB/USD and TWD/USD for the period from 1995 to 2002. The cross-SampEn (cross-sample entropy) method is used to compare the returns of every two exchange rate time series to assess their degree of asynchrony. The calculation method for the confidence interval of SampEn is extended and applied to cross-SampEn. The cross-SampEn and its confidence interval for every two of the exchange rate time series in the periods 1995-1998 (before the Asian currency crisis) and 1999-2002 (after the Asian currency crisis) are calculated. The results show that the cross-SampEn of every two of these exchange rates becomes higher after the Asian currency crisis, indicating a higher asynchrony between the exchange rates. Especially for Singapore, Thailand and Taiwan, the cross-SampEn values after the Asian currency crisis are significantly higher than those before it. Comparison with the correlation coefficient shows that cross-SampEn is superior in describing the correlation between time series.
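A minimal cross-SampEn implementation might look like the sketch below; the template length m and tolerance r are conventional defaults, not values taken from the paper, and the confidence-interval extension is omitted:

```python
import numpy as np

def cross_sampen(u, v, m=2, r=0.2):
    """Cross-sample entropy of two series: -ln(A/B), where B counts
    length-m template matches between u and v (Chebyshev distance <= r
    after standardization) and A counts length-(m+1) matches. Higher
    values indicate greater asynchrony."""
    u = (u - u.mean()) / u.std()
    v = (v - v.mean()) / v.std()
    def matches(k):
        Tu = np.stack([u[i:i + k] for i in range(len(u) - k + 1)])
        Tv = np.stack([v[i:i + k] for i in range(len(v) - k + 1)])
        d = np.max(np.abs(Tu[:, None, :] - Tv[None, :, :]), axis=2)
        return np.count_nonzero(d <= r)
    return -np.log(matches(m + 1) / matches(m))

rng = np.random.default_rng(2)
x = rng.standard_normal(300)
noisy_copy = x + 0.1 * rng.standard_normal(300)   # nearly synchronous
unrelated = rng.standard_normal(300)              # asynchronous
print(cross_sampen(x, noisy_copy) < cross_sampen(x, unrelated))
```

Unlike a correlation coefficient, the statistic compares templates of consecutive values, so it is sensitive to pattern-level (a)synchrony rather than only to linear co-movement.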

  3. Clustering Multivariate Time Series Using Hidden Markov Models

    PubMed Central

    Ghassempour, Shima; Girosi, Federico; Maeder, Anthony

    2014-01-01

    In this paper we describe an algorithm for clustering multivariate time series with variables taking both categorical and continuous values. Time series of this type are frequent in health care, where they represent the health trajectories of individuals. The problem is challenging because categorical variables make it difficult to define a meaningful distance between trajectories. We propose an approach based on Hidden Markov Models (HMMs), where we first map each trajectory into an HMM, then define a suitable distance between HMMs and finally proceed to cluster the HMMs with a method based on a distance matrix. We test our approach on a simulated, but realistic, data set of 1,255 trajectories of individuals of age 45 and over, on a synthetic validation set with known clustering structure, and on a smaller set of 268 trajectories extracted from the longitudinal Health and Retirement Survey. The proposed method can be implemented quite simply using standard packages in R and Matlab and may be a good candidate for solving the difficult problem of clustering multivariate time series with categorical variables using tools that do not require advanced statistic knowledge, and therefore are accessible to a wide range of researchers. PMID:24662996

  4. CI2 for creating and comparing confidence-intervals for time-series bivariate plots.

    PubMed

    Mullineaux, David R

    2017-02-01

    Currently no method exists for calculating and comparing the confidence-intervals (CI) for the time-series of a bivariate plot. The study's aim was to develop 'CI2' as a method to calculate the CI on time-series bivariate plots, and to identify if the CI between two bivariate time-series overlap. The test data were the knee and ankle angles from 10 healthy participants running on a motorised standard-treadmill and non-motorised curved-treadmill. For a recommended 10+ trials, CI2 involved calculating 95% confidence-ellipses at each time-point, then taking as the CI the points on the ellipses that were perpendicular to the direction vector between the means of two adjacent time-points. Consecutive pairs of CI created convex quadrilaterals, and any overlap of these quadrilaterals at the same time or ±1 frame as a time-lag calculated using cross-correlations, indicated where the two time-series differed. CI2 showed no group differences between left and right legs on both treadmills, but the same legs between treadmills for all participants showed differences of less knee extension on the curved-treadmill before heel-strike. To improve and standardise the use of CI2 it is recommended to remove outlier time-series, use 95% confidence-ellipses, and scale the ellipse by the fixed Chi-square value as opposed to the sample-size dependent F-value. For practical use, and to aid in standardisation or future development of CI2, Matlab code is provided. CI2 provides an effective method to quantify the CI of bivariate plots, and to explore the differences in CI between two bivariate time-series. Copyright © 2016 Elsevier B.V. All rights reserved.
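The per-time-point confidence ellipse, scaled by the fixed chi-square value as recommended above, can be sketched as follows; the bivariate data here are synthetic stand-ins for the knee/ankle angle trials:

```python
import numpy as np
from scipy.stats import chi2

def confidence_ellipse(points, level=0.95):
    """Confidence ellipse of bivariate points at one time point, scaled
    by the fixed chi-square value (2 degrees of freedom) rather than
    the sample-size dependent F value. Returns the centre, semi-axis
    lengths, and axis directions (eigenvector columns)."""
    pts = np.asarray(points, dtype=float)
    centre = pts.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(pts.T))
    semi_axes = np.sqrt(chi2.ppf(level, df=2) * eigvals)
    return centre, semi_axes, eigvecs

rng = np.random.default_rng(3)
trials = rng.multivariate_normal([30.0, 10.0], [[4.0, 1.0], [1.0, 2.0]], 10)
centre, semi_axes, axes = confidence_ellipse(trials)
print(np.round(centre, 1), np.round(semi_axes, 2))
```

In CI2 proper, an ellipse like this is computed at every time point and then reduced to interval bounds perpendicular to the trajectory's direction vector; that projection step is not shown here.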

  5. Modeling pollen time series using seasonal-trend decomposition procedure based on LOESS smoothing.

    PubMed

    Rojo, Jesús; Rivero, Rosario; Romero-Morte, Jorge; Fernández-González, Federico; Pérez-Badia, Rosa

    2017-02-01

    Analysis of airborne pollen concentrations provides valuable information on plant phenology and is thus a useful tool in agriculture-for predicting harvests in crops such as the olive and for deciding when to apply phytosanitary treatments-as well as in medicine and the environmental sciences. Variations in airborne pollen concentrations, moreover, are indicators of changing plant life cycles. By modeling pollen time series, we can not only identify the variables influencing pollen levels but also predict future pollen concentrations. In this study, airborne pollen time series were modeled using a seasonal-trend decomposition procedure based on LOcally wEighted Scatterplot Smoothing (LOESS) smoothing (STL). The data series-daily Poaceae pollen concentrations over the period 2006-2014-was broken up into seasonal and residual (stochastic) components. The seasonal component was compared with data on Poaceae flowering phenology obtained by field sampling. Residuals were fitted to a model generated from daily temperature and rainfall values, and daily pollen concentrations, using partial least squares regression (PLSR). This method was then applied to predict daily pollen concentrations for 2014 (independent validation data) using results for the seasonal component of the time series and estimates of the residual component for the period 2006-2013. Correlation between predicted and observed values was r = 0.79 (correlation coefficient) for the pre-peak period (i.e., the period prior to the peak pollen concentration) and r = 0.63 for the post-peak period. Separate analysis of each of the components of the pollen data series enables the sources of variability to be identified more accurately than by analysis of the original non-decomposed data series, and for this reason, this procedure has proved to be a suitable technique for analyzing the main environmental factors influencing airborne pollen concentrations.
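The additive split into seasonal, trend, and residual components can be illustrated with a deliberately simplified sketch: a moving-average trend and per-season means stand in for STL's LOESS smoothing, and the data are synthetic rather than the Poaceae series:

```python
import numpy as np

def decompose(series, period):
    """Toy additive seasonal-trend decomposition: moving-average trend,
    per-season means for the seasonal part, remainder as residual."""
    s = np.asarray(series, dtype=float)
    trend = np.convolve(s, np.ones(period) / period, mode="same")
    detrended = s - trend
    phase_means = np.array([detrended[p::period].mean()
                            for p in range(period)])
    seasonal = phase_means[np.arange(len(s)) % period]
    resid = s - trend - seasonal
    return trend, seasonal, resid

# Synthetic series: linear trend + seasonal cycle (period 24) + noise.
rng = np.random.default_rng(4)
t = np.arange(240)
series = 0.02 * t + 5 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.5, 240)
trend, seasonal, resid = decompose(series, 24)
print(np.allclose(trend + seasonal + resid, series))  # True
```

As in the paper, the residual (stochastic) component is what remains after the seasonal signal is removed, and it is this component that would be regressed on meteorological predictors.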

  6. 31 CFR 351.71 - How can I find out what my book-entry Series EE savings bonds are worth?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 31 Money and Finance:Treasury 2 2011-07-01 2011-07-01 false How can I find out what my book-entry... OFFERING OF UNITED STATES SAVINGS BONDS, SERIES EE Book-Entry Series EE Savings Bonds § 351.71 How can I find out what my book-entry Series EE savings bonds are worth? (a) Redemption values. You may access...

  7. 31 CFR 359.56 - How can I find out what my book-entry Series I savings bonds are worth?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 31 Money and Finance:Treasury 2 2011-07-01 2011-07-01 false How can I find out what my book-entry... OFFERING OF UNITED STATES SAVINGS BONDS, SERIES I Book-Entry Series I Savings Bonds § 359.56 How can I find out what my book-entry Series I savings bonds are worth? (a) Redemption values. You may access...

  8. A New Hybrid-Multiscale SSA Prediction of Non-Stationary Time Series

    NASA Astrophysics Data System (ADS)

    Ghanbarzadeh, Mitra; Aminghafari, Mina

    2016-02-01

    Singular spectrum analysis (SSA) is a non-parametric method used in the prediction of non-stationary time series. It has two parameters that are difficult to determine, and it is very sensitive to their values. Because SSA is a deterministic method, it does not give good results when the time series is contaminated with a high noise level or correlated noise. Therefore, we introduce a novel method to handle these problems. It is based on the prediction of non-decimated wavelet (NDW) signals by SSA and, then, the prediction of residuals by wavelet regression. The advantages of our method are the automatic determination of parameters and the fact that it takes account of the stochastic structure of the time series. As shown on simulated and real data, we obtain better results than SSA, a non-parametric wavelet regression method, and the Holt-Winters method.
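Basic SSA itself (embedding, SVD, grouping of leading components, diagonal averaging) can be sketched in a few lines. The window length and rank below are illustrative choices, and the NDW and wavelet-regression stages of the proposed hybrid are not shown:

```python
import numpy as np

def ssa_reconstruct(series, window, rank):
    """Reconstruct a series from the `rank` leading SSA components:
    embed in a Hankel trajectory matrix, truncate the SVD, then map
    back to a series by anti-diagonal averaging."""
    s = np.asarray(series, dtype=float)
    n, k = len(s), len(s) - window + 1
    X = np.stack([s[i:i + window] for i in range(k)]).T    # window x k
    U, sv, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :rank] * sv[:rank]) @ Vt[:rank]
    out, counts = np.zeros(n), np.zeros(n)
    for j in range(k):                  # anti-diagonal averaging
        out[j:j + window] += Xr[:, j]
        counts[j:j + window] += 1.0
    return out / counts

t = np.linspace(0, 4 * np.pi, 200)
rng = np.random.default_rng(5)
noisy = np.sin(t) + 0.3 * rng.standard_normal(200)
clean = ssa_reconstruct(noisy, window=40, rank=2)   # a sinusoid is rank 2
print(round(float(np.sqrt(np.mean((clean - np.sin(t)) ** 2))), 3))
```

The two parameters the abstract calls difficult to determine are exactly `window` and the number of retained components `rank`; here they are hand-picked for a known sinusoid.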

  9. Laser hazards and safety in the military environment. Lecture series

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The Lecture Series is intended to provide an understanding of the safety problems associated with the military use of lasers. The most important hazard is the inadvertent irradiation of the eye, and so the series will include contributions from the physical and biological sciences, as well as from ophthalmologists. Those involved with laser safety come from many backgrounds -- from physics to engineering and from vision physiology to clinical ophthalmology -- and it is essential that each understands the contribution of the other. The lectures include an introductory part and, from this, the more advanced aspects of each subject are covered, leading to the issues involved in the design of safety codes and the control of laser hazards. The final session deals with medical surveillance of laser personnel. The Series is of value to both military and civilian personnel involved with safety, whether they are concerned with land, sea or airborne laser systems. (GRA)

  10. Legends Lecture Series

    NASA Image and Video Library

    2010-11-09

    John C. Stennis Space Center Director Patrick Scheuermann (second from right) stands with Legends Lecture Series presenters George Hopson (l to r), Jerry Hlass and J.R. Thompson. The three former leaders reflected on their experiences in the first of several planned lecture series sessions on Nov. 9, 2010. The lecture series is part of a yearlong celebration of the 50th anniversary of Stennis.

  11. Research tools: ethylene preparation. In: Chi-Kuang Wen editor. Ethylene in plants. Springer Netherlands. Springer Link

    USDA-ARS?s Scientific Manuscript database

    Ethylene is a plant hormone that regulates many aspects of plant growth and development, germination, fruit ripening, senescence, sex determination, abscission, defense, gravitropism, epinasty, and more. For experimental purposes, one needs to treat plant material with ethylene and its inhibitors t...

  12. Value for money assessment for public-private partnerships : a primer.

    DOT National Transportation Integrated Search

    2015-01-01

    This primer addresses Value for Money Assessment for public-private partnerships (P3s). Companion primers on Financial Assessment and Risk Assessment for P3s are also available as part of this series of primers.

  13. Inflation: Causes and Cures. Series on Public Issues No. 9.

    ERIC Educational Resources Information Center

    Saving, Thomas R.

    This booklet, one of a series intended to apply economic principles to major social and political issues of the day, focuses on the relationship between growth of the money supply, growth of productivity, and inflation. Provided first is a definition of inflation along with discussions of price indexes, the value of money, and the concept of…

  14. On the Nodal Lines of Eisenstein Series on Schottky Surfaces

    NASA Astrophysics Data System (ADS)

    Jakobson, Dmitry; Naud, Frédéric

    2017-04-01

    On convex co-compact hyperbolic surfaces X = Γ\H², we investigate the behavior of nodal curves of real-valued Eisenstein series F_λ(z, ξ), where λ is the spectral parameter and ξ the direction at infinity. Eisenstein series are (non-L²) eigenfunctions of the Laplacian Δ_X satisfying Δ_X F_λ = (1/4 + λ²)F_λ. As λ goes to infinity (the high energy limit), we show that, for generic ξ, the number of intersections of nodal lines with any compact segment of geodesic grows like λ, up to multiplicative constants. Applications to the number of nodal domains inside the convex core of the surface are then derived.

  15. Statistical significance approximation in local trend analysis of high-throughput time-series data using the theory of Markov chains.

    PubMed

    Xia, Li C; Ai, Dongmei; Cram, Jacob A; Liang, Xiaoyi; Fuhrman, Jed A; Sun, Fengzhu

    2015-09-21

    Local trend (i.e. shape) analysis of time series data reveals co-changing patterns in the dynamics of biological systems. However, slow permutation procedures to evaluate the statistical significance of local trend scores have limited its applications to high-throughput time series data analysis, e.g., data from next-generation sequencing studies. By extending the theories for the tail probability of the range of a sum of Markovian random variables, we propose formulae for approximating the statistical significance of local trend scores. Using simulations and real data, we show that the approximate p-value is close to that obtained using a large number of permutations (starting at time points >20 with no delay and >30 with delay of at most three time steps), in that the non-zero decimals of the p-values obtained by the approximation and the permutations are mostly the same when the approximate p-value is less than 0.05. In addition, the approximate p-value is slightly larger than that based on permutations, making hypothesis testing based on the approximate p-value conservative. The approximation enables efficient calculation of p-values for pairwise local trend analysis, making large-scale all-versus-all comparisons possible. We also propose a hybrid approach, integrating the approximation and permutations, to obtain accurate p-values for significantly associated pairs. We further demonstrate its use with the analysis of the Plymouth Marine Laboratory (PML) microbial community time series from high-throughput sequencing data and find interesting organism co-occurrence dynamic patterns. The software tool is integrated into the eLSA software package that now provides accelerated local trend and similarity analysis pipelines for time series data. The package is freely available from the eLSA website: http://bitbucket.org/charade/elsa.
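
    The permutation procedure the authors accelerate can be illustrated with a toy version: encode each series as ±1 trends, take the best contiguous run of agreement as a local trend score, and estimate significance by shuffling one series. This is a generic sketch, not the eLSA scoring function or the authors' Markov-chain approximation; all function names are ours.

```python
import random

def trend_series(x):
    """Encode a series as local trends: +1 (up), -1 (down), 0 (flat)."""
    return [(b > a) - (b < a) for a, b in zip(x[:-1], x[1:])]

def local_trend_score(u, v):
    """Highest-scoring contiguous stretch of co-varying trends
    (Kadane's maximum-subarray rule on the elementwise product)."""
    best = cur = 0
    for a, b in zip(u, v):
        cur = max(0, cur + a * b)
        best = max(best, cur)
    return best

def permutation_pvalue(x, y, n_perm=2000, seed=0):
    """p-value of the observed local trend score under shuffling of y."""
    rng = random.Random(seed)
    u, v = trend_series(x), trend_series(y)
    s0 = local_trend_score(u, v)
    hits = sum(
        local_trend_score(u, rng.sample(v, len(v))) >= s0
        for _ in range(n_perm)
    )
    return (hits + 1) / (n_perm + 1)  # add-one rule avoids zero p-values
```

    The add-one correction in the last line is the standard way to keep permutation p-values strictly positive, which the hybrid exact/approximate strategy in the abstract also relies on.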

  16. Publications - DGGS Digital Data Series | Alaska Division of

    Science.gov Websites

    Website index of DGGS Digital Data Series publications, with links to geologic communications, the Alaska Geologic Data Index (AGDI), volcanology and Alaska Volcano Observatory resources, seismic and well data, and data reports.

  17. Loop series for discrete statistical models on graphs

    NASA Astrophysics Data System (ADS)

    Chertkov, Michael; Chernyak, Vladimir Y.

    2006-06-01

    In this paper we present the derivation details, logic, and motivation for the loop calculus introduced in Chertkov and Chernyak (2006 Phys. Rev. E 73 065102(R)). Generating functions for each of the three interrelated discrete statistical models are expressed in terms of a finite series. The first term in the series corresponds to the Bethe-Peierls belief-propagation (BP) contribution; the other terms are labelled by loops on the factor graph. All loop contributions are simple rational functions of spin correlation functions calculated within the BP approach. We discuss two alternative derivations of the loop series. One approach implements a set of local auxiliary integrations over continuous fields with the BP contribution corresponding to an integrand saddle-point value. The integrals are replaced by sums in the complementary approach, briefly explained in Chertkov and Chernyak (2006 Phys. Rev. E 73 065102(R)). Local gauge symmetry transformations that clarify an important invariant feature of the BP solution are revealed in both approaches. The individual terms change under the gauge transformation while the partition function remains invariant. The requirement for all individual terms to be nonzero only for closed loops in the factor graph (as opposed to paths with loose ends) is equivalent to fixing the first term in the series to be exactly equal to the BP contribution. Further applications of the loop calculus to problems in statistical physics, computer and information sciences are discussed.

  18. Comparison of time-series registration methods in breast dynamic infrared imaging

    NASA Astrophysics Data System (ADS)

    Riyahi-Alam, S.; Agostini, V.; Molinari, F.; Knaflitz, M.

    2015-03-01

    Automated motion reduction in dynamic infrared imaging is in demand in clinical applications, since movement disarranges the time-temperature series of each pixel, originating thermal artifacts that might bias the clinical decision. All previously proposed registration methods are feature-based algorithms requiring manual intervention. The aim of this work is to optimize the registration strategy specifically for breast dynamic infrared imaging and to make it user-independent. We implemented and evaluated three 3D time-series registration methods: (1) linear affine, (2) non-linear B-spline, and (3) Demons, applied to 12 datasets of healthy breast thermal images. The results are evaluated through normalized mutual information, with average values of 0.70 ± 0.03, 0.74 ± 0.03 and 0.81 ± 0.09 (out of 1) for affine, B-spline and Demons registration, respectively, as well as through breast boundary overlap and the Jacobian determinant of the deformation field. The statistical analysis of the results showed that the symmetric diffeomorphic Demons registration method outperforms the others, with the best breast alignment and non-negative Jacobian values, which guarantee image similarity and anatomical consistency of the transformation, owing to homologous forces shortening the pixel geometric disparities across all frames. We propose Demons registration as an effective technique for time-series dynamic infrared registration, stabilizing the local temperature oscillation.
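
    As a rough illustration of the evaluation metric, normalized mutual information between two registered images can be computed from binned intensity histograms. This sketch uses one common normalization, MI divided by sqrt(H(A)·H(B)), which lies in [0, 1]; the abstract does not specify which variant the study used, and the helper names are ours.

```python
import math
from collections import Counter

def _bin(values, bins):
    """Map intensities to equal-width histogram bins."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    return [min(bins - 1, int((v - lo) / span * bins)) for v in values]

def _entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def nmi(img_a, img_b, bins=8):
    """Mutual information of two equal-size images (flat intensity lists),
    normalized by sqrt(H(A) * H(B)) so the result lies in [0, 1]."""
    n = len(img_a)
    xa, xb = _bin(img_a, bins), _bin(img_b, bins)
    pa, pb, pab = Counter(xa), Counter(xb), Counter(zip(xa, xb))
    ha = _entropy([c / n for c in pa.values()])
    hb = _entropy([c / n for c in pb.values()])
    if ha == 0.0 or hb == 0.0:
        return 0.0  # a constant image carries no mutual information
    mi = sum((c / n) * math.log2((c / n) / ((pa[i] / n) * (pb[j] / n)))
             for (i, j), c in pab.items())
    return mi / math.sqrt(ha * hb)
```

    Perfectly aligned identical images score 1.0, and the score drops as misregistration decorrelates the joint histogram, which is why the metric is a natural objective for registration quality.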

  19. Volterra series truncation and kernel estimation of nonlinear systems in the frequency domain

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Billings, S. A.

    2017-02-01

    The Volterra series model is a direct generalisation of the linear convolution integral and is capable of displaying the intrinsic features of a nonlinear system in a simple and easy-to-apply way. Nonlinear system analysis using Volterra series is normally based on the analysis of its frequency-domain kernels and a truncated description. But the estimation of Volterra kernels and the truncation of Volterra series are coupled with each other. In this paper, a novel complex-valued orthogonal least squares algorithm is developed. The new algorithm provides a powerful tool to determine which terms should be included in the Volterra series expansion and to estimate the kernels, and thus solves the two problems together. The estimated results are compared with those determined using the analytical expressions of the kernels to validate the method. To further evaluate the effectiveness of the method, the physical parameters of the system are also extracted from the measured kernels. Simulation studies demonstrate that the new approach not only can truncate the Volterra series expansion and estimate the kernels of a weakly nonlinear system, but also can indicate the applicability of the Volterra series analysis in a severely nonlinear system case.
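
    The model being truncated can be written down directly. Below is a minimal sketch of the output of a second-order discrete Volterra model, the structure whose kernels such methods estimate; it does not implement the authors' complex-valued orthogonal least squares algorithm, and the names are ours.

```python
def volterra2_output(u, h1, h2):
    """Response of a truncated second-order discrete Volterra model:
    y[n] = sum_i h1[i] u[n-i] + sum_{i,j} h2[i][j] u[n-i] u[n-j],
    with zero initial conditions (u[k] = 0 for k < 0)."""
    m = len(h1)
    y = []
    for n in range(len(u)):
        past = [u[n - i] if n - i >= 0 else 0.0 for i in range(m)]
        lin = sum(h1[i] * past[i] for i in range(m))
        quad = sum(h2[i][j] * past[i] * past[j]
                   for i in range(m) for j in range(m))
        y.append(lin + quad)
    return y
```

    Setting the second-order kernel h2 to zero reduces the model to an ordinary FIR convolution, which is the sense in which the Volterra series generalises the linear convolution integral.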

  20. Intrathoracic pressure regulation during cardiopulmonary resuscitation: a feasibility case-series.

    PubMed

    Segal, Nicolas; Parquette, Brent; Ziehr, Jonathon; Yannopoulos, Demetris; Lindstrom, David

    2013-04-01

    Intrathoracic pressure regulation (IPR) is a novel, noninvasive therapy intended to increase cardiac output and blood pressure in hypotensive states by generating a negative end expiratory pressure of -12 cm H2O between positive pressure ventilations. In this first feasibility case-series, we tested the hypothesis that IPR improves end-tidal (ET) CO2 during cardiopulmonary resuscitation (CPR). ETCO2 was used as a surrogate measure for circulation. All patients were treated initially with manual CPR and an impedance threshold device (ITD). When IPR-trained medics arrived on scene, the ITD was removed and an IPR device (CirQLATOR™) was attached to the patient's advanced airway (intervention group). The IPR device lowered airway pressures to -9 mmHg after each positive pressure ventilation for the duration of the expiratory phase. ETCO2 was measured using a capnometer incorporated into the defibrillator system (LifePak™). Values are expressed as mean ± SEM. Results were compared using paired and unpaired Student's t test; p values of <0.05 were considered statistically significant. ETCO2 values in the 11 patients in the case series were compared before and during IPR therapy, and also compared to 74 patients in the control group not treated with the new IPR device. ETCO2 values increased from an average of 21 ± 1 mmHg immediately before IPR application to an average value of 32 ± 5 mmHg and to a maximum value of 45 ± 5 mmHg during IPR treatment (p<0.001). In the control group ETCO2 values did not change significantly. Return of spontaneous circulation (ROSC) rates were 46% (34/74) with standard CPR and the ITD versus 73% (8/11) with standard CPR and the IPR device (p<0.001). ETCO2 levels and ROSC rates were significantly higher in the study intervention group. These findings demonstrate that during CPR circulation may be significantly augmented by generation of a negative end expiratory pressure between each breath. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  1. Values in an American Government Textbook. Three Appraisals.

    ERIC Educational Resources Information Center

    Novak, Michael; And Others

    These critiques of a high school American government textbook, "American Government in Action," (Resnik and Nerenberg, 1973) represent the first in a series of studies designed to assess the effectiveness of social science textbooks in communicating and reinforcing Western values. The critiques are followed by a response by the authors of the…

  2. GPS Position Time Series @ JPL

    NASA Technical Reports Server (NTRS)

    Owen, Susan; Moore, Angelyn; Kedar, Sharon; Liu, Zhen; Webb, Frank; Heflin, Mike; Desai, Shailen

    2013-01-01

    Different flavors of GPS time series analysis at JPL - all use the same GPS Precise Point Positioning Analysis raw time series; variations in time series analysis/post-processing are driven by different users.
    • JPL Global Time Series/Velocities - researchers studying the reference frame, combining with VLBI/SLR/DORIS.
    • JPL/SOPAC Combined Time Series/Velocities - crustal deformation for tectonic, volcanic, and ground water studies.
    • ARIA Time Series/Coseismic Data Products - hazard monitoring and response focused.
    • The ARIA data system is designed to integrate GPS and InSAR - GPS tropospheric delay is used for correcting InSAR; Caltech's GIANT time series analysis uses GPS to correct orbital errors in InSAR.
    Zhen Liu is talking tomorrow on InSAR time series analysis.

  3. Sample entropy applied to the analysis of synthetic time series and tachograms

    NASA Astrophysics Data System (ADS)

    Muñoz-Diosdado, A.; Gálvez-Coyt, G. G.; Solís-Montufar, E.

    2017-01-01

    Entropy is a non-linear analysis method that estimates the irregularity of a system; however, there are different types of computational entropy, which we considered and tested in order to obtain one that gives an index of signal complexity while taking into account the length of the analysed time series, the computational resources demanded by the method, and the accuracy of the calculation. An algorithm for the generation of fractal time series with a given spectral exponent β was used for the characterization of the different entropy algorithms. We obtained a significant variation with series length for most of the algorithms, which could be counterproductive for the study of real signals of different lengths. The chosen method was sample entropy, which shows great independence of series length. With this method, time series of heart interbeat intervals, or tachograms, of healthy subjects and patients with congestive heart failure were analysed. Sample entropy was calculated for 24-hour tachograms and for 6-hour subseries corresponding to sleep and wakefulness. The comparison between the two populations shows a significant difference that is accentuated when the patient is asleep.
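
    Sample entropy itself is compact enough to sketch. This is the textbook SampEn(m, r) definition with the common r = 0.2·std default and a naive O(n²) pair count; it is an illustration, not the authors' implementation.

```python
import math

def _cheb(a, b):
    """Chebyshev (maximum) distance between two templates."""
    return max(abs(p - q) for p, q in zip(a, b))

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r) = -ln(A / B): B counts pairs of length-m templates
    within tolerance r (self-matches excluded), A counts the same pairs
    still matching when extended to length m + 1. r defaults to
    0.2 * std(x), a common choice."""
    n = len(x)
    if r is None:
        mu = sum(x) / n
        r = 0.2 * math.sqrt(sum((v - mu) ** 2 for v in x) / n)
    a = b = 0
    for i in range(n - m):
        for j in range(i + 1, n - m):
            if _cheb(x[i:i + m], x[j:j + m]) <= r:
                b += 1                      # length-m templates match
                if abs(x[i + m] - x[j + m]) <= r:
                    a += 1                  # extended templates match too
    return float('inf') if a == 0 else math.log(b / a)
```

    A perfectly periodic series yields SampEn = 0 (every matching template keeps matching when extended), while irregular series such as chaotic-map or interbeat data yield positive values, which is the regularity contrast the study exploits.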

  4. 78 FR 69885 - AIM Growth Series (Invesco Growth Series), et al.; Notice of Application

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-21

    ... designed to ensure that Sub-Advisors comply with a Subadvised Series' investment objective, policies and...] AIM Growth Series (Invesco Growth Series), et al.; Notice of Application November 15, 2013. AGENCY... approval and would grant relief from certain disclosure requirements. APPLICANTS: AIM Growth Series...

  5. Ecological values of shallow-water habitats: Implications for the restoration of disturbed ecosystems

    USGS Publications Warehouse

    Lopez, C.B.; Cloern, J.E.; Schraga, T.S.; Little, A.J.; Lucas, L.V.; Thompson, J.K.; Burau, J.R.

    2006-01-01

    A presumed value of shallow habitats, enhanced pelagic productivity, derives from the principle that in nutrient-rich aquatic systems phytoplankton growth rate is controlled by light availability, which varies inversely with habitat depth. We measured a set of biological indicators across the gradient of habitat depth within the Sacramento-San Joaquin River Delta (California) to test the hypothesis that plankton biomass, production, and pelagic energy flow also vary systematically with habitat depth. Results showed that phytoplankton biomass and production were only weakly related to phytoplankton growth rates whereas other processes (transport, consumption) were important controls. Distribution of the invasive clam Corbicula fluminea was patchy, and heavily colonized habitats all supported low phytoplankton biomass and production and functioned as food sinks. Surplus primary production in shallow, uncolonized habitats provided potential subsidies to neighboring recipient habitats. Zooplankton in deeper habitats, where grazing exceeded phytoplankton production, were likely supported by significant fluxes of phytoplankton biomass from connected donor habitats. Our results provide three important lessons for ecosystem science: (a) in the absence of process measurements, derived indices provide valuable information to improve our mechanistic understanding of ecosystem function and to benefit adaptive management strategies; (b) the benefits of some ecosystem functions are displaced by water movements, so the value of individual habitat types can only be revealed through a regional perspective that includes connectedness among habitats; and (c) invasive species can act as overriding controls of habitat function, adding to the uncertainty of management outcomes. © 2006 Springer Science+Business Media, Inc.

  6. [The value of lidocaine through different routes of administration in the treatment of tinnitus: a Meta-analysis].

    PubMed

    Li, Hui; Li, Ming; Zhang, Jianning; Li, Xiangcui; Tan, Junying; Ji, Bobo

    2016-01-01

    To evaluate the clinical value of lidocaine in the treatment of tinnitus through three routes of administration (intravenous, intratympanic and acupoint injection) by analyzing the literature. Articles were collected through Hownet, Wanfang, VIP, PubMed, SciVerse ScienceDirect, Springer and OVID, etc. The articles were strictly evaluated based on their quality, and a Meta-analysis was performed to evaluate the outcomes using RevMan 5.2 software. A total of 16 articles with 1203 patients were enrolled in the analysis. Their tinnitus history ranged from 7 hours to 20 years. Assessment methods included tinnitus loudness levels, severity scales and subjective feelings. None of the articles reported how long the effect was maintained, stating only "short-term", "short" and so on. A total of 133 cases received intravenous injection, with an effective rate of 73.4% (98 cases); 50 cases and 332 cases received intratympanic and acupoint injection, with effective rates of 74.0% and 87.7%, respectively. The effective rate ranged from 42.4% to 58.3% in the control groups. Meta-analysis results indicate that all three routes of lidocaine administration are more effective than conventional methods (P < 0.05). Different routes of lidocaine administration have a good but short-lived effect on tinnitus control. Lidocaine can effectively reduce the time to tinnitus habituation as a complementary treatment, but its value still needs further evaluation.

  7. Detection of Undocumented Changepoints Using Multiple Test Statistics and Composite Reference Series.

    NASA Astrophysics Data System (ADS)

    Menne, Matthew J.; Williams, Claude N., Jr.

    2005-10-01

    An evaluation of three hypothesis test statistics that are commonly used in the detection of undocumented changepoints is described. The goal of the evaluation was to determine whether the use of multiple tests could improve undocumented, artificial changepoint detection skill in climate series. The use of successive hypothesis testing is compared to optimal approaches, both of which are designed for situations in which multiple undocumented changepoints may be present. In addition, the importance of the form of the composite climate reference series is evaluated, particularly with regard to the impact of undocumented changepoints in the various component series that are used to calculate the composite. In a comparison of single test changepoint detection skill, the composite reference series formulation is shown to be less important than the choice of the hypothesis test statistic, provided that the composite is calculated from the serially complete and homogeneous component series. However, each of the evaluated composite series is not equally susceptible to the presence of changepoints in its components, which may be erroneously attributed to the target series. Moreover, a reference formulation that is based on the averaging of the first-difference component series is susceptible to random walks when the composition of the component series changes through time (e.g., values are missing), and its use is, therefore, not recommended. When more than one test is required to reject the null hypothesis of no changepoint, the number of detected changepoints is reduced proportionately less than the number of false alarms in a wide variety of Monte Carlo simulations. Consequently, a consensus of hypothesis tests appears to improve undocumented changepoint detection skill, especially when reference series homogeneity is violated. A consensus of successive hypothesis tests using a semihierarchic splitting algorithm also compares favorably to optimal solutions, even when
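
    A minimal version of one such single-changepoint test, assuming a mean-shift model: scan every admissible split of a (target minus reference) difference series and keep the largest two-sample t statistic. This is a generic sketch in the spirit of the maximal-t tests the abstract compares; the authors' successive and semihierarchic procedures, and the special critical values needed because the split point is searched, are not reproduced, and the names are ours.

```python
import math

def shift_t_stat(x, k):
    """Two-sample t statistic for a mean shift between x[:k] and x[k:]."""
    a, b = x[:k], x[k:]
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    ssq = sum((v - ma) ** 2 for v in a) + sum((v - mb) ** 2 for v in b)
    sp = math.sqrt(ssq / (na + nb - 2) * (1 / na + 1 / nb))
    return abs(ma - mb) / sp if sp > 0 else float('inf')

def most_likely_changepoint(x, min_seg=5):
    """Scan admissible splits; return (k, T) maximizing the shift statistic.
    Because the maximum is taken over all positions, T must be compared
    with search-adjusted critical values, not ordinary t quantiles."""
    return max(((k, shift_t_stat(x, k))
                for k in range(min_seg, len(x) - min_seg + 1)),
               key=lambda kv: kv[1])
```

    Requiring agreement among several such statistics before declaring a changepoint is the "consensus" idea evaluated in the study.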

  8. Effect of noise and filtering on largest Lyapunov exponent of time series associated with human walking.

    PubMed

    Mehdizadeh, Sina; Sanjari, Mohammad Ali

    2017-11-07

    This study aimed to determine the effect of added noise, filtering and time series length on the largest Lyapunov exponent (LyE) value calculated for time series obtained from a passive dynamic walker. The simplest passive dynamic walker model, comprising two massless legs connected by a frictionless hinge joint at the hip, was adopted to generate walking time series. The generated time series was used to construct a state space with an embedding dimension of 3 and a time delay of 100 samples. The LyE was calculated as the exponential rate of divergence of neighboring trajectories of the state space using Rosenstein's algorithm. To determine the effect of noise on LyE values, seven levels of Gaussian white noise (SNR = 55-25 dB in 5 dB steps) were added to the time series. In addition, filtering was performed using a range of cutoff frequencies from 3 Hz to 19 Hz in 2 Hz steps. The LyE was calculated for both noise-free and noisy time series with different lengths of 6, 50, 100 and 150 strides. Results demonstrated a high percentage error in LyE in the presence of noise. These observations suggest that Rosenstein's algorithm might not perform well in the presence of added experimental noise. Furthermore, findings indicated that at least 50 walking strides are required when calculating LyE to account for the effect of noise. Finally, the observations support that conservative filtering of the time series with a high cutoff frequency might be more appropriate prior to calculating LyE. Copyright © 2017 Elsevier Ltd. All rights reserved.
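
    A simplified sketch of the Rosenstein-style procedure the abstract refers to, assuming a scalar series, Euclidean distances, and a short divergence horizon; the study's parameters (embedding dimension 3, delay 100 samples) and implementation details are not reproduced, and the names are ours.

```python
import math

def embed(x, dim=3, tau=1):
    """Time-delay embedding of a scalar series into dim-dimensional points."""
    n = len(x) - (dim - 1) * tau
    return [tuple(x[i + k * tau] for k in range(dim)) for i in range(n)]

def _dist(a, b):
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def lyapunov_rosenstein(x, dim=3, tau=1, theiler=10, horizon=8):
    """Rosenstein-style largest Lyapunov exponent: pair each embedded point
    with its nearest neighbour outside a Theiler window, average the log of
    their separation as both evolve, and fit a line; the slope (per sample)
    estimates the LyE."""
    pts = embed(x, dim, tau)
    n = len(pts)
    pairs = []
    for i in range(n - horizon):
        best, bj = float('inf'), -1
        for j in range(n - horizon):
            if abs(i - j) > theiler:
                d = _dist(pts[i], pts[j])
                if 0.0 < d < best:
                    best, bj = d, j
        if bj >= 0:
            pairs.append((i, bj))
    mean_log = []
    for k in range(horizon + 1):
        logs = [math.log(_dist(pts[i + k], pts[j + k]))
                for i, j in pairs if _dist(pts[i + k], pts[j + k]) > 0]
        mean_log.append(sum(logs) / len(logs))
    ks = range(horizon + 1)
    mk = sum(ks) / len(ks)
    ml = sum(mean_log) / len(mean_log)
    return (sum((k - mk) * (l - ml) for k, l in zip(ks, mean_log))
            / sum((k - mk) ** 2 for k in ks))
```

    The nearest-neighbour step is exactly where measurement noise hurts: noise inflates the initial separations, flattening the early part of the divergence curve and biasing the fitted slope, which is consistent with the study's findings.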

  9. Long-range fluctuations and multifractality in connectivity density time series of a wind speed monitoring network

    NASA Astrophysics Data System (ADS)

    Laib, Mohamed; Telesca, Luciano; Kanevski, Mikhail

    2018-03-01

    This paper studies the daily connectivity time series of a wind speed-monitoring network using multifractal detrended fluctuation analysis. It investigates the long-range fluctuation and multifractality in the residuals of the connectivity time series. Our findings reveal that the daily connectivity of the correlation-based network is persistent for any correlation threshold. Further, the multifractality degree is higher for larger absolute values of the correlation threshold.
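
    The monofractal backbone of the analysis can be sketched briefly: ordinary detrended fluctuation analysis, the q = 2 member of the multifractal DFA family used in the paper. This is a generic illustration with our own names, not the authors' MFDFA pipeline.

```python
import math

def dfa_fluctuations(x, scales=(8, 16, 32, 64)):
    """Ordinary DFA: integrate the mean-removed series, split the profile
    into windows of size s, remove a linear trend from each window, and
    return the RMS fluctuation F(s) for every scale s."""
    mean = sum(x) / len(x)
    y, acc = [], 0.0
    for v in x:
        acc += v - mean
        y.append(acc)                       # integrated profile
    out = {}
    for s in scales:
        nseg = len(y) // s
        t_mean = (s - 1) / 2
        denom = sum((t - t_mean) ** 2 for t in range(s))
        ssq = 0.0
        for seg in range(nseg):
            w = y[seg * s:(seg + 1) * s]
            w_mean = sum(w) / s
            slope = sum((t - t_mean) * (w[t] - w_mean) for t in range(s)) / denom
            ssq += sum((w[t] - w_mean - slope * (t - t_mean)) ** 2
                       for t in range(s))
        out[s] = math.sqrt(ssq / (nseg * s))
    return out

def dfa_alpha(x, scales=(8, 16, 32, 64)):
    """Scaling exponent: least-squares slope of log F(s) versus log s.
    Values above 0.5 indicate persistence, below 0.5 anti-persistence."""
    f = dfa_fluctuations(x, scales)
    ls = [math.log(s) for s in scales]
    lf = [math.log(f[s]) for s in scales]
    lm, fm = sum(ls) / len(ls), sum(lf) / len(lf)
    return (sum((a - lm) * (b - fm) for a, b in zip(ls, lf))
            / sum((a - lm) ** 2 for a in ls))
```

    The multifractal version generalises the RMS average over windows to a q-th-order average, so that a spread of exponents across q (the multifractality degree studied in the paper) signals heterogeneous scaling.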

  10. Mobile Visualization and Analysis Tools for Spatial Time-Series Data

    NASA Astrophysics Data System (ADS)

    Eberle, J.; Hüttich, C.; Schmullius, C.

    2013-12-01

    The Siberian Earth System Science Cluster (SIB-ESS-C) provides access and analysis services for spatial time-series data built on products from the Moderate Resolution Imaging Spectroradiometer (MODIS) and climate data from meteorological stations. A web portal for data access, visualization and analysis with standard-compliant web services has already been developed for SIB-ESS-C. As a further enhancement, a mobile app was developed to provide easy access to these time-series data during field campaigns. The app sends the current position from the GPS receiver and a specific dataset (such as land surface temperature or vegetation indices) selected by the user to our SIB-ESS-C web service and receives the requested time-series data for the identified pixel in real time. The data is then plotted directly in the app. Furthermore, the user can analyze the time-series data for breakpoints and other phenological values. This processing is executed on demand on our SIB-ESS-C web server and the results are transferred to the app. Any processing can also be done in the SIB-ESS-C web portal. The aim of this work is to make spatial time-series data and analysis functions available to end users without the need for data processing. In this presentation the author gives an overview of this new mobile app, its functionalities, the technical infrastructure, and technological issues (how the app was developed and the experience gained).

  11. Highly comparative time-series analysis: the empirical structure of time series and their methods.

    PubMed

    Fulcher, Ben D; Little, Max A; Jones, Nick S

    2013-06-06

    The process of collecting and organizing sets of observations represents a common theme throughout the history of science. However, despite the ubiquity of scientists measuring, recording and analysing the dynamics of different processes, an extensive organization of scientific time-series data and analysis methods has never been performed. Addressing this, annotated collections of over 35 000 real-world and model-generated time series, and over 9000 time-series analysis algorithms are analysed in this work. We introduce reduced representations of both time series, in terms of their properties measured by diverse scientific methods, and of time-series analysis methods, in terms of their behaviour on empirical time series, and use them to organize these interdisciplinary resources. This new approach to comparing across diverse scientific data and methods allows us to organize time-series datasets automatically according to their properties, retrieve alternatives to particular analysis methods developed in other scientific disciplines and automate the selection of useful methods for time-series classification and regression tasks. The broad scientific utility of these tools is demonstrated on datasets of electroencephalograms, self-affine time series, heartbeat intervals, speech signals and others, in each case contributing novel analysis techniques to the existing literature. Highly comparative techniques that compare across an interdisciplinary literature can thus be used to guide more focused research in time-series analysis for applications across the scientific disciplines.
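
    A toy "reduced representation" in the paper's spirit: summarize a series by a short vector of interpretable features. The real work draws on thousands of features; these four are our own minimal stand-ins, chosen only to make the idea concrete.

```python
def feature_vector(x):
    """A tiny feature-based representation of a time series: mean, variance,
    lag-1 autocorrelation, and the proportion of successive increases."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    ac1 = (sum((x[i] - mean) * (x[i + 1] - mean) for i in range(n - 1))
           / (var * n)) if var else 0.0
    up = sum(1 for a, b in zip(x[:-1], x[1:]) if b > a) / (n - 1)
    return [mean, var, ac1, up]
```

    Once every series is mapped to such a vector, ordinary clustering or nearest-neighbour search over the vectors organizes the datasets and methods, which is the mechanism behind the automated retrieval and method selection described above.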

  12. Highly comparative time-series analysis: the empirical structure of time series and their methods

    PubMed Central

    Fulcher, Ben D.; Little, Max A.; Jones, Nick S.

    2013-01-01

    The process of collecting and organizing sets of observations represents a common theme throughout the history of science. However, despite the ubiquity of scientists measuring, recording and analysing the dynamics of different processes, an extensive organization of scientific time-series data and analysis methods has never been performed. Addressing this, annotated collections of over 35 000 real-world and model-generated time series, and over 9000 time-series analysis algorithms are analysed in this work. We introduce reduced representations of both time series, in terms of their properties measured by diverse scientific methods, and of time-series analysis methods, in terms of their behaviour on empirical time series, and use them to organize these interdisciplinary resources. This new approach to comparing across diverse scientific data and methods allows us to organize time-series datasets automatically according to their properties, retrieve alternatives to particular analysis methods developed in other scientific disciplines and automate the selection of useful methods for time-series classification and regression tasks. The broad scientific utility of these tools is demonstrated on datasets of electroencephalograms, self-affine time series, heartbeat intervals, speech signals and others, in each case contributing novel analysis techniques to the existing literature. Highly comparative techniques that compare across an interdisciplinary literature can thus be used to guide more focused research in time-series analysis for applications across the scientific disciplines. PMID:23554344

  13. Tissue classification using depth-dependent ultrasound time series analysis: in-vitro animal study

    NASA Astrophysics Data System (ADS)

    Imani, Farhad; Daoud, Mohammad; Moradi, Mehdi; Abolmaesumi, Purang; Mousavi, Parvin

    2011-03-01

    Time series analysis of ultrasound radio-frequency (RF) signals has been shown to be an effective tissue classification method. Previous studies of this method for tissue differentiation at high and clinical frequencies have been reported. In this paper, analysis of RF time series is extended to improve tissue classification at clinical frequencies by including novel features extracted from the time series spectrum. The primary feature examined is the mean central frequency (MCF) computed for regions of interest (ROIs) in the tissue extending along the axial axis of the transducer. In addition, the intercept and slope of a line fitted to the MCF values of the RF time series as a function of depth have been included. To evaluate the accuracy of the new features, an in vitro animal study was performed using three tissue types: bovine muscle, bovine liver, and chicken breast, where perfect two-way classification is achieved. The results show statistically significant improvements over the classification accuracies with previously reported features.
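
    The central feature can be sketched directly: a mean central frequency computed from a naive DFT power spectrum. This assumes a plain unwindowed spectrum with frequency in cycles/sample; the study's RF preprocessing, ROI handling, and depth-wise line fit are not reproduced, and the names are ours.

```python
import math

def power_spectrum(x):
    """Naive DFT power spectrum (positive frequencies, cycles/sample)."""
    n = len(x)
    spec = []
    for k in range(1, n // 2 + 1):
        re = sum(v * math.cos(2 * math.pi * k * t / n) for t, v in enumerate(x))
        im = -sum(v * math.sin(2 * math.pi * k * t / n) for t, v in enumerate(x))
        spec.append((k / n, re * re + im * im))
    return spec

def mean_central_frequency(x):
    """Power-weighted mean frequency of the series spectrum."""
    spec = power_spectrum(x)
    total = sum(p for _, p in spec)
    return sum(f * p for f, p in spec) / total
```

    In the study's setting, an MCF value would be computed per ROI at each depth, and a line fitted to MCF versus depth would supply the slope and intercept features.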

  14. An improvement of the measurement of time series irreversibility with visibility graph approach

    NASA Astrophysics Data System (ADS)

    Wu, Zhenyu; Shang, Pengjian; Xiong, Hui

    2018-07-01

    We propose a method to improve the measurement of real-valued time series irreversibility which combines two tools: the directed horizontal visibility graph and the Kullback-Leibler divergence. The degree of time irreversibility is estimated by the Kullback-Leibler divergence between the in- and out-degree distributions of the associated visibility graph. In our work, we reframe the in- and out-degree distributions by encoding them with the different embedding dimensions used in calculating permutation entropy (PE). With this improved method, we can not only estimate time series irreversibility efficiently, but also detect it across multiple dimensions. We verify the validity of our method and then estimate the amount of time irreversibility of series generated by chaotic maps as well as global stock markets over the period 2005-2015. The results show that the amount of time irreversibility peaks at embedding dimension d = 3, both in the map experiments and in the financial markets.
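
    A minimal sketch of the underlying measure, before any permutation-entropy-style encoding: build the directed horizontal visibility graph and compare its out- and in-degree distributions with a KL divergence. Restricting the divergence to the shared support is our practical concession (the exact divergence is infinite when a degree occurs on one side only), and the names are ours.

```python
import math
from collections import Counter

def hvg_degrees(x):
    """Directed horizontal visibility graph: draw an arc i -> j (i < j)
    whenever every value strictly between them is lower than both x[i]
    and x[j]. Returns the out- and in-degree sequences."""
    n = len(x)
    kout, kin = [0] * n, [0] * n
    for i in range(n):
        inter = float('-inf')       # running max of values between i and j
        for j in range(i + 1, n):
            if inter < x[i] and inter < x[j]:
                kout[i] += 1
                kin[j] += 1
            inter = max(inter, x[j])
            if inter >= x[i]:       # x[i] is blocked from here on
                break
    return kout, kin

def kld_irreversibility(x):
    """KL divergence between the out- and in-degree distributions,
    restricted to their shared support."""
    kout, kin = hvg_degrees(x)
    n = len(x)
    pout, pin = Counter(kout), Counter(kin)
    return sum((c / n) * math.log((c / n) / (pin[k] / n))
               for k, c in pout.items() if pin[k] > 0)
```

    Reversing a series swaps the roles of out- and in-degrees, so a statistically reversible process yields near-identical distributions and a divergence near zero, while irreversible (e.g. dissipative chaotic) dynamics yield a positive value.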

  15. Extreme events in total ozone over the northern mid-latitudes: A case study based on long-term data sets from 5 ground-based stations

    NASA Astrophysics Data System (ADS)

    Rieder, Harald E.; Jancso, Leonhardt M.; Staehelin, Johannes; Maeder, Jörg A.; Ribatet, Mathieu; Peter, Thomas; Davison, Anthony C.

    2010-05-01

    (factor of 2.5), and Hohenpeissenberg and Belsk (both about a factor of 2). In general the reduction of trend is strongest during winter and spring. Throughout all stations the influence of ELOs on observed trends is larger than that of EHOs. Especially from the 1990s on, ELOs dominate the picture, as only a relatively small fraction of EHOs can be observed in the records (due to the strong influence of the Mt. Pinatubo eruption and polar vortex ozone loss contributions). Additionally it is shown that the number of observed mini-holes can be estimated with high accuracy by the GPD model. Overall the results of this thesis show that extreme events play a major role in total ozone and the "ozone extremes concept" provides deeper insight into the influence of chemical and physical features on column ozone. References: Coles, S.: An Introduction to Statistical Modeling of Extreme Values, Springer Series in Statistics, ISBN:1852334592, Springer, Berlin, 2001. Ribatet, M.: POT: Modelling peaks over a threshold, R News, 7, 34-36, 2007. Rieder, H.E., Staehelin, J., Maeder, J.A., Ribatet, M., Stübi, R., Weihs, P., Holawe, F., Peter, T., and Davison, A.D. (2010): Extreme events in total ozone over Arosa - Part I: Application of extreme value theory, to be submitted to ACPD. Rieder, H.E., Staehelin, J., Maeder, J.A., Ribatet, M., Stübi, R., Weihs, P., Holawe, F., Peter, T., and Davison, A.D. (2010): Extreme events in total ozone over Arosa - Part II: Fingerprints of atmospheric dynamics and chemistry and effects on mean values and long-term changes, to be submitted to ACPD.

  16. An Overview of Environmental Attitudes, Values, and Ethics: A Symposium.

    ERIC Educational Resources Information Center

    1974

    This series of symposium papers examines the phenomena of environmental attitudes, values, and ethics from a psychological, philosophical/religious, and Western religion perspective. The psychological view is examined from three standpoints: the internalist position, explaining behavior from events within the individual; the interactionist…

  17. Modelling short time series in metabolomics: a functional data analysis approach.

    PubMed

    Montana, Giovanni; Berk, Maurice; Ebbels, Tim

    2011-01-01

    Metabolomics is the study of the complement of small molecule metabolites in cells, biofluids and tissues. Many metabolomic experiments are designed to compare changes observed over time under two or more experimental conditions (e.g. a control and a drug-treated group), thus producing time course data. Models from traditional time series analysis are often unsuitable because, by design, only very few time points are available and there is a high number of missing values. We propose a functional data analysis approach for modelling short time series arising in metabolomic studies which overcomes these obstacles. Our model assumes that each observed time series is a smooth random curve, and we propose a statistical approach for inferring this curve from repeated measurements taken on the experimental units. A test statistic for detecting differences between temporal profiles associated with two experimental conditions is then presented. The methodology has been applied to NMR spectroscopy data collected in a pre-clinical toxicology study.

  18. Novel series of 1,2,4-trioxane derivatives as antimalarial agents.

    PubMed

    Rudrapal, Mithun; Chetia, Dipak; Singh, Vineeta

    2017-12-01

    Among three series of 1,2,4-trioxane derivatives, five compounds showed good in vitro antimalarial activity; three of these exhibited better activity against the resistant (RKL9) strain of P. falciparum than against the sensitive (3D7) one. The two best compounds were one from the aryl series and one from the heteroaryl series, with IC50 values of 1.24 and 1.24 µM and of 1.06 and 1.17 µM against the sensitive and resistant strains, respectively. Further, the trioxane derivatives exhibited good binding affinity for the P. falciparum cysteine protease falcipain-2 receptor (PDB id: 3BPF), with well-defined drug-like and pharmacokinetic properties based on Lipinski's rule of five together with additional physicochemical and ADMET parameters. In view of their antimalarial potential, the 1,2,4-trioxane derivative(s) reported herein may be useful as novel antimalarial lead(s) in the discovery and development of future antimalarial drug candidates acting as P. falciparum falcipain-2 inhibitors against resistant malaria.

  19. DLA Class II Alleles and Haplotypes Are Associated with Risk for and Protection from Chronic Hepatitis in the English Springer Spaniel

    PubMed Central

    Bexfield, Nicholas H.; Watson, Penny J.; Aguirre-Hernandez, Jesús; Sargan, David R.; Tiley, Laurence; Heeney, Jonathan L.; Kennedy, Lorna J.

    2012-01-01

    Chronic hepatitis (CH) is common in dogs in the United Kingdom. An increased prevalence of the disease is seen in the English Springer spaniel (ESS), and this breed suffers from a severe form, with young to middle-aged female dogs being predisposed. The disease shares histological features with those of human viral hepatitis, although the specific aetiological agent has not yet been identified. The aim of the current study was to investigate whether dog leucocyte antigen (DLA) class II alleles and haplotypes are associated with susceptibility/resistance to CH in the ESS. Sequence-based genotyping of the polymorphic exon 2 from the DLA-DRB1, -DQA1 and -DQB1 class II loci was performed in 66 ESSs with CH and 84 healthy controls. There was a significant difference in the distribution of the protective alleles DRB1*00501 (3.0% vs. 12.0%, odds ratio [OR] = 0.23, 95% confidence interval [CI] = 0.06–0.74) and DQB1*00501 (3.8% vs. 12.0%, OR = 0.29, 95% CI = 0.09–0.85) between cases and controls. The haplotype DLA-DRB1*00501/DQA1*00301/DQB1*00501 was present in 11.9% of controls and 3.0% of cases and was significantly associated with protection against disease development (OR = 0.26, 95% CI = 0.08–0.80). There was a significant difference in the distribution of the risk alleles DRB1*00601 (14.4% vs. 6.5%, OR = 2.40, 95% CI = 1.10–5.63) and DQB1*00701 (14.4% vs. 6.5%, OR = 2.40, 95% CI = 1.10–5.63) between cases and controls. A risk haplotype (DLA-DRB1*00601/DQA1*005011/DQB1*00701) was present in 14.4% of cases and 6.5% of controls and conferred an elevated risk of developing CH with an OR of 3.13 (95% CI = 1.20–8.26). These results demonstrate that DLA class II is significantly associated with risk of and protection from developing CH in ESSs. PMID:22870335

  20. Organosolv extraction of lignin from hydrolyzed almond shells and application of the delta-value theory.

    PubMed

    Quesada-Medina, Joaquín; López-Cremades, Francisco Javier; Olivares-Carrillo, Pilar

    2010-11-01

    The solubility of lignin from hydrolyzed almond (Prunus amygdalus) shells in different acetone-, ethanol- and dioxane-water mixtures and under different conditions (extraction time and temperature) was studied. The concept of the solubility parameter (delta-value) was applied to explain the effect of organic solvent concentration on lignin solubility. The organic solvent-water mixture that led to the highest lignin extraction contained 75 vol.% organic solvent for all the solvent series investigated (acetone, ethanol and dioxane). Moreover, the best lignin extraction conditions were a temperature of 210 degrees C and an extraction time of 40 min for the acetone and ethanol series, and 25 min for the dioxane series. The delta-value of the hydrolyzed almond shell lignin [14.60 (cal/cm(3))(1/2)] and those of the organic solvent-water mixtures were calculated. The experimental delignification capacity of the aqueous organic solvents clearly reflected the proximity of their delta-value to that of lignin. The hydrogen-bonding capacity of the solvent-water mixtures was also taken into account.

  1. Will Courts Shape Value-Added Methods for Teacher Evaluation? ACT Working Paper Series. WP-2014-2

    ERIC Educational Resources Information Center

    Croft, Michelle; Buddin, Richard

    2014-01-01

    As more states begin to adopt teacher evaluation systems based on value-added measures, legal challenges have been filed both seeking to limit the use of value-added measures ("Cook v. Stewart") and others seeking to require more robust evaluation systems ("Vergara v. California"). This study reviews existing teacher evaluation…

  2. Time series analysis for psychological research: examining and forecasting change

    PubMed Central

    Jebb, Andrew T.; Tay, Louis; Wang, Wei; Huang, Qiming

    2015-01-01

    Psychological research has increasingly recognized the importance of integrating temporal dynamics into its theories, and innovations in longitudinal designs and analyses have allowed such theories to be formalized and tested. However, psychological researchers may be relatively unequipped to analyze such data, given its many characteristics and the general complexities involved in longitudinal modeling. The current paper introduces time series analysis to psychological research, an analytic domain that has been essential for understanding and predicting the behavior of variables across many diverse fields. First, the characteristics of time series data are discussed. Second, different time series modeling techniques are surveyed that can address various topics of interest to psychological researchers, including describing the pattern of change in a variable, modeling seasonal effects, assessing the immediate and long-term impact of a salient event, and forecasting future values. To illustrate these methods, an illustrative example based on online job search behavior is used throughout the paper, and a software tutorial in R for these analyses is provided in the Supplementary Materials. PMID:26106341

  3. Time series analysis for psychological research: examining and forecasting change.

    PubMed

    Jebb, Andrew T; Tay, Louis; Wang, Wei; Huang, Qiming

    2015-01-01

    Psychological research has increasingly recognized the importance of integrating temporal dynamics into its theories, and innovations in longitudinal designs and analyses have allowed such theories to be formalized and tested. However, psychological researchers may be relatively unequipped to analyze such data, given its many characteristics and the general complexities involved in longitudinal modeling. The current paper introduces time series analysis to psychological research, an analytic domain that has been essential for understanding and predicting the behavior of variables across many diverse fields. First, the characteristics of time series data are discussed. Second, different time series modeling techniques are surveyed that can address various topics of interest to psychological researchers, including describing the pattern of change in a variable, modeling seasonal effects, assessing the immediate and long-term impact of a salient event, and forecasting future values. To illustrate these methods, an illustrative example based on online job search behavior is used throughout the paper, and a software tutorial in R for these analyses is provided in the Supplementary Materials.

  4. Intervention for Infants at Risk of Developing Autism: A Case Series

    ERIC Educational Resources Information Center

    Green, Jonathan; Wan, Ming Wai; Guiraud, Jeanne; Holsgrove, Samina; McNally, Janet; Slonims, Vicky; Elsabbagh, Mayada; Charman, Tony; Pickles, Andrew; Johnson, Mark

    2013-01-01

    Theory and evidence suggest the potential value of prodromal intervention for infants at risk of developing autism. We report an initial case series (n = 8) of a parent-mediated, video-aided and interaction-focused intervention with infant siblings of autistic probands, beginning at 8-10 months of age. We outline the theory and evidence base…

  5. Springer - Encyclopedia of Immigrant Health (EIH)

    EPA Science Inventory

    The following description is based on the section of the EIH that is devoted to pesticides: Exposure to chemical pesticides occurs via 3 major pathways of exposure, i.e., inhalation, ingestion (dietary and non dietary) and dermal. Health response varies among individuals and is l...

  6. 31 CFR 359.55 - How are redemption values calculated for book-entry Series I savings bonds?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 31 Money and Finance:Treasury 2 2013-07-01 2013-07-01 false How are redemption values calculated... prorated to the book-entry par investment amount for the corresponding issue and redemption dates... to $25.04; calculated value of $25.045 rounds to $25.05. [Book-entry par investment ÷ 100] × [CRV...

  7. 31 CFR 359.55 - How are redemption values calculated for book-entry Series I savings bonds?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 31 Money and Finance:Treasury 2 2012-07-01 2012-07-01 false How are redemption values calculated... prorated to the book-entry par investment amount for the corresponding issue and redemption dates... to $25.04; calculated value of $25.045 rounds to $25.05. [Book-entry par investment ÷ 100] × [CRV...

  8. 31 CFR 359.55 - How are redemption values calculated for book-entry Series I savings bonds?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 31 Money and Finance: Treasury 2 2014-07-01 2014-07-01 false How are redemption values calculated... prorated to the book-entry par investment amount for the corresponding issue and redemption dates... to $25.04; calculated value of $25.045 rounds to $25.05. [Book-entry par investment ÷ 100] × [CRV...
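    The rounding rule quoted in these CFR records (a calculated value of 25.044 rounds to $25.04; 25.045 rounds to $25.05) is half-up rounding of the prorated value [book-entry par investment ÷ 100] × [CRV], which binary floating point handles poorly. A sketch using decimal arithmetic (illustrative only, not Treasury's implementation):

```python
from decimal import Decimal, ROUND_HALF_UP

def redemption_value(par_investment, crv):
    """Prorate the $100 CRV (current redemption value) to the book-entry par
    investment and round half-up to the nearest cent."""
    value = Decimal(str(par_investment)) / Decimal("100") * Decimal(str(crv))
    return value.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
```

`Decimal` with `ROUND_HALF_UP` reproduces the regulation's examples exactly, whereas Python's built-in `round()` uses banker's rounding and would get the 25.045 case wrong.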

  9. Modeling pollen time series using seasonal-trend decomposition procedure based on LOESS smoothing

    NASA Astrophysics Data System (ADS)

    Rojo, Jesús; Rivero, Rosario; Romero-Morte, Jorge; Fernández-González, Federico; Pérez-Badia, Rosa

    2017-02-01

    Analysis of airborne pollen concentrations provides valuable information on plant phenology and is thus a useful tool in agriculture—for predicting harvests in crops such as the olive and for deciding when to apply phytosanitary treatments—as well as in medicine and the environmental sciences. Variations in airborne pollen concentrations, moreover, are indicators of changing plant life cycles. By modeling pollen time series, we can not only identify the variables influencing pollen levels but also predict future pollen concentrations. In this study, airborne pollen time series were modeled using a seasonal-trend decomposition procedure based on LOcally wEighted Scatterplot Smoothing (LOESS), known as STL. The data series—daily Poaceae pollen concentrations over the period 2006-2014—was broken up into seasonal and residual (stochastic) components. The seasonal component was compared with data on Poaceae flowering phenology obtained by field sampling. Residuals were fitted to a model generated from daily temperature and rainfall values, and daily pollen concentrations, using partial least squares regression (PLSR). This method was then applied to predict daily pollen concentrations for 2014 (independent validation data) using results for the seasonal component of the time series and estimates of the residual component for the period 2006-2013. Correlation between predicted and observed values was r = 0.79 for the pre-peak period (i.e., the period prior to the peak pollen concentration) and r = 0.63 for the post-peak period. Separate analysis of each of the components of the pollen data series enables the sources of variability to be identified more accurately than analysis of the original non-decomposed data series, and for this reason this procedure has proved to be a suitable technique for analyzing the main environmental factors influencing airborne pollen concentrations.
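    The decomposition idea behind STL can be illustrated with a much simpler additive scheme: a centered moving-average trend, a phase-averaged seasonal component, and a residual. This sketch is a simplification for illustration only (no LOESS smoothing, odd period assumed), not the STL procedure used in the study.

```python
def decompose_additive(x, period):
    """Additive decomposition: centered moving-average trend, phase-mean
    seasonal component (centered at zero), residual = series - trend - seasonal.
    Assumes an odd period; trend/residual are None at the edges."""
    n, half = len(x), period // 2
    trend = [None] * n
    for i in range(half, n - half):
        trend[i] = sum(x[i - half:i + half + 1]) / period
    phase = {}
    for i in range(half, n - half):
        phase.setdefault(i % period, []).append(x[i] - trend[i])
    means = {p: sum(v) / len(v) for p, v in phase.items()}
    offset = sum(means.values()) / len(means)        # centre seasonal at zero
    seasonal = [means[i % period] - offset for i in range(n)]
    resid = [x[i] - trend[i] - seasonal[i] if trend[i] is not None else None
             for i in range(n)]
    return trend, seasonal, resid
```

On a purely seasonal series around a constant level, the trend recovers the level, the seasonal component recovers the cycle, and the residual vanishes.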

  10. Probabilistic reasoning over seismic RMS time series: volcano monitoring through HMMs and SAX technique

    NASA Astrophysics Data System (ADS)

    Aliotta, M. A.; Cassisi, C.; Prestifilippo, M.; Cannata, A.; Montalto, P.; Patanè, D.

    2014-12-01

    During recent years, volcanic activity at Mt. Etna has often been characterized by cyclic occurrences of lava fountains. In the period between January 2011 and June 2013, 38 lava fountain episodes were observed. Automatic recognition of the volcano's states related to lava fountain episodes (Quiet, Pre-Fountaining, Fountaining, Post-Fountaining) is very useful for monitoring purposes. We discovered that such states are strongly related to the trend of the RMS (Root Mean Square) of the seismic signal recorded in the summit area. In the framework of the PON SIGMA project (Integrated Cloud-Sensor System for Advanced Multirisk Management), we modeled the system generating the sampled values, treating the RMS time series as a stochastic process generated by a Markov process, using hidden Markov models (HMMs), a powerful tool for modeling any time-varying series. HMM analysis seeks to discover the sequence of hidden states from the observed emissions. In our framework, the observed emissions are characters generated by the SAX (Symbolic Aggregate approXimation) technique, which maps RMS time series values to discrete literal emissions. Our experiments showed how to predict volcano states by means of SAX and HMMs.
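    SAX, as used in this record, z-normalizes a window, averages it into equal-length segments (piecewise aggregate approximation, PAA), and maps each segment mean to a letter via equiprobable Gaussian breakpoints. A stdlib-only sketch (illustrative; it assumes the series length divides evenly into the word length):

```python
import bisect
from statistics import NormalDist, mean, pstdev

def sax_word(x, word_len, alphabet_size=4):
    """Return the SAX word for series x: z-normalize, PAA, then symbolize
    using breakpoints that make each letter equiprobable under N(0, 1)."""
    mu, sd = mean(x), pstdev(x)
    z = [(v - mu) / sd for v in x]
    seg = len(z) // word_len                     # assumes len(x) % word_len == 0
    paa = [sum(z[i * seg:(i + 1) * seg]) / seg for i in range(word_len)]
    nd = NormalDist()
    breakpoints = [nd.inv_cdf(k / alphabet_size) for k in range(1, alphabet_size)]
    return "".join(chr(ord("a") + bisect.bisect_left(breakpoints, v)) for v in paa)
```

An HMM would then be trained on such letter sequences to infer the hidden volcano states from the observed emissions.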

  11. Excused and Unexcused--The Value of Labeling an Absence. Chronic Absenteeism in Oregon Elementary Schools. Part 4 of 4. September 2016. Research Brief Series

    ERIC Educational Resources Information Center

    Oregon Department of Education, 2016

    2016-01-01

    This four part series of research briefs summarized detailed analysis of attendance and chronic absenteeism in Oregon. Brief 1 highlighted the importance of tracking chronic absenteeism rather than average daily attendance. The second brief in this series focused on student outcomes and attendance. Research suggests, and Oregon Department of…

  12. Multifractal detrended cross-correlation analysis on gold, crude oil and foreign exchange rate time series

    NASA Astrophysics Data System (ADS)

    Pal, Mayukha; Madhusudana Rao, P.; Manimaran, P.

    2014-12-01

    We apply the recently developed multifractal detrended cross-correlation analysis method to investigate the cross-correlation behavior and fractal nature between two non-stationary time series. We analyze the daily return prices of gold, West Texas Intermediate and Brent crude oil, and foreign exchange rate data, over a period of 18 years. The cross-correlation has been measured quantitatively from the Hurst scaling exponents and the singularity spectrum. The results establish the existence of multifractal cross-correlations among all of these time series. We also found that the cross-correlation between gold and oil prices exhibits uncorrelated behavior, while the remaining bivariate time series exhibit persistent behavior. For five bivariate series, the cross-correlation exponents are less than the calculated average generalized Hurst exponents (GHE) for q<0 and greater than the GHE for q>0, while for one bivariate series the cross-correlation exponent is greater than the GHE for all q values.

  13. Forecasting malaria cases using climatic factors in delhi, India: a time series analysis.

    PubMed

    Kumar, Varun; Mangal, Abha; Panesar, Sanjeet; Yadav, Geeta; Talwar, Richa; Raut, Deepak; Singh, Saudan

    2014-01-01

    Background. Malaria remains a public health problem in developing countries, and changing environmental and climatic factors pose the biggest challenge in fighting against its scourge. The study was therefore designed to forecast malaria cases using climatic factors as predictors in Delhi, India. Methods. The monthly numbers of malaria slide positives occurring from January 2006 to December 2013 were taken from the register maintained at the malaria clinic of the Rural Health Training Centre (RHTC), Najafgarh, Delhi. Climatic data on monthly mean rainfall, relative humidity, and mean maximum temperature were taken from the Regional Meteorological Centre, Delhi. The Expert Modeler of SPSS ver. 21 was used for analyzing the time series data. Results. An autoregressive integrated moving average model, ARIMA(0,1,1)(0,1,0)12, was the best fit and could explain 72.5% of the variability in the time series data. Rainfall (P value = 0.004) and relative humidity (P value = 0.001) were found to be significant predictors of malaria transmission in the study area. The seasonal adjusted factor (SAF) for malaria cases shows a peak during the months of August and September. Conclusion. ARIMA time series models are a simple and reliable tool for producing forecasts of malaria in Delhi, India.
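    The ARIMA(0,1,1)(0,1,0)12 specification reported here first applies an ordinary difference and then a seasonal difference at lag 12 before fitting the MA(1) term. A sketch of the differencing step alone (pure Python on toy numbers, not the SPSS Expert Modeler) shows why this removes a deterministic trend and a 12-month cycle:

```python
def difference(x, lag=1):
    """Difference a series at the given lag: y[t] = x[t] - x[t - lag]."""
    return [x[i] - x[i - lag] for i in range(lag, len(x))]

# toy series with a linear trend plus a repeating 12-month seasonal cycle
season = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8]
series = [0.5 * t + season[t % 12] for t in range(48)]

# the (d=1, D=1, s=12) part of ARIMA(0,1,1)(0,1,0)12: first difference,
# then seasonal difference at lag 12; the MA(1) term is fitted to the result
w = difference(difference(series, 1), 12)
```

For this deterministic toy series the doubly differenced values are exactly zero; on real malaria counts, what remains is the stationary残 component the MA(1) term models.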

  14. The solution of the point kinetics equations via converged accelerated Taylor series (CATS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganapol, B.; Picca, P.; Previti, A.

    This paper deals with finding accurate solutions of the point kinetics equations, including non-linear feedback, in a fast, efficient and straightforward way. A truncated Taylor series is coupled to continuous analytical continuation to provide the recurrence relations for solving the ordinary differential equations of point kinetics. Non-linear (Wynn-epsilon) and linear (Romberg) convergence accelerations are employed to provide highly accurate results for the evaluation of Taylor series expansions and extrapolated values of neutron and precursor densities at desired edits. The proposed Converged Accelerated Taylor Series, or CATS, algorithm automatically performs successive mesh refinements until the desired accuracy is obtained, making use of the intermediate results for converged initial values at each interval. Numerical performance is evaluated using case studies available from the literature. Nearly perfect agreement is found with the literature results generally considered most accurate. Benchmark-quality results are reported for several cases of interest, including step, ramp, zigzag and sinusoidal prescribed insertions and insertions with adiabatic Doppler feedback. A larger than usual number of digits (9) is included to encourage honest benchmarking. The benchmark is then applied to the enhanced piecewise constant algorithm (EPCA) currently being developed by the second author.
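    The non-linear (Wynn-epsilon) acceleration mentioned above can be sketched in a few lines. This is a generic epsilon-algorithm implementation applied to a toy alternating series, not the CATS code itself:

```python
import math

def wynn_epsilon(partial_sums):
    """Wynn's epsilon algorithm: accelerated estimate of a series limit from
    its partial sums. Columns follow eps[k+1][j] = eps[k-1][j+1]
    + 1/(eps[k][j+1] - eps[k][j]); even columns approximate the limit."""
    prev = [0.0] * (len(partial_sums) + 1)   # epsilon_{-1} column (all zeros)
    curr = list(partial_sums)                # epsilon_0 column (the partial sums)
    best, k = curr[-1], 0
    while len(curr) > 1:
        nxt = [prev[j + 1] + 1.0 / (curr[j + 1] - curr[j])
               for j in range(len(curr) - 1)]
        prev, curr, k = curr, nxt, k + 1
        if k % 2 == 0:                       # even columns are the estimates
            best = curr[-1]
    return best

# partial sums of the slowly converging alternating harmonic series -> ln 2
sums, s = [], 0.0
for n in range(1, 13):
    s += (-1) ** (n + 1) / n
    sums.append(s)
estimate = wynn_epsilon(sums)
```

Twelve raw partial sums are still about 0.04 away from ln 2, while the accelerated estimate is accurate to many digits, which is the effect CATS exploits on its Taylor expansions.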

  15. Two States Mapping Based Time Series Neural Network Model for Compensation Prediction Residual Error

    NASA Astrophysics Data System (ADS)

    Jung, Insung; Koo, Lockjo; Wang, Gi-Nam

    2008-11-01

    The objective of this paper is to design a human bio-signal prediction system that reduces prediction error using a two-states-mapping-based time series neural network BP (back-propagation) model. Neural network models trained in a supervised manner with the error back-propagation algorithm are widely applied in industry for time series prediction; however, a residual error remains between the real value and the prediction result. We therefore designed a two-states neural network model that compensates for this residual error, applicable to the prevention of sudden death and of metabolic syndrome conditions such as hypertension and obesity. Most of the simulation cases were satisfied by the two-states-mapping-based time series prediction model; in particular, for small time series sample sizes it was more accurate than the standard MLP model.

  16. Geometric Series via Probability

    ERIC Educational Resources Information Center

    Tesman, Barry

    2012-01-01

    Infinite series is a challenging topic in the undergraduate mathematics curriculum for many students. In fact, there is a vast literature in mathematics education research on convergence issues. One of the most important types of infinite series is the geometric series. Their beauty lies in the fact that they can be evaluated explicitly and that…
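    One standard probabilistic evaluation of a geometric series, offered here as an illustration since the abstract is truncated: in independent Bernoulli trials with success probability p ∈ (0,1), the first success occurs at some finite trial with probability 1, so the distribution of that trial number must sum to one:

```latex
\sum_{n=1}^{\infty} p\,(1-p)^{\,n-1} = 1
\quad\Longrightarrow\quad
\sum_{n=0}^{\infty} q^{\,n} = \frac{1}{1-q},
\qquad q = 1 - p \in (0,1).
```

    Dividing the left-hand identity by p = 1 - q evaluates the series explicitly, with no limit computation of partial sums.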

  17. Emerging interdependence between stock values during financial crashes.

    PubMed

    Rocchi, Jacopo; Tsui, Enoch Yan Lok; Saad, David

    2017-01-01

    To identify emerging interdependencies between traded stocks we investigate the behavior of the stocks of FTSE 100 companies in the period 2000-2015, by looking at daily stock values. Exploiting the power of information theoretical measures to extract direct influences between multiple time series, we compute the information flow across stock values to identify several different regimes. While only small information flows are detected during most of the period, a dramatically different situation occurs in the proximity of global financial crises, where stock values exhibit strong and substantial interdependence for a prolonged period. This behavior is consistent with what one would generally expect from a complex system near criticality, showing the long lasting effects of crashes on stock markets.

  18. Emerging interdependence between stock values during financial crashes

    PubMed Central

    Tsui, Enoch Yan Lok; Saad, David

    2017-01-01

    To identify emerging interdependencies between traded stocks we investigate the behavior of the stocks of FTSE 100 companies in the period 2000-2015, by looking at daily stock values. Exploiting the power of information theoretical measures to extract direct influences between multiple time series, we compute the information flow across stock values to identify several different regimes. While only small information flows are detected during most of the period, a dramatically different situation occurs in the proximity of global financial crises, where stock values exhibit strong and substantial interdependence for a prolonged period. This behavior is consistent with what one would generally expect from a complex system near criticality, showing the long lasting effects of crashes on stock markets. PMID:28542278

  19. Measuring Nursing Value from the Electronic Health Record.

    PubMed

    Welton, John M; Harper, Ellen M

    2016-01-01

    We report the findings of a big data nursing value expert group made up of 14 members of the nursing informatics, leadership, academic and research communities within the United States, tasked with (1) defining nursing value, (2) developing a common data model and metrics for nursing care value, and (3) developing nursing business intelligence tools using the nursing value data set. This work is a component of the Big Data and Nursing Knowledge Development conference series sponsored by the University of Minnesota School of Nursing. The panel met by conference call for fourteen 1.5-hour sessions, a total of 21 hours of interaction, from August 2014 through May 2015. Primary deliverables from the big data expert group were: development and publication of definitions and metrics for nursing value; construction of a common data model to extract key data from electronic health records; and measures of nursing costs and finance to provide a basis for developing nursing business intelligence and analysis systems.

  20. Record statistics of financial time series and geometric random walks

    NASA Astrophysics Data System (ADS)

    Sabir, Behlool; Santhanam, M. S.

    2014-09-01

    The study of record statistics of correlated series in physics, such as random walks, is gaining momentum, and several analytical results have been obtained in the past few years. In this work, we study the record statistics of correlated empirical data for which random walk models have relevance. We obtain results for the record statistics of select stock market data and the geometric random walk, primarily through simulations. We show that the distribution of the age of records is a power law with the exponent α lying in the range 1.5≤α≤1.8. Further, the longest record ages follow the Fréchet distribution of extreme value theory. The record statistics of the geometric random walk series are in good agreement with those obtained from empirical stock data.
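    Counting upper records and their ages in a simulated geometric random walk, as studied in this record, is straightforward. A sketch with hypothetical drift and volatility parameters (illustrative only, not the authors' simulation):

```python
import math
import random

def record_ages(series):
    """Ages (waiting times) between successive upper records; the final entry
    is the age of the record still standing at the end of the series."""
    ages, last, high = [], 0, series[0]
    for i in range(1, len(series)):
        if series[i] > high:                 # a new upper record
            ages.append(i - last)
            last, high = i, series[i]
    ages.append(len(series) - last)
    return ages

random.seed(7)
# geometric random walk: multiplicative log-normal steps (hypothetical parameters)
price = [100.0]
for _ in range(999):
    price.append(price[-1] * math.exp(random.gauss(0.0005, 0.01)))
ages = record_ages(price)
```

By construction the ages partition the series, so they sum to its length; the abstract's claim is that their distribution follows a power law with exponent between 1.5 and 1.8.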

  1. Activation barriers for series of exothermic homologous reactions. V. Boron group diatomic species reactions

    NASA Astrophysics Data System (ADS)

    Blue, Alan S.; Belyung, David P.; Fontijn, Arthur

    1997-09-01

    Semiempirical configuration interaction (SECI) theory is used to predict activation barriers E, as defined by k(T) = A T^n exp(-E/RT). Previously SECI has been applied to homologous series of oxidation reactions of s^1, s^2, and s^2p^1 metal atoms. Here it is extended to oxidation reactions of diatomic molecules containing one s^2p^1 atom. E values are calculated for the reactions of BH, BF, BCl, AlF, AlCl, AlBr, GaF, GaI, InCl, InBr, InI, TlF, TlCl, TlBr, and TlI with O2, CO2, SO2, or N2O. These values correlate with the sums of the ionization potentials and Σ-Π promotion energies of the former minus the electron affinities of the latter. In the earlier work n was chosen somewhat arbitrarily, which affected the absolute values of E. Here it is shown that examination of available experimental and theoretical results allows determination of the best values of n. Using this approach yields n = 1.9 for the present series. For the seven reactions which have been studied experimentally, the average deviation of the SECI activation barrier prediction from experiment is 4.0 kJ mol^-1. Energy barriers are calculated for another 52 reactions.
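    The rate-law form used above, k(T) = A T^n exp(-E/RT), is a modified Arrhenius expression; a one-line evaluation (illustrative values of A and E, with R in J mol^-1 K^-1) makes the roles of the parameters concrete:

```python
import math

def rate_constant(T, A, n, E, R=8.314):
    """Modified Arrhenius rate constant: k(T) = A * T**n * exp(-E / (R * T))."""
    return A * T ** n * math.exp(-E / (R * T))

# with n = 1.9 as determined for this reaction series (A and E hypothetical),
# a larger activation barrier E lowers k at fixed temperature
k_barrier = rate_constant(600.0, 1.0e-12, 1.9, 4000.0)
k_no_barrier = rate_constant(600.0, 1.0e-12, 1.9, 0.0)
```

Fitting experimental k(T) data with n fixed at 1.9 is what pins down E, the barrier that SECI theory predicts.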

  2. Data visualization in interactive maps and time series

    NASA Astrophysics Data System (ADS)

    Maigne, Vanessa; Evano, Pascal; Brockmann, Patrick; Peylin, Philippe; Ciais, Philippe

    2014-05-01

    State-of-the-art data visualization has nothing to do with the plots and maps we used a few years ago. Many open-source tools are now available to provide access to scientific data and to implement accessible, interactive, and flexible web applications. Here we present a web site opened in November 2013 to create custom global and regional maps and time series from research models and datasets. For maps, we explore and access data sources from a THREDDS Data Server (TDS) with the OGC WMS protocol (using the ncWMS implementation), then create interactive maps with the OpenLayers javascript library and extra information layers from a GeoServer. Maps become dynamic, zoomable, synchronously connected to each other, and exportable to Google Earth. For time series, we extract data from a TDS with the Netcdf Subset Service (NCSS), then display interactive graphs with a custom library based on the Data Driven Documents javascript library (D3.js). This time series application provides dynamic functionalities such as interpolation, interactive zoom on different axes, display of point values, and export to different formats. These tools were implemented for the Global Carbon Atlas (http://www.globalcarbonatlas.org): a web portal to explore, visualize, and interpret global and regional carbon fluxes from various model simulations arising from both human activities and natural processes, a work led by the Global Carbon Project.

  3. Inverse sequential procedures for the monitoring of time series

    NASA Technical Reports Server (NTRS)

    Radok, Uwe; Brown, Timothy J.

    1995-01-01

    When one or more new values are added to a developing time series, they change its descriptive parameters (mean, variance, trend, coherence). A 'change index' (CI) is developed as a quantitative indicator of whether the changed parameters remain compatible with the existing 'base' data. CI formulae are derived, in terms of normalized likelihood ratios, for small samples from Poisson, Gaussian, and Chi-Square distributions, and for regression coefficients measuring linear or exponential trends. A substantial parameter change creates a rapid or abrupt CI decrease which persists when the length of the base is changed. Except for a special Gaussian case, the CI has no simple explicit regions for tests of hypotheses. However, its design ensures that the series sampled need not conform strictly to the distribution form assumed for the parameter estimates. The use of the CI is illustrated with both constructed and observed data samples, processed with the Fortran code 'Sequitor'.

  4. Integrated Campaign Probabilistic Cost, Schedule, Performance, and Value for Program Office Support

    NASA Technical Reports Server (NTRS)

    Cornelius, David; Sasamoto, Washito; Daugherty, Kevin; Deacon, Shaun

    2012-01-01

    This paper describes an integrated assessment tool developed at NASA Langley Research Center that incorporates probabilistic analysis of life cycle cost, schedule, launch performance, on-orbit performance, and value across a series of planned space-based missions, or campaign. Originally designed as an aid in planning the execution of missions to accomplish the National Research Council 2007 Earth Science Decadal Survey, it utilizes Monte Carlo simulation of a series of space missions for assessment of resource requirements and expected return on investment. Interactions between simulated missions are incorporated, such as competition for launch site manifest, to capture unexpected and non-linear system behaviors. A novel value model is utilized to provide an assessment of the probabilistic return on investment. A demonstration case is discussed to illustrate the tool utility.

  5. An Extension of the Mean Value Theorem for Integrals

    ERIC Educational Resources Information Center

    Khalili, Parviz; Vasiliu, Daniel

    2010-01-01

    In this note we present an extension of the mean value theorem for integrals. The extension we consider is motivated by an older result (here referred to as Corollary 2), which is classical in the literature of mathematical analysis and calculus. We also show an interesting application for computing the sum of a harmonic series.
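
    For context, the classical statement that such extensions build on (the note's Corollary 2 itself is not reproduced here) is the mean value theorem for integrals, together with its weighted form:

```latex
% If f is continuous on [a,b], there exists c in (a,b) with
\int_a^b f(x)\,dx = f(c)\,(b-a).
% Weighted form: if additionally g is integrable and g \ge 0 on [a,b],
% there exists c in (a,b) with
\int_a^b f(x)\,g(x)\,dx = f(c)\int_a^b g(x)\,dx .
```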

  6. Common pitfalls in statistical analysis: “P” values, statistical significance and confidence intervals

    PubMed Central

    Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc

    2015-01-01

    In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the ‘P’ value, explain the importance of ‘confidence intervals’ and clarify the importance of including both values in a paper. PMID:25878958

  7. Temporal relationships between awakening cortisol and psychosocial variables in inpatients with anorexia nervosa - A time series approach.

    PubMed

    Wild, Beate; Stadnitski, Tatjana; Wesche, Daniela; Stroe-Kunold, Esther; Schultz, Jobst-Hendrik; Rudofsky, Gottfried; Maser-Gluth, Christiane; Herzog, Wolfgang; Friederich, Hans-Christoph

    2016-04-01

    The aim of the study was to investigate the characteristics of the awakening salivary cortisol in patients with anorexia nervosa (AN) using a time series design. We included ten AN inpatients, six with a very low BMI (high symptom severity, HSS group) and four patients with less severe symptoms (low symptom severity, LSS group). Patients collected salivary cortisol daily upon awakening. The number of collected saliva samples varied across patients between n=65 and n=229 (due to the different lengths of their inpatient stay). In addition, before retiring, the patients answered daily questions on a handheld device regarding disorder-related psychosocial variables. The analysis of cortisol and diary data was conducted using a time series approach. Time series showed that the awakening cortisol of the AN patients was elevated as compared to a control group. Cortisol measurements of patients with LSS essentially fluctuated in a stationary manner around a constant mean. The series of patients with HSS were generally less stable; four HSS patients showed a non-stationary cortisol awakening series. Antipsychotic medication did not change awakening cortisol in a specific way. The lagged dependencies between cortisol and depressive feelings became significant for four patients. Here, higher cortisol values were temporally associated with higher values of depressive feelings. Upon awakening, the cortisol of all AN patients was in the standard range but elevated as compared to healthy controls. Patients with HSS appeared to show less stable awakening cortisol time series compared to patients with LSS. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Regenerating time series from ordinal networks.

    PubMed

    McCullough, Michael; Sakellariou, Konstantinos; Stemler, Thomas; Small, Michael

    2017-03-01

    Recently proposed ordinal networks not only afford novel methods of nonlinear time series analysis but also constitute stochastic approximations of the deterministic flow time series from which the network models are constructed. In this paper, we construct ordinal networks from discretely sampled continuous chaotic time series and then regenerate new time series by taking random walks on the ordinal network. We then investigate the extent to which the dynamics of the original time series are encoded in the ordinal networks and retained through the process of regenerating new time series by using several distinct quantitative approaches. First, we use recurrence quantification analysis on traditional recurrence plots and order recurrence plots to compare the temporal structure of the original time series with random walk surrogate time series. Second, we estimate the largest Lyapunov exponent from the original time series and investigate the extent to which this invariant measure can be estimated from the surrogate time series. Finally, estimates of correlation dimension are computed to compare the topological properties of the original and surrogate time series dynamics. Our findings show that ordinal networks constructed from univariate time series data constitute stochastic models which approximate important dynamical properties of the original systems.
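
    The construct-and-regenerate pipeline can be sketched as follows (a minimal illustration: the window size, edge weighting, and the mapping of regenerated symbols back to amplitudes follow the general idea rather than the paper's exact procedure):

```python
import random
from collections import defaultdict

def ordinal_symbols(series, d=3):
    """Map each length-d window to its ordinal pattern (the permutation
    that sorts it) -- these patterns are the nodes of the ordinal network."""
    return [tuple(sorted(range(d), key=lambda i: w[i]))
            for w in zip(*(series[i:] for i in range(d)))]

def build_network(symbols):
    """Edge weights = observed transition counts between successive
    ordinal patterns."""
    edges = defaultdict(lambda: defaultdict(int))
    for a, b in zip(symbols, symbols[1:]):
        edges[a][b] += 1
    return edges

def random_walk(edges, start, steps, seed=0):
    """Regenerate a surrogate symbol sequence by a weighted random walk
    on the ordinal network; converting symbols back to amplitudes is a
    further step not shown here."""
    rng = random.Random(seed)
    node, walk = start, [start]
    for _ in range(steps):
        nbrs = edges[node]
        if not nbrs:
            break
        node = rng.choices(list(nbrs), weights=list(nbrs.values()))[0]
        walk.append(node)
    return walk
```

    Applied to, e.g., a logistic-map series, the walk visits only patterns (and transitions) observed in the data, which is why the surrogate inherits the original dynamics' coarse structure.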

  9. Regenerating time series from ordinal networks

    NASA Astrophysics Data System (ADS)

    McCullough, Michael; Sakellariou, Konstantinos; Stemler, Thomas; Small, Michael

    2017-03-01

    Recently proposed ordinal networks not only afford novel methods of nonlinear time series analysis but also constitute stochastic approximations of the deterministic flow time series from which the network models are constructed. In this paper, we construct ordinal networks from discretely sampled continuous chaotic time series and then regenerate new time series by taking random walks on the ordinal network. We then investigate the extent to which the dynamics of the original time series are encoded in the ordinal networks and retained through the process of regenerating new time series by using several distinct quantitative approaches. First, we use recurrence quantification analysis on traditional recurrence plots and order recurrence plots to compare the temporal structure of the original time series with random walk surrogate time series. Second, we estimate the largest Lyapunov exponent from the original time series and investigate the extent to which this invariant measure can be estimated from the surrogate time series. Finally, estimates of correlation dimension are computed to compare the topological properties of the original and surrogate time series dynamics. Our findings show that ordinal networks constructed from univariate time series data constitute stochastic models which approximate important dynamical properties of the original systems.

  10. Workshop: Valuing and Managing Ecosystems: Economic Research Sponsored by NSF/EPA (1998)

    EPA Pesticide Factsheets

    Materials from the first workshop in the Environmental Policy and Economics Workshop series, focusing on valuing and managing ecosystems, with papers on the use of stated preference methods and on markets for diverse biological resources and conservation measures.

  11. Extreme events in total ozone: Spatio-temporal analysis from local to global scale

    NASA Astrophysics Data System (ADS)

    Rieder, Harald E.; Staehelin, Johannes; Maeder, Jörg A.; Ribatet, Mathieu; di Rocco, Stefania; Jancso, Leonhardt M.; Peter, Thomas; Davison, Anthony C.

    2010-05-01

    dynamics (NAO, ENSO) on total ozone is a global feature in the northern mid-latitudes (Rieder et al., 2010c). In a next step, frequency distributions of extreme events are analyzed on a global scale (northern and southern mid-latitudes). A specific focus here is whether findings gained through analysis of long-term European ground-based stations can be clearly identified as a global phenomenon. By showing results from these three types of studies, an overview of extreme events in total ozone (and the dynamical and chemical features leading to them) will be presented from local to global scales. References: Coles, S.: An Introduction to Statistical Modeling of Extreme Values, Springer Series in Statistics, ISBN:1852334592, Springer, Berlin, 2001. Ribatet, M.: POT: Modelling peaks over a threshold, R News, 7, 34-36, 2007. Rieder, H.E., Staehelin, J., Maeder, J.A., Ribatet, M., Stübi, R., Weihs, P., Holawe, F., Peter, T., and Davison, A.C. (2010): Extreme events in total ozone over Arosa - Part I: Application of extreme value theory, to be submitted to ACPD. Rieder, H.E., Staehelin, J., Maeder, J.A., Ribatet, M., Stübi, R., Weihs, P., Holawe, F., Peter, T., and Davison, A.C. (2010): Extreme events in total ozone over Arosa - Part II: Fingerprints of atmospheric dynamics and chemistry and effects on mean values and long-term changes, to be submitted to ACPD. Rieder, H.E., Jancso, L., Staehelin, J., Maeder, J.A., Ribatet, M., Peter, T., and Davison, A.C. (2010): Extreme events in total ozone over the northern mid-latitudes: A case study based on long-term data sets from 5 ground-based stations, in preparation. Staehelin, J., Renaud, A., Bader, J., McPeters, R., Viatte, P., Hoegger, B., Bugnion, V., Giroud, M., and Schill, H.: Total ozone series at Arosa (Switzerland): Homogenization and data comparison, J. Geophys. Res., 103(D5), 5827-5842, doi:10.1029/97JD02402, 1998a. Staehelin, J., Kegel, R., and Harris, N. R.: Trend analysis of the homogenized total ozone series of Arosa

  12. Modeling climate change impacts on combined sewer overflow using synthetic precipitation time series.

    PubMed

    Bendel, David; Beck, Ferdinand; Dittmer, Ulrich

    2013-01-01

    In this study, climate change impacts on combined sewer overflows (CSOs) in Baden-Wuerttemberg, Southern Germany, were assessed based on continuous long-term rainfall-runoff simulations. As input data, synthetic rainfall time series were used. The applied precipitation generator NiedSim-Klima accounts for climate change effects on precipitation patterns. Time series for the past (1961-1990) and future (2041-2050) were generated for various locations. Comparing the simulated CSO activity of both periods, we observe significantly higher overflow frequencies for the future. Changes in overflow volume and overflow duration depend on the type of overflow structure. Both values will increase at simple CSO structures that merely divide the flow, whereas they will decrease when the CSO structure is combined with a storage tank. However, there is a wide variation between the results of different precipitation time series (representative of different locations).

  13. Impact of arachidonic versus eicosapentaenoic acid on exotoxin-induced lung vascular leakage: relation to 4-series versus 5-series leukotriene generation.

    PubMed

    Grimminger, F; Wahn, H; Mayer, K; Kiss, L; Walmrath, D; Seeger, W

    1997-02-01

    Escherichia coli hemolysin (HlyA) is a proteinaceous pore-forming exotoxin that is implicated as a significant pathogenicity factor in extraintestinal E. coli infections including sepsis. In perfused rabbit lungs, subcytolytic concentrations of the toxin evoke thromboxane-mediated vasoconstriction and a prostanoid-independent protracted vascular permeability increase (11). In the present study, the influence of submicromolar concentrations of free arachidonic acid (AA) and eicosapentaenoic acid (EPA) on the HlyA-induced leakage response was investigated. HlyA at concentrations from 0.02 to 0.06 hemolytic units/ml provoked a dose-dependent, severalfold increase in the capillary filtration coefficient (Kfc), accompanied by the release of leukotriene(LT)B4, LTC4, and LTE4 into the recirculating buffer fluid. Simultaneous application of 100 nmol/L AA markedly augmented the HlyA-elicited leakage response, concomitant with an amplification of LTB4 release and a change in the kinetics of cysteinyl-LT generation. In contrast, 50 to 200 nmol/L EPA suppressed in a dose-dependent manner the HlyA-induced increase in Kfc values. This was accompanied by a blockage of 4-series LT generation and a dose-dependent appearance of LTB5, LTC5, and LTE5. In addition, EPA fully antagonized the AA-induced amplification of the HlyA-provoked Kfc increase, again accompanied by a shift from 4-series to 5-series LT generation. We conclude that the vascular leakage provoked by HlyA in rabbit lungs is differentially influenced by free AA versus free EPA, related to the generation of 4- versus 5-series leukotrienes. The composition of lipid emulsions used for parenteral nutrition may thus influence inflammatory capillary leakage.

  14. A series solution for horizontal infiltration in an initially dry aquifer

    NASA Astrophysics Data System (ADS)

    Furtak-Cole, Eden; Telyakovskiy, Aleksey S.; Cooper, Clay A.

    2018-06-01

    The porous medium equation (PME) is a generalization of the traditional Boussinesq equation in which hydraulic conductivity is a power-law function of height. We analyze the horizontal recharge of an initially dry unconfined aquifer of semi-infinite extent, as would be found in an aquifer adjacent to a rising river. If the water level can be modeled as a power law function of time, similarity variables can be introduced and the original problem can be reduced to a boundary value problem for a nonlinear ordinary differential equation. The position of the advancing front is not known ahead of time and must be found in the process of solution. We present an analytical solution in the form of a power series, with the coefficients of the series given by a recurrence relation. The analytical solution compares favorably with a highly accurate numerical solution, and only a small number of terms of the series are needed to achieve high accuracy in the scenarios considered here. We also conduct a series of physical experiments in an initially dry wedged Hele-Shaw cell, where flow is modeled by a special form of the PME. Our analytical solution closely matches the hydraulic head profiles in the Hele-Shaw cell experiment.
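
    The similarity reduction described above can be sketched in generic form (a standard reduction under the stated power-law assumptions; the paper's exact scalings and normalization may differ). Writing the PME as \(h_t = (h^m h_x)_x\) and imposing a power-law water level \(h(0,t) = B\,t^{\alpha}\):

```latex
% Similarity ansatz:
h(x,t) = t^{\alpha} f(\xi), \qquad \xi = x\,t^{-\beta}, \qquad
\beta = \tfrac{1}{2}\,(m\alpha + 1),
% which reduces the PDE to the two-point boundary value problem
\left(f^{m} f'\right)' = \alpha f - \beta\,\xi\,f', \qquad
f(0) = B, \qquad f(\xi^{*}) = 0,
```

    where the front position \(\xi^{*}\) is unknown and must be found as part of the solution, matching the statement in the abstract.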

  15. Implementing Target Value Design.

    PubMed

    Alves, Thais da C L; Lichtig, Will; Rybkowski, Zofia K

    2017-04-01

    An alternative to the traditional way of designing projects is the process of target value design (TVD), which takes a different departure point to start the design process. The TVD process starts with the client defining an allowable cost that needs to be met by the design and construction teams. An expected cost in the TVD process is defined through multiple interactions between multiple stakeholders who define wishes and others who define ways of achieving these wishes. Finally, a target cost is defined based on the profit the design and construction teams expect to make. TVD follows a series of continuous improvement efforts aimed at reaching the desired goals for the project and its associated target value cost. The process takes advantage of rapid cycles of suggestions, analyses, and implementation that start with the definition of value for the client. In the traditional design process, the goal is to identify user preferences and find solutions that meet the needs of the client's expressed preferences. In the lean design process, the goal is to educate users about their values and advocate for a better facility over the long run; this way owners can help contractors and designers to identify better solutions. This article aims to inform the healthcare community about tools and techniques commonly used during the TVD process and how they can be used to educate and support project participants in developing better solutions to meet their needs now as well as in the future.

  16. Extreme events in total ozone over Arosa - Part 1: Application of extreme value theory

    NASA Astrophysics Data System (ADS)

    Rieder, H. E.; Staehelin, J.; Maeder, J. A.; Peter, T.; Ribatet, M.; Davison, A. C.; Stübi, R.; Weihs, P.; Holawe, F.

    2010-10-01

    In this study ideas from extreme value theory are for the first time applied in the field of stratospheric ozone research, because statistical analysis showed that previously used concepts assuming a Gaussian distribution (e.g. fixed deviations from mean values) of total ozone data do not adequately address the structure of the extremes. We show that statistical extreme value methods are appropriate to identify ozone extremes and to describe the tails of the Arosa (Switzerland) total ozone time series. In order to accommodate the seasonal cycle in total ozone, a daily moving threshold was determined and used, with tools from extreme value theory, to analyse the frequency of days with extreme low (termed ELOs) and high (termed EHOs) total ozone at Arosa. The analysis shows that the Generalized Pareto Distribution (GPD) provides an appropriate model for the frequency distribution of total ozone above or below a mathematically well-defined threshold, thus providing a statistical description of ELOs and EHOs. The results show an increase in ELOs and a decrease in EHOs during the last decades. The fitted model represents the tails of the total ozone data set with high accuracy over the entire range (including absolute monthly minima and maxima), and enables a precise computation of the frequency distribution of ozone mini-holes (using constant thresholds). Analyzing the tails instead of a small fraction of days below constant thresholds provides deeper insight into the time series properties. Fingerprints of dynamical (e.g. ENSO, NAO) and chemical features (e.g. strong polar vortex ozone loss), and major volcanic eruptions, can be identified in the observed frequency of extreme events throughout the time series. Overall the new approach to analysis of extremes provides more information on time series properties and variability than previous approaches that use only monthly averages and/or mini-holes and mini-highs.

  17. Extreme events in total ozone over Arosa - Part 1: Application of extreme value theory

    NASA Astrophysics Data System (ADS)

    Rieder, H. E.; Staehelin, J.; Maeder, J. A.; Peter, T.; Ribatet, M.; Davison, A. C.; Stübi, R.; Weihs, P.; Holawe, F.

    2010-05-01

    In this study ideas from extreme value theory are for the first time applied in the field of stratospheric ozone research, because statistical analysis showed that previously used concepts assuming a Gaussian distribution (e.g. fixed deviations from mean values) of total ozone data do not adequately address the structure of the extremes. We show that statistical extreme value methods are appropriate to identify ozone extremes and to describe the tails of the Arosa (Switzerland) total ozone time series. In order to accommodate the seasonal cycle in total ozone, a daily moving threshold was determined and used, with tools from extreme value theory, to analyse the frequency of days with extreme low (termed ELOs) and high (termed EHOs) total ozone at Arosa. The analysis shows that the Generalized Pareto Distribution (GPD) provides an appropriate model for the frequency distribution of total ozone above or below a mathematically well-defined threshold, thus providing a statistical description of ELOs and EHOs. The results show an increase in ELOs and a decrease in EHOs during the last decades. The fitted model represents the tails of the total ozone data set with high accuracy over the entire range (including absolute monthly minima and maxima), and enables a precise computation of the frequency distribution of ozone mini-holes (using constant thresholds). Analyzing the tails instead of a small fraction of days below constant thresholds provides deeper insight into the time series properties. Fingerprints of dynamical (e.g. ENSO, NAO) and chemical features (e.g. strong polar vortex ozone loss), and major volcanic eruptions, can be identified in the observed frequency of extreme events throughout the time series. Overall the new approach to analysis of extremes provides more information on time series properties and variability than previous approaches that use only monthly averages and/or mini-holes and mini-highs.
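
    The daily moving-threshold idea can be sketched as follows (a simplified illustration on a synthetic seasonal "total ozone" series; the paper's threshold definition and the subsequent GPD fitting step are not reproduced):

```python
import math

# Synthetic daily series with an annual cycle plus a deterministic
# oscillation standing in for day-to-day variability (hypothetical data).
days = list(range(3 * 365))
ozone = [330 + 40 * math.sin(2 * math.pi * d / 365) +
         15 * math.sin(17.0 * d) for d in days]

def moving_threshold(values, window=15, quantile=0.99):
    """Per-day high threshold: the q-quantile of all values whose
    day-of-year lies within +/- window days (pooled across years), so
    that the seasonal cycle is accommodated."""
    n = len(values)
    thr = []
    for i in range(n):
        doy = i % 365
        pool = [values[j] for j in range(n)
                if min((j % 365 - doy) % 365, (doy - j % 365) % 365) <= window]
        pool.sort()
        thr.append(pool[int(quantile * (len(pool) - 1))])
    return thr

thr = moving_threshold(ozone)
ehos = [i for i in range(len(ozone)) if ozone[i] > thr[i]]  # extreme highs
```

    Days below an analogous low threshold would be flagged as ELOs; in the paper, the exceedances over such thresholds are then modeled with the Generalized Pareto Distribution.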

  18. Scaling properties of Polish rain series

    NASA Astrophysics Data System (ADS)

    Licznar, P.

    2009-04-01

    implementation of double trace moment method allowed for estimation of local universal multifractal rainfall parameters (α=0.69; C1=0.34; H=-0.01). The research proved the fractal character of rainfall process support and multifractal character of the rainfall intensity values variability among analyzed time series. It is believed that scaling of local Wroclaw's rainfalls for timescales at the range from 24 hours up to 5 minutes opens the door for future research concerning for example random cascades implementation for daily precipitation totals disaggregation for smaller time intervals. The results of such a random cascades functioning in a form of 5 minute artificial rainfall scenarios could be of great practical usability for needs of urban hydrology, and design and hydrodynamic modeling of storm water and combined sewage conveyance systems.

  19. Flood return level analysis of Peaks over Threshold series under changing climate

    NASA Astrophysics Data System (ADS)

    Li, L.; Xiong, L.; Hu, T.; Xu, C. Y.; Guo, S.

    2016-12-01

    Obtaining insights into future flood estimation is of great significance for water planning and management. Traditional flood return level analysis with the stationarity assumption has been challenged by changing environments. A method that takes the nonstationarity context into consideration has been extended to derive flood return levels for Peaks over Threshold (POT) series. In applications to POT series, a Poisson distribution is normally assumed to describe the arrival rate of exceedance events, but this distribution assumption has at times been reported as invalid. The Negative Binomial (NB) distribution is therefore proposed as an alternative to the Poisson distribution assumption. Flood return levels were extrapolated in a nonstationarity context for the POT series of the Weihe basin, China under future climate scenarios. The results show that the flood return levels estimated under nonstationarity can differ depending on whether a Poisson or an NB distribution is assumed. The difference is found to be related to the threshold value of the POT series. The study indicates the importance of distribution selection in flood return level analysis under nonstationarity and provides a reference on the impact of climate change on flood estimation in the Weihe basin for the future.
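
    The motivation for the NB alternative can be illustrated with a simple dispersion check on exceedance counts (the annual counts below are hypothetical, not the Weihe data):

```python
import statistics

# Hypothetical annual counts of threshold exceedances (POT arrivals).
counts = [2, 5, 1, 7, 0, 3, 9, 1, 4, 8, 0, 6]

mean = statistics.mean(counts)
var = statistics.variance(counts)   # sample variance
dispersion = var / mean             # ~1 for Poisson; >1 indicates overdispersion

# Method-of-moments Negative Binomial fit, valid when var > mean:
# var = mean + mean^2 / r  =>  r = mean^2 / (var - mean)
r = mean ** 2 / (var - mean)        # NB shape parameter
p = r / (r + mean)                  # NB success probability
```

    When the counts are overdispersed (variance well above the mean, as here), the Poisson assumption understates the variability of arrivals, which is exactly the situation the abstract reports for some POT series.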

  20. Robust and Adaptive Online Time Series Prediction with Long Short-Term Memory

    PubMed Central

    Tao, Qing

    2017-01-01

    Online time series prediction is a mainstream method in a wide range of fields, ranging from speech analysis and noise cancelation to stock market analysis. However, real-world data often contain outliers, and their number grows with the length of the time series. These outliers can mislead the learned model if treated as normal points in the process of prediction. To address this issue, in this paper we propose a robust and adaptive online gradient learning method, RoAdam (Robust Adam), for long short-term memory (LSTM) to predict time series with outliers. This method tunes the learning rate of the stochastic gradient algorithm adaptively in the process of prediction, which reduces the adverse effect of outliers. It tracks the relative prediction error of the loss function with a weighted average through modifying Adam, a popular stochastic gradient algorithm for training deep neural networks. In our algorithm, a large value of the relative prediction error corresponds to a small learning rate, and vice versa. The experiments on both synthetic data and real time series show that our method achieves better performance compared to the existing methods based on LSTM. PMID:29391864

  1. Robust and Adaptive Online Time Series Prediction with Long Short-Term Memory.

    PubMed

    Yang, Haimin; Pan, Zhisong; Tao, Qing

    2017-01-01

    Online time series prediction is a mainstream method in a wide range of fields, ranging from speech analysis and noise cancelation to stock market analysis. However, real-world data often contain outliers, and their number grows with the length of the time series. These outliers can mislead the learned model if treated as normal points in the process of prediction. To address this issue, in this paper we propose a robust and adaptive online gradient learning method, RoAdam (Robust Adam), for long short-term memory (LSTM) to predict time series with outliers. This method tunes the learning rate of the stochastic gradient algorithm adaptively in the process of prediction, which reduces the adverse effect of outliers. It tracks the relative prediction error of the loss function with a weighted average through modifying Adam, a popular stochastic gradient algorithm for training deep neural networks. In our algorithm, a large value of the relative prediction error corresponds to a small learning rate, and vice versa. The experiments on both synthetic data and real time series show that our method achieves better performance compared to the existing methods based on LSTM.
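
    The learning-rate adaptation can be caricatured as follows (an illustrative sketch only, not the authors' exact RoAdam update, which modifies Adam itself; the EMA weighting and clipping choices here are assumptions):

```python
def adaptive_lr(base_lr, losses, beta=0.9, eps=1e-8):
    """Yield one learning rate per step: when the current loss is large
    relative to its running (exponentially weighted) average -- as with
    an outlier -- the learning rate shrinks proportionally."""
    ema = None
    rates = []
    for loss in losses:
        rel = 1.0 if ema is None else loss / (ema + eps)  # relative prediction error
        rates.append(base_lr / max(rel, 1.0))             # large error -> small rate
        ema = loss if ema is None else beta * ema + (1 - beta) * loss
    return rates
```

    On a loss sequence with a single spike, the rate drops only at the spike and recovers afterwards, which is the qualitative behavior the abstract describes.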

  2. Spectral Unmixing Analysis of Time Series Landsat 8 Images

    NASA Astrophysics Data System (ADS)

    Zhuo, R.; Xu, L.; Peng, J.; Chen, Y.

    2018-05-01

    Temporal analysis of Landsat 8 images opens up new opportunities in the unmixing procedure. Although spectral analysis of time series Landsat imagery has its own advantages, it has rarely been studied. Nevertheless, using the temporal information can provide improved unmixing performance when compared to independent image analyses. Moreover, different land cover types may demonstrate different temporal patterns, which can aid their discrimination. Therefore, this letter presents time series K-P-Means, a new solution to the problem of unmixing time series Landsat imagery. The proposed approach obtains "purified" pixels in order to achieve optimal unmixing performance. Vertex component analysis (VCA) is used for endmember initialization. First, nonnegative least squares (NNLS) is used to estimate abundance maps from the endmembers. Then, each endmember is updated as the mean value of its "purified" pixels, i.e., the residual of the mixed pixel after excluding the contribution of all nondominant endmembers. Assembling the two main steps (abundance estimation and endmember update) into an iterative optimization framework yields the complete algorithm. Experiments using both simulated and real Landsat 8 images show that the proposed "joint unmixing" approach provides more accurate endmember and abundance estimation results than the "separate unmixing" approach.
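
    The abundance-estimation step can be sketched with NNLS (a minimal per-pixel illustration; the endmember spectra below are synthetic, and the K-P-Means purification and endmember-update loop are not reproduced):

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic endmember matrix E: 4 spectral bands x 2 endmembers.
E = np.array([[0.10, 0.80],
              [0.20, 0.60],
              [0.70, 0.30],
              [0.90, 0.10]])

def estimate_abundances(pixels, endmembers):
    """Solve a nonnegative least-squares problem per pixel spectrum
    (rows of `pixels`) to obtain abundance fractions."""
    return np.array([nnls(endmembers, p)[0] for p in pixels])

# A pixel that is an exact 60/40 mixture of the two endmembers:
pixel = 0.6 * E[:, 0] + 0.4 * E[:, 1]
abund = estimate_abundances(pixel[None, :], E)[0]
```

    For a noise-free mixture with linearly independent endmembers, NNLS recovers the fractions exactly; in the full algorithm these abundances feed the subsequent endmember update.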

  3. Promoting Health and Mental Health in Children, Youth, and Families. Springer Series on Behavior Therapy and Behavioral Medicine, Volume 27.

    ERIC Educational Resources Information Center

    Glenwick, David S., Ed.; Jason, Leonard A., Ed.

    In the last decade, there has been increased attention paid to the scope of mental and physical health problems that affect individuals at different points over the entire life span. This volume presents many problem areas and the range of their impact on individuals, families, and society at large. The impact of intervention programs is described…

  4. Presentaciones escolares. Serie de programas para conmemorar acontecimientos de valor cultural para el mexico americano (School Assembly Presentations. Series of Programs to Commemorate Events of Cultural Value to the Mexican American).

    ERIC Educational Resources Information Center

    Villarreal, Abelardo; And Others

    This material consists of a series of cultural presentations designed for elementary school assemblies or special programs. The activities are intended to strengthen Mexican-American children's awareness of their cultural heritage. Program scripts, poems, songs, historical narratives and skits are included to illustrate and celebrate Mexican and…

  5. [Usefulness of upper gastrointestinal series to detect leaks in the early postoperative period of bariatric surgery].

    PubMed

    Medina, Francisco J; Miranda-Merchak, Andrés; Martínez, Alonso; Sánchez, Felipe; Bravo, Sebastián; Contreras, Juan Eduardo; Alliende, Isabel; Canals, Andrea

    2016-04-01

    Postoperative leaks are the most undesirable complication of bariatric surgery, and upper gastrointestinal (GI) series are routinely ordered to rule them out. Despite the published literature recommending against its routine use, this practice is still customary in Chile. To examine the usefulness of routine upper GI series using water-soluble iodinated contrast media for the detection of early postoperative leaks in patients undergoing bariatric surgery. A cohort of 328 patients subjected to bariatric surgery was followed from October 2012 to October 2013. Most of them underwent sleeve gastrectomy. Upper GI series on the first postoperative day were ordered for 308 (94%) patients. Postoperative leaks were observed in two patients, with an incidence of 0.6%. The sensitivity of the upper GI series for leak detection was 0% and the negative predictive value was 99%. Routine upper GI series after bariatric surgery is not useful for the diagnosis of postoperative leaks, given the low incidence of this complication and the low sensitivity of the technique.
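
    The reported metrics can be reconstructed from the abstract's counts (assuming both leaks were radiologically missed and there were no false positives, which the 0% sensitivity implies but the abstract does not tabulate):

```python
# 2x2 table implied by the abstract: 308 imaged patients, 2 leaks, 0 detected.
tp, fn = 0, 2     # leaks detected / leaks missed by the upper GI series
tn, fp = 306, 0   # patients without a leak (assumed no false positives)

sensitivity = tp / (tp + fn)   # 0.0, as reported
npv = tn / (tn + fn)           # ~0.99, as reported
```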

  6. Simulation of rockfalls triggered by earthquakes

    USGS Publications Warehouse

    Kobayashi, Y.; Harp, E.L.; Kagawa, T.

    1990-01-01

    A computer program to simulate the downslope movement of boulders in rolling or bouncing modes has been developed and applied to actual rockfalls triggered by the Mammoth Lakes, California, earthquake sequence in 1980 and the Central Idaho earthquake in 1983. In order to reproduce a movement mode where bouncing predominated, we introduced an artificial unevenness to the slope surface by adding a small random number to the interpolated value of the mid-points between the adjacent surveyed points. Three hundred simulations were computed for each site by changing the random number series, which determined distances and bouncing intervals. The movement of the boulders was, in general, rather erratic depending on the random numbers employed, and the results should be regarded as stochastic rather than deterministic. The closest agreement between calculated and actual movements was obtained at the site with the most detailed and accurate topographic measurements. © 1990 Springer-Verlag.
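
    The surface-roughening idea can be sketched as follows (the surveyed profile and roughness amplitude are hypothetical, and the boulder dynamics themselves are not reproduced):

```python
import random

# Hypothetical surveyed slope profile as (x, elevation) points.
surveyed = [(0.0, 100.0), (10.0, 92.0), (20.0, 80.0), (30.0, 65.0)]

def roughened_profile(points, amplitude=0.2, seed=0):
    """Densify the profile by interpolating midpoints between adjacent
    surveyed points, then perturb each midpoint elevation by a small
    random number to create artificial unevenness."""
    rng = random.Random(seed)
    out = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        out.append((x0, y0))
        mid_x = 0.5 * (x0 + x1)
        mid_y = 0.5 * (y0 + y1) + rng.uniform(-amplitude, amplitude)
        out.append((mid_x, mid_y))
    out.append(points[-1])
    return out

# The paper's 300 simulations per site correspond to 300 seeds here,
# each yielding a different random-number series and hence trajectory.
profiles = [roughened_profile(surveyed, seed=s) for s in range(300)]
```

    Running the boulder dynamics once per perturbed profile, and summarizing the ensemble of outcomes, is what makes the results stochastic rather than deterministic.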

  7. The Evidence Value Matrix for Diagnostic Imaging.

    PubMed

    Seidel, David; Frank, Richard A; Schmidt, Sebastian

    2016-10-01

    Evidence and value are independent factors that together affect the adoption of diagnostic imaging. For example, noncoverage decisions by reimbursement authorities can be justified by a lack of evidence and/or value. To create transparency and a common understanding among various stakeholders, we have proposed a two-dimensional matrix that allows classification of imaging devices into three distinct categories based on the available evidence and value: "question marks" (low value demonstrated in studies of any evidence level), "candidates" (high value demonstrated in retrospective case-control studies and smaller case series), and "stars" (high value demonstrated in large prospective cohort studies or, preferably, randomized controlled trials). We use several examples to illustrate the application of our matrix. A major benefit of the matrix includes the development of specific strategies for evidence and value generation. High-evidence/low-value studies are expensive and unlikely to convince decision makers, given the uncertainty of the impact on patient management and outcomes. Developing question marks into candidates first and then into stars will often be quicker and less expensive ("success sequence"). Only this more sophisticated and objective approach can justify the additional funding necessary to generate the evidence base to inform reimbursement by payers and adoption by providers. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  8. Rheumatoid Arthritis Educational Video Series

    MedlinePlus Videos and Cool Tools

    This series of five videos was designed to help you learn more about Rheumatoid Arthritis (RA). You will learn how the diagnosis of ...

  9. Deriving crop calendar using NDVI time-series

    NASA Astrophysics Data System (ADS)

    Patel, J. H.; Oza, M. P.

    2014-11-01

    Agricultural intensification is defined in terms of cropping intensity, that is, the number of crops (single, double or triple) grown per year in a unit cropland area. Information about the crop calendar (i.e. the number of crops in a parcel of land, their planting and harvesting dates, and the date of peak vegetative stage) is essential for proper management of agriculture. Remote sensing sensors provide regular, consistent and reliable measurements of vegetation response at various growth stages of a crop and are therefore ideally suited for monitoring purposes. The spectral response of vegetation, as measured by the Normalized Difference Vegetation Index (NDVI) and its profiles, can provide a new dimension for describing the vegetation growth cycle. Analysis based on NDVI values at regular time intervals provides useful information about various crop growth stages and the performance of a crop in a season. However, the NDVI data series has a considerable amount of local fluctuation in the time domain and needs to be smoothed so that the dominant seasonal behavior is enhanced. Based on temporal analysis of the smoothed NDVI series, it is possible to extract the number of crop cycles per year and their crop calendar. In the present study, a methodology is developed to extract key elements of the crop growth cycle (i.e. number of crops per year and their planting - peak - harvesting dates). This is illustrated by analysing the MODIS-NDVI data series of one agricultural year (June 2012 to May 2013) over Gujarat. Such an analysis is very useful for studying the dynamics of kharif and rabi crops.
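
The smoothing-and-peak-detection workflow described in the abstract can be sketched as follows. This is a minimal illustration on a synthetic NDVI profile, not the paper's actual method: the moving-average window and the 0.4 peak threshold are arbitrary assumptions.

```python
def moving_average(series, window=3):
    """Smooth a series with a centered moving average to suppress local fluctuation."""
    half = window // 2
    return [
        sum(series[max(0, i - half):i + half + 1]) / len(series[max(0, i - half):i + half + 1])
        for i in range(len(series))
    ]

def crop_cycles(ndvi, threshold=0.4):
    """Count crop cycles as local maxima of the smoothed NDVI above a threshold;
    each peak approximates the date of peak vegetative stage."""
    smooth = moving_average(ndvi)
    return [
        i for i in range(1, len(smooth) - 1)
        if smooth[i] > threshold and smooth[i - 1] < smooth[i] >= smooth[i + 1]
    ]

# Synthetic double-cropped parcel: two green-up/senescence cycles in one year.
ndvi = [0.2, 0.3, 0.5, 0.7, 0.6, 0.4, 0.2, 0.3, 0.5, 0.8, 0.6, 0.3]
peaks = crop_cycles(ndvi)  # two local maxima -> a double-cropped parcel
```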

  10. Analysis of Zenith Tropospheric Delay above Europe based on long time series derived from the EPN data

    NASA Astrophysics Data System (ADS)

    Baldysz, Zofia; Nykiel, Grzegorz; Figurski, Mariusz; Szafranek, Karolina; Kroszczynski, Krzysztof; Araszkiewicz, Andrzej

    2015-04-01

    trend - only for 4 stations was the size of the linear trend exactly the same for the two periods. In one case, the nature of the trend changed from negative (16-year time series) to positive (18-year time series). The average value of the linear trends for the 16-year time series is 1.5 mm/decade, but their spatial distribution is not uniform. The average value of the linear trends for all 18-year time series is 2.0 mm/decade, with better spatial distribution and smaller discrepancies.

  11. Multivariate stochastic analysis for Monthly hydrological time series at Cuyahoga River Basin

    NASA Astrophysics Data System (ADS)

    zhang, L.

    2011-12-01

    Copulas have become a very powerful statistical and stochastic methodology for multivariate analysis in environmental and water resources engineering. In recent years, the popular one-parameter Archimedean copulas (e.g. the Gumbel-Hougaard copula, Cook-Johnson copula, Frank copula) and the meta-elliptical copulas (e.g. the Gaussian copula, Student-t copula) have been applied in multivariate hydrological analyses, e.g. multivariate rainfall (rainfall intensity, duration and depth), flood (peak discharge, duration and volume), and drought analyses (drought length, mean and minimum SPI values, and drought mean areal extent). Copulas have also been applied in flood frequency analysis at the confluences of river systems by taking into account the dependence among upstream gauge stations rather than by using the hydrological routing technique. In most of the studies above, the annual time series have been treated as stationary signals in which the observations are assumed to be independent identically distributed (i.i.d.) random variables. In reality, however, hydrological time series, especially daily and monthly series, cannot be considered i.i.d. random variables due to the periodicity in the data structure. The stationarity assumption is also in question due to climate change and Land Use and Land Cover (LULC) change in past years. It is therefore necessary to re-evaluate the classic approach to the study of hydrological time series by relaxing the stationarity assumption through a nonstationary approach. Likewise, when studying the dependence structure of hydrological time series, the assumption of the same type of univariate distribution needs to be relaxed by adopting copula theory. In this paper, the univariate monthly hydrological time series will be studied through a nonstationary time series analysis approach. The dependence structure of the multivariate monthly hydrological time series will be

  12. From Networks to Time Series

    NASA Astrophysics Data System (ADS)

    Shimada, Yutaka; Ikeguchi, Tohru; Shigehara, Takaomi

    2012-10-01

    In this Letter, we propose a framework to transform a complex network into a time series. The transformation from complex networks to time series is realized by classical multidimensional scaling. Applying the transformation method to a model proposed by Watts and Strogatz [Nature (London) 393, 440 (1998)], we show that ring lattices are transformed to periodic time series, small-world networks to noisy periodic time series, and random networks to random time series. We also show analytically that these relationships hold, using circulant-matrix theory and the perturbation theory of linear operators. The results are generalized to several high-dimensional lattices.
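
Classical multidimensional scaling itself is standard, so the ring-lattice case above can be sketched directly. This is an illustrative reconstruction, not the authors' code; `numpy.linalg.eigh` supplies the eigendecomposition:

```python
import numpy as np

def ring_lattice_distances(n):
    """Shortest-path distance matrix of a ring of n nodes (each linked to its two neighbors)."""
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            d = abs(i - j)
            D[i, j] = min(d, n - d)
    return D

def classical_mds(D, dim=2):
    """Classical multidimensional scaling: embed the nodes so that Euclidean
    distances approximate the given graph distances D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered squared distances
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dim]  # keep the largest eigenvalues
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# Reading one embedding coordinate node-by-node around the ring yields a periodic series.
coords = classical_mds(ring_lattice_distances(20))
series = coords[:, 0]
```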

  13. Identifying the values and preferences of prosthetic users: a case study series using the repertory grid technique.

    PubMed

    Schaffalitzky, Elisabeth; NiMhurchadha, Sinead; Gallagher, Pamela; Hofkamp, Susan; MacLachlan, Malcolm; Wegener, Stephen T

    2009-06-01

    The matching of prosthetic devices to the needs of the individual is a challenge for providers and patients. The aims of this study are to explore the values and preferences that prosthetic users have of their prosthetic devices; to investigate users' perceptions of alternative prosthetic options and to demonstrate a novel method for exploring the values and preferences of prosthetic users. This study describes four case studies of upper limb and lower limb high tech and conventional prosthetic users. Participants were interviewed using the repertory grid technique (RGT), a qualitative technique to explore individual values and preferences regarding specific choices and events. The participants generated distinctive patterns of personal constructs and ratings regarding prosthetic use and different prosthetic options available. The RGT produced a unique profile of preferences regarding prosthetic technologies for each participant. User choice is an important factor when matching prosthetic technology to the user. The consumer's values regarding different prosthetic options are likely to be a critical factor in prosthetic acceptance and ultimate quality of life. The RGT offers a structured method of exploring these attitudes and values without imposing researcher or practitioner bias and identifies personalized dimensions for providers and users to evaluate the individuals' preferences in prosthetic technology.

  14. Duality between Time Series and Networks

    PubMed Central

    Campanharo, Andriana S. L. O.; Sirer, M. Irmak; Malmgren, R. Dean; Ramos, Fernando M.; Amaral, Luís A. Nunes.

    2011-01-01

    Studying the interaction between a system's components and the temporal evolution of the system are two common ways to uncover and characterize its internal workings. Recently, several maps from a time series to a network have been proposed with the intent of using network metrics to characterize time series. Although these maps demonstrate that different time series result in networks with distinct topological properties, it remains unclear how these topological properties relate to the original time series. Here, we propose a map from a time series to a network with an approximate inverse operation, making it possible to use network statistics to characterize time series and time series statistics to characterize networks. As a proof of concept, we generate an ensemble of time series ranging from periodic to random and confirm that application of the proposed map retains much of the information encoded in the original time series (or networks) after application of the map (or its inverse). Our results suggest that network analysis can be used to distinguish different dynamic regimes in time series and, perhaps more importantly, time series analysis can provide a powerful set of tools that augment the traditional network analysis toolkit to quantify networks in new and useful ways. PMID:21858093
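
One concrete way to realize a map with an approximate inverse, loosely in the spirit of the quantile-based transition networks of this line of work but heavily simplified, is to let nodes be quantile bins of the series and edge weights be transition counts; the approximate inverse is a weighted random walk. The bin count and the handling of duplicate values are assumptions of this sketch:

```python
import random

def series_to_network(series, n_bins=4):
    """Map a time series to a weighted directed network: nodes are quantile bins,
    edge weights count transitions between consecutive observations."""
    ranked = sorted(series)
    def bin_of(x):
        # assign each value to a quantile bin (node label 0..n_bins-1)
        return min(n_bins - 1, ranked.index(x) * n_bins // len(series))
    nodes = [bin_of(x) for x in series]
    weights = {}
    for a, b in zip(nodes, nodes[1:]):
        weights[(a, b)] = weights.get((a, b), 0) + 1
    return weights

def network_to_series(weights, length, start=0, seed=0):
    """Approximate inverse: a weighted random walk on the transition network
    yields a bin sequence with statistics similar to the original series."""
    rng = random.Random(seed)
    node, walk = start, [start]
    for _ in range(length - 1):
        out = [(b, w) for (a, b), w in weights.items() if a == node]
        if not out:
            break
        node = rng.choices([b for b, _ in out], [w for _, w in out])[0]
        walk.append(node)
    return walk
```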

  15. Series Transmission Line Transformer

    DOEpatents

    Buckles, Robert A.; Booth, Rex; Yen, Boris T.

    2004-06-29

    A series transmission line transformer is set forth which includes two or more impedance-matched sets of at least two transmission lines, such as shielded cables, connected in parallel at one end and in series at the other in a cascading fashion. The cables are wound about a magnetic core. The series transmission line transformer (STLT) provides higher impedance ratios and bandwidths, is scalable, and is of simpler design and construction.

  16. Modelling of extreme rainfall events in Peninsular Malaysia based on annual maximum and partial duration series

    NASA Astrophysics Data System (ADS)

    Zin, Wan Zawiah Wan; Shinyie, Wendy Ling; Jemain, Abdul Aziz

    2015-02-01

    In this study, two series of data for extreme rainfall events are generated based on the Annual Maximum and Partial Duration methods, derived from 102 rain-gauge stations in Peninsular Malaysia for 1982-2012. To determine the optimal threshold for each station, several requirements must be satisfied, and the Adapted Hill estimator is employed for this purpose. A semi-parametric bootstrap is then used to estimate the mean square error (MSE) of the estimator at each threshold, and the optimal threshold is selected based on the smallest MSE. The mean annual frequency is also checked to ensure that it lies in the range of one to five, and the resulting data are de-clustered to ensure independence. The two data series are then fitted to the Generalized Extreme Value and Generalized Pareto distributions for the annual maximum and partial duration series, respectively. The parameter estimation methods used are the Maximum Likelihood and L-moment methods. Two goodness-of-fit tests are then used to evaluate the best-fitted distribution. The results showed that the Partial Duration series with the Generalized Pareto distribution and Maximum Likelihood parameter estimation provides the best representation of extreme rainfall events in Peninsular Malaysia for the majority of the stations studied. Based on these findings, several return values are derived and spatial maps are constructed to identify the distribution characteristics of extreme rainfall in Peninsular Malaysia.
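
As an aside on the distribution fitting, a Generalized Pareto fit to threshold excesses can be sketched with the method of moments. The study itself uses Maximum Likelihood and L-moments, so this simpler closed-form estimator is only a stand-in:

```python
def gpd_fit_moments(data, threshold):
    """Fit a Generalized Pareto distribution to threshold excesses by the
    method of moments: xi = (1 - m^2/v)/2, sigma = m(1 + m^2/v)/2,
    where m and v are the sample mean and variance of the excesses."""
    excesses = [x - threshold for x in data if x > threshold]
    n = len(excesses)
    mean = sum(excesses) / n
    var = sum((y - mean) ** 2 for y in excesses) / (n - 1)
    shape = 0.5 * (1 - mean ** 2 / var)          # xi
    scale = 0.5 * mean * (1 + mean ** 2 / var)   # sigma
    return shape, scale
```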

  17. Geodesic regression for image time-series.

    PubMed

    Niethammer, Marc; Huang, Yang; Vialard, François-Xavier

    2011-01-01

    Registration of image-time series has so far been accomplished (i) by concatenating registrations between image pairs, (ii) by solving a joint estimation problem resulting in piecewise geodesic paths between image pairs, (iii) by kernel based local averaging or (iv) by augmenting the joint estimation with additional temporal irregularity penalties. Here, we propose a generative model extending least squares linear regression to the space of images by using a second-order dynamic formulation for image registration. Unlike previous approaches, the formulation allows for a compact representation of an approximation to the full spatio-temporal trajectory through its initial values. The method also opens up possibilities to design image-based approximation algorithms. The resulting optimization problem is solved using an adjoint method.

  18. 31 CFR 351.10 - What do I need to know about market yields, or market bid yields, to understand redemption value...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ..., SERIES EE Maturities, Redemption Values, and Investment Yields of Series EE Savings Bonds General... securities. This curve relates the yield on a security to its time to maturity. Yields at particular points...

  19. 31 CFR 351.10 - What do I need to know about market yields, or market bid yields, to understand redemption value...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ..., SERIES EE Maturities, Redemption Values, and Investment Yields of Series EE Savings Bonds General... securities. This curve relates the yield on a security to its time to maturity. Yields at particular points...

  20. 31 CFR 351.10 - What do I need to know about market yields, or market bid yields, to understand redemption value...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., SERIES EE Maturities, Redemption Values, and Investment Yields of Series EE Savings Bonds General... securities. This curve relates the yield on a security to its time to maturity. Yields at particular points...

  1. Topological data analysis of financial time series: Landscapes of crashes

    NASA Astrophysics Data System (ADS)

    Gidea, Marian; Katz, Yuri

    2018-02-01

    We explore the evolution of daily returns of four major US stock market indices during the technology crash of 2000 and the financial crisis of 2007-2009. Our methodology is based on topological data analysis (TDA). We use persistent homology to detect and quantify topological patterns that appear in multidimensional time series. Using a sliding window, we extract time-dependent point cloud data sets, to which we associate a topological space. We detect transient loops that appear in this space, and we measure their persistence. This is encoded in real-valued functions referred to as 'persistence landscapes'. We quantify the temporal changes in persistence landscapes via their Lp-norms. We test this procedure on multidimensional time series generated by various non-linear and non-equilibrium models. We find that, in the vicinity of financial meltdowns, the Lp-norms exhibit strong growth prior to the primary peak, which ascends during a crash. Remarkably, the average spectral density at low frequencies of the time series of Lp-norms of the persistence landscapes demonstrates a strong rising trend for 250 trading days prior to either the dotcom crash on 03/10/2000 or the Lehman bankruptcy on 09/15/2008. Our study suggests that TDA provides a new type of econometric analysis, which complements the standard statistical measures. The method can be used to detect early warning signals of imminent market crashes. We believe that this approach can be used beyond the analysis of financial time series presented here.

  2. American Society and Economic Policy: What Should Our Goals Be? Public Talk Series.

    ERIC Educational Resources Information Center

    Niedergang, Mark

    This program guide encourages discussion on the fundamental values on which the government's economic policies are based. This public talk series program is designed for the discussion of critical social and political issues through a balanced, nonpartisan presentation of a spectrum of views. The core of the program is consideration of four…

  3. Using lagged dependence to identify (de)coupled surface and subsurface soil moisture values

    NASA Astrophysics Data System (ADS)

    Carranza, Coleen D. U.; van der Ploeg, Martine J.; Torfs, Paul J. J. F.

    2018-04-01

    Recent advances in radar remote sensing popularized the mapping of surface soil moisture at different spatial scales. Surface soil moisture measurements are used in combination with hydrological models to determine subsurface soil moisture values. However, variability of soil moisture across the soil column is important for estimating depth-integrated values, as decoupling between surface and subsurface can occur. In this study, we employ new methods to investigate the occurrence of (de)coupling between surface and subsurface soil moisture. Using time series datasets, lagged dependence was incorporated in assessing (de)coupling with the idea that surface soil moisture conditions will be reflected at the subsurface after a certain delay. The main approach involves the application of a distributed-lag nonlinear model (DLNM) to simultaneously represent both the functional relation and the lag structure in the time series. The results of an exploratory analysis using residuals from a fitted loess function serve as a posteriori information to determine (de)coupled values. Both methods allow for a range of (de)coupled soil moisture values to be quantified. Results provide new insights into the decoupled range as its occurrence among the sites investigated is not limited to dry conditions.
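
The notion of lagged dependence can be illustrated with a much simpler tool than the DLNM used in the paper: a lagged Pearson correlation between surface and subsurface series, where the best-scoring lag approximates the delay and uniformly weak correlations hint at decoupling. The `max_lag` of 5 steps here is an arbitrary assumption:

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def best_lag(surface, subsurface, max_lag=5):
    """Return the lag (in time steps) at which surface moisture best predicts
    subsurface moisture, plus the score at every lag; weak correlation at all
    lags suggests (de)coupling between the two depths."""
    scores = {
        lag: pearson(surface[:len(surface) - lag], subsurface[lag:])
        for lag in range(max_lag + 1)
    }
    return max(scores, key=lambda lag: scores[lag]), scores
```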

  4. Fractal dynamics of heartbeat time series of young persons with metabolic syndrome

    NASA Astrophysics Data System (ADS)

    Muñoz-Diosdado, A.; Alonso-Martínez, A.; Ramírez-Hernández, L.; Martínez-Hernández, G.

    2012-10-01

    In recent years, many physiological systems have been quantitatively characterized using fractal analysis. We applied it to study the heart rate variability of young subjects with metabolic syndrome (MS); we examined RR time series (the time between two R waves in the ECG) with the detrended fluctuation analysis (DFA) method, Higuchi's fractal dimension method and multifractal analysis to detect the possible presence of heart problems. The results show that although these young persons have MS, the majority do not present alterations in heart dynamics. However, there were cases where the fractal parameter values differed significantly from the values for healthy people.
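
Of the three methods named above, Higuchi's fractal dimension is compact enough to sketch. This is a textbook implementation rather than the authors' code, and `kmax` is an assumed tuning parameter; a straight line gives dimension 1, uncorrelated noise close to 2:

```python
import math

def higuchi_fd(x, kmax=8):
    """Higuchi's fractal dimension of a 1-D series: build curve-length estimates
    L(k) at scales k and take the slope of log L(k) versus log(1/k)."""
    n = len(x)
    logk, logl = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            pts = [x[i] for i in range(m, n, k)]   # subsampled series starting at m
            if len(pts) < 2:
                continue
            dist = sum(abs(pts[i] - pts[i - 1]) for i in range(1, len(pts)))
            norm = (n - 1) / ((len(pts) - 1) * k)  # compensate for subsampling
            lengths.append(dist * norm / k)
        logk.append(math.log(1.0 / k))
        logl.append(math.log(sum(lengths) / len(lengths)))
    # least-squares slope of logl against logk
    mk, ml = sum(logk) / len(logk), sum(logl) / len(logl)
    return sum((a - mk) * (b - ml) for a, b in zip(logk, logl)) / sum((a - mk) ** 2 for a in logk)
```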

  5. Do Value-Added Methods Level the Playing Field for Teachers? What We Know Series: Value-Added Methods and Applications. Knowledge Brief 2

    ERIC Educational Resources Information Center

    McCaffrey, Daniel F.

    2012-01-01

    Value-added models have caught the interest of policymakers because, unlike using student tests scores for other means of accountability, they purport to "level the playing field." That is, they supposedly reflect only a teacher's effectiveness, not whether she teaches high- or low-income students, for instance, or students in accelerated or…

  6. Monitoring cotton root rot by synthetic Sentinel-2 NDVI time series using improved spatial and temporal data fusion.

    PubMed

    Wu, Mingquan; Yang, Chenghai; Song, Xiaoyu; Hoffmann, Wesley Clint; Huang, Wenjiang; Niu, Zheng; Wang, Changyao; Li, Wang; Yu, Bo

    2018-01-31

    To better understand the progression of cotton root rot within the season, time series monitoring is required. In this study, an improved spatial and temporal data fusion approach (ISTDFA) was employed to combine 250-m Moderate Resolution Imaging Spectroradiometer (MODIS) Normalized Difference Vegetation Index (NDVI) and 10-m Sentinel-2 NDVI data to generate a synthetic Sentinel-2 NDVI time series for monitoring this disease. Then, the phenology of healthy cotton and infected cotton was modeled using a logistic model. Finally, several phenology parameters, including the onset day of greenness minimum (OGM), growing season length (GSL), onset of greenness increase (OGI), max NDVI value, and integral area of the phenology curve, were calculated. The results showed that ISTDFA could be used to combine time series MODIS and Sentinel-2 NDVI data with a correlation coefficient of 0.893. The logistic model could describe the phenology curves with R-squared values from 0.791 to 0.969. Moreover, the phenology curve of infected cotton showed a significant difference from that of healthy cotton. The max NDVI value, OGM, GSL and the integral area of the phenology curve for infected cotton were reduced by 0.045, 30 days, 22 days, and 18.54%, respectively, compared with those for healthy cotton.
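
The phenology parameters listed above (OGI, OGM, GSL, max NDVI, integral area) can be illustrated directly from an NDVI profile. The sketch below skips the paper's logistic-model fitting step and marks onset/offset with a simple greenness threshold, which is an assumption of this illustration:

```python
def phenology_metrics(ndvi, dates, threshold=0.3):
    """Extract simple phenology parameters from an NDVI profile:
    onset of greenness increase (OGI), onset of greenness minimum (OGM),
    growing season length (GSL), max NDVI, and the integral area of the
    curve (trapezoid rule). Onset/offset are the first/last dates above
    a greenness threshold."""
    above = [d for d, v in zip(dates, ndvi) if v > threshold]
    ogi, ogm = above[0], above[-1]
    area = sum(
        (dates[i] - dates[i - 1]) * (ndvi[i] + ndvi[i - 1]) / 2
        for i in range(1, len(ndvi))
    )
    return {"OGI": ogi, "OGM": ogm, "GSL": ogm - ogi,
            "max_ndvi": max(ndvi), "area": area}
```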

  7. Legends Lecture Series

    NASA Image and Video Library

    2011-10-13

    Stennis Space Center Director Patrick Scheuermann (right) welcomes former leaders to the fourth Legends Lecture Series presentation Oct. 13. Stennis launched the series in November 2010 as part of a yearlong 50th anniversary celebration. The recent session focused on past rocket engine test work. Visiting Stennis legends were: (l to r) Dave Geiger, Patrick Mooney, Boyce Mix, J. Stephens Dick, James Taylor and Marvin Carpenter.

  8. Non-invasive breast biopsy method using GD-DTPA contrast enhanced MRI series and F-18-FDG PET/CT dynamic image series

    NASA Astrophysics Data System (ADS)

    Magri, Alphonso William

    algorithm. The best-fit parameters were used to create 3D parametric images. Compartmental modeling evaluation was based on the ability of parameter values to differentiate between tissue types. This evaluation was used on registered and unregistered image series and found that registration improved results. (5) PET and MR parametric images were registered through FEM- and FFD-based registration. Parametric image registration was evaluated using similarity measurements, target registration error, and qualitative comparison. Comparing FFD and FEM-based registration results showed that the FEM method is superior. This five-step process constitutes a novel multifaceted approach to a nonsurgical breast biopsy that successfully executes each step. Comparison of this method to biopsy still needs to be done with a larger set of subject data.

  9. Normalization of time-series satellite reflectance data to a standard sun-target-sensor geometry using a semi-empirical model

    NASA Astrophysics Data System (ADS)

    Zhao, Yongguang; Li, Chuanrong; Ma, Lingling; Tang, Lingli; Wang, Ning; Zhou, Chuncheng; Qian, Yonggang

    2017-10-01

    Time series of satellite reflectance data have been widely used to characterize environmental phenomena, describe trends in vegetation dynamics and study climate change. However, several sensors with wide spatial coverage and high observation frequency are designed to have a large field of view (FOV), which causes variations in the sun-target-sensor geometry in time-series reflectance data. In this study, on the basis of the semi-empirical kernel-driven BRDF model, a new semi-empirical model was proposed to normalize the sun-target-sensor geometry of remote sensing images. To evaluate the proposed model, bidirectional reflectance under different canopy growth conditions simulated by the Discrete Anisotropic Radiative Transfer (DART) model was used. The semi-empirical model was first fitted using all simulated bidirectional reflectance. The experimental result showed a good fit between the bidirectional reflectance estimated by the proposed model and the simulated value. Then, MODIS time-series reflectance data were normalized to a common sun-target-sensor geometry by the proposed model. The experimental results showed that the proposed model yielded good fits between the observed and estimated values. The noise-like fluctuations in the time-series reflectance data were also reduced after the sun-target-sensor normalization process.

  10. Diagnostic significance of rib series in minor thorax trauma compared to plain chest film and computed tomography.

    PubMed

    Hoffstetter, Patrick; Dornia, Christian; Schäfer, Stephan; Wagner, Merle; Dendl, Lena M; Stroszczynski, Christian; Schreyer, Andreas G

    2014-01-01

    Rib series (RS) are a special radiological technique to improve the visualization of the bony parts of the chest. The aim of this study was to evaluate the diagnostic accuracy of rib series in minor thorax trauma. Retrospective study of 56 patients who received RS; 39 patients were additionally evaluated by plain chest film (PCF). All patients underwent computed tomography (CT) of the chest. RS and PCF were re-read independently by three radiologists, and the results were compared with CT as the gold standard. Sensitivity, specificity, and negative and positive predictive values were calculated. The significance of differences in findings was determined by the McNemar test, and interobserver variability by Cohen's kappa test. 56 patients were evaluated (34 men, 22 women, mean age 61 years). In 22 patients one or more rib fractures could be identified by CT. In 18 of these cases (82%) the correct diagnosis was made by RS, and in 16 cases (73%) the correct number of involved ribs was detected. These differences were significant (p = 0.03). Specificity was 100%; negative and positive predictive values were 85% and 100%. Kappa values for interobserver agreement were 0.92-0.96. Sensitivity of PCF was 46%, significantly lower (p = 0.008) than that of CT. Rib series does not seem to be a useful examination for evaluating minor thorax trauma. CT seems to be the method of choice to detect rib fractures, but the clinical value of radiological proof has to be discussed and investigated in larger follow-up studies.
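
The reported accuracy figures follow from standard 2x2-table formulas, sketched below. The cell counts are inferred from the abstract for illustration (18 true positives, 4 false negatives, 34 fracture-free patients with no false positives); the published NPV of 85% suggests the per-reader tallies differed slightly from these pooled counts:

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table,
    with the reference standard (here, CT) defining true disease status."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Illustrative counts inferred from the abstract (not the paper's exact per-reader data).
m = diagnostic_accuracy(tp=18, fp=0, fn=4, tn=34)
```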

  11. A 305 year monthly rainfall series for the Island of Ireland (1711-2016)

    NASA Astrophysics Data System (ADS)

    Murphy, Conor; Burt, Tim P.; Broderick, Ciaran; Duffy, Catriona; Macdonald, Neil; Matthews, Tom; McCarthy, Mark P.; Mullan, Donal; Noone, Simon; Ryan, Ciara; Thorne, Peter; Walsh, Seamus; Wilby, Robert L.

    2017-04-01

    This paper derives a continuous 305-year monthly rainfall series for the Island of Ireland (IoI) for the period 1711-2016. Two key data sources are employed: i) a previously unpublished UK Met Office Note which compiled annual rainfall anomalies and corresponding monthly per mille amounts from weather diaries and early observational records for the period 1711-1977; and ii) a long-term, homogenised monthly IoI rainfall series for the period 1850-2016. Using estimates of long-term average precipitation sampled from the quality assured series, the full record is reconstituted and insights drawn regarding notable periods and the range of climate variability and change experienced. Consistency with other long records for the region is examined, including: the England and Wales Precipitation series (EWP; 1766-2016); the early EWP Glasspoole series (1716-1765) and the Central England Temperature series (CET; 1711-2016). Strong correspondence between all records is noted from 1780 onwards. While disparities are evident between the early EWP and Ireland series, the latter shows strong decadal consistency with CET throughout the record. In addition, independent, early observations from Cork and Dublin, along with available documentary sources, corroborate the derived series and add confidence to our reconstruction. The new IoI rainfall record reveals that the wettest decades occurred in the early 18th Century, despite the fact that IoI has experienced a long-term winter wetting trend consistent with climate model projections. These exceptionally wet winters of the 1720s and 1730s were concurrent with almost unprecedented warmth in the CET, glacial advance throughout Scandinavia, and glacial retreat in West Greenland, consistent with a wintertime NAO-type forcing. Our study therefore demonstrates the value of long-term observational records for providing insight to the natural climate variability of the North Atlantic region.

  12. Exponential series approaches for nonparametric graphical models

    NASA Astrophysics Data System (ADS)

    Janofsky, Eric

    Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. We use simulations to compare our

  13. Multiple Indicator Stationary Time Series Models.

    ERIC Educational Resources Information Center

    Sivo, Stephen A.

    2001-01-01

    Discusses the propriety and practical advantages of specifying multivariate time series models in the context of structural equation modeling for time series and longitudinal panel data. For time series data, the multiple indicator model specification improves on classical time series analysis. For panel data, the multiple indicator model…

  14. Nonparametric autocovariance estimation from censored time series by Gaussian imputation.

    PubMed

    Park, Jung Wook; Genton, Marc G; Ghosh, Sujit K

    2009-02-01

    One of the most frequently used methods to model the autocovariance function of a second-order stationary time series is to use the parametric framework of autoregressive and moving average models developed by Box and Jenkins. However, such parametric models, though very flexible, may not always be adequate to model autocovariance functions with sharp changes. Furthermore, if the data do not follow the parametric model and are censored at a certain value, the estimation results may not be reliable. We develop a Gaussian imputation method to estimate an autocovariance structure via nonparametric estimation of the autocovariance function in order to address both censoring and incorrect model specification. We demonstrate the effectiveness of the technique in terms of bias and efficiency with simulations under various rates of censoring and underlying models. We describe its application to a time series of silicon concentrations in the Arctic.
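
For context, the basic nonparametric (plug-in) autocovariance estimator that the paper builds on, before any censoring or Gaussian imputation enters the picture, looks like this:

```python
def sample_autocovariance(x, max_lag):
    """Plug-in estimate of the autocovariance function
    gamma(h) = E[(X_t - mu)(X_{t+h} - mu)] of a stationary series,
    using the conventional divisor n at every lag."""
    n = len(x)
    mu = sum(x) / n
    return [
        sum((x[t] - mu) * (x[t + h] - mu) for t in range(n - h)) / n
        for h in range(max_lag + 1)
    ]
```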

  15. Modeling Philippine Stock Exchange Composite Index Using Time Series Analysis

    NASA Astrophysics Data System (ADS)

    Gayo, W. S.; Urrutia, J. D.; Temple, J. M. F.; Sandoval, J. R. D.; Sanglay, J. E. A.

    2015-06-01

    This study was conducted to develop a time series model of the Philippine Stock Exchange Composite Index and its volatility using a finite mixture of ARIMA models with conditional variance equations such as ARCH, GARCH, EGARCH, TARCH and PARCH models. The study also aimed to find the reason behind the behavior of PSEi, that is, which of the economic variables - Consumer Price Index, crude oil price, foreign exchange rate, gold price, interest rate, money supply, price-earnings ratio, Producers' Price Index and terms of trade - can be used in projecting future values of PSEi; this was examined using the Granger Causality Test. The findings showed that the best time series model for the Philippine Stock Exchange Composite Index is ARIMA(1,1,5)-ARCH(1). Also, Consumer Price Index, crude oil price and foreign exchange rate are factors concluded to Granger-cause the Philippine Stock Exchange Composite Index.

  16. Volatility of linear and nonlinear time series

    NASA Astrophysics Data System (ADS)

    Kalisky, Tomer; Ashkenazy, Yosef; Havlin, Shlomo

    2005-07-01

    Previous studies indicated that nonlinear properties of Gaussian distributed time series with long-range correlations, ui, can be detected and quantified by studying the correlations in the magnitude series ∣ui∣, the “volatility.” However, the origin of this empirical observation remains unclear and the exact relation between the correlations in ui and the correlations in ∣ui∣ is still unknown. Here we develop analytical relations between the scaling exponent of a linear series ui and that of its magnitude series ∣ui∣. Moreover, we find that nonlinear time series exhibit stronger (or the same) correlations in the magnitude time series compared with linear time series with the same two-point correlations. Based on these results we propose a simple model that generates multifractal time series by explicitly inserting long-range correlations in the magnitude series; the nonlinear multifractal time series is generated by multiplying a long-range correlated time series (that represents the magnitude series) with an uncorrelated time series [that represents the sign series sgn(ui)]. We apply our techniques to daily deep ocean temperature records from the equatorial Pacific, the region of the El Niño phenomenon, and find: (i) long-range correlations from several days to several years with 1/f power spectrum, (ii) significant nonlinear behavior as expressed by long-range correlations of the volatility series, and (iii) a broad multifractal spectrum.
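    The sign-times-magnitude construction described above can be sketched as follows. The long-range correlated magnitude is generated with the standard Fourier-filtering (1/f^β) method; the exponent β = 0.8 and the series length are illustrative choices, not the authors' values:

```python
import numpy as np

def long_range_series(n, beta, rng):
    """Fourier-filtering method: shape white noise to a 1/f**beta power spectrum."""
    freqs = np.fft.rfftfreq(n)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-beta / 2)          # amplitude ~ f^(-beta/2)
    phases = np.exp(2j * np.pi * rng.random(len(freqs)))
    s = np.fft.irfft(amp * phases, n)
    return s / s.std()

rng = np.random.default_rng(1)
# Long-range correlated magnitude series times an uncorrelated sign series.
magnitude = np.abs(long_range_series(4096, beta=0.8, rng=rng))
signs = rng.choice([-1.0, 1.0], size=4096)
series = signs * magnitude
```

By construction the two-point correlations of `series` are destroyed by the random signs, while the volatility ∣series∣ retains the long-range correlations, which is the mechanism the paper uses to inject nonlinearity.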

  17. Forecasting air quality time series using deep learning.

    PubMed

    Freeman, Brian S; Taylor, Graham; Gharabaghi, Bahram; Thé, Jesse

    2018-04-13

    This paper presents one of the first applications of deep learning (DL) techniques to predict air pollution time series. Air quality management relies extensively on time series data captured at air monitoring stations as the basis of identifying population exposure to airborne pollutants and determining compliance with local ambient air standards. In this paper, 8-hr averaged surface ozone (O3) concentrations were predicted using deep learning consisting of a recurrent neural network (RNN) with long short-term memory (LSTM). Hourly air quality and meteorological data were used to train and forecast values up to 72 hours with low error rates. The LSTM was also able to forecast the duration of continuous O3 exceedances. Prior to training the network, the dataset was reviewed for missing data and outliers. Missing data were imputed using a novel technique that averaged gaps of less than eight time steps with incremental steps based on first-order differences of neighboring time periods. Data were then used to train decision trees to evaluate input feature importance over different time prediction horizons. The number of features used to train the LSTM model was reduced from 25 features to 5 features, resulting in improved accuracy as measured by Mean Absolute Error (MAE). Parameter sensitivity analysis identified the look-back nodes associated with the RNN as a significant source of error if not aligned with the prediction horizon. Overall, MAEs of less than 2 were calculated for predictions out to 72 hours. Novel deep learning techniques were used to train an 8-hour averaged ozone forecast model. Missing data and outliers within the captured data set were replaced using a new imputation method that generated calculated values closer to the expected value based on the time and season. Decision trees were used to identify input variables with the greatest importance. The methods presented in this paper allow air managers to forecast long range air pollution
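    The abstract does not fully specify the imputation rule, so the sketch below is only a stand-in for it: NaN runs shorter than eight steps are filled by stepping linearly between the bounding observations (i.e., constant first-order differences across the gap), and longer gaps are left untouched.

```python
import numpy as np

def fill_short_gaps(x, max_gap=8):
    """Fill NaN runs shorter than max_gap by equal first-difference steps
    between the bounding observed values; leave longer runs as NaN."""
    x = np.asarray(x, float).copy()
    isnan = np.isnan(x)
    i = 0
    while i < len(x):
        if isnan[i]:
            j = i
            while j < len(x) and isnan[j]:
                j += 1
            gap = j - i
            # Only fill interior gaps shorter than max_gap.
            if 0 < i and j < len(x) and gap < max_gap:
                x[i:j] = np.linspace(x[i - 1], x[j], gap + 2)[1:-1]
            i = j
        else:
            i += 1
    return x
```

For example, `[1, NaN, NaN, 4]` becomes `[1, 2, 3, 4]`, while a gap of eight or more steps is left missing.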

  18. Two mantle sources, two plumbing systems: Tholeiitic and alkaline magmatism of the Maymecha River basin, Siberian flood volcanic province

    USGS Publications Warehouse

    Arndt, N.; Chauvel, C.; Czamanske, G.; Fedorenko, V.

    1998-01-01

    Rocks of two distinctly different magma series are found in a ~4000-m-thick sequence of lavas and tuffs in the Maymecha River basin, which is part of the Siberian flood-volcanic province. The tholeiites are typical low-Ti continental flood basalts with remarkably restricted, petrologically evolved compositions. They have basaltic MgO contents, moderate concentrations of incompatible trace elements, moderate fractionation of incompatible from compatible elements, distinct negative Ta(Nb) anomalies, and εNd values of 0 to +2. The primary magmas were derived from a relatively shallow mantle source, and evolved in large crustal magma chambers where they acquired their relatively uniform compositions and became contaminated with continental crust. An alkaline series, in contrast, contains a wide range of rock types, from meymechite and picrite to trachytes, with a wide range of compositions (MgO from 0.7 to 38 wt%, SiO2 from 40 to 69 wt%, Ce from 14 to 320 ppm), high concentrations of incompatible elements and extreme fractionation of incompatible from compatible elements (Al2O3/TiO2 ≈ 1; Sm/Yb up to 11). These rocks lack Ta(Nb) anomalies and have a broad range of εNd values, from -2 to +5. The parental magmas are believed to have formed by low-degree melting at extreme mantle depths (>200 km). They bypassed the large crustal magma chambers and ascended rapidly to the surface, a consequence, perhaps, of high volatile contents in the primary magmas. The tholeiitic series dominates the lower part of the sequence and the alkaline series the upper part; at the interface, the two types are interlayered. The succession thus provides evidence of a radical change in the site of mantle melting, and the simultaneous operation of two very different crustal plumbing systems, during the evolution of this flood-volcanic province. © Springer-Verlag 1998.

  19. Rydberg series in the lanthanides and actinides observed by stepwise laser excitation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Worden, E.F.; Solarz, R.W.; Paisner, J.A.

    1977-05-18

    The techniques of stepwise laser excitation were applied to obtain Rydberg series in the lanthanides and in uranium. The methods employed circumvent many of the experimental difficulties inherent in conventional absorption spectroscopy of these heavy atoms with very complex spectra. The Rydberg series observed have allowed the determination of accurate ionization limits. The values in eV are: Ce, 5.5387(4); Nd, 5.5250(6); Sm, 5.6437(10); Eu, 5.6704(3); Gd, 6.1502(6); Tb, 5.8639(6); Dy, 5.9390(6); Ho, 6.0216(6); Er, 6.1077(6); U, 6.1941(5). A comparison of the f^n s^2 - f^n s ionization limits as a function of n with theoretical calculations is made.
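    Ionization limits are extracted by fitting the observed level energies to the Rydberg formula E_n = IP - R_eff/(n - δ)². A sketch on synthetic levels (the quantum defect, effective principal quantum numbers, and the 6.19 eV limit below are hypothetical stand-ins, not the measured data):

```python
import numpy as np

RYDBERG_EV = 13.605693   # Rydberg energy in eV

def fit_ionization_limit(n_vals, energies, deltas=np.linspace(0, 3, 601)):
    """Grid-search the quantum defect delta; the ionization potential and the
    effective Rydberg constant follow by linear least squares on
    E_n = IP - R_eff / (n - delta)**2."""
    best = None
    for d in deltas:
        u = 1.0 / (n_vals - d) ** 2
        X = np.column_stack([np.ones_like(u), -u])   # columns: IP, R_eff
        beta, *_ = np.linalg.lstsq(X, energies, rcond=None)
        rss = np.sum((energies - X @ beta) ** 2)
        if best is None or rss < best[0]:
            best = (rss, beta[0], d)
    return best[1], best[2]   # IP estimate (eV), quantum defect

# Synthetic Rydberg series with a known limit, to exercise the fit.
n_vals = np.arange(10, 31, dtype=float)
true_ip, delta = 6.19, 1.5
energies = true_ip - RYDBERG_EV / (n_vals - delta) ** 2
ip_est, d_est = fit_ionization_limit(n_vals, energies)
```

With real spectra the same fit is applied to the measured term energies of the observed series members.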

  20. The transformed-stationary approach: a generic and simplified methodology for non-stationary extreme value analysis

    NASA Astrophysics Data System (ADS)

    Mentaschi, Lorenzo; Vousdoukas, Michalis; Voukouvalas, Evangelos; Sartini, Ludovica; Feyen, Luc; Besio, Giovanni; Alfieri, Lorenzo

    2016-09-01

    Statistical approaches to study extreme events require, by definition, long time series of data. In many scientific disciplines, these series are often subject to variations at different temporal scales that affect the frequency and intensity of their extremes. Therefore, the assumption of stationarity is violated and alternative methods to conventional stationary extreme value analysis (EVA) must be adopted. Using the example of environmental variables subject to climate change, in this study we introduce the transformed-stationary (TS) methodology for non-stationary EVA. This approach consists of (i) transforming a non-stationary time series into a stationary one, to which the stationary EVA theory can be applied, and (ii) reverse transforming the result into a non-stationary extreme value distribution. As a transformation, we propose and discuss a simple time-varying normalization of the signal and show that it enables a comprehensive formulation of non-stationary generalized extreme value (GEV) and generalized Pareto distribution (GPD) models with a constant shape parameter. A validation of the methodology is carried out on time series of significant wave height, residual water level, and river discharge, which show varying degrees of long-term and seasonal variability. The results from the proposed approach are comparable with the results from (a) a stationary EVA on quasi-stationary slices of non-stationary series and (b) the established method for non-stationary EVA. However, the proposed technique comes with advantages in both cases. For example, in contrast to (a), the proposed technique uses the whole time horizon of the series for the estimation of the extremes, allowing for a more accurate estimation of large return levels. Furthermore, with respect to (b), it decouples the detection of non-stationary patterns from the fitting of the extreme value distribution. As a result, the steps of the analysis are simplified and intermediate diagnostics are
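    The core of the TS approach, standardizing with a slowly varying mean and standard deviation and then mapping results back, can be sketched as below. The running-window statistics and the window length are illustrative choices, not the authors' exact transformation:

```python
import numpy as np

def ts_transform(x, window):
    """Time-varying normalization: y_t = (x_t - mu_t) / sigma_t, where mu_t and
    sigma_t are running (windowed) mean and standard deviation."""
    n = len(x)
    mu = np.empty(n)
    sigma = np.empty(n)
    half = window // 2
    for t in range(n):
        seg = x[max(0, t - half):t + half + 1]
        mu[t] = seg.mean()
        sigma[t] = seg.std()
    return (x - mu) / sigma, mu, sigma

def ts_inverse(y, mu, sigma):
    """Map a level in the (stationary) transformed domain back to original units."""
    return y * sigma + mu

rng = np.random.default_rng(4)
t = np.arange(2000)
x = 0.002 * t + rng.standard_normal(2000)     # trend violates stationarity
y, mu, sigma = ts_transform(x, window=201)     # y is approximately stationary
```

Stationary EVA (GEV/GPD fitting) is then applied to `y`, and return levels are mapped back per epoch with `ts_inverse`, which is what makes the non-stationary extreme value distribution time-dependent through mu_t and sigma_t.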

  1. Advanced methods for modeling water-levels and estimating drawdowns with SeriesSEE, an Excel add-in

    USGS Publications Warehouse

    Halford, Keith; Garcia, C. Amanda; Fenelon, Joe; Mirus, Benjamin B.

    2012-12-21

    Water-level modeling is used for multiple-well aquifer tests to reliably differentiate pumping responses from natural water-level changes in wells, or “environmental fluctuations.” Synthetic water levels are created during water-level modeling and represent the summation of multiple component fluctuations, including those caused by environmental forcing and pumping. Pumping signals are modeled by transforming step-wise pumping records into water-level changes by using superimposed Theis functions. Water-levels can be modeled robustly with this Theis-transform approach because environmental fluctuations and pumping signals are simulated simultaneously. Water-level modeling with Theis transforms has been implemented in the program SeriesSEE, which is a Microsoft® Excel add-in. Moving average, Theis, pneumatic-lag, and gamma functions transform time series of measured values into water-level model components in SeriesSEE. Earth tides and step transforms are additional computed water-level model components. Water-level models are calibrated by minimizing a sum-of-squares objective function where singular value decomposition and Tikhonov regularization stabilize results. Drawdown estimates from a water-level model are the summation of all Theis transforms minus residual differences between synthetic and measured water levels. The accuracy of drawdown estimates is limited primarily by noise in the data sets, not the Theis-transform approach. Drawdowns much smaller than environmental fluctuations have been detected across major fault structures, at distances of more than 1 mile from the pumping well, and with limited pre-pumping and recovery data at sites across the United States. In addition to water-level modeling, utilities exist in SeriesSEE for viewing, cleaning, manipulating, and analyzing time-series data.
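    The Theis transform at the heart of this approach maps a stepwise pumping record into drawdown by superposing well-function responses. A minimal sketch using the convergent series for W(u) (SeriesSEE itself is an Excel add-in; the aquifer parameters and rates below are hypothetical):

```python
import math

EULER_GAMMA = 0.5772156649015329

def theis_w(u, terms=30):
    """Theis well function W(u) = -gamma - ln(u) + u - u^2/(2*2!) + ...,
    accurate for the small u typical of aquifer tests."""
    s = -EULER_GAMMA - math.log(u)
    sign = 1.0
    for k in range(1, terms + 1):
        s += sign * u ** k / (k * math.factorial(k))
        sign = -sign
    return s

def drawdown(t, rate_changes, change_times, T, S, r):
    """Superpose Theis responses for a stepwise pumping record.

    rate_changes[i] is the change in pumping rate starting at change_times[i];
    T = transmissivity, S = storativity, r = distance to the pumping well
    (consistent units assumed, e.g. m, days, m^2/day)."""
    s = 0.0
    for dq, t0 in zip(rate_changes, change_times):
        if t > t0:
            u = r * r * S / (4.0 * T * (t - t0))
            s += dq / (4.0 * math.pi * T) * theis_w(u)
    return s
```

A recovery period is just another step with a negative rate change, which is how a step-wise pumping record becomes a sum of Theis functions.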

  2. On statistical inference in time series analysis of the evolution of road safety.

    PubMed

    Commandeur, Jacques J F; Bijleveld, Frits D; Bergel-Hayat, Ruth; Antoniou, Constantinos; Yannis, George; Papadimitriou, Eleonora

    2013-11-01

    Data collected for building a road safety observatory usually include observations made sequentially through time. Examples of such data, called time series data, include annual (or monthly) number of road traffic accidents, traffic fatalities or vehicle kilometers driven in a country, as well as the corresponding values of safety performance indicators (e.g., data on speeding, seat belt use, alcohol use, etc.). Some commonly used statistical techniques imply assumptions that are often violated by the special properties of time series data, namely serial dependency among disturbances associated with the observations. The first objective of this paper is to demonstrate the impact of such violations on the applicability of standard methods of statistical inference, which leads to an under- or overestimation of the standard error and consequently may produce erroneous inferences. Moreover, having established the adverse consequences of ignoring serial dependency issues, the paper aims to describe rigorous statistical techniques used to overcome them. In particular, appropriate time series analysis techniques of varying complexity are employed to describe the development over time, relating the accident-occurrences to explanatory factors such as exposure measures or safety performance indicators, and forecasting the development into the near future. Traditional regression models (whether they are linear, generalized linear or nonlinear) are shown not to naturally capture the inherent dependencies in time series data. Dedicated time series analysis techniques, such as the ARMA-type and DRAG approaches, are discussed next, followed by structural time series models, which are a subclass of state space methods. The paper concludes with general recommendations and practice guidelines for the use of time series models in road safety research. Copyright © 2012 Elsevier Ltd. All rights reserved.
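    The central warning, that serial dependency makes i.i.d.-based standard errors wrong, is easy to demonstrate by simulation. The sketch below generates many AR(1) series and compares the naive standard error of the sample mean with the spread actually observed; for ρ = 0.7 the theoretical inflation factor is sqrt((1+ρ)/(1-ρ)) ≈ 2.4 (illustrative parameters, not road-safety data):

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n, reps = 0.7, 200, 2000

# Sample means of many independent AR(1) realizations.
means = np.empty(reps)
for i in range(reps):
    e = rng.standard_normal(n)
    x = np.empty(n)
    x[0] = e[0]
    for t in range(1, n):
        x[t] = rho * x[t - 1] + e[t]
    means[i] = x.mean()

# Naive i.i.d. formula: sigma_x / sqrt(n), with sigma_x^2 = 1 / (1 - rho^2).
naive_se = (1.0 / (1.0 - rho ** 2)) ** 0.5 / n ** 0.5
true_se = means.std()               # the spread actually realized
inflation = true_se / naive_se      # ~2.4: naive SE understates by this factor
```

Positive serial dependence thus makes confidence intervals from i.i.d. formulas far too narrow, which is exactly the under-estimation of the standard error the paper warns about.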

  3. Neutron monitors and muon detectors for solar modulation studies: 2. ϕ time series

    NASA Astrophysics Data System (ADS)

    Ghelfi, A.; Maurin, D.; Cheminet, A.; Derome, L.; Hubert, G.; Melot, F.

    2017-08-01

    The level of solar modulation at different times (related to the solar activity) is a central question of solar and galactic cosmic-ray physics. In the first paper of this series, we established a correspondence between the uncertainties on ground-based detector count rates and the parameter ϕ (modulation level in the force-field approximation) reconstructed from these count rates. In this second paper, we detail a procedure to obtain a reference ϕ time series from neutron monitor data. We show that we can have an unbiased and accurate ϕ reconstruction (Δϕ / ϕ ≃ 10 %). We also discuss the potential of Bonner sphere spectrometers and muon detectors to provide ϕ time series. Two by-products of this calculation are updated ϕ values for the cosmic-ray database and a web interface to retrieve and plot ϕ from the 1950s to today (http://lpsc.in2p3.fr/crdb).
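    In the force-field approximation named above, the modulated spectrum follows from shifting the interstellar spectrum by the modulation potential ϕ. A sketch for protons, where the potential in GV maps directly to an energy loss in GeV (the power-law LIS below is purely illustrative, not the paper's local interstellar spectrum):

```python
def force_field_flux(E, phi, j_lis, E0=0.938):
    """Modulated differential flux at kinetic energy E (GeV) for protons.

    Force-field approximation: J(E) = J_LIS(E + phi) * E(E + 2*E0) /
    [(E + phi)(E + phi + 2*E0)], with E0 the proton rest mass in GeV and
    phi the modulation potential (GV -> GeV for Z = A = 1)."""
    Es = E + phi   # energy the particle had in interstellar space
    return j_lis(Es) * E * (E + 2.0 * E0) / (Es * (Es + 2.0 * E0))

# Hypothetical power-law LIS, only to exercise the formula.
j_lis = lambda E: 1.0e4 * E ** -2.7
```

At ϕ = 0 the modulated flux reduces to the LIS; increasing ϕ suppresses the low-energy flux, which is the behavior the count-rate inversion exploits.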

  4. Automated Bayesian model development for frequency detection in biological time series.

    PubMed

    Granqvist, Emma; Oldroyd, Giles E D; Morris, Richard J

    2011-06-24

    Biological time series often deviate significantly from the requirements of optimality for Fourier transformation. In this paper we present an alternative approach based on Bayesian inference. We show the value of placing spectral analysis in the framework of Bayesian inference and demonstrate how model comparison can automate this procedure.
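    The model-comparison step can be illustrated with an information-criterion proxy: fit a "no oscillation" model and a "sinusoid at trial frequency f" model and compare BIC. This is only a sketch of the idea; the paper performs proper Bayesian model comparison, and the trial frequency here would normally be scanned over a grid rather than assumed:

```python
import numpy as np

def bic_lstsq(X, y):
    """BIC of a Gaussian linear model fit by least squares."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(2)
t = np.arange(200, dtype=float)
y = 1.5 * np.sin(2 * np.pi * 0.05 * t) + rng.standard_normal(200)

f = 0.05   # trial frequency (assumed known here for illustration)
X_const = np.ones((200, 1))
X_sine = np.column_stack(
    [np.ones(200), np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)]
)
# The model with the lower BIC is preferred; an oscillation is "detected"
# when the sinusoid model wins decisively.
```

Automating the frequency detection then amounts to repeating the comparison over candidate frequencies and model orders.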

  5. Automated Bayesian model development for frequency detection in biological time series

    PubMed Central

    2011-01-01

    sampled data. Biological time series often deviate significantly from the requirements of optimality for Fourier transformation. In this paper we present an alternative approach based on Bayesian inference. We show the value of placing spectral analysis in the framework of Bayesian inference and demonstrate how model comparison can automate this procedure. PMID:21702910

  6. Extreme Events: low and high total ozone over Arosa, Switzerland

    NASA Astrophysics Data System (ADS)

    Rieder, H. E.; Staehelin, J.; Maeder, J. A.; Ribatet, M.; Stübi, R.; Weihs, P.; Holawe, F.; Peter, T.; Davison, A. C.

    2009-04-01

    Coles, S.: An Introduction to Statistical Modeling of Extreme Values, Springer Series in Statistics, ISBN:1852334592, Springer, Berlin, 2001. Koch, G., H. Wernli, C. Schwierz, J. Staehelin, and T. Peter (2005), A composite study on the structure and formation of ozone miniholes and minihighs over central Europe, Geophys. Res. Lett., 32, L12810, doi:10.1029/2004GL022062. Pickands, J.: Statistical inference using extreme order statistics, Ann. Stat., 3, 1, 119-131, 1975. Ribatet, M.: POT: Modelling peaks over a threshold, R News, 7, 34-36, 2007. Rieder, H.E., Staehelin, J., Maeder, J.A., Ribatet, M., Stübi, R., Weihs, P., Holawe, F., Peter, T., and Davison, A.C.: From ozone mini holes and mini highs towards extreme value theory: New insights from extreme events and non-stationarity, submitted to J. Geophys. Res., 2009. Staehelin, J., Kegel, R., and Harris, N. R.: Trend analysis of the homogenized total ozone series of Arosa (Switzerland), 1929-1996, J. Geophys. Res., 103(D7), 8389-8400, doi:10.1029/97JD03650, 1998a. Staehelin, J., Renaud, A., Bader, J., McPeters, R., Viatte, P., Hoegger, B., Bugnion, V., Giroud, M., and Schill, H.: Total ozone series at Arosa (Switzerland): Homogenization and data comparison, J. Geophys. Res., 103(D5), 5827-5842, doi:10.1029/97JD02402, 1998b.

  7. Principal components and iterative regression analysis of geophysical series: Application to Sunspot Number (1750-2004)

    NASA Astrophysics Data System (ADS)

    Nordemann, D. J. R.; Rigozo, N. R.; de Souza Echer, M. P.; Echer, E.

    2008-11-01

    We present here an implementation of a least squares iterative regression method applied to the sine functions embedded in the principal components extracted from geophysical time series. This method represents a useful improvement for quantitative analysis of periodicities in non-stationary time series. The principal components determination followed by the least squares iterative regression method was implemented in an algorithm written in the Scilab (2006) language. The main result of the method is the set of sine functions embedded in the analyzed series, in decreasing order of significance: from the most important ones, likely to represent the physical processes involved in the generation of the series, to the less important ones that represent noise components. Taking into account the need for deeper knowledge of the Sun's past history and its implications for global climate change, the method was applied to the Sunspot Number series (1750-2004). With the threshold and parameter values used here, the application of the method leads to a total of 441 explicit sine functions, among which 65 were considered significant and were used for a reconstruction that gave a normalized mean squared error of 0.146.
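    A minimal version of the iterative sine-extraction idea (grid-search the frequency, solve amplitude and phase by linear least squares, subtract the fitted sine, repeat) might look like this. It is an illustrative re-implementation on synthetic data, not the authors' Scilab code:

```python
import numpy as np

def fit_best_sine(t, y, freqs):
    """Grid-search frequency; amplitude/phase come from linear least squares."""
    best = None
    for f in freqs:
        X = np.column_stack([np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta) ** 2)
        if best is None or rss < best[0]:
            best = (rss, f, beta)
    return best[1], best[2]

def iterative_sines(t, y, freqs, n_components):
    """Peel off sine components one at a time, most significant first."""
    residual = y.copy()
    components = []
    for _ in range(n_components):
        f, beta = fit_best_sine(t, residual, freqs)
        X = np.column_stack([np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)])
        residual = residual - X @ beta
        components.append((f, np.hypot(*beta)))   # (frequency, amplitude)
    return components

t = np.arange(400.0)
y = 2.0 * np.sin(2 * np.pi * 0.05 * t) + 0.5 * np.sin(2 * np.pi * 0.13 * t)
freqs = np.linspace(0.01, 0.2, 191)
comps = iterative_sines(t, y, freqs, 2)   # strongest component first
```

In the paper this peeling is applied to each principal component rather than to the raw series.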

  8. Modeling Periodic Impulsive Effects on Online TV Series Diffusion.

    PubMed

    Fu, Peihua; Zhu, Anding; Fang, Qiwen; Wang, Xi

    The buzz in public social communities also represents a highly correlated analysis tool to evaluate the advertising value of TV series.

  9. Robust, automatic GPS station velocities and velocity time series

    NASA Astrophysics Data System (ADS)

    Blewitt, G.; Kreemer, C.; Hammond, W. C.

    2014-12-01

    Automation in GPS coordinate time series analysis makes results more objective and reproducible, but not necessarily as robust as the human eye to detect problems. Moreover, it is not a realistic option to manually scan our current load of >20,000 time series per day. This motivates us to find an automatic way to estimate station velocities that is robust to outliers, discontinuities, seasonality, and noise characteristics (e.g., heteroscedasticity). Here we present a non-parametric method based on the Theil-Sen estimator, defined as the median of velocities vij=(xj-xi)/(tj-ti) computed between all pairs (i, j). Theil-Sen estimators produce statistically identical solutions to ordinary least squares for normally distributed data, but they can tolerate up to 29% of data being problematic. To mitigate seasonality, our proposed estimator only uses pairs approximately separated by an integer number of years (N-δt)<(tj-ti)<(N+δt), where δt is chosen to be small enough to capture seasonality, yet large enough to reduce random error. We fix N=1 to maximally protect against discontinuities. In addition to estimating an overall velocity, we also use these pairs to estimate velocity time series. To test our methods, we process real data sets that have already been used with velocities published in the NA12 reference frame. Accuracy can be tested by the scatter of horizontal velocities in the North American plate interior, which is known to be stable to ~0.3 mm/yr. This presents new opportunities for time series interpretation. For example, the pattern of velocity variations at the interannual scale can help separate tectonic from hydrological processes. Without any step detection, velocity estimates prove to be robust for stations affected by the Mw7.2 2010 El Mayor-Cucapah earthquake, and velocity time series show a clear change after the earthquake, without any of the usual parametric constraints, such as relaxation of postseismic velocities to their preseismic values.
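    A toy version of the seasonality-suppressing Theil-Sen variant described above, with pairs constrained to roughly one-year separation, on synthetic daily data with a 3 mm/yr trend and an annual cycle (the numbers are illustrative, not GPS data):

```python
import numpy as np

def annual_pair_velocity(t, x, dt=0.1):
    """Median slope over pairs separated by ~1 year (N=1), which cancels the
    annual cycle because both samples sit at nearly the same seasonal phase."""
    t = np.asarray(t, float)
    x = np.asarray(x, float)
    slopes = []
    for i in range(len(t)):
        sep = t - t[i]
        mask = (sep > 1.0 - dt) & (sep < 1.0 + dt)   # forward pairs only
        slopes.extend((x[mask] - x[i]) / sep[mask])
    return np.median(slopes)

# Synthetic daily series: t in years, x in mm.
t = np.arange(0.0, 5.0, 1 / 365.25)
x = 3.0 * t + 2.0 * np.sin(2 * np.pi * t)   # 3 mm/yr trend + annual cycle
v = annual_pair_velocity(t, x)              # ~3 mm/yr despite the seasonality
```

An ordinary slope through any short sub-window of this series would be badly biased by the 2 mm annual cycle; the one-year pairing removes it without modeling it.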

  10. Modeling Periodic Impulsive Effects on Online TV Series Diffusion

    PubMed Central

    Fang, Qiwen; Wang, Xi

    2016-01-01

    The buzz in public social communities also represents a highly correlated analysis tool to evaluate the advertising value of TV series. PMID:27669520

  11. Analysis of series resonant converter with series-parallel connection

    NASA Astrophysics Data System (ADS)

    Lin, Bor-Ren; Huang, Chien-Lan

    2011-02-01

    In this study, a parallel inductor-inductor-capacitor (LLC) resonant converter series-connected on the primary side and parallel-connected on the secondary side is presented for server power supply systems. Based on series resonant behaviour, the power metal-oxide-semiconductor field-effect transistors are turned on at zero voltage switching and the rectifier diodes are turned off at zero current switching. Thus, the switching losses on the power semiconductors are reduced. In the proposed converter, the primary windings of the two LLC converters are connected in series. Thus, the two converters carry the same primary current, ensuring that they supply a balanced load current. On the output side, the two LLC converters are connected in parallel to share the load current and to reduce the current stress on the secondary windings and the rectifier diodes. In this article, the principle of operation, steady-state analysis and design considerations of the proposed converter are provided and discussed. Experiments with a laboratory prototype with a 24 V/21 A output for server power supply were performed to verify the effectiveness of the proposed converter.

  12. Patients' Values in Clinical Decision-Making.

    PubMed

    Faggion, Clovis Mariano; Pachur, Thorsten; Giannakopoulos, Nikolaos Nikitas

    2017-09-01

    Shared decision-making involves the participation of patient and dental practitioner. Well-informed decision-making requires that both parties understand important concepts that may influence the decision. This fourth article in a series of 4 aims to discuss the importance of patients' values when a clinical decision is made. We report on how to incorporate important concepts for well-informed, shared decision-making. Here, we present patient values as an important issue, in addition to previously established topics such as the risk of bias of a study, cost-effectiveness of treatment approaches, and a comparison of therapeutic benefit with potential side effects. We provide 2 clinical examples and suggestions for a decision tree, based on the available evidence. The information reported in this article may improve the relationship between patient and dental practitioner, resulting in more well-informed clinical decisions. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Common mode error in Antarctic GPS coordinate time series and its effect on bedrock-uplift estimates

    NASA Astrophysics Data System (ADS)

    Liu, Bin; King, Matt; Dai, Wujiao

    2018-05-01

    Spatially-correlated common mode error (CME) always exists in regional, or larger, GPS networks. We applied independent component analysis (ICA) to GPS vertical coordinate time series in Antarctica from 2010 to 2014 and made a comparison with principal component analysis (PCA). Using PCA/ICA, the time series can be decomposed into a set of temporal components and their spatial responses. We assume the components with common spatial responses are CME. An average reduction of ~40% in the RMS values was achieved with both PCA and ICA filtering. However, the common mode components obtained from the two approaches have different spatial and temporal features. ICA time series present interesting correlations with modeled atmospheric and non-tidal ocean loading displacements. A white noise (WN) plus power law noise (PL) model was adopted in the GPS velocity estimation using maximum likelihood estimation (MLE) analysis, with ~55% reduction of the velocity uncertainties after filtering using ICA. Meanwhile, spatiotemporal filtering reduces the amplitude of PL and periodic terms in the GPS time series. Finally, we compare the GPS uplift velocities, after correction for elastic effects, with recent models of glacial isostatic adjustment (GIA). The agreement between the GPS-observed velocities and four GIA models is generally improved after the spatiotemporal filtering, with a mean reduction of ~0.9 mm/yr in the WRMS values, possibly allowing for more confident separation of various GIA model predictions.
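    Spatiotemporal filtering of the kind compared here can be sketched with the PCA/SVD variant: the leading principal component, whose spatial response is roughly uniform across stations, is taken as the common mode error and removed. The synthetic network below is illustrative, not the Antarctic data, and the paper's preferred decomposition is ICA rather than this PCA sketch:

```python
import numpy as np

def remove_common_mode(X):
    """Remove the leading principal component (assumed CME) from a
    time-by-station matrix of coordinate residuals."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    cme = np.outer(U[:, 0] * s[0], Vt[0])   # rank-1 common-mode estimate
    return Xc - cme, cme

rng = np.random.default_rng(3)
n_epochs, n_sta = 1000, 8
common = rng.standard_normal(n_epochs)      # signal shared by every station
X = (common[:, None] * rng.uniform(0.8, 1.2, n_sta)
     + 0.3 * rng.standard_normal((n_epochs, n_sta)))
filtered, cme = remove_common_mode(X)       # RMS drops sharply after filtering
```

In practice one checks that the removed component's spatial response (`Vt[0]`) really is near-uniform before calling it CME, which is exactly the assumption stated in the abstract.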

  14. Total ozone patterns over the northern mid-latitudes: spatial correlations, extreme events and dynamical contributions

    NASA Astrophysics Data System (ADS)

    Rieder, H. E.; Staehelin, J.; Maeder, J. A.; Ribatet, M.; Bodeker, G. E.; Davison, A. C.

    2009-04-01

    Tools from geostatistics and extreme value theory are applied to analyze spatial correlations in total ozone for the northern mid-latitudes. The dataset used in this study is the NIWA combined total ozone dataset (Bodeker et al., 2001; Müller et al., 2008). New tools from extreme value theory (Coles, 2001; Ribatet, 2007) have recently been applied to the world's longest total ozone record from Arosa, Switzerland (e.g. Staehelin 1998a,b), in order to describe extreme events in low and high total ozone (Rieder et al., 200x). Within the current study, patterns in spatial correlation and frequency distributions of extreme events (e.g. ELOs and EHOs) are studied for the northern mid-latitudes. New insights in spatial patterns of total ozone for the northern mid-latitudes are presented. Koch et al. (2005) found that the increase in fast isentropic transport of tropical air to northern mid-latitudes contributed significantly to ozone changes between 1980 and 1989. Within this study the influence of changes in atmospheric dynamics (e.g. tropospheric and lower stratospheric pressure systems) on column ozone over the northern mid-latitudes is analyzed for the time period 1979-2007. References: Bodeker, G.E., J.C. Scott, K. Kreher, and R.L. McKenzie, Global ozone trends in potential vorticity coordinates using TOMS and GOME intercompared against the Dobson network: 1978-1998, J. Geophys. Res., 106 (D19), 23029-23042, 2001. Coles, S.: An Introduction to Statistical Modeling of Extreme Values, Springer Series in Statistics, ISBN:1852334592, Springer, Berlin, 2001. Koch, G., H. Wernli, C. Schwierz, J. Staehelin, and T. Peter (2005), A composite study on the structure and formation of ozone miniholes and minihighs over central Europe, Geophys. Res. Lett., 32, L12810, doi:10.1029/2004GL022062. Müller, R., Grooß, J.-U., Lemmen, C., Heinze, D., Dameris, M., and Bodeker, G.: Simple measures of ozone depletion in the polar stratosphere, Atmos. Chem. Phys., 8, 251-264, 2008. Ribatet

  15. Evaluation of the best fit distribution for partial duration series of daily rainfall in Madinah, western Saudi Arabia

    NASA Astrophysics Data System (ADS)

    Alahmadi, F.; Rahman, N. A.; Abdulrazzak, M.

    2014-09-01

    Rainfall frequency analysis is an essential tool for the design of water-related infrastructure. It can be used to predict the magnitude and frequency of future extreme rainfall events and hence future flood magnitudes. This study analyses the application of rainfall partial duration series (PDS) in the fast-growing city of Madinah, located in the western part of Saudi Arabia. Different statistical distributions were applied (i.e. Normal, Log Normal, Extreme Value type I, Generalized Extreme Value, Pearson Type III, Log Pearson Type III) and their distribution parameters were estimated using the method of L-moments. Several model selection criteria were also applied, e.g. the Akaike Information Criterion (AIC), Corrected Akaike Information Criterion (AICc), Bayesian Information Criterion (BIC) and Anderson-Darling Criterion (ADC). The analysis indicated that the Generalized Extreme Value distribution is the best fit for the Madinah partial duration daily rainfall series. The outcome of such an evaluation can contribute toward better design criteria for flood management, especially flood protection measures.
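    Parameter estimation by L-moments reduces to simple probability-weighted moments. A sketch of the first two sample L-moments and an Extreme Value type I (Gumbel) fit, one of the candidate distributions above, using the standard L-moment relations α = λ2/ln 2 and ξ = λ1 - γα:

```python
import numpy as np

EULER_GAMMA = 0.5772156649015329

def l_moments_12(x):
    """First two sample L-moments via probability-weighted moments b0, b1."""
    x = np.sort(np.asarray(x, float))
    n = len(x)
    b0 = x.mean()
    b1 = np.sum((np.arange(n) / (n - 1)) * x) / n
    return b0, 2.0 * b1 - b0          # lambda1 (location), lambda2 (scale/spread)

def fit_gumbel_lmom(x):
    """Gumbel (EV-I) location xi and scale alpha by the method of L-moments."""
    l1, l2 = l_moments_12(x)
    alpha = l2 / np.log(2.0)
    xi = l1 - EULER_GAMMA * alpha
    return xi, alpha
```

The GEV fit used in the study follows the same pattern with one more L-moment (the L-skewness t3) determining the shape parameter.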

  16. Relations between elliptic multiple zeta values and a special derivation algebra

    NASA Astrophysics Data System (ADS)

    Broedel, Johannes; Matthes, Nils; Schlotterer, Oliver

    2016-04-01

    We investigate relations between elliptic multiple zeta values (eMZVs) and describe a method to derive the number of indecomposable elements of given weight and length. Our method is based on representing eMZVs as iterated integrals over Eisenstein series and exploiting the connection with a special derivation algebra. Its commutator relations give rise to constraints on the iterated integrals over Eisenstein series relevant for eMZVs and thereby allow counting of the indecomposable representatives. Conversely, the above connection suggests apparently new relations in the derivation algebra. At https://tools.aei.mpg.de/emzv we provide relations for eMZVs over a wide range of weights and lengths.

  17. Updating Landsat time series of surface-reflectance composites and forest change products with new observations

    NASA Astrophysics Data System (ADS)

    Hermosilla, Txomin; Wulder, Michael A.; White, Joanne C.; Coops, Nicholas C.; Hobart, Geordie W.

    2017-12-01

    The use of time series satellite data allows for the temporally dense, systematic, transparent, and synoptic capture of land dynamics over time. Subsequent to the opening of the Landsat archive, several time series approaches for characterizing landscape change have been developed, often representing a particular analytical time window. The information richness and widespread utility of these time series data have created a need to maintain the currency of time series information via the addition of new data, as it becomes available. When an existing time series is temporally extended, it is critical that previously generated change information remains consistent, thereby not altering reported change statistics or science outcomes based on that change information. In this research, we investigate the impacts and implications of adding additional years to an existing 29-year annual Landsat time series for forest change. To do so, we undertook a spatially explicit comparison of the 29 overlapping years of a time series representing 1984-2012, with a time series representing 1984-2016. Surface reflectance values, and presence, year, and type of change were compared. We found that the addition of years to extend the time series had minimal effect on the annual surface reflectance composites, with slight band-specific differences (r ≥ 0.1) in the final years of the original time series being updated. The area of stand replacing disturbances and determination of change year are virtually unchanged for the overlapping period between the two time-series products. Over the overlapping temporal period (1984-2012), the total area of change differs by 0.53%, equating to an annual difference in change area of 0.019%. Overall, the spatial and temporal agreement of the changes detected by both time series was 96%. Further, our findings suggest that the entire pre-existing historic time series does not need to be re-processed during the update process. Critically, given the time

  18. Visibility Graph Based Time Series Analysis

    PubMed Central

    Stephen, Mutua; Gu, Changgui; Yang, Huijie

    2015-01-01

    Network based time series analysis has made considerable achievements in recent years. By mapping mono/multivariate time series into networks, one can investigate both its microscopic and macroscopic behaviors. However, most proposed approaches lead to the construction of static networks, consequently providing limited information on evolutionary behaviors. In the present paper we propose a method called visibility graph based time series analysis, in which series segments are mapped to visibility graphs as descriptions of the corresponding states and the successively occurring states are linked. This procedure converts a time series into a temporal network and at the same time a network of networks. Findings from empirical records for stock markets in the USA (S&P500 and Nasdaq) and artificial series generated by means of fractional Gaussian motions show that the method can provide rich information benefiting short-term and long-term predictions. Theoretically, we propose a method to investigate time series from the viewpoint of a network of networks. PMID:26571115
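    The geometric mapping the abstract builds on can be sketched in a few lines. Below is a minimal natural visibility graph in Python; the paper's full method additionally slices the series into segments and links successive segment graphs into a temporal network, which is not reproduced here. The function name `visibility_graph` is illustrative.

```python
def visibility_graph(series):
    """Map a time series to its natural visibility graph.

    Nodes are time indices; points (a, ya) and (b, yb) are linked if
    every intermediate point (c, yc) lies strictly below the straight
    line joining them: yc < yb + (ya - yb) * (b - c) / (b - a).
    Returns the edge set as (i, j) pairs with i < j.
    """
    n = len(series)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            visible = all(
                series[c] < series[b]
                + (series[a] - series[b]) * (b - c) / (b - a)
                for c in range(a + 1, b)
            )
            if visible:
                edges.add((a, b))
    return edges

# Consecutive points always see each other, so the graph is connected.
edges = visibility_graph([3.0, 1.0, 2.0, 4.0])
```

For this short example every pair of points is mutually visible, so the graph is complete; real series produce much sparser graphs whose degree statistics carry the dynamical information.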

  19. Visibility Graph Based Time Series Analysis.

    PubMed

    Stephen, Mutua; Gu, Changgui; Yang, Huijie

    2015-01-01

    Network based time series analysis has made considerable achievements in recent years. By mapping mono/multivariate time series into networks, one can investigate both its microscopic and macroscopic behaviors. However, most proposed approaches lead to the construction of static networks, consequently providing limited information on evolutionary behaviors. In the present paper we propose a method called visibility graph based time series analysis, in which series segments are mapped to visibility graphs as descriptions of the corresponding states and the successively occurring states are linked. This procedure converts a time series into a temporal network and at the same time a network of networks. Findings from empirical records for stock markets in the USA (S&P500 and Nasdaq) and artificial series generated by means of fractional Gaussian motions show that the method can provide rich information benefiting short-term and long-term predictions. Theoretically, we propose a method to investigate time series from the viewpoint of a network of networks.

  20. Flood Frequency Analysis For Partial Duration Series In Ganjiang River Basin

    NASA Astrophysics Data System (ADS)

    zhangli, Sun; xiufang, Zhu; yaozhong, Pan

    2016-04-01

    Accurate estimation of flood frequency is key to effective, nationwide flood damage abatement programs. The partial duration series (PDS) method is widely used in hydrologic studies because it considers all events above a certain threshold level, as compared to the annual maximum series (AMS) method, which considers only the annual maximum value. However, the PDS has a drawback in that it is difficult to define the thresholds and maintain an independent and identical distribution of the partial duration time series; this drawback is discussed in this paper. The Ganjiang River is the seventh largest tributary of the Yangtze River, the longest river in China. The Ganjiang River basin covers a drainage area of 81,258 km2 upstream of the Wanzhou hydrologic station, the basin outlet. In this work, 56 years of daily flow data (1954-2009) from the Wanzhou station were used to analyze flood frequency, and the Pearson-III model was employed as the hydrologic probability distribution. Generally, three tasks were accomplished: (1) the PDS threshold was obtained from the percentile rank of daily runoff; (2) trend analysis of the flow series was conducted using PDS; and (3) flood frequency analysis was conducted for the partial duration flow series. The results showed a slight upward trend of the annual runoff in the Ganjiang River basin. The maximum flow with a 0.01 exceedance probability (corresponding to a 100-year flood peak under stationary conditions) was 20,000 m3/s, while that with a 0.1 exceedance probability was 15,000 m3/s. These results will serve as a guide to hydrological engineering planning, design, and management for policymakers and decision makers.
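    The PDS workflow described above (percentile-rank threshold, Pearson type III fit, quantile at a given exceedance probability) might be sketched as follows, assuming SciPy's `pearson3` distribution and synthetic gamma-distributed daily flows. Declustering of dependent peaks, which the abstract identifies as the method's main difficulty, is deliberately omitted, and the function name `pds_return_level` is illustrative.

```python
import numpy as np
from scipy import stats

def pds_return_level(daily_flow, percentile=95.0, exceedance=0.01):
    """Partial-duration-series sketch: threshold by percentile rank,
    fit a Pearson type III distribution to the exceedances, and return
    the flow with the given exceedance probability.
    Illustrative only: independence of peaks is assumed, not enforced."""
    threshold = np.percentile(daily_flow, percentile)
    peaks = daily_flow[daily_flow > threshold]
    skew, loc, scale = stats.pearson3.fit(peaks)
    return stats.pearson3.ppf(1.0 - exceedance, skew, loc=loc, scale=scale)

rng = np.random.default_rng(0)
flows = rng.gamma(shape=2.0, scale=500.0, size=20000)  # synthetic daily flows
q100 = pds_return_level(flows, percentile=95.0, exceedance=0.01)
```

On real records the threshold choice trades bias against variance, which is exactly the sensitivity the paper examines.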

  1. Scaling of Dielectric Breakdown Thresholds in Earth's and CO2-rich atmospheres: Impact for Predictions of Extraterrestrial Transient Luminous Events and Lightning Discharges

    NASA Astrophysics Data System (ADS)

    Riousset, J. A.

    2016-12-01

    Earth's atmospheric electricity manifests itself in the form of glow, corona, streamer, and leader discharges observed as Saint Elmo's fire, sprites, lightning and jet discharges, and other Transient Luminous Events (TLEs). All of these are types of dielectric breakdown, but are governed by different physics. In particular, their initiation is associated with the crossing of specific electric field thresholds: relativistic runaway, streamer propagation, conventional breakdown, or thermal runaway thresholds, some better understood than others. For example, the initiation of a lightning discharge is known to occur when the local electric field exceeds a value similar to the relativistic runaway field, but the exact threshold, as well as the physical mechanisms at work, remain rather unclear to date. Scaling laws for electric fields (and other quantities) have been established by Pasko et al. [GRL, 25(12), 2123-2126, 1998] and Pasko [NATO Sci. Series, Springer, 253-311, 2006]. In this work, we develop profiles for initiation criteria in air and in other atmospheric environments. We further calculate their associated scaling laws to determine the ability to trigger lightning flashes and TLEs in our solar system. This lets us predict the likelihood of electrical discharges on, e.g., Mars, Venus and Titan, and calculate the expected electric field conditions, under which discharges have been observed on Jupiter, Saturn, Uranus, and Neptune [Leblanc et al., ISSI Spa. Sci. Series, Springer, 2008, Yair, Adv. Space Res., 50(3), 293-310, 2012]. Our results anticipate the arrival of ExoMars 2016's Schiaparelli module, which will provide the first records of the electric field at the surface of the planet [Déprez et al., EGU GA, 16, 16613, 2014]. This research is also motivated by the increasing probability of manned missions to Mars and the potential electrostatic hazards it may face [Yair, 2012], and by the role of electrical discharges in the creation of active radicals, some of

  2. Time series analysis of gold production in Malaysia

    NASA Astrophysics Data System (ADS)

    Muda, Nora; Hoon, Lee Yuen

    2012-05-01

    Gold is a soft, malleable, bright yellow metallic element and unaffected by air or most reagents. It is highly valued as an asset or investment commodity and is extensively used in jewellery, industrial application, dentistry and medical applications. In Malaysia, gold mining is limited in several areas such as Pahang, Kelantan, Terengganu, Johor and Sarawak. The main purpose of this case study is to obtain a suitable model for the production of gold in Malaysia. The model can also be used to predict the data of Malaysia's gold production in the future. Box-Jenkins time series method was used to perform time series analysis with the following steps: identification, estimation, diagnostic checking and forecasting. In addition, the accuracy of prediction is tested using mean absolute percentage error (MAPE). From the analysis, the ARIMA (3,1,1) model was found to be the best fitted model with MAPE equals to 3.704%, indicating the prediction is very accurate. Hence, this model can be used for forecasting. This study is expected to help the private and public sectors to understand the gold production scenario and later plan the gold mining activities in Malaysia.
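    The fit-and-score loop described above can be illustrated without a full Box-Jenkins toolchain. The sketch below fits an AR(3) model by least squares as a simplified stand-in for the paper's ARIMA(3,1,1) (which would also difference the data once and include a moving-average term) and evaluates it with the same MAPE criterion; `ar_one_step` and the synthetic series are illustrative, not the paper's gold-production data.

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    actual = np.asarray(actual, float)
    forecast = np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

def ar_one_step(series, p=3):
    """One-step-ahead in-sample forecasts from a least-squares AR(p)
    fit with intercept (a simplified stand-in for Box-Jenkins
    maximum-likelihood estimation)."""
    y = np.asarray(series, float)
    n = len(y)
    X = np.ones((n - p, p + 1))       # column 0 is the intercept
    for k in range(1, p + 1):
        X[:, k] = y[p - k: n - k]     # lag-k values
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return X @ coef, coef

rng = np.random.default_rng(1)
production = 100.0 + np.cumsum(rng.normal(5.0, 1.0, 200))  # synthetic series
fitted, coef = ar_one_step(production, p=3)
error = mape(production[3:], fitted)
```

In a full Box-Jenkins workflow the lag order would come from the identification step (ACF/PACF inspection) and the residuals would be checked for whiteness before forecasting.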

  3. Value-Added Systems for Information and Instruction at Vocational-Technical Centers.

    ERIC Educational Resources Information Center

    Boyd, Betty Sue; Turner, Marsha K.

    Information resources can be considered a series of formal processes or activities by which the potential usefulness of specific information messages being processed is enhanced. These processes may add value to the information for the user. In order to increase the possibility that the information will be useful to recipients and users,…

  4. Investigation of the 16-year and 18-year ZTD Time Series Derived from GPS Data Processing

    NASA Astrophysics Data System (ADS)

    Bałdysz, Zofia; Nykiel, Grzegorz; Figurski, Mariusz; Szafranek, Karolina; KroszczyńSki, Krzysztof

    2015-08-01

    The GPS system can play an important role in activities related to the monitoring of climate. The long time series, coherent strategy, and very high quality of the tropospheric parameter Zenith Tropospheric Delay (ZTD) estimated on the basis of GPS data analysis allow its usefulness for climate research to be investigated as a direct GPS product. This paper presents results of an analysis of 16-year time series derived from the EUREF Permanent Network (EPN) reprocessing performed by the Military University of Technology. Lomb-Scargle periodograms were computed for 58 stations in order to obtain information about the oscillations in the ZTD time series. Seasonal components and a linear trend were estimated using Least Squares Estimation (LSE), and the Mann-Kendall trend test was used to confirm the presence of the linear trend designated by the LSE method. In order to verify the impact of the length of the time series on the trend value, a comparison between the 16- and 18-year series was performed.
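    The trend-confirmation step can be made concrete. Below is a minimal Mann-Kendall test in pure Python, without the tie correction that a production implementation would add; it is a sketch of the test the authors use alongside LSE, not their code.

```python
import math

def mann_kendall(series):
    """Mann-Kendall trend test (no tie correction): returns (S, Z).

    S > 0 suggests an upward trend; |Z| > 1.96 is significant at
    roughly the 5% level for moderately long series."""
    n = len(series)
    s = sum(
        (series[j] > series[i]) - (series[j] < series[i])
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# A strictly increasing series attains the maximum S = n(n-1)/2.
s, z = mann_kendall(list(range(20)))
```

Because the statistic depends only on the signs of pairwise differences, it is insensitive to outliers and to the seasonal amplitudes removed beforehand by LSE.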

  5. Smoothing of climate time series revisited

    NASA Astrophysics Data System (ADS)

    Mann, Michael E.

    2008-08-01

    We present an easily implemented method for smoothing climate time series, generalizing upon an approach previously described by Mann (2004). The method adaptively weights the three lowest order time series boundary constraints to optimize the fit with the raw time series. We apply the method to the instrumental global mean temperature series from 1850-2007 and to various surrogate global mean temperature series from 1850-2100 derived from the CMIP3 multimodel intercomparison project. These applications demonstrate that the adaptive method systematically out-performs certain widely used default smoothing methods, and is more likely to yield accurate assessments of long-term warming trends.

  6. Graphical Data Analysis on the Circle: Wrap-Around Time Series Plots for (Interrupted) Time Series Designs.

    PubMed

    Rodgers, Joseph Lee; Beasley, William Howard; Schuelke, Matthew

    2014-01-01

    Many data structures, particularly time series data, are naturally seasonal, cyclical, or otherwise circular. Past graphical methods for time series have focused on linear plots. In this article, we move graphical analysis onto the circle. We focus on 2 particular methods, one old and one new. Rose diagrams are circular histograms and can be produced in several different forms using the RRose software system. In addition, we propose, develop, illustrate, and provide software support for a new circular graphical method, called Wrap-Around Time Series Plots (WATS Plots), which is a graphical method useful to support time series analyses in general but in particular in relation to interrupted time series designs. We illustrate the use of WATS Plots with an interrupted time series design evaluating the effect of the Oklahoma City bombing on birthrates in Oklahoma County during the 10 years surrounding the bombing of the Murrah Building in Oklahoma City. We compare WATS Plots with linear time series representations and overlay them with smoothing and error bands. Each method is shown to have advantages in relation to the other; in our example, the WATS Plots more clearly show the existence and effect size of the fertility differential.
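    The wrap-around idea reduces to folding the time axis onto a cycle before aggregating. The sketch below computes the wrapped bin angles and means that a rose diagram or WATS plot would display; the actual polar plotting in the article (and the RRose software) is not reproduced, and `rose_bins` is an illustrative name.

```python
import math
from collections import defaultdict

def rose_bins(values, period=12):
    """Circular (rose-diagram) aggregation: fold a series onto a cycle
    of the given period and average the values falling in each bin.
    Returns {bin: (angle_radians, mean_value)}."""
    sums, counts = defaultdict(float), defaultdict(int)
    for t, v in enumerate(values):
        b = t % period
        sums[b] += v
        counts[b] += 1
    return {
        b: (2 * math.pi * b / period, sums[b] / counts[b])
        for b in sums
    }

# Three identical years fold onto 12 bins whose means equal one year.
bins = rose_bins([10, 12, 11, 9, 8, 10, 13, 14, 12, 11, 10, 9] * 3)
```

An intervention such as the one studied in the article would show up as a systematic departure of post-event wraps from the pre-event bin means.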

  7. Frontiers in Chemical Sensors: Novel Principles and Techniques

    NASA Astrophysics Data System (ADS)

    Orellana, Guillermo; Moreno-Bondi, Maria Cruz

    This third volume of the Springer Series on Chemical Sensors and Biosensors aims to enable the researcher or technologist to become acquainted with the latest principles and techniques that keep enlarging the applications of this fascinating field. It deals with the novel luminescence lifetime-based techniques for interrogation of sensor arrays in high-throughput screening, cataluminescence, chemical sensing with hollow waveguides, and new approaches to sensor design and fabrication by means of either combinatorial methods or engineered indicator/support couples.

  8. Decentralized Riemannian Particle Filtering with Applications to Multi-Agent Localization

    DTIC Science & Technology

    2012-06-14

    literature. As early as 1990 in a series of publications by Rudolph Kulhavy [155–159], the theoretical work of Rao [236], Efron [95], and Amari [12] began...influencing portions of the nonlinear estimation and filtering literature. 71 The work of Rudolph Kulhavy was primarily concerned with parameter...Essential Topology. Springer, 2005. 70. Csiszár, I. “Generalized Cutoff Rates and Rényi Information Measures”. IEEE Trans. Inform. Theory, 41(1):26–34, Jan

  9. Young Women's Work Values and Role Salience in Grade 11: Are There Changes Three Years Later?

    ERIC Educational Resources Information Center

    Madill, H. M.; Montgomerie, T. C.; Stewin, L. L.; Fitzsimmons, G. W.; Tovell, D. R.; Armour, M-A.; Ciccocioppo, A-L.

    2000-01-01

    Describes longitudinal study of 11th grade female students who completed a series of career-related inventories and follow-up interviews. Little change was noted in work-related values between the two administrations of the Values. Outlines D.E. Super's theory of career development and its applications to career counseling. (Author/JDM)

  10. A 305-year continuous monthly rainfall series for the island of Ireland (1711-2016)

    NASA Astrophysics Data System (ADS)

    Murphy, Conor; Broderick, Ciaran; Burt, Timothy P.; Curley, Mary; Duffy, Catriona; Hall, Julia; Harrigan, Shaun; Matthews, Tom K. R.; Macdonald, Neil; McCarthy, Gerard; McCarthy, Mark P.; Mullan, Donal; Noone, Simon; Osborn, Timothy J.; Ryan, Ciara; Sweeney, John; Thorne, Peter W.; Walsh, Seamus; Wilby, Robert L.

    2018-03-01

    A continuous 305-year (1711-2016) monthly rainfall series (IoI_1711) is created for the Island of Ireland. The post 1850 series draws on an existing quality assured rainfall network for Ireland, while pre-1850 values come from instrumental and documentary series compiled, but not published by the UK Met Office. The series is evaluated by comparison with independent long-term observations and reconstructions of precipitation, temperature and circulation indices from across the British-Irish Isles. Strong decadal consistency of IoI_1711 with other long-term observations is evident throughout the annual, boreal spring and autumn series. Annually, the most recent decade (2006-2015) is found to be the wettest in over 300 years. The winter series is probably too dry between the 1740s and 1780s, but strong consistency with other long-term observations strengthens confidence from 1790 onwards. The IoI_1711 series has remarkably wet winters during the 1730s, concurrent with a period of strong westerly airflow, glacial advance throughout Scandinavia and near unprecedented warmth in the Central England Temperature record - all consistent with a strongly positive phase of the North Atlantic Oscillation. Unusually wet summers occurred in the 1750s, consistent with proxy (tree-ring) reconstructions of summer precipitation in the region. Our analysis shows that inter-decadal variability of precipitation is much larger than previously thought, while relationships with key modes of climate variability are time-variant. The IoI_1711 series reveals statistically significant multi-centennial trends in winter (increasing) and summer (decreasing) seasonal precipitation. However, given uncertainties in the early winter record, the former finding should be regarded as tentative. The derived record, one of the longest continuous series in Europe, offers valuable insights for understanding multi-decadal and centennial rainfall variability in Ireland, and provides a firm basis for

  11. Test Series 2. 4: detailed test plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    Test Series 2.4 comprises the fourth sub-series of tests to be scheduled as a part of Test Series 2, the second stage of the combustion research program to be carried out at the Grimethorpe Experimental Pressurized Fluidized Bed Combustion Facility. Test Series 2.1, the first sub-series of tests, was completed in February 1983, and the first part of the second sub-series, Test Series 2.3, in October 1983. Test Series 2.2 was completed in February 1984 after which the second part of Test Series 2.3 commenced. The Plan for Test Series 2.4 consists of 350 data gathering hours to be completed within 520 coal burning hours. This document provides a brief description of the Facility and modifications which have been made following the completion of Test Series 2.1. No further modifications were made following the completion of the first part of Test Series 2.3 or Test Series 2.2. The operating requirements for Test Series 2.4 are specified. The tests will be performed using a UK coal (Lady Windsor), and a UK limestone (Middleton) both nominated by the FRG. Seven objectives are proposed which are to be fulfilled by thirteen test conditions. Six part load tests based on input supplied by Kraftwerk Union AG are included. The cascade is expected to be on line for each test condition and total cascade exposure is expected to be in excess of 450 hours. Details of sampling and special measurements are given. A test plan schedule envisages the full test series being completed within a two month calendar period. Finally, a number of contingency strategies are proposed. 3 figures, 14 tables.

  12. A Filtering of Incomplete GNSS Position Time Series with Probabilistic Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Gruszczynski, Maciej; Klos, Anna; Bogusz, Janusz

    2018-04-01

    For the first time, we introduced probabilistic principal component analysis (pPCA) for the spatio-temporal filtering of Global Navigation Satellite System (GNSS) position time series, to estimate and remove Common Mode Error (CME) without interpolation of missing values. We used data from the International GNSS Service (IGS) stations which contributed to the latest International Terrestrial Reference Frame (ITRF2014). The efficiency of the proposed algorithm was tested on simulated incomplete time series, and CME was then estimated for a set of 25 stations located in Central Europe. The newly applied pPCA was compared with previously used algorithms, which showed that this method is capable of resolving the problem of proper spatio-temporal filtering of GNSS time series characterized by different observation time spans. We showed that filtering can be carried out with the pPCA method even when two time series in the dataset have fewer than 100 common epochs of observations. The 1st Principal Component (PC) explained more than 36% of the total variance represented by the time series residuals (series with the deterministic model removed), which, compared to the variances of the other PCs (less than 8%), means that common signals are significant in GNSS residuals. A clear improvement in the spectral indices of the power-law noise was noticed for the Up component, reflected by an average shift towards white noise from - 0.98 to - 0.67 (30%). We observed a significant average reduction in the uncertainty of station velocities estimated from the filtered residuals: by 35, 28 and 69% for the North, East, and Up components, respectively. The CME series were also analyzed in the context of environmental mass loading influences on the filtering results. Subtraction of the environmental loading models from the GNSS residuals provides a reduction of the estimated CME variance by 20 and 65% for the horizontal and vertical components, respectively.
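    The core of any PCA-based CME filter can be shown in a few lines. The sketch below estimates the CME as the first principal component of a complete epochs-by-stations residual matrix; the paper's pPCA additionally handles missing epochs through an EM step that is not reproduced here, and `cme_first_pc` is an illustrative name.

```python
import numpy as np

def cme_first_pc(residuals):
    """Spatial filtering sketch: estimate the Common Mode Error as the
    rank-1 (first principal component) part of station residuals
    (rows = epochs, columns = stations). The matrix is assumed
    complete. Returns (cme_field, filtered_residuals)."""
    centered = residuals - residuals.mean(axis=0)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    cme_field = np.outer(u[:, 0] * s[0], vt[0])   # rank-1 common mode
    return cme_field, centered - cme_field

rng = np.random.default_rng(2)
common = rng.normal(0.0, 3.0, 500)                  # shared (CME) signal
R = common[:, None] + rng.normal(0.0, 1.0, (500, 8))  # 8 stations
cme, filtered = cme_first_pc(R)
```

When, as here, a strong signal is shared by all stations, removing the first PC strips most of the common variance while leaving station-specific noise untouched.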

  13. Informing the Selection of Screening Hit Series with in Silico Absorption, Distribution, Metabolism, Excretion, and Toxicity Profiles.

    PubMed

    Sanders, John M; Beshore, Douglas C; Culberson, J Christopher; Fells, James I; Imbriglio, Jason E; Gunaydin, Hakan; Haidle, Andrew M; Labroli, Marc; Mattioni, Brian E; Sciammetta, Nunzio; Shipe, William D; Sheridan, Robert P; Suen, Linda M; Verras, Andreas; Walji, Abbas; Joshi, Elizabeth M; Bueters, Tjerk

    2017-08-24

    High-throughput screening (HTS) has enabled millions of compounds to be assessed for biological activity, but challenges remain in the prioritization of hit series. While biological, absorption, distribution, metabolism, excretion, and toxicity (ADMET), purity, and structural data are routinely used to select chemical matter for further follow-up, the scarcity of historical ADMET data for screening hits limits our understanding of early hit compounds. Herein, we describe a process that utilizes a battery of in-house quantitative structure-activity relationship (QSAR) models to generate in silico ADMET profiles for hit series to enable more complete characterizations of HTS chemical matter. These profiles allow teams to quickly assess hit series for desirable ADMET properties or suspected liabilities that may require significant optimization. Accordingly, these in silico data can direct ADMET experimentation and profoundly impact the progression of hit series. Several prospective examples are presented to substantiate the value of this approach.

  14. The Value of Children: A Cross-National Study, Volume Three. Hawaii.

    ERIC Educational Resources Information Center

    Arnold, Fred; Fawcett, James T.

    The document, one in a series of seven reports from the Value of Children Project, discusses results of the survey in Hawaii. Specifically, the study investigated the social, psychological, and economic costs and benefits associated with having children. The volume is presented in seven chapters. Chapter I describes the background of the study and…

  15. Kapteyn series arising in radiation problems

    NASA Astrophysics Data System (ADS)

    Lerche, I.; Tautz, R. C.

    2008-01-01

    In discussing radiation from multiple point charges or magnetic dipoles, moving in circles or ellipses, a variety of Kapteyn series of the second kind arises. Some of the series have been known in closed form for a hundred years or more, others appear not to be available to analytic persuasion. This paper shows how 12 such generic series can be developed to produce either closed analytic expressions or integrals that are not analytically tractable. In addition, the method presented here may be of benefit when one has other Kapteyn series of the second kind to consider, thereby providing an additional reason to consider such series anew.
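    For orientation, the general forms can be written down, following the standard classification (the paper's 12 specific series, with their particular coefficients, are not reproduced here); a series of the second kind involves a product of two Bessel functions:

```latex
\text{first kind:}\quad  \sum_{n=1}^{\infty} a_n\, J_n(nz),
\qquad
\text{second kind:}\quad \sum_{n=1}^{\infty} a_n\, J_n^{2}(nz).
```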

  16. Uranium series isotopes concentration in sediments at San Marcos and Luis L. Leon reservoirs, Chihuahua, Mexico

    NASA Astrophysics Data System (ADS)

    Méndez-García, C.; Renteria-Villalobos, M.; García-Tenorio, R.; Montero-Cabrera, M. E.

    2014-07-01

    Spatial and temporal distributions of radioisotope concentrations were determined in sediments near the surface and in core samples extracted from two reservoirs located in an arid region close to Chihuahua City, Mexico. At the San Marcos reservoir one core was studied, while from the Luis L. Leon reservoir one core from the entrance and another one close to the wall were investigated. 232Th-series, 238U-series, 40K and 137Cs activity concentrations (AC, Bq kg-1) were determined by gamma spectrometry with a high purity Ge detector. 238U and 234U ACs were obtained by liquid scintillation and alpha spectrometry with a surface barrier detector. Dating of core sediments was performed by applying the CRS method to 210Pb activities. Results were verified by 137Cs AC. The resulting activity concentrations were compared among corresponding surface and core sediments. High 238U-series AC values were found in sediments from the San Marcos reservoir, because this site is located close to the Victorino uranium deposit. Low AC values found in the Luis L. Leon reservoir suggest that the uranium present in the source of the Sacramento-Chuviscar Rivers is not transported up to the Conchos River. Activity ratios (AR) 234U/238U and 238U/226Ra in sediments have values between 0.9 and 1.2, showing behavior close to radioactive equilibrium in the entire basin. The 232Th/238U and 228Ra/226Ra ARs are witnesses of the different geological origins of sediments from the San Marcos and Luis L. Leon reservoirs.

  17. International Christian Schoolteachers' Traits, Characteristics, and Qualities Valued by Third Culture Kids

    ERIC Educational Resources Information Center

    Linton, Dale B.

    2015-01-01

    In this qualitative grounded theory study, 24 participants, referred to as "third culture kids" (or TCKs), ages 18-30 years, who had previously attended international Christian schools were interviewed to determine the dispositions they valued in their teachers. Incorporating principles of grounded theory, a series of rigorous steps were…

  18. Fuzzy time series forecasting model with natural partitioning length approach for predicting the unemployment rate under different degree of confidence

    NASA Astrophysics Data System (ADS)

    Ramli, Nazirah; Mutalib, Siti Musleha Ab; Mohamad, Daud

    2017-08-01

    Fuzzy time series forecasting models have been proposed since 1993 to cater for data in linguistic values. Many improvements and modifications have been made to the model, such as enhancements to the interval length and the types of fuzzy logical relations. However, most of the improved models represent the linguistic terms in the form of discrete fuzzy sets. In this paper, a fuzzy time series model with data in the form of trapezoidal fuzzy numbers and a natural partitioning length approach is introduced for predicting the unemployment rate. Two types of fuzzy relations are used in this study: first order and second order. The proposed model can produce forecasted values under different degrees of confidence.
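    For readers unfamiliar with the baseline the paper improves on, here is a sketch of a first-order fuzzy time series forecast in the style of Chen's classical method, with equal-length intervals and discrete fuzzy sets. The paper replaces these discrete sets with trapezoidal fuzzy numbers and a natural partitioning of the interval length, neither of which is reproduced; `chen_forecast` and the sample data are illustrative.

```python
def chen_forecast(series, n_intervals=7):
    """Baseline first-order fuzzy time series forecast (Chen-style):
    partition the universe into equal intervals, fuzzify each value to
    its interval, build first-order fuzzy logical relationship groups,
    and forecast as the mean midpoint of the successor intervals.
    Returns one-step forecasts for t = 1..n-1."""
    lo, hi = min(series), max(series)
    width = (hi - lo) / n_intervals or 1.0
    mid = [lo + (i + 0.5) * width for i in range(n_intervals)]

    def label(v):
        return min(int((v - lo) / width), n_intervals - 1)

    states = [label(v) for v in series]
    groups = {}                       # state -> set of successor states
    for a, b in zip(states, states[1:]):
        groups.setdefault(a, set()).add(b)
    forecasts = []
    for s in states[:-1]:
        succ = groups.get(s, {s})
        forecasts.append(sum(mid[j] for j in succ) / len(succ))
    return forecasts

f = chen_forecast([3.1, 3.4, 3.3, 3.7, 3.6, 3.5, 3.8], n_intervals=4)
```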

  19. Fourier Series Optimization Opportunity

    ERIC Educational Resources Information Center

    Winkel, Brian

    2008-01-01

    This note discusses the introduction of Fourier series as an immediate application of optimization of a function of more than one variable. Specifically, it is shown how the study of Fourier series can be motivated to enrich a multivariable calculus class. This is done through discovery learning and use of technology wherein students build the…
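    The optimization route the note describes can be made explicit: minimize the squared error of a truncated trigonometric sum over its coefficients. Orthogonality of the trigonometric system decouples the normal equations, and the stationarity conditions recover the classical Euler-Fourier coefficients (shown here for the cosine coefficients; the sine case is analogous):

```latex
E(a_0,\dots,a_N,b_1,\dots,b_N)
  = \int_{-\pi}^{\pi}\Bigl[f(x)-\tfrac{a_0}{2}
    -\sum_{n=1}^{N}\bigl(a_n\cos nx+b_n\sin nx\bigr)\Bigr]^{2}\,dx,
\qquad
\frac{\partial E}{\partial a_n}=0
\;\Longrightarrow\;
a_n=\frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos nx\,dx .
```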

  20. Summing Certain p-Series.

    ERIC Educational Resources Information Center

    Fay, Temple H.

    1997-01-01

    Presents an exercise suitable for beginning calculus students that may give insight into series representations and allow students to see some elementary applications of these representations. The Fourier series is used to approximate by taking sums of trigonometric functions of the form sin(nx) and cos(nx) for n greater than or equal to zero. (PVD)

  1. Estimating and Valuing Morbidity in a Policy Context: Proceedings of June 1989 AERE Workshop (1989)

    EPA Pesticide Factsheets

    Contains the proceedings of the 1989 Association of Environmental and Resource Economists Workshop on valuing reductions in human health morbidity risks. A series of papers and discussions was collected and reported in the document.

  2. 76 FR 62470 - MFS Series Trust I, et al.

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-07

    ...] MFS Series Trust I, et al.; Notice of Application September 30, 2011. AGENCY: Securities and Exchange... Series Trust I, MFS Series Trust II, MFS Series Trust III, MFS Series Trust IV, MFS Series Trust V, MFS Series Trust VI, MFS Series Trust VII, MFS Series Trust VIII, MFS Series Trust IX, MFS Series Trust X...

  3. Regolith production rates calculated with uranium-series isotopes at Susquehanna/Shale Hills Critical Zone Observatory

    NASA Astrophysics Data System (ADS)

    Ma, Lin; Chabaux, Francois; Pelt, Eric; Blaes, Estelle; Jin, Lixin; Brantley, Susan

    2010-08-01

    In the Critical Zone where rocks and life interact, bedrock equilibrates to Earth surface conditions, transforming to regolith. The factors that control the rates and mechanisms of formation of regolith, defined here as material that can be augered, are still not fully understood. To quantify regolith formation rates on shale lithology, we measured uranium-series (U-series) isotopes (238U, 234U, and 230Th) in three weathering profiles along a planar hillslope at the Susquehanna/Shale Hills Observatory (SSHO) in central Pennsylvania. All regolith samples show significant U-series disequilibrium: (234U/238U) and (230Th/238U) activity ratios range from 0.934 to 1.072 and from 0.903 to 1.096, respectively. These values display depth trends that are consistent with fractionation of U-series isotopes during chemical weathering and element transport, i.e., the relative mobility decreases in the order 234U > 238U > 230Th. The activity ratios observed in the regolith samples are explained by i) loss of U-series isotopes during water-rock interactions and ii) re-deposition of U-series isotopes downslope. Loss of U and Th initiates in the meter-thick zone of "bedrock" that cannot be augered but that nonetheless consists of up to 40% clay/silt/sand inferred to have lost K, Mg, Al, and Fe. Apparent equivalent regolith production rates calculated with these isotopes for these profiles decrease exponentially from 45 m/Myr to 17 m/Myr, with increasing regolith thickness from the ridge top to the valley floor. With increasing distance from the ridge top toward the valley, apparent equivalent regolith residence times increase from 7 kyr to 40 kyr. Given that the SSHO experienced a peri-glacial climate ~15 kyr ago and has a catchment-wide averaged erosion rate of ~15 m/Myr as inferred from cosmogenic 10Be, we conclude that the hillslope retains regolith formed before the peri-glacial period and is not at geomorphologic steady state. Both chemical weathering reactions of clay

  4. TaiWan Ionospheric Model (TWIM) prediction based on time series autoregressive analysis

    NASA Astrophysics Data System (ADS)

    Tsai, L. C.; Macalalad, Ernest P.; Liu, C. H.

    2014-10-01

    As described in a previous paper, a three-dimensional ionospheric electron density (Ne) model has been constructed from vertical Ne profiles retrieved from the FormoSat3/Constellation Observing System for Meteorology, Ionosphere, and Climate GPS radio occultation measurements and worldwide ionosonde foF2 and foE data, and named the TaiWan Ionospheric Model (TWIM). The TWIM exhibits vertically fitted α-Chapman-type layers with distinct F2, F1, E, and D layers, and surface spherical harmonic approaches for the fitted layer parameters, including peak density, peak density height, and scale height. To improve the TWIM into a real-time model, we have developed a time series autoregressive model to forecast short-term TWIM coefficients. The time series of TWIM coefficients are considered as realizations of stationary stochastic processes within a processing window of 30 days. The sample autocorrelation coefficients are used to derive the autoregressive parameters and then forecast the TWIM coefficients, based on the least squares method and the Lagrange multiplier technique. The forecast root-mean-square relative TWIM coefficient errors are generally <30% for 1-day predictions. The forecast TWIM foE and foF2 values are also compared and evaluated using worldwide ionosonde data.
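    The coefficient-forecasting step can be sketched with the Yule-Walker equations, which derive AR parameters from sample autocorrelations exactly as the abstract describes. This is a minimal sketch: the 30-day processing window and the Lagrange-multiplier constraint used for TWIM are not reproduced, and `yule_walker_forecast` is an illustrative name.

```python
import numpy as np

def yule_walker_forecast(series, p, steps=1):
    """Fit an AR(p) model via the Yule-Walker equations (sample
    autocovariances -> Toeplitz system -> AR coefficients) and return
    the next `steps` forecast values."""
    y = np.asarray(series, float)
    x = y - y.mean()
    n = len(x)
    # Sample autocovariances r[0..p]
    r = np.array([x[: n - k] @ x[k:] / n for k in range(p + 1)])
    # Solve the Toeplitz Yule-Walker system R * phi = r[1:]
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    phi = np.linalg.solve(R, r[1:])
    hist = list(x)
    for _ in range(steps):
        hist.append(float(phi @ hist[-1: -p - 1: -1]))  # newest lag first
    return np.array(hist[-steps:]) + y.mean()

# Synthetic AR(1) "coefficient" series with phi = 0.8
rng = np.random.default_rng(3)
noise = rng.normal(0.0, 1.0, 2000)
x = np.zeros(2000)
for t in range(1, 2000):
    x[t] = 0.8 * x[t - 1] + noise[t]
pred = yule_walker_forecast(x, p=1, steps=1)
```

The recovered coefficient is close to the generating value of 0.8, so the one-step forecast tracks 0.8 times the last observation.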

  5. Wet tropospheric delays forecast based on Vienna Mapping Function time series analysis

    NASA Astrophysics Data System (ADS)

    Rzepecka, Zofia; Kalita, Jakub

    2016-04-01

    It is well known that the dry part of the zenith tropospheric delay (ZTD) is much easier to model than the wet part (ZTW). The aim of the research is to apply stochastic modeling and prediction of ZTW using time series analysis tools. Application of time series analysis enables a closer understanding of ZTW behavior as well as short-term prediction of future ZTW values. The ZTW data used for the studies were obtained from the GGOS service maintained by the Vienna University of Technology. The resolution of the data is six hours. ZTW for the years 2010-2013 were adopted for the study. The International GNSS Service (IGS) permanent stations LAMA and GOPE, located in mid-latitudes, were selected for the investigation. Initially the seasonal part was separated and modeled using periodic signals and frequency analysis. The prominent annual and semi-annual signals were removed using sine and cosine functions. The autocorrelation of the resulting signal is significant for several days (20-30 samples). The residuals of this fitting were further analyzed and modeled with ARIMA processes. For both stations optimal ARMA processes were obtained on the basis of several criteria. On this basis predicted ZTW values were computed one day ahead, leaving white-noise residuals. The accuracy of the prediction can be estimated at about 3 cm.
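    The seasonal-removal step can be sketched as a harmonic least-squares fit: a constant plus sine/cosine pairs at the annual and semi-annual periods, with the residuals left for subsequent ARMA modelling. This is a minimal illustration, not the authors' processing chain:

    ```python
    import numpy as np

    def remove_seasonal(t_days, z, periods=(365.25, 182.625)):
        """Fit a constant plus sine/cosine pairs at the given periods
        (annual and semi-annual by default) by least squares; return the
        residuals (for ARMA modelling) and the fitted coefficients."""
        t = np.asarray(t_days, dtype=float)
        cols = [np.ones_like(t)]
        for P in periods:
            w = 2.0 * np.pi / P
            cols += [np.sin(w * t), np.cos(w * t)]
        A = np.column_stack(cols)
        coef, *_ = np.linalg.lstsq(A, np.asarray(z, dtype=float), rcond=None)
        return z - A @ coef, coef
    ```

    With six-hourly sampling over four years, a purely seasonal synthetic signal is removed to numerical precision; real ZTW leaves autocorrelated residuals behind.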

  6. Network structure of multivariate time series.

    PubMed

    Lacasa, Lucas; Nicosia, Vincenzo; Latora, Vito

    2015-10-21

    Our understanding of a variety of phenomena in physics, biology and economics crucially depends on the analysis of multivariate time series. While a wide range of tools and techniques for time series analysis already exists, the increasing availability of massive data structures calls for new approaches for multidimensional signal processing. We present here a non-parametric method to analyse multivariate time series, based on the mapping of a multidimensional time series into a multilayer network, which allows one to extract information on a high-dimensional dynamical system through the analysis of the structure of the associated multiplex network. The method is simple to implement, general, scalable, does not require ad hoc phase space partitioning, and is thus suitable for the analysis of large, heterogeneous and non-stationary time series. We show that simple structural descriptors of the associated multiplex networks allow one to extract and quantify nontrivial properties of coupled chaotic maps, including the transition between different dynamical phases and the onset of various types of synchronization. As a concrete example we then study financial time series, showing that a multiplex network analysis can efficiently discriminate crises from periods of financial stability, where standard methods based on time-series symbolization often fail.
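    The series-to-network mapping family the authors build on can be illustrated with the horizontal visibility criterion, one layer per signal component. This is a brute-force O(n²) sketch; the published algorithm and its multiplex descriptors are richer:

    ```python
    def hvg_edges(x):
        """Horizontal visibility graph of a scalar series: nodes i < j are
        linked iff every intermediate value lies strictly below min(x[i], x[j])."""
        edges = set()
        n = len(x)
        for i in range(n - 1):
            edges.add((i, i + 1))   # consecutive points always see each other
            for j in range(i + 2, n):
                if max(x[i + 1:j]) < min(x[i], x[j]):
                    edges.add((i, j))
        return edges

    def multiplex_hvg(rows):
        """One visibility-graph layer per component of a multivariate series."""
        return [hvg_edges(row) for row in rows]
    ```

    Structural descriptors (degree distributions, inter-layer overlap) are then computed on the resulting layers rather than on the raw series.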

  7. Analysis and testing of numerical formulas for the initial value problem

    NASA Technical Reports Server (NTRS)

    Brown, R. L.; Kovach, K. R.; Popyack, J. L.

    1980-01-01

    Three computer programs for evaluating and testing numerical integration formulas used with fixed-stepsize programs to solve initial value systems of ordinary differential equations are described. SERIES, a program written in PASCAL, takes as input the differential equations and produces a FORTRAN subroutine for the derivatives of the system and for computing the actual solution through recursive power series techniques. Both of these are used by STAN, a FORTRAN program that interactively displays a discrete analog of the Liapunov stability region of any two-dimensional subspace of the system. The derivatives may be used by CLMP, a FORTRAN program, to test the fixed-stepsize formula against a good numerical result and interactively display the solutions.

  8. A Look at Infinite Series

    ERIC Educational Resources Information Center

    Basor, Estelle

    1978-01-01

    Sums for divergent series that were seriously considered by eighteenth-century mathematicians are shown to have reappeared as a result of new interpretations for divergent series that make these previous conclusions valid. (MN)

  9. Values in Tension: Israel Education at a U.S. Jewish Day School

    ERIC Educational Resources Information Center

    Zakai, Sivan

    2011-01-01

    The Naphtali Herz Imber Jewish Day School proudly proclaimed its commitment to Israel, yet many of its students experienced profound ambivalence toward the Jewish State. Why? The school was committed to a series of contradictory values which surfaced in its approach to Israel education. This article outlines three distinct yet interrelated…

  10. A convergent series expansion for hyperbolic systems of conservation laws

    NASA Technical Reports Server (NTRS)

    Harabetian, E.

    1985-01-01

    The discontinuous, piecewise analytic initial value problem for a wide class of conservation laws is considered, a class which includes the full three-dimensional Euler equations. The initial interaction at an arbitrary curved surface is resolved in time by a convergent series. Among other features the solution exhibits shock, contact, and expansion waves as well as sound waves propagating on characteristic surfaces. The expansion waves correspond to the one-dimensional rarefactions but have a more complicated structure. The sound waves are generated in place of zero-strength shocks, and they are caused by mismatches in derivatives.

  11. Norway's New Culture Policy and the Arts: Values in Conflict.

    ERIC Educational Resources Information Center

    Klausen, Arne Martin

    The basis for the New Norwegian Culture policy (NCC) is discussed in terms of the political attempt to extend the fundamental values of equality and social security into art and cultural life. The NCC is a result of a series of reports presented in the early 1970s which reflected a desire to see a broader welfare policy in Norway. The old form of…

  12. A Method for Oscillation Errors Restriction of SINS Based on Forecasted Time Series.

    PubMed

    Zhao, Lin; Li, Jiushun; Cheng, Jianhua; Jia, Chun; Wang, Qiufan

    2015-07-17

    Continuity, real-time, and accuracy are the key technical indexes of evaluating comprehensive performance of a strapdown inertial navigation system (SINS). However, Schuler, Foucault, and Earth periodic oscillation errors significantly cut down the real-time accuracy of SINS. A method for oscillation error restriction of SINS based on forecasted time series is proposed by analyzing the characteristics of periodic oscillation errors. The innovative method gains multiple sets of navigation solutions with different phase delays in virtue of the forecasted time series acquired through the measurement data of the inertial measurement unit (IMU). With the help of curve-fitting based on least square method, the forecasted time series is obtained while distinguishing and removing small angular motion interference in the process of initial alignment. Finally, the periodic oscillation errors are restricted on account of the principle of eliminating the periodic oscillation signal with a half-wave delay by mean value. Simulation and test results show that the method has good performance in restricting the Schuler, Foucault, and Earth oscillation errors of SINS.
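    The half-wave-delay cancellation principle invoked above is easy to demonstrate: a pure oscillation added to a copy of itself delayed by half its period vanishes, so averaging the two suppresses the periodic error while a slowly varying true value survives. An illustrative sketch (the actual method forecasts the delayed series rather than waiting for it, and the Schuler/Foucault/Earth periods differ from this toy value):

    ```python
    import numpy as np

    def half_wave_average(sig, period_samples):
        """Average the signal with a copy delayed by half the oscillation
        period: sin(w*t) + sin(w*(t - T/2)) = 0, cancelling the periodic
        component while leaving a constant offset untouched."""
        half = period_samples // 2
        return 0.5 * (sig[half:] + sig[:-half])
    ```

    Applied to a constant "true" navigation value plus a sinusoidal oscillation error, the output recovers the constant.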

  13. A Method for Oscillation Errors Restriction of SINS Based on Forecasted Time Series

    PubMed Central

    Zhao, Lin; Li, Jiushun; Cheng, Jianhua; Jia, Chun; Wang, Qiufan

    2015-01-01

    Continuity, real-time, and accuracy are the key technical indexes of evaluating comprehensive performance of a strapdown inertial navigation system (SINS). However, Schuler, Foucault, and Earth periodic oscillation errors significantly cut down the real-time accuracy of SINS. A method for oscillation error restriction of SINS based on forecasted time series is proposed by analyzing the characteristics of periodic oscillation errors. The innovative method gains multiple sets of navigation solutions with different phase delays in virtue of the forecasted time series acquired through the measurement data of the inertial measurement unit (IMU). With the help of curve-fitting based on least square method, the forecasted time series is obtained while distinguishing and removing small angular motion interference in the process of initial alignment. Finally, the periodic oscillation errors are restricted on account of the principle of eliminating the periodic oscillation signal with a half-wave delay by mean value. Simulation and test results show that the method has good performance in restricting the Schuler, Foucault, and Earth oscillation errors of SINS. PMID:26193283

  14. Series of Reciprocal Triangular Numbers

    ERIC Educational Resources Information Center

    Bruckman, Paul; Dence, Joseph B.; Dence, Thomas P.; Young, Justin

    2013-01-01

    Reciprocal triangular numbers have appeared in series since the very first infinite series were summed. Here we attack a number of subseries of the reciprocal triangular numbers by methodically expressing them as integrals.
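    The archetype behind these evaluations is the telescoping sum of all reciprocal triangular numbers, a classical result included here for orientation:

    ```latex
    \[
    T_n = \frac{n(n+1)}{2}, \qquad
    \frac{1}{T_n} = \frac{2}{n(n+1)} = 2\left(\frac{1}{n} - \frac{1}{n+1}\right),
    \]
    \[
    \sum_{n=1}^{\infty} \frac{1}{T_n}
      = 2\sum_{n=1}^{\infty}\left(\frac{1}{n} - \frac{1}{n+1}\right)
      = 2\lim_{N\to\infty}\left(1 - \frac{1}{N+1}\right) = 2.
    \]
    ```

    The integral representations mentioned in the abstract presumably rest on identities such as $\frac{1}{n+1}=\int_0^1 x^n\,dx$, which turn each subseries into an integral of a power series.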

  15. Improving the Instruction of Infinite Series

    ERIC Educational Resources Information Center

    Lindaman, Brian; Gay, A. Susan

    2012-01-01

    Calculus instructors struggle to teach infinite series, and students have difficulty understanding series and related concepts. Four instructional strategies, prominently used during the calculus reform movement, were implemented during a 3-week unit on infinite series in one class of second-semester calculus students. A description of each…

  16. Repetitive deliberate fires: Development and validation of a methodology to detect series.

    PubMed

    Bruenisholz, Eva; Delémont, Olivier; Ribaux, Olivier; Wilson-Wilde, Linzi

    2017-08-01

    The detection of repetitive deliberate fire events is challenging and still often ineffective due to a case-by-case approach. A previous study provided a critical review of the situation and an analysis of the main challenges. That study suggested that the intelligence process, integrating forensic data, could be a valid framework to provide follow-up and systematic analysis, provided it is adapted to the specificities of repetitive deliberate fires. In the current manuscript, a specific methodology to detect deliberate fire series, i.e. fires set by the same perpetrators, is presented and validated. It is based on case profiles relying on specific elements previously identified. The method was validated using a dataset of approximately 8000 deliberate fire events collected over 12 years in a Swiss state. Twenty possible series were detected, including 6 of 9 known series. These results are very promising and lead the way to a systematic implementation of this methodology in an intelligence framework, whilst demonstrating the need and benefit of increasing the collection of forensic specific information to strengthen the value of links between cases. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.

  17. A Review of Subsequence Time Series Clustering

    PubMed Central

    Teh, Ying Wah

    2014-01-01

    Clustering of subsequence time series remains an open issue in time series clustering. Subsequence time series clustering is used in different fields, such as e-commerce, outlier detection, speech recognition, biological systems, DNA recognition, and text mining. One of the useful fields in the domain of subsequence time series clustering is pattern recognition. To improve this field, a sequence of time series data is used. This paper reviews some definitions and backgrounds related to subsequence time series clustering. The categorization of the literature reviews is divided into three groups: preproof, interproof, and postproof period. Moreover, various state-of-the-art approaches in performing subsequence time series clustering are discussed under each of the following categories. The strengths and weaknesses of the employed methods are evaluated as potential issues for future studies. PMID:25140332

  18. A review of subsequence time series clustering.

    PubMed

    Zolhavarieh, Seyedjamal; Aghabozorgi, Saeed; Teh, Ying Wah

    2014-01-01

    Clustering of subsequence time series remains an open issue in time series clustering. Subsequence time series clustering is used in different fields, such as e-commerce, outlier detection, speech recognition, biological systems, DNA recognition, and text mining. One of the useful fields in the domain of subsequence time series clustering is pattern recognition. To improve this field, a sequence of time series data is used. This paper reviews some definitions and backgrounds related to subsequence time series clustering. The categorization of the literature reviews is divided into three groups: preproof, interproof, and postproof period. Moreover, various state-of-the-art approaches in performing subsequence time series clustering are discussed under each of the following categories. The strengths and weaknesses of the employed methods are evaluated as potential issues for future studies.

  19. Extracting uranium from seawater: Promising AF series adsorbents

    DOE PAGES

    Das, Sadananda; Oyola, Y.; Mayes, Richard T.; ...

    2015-11-02

    Here, a new family of high surface area polyethylene fiber adsorbents (AF series) was recently developed at the Oak Ridge National Laboratory (ORNL). The AF series adsorbents were synthesized by radiation-induced graft polymerization of acrylonitrile and itaconic acid (at different monomer/co-monomer mol ratios) onto high surface area polyethylene fibers. The degree of grafting (%DOG) of AF series adsorbents was found to be 154-354%. The grafted nitrile groups were converted to amidoxime groups by treating with hydroxylamine. The amidoximated adsorbents were then conditioned with 0.44 M KOH at 80 °C followed by screening at ORNL with simulated seawater spiked with 8 ppm uranium. Uranium adsorption capacity in simulated seawater screening ranged from 170-200 g-U/kg-ads irrespective of %DOG. A monomer/co-monomer mol ratio in the range of 7.57-10.14 seemed to be optimum for the highest uranium loading capacity. Subsequently, the adsorbents were also tested with natural seawater at Pacific Northwest National Laboratory (PNNL) using flow-through exposure uptake experiments to determine uranium loading capacity with varying KOH conditioning time at 80 °C. The highest adsorption capacity of AF1 measured after 56 days of marine testing was demonstrated as 3.9 g-U/kg-adsorbent and 3.2 g-U/kg-adsorbent for 1 h and 3 h of KOH conditioning at 80 °C, respectively. Based on capacity values of several AF1 samples, it was observed that changing KOH conditioning from 3 h to 1 h at 80 °C resulted in a 22-27% increase in uranium loading capacity in seawater.

  20. Extracting Uranium from Seawater: Promising AF Series Adsorbents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Das, S.; Oyola, Y.; Mayes, Richard T.

    A new family of high-surface-area polyethylene fiber adsorbents named the AF series was recently developed at the Oak Ridge National Laboratory (ORNL). The AF series adsorbents were synthesized by radiation-induced graft polymerization of acrylonitrile and itaconic acid (at different monomer/comonomer mol ratios) onto high surface area polyethylene fibers. The degree of grafting (%DOG) of AF series adsorbents was found to be 154-354%. The grafted nitrile groups were converted to amidoxime groups by treating with hydroxylamine. The amidoximated adsorbents were then conditioned with 0.44 M KOH at 80 °C followed by screening at ORNL with sodium-based synthetic aqueous solution, spiked with 8 ppm uranium. The uranium adsorption capacity in simulated seawater screening ranged from 170 to 200 g-U/kg-ads irrespective of %DOG. A monomer/comonomer molar ratio in the range of 7.57-10.14 seemed to be optimum for the highest uranium loading capacity. Subsequently, the adsorbents were also tested with natural seawater at Pacific Northwest National Laboratory (PNNL) using flow-through column experiments to determine uranium loading capacity with varying KOH conditioning times at 80 °C. The highest adsorption capacity of AF1 measured after 56 days of marine testing was demonstrated as 3.9 g-U/kg-adsorbent and 3.2 g-U/kg-adsorbent for 1 and 3 h of KOH conditioning at 80 °C, respectively. Based on capacity values of several AF1 samples, it was observed that changing KOH conditioning from 1 to 3 h at 80 °C resulted in a 22-27% decrease in uranium adsorption capacity in seawater.

  1. Extracting uranium from seawater: Promising AF series adsorbents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Das, Sadananda; Oyola, Y.; Mayes, Richard T.

    Here, a new family of high surface area polyethylene fiber adsorbents (AF series) was recently developed at the Oak Ridge National Laboratory (ORNL). The AF series adsorbents were synthesized by radiation-induced graft polymerization of acrylonitrile and itaconic acid (at different monomer/co-monomer mol ratios) onto high surface area polyethylene fibers. The degree of grafting (%DOG) of AF series adsorbents was found to be 154-354%. The grafted nitrile groups were converted to amidoxime groups by treating with hydroxylamine. The amidoximated adsorbents were then conditioned with 0.44 M KOH at 80 °C followed by screening at ORNL with simulated seawater spiked with 8 ppm uranium. Uranium adsorption capacity in simulated seawater screening ranged from 170-200 g-U/kg-ads irrespective of %DOG. A monomer/co-monomer mol ratio in the range of 7.57-10.14 seemed to be optimum for the highest uranium loading capacity. Subsequently, the adsorbents were also tested with natural seawater at Pacific Northwest National Laboratory (PNNL) using flow-through exposure uptake experiments to determine uranium loading capacity with varying KOH conditioning time at 80 °C. The highest adsorption capacity of AF1 measured after 56 days of marine testing was demonstrated as 3.9 g-U/kg-adsorbent and 3.2 g-U/kg-adsorbent for 1 h and 3 h of KOH conditioning at 80 °C, respectively. Based on capacity values of several AF1 samples, it was observed that changing KOH conditioning from 3 h to 1 h at 80 °C resulted in a 22-27% increase in uranium loading capacity in seawater.

  2. Mixed-state fidelity susceptibility through iterated commutator series expansion

    NASA Astrophysics Data System (ADS)

    Tonchev, N. S.

    2014-11-01

    We present a perturbative approach to the problem of computation of mixed-state fidelity susceptibility (MFS) for thermal states. The mathematical techniques used provide an analytical expression for the MFS as a formal expansion in terms of the thermodynamic mean values of successively higher commutators of the Hamiltonian with the operator involved through the control parameter. That expression is naturally divided into two parts: the usual isothermal susceptibility and a constituent in the form of an infinite series of thermodynamic mean values which encodes the noncommutativity in the problem. If the symmetry properties of the Hamiltonian are given in terms of the generators of some (finite-dimensional) algebra, the obtained expansion may be evaluated in a closed form. This issue is tested on several popular models, for which it is shown that the calculations are much simpler if they are based on the properties from the representation theory of the Heisenberg or SU(1, 1) Lie algebra.

  3. Using NASA's Giovanni System to Simulate Time-Series Stations in the Outflow Region of California's Eel River

    NASA Technical Reports Server (NTRS)

    Acker, James G.; Shen, Suhung; Leptoukh, Gregory G.; Lee, Zhongping

    2012-01-01

    Oceanographic time-series stations provide vital data for the monitoring of oceanic processes, particularly those associated with trends over time and interannual variability. There are likely numerous locations where the establishment of a time-series station would be desirable, but for reasons of funding or logistics, such establishment may not be feasible. An alternative to an operational time-series station is monitoring of sites via remote sensing. In this study, the NASA Giovanni data system is employed to simulate the establishment of two time-series stations near the outflow region of California's Eel River, which carries a high sediment load. Previous time-series analysis of this location (Acker et al. 2009) indicated that remotely-sensed chl a exhibits a statistically significant increasing trend during summer (low flow) months, but no apparent trend during winter (high flow) months. Examination of several newly-available ocean data parameters in Giovanni, including 8-day resolution data, demonstrates the differences in ocean parameter trends at the two locations compared to regionally-averaged time-series. The hypothesis that the increased summer chl a values are related to increasing SST is evaluated, and the signature of the Eel River plume is defined with ocean optical parameters.

  4. Strategic positioning. Part 1: The sources of value under managed care.

    PubMed

    Kauer, R T; Berkowitz, E

    1997-01-01

    Part 1 of this series organizes and discusses the sources of value against a background of an evolving managed care market. Part 2 will present, in more detail, the marketing and financial challenges to organizational positioning and performance across the four stages of managed care. What are the basic principles or tenets of value and how do they apply to the health care industry? Why is strategic positioning so important to health care organizations struggling in a managed care environment and what are the sources of value? Service motivated employees and the systems that educate them represent a stronger competitive advantage than having assets and technology that are available to anyone. As the health care marketplace evolves, organizations must develop a strategic position that will provide such value and for which the customer will be willing to pay.

  5. Sea change: Charting the course for biogeochemical ocean time-series research in a new millennium

    NASA Astrophysics Data System (ADS)

    Church, Matthew J.; Lomas, Michael W.; Muller-Karger, Frank

    2013-09-01

    implementation, testing, and validation of autonomous and remote observing technologies at time-series sites provide new insights into spatiotemporal variability underlying ecosystem changes. 5. The value of existing time-series data for formulating and validating ecosystem models should be promoted. In summary, the scientific underpinnings of ocean time-series programs remain as strong and important today as when these programs were initiated. The emerging data inform our knowledge of the ocean's biogeochemistry and ecology, and improve our predictive capacity about planetary change.

  6. Transient electromagnetic scattering by a radially uniaxial dielectric sphere: Debye series, Mie series and ray tracing methods

    NASA Astrophysics Data System (ADS)

    Yazdani, Mohsen

    Transient electromagnetic scattering by a radially uniaxial dielectric sphere is explored using three well-known methods: Debye series, Mie series, and ray tracing theory. In the first approach, the general solutions for the impulse and step responses of a uniaxial sphere are evaluated using the inverse Laplace transformation of the generalized Mie series solution. Following high frequency scattering solution of a large uniaxial sphere, the Mie series summation is split into the high frequency (HF) and low frequency terms where the HF term is replaced by its asymptotic expression allowing a significant reduction in computation time of the numerical Bromwich integral. In the second approach, the generalized Debye series for a radially uniaxial dielectric sphere is introduced and the Mie series coefficients are replaced by their equivalent Debye series formulations. The results are then applied to examine the transient response of each individual Debye term allowing the identification of impulse returns in the transient response of the uniaxial sphere. In the third approach, the ray tracing theory in a uniaxial sphere is investigated to evaluate the propagation path as well as the arrival time of the ordinary and extraordinary returns in the transient response of the uniaxial sphere. This is achieved by extracting the reflection and transmission angles of a plane wave obliquely incident on the radially oriented air-uniaxial and uniaxial-air boundaries, and expressing the phase velocities as well as the refractive indices of the ordinary and extraordinary waves in terms of the incident angle, optic axis and propagation direction. The results indicate a satisfactory agreement between Debye series, Mie series and ray tracing methods.

  7. Test Series 2. 2: Detailed Test Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    Test Series 2.2 comprises the third sub-series of tests to be scheduled as a part of Test Series 2, the second stage of the combustion research program to be carried out at the Grimethorpe Experimental Pressurized Fluidized Bed Combustion Facility. Test Series 2.1, the first sub-series of tests, was completed in February 1983, and the first half of the second sub-series, Test Series 2.3, in October 1983. Test Series 2.2 is to consist of 350 data gathering hours, which it is hoped to complete within 560 coal burning hours. This document provides a brief description of the Facility and modifications which have been made following the completion of Test Series 2.1. No further modifications were made following the completion of the first half of Test Series 2.3. The operating requirements are specified. The tests will be performed using a UK coal (Kiveton Park), and a UK limestone (Middleton), both nominated by the FRG. Nine objectives are proposed which are to be fulfilled by thirteen test conditions. Six part load tests are included, as defined by Kraftwerk Union AG. The cascade is expected to be on line for each test condition and total cascade exposure is expected to be in excess of 450 hours. Details of sampling and special measurements are given. A test plan schedule envisages the test series being completed within a two month calendar period. Finally, a number of contingency strategies are proposed.

  8. Computation of convex bounds for present value functions with random payments

    NASA Astrophysics Data System (ADS)

    Ahcan, Ales; Darkiewicz, Grzegorz; Goovaerts, Marc; Hoedemakers, Tom

    2006-02-01

    In this contribution we study the distribution of the present value function of a series of random payments in a stochastic financial environment. Such distributions occur naturally in a wide range of applications within fields of insurance and finance. We obtain accurate approximations by developing upper and lower bounds in the convex-order sense for present value functions. Technically speaking, our methodology is an extension of the results of Dhaene et al. [Insur. Math. Econom. 31(1) (2002) 3-33, Insur. Math. Econom. 31(2) (2002) 133-161] to the case of scalar products of mutually independent random vectors.
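    The object being bounded can be made concrete with a Monte Carlo sketch of a stochastic present value function (illustrative parameters; the paper derives analytic convex-order bounds rather than simulating):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def present_value_samples(n_sims=10_000, n_pay=10, payment=1.0,
                              mu=0.04, sigma=0.1):
        """Simulate PV = sum_i X_i * exp(-(Y_1 + ... + Y_i)): a series of
        fixed payments discounted by i.i.d. normal log-returns Y_k."""
        Y = rng.normal(mu, sigma, size=(n_sims, n_pay))
        discount = np.exp(-np.cumsum(Y, axis=1))   # stochastic discount factors
        return (payment * discount).sum(axis=1)

    pv = present_value_samples()
    ```

    Convex upper and lower bounds approximate the distribution of `pv` without simulation; the sample here is only to show what that distribution is a distribution of.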

  9. Weighted statistical parameters for irregularly sampled time series

    NASA Astrophysics Data System (ADS)

    Rimoldini, Lorenzo

    2014-01-01

    Unevenly spaced time series are common in astronomy because of the day-night cycle, weather conditions, dependence on the source position in the sky, allocated telescope time and corrupt measurements, for example, or inherent to the scanning law of satellites like Hipparcos and the forthcoming Gaia. Irregular sampling often causes clumps of measurements and gaps with no data which can severely disrupt the values of estimators. This paper aims at improving the accuracy of common statistical parameters when linear interpolation (in time or phase) can be considered an acceptable approximation of a deterministic signal. A pragmatic solution is formulated in terms of a simple weighting scheme, adapting to the sampling density and noise level, applicable to large data volumes at minimal computational cost. Tests on time series from the Hipparcos periodic catalogue led to significant improvements in the overall accuracy and precision of the estimators with respect to the unweighted counterparts and those weighted by inverse-squared uncertainties. Automated classification procedures employing statistical parameters weighted by the suggested scheme confirmed the benefits of the improved input attributes. The classification of eclipsing binaries, Mira, RR Lyrae, Delta Cephei and Alpha2 Canum Venaticorum stars employing exclusively weighted descriptive statistics achieved an overall accuracy of 92 per cent, about 6 per cent higher than with unweighted estimators.
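    The weighting idea can be sketched with a simple trapezoidal scheme: each observation is weighted by half the time span to its neighbours, so clumped samples are down-weighted and sparse ones up-weighted. This is a simplified illustration in the spirit of the paper, not Rimoldini's exact scheme (which also adapts to noise level):

    ```python
    import numpy as np

    def interval_weights(t):
        """Trapezoidal weights for irregular sampling: each point gets half
        the gap to each neighbour, normalized to sum to one."""
        t = np.asarray(t, dtype=float)
        gaps = np.diff(t)
        w = np.empty_like(t)
        w[0] = gaps[0] / 2
        w[-1] = gaps[-1] / 2
        w[1:-1] = (gaps[:-1] + gaps[1:]) / 2
        return w / w.sum()

    def weighted_mean(t, x):
        """Sampling-density-weighted mean of an irregularly sampled series."""
        return float(np.sum(interval_weights(t) * np.asarray(x, dtype=float)))
    ```

    For a clump of samples followed by one distant point, the weighted mean is pulled far less toward the clump than the plain average.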

  10. Series-Connected Buck Boost Regulators

    NASA Technical Reports Server (NTRS)

    Birchenough, Arthur G.

    2005-01-01

    A series-connected buck boost regulator (SCBBR) is an electronic circuit that bucks a power-supply voltage to a lower regulated value or boosts it to a higher regulated value. The concept of the SCBBR is a generalization of the concept of the SCBR, which was reported in "Series-Connected Boost Regulators" (LEW-15918), NASA Tech Briefs, Vol. 23, No. 7 (July 1997), page 42. Relative to prior DC-voltage-regulator concepts, the SCBBR concept can yield significant reductions in weight and increases in power-conversion efficiency in many applications in which input/output voltage ratios are relatively small and isolation is not required, such as solar-array regulation or battery charging with DC-bus regulation. Usually, a DC voltage regulator is designed to include a DC-to-DC converter to reduce its power loss, size, and weight. Advances in components, increases in operating frequencies, and improved circuit topologies have led to continual increases in efficiency and/or decreases in the sizes and weights of DC voltage regulators. The primary source of inefficiency in the DC-to-DC converter portion of a voltage regulator is the conduction loss and, especially at high frequencies, the switching loss. Although improved components and topology can reduce the switching loss, the reduction is limited by the fact that the converter generally switches all the power being regulated. Like the SCBR concept, the SCBBR concept involves a circuit configuration in which only a fraction of the power is switched, so that the switching loss is reduced by an amount that is largely independent of the specific components and circuit topology used. In an SCBBR, the amount of power switched by the DC-to-DC converter is only the amount needed to make up the difference between the input and output bus voltage. The remaining majority of the power passes through the converter without being switched. The weight and power loss of a DC-to-DC converter are determined primarily by the amount of power
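    The partial-power argument can be made concrete with an idealized, lossless calculation (a hypothetical helper, not from the cited brief): if the converter processes only the make-up power, the switched fraction is just the relative input/output voltage difference.

    ```python
    def switched_power_fraction(v_in, v_out):
        """Idealized, lossless SCBBR: the DC-to-DC converter handles only
        the power corresponding to the input/output voltage difference;
        the remainder passes through in series without being switched."""
        return abs(v_out - v_in) / v_out

    # Boosting a 100 V bus to a regulated 120 V switches only 1/6 of the
    # output power; switching loss scales with that fraction, not with
    # the full regulated power.
    ```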

  11. A complexity measure based method for studying the dependence of 222Rn concentration time series on indoor air temperature and humidity.

    PubMed

    Mihailovic, D T; Udovičić, V; Krmar, M; Arsenić, I

    2014-02-01

    We have suggested a complexity measure based method for studying the dependence of measured (222)Rn concentration time series on indoor air temperature and humidity. This method is based on the Kolmogorov complexity (KL). We have introduced (i) the sequence of the KL, (ii) the Kolmogorov complexity highest value in the sequence (KLM) and (iii) the KL of the product of time series. The noticed loss of the KLM complexity of (222)Rn concentration time series can be attributed to the indoor air humidity that keeps the radon daughters in air. © 2013 Published by Elsevier Ltd.
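    Kolmogorov complexity of a measured series is typically estimated via the Lempel-Ziv (1976) phrase count on a binarized version of the signal; a minimal sketch of that standard estimator (the paper's normalization, the KL sequence, and the KLM construction are not reproduced here):

    ```python
    def binarize(x):
        """Threshold a series at its median to obtain a binary string."""
        m = sorted(x)[len(x) // 2]
        return "".join("1" if v >= m else "0" for v in x)

    def lz76(s):
        """Lempel-Ziv (1976) complexity: count the phrases produced by a
        left-to-right scan, extending the current phrase while it has
        already occurred inside the scanned prefix."""
        n, i, c = len(s), 0, 0
        while i < n:
            k = 1
            while i + k <= n and s[i:i + k] in s[:i + k - 1]:
                k += 1
            c += 1
            i += k
        return c
    ```

    A constant series yields the minimum phrase count, while irregular series score higher; a drop in this count for the 222Rn series is the kind of complexity loss the abstract attributes to humidity.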

  12. The Heuristic Value of p in Inductive Statistical Inference

    PubMed Central

    Krueger, Joachim I.; Heck, Patrick R.

    2017-01-01

    Many statistical methods yield the probability of the observed data – or data more extreme – under the assumption that a particular hypothesis is true. This probability is commonly known as ‘the’ p-value. (Null Hypothesis) Significance Testing ([NH]ST) is the most prominent of these methods. The p-value has been subjected to much speculation, analysis, and criticism. We explore how well the p-value predicts what researchers presumably seek: the probability of the hypothesis being true given the evidence, and the probability of reproducing significant results. We also explore the effect of sample size on inferential accuracy, bias, and error. In a series of simulation experiments, we find that the p-value performs quite well as a heuristic cue in inductive inference, although there are identifiable limits to its usefulness. We conclude that despite its general usefulness, the p-value cannot bear the full burden of inductive inference; it is but one of several heuristic cues available to the data analyst. Depending on the inferential challenge at hand, investigators may supplement their reports with effect size estimates, Bayes factors, or other suitable statistics, to communicate what they think the data say. PMID:28649206
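
    The kind of question these simulations address can be reproduced with a short sketch. This is a hedged illustration (a one-sample z-test with known variance, not the authors' exact design): given that an original study is significant, how often does an exact replication also reach significance?

```python
import math
import random

def p_value(sample_mean, n):
    """Two-sided p-value of a one-sample z-test against mu = 0, sigma = 1."""
    z = abs(sample_mean) * math.sqrt(n)
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def replication_rate(effect=0.5, n=20, trials=5000, alpha=0.05, seed=1):
    """Among significant original studies, the fraction of significant
    exact replications (same true effect and sample size)."""
    rng = random.Random(seed)

    def one_study():
        mean = sum(rng.gauss(effect, 1) for _ in range(n)) / n
        return p_value(mean, n) < alpha

    sig = sig_then_rep = 0
    for _ in range(trials):
        if one_study():
            sig += 1
            sig_then_rep += one_study()
    return sig_then_rep / sig
```

Because the replication is independent of the original, the conditional replication rate simply equals the study's power, which a reported p-value alone does not reveal.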

  13. The Heuristic Value of p in Inductive Statistical Inference.

    PubMed

    Krueger, Joachim I; Heck, Patrick R

    2017-01-01

    Many statistical methods yield the probability of the observed data - or data more extreme - under the assumption that a particular hypothesis is true. This probability is commonly known as 'the' p-value. (Null Hypothesis) Significance Testing ([NH]ST) is the most prominent of these methods. The p-value has been subjected to much speculation, analysis, and criticism. We explore how well the p-value predicts what researchers presumably seek: the probability of the hypothesis being true given the evidence, and the probability of reproducing significant results. We also explore the effect of sample size on inferential accuracy, bias, and error. In a series of simulation experiments, we find that the p-value performs quite well as a heuristic cue in inductive inference, although there are identifiable limits to its usefulness. We conclude that despite its general usefulness, the p-value cannot bear the full burden of inductive inference; it is but one of several heuristic cues available to the data analyst. Depending on the inferential challenge at hand, investigators may supplement their reports with effect size estimates, Bayes factors, or other suitable statistics, to communicate what they think the data say.

  14. Visualization of time series statistical data by shape analysis (GDP ratio changes among Asia countries)

    NASA Astrophysics Data System (ADS)

    Shirota, Yukari; Hashimoto, Takako; Fitri Sari, Riri

    2018-03-01

    Visualizing big time series data has become very significant. In the paper we shall discuss a new analysis method called “statistical shape analysis” or “geometry driven statistics” on time series statistical data in economics. In the paper, we analyse changes in agriculture value added and industry value added (as a percentage of GDP) from 2000 to 2010 in Asia. We handle the data as a set of landmarks on a two-dimensional image to see the deformation using the principal components. The point of the analysis method is the principal components of the given formation, which are eigenvectors of its bending energy matrix. The local deformation can be expressed as a set of non-affine transformations. The transformations give us information about the local differences between 2000 and 2010. Because the non-affine transformation can be decomposed into a set of partial warps, we present the partial warps visually. Statistical shape analysis is widely used in biology but, in economics, no application can be found. In the paper, we investigate its potential to analyse economic data.

  15. Predicting Jakarta composite index using hybrid of fuzzy time series and support vector regression models

    NASA Astrophysics Data System (ADS)

    Febrian Umbara, Rian; Tarwidi, Dede; Budi Setiawan, Erwin

    2018-03-01

    The paper discusses the prediction of the Jakarta Composite Index (JCI) in the Indonesia Stock Exchange. The study is based on JCI historical data for 1286 days to predict the value of JCI one day ahead. This paper proposes prediction in two stages: the first stage uses Fuzzy Time Series (FTS) to predict the values of ten technical indicators, and the second stage uses Support Vector Regression (SVR) to predict the value of JCI one day ahead, resulting in a hybrid FTS-SVR prediction model. The performance of this combined prediction model is compared with the performance of the single-stage prediction model using SVR only. Ten technical indicators are used as input for each model.

  16. Modeling Polio Data Using the First Order Non-Negative Integer-Valued Autoregressive, INAR(1), Model

    NASA Astrophysics Data System (ADS)

    Vazifedan, Turaj; Shitan, Mahendran

    Time series data may consist of counts, such as the number of road accidents, the number of patients in a certain hospital, or the number of customers waiting for service at a certain time. When the values of the observations are large, it is usual to use a Gaussian Autoregressive Moving Average (ARMA) process to model the time series. However, if the observed counts are small, it is not appropriate to use an ARMA process to model the observed phenomenon. In such cases we need to model the time series data by using a Non-Negative Integer-valued Autoregressive (INAR) process. The modeling of counts data is based on the binomial thinning operator. In this paper we illustrate the modeling of counts data using the monthly number of Poliomyelitis cases in the United States between January 1970 and December 1983. We applied the AR(1), the Poisson regression model and the INAR(1) model, and the suitability of these models was assessed by using the Index of Agreement (I.A.). We found that the INAR(1) model is more appropriate in the sense that it had a better I.A., and it is natural since the data are counts.
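
    The binomial thinning operator is easy to state in code. A minimal simulation sketch with assumed parameters (alpha = 0.4, Poisson(1.5) innovations; not the polio data themselves):

```python
import math
import random

def binomial_thin(count, alpha, rng):
    """alpha o count: each of `count` units survives with probability alpha."""
    return sum(1 for _ in range(count) if rng.random() < alpha)

def poisson(lam, rng):
    """Knuth's multiplication method for a Poisson(lam) draw."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def simulate_inar1(alpha=0.4, lam=1.5, steps=5000, seed=7):
    """X_t = alpha o X_{t-1} + eps_t, with Poisson(lam) innovations."""
    rng = random.Random(seed)
    x, path = 0, []
    for _ in range(steps):
        x = binomial_thin(x, alpha, rng) + poisson(lam, rng)
        path.append(x)
    return path

# The stationary mean of this INAR(1) process is lam / (1 - alpha) = 2.5,
# and every simulated value is a non-negative integer, unlike in a
# Gaussian ARMA model.
path = simulate_inar1()
```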

  17. Valuing natural gas power generation assets in the new competitive marketplace

    NASA Astrophysics Data System (ADS)

    Hsu, Michael Chun-Wei

    1999-10-01

    The profitability of natural gas fired power plants depends critically on the spread between electricity and natural gas prices. The price levels of these two energy commodities are the key uncertain variables in determining the operating margin and therefore the value of a power plant. The owner of a generation unit has the decision of dispatching the plant only when profit margins are positive. This operating flexibility is a real option with real value. In this dissertation I introduce the spark spread call options and illustrate how such paper contracts replicate the uncertain payoff space facing power asset owners and, therefore, how the financial options framework can be applied in estimating the value of natural gas generation plants. The intrinsic value of gas power plants is approximated as the sum of a series of spark spread call options with succeeding maturity dates. The Black-Scholes spread option pricing model, with volatility and correlation term structure adjustments, is utilized to price the spark spread options. Sensitivity analysis is also performed on the BS spread option formulation to compare different asset types. In addition I explore the potential of using compound and compound-exchange option concepts to evaluate, respectively, the benefits of delaying investment in new generation and in repowering existing antiquated units. The compound option designates an option on top of another option. In this case the series of spark spread call options is the 'underlying' option while the option to delay new investments is the 'overlying.' The compound-exchange option characterizes the opportunity to 'exchange' the old power plant, with its series of spark spread call options, for a set of new spark spread call options that comes with the new generation unit. The strike price of the compound-exchange option is the repowering capital investment and typically includes the purchase of new steam generators and combustion turbines, as well as other
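
    The core payoff object of the dissertation can be illustrated with a short sketch. All numbers below are hypothetical (the abstract gives none); the intrinsic-value approximation is simply a sum of spark spread call payoffs over the horizon:

```python
def spark_spread_payoff(power_price, gas_price, heat_rate, vom=0.0):
    """Per-MWh payoff of a spark spread call: the owner dispatches only
    when power price - heat_rate * gas price - variable O&M is positive."""
    return max(power_price - heat_rate * gas_price - vom, 0.0)

def intrinsic_value(power_prices, gas_prices, heat_rate, capacity_mw, hours):
    """Plant value approximated as a series of spark spread call payoffs
    with succeeding maturity dates."""
    return sum(
        spark_spread_payoff(pe, pg, heat_rate) * capacity_mw * hours
        for pe, pg in zip(power_prices, gas_prices)
    )

# Hypothetical monthly forwards ($/MWh electricity, $/MMBtu gas), a
# 7.5 MMBtu/MWh heat rate, 100 MW plant, 500 on-peak hours per month.
# Month 2 is out of the money (30 < 7.5 * 4.5) and contributes nothing.
value = intrinsic_value([45, 30, 60], [4.0, 4.5, 5.0], 7.5, 100, 500)
```

Pricing the options before expiry (Black-Scholes spread models, as in the dissertation) adds time value on top of this intrinsic floor.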

  18. Production, consumption, and prices of softwood products in North America: regional time series data, 1950 to 1985.

    Treesearch

    Darius M. Adams; Kristine C. Jackson; Richard W. Haynes

    1988-01-01

    This report provides 35 years of information on softwood timber production and consumption in the United States and Canada. Included are regional time series on production and prices of softwood lumber, plywood, residues, and pulpwood; timber harvest volumes and values; production costs; and recovery factors.

  19. Errors in the estimation of approximate entropy and other recurrence-plot-derived indices due to the finite resolution of RR time series.

    PubMed

    García-González, Miguel A; Fernández-Chimeno, Mireya; Ramos-Castro, Juan

    2009-02-01

    An analysis of the errors due to the finite resolution of RR time series in the estimation of the approximate entropy (ApEn) is described. The quantification errors in the discrete RR time series produce considerable errors in the ApEn estimation (bias and variance) when the signal variability or the sampling frequency is low. Similar errors can be found in indices related to the quantification of recurrence plots. An easy way to calculate a figure of merit [the signal to resolution of the neighborhood ratio (SRN)] is proposed in order to predict when the bias in the indices could be high. When SRN is close to an integer value n, the bias is higher than when near n - 1/2 or n + 1/2. Moreover, if SRN is close to an integer value, the lower this value, the greater the bias is.

  20. Illuminating and inspiring: using television historical drama to cultivate contemporary nursing values and critical thinking.

    PubMed

    McAllister, Margaret; Rogers, Irene; Lee Brien, Donna

    2015-01-01

    As the world prepares to commemorate the centenary of the First World War, it is timely to discuss meaningful learning activities that students of nursing could be engaged in to encourage them to reflect on the nurse's role then and now. Several films and television series about the war and featuring nursing have already been aired. No doubt there will be many more stories to come. Such stories have the potential to do more than eulogise nursing for students and practitioners. Stories, such as The crimson field, have potential to stimulate serious contemplation about values and cultural practices that have remained constant or have changed and to assist students to develop and articulate values that will be fitting for contemporary practice. Recently, excerpts from the series were examined with a group of nursing students and key learnings were found. These are shared in this paper for the benefit of educators planning to utilise public discourse as triggers to engage nursing students in discussions about nursing values, nursing history and representations of the profession.

  1. Community Values and Unconventional Teacher Behavior: A National Canadian Study (1945-1985).

    ERIC Educational Resources Information Center

    Manley-Casimir, Michael; And Others

    This study focuses on the tension between community norms/values and teacher behavior in Canada. A series of instances where teachers in public or denominational schools are accused of misconduct are presented, and these instances are traced from their beginnings to their ends (that is, to the resolution or acceptance of the social conflict). Part…

  2. Characterizability of Metabolic Pathway Systems from Time Series Data

    PubMed Central

    Voit, Eberhard O.

    2013-01-01

    Over the past decade, the biomathematical community has devoted substantial effort to the complicated challenge of estimating parameter values for biological systems models. An even more difficult issue is the characterization of functional forms for the processes that govern these systems. Most parameter estimation approaches tacitly assume that these forms are known or can be assumed with some validity. However, this assumption is not always true. The recently proposed method of Dynamic Flux Estimation (DFE) addresses this problem in a genuinely novel fashion for metabolic pathway systems. Specifically, DFE allows the characterization of fluxes within such systems through an analysis of metabolic time series data. Its main drawback is the fact that DFE can only directly be applied if the pathway system contains as many metabolites as unknown fluxes. This situation is unfortunately rare. To overcome this roadblock, earlier work in this field had proposed strategies for augmenting the set of unknown fluxes with independent kinetic information, which however is not always available. Employing Moore-Penrose pseudo-inverse methods of linear algebra, the present article discusses an approach for characterizing fluxes from metabolic time series data that is applicable even if the pathway system is underdetermined and contains more fluxes than metabolites. Intriguingly, this approach is independent of a specific modeling framework and unaffected by noise in the experimental time series data. The results reveal whether any fluxes may be characterized and, if so, which subset is characterizable. They also help with the identification of fluxes that, if they could be determined independently, would allow the application of DFE. PMID:23391489
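
    The pseudo-inverse idea can be made concrete with a toy network (hypothetical, not from the article): two measured metabolites, three unknown fluxes, and the minimum-norm flux vector v = Sᵀ(SSᵀ)⁻¹b that reproduces the slopes estimated from the time series.

```python
def minimum_norm_fluxes(S, b):
    """Solve the underdetermined system S v = b for a 2-row
    stoichiometric matrix S via v = S^T (S S^T)^-1 b, the
    minimum-norm solution given by the Moore-Penrose pseudoinverse."""
    # Form the 2x2 matrix S S^T and invert it explicitly.
    a11 = sum(x * x for x in S[0])
    a12 = sum(x * y for x, y in zip(S[0], S[1]))
    a22 = sum(y * y for y in S[1])
    det = a11 * a22 - a12 * a12
    # y = (S S^T)^-1 b
    y1 = (a22 * b[0] - a12 * b[1]) / det
    y2 = (-a12 * b[0] + a11 * b[1]) / det
    # v = S^T y
    return [S[0][j] * y1 + S[1][j] * y2 for j in range(len(S[0]))]

S = [[1, -1, 0],   # d[X1]/dt = v1 - v2
     [0, 1, -1]]   # d[X2]/dt = v2 - v3
b = [0.5, 0.2]     # slopes estimated from the metabolite time series
v = minimum_norm_fluxes(S, b)
```

Any flux component that is the same across all solutions of S v = b is characterizable in the article's sense; the pseudoinverse provides one distinguished representative to inspect.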

  3. Predicting Physical Time Series Using Dynamic Ridge Polynomial Neural Networks

    PubMed Central

    Al-Jumeily, Dhiya; Ghazali, Rozaida; Hussain, Abir

    2014-01-01

    Forecasting naturally occurring phenomena is a common problem in many domains of science, and this has been addressed and investigated by many scientists. The importance of time series prediction stems from the fact that it has a wide range of applications, including control systems, engineering processes, environmental systems and economics. From the knowledge of some aspects of the previous behaviour of the system, the aim of the prediction process is to determine or predict its future behaviour. In this paper, we consider a novel application of a higher order polynomial neural network architecture called Dynamic Ridge Polynomial Neural Network that combines the properties of higher order and recurrent neural networks for the prediction of physical time series. In this study, four types of signals have been used: the Lorenz attractor, the mean value of the AE index, the sunspot number, and heat wave temperature. The simulation results showed good improvements in terms of the signal-to-noise ratio in comparison to a number of benchmarked higher order and feedforward neural networks. PMID:25157950

  4. Statistical analysis of CSP plants by simulating extensive meteorological series

    NASA Astrophysics Data System (ADS)

    Pavón, Manuel; Fernández, Carlos M.; Silva, Manuel; Moreno, Sara; Guisado, María V.; Bernardos, Ana

    2017-06-01

    The feasibility analysis of any power plant project needs the estimation of the amount of energy it will be able to deliver to the grid during its lifetime. To achieve this, its feasibility study requires a precise knowledge of the solar resource over a long term period. In Concentrating Solar Power (CSP) projects, financing institutions typically require several statistical probability-of-exceedance scenarios of the expected electric energy output. Currently, the industry assumes a correlation between probabilities of exceedance of annual Direct Normal Irradiance (DNI) and energy yield. In this work, this assumption is tested by the simulation of the energy yield of CSP plants using as input a 34-year series of measured meteorological parameters and solar irradiance. The results of this work show that, even if some correspondence between the probabilities of exceedance of annual DNI values and energy yields is found, the intra-annual distribution of DNI may significantly affect this correlation. This result highlights the need for standardized procedures for the elaboration of DNI time series representative of a given probability of exceedance of annual DNI.

  5. Predicting physical time series using dynamic ridge polynomial neural networks.

    PubMed

    Al-Jumeily, Dhiya; Ghazali, Rozaida; Hussain, Abir

    2014-01-01

    Forecasting naturally occurring phenomena is a common problem in many domains of science, and this has been addressed and investigated by many scientists. The importance of time series prediction stems from the fact that it has a wide range of applications, including control systems, engineering processes, environmental systems and economics. From the knowledge of some aspects of the previous behaviour of the system, the aim of the prediction process is to determine or predict its future behaviour. In this paper, we consider a novel application of a higher order polynomial neural network architecture called Dynamic Ridge Polynomial Neural Network that combines the properties of higher order and recurrent neural networks for the prediction of physical time series. In this study, four types of signals have been used: the Lorenz attractor, the mean value of the AE index, the sunspot number, and heat wave temperature. The simulation results showed good improvements in terms of the signal-to-noise ratio in comparison to a number of benchmarked higher order and feedforward neural networks.

  6. A Physiological Time Series Dynamics-Based Approach to Patient Monitoring and Outcome Prediction

    PubMed Central

    Lehman, Li-Wei H.; Adams, Ryan P.; Mayaud, Louis; Moody, George B.; Malhotra, Atul; Mark, Roger G.; Nemati, Shamim

    2015-01-01

    Cardiovascular variables such as heart rate (HR) and blood pressure (BP) are regulated by an underlying control system, and therefore, the time series of these vital signs exhibit rich dynamical patterns of interaction in response to external perturbations (e.g., drug administration), as well as pathological states (e.g., onset of sepsis and hypotension). A question of interest is whether “similar” dynamical patterns can be identified across a heterogeneous patient cohort, and be used for prognosis of patients’ health and progress. In this paper, we used a switching vector autoregressive framework to systematically learn and identify a collection of vital sign time series dynamics, which are possibly recurrent within the same patient and may be shared across the entire cohort. We show that these dynamical behaviors can be used to characterize the physiological “state” of a patient. We validate our technique using simulated time series of the cardiovascular system, and human recordings of HR and BP time series from an orthostatic stress study with known postural states. Using the HR and BP dynamics of an intensive care unit (ICU) cohort of over 450 patients from the MIMIC II database, we demonstrate that the discovered cardiovascular dynamics are significantly associated with hospital mortality (dynamic modes 3 and 9, p = 0.001, p = 0.006 from logistic regression after adjusting for the APACHE scores). Combining the dynamics of BP time series and SAPS-I or APACHE-III provided a more accurate assessment of patient survival/mortality in the hospital than using SAPS-I and APACHE-III alone (p = 0.005 and p = 0.045). Our results suggest that the discovered dynamics of vital sign time series may contain additional prognostic value beyond that of the baseline acuity measures, and can potentially be used as an independent predictor of outcomes in the ICU. PMID:25014976

  7. Validation of Vegetation Index Time Series from Suomi NPP Visible Infrared Imaging Radiometer Suite Using Tower Radiation Flux Measurements

    NASA Astrophysics Data System (ADS)

    Miura, T.; Kato, A.; Wang, J.; Vargas, M.; Lindquist, M.

    2015-12-01

    Satellite vegetation index (VI) time series data serve as an important means to monitor and characterize seasonal changes of terrestrial vegetation and their interannual variability. It is, therefore, critical to ensure quality of such VI products and one method of validating VI product quality is cross-comparison with in situ flux tower measurements. In this study, we evaluated the quality of VI time series derived from Visible Infrared Imaging Radiometer Suite (VIIRS) onboard the Suomi National Polar-orbiting Partnership (NPP) spacecraft by cross-comparison with in situ radiation flux measurements at select flux tower sites over North America and Europe. VIIRS is a new polar-orbiting satellite sensor series, slated to replace National Oceanic and Atmospheric Administration's Advanced Very High Resolution Radiometer in the afternoon overpass and to continue the highly-calibrated data streams initiated with Moderate Resolution Imaging Spectrometer of National Aeronautics and Space Administration's Earth Observing System. The selected sites covered a wide range of biomes, including croplands, grasslands, evergreen needle forest, woody savanna, and open shrublands. The two VIIRS indices of the Top-of-Atmosphere (TOA) Normalized Difference Vegetation Index (NDVI) and the atmospherically-corrected, Top-of-Canopy (TOC) Enhanced Vegetation Index (EVI) (daily, 375 m spatial resolution) were compared against the TOC NDVI and a two-band version of EVI (EVI2) calculated from tower radiation flux measurements, respectively. VIIRS and Tower VI time series showed comparable seasonal profiles across biomes with statistically significant correlations (> 0.60; p-value < 0.01). "Start-of-season (SOS)" phenological metric values extracted from VIIRS and Tower VI time series were also highly compatible (R2 > 0.95), with mean differences of 2.3 days and 5.0 days for the NDVI and the EVI, respectively. These results indicate that VIIRS VI time series can capture seasonal evolution of

  8. Convergence of a Catalan Series

    ERIC Educational Resources Information Center

    Koshy, Thomas; Gao, Zhenguang

    2012-01-01

    This article studies the convergence of the infinite series of the reciprocals of the Catalan numbers. We extract the sum of the series as well as some related ones, illustrating the power of the calculus in the study of the Catalan numbers.
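
    The series in question can be checked numerically. A short sketch; the closed form 2 + 4√3π/27 is the known value of the sum of reciprocals of the Catalan numbers:

```python
import math

def catalan(n):
    """n-th Catalan number, C_n = binom(2n, n) / (n + 1)."""
    return math.comb(2 * n, n) // (n + 1)

# Terms decay roughly like 4^-n, so 40 terms reach machine precision.
partial = sum(1 / catalan(n) for n in range(40))
closed_form = 2 + 4 * math.sqrt(3) * math.pi / 27
```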

  9. Precision Tests of a Quantum Hall Effect Device DC Equivalent Circuit Using Double-Series and Triple-Series Connections

    PubMed Central

    Jeffery, A.; Elmquist, R. E.; Cage, M. E.

    1995-01-01

    Precision tests verify the dc equivalent circuit used by Ricketts and Kemeny to describe a quantum Hall effect device in terms of electrical circuit elements. The tests employ the use of cryogenic current comparators and the double-series and triple-series connection techniques of Delahaye. Verification of the dc equivalent circuit in double-series and triple-series connections is a necessary step in developing the ac quantum Hall effect as an intrinsic standard of resistance. PMID:29151768

  10. Simulation of Ground Winds Time Series

    NASA Technical Reports Server (NTRS)

    Adelfang, S. I.

    2008-01-01

    A simulation process has been developed for generation of the longitudinal and lateral components of ground wind atmospheric turbulence as a function of mean wind speed, elevation, temporal frequency range and distance between locations. The distance between locations influences the spectral coherence between the simulated series at adjacent locations. Short distances reduce correlation only at high frequencies; as distances increase, correlation is reduced over a wider range of frequencies. The choice of values for the constants d1 and d3 in the PSD model is the subject of work in progress. An improved knowledge of the values for z0 as a function of wind direction at the ARES-1 launch pads is necessary for definition of d1. Results of other studies at other locations may be helpful, as summarized in Fichtl's recent correspondence. Ideally, further research is needed based on measurements of ground wind turbulence with high resolution anemometers at a number of altitudes at a new KSC tower located closer to the ARES-1 launch pad. The proposed research would be based on turbulence measurements that may be influenced by surface terrain roughness that may be significantly different from roughness prior to 1970 in Fichtl's measurements. Significant improvements in instrumentation, data storage and processing will greatly enhance the capability to model ground wind profiles and ground wind turbulence.

  11. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.

    PubMed

    Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposed a time-series forecasting model based on estimating missing values followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset based on ordering of the data as a research dataset. The proposed time-series forecasting model has three foci. First, this study uses five imputation methods to handle the missing values rather than directly deleting them. Second, we identified the key variables via factor analysis and then deleted the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, which is compared with the listing method in terms of forecasting error. These experimental results indicate that the Random Forest forecasting model, when applied to variable selection with full variables, has better forecasting performance than the listing model. In addition, this experiment shows that the proposed variable selection can help the five forecast methods used here to improve their forecasting capability.

  12. Structure- and ligand-based structure-activity relationships for a series of inhibitors of aldolase.

    PubMed

    Ferreira, Leonardo G; Andricopulo, Adriano D

    2012-12-01

    Aldolase has emerged as a promising molecular target for the treatment of human African trypanosomiasis. Over the last years, due to the increasing number of patients infected with Trypanosoma brucei, there is an urgent need for new drugs to treat this neglected disease. In the present study, two-dimensional fragment-based quantitative-structure activity relationship (QSAR) models were generated for a series of inhibitors of aldolase. Through the application of leave-one-out and leave-many-out cross-validation procedures, significant correlation coefficients were obtained (r²=0.98 and q²=0.77) as an indication of the statistical internal and external consistency of the models. The best model was employed to predict pKi values for a series of test set compounds, and the predicted values were in good agreement with the experimental results, showing the power of the model for untested compounds. Moreover, structure-based molecular modeling studies were performed to investigate the binding mode of the inhibitors in the active site of the parasitic target enzyme. The structural and QSAR results provided useful molecular information for the design of new aldolase inhibitors within this structural class.
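
    The leave-one-out q² statistic mentioned above is a generic cross-validation measure and easy to sketch. The descriptor values and activities below are hypothetical (the paper's compounds are not reproduced here); with a single descriptor the procedure reduces to refitting a line n times:

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def loo_q2(xs, ys):
    """Leave-one-out cross-validated q^2 = 1 - PRESS / SS_tot:
    each point is left out, the model refit, and its value predicted."""
    my = sum(ys) / len(ys)
    ss_tot = sum((y - my) ** 2 for y in ys)
    press = 0.0
    for i in range(len(xs)):
        a, b = fit_line(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        press += (ys[i] - (a * xs[i] + b)) ** 2
    return 1 - press / ss_tot

# Hypothetical descriptor values vs. pKi for six compounds.
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [4.1, 4.9, 6.2, 6.8, 8.1, 8.8]
q2 = loo_q2(xs, ys)
```

Because each prediction is made without the compound being predicted, q² is always below the fitted r² and is the more honest indicator of internal consistency.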

  13. Time series with tailored nonlinearities

    NASA Astrophysics Data System (ADS)

    Räth, C.; Laut, I.

    2015-10-01

    It is demonstrated how to generate time series with tailored nonlinearities by inducing well-defined constraints on the Fourier phases. Correlations between the phase information of adjacent phases and (static and dynamic) measures of nonlinearities are established and their origin is explained. By applying a set of simple constraints on the phases of an originally linear and uncorrelated Gaussian time series, the observed scaling behavior of the intensity distribution of empirical time series can be reproduced. The power law character of the intensity distributions being typical for, e.g., turbulence and financial data can thus be explained in terms of phase correlations.
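
    The basic machinery the paper builds on, changing Fourier phases while keeping amplitudes fixed so that all linear properties survive, can be sketched in a few lines. For simplicity the sketch shows the inverse operation (randomizing rather than constraining the phases), with a pure-Python DFT; details are assumed, not taken from the paper:

```python
import cmath
import math
import random

def dft(x):
    """Naive O(n^2) discrete Fourier transform."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    """Inverse DFT, returning the real part of each sample."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def phase_randomized_surrogate(x, seed=0):
    """Replace each phase by a random one, keeping amplitudes and
    conjugate symmetry so the inverse transform stays real."""
    rng = random.Random(seed)
    n = len(x)
    X = dft(x)
    Y = [complex(0)] * n
    Y[0] = X[0]                      # mean is preserved
    if n % 2 == 0:
        Y[n // 2] = X[n // 2]        # Nyquist bin is preserved
    for k in range(1, (n + 1) // 2):
        phi = rng.uniform(0, 2 * math.pi)
        Y[k] = abs(X[k]) * cmath.exp(1j * phi)
        Y[n - k] = Y[k].conjugate()
    return idft(Y)

x = [math.sin(2 * math.pi * t / 32)
     + 0.5 * math.sin(2 * math.pi * 5 * t / 32 + 1.0) for t in range(32)]
y = phase_randomized_surrogate(x)
```

Imposing well-defined constraints on the phases, instead of drawing them uniformly as above, is what lets the authors induce tailored nonlinearities while the power spectrum stays untouched.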

  14. Utility of routine versus selective upper gastrointestinal series to detect anastomotic leaks after laparoscopic gastric bypass.

    PubMed

    Schiesser, Marc; Guber, Josef; Wildi, Stefan; Guber, Ivo; Weber, Markus; Muller, Markus K

    2011-08-01

    In up to 4% of laparoscopic Roux-en-Y gastric bypass (LRYGB) procedures, anastomotic leaks occur. Early detection of gastrointestinal leakage is important for successful treatment. Consequently, many centers advocate routine postoperative upper gastrointestinal (UGI) series. The aim of this study was to determine the utility of this practice after LRYGB. Eight hundred four consecutive patients undergoing LRYGB from June 2000 to April 2010 were analyzed prospectively. The first 382 patients received routine UGI series between the third and fifth postoperative days (group A). Thereafter, the test was only performed when clinical findings (tachycardia, fever, and drainage content) were suspicious for a leak of the gastrointestinal anastomosis (group B; n = 422). Overall, nine of 804 (1.1%) patients suffered from leaks at the gastroenterostomy. In group A, four of 382 (1%) patients had a leak, but only two were detected by the routine UGI series. This corresponds to a sensitivity of 50%. In group B, the sensitivity was higher, at 80%. Specificities were comparable, at 97% and 91%, respectively. Routine UGI series cost only 1.6% of the overall costs of a non-complicated gastric bypass procedure. With this leak rate and sensitivity, US $86,800 would have to be spent on 200 routine UGI series to find one leak, which is not justified. This study shows that routine UGI series have a low sensitivity for the detection of anastomotic leaks after LRYGB. In most cases, the diagnosis is initiated by clinical findings. Therefore, routine upper gastrointestinal series are of limited value for the diagnosis of a leak.

  15. Homogenising time series: Beliefs, dogmas and facts

    NASA Astrophysics Data System (ADS)

    Domonkos, P.

    2010-09-01

    For obtaining reliable information about climate change and climate variability, the use of high quality data series is essential, and one basic tool of quality improvement is the statistical homogenisation of observed time series. In recent decades a large number of homogenisation methods have been developed, but the real effects of their application on time series are still not entirely known. The ongoing COST HOME project (COST ES0601) is devoted to revealing the real impacts of homogenisation methods in more detail and with higher confidence than before. As part of the COST activity, a benchmark dataset was built whose characteristics approach well the characteristics of real networks of observed time series. This dataset offers a much better opportunity than ever to test the wide variety of homogenisation methods and analyse the real effects of selected theoretical recommendations. The author believes that several old theoretical rules have to be re-evaluated. Some examples of the open questions: a) Can statistically detected change-points be accepted only with the confirmation of metadata information? b) Do semi-hierarchic algorithms for detecting multiple change-points in time series function effectively in practice? c) Is it good to limit the spatial comparison of candidate series to up to five other series in the neighbourhood? Empirical results - those from the COST benchmark, and other experiments too - show that real observed time series usually include several inhomogeneities of different sizes. Small inhomogeneities seem like part of the climatic variability, thus the pure application of the classic theory that change-points of observed time series can be found and corrected one-by-one is impossible. However, after homogenisation the linear trends, seasonal changes and long-term fluctuations of time series are usually much closer to reality than in raw time series. The developers and users of homogenisation methods have to bear in mind that

  16. Legends Lecture Series III

    NASA Image and Video Library

    2011-07-27

    Marina Benigno (far right) welcomes former administrative assistants and secretaries to the third Legends Lecture Series session at Stennis Space Center. Lecture participants spoke about their work experiences with Stennis directors and deputy directors. Panel participants included Janet Austill (l to r), Mary Lou Matthews, Helen Paul, Wanda Howard, Ann Westendorf and Mary Gene Dick. The Legends Lecture Series is part of a yearlong celebration of the 50th anniversary of Stennis Space Center.

  17. Expression Templates for Truncated Power Series

    NASA Astrophysics Data System (ADS)

    Cary, John R.; Shasharina, Svetlana G.

    1997-05-01

    Truncated power series are used extensively in accelerator transport modeling for rapid tracking and analysis of nonlinearity. Such mathematical objects are naturally represented computationally as objects in C++. This is more intuitive and produces more transparent code through operator overloading. However, C++ object use often comes with a computational speed loss due, e.g., to the creation of temporaries. We have developed a subset of truncated power series expression templates (http://monet.uwaterloo.ca/blitz/). Such expression templates use the powerful template-processing facility of C++ to combine complicated expressions into series operations that execute more rapidly. We compare computational speeds with existing truncated power series libraries.
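
The truncation rule that makes this representation fast can be illustrated outside C++. Below is a minimal Python sketch of arithmetic on power series truncated at a fixed order; it is not the authors' expression-template library (whose whole point is eliminating temporaries at compile time), just the semantics such a library implements:

```python
# Coefficients c[0..N] represent c0 + c1*x + ... + cN*x^N; every operation
# discards terms above order N, which keeps the cost of tracking bounded.

class TPS:
    def __init__(self, coeffs, order):
        self.order = order
        self.c = (list(coeffs) + [0.0] * (order + 1))[: order + 1]

    def __add__(self, other):
        return TPS([a + b for a, b in zip(self.c, other.c)], self.order)

    def __mul__(self, other):
        # Convolution truncated at self.order: no terms beyond the
        # truncation order are ever created.
        out = [0.0] * (self.order + 1)
        for i, a in enumerate(self.c):
            for j in range(self.order + 1 - i):
                out[i + j] += a * other.c[j]
        return TPS(out, self.order)

# (1 + x)^2 truncated at order 2 -> 1 + 2x + x^2
p = TPS([1.0, 1.0], order=2)
sq = p * p
print(sq.c)  # [1.0, 2.0, 1.0]
```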

  18. The Italian contribution to the World Soils Book Series: The Soils of Italy

    NASA Astrophysics Data System (ADS)

    Costantini, Edoardo; Dazzi, Carmelo

    2015-04-01

    Passing into the age of the "Anthropocene", man has forgotten the ancient bond that ties him to the soil; turning from "homo sapiens" into "homo technologicus", he has stopped considering how much his well-being and the quality of life on Earth are fundamentally linked to the quality of soils. Yet today, as never before, maintaining the quality of soils is of paramount relevance for the sustainable development of humanity. Unfortunately, as soil is a crypto-resource, not many lay people recognize its importance in the biosphere equilibrium, and they seldom consider it among the environmental resources that must be protected. To fill this gap in knowledge, the publisher Springer, under the leadership of Professor Alfred Hartemink, has published the World Soils Book Series, whose aim is to spread knowledge of the soils of a particular country in a concise and highly reader-friendly way. The volume "The Soils of Italy" belongs to this international series of books. Its ambitious goals are to establish a broad base for the knowledge of the soils of Italy and to give useful information on i) their characteristics, diffusion and fertility, ii) the main threats they are subjected to, and iii) future scenarios of relationships between soil sciences and disciplines not traditionally linked to the world of agriculture, such as urban development, medicine, economics, sociology and archaeology. Italy hosts about 75% of the global pedodiversity. A vast majority of the WRB reference soil groups (25 out of 32), as well as soil orders of Soil Taxonomy (10 out of 12), are represented in the main Italian soil typological units (STUs). More than a fourth of STUs belong to Cambisols, more than a half to only four reference soil groups (Cambisols, Luvisols, Regosols, Phaeozems), and 88% to nine RSGs (the former plus Calcisols, Vertisols, Fluvisols, Leptosols, and Andosols), while the remaining 16 RSGs are represented in 12% of STUs. The clear skewness and

  19. Fuzzy Inference System Approach for Locating Series, Shunt, and Simultaneous Series-Shunt Faults in Double Circuit Transmission Lines

    PubMed Central

    Swetapadma, Aleena; Yadav, Anamika

    2015-01-01

    Many schemes have been reported for shunt fault location estimation, but location estimation of series (open conductor) faults has not been dealt with so far. Existing numerical relays only detect the open conductor (series) fault and indicate the faulty phase(s); they are unable to locate the series fault, so the repair crew needs to patrol the complete line to find its location. In this paper, fuzzy-based fault detection/classification and location schemes in the time domain are proposed for series faults, shunt faults, and simultaneous series and shunt faults. The fault simulation studies and fault location algorithm have been developed using Matlab/Simulink. Synchronized phasors of the voltage and current signals from both ends of the line are used as input to the proposed fuzzy-based fault location scheme. The percentage error in location is within 1% for series faults and within 5% for shunt faults for all tested fault cases. The percentage error in location estimation is validated using the chi-square test at both the 1% and 5% levels of significance. PMID:26413088

  20. Clustering of financial time series

    NASA Astrophysics Data System (ADS)

    D'Urso, Pierpaolo; Cappelli, Carmela; Di Lallo, Dario; Massari, Riccardo

    2013-05-01

    This paper addresses the topic of classifying financial time series in a fuzzy framework, proposing two fuzzy clustering models, both based on GARCH models. In general, clustering of financial time series, due to their peculiar features, needs the definition of suitable distance measures. To this aim, the first fuzzy clustering model exploits the autoregressive representation of GARCH models and employs, in the framework of a partitioning-around-medoids algorithm, the classical autoregressive metric. The second fuzzy clustering model, also based on the partitioning-around-medoids algorithm, uses the Caiado distance, a Mahalanobis-like distance based on estimated GARCH parameters and covariances that takes into account the information about the volatility structure of the time series. To illustrate the merits of the proposed fuzzy approaches, an application to the problem of classifying 29 time series of Euro exchange rates against international currencies is presented and discussed, also comparing the fuzzy models with their crisp versions.
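
The "classical autoregressive metric" mentioned above can be sketched independently of the fuzzy clustering machinery: fit an AR(p) model to each series and compare the coefficient vectors. The following is a simplified illustration (least-squares AR fit, Euclidean distance between coefficients), not the authors' GARCH-based models; the series and parameters are invented:

```python
import numpy as np

def ar_coeffs(x, p=2):
    """Least-squares AR(p) coefficients of a (de-meaned) series."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    y = x[p:]                                      # targets x[t], t = p..n-1
    X = np.column_stack([x[p - 1 - k : n - 1 - k]  # lags x[t-1] .. x[t-p]
                         for k in range(p)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def ar_distance(x, y, p=2):
    """Piccolo-style AR metric: distance between AR coefficient vectors."""
    return float(np.linalg.norm(ar_coeffs(x, p) - ar_coeffs(y, p)))

rng = np.random.default_rng(0)
e = rng.standard_normal(500)
a = np.zeros(500)
b = np.zeros(500)
for t in range(1, 500):
    a[t] = 0.8 * a[t - 1] + e[t]    # strongly persistent AR(1)
    b[t] = -0.5 * b[t - 1] + e[t]   # anti-persistent AR(1)
print(ar_distance(a, b) > ar_distance(a, a))  # True: a is closest to itself
```

A medoid-based (crisp or fuzzy) clustering would then operate on the pairwise matrix of such distances.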

  1. Reverse engineering gene regulatory networks from measurement with missing values.

    PubMed

    Ogundijo, Oyetunji E; Elmas, Abdulkadir; Wang, Xiaodong

    2016-12-01

    Gene expression time series data are usually in the form of high-dimensional arrays. Unfortunately, the data may contain missing values: either the expression values of some genes at some time points, or the entire expression values of a single time point or of a set of consecutive time points. This significantly affects the performance of many gene expression analysis algorithms that take the complete matrix of gene expression measurements as input. For instance, previous works have shown that gene regulatory interactions can be estimated from the complete matrix of gene expression measurements. Yet, to date, few algorithms have been proposed for the inference of gene regulatory networks from gene expression data with missing values. We describe a nonlinear dynamic stochastic model for the evolution of gene expression. The model captures the structural, dynamical, and nonlinear natures of the underlying biomolecular systems. We present point-based Gaussian approximation (PBGA) filters for joint state and parameter estimation of the system with one-step or two-step missing measurements. The PBGA filters use Gaussian approximation and various quadrature rules, such as the unscented transform (UT), the third-degree cubature rule and the central difference rule, for computing the related posteriors. The proposed algorithm is evaluated with satisfying results for synthetic networks, in silico networks released as part of the DREAM project, and a real biological network, the in vivo reverse engineering and modeling assessment (IRMA) network of the yeast Saccharomyces cerevisiae. PBGA filters are proposed to elucidate the underlying gene regulatory network (GRN) from time series gene expression data that contain missing values. In our state-space model, we propose a measurement model that incorporates the effect of the missing data points into the sequential algorithm.
This approach produces a better inference of the model parameters and hence
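
The PBGA filters in this record are considerably more elaborate, but the core idea of a sequential filter absorbing missing measurements can be shown with a scalar Kalman filter: when an observation is absent, only the prediction step runs, so the state estimate carries through the gap. All parameters below are invented for illustration:

```python
def kalman_1d(obs, q=0.01, r=0.25):
    """Minimal 1-D Kalman filter with a random-walk state. A missing
    measurement (None) triggers the predict step only; this mirrors, in a
    much simpler setting, how sequential filters handle missing time points."""
    m, P = 0.0, 1.0            # prior mean and variance
    out = []
    for z in obs:
        P = P + q              # predict: random-walk state, variance grows
        if z is not None:      # update only when a measurement exists
            K = P / (P + r)
            m = m + K * (z - m)
            P = (1.0 - K) * P
        out.append(m)
    return out

est = kalman_1d([1.0, 1.1, None, None, 1.3, 1.2])
print(est)
```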

  2. How Do Value-Added Indicators Compare to Other Measures of Teacher Effectiveness? What We Know Series: Value-Added Methods and Applications. Knowledge Brief 5

    ERIC Educational Resources Information Center

    Harris, Douglas N.

    2012-01-01

    In the recent drive to revamp teacher evaluation and accountability, measures of a teacher's value added have played the starring role. But the star of the show is not always the best actor, nor can the star succeed without a strong supporting cast. In assessing teacher performance, observations of classroom practice, portfolios of teachers' work,…

  3. Introductory lecture series for first-year radiology residents: implementation, investment and assessment.

    PubMed

    Chapman, Teresa; Chew, Felix S

    2013-03-01

    A lecture series was established to give new radiology residents a rapid course on the fundamental concepts of professionalism, safety, and interpretation of diagnostic imaging, and its educational value was evaluated through surveys. Twenty-six live 45-minute lectures presented by 16 or 17 faculty members were organized exclusively for the first class of radiology residents, held over a 2-month period at the beginning of certain weekdays. Online surveys were conducted after the course to gather feedback from residents. Average resident rotation evaluation scores over the first semester were compared for the classes before and after the new course's implementation. The lecture series was successfully organized and implemented. A total of 33 residents sat through the course over three summers. Faculty reported a reasonable number of preparation hours, and 100% of residents indicated they valued the course. Class average evaluation scores did not change significantly after the introduction of the 2-month course. This collection of introductory lectures on professionalism, safety, and diagnostic imaging, delivered early in the first year of the radiology residency, requires a reasonable number of invested preparation hours by the faculty and results in a universal increase in resident confidence. However, we were unable to demonstrate an objective improvement in resident performance on clinical rotations. Copyright © 2013 AUR. Published by Elsevier Inc. All rights reserved.

  4. Probabilistic Reasoning Over Seismic Time Series: Volcano Monitoring by Hidden Markov Models at Mt. Etna

    NASA Astrophysics Data System (ADS)

    Cassisi, Carmelo; Prestifilippo, Michele; Cannata, Andrea; Montalto, Placido; Patanè, Domenico; Privitera, Eugenio

    2016-07-01

    From January 2011 to December 2015, Mt. Etna was mainly characterized by cyclic eruptive behavior, with more than 40 lava fountains from the New South-East Crater. Using the RMS (root mean square) of the seismic signal recorded by stations close to the summit area, automatic recognition of the different states of volcanic activity (QUIET, PRE-FOUNTAIN, FOUNTAIN, POST-FOUNTAIN) has been applied for monitoring purposes. Since the values of the RMS time series calculated on the seismic signal are generated by a stochastic process, we can try to model the system generating its sampled values, assumed to be a Markov process, using Hidden Markov Models (HMMs). HMM analysis seeks to recover the sequence of hidden states from the observations. In our framework, the observations are characters generated by the Symbolic Aggregate approXimation (SAX) technique, which maps RMS time series values to symbols of a pre-defined alphabet. The main advantages of the proposed framework, based on HMMs and SAX, with respect to other automatic systems applied to seismic signals at Mt. Etna, are the use of multiple stations and of static thresholds to characterize the volcano states well. Its application to a wide seismic dataset of Etna shows that it is possible to infer the volcano states, and the experimental results show that, in most cases, lava fountains were detected in advance.
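
The SAX step described above is compact enough to sketch. The minimal version below (a 4-letter alphabet is assumed; the record does not state the alphabet size actually used) z-normalises a series, averages it into segments (PAA), and maps each segment mean to a symbol via breakpoints that make the symbols equiprobable under a standard normal distribution:

```python
import numpy as np

# Quartile breakpoints of N(0, 1): each of the 4 symbols is equiprobable.
BREAKPOINTS = np.array([-0.6745, 0.0, 0.6745])

def sax(series, word_len=8, alphabet="abcd"):
    """Symbolic Aggregate approXimation: z-normalise, reduce the series to
    word_len segment means (PAA), then map each mean to a symbol."""
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / x.std()
    paa = x.reshape(word_len, -1).mean(axis=1)  # requires len(x) % word_len == 0
    return "".join(alphabet[i] for i in np.searchsorted(BREAKPOINTS, paa))

# A rising ramp maps to a monotonically increasing SAX word.
word = sax(np.arange(64.0), word_len=8)
print(word)  # 'aabbccdd'
```

An HMM is then trained on such symbol sequences rather than on the raw RMS values.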

  5. Optimizing Functional Network Representation of Multivariate Time Series

    PubMed Central

    Zanin, Massimiliano; Sousa, Pedro; Papo, David; Bajo, Ricardo; García-Prieto, Juan; Pozo, Francisco del; Menasalvas, Ernestina; Boccaletti, Stefano

    2012-01-01

    By combining complex network theory and data mining techniques, we provide objective criteria for optimization of the functional network representation of generic multivariate time series. In particular, we propose a method for the principled selection of the threshold value for functional network reconstruction from raw data, and for proper identification of the network's indicators that unveil the most discriminative information on the system for classification purposes. We illustrate our method by analysing networks of functional brain activity of healthy subjects, and patients suffering from Mild Cognitive Impairment, an intermediate stage between the expected cognitive decline of normal aging and the more pronounced decline of dementia. We discuss extensions of the scope of the proposed methodology to network engineering purposes, and to other data mining tasks. PMID:22953051

  6. Optimizing Functional Network Representation of Multivariate Time Series

    NASA Astrophysics Data System (ADS)

    Zanin, Massimiliano; Sousa, Pedro; Papo, David; Bajo, Ricardo; García-Prieto, Juan; Pozo, Francisco Del; Menasalvas, Ernestina; Boccaletti, Stefano

    2012-09-01

    By combining complex network theory and data mining techniques, we provide objective criteria for optimization of the functional network representation of generic multivariate time series. In particular, we propose a method for the principled selection of the threshold value for functional network reconstruction from raw data, and for proper identification of the network's indicators that unveil the most discriminative information on the system for classification purposes. We illustrate our method by analysing networks of functional brain activity of healthy subjects, and patients suffering from Mild Cognitive Impairment, an intermediate stage between the expected cognitive decline of normal aging and the more pronounced decline of dementia. We discuss extensions of the scope of the proposed methodology to network engineering purposes, and to other data mining tasks.

  7. What's in a Name? The Incorrect Use of Case Series as a Study Design Label in Studies Involving Dogs and Cats.

    PubMed

    Sargeant, J M; O'Connor, A M; Cullen, J N; Makielski, K M; Jones-Bitton, A

    2017-07-01

    Study design labels are used to identify relevant literature to address specific clinical and research questions and to aid in evaluating the evidentiary value of research. Evidence from the human healthcare literature indicates that the label "case series" may be used inconsistently and inappropriately. Our primary objective was to determine the proportion of studies in the canine and feline veterinary literature labeled as case series that actually corresponded to descriptive cohort studies, population-based cohort studies, or other study designs. Our secondary objective was to identify the proportion of case series in which potentially inappropriate inferential statements were made. Descriptive evaluation of published literature. One hundred published studies (from 19 journals) labeled as case series. Studies were identified by a structured literature search, with random selection of 100 studies from the relevant citations. Two reviewers independently characterized each study, with disagreements resolved by consensus. Of the 100 studies, 16 were case series. The remaining studies were descriptive cohort studies (35), population-based cohort studies (36), or other observational or experimental study designs (13). Almost half (48.8%) of the case series or descriptive cohort studies, with no control group and no formal statistical analysis, included inferential statements about the efficacy of treatment or statistical significance of potential risk factors. Authors, peer-reviewers, and editors should carefully consider the design elements of a study to accurately identify and label the study design. Doing so will facilitate an understanding of the evidentiary value of the results. Copyright © 2017 The Authors. Journal of Veterinary Internal Medicine published by Wiley Periodicals, Inc. on behalf of the American College of Veterinary Internal Medicine.

  8. A framework to measure the value of public health services.

    PubMed

    Jacobson, Peter D; Neumann, Peter J

    2009-10-01

    To develop a framework that public health practitioners could use to measure the value of public health services. Primary data were collected from August 2006 through March 2007. We interviewed public health practitioners (n=46) in four states, leaders of national public health organizations, and academic researchers. Using a semi-structured interview protocol, we conducted a series of qualitative interviews to define the component parts of value for public health services and to identify the methodologies used to measure value and the data collected. The primary form of analysis is descriptive, synthesizing information across respondents as to how they measure the value of their services. Our interviews did not reveal a consensus on how to measure value or a specific framework for doing so. Nonetheless, the interviews identified some potential strategies, such as cost accounting and performance-based contracting mechanisms. The interviews noted implementation barriers, including limits to staff capacity and data availability. We developed a framework that considers four component elements to measure value: external factors that must be taken into account (i.e., mandates); key internal actions that a local health department must take (i.e., staff assessment); using appropriate quantitative measures; and communicating value to elected officials and the public.

  9. Modeling commodity salam contract between two parties for discrete and continuous time series

    NASA Astrophysics Data System (ADS)

    Hisham, Azie Farhani Badrol; Jaffar, Maheran Mohd

    2017-08-01

    For Islamic finance to remain competitive with conventional finance, new syariah-compliant products, such as Islamic derivatives, are needed to manage risk. However, under syariah principles and regulations, financial instruments must not conflict with five syariah elements: riba (interest paid), rishwah (corruption), gharar (uncertainty or unnecessary risk), maysir (speculation or gambling) and jahl (taking advantage of the counterparty's ignorance). This study proposes that a traditional Islamic contract, namely salam, can be built into an Islamic derivative product. Although many studies have discussed and proposed the implementation of the salam contract as an Islamic product, they focus mainly on qualitative and legal issues. Since quantitative studies of the salam contract are lacking, this study introduces mathematical models that can determine the appropriate salam price for a commodity salam contract between two parties. In modeling the commodity salam contract, this study modifies the existing conventional derivative model with adjustments to comply with syariah rules and regulations. The cost-of-carry model has been chosen as the foundation for developing the commodity salam model between two parties for discrete and continuous time series. However, the conventional time value of money results from the concept of interest, which is prohibited in Islam. Therefore, this study adopts the idea of the Islamic time value of money, known as positive time preference, in modeling the commodity salam contract between two parties for discrete and continuous time series.
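
The conventional cost-of-carry foundation mentioned above prices a forward as the spot compounded over the delivery period. The sketch below shows the standard discrete and continuous forms with invented numbers; in the authors' model, the rate would reflect the Islamic positive time preference rather than interest:

```python
import math

def forward_price_discrete(spot, rate, periods):
    """Cost-of-carry forward price, discrete compounding: F = S * (1 + r)^T."""
    return spot * (1.0 + rate) ** periods

def forward_price_continuous(spot, rate, t):
    """Cost-of-carry forward price, continuous compounding: F = S * e^(r*T)."""
    return spot * math.exp(rate * t)

# Illustrative numbers only: spot 100, 5% per-period rate, 2 periods.
print(round(forward_price_discrete(100.0, 0.05, 2), 2))    # 110.25
print(round(forward_price_continuous(100.0, 0.05, 2), 2))  # 110.52
```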

  10. Economic value of dengue vaccine in Thailand.

    PubMed

    Lee, Bruce Y; Connor, Diana L; Kitchen, Sarah B; Bacon, Kristina M; Shah, Mirat; Brown, Shawn T; Bailey, Rachel R; Laosiritaworn, Yongjua; Burke, Donald S; Cummings, Derek A T

    2011-05-01

    With several candidate dengue vaccines under development, this is an important time to help stakeholders (e.g., policy makers, scientists, clinicians, and manufacturers) better understand the potential economic value (cost-effectiveness) of a dengue vaccine, especially while vaccine characteristics and strategies might be readily altered. We developed a decision analytic Markov simulation model to evaluate the potential health and economic value of administering a dengue vaccine to an individual (≤ 1 year of age) in Thailand from the societal perspective. Sensitivity analyses evaluated the effects of varying various vaccine (e.g., cost, efficacy, side effect), epidemiological (dengue risk), and disease (treatment-seeking behavior) characteristics. A ≥ 50% efficacious vaccine was highly cost-effective [< 1× per capita gross domestic product (GDP) ($4,289)] up to a total vaccination cost of $60 and cost-effective [< 3× per capita GDP ($12,868)] up to a total vaccination cost of $200. When the total cost of the vaccine series was $1.50, many scenarios were cost saving.

  11. Economic Value of Dengue Vaccine in Thailand

    PubMed Central

    Lee, Bruce Y.; Connor, Diana L.; Kitchen, Sarah B.; Bacon, Kristina M.; Shah, Mirat; Brown, Shawn T.; Bailey, Rachel R.; Laosiritaworn, Yongjua; Burke, Donald S.; Cummings, Derek A. T.

    2011-01-01

    With several candidate dengue vaccines under development, this is an important time to help stakeholders (e.g., policy makers, scientists, clinicians, and manufacturers) better understand the potential economic value (cost-effectiveness) of a dengue vaccine, especially while vaccine characteristics and strategies might be readily altered. We developed a decision analytic Markov simulation model to evaluate the potential health and economic value of administering a dengue vaccine to an individual (≤ 1 year of age) in Thailand from the societal perspective. Sensitivity analyses evaluated the effects of varying various vaccine (e.g., cost, efficacy, side effect), epidemiological (dengue risk), and disease (treatment-seeking behavior) characteristics. A ≥ 50% efficacious vaccine was highly cost-effective [< 1× per capita gross domestic product (GDP) ($4,289)] up to a total vaccination cost of $60 and cost-effective [< 3× per capita GDP ($12,868)] up to a total vaccination cost of $200. When the total cost of the vaccine series was $1.50, many scenarios were cost saving. PMID:21540387

  12. Allan deviation analysis of financial return series

    NASA Astrophysics Data System (ADS)

    Hernández-Pérez, R.

    2012-05-01

    We perform a scaling analysis of the return series of different financial assets by applying the Allan deviation (ADEV), which is used in time and frequency metrology to quantitatively characterize the stability of frequency standards, since it has been shown to be a robust quantity for analyzing fluctuations of non-stationary time series over different observation intervals. The data used are daily opening-price series for assets from different markets over a time span of around ten years. We find that the ADEV results for the return series at short scales resemble those expected for an uncorrelated series, consistent with the efficient market hypothesis. On the other hand, the ADEV results for the absolute return series at short scales (the first one or two decades) decrease following an approximate scaling relation up to a point that differs for almost every asset, after which the ADEV deviates from scaling; this suggests that the presence of clustering, long-range dependence and non-stationarity signatures in the series drives the results for large observation intervals.
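
A non-overlapping Allan deviation, the quantity applied above, can be computed in a few lines: average the series in blocks of length m, then take the square root of half the mean squared difference between successive block averages. This is a generic sketch with invented data, not the authors' exact estimator (which may be the overlapping variant):

```python
import numpy as np

def allan_deviation(x, m):
    """Non-overlapping Allan deviation of x at averaging factor m:
    sqrt(0.5 * mean of squared successive block-average differences)."""
    x = np.asarray(x, dtype=float)
    n = (len(x) // m) * m                        # drop the ragged tail
    block_means = x[:n].reshape(-1, m).mean(axis=1)
    d = np.diff(block_means)
    return np.sqrt(0.5 * np.mean(d ** 2))

# For white noise, ADEV falls roughly like 1/sqrt(m) as m grows -- the
# benchmark against which correlated or non-stationary series deviate.
rng = np.random.default_rng(1)
w = rng.standard_normal(10_000)
for m in (1, 4, 16, 64):
    print(m, allan_deviation(w, m))
```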

  13. Increasing the temporal resolution of direct normal solar irradiance forecasted series

    NASA Astrophysics Data System (ADS)

    Fernández-Peruchena, Carlos M.; Gastón, Martin; Schroedter-Homscheidt, Marion; Marco, Isabel Martínez; Casado-Rubio, José L.; García-Moya, José Antonio

    2017-06-01

    A detailed knowledge of the solar resource is a critical point in the design and control of Concentrating Solar Power (CSP) plants. In particular, accurate forecasting of solar irradiance is essential for the efficient operation of solar thermal power plants, the management of energy markets, and the widespread implementation of this technology. Numerical weather prediction (NWP) models are commonly used for solar radiation forecasting. In the ECMWF deterministic forecasting system, all forecast parameters are commercially available worldwide at 3-hourly intervals. Unfortunately, as Direct Normal solar Irradiance (DNI) exhibits great variability due to the dynamic effects of passing clouds, a 3-h time resolution is insufficient for accurate simulations of CSP plants, whose response to DNI is nonlinear and governed by various thermal inertias and complex response characteristics. DNI series of hourly or sub-hourly resolution are normally used for accurate modeling and analysis of transient processes in CSP technologies. In this context, the objective of this study is to propose a methodology for generating synthetic DNI time series at 1-h (or higher) temporal resolution from 3-h DNI series. The methodology is based on patterns defined with the help of the clear-sky envelope approach together with a forecast of the maximum DNI value, and it has been validated against high-quality measured DNI data.

  14. A workshop series using peer-grading to build drug information, writing, critical-thinking, and constructive feedback skills.

    PubMed

    Davis, Lindsay E

    2014-12-15

    To utilize a skills-based workshop series to develop pharmacy students' drug information, writing, critical-thinking, and evaluation skills during the final didactic year of training. A workshop series was implemented to focus on written (researched) responses to drug information questions. These workshops used blinded peer-grading to facilitate timely feedback and strengthen assessment skills. Each workshop was aligned to the didactic coursework content to complement and extend learning, while bridging and advancing research, writing, and critical thinking skills. Attainment of knowledge and skills was assessed by rubric-facilitated peer grades, faculty member grading, peer critique, and faculty member-guided discussion of drug information responses. Annual instructor and course evaluations consistently revealed favorable student feedback regarding workshop value. A drug information workshop series using peer-grading as the primary assessment tool was successfully implemented and was well received by pharmacy students.

  15. Predictive models of alcohol use based on attitudes and individual values.

    PubMed

    García del Castillo Rodríguez, José A; López-Sánchez, Carmen; Quiles Soler, M Carmen; García del Castillo-López, Alvaro; Gázquez Pertusa, Mónica; Marzo Campos, Juan Carlos; Inglés, Candido J

    2013-01-01

    Two predictive models are developed in this article: the first is designed to predict people's attitudes to alcoholic drinks, while the second sets out to predict the use of alcohol in relation to selected individual values. University students (N = 1,500) were recruited through stratified sampling based on sex and academic discipline. The questionnaire used obtained information on participants' alcohol use, attitudes and personal values. The results show that the attitudes model correctly classifies 76.3% of cases. Likewise, the model for level of alcohol use correctly classifies 82% of cases. According to our results, we can conclude that there are a series of individual values that influence drinking and attitudes to alcohol use, which therefore provides us with a potentially powerful instrument for developing preventive intervention programs.

  16. Stratigraphy and geochemical characterization of the Oligocene-Miocene Maikop series: Implications for the paleogeography of Eastern Azerbaijan

    NASA Astrophysics Data System (ADS)

    Hudson, Samuel M.; Johnson, Cari L.; Efendiyeva, Malakhat A.; Rowe, Harold D.; Feyzullayev, Akper A.; Aliyev, Chingiz S.

    2008-04-01

    The Oligocene-Miocene Maikop Series is a world-class source rock responsible for much of the oil and gas found in the South Caspian Basin. It is composed of up to 3 km of marine mudstone, and contains a nearly continuous record of deposition during progressive tectonic closure of the basin as the Arabian Peninsula converged northward into Eurasia. Historically, the stratigraphy of this interval has been difficult to define due to the homogenous nature of the fine-grained, clay-dominated strata. Outcrop exposures in eastern Azerbaijan allow direct observation and detailed sampling of the interval, yielding a more comprehensive stratigraphic context and a more advanced understanding of syndepositional conditions in the eastern Paratethys Sea. Specifically, the present investigation reveals that coupling field-based stratigraphic characterization with geochemical analyses (e.g., bulk elemental geochemistry, Rock-Eval pyrolysis, bulk stable isotope geochemistry) yields a more robust understanding of internal variations within the Maikop Series. Samples from seven sections located within the Shemakha-Gobustan oil province reveal consistent stratigraphic and spatial geochemical trends. It is proposed that the Maikop Series be divided into three members based on these data along with the lithostratigraphic and biostratigraphic data reported herein. When comparing Rupelian (Early Oligocene) and Chattian (Late Oligocene) strata, the Rupelian-age strata commonly possess higher TOC values, more negative δ15Ntot values, more positive δ13Corg values, and higher radioactivity relative to Chattian-age rocks. The trace metals Mo and V (normalized to Al) are positively correlated with TOC, with maximum values occurring at the Rupelian-Chattian boundary and overall higher average values in the Rupelian. Across the Oligocene-Miocene boundary, a slight drop in the V/Al and Mo/Al ratios is observed, along with drops in %S and TOC. These results indicate that geochemical signatures of the

  17. [Infinite optical thickness of dentine porcelain of IPS E.max A color series].

    PubMed

    Sun, Ting; Shao, Long-quan; Yi, Yuan-fu; Deng, Bin; Wen, Ning; Zhang, Wei-wei

    2011-02-01

    To determine the infinite optical thickness of the dentine porcelain of the IPS E.max A color series, cylindrical dentine porcelain specimens of the series were prepared with a diameter of 13 mm and thicknesses of 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, and 5.0 mm. The chromatic values of all specimens were determined with a CM-5 spectrometer against standard black and white backgrounds, and the colour difference (deltaE) was calculated by regression equation. The infinite optical thickness of the dentine porcelain of the IPS E.max A color series ranged from 2.341 to 3.333 mm for a deltaE of 1.0, and from 2.064 to 2.904 mm for a deltaE of 1.5. As chromaticity or thickness increased, the influence of the background color decreased, and the color of the specimens gradually approached the intrinsic color. The thickness of the backing dentine porcelain must exceed its infinite optical thickness to represent the intrinsic color and avoid the influence of the extrinsic color.
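
The record does not state which colour-difference formula was used; assuming the common CIE76 definition, deltaE is simply the Euclidean distance between two CIELAB coordinates. The L*a*b* readings below are invented for illustration:

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 colour difference: Euclidean distance in CIELAB space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Hypothetical L*a*b* readings of one shade measured over two backgrounds;
# a deltaE around 1.0-1.5 is the perceptibility range the study works with.
print(round(delta_e_ab((72.0, 1.5, 18.0), (71.4, 1.3, 17.2)), 3))  # 1.02
```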

  18. Integrative missing value estimation for microarray data.

    PubMed

    Hu, Jianjun; Li, Haifeng; Waterman, Michael S; Zhou, Xianghong Jasmine

    2006-10-12

    Missing value estimation is an important preprocessing step in microarray analysis. Although several methods have been developed to solve this problem, their performance is unsatisfactory for datasets with high rates of missing data, high measurement noise, or limited numbers of samples. In fact, more than 80% of the time-series datasets in the Stanford Microarray Database contain fewer than eight samples. We present the integrative Missing Value Estimation method (iMISS), which incorporates information from multiple reference microarray datasets to improve missing value estimation. For each gene with missing data, we derive a consistent neighbor-gene list by taking the reference datasets into consideration. To determine whether the given reference datasets are sufficiently informative for integration, we use a submatrix imputation approach. Our experiments showed that iMISS can significantly and consistently improve the accuracy of the state-of-the-art Local Least Squares (LLS) imputation algorithm by up to 15% in our benchmark tests. We demonstrated that the order-statistics-based integrative imputation algorithms achieve significant improvements over state-of-the-art missing value estimation approaches such as LLS and are especially well suited to imputing microarray datasets with a limited number of samples, high rates of missing data, or very noisy measurements. With the rapid accumulation of microarray datasets, the performance of our approach can be further improved by incorporating larger and more appropriate reference datasets.
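    The neighbor-gene idea underlying LLS-style imputation can be illustrated with a generic k-nearest-rows scheme: for each gene (row) with missing entries, find the most similar complete genes over the mutually observed columns and fill the gaps from them. This is a simplified sketch in the spirit of neighbor-based imputation, not the published iMISS or LLS algorithm:

    ```python
    import numpy as np

    def knn_impute(X, k=3):
        """Impute NaNs in each row from the k nearest rows (RMS distance over
        mutually observed columns). A generic neighbor-gene scheme for
        illustration, not the published iMISS/LLS algorithm."""
        X = np.asarray(X, dtype=float)
        filled = X.copy()
        for i in range(X.shape[0]):
            miss = np.isnan(X[i])
            if not miss.any():
                continue
            obs = ~miss
            dists = []
            for j in range(X.shape[0]):
                # Skip self and rows that are also missing at the target positions.
                if j == i or np.isnan(X[j][miss]).any():
                    continue
                common = obs & ~np.isnan(X[j])
                if common.sum() == 0:
                    continue
                d = np.sqrt(np.mean((X[i, common] - X[j, common]) ** 2))
                dists.append((d, j))
            dists.sort()
            neighbors = [j for _, j in dists[:k]]
            if neighbors:
                filled[i, miss] = np.mean(X[neighbors][:, miss], axis=0)
        return filled
    ```

    iMISS extends this idea by drawing the neighbor list from multiple reference datasets rather than the target dataset alone.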

  19. 17 CFR 270.18f-2 - Fair and equitable treatment for holders of each class or series of stock of series investment...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... for holders of each class or series of stock of series investment companies. 270.18f-2 Section 270.18f... or series of stock of series investment companies. (a) For purposes of this § 270.18f-2 a series...(f)(2) of the Act, issues two or more classes or series of preferred or special stock each of which...

  20. 17 CFR 270.18f-2 - Fair and equitable treatment for holders of each class or series of stock of series investment...

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... for holders of each class or series of stock of series investment companies. 270.18f-2 Section 270.18f... or series of stock of series investment companies. (a) For purposes of this § 270.18f-2 a series...(f)(2) of the Act, issues two or more classes or series of preferred or special stock each of which...

  1. 17 CFR 270.18f-2 - Fair and equitable treatment for holders of each class or series of stock of series investment...

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... for holders of each class or series of stock of series investment companies. 270.18f-2 Section 270.18f... or series of stock of series investment companies. (a) For purposes of this § 270.18f-2 a series...(f)(2) of the Act, issues two or more classes or series of preferred or special stock each of which...

  2. 17 CFR 270.18f-2 - Fair and equitable treatment for holders of each class or series of stock of series investment...

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... for holders of each class or series of stock of series investment companies. 270.18f-2 Section 270.18f... or series of stock of series investment companies. (a) For purposes of this § 270.18f-2 a series...(f)(2) of the Act, issues two or more classes or series of preferred or special stock each of which...

  3. 17 CFR 270.18f-2 - Fair and equitable treatment for holders of each class or series of stock of series investment...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... for holders of each class or series of stock of series investment companies. 270.18f-2 Section 270.18f... or series of stock of series investment companies. (a) For purposes of this § 270.18f-2 a series...(f)(2) of the Act, issues two or more classes or series of preferred or special stock each of which...

  4. Leadership for Social Justice and Democracy in Our Schools. The Soul of Educational Leadership Series. Volume 9

    ERIC Educational Resources Information Center

    Blankstein, Alan M.; Houston, Paul D.

    2011-01-01

    Research shows that students' sense of belonging in their school communities is critically linked to academic achievement. This ninth and final book in "The Soul of Educational Leadership" series offers practical strategies for promoting socially responsible school cultures that foster greater student engagement and democratic values. A joint…

  5. Complex Landscape Terms in Seri

    ERIC Educational Resources Information Center

    O'Meara, Carolyn; Bohnemeyer, Jurgen

    2008-01-01

    The nominal lexicon of Seri is characterized by a prevalence of analytical descriptive terms. We explore the consequences of this typological trait in the landscape domain. The complex landscape terms of Seri classify geographic entities in terms of their material make-up and spatial properties such as shape, orientation, and mereological…

  6. Relative inactivity during the last 140,000 years of a portion of the La Paz fault, southern Baja California Sur, Mexico

    USGS Publications Warehouse

    Szabo, B. J.; Hausback, B.P.; Smith, Joe T.

    1990-01-01

    Uranium-series dating of corals overlying the undeformed Punta Coyote gravels indicates that the underlying La Paz fault zone has been relatively inactive in this part of the Baja California peninsula during the last 140,000 years, and possibly for a significantly longer period. However, Holocene seismic activities along extensions of the fault zone north of Cabo San Lucas suggest potential seismic hazards for the city of La Paz (population 200,000), which lies about 6 km from the fault. © 1990 Springer-Verlag New York Inc.

  7. Geometric Methods for ATR: Shape Spaces, Metrics, Object/Image Relations, and Shapelets

    DTIC Science & Technology

    2007-09-30

    our techniques as a tool for adding depth information to existing video content. In addition, we learned that researchers at the University of... and only if K_{r-4} ⊂ L_{r-3} ⊂ H_{r-1} ⊂ C_r. This fact and the incidence relations given in Theorem I, §5, Chapter VII of Hodge and Pedoe [4] give us our... Springer-Verlag, 1992. 4. W.V.D. Hodge and D. Pedoe, Methods of Algebraic Geometry, nos. 1, 2, and 3, in Mathematical Library Series, Cambridge

  8. A systematic review of teleophthalmological studies in Europe

    PubMed Central

    Labiris, Georgios; Panagiotopoulou, Eirini-Kanella; Kozobolis, Vassilios P.

    2018-01-01

    A systematic review of the recent literature on a series of ocular diseases addressed in European telemedicine projects was performed based on the PubMed, Google Scholar and Springer databases in June 2017. The literature review returned 44 eligible studies; among the conditions covered were emergency ophthalmology, diabetic retinopathy, glaucoma, age-related macular disease, cataract and retinopathy of prematurity. The majority of studies indicate teleophthalmology is a valid, reliable and cost-efficient method of care provision for ophthalmology patients that delivers outcomes comparable to traditional examination methods. PMID:29487825

  9. Multisource image fusion method using support value transform.

    PubMed

    Zheng, Sheng; Shi, Wen-Zhong; Liu, Jian; Zhu, Guang-Xi; Tian, Jin-Wen

    2007-07-01

    With the development of numerous imaging sensors, many images can be captured simultaneously by various sensors. However, in many scenarios no single sensor gives the complete picture. Image fusion is an important approach to this problem: it produces a single image that preserves all relevant information from a set of different sensors. In this paper, we propose a new image fusion method using the support value transform, which uses support values to represent the salient features of an image. This is based on the fact that, in support vector machines (SVMs), data with larger support values have a physical meaning in the sense that they indicate the relative importance of the data points in the SVM model. The mapped least squares SVM (mapped LS-SVM) is used to efficiently compute the support values of an image. The support value analysis is developed by using a series of multiscale support value filters, which are obtained by filling zeros into the basic support value filter deduced from the mapped LS-SVM to match the resolution of the desired level. Compared with widely used image fusion methods such as the Laplacian pyramid and discrete wavelet transform, the proposed method is an undecimated transform-based approach. Fusion experiments are undertaken on multisource images. The results demonstrate that the proposed approach is effective and superior to conventional image fusion methods in terms of the pertinent quantitative fusion evaluation indexes, such as quality of visual information (Q(AB/F)), mutual information, etc.
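    The "filling zeros" construction of the multiscale filters is the familiar à-trous-style dilation: inserting 2^level − 1 zeros between the taps of a base kernel so the filter matches a coarser resolution without decimation. A minimal sketch; the base kernel below is an illustrative B3-spline kernel, not the support value filter deduced from the mapped LS-SVM:

    ```python
    import numpy as np

    def dilate_filter(h, level):
        """Insert 2**level - 1 zeros between the taps of 1-D kernel h."""
        step = 2 ** level
        out = np.zeros((len(h) - 1) * step + 1)
        out[::step] = h
        return out

    # Illustrative base kernel (B3 spline), standing in for the basic
    # support value filter of the paper:
    base = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    print(dilate_filter(base, 1))  # taps interleaved with single zeros
    ```

    Because the dilated filters are applied without downsampling, the resulting transform is undecimated, which is the property the abstract contrasts with pyramid and wavelet fusion.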

  10. Uranium series isotopes concentration in sediments at San Marcos and Luis L. Leon reservoirs, Chihuahua, Mexico

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Méndez-García, C.; Montero-Cabrera, M. E., E-mail: elena.montero@cimav.edu.mx; Renteria-Villalobos, M.

    2008-01-01

    Spatial and temporal distributions of radioisotope concentrations were determined in sediments near the surface and in core samples extracted from two reservoirs located in an arid region close to Chihuahua City, Mexico. At San Marcos reservoir one core was studied, while from Luis L. Leon reservoir one core from the entrance and another close to the wall were investigated. ²³²Th-series, ²³⁸U-series, ⁴⁰K and ¹³⁷Cs activity concentrations (AC, Bq kg⁻¹) were determined by gamma spectrometry with a high-purity Ge detector. ²³⁸U and ²³⁴U ACs were obtained by liquid scintillation and alpha spectrometry with a surface barrier detector. Dating of core sediments was performed by applying the CRS method to ²¹⁰Pb activities. Results were verified by ¹³⁷Cs AC. The resulting activity concentrations were compared between corresponding surface and core sediments. High ²³⁸U-series AC values were found in sediments from San Marcos reservoir, because this site is located close to the Victorino uranium deposit. Low AC values found in Luis L. Leon reservoir suggest that the uranium present in the source of the Sacramento-Chuviscar Rivers is not transported up to the Conchos River. Activity ratios (AR) ²³⁴U/²³⁸U and ²³⁸U/²²⁶Ra in sediments have values between 0.9 and 1.2, showing behavior close to radioactive equilibrium in the entire basin. The ²³²Th/²³⁸U and ²²⁸Ra/²²⁶Ra ARs are witnesses to the different geological origins of sediments from the San Marcos and Luis L. Leon reservoirs.

  11. 31 CFR 359.25 - What are the denominations and prices of definitive Series I savings bonds?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 31 Money and Finance:Treasury 2 2011-07-01 2011-07-01 false What are the denominations and prices of definitive Series I savings bonds? 359.25 Section 359.25 Money and Finance: Treasury Regulations... sold at par; that is, the purchase price is the same as the denomination (face value). [67 FR 64278...

  12. Decoding divergent series in nonparaxial optics.

    PubMed

    Borghi, Riccardo; Gori, Franco; Guattari, Giorgio; Santarsiero, Massimo

    2011-03-15

    A theoretical analysis aimed at investigating the divergent character of perturbative series involved in the study of free-space nonparaxial propagation of vectorial optical beams is proposed. Our analysis predicts a factorial divergence for such series and provides a theoretical framework within which the results of recently published numerical experiments concerning nonparaxial propagation of vectorial Gaussian beams find a meaningful interpretation in terms of the decoding operated on such series by the Weniger transformation.

  13. Predictability of monthly temperature and precipitation using automatic time series forecasting methods

    NASA Astrophysics Data System (ADS)

    Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris

    2018-02-01

    We investigate the predictability of monthly temperature and precipitation by applying automatic univariate time series forecasting methods to a sample of 985 40-year-long monthly temperature and 1552 40-year-long monthly precipitation time series. The methods include a naïve one based on the monthly values of the last year, as well as the random walk (with drift), AutoRegressive Fractionally Integrated Moving Average (ARFIMA), exponential smoothing state-space model with Box-Cox transformation, ARMA errors, Trend and Seasonal components (BATS), simple exponential smoothing, Theta and Prophet methods. Prophet is a recently introduced model inspired by the nature of time series forecasted at Facebook and has not been applied to hydrometeorological time series before, while the use of random walk, BATS, simple exponential smoothing and Theta is rare in hydrology. The methods are tested in performing multi-step ahead forecasts for the last 48 months of the data. We further investigate how different choices of handling the seasonality and non-normality affect the performance of the models. The results indicate that: (a) all the examined methods apart from the naïve and random walk ones are accurate enough to be used in long-term applications; (b) monthly temperature and precipitation can be forecasted to a level of accuracy which can barely be improved using other methods; (c) the externally applied classical seasonal decomposition results mostly in better forecasts compared to the automatic seasonal decomposition used by the BATS and Prophet methods; and (d) Prophet is competitive, especially when it is combined with externally applied classical seasonal decomposition.
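    The naïve benchmark described in this record, repeating the last observed year of monthly values over the forecast horizon, can be sketched as follows. The RMSE helper is an illustrative scoring choice for comparing methods, not necessarily the error measure used in the paper:

    ```python
    import numpy as np

    def naive_monthly_forecast(series, horizon):
        """Forecast by repeating the last 12 observed monthly values.
        `series` is a 1-D sequence of monthly observations."""
        last_year = np.asarray(series[-12:], dtype=float)
        reps = int(np.ceil(horizon / 12))
        return np.tile(last_year, reps)[:horizon]

    def rmse(actual, predicted):
        actual = np.asarray(actual, dtype=float)
        predicted = np.asarray(predicted, dtype=float)
        return float(np.sqrt(np.mean((actual - predicted) ** 2)))
    ```

    In the study's setup this baseline would produce the 48-month-ahead forecast against which ARFIMA, BATS, Theta, Prophet, and the other automatic methods are judged.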

  14. Winter Video Series Coming in January | Poster

    Cancer.gov

    The Scientific Library’s annual Summer Video Series was so successful that it will be offering a new Winter Video Series beginning in January. For this inaugural event, the staff is showing the eight-part series from National Geographic titled “American Genius.” 

  15. The removal of ammonia from sanitary landfill leachate using a series of shallow waste stabilization ponds.

    PubMed

    Leite, V D; Pearson, H W; de Sousa, J T; Lopes, W S; de Luna, M L D

    2011-01-01

    This study evaluated the efficiency of a shallow (0.5 m deep) waste stabilization pond series in removing high concentrations of ammonia from sanitary landfill leachate. The pond system was located at EXTRABES, Campina Grande, Paraiba, Northeast Brazil. The pond series was fed with sanitary landfill leachate transported by road tanker to the experimental site from the sanitary landfill of the City of Joao Pessoa, Paraiba. The ammoniacal-N surface loading on the first pond of the series was equivalent to 364 kg ha(-1) d(-1) and the COD surface loading equivalent to 3,690 kg ha(-1) d(-1). The maximum mean ammonia removal efficiency was 99.5%, achieved by the third pond in the series, which had an effluent concentration of 5.3 mg L(-1) ammoniacal-N for an accumulative HRT of 39.5 days. The removal process was mainly attributed to ammonia volatilization (stripping) from the pond surfaces as a result of high surface pH values and water temperatures of 22-26°C. Shallow pond systems would appear to be a promising technology for stripping ammonia from landfill leachate under tropical conditions.

  16. Transformation between surface spherical harmonic expansion of arbitrary high degree and order and double Fourier series on sphere

    NASA Astrophysics Data System (ADS)

    Fukushima, Toshio

    2018-02-01

    In order to accelerate the spherical harmonic synthesis and/or analysis of an arbitrary function on the unit sphere, we developed a pair of procedures to transform between a truncated spherical harmonic expansion and the corresponding two-dimensional Fourier series. First, we obtained an analytic expression for the sine/cosine series coefficients of the 4π fully normalized associated Legendre function in terms of the rectangle values of the Wigner d function. Then, we elaborated the existing method to transform the coefficients of the surface spherical harmonic expansion to those of the double Fourier series so as to be applicable to arbitrarily high degree and order. Next, we created a new method to inversely transform a given double Fourier series to the corresponding surface spherical harmonic expansion. The key of the new method is a couple of new recurrence formulas to compute the inverse transformation coefficients: a decreasing-order, fixed-degree, fixed-wavenumber three-term formula for general terms, and an increasing-degree-and-order, fixed-wavenumber two-term formula for diagonal terms. Meanwhile, the two seed values are prepared analytically. Both the forward and inverse transformation procedures are confirmed to be sufficiently accurate and applicable to extremely high degrees, orders, and wavenumbers, as high as 2³⁰ ≈ 10⁹. The developed procedures will be useful not only in the synthesis and analysis of spherical harmonic expansions of arbitrarily high degree and order, but also in the evaluation of the derivatives and integrals of such expansions.

  17. Characterizability of metabolic pathway systems from time series data.

    PubMed

    Voit, Eberhard O

    2013-12-01

    Over the past decade, the biomathematical community has devoted substantial effort to the complicated challenge of estimating parameter values for biological systems models. An even more difficult issue is the characterization of functional forms for the processes that govern these systems. Most parameter estimation approaches tacitly assume that these forms are known or can be assumed with some validity. However, this assumption is not always true. The recently proposed method of Dynamic Flux Estimation (DFE) addresses this problem in a genuinely novel fashion for metabolic pathway systems. Specifically, DFE allows the characterization of fluxes within such systems through an analysis of metabolic time series data. Its main drawback is the fact that DFE can only directly be applied if the pathway system contains as many metabolites as unknown fluxes. This situation is unfortunately rare. To overcome this roadblock, earlier work in this field had proposed strategies for augmenting the set of unknown fluxes with independent kinetic information, which however is not always available. Employing Moore-Penrose pseudo-inverse methods of linear algebra, the present article discusses an approach for characterizing fluxes from metabolic time series data that is applicable even if the pathway system is underdetermined and contains more fluxes than metabolites. Intriguingly, this approach is independent of a specific modeling framework and unaffected by noise in the experimental time series data. The results reveal whether any fluxes may be characterized and, if so, which subset is characterizable. They also help with the identification of fluxes that, if they could be determined independently, would allow the application of DFE. Copyright © 2013 Elsevier Inc. All rights reserved.
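    The core linear-algebra step can be illustrated schematically: at each time point, fluxes v satisfy N·v = dX/dt, where N is the stoichiometric matrix and dX/dt is estimated from smoothed metabolite time series. When the system is underdetermined (more fluxes than metabolites), the Moore-Penrose pseudoinverse yields the minimum-norm solution among all consistent flux vectors. The toy pathway below is an assumption for illustration, not a system from the paper:

    ```python
    import numpy as np

    # Toy linear pathway: 2 metabolites, 3 unknown fluxes (underdetermined).
    N = np.array([[ 1.0, -1.0,  0.0],   # dX1/dt = v1 - v2
                  [ 0.0,  1.0, -1.0]])  # dX2/dt = v2 - v3

    # Slopes at one time point, as would be estimated from smoothed data:
    dXdt = np.array([0.2, -0.1])

    # Minimum-norm flux estimate among the infinitely many consistent ones:
    v = np.linalg.pinv(N) @ dXdt
    print(N @ v)  # reproduces dXdt
    ```

    The article's analysis goes further, asking which individual fluxes (or subsets) are uniquely characterizable despite this non-uniqueness; the pseudoinverse step above is only the entry point.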

  18. Stock price forecasting based on time series analysis

    NASA Astrophysics Data System (ADS)

    Chi, Wan Le

    2018-05-01

    Using historical stock price data, a sequence model can be set up to explain the intrinsic relationships in the data, and the future stock price can then be forecasted. The models used are the autoregressive model, the moving-average model, and the autoregressive moving-average (ARMA) model. A unit root test was applied to judge whether the original data sequence was stationary. A non-stationary original sequence required further processing by first-order differencing, after which the stationarity of the differenced sequence was re-inspected; if it was still non-stationary, second-order differencing was carried out. Autocorrelation and partial autocorrelation diagrams were used to estimate the parameters of the identified ARMA model, including the model coefficients and model order. Finally, the model was used to forecast the Shanghai Composite Index daily closing price. Results showed that the non-stationary original data series became stationary after the second-order difference, and the forecast values of the Shanghai Composite Index daily closing price were close to the actual values, indicating that the ARMA model in the paper achieved a certain accuracy.
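    The workflow above (difference until stationary, then fit an autoregression) can be sketched with a numpy-only AR(1) fitted by least squares. In practice one would use a library such as statsmodels' ARIMA; this hand-rolled version is only an illustration of the mechanics:

    ```python
    import numpy as np

    def difference(x, order=1):
        """Apply `order` rounds of first differencing."""
        x = np.asarray(x, dtype=float)
        for _ in range(order):
            x = np.diff(x)
        return x

    def fit_ar1(x):
        """Least-squares AR(1): x[t] = c + phi * x[t-1] + e[t]."""
        x = np.asarray(x, dtype=float)
        y, lagged = x[1:], x[:-1]
        A = np.column_stack([np.ones_like(lagged), lagged])
        (c, phi), *_ = np.linalg.lstsq(A, y, rcond=None)
        return c, phi

    def forecast_ar1(last, c, phi, steps):
        """Iterate the fitted recursion forward `steps` times."""
        out = []
        for _ in range(steps):
            last = c + phi * last
            out.append(last)
        return out
    ```

    Model identification (choosing the AR and MA orders from the autocorrelation and partial autocorrelation diagrams) is the step this sketch hard-codes by fixing order 1.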

  19. The natural neighbor series manuals and source codes

    NASA Astrophysics Data System (ADS)

    Watson, Dave

    1999-05-01

    This software series is concerned with reconstruction of spatial functions by interpolating a set of discrete observations having two or three independent variables. There are three components in this series: (1) nngridr: an implementation of natural neighbor interpolation, 1994, (2) modemap: an implementation of natural neighbor interpolation on the sphere, 1998 and (3) orebody: an implementation of natural neighbor isosurface generation (publication incomplete). Interpolation is important to geologists because it can offer graphical insights into significant geological structure and behavior, which, although inherent in the data, may not be otherwise apparent. It also is the first step in numerical integration, which provides a primary avenue to detailed quantification of the observed spatial function. Interpolation is implemented by selecting a surface-generating rule that controls the form of a 'bridge' built across the interstices between adjacent observations. The cataloging and classification of the many such rules that have been reported is a subject in itself (Watson, 1992), and the merits of various approaches have been debated at length. However, for practical purposes, interpolation methods are usually judged on how satisfactorily they handle problematic data sets. Sparse scattered data or traverse data, especially if the functional values are highly variable, generally tests interpolation methods most severely; but one method, natural neighbor interpolation, usually does produce preferable results for such data.

  20. Constructing Weyl group multiple Dirichlet series

    NASA Astrophysics Data System (ADS)

    Chinta, Gautam; Gunnells, Paul E.

    2010-01-01

    Let Φ be a reduced root system of rank r. A Weyl group multiple Dirichlet series for Φ is a Dirichlet series in r complex variables s₁, …, s_r, initially converging for Re(s_i) sufficiently large, that has meromorphic continuation to ℂ^r and satisfies functional equations under the transformations of ℂ^r corresponding to the Weyl group of Φ. A heuristic definition of such a series was given by Brubaker, Bump, Chinta, Friedberg, and Hoffstein, and they have been investigated in certain special cases by others. In this paper we generalize results by Chinta and Gunnells to construct Weyl group multiple Dirichlet series by a uniform method and show in all cases that they have the expected properties.

  1. EVMDD-Based Analysis and Diagnosis Methods of Multi-State Systems with Multi-State Components

    DTIC Science & Technology

    2014-01-01

    Springer-Verlag New York Inc., 2001. [7] T. Kam, T. Villa, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "Multi-valued decision diagrams: Theory and... Decision Diagram Techniques for Micro- and Nanoelectronic Design, CRC Press, Taylor & Francis Group, 2006. [22] X. Zang, D. Wang, H. Sun, and K. S. Trivedi

  2. Stackable Credentials: Do They Have Labor Market Value? CCRC Working Paper No. 97

    ERIC Educational Resources Information Center

    Bailey, Thomas; Belfield, Clive R.

    2017-01-01

    Stackable credentials--sequential postsecondary awards that allow individuals to progress on a career path--have been suggested as a way to enhance the labor market prospects of middle-skill workers. Yet, thus far, little evidence has been provided on the economic value of these credentials. Here, we report a series of estimates on the association…

  3. Assiniboine Series.

    ERIC Educational Resources Information Center

    Allen, Minerva

    This series of illustrated booklets presents 13 Indian stories in a bilingual format of English and Assiniboine, an Indian tribal language. Written at the first-grade level, the stories have the following titles: (1) "Orange Tree in Lodgepole"; (2) "Pretty Flower"; (3) "Inktomi and the Rock"; (4) "Inktomi and the…

  4. Revision of Primary Series Maps

    USGS Publications Warehouse

    ,

    2000-01-01

    In 1992, the U.S. Geological Survey (USGS) completed a 50-year effort to provide primary series map coverage of the United States. Many of these maps now need to be updated to reflect the construction of new roads and highways and other changes that have taken place over time. The USGS has formulated a graphic revision plan to help keep the primary series maps current. Primary series maps include 1:20,000-scale quadrangles of Puerto Rico, 1:24,000- or 1:25,000-scale quadrangles of the conterminous United States, Hawaii, and U.S. Territories, and 1:63,360-scale quadrangles of Alaska. The revision of primary series maps from new collection sources is accomplished using a variety of processes. The raster revision process combines the scanned content of paper maps with raster updating technologies. The vector revision process involves the automated plotting of updated vector files. Traditional processes use analog stereoplotters and manual scribing instruments on specially coated map separates. The ability to select from or combine these processes increases the efficiency of the National Mapping Division map revision program.

  5. Imaging of Lipids and Metabolites Using Nanospray Desorption Electrospray Ionization Mass Spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lanekoff, Ingela; Laskin, Julia

    In recent years, mass spectrometry imaging (MSI) has emerged as a foundational technique in metabolomics and drug screening, providing a deeper understanding of complex mechanistic pathways within biochemical systems and biological organisms. We have been invited to contribute a chapter to a new Springer series volume, entitled "Mass Spectrometry Imaging of Small Molecules". The volume is planned for the highly successful lab protocol series Methods in Molecular Biology, published by Humana Press, USA. The volume is aimed to equip readers with step-by-step mass spectrometric imaging protocols and bring rapidly maturing methods of MS imaging to life science researchers. The chapter will provide a detailed protocol of ambient MSI by use of nanospray desorption electrospray ionization.

  6. Conditional Spectral Analysis of Replicated Multiple Time Series with Application to Nocturnal Physiology.

    PubMed

    Krafty, Robert T; Rosen, Ori; Stoffer, David S; Buysse, Daniel J; Hall, Martica H

    2017-01-01

    This article considers the problem of analyzing associations between power spectra of multiple time series and cross-sectional outcomes when data are observed from multiple subjects. The motivating application comes from sleep medicine, where researchers are able to non-invasively record physiological time series signals during sleep. The frequency patterns of these signals, which can be quantified through the power spectrum, contain interpretable information about biological processes. An important problem in sleep research is drawing connections between power spectra of time series signals and clinical characteristics; these connections are key to understanding biological pathways through which sleep affects, and can be treated to improve, health. Such analyses are challenging as they must overcome the complicated structure of a power spectrum from multiple time series as a complex positive-definite matrix-valued function. This article proposes a new approach to such analyses based on a tensor-product spline model of Cholesky components of outcome-dependent power spectra. The approach flexibly models power spectra as nonparametric functions of frequency and outcome while preserving geometric constraints. Formulated in a fully Bayesian framework, a Whittle likelihood based Markov chain Monte Carlo (MCMC) algorithm is developed for automated model fitting and for conducting inference on associations between outcomes and spectral measures. The method is used to analyze data from a study of sleep in older adults and uncovers new insights into how stress and arousal are connected to the amount of time one spends in bed.

  7. Analysis of financial time series using multiscale entropy based on skewness and kurtosis

    NASA Astrophysics Data System (ADS)

    Xu, Meng; Shang, Pengjian

    2018-01-01

    There is great interest in studying the dynamic characteristics of financial time series of daily stock closing prices in different regions. Multi-scale entropy (MSE) is effective mainly in quantifying the complexity of time series on different time scales. This paper applies a new method for assessing financial stability from the perspective of MSE based on skewness and kurtosis. To better understand which coarse-graining method suits the different kinds of stock indexes, we take into account the developmental characteristics of stock markets on the three continents of Asia, North America and Europe. We study the volatility of the different financial time series and analyze the similarities and differences of the coarse-grained time series from the perspective of skewness and kurtosis. A correspondence between the entropy values of stock sequences and the degree of stability of financial markets was observed. Three stocks with particular characteristics among the eight stock sequences were discussed; their behavior matches the graphical results obtained by applying the MSE method. A comparative study is conducted over synthetic and real-world data. Results show that the modified method is more sensitive to changes in dynamics and carries more valuable information; at the same time, the discrimination provided by skewness and kurtosis is evident and also more stable.
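    The coarse-graining step of MSE averages non-overlapping windows of length τ, and an entropy measure (e.g. sample entropy) is then computed on each coarse-grained series; the modification described here replaces the window mean with skewness or kurtosis. A sketch of the coarse-graining with a moment-based window statistic, as an illustration of the idea rather than the paper's exact estimator:

    ```python
    import numpy as np

    def coarse_grain(x, scale, stat=np.mean):
        """Summarize non-overlapping windows of length `scale` with `stat`
        (np.mean for classic MSE; a skewness/kurtosis statistic in the
        modified method)."""
        x = np.asarray(x, dtype=float)
        n = (len(x) // scale) * scale
        return stat(x[:n].reshape(-1, scale), axis=1)

    def kurtosis(w, axis=1):
        """Pearson kurtosis of each window (rows of w)."""
        m = w.mean(axis=axis, keepdims=True)
        s2 = ((w - m) ** 2).mean(axis=axis)
        m4 = ((w - m) ** 4).mean(axis=axis)
        return m4 / s2 ** 2
    ```

    Repeating `coarse_grain` over a range of scales and applying an entropy estimator to each output yields the entropy-versus-scale curve that MSE analyses compare across markets.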

  8. 75 FR 55699 - Series LLCs and Cell Companies

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-14

    ... Series LLCs and Cell Companies AGENCY: Internal Revenue Service (IRS), Treasury. ACTION: Notice of... Federal tax purposes of a series of a domestic series limited liability company (LLC), a cell of a domestic cell company, or a foreign series or cell that conducts an insurance business. The proposed...

  9. Use of a Principal Components Analysis for the Generation of Daily Time Series.

    NASA Astrophysics Data System (ADS)

    Dreveton, Christine; Guillou, Yann

    2004-07-01

    A new approach for generating daily time series is considered in response to the weather-derivatives market. This approach consists of performing a principal components analysis to create independent variables, the values of which are then generated separately with a random process. Weather derivatives are financial or insurance products that give companies the opportunity to cover themselves against adverse climate conditions. The aim of a generator is to provide a wider range of feasible situations to be used in an assessment of risk. Generation of a temperature time series is required by insurers or bankers for pricing weather options. The provision of conditional probabilities and a good representation of the interannual variance are the main challenges of a generator used for weather derivatives. The generator was developed according to this new approach using a principal components analysis and was applied to the daily average temperature time series of the Paris-Montsouris station in France. The observed dataset was homogenized and the trend was removed to represent the present climate correctly. The results obtained with the generator show that it correctly represents the interannual variance of the observed climate; this is the main result of the work, because one of the main discrepancies of other generators is their inability to represent the observed interannual climate variance accurately, a discrepancy that is not acceptable for an application to weather derivatives. The generator was also tested on conditional probabilities: for example, knowledge of the aggregated value of heating degree-days in the middle of the heating season allows one to estimate the probability of reaching a threshold at the end of the heating season. This represents the main application of a climate generator for use with weather derivatives.
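    The approach described (PCA to obtain uncorrelated components, then generating each component score independently) can be sketched schematically. This is a bare-bones SVD-based illustration under the assumption of Gaussian, independent scores, not the operational generator of the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def fit_pca(X):
        """X: (n_years, n_days) matrix of detrended daily values.
        Returns the mean profile, principal directions, and per-component
        score standard deviations."""
        mean = X.mean(axis=0)
        Xc = X - mean
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        scores = U * s                     # mutually uncorrelated scores
        return mean, Vt, scores.std(axis=0)

    def generate(mean, Vt, score_std, n_new):
        """Draw independent Gaussian scores per component and map back
        to synthetic daily series."""
        z = rng.normal(size=(n_new, len(score_std))) * score_std
        return z @ Vt + mean
    ```

    Generating the component scores independently is what lets the synthetic ensemble reproduce the interannual variance captured by the leading components, which the abstract identifies as the key requirement for weather-derivative pricing.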


  10. Chemical composition measurements of the low activity waste (LAW) EPA-Series glasses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, K.; Edwards, T. B.

    2016-03-01

    In this report, the Savannah River National Laboratory provides chemical analysis results for a series of simulated low activity waste glasses provided by Pacific Northwest National Laboratory as part of an ongoing development task. The measured chemical composition data are reported and compared with the targeted values for each component for each glass. A detailed review showed no indications of errors in the preparation or measurement of the study glasses. All of the measured sums of oxides for the study glasses fell within the interval of 100.2 to 100.8 wt %, indicating recovery of all components. Comparisons of the targeted and measured chemical compositions showed that the measured values for the glasses met the targeted concentrations within 10% for those components present at more than 5 wt %.

  11. Fluctuation of similarity to detect transitions between distinct dynamical regimes in short time series

    NASA Astrophysics Data System (ADS)

    Malik, Nishant; Marwan, Norbert; Zou, Yong; Mucha, Peter J.; Kurths, Jürgen

    2014-06-01

    A method to identify distinct dynamical regimes and transitions between those regimes in a short univariate time series was recently introduced [N. Malik et al., Europhys. Lett. 97, 40009 (2012), 10.1209/0295-5075/97/40009], employing the computation of fluctuations in a measure of nonlinear similarity based on local recurrence properties. In this work, we describe the details of the analytical relationships between this newly introduced measure and the well-known concepts of attractor dimensions and Lyapunov exponents. We show that the new measure has a linear dependence on the effective dimension of the attractor and that it measures the variations in the sum of the Lyapunov spectrum. To illustrate the practical usefulness of the method, we identify various types of dynamical transitions in different nonlinear models. We present testbed examples of the new method's robustness against noise and missing values in the time series. We also use this method to analyze time series of social dynamics, specifically the US crime record time series from 1975 to 1993. Using this method, we find that dynamical complexity in robberies was influenced by the unemployment rate until the late 1980s. We also observe a dynamical transition in homicide and robbery rates in the late 1980s and early 1990s, leading to an increase in the dynamical complexity of these rates.

  12. Adapting an Agent-Based Model of Socio-Technical Systems to Analyze Security Failures

    DTIC Science & Technology

    2016-10-17

    total number of non-blackouts differed from the total number in the baseline data to a statistically significant extent with a p-value < 0.0003 ... I. Nikolic, and Z. Lukszo, Eds., Agent-based modelling of socio-technical systems. Springer Science & Business Media, 2013, vol. 9. [12] A. P. Shaw

  13. Students' Conception of Infinite Series

    ERIC Educational Resources Information Center

    Martinez-Planell, Rafael; Gonzalez, Ana Carmen; DiCristina, Gladys; Acevedo, Vanessa

    2012-01-01

    This is a report of a study of students' understanding of infinite series. It has a three-fold purpose: to show that students may construct two essentially different notions of infinite series, to show that one of the constructions is particularly difficult for students, and to examine the way in which these two different constructions may be…

  14. Time-series analysis of lung texture on bone-suppressed dynamic chest radiograph for the evaluation of pulmonary function: a preliminary study

    NASA Astrophysics Data System (ADS)

    Tanaka, Rie; Matsuda, Hiroaki; Sanada, Shigeru

    2017-03-01

    The change in lung density demonstrated on imagery depends on the relative increases and decreases in the volume of air and of lung vessels per unit volume of lung. Therefore, a time-series analysis of lung texture can be used to evaluate relative pulmonary function. This study assessed time-series analysis of lung texture on dynamic chest radiographs during respiration and demonstrated its usefulness in the diagnosis of pulmonary impairments. Sequential chest radiographs of 30 patients were obtained using a dynamic flat-panel detector (FPD; 100 kV, 0.2 mAs/pulse, 15 frames/s, SID = 2.0 m; prototype, Konica Minolta). Imaging was performed during respiration, and 210 images were obtained over 14 seconds. Commercial bone suppression image-processing software (Clear Read Bone Suppression; Riverain Technologies, Miamisburg, Ohio, USA) was applied to the sequential chest radiographs to create corresponding bone suppression images. The average pixel value, standard deviation (SD), kurtosis, and skewness were calculated from a density histogram analysis of the lung regions. Regions of interest (ROIs) were manually located in the lungs, and the same ROIs were traced by a template-matching technique during respiration. The average pixel value effectively differentiated regions with ventilatory defects from normal lung tissue. Average pixel values in normal areas changed dynamically in synchronization with the respiratory phase, whereas those in regions of ventilatory defects showed reduced variation. There were no significant differences between ventilatory defects and normal lung tissue in the other parameters. We confirmed that time-series analysis of lung texture is useful for the evaluation of pulmonary function in dynamic chest radiography during respiration. Pulmonary impairments were detected as reduced changes in pixel value.
    This technique is a simple, cost-effective diagnostic tool for the evaluation of regional
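
    The per-frame density-histogram statistics named in the abstract (average pixel value, SD, kurtosis, skewness) are straightforward to compute. The sketch below uses random stand-in data in place of actual ROI pixel values; frame count and ROI size are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in: 210 frames (14 s at 15 frames/s) of a 32x32-pixel
# lung ROI from a bone-suppressed dynamic chest radiograph.
frames = rng.normal(loc=100.0, scale=5.0, size=(210, 32, 32))

def histogram_stats(roi):
    """Mean, SD, skewness, and excess kurtosis of one ROI's pixel values."""
    v = roi.ravel()
    m, sd = v.mean(), v.std(ddof=1)
    z = (v - m) / sd
    return m, sd, (z**3).mean(), (z**4).mean() - 3.0

stats = np.array([histogram_stats(f) for f in frames])
# A ventilated region should show the mean pixel value (stats[:, 0])
# oscillating with the respiratory phase; a ventilatory defect shows a
# flattened signal.
print(stats.shape)  # (210, 4)
```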

  15. Global error estimation based on the tolerance proportionality for some adaptive Runge-Kutta codes

    NASA Astrophysics Data System (ADS)

    Calvo, M.; González-Pinto, S.; Montijano, J. I.

    2008-09-01

    Modern codes for the numerical solution of Initial Value Problems (IVPs) in ODEs are based on adaptive methods that, for a user-supplied tolerance δ, attempt to advance the integration by selecting the size of each step so that some measure of the local error is ≈ δ. Although this policy does not ensure that the global errors are under the prescribed tolerance, after the early studies of Stetter [Considerations concerning a theory for ODE-solvers, in: R. Bulirsch, R.D. Grigorieff, J. Schröder (Eds.), Numerical Treatment of Differential Equations, Proceedings of Oberwolfach, 1976, Lecture Notes in Mathematics, vol. 631, Springer, Berlin, 1978, pp. 188-200; Tolerance proportionality in ODE codes, in: R. März (Ed.), Proceedings of the Second Conference on Numerical Treatment of Ordinary Differential Equations, Humboldt University, Berlin, 1980, pp. 109-123] and the extensions of Higham [Global error versus tolerance for explicit Runge-Kutta methods, IMA J. Numer. Anal. 11 (1991) 457-480; The tolerance proportionality of adaptive ODE solvers, J. Comput. Appl. Math. 45 (1993) 227-236; The reliability of standard local error control algorithms for initial value ordinary differential equations, in: Proceedings: The Quality of Numerical Software: Assessment and Enhancement, IFIP Series, Springer, Berlin, 1997], it has been proved that in many existing explicit Runge-Kutta codes the global errors behave asymptotically as some rational power of δ. This step-size policy, for a given IVP, determines at each grid point tn a new step size hn+1 = h(tn; δ) so that h(t; δ) is a continuous function of t. In this paper a study of the tolerance proportionality property is carried out under a discontinuous step-size policy that does not allow the step size to change if the step-size ratio between two consecutive steps is close to unity. This theory is applied to obtain global error estimations in a few problems that have been solved with
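
    The tolerance proportionality effect can be observed empirically even with a toy adaptive solver. The embedded Euler/Heun (orders 1/2) pair below is a minimal sketch, not one of the codes studied in the paper: each step is accepted when the local error estimate is below the tolerance δ, and shrinking δ should shrink the global error on y' = y correspondingly.

```python
import numpy as np

def solve_adaptive(f, y0, t0, t1, tol):
    """Tiny embedded Euler/Heun adaptive solver with local error control."""
    t, y, h = t0, y0, (t1 - t0) / 100
    while t < t1:
        h = min(h, t1 - t)
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        err = abs(h * (k2 - k1) / 2)          # local error estimate
        if err <= tol:                        # accept the (Heun) step
            y += h * (k1 + k2) / 2
            t += h
        # standard step-size update, clamped to avoid wild jumps
        h *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-16)) ** 0.5))
    return y

# Global error vs tolerance on y' = y, y(0) = 1, exact solution e at t = 1:
f = lambda t, y: y
errs = []
for tol in (1e-3, 1e-4, 1e-5):
    errs.append(abs(solve_adaptive(f, 1.0, 0.0, 1.0, tol) - np.e))
print(errs)  # errors shrink as the tolerance is tightened
```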

  16. Value of Construction Company and its Dependence on Significant Variables

    NASA Astrophysics Data System (ADS)

    Vítková, E.; Hromádka, V.; Ondrušková, E.

    2017-10-01

    The paper deals with the assessment of the value of a construction company, with respect to the usable approaches and determinable variables. The reasons for assessing the value of a construction company differ, but the most important are the sale or purchase of the company, its liquidation, or its merger with another entity. Depending on the reason for the valuation, different theoretical approaches can be applied, chiefly the yield method of valuation and the proprietary (asset-based) method of valuation. Both approaches depend on detailed input variables, whose quality influences the final assessment of the company's value. The main objective of the paper is to suggest, on the basis of the analysis, possible ways of determining the input variables, mainly in the form of expected cash flows or profit. The paper focuses mainly on the use of time-series analysis, regression analysis, and mathematical simulation. As the output, the results of the analysis are demonstrated on a case study.
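
    The yield method mentioned above is, at its core, a discounted-cash-flow calculation. A minimal sketch, with purely hypothetical figures (the paper's point is that the cash-flow forecasts would come from time-series or regression models):

```python
def dcf_value(cash_flows, discount_rate):
    """Present value of a finite stream of expected yearly cash flows."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# Illustrative only: three years of forecast cash flows, 8% discount rate.
print(round(dcf_value([100.0, 110.0, 120.0], 0.08), 2))  # → 282.16
```

    The sensitivity of this value to the forecast inputs is exactly why the quality of the estimated variables dominates the final assessment.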

  17. The role of social values in the management of ecological systems.

    PubMed

    Ives, Christopher D; Kendal, Dave

    2014-11-01

    The concept of value is central to the practice and science of ecological management and conservation. There is a well-developed body of theory and evidence that explores concepts of value in different ways across different disciplines including philosophy, economics, sociology and psychology. Insight from these disciplines provides a robust and sophisticated platform for considering the role of social values in ecological conservation, management and research. This paper reviews theories of value from these disciplines and discusses practical tools and instruments that can be utilised by researchers and practitioners. A distinction is highlighted between underlying values that shape people's perception of the world (e.g. altruistic or biospheric value orientations), and the values that people assign to things in the world (e.g. natural heritage, money). Evidence from numerous studies has shown that there are multiple pathways between these values and attitudes, beliefs and behaviours relevant to ecological management and conservation. In an age of increasing anthropogenic impacts on natural systems, recognising how and why people value different aspects of ecological systems can allow ecological managers to act to minimise conflict between stakeholders and promote the social acceptability of management activities. A series of practical guidelines are provided to enable social values to be better considered in ecosystem management and research. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Analytical methods for solving boundary value heat conduction problems with heterogeneous boundary conditions on lines. I - Review

    NASA Astrophysics Data System (ADS)

    Kartashov, E. M.

    1986-10-01

    Analytical methods for solving boundary value problems for the heat conduction equation with heterogeneous boundary conditions on lines, on a plane, and in space are briefly reviewed. In particular, the method of dual integral equations and summator series is examined with reference to stationary processes. A table of principal solutions to dual integral equations and pair summator series is proposed which presents the known results in a systematic manner. Newly obtained results are presented in addition to the known ones.

  19. All-nanotube stretchable supercapacitor with low equivalent series resistance.

    PubMed

    Gilshteyn, Evgenia P; Amanbayev, Daler; Anisimov, Anton S; Kallio, Tanja; Nasibulin, Albert G

    2017-12-12

    We report a high-performance, stable, low-equivalent-series-resistance, all-nanotube stretchable supercapacitor based on single-walled carbon nanotube film electrodes and a boron nitride nanotube separator. The boron nitride nanotube layer, fabricated by airbrushing from an isopropanol dispersion, avoids the problems of high internal resistance and short-circuiting in supercapacitors. The device, fabricated in a two-electrode test cell configuration, exhibits an electrochemical double-layer capacitance mechanism and retains 96% of its initial capacitance after 20 000 electrochemical charging/discharging cycles, with a specific capacitance of 82 F g⁻¹ and a low equivalent series resistance of 4.6 Ω. The stretchable supercapacitor prototype withstands at least 1000 cycles of 50% strain, with a slight increase in volumetric capacitance from 0.4 to 0.5 mF cm⁻³ and in volumetric power density from 32 mW cm⁻³ to 40 mW cm⁻³ after stretching, which is higher than reported before. Moreover, the as-fabricated stretchable prototype showed a low resistance of 250 Ω, which decreased slightly to 200 Ω under applied strain. The simple fabrication process of such devices can be easily scaled, making the all-nanotube stretchable supercapacitors presented here promising elements of future wearable devices.
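
    As background, equivalent series resistance is commonly estimated from the instantaneous voltage drop at the start of a constant-current discharge, ESR = ΔV / (2I). The numbers below are illustrative only, chosen to reproduce a 4.6 Ω figure; they are not taken from the paper's measurements.

```python
def esr_from_ir_drop(delta_v, current):
    """ESR (ohms) from the IR drop at discharge onset: ESR = dV / (2*I)."""
    return delta_v / (2.0 * current)

# Hypothetical example: a 46 mV drop at a 5 mA discharge current.
print(esr_from_ir_drop(0.046, 0.005))  # ohms
```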

  20. Non-linear motions in reprocessed GPS station position time series

    NASA Astrophysics Data System (ADS)

    Rudenko, Sergei; Gendt, Gerd

    2010-05-01

    Global Positioning System (GPS) data from about 400 globally distributed stations, spanning 1998 to 2007, were reprocessed using the GFZ Potsdam EPOS (Earth Parameter and Orbit System) software within the International GNSS Service (IGS) Tide Gauge Benchmark Monitoring (TIGA) Pilot Project and the IGS Data Reprocessing Campaign, with the purpose of determining precise weekly coordinates of GPS stations located at or near tide gauges. Vertical motions of these stations are used to correct the vertical motions of tide gauges for local motions and to tie tide gauge measurements to the geocentric reference frame. Other estimated parameters include daily values of the Earth rotation parameters and their rates, as well as satellite antenna offsets. The derived solution, GT1, is based on an absolute phase center variation model, ITRF2005 as the a priori reference frame, and other new models; it also contributed to ITRF2008. The time series of station positions are analyzed to identify non-linear motions caused by different effects. The paper presents the time series of GPS station coordinates and investigates apparent non-linear motions and their influence on GPS station height rates.
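
    A common first step in such an analysis is to fit and remove a linear (secular) trend from each station's height series and inspect the residual for apparent non-linear motion. The sketch below uses synthetic stand-in data (a linear rate plus an annual signal), not actual GPS solutions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical weekly station-height series (mm) over ten years: 2 mm/yr
# uplift plus a 3 mm annual signal standing in for "non-linear motion".
t = np.arange(520) / 52.0  # time in years
h = 2.0 * t + 3.0 * np.sin(2 * np.pi * t) + rng.normal(scale=1.0, size=t.size)

# Least-squares estimate of the secular rate and offset.
A = np.vstack([t, np.ones_like(t)]).T
(rate, offset), *_ = np.linalg.lstsq(A, h, rcond=None)

# The residual carries the apparent non-linear motion that would bias a
# purely linear height-rate model.
residual = h - (rate * t + offset)
print(round(float(rate), 2), float(residual.std()))
```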