Sample records for vector time series

  1. Detection of a sudden change of the field time series based on the Lorenz system.

    PubMed

    Da, ChaoJiu; Li, Fang; Shen, BingLu; Yan, PengCheng; Song, Jian; Ma, DeShan

    2017-01-01

    We conducted an exploratory study of the detection of a sudden change of the field time series based on the numerical solution of the Lorenz system. First, the time when the Lorenz path jumped between the regions on the left and right of the equilibrium point of the Lorenz system was quantitatively marked and the sudden change time of the Lorenz system was obtained. Second, the numerical solution of the Lorenz system was regarded as a vector; thus, this solution could be considered as a vector time series. We transformed the vector time series into a time series using the vector inner product, considering the geometric and topological features of the Lorenz system path. Third, the sudden change of the resulting time series was detected using the sliding t-test method. Comparing the test results with the quantitatively marked times indicated that the method could detect every sudden change of the Lorenz path, showing that the method is effective. Finally, we used the method to detect the sudden change of the pressure field time series and temperature field time series, and obtained good results for both series, which indicates that the method can be applied to high-dimensional vector time series. Mathematically, there is no essential difference between the field time series and vector time series; thus, we provide a new method for the detection of the sudden change of the field time series.
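
    As a rough illustration only (not the authors' implementation), the Python sketch below reduces a Lorenz vector series to a scalar series with an inner product against a fixed reference vector and scans it with a sliding t-test; the reference vector, window length and threshold are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.integrate import odeint
    from scipy.stats import ttest_ind

    def lorenz(state, t, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = state
        return [sigma * (y - x), x * (rho - y) - z, x * y - beta * z]

    t = np.arange(0, 60, 0.01)
    path = odeint(lorenz, [1.0, 1.0, 1.0], t)        # vector time series, shape (N, 3)

    # Reduce the vector series to a scalar series via an inner product with a
    # fixed reference vector (an illustrative choice, not the paper's).
    ref = np.array([1.0, 1.0, 0.0])
    scalar = path @ ref

    # Sliding t-test: compare the means of two adjacent windows at every step.
    win = 100
    t_stat = np.full(scalar.size, np.nan)
    for i in range(win, scalar.size - win):
        t_stat[i], _ = ttest_ind(scalar[i - win:i], scalar[i:i + win])

    # Candidate sudden-change times: |t| above an (illustrative) threshold.
    changes = np.where(np.abs(t_stat) > 6.0)[0]
    ```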

  2. Detection of a sudden change of the field time series based on the Lorenz system

    PubMed Central

    Li, Fang; Shen, BingLu; Yan, PengCheng; Song, Jian; Ma, DeShan

    2017-01-01

    We conducted an exploratory study of the detection of a sudden change of the field time series based on the numerical solution of the Lorenz system. First, the time when the Lorenz path jumped between the regions on the left and right of the equilibrium point of the Lorenz system was quantitatively marked and the sudden change time of the Lorenz system was obtained. Second, the numerical solution of the Lorenz system was regarded as a vector; thus, this solution could be considered as a vector time series. We transformed the vector time series into a time series using the vector inner product, considering the geometric and topological features of the Lorenz system path. Third, the sudden change of the resulting time series was detected using the sliding t-test method. Comparing the test results with the quantitatively marked times indicated that the method could detect every sudden change of the Lorenz path, showing that the method is effective. Finally, we used the method to detect the sudden change of the pressure field time series and temperature field time series, and obtained good results for both series, which indicates that the method can be applied to high-dimensional vector time series. Mathematically, there is no essential difference between the field time series and vector time series; thus, we provide a new method for the detection of the sudden change of the field time series. PMID:28141832

  3. TimesVector: a vectorized clustering approach to the analysis of time series transcriptome data from multiple phenotypes.

    PubMed

    Jung, Inuk; Jo, Kyuri; Kang, Hyejin; Ahn, Hongryul; Yu, Youngjae; Kim, Sun

    2017-12-01

    Identifying biologically meaningful gene expression patterns from time series gene expression data is important to understand the underlying biological mechanisms. To identify significantly perturbed gene sets between different phenotypes, analysis of time series transcriptome data requires consideration of time and sample dimensions. Thus, the analysis of such time series data seeks to search for gene sets that exhibit similar or different expression patterns between two or more sample conditions, constituting three-dimensional data, i.e. gene-time-condition. The computational complexity of analyzing such data is very high, even compared to the already difficult NP-hard two-dimensional biclustering algorithms. Because of this challenge, traditional time series clustering algorithms are designed to capture co-expressed genes with similar expression patterns in two sample conditions. We present a triclustering algorithm, TimesVector, specifically designed for clustering three-dimensional time series data to capture distinctively similar or different gene expression patterns between two or more sample conditions. TimesVector identifies clusters with distinctive expression patterns in three steps: (i) dimension reduction and clustering of time-condition concatenated vectors, (ii) post-processing clusters for detecting similar and distinct expression patterns and (iii) rescuing genes from unclassified clusters. Using four sets of time series gene expression data, generated by both microarray and high throughput sequencing platforms, we demonstrated that TimesVector successfully detected biologically meaningful clusters of high quality. TimesVector improved the clustering quality compared to existing triclustering tools, and only TimesVector successfully detected clusters with differential expression patterns across conditions. The TimesVector software is available at http://biohealth.snu.ac.kr/software/TimesVector/. Contact: sunkim.bioinfo@snu.ac.kr. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
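
    TimesVector itself is the software distributed at the URL above; the short sketch below is not that tool and only illustrates step (i), concatenating each gene's time-condition profile into one vector and clustering, on a synthetic array whose shape (genes x time points x conditions) is assumed for the example.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    # Assumed toy shape: expr[g, t, c] = expression of gene g at time point t in condition c.
    rng = np.random.default_rng(0)
    expr = rng.normal(size=(500, 8, 3))

    # Step (i): concatenate each gene's time-condition profile into a single vector
    # (genes x (time points * conditions)) and cluster the resulting vectors.
    vectors = expr.reshape(expr.shape[0], -1)
    labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(vectors)
    ```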

  4. Small Sample Properties of Bayesian Multivariate Autoregressive Time Series Models

    ERIC Educational Resources Information Center

    Price, Larry R.

    2012-01-01

    The aim of this study was to compare the small sample (N = 1, 3, 5, 10, 15) performance of a Bayesian multivariate vector autoregressive (BVAR-SEM) time series model relative to frequentist power and parameter estimation bias. A multivariate autoregressive model was developed based on correlated autoregressive time series vectors of varying…

  5. Space Object Classification Using Fused Features of Time Series Data

    NASA Astrophysics Data System (ADS)

    Jia, B.; Pham, K. D.; Blasch, E.; Shen, D.; Wang, Z.; Chen, G.

    In this paper, a fused feature vector consisting of raw time series and texture feature information is proposed for space object classification. The time series data includes historical orbit trajectories and asteroid light curves. The texture feature is derived from recurrence plots using Gabor filters for both unsupervised learning and supervised learning algorithms. The simulation results show that the classification algorithms using the fused feature vector achieve better performance than those using raw time series or texture features only.

  6. iVAR: a program for imputing missing data in multivariate time series using vector autoregressive models.

    PubMed

    Liu, Siwei; Molenaar, Peter C M

    2014-12-01

    This article introduces iVAR, an R program for imputing missing data in multivariate time series on the basis of vector autoregressive (VAR) models. We conducted a simulation study to compare iVAR with three methods for handling missing data: listwise deletion, imputation with sample means and variances, and multiple imputation ignoring time dependency. The results showed that iVAR produces better estimates for the cross-lagged coefficients than do the other three methods. We demonstrate the use of iVAR with an empirical example of time series electrodermal activity data and discuss the advantages and limitations of the program.
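
    iVAR is an R program; the Python sketch below is only a rough analogue of the underlying idea (alternate between fitting a VAR model and replacing missing entries with its predictions) using statsmodels, not the authors' implementation, and the lag order and iteration count are arbitrary.

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.api import VAR

    def var_impute(data, lags=1, n_iter=10):
        """Roughly impute NaNs in a (T x k) array by alternating VAR fits and predictions."""
        filled = pd.DataFrame(np.asarray(data, dtype=float))
        mask = filled.isna()
        filled = filled.fillna(filled.mean())                 # crude initial fill
        for _ in range(n_iter):
            fit = VAR(filled.values).fit(lags)
            preds = [fit.forecast(filled.values[i - lags:i], 1)[0]
                     for i in range(lags, len(filled))]       # one-step predictions
            pred = np.vstack([filled.values[:lags], preds])   # first `lags` rows kept as-is
            filled = filled.where(~mask, pd.DataFrame(pred))  # overwrite only the missing cells
        return filled.values
    ```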

  7. Using Time Series Analysis to Predict Cardiac Arrest in a PICU.

    PubMed

    Kennedy, Curtis E; Aoki, Noriaki; Mariscalco, Michele; Turley, James P

    2015-11-01

    To build and test cardiac arrest prediction models in a PICU, using time series analysis as input, and to measure changes in prediction accuracy attributable to different classes of time series data. Retrospective cohort study. Thirty-one bed academic PICU that provides care for medical and general surgical (not congenital heart surgery) patients. Patients experiencing a cardiac arrest in the PICU and requiring external cardiac massage for at least 2 minutes. None. One hundred three cases of cardiac arrest and 109 control cases were used to prepare a baseline dataset that consisted of 1,025 variables in four data classes: multivariate, raw time series, clinical calculations, and time series trend analysis. We trained 20 arrest prediction models using a matrix of five feature sets (combinations of data classes) with four modeling algorithms: linear regression, decision tree, neural network, and support vector machine. The reference model (multivariate data with regression algorithm) had an accuracy of 78% and 87% area under the receiver operating characteristic curve. The best model (multivariate + trend analysis data with support vector machine algorithm) had an accuracy of 94% and 98% area under the receiver operating characteristic curve. Cardiac arrest predictions based on a traditional model built with multivariate data and a regression algorithm misclassified cases 3.7 times more frequently than predictions that included time series trend analysis and built with a support vector machine algorithm. Although the final model lacks the specificity necessary for clinical application, we have demonstrated how information from time series data can be used to increase the accuracy of clinical prediction models.

  8. G14A-06- Analysis of the DORIS, GNSS, SLR, VLBI and Gravimetric Time Series at the GGOS Core Sites

    NASA Technical Reports Server (NTRS)

    Moreaux, G.; Lemoine, F.; Luceri, V.; Pavlis, E.; MacMillan, D.; Bonvalot, S.; Saunier, J.

    2017-01-01

    We analyze the time series at the three to four multi-technique GGOS core sites to compare the spectral content of the space geodetic and gravity time series, and to evaluate the level of agreement between the space geodesy measurements and the physical tie vectors.

  9. Process fault detection and nonlinear time series analysis for anomaly detection in safeguards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burr, T.L.; Mullen, M.F.; Wangen, L.E.

    In this paper we discuss two advanced techniques, process fault detection and nonlinear time series analysis, and apply them to the analysis of vector-valued and single-valued time-series data. We investigate model-based process fault detection methods for analyzing simulated, multivariate, time-series data from a three-tank system. The model-predictions are compared with simulated measurements of the same variables to form residual vectors that are tested for the presence of faults (possible diversions in safeguards terminology). We evaluate two methods, testing all individual residuals with a univariate z-score and testing all variables simultaneously with the Mahalanobis distance, for their ability to detect loss of material from two different leak scenarios from the three-tank system: a leak without and with replacement of the lost volume. Nonlinear time-series analysis tools were compared with the linear methods popularized by Box and Jenkins. We compare prediction results using three nonlinear and two linear modeling methods on each of six simulated time series: two nonlinear and four linear. The nonlinear methods performed better at predicting the nonlinear time series and did as well as the linear methods at predicting the linear values.
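
    A minimal sketch of the two residual tests described, univariate z-scores and a joint Mahalanobis distance with a chi-square alarm threshold, using simulated residuals as a stand-in for the three-tank model output; the reference period, injected bias and significance level are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(1)
    residuals = rng.normal(size=(500, 3))      # stand-in for measured-minus-predicted vectors
    residuals[400:] += [0.0, 0.8, 0.0]         # injected bias mimicking an undetected leak

    # Univariate test: z-score each residual against a fault-free reference period.
    ref = residuals[:200]
    z = (residuals - ref.mean(axis=0)) / ref.std(axis=0, ddof=1)
    uni_alarm = np.abs(z) > 3.0

    # Multivariate test: Mahalanobis distance of each residual vector, flagged
    # against a chi-square threshold with 3 degrees of freedom.
    cov_inv = np.linalg.inv(np.cov(ref, rowvar=False))
    centered = residuals - ref.mean(axis=0)
    d2 = np.einsum('ij,jk,ik->i', centered, cov_inv, centered)
    multi_alarm = d2 > chi2.ppf(0.99, df=3)
    ```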

  10. River flow prediction using hybrid models of support vector regression with the wavelet transform, singular spectrum analysis and chaotic approach

    NASA Astrophysics Data System (ADS)

    Baydaroğlu, Özlem; Koçak, Kasım; Duran, Kemal

    2018-06-01

    Prediction of water amount that will enter the reservoirs in the following month is of vital importance especially for semi-arid countries like Turkey. Climate projections emphasize that water scarcity will be one of the serious problems in the future. This study presents a methodology for predicting river flow for the subsequent month based on the time series of observed monthly river flow with hybrid models of support vector regression (SVR). Monthly river flow over the period 1940-2012 observed for the Kızılırmak River in Turkey has been used for training the method, which then has been applied for predictions over a period of 3 years. SVR is a specific implementation of support vector machines (SVMs), which transforms the observed input data time series into a high-dimensional feature space (input matrix) by way of a kernel function and performs a linear regression in this space. SVR requires a special input matrix. The input matrix was produced by wavelet transforms (WT), singular spectrum analysis (SSA), and a chaotic approach (CA) applied to the input time series. WT convolutes the original time series into a series of wavelets, and SSA decomposes the time series into a trend, an oscillatory and a noise component by singular value decomposition. CA uses a phase space formed by trajectories, which represent the dynamics producing the time series. These three methods for producing the input matrix for the SVR proved successful, while the SVR-WT combination resulted in the highest coefficient of determination and the lowest mean absolute error.
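
    Assuming the PyWavelets and scikit-learn libraries, the sketch below illustrates only the SVR-WT combination: each wavelet sub-series is lag-embedded and the lags are stacked into the SVR input matrix. The wavelet, decomposition level, lag length and kernel settings are arbitrary choices, not the paper's.

    ```python
    import numpy as np
    import pywt
    from sklearn.svm import SVR

    def wavelet_svr_forecast(flow, lags=12, wavelet='db4', level=2):
        """Sketch: split the flow into wavelet sub-series, lag-embed each sub-series
        to build the SVR input matrix, fit, and forecast one month ahead."""
        flow = np.asarray(flow, dtype=float)
        coeffs = pywt.wavedec(flow, wavelet, level=level)
        bands = []
        for k in range(len(coeffs)):
            keep = [c if i == k else np.zeros_like(c) for i, c in enumerate(coeffs)]
            bands.append(pywt.waverec(keep, wavelet)[:len(flow)])   # one sub-series per band

        # Rows: time steps; columns: the previous `lags` values of every sub-series.
        X = np.hstack([np.array([b[i - lags:i] for i in range(lags, len(flow))]) for b in bands])
        y = flow[lags:]
        model = SVR(kernel='rbf', C=10.0).fit(X, y)

        x_next = np.concatenate([b[-lags:] for b in bands]).reshape(1, -1)
        return model.predict(x_next)[0]
    ```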

  11. Modeling global vector fields of chaotic systems from noisy time series with the aid of structure-selection techniques.

    PubMed

    Xu, Daolin; Lu, Fangfang

    2006-12-01

    We address the problem of reconstructing a set of nonlinear differential equations from chaotic time series. A method that combines the implicit Adams integration and the structure-selection technique of an error reduction ratio is proposed for system identification and corresponding parameter estimation of the model. The structure-selection technique identifies the significant terms from a pool of candidates of functional basis and determines the optimal model through orthogonal characteristics on data. The technique with the Adams integration algorithm makes the reconstruction available to data sampled with large time intervals. Numerical experiment on Lorenz and Rossler systems shows that the proposed strategy is effective in global vector field reconstruction from noisy time series.
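
    The paper's method couples implicit Adams integration with error-reduction-ratio structure selection; the sketch below is only a simplified analogue in the same spirit, regressing finite-difference derivatives on a monomial candidate library and keeping the largest coefficients. It is not the authors' algorithm.

    ```python
    import numpy as np
    from itertools import combinations_with_replacement

    def reconstruct_vector_field(X, dt, degree=2, keep=5):
        """Crude analogue of structure selection: regress finite-difference derivatives
        on a monomial candidate library and zero out all but the `keep` largest terms.
        X is an (N x state_dim) sampled trajectory with time step dt."""
        dXdt = np.gradient(X, dt, axis=0)

        # Candidate library: constant plus all monomials of the state variables up to `degree`.
        n = X.shape[1]
        terms = [()] + [c for d in range(1, degree + 1)
                        for c in combinations_with_replacement(range(n), d)]
        library = np.column_stack([np.prod(X[:, c], axis=1) if c else np.ones(len(X))
                                   for c in terms])

        coefs, _, _, _ = np.linalg.lstsq(library, dXdt, rcond=None)
        for j in range(coefs.shape[1]):                      # sparsify each equation separately
            small = np.argsort(np.abs(coefs[:, j]))[:-keep]
            coefs[small, j] = 0.0
        return terms, coefs
    ```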

  12. GNSS Network time series analysis

    NASA Astrophysics Data System (ADS)

    Normand, M.; Balodis, J.; Janpaule, I.; Haritonova, D.

    2012-12-01

    Time series of GNSS station results for both the EUPOS®-Riga and LatPos networks have been developed at the Institute of Geodesy and Geoinformation (University of Latvia) using Bernese v.5.0 software. The base stations were selected among the EPN and IGS stations in the surroundings of Latvia, at distances of up to 700 km. The time series results are analysed and coordinate velocity vectors have been determined. A background map of tectonic faults helps to interpret the behaviour of the GNSS station coordinate velocity vectors in their geological setting. Outlying situations were recognized, although the nature of some of them remains an open question. The dependence on various influences has been tested.

  13. Association mining of dependency between time series

    NASA Astrophysics Data System (ADS)

    Hafez, Alaaeldin

    2001-03-01

    Time series analysis is considered as a crucial component of strategic control over a broad variety of disciplines in business, science and engineering. Time series data is a sequence of observations collected over intervals of time. Each time series describes a phenomenon as a function of time. Analysis on time series data includes discovering trends (or patterns) in a time series sequence. In the last few years, data mining has emerged and been recognized as a new technology for data analysis. Data Mining is the process of discovering potentially valuable patterns, associations, trends, sequences and dependencies in data. Data mining techniques can discover information that many traditional business analysis and statistical techniques fail to deliver. In this paper, we adapt and innovate data mining techniques to analyze time series data. By using data mining techniques, maximal frequent patterns are discovered and used in predicting future sequences or trends, where trends describe the behavior of a sequence. In order to include different types of time series (e.g. irregular and non-systematic), we consider past frequent patterns of the same time sequences (local patterns) and of other dependent time sequences (global patterns). We use the word 'dependent' instead of the word 'similar' for emphasis on real life time series where two time series sequences could be completely different (in values, shapes, etc.), but they still react to the same conditions in a dependent way. In this paper, we propose the Dependence Mining Technique that could be used in predicting time series sequences. The proposed technique consists of three phases: (a) for all time series sequences, generate their trend sequences; (b) discover maximal frequent trend patterns and generate pattern vectors (to keep information about frequent trend patterns); and (c) use trend pattern vectors to predict future time series sequences.

  14. Modeling Time Series Data for Supervised Learning

    ERIC Educational Resources Information Center

    Baydogan, Mustafa Gokce

    2012-01-01

    Temporal data are increasingly prevalent and important in analytics. Time series (TS) data are chronological sequences of observations and an important class of temporal data. Fields such as medicine, finance, learning science and multimedia naturally generate TS data. Each series provide a high-dimensional data vector that challenges the learning…

  15. Riemannian multi-manifold modeling and clustering in brain networks

    NASA Astrophysics Data System (ADS)

    Slavakis, Konstantinos; Salsabilian, Shiva; Wack, David S.; Muldoon, Sarah F.; Baidoo-Williams, Henry E.; Vettel, Jean M.; Cieslak, Matthew; Grafton, Scott T.

    2017-08-01

    This paper introduces Riemannian multi-manifold modeling in the context of brain-network analytics: Brain-network time series yield features which are modeled as points lying in or close to a union of a finite number of submanifolds within a known Riemannian manifold. Distinguishing disparate time series amounts thus to clustering multiple Riemannian submanifolds. To this end, two feature-generation schemes for brain-network time series are put forth. The first one is motivated by Granger-causality arguments and uses an auto-regressive moving average model to map low-rank linear vector subspaces, spanned by column vectors of appropriately defined observability matrices, to points into the Grassmann manifold. The second one utilizes (non-linear) dependencies among network nodes by introducing kernel-based partial correlations to generate points in the manifold of positive-definite matrices. Based on recently developed research on clustering Riemannian submanifolds, an algorithm is provided for distinguishing time series based on their Riemannian-geometry properties. Numerical tests on time series, synthetically generated from real brain-network structural connectivity matrices, reveal that the proposed scheme outperforms classical and state-of-the-art techniques in clustering brain-network states/structures.

  16. The effect of transverse wave vector and magnetic fields on resonant tunneling times in double-barrier structures

    NASA Astrophysics Data System (ADS)

    Wang, Hongmei; Zhang, Yafei; Xu, Huaizhe

    2007-01-01

    The effect of transverse wave vector and magnetic fields on resonant tunneling times in double-barrier structures, which is significant but has been frequently omitted in previous theoretical methods, has been reported in this paper. The analytical expressions of the longitudinal energies of quasibound levels (LEQBL) and the lifetimes of quasibound levels (LQBL) in symmetrical double-barrier (SDB) structures have been derived as a function of transverse wave vector and longitudinal magnetic fields perpendicular to interfaces. Based on our derived analytical expressions, the LEQBL and LQBL dependence upon transverse wave vector and longitudinal magnetic fields has been explored numerically for a SDB structure. Model calculations show that the LEQBL decrease monotonically and the LQBL shorten with increasing transverse wave vector, and each original LEQBL splits to a series of sub-LEQBL which shift nearly linearly toward the well bottom and the lifetimes of quasibound level series (LQBLS) shorten with increasing Landau-level indices and magnetic fields.

  17. Evidence for chaos in an experimental time series from serrated plastic flow

    NASA Astrophysics Data System (ADS)

    Venkadesan, S.; Valsakumar, M. C.; Murthy, K. P. N.; Rajasekar, S.

    1996-07-01

    An experimental time series from a tensile test of an Al-Mg alloy in the serrated plastic flow domain is analyzed for signature of chaos. We employ state space reconstruction by embedding of time delay vectors. The minimum embedding dimension is found to be 4 and the largest Lyapunov exponent is positive, thereby providing prima facie evidence for chaos in an experimental time series of serrated plastic flow data.
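
    A minimal sketch of the delay-vector embedding step used for the state-space reconstruction; the embedding delay and the synthetic stand-in series are illustrative, and the Lyapunov-exponent estimation itself is not shown.

    ```python
    import numpy as np

    def delay_embed(x, dim=4, tau=1):
        """Build delay vectors [x(t), x(t + tau), ..., x(t + (dim - 1) * tau)]."""
        n = len(x) - (dim - 1) * tau
        return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

    # Example with a synthetic stand-in series (the experimental stress data are not shown here).
    rng = np.random.default_rng(2)
    stress = np.sin(0.07 * np.arange(2000)) + 0.1 * rng.normal(size=2000)
    vectors = delay_embed(stress, dim=4, tau=10)     # points in the reconstructed state space
    ```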

  18. Permutation entropy with vector embedding delays

    NASA Astrophysics Data System (ADS)

    Little, Douglas J.; Kane, Deb M.

    2017-12-01

    Permutation entropy (PE) is a statistic used widely for the detection of structure within a time series. Embedding delay times at which the PE is reduced are characteristic timescales for which such structure exists. Here, a generalized scheme is investigated where embedding delays are represented by vectors rather than scalars, permitting PE to be calculated over a (D-1)-dimensional space, where D is the embedding dimension. This scheme is applied to numerically generated noise, sine wave and logistic map series, and experimental data sets taken from a vertical-cavity surface emitting laser exhibiting temporally localized pulse structures within the round-trip time of the laser cavity. Results are visualized as PE maps as a function of embedding delay, with low PE values indicating combinations of embedding delays where correlation structure is present. It is demonstrated that vector embedding delays enable identification of structure that is ambiguous or masked, when the embedding delay is constrained to scalar form.
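
    A minimal sketch of permutation entropy evaluated on a grid of two embedding delays for D = 3, i.e. the (D-1)-dimensional delay space described; the assumption that the delays give the gaps between consecutive samples of each ordinal pattern, the test signal and the delay range are all illustrative choices.

    ```python
    import numpy as np
    from itertools import permutations

    def permutation_entropy(x, delays):
        """Normalized PE for embedding dimension D = len(delays) + 1, with `delays`
        giving the gaps between consecutive samples of each ordinal pattern."""
        D = len(delays) + 1
        offsets = np.concatenate(([0], np.cumsum(delays)))
        n = len(x) - offsets[-1]
        patterns = np.argsort(np.column_stack([x[o:o + n] for o in offsets]), axis=1)
        keys = {p: i for i, p in enumerate(permutations(range(D)))}
        counts = np.bincount([keys[tuple(row)] for row in patterns], minlength=len(keys))
        p = counts[counts > 0] / counts.sum()
        return -np.sum(p * np.log(p)) / np.log(len(keys))    # normalized to [0, 1]

    # PE map over a grid of two embedding delays (D = 3), on a noisy sine as a stand-in signal.
    rng = np.random.default_rng(3)
    x = np.sin(0.2 * np.arange(5000)) + 0.2 * rng.normal(size=5000)
    pe_map = np.array([[permutation_entropy(x, (t1, t2)) for t2 in range(1, 31)]
                       for t1 in range(1, 31)])
    ```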

  19. Separation of spatial-temporal patterns ('climatic modes') by combined analysis of really measured and generated numerically vector time series

    NASA Astrophysics Data System (ADS)

    Feigin, A. M.; Mukhin, D.; Volodin, E. M.; Gavrilov, A.; Loskutov, E. M.

    2013-12-01

    The new method of decomposition of the Earth's climate system into well separated spatial-temporal patterns ('climatic modes') is discussed. The method is based on: (i) a generalization of MSSA (Multichannel Singular Spectrum Analysis) [1] for expanding vector (space-distributed) time series in a basis of spatial-temporal empirical orthogonal functions (STEOF), which makes allowance for delayed correlations of the processes recorded at spatially separated points; (ii) expanding both real SST data and several-times-longer numerically generated SST data in the STEOF basis; (iii) use of the numerically produced STEOF basis for the exclusion of 'too slow' (and thus not correctly represented) processes from the real data. By means of vector time series generated numerically by the INM RAS Coupled Climate Model [2], the method separates from real SST anomaly data [3] two climatic modes with noticeably different time scales: 3-5 and 9-11 years. Relations of the separated modes to ENSO and PDO are investigated. Possible applications of the spatial-temporal climatic pattern concept to the prognosis of climate system evolution are discussed. 1. Ghil, M., R. M. Allen, M. D. Dettinger, K. Ide, D. Kondrashov, et al. (2002) "Advanced spectral methods for climatic time series", Rev. Geophys. 40(1), 3.1-3.41. 2. http://83.149.207.89/GCM_DATA_PLOTTING/GCM_INM_DATA_XY_en.htm 3. http://iridl.ldeo.columbia.edu/SOURCES/.KAPLAN/.EXTENDED/.v2/.ssta/

  20. A comparison between MS-VECM and MS-VECMX on economic time series data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Wai; Ismail, Mohd Tahir; Sek, Siok-Kun

    2014-07-01

    Multivariate Markov switching models are able to provide useful information in the study of structural change, since a regime-switching model can analyze time-varying data and capture the mean and variance of the dependence structure in the series. This paper investigates the effects of oil prices and gold prices on the stock market returns of Malaysia, Singapore, Thailand and Indonesia. Two forms of multivariate Markov switching models are used, namely the mean adjusted heteroskedasticity Markov Switching Vector Error Correction Model (MSMH-VECM) and the mean adjusted heteroskedasticity Markov Switching Vector Error Correction Model with an exogenous variable (MSMH-VECMX). These two models are used to capture the transition probabilities of the data, since real financial time series often exhibit nonlinear properties such as regime switching, cointegrating relations, and jumps or breaks over time. A comparison between the two models indicates that the MSMH-VECM model fits the time series data better than the MSMH-VECMX model. In addition, it was found that oil prices and gold prices affected stock market changes in the four selected countries.

  1. New method for solving inductive electric fields in the non-uniformly conducting ionosphere

    NASA Astrophysics Data System (ADS)

    Vanhamäki, H.; Amm, O.; Viljanen, A.

    2006-10-01

    We present a new calculation method for solving inductive electric fields in the ionosphere. The time series of the potential part of the ionospheric electric field, together with the Hall and Pedersen conductances serves as the input to this method. The output is the time series of the induced rotational part of the ionospheric electric field. The calculation method works in the time-domain and can be used with non-uniform, time-dependent conductances. In addition, no particular symmetry requirements are imposed on the input potential electric field. The presented method makes use of special non-local vector basis functions called the Cartesian Elementary Current Systems (CECS). This vector basis offers a convenient way of representing curl-free and divergence-free parts of 2-dimensional vector fields and makes it possible to solve the induction problem using simple linear algebra. The new calculation method is validated by comparing it with previously published results for Alfvén wave reflection from a uniformly conducting ionosphere.

  2. Application of information-retrieval methods to the classification of physical data

    NASA Technical Reports Server (NTRS)

    Mamotko, Z. N.; Khorolskaya, S. K.; Shatrovskiy, L. I.

    1975-01-01

    Scientific data received from satellites are characterized as a multi-dimensional time series, whose terms are vector functions of a vector of measurement conditions. Information retrieval methods are used to construct lower dimensional samples on the basis of the condition vector, in order to obtain these data and to construct partial relations. The methods are applied to the joint Soviet-French Arkad project.

  3. Comparison of Support Vector Machine, Neural Network, and CART Algorithms for the Land-Cover Classification Using Limited Training Data Points

    EPA Science Inventory

    Support vector machine (SVM) was applied for land-cover characterization using MODIS time-series data. Classification performance was examined with respect to training sample size, sample variability, and landscape homogeneity (purity). The results were compared to two convention...

  4. Developing a local least-squares support vector machines-based neuro-fuzzy model for nonlinear and chaotic time series prediction.

    PubMed

    Miranian, A; Abdollahzade, M

    2013-02-01

    Local modeling approaches, owing to their ability to model different operating regimes of nonlinear systems and processes by independent local models, seem appealing for modeling, identification, and prediction applications. In this paper, we propose a local neuro-fuzzy (LNF) approach based on the least-squares support vector machines (LSSVMs). The proposed LNF approach employs LSSVMs, which are powerful in modeling and predicting time series, as local models and uses hierarchical binary tree (HBT) learning algorithm for fast and efficient estimation of its parameters. The HBT algorithm heuristically partitions the input space into smaller subdomains by axis-orthogonal splits. In each partitioning, the validity functions automatically form a unity partition and therefore normalization side effects, e.g., reactivation, are prevented. Integration of LSSVMs into the LNF network as local models, along with the HBT learning algorithm, yield a high-performance approach for modeling and prediction of complex nonlinear time series. The proposed approach is applied to modeling and predictions of different nonlinear and chaotic real-world and hand-designed systems and time series. Analysis of the prediction results and comparisons with recent and old studies demonstrate the promising performance of the proposed LNF approach with the HBT learning algorithm for modeling and prediction of nonlinear and chaotic systems and time series.

  5. Product demand forecasts using wavelet kernel support vector machine and particle swarm optimization in manufacture system

    NASA Astrophysics Data System (ADS)

    Wu, Qi

    2010-03-01

    Demand forecasts play a crucial role in supply chain management. The future demand for a certain product is the basis for the respective replenishment systems. For demand series with small samples, seasonal character, nonlinearity, randomness and fuzziness, existing support vector kernels do not approximate well the random curve of the sales time series in the quadratic continuous integral space. In this paper, we present a hybrid intelligent system combining the wavelet kernel support vector machine and particle swarm optimization for demand forecasting. The results of an application to car sales series forecasting show that the forecasting approach based on the hybrid PSOWv-SVM model is effective and feasible. A comparison between the method proposed in this paper and other approaches is also given, which shows that this method is, for the discussed example, better than hybrid PSOv-SVM and other traditional methods.

  6. Hybrid wavelet-support vector machine approach for modelling rainfall-runoff process.

    PubMed

    Komasi, Mehdi; Sharghi, Soroush

    2016-01-01

    Because of the importance of water resources management, the need for accurate modeling of the rainfall-runoff process has grown rapidly in the past decades. Recently, the support vector machine (SVM) approach has been used by hydrologists for rainfall-runoff modeling and other fields of hydrology. Similar to other artificial intelligence models, such as the artificial neural network (ANN) and the adaptive neural fuzzy inference system, the SVM model is based on autoregressive properties. In this paper, wavelet analysis was linked to the SVM model concept for modeling the rainfall-runoff process of the Aghchai and Eel River watersheds. In this way, the main time series of the two variables, rainfall and runoff, were decomposed into multiple frequency-based time series by wavelet theory; then, these time series were used as input data for the SVM model in order to predict the runoff discharge one day ahead. The obtained results show that the wavelet SVM model can predict both short- and long-term runoff discharges by considering the seasonality effects. Also, the proposed hybrid model is relatively more appropriate than classical autoregressive ones such as ANN and SVM because it uses the multi-scale time series of rainfall and runoff data in the modeling process.

  7. Vectorization efforts to increase Gram-negative intracellular drug concentration: a case study on HldE-K inhibitors.

    PubMed

    Atamanyuk, Dmytro; Faivre, Fabien; Oxoby, Mayalen; Ledoussal, Benoit; Drocourt, Elodie; Moreau, François; Gerusz, Vincent

    2013-03-14

    In this paper, we present different strategies to vectorize HldE kinase inhibitors with the goal to improve their gram-negative intracellular concentration. Syntheses and biological effects of siderophoric, aminoglycosidic, amphoteric, and polycationic vectors are discussed. While siderophoric and amphoteric vectorization efforts proved to be disappointing in this series, aminoglycosidic and polycationic vectors were able for the first time to achieve synergistic effects of our inhibitors with erythromycin. Although these effects proved to be nonspecific, this study provides information about the required stereoelectronic arrangement of the polycationic amines and their basicity requirements to fulfill outer membrane destabilization resulting in better erythromycin synergies.

  8. Predicting Jakarta composite index using hybrid of fuzzy time series and support vector regression models

    NASA Astrophysics Data System (ADS)

    Febrian Umbara, Rian; Tarwidi, Dede; Budi Setiawan, Erwin

    2018-03-01

    The paper discusses the prediction of the Jakarta Composite Index (JCI) on the Indonesia Stock Exchange. The study is based on JCI historical data for 1286 days to predict the value of JCI one day ahead. This paper proposes a prediction done in two stages: the first stage uses Fuzzy Time Series (FTS) to predict the values of ten technical indicators, and the second stage uses Support Vector Regression (SVR) to predict the value of JCI one day ahead, resulting in a hybrid FTS-SVR prediction model. The performance of this combined prediction model is compared with the performance of the single-stage prediction model using SVR only. Ten technical indicators are used as input for each model.

  9. Featureless classification of light curves

    NASA Astrophysics Data System (ADS)

    Kügler, S. D.; Gianniotis, N.; Polsterer, K. L.

    2015-08-01

    In the era of rapidly increasing amounts of time series data, classification of variable objects has become the main objective of time-domain astronomy. Classification of irregularly sampled time series is particularly difficult because the data cannot be represented naturally as a vector which can be directly fed into a classifier. In the literature, various statistical features serve as vector representations. In this work, we represent time series by a density model. The density model captures all the information available, including measurement errors. Hence, we view this model as a generalization to the static features which directly can be derived, e.g. as moments from the density. Similarity between each pair of time series is quantified by the distance between their respective models. Classification is performed on the obtained distance matrix. In the numerical experiments, we use data from the OGLE (Optical Gravitational Lensing Experiment) and ASAS (All Sky Automated Survey) surveys and demonstrate that the proposed representation performs up to par with the best currently used feature-based approaches. The density representation preserves all static information present in the observational data, in contrast to a less-complete description by features. The density representation is an upper boundary in terms of information made available to the classifier. Consequently, the predictive power of the proposed classification depends on the choice of similarity measure and classifier, only. Due to its principled nature, we advocate that this new approach of representing time series has potential in tasks beyond classification, e.g. unsupervised learning.

  10. Project Physics Programmed Instruction, Vectors 1.

    ERIC Educational Resources Information Center

    Harvard Univ., Cambridge, MA. Harvard Project Physics.

    This programmed instruction booklet is an interim version of instructional materials being developed by Harvard Project Physics. It is the first in a series of three booklets on vectors and covers the definitions of vectors and scalars, drawing vector quantities to scale, and negative vectors. For others in this series, see SE 015 550 and SE 015…

  11. Project Physics Programmed Instruction, Vectors 2.

    ERIC Educational Resources Information Center

    Harvard Univ., Cambridge, MA. Harvard Project Physics.

    This is the second of a series of three programmed instruction booklets on vectors developed by Harvard Project Physics. It covers adding two or more vectors together, and finding a third vector that could be added to two given vectors to make a sum of zero. For other booklets in this series, see SE 015 549 and SE 015 551. (DT)

  12. Applicability of initial optimal maternal and fetal electrocardiogram combination vectors to subsequent recordings

    NASA Astrophysics Data System (ADS)

    Yan, Hua-Wen; Huang, Xiao-Lin; Zhao, Ying; Si, Jun-Feng; Liu, Tie-Bing; Liu, Hong-Xing

    2014-11-01

    A series of experiments are conducted to confirm whether the vectors calculated for an early section of a continuous non-invasive fetal electrocardiogram (fECG) recording can be directly applied to subsequent sections in order to reduce the computation required for real-time monitoring. Our results suggest that it is generally feasible to apply the initial optimal maternal and fetal ECG combination vectors to extract the fECG and maternal ECG in subsequent recorded sections.

  13. A new series of yeast shuttle vectors for the recovery and identification of multiple plasmids from Saccharomyces cerevisiae.

    PubMed

    Frazer, LilyAnn Novak; O'Keefe, Raymond T

    2007-09-01

    The availability of Saccharomyces cerevisiae yeast strains with multiple auxotrophic markers allows the stable introduction and selection of more than one yeast shuttle vector containing marker genes that complement the auxotrophic markers. In certain experimental situations there is a need to recover more than one shuttle vector from yeast. To facilitate the recovery and identification of multiple plasmids from S. cerevisiae, we have constructed a series of plasmids based on the pRS series of yeast shuttle vectors. Bacterial antibiotic resistance genes to chloramphenicol, kanamycin and zeocin have been combined with the yeast centromere sequence (CEN6), the autonomously replicating sequence (ARSH4) and one of the four yeast selectable marker genes (HIS3, TRP1, LEU2 or URA3) from the pRS series of vectors. The 12 plasmids produced differ in antibiotic resistance and yeast marker gene within the backbone of the multipurpose plasmid pBluescript II. The newly constructed vectors show similar mitotic stability to the original pRS vectors. In combination with the ampicillin-resistant pRS series of yeast shuttle vectors, these plasmids now allow the recovery and identification in bacteria of up to four different vectors from S. cerevisiae. Copyright (c) 2007 John Wiley & Sons, Ltd.

  14. Project Physics Programmed Instruction, Vectors 3.

    ERIC Educational Resources Information Center

    Harvard Univ., Cambridge, MA. Harvard Project Physics.

    This is the third of a series of three programmed instruction booklets on vectors developed by Harvard Project Physics. Separating vectors into components and obtaining a vector from its components are the topics covered. For other booklets in this series, see SE 015 549 and SE 015 550. (DT)

  15. On the relationship between health, education and economic growth: Time series evidence from Malaysia

    NASA Astrophysics Data System (ADS)

    Khan, Habib Nawaz; Razali, Radzuan B.; Shafei, Afza Bt.

    2016-11-01

    The objectives of this paper are two-fold: first, to empirically investigate the effects of an enlarged number of healthy and well-educated people on economic growth in Malaysia within the Endogenous Growth Model framework; second, to examine the causal links between education, health and economic growth using annual time series data from 1981 to 2014 for Malaysia. The data series were checked for their time series properties using ADF and KPSS tests. The long-run co-integration relationship was investigated with the vector autoregressive (VAR) method. Short- and long-run dynamic relationships were investigated with a vector error correction model (VECM). Causality analysis was performed through the Engle-Granger technique. The study results showed a long-run co-integration relationship and significantly positive effects of education and health on economic growth in Malaysia. The reported results also confirmed a feedback hypothesis between the variables in the case of Malaysia. The results underline the policy relevance of human capital (health and education) to the growth process of Malaysia. Thus, it is suggested that policy makers focus on the education and health sectors for sustainable economic growth in Malaysia.

  16. A data mining framework for time series estimation.

    PubMed

    Hu, Xiao; Xu, Peng; Wu, Shaozhi; Asgari, Shadnaz; Bergsneider, Marvin

    2010-04-01

    Time series estimation techniques are usually employed in biomedical research to derive variables less accessible from a set of related and more accessible variables. These techniques are traditionally built from systems modeling approaches including simulation, blind deconvolution, and state estimation. In this work, we define target time series (TTS) and its related time series (RTS) as the output and input of a time series estimation process, respectively. We then propose a novel data mining framework for time series estimation when TTS and RTS represent different sets of observed variables from the same dynamic system. This is made possible by mining a database of instances of TTS, its simultaneously recorded RTS, and the input/output dynamic models between them. The key mining strategy is to formulate a mapping function for each TTS-RTS pair in the database that translates a feature vector extracted from RTS to the dissimilarity between true TTS and its estimate from the dynamic model associated with the same TTS-RTS pair. At run time, a feature vector is extracted from an inquiry RTS and supplied to the mapping function associated with each TTS-RTS pair to calculate a dissimilarity measure. An optimal TTS-RTS pair is then selected by analyzing these dissimilarity measures. The associated input/output model of the selected TTS-RTS pair is then used to simulate the TTS given the inquiry RTS as an input. An exemplary implementation was built to address a biomedical problem of noninvasive intracranial pressure assessment. The performance of the proposed method was superior to that of a simple training-free approach of finding the optimal TTS-RTS pair by a conventional similarity-based search on RTS features. 2009 Elsevier Inc. All rights reserved.

  17. Measuring and Modeling Shared Visual Attention

    NASA Technical Reports Server (NTRS)

    Mulligan, Jeffrey B.; Gontar, Patrick

    2016-01-01

    Multi-person teams are sometimes responsible for critical tasks, such as flying an airliner. Here we present a method using gaze tracking data to assess shared visual attention, a term we use to describe the situation where team members are attending to a common set of elements in the environment. Gaze data are quantized with respect to a set of N areas of interest (AOIs); these are then used to construct a time series of N dimensional vectors, with each vector component representing one of the AOIs, all set to 0 except for the component corresponding to the currently fixated AOI, which is set to 1. The resulting sequence of vectors can be averaged in time, with the result that each vector component represents the proportion of time that the corresponding AOI was fixated within the given time interval. We present two methods for comparing sequences of this sort, one based on computing the time-varying correlation of the averaged vectors, and another based on a chi-square test testing the hypothesis that the observed gaze proportions are drawn from identical probability distributions. We have evaluated the method using synthetic data sets, in which the behavior was modeled as a series of "activities," each of which was modeled as a first-order Markov process. By tabulating distributions for pairs of identical and disparate activities, we are able to perform a receiver operating characteristic (ROC) analysis, allowing us to choose appropriate criteria and estimate error rates. We have applied the methods to data from airline crews, collected in a high-fidelity flight simulator (Haslbeck, Gontar & Schubert, 2014). We conclude by considering the problem of automatic (blind) discovery of activities, using methods developed for text analysis.
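
    A minimal sketch of the two comparisons described, a windowed correlation of AOI-proportion vectors and a chi-square test on fixation counts, using synthetic AOI index sequences; the number of AOIs, window length and the +1 count smoothing are assumptions of the example, not values from the paper.

    ```python
    import numpy as np
    from scipy.stats import chi2_contingency

    N_AOI, WIN = 6, 200                                  # number of AOIs; window length in samples
    rng = np.random.default_rng(4)
    gaze_a = rng.integers(0, N_AOI, size=5000)           # stand-in AOI index sequences
    gaze_b = rng.integers(0, N_AOI, size=5000)

    def aoi_proportions(gaze, start):
        """Time-average of one-hot AOI vectors over a window = fixation-time proportions."""
        window = gaze[start:start + WIN]
        return np.bincount(window, minlength=N_AOI) / len(window)

    starts = range(0, len(gaze_a) - WIN, WIN)

    # Comparison 1: time-varying correlation of the two proportion vectors.
    corr = [np.corrcoef(aoi_proportions(gaze_a, s), aoi_proportions(gaze_b, s))[0, 1]
            for s in starts]

    # Comparison 2: chi-square test that the two count distributions are identical
    # (+1 on every cell avoids zero-count columns in this toy example).
    pvals = [chi2_contingency(np.vstack([np.bincount(gaze_a[s:s + WIN], minlength=N_AOI) + 1,
                                         np.bincount(gaze_b[s:s + WIN], minlength=N_AOI) + 1]))[1]
             for s in starts]
    ```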

  18. Measuring and Modeling Shared Visual Attention

    NASA Technical Reports Server (NTRS)

    Mulligan, Jeffrey B.

    2016-01-01

    Multi-person teams are sometimes responsible for critical tasks, such as flying an airliner. Here we present a method using gaze tracking data to assess shared visual attention, a term we use to describe the situation where team members are attending to a common set of elements in the environment. Gaze data are quantized with respect to a set of N areas of interest (AOIs); these are then used to construct a time series of N dimensional vectors, with each vector component representing one of the AOIs, all set to 0 except for the component corresponding to the currently fixated AOI, which is set to 1. The resulting sequence of vectors can be averaged in time, with the result that each vector component represents the proportion of time that the corresponding AOI was fixated within the given time interval. We present two methods for comparing sequences of this sort, one based on computing the time-varying correlation of the averaged vectors, and another based on a chi-square test testing the hypothesis that the observed gaze proportions are drawn from identical probability distributions. We have evaluated the method using synthetic data sets, in which the behavior was modeled as a series of "activities," each of which was modeled as a first-order Markov process. By tabulating distributions for pairs of identical and disparate activities, we are able to perform a receiver operating characteristic (ROC) analysis, allowing us to choose appropriate criteria and estimate error rates. We have applied the methods to data from airline crews, collected in a high-fidelity flight simulator (Haslbeck, Gontar & Schubert, 2014). We conclude by considering the problem of automatic (blind) discovery of activities, using methods developed for text analysis.

  19. Hydrological time series modeling: A comparison between adaptive neuro-fuzzy, neural network and autoregressive techniques

    NASA Astrophysics Data System (ADS)

    Lohani, A. K.; Kumar, Rakesh; Singh, R. D.

    2012-06-01

    Time series modeling is necessary for the planning and management of reservoirs. More recently, soft computing techniques have been used in hydrological modeling and forecasting. In this study, the potential of artificial neural networks and a neuro-fuzzy system in monthly reservoir inflow forecasting is examined by developing and comparing monthly reservoir inflow prediction models based on autoregressive (AR), artificial neural network (ANN) and adaptive neural-based fuzzy inference system (ANFIS) approaches. To account for the effect of monthly periodicity in the flow data, cyclic terms are also included in the ANN and ANFIS models. Working with time series flow data of the Sutlej River at Bhakra Dam, India, several ANN and adaptive neuro-fuzzy models are trained with different input vectors. To evaluate the performance of the selected ANN and adaptive neural fuzzy inference system (ANFIS) models, a comparison is made with the autoregressive (AR) models. The ANFIS model trained with an input data vector including previous inflows and cyclic terms of monthly periodicity shows a significant improvement in forecast accuracy in comparison with the ANFIS models trained with input vectors considering only previous inflows. In all cases ANFIS gives more accurate forecasts than the AR and ANN models. The proposed ANFIS model coupled with the cyclic terms is shown to provide a better representation of monthly inflow forecasting for the planning and operation of the reservoir.

  20. Output-only modal parameter estimator of linear time-varying structural systems based on vector TAR model and least squares support vector machine

    NASA Astrophysics Data System (ADS)

    Zhou, Si-Da; Ma, Yuan-Chen; Liu, Li; Kang, Jie; Ma, Zhi-Sai; Yu, Lei

    2018-01-01

    Identification of time-varying modal parameters contributes to the structural health monitoring, fault detection, vibration control, etc., of operational time-varying structural systems. However, it is a challenging task because no more information is available for the identification of time-varying systems than for time-invariant systems. This paper presents a vector time-dependent autoregressive model and least squares support vector machine based modal parameter estimator for linear time-varying structural systems in the case of output-only measurements. To reduce the computational cost, a Wendland's compactly supported radial basis function is used to achieve sparsity of the Gram matrix. A Gamma-test-based non-parametric approach to selecting the regularization factor is adapted for the proposed estimator to replace the time-consuming n-fold cross validation. A series of numerical examples illustrates the advantages of the proposed modal parameter estimator in suppressing overestimation and in handling short data records. A laboratory experiment has further validated the proposed estimator.

  1. Drunk driving detection based on classification of multivariate time series.

    PubMed

    Li, Zhenlong; Jin, Xue; Zhao, Xiaohua

    2015-09-01

    This paper addresses the problem of detecting drunk driving based on classification of multivariate time series. First, driving performance measures were collected from a test in a driving simulator located in the Traffic Research Center, Beijing University of Technology. Lateral position and steering angle were used to detect drunk driving. Second, multivariate time series analysis was performed to extract the features. A piecewise linear representation was used to represent multivariate time series. A bottom-up algorithm was then employed to separate multivariate time series. The slope and time interval of each segment were extracted as the features for classification. Third, a support vector machine classifier was used to classify driver's state into two classes (normal or drunk) according to the extracted features. The proposed approach achieved an accuracy of 80.0%. Drunk driving detection based on the analysis of multivariate time series is feasible and effective. The approach has implications for drunk driving detection. Copyright © 2015 Elsevier Ltd and National Safety Council. All rights reserved.

  2. TMV-Gate vectors: Gateway compatible tobacco mosaic virus based expression vectors for functional analysis of proteins

    PubMed Central

    Kagale, Sateesh; Uzuhashi, Shihomi; Wigness, Merek; Bender, Tricia; Yang, Wen; Borhan, M. Hossein; Rozwadowski, Kevin

    2012-01-01

    Plant viral expression vectors are advantageous for high-throughput functional characterization studies of genes due to their capability for rapid, high-level transient expression of proteins. We have constructed a series of tobacco mosaic virus (TMV) based vectors that are compatible with Gateway technology to enable rapid assembly of expression constructs and exploitation of ORFeome collections. In addition to the potential of producing recombinant protein at grams per kilogram FW of leaf tissue, these vectors facilitate either N- or C-terminal fusions to a broad series of epitope tag(s) and fluorescent proteins. We demonstrate the utility of these vectors in affinity purification, immunodetection and subcellular localisation studies. We also apply the vectors to characterize protein-protein interactions and demonstrate their utility in screening plant pathogen effectors. Given its broad utility in defining protein properties, this vector series will serve as a useful resource to expedite gene characterization efforts. PMID:23166857

  3. A Wavelet Support Vector Machine Combination Model for Singapore Tourist Arrival to Malaysia

    NASA Astrophysics Data System (ADS)

    Rafidah, A.; Shabri, Ani; Nurulhuda, A.; Suhaila, Y.

    2017-08-01

    In this study, a wavelet support vector machine (WSVM) model is proposed and applied to the prediction of the monthly Singapore tourist arrival time series. The WSVM model is a combination of wavelet analysis and a support vector machine (SVM). The study has two parts: in the first part we compare kernel functions, and in the second part we compare the developed model with the single SVM model. The results showed that the linear kernel performs better than the RBF kernel, and that the WSVM outperforms the single SVM model in forecasting monthly Singapore tourist arrivals to Malaysia.

  4. An Environmental Data Set for Vector-Borne Disease Modeling and Epidemiology

    PubMed Central

    Chabot-Couture, Guillaume; Nigmatulina, Karima; Eckhoff, Philip

    2014-01-01

    Understanding the environmental conditions of disease transmission is important in the study of vector-borne diseases. Low- and middle-income countries bear a significant portion of the disease burden; but data about weather conditions in those countries can be sparse and difficult to reconstruct. Here, we describe methods to assemble high-resolution gridded time series data sets of air temperature, relative humidity, land temperature, and rainfall for such areas; and we test these methods on the island of Madagascar. Air temperature and relative humidity were constructed using statistical interpolation of weather station measurements; the resulting median 95th percentile absolute errors were 2.75°C and 16.6%. Missing pixels from the MODIS11 remote sensing land temperature product were estimated using Fourier decomposition and time-series analysis; thus providing an alternative to the 8-day and 30-day aggregated products. The RFE 2.0 remote sensing rainfall estimator was characterized by comparing it with multiple interpolated rainfall products, and we observed significant differences in temporal and spatial heterogeneity relevant to vector-borne disease modeling. PMID:24755954

  5. A new method for reconstruction of solar irradiance

    NASA Astrophysics Data System (ADS)

    Privalsky, Victor

    2018-07-01

    The purpose of this research is to show how time series should be reconstructed using an example with the data on total solar irradiation (TSI) of the Earth and on sunspot numbers (SSN) since 1749. The traditional approach through regression equation(s) is designed for time-invariant vectors of random variables and is not applicable to time series, which present random functions of time. The autoregressive reconstruction (ARR) method suggested here requires fitting a multivariate stochastic difference equation to the target/proxy time series. The reconstruction is done through the scalar equation for the target time series with the white noise term excluded. The time series approach is shown to provide a better reconstruction of TSI than the correlation/regression method. A reconstruction criterion is introduced which allows one to define in advance the achievable level of success in the reconstruction. The conclusion is that time series, including the total solar irradiance, cannot be reconstructed properly if the data are not treated as sample records of random processes and analyzed in both time and frequency domains.
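
    A rough sketch of the ARR idea, fitting a bivariate autoregressive model to (target, proxy) and rebuilding the target from the fitted equation with the noise term dropped, using statsmodels; the model order is arbitrary and the calibration/reconstruction split of the real study is omitted for brevity.

    ```python
    import numpy as np
    from statsmodels.tsa.api import VAR

    def arr_reconstruct(target, proxy, order=2):
        """Rough sketch: fit a bivariate AR model to (target, proxy), then rebuild the
        target from the fitted difference equation with the white-noise term dropped."""
        data = np.column_stack([target, proxy])
        fit = VAR(data).fit(order)

        recon = list(target[:order])                     # seed with the first `order` values
        for t in range(order, len(target)):
            lagged = np.column_stack([recon[t - order:t], proxy[t - order:t]])
            recon.append(fit.forecast(lagged, 1)[0, 0])  # deterministic (noise-free) part only
        return np.array(recon)

    # Usage (hypothetical equal-length arrays): tsi_hat = arr_reconstruct(tsi, ssn)
    ```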

  6. Application of the Allan Variance to Time Series Analysis in Astrometry and Geodesy: A Review.

    PubMed

    Malkin, Zinovy

    2016-04-01

    The Allan variance (AVAR) was introduced 50 years ago as a statistical tool for assessing the frequency standards deviations. For the past decades, AVAR has increasingly been used in geodesy and astrometry to assess the noise characteristics in geodetic and astrometric time series. A specific feature of astrometric and geodetic measurements, as compared with clock measurements, is that they are generally associated with uncertainties; thus, an appropriate weighting should be applied during data analysis. In addition, some physically connected scalar time series naturally form series of multidimensional vectors. For example, three station coordinates time series X, Y, and Z can be combined to analyze 3-D station position variations. The classical AVAR is not intended for processing unevenly weighted and/or multidimensional data. Therefore, AVAR modifications, namely weighted AVAR (WAVAR), multidimensional AVAR (MAVAR), and weighted multidimensional AVAR (WMAVAR), were introduced to overcome these deficiencies. In this paper, a brief review is given of the experience of using AVAR and its modifications in processing astrogeodetic time series.
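
    For reference, a sketch of the classical (unweighted, scalar, non-overlapping) Allan variance on a synthetic series; the weighted and multidimensional modifications reviewed in the paper are not shown.

    ```python
    import numpy as np

    def allan_variance(y, m):
        """Classical (unweighted, non-overlapping) Allan variance at averaging factor m."""
        n_bins = len(y) // m
        means = y[:n_bins * m].reshape(n_bins, m).mean(axis=1)   # bin averages
        return 0.5 * np.mean(np.diff(means) ** 2)

    # AVAR curve over a range of averaging factors, on a white-noise stand-in series.
    y = np.random.default_rng(5).normal(size=4096)
    avar = {m: allan_variance(y, m) for m in (1, 2, 4, 8, 16, 32, 64)}
    ```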

  7. A Space Affine Matching Approach to fMRI Time Series Analysis.

    PubMed

    Chen, Liang; Zhang, Weishi; Liu, Hongbo; Feng, Shigang; Chen, C L Philip; Wang, Huili

    2016-07-01

    For fMRI time series analysis, an important challenge is to overcome the potential delay between the hemodynamic response signal and the cognitive stimuli signal, namely the same frequency but different phase (SFDP) problem. In this paper, a novel space affine matching feature is presented by introducing time domain and frequency domain features. The time domain feature is used to discern different stimuli, while the frequency domain feature is used to eliminate the delay. We then propose a space affine matching (SAM) algorithm to match fMRI time series by our affine feature, in which a normal vector is estimated using gradient descent to find the optimal time series matching. The experimental results illustrate that the SAM algorithm is insensitive to the delay between the hemodynamic response signal and the cognitive stimuli signal. Our approach significantly outperforms the GLM method when such a delay exists. The approach can help us solve the SFDP problem in fMRI time series matching and is thus of great promise for revealing brain dynamics.

  8. The incorrect usage of singular spectral analysis and discrete wavelet transform in hybrid models to predict hydrological time series

    NASA Astrophysics Data System (ADS)

    Du, Kongchang; Zhao, Ying; Lei, Jiaqiang

    2017-09-01

    In hydrological time series prediction, singular spectrum analysis (SSA) and discrete wavelet transform (DWT) are widely used as preprocessing techniques for artificial neural network (ANN) and support vector machine (SVM) predictors. These hybrid or ensemble models seem to largely reduce the prediction error. In the current literature, researchers apply these techniques to the whole observed time series and then obtain a set of reconstructed or decomposed time series as inputs to the ANN or SVM. However, through two comparative experiments and mathematical deduction, we found that this usage of SSA and DWT in building hybrid models is incorrect. Since SSA and DWT adopt 'future' values to perform the calculation, the series generated by SSA reconstruction or DWT decomposition contain information about 'future' values. These hybrid models therefore report misleadingly 'high' prediction performance and may cause large errors in practice.
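
    The sketch below illustrates the leakage being criticized, assuming PyWavelets as the DWT implementation: smoothing the whole series lets 'future' samples shape the value at time t, whereas transforming only the data observed up to t does not. Data and wavelet settings are arbitrary.

```python
# Illustration of the leakage discussed above (a sketch, not the paper's experiments):
# wavelet-smoothing the *whole* series lets "future" samples shape the value at time t,
# whereas transforming only the data observed up to t does not.
import numpy as np
import pywt

rng = np.random.default_rng(2)
flow = np.cumsum(rng.standard_normal(500))        # toy hydrological series
split = 400                                       # forecast origin / train-test split

def dwt_smooth(x, wavelet="db4", level=3):
    """Reconstruct x from its DWT approximation coefficients only (details zeroed)."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    coeffs[1:] = [np.zeros_like(c) for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

full = dwt_smooth(flow)                           # incorrect usage: decomposes past and future together
causal = np.array([dwt_smooth(flow[: t + 1])[-1] for t in range(split, len(flow))])

print("mean |full - causal| over the test period:", np.mean(np.abs(full[split:] - causal)))
```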

  9. Full Wave Analysis of RF Signal Attenuation in a Lossy Cave using a High Order Time Domain Vector Finite Element Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pingenot, J; Rieben, R; White, D

    2004-12-06

    We present a computational study of signal propagation and attenuation of a 200 MHz dipole antenna in a cave environment. The cave is modeled as straight and lossy, with random rough walls. To simulate a broad frequency band, the full wave Maxwell equations are solved directly in the time domain via a high order vector finite element discretization using the massively parallel CEM code EMSolve. The simulation is performed for a series of random meshes in order to generate statistical data for the propagation and attenuation properties of the cave environment. Results for the power spectral density and phase of the electric field vector components are presented and discussed.

  10. State space model approach for forecasting the use of electrical energy (a case study on: PT. PLN (Persero) district of Kroya)

    NASA Astrophysics Data System (ADS)

    Kurniati, Devi; Hoyyi, Abdul; Widiharih, Tatik

    2018-05-01

    Time series data are observations taken or measured at the same time interval. Time series analysis is used to analyze data while accounting for the effect of time; its purpose is to characterize the patterns in a data set and to predict values in future periods based on past data. One of the forecasting methods used for time series data is the state space model. This study discusses the modeling and forecasting of electric energy consumption using the state space model for univariate data. The modeling stage begins with optimal autoregressive (AR) order selection, followed by determination of the state vector through canonical correlation analysis, parameter estimation, and forecasting. The results show that a state space model of order 4 models electric energy consumption with a Mean Absolute Percentage Error (MAPE) of 3.655%, which places the model in the very good forecasting category.
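
    A simplified sketch of the backbone of such a workflow (AR order selection, out-of-sample forecasting and a MAPE score) is shown below using statsmodels on synthetic monthly data; it does not reproduce the canonical-correlation state-space step itself.

```python
# Simplified sketch of the workflow's backbone -- AR order selection, out-of-sample
# forecasting and a MAPE score -- not the canonical-correlation state-space step itself.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg, ar_select_order

rng = np.random.default_rng(3)
t = np.arange(120)
usage = 100 + 0.5 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.standard_normal(120)  # toy monthly demand

train, test = usage[:108], usage[108:]
order = ar_select_order(train, maxlag=8).ar_lags            # pick the AR order (cf. the paper's order 4)
model = AutoReg(train, lags=order).fit()
forecast = model.predict(start=len(train), end=len(usage) - 1)

mape = 100 * np.mean(np.abs((test - forecast) / test))
print("selected lags:", order, "MAPE: %.3f%%" % mape)
```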

  11. Multiscale limited penetrable horizontal visibility graph for analyzing nonlinear time series

    NASA Astrophysics Data System (ADS)

    Gao, Zhong-Ke; Cai, Qing; Yang, Yu-Xuan; Dang, Wei-Dong; Zhang, Shan-Shan

    2016-10-01

    The visibility graph has established itself as a powerful tool for analyzing time series. In this paper, we develop a novel multiscale limited penetrable horizontal visibility graph (MLPHVG). We use nonlinear time series from two typical complex systems, i.e., EEG signals and two-phase flow signals, to demonstrate the effectiveness of our method. Combining MLPHVG and support vector machine, we detect epileptic seizures from the EEG signals recorded from healthy subjects and epilepsy patients, and the classification accuracy is 100%. In addition, we derive MLPHVGs from oil-water two-phase flow signals and find that the average clustering coefficient at different scales allows faithfully identifying and characterizing three typical oil-water flow patterns. These findings render our MLPHVG method particularly useful for analyzing nonlinear time series from the perspective of multiscale network analysis.
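
    A compact sketch of a limited penetrable horizontal visibility graph at a single scale is given below (the multiscale variant applies the same construction to coarse-grained copies of the series); the penetration distance rho and all data are illustrative assumptions.

```python
# Sketch of a limited penetrable horizontal visibility graph (single scale); the
# multiscale variant applies this to coarse-grained copies of the series.
import numpy as np
import networkx as nx

def lphvg(x, rho=1):
    """Connect i<j if at most `rho` intermediate samples block horizontal visibility,
    i.e. at most rho values x[k] (i<k<j) satisfy x[k] >= min(x[i], x[j])."""
    g = nx.Graph()
    g.add_nodes_from(range(len(x)))
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            blockers = sum(x[k] >= min(x[i], x[j]) for k in range(i + 1, j))
            if blockers <= rho:
                g.add_edge(i, j)
    return g

def coarse_grain(x, s):
    """Non-overlapping mean coarse-graining used to build the multiscale versions."""
    n = len(x) // s
    return np.mean(np.reshape(x[: n * s], (n, s)), axis=1)

rng = np.random.default_rng(4)
signal = rng.standard_normal(300)
for s in (1, 2, 4):
    g = lphvg(coarse_grain(signal, s), rho=1)
    print("scale", s, "average clustering:", nx.average_clustering(g))
```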

  12. Analysis of algae growth mechanism and water bloom prediction under the effect of multi-affecting factor.

    PubMed

    Wang, Li; Wang, Xiaoyi; Jin, Xuebo; Xu, Jiping; Zhang, Huiyan; Yu, Jiabin; Sun, Qian; Gao, Chong; Wang, Lingbin

    2017-03-01

    Current methods describe the formation process of algae inaccurately and predict water blooms with low precision. In this paper, the chemical mechanism of algae growth is analyzed, and a correlation analysis of chlorophyll-a and algal density is conducted by chemical measurement. Taking into account the influence of multiple factors on algae growth and water blooms, a comprehensive prediction method combining multivariate time series analysis with intelligent models is put forward. First, through the process of photosynthesis, the main factors that affect the reproduction of the algae are analyzed. A compensation prediction method for multivariate time series analysis, based on a neural network and a Support Vector Machine, is put forward and combined with Kernel Principal Component Analysis to reduce the dimensionality of the factors influencing blooms. Then, a Genetic Algorithm is applied to improve the generalization ability of the BP network and the Least Squares Support Vector Machine. Experimental results show that this method better compensates the multivariate time series prediction model and is an effective way to improve the description accuracy of algae growth and the prediction precision of water blooms.
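
    As a rough sketch of the dimension-reduction-plus-regression part of such a pipeline, the example below feeds kernel PCA features of synthetic driver variables into a support vector regressor; it does not reproduce the paper's GA-tuned BP network / LS-SVM compensation scheme.

```python
# Sketch of the dimension-reduction + regression part of such a pipeline: kernel PCA
# compresses the multivariate driver series and an SVR predicts chlorophyll-a.
# Illustrative only -- the paper's GA-tuned BP network / LS-SVM ensemble is not reproduced.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(5)
n = 400
drivers = rng.standard_normal((n, 6))                              # toy factors: temperature, TN, TP, pH, DO, light
chla = drivers[:, :3].sum(axis=1) + 0.1 * rng.standard_normal(n)   # toy chlorophyll-a response

model = make_pipeline(StandardScaler(),
                      KernelPCA(n_components=3, kernel="rbf", gamma=0.1),
                      SVR(C=10.0, epsilon=0.05))
model.fit(drivers[:300], chla[:300])
pred = model.predict(drivers[300:])
print("test RMSE:", np.sqrt(np.mean((pred - chla[300:]) ** 2)))
```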

  13. Revision of Primary Series Maps

    USGS Publications Warehouse

    ,

    2000-01-01

    In 1992, the U.S. Geological Survey (USGS) completed a 50-year effort to provide primary series map coverage of the United States. Many of these maps now need to be updated to reflect the construction of new roads and highways and other changes that have taken place over time. The USGS has formulated a graphic revision plan to help keep the primary series maps current. Primary series maps include 1:20,000-scale quadrangles of Puerto Rico, 1:24,000- or 1:25,000-scale quadrangles of the conterminous United States, Hawaii, and U.S. Territories, and 1:63,360-scale quadrangles of Alaska. The revision of primary series maps from new collection sources is accomplished using a variety of processes. The raster revision process combines the scanned content of paper maps with raster updating technologies. The vector revision process involves the automated plotting of updated vector files. Traditional processes use analog stereoplotters and manual scribing instruments on specially coated map separates. The ability to select from or combine these processes increases the efficiency of the National Mapping Division map revision program.

  14. Characterization of the Dispersal of Non-Domiciliated Triatoma dimidiata through the Selection of Spatially Explicit Models

    PubMed Central

    Barbu, Corentin; Dumonteil, Eric; Gourbière, Sébastien

    2010-01-01

    Background Chagas disease is a major parasitic disease in Latin America, prevented in part by vector control programs that reduce domestic populations of triatomines. However, the design of control strategies adapted to non-domiciliated vectors, such as Triatoma dimidiata, remains a challenge because it requires an accurate description of their spatio-temporal distributions, and a proper understanding of the underlying dispersal processes. Methodology/Principal Findings We combined extensive spatio-temporal data sets describing house infestation dynamics by T. dimidiata within a village, and spatially explicit population dynamics models in a selection model approach. Several models were implemented to provide theoretical predictions under different hypotheses on the origin of the dispersers and their dispersal characteristics, which we compared with the spatio-temporal pattern of infestation observed in the field. The best models fitted the dynamics of infestation described by a one-year time-series, and also predicted with very good accuracy the infestation process observed during a second replicate one-year time-series. The parameterized models gave key insights into the dispersal of these vectors. i) About 55% of the triatomines infesting houses came from the peridomestic habitat, the rest corresponding to immigration from the sylvatic habitat, ii) dispersing triatomines were 5–15 times more attracted by houses than by the peridomestic area, and iii) the moving individuals spread on average over rather small distances, typically 40–60 m/15 days. Conclusion/Significance Since these dispersal characteristics are associated with much higher abundance of insects in the periphery of the village, we discuss the possibility that spatially targeted interventions allow for optimizing the efficacy of vector control activities within villages. Such optimization could prove very useful in the context of limited resources devoted to vector control. PMID:20689823

  15. Predicting Flavonoid UGT Regioselectivity

    PubMed Central

    Jackson, Rhydon; Knisley, Debra; McIntosh, Cecilia; Pfeiffer, Phillip

    2011-01-01

    Machine learning was applied to a challenging and biologically significant protein classification problem: the prediction of flavonoid UGT acceptor regioselectivity from primary sequence. Novel indices characterizing graphical models of residues were proposed and found to be widely distributed among existing amino acid indices and to cluster residues appropriately. UGT subsequences biochemically linked to regioselectivity were modeled as sets of index sequences. Several learning techniques incorporating these UGT models were compared with classifications based on standard sequence alignment scores. These techniques included an application of time series distance functions to protein classification. Time series distances defined on the index sequences were used in nearest neighbor and support vector machine classifiers. Additionally, Bayesian neural network classifiers were applied to the index sequences. The experiments identified improvements over the nearest neighbor and support vector machine classifications relying on standard alignment similarity scores, as well as strong correlations between specific subsequences and regioselectivities. PMID:21747849

  16. Daily sea level prediction at Chiayi coast, Taiwan using extreme learning machine and relevance vector machine

    NASA Astrophysics Data System (ADS)

    Imani, Moslem; Kao, Huan-Chin; Lan, Wen-Hau; Kuo, Chung-Yen

    2018-02-01

    The analysis and the prediction of sea level fluctuations are core requirements of marine meteorology and operational oceanography. Estimates of sea level with hours-to-days warning times are especially important for low-lying regions and coastal zone management. The primary purpose of this study is to examine the applicability and capability of extreme learning machine (ELM) and relevance vector machine (RVM) models for predicting sea level variations and compare their performances with powerful machine learning methods, namely, support vector machine (SVM) and radial basis function (RBF) models. The input dataset from the period of January 2004 to May 2011 used in the study was obtained from the Dongshi tide gauge station in Chiayi, Taiwan. Results showed that the ELM and RVM models outperformed the other methods. The performance of the RVM approach was superior in predicting the daily sea level time series given the minimum root mean square error of 34.73 mm and the maximum determination coefficient of 0.93 (R2) during the testing periods. Furthermore, the obtained results were in close agreement with the original tide-gauge data, which indicates that RVM approach is a promising alternative method for time series prediction and could be successfully used for daily sea level forecasts.

  17. New Method for Solving Inductive Electric Fields in the Ionosphere

    NASA Astrophysics Data System (ADS)

    Vanhamäki, H.

    2005-12-01

    We present a new method for calculating inductive electric fields in the ionosphere. It is well established that on large scales the ionospheric electric field is a potential field. This is understandable, since the temporal variations of large scale current systems are generally quite slow, on timescales of several minutes, so inductive effects should be small. However, studies of Alfven wave reflection have indicated that in some situations inductive phenomena could well play a significant role in the reflection process, and thus modify the nature of ionosphere-magnetosphere coupling. The inputs to our calculation method are the time series of the potential part of the ionospheric electric field together with the Hall and Pedersen conductances. The output is the time series of the induced rotational part of the ionospheric electric field. The calculation method works in the time domain and can be used with non-uniform, time-dependent conductances. In addition, no particular symmetry requirements are imposed on the input potential electric field. The presented method makes use of special non-local vector basis functions called Cartesian Elementary Current Systems (CECS). This vector basis offers a convenient way of representing the curl-free and divergence-free parts of 2-dimensional vector fields and makes it possible to solve the induction problem using simple linear algebra. The new calculation method is validated by comparing it with previously published results for Alfven wave reflection from a uniformly conducting ionosphere.

  18. Electrocardiogram ST-Segment Morphology Delineation Method Using Orthogonal Transformations

    PubMed Central

    2016-01-01

    Differentiation between ischaemic and non-ischaemic transient ST segment events of long term ambulatory electrocardiograms is a persisting weakness in present ischaemia detection systems. Traditional ST segment level measuring is not a sufficiently precise technique due to the single point of measurement and severe noise which is often present. We developed a robust noise-resistant orthogonal-transformation based delineation method, which allows tracing the shape of transient ST segment morphology changes from the entire ST segment in terms of diagnostic and morphologic feature-vector time series, and also allows further analysis. For these purposes, we developed a new Legendre Polynomials based Transformation (LPT) of the ST segment. Its basis functions have similar shapes to typical transient changes of ST segment morphology categories during myocardial ischaemia (level, slope and scooping), thus providing direct insight into the types of time domain morphology changes through the LPT feature-vector space. We also generated new Karhunen-Loève Transformation (KLT) ST segment basis functions using a robust covariance matrix constructed from the ST segment pattern vectors derived from the Long Term ST Database (LTST DB). As for the delineation of significant transient ischaemic and non-ischaemic ST segment episodes, we present a study on the representation of transient ST segment morphology categories, and an evaluation study on the classification power of the KLT- and LPT-based feature vectors to classify between ischaemic and non-ischaemic ST segment episodes of the LTST DB. Classification accuracy using the KLT and LPT feature vectors was 90% and 82%, respectively, when using the k-Nearest Neighbors (k = 3) classifier and 10-fold cross-validation. New sets of feature-vector time series for both transformations were derived for the records of the LTST DB, which is freely available on the PhysioNet website, and were contributed to the LTST DB. The KLT and LPT present new possibilities for human-expert diagnostics and for automated ischaemia detection. PMID:26863140
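
    The sketch below illustrates the Legendre-polynomial feature idea on synthetic ST-segment windows: each window is projected onto a low-order Legendre basis, and the leading coefficients track level, slope and curvature ("scooping")-like morphology. It is illustrative only, not the LTST DB processing code.

```python
# Sketch of the Legendre-polynomial feature idea: project each ST-segment window onto a
# low-order Legendre basis; the leading coefficients track level, slope and curvature.
import numpy as np
from numpy.polynomial import legendre

def st_segment_features(segment, degree=4):
    """Return Legendre coefficients of the segment sampled on [-1, 1]."""
    x = np.linspace(-1.0, 1.0, len(segment))
    return legendre.legfit(x, segment, degree)

rng = np.random.default_rng(6)
t = np.linspace(-1, 1, 80)
flat = 0.05 * rng.standard_normal(80)                                          # near-isoelectric segment
depressed = -0.2 + 0.1 * t + 0.15 * t ** 2 + 0.05 * rng.standard_normal(80)    # level + slope + scooping

print("flat         :", np.round(st_segment_features(flat), 3)[:3])
print("ischemic-like:", np.round(st_segment_features(depressed), 3)[:3])
```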

  19. Equivalent Dynamic Models.

    PubMed

    Molenaar, Peter C M

    2017-01-01

    Equivalences of two classes of dynamic models for weakly stationary multivariate time series are discussed: dynamic factor models and autoregressive models. It is shown that exploratory dynamic factor models can be rotated, yielding an infinite set of equivalent solutions for any observed series. It also is shown that dynamic factor models with lagged factor loadings are not equivalent to the currently popular state-space models, and that restriction of attention to the latter type of models may yield invalid results. The known equivalent vector autoregressive model types, standard and structural, are given a new interpretation in which they are conceived of as the extremes of an innovative class of hybrid vector autoregressive models. It is shown that consideration of hybrid models solves many problems, in particular with Granger causality testing.

  20. Conversion of the magnetic field measured in three components on the magnetic sensor body's random coordinate system into three components on geographical coordinate system through quaternion rotation.

    NASA Astrophysics Data System (ADS)

    LIM, M.; PARK, Y.; Jung, H.; SHIN, Y.; Rim, H.; PARK, C.

    2017-12-01

    Measuring all components of a physical property such as the magnetic field is more useful for subsequent interpretation and application than measuring its magnitude alone. To convert a property measured in three components on an arbitrary coordinate system (for example, a moving magnetic sensor body's coordinate system) into three components on a fixed coordinate system (for example, the geographical coordinate system), e.g. by rotations of the coordinate system through Euler angles, we need a time series of the sensor body's attitude, which could be acquired by an INS-GNSS system whose axes are installed coincident with those of the sensor body. But if we want to install magnetic sensors in an array on the sea floor without any attitude acquisition facility and to monitor the variation of the magnetic field in time, we also need a way to estimate the relation between the geographical coordinate system and each sensor body's coordinate system by comparing only the vectors measured on both coordinate systems, on the assumption that the direction of the measured magnetic field is the same in both. At least three approaches are available for that estimation. The first is to calculate the three Euler angles phi, theta, psi from the equation Vgeograph = Rx(phi) Ry(theta) Rz(psi) Vrandom, where Vgeograph is the vector on the geographical coordinate system, and Rx(phi) is the rotation matrix around the x axis by the angle phi, and so on. The second is to calculate the differences in inclination and declination between the two vectors on a spherical coordinate system. The third, used in this study, is to calculate the angle of rotation along a great circle around a rotation axis, together with the direction of that axis. We installed no. 1 and no. 2 FVM-400 fluxgate magnetometers in an array near Cheongyang Geomagnetic Observatory (IAGA code CYG) and acquired time series of magnetic fields at CYG and at the two magnetometers. Once the angle of rotation and the direction of the rotation axis were estimated for each pair (CYG with no. 1, and CYG with no. 2), we rotated the measured time series of vectors using quaternion rotation to obtain three time series of magnetic fields, all on the geographical coordinate system, which were used to trace moving magnetic bodies over time in that area.
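
    The sketch below illustrates the third approach on synthetic data: estimate the single rotation that maps sensor-frame field vectors onto reference-frame vectors, then apply it to the whole time series. SciPy's Rotation class (quaternion-based internally) is assumed; the field values and noise levels are made up.

```python
# Sketch of the third approach described above: estimate the rotation that maps sensor-frame
# field vectors onto reference-frame (observatory) vectors, then apply it to the whole
# time series.  Data are synthetic; SciPy's Rotation is quaternion-based internally.
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(7)
axis = np.array([0.3, 0.5, 0.81])
axis /= np.linalg.norm(axis)
true_rot = Rotation.from_rotvec(np.deg2rad(25) * axis)       # unknown sensor attitude

reference = np.array([30000.0, -2000.0, 40000.0]) + 500.0 * rng.standard_normal((200, 3))  # nT, geographic frame
sensor = true_rot.inv().apply(reference) + 5.0 * rng.standard_normal((200, 3))             # noisy sensor frame

# Estimate the rotation from a calibration window, assuming both frames see the same field direction.
est_rot, _ = Rotation.align_vectors(reference[:50], sensor[:50])
print("estimated rotation angle (deg):", np.rad2deg(est_rot.magnitude()))

# Rotate the full sensor-frame time series into the geographic frame.
rotated = est_rot.apply(sensor)
print("residual RMS (nT):", np.sqrt(np.mean((rotated - reference) ** 2)))
```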

  1. Chaos and Forecasting - Proceedings of the Royal Society Discussion Meeting

    NASA Astrophysics Data System (ADS)

    Tong, Howell

    1995-04-01

    The Table of Contents for the full book PDF is as follows: * Preface * Orthogonal Projection, Embedding Dimension and Sample Size in Chaotic Time Series from a Statistical Perspective * A Theory of Correlation Dimension for Stationary Time Series * On Prediction and Chaos in Stochastic Systems * Locally Optimized Prediction of Nonlinear Systems: Stochastic and Deterministic * A Poisson Distribution for the BDS Test Statistic for Independence in a Time Series * Chaos and Nonlinear Forecastability in Economics and Finance * Paradigm Change in Prediction * Predicting Nonuniform Chaotic Attractors in an Enzyme Reaction * Chaos in Geophysical Fluids * Chaotic Modulation of the Solar Cycle * Fractal Nature in Earthquake Phenomena and its Simple Models * Singular Vectors and the Predictability of Weather and Climate * Prediction as a Criterion for Classifying Natural Time Series * Measuring and Characterising Spatial Patterns, Dynamics and Chaos in Spatially-Extended Dynamical Systems and Ecologies * Non-Linear Forecasting and Chaos in Ecology and Epidemiology: Measles as a Case Study

  2. Indicator Development for Potential Presence of Schistosomiasis Japonicum's Vector in Lake and Marshland Regions- A Case Study of Poyang Lake, Jiangxi Province, P.R. China

    NASA Astrophysics Data System (ADS)

    Marie, Tiphanie; Yesou, Herve; Huber, Claire; De Fraipont, Paul; Uribe, Carlos; Lacaux, Jean-Pierre; Lafaye, Murielle; Lai, Xijun; Desnos, Yves-Louis

    2013-01-01

    Earth observation data and bibliography on environmental parameters were used for mapping Oncomelania hupensis distribution, the Schistosomiasis japonicum’s intermediate host snail, within Poyang Lake (Jiangxi Province, P.R. China). Areas suitable for the development of O. hupensis, the vector of schistosomiasis, were derived from submersion time parameters and vegetation community indicators. ENVISAT time series data acquired from 2000 to 2009 were used for submersion times mapping, and 5 Beijing-1 data acquired during the dry season between 2006 and 2008 were used to map suitable vegetation for vector development. Yearly maps obtained indicate four principally potential endemic areas: the Gan Delta, the bank of the Fu He River, the Dalianzi Hu sector and the Poyang Lake Nature Reserve. Monthly maps from December 2005 to December 2008 show the dynamic of potential O. hupensis presence areas.

  3. GRASS GIS: The first Open Source Temporal GIS

    NASA Astrophysics Data System (ADS)

    Gebbert, Sören; Leppelt, Thomas

    2015-04-01

    GRASS GIS is a full featured, general purpose Open Source geographic information system (GIS) with raster, 3D raster and vector processing support[1]. Recently, time was introduced as a new dimension that transformed GRASS GIS into the first Open Source temporal GIS with comprehensive spatio-temporal analysis, processing and visualization capabilities[2]. New spatio-temporal data types were introduced in GRASS GIS version 7, to manage raster, 3D raster and vector time series. These new data types are called space time datasets. They are designed to efficiently handle hundreds of thousands of time stamped raster, 3D raster and vector map layers of any size. Time stamps can be defined as time intervals or time instances in Gregorian calendar time or relative time. Space time datasets are simplifying the processing and analysis of large time series in GRASS GIS, since these new data types are used as input and output parameter in temporal modules. The handling of space time datasets is therefore equal to the handling of raster, 3D raster and vector map layers in GRASS GIS. A new dedicated Python library, the GRASS GIS Temporal Framework, was designed to implement the spatio-temporal data types and their management. The framework provides the functionality to efficiently handle hundreds of thousands of time stamped map layers and their spatio-temporal topological relations. The framework supports reasoning based on the temporal granularity of space time datasets as well as their temporal topology. It was designed in conjunction with the PyGRASS [3] library to support parallel processing of large datasets, that has a long tradition in GRASS GIS [4,5]. We will present a subset of more than 40 temporal modules that were implemented based on the GRASS GIS Temporal Framework, PyGRASS and the GRASS GIS Python scripting library. These modules provide a comprehensive temporal GIS tool set. The functionality range from space time dataset and time stamped map layer management over temporal aggregation, temporal accumulation, spatio-temporal statistics, spatio-temporal sampling, temporal algebra, temporal topology analysis, time series animation and temporal topology visualization to time series import and export capabilities with support for NetCDF and VTK data formats. We will present several temporal modules that support parallel processing of raster and 3D raster time series. [1] GRASS GIS Open Source Approaches in Spatial Data Handling In Open Source Approaches in Spatial Data Handling, Vol. 2 (2008), pp. 171-199, doi:10.1007/978-3-540-74831-19 by M. Neteler, D. Beaudette, P. Cavallini, L. Lami, J. Cepicky edited by G. Brent Hall, Michael G. Leahy [2] Gebbert, S., Pebesma, E., 2014. A temporal GIS for field based environmental modeling. Environ. Model. Softw. 53, 1-12. [3] Zambelli, P., Gebbert, S., Ciolli, M., 2013. Pygrass: An Object Oriented Python Application Programming Interface (API) for Geographic Resources Analysis Support System (GRASS) Geographic Information System (GIS). ISPRS Intl Journal of Geo-Information 2, 201-219. [4] Löwe, P., Klump, J., Thaler, J. (2012): The FOSS GIS Workbench on the GFZ Load Sharing Facility compute cluster, (Geophysical Research Abstracts Vol. 14, EGU2012-4491, 2012), General Assembly European Geosciences Union (Vienna, Austria 2012). [5] Akhter, S., Aida, K., Chemin, Y., 2010. "GRASS GIS on High Performance Computing with MPI, OpenMP and Ninf-G Programming Framework". ISPRS Conference, Kyoto, 9-12 August 2010

  4. Validation of a Monte Carlo Simulation of Binary Time Series.

    DTIC Science & Technology

    1981-09-18

    the probability distribution corresponding to the population from which the n sample vectors are generated. Simple unbiased estimators were chosen for ... is generated from the sample of such vectors produced by several independent replications of the Monte Carlo simulation. Then the validity of the ...

  5. Pulse Code Modulation (PCM) encoder handbook for Aydin Vector MMP-900 series system

    NASA Technical Reports Server (NTRS)

    Raphael, David

    1995-01-01

    This handbook describes the hardware and software properties of a time division multiplex system. This system is used to sample analog and digital data. The data is then merged with frame synchronization information to produce a serial pulse coded modulation (PCM) bit stream. Information in this handbook is required by users to design compatible interfaces and assure effective utilization of this encoder system. Aydin Vector provides all of the components for these systems to Goddard Space Flight Center/Wallops Flight Facility.

  6. Use of Mapping and Spatial and Space-Time Modeling Approaches in Operational Control of Aedes aegypti and Dengue

    PubMed Central

    Eisen, Lars; Lozano-Fuentes, Saul

    2009-01-01

    The aims of this review paper are to 1) provide an overview of how mapping and spatial and space-time modeling approaches have been used to date to visualize and analyze mosquito vector and epidemiologic data for dengue; and 2) discuss the potential for these approaches to be included as routine activities in operational vector and dengue control programs. Geographical information system (GIS) software is becoming more user-friendly and is now complemented by free mapping software that provides access to satellite imagery and basic feature-making tools and has the capacity to generate static maps as well as dynamic time-series maps. Our challenge is now to move beyond the research arena by transferring mapping and GIS technologies and spatial statistical analysis techniques in user-friendly packages to operational vector and dengue control programs. This will enable control programs to, for example, generate risk maps for exposure to dengue virus, develop Priority Area Classifications for vector control, and explore socioeconomic associations with dengue risk. PMID:19399163

  7. Learning with LOGO: Logo and Vectors.

    ERIC Educational Resources Information Center

    Lough, Tom; Tipps, Steve

    1986-01-01

    This is the first of a two-part series on the general concept of vector space. Provides tool procedures to allow investigation of vector properties, vector addition and subtraction, and X and Y components. Lists several sources of additional vector ideas. (JM)

  8. Coil-to-coil physiological noise correlations and their impact on fMRI time-series SNR

    PubMed Central

    Triantafyllou, C.; Polimeni, J. R.; Keil, B.; Wald, L. L.

    2017-01-01

    Purpose Physiological nuisance fluctuations (“physiological noise”) are a major contribution to the time-series Signal to Noise Ratio (tSNR) of functional imaging. While thermal noise correlations between array coil elements have a well-characterized effect on the image Signal to Noise Ratio (SNR0), the element-to-element covariance matrix of the time-series fluctuations has not yet been analyzed. We examine this effect with a goal of ultimately improving the combination of multichannel array data. Theory and Methods We extend the theoretical relationship between tSNR and SNR0 to include a time-series noise covariance matrix Ψt, distinct from the thermal noise covariance matrix Ψ0, and compare its structure to Ψ0 and the signal coupling matrix SSH formed from the signal intensity vectors S. Results Inclusion of the measured time-series noise covariance matrix into the model relating tSNR and SNR0 improves the fit of experimental multichannel data and is shown to be distinct from Ψ0 or SSH. Conclusion Time-series noise covariances in array coils are found to differ from Ψ0 and more surprisingly, from the signal coupling matrix SSH. Correct characterization of the time-series noise has implications for the analysis of time-series data and for improving the coil element combination process. PMID:26756964

  9. Understanding the Role of Deterrence in Counterterrorism Security

    DTIC Science & Technology

    2009-11-01

    30, No. 5, pp. 429–443. Enders, W., Sandler, T. (1993). "The Effectiveness of Anti-Terrorism Policies: Vector Autoregression Intervention Analysis." ... occasional paper series. RAND occasional papers may include an informed perspective on a timely policy issue, a discussion of new research ... United States safe? Are better means available for evaluating what may work or not and why? This series is designed to focus on a small set of ...

  10. dynGENIE3: dynamical GENIE3 for the inference of gene networks from time series expression data.

    PubMed

    Huynh-Thu, Vân Anh; Geurts, Pierre

    2018-02-21

    The elucidation of gene regulatory networks is one of the major challenges of systems biology. Measurements about genes that are exploited by network inference methods are typically available either in the form of steady-state expression vectors or time series expression data. In our previous work, we proposed the GENIE3 method that exploits variable importance scores derived from Random forests to identify the regulators of each target gene. This method provided state-of-the-art performance on several benchmark datasets, but it could however not specifically be applied to time series expression data. We propose here an adaptation of the GENIE3 method, called dynamical GENIE3 (dynGENIE3), for handling both time series and steady-state expression data. The proposed method is evaluated extensively on the artificial DREAM4 benchmarks and on three real time series expression datasets. Although dynGENIE3 does not systematically yield the best performance on each and every network, it is competitive with diverse methods from the literature, while preserving the main advantages of GENIE3 in terms of scalability.
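
    A GENIE3-style sketch is given below (a simplified variant, not the dynGENIE3 package itself): for each target gene, a random forest predicts the gene's change between consecutive time points from all other genes' expression, and the feature importances rank candidate regulators.

```python
# GENIE3-style sketch (a simplified variant, not the dynGENIE3 package): for each target
# gene a random forest predicts its change between consecutive time points from all other
# genes, and the forest's feature importances rank candidate regulators of that target.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(8)
n_time, n_genes = 60, 5
expr = rng.random((n_time, n_genes))
expr[1:, 2] += 0.8 * expr[:-1, 0]                            # gene 0 drives gene 2 with a one-step lag

scores = np.zeros((n_genes, n_genes))                        # scores[i, j]: evidence that gene i regulates gene j
for target in range(n_genes):
    inputs = [g for g in range(n_genes) if g != target]      # candidate regulators exclude the target itself
    X = expr[:-1][:, inputs]                                 # all other genes at time t
    y = expr[1:, target] - expr[:-1, target]                 # change of the target between t and t+1
    rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
    scores[inputs, target] = rf.feature_importances_

print("top putative regulator of gene 2:", int(np.argmax(scores[:, 2])))
```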

  11. ℓ(p)-Norm multikernel learning approach for stock market price forecasting.

    PubMed

    Shao, Xigao; Wu, Kun; Liao, Bifeng

    2012-01-01

    Linear multiple kernel learning model has been used for predicting financial time series. However, ℓ(1)-norm multiple support vector regression is rarely observed to outperform trivial baselines in practical applications. To allow for robust kernel mixtures that generalize well, we adopt ℓ(p)-norm multiple kernel support vector regression (1 ≤ p < ∞) as a stock price prediction model. The optimization problem is decomposed into smaller subproblems, and the interleaved optimization strategy is employed to solve the regression model. The model is evaluated on forecasting the daily stock closing prices of Shanghai Stock Index in China. Experimental results show that our proposed model performs better than ℓ(1)-norm multiple support vector regression model.

  12. A hybrid least squares support vector machines and GMDH approach for river flow forecasting

    NASA Astrophysics Data System (ADS)

    Samsudin, R.; Saad, P.; Shabri, A.

    2010-06-01

    This paper proposes a novel hybrid forecasting model, which combines the group method of data handling (GMDH) and the least squares support vector machine (LSSVM), known as GLSSVM. The GMDH is used to determine the useful input variables for the LSSVM model, and the LSSVM model then performs the time series forecasting. In this study, the application of GLSSVM to monthly river flow forecasting for the Selangor and Bernam Rivers is investigated. The results of the proposed GLSSVM approach are compared with conventional artificial neural network (ANN) models, the Autoregressive Integrated Moving Average (ARIMA) model, and the GMDH and LSSVM models, using long term observations of monthly river flow discharge. The standard statistical measures, the root mean square error (RMSE) and the coefficient of correlation (R), are employed to evaluate the performance of the various models developed. Experimental results indicate that the hybrid model is a powerful tool for modeling discharge time series and can be applied successfully in complex hydrological modeling.

  13. Rolling bearing fault detection and diagnosis based on composite multiscale fuzzy entropy and ensemble support vector machines

    NASA Astrophysics Data System (ADS)

    Zheng, Jinde; Pan, Haiyang; Cheng, Junsheng

    2017-02-01

    To detect the incipient failure of rolling bearings in a timely manner and locate the fault accurately, a novel rolling bearing fault diagnosis method is proposed based on composite multiscale fuzzy entropy (CMFE) and ensemble support vector machines (ESVMs). Fuzzy entropy (FuzzyEn), as an improvement of sample entropy (SampEn), is a new nonlinear method for measuring the complexity of time series. Since FuzzyEn (or SampEn) at a single scale cannot reflect the complexity effectively, multiscale fuzzy entropy (MFE) is developed by defining the FuzzyEns of coarse-grained time series, which represent the system dynamics at different scales. However, the MFE values will be affected by the data length, especially when the data are not long enough. By combining information from multiple coarse-grained time series at the same scale, the CMFE algorithm is proposed in this paper to enhance MFE, as well as FuzzyEn. Compared with MFE, CMFE obtains much more stable and consistent values for a short time series as the scale factor increases. In this paper, CMFE is employed to measure the complexity of vibration signals of rolling bearings and is applied to extract the nonlinear features hidden in the vibration signals. The physical reasons why CMFE is suitable for rolling bearing fault diagnosis are also explored. Based on these features, an ensemble-SVM multi-classifier is constructed for the intelligent classification of fault features to achieve automatic fault diagnosis. Finally, the proposed rolling bearing fault diagnosis method is applied to experimental data, and the results indicate that the proposed method can effectively distinguish different fault categories and severities of rolling bearings.
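
    A compact sketch of composite multiscale fuzzy entropy is given below, using one common variant that averages the fuzzy entropies of the tau shifted coarse-grained series; parameter choices (m, r) and the toy signal are assumptions, and it is not the authors' exact formulation.

```python
# Compact sketch of composite multiscale fuzzy entropy (one common variant that averages
# the fuzzy entropies of the tau shifted coarse-grained series); not the authors' exact code.
import numpy as np

def fuzzy_entropy(x, m=2, r=0.15, n=2):
    """Fuzzy entropy of a 1-D series (Chebyshev distance, exponential membership)."""
    x = np.asarray(x, dtype=float)
    r = r * np.std(x)
    def phi(dim):
        count = len(x) - m                                   # same number of templates for m and m+1
        vecs = np.array([x[i:i + dim] for i in range(count)])
        vecs = vecs - vecs.mean(axis=1, keepdims=True)       # remove the local baseline
        d = np.max(np.abs(vecs[:, None, :] - vecs[None, :, :]), axis=2)
        sim = np.exp(-(d ** n) / r)
        np.fill_diagonal(sim, 0.0)
        return sim.sum() / (count * (count - 1))
    return np.log(phi(m)) - np.log(phi(m + 1))

def composite_mfe(x, scale, m=2, r=0.15):
    vals = []
    for k in range(scale):                                   # tau shifted coarse-grainings at the same scale
        seg = x[k:]
        nb = len(seg) // scale
        coarse = np.mean(np.reshape(seg[:nb * scale], (nb, scale)), axis=1)
        vals.append(fuzzy_entropy(coarse, m, r))
    return np.mean(vals)

rng = np.random.default_rng(9)
vibration = rng.standard_normal(1000)                        # toy stand-in for a bearing vibration signal
print([round(composite_mfe(vibration, s), 3) for s in (1, 2, 3, 4, 5)])
```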

  14. "Analytical" vector-functions I

    NASA Astrophysics Data System (ADS)

    Todorov, Vladimir Todorov

    2017-12-01

    In this note we try to give a new (or different) approach to the investigation of analytical vector functions. More precisely, a notion of a power x^n, n ∈ ℕ+, of a vector x ∈ ℝ^3 is introduced, which allows one to define an "analytical" function f : ℝ^3 → ℝ^3. Let furthermore f(ξ) = Σ_{n=0}^{∞} a_n ξ^n be an analytical function of the real variable ξ. Here we replace the power ξ^n of the number ξ with the power of a vector x ∈ ℝ^3 to obtain a vector "power series" f(x) = Σ_{n=0}^{∞} a_n x^n. We investigate some properties of the vector series as well as some applications of this idea. Note that an "analytical" vector function does not depend on any basis, which may be used in research into some problems in physics.

  15. A general approach for developing system-specific functions to score protein-ligand docked complexes using support vector inductive logic programming.

    PubMed

    Amini, Ata; Shrimpton, Paul J; Muggleton, Stephen H; Sternberg, Michael J E

    2007-12-01

    Despite the increased recent use of protein-ligand and protein-protein docking in the drug discovery process due to the increases in computational power, the difficulty of accurately ranking the binding affinities of a series of ligands or a series of proteins docked to a protein receptor remains largely unsolved. This problem is of major concern in lead optimization procedures and has led to the development of scoring functions tailored to rank the binding affinities of a series of ligands to a specific system. However, such methods can take a long time to develop and their transferability to other systems remains open to question. Here we demonstrate that, given a suitable amount of background information, a new approach using support vector inductive logic programming (SVILP) can be used to produce system-specific scoring functions. Inductive logic programming (ILP) learns logic-based rules for a given dataset that can be used to describe properties of each member of the set in a qualitative manner. By combining ILP with support vector machine regression, a quantitative set of rules can be obtained. SVILP has previously been used in a biological context to examine datasets containing a series of singular molecular structures and properties. Here we describe the use of SVILP to produce binding affinity predictions of a series of ligands to a particular protein. We also for the first time examine the applicability of SVILP techniques to datasets consisting of protein-ligand complexes. Our results show that SVILP performs comparably with other state-of-the-art methods on five protein-ligand systems, as judged by similar cross-validated squares of their correlation coefficients. A McNemar test comparing SVILP to CoMFA and CoMSIA across the five systems indicates our method to be significantly better on one occasion. The ability to graphically display and understand the SVILP-produced rules is demonstrated, and this feature of ILP can be used to derive hypotheses for future ligand design in lead optimization procedures. The approach can readily be extended to evaluate the binding affinities of a series of protein-protein complexes. (c) 2007 Wiley-Liss, Inc.

  16. Kernel canonical-correlation Granger causality for multiple time series

    NASA Astrophysics Data System (ADS)

    Wu, Guorong; Duan, Xujun; Liao, Wei; Gao, Qing; Chen, Huafu

    2011-04-01

    Canonical-correlation analysis as a multivariate statistical technique has been applied to multivariate Granger causality analysis to infer information flow in complex systems. It shows unique appeal and great superiority over the traditional vector autoregressive method, due to the simplified procedure that detects causal interaction between multiple time series, and the avoidance of potential model estimation problems. However, it is limited to the linear case. Here, we extend the framework of canonical correlation to include the estimation of multivariate nonlinear Granger causality for drawing inference about directed interaction. Its feasibility and effectiveness are verified on simulated data.
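
    For contrast with the kernelized multivariate approach, the sketch below runs the standard linear pairwise Granger-causality test from statsmodels on a synthetic pair in which x drives y with a two-step lag.

```python
# For contrast with the kernelized multivariate approach above, a minimal sketch of the
# standard linear pairwise Granger-causality test (statsmodels); x is built to drive y.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(10)
n = 500
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 2] + 0.3 * rng.standard_normal()   # x -> y with lag 2

data = pd.DataFrame({"y": y, "x": x})
# Column order matters: the test asks whether the *second* column Granger-causes the first.
result = grangercausalitytests(data[["y", "x"]], maxlag=4, verbose=False)
print("p-value at lag 2:", result[2][0]["ssr_ftest"][1])
```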

  17. Structural Equation Modeling of Multivariate Time Series

    ERIC Educational Resources Information Center

    du Toit, Stephen H. C.; Browne, Michael W.

    2007-01-01

    The covariance structure of a vector autoregressive process with moving average residuals (VARMA) is derived. It differs from other available expressions for the covariance function of a stationary VARMA process and is compatible with current structural equation methodology. Structural equation modeling programs, such as LISREL, may therefore be…

  18. Conventional and advanced time series estimation: application to the Australian and New Zealand Intensive Care Society (ANZICS) adult patient database, 1993-2006.

    PubMed

    Moran, John L; Solomon, Patricia J

    2011-02-01

    Time series analysis has seen limited application in the biomedical Literature. The utility of conventional and advanced time series estimators was explored for intensive care unit (ICU) outcome series. Monthly mean time series, 1993-2006, for hospital mortality, severity-of-illness score (APACHE III), ventilation fraction and patient type (medical and surgical), were generated from the Australia and New Zealand Intensive Care Society adult patient database. Analyses encompassed geographical seasonal mortality patterns, series structural time changes, mortality series volatility using autoregressive moving average and Generalized Autoregressive Conditional Heteroscedasticity models in which predicted variances are updated adaptively, and bivariate and multivariate (vector error correction models) cointegrating relationships between series. The mortality series exhibited marked seasonality, declining mortality trend and substantial autocorrelation beyond 24 lags. Mortality increased in winter months (July-August); the medical series featured annual cycling, whereas the surgical demonstrated long and short (3-4 months) cycling. Series structural breaks were apparent in January 1995 and December 2002. The covariance stationary first-differenced mortality series was consistent with a seasonal autoregressive moving average process; the observed conditional-variance volatility (1993-1995) and residual Autoregressive Conditional Heteroscedasticity effects entailed a Generalized Autoregressive Conditional Heteroscedasticity model, preferred by information criterion and mean model forecast performance. Bivariate cointegration, indicating long-term equilibrium relationships, was established between mortality and severity-of-illness scores at the database level and for categories of ICUs. Multivariate cointegration was demonstrated for {log APACHE III score, log ICU length of stay, ICU mortality and ventilation fraction}. A system approach to understanding series time-dependence may be established using conventional and advanced econometric time series estimators. © 2010 Blackwell Publishing Ltd.
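
    The sketch below shows the kind of conditional-variance modelling referred to above: an AR mean plus GARCH(1,1) variance fitted with the arch package to a synthetic monthly series; it is a minimal illustration, not the ANZICS analysis.

```python
# Sketch of the conditional-variance modelling described above: fit an AR mean with a
# GARCH(1,1) variance (the arch package's default volatility model) to a synthetic series.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(11)
n = 168                                      # 14 years of monthly values
vol = np.zeros(n); y = np.zeros(n)
vol[0] = 1.0
for t in range(1, n):                        # simulate a GARCH(1,1)-like volatility process
    vol[t] = np.sqrt(0.1 + 0.2 * y[t - 1] ** 2 + 0.7 * vol[t - 1] ** 2)
    y[t] = vol[t] * rng.standard_normal()

res = arch_model(y, mean="AR", lags=1, p=1, q=1).fit(disp="off")
print(res.params)                            # AR mean term plus omega/alpha/beta variance terms
```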

  19. Pulse Code Modulation (PCM) encoder handbook for Aydin Vector MMP-600 series system

    NASA Technical Reports Server (NTRS)

    Currier, S. F.; Powell, W. R.

    1986-01-01

    The hardware and software characteristics of a time division multiplex system are described. The system is used to sample analog and digital data. The data is merged with synchronization information to produce a serial pulse coded modulation (PCM) bit stream. Information presented herein is required by users to design compatible interfaces and assure effective utilization of this encoder system. GSFC/Wallops Flight Facility has flown approximately 50 of these systems through 1984 on sounding rockets with no inflight failures. Aydin Vector manufactures all of the components for these systems.

  20. Numerical limitations in application of vector autoregressive modeling and Granger causality to analysis of EEG time series

    NASA Astrophysics Data System (ADS)

    Kammerdiner, Alla; Xanthopoulos, Petros; Pardalos, Panos M.

    2007-11-01

    In this chapter a potential problem with the application of Granger causality based on simple vector autoregressive (VAR) modeling to EEG data is investigated. Although some initial studies tested whether the data support the stationarity assumption of VAR, the stability of the estimated model has rarely (if ever) been verified. In fact, in cases where the stability condition is violated, the process may exhibit random-walk-like behavior or even be explosive. The problem is illustrated by an example.
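
    The sketch below illustrates the stability check being argued for: after fitting a VAR to (synthetic stand-in) EEG channels with statsmodels, confirm that all eigenvalues of the companion matrix lie strictly inside the unit circle before interpreting Granger-causality results.

```python
# Sketch of the stability check argued for above: after fitting a VAR, confirm that all
# eigenvalues of the companion matrix lie strictly inside the unit circle before
# interpreting Granger-causality results.  Channels are synthetic stand-ins for EEG.
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(12)
n, k = 1000, 3
eeg = np.zeros((n, k))
A = np.array([[0.5, 0.1, 0.0],
              [0.0, 0.4, 0.2],
              [0.1, 0.0, 0.3]])
for t in range(1, n):
    eeg[t] = A @ eeg[t - 1] + rng.standard_normal(k)

res = VAR(eeg).fit(maxlags=5, ic="aic")
p, k = res.k_ar, res.neqs
companion = np.zeros((k * p, k * p))
companion[:k, :] = np.hstack(res.coefs)            # stack A_1 ... A_p in the top block row
if p > 1:
    companion[k:, :-k] = np.eye(k * (p - 1))
moduli = np.abs(np.linalg.eigvals(companion))
print("is_stable():", res.is_stable(), "| max eigenvalue modulus:", round(moduli.max(), 3))
```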

  1. Application of vector-valued rational approximations to the matrix eigenvalue problem and connections with Krylov subspace methods

    NASA Technical Reports Server (NTRS)

    Sidi, Avram

    1992-01-01

    Let F(z) be a vector-valued function F: ℂ → ℂ^N, which is analytic at z=0 and meromorphic in a neighborhood of z=0, and let its Maclaurin series be given. We use vector-valued rational approximation procedures for F(z) that are based on its Maclaurin series, in conjunction with power iterations, to develop bona fide generalizations of the power method for an arbitrary N X N matrix that may be diagonalizable or not. These generalizations can be used to obtain simultaneously several of the largest distinct eigenvalues and the corresponding invariant subspaces, and we present a detailed convergence theory for them. In addition, it is shown that the generalized power methods of this work are equivalent to some Krylov subspace methods, among them the methods of Arnoldi and Lanczos. Thus, the theory provides a set of completely new results and constructions for these Krylov subspace methods. This theory suggests at the same time a new mode of usage for these Krylov subspace methods that were observed to possess computational advantages over their common mode of usage.

  2. Coil-to-coil physiological noise correlations and their impact on functional MRI time-series signal-to-noise ratio.

    PubMed

    Triantafyllou, Christina; Polimeni, Jonathan R; Keil, Boris; Wald, Lawrence L

    2016-12-01

    Physiological nuisance fluctuations ("physiological noise") are a major contribution to the time-series signal-to-noise ratio (tSNR) of functional imaging. While thermal noise correlations between array coil elements have a well-characterized effect on the image Signal to Noise Ratio (SNR 0 ), the element-to-element covariance matrix of the time-series fluctuations has not yet been analyzed. We examine this effect with a goal of ultimately improving the combination of multichannel array data. We extend the theoretical relationship between tSNR and SNR 0 to include a time-series noise covariance matrix Ψ t , distinct from the thermal noise covariance matrix Ψ 0 , and compare its structure to Ψ 0 and the signal coupling matrix SS H formed from the signal intensity vectors S. Inclusion of the measured time-series noise covariance matrix into the model relating tSNR and SNR 0 improves the fit of experimental multichannel data and is shown to be distinct from Ψ 0 or SS H . Time-series noise covariances in array coils are found to differ from Ψ 0 and more surprisingly, from the signal coupling matrix SS H . Correct characterization of the time-series noise has implications for the analysis of time-series data and for improving the coil element combination process. Magn Reson Med 76:1708-1719, 2016. © 2016 International Society for Magnetic Resonance in Medicine. © 2016 International Society for Magnetic Resonance in Medicine.

  3. A Metric to Quantify Shared Visual Attention in Two-Person Teams

    NASA Technical Reports Server (NTRS)

    Gontar, Patrick; Mulligan, Jeffrey B.

    2015-01-01

    Introduction: Critical tasks in high-risk environments are often performed by teams, the members of which must work together efficiently. In some situations, the team members may have to work together to solve a particular problem, while in others it may be better for them to divide the work into separate tasks that can be completed in parallel. We hypothesize that these two team strategies can be differentiated on the basis of shared visual attention, measured by gaze tracking. 2) Methods: Gaze recordings were obtained for two-person flight crews flying a high-fidelity simulator (Gontar, Hoermann, 2014). Gaze was categorized with respect to 12 areas of interest (AOIs). We used these data to construct time series of 12 dimensional vectors, with each vector component representing one of the AOIs. At each time step, each vector component was set to 0, except for the one corresponding to the currently fixated AOI, which was set to 1. This time series could then be averaged in time, with the averaging window time (t) as a variable parameter. For example, when we average with a t of one minute, each vector component represents the proportion of time that the corresponding AOI was fixated within the corresponding one minute interval. We then computed the Pearson product-moment correlation coefficient between the gaze proportion vectors for each of the two crew members, at each point in time, resulting in a signal representing the time-varying correlation between gaze behaviors. We determined criteria for concluding correlated gaze behavior using two methods: first, a permutation test was applied to the subjects' data. When one crew member's gaze proportion vector is correlated with a random time sample from the other crewmember's data, a distribution of correlation values is obtained that differs markedly from the distribution obtained from temporally aligned samples. In addition to validating that the gaze tracker was functioning reasonably well, this also allows us to compute probabilities of coordinated behavior for each value of the correlation. As an alternative, we also tabulated distributions of correlation coefficients for synthetic data sets, in which the behavior was modeled as a first-order Markov process, and compared correlation distributions for identical processes with those for disparate processes, allowing us to choose criteria and estimate error rates. 3) Discussion: Our method of gaze correlation is able to measure shared visual attention, and can distinguish between activities involving different instruments. We plan to analyze whether pilots strategies of sharing visual attention can predict performance. Possible measurements of performance include expert ratings from instructors, fuel consumption, total task time, and failure rate. While developed for two-person crews, our approach can be applied to larger groups, using intra-class correlation coefficients instead of the Pearson product-moment correlation.
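
    A sketch of the windowed-correlation metric described above is given below for synthetic AOI streams: one-hot encode each crew member's fixated AOI, average over a sliding window to obtain gaze-proportion vectors, and correlate the two members' vectors at each time step, with a time-shifted permutation baseline.

```python
# Sketch of the shared-attention metric described above: one-hot encode each crew member's
# fixated AOI, average over a sliding window to get gaze-proportion vectors, then correlate
# the two members' vectors at each time step.  AOI streams here are synthetic.
import numpy as np

n_aoi, n_samples, win = 12, 3000, 60             # 12 AOIs, 60-sample averaging window

rng = np.random.default_rng(13)
pilot_a = rng.integers(0, n_aoi, n_samples)
pilot_b = np.where(rng.random(n_samples) < 0.5, pilot_a, rng.integers(0, n_aoi, n_samples))

def proportions(aoi_stream):
    onehot = np.eye(n_aoi)[aoi_stream]                        # one 12-D indicator vector per sample
    kernel = np.ones(win) / win
    return np.vstack([np.convolve(onehot[:, a], kernel, mode="valid") for a in range(n_aoi)]).T

pa, pb = proportions(pilot_a), proportions(pilot_b)
corr = np.array([np.corrcoef(pa[t], pb[t])[0, 1] for t in range(len(pa))])
print("median gaze correlation:", round(np.median(corr), 3))

# Permutation-style baseline: correlate against a randomly time-shifted copy of the other stream.
shift = rng.integers(win, n_samples - win)
pb_shuffled = proportions(np.roll(pilot_b, shift))
baseline = np.array([np.corrcoef(pa[t], pb_shuffled[t])[0, 1] for t in range(len(pa))])
print("median baseline correlation:", round(np.median(baseline), 3))
```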

  4. Phase definition to assess synchronization quality of nonlinear oscillators

    NASA Astrophysics Data System (ADS)

    Freitas, Leandro; Torres, Leonardo A. B.; Aguirre, Luis A.

    2018-05-01

    This paper proposes a phase definition, named the vector field phase, which can be defined for systems with arbitrary finite dimension and is a monotonically increasing function of time. The proposed definition can properly quantify the dynamics in the flow direction, often associated with the null Lyapunov exponent. Numerical examples that use benchmark periodic and chaotic oscillators are discussed to illustrate some of the main features of the definition, which are that (i) phase information can be obtained either from the vector field or from a time series, (ii) it permits not only detection of phase synchronization but also quantification of it, and (iii) it can be used in the phase synchronization of very different oscillators.

  5. Phase definition to assess synchronization quality of nonlinear oscillators.

    PubMed

    Freitas, Leandro; Torres, Leonardo A B; Aguirre, Luis A

    2018-05-01

    This paper proposes a phase definition, named the vector field phase, which can be defined for systems with arbitrary finite dimension and is a monotonically increasing function of time. The proposed definition can properly quantify the dynamics in the flow direction, often associated with the null Lyapunov exponent. Numerical examples that use benchmark periodic and chaotic oscillators are discussed to illustrate some of the main features of the definition, which are that (i) phase information can be obtained either from the vector field or from a time series, (ii) it permits not only detection of phase synchronization but also quantification of it, and (iii) it can be used in the phase synchronization of very different oscillators.

  6. Segmented Polynomial Models in Quasi-Experimental Research.

    ERIC Educational Resources Information Center

    Wasik, John L.

    1981-01-01

    The use of segmented polynomial models is explained. Examples of design matrices of dummy variables are given for the least squares analyses of time series and discontinuity quasi-experimental research designs. Linear combinations of dummy variable vectors appear to provide tests of effects in the two quasi-experimental designs. (Author/BW)

  7. An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models

    ERIC Educational Resources Information Center

    Lee, Taehun

    2010-01-01

    In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…

  8. Movie-maps of low-latitude magnetic storm disturbance

    NASA Astrophysics Data System (ADS)

    Love, Jeffrey J.; Gannon, Jennifer L.

    2010-06-01

    We present 29 movie-maps of low-latitude horizontal-intensity magnetic disturbance for the years 1999-2006: 28 recording magnetic storms and 1 magnetically quiescent period. The movie-maps are derived from magnetic vector time series data collected at up to 25 ground-based observatories. Using a technique similar to that used in the calculation of Dst, a quiet time baseline is subtracted from the time series from each observatory. The remaining disturbance time series are shown in a polar coordinate system that accommodates both Earth rotation and the universal time dependence of magnetospheric disturbance. Each magnetic storm recorded in the movie-maps is different. While some standard interpretations about the storm time equatorial ring current appear to apply to certain moments and certain phases of some storms, the movie-maps also show substantial variety in the local time distribution of low-latitude magnetic disturbance, especially during storm commencements and storm main phases. All movie-maps are available at the U.S. Geological Survey Geomagnetism Program Web site (http://geomag.usgs.gov).

  9. Nonlinear time series modeling and forecasting the seismic data of the Hindu Kush region

    NASA Astrophysics Data System (ADS)

    Khan, Muhammad Yousaf; Mittnik, Stefan

    2018-01-01

    In this study, we extended the application of linear and nonlinear time series models in the field of earthquake seismology and examined the out-of-sample forecast accuracy of linear Autoregressive (AR), Autoregressive Conditional Duration (ACD), Self-Exciting Threshold Autoregressive (SETAR), Threshold Autoregressive (TAR), Logistic Smooth Transition Autoregressive (LSTAR), Additive Autoregressive (AAR), and Artificial Neural Network (ANN) models for seismic data of the Hindu Kush region. We also extended previous studies by using Vector Autoregressive (VAR) and Threshold Vector Autoregressive (TVAR) models and compared their forecasting accuracy with the linear AR model. Unlike previous studies, which typically specify threshold models using an internal threshold variable, we specified these models with external transition variables and compared their out-of-sample forecasting performance with the linear benchmark AR model. The modeling results show that the time series models used in the present study are capable of capturing the dynamic structure present in the seismic data. The point forecast results indicate that the AR model generally outperforms the nonlinear models. However, in some cases, threshold models specified with external threshold variables produce more accurate forecasts, indicating that the specification of threshold time series models is of crucial importance. For raw seismic data, the ACD model does not show improved out-of-sample forecasting performance over the linear AR model. The results indicate that the AR model is the best forecasting device for modeling and forecasting the raw seismic data of the Hindu Kush region.
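    As a hedged illustration of the kind of out-of-sample comparison described above, the sketch below fits a univariate AR benchmark and a VAR model to a synthetic bivariate series with statsmodels and compares their forecast errors. The data, lag orders, and split point are placeholders, not the seismic catalogue used in the study.

    ```python
    # Sketch: out-of-sample comparison of a univariate AR benchmark against a
    # VAR model; synthetic data stand in for the seismic series.
    import numpy as np
    from statsmodels.tsa.ar_model import AutoReg
    from statsmodels.tsa.api import VAR

    rng = np.random.default_rng(1)
    n = 400
    x = np.zeros((n, 2))
    for t in range(1, n):                      # simple bivariate AR(1) process
        x[t] = 0.5 * x[t - 1] + rng.normal(0, 1, 2)

    train, test = x[:350], x[350:]

    # Univariate AR benchmark for the first series
    ar_res = AutoReg(train[:, 0], lags=2).fit()
    ar_fc = ar_res.predict(start=len(train), end=len(x) - 1)

    # VAR forecast of both series, keeping the first for comparison
    var_res = VAR(train).fit(2)
    var_fc = var_res.forecast(train[-var_res.k_ar:], steps=len(test))[:, 0]

    rmse = lambda f: np.sqrt(np.mean((test[:, 0] - f) ** 2))
    print("AR RMSE:", rmse(ar_fc), " VAR RMSE:", rmse(var_fc))
    ```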

  10. Potential assessment of the "support vector machine" method in forecasting ambient air pollutant trends.

    PubMed

    Lu, Wei-Zhen; Wang, Wen-Jian

    2005-04-01

    Monitoring and forecasting of air quality parameters are popular and important topics of atmospheric and environmental research today because of the health impact of exposure to air pollutants in urban air. Accurate models for air pollutant prediction are needed because such models allow forecasting and diagnosing potential compliance or non-compliance in both the short and long term. Artificial neural networks (ANN) are regarded as a reliable and cost-effective method for such tasks and have produced some promising results to date. Although ANN has attracted considerable attention from environmental researchers, its inherent drawbacks, e.g., local minima, over-fitting during training, poor generalization performance, and the determination of the appropriate network architecture, impede its practical application. The support vector machine (SVM), a novel type of learning machine based on statistical learning theory, can be used for regression and time series prediction and has been reported to perform well, with some promising results. The work presented in this paper aims to examine the feasibility of applying SVM to predict air pollutant levels in advancing time series based on the monitored air pollutant database in the Hong Kong downtown area. At the same time, the functional characteristics of SVM are investigated in the study. The experimental comparisons between the SVM model and the classical radial basis function (RBF) network demonstrate that the SVM is superior to the conventional RBF network in predicting air quality parameters with different time series, and offers better generalization performance than the RBF model.
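    The sketch below shows the basic setup behind such a comparison: a support vector regression trained on lagged values of a pollutant-like series with scikit-learn. The series, lag count, and SVR settings are placeholders rather than the Hong Kong monitoring data or the paper's configuration.

    ```python
    # Sketch: support vector regression on lagged values of a pollutant series
    # (synthetic data; the monitoring database is not used here).
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(2)
    t = np.arange(600)
    series = 50 + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 2, t.size)

    def lagged_matrix(y, n_lags):
        X = np.column_stack([y[i:len(y) - n_lags + i] for i in range(n_lags)])
        return X, y[n_lags:]

    X, y = lagged_matrix(series, n_lags=6)
    X_tr, X_te, y_tr, y_te = X[:500], X[500:], y[:500], y[500:]

    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print("test RMSE:", np.sqrt(np.mean((pred - y_te) ** 2)))
    ```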

  11. ℓp-Norm Multikernel Learning Approach for Stock Market Price Forecasting

    PubMed Central

    Shao, Xigao; Wu, Kun; Liao, Bifeng

    2012-01-01

    Linear multiple kernel learning models have been used for predicting financial time series. However, ℓ1-norm multiple support vector regression is rarely observed to outperform trivial baselines in practical applications. To allow for robust kernel mixtures that generalize well, we adopt ℓp-norm multiple kernel support vector regression (1 ≤ p < ∞) as a stock price prediction model. The optimization problem is decomposed into smaller subproblems, and an interleaved optimization strategy is employed to solve the regression model. The model is evaluated on forecasting the daily closing prices of the Shanghai Stock Index in China. Experimental results show that our proposed model performs better than the ℓ1-norm multiple support vector regression model. PMID:23365561
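    A much-simplified sketch of the idea is given below: several RBF kernels at different bandwidths are mixed and passed to an SVR as a precomputed kernel. Unlike the paper, which learns the kernel weights with an interleaved ℓp-norm optimization, the weights here are fixed and uniform, and the features are synthetic.

    ```python
    # Sketch: a fixed-weight mixture of RBF kernels at several bandwidths, fed
    # to an SVR through a precomputed kernel. Simplified stand-in: the paper
    # learns the kernel weights; here they are fixed and uniform.
    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.svm import SVR

    rng = np.random.default_rng(3)
    X = rng.normal(size=(200, 5))              # hypothetical lagged-return features
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
    X_tr, X_te, y_tr, y_te = X[:160], X[160:], y[:160], y[160:]

    gammas = [0.1, 1.0, 10.0]
    weights = np.ones(len(gammas)) / len(gammas)

    def mixed_kernel(A, B):
        return sum(w * rbf_kernel(A, B, gamma=g) for w, g in zip(weights, gammas))

    svr = SVR(kernel="precomputed", C=10.0)
    svr.fit(mixed_kernel(X_tr, X_tr), y_tr)
    pred = svr.predict(mixed_kernel(X_te, X_tr))
    print("test MSE:", np.mean((pred - y_te) ** 2))
    ```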

  12. Urban Area Monitoring using MODIS Time Series Data

    NASA Astrophysics Data System (ADS)

    Devadiga, S.; Sarkar, S.; Mauoka, E.

    2015-12-01

    Growing urban sprawl and its impact on global climate through urban heat island effects have been an active area of research in recent years. This is especially significant in light of the rapid urbanization happening in some of the fast-developing nations across the globe. So far, however, the study of urban area growth has been largely restricted to local and regional scales, using high- to medium-resolution satellite observations taken at distinct time periods. In this presentation we propose a new approach to detect and monitor urban area expansion using a long time series of MODIS data. This work characterizes data points using a vector of several annual metrics computed from the MODIS 8-day and 16-day composite L3 data products, at 250 m resolution and over several years, and then uses a vector angle mapping classifier to detect and segment the urban area. The classifier is trained using a set of training points obtained from a reference vector point and polygon pre-filtered using the MODIS VI product. This work gains additional significance given that, despite unprecedented urban growth since 2000, the area covered by the urban class in the MODIS Global Land Cover (MCD12Q1, MCDLCHKM and MCDLC1KM) products hasn't changed since the launch of Terra and Aqua. The proposed approach was applied to delineate the urban area around several cities in Asia known to have had maximum growth in the last 15 years. Results were verified using high-resolution Landsat data.
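    The sketch below illustrates a vector-angle (spectral-angle-style) classification rule of the kind described: a pixel's annual-metric vector is labeled urban when its angle to a reference urban vector falls below a threshold. The metric values and threshold are hypothetical placeholders, not the MODIS-derived quantities used in the work.

    ```python
    # Sketch: vector-angle rule for labelling pixels as urban / non-urban.
    import numpy as np

    def vector_angle(a, b):
        cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    reference_urban = np.array([0.12, 0.10, 0.35, 0.30])   # hypothetical annual metrics
    threshold = np.deg2rad(8.0)                            # hypothetical angle threshold

    pixels = np.array([[0.13, 0.11, 0.33, 0.31],           # urban-like
                       [0.45, 0.50, 0.10, 0.05]])          # vegetation-like

    labels = ["urban" if vector_angle(p, reference_urban) < threshold else "non-urban"
              for p in pixels]
    print(labels)
    ```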

  13. Forecasting paediatric malaria admissions on the Kenya Coast using rainfall.

    PubMed

    Karuri, Stella Wanjugu; Snow, Robert W

    2016-01-01

    Malaria is a vector-borne disease which, despite recent scaled-up efforts to achieve control in Africa, continues to pose a major threat to child survival. The disease is caused by the protozoan parasite Plasmodium and requires mosquitoes and humans for transmission. Rainfall is a major factor in seasonal and secular patterns of malaria transmission along the East African coast. The goal of the study was to develop a model to reliably forecast incidences of paediatric malaria admissions to Kilifi District Hospital (KDH). In this article, we apply several statistical models to look at the temporal association between monthly paediatric malaria hospital admissions, rainfall, and Indian Ocean sea surface temperatures. Trend and seasonally adjusted, marginal and multivariate, time-series models for hospital admissions were applied to a unique data set to examine the role of climate, seasonality, and long-term anomalies in predicting malaria hospital admission rates and whether these might become more or less predictable with increasing vector control. The proportion of paediatric admissions to KDH that have malaria as a cause of admission can be forecast by a model which depends on the proportion of malaria admissions in the previous 2 months. This model is improved by incorporating either the previous month's Indian Ocean Dipole information or the previous 2 months' rainfall. Surveillance data can help build time-series prediction models which can be used to anticipate seasonal variations in clinical burdens of malaria in stable transmission areas and aid the timing of malaria vector control.
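    As a rough illustration of the model structure described (admissions regressed on their own two previous months plus lagged rainfall), the sketch below fits an ordinary least squares regression to synthetic monthly data; it is not the authors' fitted model.

    ```python
    # Sketch: regressing the monthly proportion of malaria admissions on its
    # two previous values plus the previous two months' rainfall (synthetic data).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n = 120
    rain = 50 + 30 * np.sin(2 * np.pi * np.arange(n) / 12) + rng.normal(0, 5, n)
    prop = np.zeros(n)
    for t in range(2, n):
        prop[t] = (0.05 + 0.4 * prop[t - 1] + 0.2 * prop[t - 2]
                   + 0.001 * rain[t - 1] + rng.normal(0, 0.02))

    X = np.column_stack([prop[1:-1], prop[:-2], rain[1:-1], rain[:-2]])
    y = prop[2:]
    res = sm.OLS(y, sm.add_constant(X)).fit()
    print(res.params)          # coefficients on lagged admissions and rainfall
    ```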

  14. Treatment for meibomian gland dysfunction and dry eye symptoms with a single-dose vectored thermal pulsation: a review.

    PubMed

    Blackie, Caroline A; Carlson, Alan N; Korb, Donald R

    2015-07-01

    Meibomian gland dysfunction (MGD) is understood to be a highly prevalent, chronic progressive disease and the leading cause of dry eye. All available published peer-reviewed results of the novel vectored thermal pulsation therapy for patients with MGD are investigated. The PubMed and meeting abstract search revealed a total of 31 peer-reviewed reports on vectored thermal pulsation therapy at the time of the search (eight manuscripts and 23 meeting abstracts). All manuscripts report a significant increase in meibomian gland function (∼3×) and symptom improvement after a single 12-minute treatment. Additional reported objective measures, such as osmolarity, tear break-up time, or lipid layer thickness, also improved as a result of the therapy; however, not all findings were statistically significant. The randomized controlled studies show sustained gland function and symptom relief lasting out to 12 months. The uncontrolled case series show a significantly longer duration of effect. A single 12-minute vectored thermal pulsation treatment reduces dry eye symptoms and improves meibomian gland function and other correlates of ocular surface health.

  15. Mathematical Methods for Optical Physics and Engineering

    NASA Astrophysics Data System (ADS)

    Gbur, Gregory J.

    2011-01-01

    1. Vector algebra; 2. Vector calculus; 3. Vector calculus in curvilinear coordinate systems; 4. Matrices and linear algebra; 5. Advanced matrix techniques and tensors; 6. Distributions; 7. Infinite series; 8. Fourier series; 9. Complex analysis; 10. Advanced complex analysis; 11. Fourier transforms; 12. Other integral transforms; 13. Discrete transforms; 14. Ordinary differential equations; 15. Partial differential equations; 16. Bessel functions; 17. Legendre functions and spherical harmonics; 18. Orthogonal functions; 19. Green's functions; 20. The calculus of variations; 21. Asymptotic techniques; Appendices; References; Index.

  16. Optimal Parameter Selection for Support Vector Machine Based on Artificial Bee Colony Algorithm: A Case Study of Grid-Connected PV System Power Prediction.

    PubMed

    Gao, Xiang-Ming; Yang, Shi-Feng; Pan, San-Bo

    2017-01-01

    To predict the output power of photovoltaic systems, which exhibits nonstationarity and randomness, an output power prediction model for grid-connected PV systems is proposed based on empirical mode decomposition (EMD) and a support vector machine (SVM) optimized with an artificial bee colony (ABC) algorithm. First, according to the weather forecast data sets for the prediction date, the time series data of output power on a similar day with 15-minute intervals are built. Second, the time series data of the output power are decomposed into a series of components, including intrinsic mode components IMFn and a trend component Res, at different scales using EMD. The corresponding SVM prediction model is established for each IMF component and the trend component, and the SVM model parameters are optimized with the artificial bee colony algorithm. Finally, the prediction results of each model are reconstructed, and the predicted values of the output power of the grid-connected PV system are obtained. The prediction model is tested with actual data, and the results show that the power prediction model based on EMD and ABC-SVM has a faster calculation speed and higher prediction accuracy than both the single SVM prediction model and the EMD-SVM prediction model without optimization.

  17. Optimal Parameter Selection for Support Vector Machine Based on Artificial Bee Colony Algorithm: A Case Study of Grid-Connected PV System Power Prediction

    PubMed Central

    2017-01-01

    To predict the output power of photovoltaic systems, which exhibits nonstationarity and randomness, an output power prediction model for grid-connected PV systems is proposed based on empirical mode decomposition (EMD) and a support vector machine (SVM) optimized with an artificial bee colony (ABC) algorithm. First, according to the weather forecast data sets for the prediction date, the time series data of output power on a similar day with 15-minute intervals are built. Second, the time series data of the output power are decomposed into a series of components, including intrinsic mode components IMFn and a trend component Res, at different scales using EMD. The corresponding SVM prediction model is established for each IMF component and the trend component, and the SVM model parameters are optimized with the artificial bee colony algorithm. Finally, the prediction results of each model are reconstructed, and the predicted values of the output power of the grid-connected PV system are obtained. The prediction model is tested with actual data, and the results show that the power prediction model based on EMD and ABC-SVM has a faster calculation speed and higher prediction accuracy than both the single SVM prediction model and the EMD-SVM prediction model without optimization. PMID:28912803
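    The sketch below illustrates the component-wise SVR step of such a scheme under two simplifications: the decomposition is assumed to have been produced already (the "components" here are synthetic stand-ins for IMFs and the trend), and a small grid search replaces the artificial bee colony parameter optimization.

    ```python
    # Sketch: component-wise SVR modelling in the spirit of the EMD-SVM scheme.
    # The decomposition is assumed given; the ABC search is replaced by a grid
    # search; the reconstruction below is in-sample, for brevity.
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import GridSearchCV

    n, n_lags = 300, 8
    t = np.arange(n)
    components = [np.sin(2 * np.pi * t / 96),            # stand-in "IMF"
                  0.3 * np.sin(2 * np.pi * t / 24),      # stand-in "IMF"
                  0.002 * t]                              # stand-in trend

    def lagged(y, p):
        X = np.column_stack([y[i:len(y) - p + i] for i in range(p)])
        return X, y[p:]

    forecast = np.zeros(n - n_lags)
    for comp in components:
        X, y = lagged(comp, n_lags)
        grid = GridSearchCV(SVR(kernel="rbf"),
                            {"C": [1, 10, 100], "gamma": [0.01, 0.1, 1.0]}, cv=3)
        grid.fit(X, y)
        forecast += grid.predict(X)          # reconstruct per-component predictions

    target = sum(components)[n_lags:]
    print("reconstruction RMSE:", np.sqrt(np.mean((forecast - target) ** 2)))
    ```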

  18. Plasmids for increased efficiency of vector construction and genetic engineering in filamentous fungi.

    PubMed

    Schoberle, Taylor J; Nguyen-Coleman, C Kim; May, Gregory S

    2013-01-01

    Fungal species are continuously being studied to not only understand disease in humans and plants but also to identify novel antibiotics and other metabolites of industrial importance. Genetic manipulations, such as gene deletion, gene complementation, and gene over-expression, are common techniques to investigate fungal gene functions. Although advances in transformation efficiency and promoter usage have improved genetic studies, some basic steps in vector construction are still laborious and time-consuming. Gateway cloning technology solves this problem by increasing the efficiency of vector construction through the use of λ phage integrase proteins and att recombination sites. We developed a series of Gateway-compatible vectors for use in genetic studies in a range of fungal species. They contain nutritional and drug-resistance markers and can be utilized to manipulate different filamentous fungal genomes. Copyright © 2013 Elsevier Inc. All rights reserved.

  19. A modified temporal criterion to meta-optimize the extended Kalman filter for land cover classification of remotely sensed time series

    NASA Astrophysics Data System (ADS)

    Salmon, B. P.; Kleynhans, W.; Olivier, J. C.; van den Bergh, F.; Wessels, K. J.

    2018-05-01

    Humans are transforming land cover at an ever-increasing rate. Accurate geographical maps on land cover, especially rural and urban settlements are essential to planning sustainable development. Time series extracted from MODerate resolution Imaging Spectroradiometer (MODIS) land surface reflectance products have been used to differentiate land cover classes by analyzing the seasonal patterns in reflectance values. The proper fitting of a parametric model to these time series usually requires several adjustments to the regression method. To reduce the workload, a global setting of parameters is done to the regression method for a geographical area. In this work we have modified a meta-optimization approach to setting a regression method to extract the parameters on a per time series basis. The standard deviation of the model parameters and magnitude of residuals are used as scoring function. We successfully fitted a triply modulated model to the seasonal patterns of our study area using a non-linear extended Kalman filter (EKF). The approach uses temporal information which significantly reduces the processing time and storage requirements to process each time series. It also derives reliability metrics for each time series individually. The features extracted using the proposed method are classified with a support vector machine and the performance of the method is compared to the original approach on our ground truth data.

  20. Deriving a Core Magnetic Field Model from Swarm Satellite Data

    NASA Astrophysics Data System (ADS)

    Lesur, V.; Rother, M.; Wardinski, I.

    2014-12-01

    A model of the Earth's core magnetic field has been built using Swarm satellite mission data and observatory quasi-definitive data. The satellite data processing scheme, which was used to derive previous satellite field models (i.e. GRIMM series), has been modified to handle discrepancies between the satellite total intensity data derived from the vector fluxgate magnetometer and the absolute scalar instrument. Further, the Euler angles, i.e. the angles between the vector magnetometer and the satellite reference frame, have been recalculated on a series of 30-day windows to obtain an accurate model of the core field for 2014. Preliminary derivations of core magnetic field and SV models for 2014 present the same characteristics as during the CHAMP era. The acceleration (i.e. the field second time derivative) has shown a rapid evolution over the last few years, and is present in the current model, which confirms previous observations.

  1. Binarized cross-approximate entropy in crowdsensing environment.

    PubMed

    Skoric, Tamara; Mohamoud, Omer; Milovanovic, Branislav; Japundzic-Zigon, Nina; Bajic, Dragana

    2017-01-01

    Personalised monitoring in health applications has been recognised as part of the mobile crowdsensing concept, where subjects equipped with sensors extract information and share it for personal or common benefit. Limited transmission resources impose the use of local analysis methodologies, but this approach is incompatible with analytical tools that require stationary and artefact-free data. This paper proposes a computationally efficient binarised cross-approximate entropy, referred to as (X)BinEn, for unsupervised cardiovascular signal processing in environments where energy and processor resources are limited. The proposed method is a descendant of the cross-approximate entropy ((X)ApEn). It operates on binary, differentially encoded data series split into m-sized vectors. The Hamming distance is used as a distance measure, while a search for similarities is performed on the vector sets. The procedure is tested on rats under shaker and restraint stress, and compared to the existing (X)ApEn results. The number of processing operations is reduced. (X)BinEn captures entropy changes in a similar manner to (X)ApEn. The coding coarseness yields an adverse effect of reduced sensitivity, but it attenuates parameter inconsistency and binary bias. A special case of (X)BinEn is equivalent to Shannon's entropy. A binary conditional entropy for m = 1 vectors is embedded into the (X)BinEn procedure. (X)BinEn can be applied to a single time series as an auto-entropy method, or to a pair of time series, as a cross-entropy method. Its low processing requirements make it suitable for mobile, battery operated, self-attached sensing devices, with limited power and processor resources. Copyright © 2016 Elsevier Ltd. All rights reserved.
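    The sketch below is a simplified reading of the procedure: both series are differentially encoded to bits, split into m-length vectors, and compared with the Hamming distance, with the entropy formed in the usual cross-ApEn style. The tolerance, normalisation, and data are assumptions for illustration, not the authors' exact algorithm.

    ```python
    # Sketch: a simplified binarised cross-entropy in the spirit of (X)BinEn.
    import numpy as np

    def binarise(x):
        return (np.diff(x) > 0).astype(np.uint8)      # differential binary encoding

    def templates(bits, m):
        return np.array([bits[i:i + m] for i in range(len(bits) - m + 1)])

    def phi(a_bits, b_bits, m, r):
        A, B = templates(a_bits, m), templates(b_bits, m)
        # Hamming distance between every template of A and every template of B
        d = (A[:, None, :] != B[None, :, :]).sum(axis=2)
        frac = (d <= r).mean(axis=1)
        return np.log(np.clip(frac, 1e-12, None)).mean()

    def xbinen(x, y, m=2, r=0):
        xb, yb = binarise(x), binarise(y)
        return phi(xb, yb, m, r) - phi(xb, yb, m + 1, r)

    rng = np.random.default_rng(6)
    hr = np.cumsum(rng.normal(size=500))        # synthetic cardiovascular-like series
    bp = 0.6 * hr + np.cumsum(rng.normal(size=500))
    print("entropy estimate:", xbinen(hr, bp))
    ```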

  2. Parameter Estimation for Real Filtered Sinusoids

    DTIC Science & Technology

    1997-09-01

    No abstract is available for this record; the indexed text contains only fragments of the report's front matter and reference list (citations to a statistical signal processing text and to Serway's physics textbook, approval-page signatures, and a table-of-contents entry for an appendix on vector-matrix differentiation).

  3. The Response of US College Enrollment to Unexpected Changes in Macroeconomic Activity

    ERIC Educational Resources Information Center

    Ewing, Kris M.; Beckert, Kim A.; Ewing, Bradley T.

    2010-01-01

    This paper estimates the extent and magnitude of US college and university enrollment responses to unanticipated changes in macroeconomic activity. In particular, we consider the relationship between enrollment, economic growth, and inflation. A time series analysis known as a vector autoregression is estimated and impulse response functions are…

  4. A Python-based interface to examine motions in time series of solar images

    NASA Astrophysics Data System (ADS)

    Campos-Rozo, J. I.; Vargas Domínguez, S.

    2017-10-01

    Python is considered a mature programming language, and it is widely accepted as an engaging option for scientific analysis in multiple areas, as will be presented in this work for the particular case of solar physics research. SunPy is an open-source library based on Python that has recently been developed to furnish software tools for solar data analysis and visualization. In this work we present a graphical user interface (GUI) based on Python and Qt to effectively compute proper motions for the analysis of time series of solar data. This user-friendly computing interface, which is intended to be incorporated into the SunPy library, uses a local correlation tracking technique and some extra tools that allow the selection of different parameters to calculate, visualize and analyze vector velocity fields of solar data, i.e. time series of solar filtergrams and magnetograms.
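    The core computation behind such an interface, a local correlation tracking step that finds the displacement maximising the normalised cross-correlation between patches of two consecutive frames, can be sketched as follows. The patch size, search radius, and synthetic frames are illustrative choices, not the GUI's actual parameters.

    ```python
    # Sketch: one local correlation tracking step on a pair of frames.
    import numpy as np

    def best_shift(frame0, frame1, y, x, half=8, search=3):
        ref = frame0[y - half:y + half, x - half:x + half]
        ref = (ref - ref.mean()) / ref.std()
        best, best_dy, best_dx = -np.inf, 0, 0
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                win = frame1[y - half + dy:y + half + dy, x - half + dx:x + half + dx]
                win = (win - win.mean()) / win.std()
                score = (ref * win).mean()          # normalised cross-correlation
                if score > best:
                    best, best_dy, best_dx = score, dy, dx
        return best_dy, best_dx

    rng = np.random.default_rng(7)
    frame0 = rng.normal(size=(64, 64))
    frame1 = np.roll(frame0, shift=(1, 2), axis=(0, 1))   # known displacement
    print(best_shift(frame0, frame1, 32, 32))             # expect (1, 2)
    ```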

  5. Detecting a currency’s dominance using multivariate time series analysis

    NASA Astrophysics Data System (ADS)

    Syahidah Yusoff, Nur; Sharif, Shamshuritawati

    2017-09-01

    A currency exchange rate is the price of one country’s currency in terms of another country’s currency. Four different prices (opening, closing, highest, and lowest) can be obtained from daily trading activities. In the past, many studies were carried out using the closing price only. However, those four prices are interrelated, so a multivariate time series can provide more information than a univariate one. The aim of this paper is therefore to compare the results of two different approaches, the mean vector and Escoufier’s RV coefficient, in constructing similarity matrices of 20 world currencies. Both matrices are then used in place of the correlation matrix required by the network topology. With the help of the degree centrality measure, we can detect the currency’s dominance in both networks. The pros and cons of both approaches are presented at the end of this paper.
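    A minimal sketch of the RV-coefficient approach is given below: Escoufier's RV coefficient is computed between the four-price matrices of each pair of (synthetic) currencies, the resulting similarity matrix is thresholded, and degree centrality identifies the most connected currency. The data and the threshold are placeholders, not the study's 20-currency network.

    ```python
    # Sketch: RV-coefficient similarity matrix and degree centrality.
    import numpy as np

    def rv_coefficient(X, Y):
        X = X - X.mean(axis=0)
        Y = Y - Y.mean(axis=0)
        Sxy = X @ X.T @ Y @ Y.T
        Sxx = X @ X.T @ X @ X.T
        Syy = Y @ Y.T @ Y @ Y.T
        return np.trace(Sxy) / np.sqrt(np.trace(Sxx) * np.trace(Syy))

    rng = np.random.default_rng(8)
    n_days, n_cur = 250, 5
    currencies = [np.cumsum(rng.normal(size=(n_days, 4)), axis=0) for _ in range(n_cur)]

    S = np.array([[rv_coefficient(a, b) for b in currencies] for a in currencies])
    adjacency = (S > 0.5) & ~np.eye(n_cur, dtype=bool)     # hypothetical threshold
    degree = adjacency.sum(axis=1)
    print("degree centrality per currency:", degree)
    print("most dominant currency index:", int(degree.argmax()))
    ```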

  6. Cluster analysis of word frequency dynamics

    NASA Astrophysics Data System (ADS)

    Maslennikova, Yu S.; Bochkarev, V. V.; Belashova, I. A.

    2015-01-01

    This paper describes the analysis and modelling of word usage frequency time series. In a previous study, an assumption was put forward that all word usage frequencies have uniform dynamics approaching the shape of a Gaussian function. This assumption can be checked using the frequency dictionaries of the Google Books Ngram database. This database includes 5.2 million books published between 1500 and 2008. The corpus contains over 500 billion words in American English, British English, French, German, Spanish, Russian, Hebrew, and Chinese. We clustered time series of word usage frequencies using a Kohonen neural network. The similarity between input vectors was estimated using several algorithms. As a result of the neural network training procedure, more than ten different forms of time series were found. They describe the dynamics of word usage frequencies from the birth to the death of individual words. Different groups of word forms were found to have different dynamics of word usage frequency variation.
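    The sketch below trains a small one-dimensional Kohonen self-organising map on synthetic normalised frequency curves to group them by shape; the map size, training schedule, and data are simplifications, not those of the study.

    ```python
    # Sketch: a tiny 1-D Kohonen (self-organising) map clustering curve shapes.
    import numpy as np

    rng = np.random.default_rng(9)
    t = np.linspace(0, 1, 50)
    curves = np.vstack([np.exp(-((t - c) ** 2) / 0.02) + 0.05 * rng.normal(size=t.size)
                        for c in rng.uniform(0.2, 0.8, 200)])
    curves /= np.linalg.norm(curves, axis=1, keepdims=True)   # compare shapes only

    n_nodes, n_epochs = 10, 30
    weights = rng.normal(size=(n_nodes, t.size)) * 0.1

    for epoch in range(n_epochs):
        lr = 0.5 * (1 - epoch / n_epochs)                 # decaying learning rate
        radius = max(1.0, n_nodes / 2 * (1 - epoch / n_epochs))
        for x in rng.permutation(curves):
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            dist = np.abs(np.arange(n_nodes) - bmu)
            h = np.exp(-dist ** 2 / (2 * radius ** 2))    # neighbourhood function
            weights += lr * h[:, None] * (x - weights)

    assignments = np.argmin(np.linalg.norm(curves[:, None, :] - weights, axis=2), axis=1)
    print("cluster sizes:", np.bincount(assignments, minlength=n_nodes))
    ```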

  7. Mathematical Methods for Physics and Engineering Third Edition Paperback Set

    NASA Astrophysics Data System (ADS)

    Riley, Ken F.; Hobson, Mike P.; Bence, Stephen J.

    2006-06-01

    Prefaces; 1. Preliminary algebra; 2. Preliminary calculus; 3. Complex numbers and hyperbolic functions; 4. Series and limits; 5. Partial differentiation; 6. Multiple integrals; 7. Vector algebra; 8. Matrices and vector spaces; 9. Normal modes; 10. Vector calculus; 11. Line, surface and volume integrals; 12. Fourier series; 13. Integral transforms; 14. First-order ordinary differential equations; 15. Higher-order ordinary differential equations; 16. Series solutions of ordinary differential equations; 17. Eigenfunction methods for differential equations; 18. Special functions; 19. Quantum operators; 20. Partial differential equations: general and particular; 21. Partial differential equations: separation of variables; 22. Calculus of variations; 23. Integral equations; 24. Complex variables; 25. Application of complex variables; 26. Tensors; 27. Numerical methods; 28. Group theory; 29. Representation theory; 30. Probability; 31. Statistics; Index.

  8. Investigation of Time Series Representations and Similarity Measures for Structural Damage Pattern Recognition

    PubMed Central

    Swartz, R. Andrew

    2013-01-01

    This paper investigates the time series representation methods and similarity measures for sensor data feature extraction and structural damage pattern recognition. Both model-based time series representation and dimensionality reduction methods are studied to compare the effectiveness of feature extraction for damage pattern recognition. The evaluation of feature extraction methods is performed by examining the separation of feature vectors among different damage patterns and the pattern recognition success rate. In addition, the impact of similarity measures on the pattern recognition success rate and the metrics for damage localization are also investigated. The test data used in this study are from the System Identification to Monitor Civil Engineering Structures (SIMCES) Z24 Bridge damage detection tests, a rigorous instrumentation campaign that recorded the dynamic performance of a concrete box-girder bridge under progressively increasing damage scenarios. A number of progressive damage test case datasets and damage test data with different damage modalities are used. The simulation results show that both time series representation methods and similarity measures have significant impact on the pattern recognition success rate. PMID:24191136
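    One common model-based representation from this literature, AR coefficients fitted to each record and compared with a cosine similarity between feature vectors, can be sketched as follows; the simulated records stand in for the Z24 bridge data, and the model order and similarity choice are illustrative.

    ```python
    # Sketch: AR-coefficient features plus cosine similarity for damage detection.
    import numpy as np

    def ar_features(y, order=4):
        """Least-squares AR(order) coefficients used as a feature vector."""
        X = np.column_stack([y[i:len(y) - order + i] for i in range(order)])
        target = y[order:]
        coeffs, *_ = np.linalg.lstsq(X, target, rcond=None)
        return coeffs

    def cosine_similarity(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    rng = np.random.default_rng(10)
    def simulate(pole):                     # a simple AR(2)-like "structural" response
        y = np.zeros(500)
        for k in range(2, 500):
            y[k] = pole * y[k - 1] - 0.5 * y[k - 2] + rng.normal()
        return y

    healthy = ar_features(simulate(1.2))
    healthy2 = ar_features(simulate(1.2))
    damaged = ar_features(simulate(0.8))    # shifted dynamics mimic damage

    print("healthy vs healthy:", cosine_similarity(healthy, healthy2))
    print("healthy vs damaged:", cosine_similarity(healthy, damaged))
    ```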

  9. Supercomputer implementation of finite element algorithms for high speed compressible flows

    NASA Technical Reports Server (NTRS)

    Thornton, E. A.; Ramakrishnan, R.

    1986-01-01

    Prediction of compressible flow phenomena using the finite element method is of recent origin and considerable interest. Two shock capturing finite element formulations for high speed compressible flows are described. A Taylor-Galerkin formulation uses a Taylor series expansion in time coupled with a Galerkin weighted residual statement. The Taylor-Galerkin algorithms use explicit artificial dissipation, and the performance of three dissipation models is compared. A Petrov-Galerkin algorithm has as its basis the concepts of streamline upwinding. Vectorization strategies are developed to implement the finite element formulations on the NASA Langley VPS-32. The vectorization scheme results in finite element programs that use vectors whose length is of the order of the number of nodes or elements. The use of the vectorization procedure speeds up processing rates by over two orders of magnitude. The Taylor-Galerkin and Petrov-Galerkin algorithms are evaluated for 2D inviscid flows on criteria such as solution accuracy, shock resolution, computational speed and storage requirements. The convergence rates for both algorithms are enhanced by local time-stepping schemes. Extension of the vectorization procedure to the prediction of 2D viscous and 3D inviscid flows is demonstrated. Conclusions are drawn regarding the applicability of the finite element procedures to realistic problems that require hundreds of thousands of nodes.

  10. Increasing the computational efficiency of digital cross correlation by a vectorization method

    NASA Astrophysics Data System (ADS)

    Chang, Ching-Yuan; Ma, Chien-Ching

    2017-08-01

    This study presents a vectorization method for use in MATLAB programming aimed at increasing the computational efficiency of digital cross correlation in sound and images, resulting in a speedup of 6.387 and 36.044 times compared with performance values obtained from looped expression. This work bridges the gap between matrix operations and loop iteration, preserving flexibility and efficiency in program testing. This paper uses numerical simulation to verify the speedup of the proposed vectorization method as well as experiments to measure the quantitative transient displacement response subjected to dynamic impact loading. The experiment involved the use of a high speed camera as well as a fiber optic system to measure the transient displacement in a cantilever beam under impact from a steel ball. Experimental measurement data obtained from the two methods are in excellent agreement in both the time and frequency domain, with discrepancies of only 0.68%. Numerical and experiment results demonstrate the efficacy of the proposed vectorization method with regard to computational speed in signal processing and high precision in the correlation algorithm. We also present the source code with which to build MATLAB-executable functions on Windows as well as Linux platforms, and provide a series of examples to demonstrate the application of the proposed vectorization method.

  11. Time series analysis of reference crop evapotranspiration using soft computing techniques for Ganjam District, Odisha, India

    NASA Astrophysics Data System (ADS)

    Patra, S. R.

    2017-12-01

    Evapotranspiration (ET0) influences water resources and is considered a vital process in arid hydrologic frameworks. It is one of the most important measures in identifying drought conditions. Therefore, time series forecasting of evapotranspiration is very important in order to help decision makers and water system managers develop proper systems to sustain and manage water resources. Time series analysis assumes that history repeats itself; hence, by analysing past values, better choices, or forecasts, can be made for the future. Ten years of ET0 data were used in this study to ensure a satisfactory forecast of monthly values. Three models are presented: an autoregressive integrated moving average (ARIMA) mathematical model, an artificial neural network model, and a support vector machine model. These three models are used for forecasting monthly reference crop evapotranspiration based on ten years of past historical records (1991-2001) of measured evaporation in the Ganjam region, Odisha, India, without considering climate data. The developed models allow water resource managers to predict up to 12 months ahead, making these predictions very useful for optimizing the resources needed for effective water resources management. In this study multistep-ahead prediction is performed, which is more complex and troublesome than one-step-ahead prediction. Our investigation suggests that nonlinear relationships may exist among the monthly indices, so that the ARIMA model might not be able to effectively extract the full relationship hidden in the historical data. Support vector machines are potentially useful time series forecasting strategies on account of their strong nonlinear mapping capability and resistance to complexity in forecasting data. SVMs have greater learning capability in time series modelling than ANN; for instance, SVMs implement the structural risk minimization principle, which allows better generalization compared to neural networks that use the empirical risk minimization principle. The reliability of these computational models was analysed in light of the simulation results, and it was found that the SVM model produces the best results among the three. Future research should be directed at extending the validation data set and checking the validity of these results in different areas with hybrid intelligence techniques.

  12. CI2 for creating and comparing confidence-intervals for time-series bivariate plots.

    PubMed

    Mullineaux, David R

    2017-02-01

    Currently no method exists for calculating and comparing the confidence intervals (CI) for the time-series of a bivariate plot. The study's aim was to develop 'CI2' as a method to calculate the CI on time-series bivariate plots, and to identify whether the CI of two bivariate time-series overlap. The test data were the knee and ankle angles from 10 healthy participants running on a motorised standard-treadmill and a non-motorised curved-treadmill. For a recommended 10+ trials, CI2 involved calculating 95% confidence-ellipses at each time-point, then taking as the CI the points on the ellipses that were perpendicular to the direction vector between the means of two adjacent time-points. Consecutive pairs of CI created convex quadrilaterals, and any overlap of these quadrilaterals, either at the same time or at a time-lag of ±1 frame calculated using cross-correlations, indicated where the two time-series differed. CI2 showed no group differences between the left and right legs on either treadmill, but comparing the same leg between treadmills, all participants showed less knee extension on the curved-treadmill before heel-strike. To improve and standardise the use of CI2 it is recommended to remove outlier time-series, use 95% confidence-ellipses, and scale the ellipse by the fixed Chi-square value rather than the sample-size dependent F-value. For practical use, and to aid in standardisation or future development of CI2, Matlab code is provided. CI2 provides an effective method to quantify the CI of bivariate plots, and to explore the differences in CI between two bivariate time-series. Copyright © 2016 Elsevier B.V. All rights reserved.
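    The chi-square-scaled 95% confidence ellipse at a single time-point, the building block of CI2 recommended above, can be sketched as follows; the bivariate sample is synthetic and the code is an illustration, not the authors' provided Matlab implementation.

    ```python
    # Sketch: points on a 95% confidence ellipse for one time-point of a
    # bivariate sample, scaled by the fixed chi-square value.
    import numpy as np
    from scipy.stats import chi2

    def confidence_ellipse(points, prob=0.95, n_points=100):
        mean = points.mean(axis=0)
        cov = np.cov(points, rowvar=False)
        radius = np.sqrt(chi2.ppf(prob, df=2))          # fixed chi-square scaling
        theta = np.linspace(0, 2 * np.pi, n_points)
        circle = np.column_stack([np.cos(theta), np.sin(theta)])
        L = np.linalg.cholesky(cov)                     # shape the circle by the covariance
        return mean + radius * circle @ L.T

    rng = np.random.default_rng(11)
    trials = rng.multivariate_normal([20.0, 10.0], [[4.0, 1.5], [1.5, 2.0]], size=12)
    ellipse = confidence_ellipse(trials)
    print(ellipse[:3])      # first few boundary points of the ellipse
    ```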

  13. Student Solution Manual for Mathematical Methods for Physics and Engineering Third Edition

    NASA Astrophysics Data System (ADS)

    Riley, K. F.; Hobson, M. P.

    2006-03-01

    Preface; 1. Preliminary algebra; 2. Preliminary calculus; 3. Complex numbers and hyperbolic functions; 4. Series and limits; 5. Partial differentiation; 6. Multiple integrals; 7. Vector algebra; 8. Matrices and vector spaces; 9. Normal modes; 10. Vector calculus; 11. Line, surface and volume integrals; 12. Fourier series; 13. Integral transforms; 14. First-order ordinary differential equations; 15. Higher-order ordinary differential equations; 16. Series solutions of ordinary differential equations; 17. Eigenfunction methods for differential equations; 18. Special functions; 19. Quantum operators; 20. Partial differential equations: general and particular; 21. Partial differential equations: separation of variables; 22. Calculus of variations; 23. Integral equations; 24. Complex variables; 25. Application of complex variables; 26. Tensors; 27. Numerical methods; 28. Group theory; 29. Representation theory; 30. Probability; 31. Statistics.

  14. Long-term correction of canine hemophilia B by gene transfer of blood coagulation factor IX mediated by adeno-associated viral vector.

    PubMed

    Herzog, R W; Yang, E Y; Couto, L B; Hagstrom, J N; Elwell, D; Fields, P A; Burton, M; Bellinger, D A; Read, M S; Brinkhous, K M; Podsakoff, G M; Nichols, T C; Kurtzman, G J; High, K A

    1999-01-01

    Hemophilia B is a severe X-linked bleeding diathesis caused by the absence of functional blood coagulation factor IX, and is an excellent candidate for treatment of a genetic disease by gene therapy. Using an adeno-associated viral vector, we demonstrate sustained expression (>17 months) of factor IX in a large-animal model at levels that would have a therapeutic effect in humans (up to 70 ng/ml, adequate to achieve phenotypic correction, in an animal injected with 8.5x10(12) vector particles/kg). The five hemophilia B dogs treated showed stable, vector dose-dependent partial correction of the whole blood clotting time and, at higher doses, of the activated partial thromboplastin time. In contrast to other viral gene delivery systems, this minimally invasive procedure, consisting of a series of percutaneous intramuscular injections at a single timepoint, was not associated with local or systemic toxicity. Efficient gene transfer to muscle was shown by immunofluorescence staining and DNA analysis of biopsied tissue. Immune responses against factor IX were either absent or transient. These data provide strong support for the feasibility of the approach for therapy of human subjects.

  15. An Alternative Lunar Ephemeris Model for On-Board Flight Software Use

    NASA Technical Reports Server (NTRS)

    Simpson, David G.

    1998-01-01

    In calculating the position vector of the Moon in on-board flight software, one often begins by using a series expansion to calculate the ecliptic latitude and longitude of the Moon, referred to the mean ecliptic and equinox of date. One then performs a reduction for precession, followed by a rotation of the position vector from the ecliptic plane to the equator, and a transformation from spherical to Cartesian coordinates before finally arriving at the desired result: equatorial J2000 Cartesian components of the lunar position vector. An alternative method is developed here in which the equatorial J2000 Cartesian components of the lunar position vector are calculated directly by a series expansion, saving valuable on-board computer resources.

  16. Application of multivariate autoregressive spectrum estimation to ULF waves

    NASA Technical Reports Server (NTRS)

    Ioannidis, G. A.

    1975-01-01

    The estimation of the power spectrum of a time series by fitting a finite autoregressive model to the data has recently found widespread application in the physical sciences. The extension of this method to the analysis of vector time series is presented here through its application to ULF waves observed in the magnetosphere by the ATS 6 synchronous satellite. Autoregressive spectral estimates of the power and cross-power spectra of these waves are computed with computer programs developed by the author and are compared with the corresponding Blackman-Tukey spectral estimates. The resulting spectral density matrices are then analyzed to determine the direction of propagation and polarization of the observed waves.
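    The parametric (autoregressive) spectral estimate described here can be sketched by fitting a VAR model and forming the cross-spectral density matrix from its coefficients, as below; the data are synthetic and the normalisation constant is a convention that may differ from the paper's.

    ```python
    # Sketch: autoregressive estimation of the spectral density matrix of a
    # vector time series from fitted VAR coefficients.
    import numpy as np
    from statsmodels.tsa.api import VAR

    rng = np.random.default_rng(12)
    n = 1000
    x = np.zeros((n, 2))
    for t in range(1, n):                         # synthetic bivariate process
        x[t, 0] = 0.7 * x[t - 1, 0] + 0.2 * x[t - 1, 1] + rng.normal()
        x[t, 1] = -0.3 * x[t - 1, 0] + 0.5 * x[t - 1, 1] + rng.normal()

    res = VAR(x).fit(maxlags=8, ic="aic")
    A, sigma = res.coefs, res.sigma_u             # AR coefficient matrices, noise cov

    def spectral_matrix(freq):
        """Cross-spectral density matrix at normalised frequency freq (cycles/sample)."""
        k = x.shape[1]
        Af = np.eye(k, dtype=complex)
        for lag, Ak in enumerate(A, start=1):
            Af -= Ak * np.exp(-2j * np.pi * freq * lag)
        H = np.linalg.inv(Af)
        return H @ sigma @ H.conj().T / (2 * np.pi)

    S = spectral_matrix(0.1)
    print("power spectra:", S[0, 0].real, S[1, 1].real)
    print("coherence:", abs(S[0, 1]) ** 2 / (S[0, 0].real * S[1, 1].real))
    ```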

  17. The vectorization of a ray tracing program for image generation

    NASA Technical Reports Server (NTRS)

    Plunkett, D. J.; Cychosz, J. M.; Bailey, M. J.

    1984-01-01

    Ray tracing is a widely used method for producing realistic computer generated images. Ray tracing involves firing an imaginary ray from a view point, through a point on an image plane, into a three dimensional scene. The intersections of the ray with the objects in the scene determine what is visible at the point on the image plane. This process must be repeated many times, once for each point (commonly called a pixel) in the image plane. A typical image contains more than a million pixels, making this process computationally expensive. A traditional ray tracing program processes one ray at a time. In such a serial approach, as much as ninety percent of the execution time is spent computing the intersections of a ray with the surfaces in the scene. With the CYBER 205, many rays can be intersected with all the bodies in the scene with a single series of vector operations. Vectorization of this intersection process results in large decreases in computation time. The CADLAB's interest in ray tracing stems from the need to produce realistic images of mechanical parts. A high quality image of a part during the design process can increase the productivity of the designer by helping him visualize the results of his work. To be useful in the design process, these images must be produced in a reasonable amount of time. This discussion explains how the ray tracing process was vectorized and gives examples of the images obtained.
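    The vectorisation idea, intersecting many rays with a body in one set of array operations instead of looping over rays, can be illustrated in modern array terms with the numpy sketch below (the original work used CYBER 205 vector operations; this is only an analogy).

    ```python
    # Sketch: batch ray-sphere intersection without a per-ray loop.
    import numpy as np

    def ray_sphere_hits(origins, directions, center, radius):
        """Return the nearest positive hit distance per ray, or inf if no hit."""
        oc = origins - center
        b = np.einsum("ij,ij->i", directions, oc)          # per-ray dot products
        c = np.einsum("ij,ij->i", oc, oc) - radius ** 2
        disc = b ** 2 - c
        t = np.full(len(origins), np.inf)
        hit = disc >= 0
        root = np.sqrt(disc[hit])
        t_near = -b[hit] - root
        t_far = -b[hit] + root
        t_hit = np.where(t_near > 0, t_near, t_far)        # nearest intersection in front
        t[hit] = np.where(t_hit > 0, t_hit, np.inf)
        return t

    rng = np.random.default_rng(13)
    n_rays = 100_000
    origins = np.zeros((n_rays, 3))
    directions = rng.normal(size=(n_rays, 3))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)

    t = ray_sphere_hits(origins, directions, center=np.array([0.0, 0.0, 5.0]), radius=1.0)
    print("rays hitting the sphere:", np.isfinite(t).sum())
    ```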

  18. Vectors and Fomites: An Investigative Laboratory for Undergraduates.

    ERIC Educational Resources Information Center

    Adamo, Joseph A.; Gealt, Michael A.

    1996-01-01

    Presents a laboratory model system for introductory microbiology students that involves hands-on studies of bacteria vectored in soil nematodes. Describes a series of experiments designed to demonstrate vector-fomite transmission, bacterial survival, and disinfectant activity. Introduces the concept of genetically engineered microorganisms and the…

  19. Multiscale asymmetric orthogonal wavelet kernel for linear programming support vector learning and nonlinear dynamic systems identification.

    PubMed

    Lu, Zhao; Sun, Jing; Butts, Kenneth

    2014-05-01

    Support vector regression for approximating nonlinear dynamic systems is more delicate than the approximation of indicator functions in support vector classification, particularly for systems that involve multitudes of time scales in their sampled data. The kernel used for support vector learning determines the class of functions from which a support vector machine can draw its solution, and the choice of kernel significantly influences the performance of a support vector machine. In this paper, to bridge the gap between wavelet multiresolution analysis and kernel learning, the closed-form orthogonal wavelet is exploited to construct new multiscale asymmetric orthogonal wavelet kernels for linear programming support vector learning. The closed-form multiscale orthogonal wavelet kernel provides a systematic framework to implement multiscale kernel learning via dyadic dilations and also enables us to represent complex nonlinear dynamics effectively. To demonstrate the superiority of the proposed multiscale wavelet kernel in identifying complex nonlinear dynamic systems, two case studies are presented that aim at building parallel models on benchmark datasets. The development of parallel models that address the long-term/mid-term prediction issue is more intricate and challenging than the identification of series-parallel models where only one-step ahead prediction is required. Simulation results illustrate the effectiveness of the proposed multiscale kernel learning.

  20. Vector Autoregressive Models and Granger Causality in Time Series Analysis in Nursing Research: Dynamic Changes Among Vital Signs Prior to Cardiorespiratory Instability Events as an Example.

    PubMed

    Bose, Eliezer; Hravnak, Marilyn; Sereika, Susan M

    Patients undergoing continuous vital sign monitoring (heart rate [HR], respiratory rate [RR], pulse oximetry [SpO2]) in real time display interrelated vital sign changes during situations of physiological stress. Patterns in this physiological cross-talk could portend impending cardiorespiratory instability (CRI). Vector autoregressive (VAR) modeling with Granger causality tests is one of the most flexible ways to elucidate underlying causal mechanisms in time series data. The purpose of this article is to illustrate the development of patient-specific VAR models using vital sign time series data in a sample of acutely ill, monitored, step-down unit patients and determine their Granger causal dynamics prior to onset of an incident CRI. CRI was defined as vital signs beyond stipulated normality thresholds (HR = 40-140/minute, RR = 8-36/minute, SpO2 < 85%) and persisting for 3 minutes within a 5-minute moving window (60% of the duration of the window). A 6-hour time segment prior to onset of first CRI was chosen for time series modeling in 20 patients using a six-step procedure: (a) the uniform time series for each vital sign was assessed for stationarity, (b) appropriate lag was determined using a lag-length selection criteria, (c) the VAR model was constructed, (d) residual autocorrelation was assessed with the Lagrange Multiplier test, (e) stability of the VAR system was checked, and (f) Granger causality was evaluated in the final stable model. The primary cause of incident CRI was low SpO2 (60% of cases), followed by out-of-range RR (30%) and HR (10%). Granger causality testing revealed that change in RR caused change in HR (21%; i.e., RR changed before HR changed) more often than change in HR causing change in RR (15%). Similarly, changes in RR caused changes in SpO2 (15%) more often than changes in SpO2 caused changes in RR (9%). For HR and SpO2, changes in HR causing changes in SpO2 and changes in SpO2 causing changes in HR occurred with equal frequency (18%). Within this sample of acutely ill patients who experienced a CRI event, VAR modeling indicated that RR changes tend to occur before changes in HR and SpO2. These findings suggest that contextual assessment of RR changes as the earliest sign of CRI is warranted. Use of VAR modeling may be helpful in other nursing research applications based on time series data.
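    A sketch of the six-step procedure on synthetic vital-sign-like data with statsmodels is shown below; a Portmanteau whiteness test stands in for the Lagrange Multiplier residual test, and the data, lag limits, and thresholds are illustrative only.

    ```python
    # Sketch: stationarity check, lag selection, VAR fit, residual and
    # stability checks, and Granger causality tests on synthetic data.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.api import VAR
    from statsmodels.tsa.stattools import adfuller

    rng = np.random.default_rng(14)
    n = 360                                           # e.g. 6 hours of 1-minute data
    rr = np.zeros(n); hr = np.zeros(n); spo2 = np.zeros(n)
    for t in range(1, n):
        rr[t] = 0.6 * rr[t - 1] + rng.normal()
        hr[t] = 0.5 * hr[t - 1] + 0.3 * rr[t - 1] + rng.normal()    # RR drives HR
        spo2[t] = 0.7 * spo2[t - 1] - 0.2 * rr[t - 1] + rng.normal()

    data = pd.DataFrame({"HR": hr, "RR": rr, "SpO2": spo2})

    # (a) stationarity of each series (ADF p-values)
    print({c: adfuller(data[c])[1] for c in data})

    # (b) lag selection and (c) model fit
    model = VAR(data)
    lag = model.select_order(10).aic or 1             # guard against a selected order of zero
    res = model.fit(lag)

    # (d) residual autocorrelation (Portmanteau test) and (e) stability
    print("residuals white:", res.test_whiteness().pvalue > 0.05)
    print("stable VAR:", res.is_stable())

    # (f) Granger causality: does RR Granger-cause HR?
    print(res.test_causality("HR", ["RR"], kind="f").summary())
    ```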

  1. Student Solution Manual for Essential Mathematical Methods for the Physical Sciences

    NASA Astrophysics Data System (ADS)

    Riley, K. F.; Hobson, M. P.

    2011-02-01

    1. Matrices and vector spaces; 2. Vector calculus; 3. Line, surface and volume integrals; 4. Fourier series; 5. Integral transforms; 6. Higher-order ODEs; 7. Series solutions of ODEs; 8. Eigenfunction methods; 9. Special functions; 10. Partial differential equations; 11. Solution methods for PDEs; 12. Calculus of variations; 13. Integral equations; 14. Complex variables; 15. Applications of complex variables; 16. Probability; 17. Statistics.

  2. Essential Mathematical Methods for the Physical Sciences

    NASA Astrophysics Data System (ADS)

    Riley, K. F.; Hobson, M. P.

    2011-02-01

    1. Matrices and vector spaces; 2. Vector calculus; 3. Line, surface and volume integrals; 4. Fourier series; 5. Integral transforms; 6. Higher-order ODEs; 7. Series solutions of ODEs; 8. Eigenfunction methods; 9. Special functions; 10. Partial differential equations; 11. Solution methods for PDEs; 12. Calculus of variations; 13. Integral equations; 14. Complex variables; 15. Applications of complex variables; 16. Probability; 17. Statistics; Appendices; Index.

  3. Effect of removing the common mode errors on linear regression analysis of noise amplitudes in position time series of a regional GPS network & a case study of GPS stations in Southern California

    NASA Astrophysics Data System (ADS)

    Jiang, Weiping; Ma, Jun; Li, Zhao; Zhou, Xiaohui; Zhou, Boye

    2018-05-01

    The analysis of the correlations between the noise in different components of GPS stations is valuable for obtaining more accurate uncertainties of station velocities. Previous research into noise in GPS position time series focused mainly on single-component evaluation, which affects the acquisition of precise station positions, the velocity field, and its uncertainty. In this study, before and after removing the common-mode error (CME), we performed one-dimensional linear regression analysis of the noise amplitude vectors in different components of 126 GPS stations in Southern California, using a combination of white noise, flicker noise, and random walk noise. The results show that, on the one hand, there are above-moderate degrees of correlation between the white noise amplitude vectors in all components of the stations both before and after removal of the CME, while the correlations between the flicker noise amplitude vectors in the horizontal and vertical components are strengthened from uncorrelated to moderately correlated by removing the CME. On the other hand, the significance tests show that all of the obtained linear regression equations, which represent a unique function of the noise amplitude in any two components, are of practical value after removing the CME. According to the noise amplitude estimates in two components and the linear regression equations, more accurate noise amplitudes can be acquired in the two components.

  4. A Maple package for improved global mapping forecast

    NASA Astrophysics Data System (ADS)

    Carli, H.; Duarte, L. G. S.; da Mota, L. A. C. P.

    2014-03-01

    We present a Maple implementation of the well-known global approach to time series analysis, along with some further developments designed to improve the computational efficiency of the forecasting capabilities of the approach. The global approach can be summarized as a reconstruction of the phase space based on a time-ordered series of data obtained from the system. Using the reconstructed vectors, a portion of this space is then used to produce a mapping, a polynomial fitting obtained through a minimization procedure, that represents the system and can be employed to forecast further entries of the series. In the present implementation, we introduce a set of commands to perform all these tasks. For example, the command VecTS deals mainly with the reconstruction of the vectors in the phase space; the command GfiTS produces the minimization and the fitting; and ForecasTS uses these to predict the next entries. For the non-standard algorithms, we present two commands, IforecasTS and NiforecasTS, which deal with one-step and N-step forecasting, respectively. Finally, we introduce two further tools to aid the forecasting: the commands GfiTS and AnalysTS, which analyse the behavior of each portion of a series with respect to the settings used in the commands mentioned above.
    Catalogue identifier: AERW_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERW_v1_0.html
    Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 3001
    No. of bytes in distributed program, including test data, etc.: 95018
    Distribution format: tar.gz
    Programming language: Maple 14
    Computer: Any capable of running Maple
    Operating system: Any capable of running Maple. Tested on Windows ME, Windows XP, Windows 7
    RAM: 128 MB
    Classification: 4.3, 4.9, 5
    Nature of problem: Time series analysis and improving forecast capability.
    Solution method: The method of solution is partially based on a result published in [1].
    Restrictions: If the time series being analyzed contains a great amount of noise, or if the dynamical system behind the time series is of high dimensionality (Dim ≫ 3), the method may not work well.
    Unusual features: When the dynamics behind the time series is given by a low-dimensional system, our implementation can greatly improve the forecast.
    Running time: Depends strongly on the command being used.
    References: [1] Barbosa, L.M.C.R., Duarte, L.G.S., Linhares, C.A. and da Mota, L.A.C.P., Improving the global fitting method on nonlinear time series analysis, Phys. Rev. E 74, 026702 (2006).
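    For readers without Maple, the same global approach can be sketched in Python: reconstruct delay vectors from the series, fit a polynomial mapping from each delay vector to the next value by least squares, and iterate the map to forecast further entries. The embedding dimension, polynomial degree, and logistic-map test series below are illustrative choices, not part of the package.

    ```python
    # Sketch: phase-space reconstruction plus a global polynomial fit,
    # iterated to forecast the next entries of a series.
    import numpy as np
    from itertools import combinations_with_replacement

    def poly_features(V, degree=2):
        """Monomials of the delay-vector components up to the given degree."""
        cols = [np.ones(len(V))]
        for d in range(1, degree + 1):
            for idx in combinations_with_replacement(range(V.shape[1]), d):
                cols.append(np.prod(V[:, idx], axis=1))
        return np.column_stack(cols)

    def fit_global_map(series, dim=3, degree=2):
        V = np.column_stack([series[i:len(series) - dim + i] for i in range(dim)])
        X, y = poly_features(V, degree), series[dim:]
        coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coeffs

    def forecast(series, coeffs, steps, dim=3, degree=2):
        window = list(series[-dim:])
        out = []
        for _ in range(steps):
            nxt = poly_features(np.array([window]), degree) @ coeffs
            out.append(nxt[0])
            window = window[1:] + [nxt[0]]
        return np.array(out)

    # Logistic-map data as a simple low-dimensional test series
    x = np.empty(500); x[0] = 0.3
    for k in range(1, 500):
        x[k] = 3.8 * x[k - 1] * (1 - x[k - 1])

    coeffs = fit_global_map(x[:490])
    print("forecast:", forecast(x[:490], coeffs, steps=5))
    print("actual:  ", x[490:495])
    ```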

  5. A gravity model for the spread of a pollinator-borne plant pathogen.

    PubMed

    Ferrari, Matthew J; Bjørnstad, Ottar N; Partain, Jessica L; Antonovics, Janis

    2006-09-01

    Many pathogens of plants are transmitted by arthropod vectors whose movement between individual hosts is influenced by foraging behavior. Insect foraging has been shown to depend on both the quality of hosts and the distances between hosts. Given the spatial distribution of host plants and individual variation in quality, vector foraging patterns may therefore produce predictable variation in exposure to pathogens. We develop a "gravity" model to describe the spatial spread of a vector-borne plant pathogen from underlying models of insect foraging in response to host quality using the pollinator-borne smut fungus Microbotryum violaceum as a case study. We fit the model to spatially explicit time series of M. violaceum transmission in replicate experimental plots of the white campion Silene latifolia. The gravity model provides a better fit than a mean field model or a model with only distance-dependent transmission. The results highlight the importance of active vector foraging in generating spatial patterns of disease incidence and for pathogen-mediated selection for floral traits.

  6. Generating an Open Reading Frame (ORF) Entry Clone and Destination Clone.

    PubMed

    Reece-Hoyes, John S; Walhout, Albertha J M

    2018-01-02

    This protocol describes using the Gateway recombinatorial cloning system to create an Entry clone carrying an open reading frame (ORF) and then to transfer the ORF into a Destination vector. In this example, BP recombination is used to clone an ORF from a cDNA source into the Donor vector pDONR 221. The ORF from the resulting Entry clone is then transferred into the Destination vector pDEST-15; the product (the Destination clone) will express the ORF as an amino-terminal GST-fusion. The technique can be used as a guide for cloning any other DNA fragment of interest-a promoter sequence or 3' untranslated region (UTR), for example-with substitutions of different genetic material such as genomic DNA, att sites, and vectors as required. The series of constructions and transformations requires 9-15 d, not including time that may be required for sequence confirmation, if desired/necessary. © 2018 Cold Spring Harbor Laboratory Press.

  7. Origin and structures of solar eruptions II: Magnetic modeling

    NASA Astrophysics Data System (ADS)

    Guo, Yang; Cheng, Xin; Ding, MingDe

    2017-07-01

    The topology and dynamics of the three-dimensional magnetic field in the solar atmosphere govern various solar eruptive phenomena and activities, such as flares, coronal mass ejections, and filaments/prominences. We have to observe and model the vector magnetic field to understand the structures and physical mechanisms of these solar activities. Vector magnetic fields on the photosphere are routinely observed via polarized light and inferred through the inversion of Stokes profiles. To analyze these vector magnetic fields, we first need to remove the 180° ambiguity of the transverse components and correct the projection effect. The vector magnetic field can then serve, after proper preprocessing, as the boundary condition for force-free field modeling. The photospheric velocity field can also be derived from a time sequence of vector magnetic fields. The three-dimensional magnetic field can be derived and studied with theoretical force-free field models, numerical nonlinear force-free field models, magnetohydrostatic models, and magnetohydrodynamic models. Magnetic energy can be computed with three-dimensional magnetic field models or a time series of vector magnetic fields. The magnetic topology is analyzed by pinpointing the positions of magnetic null points, bald patches, and quasi-separatrix layers. As a well conserved physical quantity, magnetic helicity can be computed with various methods, such as the finite volume method, the discrete flux tube method, and the helicity flux integration method. This quantity serves as a promising parameter characterizing the activity level of solar active regions.

  8. A New Strategy for Analyzing Time-Series Data Using Dynamic Networks: Identifying Prospective Biomarkers of Hepatocellular Carcinoma.

    PubMed

    Huang, Xin; Zeng, Jun; Zhou, Lina; Hu, Chunxiu; Yin, Peiyuan; Lin, Xiaohui

    2016-08-31

    Time-series metabolomics studies can provide insight into the dynamics of disease development and facilitate the discovery of prospective biomarkers. To improve the performance of early risk identification, a new strategy for analyzing time-series data based on dynamic networks (ATSD-DN) in a systematic time dimension is proposed. In ATSD-DN, the non-overlapping ratio was applied to measure the changes in feature ratios during the process of disease development and to construct dynamic networks. Dynamic concentration analysis and network topological structure analysis were performed to extract early warning information. This strategy was applied to the study of time-series lipidomics data from a stepwise hepatocarcinogenesis rat model. A ratio of lyso-phosphatidylcholine (LPC) 18:1/free fatty acid (FFA) 20:5 was identified as the potential biomarker for hepatocellular carcinoma (HCC). It can be used to classify HCC and non-HCC rats, and the area under the curve values in the discovery and external validation sets were 0.980 and 0.972, respectively. This strategy was also compared with a weighted relative difference accumulation algorithm (wRDA), multivariate empirical Bayes statistics (MEBA) and support vector machine-recursive feature elimination (SVM-RFE). The better performance of ATSD-DN suggests its potential for a more complete presentation of time-series changes and effective extraction of early warning information.

  9. A New Strategy for Analyzing Time-Series Data Using Dynamic Networks: Identifying Prospective Biomarkers of Hepatocellular Carcinoma

    NASA Astrophysics Data System (ADS)

    Huang, Xin; Zeng, Jun; Zhou, Lina; Hu, Chunxiu; Yin, Peiyuan; Lin, Xiaohui

    2016-08-01

    Time-series metabolomics studies can provide insight into the dynamics of disease development and facilitate the discovery of prospective biomarkers. To improve the performance of early risk identification, a new strategy for analyzing time-series data based on dynamic networks (ATSD-DN) in a systematic time dimension is proposed. In ATSD-DN, the non-overlapping ratio was applied to measure the changes in feature ratios during the process of disease development and to construct dynamic networks. Dynamic concentration analysis and network topological structure analysis were performed to extract early warning information. This strategy was applied to the study of time-series lipidomics data from a stepwise hepatocarcinogenesis rat model. A ratio of lyso-phosphatidylcholine (LPC) 18:1/free fatty acid (FFA) 20:5 was identified as the potential biomarker for hepatocellular carcinoma (HCC). It can be used to classify HCC and non-HCC rats, and the area under the curve values in the discovery and external validation sets were 0.980 and 0.972, respectively. This strategy was also compared with a weighted relative difference accumulation algorithm (wRDA), multivariate empirical Bayes statistics (MEBA) and support vector machine-recursive feature elimination (SVM-RFE). The better performance of ATSD-DN suggests its potential for a more complete presentation of time-series changes and effective extraction of early warning information.

  10. How are you feeling?: A personalized methodology for predicting mental states from temporally observable physical and behavioral information.

    PubMed

    Tuarob, Suppawong; Tucker, Conrad S; Kumara, Soundar; Giles, C Lee; Pincus, Aaron L; Conroy, David E; Ram, Nilam

    2017-04-01

    It is believed that anomalous mental states such as stress and anxiety not only cause suffering for individuals, but also lead to tragedies in some extreme cases. The ability to predict the mental state of an individual at both current and future time periods could prove critical to healthcare practitioners. Currently, the practical way to predict an individual's mental state is through mental examinations that involve psychological experts performing the evaluations. However, such methods can be time and resource consuming, limiting their applicability to a wide population. Furthermore, some individuals may be unaware of their mental states or may feel uncomfortable expressing themselves during the evaluations. Hence, their anomalous mental states could remain undetected for a prolonged period of time. The objective of this work is to demonstrate the ability of advanced machine learning based approaches to generate mathematical models that predict current and future mental states of an individual. The problem of mental state prediction is transformed into a time series forecasting problem, where an individual is represented as a multivariate time series stream of monitored physical and behavioral attributes. A personalized mathematical model is then automatically generated to capture the dependencies among these attributes, which is used for prediction of mental states for each individual. In particular, we first illustrate the drawbacks of traditional multivariate time series forecasting methodologies such as vector autoregression. Then, we show that such issues can be mitigated by using machine learning regression techniques modified to capture temporal dependencies in time series data. A case study using data from 150 human participants illustrates that the proposed machine learning based forecasting methods are more suitable for high-dimensional psychological data than the traditional vector autoregressive model in terms of both magnitude of error and directional accuracy. These results not only present a successful use of machine learning techniques in psychological studies, but also serve as a building block for multiple medical applications that could rely on an automated system to gauge individuals' mental states. Copyright © 2017 Elsevier Inc. All rights reserved.
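
    The general recipe of "regression on lagged attributes" can be sketched in a few lines; the sketch below is illustrative only, as the column names, lag count, and random-forest regressor are assumptions rather than the authors' configuration:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

def make_lagged_features(df, target, lags=3):
    """Turn a multivariate time series into a supervised-learning table:
    predictors are the previous `lags` observations of every column,
    the response is the value of `target` at the current time step."""
    frames = {f"{col}_lag{k}": df[col].shift(k)
              for col in df.columns for k in range(1, lags + 1)}
    X = pd.DataFrame(frames)
    y = df[target].copy()
    keep = X.dropna().index
    return X.loc[keep], y.loc[keep]

# Hypothetical per-person stream of monitored attributes (column names invented).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "steps": rng.poisson(500, 200).astype(float),
    "sleep_hours": rng.normal(7, 1, 200),
    "stress_score": rng.normal(3, 1, 200),
})

X, y = make_lagged_features(df, target="stress_score", lags=3)
train, test = X.index[:-20], X.index[-20:]
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X.loc[train], y.loc[train])
pred = model.predict(X.loc[test])
print("MAE:", np.mean(np.abs(pred - y.loc[test].values)))
```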

  11. Kumaraswamy autoregressive moving average models for double bounded environmental data

    NASA Astrophysics Data System (ADS)

    Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme

    2017-12-01

    In this paper we introduce the Kumaraswamy autoregressive moving average models (KARMA), a dynamic class of models for time series taking values in the double bounded interval (a,b) and following the Kumaraswamy distribution. The Kumaraswamy family of distributions is widely applied in many areas, especially hydrology and related fields. Classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing inference, diagnostic analysis and forecasting. In particular, we provide closed-form expressions for the conditional score vector and conditional Fisher information matrix. An application to real environmental data is presented and discussed.
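
    A plausible form of such a dynamic median submodel, written here by analogy with other ARMA-type generalized models (the notation is assumed and not taken from the paper):

```latex
g(\mu_t) = \alpha + \mathbf{x}_t^{\top}\boldsymbol{\beta}
  + \sum_{i=1}^{p}\varphi_i\left[\,g(y_{t-i}) - \mathbf{x}_{t-i}^{\top}\boldsymbol{\beta}\,\right]
  + \sum_{j=1}^{q}\theta_j\, r_{t-j},
```

    where μ_t is the conditional median of y_t in (a,b), g(·) is a link function mapping (a,b) onto the real line, x_t are time-varying regressors, and r_t is a moving-average error term.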

  12. Registration of 4D time-series of cardiac images with multichannel Diffeomorphic Demons.

    PubMed

    Peyrat, Jean-Marc; Delingette, Hervé; Sermesant, Maxime; Pennec, Xavier; Xu, Chenyang; Ayache, Nicholas

    2008-01-01

    In this paper, we propose a generic framework for intersubject non-linear registration of 4D time-series images. In this framework, spatio-temporal registration is defined by mapping trajectories of physical points, as opposed to spatial registration, which solely aims at mapping homologous points. First, we determine the trajectories we want to register in each sequence using a motion tracking algorithm based on the Diffeomorphic Demons algorithm. Then, we simultaneously perform pairwise registrations of corresponding time-points, with the constraint of mapping the same physical points over time. We show that this trajectory registration can be formulated as a multichannel registration of 3D images. We solve it using the Diffeomorphic Demons algorithm extended to vector-valued 3D images. This framework is applied to the inter-subject non-linear registration of 4D cardiac CT sequences.

  13. 78 FR 14367 - Market Vectors ETF Trust, et al.; Notice of Application

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-05

    ...] Market Vectors ETF Trust, et al.; Notice of Application February 27, 2013. AGENCY: Securities and... companies and unit investment trusts outside of the same group of investment companies as the series to...-feeder structure. Applicants: Market Vectors ETF Trust (the ``Trust''), Van Eck Associates Corporation...

  14. The morphing of geographical features by Fourier transformation.

    PubMed

    Li, Jingzhong; Liu, Pengcheng; Yu, Wenhao; Cheng, Xiaoqiang

    2018-01-01

    This paper presents a morphing model of vector geographical data based on Fourier transformation. The model involves three main steps: conversion from vector data to Fourier series, generation of an intermediate function by combining the two Fourier series corresponding to a large scale and a small scale, and reverse conversion from the combined function back to vector data. By mirror processing, the model can also be used for morphing of linear features. Experimental results show that this method is sensitive to scale variations and can be used for continuous scale transformation of vector map features. The efficiency of this model is linearly related to the point number of the shape boundary and the interceptive value n of the Fourier expansion. The effect of morphing by Fourier transformation is plausible and the efficiency of the algorithm is acceptable.
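
    The core idea (represent a closed boundary as a truncated Fourier series and blend the coefficients of the two scales) can be sketched with NumPy; the shapes, point counts and truncation level below are invented for illustration and this is not the paper's implementation:

```python
import numpy as np

def fourier_descriptors(boundary, n_terms):
    """Complex Fourier coefficients of a closed boundary given as an
    (N, 2) array of x, y vertices sampled along the outline, truncated
    to the lowest n_terms positive and negative frequencies."""
    z = boundary[:, 0] + 1j * boundary[:, 1]
    coeffs = np.fft.fft(z) / len(z)
    freq = np.fft.fftfreq(len(z)) * len(z)
    keep = np.zeros_like(coeffs)
    keep[np.abs(freq) <= n_terms] = coeffs[np.abs(freq) <= n_terms]
    return keep

def morph(boundary_a, boundary_b, t, n_terms=16):
    """Intermediate shape between two boundaries with the same number of
    vertices: linearly blend their truncated Fourier series and invert."""
    ca = fourier_descriptors(boundary_a, n_terms)
    cb = fourier_descriptors(boundary_b, n_terms)
    c = (1.0 - t) * ca + t * cb            # combination function
    z = np.fft.ifft(c * len(c))            # back to the spatial domain
    return np.column_stack([z.real, z.imag])

# Toy example: morph a circle toward a rounded-square outline.
s = np.linspace(0, 2 * np.pi, 256, endpoint=False)
circle = np.column_stack([np.cos(s), np.sin(s)])
square = np.column_stack([np.clip(1.3 * np.cos(s), -1, 1),
                          np.clip(1.3 * np.sin(s), -1, 1)])
halfway = morph(circle, square, t=0.5)
print(halfway.shape)  # (256, 2)
```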

  15. Wind data mining by Kohonen Neural Networks.

    PubMed

    Fayos, José; Fayos, Carolina

    2007-02-14

    Time series of Circulation Weather Type (CWT), including daily averaged wind direction and vorticity, are self-classified by similarity using Kohonen Neural Networks (KNN). It is shown that KNN is able to map by similarity all 7300 five-day CWT sequences during the period 1975-94 in London, United Kingdom. As a first result, it gives the most probable wind sequences preceding each of the 27 CWT Lamb classes in that period. Conversely, as a second result, the observed diffuse correlation between five-day CWT sequences and the CWT of the 6th day over the long 20-year period can be generalized to predict the latter from the preceding CWT sequence in a different test period, such as 1995, as both time series are similar. Although the average prediction error is comparable to that obtained by standard forecasting methods, the KNN approach gives complementary results, as they depend only on an objective classification of observed CWT data, without any model assumption. The 27 CWT of the Lamb Catalogue were coded with binary three-dimensional vectors pointing to the faces, edges and vertices of a "wind-cube," so that similar CWT vectors were close.

  16. Finite-element time-domain modeling of electromagnetic data in general dispersive medium using adaptive Padé series

    NASA Astrophysics Data System (ADS)

    Cai, Hongzhu; Hu, Xiangyun; Xiong, Bin; Zhdanov, Michael S.

    2017-12-01

    The induced polarization (IP) method has been widely used in geophysical exploration to identify chargeable targets such as mineral deposits. Inversion of IP data requires modeling the IP response of 3D dispersive conductive structures. We have developed an edge-based finite-element time-domain (FETD) modeling method to simulate the electromagnetic (EM) fields in 3D dispersive media. We solve the vector Helmholtz equation for the total electric field using the edge-based finite-element method with an unstructured tetrahedral mesh. We adopt the backward Euler method, which is unconditionally stable, with semi-adaptive time stepping for the time-domain discretization. We use a direct solver based on a sparse LU decomposition to solve the system of equations. We consider the Cole-Cole model in order to take into account the frequency-dependent conductivity dispersion. The Cole-Cole conductivity model in the frequency domain is expanded using a truncated Padé series with adaptive selection of the center frequency of the series for early and late times. This approach can significantly increase the accuracy of FETD modeling.
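
    For reference, the Cole-Cole dispersion referred to above is usually written in the Pelton resistivity form, with chargeability m, time constant τ and frequency exponent c (symbols here follow common IP usage; the paper may use an equivalent conductivity form σ(ω) = 1/ρ(ω)):

```latex
\rho(\omega) = \rho_0\left[\,1 - m\left(1 - \frac{1}{1 + (i\omega\tau)^{c}}\right)\right].
```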

  17. The Gaussian Graphical Model in Cross-Sectional and Time-Series Data.

    PubMed

    Epskamp, Sacha; Waldorp, Lourens J; Mõttus, René; Borsboom, Denny

    2018-04-16

    We discuss the Gaussian graphical model (GGM; an undirected network of partial correlation coefficients) and detail its utility as an exploratory data analysis tool. The GGM shows which variables predict one another, allows for sparse modeling of covariance structures, and may highlight potential causal relationships between observed variables. We describe its utility in three kinds of psychological data sets: data sets in which consecutive cases are assumed independent (e.g., cross-sectional data), temporally ordered data sets (e.g., n = 1 time series), and a mixture of the two (e.g., n > 1 time series). In time-series analysis, the GGM can be used to model the residual structure of a vector-autoregression analysis (VAR), also termed graphical VAR. Two network models can then be obtained: a temporal network and a contemporaneous network. When analyzing data from multiple subjects, a GGM can also be formed on the covariance structure of stationary means (the between-subjects network). We discuss the interpretation of these models and propose estimation methods to obtain these networks, which we implement in the R packages graphicalVAR and mlVAR. The methods are showcased in two empirical examples, and simulation studies on these methods are included in the supplementary materials.
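
    The authors' implementation is in the R packages named above; purely as an illustration of the cross-sectional case, a GGM can also be estimated by regularizing the inverse covariance matrix, for example with scikit-learn's graphical lasso (the data and the planted dependencies below are invented):

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(1)
# Hypothetical cross-sectional data: 300 cases, 6 observed variables.
X = rng.normal(size=(300, 6))
X[:, 1] += 0.6 * X[:, 0]          # plant a couple of conditional dependencies
X[:, 4] += 0.5 * X[:, 3]

model = GraphicalLassoCV().fit(X)
P = model.precision_              # sparse inverse covariance matrix

# Convert the precision matrix to partial correlations (the GGM edge weights).
d = np.sqrt(np.diag(P))
partial_corr = -P / np.outer(d, d)
np.fill_diagonal(partial_corr, 0.0)
print(np.round(partial_corr, 2))
```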

  18. Crop Identification Using Time Series of Landsat-8 and Radarsat-2 Images: Application in a Groundwater Irrigated Region, South India

    NASA Astrophysics Data System (ADS)

    Sharma, A. K.; Hubert-Moy, L.; Betbederet, J.; Ruiz, L.; Sekhar, M.; Corgne, S.

    2016-08-01

    Monitoring land use and land cover, and more particularly irrigated cropland dynamics, is of great importance for water resources management and land use planning. The objective of this study was to evaluate the combined use of multi-temporal optical and radar data with a high spatial resolution in order to improve the precision of irrigated crop identification by taking into account information on crop phenological stages. SAR and optical parameters were derived from time-series of seven quad-pol RADARSAT-2 and four Landsat-8 images acquired over the Berambadi catchment, South India, during the monsoon crop season at the growth stages of the turmeric crop. To select the best parameters for discriminating turmeric crops, an analysis of covariance (ANCOVA) was applied to all the time-series parameters, and the most discriminant ones were classified using the Support Vector Machine (SVM) technique. Results show that, in the absence of optical images, polarimetric parameters derived from SAR time-series can be used for turmeric area estimates, and that the combined use of SAR and optical parameters can improve the classification accuracy for identifying turmeric.

  19. Ecology of West Nile virus across four European countries: empirical modelling of the Culex pipiens abundance dynamics as a function of weather.

    PubMed

    Groen, Thomas A; L'Ambert, Gregory; Bellini, Romeo; Chaskopoulou, Alexandra; Petric, Dusan; Zgomba, Marija; Marrama, Laurence; Bicout, Dominique J

    2017-10-26

    Culex pipiens is the major vector of West Nile virus in Europe, causing frequent outbreaks throughout the southern part of the continent. Proper empirical modelling of the population dynamics of this species can help in understanding West Nile virus epidemiology and in optimizing vector surveillance and mosquito control efforts, but modelling results may differ from place to place. In this study we examine which types of models and weather variables can be used consistently across different locations. Weekly mosquito trap collections from eight functional units located in France, Greece, Italy and Serbia, spanning several years, were combined. Additionally, rainfall, relative humidity and temperature were recorded. Correlations between lagged weather conditions and Cx. pipiens dynamics were analysed. Seasonal autoregressive integrated moving-average (SARIMA) models were also fitted to describe the temporal dynamics of Cx. pipiens and to check whether the weather variables could improve these models. Correlations were strongest with mean temperatures at short time lags, followed by relative humidity, most likely due to collinearity. Precipitation alone had weak correlations and inconsistent patterns across sites. SARIMA models could also make reasonable predictions, especially when longer time series of Cx. pipiens observations are available. Average temperature was a consistently good predictor across sites. When only short time series (less than about 4 years) of observations are available, average temperature can therefore be used to model Cx. pipiens dynamics. When longer time series (more than about 4 years) are available, SARIMAs can provide better statistical descriptions of Cx. pipiens dynamics without the need for further weather variables. This suggests that density dependence is also an important determinant of Cx. pipiens dynamics.
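
    A seasonal ARIMA with a lagged weather covariate of the kind described can be fitted with statsmodels; everything below (the synthetic data, the lag, and the model orders) is invented for illustration and is not the study's specification:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical weekly trap counts and mean temperature (values invented).
rng = np.random.default_rng(2)
weeks = pd.date_range("2015-01-04", periods=4 * 52, freq="W")
temp = 15 + 10 * np.sin(2 * np.pi * np.arange(len(weeks)) / 52) + rng.normal(0, 1, len(weeks))
counts = np.maximum(0, 5 + 0.8 * temp + rng.normal(0, 3, len(weeks)))

# Seasonal ARIMA with a yearly (52-week) cycle and lagged temperature as an
# exogenous regressor; the orders here are placeholders, not the paper's.
y = pd.Series(np.log1p(counts), index=weeks)
exog = pd.Series(temp, index=weeks).shift(2).bfill()   # ~2-week lag
model = SARIMAX(y, exog=exog, order=(1, 0, 1), seasonal_order=(1, 0, 0, 52))
fit = model.fit(disp=False)
print(fit.summary().tables[1])
```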

  20. The Quasi-Biennial Oscillation and Ross River virus incidence in Queensland, Australia.

    PubMed

    Done, Sinead J; Holbrook, Neil J; Beggs, Paul J

    2002-09-01

    Ross River virus (RRV) is the most important vector-borne disease in Australia. The National Notifiable Diseases Surveillance System has confirmed that its incidence is often greatest in the state of Queensland, where there is a clear seasonal pattern as well as interannual variability. Previous studies have examined relationships between large-scale climate fluctuations (such as El Niño Southern Oscillation) and vector-borne disease. No previous study has examined such relationships with the Quasi-Biennial Oscillation (QBO), another large-scale climate fluctuation. We employ time-series analysis techniques to investigate cycles inherent in monthly RRV incidence in Queensland, Australia, from January 1991 to December 1997 inclusive. The presence of a quasi-biennial cycle in the RRV time series that is out of phase with the climatic QBO is described. Quantitative analyses using correlograms and periodograms demonstrate that the quasi-biennial cycle in the RRV time series is statistically significant, at the 95% level, above the noise. Together with the seasonal cycle, the quasi-biennial cycle accounts for 77% of the variance in Queensland RRV cases. Regression analysis of QBO and summer rainfall in three climatic zones of Queensland indicates a significant association between QBO and rainfall in the subtropical southeastern part of the state. These results suggest an indirect influence of the QBO on RRV incidence in Queensland, via its influence on climate in this region. Our findings indicate that the QBO may be a useful predictor of RRV at several months lead, and might be used by public health authorities in the management and prevention of this disease.

  1. Memory persistency and nonlinearity in daily mean dew point across India

    NASA Astrophysics Data System (ADS)

    Ray, Rajdeep; Khondekar, Mofazzal Hossain; Ghosh, Koushik; Bhattacharjee, Anup Kumar

    2016-04-01

    In this work we estimate the memory persistence of the daily mean dew point time series obtained from seven weather stations, viz. Kolkata, Chennai (Madras), New Delhi, Mumbai (Bombay), Bhopal, Agartala and Ahmedabad, representing different geographical zones of India. Hurst exponent values reveal an anti-persistent behaviour of these dew point series. To corroborate the Hurst exponent values, five different scaling methods have been used and their results compared to arrive at a more reliable conclusion. The analysis also indicates that the variation in daily mean dew point is governed by a non-stationary process with stationary increments. The delay vector variance (DVV) method has been exploited to investigate nonlinearity, and the present calculation confirms the presence of a deterministic nonlinear profile in the daily mean dew point time series of the seven stations.
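
    One classical scaling estimator of the Hurst exponent, rescaled-range (R/S) analysis, fits the slope of log(R/S) against log(window length); the window scheme and synthetic data below are assumptions and not the paper's procedure:

```python
import numpy as np

def hurst_rs(x, min_window=8):
    """Estimate the Hurst exponent of a 1-D series by rescaled-range (R/S)
    analysis: slope of log(mean R/S) versus log(window size)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes = np.unique(np.floor(n / np.arange(2, n // min_window)).astype(int))
    sizes = sizes[sizes >= min_window]
    log_rs, log_n = [], []
    for w in sizes:
        rs = []
        for start in range(0, n - w + 1, w):        # non-overlapping windows
            seg = x[start:start + w]
            dev = np.cumsum(seg - seg.mean())
            r = dev.max() - dev.min()
            s = seg.std(ddof=1)
            if s > 0:
                rs.append(r / s)
        if rs:
            log_rs.append(np.log(np.mean(rs)))
            log_n.append(np.log(w))
    slope, _ = np.polyfit(log_n, log_rs, 1)
    return slope

# White noise should give H close to 0.5; H < 0.5 indicates anti-persistence.
rng = np.random.default_rng(3)
print(round(hurst_rs(rng.normal(size=4000)), 2))
```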

  2. Method of multiplexed analysis using ion mobility spectrometer

    DOEpatents

    Belov, Mikhail E [Richland, WA; Smith, Richard D [Richland, WA

    2009-06-02

    A method for analyzing analytes from a sample introduced into a Spectrometer by generating a pseudo-random sequence of modulation bins, organizing each modulation bin as a series of submodulation bins, thereby forming an extended pseudo-random sequence of submodulation bins, releasing the analytes in a series of analyte packets into a Spectrometer, thereby generating an unknown original ion signal vector, detecting the analytes at a detector, and characterizing the sample using the plurality of analyte signal subvectors. The method is advantageously applied to an Ion Mobility Spectrometer, and to an Ion Mobility Spectrometer interfaced with a Time of Flight Mass Spectrometer.

  3. Support vector machines for TEC seismo-ionospheric anomalies detection

    NASA Astrophysics Data System (ADS)

    Akhoondzadeh, M.

    2013-02-01

    Using time series prediction methods, it is possible to track the behavior of earthquake precursors and to issue early warnings when the difference between the predicted and observed values exceeds a predefined threshold. Support Vector Machines (SVMs) are widely used due to their many advantages for classification and regression tasks. This study investigates Total Electron Content (TEC) time series using an SVM to detect seismo-ionospheric anomalous variations induced by three powerful earthquakes: Tohoku (11 March 2011), Haiti (12 January 2010) and Samoa (29 September 2009). The TEC time series span 49, 46 and 71 days for the Tohoku, Haiti and Samoa earthquakes, respectively, each at a time resolution of 2 h. For the Tohoku earthquake, the results show that the difference between the value predicted by the SVM method and the observed value reaches its maximum (129.31 TECU) at the earthquake time, during a period of high geomagnetic activity. The SVM method detected a considerable number of anomalous occurrences 1 and 2 days prior to the Haiti earthquake and 1 and 5 days before the Samoa earthquake, during periods of low geomagnetic activity. To show that the method behaves sensibly for both event and non-event TEC data, i.e., to perform null-hypothesis tests in which the method is also calibrated, the same period of data from the year preceding the Samoa earthquake was also analyzed. The TEC anomalies detected using the SVM method were further compared to previous results (Akhoondzadeh and Saradjian, 2011; Akhoondzadeh, 2012) obtained with the mean, median, wavelet and Kalman filter methods. The SVM-detected anomalies are similar to those detected using the previous methods. It can be concluded that SVM can be a suitable learning method to detect novelty changes in a nonlinear time series such as variations of earthquake precursors.
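
    In outline, such a detector predicts each TEC sample from its recent past and flags large prediction errors; the sketch below only illustrates that idea (lag count, kernel settings, threshold, and the synthetic series are assumptions, not the study's configuration):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def svr_anomalies(tec, n_lags=12, k=2.5):
    """Predict each sample of a (synthetic) 2-hourly TEC series from its
    previous n_lags samples with an SVM regressor and flag samples whose
    absolute prediction error exceeds k standard deviations of the errors."""
    X = np.column_stack([tec[i:len(tec) - n_lags + i] for i in range(n_lags)])
    y = tec[n_lags:]
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
    model.fit(X, y)
    resid = y - model.predict(X)
    threshold = k * resid.std()
    return np.where(np.abs(resid) > threshold)[0] + n_lags

rng = np.random.default_rng(4)
t = np.arange(49 * 12)                            # 49 days at 2-hour resolution
tec = 20 + 8 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, len(t))
tec[500] += 15                                    # injected anomaly
print(svr_anomalies(tec))
```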

  4. Climate Cycles and Forecasts of Cutaneous Leishmaniasis, a Nonstationary Vector-Borne Disease

    PubMed Central

    Chaves, Luis Fernando; Pascual, Mercedes

    2006-01-01

    Background Cutaneous leishmaniasis (CL) is one of the main emergent diseases in the Americas. As in other vector-transmitted diseases, its transmission is sensitive to the physical environment, but no study has addressed the nonstationary nature of such relationships or the interannual patterns of cycling of the disease. Methods and Findings We studied monthly data, spanning from 1991 to 2001, of CL incidence in Costa Rica using several approaches for nonstationary time series analysis in order to ensure robustness in the description of CL's cycles. Interannual cycles of the disease and the association of these cycles to climate variables were described using frequency and time-frequency techniques for time series analysis. We fitted linear models to the data using climatic predictors, and tested forecasting accuracy for several intervals of time. Forecasts were evaluated using “out of fit” data (i.e., data not used to fit the models). We showed that CL has cycles of approximately 3 y that are coherent with those of temperature and El Niño Southern Oscillation indices (Sea Surface Temperature 4 and Multivariate ENSO Index). Conclusions Linear models using temperature and MEI can predict satisfactorily CL incidence dynamics up to 12 mo ahead, with an accuracy that varies from 72% to 77% depending on prediction time. They clearly outperform simpler models with no climate predictors, a finding that further supports a dynamical link between the disease and climate. PMID:16903778

  5. On vector-valued Poincaré series of weight 2

    NASA Astrophysics Data System (ADS)

    Meneses, Claudio

    2017-10-01

    Given a pair (Γ , ρ) of a Fuchsian group of the first kind, and a unitary representation ρ of Γ of arbitrary rank, the problem of construction of vector-valued Poincaré series of weight 2 is considered. Implications in the theory of parabolic bundles are discussed. When the genus of the group is zero, it is shown how an explicit basis for the space of these functions can be constructed.

  6. Approximate techniques of structural reanalysis

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Lowder, H. E.

    1974-01-01

    A study is made of two approximate techniques for structural reanalysis: Taylor series expansions of response variables in terms of design variables, and the reduced-basis method. In addition, modifications to these techniques are proposed to overcome some of their major drawbacks. The modifications include a rational approach to the selection of the reduced-basis vectors and the use of the Taylor series approximation in an iterative process. For the reduced basis, a normalized set of vectors is chosen which consists of the original analyzed design and the first-order sensitivity analysis vectors. The use of the Taylor series approximation as a first (initial) estimate in an iterative process can lead to significant improvements in accuracy, even with one iteration cycle. Therefore, the range of applicability of the reanalysis technique can be extended. Numerical examples are presented which demonstrate the gain in accuracy obtained by using the proposed modification techniques for a wide range of variations in the design variables.

  7. Factors Contributing to the Interrupted Decay of Hurricane Joaquin (2015) in a Moderate Vertical Wind Shear Environment

    DTIC Science & Technology

    2017-06-01

    The storm reached maximum winds of 135 knots (kt) and a minimum sea-level pressure of 934 millibars (mb) at 1200 UTC 3 October; 1800 UTC 4 October marked the interruption of its rapid decay. [The remainder of the indexed excerpt consists of figure-caption fragments: SHIPS shear and 200-mb divergence (referencing DeMaria et al. 2005), and a time series of CIMSS vertical wind shear (VWS) magnitude (m/s) and direction (degrees, from which the VWS vector is coming).]

  8. The morphing of geographical features by Fourier transformation

    PubMed Central

    Liu, Pengcheng; Yu, Wenhao; Cheng, Xiaoqiang

    2018-01-01

    This paper presents a morphing model of vector geographical data based on Fourier transformation. The model involves three main steps: conversion from vector data to Fourier series, generation of an intermediate function by combining the two Fourier series corresponding to a large scale and a small scale, and reverse conversion from the combined function back to vector data. By mirror processing, the model can also be used for morphing of linear features. Experimental results show that this method is sensitive to scale variations and can be used for continuous scale transformation of vector map features. The efficiency of this model is linearly related to the point number of the shape boundary and the interceptive value n of the Fourier expansion. The effect of morphing by Fourier transformation is plausible and the efficiency of the algorithm is acceptable. PMID:29351344

  9. How to Maneuver Around in Eccentricity Vector Space

    NASA Technical Reports Server (NTRS)

    Sweetser, Theodore H.

    2010-01-01

    The GRAIL mission to the Moon will be the first time that two separate robotic orbiters will be placed into formation in orbit around a body other than Earth. The need to design an efficient series of maneuvers to shape the orbits and phasing of the two orbiters after arrival presents a significant challenge to mission designers. This paper presents a simple geometric method for relating in-plane impulsive maneuvers to changes in the eccentricity vector, which determines the shape and orientation of an orbit in the orbit plane. Examples then show how such maneuvers can accommodate desired changes to other orbital elements such as period, inclination, and longitude of the ascending node.
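
    As background (a standard first-order, near-circular-orbit linearization, not the paper's own construction), an impulse with radial component Δv_r and along-track component Δv_t applied at argument of latitude u shifts the eccentricity vector e = (e_x, e_y) approximately as:

```latex
\Delta e_x \approx \frac{1}{v}\left(\Delta v_r \sin u + 2\,\Delta v_t \cos u\right),
\qquad
\Delta e_y \approx \frac{1}{v}\left(-\Delta v_r \cos u + 2\,\Delta v_t \sin u\right),
```

    where v is the near-circular orbital speed. A purely tangential burn thus moves the eccentricity vector by about 2Δv_t/v toward the burn location, while a purely radial burn moves it by Δv_r/v at right angles to that direction.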

  10. Application of data cubes for improving detection of water cycle extreme events

    NASA Astrophysics Data System (ADS)

    Teng, W. L.; Albayrak, A.

    2015-12-01

    As part of an ongoing NASA-funded project to remove a longstanding barrier to accessing NASA data (i.e., accessing archived time-step array data as point-time series), for the hydrology and other point-time series-oriented communities, "data cubes" are created from which time series files (aka "data rods") are generated on-the-fly and made available as Web services from the Goddard Earth Sciences Data and Information Services Center (GES DISC). Data cubes are data as archived rearranged into spatio-temporal matrices, which allow for easy access to the data, both spatially and temporally. A data cube is a specific case of the general optimal strategy of reorganizing data to match the desired means of access. The gain from such reorganization is greater the larger the data set. As a use case for our project, we are leveraging existing software to explore the application of the data cubes concept to machine learning, for the purpose of detecting water cycle extreme (WCE) events, a specific case of anomaly detection, requiring time series data. We investigate the use of the sequential probability ratio test (SPRT) for anomaly detection and support vector machines (SVM) for anomaly classification. We show an example of detection of WCE events, using the Global Land Data Assimilation Systems (GLDAS) data set.
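
    The SPRT mentioned above accumulates a log-likelihood ratio and declares an anomaly when it crosses a decision boundary; a minimal sketch for a Gaussian mean shift (thresholds, parameters, and the synthetic series are assumptions, not the project's configuration):

```python
import numpy as np

def sprt_mean_shift(x, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Sequential probability ratio test for a shift in the mean of a
    Gaussian series from mu0 to mu1: returns the index at which H1 is
    accepted (anomaly declared), or None if the series ends first."""
    a = np.log(beta / (1 - alpha))        # lower (accept H0) boundary
    b = np.log((1 - beta) / alpha)        # upper (accept H1) boundary
    llr = 0.0
    for i, xi in enumerate(x):
        llr += (xi - mu0) ** 2 / (2 * sigma ** 2) - (xi - mu1) ** 2 / (2 * sigma ** 2)
        if llr >= b:
            return i                      # evidence for the shifted regime
        if llr <= a:
            llr = 0.0                     # restart after accepting H0
    return None

rng = np.random.default_rng(5)
series = np.concatenate([rng.normal(0.0, 1.0, 300),   # normal regime
                         rng.normal(1.5, 1.0, 50)])   # "extreme" regime
print(sprt_mean_shift(series, mu0=0.0, mu1=1.5, sigma=1.0))
```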

  11. Application of Data Cubes for Improving Detection of Water Cycle Extreme Events

    NASA Technical Reports Server (NTRS)

    Albayrak, Arif; Teng, William

    2015-01-01

    As part of an ongoing NASA-funded project to remove a longstanding barrier to accessing NASA data (i.e., accessing archived time-step array data as point-time series), for the hydrology and other point-time series-oriented communities, "data cubes" are created from which time series files (aka "data rods") are generated on-the-fly and made available as Web services from the Goddard Earth Sciences Data and Information Services Center (GES DISC). Data cubes are data as archived rearranged into spatio-temporal matrices, which allow for easy access to the data, both spatially and temporally. A data cube is a specific case of the general optimal strategy of reorganizing data to match the desired means of access. The gain from such reorganization is greater the larger the data set. As a use case of our project, we are leveraging existing software to explore the application of the data cubes concept to machine learning, for the purpose of detecting water cycle extreme events, a specific case of anomaly detection, requiring time series data. We investigate the use of support vector machines (SVM) for anomaly classification. We show an example of detection of water cycle extreme events, using data from the Tropical Rainfall Measuring Mission (TRMM).

  12. SHARPs - A Near-Real-Time Space Weather Data Product from HMI

    NASA Astrophysics Data System (ADS)

    Bobra, M.; Turmon, M.; Baldner, C.; Sun, X.; Hoeksema, J. T.

    2012-12-01

    A data product from the Helioseismic and Magnetic Imager (HMI) on the Solar Dynamics Observatory (SDO), called Space-weather HMI Active Region Patches (SHARPs), is now available through the SDO Joint Science Operations Center (JSOC) and the Virtual Solar Observatory. SHARPs are magnetically active regions identified on the solar disk and tracked automatically in time. SHARP data are processed within a few hours of the observation time. The SHARP data series contains active region-sized disambiguated vector magnetic field data in both Lambert Cylindrical Equal-Area and CCD coordinates on a 12 minute cadence. The series also provides simultaneous HMI maps of the line-of-sight magnetic field, continuum intensity, and velocity on the same ~0.5 arc-second pixel grid. In addition, the SHARP data series provides space weather quantities computed on the inverted, disambiguated, and remapped data. The values for each tracked region are computed and updated in near real time. We present space weather results for several X-class flares; furthermore, we compare said space weather quantities with helioseismic quantities calculated using ring-diagram analysis.

  13. 3D landslide motion from a UAV-derived time-series of morphological attributes

    NASA Astrophysics Data System (ADS)

    Valasia Peppa, Maria; Mills, Jon Philip; Moore, Philip; Miller, Pauline; Chambers, Jon

    2017-04-01

    Landslides are recognised as dynamic and significantly hazardous phenomena. Time-series observations can improve the understanding of a landslide's complex behaviour and aid assessment of its geometry and kinematics. Conventional quantification of landslide motion involves the installation of survey markers into the ground at discrete locations and periodic observations over time. However, such surveying is labour intensive, provides limited spatial resolution, is occasionally hazardous for steep terrain, or even impossible for inaccessible mountainous areas. The emergence of mini unmanned aerial vehicles (UAVs) equipped with off-the-shelf compact cameras, alongside the structure-from-motion (SfM) photogrammetric pipeline and modern pixel-based matching approaches, has expedited the automatic generation of high resolution digital elevation models (DEMs). Moreover, cross-correlation functions applied to finely co-registered consecutive orthomosaics and/or DEMs have been widely used to determine the displacement of moving features in an automated way, resulting in high spatial resolution motion vectors. This research focuses on estimating the 3D displacement field of an active, slow-moving earth slide-earth flow landslide located in Lias mudrocks of North Yorkshire, UK, with the ultimate aim of assessing landslide deformation patterns. The landslide extends approximately 290 m E-W and 230 m N-S, with an average slope of 12° and a 50 m elevation difference from N to S. Cross-correlation functions were applied to an eighteen-month, UAV-derived time-series of morphological attributes in order to determine motion vectors for subsequent landslide analysis. A self-calibrating bundle adjustment was first incorporated into the SfM pipeline and utilised to process imagery acquired using a Panasonic Lumix DMC-LX5 compact camera from a mini fixed-wing Quest 300 UAV, with a 2 m wingspan and maximum 5 kg payload. Data from six field campaigns were used to generate a DEM time-series at 6 cm spatial resolution. DEMs were georeferenced into a common reference frame using control information from surveyed ground control points. The accuracy of the co-registration was estimated from planimetric and vertical RMS errors at independent checkpoints as 4 cm and 3 cm, respectively. Afterwards, various morphological attributes, including shaded relief, curvature and openness, were calculated from the UAV-derived DEMs. These attributes are indicative of the local structures of discernible geomorphological features (e.g. scarps, ridges, cracks, etc.), the motion of which can be monitored using the cross-correlation algorithm. Multiple experiments were conducted to test the performance of the cross-correlation function implemented on successive epochs. Two benchmark datasets were used for validation of the cross-correlation results: (a) the motion vectors generated from the surveyed 3D positions of installed markers, and (b) the calculated displacements of features manually tracked from successive UAV-derived orthomosaics. Both benchmark datasets detected a maximum planimetric displacement of approximately 1 m at the foot of the landslide, with a dominant N-S orientation, between December 2014 and May 2016. Preliminary cross-correlation results illustrated a similar planimetric motion in both magnitude and orientation; however, user intervention was required to filter spurious displacement vectors.

  14. Baroreflex Coupling Assessed by Cross-Compression Entropy

    PubMed Central

    Schumann, Andy; Schulz, Steffen; Voss, Andreas; Scharbrodt, Susann; Baumert, Mathias; Bär, Karl-Jürgen

    2017-01-01

    Estimating interactions between physiological systems is an important challenge in modern biomedical research. Here, we explore a new concept for quantifying information common in two time series by cross-compressibility. Cross-compression entropy (CCE) exploits the ZIP data compression algorithm extended to bivariate data analysis. First, time series are transformed into symbol vectors. Symbols of the target time series are coded by the symbols of the source series. Uncoupled and linearly coupled surrogates were derived from cardiovascular recordings of 36 healthy controls obtained during rest to demonstrate suitability of this method for assessing physiological coupling. CCE at rest was compared to that of isometric handgrip exercise. Finally, spontaneous baroreflex interaction assessed by CCEBRS was compared between 21 patients suffering from acute schizophrenia and 21 matched controls. The CCEBRS of original time series was significantly higher than in uncoupled surrogates in 89% of the subjects and higher than in linearly coupled surrogates in 47% of the subjects. Handgrip exercise led to sympathetic activation and vagal inhibition accompanied by reduced baroreflex sensitivity. CCEBRS decreased from 0.553 ± 0.030 at rest to 0.514 ± 0.035 during exercise (p < 0.001). In acute schizophrenia, heart rate, and blood pressure were elevated. Heart rate variability indicated a change of sympathovagal balance. The CCEBRS of patients with schizophrenia was reduced compared to healthy controls (0.546 ± 0.042 vs. 0.507 ± 0.046, p < 0.01) and revealed a decrease of blood pressure influence on heart rate in patients with schizophrenia. Our results indicate that CCE is suitable for the investigation of linear and non-linear coupling in cardiovascular time series. CCE can quantify causal interactions in short, noisy and non-stationary physiological time series. PMID:28539889
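
    The exact CCE algorithm is the authors'; as a rough illustration of the underlying idea (how much knowing one symbolized series reduces the cost of compressing another), compressed lengths from a stock compressor such as zlib can be compared. The symbolization, normalization, and surrogate series below are simplifications, not the published method:

```python
import zlib
import numpy as np

def symbolize(x, n_bins=6):
    """Map a numeric series to a byte string of quantile-bin symbols."""
    bins = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    return bytes(np.digitize(x, bins).astype(np.uint8))

def cross_compression(source, target):
    """Extra bytes needed to compress target once source has been seen,
    normalized by the cost of compressing target alone (lower values
    suggest more shared structure, i.e. stronger coupling)."""
    s, t = symbolize(source), symbolize(target)
    c_t = len(zlib.compress(t, 9))
    c_st = len(zlib.compress(s + t, 9)) - len(zlib.compress(s, 9))
    return c_st / c_t

rng = np.random.default_rng(6)
sbp = np.cumsum(rng.normal(0, 1, 2000))          # surrogate "blood pressure"
rri = 0.8 * sbp + rng.normal(0, 0.5, 2000)       # coupled "RR interval" series
noise = rng.normal(0, 1, 2000)                   # uncoupled control
print(cross_compression(sbp, rri), cross_compression(noise, rri))
```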

  15. A Landsat Time-Series Stacks Model for Detection of Cropland Change

    NASA Astrophysics Data System (ADS)

    Chen, J.; Chen, J.; Zhang, J.

    2017-09-01

    Global, timely, accurate and cost-effective cropland monitoring with a fine spatial resolution will dramatically improve our understanding of the effects of agriculture on greenhouse gas emissions, food safety, and human health. Time-series remote sensing imagery has shown particular potential for describing land cover dynamics. Traditional change detection techniques are often not capable of detecting land cover changes within time series that are strongly influenced by seasonal differences, and are therefore more likely to report pseudo changes. Here, we introduce and test LTSM (Landsat time-series stacks model), an improvement of the previously proposed Continuous Change Detection and Classification (CCDC) approach, to extract spectral trajectories of land surface change using dense Landsat time-series stacks (LTS). The method is expected to eliminate pseudo changes caused by phenology driven by seasonal patterns. The main idea is that, using all available Landsat 8 images within a year, an LTSM consisting of a two-term harmonic function is estimated iteratively for each pixel in each spectral band. The LTSM defines change areas by differencing the predicted and observed Landsat images. The LTSM approach was compared with the change vector analysis (CVA) method. The results indicated that the LTSM method correctly detected "true changes" without overestimating "false" ones, while CVA flagged "true change" pixels together with a large number of "false changes". The detection of change areas achieved an overall accuracy of 92.37%, with a kappa coefficient of 0.676.
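
    The per-pixel harmonic model at the heart of this family of methods can be sketched as an ordinary least-squares fit; the band, acquisition dates, threshold, and synthetic values below are invented and this is not the paper's implementation:

```python
import numpy as np

def harmonic_design(doy, period=365.25, n_harmonics=2):
    """Design matrix of an n-term harmonic (plus intercept) evaluated at
    the given acquisition days."""
    t = 2 * np.pi * np.asarray(doy, dtype=float) / period
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(k * t), np.sin(k * t)]
    return np.column_stack(cols)

def flag_change(doy, reflectance, new_doy, new_obs, k=3.0):
    """Fit the harmonic model to one pixel's time series in one band and
    flag a new observation whose residual exceeds k times the RMSE."""
    X = harmonic_design(doy)
    coef, *_ = np.linalg.lstsq(X, reflectance, rcond=None)
    rmse = np.sqrt(np.mean((reflectance - X @ coef) ** 2))
    pred = harmonic_design([new_doy]) @ coef
    return abs(new_obs - pred[0]) > k * rmse

# Synthetic single-pixel NIR series over one year (values invented).
rng = np.random.default_rng(7)
doy = np.sort(rng.choice(np.arange(1, 366), size=20, replace=False))
nir = 0.3 + 0.1 * np.sin(2 * np.pi * doy / 365.25) + rng.normal(0, 0.01, 20)
print(flag_change(doy, nir, new_doy=200, new_obs=0.05))   # abrupt drop -> True
```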

  16. Foundation Mathematics for the Physical Sciences

    NASA Astrophysics Data System (ADS)

    Riley, K. F.; Hobson, M. P.

    2011-03-01

    1. Arithmetic and geometry; 2. Preliminary algebra; 3. Differential calculus; 4. Integral calculus; 5. Complex numbers and hyperbolic functions; 6. Series and limits; 7. Partial differentiation; 8. Multiple integrals; 9. Vector algebra; 10. Matrices and vector spaces; 11. Vector calculus; 12. Line, surface and volume integrals; 13. Laplace transforms; 14. Ordinary differential equations; 15. Elementary probability; Appendices; Index.

  17. Student Solution Manual for Foundation Mathematics for the Physical Sciences

    NASA Astrophysics Data System (ADS)

    Riley, K. F.; Hobson, M. P.

    2011-03-01

    1. Arithmetic and geometry; 2. Preliminary algebra; 3. Differential calculus; 4. Integral calculus; 5. Complex numbers and hyperbolic functions; 6. Series and limits; 7. Partial differentiation; 8. Multiple integrals; 9. Vector algebra; 10. Matrices and vector spaces; 11. Vector calculus; 12. Line, surface and volume integrals; 13. Laplace transforms; 14. Ordinary differential equations; 15. Elementary probability; Appendix.

  18. Support Vector Hazards Machine: A Counting Process Framework for Learning Risk Scores for Censored Outcomes.

    PubMed

    Wang, Yuanjia; Chen, Tianle; Zeng, Donglin

    2016-01-01

    Learning risk scores to predict dichotomous or continuous outcomes using machine learning approaches has been studied extensively. However, how to learn risk scores for time-to-event outcomes subject to right censoring has received little attention until recently. Existing approaches rely on inverse probability weighting or rank-based regression, which may be inefficient. In this paper, we develop a new support vector hazards machine (SVHM) approach to predict censored outcomes. Our method is based on predicting the counting process associated with the time-to-event outcomes among subjects at risk via a series of support vector machines. Introducing counting processes to represent time-to-event data leads to a connection between support vector machines in supervised learning and hazards regression in standard survival analysis. To account for different at-risk populations at observed event times, a time-varying offset is used in estimating risk scores. The resulting optimization is a convex quadratic programming problem that can easily incorporate non-linearity using the kernel trick. We demonstrate an interesting link from the profiled empirical risk function of SVHM to the Cox partial likelihood. We then formally show that SVHM is optimal in discriminating the covariate-specific hazard function from the population average hazard function, and establish the consistency and learning rate of the predicted risk using the estimated risk scores. Simulation studies show improved prediction accuracy of the event times using SVHM compared to existing machine learning methods and standard conventional approaches. Finally, we analyze data from two real-world biomedical studies, using clinical markers and neuroimaging biomarkers to predict the age at onset of a disease, and demonstrate the superiority of SVHM in distinguishing high-risk from low-risk subjects.

  19. Evaluation of a Unique Defibrillation Unit with Dual-Vector Biphasic Waveform Capabilities: Towards a Miniaturized Defibrillator.

    PubMed

    Okamura, Hideo; Desimone, Christopher V; Killu, Ammar M; Gilles, Emily J; Tri, Jason; Asirvatham, Roshini; Ladewig, Dejae J; Suddendorf, Scott H; Powers, Joanne M; Wood-Wentz, Christina M; Gray, Peter D; Raymond, Douglas M; Savage, Shelley J; Savage, Walter T; Bruce, Charles J; Asirvatham, Samuel J; Friedman, Paul A

    2017-02-01

    Automated external defibrillators can provide life-saving therapies to treat ventricular fibrillation. We developed a prototype unit that can deliver a unique shock waveform produced by four independent capacitors that is delivered through two shock vectors, with the rationale of providing more robust shock pathways during emergent defibrillation. We describe the initial testing and feasibility of this unique defibrillation unit, features of which may enable downsizing of current defibrillator devices. We tested our defibrillation unit in four large animal models (two canine and two swine) under general anesthesia. Experimental defibrillation thresholds (DFT) were obtained by delivery of a unique waveform shock pulse via a dual-vector pathway with four defibrillation pads (placed across the chest). DFTs were measured and compared with those of a commercially available biphasic defibrillator (Zoll M series, Zoll Medical, Chelmsford, MA, USA) tested in two different vectors. Shocks were delivered after 10 seconds of stable ventricular fibrillation and the output characteristics and shock outcome recorded. Each defibrillation series used a step-down to failure protocol to define the defibrillation threshold. A total of 96 shocks were delivered during ventricular fibrillation in four large animals. In comparison to the Zoll M series, which delivered a single-vector, biphasic shock, the energy required for successful defibrillation using the unique dual-vector biphasic waveform did not differ significantly (P = 0.65). Our early findings support the feasibility of a unique external defibrillation unit using a dual-vector biphasic waveform approach. This warrants further study to leverage this unique concept and work toward a miniaturized, portable shock delivery system. © 2016 Wiley Periodicals, Inc.

  20. Vector Autoregressive (VAR) Models and Granger Causality in Time Series Analysis in Nursing Research: Dynamic Changes Among Vital Signs Prior to Cardiorespiratory Instability Events as an Example

    PubMed Central

    Bose, Eliezer; Hravnak, Marilyn; Sereika, Susan M.

    2016-01-01

    Background Patients undergoing continuous vital sign monitoring (heart rate [HR], respiratory rate [RR], pulse oximetry [SpO2]) in real time display inter-related vital sign changes during situations of physiologic stress. Patterns in this physiological cross-talk could portend impending cardiorespiratory instability (CRI). Vector autoregressive (VAR) modeling with Granger causality tests is one of the most flexible ways to elucidate underlying causal mechanisms in time series data. Purpose The purpose of this article is to illustrate development of patient-specific VAR models using vital sign time series (VSTS) data in a sample of acutely ill, monitored, step-down unit (SDU) patients, and determine their Granger causal dynamics prior to onset of an incident CRI. Approach CRI was defined as vital signs beyond stipulated normality thresholds (HR = 40–140/minute, RR = 8–36/minute, SpO2 < 85%) and persisting for 3 minutes within a 5-minute moving window (60% of the duration of the window). A 6-hour time segment prior to onset of first CRI was chosen for time series modeling in 20 patients using a six-step procedure: (a) the uniform time series for each vital sign was assessed for stationarity; (b) appropriate lag was determined using a lag-length selection criteria; (c) the VAR model was constructed; (d) residual autocorrelation was assessed with the Lagrange Multiplier test; (e) stability of the VAR system was checked; and (f) Granger causality was evaluated in the final stable model. Results The primary cause of incident CRI was low SpO2 (60% of cases), followed by out-of-range RR (30%) and HR (10%). Granger causality testing revealed that change in RR caused change in HR (21%) (i.e., RR changed before HR changed) more often than change in HR causing change in RR (15%). Similarly, changes in RR caused changes in SpO2 (15%) more often than changes in SpO2 caused changes in RR (9%). For HR and SpO2, changes in HR causing changes in SpO2 and changes in SpO2 causing changes in HR occurred with equal frequency (18%). Discussion Within this sample of acutely ill patients who experienced a CRI event, VAR modeling indicated that RR changes tend to occur before changes in HR and SpO2. These findings suggest that contextual assessment of RR changes as the earliest sign of CRI is warranted. Use of VAR modeling may be helpful in other nursing research applications based on time series data. PMID:27977564
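
    As a minimal illustration of the VAR-plus-Granger-causality workflow described above, using statsmodels (the synthetic vital-sign series, lag limits, and differencing step are assumptions rather than the study's protocol):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Hypothetical 1-minute vital-sign series for one patient (values invented):
# RR leads HR by construction, so Granger causality RR -> HR should show up.
rng = np.random.default_rng(8)
n = 360
rr = 16 + np.cumsum(rng.normal(0, 0.2, n))
hr = 70 + 1.5 * np.roll(rr - rr.mean(), 2) + rng.normal(0, 1, n)
spo2 = 97 + rng.normal(0, 0.3, n)
df = pd.DataFrame({"HR": hr, "RR": rr, "SpO2": spo2}).diff().dropna()  # difference toward stationarity

model = VAR(df)
lag = model.select_order(maxlags=10).aic        # lag-length selection criterion
fit = model.fit(lag)
gc = fit.test_causality("HR", ["RR"], kind="f") # H0: RR does not Granger-cause HR
print(f"chosen lag = {lag}, p-value = {gc.pvalue:.4f}")
```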

  1. Fisher information framework for time series modeling

    NASA Astrophysics Data System (ADS)

    Venkatesan, R. C.; Plastino, A.

    2017-08-01

    A robust prediction model invoking the Takens embedding theorem, whose working hypothesis is obtained via an inference procedure based on the minimum Fisher information principle, is presented. The coefficients of the ansatz, which are central to the working hypothesis, satisfy a time-independent Schrödinger-like equation in a vector setting. For the case of normal distributions, the quantum mechanical virial theorem is used to infer (i) the probability density function of the coefficients of the working hypothesis and (ii) a constraint-driven pseudo-inverse condition for the modeling phase of the prediction scheme. The well-known reciprocity relations and the associated Legendre transform structure for the Fisher information measure (FIM, hereafter)-based model in a vector setting (with least-squares constraints) are self-consistently derived. These relations are demonstrated to yield an intriguing form of the FIM for the modeling phase, which defines the working hypothesis solely in terms of the observed data. Prediction is demonstrated on time series obtained from (i) the Mackey-Glass delay-differential equation, (ii) an ECG signal from the MIT-Beth Israel Deaconess Hospital (MIT-BIH) cardiac arrhythmia database, and (iii) an ECG signal from the Creighton University ventricular tachyarrhythmia database. The ECG samples were obtained from the PhysioNet online repository. These examples demonstrate the efficiency of the prediction model, and numerical results for exemplary cases are provided.

  2. X-31 quasi-tailless flight demonstration

    NASA Technical Reports Server (NTRS)

    Huber, Peter; Schellenger, Harvey G.

    1994-01-01

    The primary objective of the quasi-tailless flight demonstration is to demonstrate the feasibility of using thrust vectoring for directional control of an unstable aircraft. By using this low-cost, low-risk approach it is possible to obtain information about the required thrust vector control power and deflection rates from an in-flight experiment, as well as insight into low-power thrust vectoring issues. The quasi-tailless flight demonstration series with the X-31 began in March 1994. The demonstration flight condition was Mach 1.2 at 37,500 feet. A series of basic flying-quality maneuvers, doublets, bank-to-bank rolls, and wind-up turns were performed with a simulated 100% vertical tail reduction. Flight tests and supporting simulation demonstrated that the quasi-tailless approach is effective in representing the reduced stability of tailless configurations. The flights also demonstrated that thrust vectoring can be effectively used to stabilize a directionally unstable configuration and provide control power for maneuver coordination.

  3. Modeling and projection of dengue fever cases in Guangzhou based on variation of weather factors.

    PubMed

    Li, Chenlu; Wang, Xiaofeng; Wu, Xiaoxu; Liu, Jianing; Ji, Duoying; Du, Juan

    2017-12-15

    Dengue fever is one of the most serious vector-borne infectious diseases, especially in Guangzhou, China. Dengue viruses and their vector Aedes albopictus are sensitive to climate change, primarily through weather factors. Previous research has mainly focused on identifying the relationship between climate factors and dengue cases, or on developing dengue case models that include some non-climate factors. However, there has been little research addressing the modeling and projection of dengue cases solely from the perspective of climate change. This study considered this topic using long time-series data (1998-2014). First, sensitive weather factors were identified through a meta-analysis that included literature review screening, lagged analysis, and collinearity analysis. Key factors were then determined: monthly average temperature at a lag of two months, and monthly average relative humidity and monthly average precipitation at lags of three months. Second, time series Poisson analysis was used with the generalized additive model approach to develop a dengue model based on the key weather factors for January 1998 to December 2012. Data from January 2013 to July 2014 were used to validate that the model was reliable and reasonable. Finally, future weather data (January 2020 to December 2070) were input into the model to project the occurrence of dengue cases under different climate scenarios (RCP 2.6 and RCP 8.5). The long time-series analysis and scientifically selected weather variables were used to ensure the reliability of the model. The projections suggest that seasonal disease control (especially in summer and fall) and mitigation of greenhouse gas emissions could help reduce the incidence of dengue fever. This study is intended to provide a theoretical basis for the prevention and control of dengue fever in Guangzhou. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Genetic programming and serial processing for time series classification.

    PubMed

    Alfaro-Cid, Eva; Sharman, Ken; Esparcia-Alcázar, Anna I

    2014-01-01

    This work describes an approach devised by the authors for time series classification. In our approach, genetic programming is used in combination with serial processing of the data, where the last output is the result of the classification. The use of genetic programming for classification, although still a field where more research is needed, is not new. However, the application of genetic programming to classification tasks is normally done by considering the input data as a feature vector. That is, to the best of our knowledge, there are no examples in the genetic programming literature of approaches where the time series data are processed serially and the last output is taken as the classification result. The serial processing approach presented here fills a gap in the existing literature. This approach was tested on three different problems. Two of them are real-world problems whose data were gathered for online or conference competitions. As there are published results for these two problems, we can compare the performance of our approach against top-performing methods. The serial processing of data in combination with genetic programming obtained competitive results in both competitions, showing its potential for solving time series classification problems. The main advantage of our serial processing approach is that it can easily handle very large datasets.

  5. Aggregate Measures of Watershed Health from Reconstructed ...

    EPA Pesticide Factsheets

    Risk-based indices such as reliability, resilience, and vulnerability (R-R-V) have the potential to serve as watershed health assessment tools. Recent research has demonstrated the applicability of such indices for water quality (WQ) constituents such as total suspended solids and nutrients on an individual basis. However, the calculations can become tedious when time-series data for several WQ constituents have to be evaluated individually, and comparisons between locations with different sets of constituent data can prove difficult. In this study, data reconstruction using a relevance vector machine algorithm was combined with dimensionality reduction via variational Bayesian noisy principal component analysis to reconstruct and condense sparse multidimensional WQ data sets into a single time series. The methodology allows incorporation of uncertainty in both the reconstruction and dimensionality-reduction steps. The R-R-V values were calculated using the aggregate time series at multiple locations within two Indiana watersheds, which serve as motivating examples. Results showed that uncertainty present in the reconstructed WQ data set propagates to the aggregate time series and subsequently to the aggregate R-R-V values as well. Locations with different WQ constituents and different standards for impairment were successfully combined to provide aggregate measures of R-R-V values. Comparisons with individual constituent R-R-V values showed that v

  6. Statistical analysis of low level atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Tieleman, H. W.; Chen, W. W. L.

    1974-01-01

    The statistical properties of low-level wind-turbulence data were obtained with the model 1080 total vector anemometer and the model 1296 dual split-film anemometer, both manufactured by Thermo Systems Incorporated. The data obtained from the above fast-response probes were compared with the results obtained from a pair of Gill propeller anemometers. The digitized time series representing the three velocity components and the temperature were each divided into a number of blocks, the length of which depended on the lowest frequency of interest and also on the storage capacity of the available computer. A moving-average and differencing high-pass filter was used to remove the trend and the low frequency components in the time series. The calculated results for each of the anemometers used are represented in graphical or tabulated form.

  7. Forecasting electric vehicles sales with univariate and multivariate time series models: The case of China.

    PubMed

    Zhang, Yong; Zhong, Miner; Geng, Nana; Jiang, Yunjian

    2017-01-01

    The market demand for electric vehicles (EVs) has increased in recent years. Suitable models are necessary to understand and forecast EV sales. This study presents a singular spectrum analysis (SSA) as a univariate time-series model and vector autoregressive model (VAR) as a multivariate model. Empirical results suggest that SSA satisfactorily indicates the evolving trend and provides reasonable results. The VAR model, which comprised exogenous parameters related to the market on a monthly basis, can significantly improve the prediction accuracy. The EV sales in China, which are categorized into battery and plug-in EVs, are predicted in both short term (up to December 2017) and long term (up to 2020), as statistical proofs of the growth of the Chinese EV industry.
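
    A minimal sketch of the multivariate step is given below, assuming hypothetical monthly sales data; the paper's actual dataset, exogenous market variables and the SSA component are not reproduced here. It uses the VAR implementation in statsmodels.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.api import VAR

        rng = np.random.default_rng(0)
        n = 60                                       # five years of hypothetical monthly observations
        trend = np.linspace(1.0, 4.0, n)
        data = pd.DataFrame({
            "bev_sales": trend + rng.normal(scale=0.3, size=n),         # battery EVs
            "phev_sales": 0.5 * trend + rng.normal(scale=0.2, size=n),  # plug-in EVs
        })

        model = VAR(data)
        result = model.fit(maxlags=6, ic="aic")                           # lag order chosen by AIC
        forecast = result.forecast(data.values[-result.k_ar:], steps=12)  # 12 months ahead
        print(forecast[:3])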

  8. Forecasting electric vehicles sales with univariate and multivariate time series models: The case of China

    PubMed Central

    Zhang, Yong; Zhong, Miner; Geng, Nana; Jiang, Yunjian

    2017-01-01

    The market demand for electric vehicles (EVs) has increased in recent years. Suitable models are necessary to understand and forecast EV sales. This study presents a singular spectrum analysis (SSA) as a univariate time-series model and vector autoregressive model (VAR) as a multivariate model. Empirical results suggest that SSA satisfactorily indicates the evolving trend and provides reasonable results. The VAR model, which comprised exogenous parameters related to the market on a monthly basis, can significantly improve the prediction accuracy. The EV sales in China, which are categorized into battery and plug-in EVs, are predicted in both short term (up to December 2017) and long term (up to 2020), as statistical proofs of the growth of the Chinese EV industry. PMID:28459872

  9. Using trees to compute approximate solutions to ordinary differential equations exactly

    NASA Technical Reports Server (NTRS)

    Grossman, Robert

    1991-01-01

    Some recent work is reviewed which relates families of trees to symbolic algorithms for the exact computation of series which approximate solutions of ordinary differential equations. It turns out that the vector space whose basis is the set of finite, rooted trees carries a natural multiplication related to the composition of differential operators, making the space of trees an algebra. This algebraic structure can be exploited to yield a variety of algorithms for manipulating vector fields and the series and algebras they generate.

  10. Full Wave Analysis of RF Signal Attenuation in a Lossy Rough Surface Cave using a High Order Time Domain Vector Finite Element Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pingenot, J; Rieben, R; White, D

    2005-10-31

    We present a computational study of signal propagation and attenuation of a 200 MHz planar loop antenna in a cave environment. The cave is modeled as a straight and lossy random rough wall. To simulate a broad frequency band, the full wave Maxwell equations are solved directly in the time domain via a high order vector finite element discretization using the massively parallel CEM code EMSolve. The numerical technique is first verified against theoretical results for a planar loop antenna in a smooth lossy cave. The simulation is then performed for a series of random rough surface meshes in order to generate statistical data for the propagation and attenuation properties of the antenna in a cave environment. Results for the mean and variance of the power spectral density of the electric field are presented and discussed.

  11. Real-time characterization of motion of motile microorganisms by means of a hybrid laser Doppler velocimeter technique

    NASA Astrophysics Data System (ADS)

    Zheng, Bin; Pleass, Charles M.; Ih, Charles S.

    1993-11-01

    A hybrid three-axis laser Doppler velocimeter system has been demonstrated in our laboratory. The system can monitor the motion of microorganisms in an unconstrained environment. During measurement, a computer system collects and processes time series data from the transit of a microorganism through the measurement volume. The fast Fourier transform of these data contains the motion signature of the microorganism. Because individual microorganisms can be selected from the field, ambiguity caused by multiscattering among two or more microorganisms can be avoided. Using this new system, we can obtain a feature vector that relates to features of the microorganism, such as its size, average translational velocity, rotation or wobbling, and its flagellum beat frequency. Such a vector appears to be a useful criterion for distinguishing species using statistical pattern recognition. Successful experiments demonstrate that the new system and technique have some unique advantages.

  12. Factors Affecting the Initial Adhesion and Retention of the Plant Pathogen Xylella fastidiosa in the Foregut of an Insect Vector

    PubMed Central

    Almeida, Rodrigo P. P.

    2014-01-01

    Vector transmission of bacterial plant pathogens involves three steps: pathogen acquisition from an infected host, retention within the vector, and inoculation of cells into susceptible tissue of an uninfected plant. In this study, a combination of plant and artificial diet systems were used to determine the importance of several genes on the initial adhesion and retention of the bacterium Xylella fastidiosa to an efficient insect vector. Mutant strains included fimbrial (fimA and pilB) and afimbrial (hxfA and hxfB) adhesins and three loci involved in regulatory systems (rpfF, rpfC, and cgsA). Transmission assays with variable retention time indicated that HxfA and HxfB were primarily important for early adhesion to vectors, while FimA was necessary for both adhesion and retention. The long pilus protein PilB was not deficient in initial adhesion but may be important for retention. Genes upregulated under the control of rpfF are important for both initial adhesion and retention, as transmission rates of this mutant strain were initially low and decreased over time, while disruption of rpfC and cgsA yielded trends similar to that shown by the wild-type control. Because induction of an X. fastidiosa transmissible state requires pectin, a series of experiments were used to test the roles of a polygalacturonase (pglA) and the pectin and galacturonic acid carbohydrates on the transmission of X. fastidiosa. Results show that galacturonic acid, or PglA activity breaking pectin into its major subunit (galacturonic acid), is required for X. fastidiosa vector transmission using an artificial diet system. This study shows that early adhesion and retention of X. fastidiosa are mediated by different factors. It also illustrates that the interpretation of results of vector transmission experiments, in the context of vector-pathogen interaction studies, is highly dependent on experimental design. PMID:24185853

  13. Factors affecting the initial adhesion and retention of the plant pathogen Xylella fastidiosa in the foregut of an insect vector.

    PubMed

    Killiny, Nabil; Almeida, Rodrigo P P

    2014-01-01

    Vector transmission of bacterial plant pathogens involves three steps: pathogen acquisition from an infected host, retention within the vector, and inoculation of cells into susceptible tissue of an uninfected plant. In this study, a combination of plant and artificial diet systems were used to determine the importance of several genes on the initial adhesion and retention of the bacterium Xylella fastidiosa to an efficient insect vector. Mutant strains included fimbrial (fimA and pilB) and afimbrial (hxfA and hxfB) adhesins and three loci involved in regulatory systems (rpfF, rpfC, and cgsA). Transmission assays with variable retention time indicated that HxfA and HxfB were primarily important for early adhesion to vectors, while FimA was necessary for both adhesion and retention. The long pilus protein PilB was not deficient in initial adhesion but may be important for retention. Genes upregulated under the control of rpfF are important for both initial adhesion and retention, as transmission rates of this mutant strain were initially low and decreased over time, while disruption of rpfC and cgsA yielded trends similar to that shown by the wild-type control. Because induction of an X. fastidiosa transmissible state requires pectin, a series of experiments were used to test the roles of a polygalacturonase (pglA) and the pectin and galacturonic acid carbohydrates on the transmission of X. fastidiosa. Results show that galacturonic acid, or PglA activity breaking pectin into its major subunit (galacturonic acid), is required for X. fastidiosa vector transmission using an artificial diet system. This study shows that early adhesion and retention of X. fastidiosa are mediated by different factors. It also illustrates that the interpretation of results of vector transmission experiments, in the context of vector-pathogen interaction studies, is highly dependent on experimental design.

  14. Distributions of Magnetic Field Variations, Differences and Residuals

    DTIC Science & Technology

    1999-02-01

    differences and residuals between two neighbouring sites (1997 data, Montecristo area). Each panel displays the results from a specific vector... This means, in effect, counting the number of times the absolute value increased past one of a series of regularly spaced thresholds, and tallying the... results. Crossings of the zero level were not counted. Fig. 7 illustrates the binning procedure for a fictitious data set and four bin thresholds on

  15. Supervised Time Series Event Detector for Building Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2016-04-13

    A machine learning based approach is developed to detect events that have rarely been seen in the historical data. The data can include building energy consumption, sensor data, environmental data and any data that may affect the building's energy consumption. The algorithm is a modified nonlinear Bayesian support vector machine, which examines daily energy consumption profiles, detects days with abnormal events, and diagnoses the cause of those events.
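
    The report's modified nonlinear Bayesian SVM is not publicly documented, so the sketch below substitutes a standard RBF-kernel SVM from scikit-learn on hypothetical daily load profiles, just to illustrate the supervised detection step (labeling days as normal or abnormal from 24 hourly values).

        # Hedged stand-in: standard RBF-kernel SVM on synthetic daily load profiles,
        # not the report's modified Bayesian SVM.
        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(1)
        normal = rng.normal(50, 5, size=(200, 24))        # 200 normal daily profiles (24 hourly values)
        abnormal = rng.normal(50, 5, size=(20, 24)) + 25   # 20 days with an unusual load bump
        X = np.vstack([normal, abnormal])
        y = np.array([0] * 200 + [1] * 20)                 # 1 = day with an abnormal event

        clf = SVC(kernel="rbf", class_weight="balanced", probability=True).fit(X, y)
        new_day = rng.normal(50, 5, size=(1, 24)) + 25     # another anomalous-looking day
        print(clf.predict(new_day), clf.predict_proba(new_day))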

  16. Dealing with Multiple Solutions in Structural Vector Autoregressive Models.

    PubMed

    Beltz, Adriene M; Molenaar, Peter C M

    2016-01-01

    Structural vector autoregressive models (VARs) hold great potential for psychological science, particularly for time series data analysis. They capture the magnitude, direction of influence, and temporal (lagged and contemporaneous) nature of relations among variables. Unified structural equation modeling (uSEM) is an optimal structural VAR instantiation, according to large-scale simulation studies, and it is implemented within an SEM framework. However, little is known about the uniqueness of uSEM results. Thus, the goal of this study was to investigate whether multiple solutions result from uSEM analysis and, if so, to demonstrate ways to select an optimal solution. This was accomplished with two simulated data sets, an empirical data set concerning children's dyadic play, and modifications to the group iterative multiple model estimation (GIMME) program, which implements uSEMs with group- and individual-level relations in a data-driven manner. Results revealed multiple solutions when there were large contemporaneous relations among variables. Results also verified several ways to select the correct solution when the complete solution set was generated, such as the use of cross-validation, maximum standardized residuals, and information criteria. This work has immediate and direct implications for the analysis of time series data and for the inferences drawn from those data concerning human behavior.

  17. Sensitivity vector fields in time-delay coordinate embeddings: theory and experiment.

    PubMed

    Sloboda, A R; Epureanu, B I

    2013-02-01

    Identifying changes in the parameters of a dynamical system can be vital in many diagnostic and sensing applications. Sensitivity vector fields (SVFs) are one way of identifying such parametric variations by quantifying their effects on the morphology of a dynamical system's attractor. In many cases, SVFs are a more effective means of identification than commonly employed modal methods. Previously, it has only been possible to construct SVFs for a given dynamical system when a full set of state variables is available. This severely restricts SVF applicability because it may be cost prohibitive, or even impossible, to measure the entire state in high-dimensional systems. Thus, the focus of this paper is constructing SVFs with only partial knowledge of the state by using time-delay coordinate embeddings. Local models are employed in which the embedded states of a neighborhood are weighted in a way referred to as embedded point cloud averaging. Application of the presented methodology to both simulated and experimental time series demonstrates its utility and reliability.
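
    A small sketch of the time-delay coordinate embedding that the method starts from is shown below; the observable and the embedding parameters are illustrative, not taken from the paper.

        # Delay-coordinate embedding of a scalar measurement series.
        import numpy as np

        def delay_embed(x, dim, tau):
            """Return the (len(x) - (dim - 1) * tau, dim) matrix of delay vectors."""
            n = len(x) - (dim - 1) * tau
            return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

        t = np.linspace(0, 60, 3000)
        x = np.sin(t) + 0.5 * np.sin(2.3 * t)     # scalar observable of some attractor
        emb = delay_embed(x, dim=3, tau=25)       # illustrative embedding dimension and delay
        print(emb.shape)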

  18. Numerical analysis of transient fields near thin-wire antennas and scatterers

    NASA Astrophysics Data System (ADS)

    Landt, J. A.

    1981-11-01

    Under the premise that `accelerated charge radiates,' one would expect radiation on wire structures to occur from driving points, ends of wires, bends in wires, or locations of lumped loading. Here, this premise is investigated in a series of numerical experiments. The numerical procedure is based on a moment-method solution of a thin-wire time-domain electric-field integral equation. The fields in the vicinity of wire structures are calculated for short impulsive-type excitations, and are viewed in a series of time sequences or snapshots. For these excitations, the fields are spatially limited in the radial dimension, and expand in spheres centered about points of radiation. These centers of radiation coincide with the above list of possible source regions. Time retardation permits these observations to be made clearly in the time domain, similar to time-range gating. In addition to providing insight into transient radiation processes, these studies show that the direction of energy flow is not always defined by Poynting's vector near wire structures.

  19. A simple method of equine limb force vector analysis and its potential applications.

    PubMed

    Hobbs, Sarah Jane; Robinson, Mark A; Clayton, Hilary M

    2018-01-01

    Ground reaction forces (GRF) measured during equine gait analysis are typically evaluated by analyzing discrete values obtained from continuous force-time data for the vertical, longitudinal and transverse GRF components. This paper describes a simple, temporo-spatial method of displaying and analyzing sagittal plane GRF vectors. In addition, the application of statistical parametric mapping (SPM) is introduced to analyse differences between contra-lateral fore and hindlimb force-time curves throughout the stance phase. The overall aim of the study was to demonstrate alternative methods of evaluating functional (a)symmetry within horses. GRF and kinematic data were collected from 10 horses trotting over a series of four force plates (120 Hz). The kinematic data were used to determine clean hoof contacts. The stance phase of each hoof was determined using a 50 N threshold. Vertical and longitudinal GRF for each stance phase were plotted both as force-time curves and as force vector diagrams in which vectors originating at the centre of pressure on the force plate were drawn at intervals of 8.3 ms for the duration of stance. Visual evaluation was facilitated by overlay of the vector diagrams for different limbs. Summary vectors representing the magnitude (VecMag) and direction (VecAng) of the mean force over the entire stance phase were superimposed on the force vector diagram. Typical measurements extracted from the force-time curves (peak forces, impulses) were compared with VecMag and VecAng using partial correlation (controlling for speed). Paired samples t-tests (left v. right diagonal pair comparison and high v. low vertical force diagonal pair comparison) were performed on discrete and vector variables using traditional methods and Hotelling's T2 tests on normalized stance phase data using SPM. Evidence from traditional statistical tests suggested that VecMag is more influenced by the vertical force and impulse, whereas VecAng is more influenced by the longitudinal force and impulse. When used to evaluate mean data from the group of ten sound horses, SPM did not identify differences between the left and right contralateral limb pairs or between limb pairs classified according to directional asymmetry. When evaluating a single horse, three periods were identified during which differences in the forces between the left and right forelimbs exceeded the critical threshold (p < .01). Traditional statistical analysis of 2D GRF peak values, summary vector variables and visual evaluation of force vector diagrams gave harmonious results and both methods identified the same inter-limb asymmetries. As alpha was more tightly controlled using SPM, significance was only found in the individual horse although T2 plots followed the same trends as discrete analysis for the group. The techniques of force vector analysis and SPM hold promise for investigations of sidedness and asymmetry in horses.

  20. pLR: a lentiviral backbone series to stable transduction of bicistronic genes and exchange of promoters.

    PubMed

    Vargas, José Eduardo; Salton, Gabrielle; Sodré de Castro Laino, Andressa; Pires, Tiago Dalberto; Bonamino, Martin; Lenz, Guido; Delgado-Cañedo, Andrés

    2012-11-01

    Gene transfer based on lentiviral vectors allows the integration of exogenous genes into the genome of a target cell, making these vectors one of the most widely used methods for stable transgene expression in mammalian cells, in vitro and in vivo. Currently, there are no lentivectors that allow different genes to be cloned under the control of different promoters, nor any that permit analysis of expression through an IRES (internal ribosome entry site)-reporter gene system. In this work, we have generated a series of lentivectors containing: (1) a malleable structure to allow the cloning of different target genes in a multicloning site (mcs); (2) a unique site to exchange promoters; and (3) an IRES followed by one of two reporter genes, eGFP or DsRed. The series of vectors produced was named pLR (for lentivirus and RSV promoter) and proved fairly efficient, with strong fluorescence of the reporter genes in direct transfection and viral transduction experiments. The pLR series has thus been found to be a powerful biotechnological tool for stable gene transfer and expression. Copyright © 2012 Elsevier Inc. All rights reserved.

  1. Exploratory wavelet analysis of dengue seasonal patterns in Colombia.

    PubMed

    Fernández-Niño, Julián Alfredo; Cárdenas-Cárdenas, Luz Mery; Hernández-Ávila, Juan Eugenio; Palacio-Mejía, Lina Sofía; Castañeda-Orjuela, Carlos Andrés

    2015-12-04

    Dengue has a seasonal behavior associated with climatic changes, vector cycles, circulating serotypes, and population dynamics. The wavelet analysis makes it possible to separate a very long time series into calendar time and periods. This is the first time this technique is used in an exploratory manner to model the behavior of dengue in Colombia.  To explore the annual seasonal dengue patterns in Colombia and in its five most endemic municipalities for the period 2007 to 2012, and for roughly annual cycles between 1978 and 2013 at the national level.  We made an exploratory wavelet analysis using data from all incident cases of dengue per epidemiological week for the period 2007 to 2012, and per year for 1978 to 2013. We used a first-order autoregressive model as the null hypothesis.  The effect of the 2010 epidemic was evident in both the national time series and the series for the five municipalities. Differences in interannual seasonal patterns were observed among municipalities. In addition, we identified roughly annual cycles of 2 to 5 years since 2004 at a national level.  Wavelet analysis is useful to study a long time series containing changing seasonal patterns, as is the case of dengue in Colombia, and to identify differences among regions. These patterns need to be explored at smaller aggregate levels, and their relationships with different predictive variables need to be investigated.
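
    A minimal sketch of the wavelet step is given below, assuming a synthetic weekly incidence series (not the Colombian surveillance data) and the PyWavelets continuous wavelet transform with a Morlet wavelet.

        import numpy as np
        import pywt

        weeks = np.arange(6 * 52)                  # six years of weekly counts (synthetic)
        cases = (100 + 60 * np.sin(2 * np.pi * weeks / 52)
                 + np.random.default_rng(8).normal(scale=10, size=weeks.size))

        scales = np.arange(2, 128)
        coefs, freqs = pywt.cwt(cases - cases.mean(), scales, "morl", sampling_period=1.0)
        power = np.abs(coefs) ** 2                 # wavelet power, shape (scale, time)
        dominant_period = 1.0 / freqs[power.mean(axis=1).argmax()]   # in weeks
        print(round(dominant_period), "weeks")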

  2. Analysis models for the estimation of oceanic fields

    NASA Technical Reports Server (NTRS)

    Carter, E. F.; Robinson, A. R.

    1987-01-01

    A general model for statistically optimal estimates is presented for dealing with scalar, vector and multivariate datasets. The method deals with anisotropic fields and treats space and time dependence equivalently. Problems addressed include the analysis, or the production of synoptic time series of regularly gridded fields from irregular and gappy datasets, and the estimate of fields by compositing observations from several different instruments and sampling schemes. Technical issues are discussed, including the convergence of statistical estimates, the choice of representation of the correlations, the influential domain of an observation, and the efficiency of numerical computations.

  3. An efficient Foxtail mosaic virus vector system with reduced environmental risk

    PubMed Central

    2010-01-01

    Background Plant viral vectors offer high-yield expression of pharmaceutical and commercially important proteins with a minimum of cost and preparation time. The use of Agrobacterium tumefaciens has been introduced to deliver the viral vector as a transgene to each plant cell via a simple, nonsterile infiltration technique called "agroinoculation". With agroinoculation, a full length, systemically moving virus is no longer necessary for excellent protein yield, since the viral transgene is transcribed and replicates in every infiltrated cell. Viral genes may therefore be deleted to decrease the potential for accidental spread and persistence of the viral vector in the environment. Results In this study, both the coat protein (CP) and triple gene block (TGB) genetic segments were eliminated from Foxtail mosaic virus to create the "FECT" vector series, comprising a deletion of 29% of the genome. This viral vector is highly crippled and expresses little or no marker gene within the inoculated leaf. However, when co-agroinoculated with a silencing suppressor (p19 or HcPro), FECT expressed GFP at 40% total soluble protein in the tobacco host, Nicotiana benthamiana. The modified FoMV vector retained the full-length replicase ORF, the TGB1 subgenomic RNA leader sequence and either 0, 22 or 40 bases of TGB1 ORF (in vectors FECT0, FECT22 and FECT40, respectively). As well as N. benthamiana, infection of legumes was demonstrated. Despite many attempts, expression of GFP via syringe agroinoculation of various grass species was very low, reflecting the low Agrobacterium-mediated transformation rate of monocots. Conclusions The FECT/40 vector expresses foreign genes at a very high level, and yet has a greatly reduced biohazard potential. It can form no virions and can effectively replicate only in a plant with suppressed silencing. PMID:21162736

  4. Comparison of causality analysis on simultaneously measured fMRI and NIRS signals during motor tasks.

    PubMed

    Anwar, Abdul Rauf; Muthalib, Makii; Perrey, Stephane; Galka, Andreas; Granert, Oliver; Wolff, Stephan; Deuschl, Guenther; Raethjen, Jan; Heute, Ulrich; Muthuraman, Muthuraman

    2013-01-01

    Brain activity can be measured using different modalities. Since most of the modalities tend to complement each other, it seems promising to measure them simultaneously. In the research presented here, data recorded simultaneously from Functional Magnetic Resonance Imaging (fMRI) and Near Infrared Spectroscopy (NIRS) are subjected to causality analysis using time-resolved partial directed coherence (tPDC). Time-resolved partial directed coherence uses the principle of state space modelling to estimate Multivariate Autoregressive (MVAR) coefficients. This method is useful for visualizing both the frequency and time dynamics of causality between the time series. Afterwards, causality results from the different modalities are compared by estimating the Spearman correlation. In the present study, we used directionality vectors to analyze correlation, rather than the actual signal vectors. Results show that causality analysis of the fMRI correlates more closely with causality results of oxy-NIRS than with those of deoxy-NIRS in the case of a finger sequencing task. However, in the case of simple finger tapping, no clear difference between oxy-fMRI and deoxy-fMRI correlation is identified.

  5. SAX-VSM: Interpretable Time Series Classification Using SAX and Vector Space Model

    DTIC Science & Technology

    2013-01-01

    points in the region 800-1900 cm−1. The two subsequences top-ranked by SAX-VSM in both datasets correspond to spectrogram intervals of chlorogenic acid. (Figure: best and second-to-best class-characteristic subsequences, Arabica vs. Robusta; x-axis: wavenumbers 800-1800.) These correspond to the chlorogenic acid (best subsequence) and caffeine (second to best) regions of the spectra. This result aligns with the original work based on
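
    For orientation, the sketch below shows the SAX part of the pipeline (z-normalization, piecewise aggregate approximation, Gaussian breakpoints) on a synthetic series; SAX-VSM additionally builds class-wise tf-idf vectors of SAX words, which is omitted here.

        # Minimal SAX sketch: z-normalize, PAA, then symbolize with N(0,1) breakpoints.
        import numpy as np
        from scipy.stats import norm

        def sax_word(series, n_segments=8, alphabet_size=4):
            x = np.asarray(series, dtype=float)
            x = (x - x.mean()) / (x.std() + 1e-12)                   # z-normalization
            segments = np.array_split(x, n_segments)                 # piecewise aggregate approximation
            paa = np.array([seg.mean() for seg in segments])
            breakpoints = norm.ppf(np.arange(1, alphabet_size) / alphabet_size)  # Gaussian quantiles
            symbols = np.searchsorted(breakpoints, paa)              # 0 .. alphabet_size - 1
            return "".join(chr(ord("a") + s) for s in symbols)

        # Prints an 8-letter SAX word over the alphabet a-d for one period of a sine wave.
        print(sax_word(np.sin(np.linspace(0, 2 * np.pi, 64))))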

  6. 3DView: Space physics data visualizer

    NASA Astrophysics Data System (ADS)

    Génot, V.; Beigbeder, L.; Popescu, D.; Dufourg, N.; Gangloff, M.; Bouchemit, M.; Caussarieu, S.; Toniutti, J.-P.; Durand, J.; Modolo, R.; André, N.; Cecconi, B.; Jacquey, C.; Pitout, F.; Rouillard, A.; Pinto, R.; Erard, S.; Jourdane, N.; Leclercq, L.; Hess, S.; Khodachenko, M.; Al-Ubaidi, T.; Scherf, M.; Budnik, E.

    2018-04-01

    3DView creates visualizations of space physics data in their original 3D context. Time series, vectors, dynamic spectra, celestial body maps, magnetic field or flow lines, and 2D cuts in simulation cubes are among the variety of data representation enabled by 3DView. It offers direct connections to several large databases and uses VO standards; it also allows the user to upload data. 3DView's versatility covers a wide range of space physics contexts.

  7. Interannual variability of human plague occurrence in the Western United States explained by tropical and North Pacific Ocean climate variability.

    PubMed

    Ari, Tamara Ben; Gershunov, Alexander; Tristan, Rouyer; Cazelles, Bernard; Gage, Kenneth; Stenseth, Nils C

    2010-09-01

    Plague is a vector-borne, highly virulent zoonotic disease caused by the bacterium Yersinia pestis. It persists in nature through transmission between its hosts (wild rodents) and vectors (fleas). During epizootics, the disease expands and spills over to other host species such as humans living in or close to affected areas. Here, we investigate the effect of large-scale climate variability on the dynamics of human plague in the western United States using a 56-year time series of plague reports (1950-2005). We found that El Niño Southern Oscillation and Pacific Decadal Oscillation in combination affect the dynamics of human plague over the western United States. The underlying mechanism could involve changes in precipitation and temperatures that impact both hosts and vectors. It is suggested that snow also may play a key role, possibly through its effects on summer soil moisture, which is known to be instrumental for flea survival and development and sustained growth of vegetation for rodents.

  8. A Physiological Time Series Dynamics-Based Approach to Patient Monitoring and Outcome Prediction

    PubMed Central

    Lehman, Li-Wei H.; Adams, Ryan P.; Mayaud, Louis; Moody, George B.; Malhotra, Atul; Mark, Roger G.; Nemati, Shamim

    2015-01-01

    Cardiovascular variables such as heart rate (HR) and blood pressure (BP) are regulated by an underlying control system, and therefore, the time series of these vital signs exhibit rich dynamical patterns of interaction in response to external perturbations (e.g., drug administration), as well as pathological states (e.g., onset of sepsis and hypotension). A question of interest is whether “similar” dynamical patterns can be identified across a heterogeneous patient cohort, and be used for prognosis of patients’ health and progress. In this paper, we used a switching vector autoregressive framework to systematically learn and identify a collection of vital sign time series dynamics, which are possibly recurrent within the same patient and may be shared across the entire cohort. We show that these dynamical behaviors can be used to characterize the physiological “state” of a patient. We validate our technique using simulated time series of the cardiovascular system, and human recordings of HR and BP time series from an orthostatic stress study with known postural states. Using the HR and BP dynamics of an intensive care unit (ICU) cohort of over 450 patients from the MIMIC II database, we demonstrate that the discovered cardiovascular dynamics are significantly associated with hospital mortality (dynamic modes 3 and 9, p = 0.001, p = 0.006 from logistic regression after adjusting for the APACHE scores). Combining the dynamics of BP time series and SAPS-I or APACHE-III provided a more accurate assessment of patient survival/mortality in the hospital than using SAPS-I and APACHE-III alone (p = 0.005 and p = 0.045). Our results suggest that the discovered dynamics of vital sign time series may contain additional prognostic value beyond that of the baseline acuity measures, and can potentially be used as an independent predictor of outcomes in the ICU. PMID:25014976
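
    As a greatly simplified analog (not the switching vector autoregressive model of the paper), the sketch below fits a short AR model to successive windows of a synthetic heart-rate series and clusters the coefficient vectors, illustrating the idea of discovering recurring "dynamic modes".

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(6)
        # Synthetic HR trace whose volatility switches in the middle segment.
        hr = 70 + np.concatenate([np.cumsum(rng.normal(scale=s, size=500)) for s in (0.05, 0.4, 0.05)])

        def ar_features(x, order=3):
            """Least-squares AR(order) coefficients of one window."""
            X = np.column_stack([x[order - k - 1: len(x) - k - 1] for k in range(order)])
            y = x[order:]
            return np.linalg.lstsq(X, y, rcond=None)[0]

        windows = [hr[i:i + 60] for i in range(0, len(hr) - 60, 60)]
        feats = np.array([ar_features(w) for w in windows])
        modes = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
        print(modes)                       # window-by-window "dynamic mode" labels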

  9. Identification of high shears and compressive discontinuities in the inner heliosphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greco, A.; Perri, S.

    2014-04-01

    Two techniques, the Partial Variance of Increments (PVI) and the Local Intermittency Measure (LIM), have been applied and compared using MESSENGER magnetic field data in the solar wind at a heliocentric distance of about 0.3 AU. The spatial properties of the turbulent field at different scales, spanning the whole inertial range of magnetic turbulence down toward the proton scales, have been studied. The LIM and PVI methodologies allow us to identify portions of an entire time series where magnetic energy is mostly accumulated, and regions of intermittent bursts in the magnetic field vector increments, respectively. A statistical analysis has revealed that at small time scales and for high levels of the threshold, the bursts present in the PVI and the LIM series correspond to regions of high shear stress and high magnetic field compressibility.
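
    A sketch of the PVI computation for a vector field series is shown below; the data are a synthetic random walk, not MESSENGER measurements, and the threshold of 3 is only an example.

        # Partial Variance of Increments: normalized magnitude of vector increments.
        import numpy as np

        def pvi(b, lag):
            """b: (n, 3) array of field vectors; lag: increment scale in samples."""
            db = b[lag:] - b[:-lag]                     # vector increments at scale `lag`
            mag = np.linalg.norm(db, axis=1)
            return mag / np.sqrt(np.mean(mag ** 2))     # PVI series

        rng = np.random.default_rng(2)
        b = np.cumsum(rng.normal(size=(10_000, 3)), axis=0)   # random-walk stand-in for B(t)
        series = pvi(b, lag=10)
        print((series > 3).sum(), "samples exceed the PVI > 3 threshold")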

  10. DIY series of genetic cassettes useful in construction of versatile vectors specific for Alphaproteobacteria.

    PubMed

    Dziewit, Lukasz; Adamczuk, Marcin; Szuplewska, Magdalena; Bartosik, Dariusz

    2011-08-01

    We have developed a DIY (Do It Yourself) series of genetic cassettes, which facilitate construction of novel versatile vectors for Alphaproteobacteria. All the cassettes are based on defined genetic modules derived from three natural plasmids of Paracoccus aminophilus JCM 7686. We have constructed over 50 DIY cassettes, which differ in structure and specific features. All of them are functional in eight strains representing three orders of Alphaproteobacteria: Rhodobacterales, Rhizobiales and Caulobacterales. Besides various replication and stabilization systems, many of the cassettes also contain selective markers appropriate for Alphaproteobacteria (40 cassettes) and genetic modules responsible for mobilization for conjugal transfer (24 cassettes). All the DIY cassettes are bordered by different types of polylinkers, which facilitate vector construction. Using these DIY cassettes, we have created a set of compatible Escherichia coli-Alphaproteobacteria mobilizable shuttle vectors (high or low copy number in E. coli), which will greatly assist the genetic manipulation of Alphaproteobacteria. Copyright © 2011 Elsevier B.V. All rights reserved.

  11. Integrating support vector machines and random forests to classify crops in time series of Worldview-2 images

    NASA Astrophysics Data System (ADS)

    Zafari, A.; Zurita-Milla, R.; Izquierdo-Verdiguier, E.

    2017-10-01

    Crop maps are essential inputs for the agricultural planning done at various governmental and agribusiness agencies. Remote sensing offers timely and cost-efficient technologies to identify and map crop types over large areas. Among the plethora of classification methods, Support Vector Machine (SVM) and Random Forest (RF) are widely used because of their proven performance. In this work, we study the synergic use of both methods by introducing a random forest kernel (RFK) into an SVM classifier. A time series of multispectral WorldView-2 images acquired over Mali (West Africa) in 2014 was used to develop our case study. Ground truth containing five common crop classes (cotton, maize, millet, peanut, and sorghum) was collected at 45 farms and used to train and test the classifiers. An SVM with the standard Radial Basis Function (RBF) kernel, an RF, and an SVM-RFK were trained and tested over 10 random training and test subsets generated from the ground data. Results show that the newly proposed SVM-RFK classifier can compete with both RF and SVM-RBF: the overall accuracies based on the spectral bands only are 83%, 82%, and 83%, respectively. Adding vegetation indices to the analysis results in classification accuracies of 82%, 81%, and 84% for SVM-RFK, RF, and SVM-RBF, respectively. Overall, it can be observed that the newly tested RFK can compete with the SVM-RBF and RF classifiers in terms of classification accuracy.
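
    The sketch below illustrates one common way to build a random-forest kernel (the fraction of trees in which two samples share a leaf) and plug it into an SVM as a precomputed Gram matrix. The data are synthetic, and this is not necessarily the exact RFK formulation used in the paper.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split

        X, y = make_classification(n_samples=300, n_features=20, n_classes=5,
                                   n_informative=10, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

        def rf_kernel(A, B, forest):
            """Fraction of trees in which two samples fall in the same leaf."""
            leaves_a, leaves_b = forest.apply(A), forest.apply(B)   # (n, n_trees) leaf ids
            return (leaves_a[:, None, :] == leaves_b[None, :, :]).mean(axis=2)

        svm_rfk = SVC(kernel="precomputed").fit(rf_kernel(X_tr, X_tr, rf), y_tr)
        print("SVM-RFK accuracy:", svm_rfk.score(rf_kernel(X_te, X_tr, rf), y_te))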

  12. Dimension reduction of frequency-based direct Granger causality measures on short time series.

    PubMed

    Siggiridou, Elsa; Kimiskidis, Vasilios K; Kugiumtzis, Dimitris

    2017-09-01

    The mainstream in the estimation of effective brain connectivity relies on Granger causality measures in the frequency domain. If the measure is meant to capture direct causal effects, accounting for the presence of other observed variables as in multi-channel electroencephalograms (EEG), the fit of a vector autoregressive (VAR) model on the multivariate time series is typically required. For short time series of many variables, the estimation of the VAR may not be stable, requiring dimension reduction and resulting in restricted or sparse VAR models. The restricted VAR obtained by the modified backward-in-time selection method (mBTS) is adapted to the generalized partial directed coherence (GPDC), termed restricted GPDC (RGPDC). Dimension reduction on other frequency-based measures, such as the direct directed transfer function (dDTF), is straightforward. First, a simulation study using linear stochastic multivariate systems is conducted and RGPDC is favorably compared to GPDC on short time series in terms of sensitivity and specificity. Then the two measures are tested for their ability to detect changes in brain connectivity during an epileptiform discharge (ED) from multi-channel scalp EEG. It is shown that RGPDC identifies the connectivity structure of the simulated systems, as well as changes in brain connectivity, better than GPDC, and is less dependent on the free parameter of the VAR order. The proposed dimension reduction in frequency measures based on VAR constitutes an appropriate strategy for reliably estimating brain networks within short time windows. Copyright © 2017 Elsevier B.V. All rights reserved.
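
    For reference, the sketch below computes ordinary partial directed coherence from the coefficients of a VAR fitted to a small simulated system; the paper's generalized and restricted variants (GPDC, RGPDC) and the mBTS lag selection are not reproduced here.

        import numpy as np
        from statsmodels.tsa.api import VAR

        rng = np.random.default_rng(3)
        n, k = 500, 3
        x = np.zeros((n, k))
        for t in range(1, n):                      # simple coupled system: channel 0 drives channel 1
            x[t, 0] = 0.5 * x[t - 1, 0] + rng.normal(scale=0.5)
            x[t, 1] = 0.4 * x[t - 1, 0] + 0.3 * x[t - 1, 1] + rng.normal(scale=0.5)
            x[t, 2] = 0.6 * x[t - 1, 2] + rng.normal(scale=0.5)

        res = VAR(x).fit(maxlags=5, ic="aic")
        coefs = res.coefs                          # shape (p, k, k): A_1 .. A_p

        def pdc(coefs, freqs):
            p, k, _ = coefs.shape
            out = np.empty((len(freqs), k, k))
            for fi, f in enumerate(freqs):
                a_f = np.eye(k, dtype=complex)
                for r in range(p):
                    a_f -= coefs[r] * np.exp(-2j * np.pi * f * (r + 1))
                out[fi] = np.abs(a_f) / np.sqrt((np.abs(a_f) ** 2).sum(axis=0))
            return out                             # out[f, i, j]: influence from j to i

        print(pdc(coefs, freqs=[0.1])[0].round(2))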

  13. Green fluorescent protein as a reporter of gene expression and protein localization.

    PubMed

    Kain, S R; Adams, M; Kondepudi, A; Yang, T T; Ward, W W; Kitts, P

    1995-10-01

    The green fluorescent protein (GFP) from the jellyfish Aequorea victoria is rapidly becoming an important reporter molecule for monitoring gene expression and protein localization in vivo, in situ and in real time. GFP emits bright green light (lambda max = 509 nm) when excited with UV or blue light (lambda max = 395 nm, minor peak at 470 nm). The fluorescence excitation and emission spectra of GFP are similar to those of fluorescein, and the conditions used to visualize this fluorophore are also suitable for GFP. Unlike other bioluminescent reporters, the chromophore in GFP is intrinsic to the primary structure of the protein, and GFP fluorescence does not require a substrate or cofactor. GFP fluorescence is stable, species-independent and can be monitored non-invasively in living cells and, in the case of transparent organisms, whole animals. Here we demonstrate GFP fluorescence in bacterial and mammalian cells and introduce our Living Colors line of GFP reporter vectors, GFP protein and anti-GFP antiserum. The reporter vectors for GFP include a promoterless GFP vector for monitoring the expression of cloned promoters/enhancers in mammalian cells and a series of six vectors for creating fusion protein to either the N or C terminus of GFP.

  14. Transient foreign gene expression in chloroplasts of cultured tobacco cells after biolistic delivery of chloroplast vectors.

    PubMed Central

    Daniell, H; Vivekananda, J; Nielsen, B L; Ye, G N; Tewari, K K; Sanford, J C

    1990-01-01

    Expression of chloramphenicol acetyltransferase (cat) by suitable vectors in chloroplasts of cultured tobacco cells, delivered by high-velocity microprojectiles, is reported here. Several chloroplast expression vectors containing bacterial cat genes, placed under the control of either psbA promoter region from pea (pHD series) or rbcL promoter region from maize (pAC series) have been used in this study. In addition, chloroplast expression vectors containing replicon fragments from pea, tobacco, or maize chloroplast DNA have also been tested for efficiency and duration of cat expression in chloroplasts of tobacco cells. Cultured NT1 tobacco cells collected on filter papers were bombarded with tungsten particles coated with pUC118 (negative control), 35S-CAT (nuclear expression vector), pHD312 (repliconless chloroplast expression vector), and pHD407, pACp18, and pACp19 (chloroplast expression vectors with replicon). Sonic extracts of cells bombarded with pUC118 showed no detectable cat activity in the autoradiograms. Nuclear expression of cat reached two-thirds of the maximal 48 hr after bombardment and the maximal at 72 hr. Cells bombarded with chloroplast expression vectors showed a low level of expression until 48 hr of incubation. A dramatic increase in the expression of cat was observed 24 hr after the addition of fresh medium to cultured cells in samples bombarded with pHD407; the repliconless vector pHD312 showed about 50% of this maximal activity. The expression of nuclear cat and the repliconless chloroplast vector decreased after 72 hr, but a high level of chloroplast cat expression was maintained in cells bombarded with pHD407. Organelle-specific expression of cat in appropriate compartments was checked by introducing various plasmid constructions into tobacco protoplasts by electroporation. Although the nuclear expression vector 35S-CAT showed expression of cat, no activity was observed with any chloroplast vectors. Images PMID:2404285

  15. Transient foreign gene expression in chloroplasts of cultured tobacco cells after biolistic delivery of chloroplast vectors.

    PubMed

    Daniell, H; Vivekananda, J; Nielsen, B L; Ye, G N; Tewari, K K; Sanford, J C

    1990-01-01

    Expression of chloramphenicol acetyltransferase (cat) by suitable vectors in chloroplasts of cultured tobacco cells, delivered by high-velocity microprojectiles, is reported here. Several chloroplast expression vectors containing bacterial cat genes, placed under the control of either psbA promoter region from pea (pHD series) or rbcL promoter region from maize (pAC series) have been used in this study. In addition, chloroplast expression vectors containing replicon fragments from pea, tobacco, or maize chloroplast DNA have also been tested for efficiency and duration of cat expression in chloroplasts of tobacco cells. Cultured NT1 tobacco cells collected on filter papers were bombarded with tungsten particles coated with pUC118 (negative control), 35S-CAT (nuclear expression vector), pHD312 (repliconless chloroplast expression vector), and pHD407, pACp18, and pACp19 (chloroplast expression vectors with replicon). Sonic extracts of cells bombarded with pUC118 showed no detectable cat activity in the autoradiograms. Nuclear expression of cat reached two-thirds of the maximal 48 hr after bombardment and the maximal at 72 hr. Cells bombarded with chloroplast expression vectors showed a low level of expression until 48 hr of incubation. A dramatic increase in the expression of cat was observed 24 hr after the addition of fresh medium to cultured cells in samples bombarded with pHD407; the repliconless vector pHD312 showed about 50% of this maximal activity. The expression of nuclear cat and the repliconless chloroplast vector decreased after 72 hr, but a high level of chloroplast cat expression was maintained in cells bombarded with pHD407. Organelle-specific expression of cat in appropriate compartments was checked by introducing various plasmid constructions into tobacco protoplasts by electroporation. Although the nuclear expression vector 35S-CAT showed expression of cat, no activity was observed with any chloroplast vectors.

  16. Ares I Static Tests Design

    NASA Technical Reports Server (NTRS)

    Carson, William; Lindemuth, Kathleen; Mich, John; White, K. Preston; Parker, Peter A.

    2009-01-01

    Probabilistic engineering design enhances safety and reduces costs by incorporating risk assessment directly into the design process. In this paper, we assess the format of the quantitative metrics for the vehicle which will replace the Space Shuttle, the Ares I rocket. Specifically, we address the metrics for in-flight measurement error in the vector position of the motor nozzle, dictated by limits on guidance, navigation, and control systems. Analyses include the propagation of error from measured to derived parameters, the time-series of dwell points for the duty cycle during static tests, and commanded versus achieved yaw angle during tests. Based on these analyses, we recommend a probabilistic template for specifying the maximum error in angular displacement and radial offset for the nozzle-position vector. Criteria for evaluating individual tests and risky decisions also are developed.

  17. Module Twelve: Series AC Resistive-Reactive Circuits; Basic Electricity and Electronics Individualized Learning System.

    ERIC Educational Resources Information Center

    Bureau of Naval Personnel, Washington, DC.

    The module covers series circuits which contain both resistive and reactive components and methods of solving these circuits for current, voltage, impedance, and phase angle. The module is divided into six lessons: voltage and impedance in AC (alternating current) series circuits, vector computations, rectangular and polar notation, variational…

  18. Identifiability and Problems of Model Selection for Time-Series Analysis in Econometrics.

    DTIC Science & Technology

    1980-01-01

    For continuous time, with the time set T ⊂ R, a system F is given by the state-space equations (2.1); for discrete time, that is, with the time set T = Z (the integers), a system F is given by (2.2) x(t + 1) = Fx(t) + Gu(t), y(t) = Hx(t), t ∈ Z. In (2.1)-(2.2), the real (or complex) vectors x, u, and y are called the state, input, and output, respectively.
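
    A small simulation of the discrete-time system (2.2) is sketched below with illustrative matrices F, G and H; it is only meant to make the notation concrete.

        import numpy as np

        F = np.array([[0.9, 0.1],
                      [0.0, 0.8]])      # state transition
        G = np.array([[0.0],
                      [1.0]])           # input matrix
        H = np.array([[1.0, 0.0]])      # output matrix

        def simulate(F, G, H, u, x0):
            x, ys = x0, []
            for u_t in u:
                ys.append(H @ x)                 # y(t) = H x(t)
                x = F @ x + G @ u_t              # x(t + 1) = F x(t) + G u(t)
            return np.array(ys)

        u = np.ones((50, 1))                     # unit-step input
        print(simulate(F, G, H, u, x0=np.zeros(2))[-1])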

  19. Observation and modeling of energetic particles at synchronous orbit on July 29, 1977

    NASA Technical Reports Server (NTRS)

    Baker, D. N.; Higbie, P. R.; Fritz, T. A.; Wilken, B.; Kaye, S. M.; Kivelson, M. G.; Moore, T. E.; Masley, A. J.; Smith, P. H.; Vampola, A. L.

    1982-01-01

    In the twelve hours following a worldwide storm, there was a series of at least four magnetospheric substorms, the last and largest of which exhibited an expansion phase onset at approximately 1200 UT. Data from six spacecraft in three general local time groupings (0300, 0700, and 1300 LT) were examined and vector magnetic field data and energetic electron and ion data from approximately 15 keV to 2 MeV were employed.

  20. A vector auto-regressive model for onshore and offshore wind synthesis incorporating meteorological model information

    NASA Astrophysics Data System (ADS)

    Hill, D.; Bell, K. R. W.; McMillan, D.; Infield, D.

    2014-05-01

    Wind power production is growing as part of efforts to meet ambitious targets, set for example by the EU, to reduce greenhouse gas emissions by 20% by 2020. Huge investments are now being made in new offshore wind farms around UK coastal waters that will have a major impact on the GB electrical supply. Representations of the UK wind field are required in syntheses that capture the inherent structure and correlations between different locations, including offshore sites. Here, Vector Auto-Regressive (VAR) models are presented and extended in a novel way to incorporate offshore time series from a pan-European meteorological model called COSMO, together with onshore wind speeds from the MIDAS dataset provided by the British Atmospheric Data Centre. Onshore forecasting ability is shown to be improved by the inclusion of the offshore sites, with improvements of up to 25% in RMS error at 6 h ahead. In addition, the VAR model is used to synthesise time series of wind at each offshore site, which are then used to estimate wind farm capacity factors at the sites in question. These are compared with estimates of capacity factors derived from the work of Hawkins et al. (2011), and a good degree of agreement is established, indicating that this synthesis tool should be useful in power system impact studies.
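
    The sketch below shows, under hypothetical data, how a VAR(p) fitted by least squares can be used to synthesise new multi-site series by iterating the model with bootstrapped residuals; it is a from-scratch illustration, not the paper's calibrated model of MIDAS and COSMO sites.

        import numpy as np

        rng = np.random.default_rng(4)
        n, k, p = 2_000, 4, 2
        wind = np.cumsum(rng.normal(size=(n, k)), axis=0) * 0.05 + 8.0   # stand-in hourly wind speeds

        # Design matrix [1, y_{t-1}, ..., y_{t-p}] for each t >= p, fitted by least squares.
        Z = np.column_stack([np.ones(n - p)] + [wind[p - lag - 1: n - lag - 1] for lag in range(p)])
        Y = wind[p:]
        B, *_ = np.linalg.lstsq(Z, Y, rcond=None)          # (1 + k*p, k) coefficient matrix
        resid = Y - Z @ B

        def synthesise(B, resid, seed_block, steps, rng):
            hist = list(seed_block)                        # last p observations, newest last
            out = []
            for _ in range(steps):
                z = np.concatenate([[1.0]] + [hist[-lag - 1] for lag in range(p)])
                x = z @ B + resid[rng.integers(len(resid))]    # bootstrap a residual vector
                hist.append(x)
                out.append(x)
            return np.array(out)

        print(synthesise(B, resid, wind[-p:], steps=5, rng=rng).round(2))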

  1. Holomorphic projections and Ramanujan’s mock theta functions

    PubMed Central

    Imamoğlu, Özlem; Raum, Martin; Richter, Olav K.

    2014-01-01

    We use spectral methods of automorphic forms to establish a holomorphic projection operator for tensor products of vector-valued harmonic weak Maass forms and vector-valued modular forms. We apply this operator to discover simple recursions for Fourier series coefficients of Ramanujan’s mock theta functions. PMID:24591582

  2. A simple method of equine limb force vector analysis and its potential applications

    PubMed Central

    Robinson, Mark A.; Clayton, Hilary M.

    2018-01-01

    Background Ground reaction forces (GRF) measured during equine gait analysis are typically evaluated by analyzing discrete values obtained from continuous force-time data for the vertical, longitudinal and transverse GRF components. This paper describes a simple, temporo-spatial method of displaying and analyzing sagittal plane GRF vectors. In addition, the application of statistical parametric mapping (SPM) is introduced to analyse differences between contra-lateral fore and hindlimb force-time curves throughout the stance phase. The overall aim of the study was to demonstrate alternative methods of evaluating functional (a)symmetry within horses. Methods GRF and kinematic data were collected from 10 horses trotting over a series of four force plates (120 Hz). The kinematic data were used to determine clean hoof contacts. The stance phase of each hoof was determined using a 50 N threshold. Vertical and longitudinal GRF for each stance phase were plotted both as force-time curves and as force vector diagrams in which vectors originating at the centre of pressure on the force plate were drawn at intervals of 8.3 ms for the duration of stance. Visual evaluation was facilitated by overlay of the vector diagrams for different limbs. Summary vectors representing the magnitude (VecMag) and direction (VecAng) of the mean force over the entire stance phase were superimposed on the force vector diagram. Typical measurements extracted from the force-time curves (peak forces, impulses) were compared with VecMag and VecAng using partial correlation (controlling for speed). Paired samples t-tests (left v. right diagonal pair comparison and high v. low vertical force diagonal pair comparison) were performed on discrete and vector variables using traditional methods and Hotelling’s T2 tests on normalized stance phase data using SPM. Results Evidence from traditional statistical tests suggested that VecMag is more influenced by the vertical force and impulse, whereas VecAng is more influenced by the longitudinal force and impulse. When used to evaluate mean data from the group of ten sound horses, SPM did not identify differences between the left and right contralateral limb pairs or between limb pairs classified according to directional asymmetry. When evaluating a single horse, three periods were identified during which differences in the forces between the left and right forelimbs exceeded the critical threshold (p < .01). Discussion Traditional statistical analysis of 2D GRF peak values, summary vector variables and visual evaluation of force vector diagrams gave harmonious results and both methods identified the same inter-limb asymmetries. As alpha was more tightly controlled using SPM, significance was only found in the individual horse although T2 plots followed the same trends as discrete analysis for the group. Conclusions The techniques of force vector analysis and SPM hold promise for investigations of sidedness and asymmetry in horses. PMID:29492341

  3. Improving Photometry and Stellar Signal Preservation with Pixel-Level Systematic Error Correction

    NASA Technical Reports Server (NTRS)

    Kolodzijczak, Jeffrey J.; Smith, Jeffrey C.; Jenkins, Jon M.

    2013-01-01

    The Kepler Mission has demonstrated that excellent stellar photometric performance can be achieved using apertures constructed from optimally selected CCD pixels. The clever methods used to correct for systematic errors, while very successful, still have some limitations in their ability to extract long-term trends in stellar flux. They also leave poorly correlated bias sources, such as drifting moiré pattern, uncorrected. We will illustrate several approaches where applying systematic error correction algorithms to the pixel time series, rather than the co-added raw flux time series, provides significant advantages. Examples include spatially localized determination of time-varying moiré pattern biases, greater sensitivity to radiation-induced pixel sensitivity drops (SPSDs), improved precision of co-trending basis vectors (CBV), and a means of distinguishing the stellar variability from co-trending terms even when they are correlated. For the last item, the approach enables physical interpretation of appropriately scaled coefficients derived in the fit of pixel time series to the CBV as linear combinations of various spatial derivatives of the pixel response function (PRF). We demonstrate that the residuals of a fit of so-derived pixel coefficients to various PRF-related components can be deterministically interpreted in terms of physically meaningful quantities, such as the component of the stellar flux time series which is correlated with the CBV, as well as relative pixel gain, proper motion and parallax. The approach also enables us to parameterize and assess the limiting factors in the uncertainties in these quantities.
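
    As a much-simplified illustration of co-trending, the sketch below fits a flux time series to two synthetic basis vectors by least squares and subtracts the fitted systematic component; note that such a naive fit also removes any stellar signal that is correlated with the basis vectors, which is precisely the limitation the abstract addresses.

        import numpy as np

        rng = np.random.default_rng(7)
        n = 1_000
        cbv = np.column_stack([np.linspace(-1, 1, n),
                               np.sin(np.linspace(0, 8, n))])        # two synthetic basis vectors
        star = 0.02 * np.sin(np.linspace(0, 40, n))                  # stellar signal
        flux = star + cbv @ np.array([0.5, -0.3]) + rng.normal(scale=0.005, size=n)

        coeffs, *_ = np.linalg.lstsq(cbv, flux, rcond=None)          # least-squares co-trending fit
        detrended = flux - cbv @ coeffs
        print(f"fitted coefficients: {coeffs.round(3)}, residual rms: {detrended.std():.4f}")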

  4. Evaluation of a new parallel numerical parameter optimization algorithm for a dynamical system

    NASA Astrophysics Data System (ADS)

    Duran, Ahmet; Tuncel, Mehmet

    2016-10-01

    It is important to have a scalable parallel numerical parameter optimization algorithm for a dynamical system used in financial applications where time limitation is crucial. We use Message Passing Interface parallel programming and present such a new parallel algorithm for parameter estimation. For example, we apply the algorithm to the asset flow differential equations that have been developed and analyzed since 1989 (see [3-6] and references contained therein). We achieved speed-up for some time series on runs using up to 512 cores (see [10]). Unlike [10], in this work we consider more extensive financial market situations, for example in the presence of low volatility, high volatility, and a stock market price at a discount or premium to its net asset value of varying magnitude. Moreover, we evaluated the convergence of the model parameter vector, the nonlinear least squares error, and the maximum improvement factor to quantify the success of the optimization process depending on the number of initial parameter vectors.

  5. Spatial and temporal variation of life-history traits documented using capture-mark-recapture methods in the vector snail Bulinus truncatus.

    PubMed

    Chlyeh, G; Henry, P Y; Jarne, P

    2003-09-01

    The population biology of the schistosome-vector snail Bulinus truncatus was studied in an irrigation area near Marrakech, Morocco, using demographic approaches, in order to estimate life-history parameters. The survey was conducted using 2 capture-mark-recapture analyses in 2 separate sites from the irrigation area, the first one in 1999 and the second one in 2000. Individuals larger than 5 mm were considered. The capture probability varied through time and space in both analyses. Apparent survival (from 0.7 to 1 per period of 2-3 days) varied with time and space (a series of sinks was considered), as well as a square function of size. These results suggest variation in population intrinsic rate of increase. They also suggest that results from more classical analyses of population demography, aiming, for example at estimating population size, should be interpreted with caution. Together with other results obtained in the same irrigation area, they also lead to some suggestions for population control.

  6. Hyperbolic-symmetry vector fields.

    PubMed

    Gao, Xu-Zhen; Pan, Yue; Cai, Meng-Qiang; Li, Yongnan; Tu, Chenghou; Wang, Hui-Tian

    2015-12-14

    We present and construct a new kind of orthogonal coordinate system, the hyperbolic coordinate system. We present and design a new kind of locally linearly polarized vector field, defined as the hyperbolic-symmetry vector field because the points with the same polarization form a series of hyperbolae. We experimentally demonstrate the generation of such hyperbolic-symmetry vector optical fields. In particular, we also study the modified hyperbolic-symmetry vector optical fields with the twofold and fourfold symmetric states of polarization obtained when mirror symmetry is introduced. The tight focusing behaviors of these vector fields are also investigated. In addition, we fabricate micro-structures on K9 glass surfaces with several tightly focused (modified) hyperbolic-symmetry vector field patterns, which demonstrates that the simulated tightly focused fields are in good agreement with the fabricated micro-structures.

  7. Comparing different approaches to visualizing light waves: An experimental study on teaching wave optics

    NASA Astrophysics Data System (ADS)

    Mešić, Vanes; Hajder, Erna; Neumann, Knut; Erceg, Nataša

    2016-06-01

    Research has shown that students have tremendous difficulties developing a qualitative understanding of wave optics, at all educational levels. In this study, we investigate how three different approaches to visualizing light waves affect students' understanding of wave optics. In the first, the conventional, approach light waves are represented by sinusoidal curves. The second teaching approach includes representing light waves by a series of static images, showing the oscillating electric field vectors at characteristic, subsequent instants of time. Within the third approach phasors are used for visualizing light waves. A total of N =85 secondary school students were randomly assigned to one of the three teaching approaches, each of which lasted a period of four class hours. Students who learned with phasors and students who learned from the series of static images outperformed the students learning according to the conventional approach, i.e., they showed a much better understanding of basic wave optics, as measured by a conceptual survey administered to the students one week after the treatment. Our results suggest that visualizing light waves with phasors or oscillating electric field vectors is a promising approach to developing a deeper understanding of wave optics for students enrolled in conceptual level physics courses.

  8. Efficient and stable expression of GFP through Wheat streak mosaic virus-based vectors in cereal hosts using a range of cleavage sites: Formation of dense fluorescent aggregates for sensitive virus tracking

    USDA-ARS?s Scientific Manuscript database

    A series of Wheat streak mosaic virus (WSMV)-based expression vectors were developed by engineering cycle 3 GFP (GFP) cistron between P1 and HC-Pro cistrons with several catalytic/cleavage peptides at the C-terminus of GFP. WSMV-GFP vectors with the Foot-and-mouth disease virus 1D/2A or 2A catalytic...

  9. Unsupervised and self-mapping category formation and semantic object recognition for mobile robot vision used in an actual environment

    NASA Astrophysics Data System (ADS)

    Madokoro, H.; Tsukada, M.; Sato, K.

    2013-07-01

    This paper presents an unsupervised learning-based object category formation and recognition method for mobile robot vision. Our method has the following features: detection of feature points and description of features using a scale-invariant feature transform (SIFT), selection of target feature points using one-class support vector machines (OC-SVMs), generation of visual words using self-organizing maps (SOMs), formation of labels using adaptive resonance theory 2 (ART-2), and creation and classification of categories on a category map of counter propagation networks (CPNs) for visualizing spatial relations between categories. Classification results for dynamic images, using time-series images obtained from two different-sized robots and from different movements, demonstrate that our method can visualize spatial relations between categories while maintaining time-series characteristics. Moreover, we emphasize the effectiveness of our method for category formation under appearance changes of objects.

  10. Three-dimensional study of the vector potential of magnetic structures.

    PubMed

    Phatak, Charudatta; Petford-Long, Amanda K; De Graef, Marc

    2010-06-25

    The vector potential is central to a number of areas of condensed matter physics, such as superconductivity and magnetism. We have used a combination of electron wave phase reconstruction and electron tomographic reconstruction to experimentally measure and visualize the three-dimensional vector potential in and around a magnetic Permalloy structure. The method can probe the vector potential of the patterned structures with a resolution of about 13 nm. A transmission electron microscope operated in the Lorentz mode is used to record four tomographic tilt series. Measurements for a square Permalloy structure with an internal closure domain configuration are presented.

  11. Modeling Dengue vector population using remotely sensed data and machine learning.

    PubMed

    Scavuzzo, Juan M; Trucco, Francisco; Espinosa, Manuel; Tauro, Carolina B; Abril, Marcelo; Scavuzzo, Carlos M; Frery, Alejandro C

    2018-05-16

    Mosquitoes are vectors of many human diseases. In particular, Aedes ægypti (Linnaeus) is the main vector for Chikungunya, Dengue, and Zika viruses in Latin America and it represents a global threat. Public health policies that aim at combating this vector require dependable and timely information, which is usually expensive to obtain with field campaigns. For this reason, several efforts have been made to use remote sensing due to its reduced cost. The present work includes the temporal modeling of the oviposition activity (measured weekly on 50 ovitraps in a north Argentinean city) of Aedes ægypti (Linnaeus), based on time series of data extracted from operational earth observation satellite images. We use NDVI, NDWI, night and day LST, and TRMM-GPM rainfall from 2012 to 2016 as predictive variables. In contrast to previous works which use linear models, we employ Machine Learning techniques using completely accessible open source toolkits. These models have the advantages of being non-parametric and capable of describing nonlinear relationships between variables. Specifically, in addition to two linear approaches, we assess a support vector machine, an artificial neural network, a K-nearest neighbors regressor and a decision tree regressor. Considerations are made on parameter tuning and the validation and training approach. The results are compared to linear models used in previous works with similar data sets for generating temporal predictive models. These new tools perform better than the linear approaches; in particular, K-nearest neighbors regression (KNNR) performs best. These results provide better alternatives to be implemented operatively in the Argentine geospatial risk system that has been running since 2012. Copyright © 2018 Elsevier B.V. All rights reserved.
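
    As an illustration of the kind of modeling reported above, the following is a minimal Python sketch (not the authors' code) of a K-nearest neighbors regressor trained on satellite-derived predictors to estimate weekly ovitrap counts; the data frame, column names and hyperparameters are placeholders chosen only to mirror the variables named in the abstract (NDVI, NDWI, day/night LST, rainfall).

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n_weeks = 200                                   # placeholder weekly records
df = pd.DataFrame({
    "ndvi": rng.uniform(0.1, 0.8, n_weeks),
    "ndwi": rng.uniform(-0.3, 0.4, n_weeks),
    "lst_day": rng.uniform(20, 40, n_weeks),
    "lst_night": rng.uniform(10, 25, n_weeks),
    "rain": rng.uniform(0, 80, n_weeks),
    "egg_count": rng.poisson(30, n_weeks),      # weekly ovitrap egg counts
})

X, y = df.drop(columns="egg_count"), df["egg_count"]
# Preserve the temporal order: train on earlier weeks, test on later ones.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, shuffle=False)

model = make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=5))
model.fit(X_tr, y_tr)
print("test RMSE:", mean_squared_error(y_te, model.predict(X_te)) ** 0.5)
```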

  12. Feature-space-based FMRI analysis using the optimal linear transformation.

    PubMed

    Sun, Fengrong; Morris, Drew; Lee, Wayne; Taylor, Margot J; Mills, Travis; Babyn, Paul S

    2010-09-01

    The optimal linear transformation (OLT), a feature-space image analysis technique, was first presented in the field of MRI. This paper proposes a method for extending OLT from MRI to functional MRI (fMRI) to improve the activation-detection performance over conventional approaches to fMRI analysis. In this method, first, ideal hemodynamic response time series for different stimuli were generated by convolving the theoretical hemodynamic response model with the stimulus timing. Second, constructing hypothetical signature vectors for different activity patterns of interest by virtue of the ideal hemodynamic responses, OLT was used to extract features of the fMRI data. The resultant feature space had particular geometric clustering properties. It was then classified into different groups, each pertaining to an activity pattern of interest; the applied signature vector for each group was obtained by averaging. Third, using the applied signature vectors, OLT was applied again to generate fMRI composite images with high SNRs for the desired activity patterns. Simulations and a blocked fMRI experiment were employed to verify the method and compare it with the general linear model (GLM)-based analysis. The simulation studies and the experimental results indicated the superiority of the proposed method over the GLM-based analysis in detecting brain activities.
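
    The first step described above, generating ideal hemodynamic response time series, can be sketched in a few lines of Python. This is only an illustration under stated assumptions: a generic double-gamma HRF, a 2 s repetition time and a simple block design, none of which are taken from the paper.

```python
import numpy as np
from scipy.stats import gamma

tr = 2.0                                    # repetition time in seconds (assumed)
n_scans = 120
t = np.arange(0, 32, tr)                    # HRF support, 0-32 s

# Double-gamma HRF: positive peak near 5 s, small undershoot near 15 s (illustrative).
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
hrf /= hrf.max()

# Block design: 10 scans (20 s) task, 10 scans rest, repeated (illustrative).
stimulus = (np.arange(n_scans) % 20 < 10).astype(float)

# Ideal hemodynamic response time series for this stimulus timing.
ideal_response = np.convolve(stimulus, hrf)[:n_scans]
print(ideal_response[:5])
```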

  13. A modular toolset for recombination transgenesis and neurogenetic analysis of Drosophila.

    PubMed

    Wang, Ji-Wu; Beck, Erin S; McCabe, Brian D

    2012-01-01

    Transgenic Drosophila have contributed extensively to our understanding of nervous system development, physiology and behavior in addition to being valuable models of human neurological disease. Here, we have generated a novel series of modular transgenic vectors designed to optimize and accelerate the production and analysis of transgenes in Drosophila. We constructed a novel vector backbone, pBID, that allows both phiC31 targeted transgene integration and incorporates insulator sequences to ensure specific and uniform transgene expression. Upon this framework, we have built a series of constructs that are either backwards compatible with existing restriction enzyme based vectors or utilize Gateway recombination technology for high-throughput cloning. These vectors allow for endogenous promoter or Gal4 targeted expression of transgenic proteins with or without fluorescent protein or epitope tags. In addition, we have generated constructs that facilitate transgenic splice isoform specific RNA inhibition of gene expression. We demonstrate the utility of these constructs to analyze proteins involved in nervous system development, physiology and neurodegenerative disease. We expect that these reagents will facilitate the proficiency and sophistication of Drosophila genetic analysis in both the nervous system and other tissues.

  14. Revealing Real-Time Emotional Responses: a Personalized Assessment based on Heartbeat Dynamics

    NASA Astrophysics Data System (ADS)

    Valenza, Gaetano; Citi, Luca; Lanatá, Antonio; Scilingo, Enzo Pasquale; Barbieri, Riccardo

    2014-05-01

    Emotion recognition through computational modeling and analysis of physiological signals has been widely investigated in the last decade. Most of the proposed emotion recognition systems require relatively long time series of multivariate records and do not provide accurate real-time characterizations using short time series. To overcome these limitations, we propose a novel personalized probabilistic framework able to characterize the emotional state of a subject through the analysis of heartbeat dynamics exclusively. The study includes thirty subjects presented with a set of standardized images gathered from the international affective picture system, alternating levels of arousal and valence. Due to the intrinsic nonlinearity and nonstationarity of the RR interval series, a specific point-process model was devised for instantaneous identification considering autoregressive nonlinearities up to the third order according to the Wiener-Volterra representation, thus tracking very fast stimulus-response changes. Features from the instantaneous spectrum and bispectrum, as well as the dominant Lyapunov exponent, were extracted and considered as input features to a support vector machine for classification. Results, estimating emotions every 10 seconds, achieve an overall accuracy in recognizing four emotional states based on the circumplex model of affect of 79.29%, with 79.15% on the valence axis, and 83.55% on the arousal axis.

  15. Revealing real-time emotional responses: a personalized assessment based on heartbeat dynamics.

    PubMed

    Valenza, Gaetano; Citi, Luca; Lanatá, Antonio; Scilingo, Enzo Pasquale; Barbieri, Riccardo

    2014-05-21

    Emotion recognition through computational modeling and analysis of physiological signals has been widely investigated in the last decade. Most of the proposed emotion recognition systems require relatively long time series of multivariate records and do not provide accurate real-time characterizations using short time series. To overcome these limitations, we propose a novel personalized probabilistic framework able to characterize the emotional state of a subject through the analysis of heartbeat dynamics exclusively. The study includes thirty subjects presented with a set of standardized images gathered from the international affective picture system, alternating levels of arousal and valence. Due to the intrinsic nonlinearity and nonstationarity of the RR interval series, a specific point-process model was devised for instantaneous identification considering autoregressive nonlinearities up to the third order according to the Wiener-Volterra representation, thus tracking very fast stimulus-response changes. Features from the instantaneous spectrum and bispectrum, as well as the dominant Lyapunov exponent, were extracted and considered as input features to a support vector machine for classification. Results, estimating emotions every 10 seconds, achieve an overall accuracy in recognizing four emotional states based on the circumplex model of affect of 79.29%, with 79.15% on the valence axis, and 83.55% on the arousal axis.
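
    The final classification stage described in this abstract (extracted features fed to a support vector machine) can be sketched as follows. The feature matrix below is a random placeholder standing in for the spectral, bispectral and Lyapunov-exponent estimates; the window count, feature count and SVM settings are illustrative, not the authors'.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(240, 12))      # e.g. 240 ten-second windows x 12 features
y = rng.integers(0, 4, size=240)    # four emotional states (circumplex quadrants)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```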

  16. High-Obliquity Impact of a Compact Penetrator on a Thin Plate: Penetrator Splitting and Adiabatic Shear

    DTIC Science & Technology

    1998-01-01

    nonideal penetrator on a thin plate at high obliquities. These computations simulated two series of experiments at velocities of 1.5 km/s and 4.1 km/s. [Recovered figure titles from the report's list of illustrations: "Combined Effects of Obliquity and Rotation on Debris Cloud Evolution at 4.1 km/s; Impact Velocity Vector Lies in x-z Plane"; "Time History of the Penetrator Mass Fraction Exiting the Bottom of the Target at 4.1 km/s".]

  17. Complex network construction based on user group attention sequence

    NASA Astrophysics Data System (ADS)

    Zhang, Gaowei; Xu, Lingyu; Wang, Lei

    2018-04-01

    In traditional complex network construction, the similarity between nodes is often used to define the edge weights and thereby build the network. However, this approach tends to focus only on the coupling between nodes, while ignoring the information transfer between nodes and the directionality of that transfer. In the network public opinion space, based on the set of stock series that user groups pay attention to within a certain period of time, we vectorize the different stocks and build a complex network.
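
    A hedged sketch of this construction idea is given below: each stock's attention series is treated as a vector, and stocks are connected by directed, weighted edges based on a lagged correlation, used here as a simple stand-in for directional information transfer. The weighting scheme, threshold and data are illustrative and not taken from the paper; the networkx package is assumed.

```python
import numpy as np
import networkx as nx

def lagged_corr(x, y, lag=1):
    """Correlation between x at time t and y at time t + lag (crude directionality)."""
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

rng = np.random.default_rng(1)
series = {                                   # hypothetical attention series per stock
    "stock_A": rng.random(100),
    "stock_B": rng.random(100),
    "stock_C": rng.random(100),
}

G = nx.DiGraph()
for src, xs in series.items():
    for dst, ys in series.items():
        if src != dst:
            w = lagged_corr(xs, ys)
            if abs(w) > 0.1:                 # illustrative threshold on edge strength
                G.add_edge(src, dst, weight=w)
print(G.number_of_edges(), "directed edges")
```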

  18. Diagnostic doses and times for Phlebotomus papatasi and Lutzomyia longipalpis sand flies (Diptera: Psychodidae: Phlebotominae) using the CDC bottle bioassay to assess insecticide resistance.

    PubMed

    Denlinger, David S; Creswell, Joseph A; Anderson, J Laine; Reese, Conor K; Bernhardt, Scott A

    2016-04-15

    Insecticide resistance to synthetic chemical insecticides is a worldwide concern in phlebotomine sand flies (Diptera: Psychodidae), the vectors of Leishmania spp. parasites. The CDC bottle bioassay assesses resistance by testing populations against verified diagnostic doses and diagnostic times for an insecticide, but the assay has been used limitedly with sand flies. The objective of this study was to determine diagnostic doses and diagnostic times for laboratory Lutzomyia longipalpis (Lutz & Nieva) and Phlebotomus papatasi (Scopoli) to ten insecticides, including pyrethroids, organophosphates, carbamates, and DDT, that are used worldwide to control vectors. Bioassays were conducted in 1,000-ml glass bottles each containing 10-25 sand flies from laboratory colonies of L. longipalpis or P. papatasi. Four pyrethroids, three organophosphates, two carbamates and one organochlorine, were evaluated. A series of concentrations were tested for each insecticide, and four replicates were conducted for each concentration. Diagnostic doses were determined only during the exposure bioassay for the organophosphates and carbamates. For the pyrethroids and DDT, diagnostic doses were determined for both the exposure bioassay and after a 24-hour recovery period. Both species are highly susceptible to the carbamates as their diagnostic doses are under 7.0 μg/ml. Both species are also highly susceptible to DDT during the exposure assay as their diagnostic doses are 7.5 μg/ml, yet their diagnostic doses for the 24-h recovery period are 650.0 μg/ml for Lu. longipalpis and 470.0 μg/ml for P. papatasi. Diagnostic doses and diagnostic times can now be incorporated into vector management programs that use the CDC bottle bioassay to assess insecticide resistance in field populations of Lu. longipalpis and P. papatasi. These findings provide initial starting points for determining diagnostic doses and diagnostic times for other sand fly vector species and wild populations using the CDC bottle bioassay.

  19. Can We Speculate Running Application With Server Power Consumption Trace?

    PubMed

    Li, Yuanlong; Hu, Han; Wen, Yonggang; Zhang, Jun

    2018-05-01

    In this paper, we propose to detect the running applications in a server by classifying the observed power consumption series for the purpose of data center energy consumption monitoring and analysis. The time series classification problem has been extensively studied, with various distance measurements developed; recently, deep learning-based sequence models have also been shown to be promising. In this paper, we propose a novel distance measurement and build a time series classification algorithm hybridizing the nearest neighbor and long short-term memory (LSTM) neural network. More specifically, first we propose a new distance measurement termed local time warping (LTW), which utilizes a user-specified index set for local warping, and is designed to be noncommutative and to avoid dynamic programming. Second, we hybridize the 1-nearest neighbor (1NN)-LTW and LSTM together. In particular, we combine the prediction probability vectors of 1NN-LTW and LSTM to determine the label of the test cases. Finally, using the power consumption data from a real data center, we show that the proposed LTW can improve the classification accuracy of dynamic time warping (DTW) from about 84% to 90%. Our experimental results show that the proposed LTW is competitive on our data set compared with existing DTW variants and that its noncommutative feature is indeed beneficial. We also test a linear version of LTW and find that it can perform similarly to the state-of-the-art DTW-based method while running as fast as linear-runtime lower-bound methods like LB_Keogh for our problem. With the hybrid algorithm, we achieve an accuracy of up to about 93% for the power series classification task. Our research can inspire more studies on time series distance measurement and on hybrids of deep learning models with other traditional models.
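
    The hybrid decision step described above (combining the prediction probability vectors of 1NN-LTW and LSTM to label a test case) reduces to a few lines. The sketch below is an illustration only: the two probability vectors are placeholders and the equal combination weight is an assumption, not the paper's rule.

```python
import numpy as np

def hybrid_label(p_1nn_ltw, p_lstm, alpha=0.5):
    """Combine two class-probability vectors and return the predicted class index."""
    p = alpha * np.asarray(p_1nn_ltw) + (1.0 - alpha) * np.asarray(p_lstm)
    return int(np.argmax(p))

# Placeholder probability vectors for a 3-class example.
print(hybrid_label([0.2, 0.7, 0.1], [0.1, 0.4, 0.5]))   # -> 1
```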

  20. Automatic techniques for 3D reconstruction of critical workplace body postures from range imaging data

    NASA Astrophysics Data System (ADS)

    Westfeld, Patrick; Maas, Hans-Gerd; Bringmann, Oliver; Gröllich, Daniel; Schmauder, Martin

    2013-11-01

    The paper shows techniques for the determination of structured motion parameters from range camera image sequences. The core contribution of the work presented here is the development of an integrated least squares 3D tracking approach based on amplitude and range image sequences to calculate dense 3D motion vector fields. Geometric primitives of a human body model are fitted to time series of range camera point clouds using these vector fields as additional information. Body poses and motion information for individual body parts are derived from the model fit. On the basis of these pose and motion parameters, critical body postures are detected. The primary aim of the study is to automate ergonomic studies for risk assessments regulated by law, identifying harmful movements and awkward body postures in a workplace.

  1. Cutaneous Leishmaniasis and Sand Fly Fluctuations Are Associated with El Niño in Panamá

    PubMed Central

    Chaves, Luis Fernando; Calzada, José E.; Valderrama, Anayansí; Saldaña, Azael

    2014-01-01

    Background Cutaneous Leishmaniasis (CL) is a neglected tropical vector-borne disease. Sand fly vectors (SF) and Leishmania spp parasites are sensitive to changes in weather conditions, rendering disease transmission susceptible to changes in local and global scale climatic patterns. Nevertheless, it is unclear how SF abundance is impacted by El Niño Southern Oscillation (ENSO) and how these changes might relate to changes in CL transmission. Methodology and Findings We studied association patterns between monthly time series, from January 2000 to December 2010, of: CL cases, rainfall and temperature from Panamá, and an ENSO index. We employed autoregressive models and cross wavelet coherence, to quantify the seasonal and interannual impact of local climate and ENSO on CL dynamics. We employed Poisson Rate Generalized Linear Mixed Models to study SF abundance patterns across ENSO phases, seasons and eco-epidemiological settings, employing records from 640 night-trap sampling collections spanning 2000–2011. We found that ENSO, rainfall and temperature were associated with CL cycles at interannual scales, while seasonal patterns were mainly associated with rainfall and temperature. Sand fly (SF) vector abundance, on average, decreased during the hot and cold ENSO phases, when compared with the normal ENSO phase, yet variability in vector abundance was largest during the cold ENSO phase. Our results showed a three month lagged association between SF vector abundance and CL cases. Conclusion Association patterns of CL with ENSO and local climatic factors in Panamá indicate that interannual CL cycles might be driven by ENSO, while the CL seasonality was mainly associated with temperature and rainfall variability. CL cases and SF abundance were associated in a fashion suggesting that sudden extraordinary changes in vector abundance might increase the potential for CL epidemic outbreaks, given that CL epidemics occur during the cold ENSO phase, a time when SF abundance shows its highest fluctuations. PMID:25275503

  2. Interannual Variability of Human Plague Occurrence in the Western United States Explained by Tropical and North Pacific Ocean Climate Variability

    PubMed Central

    Ari, Tamara Ben; Gershunov, Alexander; Tristan, Rouyer; Cazelles, Bernard; Gage, Kenneth; Stenseth, Nils C.

    2010-01-01

    Plague is a vector-borne, highly virulent zoonotic disease caused by the bacterium Yersinia pestis. It persists in nature through transmission between its hosts (wild rodents) and vectors (fleas). During epizootics, the disease expands and spills over to other host species such as humans living in or close to affected areas. Here, we investigate the effect of large-scale climate variability on the dynamics of human plague in the western United States using a 56-year time series of plague reports (1950–2005). We found that El Niño Southern Oscillation and Pacific Decadal Oscillation in combination affect the dynamics of human plague over the western United States. The underlying mechanism could involve changes in precipitation and temperatures that impact both hosts and vectors. It is suggested that snow also may play a key role, possibly through its effects on summer soil moisture, which is known to be instrumental for flea survival and development and sustained growth of vegetation for rodents. PMID:20810830

  3. Sentiments Analysis of Reviews Based on ARCNN Model

    NASA Astrophysics Data System (ADS)

    Xu, Xiaoyu; Xu, Ming; Xu, Jian; Zheng, Ning; Yang, Tao

    2017-10-01

    The sentiment analysis of product reviews is designed to help customers understand the status of a product. Traditional sentiment analysis methods rely on the input of a fixed-length feature vector, which is the performance bottleneck of the basic encoder-decoder architecture. In this paper, we propose an attention mechanism with a BRNN-CNN model, referred to as the ARCNN model. In order to analyze the semantic relations between words and mitigate the curse of dimensionality, we use the GloVe algorithm to train the vector representations for words. Then, the ARCNN model is proposed to deal with the problem of training deep features. Specifically, the BRNN handles non-fixed-length vectors and preserves time-series information, while the CNN learns deeper semantic connections. Moreover, the attention mechanism can automatically learn from the data and optimize the allocation of weights. Finally, a softmax classifier is designed to complete the sentiment classification of reviews. Experiments show that the proposed method can improve the accuracy of sentiment classification compared with benchmark methods.

  4. Optimizing support vector machine learning for semi-arid vegetation mapping by using clustering analysis

    NASA Astrophysics Data System (ADS)

    Su, Lihong

    In remote sensing communities, support vector machine (SVM) learning has recently received increasing attention. SVM learning usually requires large memory and enormous amounts of computation time on large training sets. According to SVM algorithms, the SVM classification decision function is fully determined by support vectors, which compose a subset of the training sets. In this regard, a solution to optimize SVM learning is to efficiently reduce training sets. In this paper, a data reduction method based on agglomerative hierarchical clustering is proposed to obtain smaller training sets for SVM learning. Using a multiple angle remote sensing dataset of a semi-arid region, the effectiveness of the proposed method is evaluated by classification experiments with a series of reduced training sets. The experiments show that there is no loss of SVM accuracy when the original training set is reduced to 34% using the proposed approach. Maximum likelihood classification (MLC) also is applied on the reduced training sets. The results show that MLC can also maintain the classification accuracy. This implies that the most informative data instances can be retained by this approach.
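
    The reduction idea described in this abstract can be sketched as follows, under stated assumptions: each class's training samples are grouped by agglomerative hierarchical clustering and replaced here by cluster centroids before training the SVM. The centroid representation, the 34% ratio applied per class, and the random data are illustrative; the paper's exact reduction scheme and remote sensing dataset are not reproduced.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.svm import SVC

def reduce_per_class(X, y, ratio=0.34):
    """Replace each class's samples with cluster representatives (centroids)."""
    Xr, yr = [], []
    for label in np.unique(y):
        Xc = X[y == label]
        n_clusters = max(1, int(len(Xc) * ratio))
        labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(Xc)
        for c in range(n_clusters):
            Xr.append(Xc[labels == c].mean(axis=0))
            yr.append(label)
    return np.array(Xr), np.array(yr)

rng = np.random.default_rng(0)
X = rng.random((300, 6))                    # placeholder spectral features
y = rng.integers(0, 4, size=300)            # placeholder vegetation classes

X_red, y_red = reduce_per_class(X, y)
clf = SVC(kernel="rbf").fit(X_red, y_red)   # SVM trained on the reduced set
print(len(X_red), "training samples after reduction")
```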

  5. Clifford support vector machines for classification, regression, and recurrence.

    PubMed

    Bayro-Corrochano, Eduardo Jose; Arana-Daniel, Nancy

    2010-11-01

    This paper introduces the Clifford support vector machines (CSVM) as a generalization of the real and complex-valued support vector machines using the Clifford geometric algebra. In this framework, we handle the design of kernels involving the Clifford or geometric product. In this approach, one redefines the optimization variables as multivectors. This allows us to have a multivector as output. Therefore, we can represent multiple classes according to the dimension of the geometric algebra in which we work. We show that one can apply CSVM for classification and regression and also to build a recurrent CSVM. The CSVM is an attractive approach for the multiple input multiple output processing of high-dimensional geometric entities. We carried out comparisons between CSVM and the current approaches to solve multiclass classification and regression. We also study the performance of the recurrent CSVM with experiments involving time series. The authors believe that this paper can be of great use for researchers and practitioners interested in multiclass hypercomplex computing, particularly for applications in complex and quaternion signal and image processing, satellite control, neurocomputation, pattern recognition, computer vision, augmented virtual reality, robotics, and humanoids.

  6. The application of artificial neural networks and support vector regression for simultaneous spectrophotometric determination of commercial eye drop contents

    NASA Astrophysics Data System (ADS)

    Valizadeh, Maryam; Sohrabi, Mahmoud Reza

    2018-03-01

    In the present study, artificial neural networks (ANNs) and support vector regression (SVR) were applied as intelligent methods, coupled with UV spectroscopy, for the simultaneous quantitative determination of Dorzolamide (DOR) and Timolol (TIM) in eye drops. Several synthetic mixtures were analyzed to validate the proposed methods. First, a time-series neural network, one type of artificial neural network, was employed and its efficiency was evaluated. Afterwards, a radial basis network was applied as another neural network. Results showed that the performance of this method is suitable for prediction. Finally, support vector regression was proposed to construct the Zilomole prediction model. Also, the root mean square error (RMSE) and mean recovery (%) were calculated for the SVR method. Moreover, the proposed methods were compared to high-performance liquid chromatography (HPLC) as a reference method. A one-way analysis of variance (ANOVA) test at the 95% confidence level, applied to the comparison of the suggested and reference methods, showed that there were no significant differences between them. Also, the effect of interferences was investigated in spiked solutions.
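
    A hedged sketch of the SVR step follows: predicting the concentrations of the two analytes from UV absorbance spectra with support vector regression, wrapped for multi-output prediction. The data shapes, kernel settings and train/test split are placeholders, not the study's calibration set.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.random((40, 120))        # 40 synthetic mixtures x 120 wavelengths (placeholder)
Y = rng.random((40, 2))          # columns: [DOR], [TIM] concentrations (placeholder)

model = MultiOutputRegressor(SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X[:30], Y[:30])                                  # calibration mixtures
rmse = mean_squared_error(Y[30:], model.predict(X[30:])) ** 0.5
print("validation RMSE:", rmse)
```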

  7. Bluetongue Disease Risk Assessment Based on Observed and Projected Culicoides obsoletus spp. Vector Densities

    PubMed Central

    Brugger, Katharina; Rubel, Franz

    2013-01-01

    Bluetongue is an arboviral disease of ruminants causing significant economic losses. Our risk assessment is based on the epidemiological key parameter, the basic reproduction number. It is defined as the number of secondary cases caused by one primary case in a fully susceptible host population, in which values greater than one indicate the possibility, i.e., the risk, for a major disease outbreak. In the course of the Bluetongue virus serotype 8 (BTV-8) outbreak in Europe in 2006 we developed such a risk assessment for the University of Veterinary Medicine Vienna, Austria. Basic reproduction numbers were calculated using a well-known formula for vector-borne diseases considering the population densities of hosts (cattle and small ruminants) and vectors (biting midges of the Culicoides obsoletus spp.) as well as temperature dependent rates. The latter comprise the biting and mortality rate of midges as well as the reciprocal of the extrinsic incubation period. Most important, but generally unknown, is the spatio-temporal distribution of the vector density. Therefore, we established a continuously operating daily monitoring to quantify the seasonal cycle of the vector population by a statistical model. We used cross-correlation maps and Poisson regression to describe vector densities by environmental temperature and precipitation. Our results comprise time series of observed and simulated Culicoides obsoletus spp. counts as well as basic reproduction numbers for the period 2009–2011. For a spatio-temporal risk assessment we projected our results from the location of Vienna to the entire region of Austria. We compiled both daily maps of vector densities and the basic reproduction numbers, respectively. Basic reproduction numbers above one were generally found between June and August except in the mountainous regions of the Alps. The highest values coincide with the locations of confirmed BTV cases. PMID:23560090
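
    For orientation, the quantity at the heart of this assessment can be illustrated with a textbook Ross-Macdonald-type expression for the basic reproduction number of a vector-borne pathogen, which combines the vector-to-host ratio, biting rate, transmission probabilities, vector mortality and the extrinsic incubation period. This is a generic member of the family of formulas the abstract refers to, not necessarily the exact parameterization used in the paper, and the numerical values below are purely illustrative.

```python
import numpy as np

def r0_vector_borne(m, a, b_vh, b_hv, mu_v, eip, r_h):
    """
    Generic Ross-Macdonald-type basic reproduction number.
    m     : vector-to-host ratio
    a     : vector biting rate (1/day)
    b_vh  : host-to-vector transmission probability per bite
    b_hv  : vector-to-host transmission probability per bite
    mu_v  : vector mortality rate (1/day)
    eip   : extrinsic incubation period (days)
    r_h   : host recovery rate (1/day)
    """
    return np.sqrt(m * a**2 * b_vh * b_hv * np.exp(-mu_v * eip) / (mu_v * r_h))

# Illustrative midge-like values; values greater than one indicate outbreak potential.
print(r0_vector_borne(m=500, a=0.2, b_vh=0.1, b_hv=0.9, mu_v=0.1, eip=10, r_h=0.05))
```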

  8. Hemispheric Patterns in Electric Current Helicity of Solar Magnetic Fields During Solar Cycle 24: Results from SOLIS, SDO and Hinode

    NASA Astrophysics Data System (ADS)

    Gusain, S.

    2017-12-01

    We study the hemispheric patterns in the electric current helicity distribution on the Sun. The magnetic field vector in the photosphere is now routinely measured by a variety of instruments. SOLIS/VSM of NSO observes full-disk Stokes spectra in photospheric lines, which are used to derive vector magnetograms. Hinode SP is a space-based spectropolarimeter which has the same observable as SOLIS, albeit with a limited field-of-view (FOV) but high spatial resolution. SDO/HMI derives vector magnetograms from full-disk Stokes measurements, with rather limited spectral resolution, from space in a different photospheric line. Further, these datasets now span several years: SOLIS/VSM from 2003, Hinode SP from 2006, and SDO/HMI since 2010. Using these time series of vector magnetograms, we compute the electric current density in active regions during solar cycle 24 and study the hemispheric distributions. Many studies show that helicity parameters and proxies exhibit a strong hemispheric bias, such that the northern hemisphere has preferentially negative and the southern hemisphere preferentially positive helicity. We will confirm these results for cycle 24 from three different datasets and evaluate the statistical significance of the hemispheric bias. Further, we discuss the solar cycle variation in the hemispheric helicity pattern during cycle 24 and discuss its implications in terms of solar dynamo models.

  9. Gateway Vectors for Efficient Artificial Gene Assembly In Vitro and Expression in Yeast Saccharomyces cerevisiae

    PubMed Central

    Giuraniuc, Claudiu V.; MacPherson, Murray; Saka, Yasushi

    2013-01-01

    Construction of synthetic genetic networks requires the assembly of DNA fragments encoding functional biological parts in a defined order. Yet this may become a time-consuming procedure. To address this technical bottleneck, we have created a series of Gateway shuttle vectors and an integration vector, which facilitate the assembly of artificial genes and their expression in the budding yeast Saccharomyces cerevisiae. Our method enables the rapid construction of an artificial gene from a promoter and an open reading frame (ORF) cassette by one-step recombination reaction in vitro. Furthermore, the plasmid thus created can readily be introduced into yeast cells to test the assembled gene’s functionality. As flexible regulatory components of a synthetic genetic network, we also created new versions of the tetracycline-regulated transactivators tTA and rtTA by fusing them to the auxin-inducible degron (AID). Using our gene assembly approach, we made yeast expression vectors of these engineered transactivators, AIDtTA and AIDrtTA and then tested their functions in yeast. We showed that these factors can be regulated by doxycycline and degraded rapidly after addition of auxin to the medium. Taken together, the method for combinatorial gene assembly described here is versatile and would be a valuable tool for yeast synthetic biology. PMID:23675537

  10. Construction of fusion vectors of corynebacteria: expression of glutathione-S-transferase fusion protein in Corynebacterium acetoacidophilum ATCC 21476.

    PubMed

    Srivastava, Preeti; Deb, J K

    2002-07-02

    A series of fusion vectors containing glutathione-S-transferase (GST) were constructed by inserting GST fusion cassette of Escherichia coli vectors pGEX4T-1, -2 and -3 in corynebacterial vector pBK2. Efficient expression of GST driven by inducible tac promoter of E. coli was observed in Corynebacterium acetoacidophilum. Fusion of enhanced green fluorescent protein (EGFP) and streptokinase genes in this vector resulted in the synthesis of both the fusion proteins. The ability of this recombinant organism to produce several-fold more of the product in the extracellular medium than in the intracellular space would make this system quite attractive as far as the downstream processing of the product is concerned.

  11. Learning a Mahalanobis Distance-Based Dynamic Time Warping Measure for Multivariate Time Series Classification.

    PubMed

    Mei, Jiangyuan; Liu, Meizhu; Wang, Yuan-Fang; Gao, Huijun

    2016-06-01

    Multivariate time series (MTS) datasets broadly exist in numerous fields, including health care, multimedia, finance, and biometrics. How to classify MTS accurately has become a hot research topic since it is an important element in many computer vision and pattern recognition applications. In this paper, we propose a Mahalanobis distance-based dynamic time warping (DTW) measure for MTS classification. The Mahalanobis distance builds an accurate relationship between each variable and its corresponding category. It is utilized to calculate the local distance between vectors in MTS. Then we use DTW to align those MTS which are out of synchronization or with different lengths. After that, how to learn an accurate Mahalanobis distance function becomes another key problem. This paper establishes a LogDet divergence-based metric learning with triplet constraint model which can learn Mahalanobis matrix with high precision and robustness. Furthermore, the proposed method is applied on nine MTS datasets selected from the University of California, Irvine machine learning repository and Robert T. Olszewski's homepage, and the results demonstrate the improved performance of the proposed approach.
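
    The core measure described above, dynamic time warping with a Mahalanobis local distance between multivariate samples, can be sketched compactly. In the sketch below the matrix M stands in for the learned metric; the LogDet-divergence metric learning step from the paper is omitted, and with M set to the identity the measure reduces to ordinary Euclidean-distance DTW.

```python
import numpy as np

def mahalanobis(x, y, M):
    d = x - y
    return float(np.sqrt(d @ M @ d))

def dtw_mahalanobis(A, B, M):
    """A, B: (length, dims) multivariate time series; M: (dims, dims) PSD metric matrix."""
    n, m = len(A), len(B)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = mahalanobis(A[i - 1], B[j - 1], M)
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(0)
A = rng.random((50, 3))
B = rng.random((60, 3))
M = np.eye(3)                      # identity metric = plain Euclidean DTW
print(dtw_mahalanobis(A, B, M))
```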

  12. Predictive Features of a Cockpit Traffic Display: A Workload Assessment

    NASA Technical Reports Server (NTRS)

    Wickens, Christopher D.; Morphew, Ephimia

    1997-01-01

    Eighteen pilots flew a series of traffic avoidance maneuvers in an experiment designed to assess the support offered and workload imposed by different levels of traffic display information in a free flight simulation. Three display prototypes were compared which differed in traffic information provided. A BASELINE (BL) display provided current and (2nd order) predicted information regarding ownship and current information of an intruder aircraft, represented on lateral and vertical displays in a coplanar suite. An INTRUDER PREDICTOR (IP) display, augmented the baseline display by providing lateral and vertical prediction of the intruder aircraft. A THREAT VECTOR (TV) display added to the IP display a vector that indicates the direction from ownship to the intruder at the predicted point of closest contact (POCC). The length of the vector corresponds to the radius of the protected zone, and the distance of the intersection of the vector with ownship predictor, corresponds to the time available till POCC or loss of separation. Pilots time shared the traffic avoidance task with a secondary task requiring them to monitor the top of the display for faint targets. This task simulated the visual demands of out-of-cockpit scanning, and hence was used to estimate the head-down time required by the different display formats. The results revealed that both display augmentations improved performance (safety) as assessed by predicted and actual loss of separation (i.e., penetration of the protected zone). Both enhancements also reduced workload, as assessed by the NASA TLX scale. The intruder predictor display produced these benefits with no substantial impact on the qualitative nature of the avoidance maneuvers that were selected. The threat vector produced the safety benefits by inducing a greater degree of (effective) lateral maneuvering, thus partially offsetting the benefits of reduced workload. The three displays did not differ in terms of their effect on performance of the monitoring task, used to infer head-down time, nor in the extent of vertical or airspeed maneuvering. The results are discussed in terms of their implications for 19 cognitive engineering design features.

  13. A native promoter and inclusion of an intron is necessary for efficient expression of GFP or mRFP in Armillaria mellea

    USDA-ARS?s Scientific Manuscript database

    Armillaria mellea is a significant pathogen that causes Armillaria root disease on numerous hosts in forests, gardens and agricultural environments worldwide. Using a yeast-adapted pCAMBIA0380 Agrobacterium vector, we have constructed a series of vectors for transformation of A. mellea, assembled u...

  14. A stochastic global identification framework for aerospace structures operating under varying flight states

    NASA Astrophysics Data System (ADS)

    Kopsaftopoulos, Fotis; Nardari, Raphael; Li, Yu-Hung; Chang, Fu-Kuo

    2018-01-01

    In this work, a novel data-based stochastic "global" identification framework is introduced for aerospace structures operating under varying flight states and uncertainty. In this context, the term "global" refers to the identification of a model that is capable of representing the structure under any admissible flight state based on data recorded from a sample of these states. The proposed framework is based on stochastic time-series models for representing the structural dynamics and aeroelastic response under multiple flight states, with each state characterized by several variables, such as the airspeed, angle of attack, altitude and temperature, forming a flight state vector. The method's cornerstone lies in the new class of Vector-dependent Functionally Pooled (VFP) models which allow the explicit analytical inclusion of the flight state vector into the model parameters and, hence, system dynamics. This is achieved via the use of functional data pooling techniques for optimally treating - as a single entity - the data records corresponding to the various flight states. In this proof-of-concept study the flight state vector is defined by two variables, namely the airspeed and angle of attack of the vehicle. The experimental evaluation and assessment is based on a prototype bio-inspired self-sensing composite wing that is subjected to a series of wind tunnel experiments under multiple flight states. Distributed micro-sensors in the form of stretchable sensor networks are embedded in the composite layup of the wing in order to provide the sensing capabilities. Experimental data collected from piezoelectric sensors are employed for the identification of a stochastic global VFP model via appropriate parameter estimation and model structure selection methods. The estimated VFP model parameters constitute two-dimensional functions of the flight state vector defined by the airspeed and angle of attack. The identified model is able to successfully represent the wing's aeroelastic response under the admissible flight states via a minimum number of estimated parameters compared to standard identification approaches. The obtained results demonstrate the high accuracy and effectiveness of the proposed global identification framework, thus constituting a first step towards the next generation of "fly-by-feel" aerospace vehicles with state awareness capabilities.

  15. Rational approximations from power series of vector-valued meromorphic functions

    NASA Technical Reports Server (NTRS)

    Sidi, Avram

    1992-01-01

    Let F(z) be a vector-valued function, F: C yields C(sup N), which is analytic at z = 0 and meromorphic in a neighborhood of z = 0, and let its Maclaurin series be given. In this work we developed vector-valued rational approximation procedures for F(z) by applying vector extrapolation methods to the sequence of partial sums of its Maclaurin series. We analyzed some of the algebraic and analytic properties of the rational approximations thus obtained, and showed that they were akin to Pade approximations. In particular, we proved a Koenig type theorem concerning their poles and a de Montessus type theorem concerning their uniform convergence. We showed how optimal approximations to multiple poles and to Laurent expansions about these poles can be constructed. Extensions of the procedures above and the accompanying theoretical results to functions defined in arbitrary linear spaces were also considered. One of the most interesting and immediate applications of the results of this work is to the matrix eigenvalue problem. In a forthcoming paper we exploited the developments of the present work to devise bona fide generalizations of the classical power method that are especially suitable for very large and sparse matrices. These generalizations can be used to approximate simultaneously several of the largest distinct eigenvalues and corresponding eigenvectors and invariant subspaces of arbitrary matrices which may or may not be diagonalizable, and are very closely related with known Krylov subspace methods.

  16. Retroviral vectors encoding ADA regulatory locus control region provide enhanced T-cell-specific transgene expression.

    PubMed

    Trinh, Alice T; Ball, Bret G; Weber, Erin; Gallaher, Timothy K; Gluzman-Poltorak, Zoya; Anderson, French; Basile, Lena A

    2009-12-30

    Murine retroviral vectors have been used in several hundred gene therapy clinical trials, but have fallen out of favor for a number of reasons. One issue is that gene expression from viral or internal promoters is highly variable and essentially unregulated. Moreover, with retroviral vectors, gene expression is usually silenced over time. Mammalian genes, in contrast, are characterized by highly regulated, precise levels of expression in both a temporal and a cell-specific manner. To ascertain if recapitulation of endogenous adenosine deaminase (ADA) expression can be achieved in a vector construct we created a new series of Moloney murine leukemia virus (MuLV) based retroviral vector that carry human regulatory elements including combinations of the ADA promoter, the ADA locus control region (LCR), ADA introns and human polyadenylation sequences in a self-inactivating vector backbone. A MuLV-based retroviral vector with a self-inactivating (SIN) backbone, the phosphoglycerate kinase promoter (PGK) and the enhanced green fluorescent protein (eGFP), as a reporter gene, was generated. Subsequent vectors were constructed from this basic vector by deletion or addition of certain elements. The added elements that were assessed are the human ADA promoter, human ADA locus control region (LCR), introns 7, 8, and 11 from the human ADA gene, and human growth hormone polyadenylation signal. Retroviral vector particles were produced by transient three-plasmid transfection of 293T cells. Retroviral vectors encoding eGFP were titered by transducing 293A cells, and then the proportion of GFP-positive cells was determined using fluorescence-activated cell sorting (FACS). Non T-cell and T-cell lines were transduced at a multiplicity of infection (MOI) of 0.1 and the yield of eGFP transgene expression was evaluated by FACS analysis using mean fluorescent intensity (MFI) detection. Vectors that contained the ADA LCR were preferentially expressed in T-cell lines. Further improvements in T-cell specific gene expression were observed with the incorporation of additional cis-regulatory elements, such as a human polyadenylation signal and intron 7 from the human ADA gene. These studies suggest that the combination of an authentically regulated ADA gene in a murine retroviral vector, together with additional locus-specific regulatory refinements, will yield a vector with a safer profile and greater efficacy in terms of high-level, therapeutic, regulated gene expression for the treatment of ADA-deficient severe combined immunodeficiency.

  17. Simulation of groundwater level variations using wavelet combined with neural network, linear regression and support vector machine

    NASA Astrophysics Data System (ADS)

    Ebrahimi, Hadi; Rajaee, Taher

    2017-01-01

    Simulation of groundwater level (GWL) fluctuations is an important task in the management of groundwater resources. In this study, the effect of wavelet analysis on the training of the artificial neural network (ANN), multiple linear regression (MLR) and support vector regression (SVR) approaches was investigated, and the ANN, MLR and SVR along with the wavelet-ANN (WNN), wavelet-MLR (WLR) and wavelet-SVR (WSVR) models were compared in simulating GWL one month ahead. The only variable used to develop the models was the monthly GWL data recorded over a period of 11 years from two wells in the Qom plain, Iran. The results showed that decomposing the GWL time series into several sub-time series greatly improved the training of the models. For both wells 1 and 2, the Meyer and Db5 wavelets produced better results compared to the other wavelets, which indicated that wavelet types have similar behavior in similar case studies. The optimal number of delays was 6 months, which seems to be due to natural phenomena. The best WNN model, using the Meyer mother wavelet with two decomposition levels, simulated GWL one month ahead with RMSE values equal to 0.069 m and 0.154 m for wells 1 and 2, respectively. The RMSE values for the WLR model were 0.058 m and 0.111 m, and for the WSVR model were 0.136 m and 0.060 m for wells 1 and 2, respectively.
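
    The preprocessing step described above can be sketched with the PyWavelets package (an assumption; the paper does not name its software): the monthly GWL series is decomposed with a Db5 wavelet at two levels, and the reconstructed sub-series become the inputs that are lagged and fed to the ANN/MLR/SVR models. The synthetic series and level choice are illustrative only.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
# Synthetic stand-in for ~11 years of monthly groundwater levels.
gwl = np.sin(np.linspace(0, 8 * np.pi, 132)) + 0.1 * rng.standard_normal(132)

coeffs = pywt.wavedec(gwl, wavelet="db5", level=2)      # [cA2, cD2, cD1]

# Reconstruct each sub-series at the original length so they can be lagged
# (e.g. with 6-month delays, as in the paper) and used as model inputs.
n = len(gwl)
approx  = pywt.upcoef("a", coeffs[0], "db5", level=2, take=n)
detail2 = pywt.upcoef("d", coeffs[1], "db5", level=2, take=n)
detail1 = pywt.upcoef("d", coeffs[2], "db5", level=1, take=n)
sub_series = np.column_stack([approx, detail2, detail1])
print(sub_series.shape)
```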

  18. Arbitrary norm support vector machines.

    PubMed

    Huang, Kaizhu; Zheng, Danian; King, Irwin; Lyu, Michael R

    2009-02-01

    Support vector machines (SVM) are state-of-the-art classifiers. Typically L2-norm or L1-norm is adopted as a regularization term in SVMs, while other norm-based SVMs, for example, the L0-norm SVM or even the L(infinity)-norm SVM, are rarely seen in the literature. The major reason is that L0-norm describes a discontinuous and nonconvex term, leading to a combinatorially NP-hard optimization problem. In this letter, motivated by Bayesian learning, we propose a novel framework that can implement arbitrary norm-based SVMs in polynomial time. One significant feature of this framework is that only a sequence of sequential minimal optimization problems needs to be solved, thus making it practical in many real applications. The proposed framework is important in the sense that Bayesian priors can be efficiently plugged into most learning methods without knowing the explicit form. Hence, this builds a connection between Bayesian learning and the kernel machines. We derive the theoretical framework, demonstrate how our approach works on the L0-norm SVM as a typical example, and perform a series of experiments to validate its advantages. Experimental results on nine benchmark data sets are very encouraging. The implemented L0-norm is competitive with or even better than the standard L2-norm SVM in terms of accuracy, but with a reduced number of support vectors (a 9.46% reduction on average). When compared with another sparse model, the relevance vector machine, our proposed algorithm also demonstrates better sparse properties with a training speed over seven times faster.

  19. Three learning phases for radial-basis-function networks.

    PubMed

    Schwenker, F; Kestler, H A; Palm, G

    2001-05-01

    In this paper, learning algorithms for radial basis function (RBF) networks are discussed. Whereas multilayer perceptrons (MLP) are typically trained with backpropagation algorithms, starting the training procedure with a random initialization of the MLP's parameters, an RBF network may be trained in many different ways. We categorize these RBF training methods into one-, two-, and three-phase learning schemes. Two-phase RBF learning is a very common learning scheme. The two layers of an RBF network are learnt separately; first the RBF layer is trained, including the adaptation of centers and scaling parameters, and then the weights of the output layer are adapted. RBF centers may be trained by clustering, vector quantization and classification tree algorithms, and the output layer by supervised learning (through gradient descent or a pseudo-inverse solution). Results from numerical experiments of RBF classifiers trained by two-phase learning are presented in three completely different pattern recognition applications: (a) the classification of 3D visual objects; (b) the recognition of hand-written digits (2D objects); and (c) the categorization of high-resolution electrocardiograms given as a time series (1D objects) and as a set of features extracted from these time series. In these applications, it can be observed that the performance of RBF classifiers trained with two-phase learning can be improved through a third backpropagation-like training phase of the RBF network, adapting the whole set of parameters (RBF centers, scaling parameters, and output layer weights) simultaneously. This, we call three-phase learning in RBF networks. A practical advantage of two- and three-phase learning in RBF networks is the possibility to use unlabeled training data for the first training phase. Support vector (SV) learning in RBF networks is a different learning approach. SV learning can be considered, in this context of learning, as a special type of one-phase learning, where only the output layer weights of the RBF network are calculated, and the RBF centers are restricted to be a subset of the training data. Numerical experiments with several classifier schemes including k-nearest-neighbor, learning vector quantization and RBF classifiers trained through two-phase, three-phase and support vector learning are given. The performance of the RBF classifiers trained through SV learning and three-phase learning are superior to the results of two-phase learning, but SV learning often leads to complex network structures, since the number of support vectors is not a small fraction of the total number of data points.
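
    A compact sketch of two-phase RBF training as characterized above: phase one fixes the centers (here via k-means) and a single global width, and phase two solves for the output-layer weights with the pseudo-inverse. The width heuristic, cluster count and random data are illustrative; the third, backpropagation-like phase that tunes all parameters jointly is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

def rbf_design(X, centers, width):
    """Gaussian RBF design matrix for inputs X given fixed centers and width."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width**2))

rng = np.random.default_rng(0)
X = rng.random((200, 4))
Y = np.eye(3)[rng.integers(0, 3, 200)]             # one-hot targets, 3 classes

# Phase 1: unsupervised placement of centers and a heuristic global width.
centers = KMeans(n_clusters=15, n_init=10).fit(X).cluster_centers_
width = np.mean(np.linalg.norm(X[:, None] - centers[None], axis=2))

# Phase 2: output-layer weights via the pseudo-inverse solution.
Phi = rbf_design(X, centers, width)
W = np.linalg.pinv(Phi) @ Y

pred = np.argmax(Phi @ W, axis=1)
print("training accuracy:", (pred == Y.argmax(axis=1)).mean())
```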

  20. Farm Management Support on Cloud Computing Platform: A System for Cropland Monitoring Using Multi-Source Remotely Sensed Data

    NASA Astrophysics Data System (ADS)

    Coburn, C. A.; Qin, Y.; Zhang, J.; Staenz, K.

    2015-12-01

    Food security is one of the most pressing issues facing humankind. Recent estimates predict that over one billion people don't have enough food to meet their basic nutritional needs. The ability of remote sensing tools to monitor and model crop production and predict crop yield is essential for providing governments and farmers with vital information to ensure food security. Google Earth Engine (GEE) is a cloud computing platform, which integrates storage and processing algorithms for massive remotely sensed imagery and vector data sets. By providing the capabilities of storing and analyzing the data sets, it provides an ideal platform for the development of advanced analytic tools for extracting key variables used in regional and national food security systems. With the high performance computing and storing capabilities of GEE, a cloud-computing based system for near real-time crop land monitoring was developed using multi-source remotely sensed data over large areas. The system is able to process and visualize the MODIS time series NDVI profile in conjunction with Landsat 8 image segmentation for crop monitoring. With multi-temporal Landsat 8 imagery, the crop fields are extracted using the image segmentation algorithm developed by Baatz et al.[1]. The MODIS time series NDVI data are modeled by TIMESAT [2], a software package developed for analyzing time series of satellite data. The seasonality of MODIS time series data, for example, the start date of the growing season, length of growing season, and NDVI peak at a field-level are obtained for evaluating the crop-growth conditions. The system fuses MODIS time series NDVI data and Landsat 8 imagery to provide information of near real-time crop-growth conditions through the visualization of MODIS NDVI time series and comparison of multi-year NDVI profiles. Stakeholders, i.e., farmers and government officers, are able to obtain crop-growth information at crop-field level online. This unique utilization of GEE in combination with advanced analytic and extraction techniques provides a vital remote sensing tool for decision makers and scientists with a high-degree of flexibility to adapt to different uses.

  1. Continuing Environmental Health Education: A Course for Environmental Health Personnel.

    ERIC Educational Resources Information Center

    Mill, Raymond A.; Walter, William G.

    1979-01-01

    This lesson is the third of a series of six lessons on general environmental health. The series of multiple choice tests covers administration, food sanitation, vector control, housing, radiation, accident prevention, water supplies, waste disposal, air pollution, noise pollution, occupational health, recreation facilities, and water pollution.…

  2. A vectorized Poisson solver over a spherical shell and its application to the quasi-geostrophic omega-equation

    NASA Technical Reports Server (NTRS)

    Mullenmeister, Paul

    1988-01-01

    The quasi-geostrophic omega-equation in flux form is developed as an example of a Poisson problem over a spherical shell. Solutions of this equation are obtained by applying a two-parameter Chebyshev solver in vector layout for CDC 200 series computers. The performance of this vectorized algorithm greatly exceeds the performance of its scalar analog. The algorithm generates solutions of the omega-equation which are compared with the omega fields calculated with the aid of the mass continuity equation.

  3. Hybrid approach of selecting hyperparameters of support vector machine for regression.

    PubMed

    Jeng, Jin-Tsong

    2006-06-01

    To select the hyperparameters of the support vector machine for regression (SVR), a hybrid approach is proposed to determine the kernel parameter of the Gaussian kernel function and the epsilon value of Vapnik's epsilon-insensitive loss function. The proposed hybrid approach includes a competitive agglomeration (CA) clustering algorithm and a repeated SVR (RSVR) approach. Since the CA clustering algorithm is used to find the nearly "optimal" number of clusters and the centers of clusters in the clustering process, the CA clustering algorithm is applied to select the Gaussian kernel parameter. Additionally, an RSVR approach that relies on the standard deviation of a training error is proposed to obtain an epsilon in the loss function. Finally, two functions, one real data set (i.e., a time series of quarterly unemployment rate for West Germany) and an identification of nonlinear plant are used to verify the usefulness of the hybrid approach.
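
    The repeated-SVR idea sketched below re-estimates epsilon from the standard deviation of the training error and refits until it stabilizes; this is an illustration under stated assumptions, with a fixed kernel width (the paper selects the kernel parameter via a competitive agglomeration clustering step that is not reproduced here) and an assumed epsilon-update rule.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.random((150, 1)) * 4.0
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(150)   # toy regression target

eps = 0.5                                   # initial epsilon guess
for _ in range(5):                          # repeated SVR loop
    model = SVR(kernel="rbf", gamma=1.0, C=10.0, epsilon=eps)
    model.fit(X, y)
    residual_std = np.std(y - model.predict(X))
    new_eps = residual_std                  # illustrative rule: eps <- std of training error
    if abs(new_eps - eps) < 1e-3:
        break
    eps = new_eps
print("selected epsilon:", round(eps, 4))
```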

  4. Multifractal detrended cross-correlations between crude oil market and Chinese ten sector stock markets

    NASA Astrophysics Data System (ADS)

    Yang, Liansheng; Zhu, Yingming; Wang, Yudong; Wang, Yiqi

    2016-11-01

    Based on the daily price data of spot prices of West Texas Intermediate (WTI) crude oil and ten CSI300 sector indices in China, we apply the multifractal detrended cross-correlation analysis (MF-DCCA) method to investigate the cross-correlations between crude oil and Chinese sector stock markets. We find that the strength of multifractality between WTI crude oil and the energy sector stock market is the highest, followed by the strength of multifractality between WTI crude oil and the financial sector market, which reflects a close connection between the energy and financial markets. We then perform vector autoregression (VAR) analysis to capture the interdependencies among the multiple time series. By comparing the strength of multifractality for the original data and for the residual errors of the VAR model, we conclude that the VAR model cannot adequately describe the dynamics of the cross-correlations between WTI crude oil and the ten sector stock markets.
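
    The VAR step can be sketched with statsmodels (an assumption; the paper does not name its software): a vector autoregression is fit to the oil and sector series, and its residuals are the quantities whose multifractality is compared against that of the original data. The series below are random placeholders and the lag settings are illustrative.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
data = pd.DataFrame({                          # placeholder daily return series
    "wti": rng.standard_normal(500),
    "energy_sector": rng.standard_normal(500),
    "financial_sector": rng.standard_normal(500),
})

results = VAR(data).fit(maxlags=5, ic="aic")   # lag order chosen by AIC
residuals = results.resid                      # fed back into the MF-DCCA comparison
print("selected lag order:", results.k_ar)
```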

  5. Customer demand prediction of service-oriented manufacturing using the least square support vector machine optimized by particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Cao, Jin; Jiang, Zhibin; Wang, Kangzhou

    2017-07-01

Many nonlinear customer satisfaction-related factors significantly influence the future customer demand for service-oriented manufacturing (SOM). To address this issue and enhance the prediction accuracy, this article develops a novel customer demand prediction approach for SOM. The approach combines the phase space reconstruction (PSR) technique with the optimized least square support vector machine (LSSVM). First, the prediction sample space is reconstructed by the PSR to enrich the time-series dynamics of the limited data sample. Then, the generalization and learning ability of the LSSVM are improved by the hybrid polynomial and radial basis function kernel. Finally, the key parameters of the LSSVM are optimized by the particle swarm optimization algorithm. In a real case study, the customer demand prediction of an air conditioner compressor is implemented. Furthermore, the effectiveness and validity of the proposed approach are demonstrated by comparison with other classical prediction approaches.
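
    The phase space reconstruction (PSR) step is essentially a time-delay embedding; a minimal sketch, with the embedding dimension, delay and demand series chosen purely for illustration (the LSSVM and PSO stages are not shown), is given below.

    ```python
    import numpy as np

    def phase_space_reconstruction(x, dim=3, tau=1):
        """Time-delay embedding of a scalar series x into vectors of length
        `dim` with delay `tau`; a generic sketch of the PSR step."""
        n = len(x) - (dim - 1) * tau
        return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

    # Hypothetical monthly demand series.
    demand = np.sin(np.linspace(0, 20, 120)) + 0.05 * np.random.randn(120)
    X = phase_space_reconstruction(demand, dim=4, tau=2)
    y = demand[(4 - 1) * 2 + 1:]      # one-step-ahead targets
    X = X[: len(y)]                   # align embedding vectors with targets
    ```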

  6. Packaging of Human Chromosome 19-Specific Adeno-Associated Virus (AAV) Integration Sites in AAV Virions during AAV Wild-Type and Recombinant AAV Vector Production

    PubMed Central

    Hüser, Daniela; Weger, Stefan; Heilbronn, Regine

    2003-01-01

    Adeno-associated virus type 2 (AAV-2) establishes latency by site-specific integration into a unique locus on human chromosome 19, called AAVS1. During the development of a sensitive real-time PCR assay for site-specific integration, AAV-AAVS1 junctions were reproducibly detected in highly purified AAV wild-type and recombinant AAV vector stocks. A series of controls documented that the junctions were packaged in AAV capsids and were newly generated during a single round of AAV production. Cloned junctions displayed variable AAV sequences fused to AAVS1. These data suggest that packaged junctions represent footprints of AAV integration during productive infection. Apparently, AAV latency established by site-specific integration and the helper virus-dependent, productive AAV cycle are more closely related than previously thought. PMID:12663794

  7. Process for structural geologic analysis of topography and point data

    DOEpatents

    Eliason, Jay R.; Eliason, Valerie L. C.

    1987-01-01

A quantitative method of geologic structural analysis of digital terrain data is described for implementation on a computer. Assuming selected valley segments are controlled by the underlying geologic structure, topographic lows in the terrain data, defining valley bottoms, are detected, filtered and accumulated into a series of line segments defining contiguous valleys. The line segments are then vectorized to produce vector segments, defining valley segments, which may be indicative of the underlying geologic structure. Coplanar analysis is performed on vector segment pairs to determine which vectors produce planes that represent the underlying geologic structure. Point data such as fracture phenomena which can be related to fracture planes in 3-dimensional space can be analyzed to define common plane orientations and locations. The vectors, points, and planes are displayed in various formats for interpretation.
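
    A hedged sketch of one form the coplanar analysis could take, assuming each valley segment is reduced to a point and a direction in 3-D; the tolerance and the example segments are illustrative assumptions, not the patented procedure.

    ```python
    import numpy as np

    def plane_from_segment_pair(p1, d1, p2, d2, tol=1e-2):
        """If two valley segments (point p, direction d) are nearly coplanar,
        return the unit normal of their shared plane, otherwise None."""
        d1, d2 = np.asarray(d1, float), np.asarray(d2, float)
        w = np.asarray(p2, float) - np.asarray(p1, float)
        normal = np.cross(d1, d2)
        if np.linalg.norm(normal) < 1e-9:
            return None                  # parallel segments: no unique plane
        # Normalized scalar triple product tests whether the segments are skew.
        triple = abs(np.dot(normal, w)) / (np.linalg.norm(normal) * np.linalg.norm(w) + 1e-12)
        if triple > tol:
            return None                  # skew segments, not coplanar
        return normal / np.linalg.norm(normal)

    # Hypothetical valley segments for illustration.
    n = plane_from_segment_pair([0, 0, 100], [1, 1, -0.1], [50, 0, 95], [1, -1, -0.1])
    ```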

  8. PGA/MOEAD: a preference-guided evolutionary algorithm for multi-objective decision-making problems with interval-valued fuzzy preferences

    NASA Astrophysics Data System (ADS)

    Luo, Bin; Lin, Lin; Zhong, ShiSheng

    2018-02-01

In this research, we propose a preference-guided optimisation algorithm for multi-criteria decision-making (MCDM) problems with interval-valued fuzzy preferences. First, the interval-valued fuzzy preferences are decomposed into a series of precise and evenly distributed preference-vectors (reference directions) for the objectives to be optimised, on the basis of a uniform design strategy. Then the preference information is further incorporated into the preference-vectors using the boundary intersection approach; meanwhile, the MCDM problem with interval-valued fuzzy preferences is reformulated into a series of single-objective optimisation sub-problems (each sub-problem corresponds to a decomposed preference-vector). Finally, a preference-guided optimisation algorithm based on MOEA/D (multi-objective evolutionary algorithm based on decomposition) is proposed to solve the sub-problems in a single run. The proposed algorithm incorporates the preference-vectors within the optimisation process to guide the search towards a more promising subset of the efficient solutions matching the interval-valued fuzzy preferences. A large set of test instances and an engineering application are employed to validate the performance of the proposed algorithm, and the results demonstrate its effectiveness and feasibility.
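
    A small, hedged sketch of the decomposition idea: evenly distributed preference (weight) vectors generated with a standard simplex-lattice construction (standing in for the paper's uniform design strategy), and a penalty-boundary-intersection scalarization defining one sub-problem; the number of objectives, divisions and penalty value are assumptions for illustration.

    ```python
    import itertools
    import numpy as np

    def uniform_weight_vectors(n_obj, divisions):
        """Evenly distributed weight vectors on the simplex (simplex-lattice
        construction, a stand-in for the uniform design strategy)."""
        vectors = []
        for c in itertools.combinations(range(divisions + n_obj - 1), n_obj - 1):
            w, prev = [], -1
            for pos in c:
                w.append(pos - prev - 1)
                prev = pos
            w.append(divisions + n_obj - 2 - prev)
            vectors.append(np.array(w) / divisions)
        return np.array(vectors)

    def pbi(f, w, z_star, theta=5.0):
        """Penalty-boundary-intersection scalarization of objective vector f
        for the sub-problem with reference direction w and ideal point z_star."""
        d1 = np.dot(f - z_star, w) / np.linalg.norm(w)
        d2 = np.linalg.norm(f - z_star - d1 * w / np.linalg.norm(w))
        return d1 + theta * d2

    W = uniform_weight_vectors(n_obj=3, divisions=12)   # 91 preference vectors
    g = pbi(np.array([0.4, 0.3, 0.6]), W[10], z_star=np.zeros(3))
    ```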

  9. A Modular Toolset for Recombination Transgenesis and Neurogenetic Analysis of Drosophila

    PubMed Central

    Wang, Ji-Wu; Beck, Erin S.; McCabe, Brian D.

    2012-01-01

    Transgenic Drosophila have contributed extensively to our understanding of nervous system development, physiology and behavior in addition to being valuable models of human neurological disease. Here, we have generated a novel series of modular transgenic vectors designed to optimize and accelerate the production and analysis of transgenes in Drosophila. We constructed a novel vector backbone, pBID, that allows both phiC31 targeted transgene integration and incorporates insulator sequences to ensure specific and uniform transgene expression. Upon this framework, we have built a series of constructs that are either backwards compatible with existing restriction enzyme based vectors or utilize Gateway recombination technology for high-throughput cloning. These vectors allow for endogenous promoter or Gal4 targeted expression of transgenic proteins with or without fluorescent protein or epitope tags. In addition, we have generated constructs that facilitate transgenic splice isoform specific RNA inhibition of gene expression. We demonstrate the utility of these constructs to analyze proteins involved in nervous system development, physiology and neurodegenerative disease. We expect that these reagents will facilitate the proficiency and sophistication of Drosophila genetic analysis in both the nervous system and other tissues. PMID:22848718

  10. Identification and classification of transient pulses observed in magnetometer array data by time-domain principal component analysis filtering

    NASA Astrophysics Data System (ADS)

    Kappler, Karl N.; Schneider, Daniel D.; MacLean, Laura S.; Bleier, Thomas E.

    2017-08-01

A method for identification of pulsations in time series of magnetic field data which are simultaneously present in multiple channels of data at one or more sensor locations is described. Candidate pulsations of interest are first identified in geomagnetic time series by inspection. Time series of these "training events" are represented in matrix form and transpose-multiplied to generate time-domain covariance matrices. The ranked eigenvectors of this matrix are stored as a feature of the pulsation. In the second stage of the algorithm, a sliding window (approximately the width of the training event) is moved across the vector-valued time series comprising the channels on which the training event was observed. At each window position, the data covariance matrix and associated eigenvectors are calculated. We compare the orientation of the dominant eigenvectors of the training data to those from the windowed data and flag windows where the dominant eigenvector directions are similar. This approach was successful in automatically identifying pulses that share polarization and appear to be from the same source process. We apply the method to a case study of continuously sampled (50 Hz) data from six observatories, each equipped with three-component induction coil magnetometers. We examine a 90-day interval of data associated with a cluster of four observatories located within 50 km of Napa, California, together with two remote reference stations, one 100 km to the north of the cluster and the other 350 km to the south. When the training data contain signals present in the remote reference observatories, we are reliably able to identify and extract global geomagnetic signals such as solar-generated noise. When the training data contain pulsations observed only in the cluster of local observatories, we identify several types of non-plane wave signals having similar polarization.
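
    A hedged sketch of the detection stage, assuming unit-norm eigenvectors from NumPy and an illustrative similarity threshold; the training event, channel count and flagging rule are stand-ins, not the authors' processing chain.

    ```python
    import numpy as np

    def dominant_eigenvector(window):
        """Leading eigenvector of the time-domain covariance of a
        (samples x channels) data window."""
        cov = np.cov(window, rowvar=False)
        vals, vecs = np.linalg.eigh(cov)
        return vecs[:, -1]               # eigenvector of the largest eigenvalue

    def flag_similar_windows(data, template, threshold=0.95):
        """Slide a window the length of the training event over multichannel
        data and flag positions where the dominant eigenvector is nearly
        parallel to that of the training event (threshold is an assumption)."""
        v_train = dominant_eigenvector(template)
        width = len(template)
        flags = []
        for start in range(0, len(data) - width + 1):
            v = dominant_eigenvector(data[start:start + width])
            if abs(np.dot(v, v_train)) >= threshold:   # sign-insensitive test
                flags.append(start)
        return flags

    # Hypothetical three-channel series and polarized training event.
    rng = np.random.default_rng(2)
    series = rng.standard_normal((5000, 3))
    event = rng.standard_normal((200, 1)) * np.array([1.0, 0.5, -0.2]) \
            + 0.1 * rng.standard_normal((200, 3))
    hits = flag_similar_windows(series, event)
    ```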

  11. The Helioseismic and Magnetic Imager (HMI) Vector Magnetic Field Pipeline: Overview and Performance

    NASA Astrophysics Data System (ADS)

    Hoeksema, J. Todd; Liu, Yang; Hayashi, Keiji; Sun, Xudong; Schou, Jesper; Couvidat, Sebastien; Norton, Aimee; Bobra, Monica; Centeno, Rebecca; Leka, K. D.; Barnes, Graham; Turmon, Michael

    2014-09-01

    The Helioseismic and Magnetic Imager (HMI) began near-continuous full-disk solar measurements on 1 May 2010 from the Solar Dynamics Observatory (SDO). An automated processing pipeline keeps pace with observations to produce observable quantities, including the photospheric vector magnetic field, from sequences of filtergrams. The basic vector-field frame list cadence is 135 seconds, but to reduce noise the filtergrams are combined to derive data products every 720 seconds. The primary 720 s observables were released in mid-2010, including Stokes polarization parameters measured at six wavelengths, as well as intensity, Doppler velocity, and the line-of-sight magnetic field. More advanced products, including the full vector magnetic field, are now available. Automatically identified HMI Active Region Patches (HARPs) track the location and shape of magnetic regions throughout their lifetime. The vector field is computed using the Very Fast Inversion of the Stokes Vector (VFISV) code optimized for the HMI pipeline; the remaining 180∘ azimuth ambiguity is resolved with the Minimum Energy (ME0) code. The Milne-Eddington inversion is performed on all full-disk HMI observations. The disambiguation, until recently run only on HARP regions, is now implemented for the full disk. Vector and scalar quantities in the patches are used to derive active region indices potentially useful for forecasting; the data maps and indices are collected in the SHARP data series, hmi.sharp_720s. Definitive SHARP processing is completed only after the region rotates off the visible disk; quick-look products are produced in near real time. Patches are provided in both CCD and heliographic coordinates. HMI provides continuous coverage of the vector field, but has modest spatial, spectral, and temporal resolution. Coupled with limitations of the analysis and interpretation techniques, effects of the orbital velocity, and instrument performance, the resulting measurements have a certain dynamic range and sensitivity and are subject to systematic errors and uncertainties that are characterized in this report.

  12. Shortened acquisition protocols for the quantitative assessment of the 2-tissue-compartment model using dynamic PET/CT 18F-FDG studies.

    PubMed

    Strauss, Ludwig G; Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia

    2011-03-01

    (18)F-FDG kinetics are quantified by a 2-tissue-compartment model. The routine use of dynamic PET is limited because of this modality's 1-h acquisition time. We evaluated shortened acquisition protocols up to 0-30 min regarding the accuracy for data analysis with the 2-tissue-compartment model. Full dynamic series for 0-60 min were analyzed using a 2-tissue-compartment model. The time-activity curves and the resulting parameters for the model were stored in a database. Shortened acquisition data were generated from the database using the following time intervals: 0-10, 0-16, 0-20, 0-25, and 0-30 min. Furthermore, the impact of adding a 60-min uptake value to the dynamic series was evaluated. The datasets were analyzed using dedicated software to predict the results of the full dynamic series. The software is based on a modified support vector machines (SVM) algorithm and predicts the compartment parameters of the full dynamic series. The SVM-based software provides user-independent results and was accurate at predicting the compartment parameters of the full dynamic series. If a squared correlation coefficient of 0.8 (corresponding to 80% explained variance of the data) was used as a limit, a shortened acquisition of 0-16 min was accurate at predicting the 60-min 2-tissue-compartment parameters. If a limit of 0.9 (90% explained variance) was used, a dynamic series of at least 0-20 min together with the 60-min uptake values is required. Shortened acquisition protocols can be used to predict the parameters of the 2-tissue-compartment model. Either a dynamic PET series of 0-16 min or a combination of a dynamic PET/CT series of 0-20 min and a 60-min uptake value is accurate for analysis with a 2-tissue-compartment model.

  13. PDT - PARTICLE DISPLACEMENT TRACKING SOFTWARE

    NASA Technical Reports Server (NTRS)

    Wernet, M. P.

    1994-01-01

    Particle Imaging Velocimetry (PIV) is a quantitative velocity measurement technique for measuring instantaneous planar cross sections of a flow field. The technique offers very high precision (1%) directionally resolved velocity vector estimates, but its use has been limited by high equipment costs and complexity of operation. Particle Displacement Tracking (PDT) is an all-electronic PIV data acquisition and reduction procedure which is simple, fast, and easily implemented. The procedure uses a low power, continuous wave laser and a Charged Coupled Device (CCD) camera to electronically record the particle images. A frame grabber board in a PC is used for data acquisition and reduction processing. PDT eliminates the need for photographic processing, system costs are moderately low, and reduced data are available within seconds of acquisition. The technique results in velocity estimate accuracies on the order of 5%. The software is fully menu-driven from the acquisition to the reduction and analysis of the data. Options are available to acquire a single image or 5- or 25-field series of images separated in time by multiples of 1/60 second. The user may process each image, specifying its boundaries to remove unwanted glare from the periphery and adjusting its background level to clearly resolve the particle images. Data reduction routines determine the particle image centroids and create time history files. PDT then identifies the velocity vectors which describe the particle movement in the flow field. Graphical data analysis routines are included which allow the user to graph the time history files and display the velocity vector maps, interpolated velocity vector grids, iso-velocity vector contours, and flow streamlines. The PDT data processing software is written in FORTRAN 77 and the data acquisition routine is written in C-Language for 80386-based IBM PC compatibles running MS-DOS v3.0 or higher. Machine requirements include 4 MB RAM (3 MB Extended), a single or multiple frequency RGB monitor (EGA or better), a math co-processor, and a pointing device. The printers supported by the graphical analysis routines are the HP Laserjet+, Series II, and Series III with at least 1.5 MB memory. The data acquisition routines require the EPIX 4-MEG video board and optional 12.5MHz oscillator, and associated EPIX software. Data can be acquired from any CCD or RS-170 compatible video camera with pixel resolution of 600hX400v or better. PDT is distributed on one 5.25 inch 360K MS-DOS format diskette. Due to the use of required proprietary software, executable code is not provided on the distribution media. Compiling the source code requires the Microsoft C v5.1 compiler, Microsoft QuickC v2.0, the Microsoft Mouse Library, EPIX Image Processing Libraries, the Microway NDP-Fortran-386 v2.1 compiler, and the Media Cybernetics HALO Professional Graphics Kernal System. Due to the complexities of the machine requirements, COSMIC strongly recommends the purchase and review of the documentation prior to the purchase of the program. The source code, and sample input and output files are provided in PKZIP format; the PKUNZIP utility is included. PDT was developed in 1990. All trade names used are the property of their respective corporate owners.

  14. Quantitative structure-retention relationship models for the prediction of the reversed-phase HPLC gradient retention based on the heuristic method and support vector machine.

    PubMed

    Du, Hongying; Wang, Jie; Yao, Xiaojun; Hu, Zhide

    2009-01-01

The heuristic method (HM) and support vector machine (SVM) were used to construct quantitative structure-retention relationship models from a series of compounds to predict the gradient retention times of reversed-phase high-performance liquid chromatography (HPLC) in three different columns. The aims of this investigation were to predict the retention times of multifarious compounds, to find the main properties of the three columns, and to indicate the theory of the separation procedures. In our method, we correlated the retention times of many diverse structural analytes in three columns (Symmetry C18, Chromolith, and SG-MIX) with their representative molecular descriptors, calculated from the molecular structures alone. HM was used to select the most important molecular descriptors and build linear regression models. Furthermore, non-linear regression models were built using the SVM method; the performance of the SVM models was better than that of the HM models, and the prediction results were in good agreement with the experimental values. This paper could give some insights into the factors that are likely to govern the gradient retention process of the three investigated HPLC columns, which could theoretically guide practical experiments.

  15. A series of vectors to construct lacZ fusions for the study of gene expression in Schizosaccharomyces pombe.

    PubMed

    Lafuente, M J; Petit, T; Gancedo, C

    1997-12-22

We have constructed a series of plasmids to facilitate the fusion of promoters with or without coding regions of genes of Schizosaccharomyces pombe to the lacZ gene of Escherichia coli. These vectors carry a multiple cloning region in which fission yeast DNA may be inserted in three different reading frames with respect to the coding region of lacZ. The plasmids were constructed with the ura4+ or the his3+ marker of S. pombe. Functionality of the plasmids was tested by measuring in parallel the expression of fructose 1,6-bisphosphatase and beta-galactosidase under the control of the fbp1+ promoter under different conditions.

  16. A SEASAT SASS simulation experiment to quantify the errors related to a + or - 3 hour intermittent assimilation technique

    NASA Technical Reports Server (NTRS)

    Sylvester, W. B.

    1984-01-01

A series of SEASAT repeat orbits over a sequence of best low center positions is simulated by using the Seatrak satellite calculator. These low centers are, upon appropriate interpolation to hourly positions, located at various times during the + or - 3 hour assimilation cycle. Error analysis for a sample of best cyclone center positions taken from the Atlantic and Pacific oceans reveals a minimum average error of 1.1 deg of longitude and a standard deviation of 0.9 deg of longitude. The magnitude of the average error seems to suggest that by utilizing the + or - 3 hour window in the assimilation cycle, the quality of the SASS data is degraded to the level of the background. A further consequence of this assimilation scheme is the effect which is manifested as a result of the blending of two or more juxtaposed vector winds, generally possessing different properties (vector quantity and time). The outcome of this is to reduce gradients in the wind field and to deform isobaric and frontal patterns of the initial field.

  17. Environmental noise forecasting based on support vector machine

    NASA Astrophysics Data System (ADS)

    Fu, Yumei; Zan, Xinwu; Chen, Tianyi; Xiang, Shihan

    2018-01-01

Noise is an important pollution source and a long-standing research focus. In recent years, noise pollution has become increasingly harmful to human health and the living environment, so research on noise pollution has attracted considerable attention. Noise monitoring technologies and monitoring systems are applied in environmental noise testing, measurement and evaluation, but research on environmental noise forecasting remains limited. In this paper, a real-time environmental noise monitoring system is introduced briefly. This monitoring system operates in Mianyang City, Sichuan Province, and monitors and collects environmental noise data from more than 20 enterprises in the district. Based on this large amount of noise data, noise forecasting by the Support Vector Machine (SVM) is studied in detail. Compared with the time series forecasting model and the artificial neural network forecasting model, the SVM forecasting model has advantages such as a smaller required data size and higher precision and stability. The noise forecasting results based on the SVM can provide an important and accurate reference for the prevention and control of environmental noise.

  18. SSAW: A new sequence similarity analysis method based on the stationary discrete wavelet transform.

    PubMed

    Lin, Jie; Wei, Jing; Adjeroh, Donald; Jiang, Bing-Hua; Jiang, Yue

    2018-05-02

Alignment-free sequence similarity analysis methods often lead to significant savings in computational time over alignment-based counterparts. A new alignment-free sequence similarity analysis method, called SSAW, is proposed. SSAW stands for Sequence Similarity Analysis using the Stationary Discrete Wavelet Transform (SDWT). It extracts k-mers from a sequence, then maps each k-mer to a complex number field. The resulting series of complex numbers is then transformed into feature vectors using the stationary discrete wavelet transform. After these steps, the original sequence is turned into a feature vector with numeric values, which can then be used for clustering and/or classification. Using two different types of applications, namely, clustering and classification, we compared SSAW against state-of-the-art alignment-free sequence analysis methods. SSAW demonstrates competitive or superior performance in terms of standard indicators, such as accuracy, F-score, precision, and recall. The running time was significantly better in most cases. These results make SSAW a suitable method for sequence analysis, especially given the rapidly increasing volumes of sequence data required by most modern applications.

  19. Resonances in a Chaotic Attractor Crisis of the Lorenz Flow

    NASA Astrophysics Data System (ADS)

    Tantet, Alexis; Lucarini, Valerio; Dijkstra, Henk A.

    2018-02-01

Local bifurcations of stationary points and limit cycles have successfully been characterized in terms of the critical exponents of these solutions. Lyapunov exponents and their associated covariant Lyapunov vectors have been proposed as tools for supporting the understanding of critical transitions in chaotic dynamical systems. However, it is in general not clear how the statistical properties of dynamical systems change across a boundary crisis during which a chaotic attractor collides with a saddle. This behavior is investigated here for a boundary crisis in the Lorenz flow, for which neither the Lyapunov exponents nor the covariant Lyapunov vectors provide a criterion for the crisis. Instead, the convergence of the time evolution of probability densities to the invariant measure, governed by the semigroup of transfer operators, is expected to slow down at the approach of the crisis. Such convergence is described by the eigenvalues of the generator of this semigroup, which can be divided into two families, referred to as the stable and unstable Ruelle-Pollicott resonances, respectively. The former describes the convergence of densities to the attractor (or escape from a repeller) and is estimated from many short time series sampling the state space. The latter is responsible for the decay of correlations, or mixing, and can be estimated from a long time series, invoking ergodicity. It is found numerically for the Lorenz flow that the stable resonances do approach the imaginary axis during the crisis, as is indicative of the loss of global stability of the attractor. On the other hand, the unstable resonances, and a fortiori the decay of correlations, do not flag the proximity of the crisis, thus questioning the usual design of early warning indicators of boundary crises of chaotic attractors and the applicability of response theory close to such crises.

  20. Resonances in a Chaotic Attractor Crisis of the Lorenz Flow

    NASA Astrophysics Data System (ADS)

    Tantet, Alexis; Lucarini, Valerio; Dijkstra, Henk A.

    2017-12-01

Local bifurcations of stationary points and limit cycles have successfully been characterized in terms of the critical exponents of these solutions. Lyapunov exponents and their associated covariant Lyapunov vectors have been proposed as tools for supporting the understanding of critical transitions in chaotic dynamical systems. However, it is in general not clear how the statistical properties of dynamical systems change across a boundary crisis during which a chaotic attractor collides with a saddle. This behavior is investigated here for a boundary crisis in the Lorenz flow, for which neither the Lyapunov exponents nor the covariant Lyapunov vectors provide a criterion for the crisis. Instead, the convergence of the time evolution of probability densities to the invariant measure, governed by the semigroup of transfer operators, is expected to slow down at the approach of the crisis. Such convergence is described by the eigenvalues of the generator of this semigroup, which can be divided into two families, referred to as the stable and unstable Ruelle-Pollicott resonances, respectively. The former describes the convergence of densities to the attractor (or escape from a repeller) and is estimated from many short time series sampling the state space. The latter is responsible for the decay of correlations, or mixing, and can be estimated from a long time series, invoking ergodicity. It is found numerically for the Lorenz flow that the stable resonances do approach the imaginary axis during the crisis, as is indicative of the loss of global stability of the attractor. On the other hand, the unstable resonances, and a fortiori the decay of correlations, do not flag the proximity of the crisis, thus questioning the usual design of early warning indicators of boundary crises of chaotic attractors and the applicability of response theory close to such crises.

  1. Using Google Earth to Explore Multiple Data Sets and Plate Tectonic Concepts

    NASA Astrophysics Data System (ADS)

    Goodell, L. P.

    2015-12-01

Google Earth (GE) offers an engaging and dynamic environment for exploration of earth science data. While GIS software offers higher-level analytical capability, it comes with a steep learning curve and complex interface that is not easy for the novice, and in many cases the instructor, to negotiate. In contrast, the intuitive interface of GE makes it easy for students to quickly become proficient in manipulating the globe and independently exploring relationships between multiple data sets at a wide range of scales. Inquiry-based, data-rich exercises have been developed for both introductory and upper-level activities including: exploration of plate boundary characteristics and relative motion across plate boundaries; determination and comparison of short-term and long-term average plate velocities; crustal strain analysis (modeled after the UNAVCO activity); and determining earthquake epicenters, body-wave magnitudes, and focal plane solutions. Used successfully in undergraduate course settings, for TA training and for professional development programs for middle and high school teachers, the exercises use the following GE data sets (with sources) that have been collected/compiled by the author and are freely available for non-commercial use: 1) tectonic plate boundaries and plate names (Bird, 2003 model); 2) real-time earthquakes (USGS); 3) 30 years of M>=5.0 earthquakes, plotted by depth (USGS); 4) seafloor age (Mueller et al., 1997, 2008); 5) location and age data for hot spot tracks (published literature); 6) Holocene volcanoes (Smithsonian Global Volcanism Program); 7) GPS station locations with links to time series (JPL, NASA, UNAVCO); 8) short-term motion vectors derived from GPS time series; 9) long-term average motion vectors derived from plate motion models (UNAVCO plate motion calculator); 10) earthquake data sets consisting of seismic station locations and links to relevant seismograms (Rapid Earthquake Viewer, USC/IRIS/DLESE).

  2. Discrimination of Anopheles species of the Arribalzagia series in Colombia using a multilocus approach.

    PubMed

    Álvarez, Natalí; Gómez, Giovan F; Naranjo-Díaz, Nelson; Correa, Margarita M

    2018-06-18

    The Arribalzagia Series of the Anopheles Subgenus comprises morphologically similar species or members of species complexes which makes correct species identification difficult. Therefore, the aim of this work was to discriminate the morphospecies of the Arribalzagia Series present in Colombia using a multilocus approach based on ITS2, COI and CAD sequences. Specimens of the Arribalzagia Series collected at 32 localities in nine departments were allocated to seven species. Individual and concatenated Bayesian analyses showed high support for each of the species and reinforced the previous report of the Apicimacula species Complex with distribution in the Pacific Coast and northwestern Colombia. In addition, a new molecular operational taxonomic unit-MOTU was identified, herein denominated near Anopheles peryassui, providing support for the existence of a Peryassui species Complex. Further, the CAD gene, just recently used for Anopheles taxonomy and phylogeny, demonstrated its power in resolving phylogenetic relationships among species of the Arribalzagia Series. The divergence times for these species correspond to the early Pliocene and the Miocene. Considering the epidemiological importance of some species of the Series and their co-occurrence in malaria endemic regions of Colombia, their discrimination constitutes an important step for vector incrimination and control in the country. Copyright © 2018. Published by Elsevier B.V.

  3. A comparison of performance of several artificial intelligence methods for forecasting monthly discharge time series

    NASA Astrophysics Data System (ADS)

    Wang, Wen-Chuan; Chau, Kwok-Wing; Cheng, Chun-Tian; Qiu, Lin

    2009-08-01

Developing a hydrological forecasting model based on past records is crucial to effective hydropower reservoir management and scheduling. Traditionally, time series analysis and modeling are used for building mathematical models to generate hydrologic records in hydrology and water resources. Artificial intelligence (AI), as a branch of computer science, is capable of analyzing long-series and large-scale hydrological data, and in recent years applying AI technology to hydrological forecasting modeling has become a prominent research topic. In this paper, autoregressive moving-average (ARMA) models, artificial neural network (ANN) approaches, adaptive neural-based fuzzy inference system (ANFIS) techniques, genetic programming (GP) models and the support vector machine (SVM) method are examined using long-term observations of monthly river flow discharges. Four quantitative standard statistical performance evaluation measures, the coefficient of correlation (R), the Nash-Sutcliffe efficiency coefficient (E), the root mean squared error (RMSE) and the mean absolute percentage error (MAPE), are employed to evaluate the performances of the various models developed. Two case study river sites are also provided to illustrate their respective performances. The results indicate that the best performance can be obtained by ANFIS, GP and SVM, in terms of different evaluation criteria, during the training and validation phases.
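
    The four evaluation measures can be written compactly; the sketch below assumes observed and simulated discharge arrays of equal length and strictly positive observations (required for MAPE).

    ```python
    import numpy as np

    def evaluation_metrics(obs, sim):
        """Correlation coefficient (R), Nash-Sutcliffe efficiency (E), RMSE
        and MAPE for observed and simulated discharge series."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        r = np.corrcoef(obs, sim)[0, 1]
        e = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
        rmse = np.sqrt(np.mean((obs - sim) ** 2))
        mape = 100.0 * np.mean(np.abs((obs - sim) / obs))  # obs must be nonzero
        return r, e, rmse, mape
    ```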

  4. Ecological Momentary Assessments and Automated Time Series Analysis to Promote Tailored Health Care: A Proof-of-Principle Study.

    PubMed

    van der Krieke, Lian; Emerencia, Ando C; Bos, Elisabeth H; Rosmalen, Judith Gm; Riese, Harriëtte; Aiello, Marco; Sytema, Sjoerd; de Jonge, Peter

    2015-08-07

Health promotion can be tailored by combining ecological momentary assessments (EMA) with time series analysis. This combined method allows for studying the temporal order of dynamic relationships among variables, which may provide concrete indications for intervention. However, application of this method in health care practice is hampered because analyses are conducted manually and advanced statistical expertise is required. This study aims to show how this limitation can be overcome by introducing automated vector autoregressive modeling (VAR) of EMA data and to evaluate its feasibility through comparisons with results of previously published manual analyses. We developed a Web-based open source application, called AutoVAR, which automates time series analyses of EMA data and provides output that is intended to be interpretable by nonexperts. The statistical technique we used was VAR. AutoVAR tests and evaluates all possible VAR models within a given combinatorial search space and summarizes their results, thereby replacing the researcher's tasks of conducting the analysis, making an informed selection of models, and choosing the best model. We compared the output of AutoVAR to the output of a previously published manual analysis (n=4). An illustrative example consisting of 4 analyses was provided. Compared to the manual output, the AutoVAR output presents similar model characteristics and statistical results in terms of the Akaike information criterion, the Bayesian information criterion, and the test statistic of the Granger causality test. Results suggest that automated analysis and interpretation of time series is feasible. Compared to a manual procedure, the automated procedure is more robust and can save days of time. These findings may pave the way for using time series analysis for health promotion on a larger scale. AutoVAR was evaluated using the results of a previously conducted manual analysis. Analysis of additional datasets is needed in order to validate and refine the application for general use.
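
    The automated model-selection idea can be illustrated with statsmodels (this is a stand-in sketch, not the AutoVAR application); the EMA variables, lag search space and selection by AIC are assumptions for illustration.

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.api import VAR

    # Hypothetical EMA diary data: two momentary-assessment variables.
    rng = np.random.default_rng(3)
    ema = pd.DataFrame({
        "positive_affect": rng.standard_normal(90),
        "physical_activity": rng.standard_normal(90),
    })

    # Fit VAR models over a small lag search space, keep the one with the
    # lowest AIC, and report BIC and a Granger causality test, mirroring the
    # quantities compared in the study.
    candidates = {p: VAR(ema).fit(p) for p in range(1, 6)}
    best_lag, best = min(candidates.items(), key=lambda kv: kv[1].aic)
    granger = best.test_causality("positive_affect", ["physical_activity"], kind="f")
    print(best_lag, best.aic, best.bic, granger.pvalue)
    ```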

  5. Ecological Momentary Assessments and Automated Time Series Analysis to Promote Tailored Health Care: A Proof-of-Principle Study

    PubMed Central

    Emerencia, Ando C; Bos, Elisabeth H; Rosmalen, Judith GM; Riese, Harriëtte; Aiello, Marco; Sytema, Sjoerd; de Jonge, Peter

    2015-01-01

Background Health promotion can be tailored by combining ecological momentary assessments (EMA) with time series analysis. This combined method allows for studying the temporal order of dynamic relationships among variables, which may provide concrete indications for intervention. However, application of this method in health care practice is hampered because analyses are conducted manually and advanced statistical expertise is required. Objective This study aims to show how this limitation can be overcome by introducing automated vector autoregressive modeling (VAR) of EMA data and to evaluate its feasibility through comparisons with results of previously published manual analyses. Methods We developed a Web-based open source application, called AutoVAR, which automates time series analyses of EMA data and provides output that is intended to be interpretable by nonexperts. The statistical technique we used was VAR. AutoVAR tests and evaluates all possible VAR models within a given combinatorial search space and summarizes their results, thereby replacing the researcher’s tasks of conducting the analysis, making an informed selection of models, and choosing the best model. We compared the output of AutoVAR to the output of a previously published manual analysis (n=4). Results An illustrative example consisting of 4 analyses was provided. Compared to the manual output, the AutoVAR output presents similar model characteristics and statistical results in terms of the Akaike information criterion, the Bayesian information criterion, and the test statistic of the Granger causality test. Conclusions Results suggest that automated analysis and interpretation of time series is feasible. Compared to a manual procedure, the automated procedure is more robust and can save days of time. These findings may pave the way for using time series analysis for health promotion on a larger scale. AutoVAR was evaluated using the results of a previously conducted manual analysis. Analysis of additional datasets is needed in order to validate and refine the application for general use. PMID:26254160

  6. Conjunction of wavelet transform and SOM-mutual information data pre-processing approach for AI-based Multi-Station nitrate modeling of watersheds

    NASA Astrophysics Data System (ADS)

    Nourani, Vahid; Andalib, Gholamreza; Dąbrowska, Dominika

    2017-05-01

Accurate nitrate load predictions can improve water quality management decisions for watersheds, which affect the environment and drinking water. In this paper, two scenarios were considered for Multi-Station (MS) nitrate load modeling of the Little River watershed. In the first scenario, Markovian characteristics of the streamflow-nitrate time series were proposed for the MS modeling. For this purpose, the feature extraction criterion of Mutual Information (MI) was employed for input selection of the artificial intelligence models (Feed Forward Neural Network, FFNN, and least square support vector machine). In the second scenario, to consider seasonality-based characteristics of the time series, wavelet transform was used to extract multi-scale features of the streamflow-nitrate time series of the watershed's sub-basins to model MS nitrate loads. The Self-Organizing Map (SOM) clustering technique, which finds homogeneous sub-series clusters, was also linked to MI to choose proper cluster agents to be imposed on the models for predicting the nitrate loads of the watershed's sub-basins. The proposed MS method not only predicts the outlet nitrate load but also covers predictions of interior sub-basin nitrate load values. The results indicated that the proposed FFNN model coupled with SOM-MI improved the performance of MS nitrate predictions compared to the Markovian-based models by up to 39%. Overall, accurate selection of dominant inputs that consider seasonality-based characteristics of the streamflow-nitrate process could enhance the efficiency of nitrate load predictions.
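
    The MI-based input selection step might be sketched as follows, using scikit-learn's mutual_info_regression as a stand-in scorer; the candidate inputs, their names and the number retained are assumptions for illustration.

    ```python
    import numpy as np
    from sklearn.feature_selection import mutual_info_regression

    def select_inputs_by_mi(candidates, target, names, top_k=4):
        """Score each candidate input (e.g. lagged streamflow/nitrate series
        or wavelet sub-series) against the target nitrate load and keep the
        top scorers."""
        scores = mutual_info_regression(candidates, target, random_state=0)
        order = np.argsort(scores)[::-1][:top_k]
        return [names[i] for i in order], scores[order]

    # Hypothetical lagged inputs and nitrate target.
    rng = np.random.default_rng(4)
    X = rng.standard_normal((500, 6))
    y = 0.8 * X[:, 0] - 0.5 * X[:, 3] + 0.1 * rng.standard_normal(500)
    names = [f"candidate_{i}" for i in range(6)]
    selected, mi = select_inputs_by_mi(X, y, names)
    ```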

  7. AI-based (ANN and SVM) statistical downscaling methods for precipitation estimation under climate change scenarios

    NASA Astrophysics Data System (ADS)

    Mehrvand, Masoud; Baghanam, Aida Hosseini; Razzaghzadeh, Zahra; Nourani, Vahid

    2017-04-01

Statistical downscaling methods are the most widely used models in hydrologic impact studies under climate change scenarios; among them, nonlinear regression models known as Artificial Intelligence (AI)-based models, such as the Artificial Neural Network (ANN) and Support Vector Machine (SVM), have been used to spatially downscale the precipitation outputs of Global Climate Models (GCMs). The study was carried out using GCM and station data over GCM grid points located around the Peace-Tampa Bay watershed weather stations. Before downscaling with the AI-based models, correlation coefficients were computed between a few selected large-scale predictor variables and the local-scale predictands to select the most effective predictors. The selected predictors were then assessed considering the grid location for the site in question. To increase the accuracy of the AI-based downscaling models, pre-processing was applied to the precipitation time series. The precipitation data derived from various GCMs were analyzed thoroughly to find the highest correlation coefficient between GCM-based historical data and station precipitation data. Both GCM and station precipitation time series were assessed by comparing means and variances over specific intervals. Results indicated that there is a similar trend between GCM and station precipitation data; however, the station data form a non-stationary time series while the GCM data do not. Finally, the AI-based downscaling models were applied to several GCMs with the selected predictors, targeting the local precipitation time series as predictand. The results of this last step were used to produce multiple ensembles of downscaled AI-based models.

  8. Quantification of shoreline change along Hatteras Island, North Carolina: Oregon Inlet to Cape Hatteras, 1978-2002, and associated vector shoreline data

    USGS Publications Warehouse

    Hapke, Cheryl J.; Henderson, Rachel E.

    2015-01-01

Shoreline change spanning twenty-four years was assessed along the coastline of Cape Hatteras National Seashore, at Hatteras Island, North Carolina. The shorelines used in the analysis were generated from georeferenced historical aerial imagery and are used to develop shoreline change rates for Hatteras Island, from Oregon Inlet to Cape Hatteras. A total of 14 dates of aerial photographs ranging from 1978 through 2002 were obtained from the U.S. Army Corps of Engineers Field Research Facility in Duck, North Carolina, and scanned to generate digital imagery. The digital imagery was georeferenced and high water line shorelines (interpreted from the wet/dry line) were digitized from each date to produce a time series of shorelines for the study area. Rates of shoreline change were calculated for three periods: the full span of the time series, 1978 through 2002, and two approximately decadal subsets, 1978–89 and 1989–2002.

  9. The string prediction models as invariants of time series in the forex market

    NASA Astrophysics Data System (ADS)

    Pincak, R.

    2013-12-01

In this paper we apply a new approach of string theory to the real financial market. The models are constructed around the idea of prediction models based on string invariants (PMBSI). The performance of PMBSI is compared to support vector machines (SVM) and artificial neural networks (ANN) on an artificial and a financial time series. A brief overview of the results and analysis is given. The first model is based on the correlation function as the invariant, and the second one is an application based on the deviations from the closed string/pattern form (PMBCS). We found a clear difference between these two approaches: the first model cannot predict the behavior of the forex market with good efficiency, whereas the second one can and is, in addition, able to make a relevant profit per year. The presented string models could be useful for portfolio creation and financial risk management in the banking sector as well as for a nonlinear statistical approach to data optimization.

  10. Cardiorespiratory and cardiovascular interactions in cardiomyopathy patients using joint symbolic dynamic analysis.

    PubMed

    Giraldo, Beatriz F; Rodriguez, Javier; Caminal, Pere; Bayes-Genis, Antonio; Voss, Andreas

    2015-01-01

Cardiovascular diseases are the leading cause of death in developed countries. Using electrocardiographic (ECG), blood pressure (BP) and respiratory flow signals, we obtained parameters for classifying cardiomyopathy patients. Forty-two patients with ischemic (ICM) and dilated (DCM) cardiomyopathies were studied. The left ventricular ejection fraction (LVEF) was used to stratify patients with low risk (LR: LVEF>35%, 14 patients) and high risk (HR: LVEF≤35%, 28 patients) of heart attack. RR, SBP and TTot time series were extracted from the ECG, BP and respiratory flow signals, respectively. The time series were transformed to a binary space and then analyzed using Joint Symbolic Dynamics with a word length of three, characterizing them by the probability of occurrence of the words. Extracted parameters were then reduced using correlation and statistical analysis. Principal component analysis and support vector machine methods were applied to characterize the cardiorespiratory and cardiovascular interactions in ICM and DCM cardiomyopathies, obtaining an accuracy of 85.7%.
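
    A hedged sketch of the symbolization step for one pair of series, assuming binarization by the sign of successive increments (the abstract does not spell out its exact mapping) and words of length three; the synthetic RR and SBP series are for illustration only.

    ```python
    import numpy as np

    def word_distribution(x, y, word_len=3):
        """Joint symbolic dynamics sketch for one series pair: binarize both
        series by the sign of their increments, group consecutive symbols
        into words of length `word_len`, and return the probability of
        occurrence of each joint word."""
        sx = (np.diff(x) > 0).astype(int)
        sy = (np.diff(y) > 0).astype(int)
        n = min(len(sx), len(sy)) - word_len + 1
        counts = {}
        for i in range(n):
            word = (tuple(sx[i:i + word_len]), tuple(sy[i:i + word_len]))
            counts[word] = counts.get(word, 0) + 1
        total = sum(counts.values())
        return {w: c / total for w, c in counts.items()}

    # Hypothetical RR-interval and systolic blood pressure series.
    rng = np.random.default_rng(5)
    rr = 800 + 30 * rng.standard_normal(300)
    sbp = 120 + 10 * rng.standard_normal(300)
    probs = word_distribution(rr, sbp)     # up to 8 x 8 = 64 joint words
    ```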

  11. Applications and Comparisons of Four Time Series Models in Epidemiological Surveillance Data

    PubMed Central

    Young, Alistair A.; Li, Xiaosong

    2014-01-01

Public health surveillance systems provide valuable data for reliable prediction of future epidemic events. This paper describes a study that used nine types of infectious disease data collected through a national public health surveillance system in mainland China to evaluate and compare the performances of four time series methods, namely, two decomposition methods (regression and exponential smoothing), autoregressive integrated moving average (ARIMA) and support vector machine (SVM). The data obtained from 2005 to 2011 and in 2012 were used as modeling and forecasting samples, respectively. The performances were evaluated based on three metrics: mean absolute error (MAE), mean absolute percentage error (MAPE), and mean square error (MSE). The accuracy of the statistical models in forecasting future epidemic disease proved their effectiveness in epidemiological surveillance. Although the comparisons found that no single method is completely superior to the others, the present study indeed highlighted that the SVM outperforms the ARIMA model and the decomposition methods in most cases.

  12. Multineuronal vectorization is more efficient than time-segmental vectorization for information extraction from neuronal activities in the inferior temporal cortex.

    PubMed

    Kaneko, Hidekazu; Tamura, Hiroshi; Tate, Shunta; Kawashima, Takahiro; Suzuki, Shinya S; Fujita, Ichiro

    2010-08-01

    In order for patients with disabilities to control assistive devices with their own neural activity, multineuronal spike trains must be efficiently decoded because only limited computational resources can be used to generate prosthetic control signals in portable real-time applications. In this study, we compare the abilities of two vectorizing procedures (multineuronal and time-segmental) to extract information from spike trains during the same total neuron-seconds. In the multineuronal vectorizing procedure, we defined a response vector whose components represented the spike counts of one to five neurons. In the time-segmental vectorizing procedure, a response vector consisted of components representing a neuron's spike counts for one to five time-segment(s) of a response period of 1 s. Spike trains were recorded from neurons in the inferior temporal cortex of monkeys presented with visual stimuli. We examined whether the amount of information of the visual stimuli carried by these neurons differed between the two vectorizing procedures. The amount of information calculated with the multineuronal vectorizing procedure, but not the time-segmental vectorizing procedure, significantly increased with the dimensions of the response vector. We conclude that the multineuronal vectorizing procedure is superior to the time-segmental vectorizing procedure in efficiently extracting information from neuronal signals. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
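
    The two vectorizing procedures can be sketched directly; the spike-count matrix, spike-time lists and segment count below are illustrative assumptions, not the recorded data.

    ```python
    import numpy as np

    def multineuronal_vectors(counts_per_trial):
        """Each response vector holds the 1-s spike counts of several neurons
        recorded on the same trial (neurons are the vector components)."""
        return np.asarray(counts_per_trial, float)     # shape: trials x neurons

    def time_segmental_vectors(spike_times_per_trial, n_segments=5, duration=1.0):
        """Each response vector holds one neuron's spike counts in successive
        segments of the 1-s response period (segments are the components).
        Spike times are assumed to be in seconds from stimulus onset."""
        edges = np.linspace(0.0, duration, n_segments + 1)
        return np.array([np.histogram(t, bins=edges)[0]
                         for t in spike_times_per_trial], float)

    # Hypothetical data: 3 trials x 5 neurons, and 3 trials of one neuron's spikes.
    multi = multineuronal_vectors([[4, 7, 1, 0, 3], [5, 6, 2, 1, 2], [3, 8, 0, 0, 4]])
    seg = time_segmental_vectors([np.sort(np.random.rand(8)) for _ in range(3)])
    ```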

  13. Support vector machine for day ahead electricity price forecasting

    NASA Astrophysics Data System (ADS)

    Razak, Intan Azmira binti Wan Abdul; Abidin, Izham bin Zainal; Siah, Yap Keem; Rahman, Titik Khawa binti Abdul; Lada, M. Y.; Ramani, Anis Niza binti; Nasir, M. N. M.; Ahmad, Arfah binti

    2015-05-01

Electricity price forecasting has become an important part of power system operation and planning. In a pool-based electric energy market, producers submit selling bids consisting of energy blocks and their corresponding minimum selling prices to the market operator, while consumers submit buying bids consisting of energy blocks and their corresponding maximum buying prices. Hence, both producers and consumers use day-ahead price forecasts to derive their respective bidding strategies for the electricity market and to reduce the cost of electricity. However, forecasting electricity prices is a complex task because the price series is non-stationary and highly volatile. Many factors cause price spikes, such as volatility in load and fuel prices as well as power imports to and exports from outside the market through long-term contracts. This paper introduces a machine learning approach for day-ahead electricity price forecasting with the Least Square Support Vector Machine (LS-SVM). Previous-day data of the Hourly Ontario Electricity Price (HOEP), generation prices and demand from the Ontario power market are used as inputs for the training data. The simulation is carried out using LSSVMlab in Matlab with training and testing data from 2004. SVM, which is widely used for classification and regression, has great generalization ability based on the structural risk minimization principle rather than empirical risk minimization. Moreover, the same parameter settings in a trained SVM give the same results, which greatly reduces the simulation effort compared to other techniques such as neural networks and time series models. The mean absolute percentage error (MAPE) for the proposed model shows that the SVM performs well compared to a neural network.

  14. System Dynamics based Dengue modeling environment to simulate evolution of Dengue infection under different climate scenarios

    NASA Astrophysics Data System (ADS)

    Anwar, R.; Khan, R.; Usmani, M.; Colwell, R. R.; Jutla, A.

    2017-12-01

Vector borne infectious diseases such as Dengue, Zika and Chikungunya remain a public health threat. An estimate of the World Health Organization (WHO) suggests that about 2.5 billion people, representing ca. 40% of the human population, are at increased risk of dengue, with more than 100 million infection cases every year. Vector-borne infections cannot be eradicated since disease-causing pathogens survive in the environment. Over the last few decades dengue infection has been reported in more than 100 countries and is expanding geographically. The female Ae. aegypti mosquito, the daytime-active major vector for dengue virus, is associated with urban population density and regional climatic processes. However, mathematical quantification of relationships between the abundance of vectors and climatic processes remains a challenge, particularly in regions where such data are not routinely collected. Here, using a system dynamics based feedback mechanism, an algorithm integrating knowledge from entomological, meteorological and epidemiological processes is developed that has the potential to provide ensemble simulations of the risk of occurrence of dengue infection in human populations. Using datasets from satellite remote sensing, the algorithm was calibrated and validated with actual dengue case data from Iquitos, Peru. We will show results on model capabilities in capturing initiation and peaks in the observed time series. In addition, results from several simulation scenarios under different climatic conditions will be discussed.

  15. Adaptive developmental delay in Chagas disease vectors: an evolutionary ecology approach.

    PubMed

    Menu, Frédéric; Ginoux, Marine; Rajon, Etienne; Lazzari, Claudio R; Rabinovich, Jorge E

    2010-05-25

The developmental time of vector insects is important in population dynamics, evolutionary biology, epidemiology and in their responses to global climatic change. In the triatomines (Triatominae, Reduviidae), vectors of Chagas disease, evolutionary ecology concepts, which may allow for a better understanding of their biology, have not been applied. Despite delays in molting observed in some triatomine individuals, no effort was made to explain this variability. We applied four methods: (1) an e-mail survey sent to 30 researchers with experience in triatomines, (2) a statistical description of the developmental time of eleven triatomine species, (3) a relationship between development time pattern and climatic inter-annual variability, (4) a mathematical optimization model of the evolution of developmental delay (diapause). 85.6% of responses reported prolonged developmental times in 5th instar nymphs, with 20 species identified with remarkable developmental delays. The developmental time analysis showed some degree of bi-modality in the development time of the 5th instars in nine out of eleven species, but no trend between development time pattern and climatic inter-annual variability was observed. Our optimization model predicts that the developmental delays could be due to an adaptive risk-spreading diapause strategy, only if survival throughout the diapause period and the probability of random occurrence of "bad" environmental conditions are sufficiently high. Developmental delay may not be a simple non-adaptive phenotypic plasticity in development time, and could be a form of adaptive diapause associated with a physiological mechanism related to the postponement of the initiation of reproduction, as an adaptation to environmental stochasticity through a spreading of risk (bet-hedging) strategy. We identify a series of parameters that can be measured in the field and laboratory to test this hypothesis. The importance of these findings is discussed in terms of global climatic change and epidemiological consequences.

  16. A Software Package for Neural Network Applications Development

    NASA Technical Reports Server (NTRS)

    Baran, Robert H.

    1993-01-01

Original Backprop (Version 1.2) is an MS-DOS package of four stand-alone C-language programs that enable users to develop neural network solutions to a variety of practical problems. Original Backprop generates three-layer, feed-forward (series-coupled) networks which map fixed-length input vectors into fixed-length output vectors through an intermediate (hidden) layer of binary threshold units. Version 1.2 can handle up to 200 input vectors at a time, each having up to 128 real-valued components. The first subprogram, TSET, appends a number (up to 16) of classification bits to each input, thus creating a training set of input-output pairs. The second subprogram, BACKPROP, creates a trilayer network to do the prescribed mapping and modifies the weights of its connections incrementally until the training set is learned. The learning algorithm is the 'back-propagating error correction' procedure first described by F. Rosenblatt in 1961. The third subprogram, VIEWNET, lets the trained network be examined, tested, and 'pruned' (by the deletion of unnecessary hidden units). The fourth subprogram, DONET, makes a TSR routine by which the finished product of the neural net design-and-training exercise can be consulted under other MS-DOS applications.
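
    A minimal NumPy sketch of a three-layer, series-coupled network trained by back-propagation of error is given below; it is a generic stand-in, not the Original Backprop code (which uses binary threshold hidden units and incremental updates), and the XOR-style training set, learning rate and hidden-layer size are assumptions for illustration.

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def add_bias(A):
        return np.hstack([A, np.ones((A.shape[0], 1))])

    def train_backprop(X, T, n_hidden=8, lr=0.5, epochs=5000, seed=0):
        """Three-layer feed-forward network trained by back-propagation;
        sigmoid units and batch updates replace the package's binary
        threshold units and incremental updates."""
        rng = np.random.default_rng(seed)
        Xb = add_bias(X)
        W1 = rng.normal(0, 0.5, (Xb.shape[1], n_hidden))
        W2 = rng.normal(0, 0.5, (n_hidden + 1, T.shape[1]))
        for _ in range(epochs):
            H = sigmoid(Xb @ W1)              # hidden-layer activations
            Hb = add_bias(H)
            Y = sigmoid(Hb @ W2)              # output-layer activations
            dY = (Y - T) * Y * (1 - Y)        # output error term
            dHb = (dY @ W2.T) * Hb * (1 - Hb) # back-propagated hidden error
            dH = dHb[:, :-1]                  # drop the bias column
            W2 -= lr * Hb.T @ dY
            W1 -= lr * Xb.T @ dH
        return W1, W2

    # Classic XOR-style training set: inputs with one classification bit.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
    T = np.array([[0], [1], [1], [0]], float)
    W1, W2 = train_backprop(X, T)
    ```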

  17. The magnetic tides of Honolulu

    USGS Publications Warehouse

    Love, Jeffrey J.; Rigler, Erin Joshua

    2013-01-01

We review the phenomenon of time-stationary, periodic quiet-time geomagnetic tides. These are generated by the ionospheric and oceanic dynamos, and, to a lesser extent, by the quiet-time magnetosphere, and they are affected by currents induced in the Earth's electrically conducting interior. We examine historical time series of hourly magnetic-vector measurements made at the Honolulu observatory. We construct high-resolution, frequency-domain Lomb-periodogram and maximum-entropy power spectra that reveal a panorama of stationary harmonics across periods from 0.1 to 10000.0 d, including harmonics that result from amplitude and phase modulation. We identify solar-diurnal tides and their annual and solar-cycle sideband modulations, lunar semi-diurnal tides and their solar-diurnal sidebands, and tides due to precession of lunar eccentricity and nodes. We provide evidence that a method intended for separating the ionospheric and oceanic dynamo signals by midnight subsampling of observatory data time series is prone to frequency-domain aliasing. The tidal signals we summarize in this review can be used to test our fundamental understanding of the dynamics of the quiet-time ionosphere and magnetosphere, and of induction in the ocean and in the electrically conducting interior of the Earth, and they are useful for defining a quiet-time baseline against which magnetospheric-storm intensity is measured.
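
    A hedged sketch of the Lomb-periodogram step, using SciPy on a synthetic, gappy hourly series with a 24-h tide; the sampling pattern, amplitudes and period grid are assumptions for illustration, not the Honolulu data.

    ```python
    import numpy as np
    from scipy.signal import lombscargle

    # Synthetic hourly geomagnetic-like series with gaps: a solar-diurnal
    # (24-h) tide plus noise, sampled at 6000 of the year's 8760 hours.
    rng = np.random.default_rng(6)
    t_hours = np.sort(rng.choice(np.arange(24 * 365, dtype=float),
                                 size=6000, replace=False))
    signal = 10.0 * np.sin(2 * np.pi * t_hours / 24.0) \
             + rng.standard_normal(t_hours.size)

    # Scan periods from a few hours to a few days (Lomb-Scargle expects
    # angular frequencies) and locate the dominant peak.
    periods = np.linspace(3.0, 100.0, 4000)      # hours
    ang_freqs = 2 * np.pi / periods              # rad per hour
    power = lombscargle(t_hours, signal - signal.mean(), ang_freqs,
                        normalize=True)
    best_period = periods[np.argmax(power)]      # ~24 h for this synthetic series
    ```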

  18. Retroviral vectors encoding ADA regulatory locus control region provide enhanced T-cell-specific transgene expression

    PubMed Central

    2009-01-01

    Background Murine retroviral vectors have been used in several hundred gene therapy clinical trials, but have fallen out of favor for a number of reasons. One issue is that gene expression from viral or internal promoters is highly variable and essentially unregulated. Moreover, with retroviral vectors, gene expression is usually silenced over time. Mammalian genes, in contrast, are characterized by highly regulated, precise levels of expression in both a temporal and a cell-specific manner. To ascertain if recapitulation of endogenous adenosine deaminase (ADA) expression can be achieved in a vector construct, we created a new series of Moloney murine leukemia virus (MuLV) based retroviral vectors that carry human regulatory elements, including combinations of the ADA promoter, the ADA locus control region (LCR), ADA introns and human polyadenylation sequences in a self-inactivating vector backbone. Methods A MuLV-based retroviral vector with a self-inactivating (SIN) backbone, the phosphoglycerate kinase promoter (PGK) and the enhanced green fluorescent protein (eGFP), as a reporter gene, was generated. Subsequent vectors were constructed from this basic vector by deletion or addition of certain elements. The added elements that were assessed are the human ADA promoter, human ADA locus control region (LCR), introns 7, 8, and 11 from the human ADA gene, and the human growth hormone polyadenylation signal. Retroviral vector particles were produced by transient three-plasmid transfection of 293T cells. Retroviral vectors encoding eGFP were titered by transducing 293A cells, and then the proportion of GFP-positive cells was determined using fluorescence-activated cell sorting (FACS). Non-T-cell and T-cell lines were transduced at a multiplicity of infection (MOI) of 0.1 and the yield of eGFP transgene expression was evaluated by FACS analysis using mean fluorescent intensity (MFI) detection. Results Vectors that contained the ADA LCR were preferentially expressed in T-cell lines. Further improvements in T-cell-specific gene expression were observed with the incorporation of additional cis-regulatory elements, such as a human polyadenylation signal and intron 7 from the human ADA gene. Conclusion These studies suggest that the combination of an authentically regulated ADA gene in a murine retroviral vector, together with additional locus-specific regulatory refinements, will yield a vector with a safer profile and greater efficacy in terms of high-level, therapeutic, regulated gene expression for the treatment of ADA-deficient severe combined immunodeficiency. PMID:20042112

  19. (New hosts and vectors for genome cloning)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The main goal of our project remains the development of new bacterial hosts and vectors for the stable propagation of human DNA clones in E. coli. During the past six months of our current budget period, we have (1) continued to develop new hosts that permit the stable maintenance of unstable features of human DNA, and (2) developed a series of vectors for (a) cloning large DNA inserts, (b) assessing the frequency of human sequences that are lethal to the growth of E. coli, and (c) assessing the stability of human sequences cloned in M13 for large-scale sequencing projects.

  20. [New hosts and vectors for genome cloning]. Progress report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The main goal of our project remains the development of new bacterial hosts and vectors for the stable propagation of human DNA clones in E. coli. During the past six months of our current budget period, we have (1) continued to develop new hosts that permit the stable maintenance of unstable features of human DNA, and (2) developed a series of vectors for (a) cloning large DNA inserts, (b) assessing the frequency of human sequences that are lethal to the growth of E. coli, and (c) assessing the stability of human sequences cloned in M13 for large-scale sequencing projects.

  1. Calibration of the ER-2 meteorological measurement system

    NASA Technical Reports Server (NTRS)

    Bowen, Stuart W.; Chan, K. Roland; Bui, T. Paul

    1991-01-01

    The Meteorological Measurement System (MMS) on the high altitude ER-2 aircraft was developed specifically for atmospheric research. The MMS provides accurate measurements of pressure, temperature, wind vector, position (longitude, latitude, altitude), pitch, roll, heading, angle of attack, angle of sideslip, true airspeed, aircraft eastward velocity, northward velocity, vertical acceleration, and time, at a sample rate of 5/s. MMS data products are presented in the form of either 5 or 1 Hz time series. The 1 Hz data stream, generally used by ER-2 investigators, is obtained from the 5 Hz data stream by filtering and downsampling. The method of measurement of the meteorological parameters is given and the results of their analyses are discussed.
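
    The exact filter used by the MMS team is not specified in the abstract; the sketch below shows a generic way to produce a 1 Hz stream from a 5 Hz stream by anti-alias filtering and downsampling, using scipy.signal.decimate on synthetic data.

```python
# Generic sketch of producing a 1 Hz stream from a 5 Hz stream by
# low-pass filtering and downsampling (the MMS team's actual filter
# design is not reproduced here).
import numpy as np
from scipy.signal import decimate

fs = 5.0                                    # 5 samples per second
t = np.arange(0, 600, 1 / fs)               # 10 minutes of synthetic data
wind_u = (10 + 0.5 * np.sin(2 * np.pi * t / 120)
          + 0.2 * np.random.default_rng(0).normal(size=t.size))

wind_u_1hz = decimate(wind_u, q=5, ftype="fir", zero_phase=True)
print(wind_u.size, "->", wind_u_1hz.size)   # 3000 -> 600 samples
```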

  2. Generation of cylindrically polarized vector vortex beams with digital micromirror device

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gong, Lei; Liu, Weiwei; Wang, Meng

    We propose a novel technique to directly transform a linearly polarized Gaussian beam into vector-vortex beams with various spatial patterns. Full high-quality control of amplitude and phase is implemented via Digital Micro-mirror Device (DMD) binary holography for generating Laguerre-Gaussian, Bessel-Gaussian, and helical Mathieu–Gaussian modes, while a radial polarization converter (S-waveplate) is employed to effectively convert the optical vortices into cylindrically polarized vortex beams. Additionally, the generated vector-vortex beams maintain their polarization symmetry after arbitrary polarization manipulation. Due to the high frame rates of the DMD, rapid switching among a series of vector modes carrying different orbital angular momenta paves the way for optical microscopy, trapping, and communication.

  3. Principal component analysis to separate deformation signals from multiple sources during a 2015 intrusive sequence at Kīlauea Volcano

    NASA Astrophysics Data System (ADS)

    Johanson, I. A.; Miklius, A.; Poland, M. P.

    2016-12-01

    A sequence of magmatic events in April-May 2015 at Kīlauea Volcano produced a complex deformation pattern that can be described by multiple deforming sources, active simultaneously. The 2015 intrusive sequence began with inflation in the volcano's summit caldera near Halema`uma`u (HMM) Crater, which continued over a few weeks, followed by rapid deflation of the HMM source and inflation of a source in the south caldera region during the next few days. In Kīlauea Volcano's summit area, multiple deformation centers are active at varying times, and all contribute to the overall pattern observed with GPS, tiltmeters, and InSAR. Isolating the contribution of different signals related to each source is a challenge and complicates the determination of optimal source geometry for the underlying magma bodies. We used principal component analysis of continuous GPS time series from the 2015 intrusion sequence to determine three basis vectors which together account for 83% of the variance in the data set. The three basis vectors are non-orthogonal and not strictly the principal components of the data set. In addition to separating deformation sources in the continuous GPS data, the basis vectors provide a means to scale the contribution of each source in a given interferogram. This provides an additional constraint in a joint model of GPS and InSAR data (COSMO-SkyMed and Sentinel-1A) to determine source geometry. The first basis vector corresponds with inflation in the south caldera region, an area long recognized as the location of a long-term storage reservoir. The second vector represents deformation of the HMM source, which is in the same location as a previously modeled shallow reservoir; however, InSAR data suggest a more complicated source. Preliminary modeling of the deformation attributed to the third basis vector shows that it is consistent with inflation of a steeply dipping ellipsoid centered below Keanakāko`i crater, southeast of HMM. Keanakāko`i crater is the locus of a known, intermittently active deformation source, which was not previously recognized to have been active during the 2015 event.
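
    A minimal sketch of the principal component step on a matrix of continuous GPS time series (epochs by channels, an assumed layout; synthetic data) is given below; the study's actual basis vectors were further adjusted and are not reproduced here.

```python
# Sketch: principal component analysis of continuous GPS time series.
# Rows are epochs, columns are station/component channels (assumed
# layout); the deformation "sources" and mixing here are synthetic.
import numpy as np

def pca_time_series(X, n_modes=3):
    """X: (n_epochs, n_channels) displacement time series."""
    Xc = X - X.mean(axis=0)                  # remove per-channel mean
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    variance_frac = s**2 / np.sum(s**2)
    scores = U[:, :n_modes] * s[:n_modes]    # temporal amplitudes
    spatial = Vt[:n_modes]                   # spatial basis vectors
    return scores, spatial, variance_frac[:n_modes]

# toy example: 200 epochs, 12 channels, two buried "sources" plus noise
rng = np.random.default_rng(3)
t = np.linspace(0, 1, 200)
src = np.vstack([np.tanh(10 * (t - 0.5)), np.exp(-((t - 0.7) / 0.1) ** 2)])
mixing = rng.normal(size=(2, 12))
X = src.T @ mixing + 0.1 * rng.normal(size=(200, 12))
scores, spatial, frac = pca_time_series(X)
print("variance explained by first 3 modes:", np.round(frac.sum(), 2))
```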

  4. Attractor reconstruction for non-linear systems: a methodological note

    USGS Publications Warehouse

    Nichols, J.M.; Nichols, J.D.

    2001-01-01

    Attractor reconstruction is an important step in the process of making predictions for non-linear time-series and in the computation of certain invariant quantities used to characterize the dynamics of such series. The utility of computed predictions and invariant quantities is dependent on the accuracy of attractor reconstruction, which in turn is determined by the methods used in the reconstruction process. This paper suggests methods by which the delay and embedding dimension may be selected for a typical delay coordinate reconstruction. A comparison is drawn between the use of the autocorrelation function and mutual information in quantifying the delay. In addition, a false nearest neighbor (FNN) approach is used in minimizing the number of delay vectors needed. Results highlight the need for an accurate reconstruction in the computation of the Lyapunov spectrum and in prediction algorithms.
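
    A small sketch of the delay-coordinate reconstruction itself is given below, with the delay chosen from the decay of the autocorrelation function; the mutual-information and false-nearest-neighbor criteria discussed above are omitted, and the toy series is synthetic.

```python
# Sketch of a delay-coordinate reconstruction: choose the delay from the
# 1/e decay of the autocorrelation function and build delay vectors.
# (Mutual-information and false-nearest-neighbor criteria are omitted.)
import numpy as np

def acf_delay(x, max_lag=200):
    """Return the first lag at which the autocorrelation drops below 1/e."""
    x = x - x.mean()
    acf = np.array([np.dot(x[:-k or None], x[k:]) for k in range(max_lag)])
    acf /= acf[0]
    below = np.where(acf < 1 / np.e)[0]
    return int(below[0]) if below.size else max_lag

def delay_embed(x, dim, tau):
    """Stack delayed copies of x into (n_vectors, dim) delay vectors."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

# toy series: a noisy sine standing in for a measured scalar observable
t = np.linspace(0, 50, 5000)
x = np.sin(t) + 0.01 * np.random.default_rng(0).normal(size=t.size)
tau = acf_delay(x)
vectors = delay_embed(x, dim=3, tau=tau)
print("delay:", tau, "embedded shape:", vectors.shape)
```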

  5. The relationship between the change of magnetic energy and eruption behavior in NOAA AR 11429

    NASA Astrophysics Data System (ADS)

    Wang, R.; Liu, Y. D.

    2013-12-01

    We study the evolution of magnetic energy in active region (AR) NOAA 11429, which produced a series of X/M-class flares and fast coronal mass ejections (CMEs) in March 2012. In particular, this AR spawned double X-class flares (X5.4/X1.3) within a time interval of only 1 hr on March 7, which were associated with wide and fast CMEs with speeds of ~2000 km/s. A nonlinear force-free field extrapolation method is adopted to reconstruct the coronal magnetic field. We apply this method to a time series of 176 high-cadence vector magnetograms of the AR acquired by the Helioseismic and Magnetic Imager on board the Solar Dynamics Observatory (HMI/SDO), which span a time interval of 1.5 days. We investigate the budgets of free magnetic energy and relative magnetic helicity. We find relations between the changes in magnetic energy and flare magnitude. Compared with previous studies, our results indicate that the decrease in magnetic energy occurs before the flare and CME launch times. We will also combine images from the Atmospheric Imaging Assembly (AIA) to further explore the detailed process of the eruptions.

  6. Time series inversion of spectra from ground-based radiometers

    NASA Astrophysics Data System (ADS)

    Christensen, O. M.; Eriksson, P.

    2013-02-01

    Retrieving time series of atmospheric constituents from ground-based spectrometers often requires different temporal averaging depending on the altitude region in focus. This can lead to several datasets existing for one instrument which complicates validation and comparisons between instruments. This paper puts forth a possible solution by incorporating the temporal domain into the maximum a posteriori (MAP) retrieval algorithm. The state vector is increased to include measurements spanning a time period, and the temporal correlations between the true atmospheric states are explicitly specified in the a priori uncertainty matrix. This allows the MAP method to effectively select the best temporal smoothing for each altitude, removing the need for several datasets to cover different altitudes. The method is compared to traditional averaging of spectra using a simulated retrieval of water vapour in the mesosphere. The simulations show that the method offers a significant advantage compared to the traditional method, extending the sensitivity an additional 10 km upwards without reducing the temporal resolution at lower altitudes. The method is also tested on the OSO water vapour microwave radiometer confirming the advantages found in the simulation. Additionally, it is shown how the method can interpolate data in time and provide diagnostic values to evaluate the interpolated data.
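
    The sketch below illustrates the core idea in a generic linear optimal-estimation (MAP) setting: the state vector stacks several time steps and the a priori covariance includes an exponential temporal correlation, so the retrieval smooths in time where the measurements are uninformative. The forward model, grid sizes, and correlation lengths are made up and are not the paper's.

```python
# Generic MAP / optimal-estimation sketch with a stacked-in-time state
# vector: the a priori covariance Sa carries an exponential temporal
# correlation in addition to an altitude correlation. (Forward model K
# and all numbers are invented for illustration.)
import numpy as np

n_alt, n_time = 20, 6                        # altitude grid, time steps
n_state = n_alt * n_time
rng = np.random.default_rng(4)

# a priori covariance: temporal correlation (kron) altitude correlation
alt = np.arange(n_alt)
S_alt = np.exp(-np.abs(alt[:, None] - alt[None, :]) / 3.0)
tt = np.arange(n_time)
S_time = np.exp(-np.abs(tt[:, None] - tt[None, :]) / 2.0)
Sa = np.kron(S_time, S_alt)

K = rng.normal(size=(n_time * 10, n_state))  # toy linear forward model
Se = 0.1 * np.eye(K.shape[0])                # measurement noise covariance
x_true = rng.multivariate_normal(np.zeros(n_state), Sa)
y = K @ x_true + rng.multivariate_normal(np.zeros(K.shape[0]), Se)

# MAP estimate for a linear model with Gaussian prior and noise
G = Sa @ K.T @ np.linalg.inv(K @ Sa @ K.T + Se)
x_hat = G @ y
print("retrieval rms error:", np.round(np.sqrt(np.mean((x_hat - x_true) ** 2)), 3))
```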

  7. Identification of Boolean Network Models From Time Series Data Incorporating Prior Knowledge.

    PubMed

    Leifeld, Thomas; Zhang, Zhihua; Zhang, Ping

    2018-01-01

    Motivation: Mathematical models take an important place in science and engineering. A model can help scientists to explain the dynamic behavior of a system and to understand the functionality of system components. Since the length of a time series and the number of replicates are limited by the cost of experiments, Boolean networks, as a structurally simple and parameter-free logical model for gene regulatory networks, have attracted the interest of many scientists. In order to fit the biological context and to lower the data requirements, biological prior knowledge is taken into consideration during the inference procedure. In the literature, the existing identification approaches can only deal with a subset of possible types of prior knowledge. Results: We propose a new approach to identify Boolean networks from time series data incorporating prior knowledge, such as partial network structure, canalizing property, and positive and negative unateness. Using the vector form of Boolean variables and applying a generalized matrix multiplication called the semi-tensor product (STP), each Boolean function can be equivalently converted into a matrix expression. Based on this, the identification problem is reformulated as an integer linear programming problem to reveal, in a computationally efficient way, the system matrix of a Boolean model whose dynamics are consistent with the important dynamics captured in the data. By using prior knowledge, the number of candidate functions can be reduced during the inference. Hence, identification incorporating prior knowledge is especially suitable for the case of small-size time series data and data without sufficient stimuli. The proposed approach is illustrated with the help of a biological model of the network of oxidative stress response. Conclusions: The combination of an efficient reformulation of the identification problem with the possibility of incorporating various types of prior knowledge enables the application of computational model inference to systems with a limited amount of time series data. The general applicability of this methodological approach makes it suitable for a variety of biological systems and of general interest for biological and medical research.
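
    The vector form referred to above can be sketched as follows: True and False become the unit vectors [1,0]^T and [0,1]^T, and a Boolean function is a structure matrix acting on the semi-tensor product of its arguments, which for these column vectors reduces to a Kronecker product. The integer-linear-programming identification step itself is not reproduced; the AND example below is purely illustrative.

```python
# Sketch of the vector form used above: True -> [1,0]^T, False -> [0,1]^T,
# and a Boolean function f(x1, x2) becomes a 2 x 4 structure matrix Mf with
# f = Mf (x1 stp x2), where the semi-tensor product of these column vectors
# reduces to the Kronecker product. (The ILP identification step is not
# reproduced here.)
import numpy as np

T = np.array([1, 0])          # vector form of True
F = np.array([0, 1])          # vector form of False

# structure matrix of AND: columns correspond to (T,T), (T,F), (F,T), (F,F)
M_and = np.array([[1, 0, 0, 0],
                  [0, 1, 1, 1]])

def apply(Mf, *args):
    v = args[0]
    for a in args[1:]:
        v = np.kron(v, a)     # semi-tensor product for these column vectors
    return Mf @ v

for x1 in (T, F):
    for x2 in (T, F):
        print(x1, x2, "->", apply(M_and, x1, x2))
```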

  8. Real-time tsunami inundation forecasting and damage mapping towards enhancing tsunami disaster resiliency

    NASA Astrophysics Data System (ADS)

    Koshimura, S.; Hino, R.; Ohta, Y.; Kobayashi, H.; Musa, A.; Murashima, Y.

    2014-12-01

    Using modern computing power and advanced sensor networks, a project is underway to establish a new system of real-time tsunami inundation forecasting, damage estimation and mapping to enhance society's resilience in the aftermath of a major tsunami disaster. The system fuses real-time crustal deformation monitoring and fault model estimation by Ohta et al. (2012), high-performance real-time tsunami propagation/inundation modeling with NEC's vector supercomputer SX-ACE, damage/loss estimation models (Koshimura et al., 2013), and geo-informatics. After a major (near-field) earthquake occurs, the first response of the system is to identify the tsunami source model by applying the RAPiD algorithm (Ohta et al., 2012) to observed RTK-GPS time series at GEONET sites in Japan. Based on performance with the data obtained during the 2011 Tohoku event, we assume an acquisition time of less than 10 minutes for the source model. Given the tsunami source, the system runs the tsunami propagation and inundation model, which was optimized on the vector supercomputer SX-ACE, to estimate time series of tsunami height at offshore/coastal tide gauges and to determine tsunami travel and arrival times, the extent of the inundation zone, and the maximum flow depth distribution. The implemented tsunami numerical model is based on the non-linear shallow-water equations discretized by a finite-difference method. The merged bathymetry and topography grids are prepared at 10 m resolution to better estimate tsunami inland penetration. Given the maximum flow depth distribution, the system performs GIS analysis to determine the numbers of exposed people and structures using census data, then estimates the numbers of potential deaths and damaged structures by applying tsunami fragility curves (Koshimura et al., 2013). Once the tsunami source model is determined, the model is expected to complete the estimation within 10 minutes. The results are disseminated as mapping products to responders and stakeholders, e.g. national and regional municipalities, to be utilized in their emergency response activities. In 2014, the system was verified, in terms of capability and robustness, through case studies of the 2011 Tohoku event and potential earthquake scenarios along the Nankai Trough.
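
    For context only, a 1D linear shallow-water solver on a staggered grid is sketched below to illustrate the finite-difference discretization idea; the operational model is nonlinear, two-dimensional, runs on 10 m merged grids, and is optimized for the SX-ACE, none of which is reflected in this toy example.

```python
# Illustration only: a 1D linear shallow-water solver on a staggered grid
# with explicit time stepping. The operational model described above is
# nonlinear, 2D, and runs on far finer merged bathymetry/topography grids.
import numpy as np

g, h0 = 9.81, 4000.0                 # gravity, uniform depth (m)
L, nx = 400e3, 400                   # domain length (m), grid cells
dx = L / nx
dt = 0.5 * dx / np.sqrt(g * h0)      # CFL-limited time step

x = np.arange(nx) * dx
eta = 1.0 * np.exp(-((x - L / 2) / 20e3) ** 2)   # initial sea-surface hump (m)
u = np.zeros(nx + 1)                             # velocity on cell faces

for _ in range(600):
    # momentum: du/dt = -g d(eta)/dx  (interior faces; walls stay at u = 0)
    u[1:-1] -= dt * g * (eta[1:] - eta[:-1]) / dx
    # continuity: d(eta)/dt = -h0 du/dx
    eta -= dt * h0 * (u[1:] - u[:-1]) / dx

print("max surface elevation after propagation:", round(eta.max(), 3), "m")
```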

  9. Investigating flow patterns and related dynamics in multi-instability turbulent plasmas using a three-point cross-phase time delay estimation velocimetry scheme

    NASA Astrophysics Data System (ADS)

    Brandt, C.; Thakur, S. C.; Tynan, G. R.

    2016-04-01

    Complexities of flow patterns in the azimuthal cross-section of a cylindrical magnetized helicon plasma and the corresponding plasma dynamics are investigated by means of a novel scheme for time delay estimation velocimetry. The advantage of this introduced method is the capability of calculating the time-averaged 2D velocity fields of propagating wave-like structures and patterns in complex spatiotemporal data. It is able to distinguish and visualize the details of simultaneously present superimposed entangled dynamics and it can be applied to fluid-like systems exhibiting frequently repeating patterns (e.g., waves in plasmas, waves in fluids, dynamics in planetary atmospheres, etc.). The velocity calculations are based on time delay estimation obtained from cross-phase analysis of time series. Each velocity vector is unambiguously calculated from three time series measured at three different non-collinear spatial points. This method, when applied to fast imaging, has been crucial to understand the rich plasma dynamics in the azimuthal cross-section of a cylindrical linear magnetized helicon plasma. The capabilities and the limitations of this velocimetry method are discussed and demonstrated for two completely different plasma regimes, i.e., for quasi-coherent wave dynamics and for complex broadband wave dynamics involving simultaneously present multiple instabilities.
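
    The core ingredient, estimating a time delay from the cross-phase of two time series, can be sketched as below for a pair of synthetic probe signals; the delay follows from the slope of the cross-spectral phase versus frequency, and a velocity from delay and probe separation. The full scheme combines three non-collinear points into a 2D vector, which is not reproduced here, and all numbers are invented.

```python
# Sketch of cross-phase time delay estimation between two probe signals:
# for a pure delay, the cross-spectral phase is -2*pi*f*tau, so a linear
# fit of phase versus frequency yields tau, and delay plus probe
# separation yields a velocity. (The full three-point 2D scheme and the
# separation value are not from the paper.)
import numpy as np

fs = 1e5                                      # sampling frequency (Hz)
n = 8192
rng = np.random.default_rng(5)
base = rng.normal(size=n)                     # broadband test signal
delay_samples = 23                            # true delay = 23 / fs = 2.3e-4 s
s1 = base + 0.1 * rng.normal(size=n)
s2 = np.roll(base, delay_samples) + 0.1 * rng.normal(size=n)

S1, S2 = np.fft.rfft(s1), np.fft.rfft(s2)
freqs = np.fft.rfftfreq(n, 1 / fs)
cross_phase = np.unwrap(np.angle(S2 * np.conj(S1)))

band = (freqs > 1e3) & (freqs < 1e4)          # fit where coherence is good
slope = np.polyfit(freqs[band], cross_phase[band], 1)[0]
tau = -slope / (2 * np.pi)
separation = 0.01                             # probe separation in m (assumed)
print(f"estimated delay: {tau:.2e} s, velocity: {separation / tau:.1f} m/s")
```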

  10. Investigating flow patterns and related dynamics in multi-instability turbulent plasmas using a three-point cross-phase time delay estimation velocimetry scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brandt, C.; Max-Planck-Institute for Plasma Physics, Wendelsteinstr. 1, D-17491 Greifswald; Thakur, S. C.

    2016-04-15

    Complexities of flow patterns in the azimuthal cross-section of a cylindrical magnetized helicon plasma and the corresponding plasma dynamics are investigated by means of a novel scheme for time delay estimation velocimetry. The advantage of this introduced method is the capability of calculating the time-averaged 2D velocity fields of propagating wave-like structures and patterns in complex spatiotemporal data. It is able to distinguish and visualize the details of simultaneously present superimposed entangled dynamics and it can be applied to fluid-like systems exhibiting frequently repeating patterns (e.g., waves in plasmas, waves in fluids, dynamics in planetary atmospheres, etc.). The velocity calculations are based on time delay estimation obtained from cross-phase analysis of time series. Each velocity vector is unambiguously calculated from three time series measured at three different non-collinear spatial points. This method, when applied to fast imaging, has been crucial to understand the rich plasma dynamics in the azimuthal cross-section of a cylindrical linear magnetized helicon plasma. The capabilities and the limitations of this velocimetry method are discussed and demonstrated for two completely different plasma regimes, i.e., for quasi-coherent wave dynamics and for complex broadband wave dynamics involving simultaneously present multiple instabilities.

  11. Design of 2D time-varying vector fields.

    PubMed

    Chen, Guoning; Kwatra, Vivek; Wei, Li-Yi; Hansen, Charles D; Zhang, Eugene

    2012-10-01

    Design of time-varying vector fields, i.e., vector fields that can change over time, has a wide variety of important applications in computer graphics. Existing vector field design techniques do not address time-varying vector fields. In this paper, we present a framework for the design of time-varying vector fields, both for planar domains as well as manifold surfaces. Our system supports the creation and modification of various time-varying vector fields with desired spatial and temporal characteristics through several design metaphors, including streamlines, pathlines, singularity paths, and bifurcations. These design metaphors are integrated into an element-based design to generate the time-varying vector fields via a sequence of basis field summations or spatial constrained optimizations at the sampled times. The key-frame design and field deformation are also introduced to support other user design scenarios. Accordingly, a spatial-temporal constrained optimization and the time-varying transformation are employed to generate the desired fields for these two design scenarios, respectively. We apply the time-varying vector fields generated using our design system to a number of important computer graphics applications that require controllable dynamic effects, such as evolving surface appearance, dynamic scene design, steerable crowd movement, and painterly animation. Many of these are difficult or impossible to achieve via prior simulation-based methods. In these applications, the time-varying vector fields have been applied as either orientation fields or advection fields to control the instantaneous appearance or evolving trajectories of the dynamic effects.

  12. What is the 'true' effect of Trypanosoma rangeli on its triatomine bug vector?

    PubMed

    Peterson, Jennifer K; Graham, Andrea L

    2016-06-01

    The phrase, "T. rangeli is pathogenic to its insect vector," is commonly found in peer-reviewed publications on the matter, such that it has become the orthodox view of this interaction. In a literature survey, we identified over 20 papers with almost the exact phrase and several others alluding to it. The idea is of particular importance in triatomine population dynamics and the study of vector-borne T. cruzi transmission, as it could mean that triatomines infected with T. rangeli have lower fitness than uninfected insects. Trypanosoma rangeli pathogenicity was first observed in a series of studies carried out over fifty years ago using the triatomine species Rhodnius prolixus. However, there are few studies of the effect of T. rangeli on its other vector species, and several of the studies were carried out with R. prolixus under non-physiological conditions. Here, we re-evaluate the published studies that led to the conclusion that T. rangeli is pathogenic to its vector, to determine whether or not this indeed is the "true" effect of T. rangeli on its triatomine vector. © 2016 The Society for Vector Ecology.

  13. Pattern Recognition Application of Support Vector Machine for Fault Classification of Thyristor Controlled Series Compensated Transmission Lines

    NASA Astrophysics Data System (ADS)

    Yashvantrai Vyas, Bhargav; Maheshwari, Rudra Prakash; Das, Biswarup

    2016-06-01

    Application of series compensation in extra-high-voltage (EHV) transmission lines makes protection difficult for engineers, owing to alterations in system parameters and measurements. The problem is amplified by the inclusion of electronically controlled compensation such as thyristor controlled series compensation (TCSC), as it produces harmonics and rapid changes in system parameters during faults associated with TCSC control. This paper presents a pattern-recognition-based fault-type identification approach using a support vector machine. The scheme uses only half a cycle of post-fault three-phase current data to accomplish the task. Changes in current-signal features during the fault are used as the discriminatory measure. The scheme developed in this paper is tested on a large set of fault data with variations in system and fault parameters. These fault cases have been generated with PSCAD/EMTDC on a 400 kV, 300 km transmission line model. The algorithm's improved accuracy and speed make it well suited to implementation on TCSC-compensated lines.
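
    A generic sketch of the classification step is given below using scikit-learn (an assumed library choice) on synthetic feature vectors standing in for the half-cycle post-fault current features; the class labels and numbers are illustrative, not the paper's PSCAD/EMTDC cases.

```python
# Generic sketch of SVM-based fault-type classification on synthetic
# "current-signal feature" vectors (the paper's features come from
# half-cycle post-fault three-phase currents, not reproduced here).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(7)
n_per_class, n_features = 200, 6
fault_types = ["AG", "BG", "CG", "ABC"]      # illustrative class labels

# synthetic clusters standing in for extracted current features
X = np.vstack([rng.normal(loc=3 * i, scale=1.0, size=(n_per_class, n_features))
               for i in range(len(fault_types))])
y = np.repeat(fault_types, n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print("test accuracy:", round(clf.score(X_te, y_te), 3))
```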

  14. Analysis of the DORIS, GNSS, SLR, VLBI and gravimetric time series at the GGOS core sites

    NASA Astrophysics Data System (ADS)

    Moreaux, G.; Lemoine, F. G.; Luceri, V.; Pavlis, E. C.; MacMillan, D. S.; Bonvalot, S.; Saunier, J.

    2017-12-01

    Since June 2016 and the installation of a new DORIS station in Wettzell (Germany), four geodetic sites (Badary, Greenbelt, Wettzell and Yarragadee) are equipped with the four space geodetic techniques (DORIS, GNSS, SLR and VLBI). In line with the GGOS (Global Geodetic Observing System) objective of achieving a terrestrial reference frame at the millimetric level of accuracy, the combination centers of the four space techniques initiated a joint study to assess the level of agreement among these space geodetic techniques. In addition to the four sites, we will consider all the GGOS core sites including the seven sites with at least two space geodetic techniques in addition to DORIS. Starting from the coordinate time series, we will estimate and compare the mean positions and velocities of the co-located instruments. The temporal evolution of the coordinate differences will also be evaluated with respect to the local tie vectors and discrepancies will be investigated. Then, the analysis of the signal content of the time series will be carried out. Amplitudes and phases of the common signals among the techniques, and eventually from gravity data, will be compared. The first objective of this talk is to describe our joint study: the sites, the data, and the objectives. The second purpose is to present the first results obtained from the GGAO (Goddard Geophysical and Astronomic Observatory) site of Greenbelt.

  15. High-performance ultra-low power VLSI analog processor for data compression

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul (Inventor)

    1996-01-01

    An apparatus for data compression employing a parallel analog processor. The apparatus includes an array of processor cells with N columns and M rows wherein the processor cells have an input device, memory device, and processor device. The input device is used for inputting a series of input vectors. Each input vector is simultaneously input into each column of the array of processor cells in a pre-determined sequential order. An input vector is made up of M components, ones of which are input into ones of M processor cells making up a column of the array. The memory device is used for providing ones of M components of a codebook vector to ones of the processor cells making up a column of the array. A different codebook vector is provided to each of the N columns of the array. The processor device is used for simultaneously comparing the components of each input vector to corresponding components of each codebook vector, and for outputting a signal representative of the closeness between the compared vector components. A combination device is used to combine the signal output from each processor cell in each column of the array and to output a combined signal. A closeness determination device is then used for determining which codebook vector is closest to an input vector from the combined signals, and for outputting a codebook vector index indicating which of the N codebook vectors was the closest to each input vector input into the array.
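
    A software sketch of the operation the analog array parallelizes, comparing each M-component input vector against N codebook vectors and returning the index of the closest one, is given below; the sizes and data are arbitrary.

```python
# Software sketch of the operation the analog array parallelizes: for each
# M-component input vector, compare against N codebook vectors and output
# the index of the closest one (vector quantization).
import numpy as np

def quantize(inputs, codebook):
    """inputs: (num_vectors, M); codebook: (N, M). Returns indices (num_vectors,)."""
    # squared Euclidean distance between every input and every codebook vector
    d2 = ((inputs[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

rng = np.random.default_rng(8)
codebook = rng.normal(size=(16, 8))          # N = 16 codebook vectors, M = 8
inputs = codebook[rng.integers(0, 16, 200)] + 0.05 * rng.normal(size=(200, 8))
indices = quantize(inputs, codebook)
print("first ten codebook indices:", indices[:10])
```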

  16. Predicting Culex pipiens/restuans population dynamics by interval lagged weather data

    PubMed Central

    2013-01-01

    Background Culex pipiens/restuans mosquitoes are important vectors for a variety of arthropod-borne viral infections. In this study, the associations between 20 years of mosquito capture data and the time-lagged environmental quantities daytime length, temperature, precipitation, relative humidity and wind speed were used to generate a predictive model for the population dynamics of this vector species. Methods The mosquito population in the study area was represented by an averaged time series of mosquito counts captured at 6 sites in Cook County (Illinois, USA). Cross-correlation maps (CCMs) were compiled to investigate the association between mosquito abundances and environmental quantities. The results obtained from the CCMs were incorporated into a Poisson regression to generate a predictive model. To optimize the predictive model, the time lags obtained from the CCMs were adjusted using a genetic algorithm. Results CCMs for weekly data showed a highly positive correlation of mosquito abundances with daytime length 4 to 5 weeks prior to capture (quantified by a Spearman rank order correlation of rS = 0.898) and with temperature during 2 weeks prior to capture (rS = 0.870). Maximal negative correlations were found for wind speed averaged over 3 weeks prior to capture (rS = −0.621). Cx. pipiens/restuans population dynamics were predicted by integrating the CCM results into Poisson regression models. They were used to simulate the average seasonal cycle of mosquito abundance. Verification with observations resulted in a correlation of rS = 0.899 for daily and rS = 0.917 for weekly data. Applying the optimized models to the entire 20-year time series also resulted in a suitable fit, with rS = 0.876 for daily and rS = 0.899 for weekly data. Conclusions The study demonstrates the application of interval-lagged weather data to predict mosquito abundances with feasible accuracy, especially when related to weekly Cx. pipiens/restuans populations. PMID:23634763
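
    A hedged sketch of the final modelling step, a Poisson regression of weekly counts on interval-lagged weather predictors, is shown below using statsmodels and pandas (assumed tools) and synthetic data; the lag windows follow the abstract, but the coefficients and data are invented.

```python
# Sketch of the final modelling step: a Poisson regression of weekly
# mosquito counts on interval-lagged weather predictors. The lag windows
# (daytime length 4-5 weeks before capture, temperature over the 2 weeks
# before capture) follow the abstract; the data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(9)
weeks = 20 * 52
daylen = 12 + 3 * np.sin(2 * np.pi * np.arange(weeks) / 52)
temp = 10 + 12 * np.sin(2 * np.pi * (np.arange(weeks) - 4) / 52) + rng.normal(0, 2, weeks)

df = pd.DataFrame({"daylen": daylen, "temp": temp})
df["daylen_lag"] = df["daylen"].shift(4).rolling(2).mean()   # weeks 4-5 before capture
df["temp_lag"] = df["temp"].shift(1).rolling(2).mean()       # 2 weeks before capture
df["counts"] = rng.poisson(np.exp(-4 + 0.25 * df["daylen_lag"].fillna(12)
                                  + 0.08 * df["temp_lag"].fillna(10)))
df = df.dropna()

X = sm.add_constant(df[["daylen_lag", "temp_lag"]])
model = sm.GLM(df["counts"], X, family=sm.families.Poisson()).fit()
print(model.params)
```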

  17. A freestream-preserving fourth-order finite-volume method in mapped coordinates with adaptive-mesh refinement

    DOE PAGES

    Guzik, Stephen M.; Gao, Xinfeng; Owen, Landon D.; ...

    2015-12-20

    We present a fourth-order accurate finite-volume method for solving time-dependent hyperbolic systems of conservation laws on mapped grids that are adaptively refined in space and time. Some novel considerations for formulating the semi-discrete system of equations in computational space are combined with detailed mechanisms for accommodating the adapting grids. Furthermore, these considerations ensure that conservation is maintained and that the divergence of a constant vector field is always zero (freestream-preservation property). The solution in time is advanced with a fourth-order Runge-Kutta method. A series of tests verifies that the expected accuracy is achieved in smooth flows, and the solution of a Mach reflection problem demonstrates the effectiveness of the algorithm in resolving strong discontinuities.

  18. A synopsis of X-band radar-derived results from New River Inlet, NC (May 2012): Wave transformation, bathymetry, and tidal currents

    NASA Astrophysics Data System (ADS)

    Honegger, D. A.; Haller, M. C.; Diaz Mendez, G. M.; Pittman, R.; Catalan, P. A.

    2012-12-01

    Land-based X-band marine radar observations were collected as part of the month-long DARLA-MURI / RIVET-DRI field experiment at New River Inlet, NC in May 2012. Here we present a synopsis of preliminary results utilizing microwave radar backscatter time series collected from an antenna located 400 m inside the inlet mouth and with a footprint spanning 1000 m beyond the ebb shoals. Two crucial factors in the forcing and constraining of nearshore numerical models are accurate bathymetry and offshore variability in the wave field. Image time series of radar backscatter from surface gravity waves can be utilized to infer these parameters over a large swath and during times of poor optical visibility. Presented are radar-derived wavenumber vector maps obtained from the Plant et al. (2008) algorithm and bathymetric estimates as calculated using Holman et al. (JGR, in review). We also evaluate the effects of tidal currents on the wave directions and depth inversion accuracy. In addition, shifts in the average wave breaking patterns at tidal frequencies shed light on depth- (and possibly current-) induced breaking as a function of tide level and tidal current velocity, while shifts over longer timescales imply bedform movement during the course of the experiment. Lastly, lowpass filtered radar image time series of backscatter intensity are shown to identify the structure and propagation of tidal plume fronts and multiscale ebb jets at the offshore shoal boundary.
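
    The basic ingredient of such depth inversion can be sketched as follows: with the wave angular frequency and wavenumber observed from the image time series, the linear dispersion relation is inverted for depth. Published algorithms add Doppler corrections for currents, weighting, and temporal updating, none of which appears in this sketch.

```python
# Basic ingredient of radar depth inversion: with wave angular frequency
# omega and wavenumber k observed from the image time series, invert the
# linear dispersion relation omega^2 = g*k*tanh(k*h) for depth h.
# (Doppler corrections for currents and Kalman-style updating used in
# published algorithms are not reproduced here.)
import numpy as np
from scipy.optimize import brentq

g = 9.81

def depth_from_dispersion(omega, k, h_max=100.0):
    f = lambda h: omega**2 - g * k * np.tanh(k * h)
    return brentq(f, 1e-3, h_max)   # root of the dispersion residual in depth

# example: 8 s waves with an observed wavelength of 70 m
T, wavelength = 8.0, 70.0
omega, k = 2 * np.pi / T, 2 * np.pi / wavelength
print("inverted depth:", round(depth_from_dispersion(omega, k), 2), "m")
```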

  19. A study of real-time computer graphic display technology for aeronautical applications

    NASA Technical Reports Server (NTRS)

    Rajala, S. A.

    1981-01-01

    The development, simulation, and testing of an algorithm for anti-aliasing vector drawings is discussed. The pseudo anti-aliasing line drawing algorithm is an extension to Bresenham's algorithm for computer control of a digital plotter. The algorithm produces a series of overlapping line segments where the display intensity shifts from one segment to the other in this overlap (transition region). In this algorithm the length of the overlap and the intensity shift are essentially constants because the transition region is an aid to the eye in integrating the segments into a single smooth line.
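
    For reference, the classic Bresenham algorithm that the pseudo anti-aliasing method extends is sketched below; the overlapping-segment intensity transitions described above are not implemented.

```python
# For reference: the classic Bresenham line algorithm that the pseudo
# anti-aliasing method extends. (The overlapping-segment intensity
# transitions described above are not implemented here.)
def bresenham(x0, y0, x1, y1):
    """Return the integer pixel coordinates of the line from (x0, y0) to (x1, y1)."""
    points = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:            # step in x
            err += dy
            x0 += sx
        if e2 <= dx:            # step in y
            err += dx
            y0 += sy
    return points

print(bresenham(0, 0, 7, 3))
```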

  20. Cointegration of output, capital, labor, and energy

    NASA Astrophysics Data System (ADS)

    Stresing, R.; Lindenberger, D.; Kümmel, R.

    2008-11-01

    Cointegration analysis is applied to the linear combinations of the time series of (the logarithms of) output, capital, labor, and energy for Germany, Japan, and the USA since 1960. The computed cointegration vectors represent the output elasticities of the aggregate energy-dependent Cobb-Douglas function. The output elasticities give the economic weights of the production factors capital, labor, and energy. We find that they are for labor much smaller and for energy much larger than the cost shares of these factors. In standard economic theory output elasticities equal cost shares. Our heterodox findings support results obtained with LINEX production functions.
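
    For readers unfamiliar with the setup, the energy-dependent Cobb-Douglas function whose output elasticities the cointegration vectors represent can be written in log-linear form as below (notation assumed).

```latex
% Log-linear form of the energy-dependent Cobb-Douglas function assumed
% in the cointegration analysis (notation: Y output, K capital, L labor,
% E energy; alpha, beta, gamma are the output elasticities).
\[
  Y = Y_0\, K^{\alpha} L^{\beta} E^{\gamma}
  \quad\Longrightarrow\quad
  \ln Y = \ln Y_0 + \alpha \ln K + \beta \ln L + \gamma \ln E ,
\]
% so a cointegrating relation among the logarithms of output, capital,
% labor, and energy identifies (alpha, beta, gamma), the economic weights
% of the production factors.
```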

  1. Bivariate sub-Gaussian model for stock index returns

    NASA Astrophysics Data System (ADS)

    Jabłońska-Sabuka, Matylda; Teuerle, Marek; Wyłomańska, Agnieszka

    2017-11-01

    Financial time series are commonly modeled with methods assuming data normality. However, the real distribution can be nontrivial, also not having an explicitly formulated probability density function. In this work we introduce novel parameter estimation and high-powered distribution testing methods which do not rely on closed form densities, but use the characteristic functions for comparison. The approach applied to a pair of stock index returns demonstrates that such a bivariate vector can be a sample coming from a bivariate sub-Gaussian distribution. The methods presented here can be applied to any nontrivially distributed financial data, among others.

  2. An iterative solver for the 3D Helmholtz equation

    NASA Astrophysics Data System (ADS)

    Belonosov, Mikhail; Dmitriev, Maxim; Kostin, Victor; Neklyudov, Dmitry; Tcheverda, Vladimir

    2017-09-01

    We develop a frequency-domain iterative solver for numerical simulation of acoustic waves in 3D heterogeneous media. It is based on the application of a unique preconditioner to the Helmholtz equation that ensures convergence for Krylov subspace iteration methods. Effective inversion of the preconditioner involves the Fast Fourier Transform (FFT) and numerical solution of a series of boundary value problems for ordinary differential equations. Matrix-by-vector multiplication for iterative inversion of the preconditioned matrix involves inversion of the preconditioner and pointwise multiplication of grid functions. Our solver has been verified by benchmarking against exact solutions and a time-domain solver.

  3. Quantification of cardiorespiratory interactions based on joint symbolic dynamics.

    PubMed

    Kabir, Muammar M; Saint, David A; Nalivaiko, Eugene; Abbott, Derek; Voss, Andreas; Baumert, Mathias

    2011-10-01

    Cardiac and respiratory rhythms are highly nonlinear and nonstationary. As a result traditional time-domain techniques are often inadequate to characterize their complex dynamics. In this article, we introduce a novel technique to investigate the interactions between R-R intervals and respiratory phases based on their joint symbolic dynamics. To evaluate the technique, electrocardiograms (ECG) and respiratory signals were recorded in 13 healthy subjects in different body postures during spontaneous and controlled breathing. Herein, the R-R time series were extracted from ECG and respiratory phases were obtained from abdomen impedance belts using the Hilbert transform. Both time series were transformed into ternary symbol vectors based on the changes between two successive R-R intervals or respiratory phases. Subsequently, words of different symbol lengths were formed and the correspondence between the two series of words was determined to quantify the interaction between cardiac and respiratory cycles. To validate our results, respiratory sinus arrhythmia (RSA) was further studied using the phase-averaged characterization of the RSA pattern. The percentage of similarity of the sequence of symbols, between the respective words of the two series determined by joint symbolic dynamics, was significantly reduced in the upright position compared to the supine position (26.4 ± 4.7 vs. 20.5 ± 5.4%, p < 0.01). Similarly, RSA was also reduced during upright posture, but the difference was less significant (0.11 ± 0.02 vs. 0.08 ± 0.01 s, p < 0.05). In conclusion, joint symbolic dynamics provides a new efficient technique for the analysis of cardiorespiratory interaction that is highly sensitive to the effects of orthostatic challenge.
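
    A sketch of the symbolization step is given below: successive differences of the R-R and respiratory-phase series are mapped to ternary symbols and grouped into words of length 3. The threshold and the exact word-correspondence measure are assumptions, and the series are synthetic.

```python
# Sketch of the symbolization step: successive differences of the R-R and
# respiratory-phase series are mapped to ternary symbols
# (decrease / no change / increase) and grouped into words of length 3.
# The threshold and word-correspondence measure are assumptions.
import numpy as np

def to_symbols(x, threshold=0.0):
    d = np.diff(x)
    return np.where(d > threshold, 2, np.where(d < -threshold, 0, 1))

def to_words(symbols, length=3):
    return [tuple(symbols[i:i + length]) for i in range(len(symbols) - length + 1)]

rng = np.random.default_rng(11)
rr = 0.8 + 0.05 * np.sin(np.linspace(0, 20, 300)) + 0.01 * rng.normal(size=300)
resp_phase = np.linspace(0, 60, 300) % (2 * np.pi)    # synthetic respiratory phase

w_rr = to_words(to_symbols(rr, threshold=0.005))
w_rp = to_words(to_symbols(resp_phase))
similarity = np.mean([a == b for a, b in zip(w_rr, w_rp)])
print("fraction of identical words:", round(similarity, 3))
```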

  4. A transient stochastic weather generator incorporating climate model uncertainty

    NASA Astrophysics Data System (ADS)

    Glenis, Vassilis; Pinamonti, Valentina; Hall, Jim W.; Kilsby, Chris G.

    2015-11-01

    Stochastic weather generators (WGs), which provide long synthetic time series of weather variables such as rainfall and potential evapotranspiration (PET), have found widespread use in water resources modelling. When conditioned upon the changes in climatic statistics (change factors, CFs) predicted by climate models, WGs provide a useful tool for climate impacts assessment and adaption planning. The latest climate modelling exercises have involved large numbers of global and regional climate models integrations, designed to explore the implications of uncertainties in the climate model formulation and parameter settings: so called 'perturbed physics ensembles' (PPEs). In this paper we show how these climate model uncertainties can be propagated through to impact studies by testing multiple vectors of CFs, each vector derived from a different sample from a PPE. We combine this with a new methodology to parameterise the projected time-evolution of CFs. We demonstrate how, when conditioned upon these time-dependent CFs, an existing, well validated and widely used WG can be used to generate non-stationary simulations of future climate that are consistent with probabilistic outputs from the Met Office Hadley Centre's Perturbed Physics Ensemble. The WG enables extensive sampling of natural variability and climate model uncertainty, providing the basis for development of robust water resources management strategies in the context of a non-stationary climate.

  5. Vector Topographic Map Data over the BOREAS NSA and SSA in SIF Format

    NASA Technical Reports Server (NTRS)

    Knapp, David; Nickeson, Jaime; Hall, Forrest G. (Editor)

    2000-01-01

    This data set contains vector contours and other features of individual topographic map sheets from the National Topographic Series (NTS). The map sheet files were received in Standard Interchange Format (SIF) and cover the BOReal Ecosystem-Atmosphere Study (BOREAS) Northern Study Area (NSA) and Southern Study Area (SSA) at scales of 1:50,000 and 1:250,000. The individual files are stored in compressed Unix tar archives.

  6. Spatial Pyramid Covariance based Compact Video Code for Robust Face Retrieval in TV-series.

    PubMed

    Li, Yan; Wang, Ruiping; Cui, Zhen; Shan, Shiguang; Chen, Xilin

    2016-10-10

    We address the problem of face video retrieval in TV-series, which searches video clips based on the presence of a specific character, given one face track of that character. This is tremendously challenging because, on one hand, faces in TV-series are captured in largely uncontrolled conditions with complex appearance variations, and on the other hand the retrieval task typically needs an efficient representation with low time and space complexity. To handle this problem, we propose a compact and discriminative representation for the huge body of video data, named Compact Video Code (CVC). Our method first models the face track by its sample (i.e., frame) covariance matrix to capture the video data variations in a statistical manner. To incorporate discriminative information and obtain a more compact video signature suitable for retrieval, the high-dimensional covariance representation is further encoded as a much lower-dimensional binary vector, which finally yields the proposed CVC. Specifically, each bit of the code, i.e., each dimension of the binary vector, is produced via supervised learning in a max-margin framework, which aims to strike a balance between the discriminability and stability of the code. In addition, we extend the descriptive granularity of the covariance matrix from the traditional pixel level to the more general patch level, and propose a novel hierarchical video representation named Spatial Pyramid Covariance (SPC) along with a fast calculation method. Face retrieval experiments on two challenging TV-series video databases, i.e., the Big Bang Theory and Prison Break, demonstrate the competitiveness of the proposed CVC over state-of-the-art retrieval methods. In addition, as a general video matching algorithm, CVC is also evaluated on the traditional video face recognition task on a standard Internet database, i.e., YouTube Celebrities, showing quite promising performance using an extremely compact code of only 128 bits.

  7. Applications of Java and Vector Graphics to Astrophysical Visualization

    NASA Astrophysics Data System (ADS)

    Edirisinghe, D.; Budiardja, R.; Chae, K.; Edirisinghe, G.; Lingerfelt, E.; Guidry, M.

    2002-12-01

    We describe a series of projects utilizing the portability of Java programming coupled with the compact nature of vector graphics (SVG and SWF formats) for setup and control of calculations, local and collaborative visualization, and interactive 2D and 3D animation presentations in astrophysics. Through a set of examples, we demonstrate how such an approach can allow efficient and user-friendly control of calculations in compiled languages such as Fortran 90 or C++ through portable graphical interfaces written in Java, and how the output of such calculations can be packaged in vector-based animation having interactive controls and extremely high visual quality, but very low bandwidth requirements.

  8. [New hosts and vectors for genome cloning]. Progress report, 1990--1991

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The main goal of our project remains the development of new bacterial hosts and vectors for the stable propagation of human DNA clones in E. coli. During the past six months of our current budget period, we have (1) continued to develop new hosts that permit the stable maintenance of unstable features of human DNA, and (2) developed a series of vectors for (a) cloning large DNA inserts, (b) assessing the frequency of human sequences that are lethal to the growth of E. coli, and (c) assessing the stability of human sequences cloned in M13 for large-scale sequencing projects.

  9. Molecular identification of vectors of Leishmania in Colombia: mitochondrial introgression in the Lutzomyia townsendi series.

    PubMed

    Testa, J M; Montoya-Lerma, J; Cadena, H; Oviedo, M; Ready, P D

    2002-12-01

    The identity of the sandfly vectors of Leishmania braziliensis in Valle del Cauca Department, Colombia, was originally given as Lutzomyia townsendi, but then changed to L. youngi, another member of the L. townsendi series (Verrucarum group) with isomorphic females. To identify members of this series in Valle del Cauca, we analyzed the nuclear gene elongation factor-alpha (EF-alpha) and the mitochondrial gene cytochrome b (Cyt b). DNA sequences from the L. verrucarum series (L. columbiana, L. evansi and L. ovallesi) were used as outgroups. Flies from two locations on the western cordillera of the Andes were identified as L. townsendi s.s., according to male morphology and distinctive gene lineages. In the third location, on the central cordillera of the Andes, most specimens were identified as belonging to a geographical population of L. youngi, according to male morphology, an EF-alpha lineage shared with L. youngi from the Venezuelan-type locality, and a distinctive Cyt b sub-lineage. All other specimens were identified as L. youngi with the introgressed Cyt b sequences of L. townsendi. Such interspecific introgression implies that vectorial traits and ecological associations may no longer be viewed as fixed properties of different morphospecies.

  10. Do foreign exchange and equity markets co-move in Latin American region? Detrended cross-correlation approach

    NASA Astrophysics Data System (ADS)

    Bashir, Usman; Yu, Yugang; Hussain, Muntazir; Zebende, Gilney F.

    2016-11-01

    This paper investigates the dynamics of the relationship between foreign exchange markets and stock markets through time-varying co-movements. To this end, we analyzed monthly time series for Latin American countries over the period 1991 to 2015. We apply Granger causality to verify the direction of causality between the foreign exchange and stock markets, and the detrended cross-correlation approach (ρDCCA) to detect co-movements at different time scales. Our empirical results suggest a positive cross-correlation between exchange rates and stock prices for all Latin American countries. The findings reveal two clear patterns of correlation. First, Brazil and Argentina show positive correlation in both short and long time frames. Second, the remaining countries are negatively correlated at shorter time scales, gradually moving to positive. This paper contributes to the field in three ways. First, we verified the co-movements of exchange rates and stock prices, which were rarely discussed in previous empirical studies. Second, the ρDCCA coefficient is a robust and powerful methodology for measuring cross-correlation when dealing with non-stationary time series. Third, most previous studies employed one or two time scales using co-integration and vector autoregressive approaches, and not much is known about the co-movements between foreign exchange and stock markets at varying time scales; the ρDCCA coefficient facilitates understanding at this level of detail.

  11. Decomposition of the complex system into nonlinear spatio-temporal modes: algorithm and application to climate data mining

    NASA Astrophysics Data System (ADS)

    Feigin, Alexander; Gavrilov, Andrey; Loskutov, Evgeny; Mukhin, Dmitry

    2015-04-01

    Proper decomposition of a complex system into well separated "modes" is a way to reveal and understand the mechanisms governing the system's behaviour, as well as to discover essential feedbacks and nonlinearities. The decomposition is also a natural procedure for constructing models of the corresponding sub-systems, and of the system as a whole, that are adequate yet as simple as possible. In recent works, two new methods of decomposing the Earth's climate system into well separated modes were discussed. The first method [1-3] is based on MSSA (Multichannel Singular Spectral Analysis) [4] for linearly expanding vector (space-distributed) time series and allows for delayed correlations of the processes recorded at spatially separated points. The second one [5-7] allows the construction of nonlinear dynamic modes, but neglects delayed correlations. It was demonstrated [1-3] that the first method provides effective separation of different time scales, but prevents correct reduction of the data dimension: the slope of the variance spectrum of the spatio-temporal empirical orthogonal functions that are the "structural material" for the linear spatio-temporal modes is too flat. The second method overcomes this problem: the variance spectrum of the nonlinear modes falls off much more sharply [5-7]. However, neglecting time-lag correlations introduces an uncontrolled mode-selection error that increases with the mode time scale. In this report we combine the two methods in such a way that the developed algorithm allows the construction of nonlinear spatio-temporal modes. The algorithm is applied to the decomposition of (i) several hundred years of globally distributed data generated by the INM RAS Coupled Climate Model [8], and (ii) a 156-year time series of SST anomalies distributed over the globe [9]. We compare the efficiency of the different decomposition methods and discuss the ability of nonlinear spatio-temporal modes to provide adequate yet maximally simple ("optimal") models of climate systems. 1. Feigin A.M., Mukhin D., Gavrilov A., Volodin E.M., and Loskutov E.M. (2013) "Separation of spatial-temporal patterns ("climatic modes") by combined analysis of really measured and generated numerically vector time series", AGU 2013 Fall Meeting, Abstract NG33A-1574. 2. Alexander Feigin, Dmitry Mukhin, Andrey Gavrilov, Evgeny Volodin, and Evgeny Loskutov (2014) "Approach to analysis of multiscale space-distributed time series: separation of spatio-temporal modes with essentially different time scales", Geophysical Research Abstracts, Vol. 16, EGU2014-6877. 3. Dmitry Mukhin, Dmitri Kondrashov, Evgeny Loskutov, Andrey Gavrilov, Alexander Feigin, and Michael Ghil (2014) "Predicting critical transitions in ENSO models, Part II: Spatially dependent models", Journal of Climate (accepted, doi: 10.1175/JCLI-D-14-00240.1). 4. Ghil, M., R. M. Allen, M. D. Dettinger, K. Ide, D. Kondrashov, et al. (2002) "Advanced spectral methods for climatic time series", Rev. Geophys. 40(1), 3.1-3.41. 5. Dmitry Mukhin, Andrey Gavrilov, Evgeny M Loskutov and Alexander M Feigin (2014) "Nonlinear Decomposition of Climate Data: a New Method for Reconstruction of Dynamical Modes", AGU 2014 Fall Meeting, Abstract NG43A-3752. 6. Andrey Gavrilov, Dmitry Mukhin, Evgeny Loskutov, and Alexander Feigin (2015) "Empirical decomposition of climate data into nonlinear dynamic modes", Geophysical Research Abstracts, Vol. 17, EGU2015-627. 7.
Dmitry Mukhin, Andrey Gavrilov, Evgeny Loskutov, Alexander Feigin, and Juergen Kurths (2015) "Reconstruction of principal dynamical modes from climatic variability: nonlinear approach", Geophysical Research Abstracts, Vol. 17, EGU2015-5729. 8. http://83.149.207.89/GCM_DATA_PLOTTING/GCM_INM_DATA_XY_en.htm. 9. http://iridl.ldeo.columbia.edu/SOURCES/.KAPLAN/.EXTENDED/.v2/.ssta/.

  12. Watershed reliability, resilience and vulnerability analysis under uncertainty using water quality data.

    PubMed

    Hoque, Yamen M; Tripathi, Shivam; Hantush, Mohamed M; Govindaraju, Rao S

    2012-10-30

    A method for assessment of watershed health is developed by employing measures of reliability, resilience and vulnerability (R-R-V) using stream water quality data. Observed water quality data are usually sparse, so that a water quality time-series is often reconstructed using surrogate variables (streamflow). A Bayesian algorithm based on relevance vector machine (RVM) was employed to quantify the error in the reconstructed series, and a probabilistic assessment of watershed status was conducted based on established thresholds for various constituents. As an application example, observed water quality data for several constituents at different monitoring points within the Cedar Creek watershed in north-east Indiana (USA) were utilized. Considering uncertainty in the data for the period 2002-2007, the R-R-V analysis revealed that the Cedar Creek watershed tends to be in compliance with respect to selected pesticides, ammonia and total phosphorus. However, the watershed was found to be prone to violations of sediment standards. Ignoring uncertainty in the water quality time-series led to misleading results especially in the case of sediments. Results indicate that the methods presented in this study may be used for assessing the effects of different stressors over a watershed. The method shows promise as a management tool for assessing watershed health. Copyright © 2012 Elsevier Ltd. All rights reserved.

  13. The past, present, and future of the U.S. electric power sector: Examining regulatory changes using multivariate time series approaches

    NASA Astrophysics Data System (ADS)

    Binder, Kyle Edwin

    The U.S. energy sector has undergone continuous change in the regulatory, technological, and market environments. These developments show no signs of slowing. Accordingly, it is imperative that energy market regulators and participants develop a strong comprehension of market dynamics and the potential implications of their actions. This dissertation contributes to a better understanding of the past, present, and future of U.S. energy market dynamics and interactions with policy. Advancements in multivariate time series analysis are employed in three related studies of the electric power sector. Overall, results suggest that regulatory changes have had and will continue to have important implications for the electric power sector. The sector, however, has exhibited adaptability to past regulatory changes and is projected to remain resilient in the future. Tests for constancy of the long run parameters in a vector error correction model are applied to determine whether relationships among coal inventories in the electric power sector, input prices, output prices, and opportunity costs have remained constant over the past 38 years. Two periods of instability are found, the first following railroad deregulation in the U.S. and the second corresponding to a number of major regulatory changes in the electric power and natural gas sectors. Relationships among Renewable Energy Credit prices, electricity prices, and natural gas prices are estimated using a vector error correction model. Results suggest that Renewable Energy Credit prices do not completely behave as previously theorized in the literature. Potential reasons for the divergence between theory and empirical evidence are the relative immaturity of current markets and continuous institutional intervention. Potential impacts of future CO2 emissions reductions under the Clean Power Plan on economic and energy sector activity are estimated. Conditional forecasts based on an outlined path for CO2 emissions are developed from a factor-augmented vector autoregressive model for a large dataset. Unconditional and conditional forecasts are compared for U.S. industrial production, real personal income, and estimated factors. Results suggest that economic growth will be slower under the Clean Power Plan than it would otherwise; however, CO2 emissions reductions and economic growth can be achieved simultaneously.

  14. A Feature Fusion Based Forecasting Model for Financial Time Series

    PubMed Central

    Guo, Zhiqiang; Wang, Huaiqing; Liu, Quan; Yang, Jie

    2014-01-01

    Predicting the stock market has become an increasingly interesting research area for both researchers and investors, and many prediction models have been proposed. In these models, feature selection techniques are used to pre-process the raw data and remove noise. In this paper, a prediction model is constructed to forecast stock market behavior with the aid of independent component analysis, canonical correlation analysis, and a support vector machine. First, two types of features are extracted from the historical closing prices and 39 technical variables obtained by independent component analysis. Second, a canonical correlation analysis method is utilized to combine the two types of features and extract intrinsic features to improve the performance of the prediction model. Finally, a support vector machine is applied to forecast the next day's closing price. The proposed model is applied to the Shanghai stock market index and the Dow Jones index, and experimental results show that the proposed model performs better in prediction than two other similar models. PMID:24971455
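    A compact sketch of the described ICA-CCA-SVR pipeline on synthetic data is given below; the window length, component counts and the toy "technical variables" are assumptions, and the original study's exact feature construction is not reproduced.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cross_decomposition import CCA
from sklearn.svm import SVR

rng = np.random.default_rng(2)
T = 600
close = np.cumsum(rng.normal(size=T)) + 100.0            # synthetic closing prices
tech = np.column_stack([np.convolve(close, np.ones(w) / w, mode="same")
                        for w in (5, 10, 20)])            # toy "technical variables"

# Samples: a 10-day window of past closes plus the technical variables on the
# last day of the window; the target is the next day's close.
win = 10
X_price = np.array([close[t - win:t] for t in range(win, T)])
X_tech = tech[win - 1:T - 1]
y = close[win:]

# Two feature sets extracted by ICA, then fused with CCA.
ica_p = FastICA(n_components=4, random_state=0).fit_transform(X_price)
ica_t = FastICA(n_components=3, random_state=0).fit_transform(X_tech)
cca = CCA(n_components=2).fit(ica_p, ica_t)
Fp, Ft = cca.transform(ica_p, ica_t)
features = np.hstack([Fp, Ft])

# Support vector regression forecasts the next day's closing price.
split = int(0.8 * len(y))
model = SVR(C=10.0, epsilon=0.1).fit(features[:split], y[:split])
print("test RMSE:", np.sqrt(np.mean((model.predict(features[split:]) - y[split:]) ** 2)))
```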

  15. Zn(II)-dipicolylamine-based metallo-lipids as novel non-viral gene vectors.

    PubMed

    Su, Rong-Chuan; Liu, Qiang; Yi, Wen-Jing; Zhao, Zhi-Gang

    2017-08-01

    In this study, a series of Zn(II)-dipicolylamine (Zn-DPA) based cationic lipids bearing different hydrophobic tails (long chains, α-tocopherol, cholesterol or diosgenin) were synthesized. The structure-activity relationship (SAR) of these lipids was studied in detail by investigating the effects of several structural aspects, including the type of hydrophobic tail, the chain length and the degree of saturation. In addition, several assays were used to study their interactions with plasmid DNA, and the results revealed that these lipids could condense DNA into nanosized particles with appropriate size and zeta-potentials. MTT-based cell viability assays showed that lipoplexes 5 had low cytotoxicity. The in vitro gene transfection studies showed that the hydrophobic tails clearly affected the transfection efficiency (TE), and the hexadecanol-containing lipid 5b gave the best TE, which was 2.2 times higher than that of bPEI 25k in the presence of 10% serum. The results not only demonstrate that these lipids might be promising non-viral gene vectors, but also afford clues for further optimization of lipidic gene delivery materials.

  16. The infinitesimal operator for the semigroup of the Frobenius-Perron operator from image sequence data: vector fields and transport barriers from movies.

    PubMed

    Santitissadeekorn, N; Bollt, E M

    2007-06-01

    In this paper, we present an approach to approximate the Frobenius-Perron transfer operator from a sequence of time-ordered images, that is, a movie dataset. Unlike time-series data, successive images do not provide direct access to the trajectory of a point in phase space (more precisely, of a pixel in the image plane). Therefore, we reconstruct the velocity field from image sequences based on the infinitesimal generator of the Frobenius-Perron operator. Moreover, we relate this problem to the well-known optical flow problem from the computer vision community and validate the continuity equation derived from the infinitesimal operator as a constraint equation for the optical flow problem. Once the vector field, and from it a discrete transfer operator, has been found, we present a graph modularity method as a tool to discover basin structure in the phase space. Together with a tool to reconstruct a velocity field, this graph-based partition method provides us with a way to study transport behavior and other ergodic properties of measurable dynamical systems captured only through image sequences.

  17. Fast computation of voxel-level brain connectivity maps from resting-state functional MRI using l₁-norm as approximation of Pearson's temporal correlation: proof-of-concept and example vector hardware implementation.

    PubMed

    Minati, Ludovico; Zacà, Domenico; D'Incerti, Ludovico; Jovicich, Jorge

    2014-09-01

    An outstanding issue in graph-based analysis of resting-state functional MRI is the choice of network nodes. Individual consideration of all brain voxels may represent a less biased approach than parcellating the cortex according to pre-determined atlases, but entails establishing connectedness for 10^9-10^11 links, with often prohibitive computational cost. Using a representative Human Connectome Project dataset, we show that, following appropriate time-series normalization, it may be possible to accelerate connectivity determination by replacing Pearson correlation with the l1-norm. Even though the adjacency matrices derived from correlation coefficients and l1-norms are not identical, their similarity is high. Further, we describe and provide in full an example vector hardware implementation of the l1-norm on an array of 4096 zero instruction-set processors. Calculation times <1000 s are attainable, removing the major deterrent to voxel-based resting-state network mapping and revealing fine-grained node degree heterogeneity. The l1-norm should be given consideration as a substitute for correlation in very high-density resting-state functional connectivity analyses. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.
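    The core idea, that after per-series normalization the l1-norm between two time series is monotonically related to their Pearson correlation, can be checked with a few lines of NumPy; the sketch below uses synthetic "voxel" series and is not the paper's hardware implementation.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_vox, T = 200, 150
# Synthetic voxel time series with a shared signal component.
shared = rng.normal(size=T)
data = 0.5 * shared + rng.normal(size=(n_vox, T))

# Normalize each time series to zero mean and unit variance.
z = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)

# Pearson correlation vs. l1-norm of the difference, for all voxel pairs.
iu = np.triu_indices(n_vox, k=1)
pearson = np.corrcoef(z)[iu]
l1 = np.abs(z[:, None, :] - z[None, :, :]).sum(axis=-1)[iu]

# After normalization the two measures are (inversely) monotonically related.
print("Spearman rank correlation:", spearmanr(pearson, l1).correlation)
```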

  18. Comparison of the Performances of Five Primer Sets for the Detection and Quantification of Plasmodium in Anopheline Vectors by Real-Time PCR.

    PubMed

    Chaumeau, V; Andolina, C; Fustec, B; Tuikue Ndam, N; Brengues, C; Herder, S; Cerqueira, D; Chareonviriyaphap, T; Nosten, F; Corbel, V

    2016-01-01

    Quantitative real-time polymerase chain reaction (qrtPCR) has significantly improved the detection of Plasmodium in anopheline vectors. A wide variety of primers has been used in different assays, mostly adapted from the molecular diagnosis of malaria in humans. However, such an adaptation can impact the sensitivity of the PCR. Therefore, we compared the sensitivity of five primer sets with different molecular targets on blood-stage, sporozoite and oocyst standards of Plasmodium falciparum (Pf) and P. vivax (Pv). Dilution series of standard DNA were used to discriminate between methods at low concentrations of parasite and to generate standard curves suitable for the absolute quantification of Plasmodium sporozoites. Our results showed that the best primers to detect blood stages were not necessarily the best ones to detect sporozoites. The absolute detection threshold of our qrtPCR assay varied between 3.6 and 360 Pv sporozoites and between 6 and 600 Pf sporozoites per mosquito according to the primer set used in the reaction mix. In this paper, we discuss the general performance of each primer set and highlight the need to use efficient detection methods for transmission studies.

  19. Comparison of the Performances of Five Primer Sets for the Detection and Quantification of Plasmodium in Anopheline Vectors by Real-Time PCR

    PubMed Central

    Chaumeau, V.; Andolina, C.; Fustec, B.; Tuikue Ndam, N.; Brengues, C.; Herder, S.; Cerqueira, D.; Chareonviriyaphap, T.; Nosten, F.; Corbel, V.

    2016-01-01

    Quantitative real-time polymerase chain reaction (qrtPCR) has significantly improved the detection of Plasmodium in anopheline vectors. A wide variety of primers has been used in different assays, mostly adapted from the molecular diagnosis of malaria in humans. However, such an adaptation can impact the sensitivity of the PCR. Therefore, we compared the sensitivity of five primer sets with different molecular targets on blood-stage, sporozoite and oocyst standards of Plasmodium falciparum (Pf) and P. vivax (Pv). Dilution series of standard DNA were used to discriminate between methods at low concentrations of parasite and to generate standard curves suitable for the absolute quantification of Plasmodium sporozoites. Our results showed that the best primers to detect blood stages were not necessarily the best ones to detect sporozoites. The absolute detection threshold of our qrtPCR assay varied between 3.6 and 360 Pv sporozoites and between 6 and 600 Pf sporozoites per mosquito according to the primer set used in the reaction mix. In this paper, we discuss the general performance of each primer set and highlight the need to use efficient detection methods for transmission studies. PMID:27441839
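    For readers less familiar with absolute quantification, the sketch below shows how a standard curve built from a dilution series converts Ct values into copy numbers; the numbers are illustrative assumptions, not the study's standards or primer efficiencies.

```python
import numpy as np

# Dilution series of a sporozoite DNA standard (copies per reaction) and the
# corresponding Ct values -- illustrative numbers only.
copies = np.array([3.6e5, 3.6e4, 3.6e3, 3.6e2, 3.6e1, 3.6])
ct = np.array([18.1, 21.5, 24.9, 28.4, 31.8, 35.2])

# Standard curve: Ct = slope * log10(copies) + intercept.
slope, intercept = np.polyfit(np.log10(copies), ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0
print(f"slope={slope:.2f}, PCR efficiency={efficiency:.1%}")

# Absolute quantification of unknown mosquito samples from their Ct values.
unknown_ct = np.array([26.0, 33.0])
estimated_copies = 10 ** ((unknown_ct - intercept) / slope)
print("estimated sporozoite copies:", estimated_copies.round(1))
```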

  20. Essays on price dynamics, discovery, and dynamic threshold effects among energy spot markets in North America

    NASA Astrophysics Data System (ADS)

    Park, Haesun

    2005-12-01

    Given the role the electricity and natural gas sectors play in the North American economy, an understanding of how markets for these commodities interact is important. This dissertation independently characterizes the price dynamics of major electricity and natural gas spot markets in North America by combining directed acyclic graphs with time series analyses. Furthermore, the dissertation explores a generalization of price difference bands associated with the law of one price. Interdependencies among 11 major electricity spot markets are examined in Chapter II using a vector autoregression model. Results suggest that the relationships between the markets vary over time. Western markets are separated from the eastern markets and the Electricity Reliability Council of Texas. At longer time horizons these separations disappear. Palo Verde is the most important spot market in the West for price discovery. Southwest Power Pool is the dominant market in the Eastern Interconnected System for price discovery. Interdependencies among eight major natural gas spot markets are investigated using a vector error correction model and the Greedy Equivalence Search Algorithm in Chapter III. Findings suggest that the eight price series are tied together through six long-run cointegration relationships, supporting the argument that the natural gas market has developed into a single integrated market in North America since deregulation. Results indicate that price discovery tends to occur in the excess consuming regions and move to the excess producing regions. Across North America, the U.S. Midwest region, represented by the Chicago spot market, is the most important for price discovery. The Ellisburg-Leidy Hub in Pennsylvania and the Malin Hub in Oregon are important for eastern and western markets, respectively. In Chapter IV, a threshold vector error correction model is applied to the natural gas markets to examine nonlinearities in adjustments to the law of one price. Results show that there are nonlinear adjustments to the law of one price in seven pair-wise markets. Four alternative cases for the law of one price are presented as a theoretical background. A methodology is developed for finding a threshold cointegration model that accounts for seasonality in the threshold levels. Results indicate that dynamic threshold effects vary depending on geographical location and whether the markets are excess producing or excess consuming markets.

  1. Tourism demand in the Algarve region: Evolution and forecast using SVARMA models

    NASA Astrophysics Data System (ADS)

    Lopes, Isabel Cristina; Soares, Filomena; Silva, Eliana Costa e.

    2017-06-01

    Tourism is one of the Portuguese economy's key sectors, and its relative weight has grown over recent years. The Algarve region is particularly focused on attracting foreign tourists and has built up over the years a large and diversified offer of hotel units. In this paper we present a multivariate time series approach to forecast the number of overnight stays in hotel units (hotels, guesthouses or hostels, and tourist apartments) in the Algarve. We fit a seasonal vector autoregressive moving average (SVARMA) model to monthly data between 2006 and 2016. The forecast values were compared with the actual overnight stays in the Algarve in 2016, yielding a MAPE of 15.1% and an RMSE of 53,847.28. The MAPE for the Hotel series was only 4.56%. These forecasts can be used by hotel managers to predict occupancy and to determine the best pricing policy.

  2. Correlation and 3D-tracking of objects by pointing sensors

    DOEpatents

    Griesmeyer, J. Michael

    2017-04-04

    A method and system for tracking at least one object using a plurality of pointing sensors and a tracking system are disclosed herein. In a general embodiment, the tracking system is configured to receive a series of observation data relative to the at least one object over a time base for each of the plurality of pointing sensors. The observation data may include sensor position data, pointing vector data and observation error data. The tracking system may further determine a triangulation point using a magnitude of a shortest line connecting a line of sight value from each of the series of observation data from each of the plurality of sensors to the at least one object, and perform correlation processing on the observation data and triangulation point to determine if at least two of the plurality of sensors are tracking the same object. Observation data may also be branched, associated and pruned using new incoming observation data.
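    A minimal sketch of the triangulation step, the midpoint and length of the shortest segment between two sensors' lines of sight, is given below; the geometry and noise values are illustrative assumptions, not the patented processing chain.

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Midpoint and length of the shortest segment between two lines of sight.

    p1, p2: sensor positions; d1, d2: pointing (line-of-sight) direction vectors.
    A small segment length suggests the two sensors are tracking the same object.
    """
    d1, d2 = np.asarray(d1, float), np.asarray(d2, float)
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b                      # approaches 0 for parallel lines
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1, q2 = p1 + t * d1, p2 + s * d2
    return 0.5 * (q1 + q2), np.linalg.norm(q1 - q2)

# Two sensors pointing at an object near (5, 5, 10), with slightly noisy pointing.
obj = np.array([5.0, 5.0, 10.0])
p1, p2 = np.array([0.0, 0.0, 0.0]), np.array([10.0, 0.0, 0.0])
d1 = (obj - p1) + np.array([0.01, -0.02, 0.0])
d2 = (obj - p2) + np.array([-0.01, 0.01, 0.02])
point, miss = triangulate(p1, d1, p2, d2)
print("triangulation point:", point.round(2), "miss distance:", round(miss, 3))
```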

  3. The Effect of Three-Dimensional Freestream Disturbances on the Supersonic Flow Past a Wedge

    NASA Technical Reports Server (NTRS)

    Duck, Peter W.; Lasseigne, D. Glenn; Hussaini, M. Y.

    1997-01-01

    The interaction between a shock wave (attached to a wedge) and small-amplitude, three-dimensional disturbances of a uniform, supersonic freestream flow is investigated. The paper extends the two-dimensional study of Duck et al. through the use of vector potentials, which render the problem tractable by the same techniques as in the two-dimensional case, in particular by expansion of the solution by means of a Fourier-Bessel series in appropriately chosen coordinates. Results are presented for specific classes of freestream disturbances, and the study shows conclusively that the shock is stable to all classes of disturbances (i.e., time-periodic perturbations to the shock do not grow downstream), provided the flow downstream of the shock is supersonic (loosely corresponding to the weak shock solution). This is shown from our numerical results and also by asymptotic analysis of the Fourier-Bessel series, valid far downstream of the shock.

  4. Molecular surface representation using 3D Zernike descriptors for protein shape comparison and docking.

    PubMed

    Kihara, Daisuke; Sael, Lee; Chikhi, Rayan; Esquivel-Rodriguez, Juan

    2011-09-01

    The tertiary structures of proteins have been solved at an increasing pace in recent years. To capitalize on the enormous effort invested in accumulating these structure data, efficient and effective computational methods need to be developed for comparing, searching, and investigating interactions of protein structures. We introduce the 3D Zernike descriptor (3DZD), an emerging technique for describing molecular surfaces. The 3DZD is a series expansion of a three-dimensional mathematical function, so a tertiary structure is represented compactly by a vector of coefficients of the terms in the series. A strong advantage of the 3DZD is that it is invariant to rotation of the target object. These two characteristics (compact representation and rotation invariance) allow rapid comparison of surface shapes, which is sufficient for real-time structure database screening. In this article, we review various applications of the 3DZD that have been recently proposed.

  5. Time-series panel analysis (TSPA): multivariate modeling of temporal associations in psychotherapy process.

    PubMed

    Ramseyer, Fabian; Kupper, Zeno; Caspar, Franz; Znoj, Hansjörg; Tschacher, Wolfgang

    2014-10-01

    Processes occurring in the course of psychotherapy are characterized by the simple fact that they unfold in time and that the multiple factors engaged in change processes vary highly between individuals (idiographic phenomena). Previous research, however, has neglected the temporal perspective by its traditional focus on static phenomena, which were mainly assessed at the group level (nomothetic phenomena). To support a temporal approach, the authors introduce time-series panel analysis (TSPA), a statistical methodology explicitly focusing on the quantification of temporal, session-to-session aspects of change in psychotherapy. TSPA models are initially built at the level of individuals and are subsequently aggregated at the group level, thus allowing the exploration of prototypical models. TSPA is based on vector auto-regression (VAR), an extension of univariate auto-regression models to multivariate time-series data. The application of TSPA is demonstrated in a sample of 87 outpatient psychotherapy patients who were monitored by postsession questionnaires. Prototypical mechanisms of change were derived from the aggregation of individual multivariate models of psychotherapy process. In a second step, the associations between mechanisms of change (TSPA) and pre- to post-treatment symptom change were explored. TSPA allowed a prototypical process pattern to be identified, in which patient alliance and self-efficacy were linked by a temporal feedback loop. Furthermore, therapist stability over time in both mastery and clarification interventions was positively associated with better outcomes. TSPA is a statistical tool that sheds new light on temporal mechanisms of change. Through this approach, clinicians may gain insight into prototypical patterns of change in psychotherapy. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  6. Interpretation of a compositional time series

    NASA Astrophysics Data System (ADS)

    Tolosana-Delgado, R.; van den Boogaart, K. G.

    2012-04-01

    Common methods for multivariate time series analysis use linear operations, from the definition of a time-lagged covariance/correlation to the prediction of new outcomes. However, when the time series response is a composition (a vector of positive components showing the relative importance of a set of parts in a total, like percentages and proportions), then linear operations are afflicted by several problems. For instance, it has long been recognised that (auto/cross-)correlations between raw percentages are spurious, more dependent on which other components are being considered than on any natural link between the components of interest. Also, a long-term forecast of a composition in models with a linear trend will ultimately predict negative components. In general terms, compositional data should not be treated on a raw scale, but after a log-ratio transformation (Aitchison, 1986: The statistical analysis of compositional data. Chapman and Hall). This is so because the information conveyed by compositional data is relative, as stated in their definition. The principle of working in coordinates allows one to apply any sort of multivariate analysis to a log-ratio transformed composition, as long as this transformation is invertible. This principle applies fully to time series analysis. We will discuss how results (both auto/cross-correlation functions and predictions) can be back-transformed, viewed and interpreted in a meaningful way. One view is to use the exhaustive set of all possible pairwise log-ratios, which allows the results to be expressed as D(D-1)/2 separate, interpretable sets of one-dimensional models showing the behaviour of each possible pairwise log-ratio. Another view is the interpretation of estimated coefficients or correlations back-transformed in terms of compositions. These two views are compatible and complementary. These issues are illustrated with time series of seasonal precipitation patterns at different rain gauges of the USA. In this data set, the proportion of annual precipitation falling in winter, spring, summer and autumn is considered a 4-component time series. Three invertible log-ratios are defined for the calculations, balancing rainfall in autumn vs. winter, in summer vs. spring, and in autumn-winter vs. spring-summer. Results suggest a 2-year correlation range, and a certain oscillatory behaviour in the last balance, which does not occur in the other two.
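    A short sketch of the log-ratio (balance) coordinates described above, applied to a synthetic four-part seasonal precipitation composition, is given below; the balance definitions follow the standard isometric log-ratio convention and the data are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
years = 30
# Synthetic seasonal precipitation shares (winter, spring, summer, autumn) per year.
raw = rng.gamma(shape=(6.0, 4.0, 3.0, 5.0), scale=1.0, size=(years, 4))
comp = raw / raw.sum(axis=1, keepdims=True)          # rows sum to 1
win, spr, summ, aut = comp.T

# Three invertible log-ratio coordinates (balances), analogous to those in the text:
# autumn vs. winter, summer vs. spring, and autumn-winter vs. spring-summer.
b1 = np.sqrt(0.5) * np.log(aut / win)
b2 = np.sqrt(0.5) * np.log(summ / spr)
b3 = np.log(np.sqrt(aut * win) / np.sqrt(summ * spr))   # ilr coefficient sqrt(2*2/4) = 1
coords = np.column_stack([b1, b2, b3])

# Any standard multivariate time-series tool (e.g. auto-/cross-correlations) can now
# be applied to `coords` and the results mapped back to the simplex afterwards.
print("lag-1 autocorrelations:",
      [round(np.corrcoef(c[:-1], c[1:])[0, 1], 2) for c in coords.T])
```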

  7. Volatility forecasting for low-volatility portfolio selection in the US and the Korean equity markets

    NASA Astrophysics Data System (ADS)

    Kim, Saejoon

    2018-01-01

    We consider the problem of low-volatility portfolio selection, which has been the subject of extensive research in the field of portfolio selection. To improve the currently existing techniques that rely purely on past information to select low-volatility portfolios, this paper investigates the use of time series regression techniques that forecast future volatility to select the portfolios. In particular, for the first time, the utility of support vector regression and its enhancements as portfolio selection techniques is demonstrated. It is shown that our regression-based portfolio selection provides attractive outperformance relative to the benchmark index and to a portfolio defined by a well-known strategy, on the S&P 500 and KOSPI 200 data sets.

  8. The application of a shift theorem analysis technique to multipoint measurements

    NASA Astrophysics Data System (ADS)

    Dieckmann, M. E.; Chapman, S. C.

    1999-03-01

    A Fourier domain technique has been proposed previously which, in principle, quantifies the extent to which multipoint in-situ measurements can identify whether or not an observed structure is time stationary in its rest frame. Once a structure, sampled for example by four spacecraft, is shown to be quasi-stationary in its rest frame, the structure's velocity vector can be determined with respect to the sampling spacecraft. We investigate the properties of this technique, which we will refer to as a stationarity test, by applying it to two-point measurements of a simulated boundary layer. The boundary layer was evolved using a PIC (particle in cell) electromagnetic code. Initial and boundary conditions were chosen such that two cases could be considered, i.e. a spacecraft pair moving through (1) a time-stationary boundary structure and (2) a boundary structure which is evolving (expanding) in time. The code also introduces noise into the simulated data time series which is uncorrelated between the two spacecraft. We demonstrate that, provided that the time series is Hanning windowed, the test is effective in determining the relative velocity between the boundary layer and spacecraft and in determining the range of frequencies over which the data can be treated as time stationary or time evolving. This work presents a first step towards understanding the effectiveness of this technique, as required in order for it to be applied to multispacecraft data.
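    The shift-theorem idea behind the test can be illustrated in a few lines: for a quasi-stationary structure, the Hanning-windowed cross-spectral phase between the two spacecraft grows linearly with frequency, and its slope gives the time lag (and hence the relative velocity). The toy signals and band limits below are assumptions, not the simulation described in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
n, dt = 1024, 0.1                        # samples per spacecraft, sampling interval (s)
lag = 2.3                                # true advection delay between the spacecraft (s)

# A quasi-stationary structure sampled by two spacecraft: the second records a
# time-shifted copy of the first, plus noise uncorrelated between the spacecraft.
structure = np.cumsum(rng.normal(size=2 * n))
shift = int(round(lag / dt))
s1 = structure[:n] + 0.1 * rng.normal(size=n)
s2 = structure[shift:shift + n] + 0.1 * rng.normal(size=n)

# Hanning window before the FFT, as recommended in the text.
w = np.hanning(n)
F1 = np.fft.rfft(w * (s1 - s1.mean()))
F2 = np.fft.rfft(w * (s2 - s2.mean()))
freq = np.fft.rfftfreq(n, dt)

# Shift theorem: a pure delay tau gives a cross-phase of -2*pi*f*tau, so the slope
# of the unwrapped cross-phase over the well-resolved low-frequency band estimates tau.
phase = np.unwrap(np.angle(F1 * np.conj(F2)))
band = (freq > 0) & (freq < 0.5)
tau_est = -np.polyfit(freq[band], phase[band], 1)[0] / (2 * np.pi)
print(f"true lag {lag:.2f} s, estimated lag {tau_est:.2f} s")
```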

  9. Retrieval and Mapping of Heavy Metal Concentration in Soil Using Time Series Landsat 8 Imagery

    NASA Astrophysics Data System (ADS)

    Fang, Y.; Xu, L.; Peng, J.; Wang, H.; Wong, A.; Clausi, D. A.

    2018-04-01

    Heavy metal pollution is a critical global environmental problem and a long-standing concern. The traditional approach to obtaining heavy metal concentrations, relying on field sampling and lab testing, is expensive and time-consuming. Although many related studies use spectrometer data to build a relational model between heavy metal concentration and spectral information, and then use the model to perform prediction from hyperspectral imagery, this approach can hardly map the soil metal concentration of an area quickly and accurately because of the discrepancies between spectrometer data and remote sensing imagery. Taking advantage of the easy accessibility of Landsat 8 data, this study utilizes Landsat 8 imagery to retrieve the soil Cu concentration and map its distribution in the study area. To enlarge the spectral information for more accurate retrieval and mapping, 11 single-date Landsat 8 images from 2013-2017 are selected to form a time series image stack. Three regression methods, partial least squares regression (PLSR), artificial neural network (ANN) and support vector regression (SVR), are used for model construction. By comparing these models without bias, the best model is selected to map the Cu concentration distribution. The resulting distribution map shows good spatial autocorrelation and consistency with the locations of mining areas.

  10. Computer models of social processes: the case of migration.

    PubMed

    Beshers, J M

    1967-06-01

    The demographic model is a program for representing births, deaths, migration, and social mobility as social processes in a non-stationary stochastic process (Markovian). Transition probabilities for each age group are stored and then retrieved at the next appearance of that age cohort. In this way new transition probabilities can be calculated as a function of the old transition probabilities and of two successive distribution vectors. Transition probabilities can be calculated to represent effects of the whole age-by-state distribution at any given time period, too. Such effects as saturation or queuing may be represented by a market mechanism; for example, migration between metropolitan areas can be represented as depending upon job supplies and labor markets. Within metropolitan areas, migration can be represented as invasion and succession processes with tipping points (acceleration curves), and the market device has been extended to represent this phenomenon. Thus, the demographic model makes possible the representation of alternative classes of models of demographic processes. With each class of model one can deduce implied time series (varying parameters within the class) and the output of the several classes can be compared to each other and to outside criteria, such as empirical time series.

  11. Classification of damage in structural systems using time series analysis and supervised and unsupervised pattern recognition techniques

    NASA Astrophysics Data System (ADS)

    Omenzetter, Piotr; de Lautour, Oliver R.

    2010-04-01

    Developed for studying long, periodic records of various measured quantities, time series analysis methods are inherently suited to, and offer interesting possibilities for, Structural Health Monitoring (SHM) applications. However, their use in SHM can still be regarded as an emerging application and deserves more studies. In this research, Autoregressive (AR) models were used to fit experimental acceleration time histories from two experimental structural systems, a 3-storey bookshelf-type laboratory structure and the ASCE Phase II SHM Benchmark Structure, in healthy and several damaged states. The coefficients of the AR models were chosen as damage sensitive features. Preliminary visual inspection of the large, multidimensional sets of AR coefficients to check the presence of clusters corresponding to different damage severities was achieved using Sammon mapping - an efficient nonlinear data compression technique. Systematic classification of damage into states based on the analysis of the AR coefficients was achieved using two supervised classification techniques: Nearest Neighbor Classification (NNC) and Learning Vector Quantization (LVQ), and one unsupervised technique: Self-organizing Maps (SOM). This paper discusses the performance of AR coefficients as damage sensitive features and compares the efficiency of the three classification techniques using experimental data.
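    A minimal sketch of the feature-extraction and classification chain, AR coefficients from acceleration records followed by nearest-neighbour classification, is shown below; the synthetic "healthy" and "damaged" responses and the AR order are assumptions for illustration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def ar_coefficients(x, order=6):
    """Least-squares AR(order) coefficients of a zero-meaned acceleration record."""
    x = np.asarray(x, float) - np.mean(x)
    X = np.column_stack([x[order - k - 1: len(x) - k - 1] for k in range(order)])
    coeffs, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    return coeffs

# Synthetic "healthy" vs. "damaged" responses: damage modeled as a shift of the
# dominant resonance frequency of a noisy vibration record (a toy stand-in only).
rng = np.random.default_rng(6)
def response(freq, n=2000, fs=500.0):
    t = np.arange(n) / fs
    return np.sin(2 * np.pi * freq * t) + 0.2 * rng.normal(size=n)

features, labels = [], []
for label, freq in [(0, 25.0), (1, 20.0)]:           # 0 = healthy, 1 = damaged
    for _ in range(40):
        features.append(ar_coefficients(response(freq + rng.normal(scale=0.2))))
        labels.append(label)
X, y = np.array(features), np.array(labels)

# Nearest-neighbour classification on the AR-coefficient features.
perm = rng.permutation(len(y))
train, test = perm[:60], perm[60:]
clf = KNeighborsClassifier(n_neighbors=3).fit(X[train], y[train])
print("test accuracy:", clf.score(X[test], y[test]))
```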

  12. Quantitative maps of geomagnetic perturbation vectors during substorm onset and recovery

    PubMed Central

    Pothier, N M; Weimer, D R; Moore, W B

    2015-01-01

    We have produced the first series of spherical harmonic, numerical maps of the time-dependent surface perturbations in the Earth's magnetic field following the onset of substorms. Data from 124 ground magnetometer stations in the Northern Hemisphere at geomagnetic latitudes above 33° were used. Ground station data averaged over 5 min intervals covering 8 years (1998–2005) were used to construct pseudo auroral upper, auroral lower, and auroral electrojet (AU*, AL*, and AE*) indices. These indices were used to generate a list of substorms that extended from 1998 to 2005, through a combination of automated processing and visual checks. Events were sorted by interplanetary magnetic field (IMF) orientation (at the Advanced Composition Explorer (ACE) satellite), dipole tilt angle, and substorm magnitude. Within each category, the events were aligned on substorm onset. A spherical cap harmonic analysis was used to obtain a least error fit of the substorm disturbance patterns at 5 min intervals up to 90 min after onset. The fits obtained at onset time were subtracted from all subsequent fits, for each group of substorm events. Maps of the three vector components of the averaged magnetic perturbations were constructed to show the effects of substorm currents. These maps are produced for several specific ranges of values for the peak |AL*| index, IMF orientation, and dipole tilt angle. We demonstrate an influence of the dipole tilt angle on the response to substorms. Our results indicate that there are downward currents poleward and upward currents just equatorward of the peak in the substorms' westward electrojet. Key points: quantitative maps of ground geomagnetic perturbations due to substorms are shown; the three vector components are mapped as a function of time during onset and recovery; and results are compared and contrasted for different tilt angles and the sign of the IMF Y-component. PMID:26167445

  13. 3-component time-dependent crustal deformation in Southern California from Sentinel-1 and GPS

    NASA Astrophysics Data System (ADS)

    Tymofyeyeva, E.; Fialko, Y. A.

    2017-12-01

    We combine data from the Sentinel-1 InSAR mission collected between 2014-2017 with continuous GPS measurements to calculate the three components of the interseismic surface velocity field in Southern California at the resolution of the InSAR data (about 100 m). We use overlapping InSAR tracks with two different look geometries (descending tracks 71, 173, and 144, and ascending tracks 64 and 166) to obtain the three orthogonal components of surface motion. Because of the under-determined nature of the problem, we use the local azimuth of the horizontal velocity vector as an additional constraint. The spatially variable azimuths of the horizontal velocity are obtained by interpolating data from the continuous GPS network. We estimate both secular velocities and displacement time series. The latter are obtained by combining InSAR time series from different lines of sight with time-dependent azimuths computed using continuous GPS time series at every InSAR epoch. We use the CANDIS method [Tymofyeyeva and Fialko, 2015], a technique based on iterative common point stacking, to correct the InSAR data for tropospheric and ionospheric artifacts when calculating secular velocities and time series, and to isolate low-amplitude deformation signals in our study region. The obtained horizontal (East and North) components of secular velocity exhibit long-wavelength patterns consistent with strain accumulation on major faults of the Pacific-North America plate boundary. The vertical component of velocity reveals a number of localized uplift and subsidence anomalies, most likely related to hydrologic effects and anthropogenic activity. In particular, in the Los Angeles basin we observe localized uplift of about 10-15 mm/yr near Anaheim, Long Beach, and Redondo Beach, as well as areas of rapid subsidence near Irvine and Santa Monica, which are likely caused by the injection of water in the oil fields, and the pumping and recharge cycles of the aquifers in the basin.

  14. A comparison of classical and intelligent methods to detect potential thermal anomalies before the 11 August 2012 Varzeghan, Iran, earthquake (Mw = 6.4)

    NASA Astrophysics Data System (ADS)

    Akhoondzadeh, M.

    2013-04-01

    In this paper, a number of classical and intelligent methods, including interquartile analysis, autoregressive integrated moving average (ARIMA), artificial neural network (ANN) and support vector machine (SVM) approaches, have been proposed to quantify potential thermal anomalies around the time of the 11 August 2012 Varzeghan, Iran, earthquake (Mw = 6.4). The data set, which comprises Aqua-MODIS land surface temperature (LST) night-time snapshot images, covers 62 days. In order to quantify variations of the LST data obtained from satellite images, the air temperature (AT) data derived from the meteorological station close to the earthquake epicenter have been taken into account. For the models examined here, results indicate the following: (i) ARIMA models, which are the most widely used in the time series community for short-term forecasting, are quickly and easily implemented, and can efficiently act through linear solutions. (ii) A multilayer perceptron (MLP) feed-forward neural network can be a suitable non-parametric method to detect the anomalous changes of a non-linear time series such as variations of LST. (iii) Since SVMs are often used due to their many advantages for classification and regression tasks, it can be shown that, if the difference between the predicted value using the SVM method and the observed value exceeds the pre-defined threshold value, then the observed value could be regarded as an anomaly. (iv) ANN and SVM methods could be powerful tools in modeling complex phenomena such as earthquake precursor time series where we may not know what the underlying data-generating process is. There is good agreement in the results obtained from the different methods for quantifying potential anomalies in a given LST time series. This paper indicates that the detection of potential thermal anomalies derives credibility from the overall efficiencies and potentialities of the four integrated methods.
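    Point (iii) above amounts to flagging observations whose prediction residual exceeds a pre-defined threshold. A minimal sketch with an SVR one-step-ahead predictor on a synthetic LST-like series follows; the window length, threshold rule and injected anomaly are assumptions.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(7)
n = 62                                           # daily values, matching the 62-day window
t = np.arange(n)
lst = 20 + 5 * np.sin(2 * np.pi * t / 30) + rng.normal(scale=0.8, size=n)
lst[50] += 6.0                                   # injected "thermal anomaly"

# One-step-ahead prediction from a short window of past values.
win = 5
X = np.array([lst[i - win:i] for i in range(win, n)])
y = lst[win:]
model = SVR(C=10.0, epsilon=0.1).fit(X[:-20], y[:-20])   # train on the earlier part

# Flag observations whose prediction error exceeds a pre-defined threshold,
# here set from the training residuals (mean + 2 standard deviations).
resid_train = y[:-20] - model.predict(X[:-20])
threshold = np.abs(resid_train).mean() + 2 * np.abs(resid_train).std()
resid_test = y[-20:] - model.predict(X[-20:])
anomalies = np.where(np.abs(resid_test) > threshold)[0] + (n - 20)
print("flagged days:", anomalies)                # the injected spike near day 50 should stand out
```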

  15. Joining of Components of Complex Structures for Improved Dynamic Response

    DTIC Science & Technology

    2011-10-28

    system-level mass and stiffness matrices and force vector (at each frequency in the range of interest). To address this issue a series of complex ... displacements of all candidate joint locations by using the system-level mass and stiffness matrices and force vector (at each frequency in the range of ... joints. In contrast, Li et al. [10] proposed a fastener layout/topology that achieves an almost uniform stress level in each joint, and adopted

  16. Emergency Department Visit Forecasting and Dynamic Nursing Staff Allocation Using Machine Learning Techniques With Readily Available Open-Source Software.

    PubMed

    Zlotnik, Alexander; Gallardo-Antolín, Ascensión; Cuchí Alfaro, Miguel; Pérez Pérez, María Carmen; Montero Martínez, Juan Manuel

    2015-08-01

    Although emergency department visit forecasting can be of use for nurse staff planning, previous research has focused on models that lacked sufficient resolution and realistic error metrics for these predictions to be applied in practice. Using data from a 1100-bed specialized care hospital with 553,000 patients assigned to its healthcare area, forecasts with different prediction horizons, from 2 to 24 weeks ahead and with an 8-hour granularity, were generated with an open-source software package using support vector regression, M5P, and stratified average time-series models. As overstaffing and understaffing errors have different implications, error metrics and potential personnel monetary savings were calculated with a custom validation scheme, which simulated subsequent generation of predictions during a 4-year period. Results were then compared with a generalized estimating equation regression. Support vector regression and M5P models were found to be superior to the stratified average model at the 95% confidence level. Our findings suggest that medium and severe understaffing situations could be reduced by more than an order of magnitude and average yearly savings of up to €683,500 could be achieved if dynamic nursing staff allocation were performed with support vector regression instead of the static staffing levels currently in use.

  17. Advances in satellite remote sensing of environmental variables for epidemiological applications.

    PubMed

    Goetz, S J; Prince, S D; Small, J

    2000-01-01

    Earth-observing satellites have provided an unprecedented view of the land surface but have been exploited relatively little for the measurement of environmental variables of particular relevance to epidemiology. Recent advances in techniques to recover continuous fields of air temperature, humidity, and vapour pressure deficit from remotely sensed observations have significant potential for disease vector monitoring and related epidemiological applications. We report on the development of techniques to map environmental variables with relevance to the prediction of the relative abundance of disease vectors and intermediate hosts. Improvements to current methods of obtaining information on vegetation properties, canopy and surface temperature and soil moisture over large areas are also discussed. Algorithms used to measure these variables incorporate visible, near-infrared and thermal infrared radiation observations derived from time series of satellite-based sensors, focused here primarily but not exclusively on the Advanced Very High Resolution Radiometer (AVHRR) instruments. The variables compare favourably with surface measurements over a broad array of conditions at several study sites, and maps of retrieved variables captured patterns of spatial variability comparable to, and locally more accurate than, spatially interpolated meteorological observations. Application of multi-temporal maps of these variables are discussed in relation to current epidemiological research on the distribution and abundance of some common disease vectors.

  18. Boolean dynamics of genetic regulatory networks inferred from microarray time series data

    DOE PAGES

    Martin, Shawn; Zhang, Zhaoduo; Martino, Anthony; ...

    2007-01-31

    Methods available for the inference of genetic regulatory networks strive to produce a single network, usually by optimizing some quantity to fit the experimental observations. In this paper we investigate the possibility that multiple networks can be inferred, all resulting in similar dynamics. This idea is motivated by theoretical work which suggests that biological networks are robust and adaptable to change, and that the overall behavior of a genetic regulatory network might be captured in terms of dynamical basins of attraction. We have developed and implemented a method for inferring genetic regulatory networks from time series microarray data. Our method first clusters and discretizes the gene expression data using k-means and support vector regression. We then enumerate Boolean activation–inhibition networks to match the discretized data. Finally, the dynamics of the Boolean networks are examined. We have tested our method on two immunology microarray datasets: an IL-2-stimulated T cell response dataset and an LPS-stimulated macrophage response dataset. In both cases, we discovered that many networks matched the data, and that most of these networks had similar dynamics.

  19. Ship Speed Retrieval From Single Channel TerraSAR-X Data

    NASA Astrophysics Data System (ADS)

    Soccorsi, Matteo; Lehner, Susanne

    2010-04-01

    A method to estimate the speed of a moving ship is presented. The technique, introduced in Kirscht (1998), is extended to marine applications and validated on TerraSAR-X High-Resolution (HR) data. The generation of a sequence of single-look SAR images from a single-channel image corresponds to an image time series with reduced resolution. This allows change detection techniques to be applied to the time series to evaluate the velocity components of the ship in range and azimuth. The evaluation of the displacement vector of a moving target in consecutive images of the sequence allows the estimation of the azimuth velocity component. The range velocity component is estimated by evaluating the variation of the signal amplitude during the sequence. In order to apply the technique to TerraSAR-X Spot Light (SL) data, a further processing step is needed: the phase has to be corrected as presented in Eineder et al. (2009) because of the SL acquisition mode; otherwise the image sequence cannot be generated. The analysis, validated when possible against the Automatic Identification System (AIS), was performed in the framework of the ESA project MARISS.

  20. A gradient method for the quantitative analysis of cell movement and tissue flow and its application to the analysis of multicellular Dictyostelium development.

    PubMed

    Siegert, F; Weijer, C J; Nomura, A; Miike, H

    1994-01-01

    We describe the application of a novel image processing method, which allows quantitative analysis of cell and tissue movement in a series of digitized video images. The result is a vector velocity field showing average direction and velocity of movement for every pixel in the frame. We apply this method to the analysis of cell movement during different stages of the Dictyostelium developmental cycle. We analysed time-lapse video recordings of cell movement in single cells, mounds and slugs. The program can correctly assess the speed and direction of movement of either unlabelled or labelled cells in a time series of video images depending on the illumination conditions. Our analysis of cell movement during multicellular development shows that the entire morphogenesis of Dictyostelium is characterized by rotational cell movement. The analysis of cell and tissue movement by the velocity field method should be applicable to the analysis of morphogenetic processes in other systems such as gastrulation and neurulation in vertebrate embryos.

  1. Decomposing Time Series Data by a Non-negative Matrix Factorization Algorithm with Temporally Constrained Coefficients

    PubMed Central

    Cheung, Vincent C. K.; Devarajan, Karthik; Severini, Giacomo; Turolla, Andrea; Bonato, Paolo

    2017-01-01

    The non-negative matrix factorization algorithm (NMF) decomposes a data matrix into a set of non-negative basis vectors, each scaled by a coefficient. In its original formulation, the NMF assumes the data samples and dimensions to be independently distributed, making it a less-than-ideal algorithm for the analysis of time series data with temporal correlations. Here, we seek to derive an NMF that accounts for temporal dependencies in the data by explicitly incorporating a very simple temporal constraint for the coefficients into the NMF update rules. We applied the modified algorithm to two multi-dimensional electromyographic data sets collected from the human upper limb to identify muscle synergies. We found that because it reduced the number of free parameters in the model, our modified NMF made it possible to use the Akaike Information Criterion to objectively identify a model order (i.e., the number of muscle synergies composing the data) that is more functionally interpretable, and closer to the numbers previously determined using ad hoc measures. PMID:26737046

  2. State-Space Analysis of Granger-Geweke Causality Measures with Application to fMRI.

    PubMed

    Solo, Victor

    2016-05-01

    The recent interest in the dynamics of networks and the advent, across a range of applications, of measuring modalities that operate on different temporal scales have put the spotlight on some significant gaps in the theory of multivariate time series. Fundamental to the description of network dynamics is the direction of interaction between nodes, accompanied by a measure of the strength of such interactions. Granger causality and its associated frequency domain strength measures (GEMs) (due to Geweke) provide a framework for the formulation and analysis of these issues. In pursuing this setup, three significant unresolved issues emerge. First, computing GEMs involves computing submodels of vector time series models, for which reliable methods do not exist. Second, the impact of filtering on GEMs has never been definitively established. Third, the impact of downsampling on GEMs has never been established. In this work, using state-space methods, we resolve all these issues and illustrate the results with some simulations. Our analysis is motivated by some problems in (fMRI) brain imaging, to which we apply it, but it is of general applicability.

  3. State-Space Analysis of Granger-Geweke Causality Measures with Application to fMRI

    PubMed Central

    Solo, Victor

    2017-01-01

    The recent interest in the dynamics of networks and the advent, across a range of applications, of measuring modalities that operate on different temporal scales have put the spotlight on some significant gaps in the theory of multivariate time series. Fundamental to the description of network dynamics is the direction of interaction between nodes, accompanied by a measure of the strength of such interactions. Granger causality and its associated frequency domain strength measures (GEMs) (due to Geweke) provide a framework for the formulation and analysis of these issues. In pursuing this setup, three significant unresolved issues emerge. First, computing GEMs involves computing submodels of vector time series models, for which reliable methods do not exist. Second, the impact of filtering on GEMs has never been definitively established. Third, the impact of downsampling on GEMs has never been established. In this work, using state-space methods, we resolve all these issues and illustrate the results with some simulations. Our analysis is motivated by some problems in (fMRI) brain imaging, to which we apply it, but it is of general applicability. PMID:26942749
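    As background, the time-domain Granger measure underlying the GEMs can be written as the log ratio of restricted to full residual variances; the sketch below computes it for a synthetic bivariate system (it does not reproduce the paper's frequency-domain or state-space results).

```python
import numpy as np

rng = np.random.default_rng(9)
T, p = 2000, 2
# Synthetic bivariate system in which x2 Granger-causes x1 but not the reverse.
x1, x2 = np.zeros(T), np.zeros(T)
for t in range(1, T):
    x2[t] = 0.6 * x2[t - 1] + rng.normal()
    x1[t] = 0.5 * x1[t - 1] + 0.4 * x2[t - 1] + rng.normal()

def lags(series_list, p):
    """Design matrix with p lags of every series, aligned to targets at index p..T-1."""
    return np.column_stack([s[p - k: len(s) - k] for s in series_list
                            for k in range(1, p + 1)])

def resid_var(target, series_list, p):
    """Residual variance of a least-squares regression of target on lagged predictors."""
    X = np.column_stack([np.ones(len(target) - p), lags(series_list, p)])
    beta, *_ = np.linalg.lstsq(X, target[p:], rcond=None)
    return np.var(target[p:] - X @ beta)

# Time-domain Granger causality: log ratio of restricted to full residual variance.
gc_2_to_1 = np.log(resid_var(x1, [x1], p) / resid_var(x1, [x1, x2], p))
gc_1_to_2 = np.log(resid_var(x2, [x2], p) / resid_var(x2, [x2, x1], p))
print(f"GC x2->x1: {gc_2_to_1:.3f}, GC x1->x2: {gc_1_to_2:.3f}")
```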

  4. Regional autonomy changes in resting-state functional MRI in patients with HIV associated neurocognitive disorder

    NASA Astrophysics Data System (ADS)

    DSouza, Adora M.; Abidin, Anas Z.; Chockanathan, Udaysankar; Wismüller, Axel

    2018-03-01

    In this study, we investigate whether there are discernable changes in influence that brain regions have on themselves once patients show symptoms of HIV Associated Neurocognitive Disorder (HAND) using functional MRI (fMRI). Simple functional connectivity measures, such as correlation cannot reveal such information. To this end, we use mutual connectivity analysis (MCA) with Local Models (LM), which reveals a measure of influence in terms of predictability. Once such measures of interaction are obtained, we train two classifiers to characterize difference in patterns of regional self-influence between healthy subjects and subjects presenting with HAND symptoms. The two classifiers we use are Support Vector Machines (SVM) and Localized Generalized Matrix Learning Vector Quantization (LGMLVQ). Performing machine learning on fMRI connectivity measures is popularly known as multi-voxel pattern analysis (MVPA). By performing such an analysis, we are interested in studying the impact HIV infection has on an individual's brain. The high area under receiver operating curve (AUC) and accuracy values for 100 different train/test separations using MCA-LM self-influence measures (SVM: mean AUC=0.86, LGMLVQ: mean AUC=0.88, SVM and LGMLVQ: mean accuracy=0.78) compared with standard MVPA analysis using cross-correlation between fMRI time-series (SVM: mean AUC=0.58, LGMLVQ: mean AUC=0.57), demonstrates that self-influence features can be more discriminative than measures of interaction between time-series pairs. Furthermore, our results suggest that incorporating measures of self-influence in MVPA analysis used commonly in fMRI analysis has the potential to provide a performance boost and indicate important changes in dynamics of regions in the brain as a consequence of HIV infection.

  5. Disruption of Vector Host Preference with Plant Volatiles May Reduce Spread of Insect-Transmitted Plant Pathogens.

    PubMed

    Martini, Xavier; Willett, Denis S; Kuhns, Emily H; Stelinski, Lukasz L

    2016-05-01

    Plant pathogens can manipulate the odor of their host; the odor of an infected plant is often attractive to the plant pathogen vector. It has been suggested that this odor-mediated manipulation attracts vectors and may contribute to the spread of disease; however, this requires further broad demonstration among vector-pathogen systems. In addition, disruption of this indirect chemical communication between the pathogen and the vector has not been attempted. We present a model that demonstrates how a phytopathogen (Candidatus Liberibacter asiaticus) can increase its spread by indirectly manipulating the behavior of its vector (Asian citrus psyllid, Diaphorina citri Kuwayama). The model indicates that when vectors are attracted to pathogen-infected hosts, the proportion of infected vectors increases, as does the proportion of infected hosts. Additionally, the peak of infected host populations occurs earlier compared with controls. These changes in disease dynamics were more important in scenarios with higher vector mortality. Subsequently, we conducted a series of experiments to disrupt the behavior of the Asian citrus psyllid. To do so, we exposed the vector to methyl salicylate, the major compound released following host infection with the pathogen. We observed that during exposure or after pre-exposure to methyl salicylate, host preference can be altered; indeed, the Asian citrus psyllids were unable to select infected hosts over uninfected counterparts. We suggest mechanisms to explain these interactions and potential applications of disrupting herbivore host preference with plant volatiles for sustainable management of insect vectors.

  6. Gordan—Capelli series in superalgebras

    PubMed Central

    Brini, Andrea; Palareti, Aldopaolo; Teolis, Antonio G. B.

    1988-01-01

    We derive two Gordan-Capelli series for the supersymmetric algebra of the tensor product of two Z2-graded K-vector spaces U and V, where K is a field of characteristic zero. These expansions yield complete decompositions of the supersymmetric algebra regarded as a pl(U)- and a pl(V)-module, where pl(U) and pl(V) are the general linear Lie superalgebras of U and V, respectively. PMID:16593911

  7. A representation of solution of stochastic differential equations

    NASA Astrophysics Data System (ADS)

    Kim, Yoon Tae; Jeon, Jong Woo

    2006-03-01

    We prove that the logarithm of the formal power series obtained from a stochastic differential equation is an element of the closure of the Lie algebra generated by the vector fields that appear as the coefficients of the equation. Using this result, we obtain a representation of the solution of stochastic differential equations in terms of Lie brackets and iterated Stratonovich integrals in the algebra of formal power series.

  8. Analysis of vector wind change with respect to time for Cape Kennedy, Florida: Wind aloft profile change vs. time, phase 1

    NASA Technical Reports Server (NTRS)

    Adelfang, S. I.

    1977-01-01

    Wind vector change with respect to time at Cape Kennedy, Florida, is examined according to the theory of multivariate normality. The joint distribution of the four variables represented by the components of the wind vector at an initial time and after a specified elapsed time is hypothesized to be quadravariate normal; the fourteen statistics of this distribution, calculated from fifteen years of twice-daily Rawinsonde data, are presented by monthly reference periods for altitudes from 0 to 27 km. The hypotheses that the wind component change with respect to time is univariate normal, that the joint distribution of wind component changes is bivariate normal, and that the modulus of the vector wind change is Rayleigh distributed have been tested by comparison with observed distributions. Statistics of the conditional bivariate normal distributions of the vector wind at a future time, given the vector wind at an initial time, are derived. Wind changes over time periods from one to five hours, calculated from Jimsphere data, are presented.
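    The conditional bivariate normal statistics mentioned above follow from the standard Gaussian conditioning formulas; a worked sketch with illustrative (not Cape Kennedy) numbers is given below.

```python
import numpy as np

# Quadravariate normal for (u0, v0, u_tau, v_tau): wind components at an initial
# time and after an elapsed time tau.  The means and covariances are illustrative
# assumptions, not the Cape Kennedy monthly statistics.
mu = np.array([5.0, 2.0, 5.5, 2.5])              # m/s
sigma = np.array([
    [25.0,  5.0, 18.0,  4.0],
    [ 5.0, 16.0,  4.0, 11.0],
    [18.0,  4.0, 26.0,  5.0],
    [ 4.0, 11.0,  5.0, 17.0],
])

# Conditional distribution of (u_tau, v_tau) given the observed initial wind:
#   mu_cond    = mu_2 + S21 S11^{-1} (x_1 - mu_1)
#   sigma_cond = S22 - S21 S11^{-1} S12
S11, S12 = sigma[:2, :2], sigma[:2, 2:]
S21, S22 = sigma[2:, :2], sigma[2:, 2:]
observed = np.array([12.0, -3.0])                # observed initial (u0, v0)
gain = S21 @ np.linalg.inv(S11)
mu_cond = mu[2:] + gain @ (observed - mu[:2])
sigma_cond = S22 - gain @ S12
print("conditional mean:", mu_cond.round(2))
print("conditional covariance:\n", sigma_cond.round(2))
```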

  9. Internal performance characteristics of thrust-vectored axisymmetric ejector nozzles

    NASA Technical Reports Server (NTRS)

    Lamb, Milton

    1995-01-01

    A series of thrust-vectored axisymmetric ejector nozzles were designed and experimentally tested for internal performance and pumping characteristics at the Langley research center. This study indicated that discontinuities in the performance occurred at low primary nozzle pressure ratios and that these discontinuities were mitigated by decreasing expansion area ratio. The addition of secondary flow increased the performance of the nozzles. The mid-to-high range of secondary flow provided the most overall improvements, and the greatest improvements were seen for the largest ejector area ratio. Thrust vectoring the ejector nozzles caused a reduction in performance and discharge coefficient. With or without secondary flow, the vectored ejector nozzles produced thrust vector angles that were equivalent to or greater than the geometric turning angle. With or without secondary flow, spacing ratio (ejector passage symmetry) had little effect on performance (gross thrust ratio), discharge coefficient, or thrust vector angle. For the unvectored ejectors, a small amount of secondary flow was sufficient to reduce the pressure levels on the shroud to provide cooling, but for the vectored ejector nozzles, a larger amount of secondary air was required to reduce the pressure levels to provide cooling.

  10. Amino acid "little Big Bang": representing amino acid substitution matrices as dot products of Euclidian vectors.

    PubMed

    Zimmermann, Karel; Gibrat, Jean-François

    2010-01-04

    Sequence comparisons make use of a one-letter representation for amino acids, the necessary quantitative information being supplied by the substitution matrices. This paper deals with the problem of finding a representation that provides a comprehensive description of amino acid intrinsic properties consistent with the substitution matrices. We present a Euclidian vector representation of the amino acids, obtained by the singular value decomposition of the substitution matrices. The substitution matrix entries correspond to the dot product of amino acid vectors. We apply this vector encoding to the study of the relative importance of various amino acid physicochemical properties upon the substitution matrices. We also characterize and compare the PAM and BLOSUM series substitution matrices. This vector encoding introduces a Euclidian metric in the amino acid space, consistent with substitution matrices. Such a numerical description of the amino acid is useful when intrinsic properties of amino acids are necessary, for instance, building sequence profiles or finding consensus sequences, using machine learning algorithms such as Support Vector Machine and Neural Networks algorithms.
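    A minimal sketch of the vector encoding idea, factoring a symmetric similarity matrix so that its entries become dot products of per-residue vectors, is shown below; the 4x4 matrix is an illustrative assumption, not a PAM or BLOSUM matrix, and real substitution matrices may have negative eigenvalues that make the dot products only approximate.

```python
import numpy as np

# Toy symmetric "substitution" matrix for four residue classes (illustrative values).
M = np.array([
    [ 4.0, -1.0,  0.0, -2.0],
    [-1.0,  5.0, -1.0, -1.0],
    [ 0.0, -1.0,  6.0,  1.0],
    [-2.0, -1.0,  1.0,  5.0],
])

# Symmetric eigendecomposition M = V diag(w) V^T.  Keeping the components with
# positive eigenvalues assigns each residue a Euclidean vector whose pairwise dot
# products approximate (here: reproduce) the matrix entries.
w, V = np.linalg.eigh(M)
keep = w > 0
vectors = V[:, keep] * np.sqrt(w[keep])          # one row per residue

print("max reconstruction error:", np.abs(vectors @ vectors.T - M).max())
print("residue vectors:\n", vectors.round(3))
```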

  11. Vector Doppler: spatial sampling analysis and presentation techniques for real-time systems

    NASA Astrophysics Data System (ADS)

    Capineri, Lorenzo; Scabia, Marco; Masotti, Leonardo F.

    2001-05-01

    The aim of the vector Doppler (VD) technique is the quantitative reconstruction of a velocity field independently of the angle between the ultrasonic probe axis and the flow. In particular, vector Doppler is interesting for studying vascular pathologies related to complex blood flow conditions. Clinical applications require a real-time operating mode and the capability to perform Doppler measurements over a defined volume. The combination of these two characteristics produces a real-time vector velocity map. In previous works the authors investigated the theory of pulsed wave (PW) vector Doppler and developed an experimental system capable of producing off-line 3D vector velocity maps. Afterwards, to produce dynamic vector velocity maps, we built a new 2D vector Doppler system based on a modified commercial echograph. The measurement and presentation of a vector velocity field require correct spatial sampling that satisfies the Shannon criterion. In this work we tackled this problem, establishing a relationship between the sampling steps and the characteristics of the scanning system. Another problem posed by the vector Doppler technique is the real-time data representation, which should be easy for the physician to interpret. With this in mind, we attempted a multimedia solution that uses both interpolated images and sound to represent the information in the measured vector velocity map. These presentation techniques were tested in real-time scanning of flow phantoms and in preliminary in vivo measurements on a human carotid artery.

  12. Evaluation of the concentration and bioactivity of adenovirus vectors for gene therapy.

    PubMed Central

    Mittereder, N; March, K L; Trapnell, B C

    1996-01-01

    Development of adenovirus vectors as potential therapeutic agents for multiple applications of in vivo human gene therapy has resulted in numerous preclinical and clinical studies. However, lack of standardization of the methods for quantifying the physical concentration and functionally active fraction of virions in these studies has often made comparison between various studies difficult or impossible. This study was therefore carried out to define the variables for quantification of the concentration of adenovirus vectors. The methods for evaluation of total virion concentration included electron microscopy and optical absorbance. The methods for evaluation of the concentration of functional virions included detection of gene transfer (transgene transfer and expression) and the plaque assay on 293 cells. Enumeration of total virion concentration by optical absorbance was found to be a precise procedure, but accuracy was dependent on physical disruption of the virion to eliminate artifacts from light scattering and also on a correct value for the extinction coefficient. Both biological assays for enumerating functional virions were highly dependent on the assay conditions and in particular the time of virion adsorption and adsorption volume. Under optimal conditions, the bioactivity of the vector, defined as the fraction of total virions which leads to detected target cell infection, was determined to be 0.10 in the plaque assay and 0.29 in the gene transfer assay. This difference is most likely due to the fact that detection by gene transfer requires only measurement of levels of transgene expression in the infected cell whereas plaque formation is dependent on a series of biological events of much greater complexity. These results show that the exact conditions for determination of infectious virion concentration and bioactivity of recombinant adenovirus vectors are critical and must be standardized for comparability. These observations may be very useful in comparison of data from different preclinical and clinical studies and may also have important implications for how adenovirus vectors can optimally be used in human gene therapy. PMID:8892868

  13. Time series inversion of spectra from ground-based radiometers

    NASA Astrophysics Data System (ADS)

    Christensen, O. M.; Eriksson, P.

    2013-07-01

    Retrieving time series of atmospheric constituents from ground-based spectrometers often requires different temporal averaging depending on the altitude region in focus. This can lead to several datasets existing for one instrument, which complicates validation and comparisons between instruments. This paper puts forth a possible solution by incorporating the temporal domain into the maximum a posteriori (MAP) retrieval algorithm. The state vector is increased to include measurements spanning a time period, and the temporal correlations between the true atmospheric states are explicitly specified in the a priori uncertainty matrix. This allows the MAP method to effectively select the best temporal smoothing for each altitude, removing the need for several datasets to cover different altitudes. The method is compared to traditional averaging of spectra using a simulated retrieval of water vapour in the mesosphere. The simulations show that the method offers a significant advantage compared to the traditional method, extending the sensitivity an additional 10 km upwards without reducing the temporal resolution at lower altitudes. The method is also tested on the Onsala Space Observatory (OSO) water vapour microwave radiometer confirming the advantages found in the simulation. Additionally, it is shown how the method can interpolate data in time and provide diagnostic values to evaluate the interpolated data.
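
    The idea of stacking several measurement times into one state vector and letting the a priori covariance carry the temporal correlations can be illustrated with a small linear-Gaussian sketch. All sizes, the forward model K and the correlation lengths below are invented for illustration; the actual retrieval setup of the paper is more involved.

        # Hedged sketch (assumptions: linear forward model K, Gaussian statistics) of a
        # MAP retrieval over a block of measurement times: the a priori covariance is a
        # Kronecker product of an exponential-in-time correlation with an altitude covariance.
        import numpy as np

        n_alt, n_time = 10, 5                     # toy sizes
        times = np.arange(n_time)

        S_alt = np.exp(-np.abs(np.subtract.outer(np.arange(n_alt), np.arange(n_alt))) / 3.0)
        S_time = np.exp(-np.abs(np.subtract.outer(times, times)) / 2.0)   # temporal correlation
        S_a = np.kron(S_time, S_alt)              # a priori covariance of the stacked state

        rng = np.random.default_rng(0)
        K = rng.normal(size=(n_alt * n_time // 2, n_alt * n_time))  # toy forward model
        S_e = 0.1 * np.eye(K.shape[0])            # measurement noise covariance
        x_a = np.zeros(n_alt * n_time)            # a priori state
        y = K @ rng.normal(size=x_a.size) + rng.multivariate_normal(np.zeros(K.shape[0]), S_e)

        # MAP / optimal-estimation update for the whole time block at once
        G = S_a @ K.T @ np.linalg.inv(K @ S_a @ K.T + S_e)
        x_hat = x_a + G @ (y - K @ x_a)
        print(x_hat.reshape(n_time, n_alt).shape)  # one retrieved profile per time step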

  14. Genetic algorithm for TEC seismo-ionospheric anomalies detection around the time of the Solomon (Mw = 8.0) earthquake of 06 February 2013

    NASA Astrophysics Data System (ADS)

    Akhoondzadeh, M.

    2013-08-01

    On 6 February 2013, at 12:12:27 local time (01:12:27 UTC) a seismic event registering Mw 8.0 struck the Solomon Islands, located at the boundaries of the Australian and Pacific tectonic plates. Time series prediction is an important and widely interesting topic in the research of earthquake precursors. This paper describes a new computational intelligence approach to detect the unusual variations of the total electron content (TEC) seismo-ionospheric anomalies induced by the powerful Solomon earthquake using a genetic algorithm (GA). The GA detected a considerable number of anomalous occurrences on the earthquake day and also 7 and 8 days prior to the earthquake in a period of high geomagnetic activity. In this study, the TEC anomalies detected by the proposed method are also compared with those obtained by applying the mean, median, wavelet, Kalman filter, ARIMA, neural network and support vector machine methods. The agreement among the final results of all eight methods is a convincing indication of the efficiency of the GA method. It indicates that GA can be an appropriate non-parametric tool for anomaly detection in a non-linear time series showing the seismo-ionospheric precursor variations.

  15. Dynamic cross correlation studies of wave particle interactions in ULF phenomena

    NASA Technical Reports Server (NTRS)

    Mcpherron, R. L.

    1979-01-01

    Magnetic field observations made by satellites in the earth's magnetic field reveal a wide variety of ULF waves. These waves interact with the ambient particle populations in complex ways, causing modulation of the observed particle fluxes. This modulation is found to be a function of species, pitch angle, energy and time. The characteristics of this modulation provide information concerning the wave mode and interaction process. One important characteristic of wave-particle interactions is the phase of the particle flux modulation relative to the magnetic field variations. To display this phase as a function of time a dynamic cross spectrum program has been developed. The program produces contour maps in the frequency time plane of the cross correlation coefficient between any particle flux time series and the magnetic field vector. This program has been utilized in several studies of ULF wave-particle interactions at synchronous orbit.

  16. Tides, and tidal and residual currents in Suisun and San Pablo bays, California; results of measurements, 1986

    USGS Publications Warehouse

    Gartner, J.W.; Yost, B.T.

    1988-01-01

    Current meter data collected at 11 stations and water level data collected at one station in Suisun and San Pablo Bays, California, in 1986 are compiled in this report. Current-meter measurements include current speed and direction, and water temperature and salinity (computed from temperature and conductivity). For each of the 19 current-meter records, data are presented in two forms. These are: (1) results of harmonic analysis; and (2) plots of tidal current speed and direction versus time and plots of temperature and salinity versus time. The spatial distribution of the properties of tidal currents is given in graphic form. In addition, Eulerian residual currents have been compiled by using a vector-averaging technique. Water level data are presented in the form of a time-series plot and the results of harmonic analysis. (USGS)
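
    The vector-averaging step used for the Eulerian residual currents amounts to averaging east and north velocity components rather than speeds and directions; a minimal sketch, with made-up speed and direction values, is given below.

        # Illustrative sketch (not the USGS processing code) of the vector-averaging step:
        # decompose speed/direction records into east and north components, average the
        # components over the record, then convert back to a residual speed and direction.
        import numpy as np

        speed = np.array([0.5, 0.8, 0.6, 0.3])            # m/s, toy current-meter record
        direction_deg = np.array([10., 95., 180., 270.])  # direction toward which flow moves

        theta = np.deg2rad(direction_deg)
        u = speed * np.sin(theta)                        # east component
        v = speed * np.cos(theta)                        # north component

        u_res, v_res = u.mean(), v.mean()                # Eulerian residual (vector average)
        residual_speed = np.hypot(u_res, v_res)
        residual_dir = np.rad2deg(np.arctan2(u_res, v_res)) % 360.0
        print(residual_speed, residual_dir)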

  17. Hidden Markov models for fault detection in dynamic systems

    NASA Technical Reports Server (NTRS)

    Smyth, Padhraic J. (Inventor)

    1995-01-01

    The invention is a system failure monitoring method and apparatus which learns the symptom-fault mapping directly from training data. The invention first estimates the state of the system at discrete intervals in time. A feature vector x of dimension k is estimated from sets of successive windows of sensor data. A pattern recognition component then models the instantaneous estimate of the posterior class probability given the features, p(w_i | x), 1 ≤ i ≤ m. Finally, a hidden Markov model is used to take advantage of temporal context and estimate class probabilities conditioned on recent past history. In this hierarchical pattern of information flow, the time series data is transformed and mapped into a categorical representation (the fault classes) and integrated over time to enable robust decision-making.
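
    A hedged sketch of the temporal-integration idea follows: an instantaneous classifier provides p(class | features) at each step, and a two-state HMM forward recursion conditions the class probabilities on recent history. The transition matrix, priors and posterior sequence are invented, and dividing the posterior by the prior as a likelihood proxy is one simple way (among others) to plug classifier outputs into the recursion; it is not necessarily the patented method.

        # Hedged sketch of HMM-based temporal integration of instantaneous class posteriors.
        import numpy as np

        A = np.array([[0.98, 0.02],        # transition probabilities: row = previous class
                      [0.05, 0.95]])       # (e.g. "nominal" vs "fault")
        prior = np.array([0.9, 0.1])       # assumed class priors
        post = np.array([[0.95, 0.05],     # instantaneous p(class | x_t) from the
                         [0.70, 0.30],     # pattern-recognition component (made up)
                         [0.40, 0.60],
                         [0.20, 0.80]])

        belief = prior.copy()
        for p_t in post:
            likelihood = p_t / prior               # posterior / prior as a likelihood proxy
            belief = likelihood * (belief @ A)     # predict with A, then update
            belief /= belief.sum()                 # renormalize
            print(belief)                          # class probabilities conditioned on history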

  18. Hidden Markov models for fault detection in dynamic systems

    NASA Technical Reports Server (NTRS)

    Smyth, Padhraic J. (Inventor)

    1993-01-01

    The invention is a system failure monitoring method and apparatus which learns the symptom-fault mapping directly from training data. The invention first estimates the state of the system at discrete intervals in time. A feature vector x of dimension k is estimated from sets of successive windows of sensor data. A pattern recognition component then models the instantaneous estimate of the posterior class probability given the features, p(w_i | x), 1 ≤ i ≤ m. Finally, a hidden Markov model is used to take advantage of temporal context and estimate class probabilities conditioned on recent past history. In this hierarchical pattern of information flow, the time series data is transformed and mapped into a categorical representation (the fault classes) and integrated over time to enable robust decision-making.

  19. Algorithms for solving large sparse systems of simultaneous linear equations on vector processors

    NASA Technical Reports Server (NTRS)

    David, R. E.

    1984-01-01

    Very efficient algorithms for solving large sparse systems of simultaneous linear equations have been developed for serial processing computers. These involve a reordering of matrix rows and columns in order to obtain a near triangular pattern of nonzero elements. Then an LU factorization is developed to represent the matrix inverse in terms of a sequence of elementary Gaussian eliminations, or pivots. In this paper it is shown how these algorithms are adapted for efficient implementation on vector processors. Results obtained on the CYBER 200 Model 205 are presented for a series of large test problems which show the comparative advantages of the triangularization and vector processing algorithms.

  20. Phenology-based Spartina alterniflora mapping in coastal wetland of the Yangtze Estuary using time series of GaoFen satellite no. 1 wide field of view imagery

    NASA Astrophysics Data System (ADS)

    Ai, Jinquan; Gao, Wei; Gao, Zhiqiang; Shi, Runhe; Zhang, Chao

    2017-04-01

    Spartina alterniflora is an aggressive invasive plant species that replaces native species, changes the structure and function of the ecosystem across coastal wetlands in China, and is thus a major conservation concern. Mapping the spread of its invasion is a necessary first step for the implementation of effective ecological management strategies. The performance of a phenology-based approach for S. alterniflora mapping is explored in the coastal wetland of the Yangtze Estuary using a time series of GaoFen satellite no. 1 wide field of view camera (GF-1 WFV) imagery. First, a time series of the normalized difference vegetation index (NDVI) was constructed to evaluate the phenology of S. alterniflora. Two phenological stages (the senescence stage from November to mid-December and the green-up stage from late April to May) were determined as important for S. alterniflora detection in the study area based on NDVI temporal profiles, spectral reflectance curves of S. alterniflora and its coexistent species, and field surveys. Three phenology feature sets representing three major phenology-based detection strategies were then compared to map S. alterniflora: (1) the single-date imagery acquired within the optimal phenological window, (2) the multitemporal imagery, including four images from the two important phenological windows, and (3) the monthly NDVI time series imagery. Support vector machines and maximum likelihood classifiers were applied on each phenology feature set at different training sample sizes. For all phenology feature sets, the overall results were produced consistently with high mapping accuracies under sufficient training sample sizes, although significantly improved classification accuracies (10%) were obtained when the monthly NDVI time series imagery was employed. The optimal single-date imagery had the lowest accuracies of all detection strategies. The multitemporal analysis demonstrated little reduction in the overall accuracy compared with the use of monthly NDVI time series imagery. These results show the importance of considering the phenological stage for image selection for mapping S. alterniflora using GF-1 WFV imagery. Furthermore, in light of the better tradeoff between the number of images and classification accuracy when using multitemporal GF-1 WFV imagery, we suggest using multitemporal imagery acquired at appropriate phenological windows for S. alterniflora mapping at regional scales.

  1. Bioengineering a non-genotoxic vector for genetic modification of mesenchymal stem cells.

    PubMed

    Chen, Xuguang; Nomani, Alireza; Patel, Niket; Nouri, Faranak S; Hatefi, Arash

    2018-01-01

    Vectors used for stem cell transfection must be non-genotoxic, in addition to possessing high efficiency, because they could potentially transform normal stem cells into cancer-initiating cells. The objective of this research was to bioengineer an efficient vector that can be used for genetic modification of stem cells without any negative somatic or genetic impact. Two types of multifunctional vectors, namely targeted and non-targeted were genetically engineered and purified from E. coli. The targeted vectors were designed to enter stem cells via overexpressed receptors. The non-targeted vectors were equipped with MPG and Pep1 cell penetrating peptides. A series of commercial synthetic non-viral vectors and an adenoviral vector were used as controls. All vectors were evaluated for their efficiency and impact on metabolic activity, cell membrane integrity, chromosomal aberrations (micronuclei formation), gene dysregulation, and differentiation ability of stem cells. The results of this study showed that the bioengineered vector utilizing VEGFR-1 receptors for cellular entry could transfect mesenchymal stem cells with high efficiency without inducing genotoxicity, negative impact on gene function, or ability to differentiate. Overall, the vectors that utilized receptors as ports for cellular entry (viral and non-viral) showed considerably better somato- and genosafety profiles in comparison to those that entered through electrostatic interaction with cellular membrane. The genetically engineered vector in this study demonstrated that it can be safely and efficiently used to genetically modify stem cells with potential applications in tissue engineering and cancer therapy. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. An online spatio-temporal prediction model for dengue fever epidemic in Kaohsiung, Taiwan

    NASA Astrophysics Data System (ADS)

    Cheng, Ming-Hung; Yu, Hwa-Lung; Angulo, Jose; Christakos, George

    2013-04-01

    Dengue Fever (DF) is one of the most serious vector-borne infectious diseases in tropical and subtropical areas. DF epidemics occur in Taiwan annually, especially during the summer and fall seasons. Kaohsiung city has been one of the major DF hotspots for decades. The emergence and re-emergence of the DF epidemic is complex and can be influenced by various factors including space-time dynamics of human and vector populations and virus serotypes as well as the associated uncertainties. This study integrates a stochastic space-time "Susceptible-Infected-Recovered" model under the Bayesian maximum entropy framework (BME-SIR) to perform real-time prediction of disease diffusion across space-time. The proposed model is applied for spatiotemporal prediction of the DF epidemic in Kaohsiung city during 2002, when a historically high number of DF cases was recorded. The online prediction by the BME-SIR model updates the parameters of the SIR model and the infected cases across districts over time. Results show that the proposed model is robust to the initial guesses of the unknown model parameters, i.e. transmission and recovery rates, which can depend upon the virus serotypes and various human interventions. This study shows that spatial diffusion can be well characterized by the BME-SIR model, especially in the districts surrounding the disease outbreak locations. The prediction performance at DF hotspots, i.e. Cianjhen and Sanmin, can be degraded due to the implementation of various disease control strategies during the epidemics. The proposed online disease prediction BME-SIR model can provide the governmental agency with a valuable reference for timely identification, control, and efficient prevention of DF spread across space-time.

  3. Non-linear processes in the Earth atmosphere boundary layer

    NASA Astrophysics Data System (ADS)

    Grunskaya, Lubov; Valery, Isakevich; Dmitry, Rubay

    2013-04-01

    The work concerns electromagnetic fields in the Earth-Ionosphere resonator and studies the relationship between tidal processes of geophysical and astrophysical origin and the Earth's electromagnetic fields. Because of the non-linear properties of the Earth-Ionosphere resonator, the tides (lunar and astrophysical) appearing in the Earth's electromagnetic fields are polyharmonic in nature. Such non-linear processes cannot be detected with classical spectral analysis; therefore, the method of covariance matrix eigenvectors is used to extract tidal processes from the electromagnetic fields. Experimental investigations of electromagnetic fields in the atmospheric boundary layer are carried out at spatially separated stations located at the Vladimir State University test ground, at the Main Geophysical Observatory (St. Petersburg), on the Kamchatka peninsula, and at Lake Baikal. In 2012 the multichannel synchronous monitoring system of electrical and geomagnetic fields continued to operate at these spaced stations: the VSU physical experimental proving ground; the station of the Institute of Solar and Terrestrial Physics of the Russian Academy of Sciences (RAS) at Lake Baikal; the station of the Institute of Volcanology and Seismology of RAS in Paratunka; and the station in Obninsk operated by the scientific and production association "Typhoon". Such investigations became possible after developing a method for decomposing the experimental electromagnetic field signal into non-correlated components. The analysis of the eigenvectors of the time series covariance matrix was used to expose the influence of the lunar tides on Ez. The method decomposes an experimental signal into non-correlated periodicities and is effective precisely in the situation where the energetic contribution of the possible influence of lunar tides on the electromagnetic fields is small. Software components (PAS tools) have been developed and implemented to detect processes of geophysical and man-made origin; to predict the presence of geophysical features in the electromagnetic field of the atmospheric boundary surface layer; to study the dynamics of signals from geophysical and man-made sources in the electrical and magnetic fields of the boundary surface layer; to expose changes in the investigated time series in the periods preceding the predicted phenomena; and to form clusters of time series that act as signatures of the predicted events. On the basis of these clusters, prediction rules have been built that allow estimating the probability of occurrence of groups of events. The work is supported by Program FPP #14.B37.210668, FPP #5.2071.2011, and RFBR #11-05-97518.
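
    The covariance-eigenvector decomposition referred to above is, in essence, a principal component analysis of the multichannel record; a minimal NumPy sketch with a synthetic signal is shown below (the tidal-period sinusoids simply stand in for field measurements).

        # Minimal PCA-style sketch of decomposing a multichannel record into uncorrelated
        # components via the eigenvectors of its covariance matrix (synthetic data only).
        import numpy as np

        rng = np.random.default_rng(1)
        t = np.arange(2000)
        channels = np.stack([np.sin(2 * np.pi * t / 29.5) + 0.3 * rng.normal(size=t.size),
                             np.sin(2 * np.pi * t / 29.5 + 1.0) + 0.3 * rng.normal(size=t.size),
                             0.5 * rng.normal(size=t.size)])          # 3 x N record

        X = channels - channels.mean(axis=1, keepdims=True)
        C = np.cov(X)                                  # covariance matrix of the channels
        eigval, eigvec = np.linalg.eigh(C)             # eigenvectors define the decomposition
        components = eigvec.T @ X                      # mutually uncorrelated components
        print(np.round(np.cov(components), 3))         # off-diagonal terms are ~0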

  4. Comparison of Hybrid Classifiers for Crop Classification Using Normalized Difference Vegetation Index Time Series: A Case Study for Major Crops in North Xinjiang, China

    PubMed Central

    Hao, Pengyu; Wang, Li; Niu, Zheng

    2015-01-01

    A range of single classifiers have been proposed to classify crop types using time series vegetation indices, and hybrid classifiers are used to improve discriminatory power. Traditional fusion rules use the product of multi-single classifiers, but that strategy cannot integrate the classification output of machine learning classifiers. In this research, the performance of two hybrid strategies, multiple voting (M-voting) and probabilistic fusion (P-fusion), for crop classification using NDVI time series was tested with different training sample sizes at both pixel and object levels, and two representative counties in north Xinjiang were selected as the study area. The single classifiers employed in this research included Random Forest (RF), Support Vector Machine (SVM), and See5 (C5.0). The results indicated that classification performance improved (increased the mean overall accuracy by 5%~10%, and reduced standard deviation of overall accuracy by around 1%) substantially with the training sample number, and when the training sample size was small (50 or 100 training samples), hybrid classifiers substantially outperformed single classifiers with higher mean overall accuracy (1%~2%). However, when abundant training samples (4,000) were employed, single classifiers could achieve good classification accuracy, and all classifiers obtained similar performances. Additionally, although object-based classification did not improve accuracy, it resulted in greater visual appeal, especially in study areas with a heterogeneous cropping pattern. PMID:26360597
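
    The two fusion strategies can be illustrated with a toy example; the class names and per-classifier probabilities below are hypothetical and only show the mechanics of majority voting versus combining the probabilities themselves.

        # Illustrative sketch (hypothetical probabilities, not the study's data) of the two
        # fusion strategies: majority voting over class labels and probabilistic fusion.
        import numpy as np

        classes = np.array(["cotton", "maize", "wheat"])
        # per-classifier class probabilities for one pixel/object: rows = RF, SVM, C5.0
        probs = np.array([[0.6, 0.3, 0.1],
                          [0.5, 0.4, 0.1],
                          [0.3, 0.5, 0.2]])

        # M-voting: each classifier votes for its most probable class
        votes = probs.argmax(axis=1)
        m_vote = classes[np.bincount(votes, minlength=len(classes)).argmax()]

        # P-fusion: combine the probabilities themselves (product rule shown here)
        fused = probs.prod(axis=0)
        p_fusion = classes[fused.argmax()]
        print(m_vote, p_fusion)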

  5. Temporal relationships of emotional avoidance in a patient with anorexia nervosa--a time series analysis.

    PubMed

    Stroe-Kunold, Esther; Wesche, Daniela; Friederich, Hans-Christoph; Herzog, Wolfgang; Zastrow, Arne; Wild, Beate

    2012-01-01

    Anorexia nervosa (AN) is a serious eating disorder marked by self-induced underweight. In patients with AN, the avoidance of emotions appears to be a central feature that is reinforced during the acute state of the disorder. This single case study investigated the role of emotional avoidance of a 25-year-old patient with AN during her inpatient treatment. Throughout the course of 96 days, the patient answered questions daily about her emotional avoidance, pro-anorectic beliefs, perfectionism, and further variables on an electronic diary. The patient's daily self-assessment of emotional avoidance was described in terms of mean value, range, and variability for the various treatment phases. Temporal relationships between emotional avoidance and further variables were determined using a time series approach (vector autoregressive (VAR) modelling). Diary data reflect that the patient's ability to tolerate unpleasant emotions appeared to undergo a process of change during inpatient treatment. Results of the time series analysis indicate that the more the patient was able to deal with negative emotions on any one day (t-1), the less she would be socially avoidant, cognitively confined to food and eating, as well as feeling less secure with her AN, and less depressive on the following day (t). The findings show that for this patient emotional avoidance plays a central role in the interacting system of various psychosocial variables. Replication of these results in other patients with AN would support the recommendation to focus more on emotional regulation in the treatment of AN.

  6. Determination of efficiencies, loss mechanisms, and performance degradation factors in chopper controlled dc vehicle motors. Section 2: The time dependent finite element modeling of the electromagnetic field in electrical machines: Methods and applications. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Hamilton, H. B.; Strangas, E.

    1980-01-01

    The time dependent solution of the magnetic field is introduced as a method for accounting for the variation, in time, of the machine parameters in predicting and analyzing the performance of the electrical machines. The time dependent finite element method was used in combination with a likewise time dependent construction of the grid for the air gap region. The Maxwell stress tensor was used to calculate the airgap torque from the magnetic vector potential distribution. Incremental inductances were defined and calculated as functions of time, depending on eddy currents and saturation. The currents in all the machine circuits were calculated in the time domain based on these inductances, which were continuously updated. The method was applied to a chopper controlled DC series motor used for electric vehicle drive, and to a salient pole synchronous motor with damper bars. Simulation results were compared to experimentally obtained ones.

  7. The Flare Genesis Experiment

    NASA Technical Reports Server (NTRS)

    Rust, D. M.

    2002-01-01

    Using the Flare Genesis Experiment (FGE), a balloon-borne observatory with an 80-cm solar telescope, we observed the active region NOAA 8844 on January 25, 2000 for several hours. FGE was equipped with a vector polarimeter and a tunable Fabry-Perot narrow-band filter. It recorded time series of filtergrams, vector magnetograms, and Dopplergrams at the Ca(I) 6122.2 angstrom line, and H-alpha filtergrams with a cadence between 2.5 and 7.5 minutes. At the time of the observations, NOAA 8844 was located at approximately 5 N 30 W. The region was rapidly growing during the observations; new magnetic flux was constantly emerging in three supergranules near its center. We describe in detail how the FGE data were analyzed and report on the structure and behavior of peculiar moving dipolar features (MDFs) observed in the active region. In longitudinal magnetograms, the MDFs appeared to be small dipoles in the emerging fields. The east-west orientation of their polarities was opposite that of the sunspots. The dipoles were oriented parallel to their direction of motion, which was in most cases towards the sunspots. Previously, dipolar moving magnetic features have only been observed flowing out from sunspots. Vector magnetograms show that the magnetic field of the negative part of each MDF was less inclined to the local horizontal than that of the positive part. We identify the MDFs as undulations, or stitches, where the emerging flux ropes are still tied to the photosphere. We present a U-loop model that can account for their unusual structure and behavior, and it shows how emerging flux can shed its entrained mass.

  8. Lipoic acid functionalized amino acids cationic lipids as gene vectors.

    PubMed

    Su, Rong-Chuan; Liu, Qiang; Yi, Wen-Jing; Zheng, Li-Ting; Zhao, Zhi-Gang

    2016-10-01

    A series of reducible cationic lipids 4a-4f with different amino acid polar-head groups were prepared. The novel lipid contains a hydrophobic lipoic acid (LA) moiety, which can be reduced under reductive conditions to release the encapsulated plasmid DNA. The particle size, zeta potential and cellular uptake of lipoplexes formed with DNA, as well as the transfection efficacy (TE) were characterized. The TE of the cationic lipid based on arginine was especially high, and was 2.5 times higher than that of a branched polyethylenimine in the presence of 10% serum. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Wind Field Extractions from SAR Sentinel-1 Images Using Electromagnetic Models

    NASA Astrophysics Data System (ADS)

    La, Tran Vu; Khenchaf, Ali; Comblet, Fabrice; Nahum, Carole

    2016-08-01

    Among available wind sources, i.e. measured data and numerical weather models, the retrieval of wind vectors from Synthetic Aperture Radar (SAR) data/images is particularly attractive because of the advantages of SAR systems (data available in most meteorological conditions, revisit capability, high resolution, etc.). For this purpose, the retrieval of wind vectors is principally based on empirical (EP) models, e.g. the CMOD series in C-band. Few studies have been reported on the use of electromagnetic (EM) models for wind vector retrieval, since they are quite complicated to invert. However, the EM models can be applied to most cases of polarization, frequency and wind regime. In order to evaluate the advantages and limits of the EM models for wind vector retrieval, we compare in this study the results estimated by the EM and EP models for both polarizations (vertical-vertical, or VV-pol, and horizontal-horizontal, or HH-pol).

  10. Toward lattice fractional vector calculus

    NASA Astrophysics Data System (ADS)

    Tarasov, Vasily E.

    2014-09-01

    An analog of fractional vector calculus for physical lattice models is suggested. We use an approach based on the models of three-dimensional lattices with long-range inter-particle interactions. The lattice analogs of fractional partial derivatives are represented by kernels of lattice long-range interactions, where the Fourier series transformations of these kernels have a power-law form with respect to wave vector components. In the continuum limit, these lattice partial derivatives give derivatives of non-integer order with respect to coordinates. In the three-dimensional description of the non-local continuum, the fractional differential operators have the form of fractional partial derivatives of the Riesz type. As examples of the applications of the suggested lattice fractional vector calculus, we give lattice models with long-range interactions for the fractional Maxwell equations of non-local continuous media and for the fractional generalization of the Mindlin and Aifantis continuum models of gradient elasticity.

  11. Vectorization of agrochemicals: amino acid carriers are more efficient than sugar carriers to translocate phenylpyrrole conjugates in the Ricinus system.

    PubMed

    Wu, Hanxiang; Marhadour, Sophie; Lei, Zhi-Wei; Yang, Wen; Marivingt-Mounir, Cécile; Bonnemain, Jean-Louis; Chollet, Jean-François

    2018-05-01

    Producing quality food in sufficient quantity while using fewer agrochemical inputs will be one of the great challenges of the twenty-first century. One way of achieving this goal is to greatly reduce the doses of plant protection compounds by improving the targeting of the pests to be eradicated. Therefore, we developed a vectorization strategy to confer phloem mobility to fenpiclonil, a contact fungicide from the phenylpyrrole family used as a model molecule. It consists of coupling the antifungal compound to an amino acid or a sugar, so that the resulting conjugates are handled by active nutrient transport systems. The method of click chemistry was used to synthesize three conjugates combining fenpiclonil to glucose or glutamic acid with a spacer containing a triazole ring. Systemicity tests with the Ricinus model have shown that the amino acid promoiety was clearly more favorable to phloem mobility than that of glucose. In addition, the transport of the amino acid conjugate is carrier mediated since the derivative of the L series was about five times more concentrated in the phloem sap than its counterpart of the D series. The systemicity of the L-derivative is pH dependent and almost completely inhibited by the protonophore carbonyl cyanide 3-chlorophenylhydrazone (CCCP). These data suggest that the phloem transport of the L-derivative is governed by a stereospecific amino acid carrier system energized by the proton motive force.

  12. Development and Validation of Remote Sensing-Based Surface Inundation Products for Vector-Borne Disease Risk in East Africa

    NASA Astrophysics Data System (ADS)

    Jensen, K.; McDonald, K. C.; Ceccato, P.; Schroeder, R.; Podest, E.

    2014-12-01

    The potential impact of climate variability and change on the spread of infectious disease is of increasingly critical concern to public health. Newly-available remote sensing datasets may be combined with predictive modeling to develop new capabilities to mitigate risks of vector-borne diseases such as malaria, leishmaniasis, and rift valley fever. We have developed improved remote sensing-based products for monitoring water bodies and inundation dynamics that have potential utility for improving risk forecasts of vector-borne disease epidemics. These products include daily and seasonal surface inundation based on the global mappings of inundated area fraction derived at the 25-km scale from active and passive microwave instruments ERS, QuikSCAT, ASCAT, and SSM/I data - the Satellite Water Microwave Product Series (SWAMPS). Focusing on the East African region, we present validation of this product using multi-temporal classification of inundated areas in this region derived from high resolution PALSAR (100m) and Landsat (30m) observations. We assess historical occurrence of malaria in the east African country of Eritrea with respect to the time series SWAMPS datasets, and we aim to construct a framework for use of these new datasets to improve prediction of future malaria risk in this region. This work is supported through funding from the NASA Applied Sciences Program, the NASA Terrestrial Ecology Program, and the NASA Making Earth System Data Records for Use in Research Environments (MEaSUREs) Program. This study is also supported and monitored by National Oceanic and Atmospheric Administration (NOAA) under Grant - CREST Grant # NA11SEC4810004. The statements contained within the manuscript/research article are not the opinions of the funding agency or the U.S. government, but reflect the authors' opinions. This work was conducted in part under the framework of the ALOS Kyoto and Carbon Initiative. ALOS PALSAR data were provided by JAXA EORC.

  13. Reconstruction of disease transmission rates: Applications to measles, dengue, and influenza.

    PubMed

    Lange, Alexander

    2016-07-07

    Transmission rates are key in understanding the spread of infectious diseases. Using the framework of compartmental models, we introduce a simple method to reconstruct time series of transmission rates directly from incidence or disease-related mortality data. The reconstruction employs differential equations, which model the time evolution of infective stages and strains. Being sensitive to initial values, the method produces asymptotically correct solutions. The computations are fast, with time complexity being quadratic. We apply the reconstruction to data of measles (England and Wales, 1948-1967), dengue (Thailand, 1982-1999), and influenza (U.S., 1910-1927). The measles example offers comparison with earlier work. Here we re-investigate reporting corrections, and include and exclude demographic information. The dengue example deals with the failure of vector-control measures in reducing dengue hemorrhagic fever (DHF) in Thailand. Two competing mechanisms have been held responsible: strain interaction and demographic transitions. Our reconstruction reveals that both explanations are possible, showing that the increase in DHF cases is consistent with decreasing transmission rates resulting from reduced vector counts. The flu example focuses on the 1918/1919 pandemic, examining the transmission rate evolution for an invading strain. Our analysis indicates that the pandemic strain could have circulated in the population for many months before the pandemic was initiated by an event of highly increased transmission. Copyright © 2016 Elsevier Ltd. All rights reserved.
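
    A back-of-envelope sketch of this kind of reconstruction (not the paper's algorithm, which works with differential equations and handles strains and reporting corrections) is to invert a discrete SIR bookkeeping step for the transmission rate; the population size, recovery rate, initial conditions and incidence series below are all assumed.

        # Hedged sketch: recover a transmission-rate series from incidence with a
        # discrete SIR step, beta_t ~ new_cases_t * N / (S_t * I_t).
        import numpy as np

        N = 1_000_000                       # population size (assumed)
        gamma = 1.0 / 2.0                   # recovery rate per time step (assumed)
        cases = np.array([120, 150, 190, 230, 260, 240, 200])   # reported incidence (toy)

        S, I = N - 500.0, 500.0             # assumed initial conditions (the method is
        beta = []                           # sensitive to these, as the abstract notes)
        for c in cases:
            beta.append(c * N / (S * I))    # transmission rate implied by this step's cases
            S -= c                          # newly infected leave the susceptible pool
            I += c - gamma * I              # infections added, recoveries removed
        print(np.round(beta, 4))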

  14. Spatio-temporal optimization of sampling for bluetongue vectors (Culicoides) near grazing livestock

    PubMed Central

    2013-01-01

    Background: Estimating the abundance of Culicoides using light traps is influenced by a large variation in abundance in time and place. This study investigates the optimal trapping strategy to estimate the abundance or presence/absence of Culicoides on a field with grazing animals. We used 45 light traps to sample specimens from the Culicoides obsoletus species complex on a 14 hectare field during 16 nights in 2009. Findings: The large number of traps and catch nights enabled us to simulate a series of samples consisting of different numbers of traps (1-15) on each night. We also varied the number of catch nights when simulating the sampling, and sampled with increasing minimum distances between traps. We used resampling to generate a distribution of different mean and median abundance in each sample. Finally, we used the hypergeometric distribution to estimate the probability of falsely detecting absence of vectors on the field. The variation in the estimated abundance decreased steeply when using up to six traps, and was less pronounced when using more traps, although no clear cutoff was found. Conclusions: Despite spatial clustering in vector abundance, we found no effect of increasing the distance between traps. We found that 18 traps were generally required to reach 90% probability of a true positive catch when sampling just one night. But when sampling over two nights the same probability level was obtained with just three traps per night. The results are useful for the design of vector monitoring programmes on fields with grazing animals. PMID:23705770
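
    The hypergeometric calculation for the probability of falsely detecting absence can be sketched with SciPy; the number of positive trap positions assumed below is illustrative, not the study's estimate.

        # Hedged sketch: if K of the M possible trap positions would catch at least one
        # vector on a given night, the hypergeometric distribution gives the chance that
        # a sample of n traps catches none of them.
        from scipy.stats import hypergeom

        M = 45          # trap positions available on the field
        K = 20          # positions that would yield a positive catch that night (assumed)
        for n in (1, 3, 6, 18):
            p_false_absence = hypergeom.pmf(0, M, K, n)   # P(zero positives in n traps)
            print(n, round(p_false_absence, 3))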

  15. Fast temporal neural learning using teacher forcing

    NASA Technical Reports Server (NTRS)

    Toomarian, Nikzad (Inventor); Bahren, Jacob (Inventor)

    1992-01-01

    A neural network is trained to output a time dependent target vector defined over a predetermined time interval in response to a time dependent input vector defined over the same time interval by applying corresponding elements of the error vector, or difference between the target vector and the actual neuron output vector, to the inputs of corresponding output neurons of the network as corrective feedback. This feedback decreases the error and quickens the learning process, so that a much smaller number of training cycles are required to complete the learning process. A conventional gradient descent algorithm is employed to update the neural network parameters at the end of the predetermined time interval. The foregoing process is repeated in repetitive cycles until the actual output vector corresponds to the target vector. In the preferred embodiment, as the overall error of the neural network output decreases during successive training cycles, the portion of the error fed back to the output neurons is decreased accordingly, allowing the network to learn with greater freedom from teacher forcing as the network parameters converge to their optimum values. The invention may also be used to train a neural network with stationary training and target vectors.

  16. Fast temporal neural learning using teacher forcing

    NASA Technical Reports Server (NTRS)

    Toomarian, Nikzad (Inventor); Bahren, Jacob (Inventor)

    1995-01-01

    A neural network is trained to output a time dependent target vector defined over a predetermined time interval in response to a time dependent input vector defined over the same time interval by applying corresponding elements of the error vector, or difference between the target vector and the actual neuron output vector, to the inputs of corresponding output neurons of the network as corrective feedback. This feedback decreases the error and quickens the learning process, so that a much smaller number of training cycles are required to complete the learning process. A conventional gradient descent algorithm is employed to update the neural network parameters at the end of the predetermined time interval. The foregoing process is repeated in repetitive cycles until the actual output vector corresponds to the target vector. In the preferred embodiment, as the overall error of the neural network output decreases during successive training cycles, the portion of the error fed back to the output neurons is decreased accordingly, allowing the network to learn with greater freedom from teacher forcing as the network parameters converge to their optimum values. The invention may also be used to train a neural network with stationary training and target vectors.

  17. Binary black hole spacetimes with a helical Killing vector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klein, Christian

    Binary black hole spacetimes with a helical Killing vector, which are discussed as an approximation for the early stage of a binary system, are studied in a projection formalism. In this setting the four-dimensional Einstein equations are equivalent to a three-dimensional gravitational theory with a SL(2,R)/SO(1,1) sigma model as the material source. The sigma model is determined by a complex Ernst equation. 2+1 decompositions of the three-metric are used to establish the field equations on the orbit space of the Killing vector. The two Killing horizons of spherical topology which characterize the black holes, the cylinder of light where the Killing vector changes from timelike to spacelike, and infinity are singular points of the equations. The horizon and the light cylinder are shown to be regular singularities, i.e., the metric functions can be expanded in a formal power series in the vicinity. The behavior of the metric at spatial infinity is studied in terms of formal series solutions to the linearized Einstein equations. It is shown that the spacetime is not asymptotically flat in the strong sense of having a smooth null infinity under the assumption that the metric tends asymptotically to the Minkowski metric. In this case the metric functions have an oscillatory behavior in the radial coordinate; in a nonaxisymmetric setting, the asymptotic multipoles are not defined. The asymptotic behavior of the Weyl tensor near infinity shows that there is no smooth null infinity.

  18. Cone-Specific Promoters for Gene Therapy of Achromatopsia and Other Retinal Diseases

    PubMed Central

    Ye, Guo-Jie; Budzynski, Ewa; Sonnentag, Peter; Nork, T. Michael; Sheibani, Nader; Gurel, Zafer; Boye, Sanford L.; Peterson, James J.; Boye, Shannon E.; Hauswirth, William W.; Chulay, Jeffrey D.

    2016-01-01

    Adeno-associated viral (AAV) vectors containing cone-specific promoters have rescued cone photoreceptor function in mouse and dog models of achromatopsia, but cone-specific promoters have not been optimized for use in primates. Using AAV vectors administered by subretinal injection, we evaluated a series of promoters based on the human L-opsin promoter, or a chimeric human cone transducin promoter, for their ability to drive gene expression of green fluorescent protein (GFP) in mice and nonhuman primates. Each of these promoters directed high-level GFP expression in mouse photoreceptors. In primates, subretinal injection of an AAV-GFP vector containing a 1.7-kb L-opsin promoter (PR1.7) achieved strong and specific GFP expression in all cone photoreceptors and was more efficient than a vector containing the 2.1-kb L-opsin promoter that was used in AAV vectors that rescued cone function in mouse and dog models of achromatopsia. A chimeric cone transducin promoter that directed strong GFP expression in mouse and dog cone photoreceptors was unable to drive GFP expression in primate cones. An AAV vector expressing a human CNGB3 gene driven by the PR1.7 promoter rescued cone function in the mouse model of achromatopsia. These results have informed the design of an AAV vector for treatment of patients with achromatopsia. PMID:26603570

  19. Cone-Specific Promoters for Gene Therapy of Achromatopsia and Other Retinal Diseases.

    PubMed

    Ye, Guo-Jie; Budzynski, Ewa; Sonnentag, Peter; Nork, T Michael; Sheibani, Nader; Gurel, Zafer; Boye, Sanford L; Peterson, James J; Boye, Shannon E; Hauswirth, William W; Chulay, Jeffrey D

    2016-01-01

    Adeno-associated viral (AAV) vectors containing cone-specific promoters have rescued cone photoreceptor function in mouse and dog models of achromatopsia, but cone-specific promoters have not been optimized for use in primates. Using AAV vectors administered by subretinal injection, we evaluated a series of promoters based on the human L-opsin promoter, or a chimeric human cone transducin promoter, for their ability to drive gene expression of green fluorescent protein (GFP) in mice and nonhuman primates. Each of these promoters directed high-level GFP expression in mouse photoreceptors. In primates, subretinal injection of an AAV-GFP vector containing a 1.7-kb L-opsin promoter (PR1.7) achieved strong and specific GFP expression in all cone photoreceptors and was more efficient than a vector containing the 2.1-kb L-opsin promoter that was used in AAV vectors that rescued cone function in mouse and dog models of achromatopsia. A chimeric cone transducin promoter that directed strong GFP expression in mouse and dog cone photoreceptors was unable to drive GFP expression in primate cones. An AAV vector expressing a human CNGB3 gene driven by the PR1.7 promoter rescued cone function in the mouse model of achromatopsia. These results have informed the design of an AAV vector for treatment of patients with achromatopsia.

  20. Climate change and vector-borne diseases of public health significance.

    PubMed

    Ogden, Nicholas H

    2017-10-16

    There has been much debate as to whether or not climate change will have, or has had, any significant effect on risk from vector-borne diseases. The debate on the former has focused on the degree to which occurrence and levels of risk of vector-borne diseases are determined by climate-dependent or independent factors, while the debate on the latter has focused on whether changes in disease incidence are due to climate at all, and/or are attributable to recent climate change. Here I review possible effects of climate change on vector-borne diseases, methods used to predict these effects and the evidence to date of changes in vector-borne disease risks that can be attributed to recent climate change. Predictions have both over- and underestimated the effects of climate change. Under-estimations of effects are mostly due to a focus only on the direct effects of climate on disease ecology, while more distal effects on society's capacity to control and prevent vector-borne disease are ignored. There is increasing evidence for possible impacts of recent climate change on some vector-borne diseases but for the most part, observed data series are too short (or non-existent), and impacts of climate-independent factors too great, to confidently attribute changing risk to climate change. © Crown copyright 2017.

  1. Object recognition of ladar with support vector machine

    NASA Astrophysics Data System (ADS)

    Sun, Jian-Feng; Li, Qi; Wang, Qi

    2005-01-01

    Intensity, range and Doppler images can be obtained by using laser radar. Laser radar can detect much more object information than other detection sensors, such as passive infrared imaging and synthetic aperture radar (SAR), so it is well suited as a sensor for object recognition. The traditional method of laser radar object recognition is to extract target features, which can be influenced by noise. In this paper, a laser radar recognition method based on the Support Vector Machine is introduced. The Support Vector Machine (SVM) is a recent focus of recognition research after neural networks, and it performs well in handwritten digit and face recognition. Two series of experiments with SVM, designed for preprocessed and non-preprocessed samples, are performed on real laser radar images, and the experimental results are compared.

  2. A Real-Time Phase Vector Display for EEG Monitoring

    NASA Technical Reports Server (NTRS)

    Finger, Herbert J.; Anliker, James E.; Rimmer, Tamara

    1973-01-01

    A real-time, computer-based, phase vector display system has been developed which will output a vector whose phase is equal to the delay between a trigger and the peak of a function which is quasi-coherent with respect to the trigger. The system also contains a sliding averager which enables the operator to average successive trials before calculating the phase vector. Data collection, averaging and display generation are performed on a LINC-8 computer. Output displays appear on several X-Y CRT display units and on a kymograph camera/oscilloscope unit which is used to generate photographs of time-varying phase vectors or contourograms of time-varying averages of input functions.

  3. A host-restricted viral vector for antigen-specific immunization against Lyme disease pathogen.

    PubMed

    Xiao, Sa; Kumar, Manish; Yang, Xiuli; Akkoyunlu, Mustafa; Collins, Peter L; Samal, Siba K; Pal, Utpal

    2011-07-18

    Newcastle disease virus (NDV) is an avian virus that is attenuated in primates and is a potential vaccine vector for human use. We evaluated NDV as a vector for expressing selected antigens of the Lyme disease pathogen Borrelia burgdorferi. A series of recombinant NDVs were generated that expressed intracellular or extracellular forms of two B. burgdorferi antigens: namely, the basic membrane protein A (BmpA) and the outer surface protein C (OspC). Expression of the intracellular and extracellular forms of these antigens was confirmed in cultured chicken cells. C3H or Balb/C mice that were immunized intranasally with the NDV vectors mounted vigorous serum antibody responses against the NDV vector, but failed to mount a robust response against either the intracellular or extracellular forms of BmpA or OspC. By contrast, a single immunization of hamsters with the NDV vectors via the intranasal, intramuscular, or intraperitoneal route resulted in rapid and rigorous antibody responses against the intracellular or extracellular forms of BmpA and OspC. When groups of hamsters were separately inoculated with various NDV vectors and challenged with B. burgdorferi (10^8 cells/animal), immunization with vector expressing either intracellular or extracellular BmpA was associated with a significant reduction of the pathogen load in the joints. Taken together, our studies highlighted the importance of NDV as vaccine vector that can be used for simple yet effective immunization of hosts against bacterial infections including Lyme disease. Copyright © 2011 Elsevier Ltd. All rights reserved.

  4. Micro PIV Measurements of the Internal Flow of an Amoeba proteus

    NASA Astrophysics Data System (ADS)

    Resagk, Christian; Lobutova, Elka; Li, Ling; Voges, Danja

    2011-11-01

    We report on micro PIV measurements of the internal flow in the protoplasm of an amoeba. The velocity data provide information about the mechanism by which the amoeba's contour changes during its locomotion in water. The experimental data are used for an analytical modeling of the locomotion mechanism with the help of a variable contour and finally for the development of locomotion principles for micro robots. The experimental set-up consists of a microscope and a CCD camera with 12 frames per second and image analysis software. The illumination of the amoeba was done by the built-in microscope halogen lamp. We use the phase contrast configuration to capture images of the amoeba moving in water. We applied an electrical field to the water channel in order to control the movement of the amoeba in one direction. During this motion we measured time dependent velocity vector fields of the protoplasm flow, estimated velocity profiles and analyzed time series of the maximum velocity. The velocity vector plots are calculated from the images by using cross correlation and naturally occurring particles in the protoplasm. Besides the analysis of the internal flow, we recorded the motion of the center of gravity and the variation of the sectional area.

  5. On the definition of the time evolution operator for time-independent Hamiltonians in non-relativistic quantum mechanics

    NASA Astrophysics Data System (ADS)

    Amaku, Marcos; Coutinho, Francisco A. B.; Masafumi Toyama, F.

    2017-09-01

    The usual definition of the time evolution operator e^{-iHt/ℏ} = ∑_{n=0}^{∞} (1/n!) (-iHt/ℏ)^n, where H is the Hamiltonian of the system, as given in almost every book on quantum mechanics, causes problems in some situations. The operators that appear in quantum mechanics are either bounded or unbounded. Unbounded operators are not defined for all the vectors (wave functions) of the Hilbert space of the system; when applied to some states, they give a non-normalizable state. Therefore, if H is an unbounded operator, the definition in terms of the power series expansion does not make sense because it may diverge or result in a non-normalizable wave function. In this article, we explain why this is so and suggest, as an alternative, another definition used by mathematicians.
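
    For a finite, bounded H the power-series definition is unproblematic and agrees with the matrix exponential, which a short numerical check illustrates; the subtleties discussed in the article concern unbounded operators, which no finite matrix can capture. The toy Hamiltonian below is an arbitrary example.

        # Numerical aside (illustrative only): for a bounded Hermitian matrix the
        # truncated power series converges to the matrix exponential.
        import numpy as np
        from scipy.linalg import expm

        hbar = 1.0
        H = np.array([[1.0, 0.5],
                      [0.5, -1.0]])                    # toy bounded Hamiltonian
        t = 0.7

        U_exact = expm(-1j * H * t / hbar)
        U_series = np.zeros_like(U_exact)
        term = np.eye(2, dtype=complex)
        for n in range(1, 30):
            U_series = U_series + term
            term = term @ (-1j * H * t / hbar) / n     # next term of the power series
        print(np.max(np.abs(U_exact - U_series)))      # ~machine precision here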

  6. Vector adaptive predictive coder for speech and audio

    NASA Technical Reports Server (NTRS)

    Chen, Juin-Hwey (Inventor); Gersho, Allen (Inventor)

    1990-01-01

    A real-time vector adaptive predictive coder which approximates each vector of K speech samples by using each of M fixed vectors in a first codebook to excite a time-varying synthesis filter and picking the vector that minimizes distortion. Predictive analysis for each frame determines parameters used for computing from vectors in the first codebook zero-state response vectors that are stored at the same address (index) in a second codebook. Encoding of input speech vectors s_n is then carried out using the second codebook. When the vector that minimizes distortion is found, its index is transmitted to a decoder which has a codebook identical to the first codebook of the encoder. There the index is used to read out a vector that is used to synthesize an output speech vector s_n. The parameters used in the encoder are quantized, for example by using a table, and the indices are transmitted to the decoder where they are decoded to specify transfer characteristics of filters used in producing the vector s_n from the receiver codebook vector selected by the vector index transmitted.
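
    The codebook-search step can be sketched in a few lines; the codebook below is a random placeholder for the second codebook of zero-state responses, and the distortion measure is plain squared error rather than the perceptually weighted measure a real coder would use.

        # Minimal sketch of the codebook search: approximate each K-sample vector by the
        # codebook entry that minimizes squared-error distortion and keep only its index.
        import numpy as np

        rng = np.random.default_rng(0)
        K, M = 8, 64                                  # vector length and codebook size
        codebook = rng.normal(size=(M, K))            # stands in for the zero-state responses
        s_n = rng.normal(size=K)                      # one input speech vector

        distortion = ((codebook - s_n) ** 2).sum(axis=1)
        index = int(distortion.argmin())              # index sent to the decoder
        reconstruction = codebook[index]              # decoder look-up with the same codebook
        print(index, distortion[index])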

  7. Novel images extraction model using improved delay vector variance feature extraction and multi-kernel neural network for EEG detection and prediction.

    PubMed

    Ge, Jing; Zhang, Guoping

    2015-01-01

    Advanced intelligent methodologies could help detect and predict diseases from EEG signals in cases where manual analysis is inefficient or unavailable, for instance in epileptic seizure detection and prediction. This is because the diversity and evolution of epileptic seizures make it very difficult to detect and identify the underlying disease. Fortunately, the determinism and nonlinearity in a time series can characterize state changes. A literature review indicates that Delay Vector Variance (DVV) can examine nonlinearity to gain insight into EEG signals, but very limited work has been done to address a quantitative DVV approach; hence, the outcomes of quantitative DVV should be evaluated to detect epileptic seizures. The objective of this work was to develop a new epileptic seizure detection method based on quantitative DVV. This new epileptic seizure detection method employed an improved delay vector variance (IDVV) to extract the nonlinearity value as a distinct feature. A multi-kernel strategy was then proposed for the extreme learning machine (ELM) network to provide precise disease detection and prediction. The nonlinearity feature was more sensitive than the energy and entropy features. An overall recognition accuracy of 87.5% and an overall forecasting accuracy of 75.0% were achieved. The proposed IDVV and multi-kernel ELM based method was feasible and effective for epileptic EEG detection. Hence, the newly proposed method is of importance for practical applications.

  8. Vector autoregressive model approach for forecasting outflow cash in Central Java

    NASA Astrophysics Data System (ADS)

    Hoyyi, Abdul; Tarno; Maruddani, Di Asih I.; Rahmawati, Rita

    2018-05-01

    Multivariate time series models are widely applied to economic and business problems as well as in other fields. One such economic application is the forecasting of cash outflow. This problem can be viewed globally in the sense that there is no spatial effect between regions, so the model used is the Vector Autoregressive (VAR) model. The data used in this research are data on the money supply at Bank Indonesia Semarang, Solo, Purwokerto and Tegal. The models used in this research are the VAR (1), VAR (2) and VAR (3) models. Ordinary Least Squares (OLS) is used to estimate the parameters. The best model is selected using the smallest Akaike Information Criterion (AIC). The analysis shows that the AIC value of the VAR (1) model is 42.72292, that of VAR (2) is 42.69119 and that of VAR (3) is 42.87662. The difference in AIC values is not substantial. Based on the smallest-AIC criterion, the best model is the VAR (2) model. This model satisfies the white noise assumption.
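
    The lag-order comparison by AIC can be reproduced with statsmodels on synthetic data; the column names and the mildly persistent toy series below are placeholders for the four regional outflow-cash series.

        # Sketch of VAR lag-order selection by AIC with statsmodels (synthetic data).
        import numpy as np
        import pandas as pd
        from statsmodels.tsa.api import VAR

        rng = np.random.default_rng(42)
        n = 120
        series = np.zeros((n, 4))
        shocks = rng.normal(size=(n, 4))
        for t in range(1, n):
            series[t] = 0.5 * series[t - 1] + shocks[t]    # mild persistence, stationary
        data = pd.DataFrame(series, columns=["semarang", "solo", "purwokerto", "tegal"])

        model = VAR(data)
        for p in (1, 2, 3):                                # compare VAR(1), VAR(2), VAR(3)
            print(p, model.fit(p).aic)                     # smaller AIC is preferred
        best = model.fit(maxlags=3, ic="aic")              # let statsmodels pick by AIC
        print("selected lag order:", best.k_ar)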

  9. Climatic, high tide and vector variables and the transmission of Ross River virus.

    PubMed

    Tong, S; Hu, W; Nicholls, N; Dale, P; MacKenzie, J S; Patz, J; McMichael, A J

    2005-11-01

    This report assesses the impact of the variability in environmental and vector factors on the transmission of Ross River virus (RRV) in Brisbane, Australia. Poisson time series regression analyses were conducted using monthly data on the counts of RRV cases, climate variables (Southern Oscillation Index and rainfall), high tides and mosquito density for the period of 1998-2001. The results indicate that increases in the high tide (relative risk (RR): 1.65; 95% confidence interval (CI): 1.20-2.26), rainfall (RR: 1.45; 95% CI: 1.21-1.73), mosquito density (RR: 1.17; 95% CI: 1.09-1.27), the density of Culex annulirostris (RR: 1.25; 95% CI: 1.13-1.37) and the density of Ochlerotatus vigilax (RR: 2.39; 95% CI: 2.30-2.48), each at a lag of 1 month, were statistically significantly associated with the rise of monthly RRV incidence. The results of the present study might facilitate the development of early warning systems for reducing the incidence of this widespread disease in Australia and other Pacific island nations.
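
    A Poisson time series regression of this kind can be sketched with statsmodels: monthly counts are regressed on covariates lagged by one month, and relative risks with 95% confidence intervals are obtained by exponentiating the coefficients. The data frame below is synthetic and the covariate names are illustrative.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      # Synthetic monthly data: RRV case counts plus 1-month-lagged covariates.
      rng = np.random.default_rng(0)
      df = pd.DataFrame({
          "cases":     rng.poisson(5, 48),
          "rain_lag1": rng.gamma(2.0, 50.0, 48),
          "tide_lag1": rng.normal(1.5, 0.3, 48),
          "mosq_lag1": rng.gamma(3.0, 10.0, 48),
      })

      X = sm.add_constant(df[["rain_lag1", "tide_lag1", "mosq_lag1"]])
      fit = sm.GLM(df["cases"], X, family=sm.families.Poisson()).fit()

      rr = np.exp(fit.params)          # relative risks
      rr_ci = np.exp(fit.conf_int())   # 95% confidence bounds
      print(rr)
      print(rr_ci)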

  10. Coarse-Grained and Atomistic Modeling of Polyimides

    NASA Technical Reports Server (NTRS)

    Clancy, Thomas C.; Hinkley, Jeffrey A.

    2004-01-01

    A coarse-grained model for a set of three polyimide isomers is developed. Each polyimide is comprised of BPDA (3,3',4,4'-biphenyltetracarboxylic dianhydride) and one of three APB isomers: 1,3-bis(4-aminophenoxy)benzene, 1,4-bis(4-aminophenoxy)benzene or 1,3-bis(3-aminophenoxy)benzene. The coarse-grained model is constructed as a series of linked vectors following the contour of the polymer backbone. Beads located at the midpoint of each vector define centers for the long-range interaction energy between monomer subunits. A bulk simulation of each coarse-grained polyimide model is performed with a dynamic Monte Carlo procedure. These coarse-grained models are then reverse-mapped to fully atomistic models. The coarse-grained models show the expected trends of decreasing chain dimensions with increasing meta linkage in the APB section of the repeat unit, although these differences were minor due to the relatively short chains simulated here. Considerable differences are seen among the dynamic Monte Carlo properties of the three polyimide isomers. Decreasing relaxation times are seen with increasing meta linkage in the APB section of the repeat unit.

  11. Multivariate Time Series Forecasting of Crude Palm Oil Price Using Machine Learning Techniques

    NASA Astrophysics Data System (ADS)

    Kanchymalay, Kasturi; Salim, N.; Sukprasert, Anupong; Krishnan, Ramesh; Raba'ah Hashim, Ummi

    2017-08-01

    The aim of this paper was to study the correlation between the crude palm oil (CPO) price, selected vegetable oil prices (soybean oil, coconut oil, olive oil, rapeseed oil and sunflower oil), the crude oil price and the monthly exchange rate. Comparative analysis was then performed on CPO price forecasting results using machine learning techniques. Monthly CPO prices, selected vegetable oil prices, crude oil prices and monthly exchange rate data from January 1987 to February 2017 were utilized. Preliminary analysis showed a positive and high correlation between the CPO price and the soybean oil price, and also between the CPO price and the crude oil price. Experiments were conducted using multi-layer perceptron, support vector regression and Holt-Winters exponential smoothing techniques. The results were assessed using the criteria of root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE) and directional accuracy (DA). Among these three techniques, support vector regression (SVR) with the sequential minimal optimization (SMO) algorithm showed relatively better results compared to the multi-layer perceptron and the Holt-Winters exponential smoothing method.
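
    A comparable SVR forecasting setup with the same error criteria can be sketched with scikit-learn (whose SVR uses a libsvm SMO-type solver rather than Weka's SMOreg); the feature matrix below is random placeholder data standing in for the lagged price and exchange-rate series.

      import numpy as np
      from sklearn.svm import SVR
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline
      from sklearn.metrics import mean_absolute_error, mean_squared_error

      # Placeholder features standing in for lagged CPO, vegetable-oil and crude-oil
      # prices plus the exchange rate; y is the next-month CPO price.
      rng = np.random.default_rng(1)
      X = rng.standard_normal((360, 7))
      y = 100.0 + X @ rng.standard_normal(7) + 0.5 * rng.standard_normal(360)

      X_train, X_test, y_train, y_test = X[:300], X[300:], y[:300], y[300:]
      model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
      pred = model.fit(X_train, y_train).predict(X_test)

      rmse = np.sqrt(mean_squared_error(y_test, pred))
      mae = mean_absolute_error(y_test, pred)
      mape = 100.0 * np.mean(np.abs((y_test - pred) / y_test))
      da = np.mean(np.sign(np.diff(y_test)) == np.sign(np.diff(pred)))  # directional accuracy
      print(rmse, mae, mape, da)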

  12. Per-field crop classification in irrigated agricultural regions in middle Asia using random forest and support vector machine ensemble

    NASA Astrophysics Data System (ADS)

    Löw, Fabian; Schorcht, Gunther; Michel, Ulrich; Dech, Stefan; Conrad, Christopher

    2012-10-01

    Accurate crop identification and crop area estimation are important for studies on irrigated agricultural systems, yield and water demand modeling, and agrarian policy development. In this study a novel combination of Random Forest (RF) and Support Vector Machine (SVM) classifiers is presented that (i) enhances crop classification accuracy and (ii) provides spatial information on map uncertainty. The methodology was implemented over four distinct irrigated sites in Middle Asia using RapidEye time series data. The RF feature importance statistic was used as a feature-selection strategy for the SVM to assess possible negative effects on classification accuracy caused by an oversized feature space. The results of the individual RF and SVM classifications were combined with rules based on posterior classification probability and estimates of classification probability entropy. SVM classification performance was increased by feature selection through RF. Further experimental results indicate that the hybrid classifier improves overall classification accuracy in comparison to the single classifiers, as well as user's and producer's accuracy.
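
    The RF-based feature selection and probability-based combination can be sketched as follows with scikit-learn; the number of retained features and the decision rule are illustrative choices, not the exact rules of the study.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.svm import SVC

      # X: per-field features from a RapidEye time series; y: crop labels (synthetic stand-ins).
      rng = np.random.default_rng(0)
      X, y = rng.standard_normal((500, 60)), rng.integers(0, 5, 500)

      rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
      top = np.argsort(rf.feature_importances_)[::-1][:20]      # RF-based feature selection
      svm = SVC(probability=True, random_state=0).fit(X[:, top], y)

      # Combine the classifiers per sample by taking the prediction with the larger
      # posterior probability; the entropy of that posterior is a simple uncertainty layer.
      p_rf, p_svm = rf.predict_proba(X), svm.predict_proba(X[:, top])
      chosen = np.where(p_rf.max(axis=1)[:, None] >= p_svm.max(axis=1)[:, None], p_rf, p_svm)
      labels = chosen.argmax(axis=1)
      uncertainty = -np.sum(chosen * np.log(np.clip(chosen, 1e-12, 1.0)), axis=1)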

  13. Development of anomaly detection models for deep subsurface monitoring

    NASA Astrophysics Data System (ADS)

    Sun, A. Y.

    2017-12-01

    Deep subsurface repositories are used for waste disposal and carbon sequestration. Monitoring deep subsurface repositories for potential anomalies is challenging, not only because the number of sensor networks and the quality of data are often limited, but also because of the lack of labeled data needed to train and validate machine learning (ML) algorithms. Although physical simulation models may be applied to predict anomalies (or the system's nominal state, for that matter), the accuracy of such predictions may be limited by inherent conceptual and parameter uncertainties. The main objective of this study was to demonstrate the potential of data-driven models for leakage detection in carbon sequestration repositories. Monitoring data collected during an artificial CO2 release test at a carbon sequestration repository were used, which include both scalar time series (pressure) and vector time series (distributed temperature sensing). For each type of data, separate online anomaly detection algorithms were developed using the baseline experiment data (no leak) and then tested on the leak experiment data. Performance of a number of different online algorithms was compared. Results show the importance of including contextual information in the dataset to mitigate the impact of reservoir noise and reduce the false positive rate. The developed algorithms were integrated into a generic Web-based platform for real-time anomaly detection.
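
    The specific online detectors compared in the study are not spelled out in the abstract; as a generic illustration of the idea, the sketch below flags pressure samples that deviate strongly from a trailing-window baseline after removing a contextual reference signal (all names and thresholds are hypothetical).

      import numpy as np

      def online_anomaly_flags(pressure, reference, window=50, threshold=4.0):
          """Rolling z-score detector: regress out a contextual reference series
          (e.g. a background well) and flag residuals that deviate from the
          trailing-window mean by more than `threshold` standard deviations."""
          slope = np.polyfit(reference, pressure, 1)[0]
          residual = pressure - slope * reference
          flags = np.zeros(len(residual), dtype=bool)
          for t in range(window, len(residual)):
              history = residual[t - window:t]
              mu, sd = history.mean(), history.std() + 1e-9
              flags[t] = abs(residual[t] - mu) / sd > threshold
          return flags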

  14. TIME-DOMAIN METHODS FOR DIFFUSIVE TRANSPORT IN SOFT MATTER

    PubMed Central

    Fricks, John; Yao, Lingxing; Elston, Timothy C.; Forest, M. Gregory

    2015-01-01

    Passive microrheology [12] utilizes measurements of noisy, entropic fluctuations (i.e., diffusive properties) of micron-scale spheres in soft matter to infer bulk frequency-dependent loss and storage moduli. Here, we are concerned exclusively with diffusion of Brownian particles in viscoelastic media, for which the Mason-Weitz theoretical-experimental protocol is ideal, and the more challenging inference of bulk viscoelastic moduli is decoupled. The diffusive theory begins with a generalized Langevin equation (GLE) with a memory drag law specified by a kernel [7, 16, 22, 23]. We start with a discrete formulation of the GLE as an autoregressive stochastic process governing microbead paths measured by particle tracking. For the inverse problem (recovery of the memory kernel from experimental data) we apply time series analysis (maximum likelihood estimators via the Kalman filter) directly to bead position data, an alternative to formulas based on mean-squared displacement statistics in frequency space. For direct modeling, we present statistically exact GLE algorithms for individual particle paths as well as statistical correlations for displacement and velocity. Our time-domain methods rest upon a generalization of well-known results for a single-mode exponential kernel [1, 7, 22, 23] to an arbitrary M-mode exponential series, for which the GLE is transformed to a vector Ornstein-Uhlenbeck process. PMID:26412904
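
    The reduction mentioned above, from a GLE with an M-mode exponential memory kernel to a vector Ornstein-Uhlenbeck process, can be written schematically as follows (generic GLE notation, not the paper's exact conventions): the memory integral is replaced by one auxiliary variable per kernel mode.

      m\,\dot v(t) = -\int_0^t K(t-s)\, v(s)\, ds + F(t),
      \qquad
      K(t) = \sum_{k=1}^{M} c_k\, e^{-t/\tau_k}

      z_k(t) = \int_0^t c_k\, e^{-(t-s)/\tau_k}\, v(s)\, ds
      \quad\Longrightarrow\quad
      \dot x = v, \qquad
      m\,\dot v = -\sum_{k=1}^{M} z_k(t) + F(t), \qquad
      \dot z_k = -\frac{z_k}{\tau_k} + c_k\, v

    The augmented state (x, v, z_1, ..., z_M) then evolves as a linear, Markovian system of Ornstein-Uhlenbeck type driven by the random force F(t), which is what makes Kalman-filter maximum likelihood estimation applicable.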

  15. Characteristic vector analysis of inflection ratio spectra: New technique for analysis of ocean color data

    NASA Technical Reports Server (NTRS)

    Grew, G. W.

    1985-01-01

    Characteristic vector analysis applied to inflection ratio spectra is a new approach to analyzing spectral data. The technique, applied to remote data collected with the multichannel ocean color sensor (MOCS), a passive sensor, simultaneously maps the distribution of two different phytopigments, chlorophyll a and phycoerythrin, in the ocean. The data set presented is from a series of warm core ring missions conducted during 1982. The data compare favorably with a theoretical model and with data collected on the same mission by an active sensor, the airborne oceanographic lidar (AOL).

  16. A path model for Whittaker vectors

    NASA Astrophysics Data System (ADS)

    Di Francesco, Philippe; Kedem, Rinat; Turmunkh, Bolor

    2017-06-01

    In this paper we construct weighted path models to compute Whittaker vectors in the completion of Verma modules, as well as Whittaker functions of fundamental type, for all finite-dimensional simple Lie algebras, affine Lie algebras, and the quantum algebra U_q(sl_{r+1}). This leads to series expressions for the Whittaker functions. We show how this construction leads directly to the quantum Toda equations satisfied by these functions, and to the q-difference equations in the quantum case. We investigate the critical limit of affine Whittaker functions computed in this way.

  17. A Vector Library for Silencing Central Carbon Metabolism Genes with Antisense RNAs in Escherichia coli

    PubMed Central

    Ohno, Satoshi; Yoshikawa, Katsunori; Shimizu, Hiroshi; Tamura, Tomohiro

    2014-01-01

    We describe here the construction of a series of 71 vectors to silence central carbon metabolism genes in Escherichia coli. The vectors inducibly express antisense RNAs called paired-terminus antisense RNAs, which have a higher silencing efficacy than ordinary antisense RNAs. By measuring mRNA amounts, measuring activities of target proteins, or observing specific phenotypes, it was confirmed that all the vectors were able to silence the expression of target genes efficiently. Using this vector set, each of the central carbon metabolism genes was silenced individually, and the accumulation of metabolites was investigated. We were able to obtain accurate information on ways to increase the production of pyruvate, an industrially valuable compound, from the silencing results. Furthermore, the experimental results of pyruvate accumulation were compared to in silico predictions, and both sets of results were consistent. Compared to the gene disruption approach, the silencing approach has an advantage in that any E. coli strain can be used and multiple gene silencing is easily possible in any combination. PMID:24212579

  18. Mapping Brazilian savanna vegetation gradients with Landsat time series

    NASA Astrophysics Data System (ADS)

    Schwieder, Marcel; Leitão, Pedro J.; da Cunha Bustamante, Mercedes Maria; Ferreira, Laerte Guimarães; Rabe, Andreas; Hostert, Patrick

    2016-10-01

    Global change has tremendous impacts on savanna systems around the world. Processes related to climate change or agricultural expansion threaten the ecosystem's state, function and the services it provides. A prominent example is the Brazilian Cerrado that has an extent of around 2 million km2 and features high biodiversity with many endemic species. It is characterized by landscape patterns from open grasslands to dense forests, defining a heterogeneous gradient in vegetation structure throughout the biome. While it is undisputed that the Cerrado provides a multitude of valuable ecosystem services, it is exposed to changes, e.g. through large scale land conversions or climatic changes. Monitoring of the Cerrado is thus urgently needed to assess the state of the system as well as to analyze and further understand ecosystem responses and adaptations to ongoing changes. Therefore we explored the potential of dense Landsat time series to derive phenological information for mapping vegetation gradients in the Cerrado. Frequent data gaps, e.g. due to cloud contamination, impose a serious challenge for such time series analyses. We synthetically filled data gaps based on Radial Basis Function convolution filters to derive continuous pixel-wise temporal profiles capable of representing Land Surface Phenology (LSP). Derived phenological parameters revealed differences in the seasonal cycle between the main Cerrado physiognomies and could thus be used to calibrate a Support Vector Classification model to map their spatial distribution. Our results show that it is possible to map the main spatial patterns of the observed physiognomies based on their phenological differences, although inaccuracies occurred, especially between similar classes and in data-scarce areas. The outcome emphasizes the need for remote sensing based time series analyses at fine scales. Mapping heterogeneous ecosystems such as savannas requires spatial detail, as well as the ability to derive important phenological parameters for monitoring habitats or ecosystem responses to climate change. The open Landsat and Sentinel-2 archives provide the satellite data needed for improved analyses of savanna ecosystems globally.
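
    The gap-filling step, a Radial Basis Function (Gaussian) convolution over the valid observations of each pixel's time series, can be sketched as below; the kernel width is a hypothetical value and the code ignores the quality flags a real implementation would use.

      import numpy as np

      def rbf_fill(times, values, sigma=16.0):
          """Fill gaps (NaNs) in an irregular time series with a Gaussian (RBF)
          weighted average of the valid observations; sigma is in days and its
          value here is only illustrative."""
          times = np.asarray(times, dtype=float)
          filled = np.asarray(values, dtype=float).copy()
          valid = ~np.isnan(filled)
          for i in np.where(~valid)[0]:
              w = np.exp(-0.5 * ((times[valid] - times[i]) / sigma) ** 2)
              filled[i] = np.sum(w * filled[valid]) / np.sum(w)
          return filled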

  19. Prediction of GWL with the help of GRACE TWS for unevenly spaced time series data in India : Analysis of comparative performances of SVR, ANN and LRM

    NASA Astrophysics Data System (ADS)

    Mukherjee, Amritendu; Ramachandran, Parthasarathy

    2018-03-01

    Prediction of Ground Water Level (GWL) is extremely important for sustainable use and management of ground water resources. The motivation for this work is to understand the relationship between Gravity Recovery and Climate Experiment (GRACE) derived terrestrial water storage change (ΔTWS) data and GWL, so that ΔTWS could be used as a proxy measurement for GWL. In our study, we have selected five observation wells from different geographic regions in India. The datasets are unevenly spaced time series data, which restricts us from applying standard time series methodologies; therefore, in order to model and predict GWL with the help of ΔTWS, we have built a Linear Regression Model (LRM), Support Vector Regression (SVR) and an Artificial Neural Network (ANN). Comparative performances of LRM, SVR and ANN have been evaluated with the help of the correlation coefficient (ρ) and Root Mean Square Error (RMSE) between the actual and fitted (for the training dataset) or predicted (for the test dataset) values of GWL. It has been observed in our study that ΔTWS is a highly significant variable for modelling GWL, and the amount of total variation in GWL that could be explained with the help of ΔTWS varies from 36.48% to 74.28% (0.3648 ⩽ R² ⩽ 0.7428). We have found that for the model GWL ∼ ΔTWS, for both training and test datasets, the performances of SVR and ANN are better than that of LRM in terms of ρ and RMSE. It has also been found that with the inclusion of meteorological variables along with ΔTWS as input parameters to model GWL, the performance of SVR improves and it performs better than ANN. These results imply that for modelling irregular time series of GWL data, ΔTWS could be very useful.

  20. Ngram time series model to predict activity type and energy cost from wrist, hip and ankle accelerometers: implications of age

    PubMed Central

    Strath, Scott J; Kate, Rohit J; Keenan, Kevin G; Welch, Whitney A; Swartz, Ann M

    2016-01-01

    To develop and test time series single-site and multi-site placement models, we used wrist, hip and ankle processed accelerometer data to estimate energy cost and type of physical activity in adults. Ninety-nine subjects in three age groups (18–39, 40–64, 65+ years) performed 11 activities while wearing three triaxial accelerometers: one each on the non-dominant wrist, hip, and ankle. During each activity net oxygen cost (METs) was assessed. The time series of accelerometer signals were represented in terms of uniformly discretized values called bins. A Support Vector Machine was used for activity classification, with bins and every pair of bins used as features. Bagged decision tree regression was used for net metabolic cost prediction. To evaluate model performance we employed the jackknife leave-one-out cross validation method. Single-accelerometer and multi-accelerometer site model estimates across and within age groups revealed similar accuracy, with a bias range of −0.03 to 0.01 METs, a bias percent of −0.8 to 0.3%, and an rMSE range of 0.81–1.04 METs. Multi-site accelerometer location models improved activity type classification over single-site location models from a low of 69.3% to a maximum of 92.8% accuracy. For each accelerometer site location model, or combined site location model, percent classification accuracy decreased as a function of age group, or when models for younger age groups were generalized to older age groups. Age-group-specific models on average performed better than models combining all age groups. The time series computation shows promising results for predicting energy cost and activity type. Differences in prediction across age groups, the lack of generalizability across age groups, and the fact that age-group-specific models perform better than models combining all ages need to be considered as analytic calibration procedures to detect energy cost and type are further developed. PMID:26449155
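
    The bin-based feature construction can be illustrated as below: the signal is discretized into uniform bins, and counts of single bins and consecutive bin pairs form the feature vector fed to an SVM. This is a simplified stand-in for the paper's representation; bin counts, window lengths and labels are synthetic.

      import numpy as np
      from itertools import product
      from sklearn.svm import SVC

      def bin_ngram_features(signal, n_bins=10):
          """Discretize a signal into uniform bins and build features from
          single-bin counts plus counts of every consecutive bin pair."""
          edges = np.linspace(signal.min(), signal.max(), n_bins + 1)
          bins = np.clip(np.digitize(signal, edges) - 1, 0, n_bins - 1)
          single = np.bincount(bins, minlength=n_bins)
          pairs = [np.sum((bins[:-1] == a) & (bins[1:] == b))
                   for a, b in product(range(n_bins), repeat=2)]
          return np.concatenate([single, pairs])

      # Synthetic accelerometer windows and activity labels.
      rng = np.random.default_rng(0)
      X = np.array([bin_ngram_features(rng.standard_normal(500)) for _ in range(60)])
      y = rng.integers(0, 3, 60)
      clf = SVC().fit(X, y)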

  1. Estimating water temperatures in small streams in western Oregon using neural network models

    USGS Publications Warehouse

    Risley, John C.; Roehl, Edwin A.; Conrads, Paul

    2003-01-01

    Artificial neural network models were developed to estimate water temperatures in small streams using data collected at 148 sites throughout western Oregon from June to September 1999. The sites were located on 1st-, 2nd-, or 3rd-order streams having undisturbed or minimally disturbed conditions. Data collected at each site for model development included continuous hourly water temperature and a description of riparian habitat. Additional data pertaining to the landscape characteristics of the basins upstream of the sites were assembled using geographic information system (GIS) techniques. Hourly meteorological time series data collected at 25 locations within the study region also were assembled. Clustering analysis was used to partition 142 sites into 3 groups, and separate models were developed for each group. The riparian habitat, basin characteristic, and meteorological time series data served as independent variables to the models, and the water temperature time series as the dependent variables. Approximately one-third of the data vectors were used for model training, and the remaining two-thirds were used for model testing. Critical input variables included riparian shade, site elevation, and percentage of forested area of the basin. The coefficient of determination and root mean square error for the models ranged from 0.88 to 0.99 and 0.05 to 0.59 °C, respectively. The models also were tested and validated using temperature time series, habitat, and basin landscape data from 6 sites that were separate from the 142 sites that were used to develop the models. The models are capable of estimating water temperatures at locations along 1st-, 2nd-, and 3rd-order streams in western Oregon. The model user must assemble riparian habitat and basin landscape characteristics data for a site of interest. These data, in addition to meteorological data, are model inputs. Output from the models includes simulated hourly water temperatures for the June to September period. Adjustments can be made to the shade input data to simulate the effects of minimum or maximum shade on water temperatures.

  2. Edge smoothing for real-time simulation of a polygon face object system as viewed by a moving observer

    NASA Technical Reports Server (NTRS)

    Lotz, Robert W. (Inventor); Westerman, David J. (Inventor)

    1980-01-01

    The visual system within an aircraft flight simulation system receives flight data and terrain data which are formatted into a buffer memory. The image data are forwarded to an image processor which translates the image data into face vertex vectors Vf, defining the position relationship between the vertices of each terrain object and the aircraft. The image processor then rotates, clips, and projects the image data into two-dimensional display vectors (Vd). A display generator receives the Vd faces and other image data to provide analog inputs to CRT devices which provide the window displays for the simulated aircraft. The video signal to the CRT devices passes through an edge smoothing device which prolongs the rise time (and fall time) of the video data inversely as the slope of the edge being smoothed. An operational amplifier within the edge smoothing device has a plurality of independently selectable feedback capacitors, each having a different value. The values of the capacitors form a series in which each value is double the previous one (powers of two). Each feedback capacitor has a fast switch responsive to the corresponding bit of a digital binary control word for selecting (1) or not selecting (0) that capacitor. The control word is determined by the slope of each edge. The resulting actual feedback capacitance for each edge is the sum of all the selected capacitors and is directly proportional to the value of the binary control word. The output rise time (or fall time) is a function of the feedback capacitance, and is controlled by the slope through the binary control word.
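
    Because the binary-weighted capacitors double in value from bit to bit, the selected feedback capacitance is simply proportional to the integer value of the control word; a small sketch of that relationship (with a hypothetical unit capacitance) is given below.

      def feedback_capacitance(control_word, c_unit=1.0e-12, n_bits=4):
          """Total feedback capacitance selected by a binary control word:
          capacitor i has value c_unit * 2**i, so the sum is directly
          proportional to the integer value of the word (c_unit is hypothetical)."""
          return sum(c_unit * (1 << i) for i in range(n_bits) if control_word & (1 << i))

      # Control word 0b1011 selects capacitors 0, 1 and 3 -> 11 * c_unit.
      assert feedback_capacitance(0b1011, c_unit=1.0) == 11.0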

  3. Extended vector-tensor theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kimura, Rampei; Naruko, Atsushi; Yoshida, Daisuke, E-mail: rampei@th.phys.titech.ac.jp, E-mail: naruko@th.phys.titech.ac.jp, E-mail: yoshida@th.phys.titech.ac.jp

    Recently, several extensions of massive vector theory in curved space-time have been proposed in the literature. In this paper, we consider the most general vector-tensor theories that contain up to two derivatives with respect to metric and vector field. By imposing a degeneracy condition of the Lagrangian in the context of ADM decomposition of space-time to eliminate an unwanted mode, we construct a new class of massive vector theories where five degrees of freedom can propagate, corresponding to three for massive vector modes and two for massless tensor modes. We find that the generalized Proca and the beyond generalized Proca theories up to the quartic Lagrangian, which should be included in this formulation, are degenerate theories even in curved space-time. Finally, introducing new metric and vector field transformations, we investigate the properties of the theories thus obtained under such transformations.

  4. Models for discrete-time self-similar vector processes with application to network traffic

    NASA Astrophysics Data System (ADS)

    Lee, Seungsin; Rao, Raghuveer M.; Narasimha, Rajesh

    2003-07-01

    The paper defines self-similarity for vector processes by employing the discrete-time continuous-dilation operation which has successfully been used previously by the authors to define 1-D discrete-time stochastic self-similar processes. To define self-similarity of vector processes, it is required to consider the cross-correlation functions between different 1-D processes as well as the autocorrelation function of each constituent 1-D process in it. System models to synthesize self-similar vector processes are constructed based on the definition. With these systems, it is possible to generate self-similar vector processes from white noise inputs. An important aspect of the proposed models is that they can be used to synthesize various types of self-similar vector processes by choosing proper parameters. Additionally, the paper presents evidence of vector self-similarity in two-channel wireless LAN data and applies the aforementioned systems to simulate the corresponding network traffic traces.

  5. An SVM model with hybrid kernels for hydrological time series

    NASA Astrophysics Data System (ADS)

    Wang, C.; Wang, H.; Zhao, X.; Xie, Q.

    2017-12-01

    Support Vector Machine (SVM) models have been widely applied to the forecast of climate/weather and its impact on other environmental variables such as hydrologic response to climate/weather. When using SVM, the choice of the kernel function plays the key role. Conventional SVM models mostly use one single type of kernel function, e.g., radial basis kernel function. Provided that there are several featured kernel functions available, each having its own advantages and drawbacks, a combination of these kernel functions may give more flexibility and robustness to SVM approach, making it suitable for a wide range of application scenarios. This paper presents such a linear combination of radial basis kernel and polynomial kernel for the forecast of monthly flowrate in two gaging stations using SVM approach. The results indicate significant improvement in the accuracy of predicted series compared to the approach with either individual kernel function, thus demonstrating the feasibility and advantages of such hybrid kernel approach for SVM applications.
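
    In scikit-learn such a hybrid kernel can be passed to SVR as a callable that returns the Gram matrix; the mixing weight, kernel parameters and data below are illustrative assumptions rather than the paper's settings.

      import numpy as np
      from sklearn.svm import SVR
      from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

      def hybrid_kernel(X, Y, w=0.7, gamma=0.1, degree=2):
          """Linear combination of an RBF kernel and a polynomial kernel
          (the weight w is a hypothetical mixing parameter)."""
          return w * rbf_kernel(X, Y, gamma=gamma) + (1 - w) * polynomial_kernel(X, Y, degree=degree)

      # Placeholder monthly data: X holds lagged flow/climate predictors, y the flowrate.
      rng = np.random.default_rng(0)
      X, y = rng.standard_normal((200, 5)), rng.standard_normal(200)
      model = SVR(kernel=hybrid_kernel).fit(X, y)
      pred = model.predict(X)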

  6. Coupling detrended fluctuation analysis of Asian stock markets

    NASA Astrophysics Data System (ADS)

    Wang, Qizhen; Zhu, Yingming; Yang, Liansheng; Mul, Remco A. H.

    2017-04-01

    This paper uses the coupling detrended fluctuation analysis (CDFA) method to investigate the multifractal characteristics of four Asian stock markets using three stock indices: stock price returns, trading volumes and the composite index. The results show that coupled correlations exist among the four stock markets and that the coupled correlations have multifractal characteristics. We then use the chi square (χ2) test to identify the sources of multifractality. For the different stock indices, the contributions of a single series to multifractality are different; in other words, the contributions of each country to the coupled correlations are different. The comparative analysis shows that research on the combined effect of stock price returns and trading volumes may be more comprehensive than research on an individual index. By comparing the strength of multifractality for the original data with that of the residual errors of the vector autoregression (VAR) model, we find that the VAR model could not be used to describe the dynamics of the coupled correlations among the four financial time series.

  7. The Ocean's Abyssal Mass Flux Sustained Primarily By the Wind: Vector Correlation of Time Series in Upper and Abyssal Layers

    NASA Astrophysics Data System (ADS)

    Hancock, L. O.

    2003-12-01

    As Wunsch has recently noted (2002), use of the term "thermohaline circulation" is muddled. The term is used with at least seven inconsistent meanings, among them abyssal circulation, the circulation driven by density and pressure differences in the deep ocean, the global conveyor, and at least four others. The use of a single term for all these concepts can create an impression that an understanding exists whereby, in various combinations, the seven meanings have been demonstrated to mean the same thing. But that is not the case. A particularly important consequence of the muddle is the way in which abyssal circulation is sometimes taken to be driven mostly or entirely by temperature and density differences, and to be equivalent to the global conveyor. But in fact the distinction between abyssal and upper-layer circulation has not been measured. To find out whether available data justify a distinction between the upper-layer and abyssal circulations, this study surveyed velocity time series obtained by deep current meter moorings. Altogether, 114 moorings were identified, drawn from about three dozen experiments worldwide over the period 1973-1996, each of which deployed current meters in both the upper and abyssal layers. For each pair of current meters, the Kundu and Crosby measures of vector correlation were estimated, as well as coherences for periods from 10 to 60 days. In the North Atlantic, for example, the Kundu vector correlation (50-day window) was 0.48 +/- 0.03, the Crosby vector correlation (absolute value, 50-day window) was 0.46 +/- 0.07, and the coherence was 0.36 +/- 0.07 at 60 days, 0.40 +/- 0.06 at 30 days, and 0.22 +/- 0.05 at 10 days. Most figures for the South Atlantic, Pacific and Southern Oceans are similar; those obtained in the Indian Ocean or near the Equator are somewhat different. The statistics obtained here are consistent with the work of Wunsch (1997), and tend to confirm Wunsch's result that current velocities at depth are linked with those in the upper layers. Energetics of the circulation that do not take this into account are making an unjustifiable approximation of the physics. These results do not tell us whether time-averaged flow on longer time scales might permit distinction of upper-layer and abyssal flow components. Some intriguing corollaries do follow. First, the abyssal circulation is not identically the same thing as a global conveyor belt driven by temperature and density differences. Rather, as Wunsch noted (2002), the ocean's mass flux is sustained primarily by the wind. We may add that these wind patterns are about as robust as the temperature differences between equator and pole; this major driver of circulation is not a frail phenomenon. Second, the classical notion of a level of no motion that is also a constant-density surface, an LNM, is inconsistent with the results presented here. Such an LNM would wall off the upper-layer circulation from the lower, and as they are not walled off, there can be no such LNM. Third, wind stress is being transmitted down column, presumably to the sea floor.
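
    The Kundu measure referred to above treats each horizontal velocity series as a complex number w = u + iv; its magnitude measures the vector correlation and its phase the average angle between the two current vectors. A minimal sketch (a standard Kundu-1976-style estimator, not the exact processing of this study) is shown below.

      import numpy as np

      def kundu_complex_correlation(u1, v1, u2, v2):
          """Complex vector correlation between two velocity series (Kundu, 1976):
          the magnitude measures how well the vector series co-vary, the phase
          gives the average angle by which the second series is rotated."""
          w1 = (u1 - u1.mean()) + 1j * (v1 - v1.mean())
          w2 = (u2 - u2.mean()) + 1j * (v2 - v2.mean())
          rho = np.sum(np.conj(w1) * w2) / np.sqrt(np.sum(np.abs(w1) ** 2) * np.sum(np.abs(w2) ** 2))
          return np.abs(rho), np.degrees(np.angle(rho))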

  8. A Time Series Analysis: Weather Factors, Human Migration and Malaria Cases in Endemic Area of Purworejo, Indonesia, 2005–2014

    PubMed Central

    REJEKI, Dwi Sarwani Sri; NURHAYATI, Nunung; AJI, Budi; MURHANDARWATI, E. Elsa Herdiana; KUSNANTO, Hari

    2018-01-01

    Background: Climatic and weather factors are important determinants of the transmission of vector-borne diseases like malaria. This study aimed to establish relationships between weather factors and malaria cases in endemic areas of Purworejo during 2005–2014, taking human migration and previous case findings into account. Methods: This study employed ecological time series analysis using monthly data. The independent variables were the maximum temperature, minimum temperature, maximum humidity, minimum humidity, precipitation, human migration, and previous malaria cases, while the dependent variable was positive malaria cases. Three count data regression models, i.e. the Poisson model, the quasi-Poisson model, and the negative binomial model, were applied to measure the relationship, and the smallest Akaike Information Criterion (AIC) value was used to find the best model. Negative binomial regression was selected as the best model. Results: The model showed that humidity (lag 2), precipitation (lag 3), precipitation (lag 12), migration (lag 1) and previous malaria cases (lag 12) had a significant relationship with malaria cases. Conclusion: Weather, migration and previous malaria case factors need to be considered as prominent indicators for projecting increases in malaria cases. PMID:29900134
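
    A lagged count regression of this kind can be sketched with statsmodels by fitting Poisson and negative binomial GLMs to the same lagged covariates and comparing their AIC values; the data frame below is synthetic and the lag structure is only illustrative.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      # Synthetic monthly series: malaria cases plus lagged weather/migration covariates.
      rng = np.random.default_rng(0)
      df = pd.DataFrame({
          "cases":      rng.poisson(20, 120),
          "humid_lag2": rng.uniform(60, 95, 120),
          "rain_lag3":  rng.gamma(2.0, 80.0, 120),
          "migr_lag1":  rng.poisson(50, 120),
      })

      X = sm.add_constant(df.drop(columns="cases"))
      poisson = sm.GLM(df["cases"], X, family=sm.families.Poisson()).fit()
      negbin = sm.GLM(df["cases"], X, family=sm.families.NegativeBinomial()).fit()

      # Keep the count model with the smaller AIC (negative binomial in the study).
      print({"Poisson": poisson.aic, "NegBin": negbin.aic})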

  9. Comparison of six methods for the detection of causality in a bivariate time series

    NASA Astrophysics Data System (ADS)

    Krakovská, Anna; Jakubík, Jozef; Chvosteková, Martina; Coufal, David; Jajcay, Nikola; Paluš, Milan

    2018-04-01

    In this comparative study, six causality detection methods were compared, namely, the Granger vector autoregressive test, the extended Granger test, the kernel version of the Granger test, the conditional mutual information (transfer entropy), the evaluation of cross mappings between state spaces, and an assessment of predictability improvement due to the use of mixed predictions. Seven test data sets were analyzed: linear coupling of autoregressive models, a unidirectional connection of two Hénon systems, a unidirectional connection of chaotic systems of Rössler and Lorenz type and of two different Rössler systems, an example of bidirectionally connected two-species systems, a fishery model as an example of two correlated observables without a causal relationship, and an example of mediated causality. We tested not only clean time series 20,000 points long but also noisy and short variants of the data. The standard and the extended Granger tests worked only for the autoregressive models. The remaining methods were more successful with the more complex test examples, although they differed considerably in their capability to reveal the presence and the direction of coupling and to distinguish causality from mere correlation.
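
    As a baseline for such comparisons, the standard Granger test is readily available in statsmodels; the coupled autoregressive pair below is synthetic, constructed so that x drives y.

      import numpy as np
      from statsmodels.tsa.stattools import grangercausalitytests

      # Two coupled AR(1) series: y is driven by lagged x, so x should Granger-cause y.
      rng = np.random.default_rng(0)
      x, y = np.zeros(500), np.zeros(500)
      for t in range(1, 500):
          x[t] = 0.5 * x[t - 1] + rng.standard_normal()
          y[t] = 0.4 * y[t - 1] + 0.6 * x[t - 1] + rng.standard_normal()

      # Tests whether the second column Granger-causes the first, for lags 1..3.
      results = grangercausalitytests(np.column_stack([y, x]), maxlag=3)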

  10. A fault diagnosis scheme for rolling bearing based on local mean decomposition and improved multiscale fuzzy entropy

    NASA Astrophysics Data System (ADS)

    Li, Yongbo; Xu, Minqiang; Wang, Rixin; Huang, Wenhu

    2016-01-01

    This paper presents a new rolling bearing fault diagnosis method based on local mean decomposition (LMD), improved multiscale fuzzy entropy (IMFE), the Laplacian score (LS) and an improved support vector machine based binary tree (ISVM-BT). When a fault occurs in rolling bearings, the measured vibration signal is a multi-component amplitude-modulated and frequency-modulated (AM-FM) signal. LMD, a new self-adaptive time-frequency analysis method, can decompose any complicated signal into a series of product functions (PFs), each of which is exactly a mono-component AM-FM signal. Hence, LMD is introduced to preprocess the vibration signal. Furthermore, IMFE, which is designed to avoid the inaccurate estimation of fuzzy entropy, can be utilized to quantify the complexity and self-similarity of a time series over a range of scales based on fuzzy entropy. In addition, the LS approach is introduced to refine the fault features by sorting the scale factors. Subsequently, the obtained features are fed into the multi-fault classifier ISVM-BT to automatically fulfill fault pattern identification. The experimental results validate the effectiveness of the methodology and demonstrate that the proposed algorithm can be applied to recognize the different categories and severities of rolling bearing faults.

  11. Performance Evaluation of Machine Learning Methods for Leaf Area Index Retrieval from Time-Series MODIS Reflectance Data

    PubMed Central

    Wang, Tongtong; Xiao, Zhiqiang; Liu, Zhigang

    2017-01-01

    Leaf area index (LAI) is an important biophysical parameter and the retrieval of LAI from remote sensing data is the only feasible method for generating LAI products at regional and global scales. However, most LAI retrieval methods use satellite observations at a specific time to retrieve LAI. Because of the impacts of clouds and aerosols, the LAI products generated by these methods are spatially incomplete and temporally discontinuous, and thus they cannot meet the needs of practical applications. To generate high-quality LAI products, four machine learning algorithms, including back-propagation neural network (BPNN), radial basis function networks (RBFNs), general regression neural networks (GRNNs), and multi-output support vector regression (MSVR), are proposed to retrieve LAI from time-series Moderate Resolution Imaging Spectroradiometer (MODIS) reflectance data in this study and performance of these machine learning algorithms is evaluated. The results demonstrated that GRNNs, RBFNs, and MSVR exhibited low sensitivity to training sample size, whereas BPNN had high sensitivity. The four algorithms performed slightly better with red, near infrared (NIR), and short wave infrared (SWIR) bands than red and NIR bands, and the results were significantly better than those obtained using single band reflectance data (red or NIR). Regardless of band composition, GRNNs performed better than the other three methods. Among the four algorithms, BPNN required the least training time, whereas MSVR needed the most for any sample size. PMID:28045443
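
    Of the four algorithms, the multi-output support vector regression can be approximated in scikit-learn by wrapping SVR in a MultiOutputRegressor (one SVR per output); this is a common stand-in rather than the exact MSVR formulation of the paper, and the reflectance and LAI arrays below are random placeholders.

      import numpy as np
      from sklearn.multioutput import MultiOutputRegressor
      from sklearn.svm import SVR

      # Placeholder training pairs: time-series reflectance (red/NIR/SWIR over 12
      # composite dates) as inputs, the corresponding 12-date LAI series as targets.
      rng = np.random.default_rng(0)
      X = rng.uniform(0.0, 0.6, (1000, 3 * 12))
      Y = rng.uniform(0.0, 7.0, (1000, 12))

      # One SVR is fitted per output dimension -- a simple stand-in for MSVR.
      msvr = MultiOutputRegressor(SVR(kernel="rbf", C=10.0)).fit(X, Y)
      lai_pred = msvr.predict(X[:5])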

  12. Performance Evaluation of Machine Learning Methods for Leaf Area Index Retrieval from Time-Series MODIS Reflectance Data.

    PubMed

    Wang, Tongtong; Xiao, Zhiqiang; Liu, Zhigang

    2017-01-01

    Leaf area index (LAI) is an important biophysical parameter and the retrieval of LAI from remote sensing data is the only feasible method for generating LAI products at regional and global scales. However, most LAI retrieval methods use satellite observations at a specific time to retrieve LAI. Because of the impacts of clouds and aerosols, the LAI products generated by these methods are spatially incomplete and temporally discontinuous, and thus they cannot meet the needs of practical applications. To generate high-quality LAI products, four machine learning algorithms, including back-propagation neural network (BPNN), radial basis function networks (RBFNs), general regression neural networks (GRNNs), and multi-output support vector regression (MSVR), are proposed to retrieve LAI from time-series Moderate Resolution Imaging Spectroradiometer (MODIS) reflectance data in this study and performance of these machine learning algorithms is evaluated. The results demonstrated that GRNNs, RBFNs, and MSVR exhibited low sensitivity to training sample size, whereas BPNN had high sensitivity. The four algorithms performed slightly better with red, near infrared (NIR), and short wave infrared (SWIR) bands than red and NIR bands, and the results were significantly better than those obtained using single band reflectance data (red or NIR). Regardless of band composition, GRNNs performed better than the other three methods. Among the four algorithms, BPNN required the least training time, whereas MSVR needed the most for any sample size.

  13. Towards robust identification and tracking of nevi in sparse photographic time series

    NASA Astrophysics Data System (ADS)

    Vogel, Jakob; Duliu, Alexandru; Oyamada, Yuji; Gardiazabal, Jose; Lasser, Tobias; Ziai, Mahzad; Hein, Rüdiger; Navab, Nassir

    2014-03-01

    In dermatology, photographic imagery is acquired in large volumes in order to monitor the progress of diseases, especially melanocytic skin cancers. For this purpose, overview (macro) images are taken of the region of interest and used as a reference map to re-localize highly magnified images of individual lesions. The latter are then used for diagnosis. These pictures are acquired at irregular intervals under only partially constrained circumstances, where patient positions as well as camera positions are not reliable. In the presence of a large number of nevi, correct identification of the same nevus in a series of such images is thus a time consuming task with ample chances for error. This paper introduces a method for largely automatic and simultaneous identification of nevi in different images, thus allowing the tracking of a single nevus over time, as well as pattern evaluation. The method uses a rotation-invariant feature descriptor that uses the local neighborhood of a nevus to describe it. The texture, size and shape of the nevus are not used to describe it, as these can change over time, especially in the case of a malignancy. We then use the Random Walks framework to compute the correspondences based on the probabilities derived from comparing the feature vectors. Evaluation is performed on synthetic and patient data at the university clinic.

  14. Changing image of correlation optics: introduction.

    PubMed

    Angelsky, Oleg V; Desyatnikov, Anton S; Gbur, Gregory J; Hanson, Steen G; Lee, Tim; Miyamoto, Yoko; Schneckenburger, Herbert; Wyant, James C

    2016-04-20

    This feature issue of Applied Optics contains a series of selected papers reflecting recent progress of correlation optics and illustrating current trends in vector singular optics, internal energy flows at light fields, optical science of materials, and new biomedical applications of lasers.

  15. The Helioseismic and Magnetic Imager (HMI) Vector Magnetic Field Pipeline: SHARPs - Space-Weather HMI Active Region Patches

    NASA Astrophysics Data System (ADS)

    Bobra, M. G.; Sun, X.; Hoeksema, J. T.; Turmon, M.; Liu, Y.; Hayashi, K.; Barnes, G.; Leka, K. D.

    2014-09-01

    A new data product from the Helioseismic and Magnetic Imager (HMI) onboard the Solar Dynamics Observatory (SDO) called Space-weather HMI Active Region Patches ( SHARPs) is now available. SDO/HMI is the first space-based instrument to map the full-disk photospheric vector magnetic field with high cadence and continuity. The SHARP data series provide maps in patches that encompass automatically tracked magnetic concentrations for their entire lifetime; map quantities include the photospheric vector magnetic field and its uncertainty, along with Doppler velocity, continuum intensity, and line-of-sight magnetic field. Furthermore, keywords in the SHARP data series provide several parameters that concisely characterize the magnetic-field distribution and its deviation from a potential-field configuration. These indices may be useful for active-region event forecasting and for identifying regions of interest. The indices are calculated per patch and are available on a twelve-minute cadence. Quick-look data are available within approximately three hours of observation; definitive science products are produced approximately five weeks later. SHARP data are available at jsoc.stanford.edu and maps are available in either of two different coordinate systems. This article describes the SHARP data products and presents examples of SHARP data and parameters.

  16. Broad-host-range plasmids for red fluorescent protein labeling of gram-negative bacteria for use in the zebrafish model system.

    PubMed

    Singer, John T; Phennicie, Ryan T; Sullivan, Matthew J; Porter, Laura A; Shaffer, Valerie J; Kim, Carol H

    2010-06-01

    To observe real-time interactions between green fluorescent protein-labeled immune cells and invading bacteria in the zebrafish (Danio rerio), a series of plasmids was constructed for the red fluorescent protein (RFP) labeling of a variety of fish and human pathogens. The aim of this study was to create a collection of plasmids that would express RFP pigments both constitutively and under tac promoter regulation and that would be nontoxic and broadly transmissible to a variety of Gram-negative bacteria. DNA fragments encoding the RFP dimeric (d), monomeric (m), and tandem dimeric (td) derivatives d-Tomato, td-Tomato, m-Orange, and m-Cherry were cloned into the IncQ-based vector pMMB66EH in Escherichia coli. Plasmids were mobilized into recipient strains by conjugal mating. Pigment production was inducible in Escherichia coli, Pseudomonas aeruginosa, Edwardsiella tarda, and Vibrio (Listonella) anguillarum strains by isopropyl-beta-d-thiogalactopyranoside (IPTG) treatment. A spontaneous mutant exconjugant of P. aeruginosa PA14 was isolated that expressed td-Tomato constitutively. Complementation analysis revealed that the constitutive phenotype likely was due to a mutation in lacI(q) carried on pMMB66EH. DNA sequence analysis confirmed the presence of five transitions, four transversions, and a 2-bp addition within a 14-bp region of lacI. Vector DNA was purified from this constitutive mutant, and structural DNA sequences for RFP pigments were cloned into the constitutive vector. Exconjugants of P. aeruginosa, E. tarda, and V. anguillarum expressed all pigments in an IPTG-independent fashion. Results from zebrafish infectivity studies indicate that RFP-labeled pathogens will be useful for the study of real-time interactions between host cells of the innate immune system and the infecting pathogen.

  17. Structure of the lunar interior from magnetic field measurements

    NASA Technical Reports Server (NTRS)

    Dyal, P.; Parkin, C. W.; Daily, W. D.

    1976-01-01

    A network of lunar surface and orbiting magnetometers was used to obtain measurements of electrical conductivity and magnetic permeability of the lunar interior. An exceptionally large solar transient event, when the moon was in a geomagnetic tail lobe, enabled the most accurate lunar electromagnetic sounding information to date to be obtained. A new analytical technique using a network of two surface magnetometers and a satellite magnetometer superimposes many time series measurements to improve the signal-to-noise ratio and uses both the amplitude and phase information of all three vector components of the magnetic field data. Size constraints on a hypothetical highly conducting lunar core are investigated with the aid of the permeability results.

  18. On the time-dependent Aharonov-Bohm effect

    NASA Astrophysics Data System (ADS)

    Jing, Jian; Zhang, Yu-Fei; Wang, Kang; Long, Zheng-Wen; Dong, Shi-Hai

    2017-11-01

    The Aharonov-Bohm effect in the background of a time-dependent vector potential is re-examined for both non-relativistic and relativistic cases. Based on the solutions to the Schrödinger and Dirac equations which contain the time-dependent magnetic vector potential, we find that, contrary to the conclusions in a recent paper (Singleton and Vagenas 2013 [4]), the interference pattern will be altered with respect to time because of the time-dependent vector potential.

  19. Synthesis and aggregation properties of dissymmetric phytanyl-gemini surfactants for use as improved DNA transfection vectors.

    PubMed

    Wang, Haitang; Wettig, Shawn D

    2011-01-14

    Improvements in transfection efficiency are required in order to make the goal of cellular gene delivery by non-viral vectors realizable. Novel derivatives of gemini surfactants having dissymmetric tail groups have been designed specifically as a means to improve DNA transfection; the micelle and interfacial properties are reported herein. The effect of these substitutions on the aggregation properties of the gemini surfactants is discussed in the context of results for the m-3-m gemini series, previously reported in the literature. Phytanyl substitution results in lower cmc and higher micelle ionization. In addition, the phytanyl substituted gemini surfactants form vesicles at room temperature. Preliminary in vitro transfection assays showed the phytanyl substituted gemini surfactants to be more efficient transfection vectors as compared to symmetric gemini surfactants.

  20. Classification of a set of vectors using self-organizing map- and rule-based technique

    NASA Astrophysics Data System (ADS)

    Ae, Tadashi; Okaniwa, Kaishirou; Nosaka, Kenzaburou

    2005-02-01

    There exist various objects, such as pictures, music and texts, in our environment, and we form a view of these objects by looking, reading or listening. This view is deeply related to our behavior and is very important for understanding it. Having formed a view of an object, we decide the next action (data selection, etc.) based on that view, and such a series of actions constructs a sequence. We therefore propose a method that acquires a view as a vector derived from several words describing the view, and apply the vector to sequence generation. We focus on sequences of data that a user selects from a multimedia database containing pictures, music, movies, etc. These data cannot be stereotyped, because each user's view of them is different. Therefore, we represent the structure of the multimedia database by the vector representing the user's view together with a stereotyped vector, and acquire sequences containing this structure as elements. Such vectors can be classified by a Self-Organizing Map (SOM). A Hidden Markov Model (HMM) is a method for generating sequences; we use an HMM in which each state corresponds to a representative vector of the user's view, and acquire sequences that capture changes of the user's view. We call this the Vector-state Markov Model (VMM). We also introduce rough set theory as a rule-based technique, which plays the role of classifying sets of data, such as the set "Tour".
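
    A minimal self-organizing map sufficient to classify such view vectors can be written in a few lines of NumPy; the grid size, learning rate and iteration count are illustrative, and the node sequence produced this way would then feed the HMM/VMM step described above.

      import numpy as np

      def train_som(data, grid=(6, 6), iters=2000, lr0=0.5, sigma0=2.0, seed=0):
          """Minimal self-organizing map: each grid node holds a weight vector
          that is pulled toward randomly drawn samples, with a Gaussian
          neighbourhood that shrinks over time (all parameters are illustrative)."""
          rng = np.random.default_rng(seed)
          h, w = grid
          weights = rng.standard_normal((h, w, data.shape[1]))
          coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
          for t in range(iters):
              frac = t / iters
              lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 1e-3
              x = data[rng.integers(len(data))]
              bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(axis=2)), (h, w))
              dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=2)
              g = np.exp(-dist2 / (2 * sigma ** 2))[:, :, None]
              weights += lr * g * (x - weights)       # pull the neighbourhood toward x
          return weights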

  1. Hybrid biosynthetic gene therapy vector development and dual engineering capacity.

    PubMed

    Jones, Charles H; Ravikrishnan, Anitha; Chen, Mingfu; Reddinger, Ryan; Kamal Ahmadi, Mahmoud; Rane, Snehal; Hakansson, Anders P; Pfeifer, Blaine A

    2014-08-26

    Genetic vaccines offer a treatment opportunity based upon successful gene delivery to specific immune cell modulators. Driving the process is the vector chosen for gene cargo packaging and subsequent delivery to antigen-presenting cells (APCs) capable of triggering an immune cascade. As such, the delivery process must successfully navigate a series of requirements and obstacles associated with the chosen vector and target cell. In this work, we present the development and assessment of a hybrid gene delivery vector containing biological and biomaterial components. Each component was chosen to design and engineer gene delivery separately in a complementary and fundamentally distinct fashion. A bacterial (Escherichia coli) inner core and a biomaterial [poly(beta-amino ester)]-coated outer surface allowed the simultaneous application of molecular biology and polymer chemistry to address barriers associated with APC gene delivery, which include cellular uptake and internalization, phagosomal escape, and intracellular cargo concentration. The approach combined and synergized normally disparate vector properties and tools, resulting in increased in vitro gene delivery beyond individual vector components or commercially available transfection agents. Furthermore, the hybrid device demonstrated a strong, efficient, and safe in vivo humoral immune response compared with traditional forms of antigen delivery. In summary, the flexibility, diversity, and potential of the hybrid design were developed and featured in this work as a platform for multivariate engineering at the vector and cellular scales for new applications in gene delivery immunotherapy.

  2. pSW2, a Novel Low-Temperature-Inducible Gene Expression Vector Based on a Filamentous Phage of the Deep-Sea Bacterium Shewanella piezotolerans WP3.

    PubMed

    Yang, Xin-Wei; Jian, Hua-Hua; Wang, Feng-Ping

    2015-08-15

    A low-temperature-inducible protein expression vector (pSW2) based on a filamentous phage (SW1) of the deep-sea bacterium Shewanella piezotolerans WP3 was constructed. This vector replicated stably in Escherichia coli and Shewanella species, and its copy number increased at low temperatures. The pSW2 vector can be utilized as a complementation plasmid in WP3, and it can also be used for the production of complex cytochromes with multiple heme groups, which has the potential for application for metal ion recovery or bioremediation. Promoters of low-temperature-inducible genes in WP3 were fused into the vector to construct a series of vectors for enhancing protein expression at low temperature. The maximum green fluorescent protein intensity was obtained when the promoter for the hfq gene was used. The WP3/pSW2 system can efficiently produce a patatin-like protein (PLP) from a metagenomic library that tends to form inclusion bodies in E. coli. The yields of PLP in the soluble fraction were 8.3 mg/liter and 4.7 mg/liter of culture at 4°C and 20°C, respectively. Moreover, the pSW2 vector can be broadly utilized in other Shewanella species, such as S. oneidensis and S. psychrophila. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  3. Magnetosheath for almost-aligned solar wind magnetic field and flow vectors: Wind observations across the dawnside magnetosheath at X = -12 Re

    NASA Astrophysics Data System (ADS)

    Farrugia, C. J.; Erkaev, N. V.; Torbert, R. B.; Biernat, H. K.; Gratton, F. T.; Szabo, A.; Kucharek, H.; Matsui, H.; Lin, R. P.; Ogilvie, K. W.; Lepping, R. P.; Smith, C. W.

    2010-08-01

    While there are many approximations describing the flow of the solar wind past the magnetosphere in the magnetosheath, the case of perfectly aligned (parallel or anti-parallel) interplanetary magnetic field (IMF) and solar wind flow vectors can be treated exactly in a magnetohydrodynamic (MHD) approach. In this work we examine a case of nearly-opposed (to within 15°) interplanetary field and flow vectors, which occurred on October 24-25, 2001 during passage of the last interplanetary coronal mass ejection in an ejecta merger. Interplanetary data are from the ACE spacecraft. Simultaneously Wind was crossing the near-Earth (X ˜ -13 Re) geomagnetic tail and subsequently made an approximately 5-hour-long magnetosheath crossing close to the ecliptic plane (Z = -0.7 Re). Geomagnetic activity was returning steadily to quiet, “ground” conditions. We first compare the predictions of the Spreiter and Rizzi theory with the Wind magnetosheath observations and find fair agreement, in particular as regards the proportionality of the magnetic field strength and the product of the plasma density and bulk speed. We then carry out a small-perturbation analysis of the Spreiter and Rizzi solution to account for the small IMF components perpendicular to the flow vector. The resulting expression is compared to the time series of the observations and satisfactory agreement is obtained. We also present and discuss observations in the dawnside boundary layer of pulsed, high-speed (v ˜ 600 km/s) flows exceeding the solar wind flow speeds. We examine various generating mechanisms and suggest that the most likely cause is a wave of frequency 3.2 mHz excited at the inner edge of the boundary layer by the Kelvin-Helmholtz instability.

  4. Magnetosheath for almost-aligned solar wind magnetic field and flow vectors: Wind observations across the dawnside magnetosheath at X = -12 Re

    NASA Astrophysics Data System (ADS)

    Farrugia, Charles

    While there are many approximations describing the flow of the solar wind past the magnetosphere in the magnetosheath, the case of perfectly aligned (parallel or anti-parallel) interplanetary magnetic field (IMF) and solar wind flow vectors can be treated exactly in a magnetohydrodynamic (MHD) approach (Spreiter and Rizzi, 1974). In this work we examine a case of nearly-opposed (to within 15 deg) interplanetary field and flow vectors, which occurred on October 24-25, 2001 during passage of the last interplanetary coronal mass ejection in an ejecta merger. Interplanetary data are from the ACE spacecraft. Simultaneously Wind was crossing the near-Earth (X ~ -13 Re) geomagnetic tail and subsequently made a 5-hour-long magnetosheath crossing close to the ecliptic plane (Z = -0.7 Re). Geomagnetic activity was returning steadily to quiet, "ground" conditions. We first compare the predictions of the Spreiter and Rizzi theory with the Wind magnetosheath observations and find fair agreement, in particular as regards the proportionality of the magnetic field strength and the product of the plasma density and bulk speed. We then carry out a small-perturbation analysis of the Spreiter and Rizzi solution to account for the small IMF components perpendicular to the flow vector. The resulting expression is compared to the time series of the observations and satisfactory agreement is obtained. We also present and discuss observations in the dawnside boundary layer of pulsed, high-speed (v ~ 600 km/s) flows exceeding the solar wind flow speeds. We examine various generating mechanisms and suggest that the most likely cause is a wave of frequency 3.2 mHz excited at the inner edge of the boundary layer.

  5. Spatial Models for Prediction and Early Warning of Aedes aegypti Proliferation from Data on Climate Change and Variability in Cuba.

    PubMed

    Ortiz, Paulo L; Rivero, Alina; Linares, Yzenia; Pérez, Alina; Vázquez, Juan R

    2015-04-01

    Climate variability, the primary expression of climate change, is one of the most important environmental problems affecting human health, particularly vector-borne diseases. Despite research efforts worldwide, there are few studies addressing the use of information on climate variability for prevention and early warning of vector-borne infectious diseases. The objective was to show the utility of climate information for vector surveillance by developing spatial models that use an entomological indicator and information on predicted climate variability in Cuba to provide early warning of increased risk of dengue transmission. An ecological study was carried out using retrospective and prospective analyses of time series combined with spatial statistics. Several entomological and climatic indicators were considered using the complex Bultó indices 1 and 2. Moran's I spatial autocorrelation coefficient, specified for a matrix of neighbors within a radius of 20 km, was used to identify the spatial structure. Spatial structure simulation was based on simultaneous autoregressive and conditional autoregressive models; agreement between predicted and observed values for the number of Aedes aegypti foci was determined by the concordance index Di and the skill factor Bi. Spatial and temporal distributions of populations of Aedes aegypti were obtained. Models for describing, simulating and predicting spatial patterns of Aedes aegypti populations associated with climate variability patterns were put forward. The ranges of climate variability affecting Aedes aegypti populations were identified. Forecast maps were generated at the municipal level. Using the Bultó indices of climate variability, it is possible to construct spatial models for predicting increased Aedes aegypti populations in Cuba. At 20 x 20 km resolution, the models are able to provide warning of potential changes in vector populations in rainy and dry seasons and by month, thus demonstrating the usefulness of climate information for epidemiological surveillance.
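
    Moran's I, used above to identify spatial structure, can be computed directly from a vector of site values and a binary neighbour matrix (for instance, 1 for municipalities within 20 km of each other); the short sketch below uses made-up focus counts.

      import numpy as np

      def morans_i(values, weight_matrix):
          """Moran's I spatial autocorrelation coefficient for a variable observed
          at n sites, given an n x n spatial weight matrix."""
          z = values - values.mean()
          w = np.asarray(weight_matrix, dtype=float)
          n, s0 = len(values), w.sum()
          return (n / s0) * (z @ w @ z) / (z @ z)

      # Hypothetical Aedes aegypti focus counts at 5 municipalities and their adjacency.
      counts = np.array([3.0, 5.0, 4.0, 12.0, 10.0])
      W = np.array([[0, 1, 1, 0, 0],
                    [1, 0, 1, 0, 0],
                    [1, 1, 0, 1, 0],
                    [0, 0, 1, 0, 1],
                    [0, 0, 0, 1, 0]], dtype=float)
      print(morans_i(counts, W))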

  6. Acid-Labile Poly(glycidyl methacrylate)-Based Star Gene Vectors.

    PubMed

    Yang, Yan-Yu; Hu, Hao; Wang, Xing; Yang, Fei; Shen, Hong; Xu, Fu-Jian; Wu, De-Cheng

    2015-06-10

    It was recently reported that ethanolamine-functionalized poly(glycidyl methacrylate) (PGEA) has great potential for applications in gene therapy due to its good biocompatibility and high transfection efficiency. Introducing responsiveness into PGEA vectors would further improve their performance. Herein, a series of responsive star-shaped vectors, acetal-linked β-cyclodextrin-PGEAs (A-CD-PGEAs) consisting of a β-CD core and five PGEA arms linked by acid-labile acetal groups, were proposed and characterized as therapeutic pDNA vectors. The A-CD-PGEAs carried abundant hydroxyl groups to shield the excess positive charges of A-CD-PGEAs/pDNA complexes, and the star structure could decrease the charge density. The incorporation of acetal linkers endowed A-CD-PGEAs with pH responsiveness and degradability. In the weakly acidic endosome, cleavage of the acetal linkers resulted in decomposition of the A-CD-PGEAs and morphological transformation of the A-CD-PGEAs/pDNA complexes, lowering cytotoxicity and accelerating the release of pDNA. In comparison with control CD-PGEAs without acetal linkers, the A-CD-PGEAs exhibited significantly better transfection performance.

  7. A novel multi-target regression framework for time-series prediction of drug efficacy.

    PubMed

    Li, Haiqing; Zhang, Wei; Chen, Ying; Guo, Yumeng; Li, Guo-Zheng; Zhu, Xiaoxin

    2017-01-18

    Extracting knowledge from small samples is a challenging pharmacokinetic problem to which statistical methods can be applied. Pharmacokinetic data are special due to the small samples of high dimensionality, which makes it difficult to adopt conventional methods to predict the efficacy of traditional Chinese medicine (TCM) prescriptions. The main purpose of our study is to obtain some knowledge of the correlations within TCM prescriptions. Here, a novel method named the Multi-target Regression Framework is proposed to deal with the problem of efficacy prediction. We exploit the correlation between the values at different time points and add the targets of previous time points as features to predict the value at the current time. Several experiments are conducted to test the validity of our method, and the results of leave-one-out cross-validation clearly manifest the competitiveness of our framework. Compared with linear regression, artificial neural networks, and partial least squares, support vector regression combined with our framework demonstrates the best performance, and appears to be more suitable for this task.
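    The framework itself is only described in prose above; the following sketch illustrates the general idea (targets from earlier time points appended as features for later time points) with scikit-learn's SVR on synthetic data. scikit-learn and all variable names here are assumptions, not the authors' implementation.

    ```python
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)

    # hypothetical data: n prescriptions, d baseline features,
    # efficacy measured at T consecutive time points
    n, d, T = 30, 8, 4
    X = rng.normal(size=(n, d))
    Y = np.cumsum(rng.normal(size=(n, T)), axis=1)     # toy efficacy trajectories

    # chained multi-target regression: the targets up to time t-1 are appended
    # to the features when fitting the model for time t
    models, fitted = [], []
    for t in range(T):
        feats = X if t == 0 else np.hstack([X, Y[:, :t]])
        m = SVR(kernel="rbf", C=1.0).fit(feats, Y[:, t])
        models.append(m)
        fitted.append(m.predict(feats))

    print(np.column_stack(fitted).shape)               # (n, T) fitted values
    ```

    At prediction time for an unseen sample, the previously predicted targets would be fed in place of the observed ones.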

  8. A novel multi-target regression framework for time-series prediction of drug efficacy

    PubMed Central

    Li, Haiqing; Zhang, Wei; Chen, Ying; Guo, Yumeng; Li, Guo-Zheng; Zhu, Xiaoxin

    2017-01-01

    Extracting knowledge from small samples is a challenging pharmacokinetic problem to which statistical methods can be applied. Pharmacokinetic data are special due to the small samples of high dimensionality, which makes it difficult to adopt conventional methods to predict the efficacy of traditional Chinese medicine (TCM) prescriptions. The main purpose of our study is to obtain some knowledge of the correlations within TCM prescriptions. Here, a novel method named the Multi-target Regression Framework is proposed to deal with the problem of efficacy prediction. We exploit the correlation between the values at different time points and add the targets of previous time points as features to predict the value at the current time. Several experiments are conducted to test the validity of our method, and the results of leave-one-out cross-validation clearly manifest the competitiveness of our framework. Compared with linear regression, artificial neural networks, and partial least squares, support vector regression combined with our framework demonstrates the best performance, and appears to be more suitable for this task. PMID:28098186

  9. Extending the length and time scales of Gram-Schmidt Lyapunov vector computations

    NASA Astrophysics Data System (ADS)

    Costa, Anthony B.; Green, Jason R.

    2013-08-01

    Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram-Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N^2 (with the particle count). This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram-Schmidt vectors. The first is a distributed-memory message-passing method using ScaLAPACK. The second uses the newly-released MAGMA library for GPUs. We compare the performance of both codes for Lennard-Jones fluids from N=100 to 1300 between Intel Nehalem/InfiniBand DDR and NVIDIA C2050 architectures. To the best of our knowledge, these are the largest systems for which the Gram-Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra.
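    The paper's ScaLAPACK and MAGMA implementations are not reproduced here; as a minimal illustration of the underlying Gram-Schmidt (QR) reorthonormalization, the sketch below estimates the Lyapunov exponents of the two-dimensional Hénon map with NumPy.

    ```python
    import numpy as np

    def henon(x, a=1.4, b=0.3):
        return np.array([1.0 - a * x[0] ** 2 + x[1], b * x[0]])

    def henon_jac(x, a=1.4, b=0.3):
        return np.array([[-2.0 * a * x[0], 1.0],
                         [b, 0.0]])

    # Benettin-style scheme: propagate tangent vectors, reorthonormalize with QR,
    # and accumulate the logarithms of the diagonal of R
    x = np.array([0.1, 0.1])
    Q = np.eye(2)
    logs = np.zeros(2)
    n_steps = 100_000
    for _ in range(n_steps):
        Z = henon_jac(x) @ Q          # tangent vectors pushed forward one step
        x = henon(x)
        Q, R = np.linalg.qr(Z)
        logs += np.log(np.abs(np.diag(R)))

    print(logs / n_steps)             # roughly [0.42, -1.62] for the Hénon map
    ```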

  10. The Interaction between Vector Life History and Short Vector Life in Vector-Borne Disease Transmission and Control.

    PubMed

    Brand, Samuel P C; Rock, Kat S; Keeling, Matt J

    2016-04-01

    Epidemiological modelling has a vital role to play in policy planning and prediction for the control of vectors, and hence the subsequent control of vector-borne diseases. To decide between competing policies requires models that can generate accurate predictions, which in turn requires accurate knowledge of vector natural histories. Here we highlight the importance of the distribution of times between life-history events, using short-lived midge species as an example. In particular we focus on the distribution of the extrinsic incubation period (EIP) which determines the time between infection and becoming infectious, and the distribution of the length of the gonotrophic cycle which determines the time between successful bites. We show how different assumptions for these periods can radically change the basic reproductive ratio (R0) of an infection and additionally the impact of vector control on the infection. These findings highlight the need for detailed entomological data, based on laboratory experiments and field data, to correctly construct the next-generation of policy-informing models.
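    As a toy illustration of the abstract's point about period distributions (not the authors' model), the snippet below compares the probability that a vector survives the EIP, a factor that scales R0, when the EIP is fixed, Erlang-distributed, or exponentially distributed with the same mean; the mortality rate and mean EIP are made-up values.

    ```python
    import numpy as np

    mu = 0.2        # daily vector mortality rate (hypothetical)
    T = 10.0        # mean extrinsic incubation period in days (hypothetical)

    p_fixed = np.exp(-mu * T)                 # EIP of fixed length T
    p_erlang = (4.0 / (4.0 + mu * T)) ** 4    # Erlang-distributed EIP (shape k=4, mean T)
    p_exp = 1.0 / (1.0 + mu * T)              # exponentially distributed EIP with mean T

    # each survival probability multiplies R0, so the assumed distribution matters a lot
    print(p_fixed, p_erlang, p_exp)           # about 0.14, 0.20, 0.33
    ```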

  11. Strategies and approaches to vector control in nine malaria-eliminating countries: a cross-case study analysis.

    PubMed

    Smith Gueye, Cara; Newby, Gretchen; Gosling, Roland D; Whittaker, Maxine A; Chandramohan, Daniel; Slutsker, Laurence; Tanner, Marcel

    2016-01-04

    There has been progress towards malaria elimination in the last decade. In response, WHO launched the Global Technical Strategy (GTS), in which vector surveillance and control play important roles. Country experiences in the Eliminating Malaria Case Study Series were reviewed to identify success factors on the road to elimination using a cross-case study analytic approach. Reports were included in the analysis if final English language draft reports or publications were available at the time of analysis (Bhutan, Cape Verde, Malaysia, Mauritius, Namibia, Philippines, Sri Lanka, Turkey, Turkmenistan). A conceptual framework for vector control in malaria elimination was developed, reviewed, formatted as a matrix, and case study data were extracted and entered into the matrix. A workshop was convened during which participants conducted reviews of the case studies and matrices and arrived at a consensus on the evidence and lessons. The framework was revised and a second round of data extraction, synthesis and summary of the case study reports was conducted. Countries implemented a range of vector control interventions. Most countries aligned with integrated vector management; however, its impact was not well articulated. All programmes conducted entomological surveillance, but the response (i.e., stratification and targeting of interventions, outbreak forecasting and strategy) was limited or not described. Indoor residual spraying (IRS) was commonly used by countries. There were several examples of severe reductions or halting of IRS coverage and subsequent resurgence of malaria; funding and operational constraints and poor implementation played a role. Bed nets were commonly used by most programmes; coverage and effectiveness were either not measured or not articulated. Larval control was an important intervention for several countries, preventing re-introduction; however, coverage and impact on incidence were not described. Across all interventions, coverage indicators were incomparable, and the rationale for which tools were used and which were not used appeared to be a function of the availability of funding, operational issues and cost instead of evidence of effectiveness to reduce incidence. More work is required to fill gaps in programme guidance, clarify the best methods for choosing and targeting vector control interventions, and support the measurement of cost, cost-effectiveness and cost-benefit of vector surveillance and control interventions.

  12. Break and trend analysis of EUMETSAT Climate Data Records

    NASA Astrophysics Data System (ADS)

    Doutriaux-Boucher, Marie; Zeder, Joel; Lattanzio, Alessio; Khlystova, Iryna; Graw, Kathrin

    2016-04-01

    EUMETSAT reprocessed imagery acquired by the Spinning Enhanced Visible and Infrared Imager (SEVIRI) on board Meteosat 8-9. The data cover the period from 2004 to 2012. Climate Data Records (CDRs) of atmospheric parameters such as Atmospheric Motion Vectors (AMV) as well as Clear and All Sky Radiances (CSR and ASR) have been generated. Such CDRs are mainly ingested by ECMWF to produce reanalysis data. In addition, EUMETSAT produced a long CDR (1982-2004) of land surface albedo exploiting imagery acquired by the Meteosat Visible and Infrared Imager (MVIRI) on board Meteosat 2-7. Such a CDR is key information for climate analysis and climate models. Extensive validation has been performed for the surface albedo record, and a first validation of the winds and clear-sky radiances has been done. All validation results demonstrated that the time series of all parameters appear homogeneous at first sight. Statistical science offers a variety of analysis methods that have been applied to further analyse the homogeneity of the CDRs. Many breakpoint analysis techniques depend on the comparison of two time series, which raises the issue that both may have breakpoints. This paper will present a quantitative and statistical analysis of potential breakpoints found in the MVIRI and SEVIRI CDRs, including attribution of breakpoints to instrument changes and other events in the data series compared. The value of the different methods applied will be discussed, with suggestions on how to further develop this type of analysis for the quality evaluation of CDRs.
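    The specific homogeneity tests used for these CDRs are not detailed above; as one generic possibility, the sketch below applies a sliding two-sample t-test to a synthetic difference series (CDR minus a reference) and flags the most likely breakpoint. SciPy is assumed to be available.

    ```python
    import numpy as np
    from scipy import stats

    def sliding_t_test(diff_series, window=24):
        """t statistic comparing the `window` values before and after each index."""
        x = np.asarray(diff_series, dtype=float)
        t_stat = np.full(x.size, np.nan)
        for i in range(window, x.size - window):
            t_stat[i], _ = stats.ttest_ind(x[i - window:i], x[i:i + window])
        return t_stat

    # toy monthly difference series with an artificial break at index 60
    rng = np.random.default_rng(1)
    diff = rng.normal(0.0, 1.0, 120)
    diff[60:] += 1.5
    t = sliding_t_test(diff, window=24)
    print(np.nanargmax(np.abs(t)))    # index of the most likely breakpoint (near 60)
    ```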

  13. A Guided Tour of Mathematical Methods

    NASA Astrophysics Data System (ADS)

    Snieder, Roel

    2009-04-01

    1. Introduction; 2. Dimensional analysis; 3. Power series; 4. Spherical and cylindrical co-ordinates; 5. The gradient; 6. The divergence of a vector field; 7. The curl of a vector field; 8. The theorem of Gauss; 9. The theorem of Stokes; 10. The Laplacian; 11. Conservation laws; 12. Scale analysis; 13. Linear algebra; 14. The Dirac delta function; 15. Fourier analysis; 16. Analytic functions; 17. Complex integration; 18. Green's functions: principles; 19. Green's functions: examples; 20. Normal modes; 21. Potential theory; 22. Cartesian tensors; 23. Perturbation theory; 24. Asymptotic evaluation of integrals; 25. Variational calculus; 26. Epilogue, on power and knowledge; References.

  14. Study of the modifications needed for efficient operation of NASTRAN on the Control Data Corporation STAR-100 computer

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The NASA structural analysis (NASTRAN) computer program is operational on three series of third-generation computers. The problems and difficulties involved in adapting NASTRAN to a fourth-generation computer, namely the Control Data STAR-100, are discussed. The salient features which distinguish the Control Data STAR-100 from third-generation computers are its hardware vector processing capability and virtual memory. A feasible method is presented for transferring NASTRAN to the Control Data STAR-100 system while retaining much of the machine-independent code. Basic matrix operations are identified for optimization for vector processing.

  15. On diagrammatic technique for nonlinear dynamical systems

    NASA Astrophysics Data System (ADS)

    Semenyakin, Mykola

    2014-11-01

    In this paper, we investigate phase flows over ℂ^n and ℝ^n generated by vector fields V = Σ_i P_i ∂_i, where the P_i are finite-degree polynomials. With a convenient diagrammatic technique, we obtain expressions for the evolution operators ev{V|t}: x(0) ↦ x(t) as series in powers of x(0) and t, represented as sums over all trees of a particular type. Estimates are made for the radius of convergence in some particular cases. The behavior of the phase flows in the neighborhood of the fixed points of the vector field is examined. Resonance cases are considered separately.

  16. MOBBED: a computational data infrastructure for handling large collections of event-rich time series datasets in MATLAB

    PubMed Central

    Cockfield, Jeremy; Su, Kyungmin; Robbins, Kay A.

    2013-01-01

    Experiments to monitor human brain activity during active behavior record a variety of modalities (e.g., EEG, eye tracking, motion capture, respiration monitoring) and capture a complex environmental context leading to large, event-rich time series datasets. The considerable variability of responses within and among subjects in more realistic behavioral scenarios requires experiments to assess many more subjects over longer periods of time. This explosion of data requires better computational infrastructure to more systematically explore and process these collections. MOBBED is a lightweight, easy-to-use, extensible toolkit that allows users to incorporate a computational database into their normal MATLAB workflow. Although capable of storing quite general types of annotated data, MOBBED is particularly oriented to multichannel time series such as EEG that have event streams overlaid with sensor data. MOBBED directly supports access to individual events, data frames, and time-stamped feature vectors, allowing users to ask questions such as what types of events or features co-occur under various experimental conditions. A database provides several advantages not available to users who process one dataset at a time from the local file system. In addition to archiving primary data in a central place to save space and avoid inconsistencies, such a database allows users to manage, search, and retrieve events across multiple datasets without reading the entire dataset. The database also provides infrastructure for handling more complex event patterns that include environmental and contextual conditions. The database can also be used as a cache for expensive intermediate results that are reused in such activities as cross-validation of machine learning algorithms. MOBBED is implemented over PostgreSQL, a widely used open source database, and is freely available under the GNU general public license at http://visual.cs.utsa.edu/mobbed. Source and issue reports for MOBBED are maintained at http://vislab.github.com/MobbedMatlab/ PMID:24124417

  17. MOBBED: a computational data infrastructure for handling large collections of event-rich time series datasets in MATLAB.

    PubMed

    Cockfield, Jeremy; Su, Kyungmin; Robbins, Kay A

    2013-01-01

    Experiments to monitor human brain activity during active behavior record a variety of modalities (e.g., EEG, eye tracking, motion capture, respiration monitoring) and capture a complex environmental context leading to large, event-rich time series datasets. The considerable variability of responses within and among subjects in more realistic behavioral scenarios requires experiments to assess many more subjects over longer periods of time. This explosion of data requires better computational infrastructure to more systematically explore and process these collections. MOBBED is a lightweight, easy-to-use, extensible toolkit that allows users to incorporate a computational database into their normal MATLAB workflow. Although capable of storing quite general types of annotated data, MOBBED is particularly oriented to multichannel time series such as EEG that have event streams overlaid with sensor data. MOBBED directly supports access to individual events, data frames, and time-stamped feature vectors, allowing users to ask questions such as what types of events or features co-occur under various experimental conditions. A database provides several advantages not available to users who process one dataset at a time from the local file system. In addition to archiving primary data in a central place to save space and avoid inconsistencies, such a database allows users to manage, search, and retrieve events across multiple datasets without reading the entire dataset. The database also provides infrastructure for handling more complex event patterns that include environmental and contextual conditions. The database can also be used as a cache for expensive intermediate results that are reused in such activities as cross-validation of machine learning algorithms. MOBBED is implemented over PostgreSQL, a widely used open source database, and is freely available under the GNU general public license at http://visual.cs.utsa.edu/mobbed. Source and issue reports for MOBBED are maintained at http://vislab.github.com/MobbedMatlab/

  18. geomIO: A tool for geodynamicists to turn 2D cross-sections into 3D geometries

    NASA Astrophysics Data System (ADS)

    Baumann, Tobias; Bauville, Arthur

    2016-04-01

    In numerical deformation models, material properties are usually defined on elements (e.g., in body-fitted finite elements), or on a set of Lagrangian markers (Eulerian, ALE or mesh-free methods). In any case, geometrical constraints are needed to assign different material properties to the model domain. Whereas simple geometries such as spheres, layers or cuboids can easily be programmed, it quickly gets complex and time-consuming to create more complicated geometries for numerical model setups, especially in three dimensions. geomIO (geometry I/O, http://geomio.bitbucket.org/) is a MATLAB-based library that has two main functionalities. First, it can be used to create 3D volumes based on a series of 2D vector drawings similar to a CAD program; and second, it uses these 3D volumes to assign material properties to the numerical model domain. The drawings can conveniently be created using the open-source vector graphics software Inkscape. Adobe Illustrator is also partially supported. The drawings represent a series of cross-sections in the 3D model domain, for example, cross-sectional interpretations of seismic tomography. geomIO is then used to read the drawings and to create 3D volumes by interpolating between the cross-sections. In the second part, the volumes are used to assign material phases to markers inside the volumes. Multiple volumes can be created at the same time and, depending on the order of assignment, unions or intersections can be built to assign additional material phases. geomIO also offers the possibility to create 3D temperature structures for geodynamic models based on depth-dependent parameterisations, for example the half-space cooling model. In particular, this can be applied to geometries of subducting slabs of arbitrary shape. Yet geomIO is kept very general and can be used for a variety of applications. We present examples of setup generation from pictures of micro-scale tectonics and lithospheric-scale setups of 3D present-day model geometries.
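    geomIO itself is MATLAB-based and is not reproduced here; the following Python sketch only illustrates the core idea of interpolating a drawn cross-section between two depths (here, two hypothetical quadrilaterals) and assigning a phase to Lagrangian markers that fall inside, using matplotlib's point-in-polygon test.

    ```python
    import numpy as np
    from matplotlib.path import Path

    # two hypothetical cross-sections of the same body, drawn at y = 0 and y = 10
    poly_y0 = np.array([[0, 0], [4, 0], [4, 2], [0, 2]], float)    # (x, z) vertices
    poly_y10 = np.array([[1, 1], [5, 1], [5, 4], [1, 4]], float)

    def interpolated_polygon(y, y0=0.0, y1=10.0):
        """Linear interpolation of corresponding vertices between the two sections."""
        w = (y - y0) / (y1 - y0)
        return (1.0 - w) * poly_y0 + w * poly_y10

    def assign_phase(markers, phase_id=1, n_slices=21):
        """Give phase_id to markers (x, y, z) falling inside the interpolated volume."""
        m = np.asarray(markers, float)
        phases = np.zeros(len(m), dtype=int)
        ys = np.linspace(0.0, 10.0, n_slices)
        dy = ys[1] - ys[0]
        for y in ys:
            in_slab = np.abs(m[:, 1] - y) <= dy / 2
            if not in_slab.any():
                continue
            path = Path(interpolated_polygon(y))
            inside = path.contains_points(m[in_slab][:, [0, 2]])
            phases[np.where(in_slab)[0][inside]] = phase_id
        return phases

    markers = np.random.default_rng(2).uniform([0, 0, 0], [6, 10, 5], size=(1000, 3))
    print((assign_phase(markers) == 1).sum(), "markers assigned to the volume")
    ```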

  19. Structural and Practical Identifiability Issues of Immuno-Epidemiological Vector-Host Models with Application to Rift Valley Fever.

    PubMed

    Tuncer, Necibe; Gulbudak, Hayriye; Cannataro, Vincent L; Martcheva, Maia

    2016-09-01

    In this article, we discuss the structural and practical identifiability of a nested immuno-epidemiological model of arbovirus diseases, where host-vector transmission rate, host recovery, and disease-induced death rates are governed by the within-host immune system. We incorporate the newest ideas and the most up-to-date features of numerical methods to fit multi-scale models to multi-scale data. For an immunological model, we use Rift Valley Fever Virus (RVFV) time-series data obtained from livestock under laboratory experiments, and for an epidemiological model we incorporate a human compartment to the nested model and use the number of human RVFV cases reported by the CDC during the 2006-2007 Kenya outbreak. We show that the immunological model is not structurally identifiable for the measurements of time-series viremia concentrations in the host. Thus, we study the non-dimensionalized and scaled versions of the immunological model and prove that both are structurally globally identifiable. After fixing estimated parameter values for the immunological model derived from the scaled model, we develop a numerical method to fit observable RVFV epidemiological data to the nested model for the remaining parameter values of the multi-scale system. For the given (CDC) data set, Monte Carlo simulations indicate that only three parameters of the epidemiological model are practically identifiable when the immune model parameters are fixed. Alternatively, we fit the multi-scale data to the multi-scale model simultaneously. Monte Carlo simulations for the simultaneous fitting suggest that the parameters of the immunological model and the parameters of the immuno-epidemiological model are practically identifiable. We suggest that analytic approaches for studying the structural identifiability of nested models are a necessity, so that identifiable parameter combinations can be derived to reparameterize the nested model to obtain an identifiable one. This is a crucial step in developing multi-scale models which explain multi-scale data.

  20. Ecosystem functional assessment based on the "optical type" concept and self-similarity patterns: An application using MODIS-NDVI time series autocorrelation

    NASA Astrophysics Data System (ADS)

    Huesca, Margarita; Merino-de-Miguel, Silvia; Eklundh, Lars; Litago, Javier; Cicuéndez, Victor; Rodríguez-Rastrero, Manuel; Ustin, Susan L.; Palacios-Orueta, Alicia

    2015-12-01

    Remote sensing (RS) time series are an excellent operative source for information about the land surface across several scales and different levels of landscape heterogeneity. Ustin and Gamon (2010) proposed the new concept of "optical types" (OT), meaning "optically distinguishable functional types", as a way to better understand remote sensing signals related to the actual functional behavior of species that share common physiognomic forms but differ in functionality. Whereas the OT approach seems to be promising and consistent with ecological theory as a way to monitor vegetation derived from RS, it received little implementation. This work presents a method for implementing the OT concept for efficient monitoring of ecosystems based on RS time series. We propose relying on an ecosystem's repetitive pattern in the temporal domain (self-similarity) to assess its dynamics. Based on this approach, our main hypothesis is that distinct dynamics are intrinsic to a specific OT. Self-similarity level in the temporal domain within a broadleaf forest class was quantitatively assessed using the auto-correlation function (ACF), from statistical time series analysis. A vector comparison classification method, spectral angle mapper, and principal component analysis were used to identify general patterns related to forest dynamics. Phenological metrics derived from MODIS NDVI time series using the TIMESAT software, together with information from the National Forest Map were used to explain the different dynamics found. Results showed significant and highly stable self-similarity patterns in OTs that corresponded to forests under non-moisture-limited environments with an adaptation strategy based on a strong phenological synchrony with climate seasonality. These forests are characterized by dense closed canopy deciduous forests associated with high productivity and low biodiversity in terms of dominant species. Forests in transitional areas were associated with patterns of less temporal stability probably due to mixtures of different adaptation strategies (i.e., deciduous, marcescent and evergreen species) and higher functional diversity related to climate variability at long and short terms. A less distinct seasonality and even a double season appear in the OT of the broadleaf Mediterranean forest characterized by an open canopy dominated by evergreen-sclerophyllous formations. Within this forest, understory and overstory dynamics maximize functional diversity resulting in contrasting traits adapted to summer drought, winter frosts, and high precipitation variability.
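    As a minimal, self-contained illustration of the two ingredients named above (temporal self-similarity via the ACF, and a spectral-angle comparison of dynamics), the sketch below uses synthetic 16-day NDVI-like series rather than MODIS data.

    ```python
    import numpy as np

    def acf(x, max_lag):
        """Sample autocorrelation function up to max_lag."""
        x = np.asarray(x, float) - np.mean(x)
        var = np.sum(x ** 2)
        return np.array([np.sum(x[:-k] * x[k:]) / var if k else 1.0
                         for k in range(max_lag + 1)])

    def spectral_angle(a, b):
        """Angle (radians) between two feature vectors, as in spectral angle mapper."""
        a, b = np.asarray(a, float), np.asarray(b, float)
        return np.arccos(np.clip(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)), -1, 1))

    # synthetic 16-day NDVI series (23 samples/year) for two pixels over ten years
    t = np.arange(230)
    rng = np.random.default_rng(3)
    strong_season = 0.4 + 0.30 * np.sin(2 * np.pi * t / 23) + 0.02 * rng.normal(size=t.size)
    weak_season = 0.4 + 0.05 * np.sin(2 * np.pi * t / 23) + 0.10 * rng.normal(size=t.size)

    acf_strong = acf(strong_season, 46)    # two seasonal cycles of lags
    acf_weak = acf(weak_season, 46)
    print(acf_strong[23], acf_weak[23])            # self-similarity at the annual lag
    print(spectral_angle(acf_strong, acf_weak))    # dissimilarity between the two dynamics
    ```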

  1. Design of thrust vectoring exhaust nozzles for real-time applications using neural networks

    NASA Technical Reports Server (NTRS)

    Prasanth, Ravi K.; Markin, Robert E.; Whitaker, Kevin W.

    1991-01-01

    Thrust vectoring continues to be an important issue in military aircraft system designs. A recently developed concept of vectoring aircraft thrust makes use of flexible exhaust nozzles. Subtle modifications in the nozzle wall contours produce a non-uniform flow field containing a complex pattern of shock and expansion waves. The end result, due to the asymmetric velocity and pressure distributions, is vectored thrust. Specification of the nozzle contours required for a desired thrust vector angle (an inverse design problem) has been achieved with genetic algorithms. This approach is computationally intensive and prevents the nozzles from being designed in real-time, which is necessary for an operational aircraft system. An investigation was conducted into using genetic algorithms to train a neural network in an attempt to obtain, in real-time, two-dimensional nozzle contours. Results show that genetic algorithm trained neural networks provide a viable, real-time alternative for designing thrust vectoring nozzles contours. Thrust vector angles up to 20 deg were obtained within an average error of 0.0914 deg. The error surfaces encountered were highly degenerate and thus the robustness of genetic algorithms was well suited for minimizing global errors.

  2. Integrated optic vector-matrix multiplier

    DOEpatents

    Watts, Michael R [Albuquerque, NM

    2011-09-27

    A vector-matrix multiplier is disclosed which uses N different wavelengths of light that are modulated with amplitudes representing elements of an N×1 vector and combined to form an input wavelength-division multiplexed (WDM) light stream. The input WDM light stream is split into N streamlets from which each wavelength of the light is individually coupled out and modulated for a second time using an input signal representing elements of an M×N matrix, and is then coupled into an output waveguide for each streamlet to form an output WDM light stream which is detected to generate a product of the vector and matrix. The vector-matrix multiplier can be formed as an integrated optical circuit using either waveguide amplitude modulators or ring resonator amplitude modulators.
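    The patented device is optical, but its readout is an ordinary matrix-vector product; the snippet below is only a numeric analogy in which each wavelength's amplitude carries one vector element, each streamlet re-modulates the wavelengths by one matrix row, and a per-streamlet detector sums over wavelengths.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    N, M = 4, 3
    v = rng.random(N)          # amplitudes modulated onto N wavelengths (the N x 1 vector)
    W = rng.random((M, N))     # second-stage modulator settings (the M x N matrix)

    # optical picture: in streamlet m, wavelength n is re-modulated by W[m, n]
    # and all wavelengths are summed on that streamlet's detector
    detected = np.array([np.sum(W[m, :] * v) for m in range(M)])

    print(np.allclose(detected, W @ v))   # True: the detectors read out the product W v
    ```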

  3. Metal-organic frameworks for precise inclusion of single-stranded DNA and transfection in immune cells.

    PubMed

    Peng, Shuang; Bie, Binglin; Sun, Yangzesheng; Liu, Min; Cong, Hengjiang; Zhou, Wentao; Xia, Yucong; Tang, Heng; Deng, Hexiang; Zhou, Xiang

    2018-04-03

    Effective transfection of genetic molecules such as DNA usually relies on vectors that can reversibly uptake and release these molecules, and protect them from digestion by nuclease. Non-viral vectors meeting these requirements are rare due to the lack of specific interactions with DNA. Here, we design a series of four isoreticular metal-organic frameworks (Ni-IRMOF-74-II to -V) with progressively tuned pore size from 2.2 to 4.2 nm to precisely include single-stranded DNA (ssDNA, 11-53 nt), and to achieve reversible interaction between MOFs and ssDNA. The entire nucleic acid chain is completely confined inside the pores providing excellent protection, and the geometric distribution of the confined ssDNA is visualized by X-ray diffraction. Two MOFs in this series exhibit excellent transfection efficiency in mammalian immune cells, 92% in the primary mouse immune cells (CD4+ T cell) and 30% in human immune cells (THP-1 cell), unrivaled by the commercialized agents (Lipo and Neofect).

  4. Stochastic nature of Landsat MSS data

    NASA Technical Reports Server (NTRS)

    Labovitz, M. L.; Masuoka, E. J.

    1987-01-01

    A multiple series generalization of the ARIMA models is used to model Landsat MSS scan lines as sequences of vectors, each vector having four elements (bands). The purpose of this work is to investigate whether Landsat scan lines can be described by a general multiple series linear stochastic model and whether the coefficients of such a model vary as a function of satellite system and target attributes. To accomplish this objective, an exploratory experimental design was set up incorporating six factors, four representing target attributes - location, cloud cover, row (within location), and column (within location) - and two factors representing system attributes - satellite number and detector bank. Each factor was included in the design at two levels and, with two replicates per treatment, 128 scan lines were analyzed. The results of the analysis suggest that a multiple AR(4) model is an adequate representation across all scan lines. Furthermore, the coefficients of the AR(4) model vary with location, particularly changes in physiography (slope regimes), and with percent cloud cover, but are insensitive to changes in system attributes.
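    The study's exact multiple-series ARIMA formulation is not given above; the sketch below simply fits a vector AR(4) to a synthetic four-band "scan line" with statsmodels (an assumed dependency) to show the shape of the coefficient matrices involved.

    ```python
    import numpy as np
    from statsmodels.tsa.api import VAR

    # synthetic 4-band scan line: 512 pixels, bands correlated along the scan direction
    rng = np.random.default_rng(5)
    n_pix, n_bands = 512, 4
    y = np.zeros((n_pix, n_bands))
    for i in range(1, n_pix):
        y[i] = 0.8 * y[i - 1] + rng.normal(scale=1.0, size=n_bands)

    model = VAR(y)
    res = model.fit(4)          # multiple AR(4): one 4x4 coefficient matrix per lag
    print(res.coefs.shape)      # (4, 4, 4): lag, equation, regressor
    ```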

  5. Motion in a Central Field in the Presence of a Constant Perturbing Acceleration in a Coordinate System Comoving with the Velocity Vector

    NASA Astrophysics Data System (ADS)

    Batmunkh, N.; Sannikova, T. N.; Kholshevnikov, K. V.

    2018-04-01

    The motion of a zero-mass point under the action of gravitation toward a central body and a perturbing acceleration P is considered. The magnitude of P is taken to be small compared to the main acceleration due to the gravitation of the central body, and the components of the vector P are taken to be constant in a reference frame with its origin at the central body and its axes directed along the velocity vector, normal to the velocity vector in the plane of the osculating orbit, and along the binormal. The equations in the mean elements were obtained in an earlier study. The algorithm used to solve these equations is given in this study. This algorithm is analogous to one constructed earlier for the case when P is constant in a reference frame tied to the radius vector. The properties of the solutions are similar. The main difference is that, in the most important cases, the quadratures to which the solution reduces lead to non-elementary functions. However, they can be expressed as series in powers of the eccentricity e that converge for e < 1, and often also for e = 1.

  6. Comprehensive seismic monitoring of the Cascadia megathrust with real-time GPS

    NASA Astrophysics Data System (ADS)

    Melbourne, T. I.; Szeliga, W. M.; Santillan, V. M.; Scrivner, C. W.; Webb, F.

    2013-12-01

    We have developed a comprehensive real-time GPS-based seismic monitoring system for the Cascadia subduction zone based on 1- and 5-second point position estimates computed within the ITRF08 reference frame. A Kalman filter stream editor, which uses a geometry-free combination of phase and range observables to speed convergence while also producing independent estimates of carrier phase biases and ionosphere delay, pre-cleans the raw satellite measurements. These are then analyzed with GIPSY-OASIS using satellite clock and orbit corrections streamed continuously from the International GNSS Service (IGS) and the German Aerospace Center (DLR). The resulting RMS position scatter is less than 3 cm, and typical latencies are under 2 seconds. Currently 31 coastal Washington, Oregon, and northern California stations from the combined PANGA and PBO networks are analyzed. We are now ramping up to include all of the remaining 400+ stations currently operating throughout the Cascadia subduction zone, all of which are high-rate and telemetered in real-time to CWU. These receivers span the M9 megathrust, M7 crustal faults beneath population centers, several active Cascades volcanoes, and a host of other hazard sources. To use the point position streams for seismic monitoring, we have developed an inter-process client communication package that captures, buffers and re-broadcasts real-time positions and covariances to a variety of seismic estimation routines running on distributed hardware. An aggregator ingests, re-streams and can rebroadcast up to 24 hours of point-positions and resultant seismic estimates derived from the point positions to application clients distributed across the web. A suite of seismic monitoring applications has also been written, which includes position time series analysis, instantaneous displacement vectors, and peak ground displacement contouring and mapping. We have also implemented a continuous estimation of finite-fault slip along the Cascadia megathrust using a NIF-type approach. This currently operates on the terrestrial GPS data streams, but could readily be expanded to use real-time offshore geodetic measurements as well. The continuous slip distributions are used in turn to compute tsunami excitation and, when convolved with pre-computed, hydrodynamic Green functions calculated using the COMCOT tsunami modeling software, run-up estimates for the entire Cascadia coastal margin. Finally, a suite of data visualization tools has been written to allow interaction with the real-time position streams and seismic estimates based on them, including time series plotting, instantaneous offset vectors, peak ground deformation contouring, finite-fault inversions, and tsunami run-up. This suite is currently bundled within a single client written in JAVA, called 'GPS Cockpit,' which is available for download.

  7. An Efficient Wait-Free Vector

    DOE PAGES

    Feldman, Steven; Valera-Leon, Carlos; Dechev, Damian

    2016-03-01

    The vector is a fundamental data structure, which provides constant-time access to a dynamically-resizable range of elements. Currently, there exist no wait-free vectors. The only non-blocking version supports only a subset of the sequential vector API and exhibits significant synchronization overhead caused by supporting opposing operations. Since many applications operate in phases of execution, wherein in each phase only a subset of operations is used, this overhead is unnecessary for the majority of the application. To address the limitations of the non-blocking version, we present a new design that is wait-free, supports more of the operations provided by the sequential vector, and provides alternative implementations of key operations. These alternatives allow the developer to balance the performance and functionality of the vector as requirements change throughout execution. Compared to the known non-blocking version and the concurrent vector found in Intel’s TBB library, our design outperforms or provides comparable performance in the majority of tested scenarios. Over all tested scenarios, the presented design performs an average of 4.97 times more operations per second than the non-blocking vector and 1.54 times more than the TBB vector. In a scenario designed to simulate the filling of a vector, the performance improvement increases to 13.38 and 1.16 times. This work presents the first ABA-free non-blocking vector. Finally, unlike the other non-blocking approach, all operations are wait-free and bounds-checked and elements are stored contiguously in memory.

  8. Nonstationary Influence of El Niño on the Synchronous Dengue Epidemics in Thailand

    PubMed Central

    Cazelles, Bernard; Chavez, Mario; McMichael, Anthony J; Hales, Simon

    2005-01-01

    Background Several factors, including environmental and climatic factors, influence the transmission of vector-borne diseases. Nevertheless, the identification and relative importance of climatic factors for vector-borne diseases remain controversial. Dengue is the world's most important viral vector-borne disease, and the controversy about climatic effects also applies in this case. Here we address the role of climate variability in shaping the interannual pattern of dengue epidemics. Methods and Findings We have analysed monthly data for Thailand from 1983 to 1997 using wavelet approaches that can describe nonstationary phenomena and that also allow the quantification of nonstationary associations between time series. We report a strong association between monthly dengue incidence in Thailand and the dynamics of El Niño for the 2–3-y periodic mode. This association is nonstationary, seen only from 1986 to 1992, and appears to have a major influence on the synchrony of dengue epidemics in Thailand. Conclusion The underlying mechanism for the synchronisation of dengue epidemics may resemble that of a pacemaker, in which intrinsic disease dynamics interact with climate variations driven by El Niño to propagate travelling waves of infection. When association with El Niño is strong in the 2–3-y periodic mode, one observes high synchrony of dengue epidemics over Thailand. When this association is absent, the seasonal dynamics become dominant and the synchrony initiated in Bangkok collapses. PMID:15839751

  9. Spherical Harmonic Inductive Detection Coils and their use In Dynamic Pre-emphasis for Magnetic Resonance Imaging

    NASA Astrophysics Data System (ADS)

    Edler, Karl T.

    The issue of eddy currents induced by the rapid switching of magnetic field gradients is a long-standing problem in magnetic resonance imaging. A new method for dealing with this problem is presented whereby spatial harmonic components of the magnetic field are continuously sensed, through their temporal rates of change, and corrected. In this way, the effects of the eddy currents on multiple spatial harmonic components of the magnetic field can be detected and corrections applied during the rise time of the gradients. Sensing the temporal changes in each spatial harmonic is made possible with specially designed detection coils. However to make the design of these coils possible, general relationships between the spatial harmonics of the field, scalar potential, and vector potential are found within the quasi-static approximation. These relationships allow the vector potential to be found from the field -- an inverse curl operation -- and may be of use beyond the specific problem of detection coil design. Using the detection coils as sensors, methods are developed for designing a negative feedback system to control the eddy current effects and optimizing that system with respect to image noise and distortion. The design methods are successfully tested in a series of proof-of-principle experiments which lead to a discussion of how to incorporate similar designs into an operational MRI. Keywords: magnetic resonance imaging, eddy currents, dynamic shimming, negative feedback, quasi-static fields, vector potential, inverse curl

  10. Diurnal biting periodicity of parous Simulium (Diptera: Simuliidae) vectors in the onchocerciasis Amazonian focus.

    PubMed

    Grillet, M-E; Villamizar, N J; Cortez, J; Frontado, H L; Escalona, M; Vivas-Martínez, S; Basáñez, M-G

    2005-05-01

    We describe the hourly patterns of parous biting activity of the three main simuliid vectors of human onchocerciasis in the Amazonian focus straddling Venezuela and Brazil, namely, Simulium guianense s.l. Wise, S. incrustatum Lutz, and S. oyapockense s.l. Floch and Abonnenc. Time series of the hourly numbers of host-seeking parous flies caught in five Yanomami villages during dry, rainy, and their transition periods from 1995 to 2001 were investigated using harmonic analysis (assuming an underlying circadian rhythm) and periodic correlation (based on Spearman's r). Parous S. guianense s.l. showed a bimodal activity pattern, with a minor peak in mid-morning and a major peak at 16:00 h. S. incrustatum exhibited mainly unimodal activity during either early morning or midday according to locality. S. oyapockense s.l. bit humans throughout the day mainly between 10:00 and 16:00 h but also showed bimodal periodicity in some localities. Superimposed on the endogenous, species-specific daily cycles, parous activity showed variation according to locality, season, air temperature and relative humidity, with biting being promoted by warmer and drier hours during wet seasons/periods and reduced during hotter times in dry seasons or transitions. The results are discussed in terms of their implications for blackfly biology and ecology as well as onchocerciasis epidemiology and control.
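    The harmonic analysis itself is not shown above; below is a minimal harmonic-regression sketch of an assumed circadian rhythm (24 h fundamental plus a 12 h harmonic) fitted by least squares to synthetic hourly counts, not the field data.

    ```python
    import numpy as np

    hours = np.arange(6, 19)                    # hypothetical catching hours, 06:00-18:00
    rng = np.random.default_rng(6)
    # synthetic bimodal biting activity with noise
    counts = (5 + 3 * np.sin(2 * np.pi * (hours - 10) / 24)
                + 4 * np.sin(4 * np.pi * (hours - 4) / 24)
                + rng.normal(0, 0.5, hours.size))

    # harmonic regression with 24 h and 12 h components (assumed circadian rhythm)
    X = np.column_stack([np.ones_like(hours, dtype=float),
                         np.cos(2 * np.pi * hours / 24), np.sin(2 * np.pi * hours / 24),
                         np.cos(4 * np.pi * hours / 24), np.sin(4 * np.pi * hours / 24)])
    beta, *_ = np.linalg.lstsq(X, counts, rcond=None)
    fitted = X @ beta
    print(hours[np.argmax(fitted)])             # hour of the fitted major activity peak
    ```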

  11. A graphical vector autoregressive modelling approach to the analysis of electronic diary data

    PubMed Central

    2010-01-01

    Background In recent years, electronic diaries are increasingly used in medical research and practice to investigate patients' processes and fluctuations in symptoms over time. To model dynamic dependence structures and feedback mechanisms between symptom-relevant variables, a multivariate time series method has to be applied. Methods We propose to analyse the temporal interrelationships among the variables by a structural modelling approach based on graphical vector autoregressive (VAR) models. We give a comprehensive description of the underlying concepts and explain how the dependence structure can be recovered from electronic diary data by a search over suitable constrained (graphical) VAR models. Results The graphical VAR approach is applied to the electronic diary data of 35 obese patients with and without binge eating disorder (BED). The dynamic relationships for the two subgroups between eating behaviour, depression, anxiety and eating control are visualized in two path diagrams. Results show that the two subgroups of obese patients with and without BED are distinguishable by the temporal patterns which influence their respective eating behaviours. Conclusion The use of the graphical VAR approach for the analysis of electronic diary data leads to a deeper insight into patient's dynamics and dependence structures. An increasing use of this modelling approach could lead to a better understanding of complex psychological and physiological mechanisms in different areas of medical care and research. PMID:20359333

  12. Flare Prediction Using Photospheric and Coronal Image Data

    NASA Astrophysics Data System (ADS)

    Jonas, E.; Shankar, V.; Bobra, M.; Recht, B.

    2016-12-01

    We attempt to forecast M-and X-class solar flares using a machine-learning algorithm and five years of image data from both the Helioseismic and Magnetic Imager (HMI) and Atmospheric Imaging Assembly (AIA) instruments aboard the Solar Dynamics Observatory. HMI is the first instrument to continuously map the full-disk photospheric vector magnetic field from space (Schou et al., 2012). The AIA instrument maps the transition region and corona using various ultraviolet wavelengths (Lemen et al., 2012). HMI and AIA data are taken nearly simultaneously, providing an opportunity to study the entire solar atmosphere at a rapid cadence. Most flare forecasting efforts described in the literature use some parameterization of solar data - typically of the photospheric magnetic field within active regions. These numbers are considered to capture the information in any given image relevant to predicting solar flares. In our approach, we use HMI and AIA images of solar active regions and a deep convolutional kernel network to predict solar flares. This is effectively a series of shallow-but-wide random convolutional neural networks stacked and then trained with a large-scale block-weighted least squares solver. This algorithm automatically determines which patterns in the image data are most correlated with flaring activity and then uses these patterns to predict solar flares. Using the recently-developed KeystoneML machine learning framework, we construct a pipeline to process millions of images in a few hours on commodity cloud computing infrastructure. This is the first time vector magnetic field images have been combined with coronal imagery to forecast solar flares. This is also the first time such a large dataset of solar images, some 8.5 terabytes of images that together capture over 3000 active regions, has been used to forecast solar flares. We evaluate our method using various flare prediction windows defined in the literature (e.g. Ahmed et al., 2013) and a novel per-hour time series we've constructed which more closely mimics the demands of an operational solar flare prediction system. We estimate the performance of our algorithm using the True Skill Statistic (TSS; Bloomfield et al., 2012). We find that our algorithm gives a high TSS score and predictive abilities.
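    The evaluation metric mentioned above, the True Skill Statistic, is simple to compute; below is a small helper applied to hypothetical per-window flare labels and forecasts, not the authors' results.

    ```python
    import numpy as np

    def true_skill_statistic(y_true, y_pred):
        """TSS = hit rate - false alarm rate; insensitive to the flare/no-flare class ratio."""
        y_true = np.asarray(y_true, bool)
        y_pred = np.asarray(y_pred, bool)
        tp = np.sum(y_true & y_pred)
        fn = np.sum(y_true & ~y_pred)
        fp = np.sum(~y_true & y_pred)
        tn = np.sum(~y_true & ~y_pred)
        return tp / (tp + fn) - fp / (fp + tn)

    # hypothetical per-window flare labels and forecasts
    truth    = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
    forecast = [1, 0, 1, 1, 0, 0, 0, 0, 0, 0]
    print(true_skill_statistic(truth, forecast))   # 2/3 - 1/7 ~ 0.52
    ```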

  13. Selection of optimal complexity for ENSO-EMR model by minimum description length principle

    NASA Astrophysics Data System (ADS)

    Loskutov, E. M.; Mukhin, D.; Mukhina, A.; Gavrilov, A.; Kondrashov, D. A.; Feigin, A. M.

    2012-12-01

    One of the main problems arising in modeling data taken from a natural system is finding a phase space suitable for construction of the evolution operator model. Since we usually deal with strongly high-dimensional behavior, we are forced to construct a model working in some projection of the system phase space corresponding to the time scales of interest. Selection of an optimal projection is a non-trivial problem since there are many ways to reconstruct phase variables from a given time series, especially in the case of a spatio-temporal data field. Actually, finding an optimal projection is a significant part of model selection because, on the one hand, the transformation of the data to some phase-variable vector can be considered a required component of the model. On the other hand, such an optimization of a phase space makes sense only in relation to the parametrization of the model we use, i.e. the representation of the evolution operator, so we should find an optimal structure of the model together with the phase-variable vector. In this paper we propose to use the principle of minimum description length (Molkov et al., 2009) for selecting models of optimal complexity. The proposed method is applied to optimization of the Empirical Model Reduction (EMR) of the ENSO phenomenon (Kravtsov et al., 2005; Kondrashov et al., 2005). This model operates within a subset of leading EOFs constructed from the spatio-temporal field of SST in the Equatorial Pacific, and has the form of multi-level stochastic differential equations (SDEs) with polynomial parameterization of the right-hand side. Optimal values for the number of EOFs, the order of the polynomial and the number of levels are estimated from the Equatorial Pacific SST dataset. References: Ya. Molkov, D. Mukhin, E. Loskutov, G. Fidelin and A. Feigin, Using the minimum description length principle for global reconstruction of dynamic systems from noisy time series, Phys. Rev. E, Vol. 80, P 046207, 2009; Kravtsov S, Kondrashov D, Ghil M, 2005: Multilevel regression modeling of nonlinear processes: Derivation and applications to climatic variability. J. Climate, 18 (21): 4404-4424; D. Kondrashov, S. Kravtsov, A. W. Robertson and M. Ghil, 2005. A hierarchy of data-based ENSO models. J. Climate, 18, 4425-4444.

  14. Application of a VLSI vector quantization processor to real-time speech coding

    NASA Technical Reports Server (NTRS)

    Davidson, G.; Gersho, A.

    1986-01-01

    Attention is given to a working vector quantization processor for speech coding that is based on a first-generation VLSI chip which efficiently performs the pattern-matching operation needed for the codebook search process (CPS). Using this chip, the CPS architecture has been successfully incorporated into a compact, single-board Vector PCM implementation operating at 7-18 kbits/sec. A real-time Adaptive Vector Predictive Coder system using the CPS has also been implemented.
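    The chip's pattern-matching step amounts to a full-search nearest-codeword lookup; the sketch below shows that operation in NumPy on hypothetical codebook and speech vectors, not the VLSI implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    codebook = rng.normal(size=(256, 8))    # 256 codewords of 8 samples each (hypothetical)
    frames = rng.normal(size=(100, 8))      # input speech vectors to be quantized

    # full-search vector quantization: index of the minimum-distortion codeword per frame
    d2 = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = d2.argmin(axis=1)             # transmitted codebook indices (8 bits per frame)
    reconstructed = codebook[indices]       # decoder lookup
    print(indices[:5], float(d2.min(axis=1).mean()))
    ```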

  15. The New Self-Inactivating Lentiviral Vector for Thalassemia Gene Therapy Combining Two HPFH Activating Elements Corrects Human Thalassemic Hematopoietic Stem Cells

    PubMed Central

    Papanikolaou, Eleni; Georgomanoli, Maria; Stamateris, Evangelos; Panetsos, Fottes; Karagiorga, Markisia; Tsaftaridis, Panagiotis; Graphakos, Stelios

    2012-01-01

    To address how low titer, variable expression, and gene silencing affect gene therapy vectors for hemoglobinopathies, in a previous study we successfully used the HPFH (hereditary persistence of fetal hemoglobin)-2 enhancer in a series of oncoretroviral vectors. On the basis of these data, we generated a novel insulated self-inactivating (SIN) lentiviral vector, termed GGHI, carrying the Aγ-globin gene with the −117 HPFH point mutation and the HPFH-2 enhancer and exhibiting a pancellular pattern of Aγ-globin gene expression in MEL-585 clones. To assess the eventual clinical feasibility of this vector, GGHI was tested on CD34+ hematopoietic stem cells from nonmobilized peripheral blood or bone marrow from 20 patients with β-thalassemia. Our results show that GGHI increased the production of γ-globin by 32.9% as measured by high-performance liquid chromatography (p=0.001), with a mean vector copy number per cell of 1.1 and a mean transduction efficiency of 40.3%. Transduced populations also exhibited a lower rate of apoptosis and resulted in improvement of erythropoiesis with a higher percentage of orthochromatic erythroblasts. This is the first report of a locus control region (LCR)-free SIN insulated lentiviral vector that can be used to efficiently produce the anticipated therapeutic levels of γ-globin protein in the erythroid progeny of primary human thalassemic hematopoietic stem cells in vitro. PMID:21875313

  16. Comparison of vector autoregressive (VAR) and vector error correction models (VECM) for index of ASEAN stock price

    NASA Astrophysics Data System (ADS)

    Suharsono, Agus; Aziza, Auliya; Pramesti, Wara

    2017-12-01

    Capital markets can be an indicator of the development of a country's economy. The presence of capital markets also encourages investors to trade; therefore investors need information and knowledge of which shares are better. One way of supporting decisions for short-term investments is modeling to forecast stock prices in the period to come. The issue of ASEAN stock market integration is very important. The problem is that ASEAN does not have much time to implement a single market in the economy, so it would be very interesting if there were evidence of whether the capital markets in the ASEAN region, especially those of Indonesia, Malaysia, the Philippines, Singapore and Thailand, deserve to be integrated or are still segmented. Furthermore, it should also be known and proven what kind of integration is happening: whether a capital market only affects other capital markets, is only influenced by other capital markets, or both affects and is influenced by other capital markets within the ASEAN region. This study compares forecasts of the Indonesian share price index (IHSG) with those of neighboring ASEAN countries, including developed and developing countries such as Malaysia (KLSE), Singapore (SGE), Thailand (SETI) and the Philippines (PSE), to find out which country's stock market is the most dominant and influential. These countries are the founders of ASEAN and owners of share price indices that have close relations with Indonesia in terms of trade, especially exports and imports. Stock price modeling in this research uses multivariate time series analysis, namely VAR (Vector Autoregressive) and VECM (Vector Error Correction Modeling) models. VAR and VECM models not only predict more than one variable but also capture the interrelations between the variables. If the white-noise assumption is not met in the VAR modeling, the cause can be assumed to be the presence of outliers. With this modeling it is possible to identify the pattern of relationships or linkages among the share prices of the ASEAN countries. The best model in the comparison for the ASEAN stock price indices is the VAR.
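    The estimation workflow is not shown above; below is a sketch of a typical VAR-versus-VECM decision on synthetic index levels, using statsmodels (assumed available) for the Johansen cointegration test and the VECM fit.

    ```python
    import numpy as np
    from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

    # synthetic "index" levels: two series sharing one common stochastic trend
    rng = np.random.default_rng(8)
    trend = np.cumsum(rng.normal(size=500))
    levels = np.column_stack([trend + rng.normal(scale=0.5, size=500),
                              0.8 * trend + rng.normal(scale=0.5, size=500)])

    jres = coint_johansen(levels, det_order=0, k_ar_diff=1)
    print(jres.lr1)       # trace statistics
    print(jres.cvt)       # critical values (90/95/99%)

    # if a cointegrating rank of 1 is supported, fit a VECM; otherwise fall back to a VAR
    vecm = VECM(levels, k_ar_diff=1, coint_rank=1, deterministic="n").fit()
    print(vecm.alpha)     # adjustment coefficients
    print(vecm.beta)      # cointegrating vector
    ```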

  17. Extending the length and time scales of Gram–Schmidt Lyapunov vector computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costa, Anthony B., E-mail: acosta@northwestern.edu; Green, Jason R., E-mail: jason.green@umb.edu; Department of Chemistry, University of Massachusetts Boston, Boston, MA 02125

    Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram–Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N^2 (with the particle count). This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram–Schmidt vectors. The first is a distributed-memory message-passing method using ScaLAPACK. The second uses the newly-released MAGMA library for GPUs. We compare the performance of both codes for Lennard–Jones fluids from N=100 to 1300 between Intel Nehalem/InfiniBand DDR and NVIDIA C2050 architectures. To the best of our knowledge, these are the largest systems for which the Gram–Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra.

  18. Evaluation of the SPAR thermal analyzer on the CYBER-203 computer

    NASA Technical Reports Server (NTRS)

    Robinson, J. C.; Riley, K. M.; Haftka, R. T.

    1982-01-01

    The use of the CYBER 203 vector computer for thermal analysis is investigated. Strengths of the CYBER 203 include the ability to perform, in vector mode using a 64 bit word, 50 million floating point operations per second (MFLOPS) for addition and subtraction, 25 MFLOPS for multiplication and 12.5 MFLOPS for division. The speed of scalar operation is comparable to that of a CDC 7600 and is some 2 to 3 times faster than Langley's CYBER 175s. The CYBER 203 has 1,048,576 64-bit words of real memory with an 80 nanosecond (nsec) access time. Memory is bit addressable and provides single error correction, double error detection (SECDED) capability. The virtual memory capability handles data in either 512 or 65,536 word pages. The machine has 256 registers with a 40 nsec access time. The weaknesses of the CYBER 203 include the amount of vector operation overhead and some data storage limitations. In vector operations there is a considerable amount of time before a single result is produced so that vector calculation speed is slower than scalar operation for short vectors.

  19. Generating synthetic wave climates for coastal modelling: a linear mixed modelling approach

    NASA Astrophysics Data System (ADS)

    Thomas, C.; Lark, R. M.

    2013-12-01

    Numerical coastline morphological evolution models require wave climate properties to drive morphological change through time. Wave climate properties (typically wave height, period and direction) may be temporally fixed, culled from real wave buoy data, or allowed to vary in some way defined by a Gaussian or other pdf. However, to examine sensitivity of coastline morphologies to wave climate change, it seems desirable to be able to modify wave climate time series from a current to some new state along a trajectory, but in a way consistent with, or initially conditioned by, the properties of existing data, or to generate fully synthetic data sets with realistic time series properties. For example, mean or significant wave height time series may have underlying periodicities, as revealed in numerous analyses of wave data. Our motivation is to develop a simple methodology to generate synthetic wave climate time series that can change in some stochastic way through time. We wish to use such time series in a coastline evolution model to test sensitivities of coastal landforms to changes in wave climate over decadal and centennial scales. We have worked initially on time series of significant wave height, based on data from a Waverider III buoy located off the coast of Yorkshire, England. The statistical framework for the simulation is the linear mixed model. The target variable, perhaps after transformation (Box-Cox), is modelled as a multivariate Gaussian, the mean modelled as a function of a fixed effect, and two random components, one of which is independently and identically distributed (iid) and the second of which is temporally correlated. The model was fitted to the data by likelihood methods. We considered the option of a periodic mean, the period either fixed (e.g. at 12 months) or estimated from the data. We considered two possible correlation structures for the second random effect. In one, the correlation decays exponentially with time. In the second (spherical) model, it cuts off at a temporal range. Having fitted the model, multiple realisations were generated; the random effects were simulated by specifying a covariance matrix for the simulated values, with the estimated parameters. The Cholesky factorisation of the covariance matrix was computed and realizations of the random component of the model generated by pre-multiplying a vector of iid standard Gaussian variables by the lower triangular factor. The resulting random variate was added to the mean value computed from the fixed effects, and the result back-transformed to the original scale of the measurement. Realistic simulations result from the approach described above. Background exploratory data analysis was undertaken on 20-day sets of 30-minute buoy data, selected from days 5-24 of January, April, July and October 2011, to elucidate daily to weekly variations and to keep numerical analysis tractable computationally. Work remains to be undertaken to develop suitable models for synthetic directional data. We suggest that the general principles of the method will have applications in other geomorphological modelling endeavours requiring time series of stochastically variable environmental parameters.
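    A condensed sketch of the simulation recipe described above (a periodic fixed-effect mean plus an iid component and an exponentially correlated component realized through the Cholesky factor of the covariance) is given below; all parameter values are made up rather than fitted, and the Box-Cox step is omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    n = 480                                  # 20 days of 30-minute significant wave heights
    t = np.arange(n) * 0.5                   # hours

    # fixed effect: a periodic mean (12-hour period assumed for illustration)
    mean = 1.5 + 0.4 * np.sin(2 * np.pi * t / 12.0)

    # random effects: iid noise plus a temporally correlated component with an
    # exponential correlation function, simulated through the Cholesky factor
    sigma_iid, sigma_corr, range_h = 0.10, 0.30, 6.0
    lags = np.abs(t[:, None] - t[None, :])
    cov = sigma_corr ** 2 * np.exp(-lags / range_h) + sigma_iid ** 2 * np.eye(n)
    L = np.linalg.cholesky(cov)

    hs = mean + L @ rng.standard_normal(n)   # one synthetic Hs realisation
    print(hs[:5])
    ```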

  20. Analysis of vector wind change with respect to time for Cape Kennedy, Florida

    NASA Technical Reports Server (NTRS)

    Adelfang, S. I.

    1978-01-01

    Multivariate analysis was used to examine the joint distribution of the four variables represented by the components of the wind vector at an initial time and after a specified elapsed time; this distribution is hypothesized to be quadravariate normal. The fourteen statistics of this distribution, calculated from 15 years of twice-daily rawinsonde data, are presented by monthly reference periods for altitudes from 0 to 27 km. The hypotheses that the wind component change with respect to time is univariate normal, that the joint distribution of wind component changes is bivariate normal, and that the modulus of vector wind change is Rayleigh are tested by comparison with observed distributions. Statistics of the conditional bivariate normal distributions of vector wind at a future time given the vector wind at an initial time are derived. Wind changes over time periods from 1 to 5 hours, calculated from Jimsphere data, are presented. Extension of the theoretical prediction (based on rawinsonde data) of the wind component change standard deviation to time periods of 1 to 5 hours falls (with a few exceptions) within the 95 percentile confidence band of the population estimate obtained from the Jimsphere sample data. The joint distributions of wind change components, conditional wind components, and 1 km vector wind shear change components are illustrated by probability ellipses at the 95 percentile level.
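
    As an illustration of the conditional-distribution step, the sketch below partitions an assumed 4x4 covariance matrix for (u1, v1, u2, v2) and computes the conditional bivariate normal distribution of the future wind given the initial wind; the mean vector and covariance values are made up for the example, not taken from the rawinsonde statistics.

```python
import numpy as np

# Sketch: conditional distribution of the future wind components (u2, v2)
# given the initial wind (u1, v1), for a quadravariate normal model.
# The mean vector and covariance matrix below are illustrative only.
mu = np.array([2.0, -1.0, 2.5, -0.5])        # [u1, v1, u2, v2]
Sigma = np.array([[25.0,  3.0, 18.0,  2.0],
                  [ 3.0, 16.0,  2.0, 11.0],
                  [18.0,  2.0, 27.0,  3.5],
                  [ 2.0, 11.0,  3.5, 18.0]])

S11, S12 = Sigma[:2, :2], Sigma[:2, 2:]
S21, S22 = Sigma[2:, :2], Sigma[2:, 2:]

def conditional_future_wind(u1v1):
    """Mean and covariance of (u2, v2) given an observed initial wind (u1, v1)."""
    w = np.asarray(u1v1, dtype=float)
    mean = mu[2:] + S21 @ np.linalg.solve(S11, w - mu[:2])
    cov = S22 - S21 @ np.linalg.solve(S11, S12)
    return mean, cov

mean, cov = conditional_future_wind([5.0, 0.0])
```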

  1. Evaluation of Aerodynamic Drag and Torque for External Tanks in Low Earth Orbit

    PubMed Central

    Stone, William C.; Witzgall, Christoph

    2006-01-01

    A numerical procedure is described in which the aerodynamic drag and torque in low Earth orbit are calculated for a prototype Space Shuttle external tank and its components, the “LO2” and “LH2” tanks, carrying liquid oxygen and hydrogen, respectively, for any given angle of attack. Calculations assume the hypersonic limit of free molecular flow theory. Each shell of revolution is assumed to be described by a series of parametric equations for its contour. It is discretized into circular cross sections perpendicular to the axis of revolution, which yield a series of ellipses when projected according to the given angle of attack. The drag profile, that is, the projection of the entire shell, is approximated by the convex envelope of those ellipses. The area of the drag profile, that is, the drag area, and its center of area moment, that is, the drag center, are then calculated and permit determination of the drag vector and the eccentricity vector from the center of gravity of the shell to the drag center. The aerodynamic torque is obtained as the cross product of those vectors. The tanks are assumed to be either evacuated or pressurized with a uniform internal gas distribution: dynamic shifting of the tank center of mass due to residual propellant sloshing is not considered. PMID:27274926
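
    A rough sketch of the geometric core of the procedure follows: circular cross sections of a shell of revolution are projected as ellipses for a given angle of attack, and the drag area and drag center are taken from the convex hull of the sampled ellipse points. The contour used here (a cylinder with a hemispherical nose) and all dimensions are illustrative, not the external-tank geometry.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Sketch: approximate the drag profile of a shell of revolution as the convex
# hull of its projected circular cross sections, for a given angle of attack.
alpha = np.radians(30.0)                     # angle of attack
x = np.linspace(0.0, 40.0, 201)              # stations along the axis (m)
r = np.where(x < 4.0, np.sqrt(np.maximum(0.0, 16.0 - (x - 4.0) ** 2)), 4.0)

theta = np.linspace(0.0, 2.0 * np.pi, 72, endpoint=False)
pts = []
for xi, ri in zip(x, r):
    # Projected ellipse: the station offsets by xi*sin(alpha) in the plane,
    # and the circle foreshortens to semi-axes (ri*cos(alpha), ri).
    u = xi * np.sin(alpha) + ri * np.cos(theta) * np.cos(alpha)
    v = ri * np.sin(theta)
    pts.append(np.column_stack([u, v]))
pts = np.vstack(pts)

hull = ConvexHull(pts)
drag_area = hull.volume                      # in 2-D, .volume is the enclosed area
verts = pts[hull.vertices]                   # hull polygon vertices (CCW order)

# Drag center: centroid of the hull polygon by fan triangulation.
a1, a2 = verts[1:-1] - verts[0], verts[2:] - verts[0]
tri_a = 0.5 * np.abs(a1[:, 0] * a2[:, 1] - a1[:, 1] * a2[:, 0])
tri_c = (verts[0] + verts[1:-1] + verts[2:]) / 3.0
drag_center = (tri_a[:, None] * tri_c).sum(axis=0) / tri_a.sum()
```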

  2. Detecting dynamical boundaries from kinematic data in biomechanics

    NASA Astrophysics Data System (ADS)

    Ross, Shane D.; Tanaka, Martin L.; Senatore, Carmine

    2010-03-01

    Ridges in the state space distribution of finite-time Lyapunov exponents can be used to locate dynamical boundaries. We describe a method for obtaining dynamical boundaries using only trajectories reconstructed from time series, expanding on the current approach which requires a vector field in the phase space. We analyze problems in musculoskeletal biomechanics, considered as exemplars of a class of experimental systems that contain separatrix features. Particular focus is given to postural control and balance, considering both models and experimental data. Our success in determining the boundary between recovery and failure in human balance activities suggests this approach will provide new robust stability measures, as well as measures of fall risk, that currently are not available and may have benefits for the analysis and prevention of low back pain and falls leading to injury, both of which affect a significant portion of the population.
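
    The sketch below illustrates the standard finite-time Lyapunov exponent computation from trajectories alone (finite differences of the flow map over a grid of initial conditions), using a toy two-well system as a stand-in for the biomechanical models; it is not the authors' reconstruction-based method, only the underlying FTLE idea.

```python
import numpy as np

# Sketch: finite-time Lyapunov exponents from trajectories of a grid of initial
# conditions; ridges of the resulting field approximate dynamical boundaries.
# The toy (unforced, damped Duffing-like) dynamics and grid are illustrative.
def advect(x0, y0, T=4.0, dt=0.01):
    """Explicit-Euler integration of xdot = y, ydot = x - x**3 - 0.1*y."""
    x, y = x0.copy(), y0.copy()
    for _ in range(int(T / dt)):
        x, y = x + dt * y, y + dt * (x - x**3 - 0.1 * y)
    return x, y

nx, ny, T = 200, 200, 4.0
x0, y0 = np.meshgrid(np.linspace(-2, 2, nx), np.linspace(-2, 2, ny))
xT, yT = advect(x0, y0, T)

# Flow-map gradient by finite differences of neighbouring trajectories.
dxT_dx, dxT_dy = np.gradient(xT, x0[0], y0[:, 0], axis=(1, 0))
dyT_dx, dyT_dy = np.gradient(yT, x0[0], y0[:, 0], axis=(1, 0))

# Largest eigenvalue of the Cauchy-Green tensor C = F^T F at each grid point.
a = dxT_dx**2 + dyT_dx**2
b = dxT_dx * dxT_dy + dyT_dx * dyT_dy
d = dxT_dy**2 + dyT_dy**2
lam_max = 0.5 * (a + d) + np.sqrt(0.25 * (a - d)**2 + b**2)
ftle = np.log(np.maximum(lam_max, 1e-12)) / (2.0 * T)
```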

  3. Ensemble Data Assimilation Without Ensembles: Methodology and Application to Ocean Data Assimilation

    NASA Technical Reports Server (NTRS)

    Keppenne, Christian L.; Rienecker, Michele M.; Kovach, Robin M.; Vernieres, Guillaume

    2013-01-01

    Two methods to estimate background error covariances for data assimilation are introduced. While both share properties with the ensemble Kalman filter (EnKF), they differ from it in that they do not require the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The first method is referred to as SAFE (Space Adaptive Forecast error Estimation) because it estimates error covariances from the spatial distribution of model variables within a single state vector. It can thus be thought of as sampling an ensemble in space. The second method, named FAST (Flow Adaptive error Statistics from a Time series), constructs an ensemble sampled from a moving window along a model trajectory. The underlying assumption in these methods is that forecast errors in data assimilation are primarily phase errors in space and/or time.
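
    A minimal sketch of the FAST idea, under the stated assumption that lagged states along one trajectory can stand in for ensemble members; the toy trajectory and window length are arbitrary.

```python
import numpy as np

# Sketch of the FAST idea: build an "ensemble" by sampling a moving window of
# lagged states along a single model trajectory, then use its sample covariance
# as a proxy for the background error covariance. Shapes are illustrative.
def fast_covariance(trajectory, window=10):
    """trajectory: array (n_times, n_state); returns (n_state, n_state) covariance."""
    members = trajectory[-window:]                 # last `window` states as members
    anomalies = members - members.mean(axis=0)     # remove the window mean
    return anomalies.T @ anomalies / (window - 1)

rng = np.random.default_rng(1)
traj = np.cumsum(rng.standard_normal((200, 5)), axis=0)   # toy model trajectory
B = fast_covariance(traj, window=20)
```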

  4. A biometeorological model of an encephalitis vector

    NASA Astrophysics Data System (ADS)

    Raddatz, R. L.

    1986-01-01

    Multiple linear regression techniques and seven years of data were used to build a biometeorological model of Winnipeg's mean daily levels of Culex tarsalis Coquillett. An eighth year of data was used to test the model. Hydrologic accounting of precipitation, evapotranspiration and runoff provided estimates of wetness while the warmness of the season was gauged in terms of the average temperature difference from normal and a threshold antecedent temperature regime. These factors were found to be highly correlated with the time-series of Cx. tarsalis counts. The impact of mosquito adulticiding measures was included in the model via a control effectiveness parameter. An activity-level adjustment, based on mean daily temperatures, was also made to the counts. This model can, by monitoring the weather, provide forecasts of Cx. tarsalis populations for Winnipeg with a lead-time of three weeks, thereby contributing to an early warning of an impending Western Equine Encephalitis outbreak.
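
    The regression step might look like the following sketch, where synthetic wetness and warmness indices stand in for the hydrologic and temperature predictors described above; the variable names and coefficients are illustrative, not the fitted Winnipeg model.

```python
import numpy as np

# Sketch: multiple linear regression of daily mosquito counts on wetness and
# warmness indices. The synthetic data below are illustrative only.
rng = np.random.default_rng(2)
n = 7 * 120                                   # seven seasons of daily values
wetness = rng.gamma(2.0, 1.0, n)              # hydrologic wetness estimate
warmness = rng.normal(0.0, 2.0, n)            # temperature departure from normal
counts = 5.0 + 3.0 * wetness + 1.5 * warmness + rng.normal(0.0, 2.0, n)

X = np.column_stack([np.ones(n), wetness, warmness])
coef, *_ = np.linalg.lstsq(X, counts, rcond=None)
predicted = X @ coef                          # fitted mean daily counts
```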

  5. Remotely sensed vegetation moisture as explanatory variable of Lyme borreliosis incidence

    NASA Astrophysics Data System (ADS)

    Barrios, J. M.; Verstraeten, W. W.; Maes, P.; Clement, J.; Aerts, J. M.; Farifteh, J.; Lagrou, K.; Van Ranst, M.; Coppin, P.

    2012-08-01

    The strong correlation between environmental conditions and abundance and spatial spread of the tick Ixodes ricinus is widely documented. I. ricinus is in Europe the main vector of the bacterium Borrelia burgdorferi, the pathogen causing Lyme borreliosis (LB). Humidity in vegetated systems is a major factor in tick ecology and its effects might translate into disease incidence in humans. Time series of two remotely sensed indices with sensitivity to vegetation greenness and moisture were tested as explanatory variables of LB incidence. Wavelet-based multiresolution analysis allowed the examination of these signals at different temporal scales in study sites in Belgium, where increases in LB incidence were reported in recent years. The analysis showed the potential of the tested indices for disease monitoring, the usefulness of analyzing the signal in different time frames and the importance of local characteristics of the study area for the selection of the vegetation index.

  6. Modeling Geomagnetic Variations using a Machine Learning Framework

    NASA Astrophysics Data System (ADS)

    Cheung, C. M. M.; Handmer, C.; Kosar, B.; Gerules, G.; Poduval, B.; Mackintosh, G.; Munoz-Jaramillo, A.; Bobra, M.; Hernandez, T.; McGranaghan, R. M.

    2017-12-01

    We present a framework for data-driven modeling of Heliophysics time series data. The Solar Terrestrial Interaction Neural net Generator (STING) is an open source python module built on top of state-of-the-art statistical learning frameworks (traditional machine learning methods as well as deep learning). To showcase the capability of STING, we deploy it for the problem of predicting the temporal variation of geomagnetic fields. The data used includes solar wind measurements from the OMNI database and geomagnetic field data taken by magnetometers at US Geological Survey observatories. We examine the predictive capability of different machine learning techniques (recurrent neural networks, support vector machines) for a range of forecasting times (minutes to 12 hours). STING is designed to be extensible to other types of data. We show how STING can be used on large sets of data from different sensors/observatories and adapted to tackle other problems in Heliophysics.
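
    As a small illustration of the kind of model such a framework wraps, the sketch below fits a support vector regressor to lagged values of a synthetic driver series to make a multi-step forecast; the series, lag count and horizon are assumptions for the example and do not reflect the OMNI or USGS data.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Sketch: forecasting a geomagnetic-like series from lagged driver values with
# a support vector machine, one of the model families mentioned above.
rng = np.random.default_rng(3)
n, lags, horizon = 2000, 12, 6                # 6-step-ahead forecast from 12 lags
driver = np.convolve(rng.standard_normal(n + 50), np.ones(10) / 10, mode="same")
target = np.roll(driver, 5) + 0.1 * rng.standard_normal(n + 50)

X = np.column_stack([driver[i:i + n] for i in range(lags)])   # lagged inputs
y = target[lags + horizon - 1: lags + horizon - 1 + n]        # future target

model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.01))
model.fit(X[:1500], y[:1500])                 # train on the first part of the record
forecast = model.predict(X[1500:])            # forecast the held-out part
```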

  7. Method for enhanced accuracy in predicting peptides using liquid separations or chromatography

    DOEpatents

    Kangas, Lars J.; Auberry, Kenneth J.; Anderson, Gordon A.; Smith, Richard D.

    2006-11-14

    A method for predicting the elution time of a peptide in chromatographic and electrophoretic separations by first providing a data set of known elution times of known peptides, then creating a plurality of vectors, each vector having a plurality of dimensions, and each dimension representing the elution time of amino acids present in each of these known peptides from the data set. The elution time of any protein can then be predicted by first creating a vector by assigning dimensional values for the elution time of amino acids of at least one hypothetical peptide and then calculating a predicted elution time for the vector by performing a multivariate regression of the dimensional values of the hypothetical peptide using the dimensional values of the known peptides. Preferably, the multivariate regression is accomplished by the use of an artificial neural network and the elution times are first normalized using a transfer function.
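
    A simplified sketch of the idea (composition-style vectors regressed with a small neural network) is given below; the amino-acid weighting, the synthetic training peptides and the scikit-learn model are illustrative stand-ins for the patented artificial neural network, not its actual implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Sketch: represent each peptide as a 20-dimensional amino-acid composition
# vector and regress (normalized) elution time on it with a small neural net.
# The training peptides and elution times below are synthetic.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition_vector(peptide):
    """Counts of each amino-acid type in the peptide (one dimension per type)."""
    return np.array([peptide.count(a) for a in AMINO_ACIDS], dtype=float)

rng = np.random.default_rng(4)
weights = rng.uniform(0.2, 2.0, 20)                       # hypothetical per-residue effect
train_peptides = ["".join(rng.choice(list(AMINO_ACIDS), rng.integers(8, 25)))
                  for _ in range(500)]
X = np.array([composition_vector(p) for p in train_peptides])
t = X @ weights + rng.normal(0.0, 0.5, len(train_peptides))   # synthetic elution times
t = (t - t.min()) / (t.max() - t.min())                   # normalize to [0, 1]

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(X, t)
predicted_time = model.predict(composition_vector("ACDEFGHIK").reshape(1, -1))
```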

  8. WEBGIS based CropWatch online agriculture monitoring system

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Wu, B.; Zeng, H.; Zhang, M.; Yan, N.

    2015-12-01

    CropWatch, developed by the Institute of Remote Sensing and Digital Earth (RADI), Chinese Academy of Sciences (CAS), has achieved breakthrough results in the integration of methods, the independence of its assessments and support to emergency response by periodically releasing global agricultural information. Taking advantage of multi-source remote sensing data and open data sharing policies, the CropWatch group reports its monitoring results by publishing four bulletins each year. To better analyse the data, generate the bulletins and provide an alternative way to access the agricultural monitoring indicators and results in CropWatch, the CropWatch online system has been developed based on WebGIS techniques. Figure 1 shows the CropWatch online system structure and the system UI in Clustering mode. Data visualization is sorted into three different modes: Vector mode, Raster mode and Clustering mode. Vector mode provides the statistic values for all indicators over each monitoring unit, which allows users to compare the current situation with historical values (average, maximum, etc.). Users can compare the profiles of each indicator over the current growing season with the historical data in a chart by selecting a region of interest (ROI). Raster mode provides pixel-based anomalies of CropWatch indicators globally. In this mode, users are able to zoom in to the regions where notable anomalies were identified from the statistic values in Vector mode. Data from remote sensing image series at high temporal and low spatial resolution provide key information in agricultural monitoring. Clustering mode provides integrated information on different classes in maps, the corresponding profile for each class and the percentage of the area of each class relative to the total area of all classes. The time series data are categorized into a limited number of types by the ISODATA algorithm; for each clustering type, pixels on the map, profiles, and the percentage legend are all linked together. All three visualization modes are applied at four scales: 65 monitoring and reporting units (MRUs), 7 major production zones (MPZs), 173 countries, and sub-national regions for 9 large countries. Agro-climatic information, agronomic information and indicators related to crop area, crop yield and crop production are provided.
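
    As a simplified stand-in for the Clustering mode workflow, the sketch below categorises per-pixel time-series profiles into a few classes and reports each class's share of the total area; plain k-means is used here instead of the ISODATA algorithm named above, and the synthetic profiles are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

# Simplified stand-in for the Clustering mode: categorise per-pixel vegetation
# time series into a few classes and report each class's share of the area.
rng = np.random.default_rng(11)
n_pixels, n_dates, n_classes = 5000, 36, 4            # e.g. 36 dekads per season
t = np.linspace(0, 2 * np.pi, n_dates)
base = np.stack([np.sin(t + p) for p in (0.0, 0.8, 1.6, 2.4)])   # class profiles
labels_true = rng.integers(0, n_classes, n_pixels)
series = base[labels_true] + 0.2 * rng.standard_normal((n_pixels, n_dates))

km = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit(series)
class_profiles = km.cluster_centers_                   # one profile chart per class
area_percent = 100 * np.bincount(km.labels_, minlength=n_classes) / n_pixels
```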

  9. A finite element approach for solution of the 3D Euler equations

    NASA Technical Reports Server (NTRS)

    Thornton, E. A.; Ramakrishnan, R.; Dechaumphai, P.

    1986-01-01

    Prediction of thermal deformations and stresses has prime importance in the design of the next generation of high speed flight vehicles. Aerothermal load computations for complex three-dimensional shapes necessitate development of procedures to solve the full Navier-Stokes equations. This paper details the development of a three-dimensional inviscid flow approach which can be extended for three-dimensional viscous flows. A finite element formulation, based on a Taylor series expansion in time, is employed to solve the compressible Euler equations. Model generation and results display are done using a commercially available program, PATRAN, and vectorizing strategies are incorporated to ensure computational efficiency. Sample problems are presented to demonstrate the validity of the approach for analyzing high speed compressible flows.

  10. Digital Images on the DIME

    NASA Technical Reports Server (NTRS)

    2003-01-01

    With NASA on its side, Positive Systems, Inc., of Whitefish, Montana, is veering away from the industry standards defined for producing and processing remotely sensed images. A top developer of imaging products for geographic information system (GIS) and computer-aided design (CAD) applications, Positive Systems is bucking traditional imaging concepts with a cost-effective and time-saving software tool called Digital Images Made Easy (DIME(trademark)). Like piecing a jigsaw puzzle together, DIME can integrate a series of raw aerial or satellite snapshots into a single, seamless panoramic image, known as a 'mosaic.' The 'mosaicked' images serve as useful backdrops to GIS maps - which typically consist of line drawings called 'vectors' - by allowing users to view a multidimensional map that provides substantially more geographic information.

  11. The vibro-acoustic mapping of low gravity trajectories on a Learjet aircraft

    NASA Technical Reports Server (NTRS)

    Grodsinsky, C. M.; Sutliff, T. J.

    1990-01-01

    Terrestrial low gravity research techniques have been employed to gain a more thorough understanding of basic science and technology concepts. One technique frequently used involves flying parabolic trajectories aboard the NASA Lewis Research Center Learjet aircraft. A measurement program was developed to support an isolation system conceptual design. This program was primarily intended to measure time-correlated high-frequency accelerations (up to 100 Hz) present at various locations throughout the Learjet during a series of trajectories and flights. As suspected, the measurements obtained revealed that the environment aboard such an aircraft cannot simply be described in terms of the static low gravity g vector obtained, but must also account for both rigid body and high frequency vibro-acoustic dynamics.

  12. Estimation of Discontinuous Displacement Vector Fields with the Minimum Description Length Criterion.

    DTIC Science & Technology

    1990-10-01

    type of approach for finding a dense displacement vector field has a time complexity that allows a real-time implementation when an appropriate control...hardly vector fields as they appear in stereo or motion. The reason for this is the fact that local displacement vector field (DVF) estimates have...2 objects' motion, but that the quantitative optical flow is not a reliable measure of the real motion [VP87, SU87]. This applies even more to the

  13. Multivariate analysis of fMRI time series: classification and regression of brain responses using machine learning.

    PubMed

    Formisano, Elia; De Martino, Federico; Valente, Giancarlo

    2008-09-01

    Machine learning and pattern recognition techniques are being increasingly employed in functional magnetic resonance imaging (fMRI) data analysis. By taking into account the full spatial pattern of brain activity measured simultaneously at many locations, these methods allow detecting subtle, non-strictly localized effects that may remain invisible to the conventional analysis with univariate statistical methods. In typical fMRI applications, pattern recognition algorithms "learn" a functional relationship between brain response patterns and a perceptual, cognitive or behavioral state of a subject expressed in terms of a label, which may assume discrete (classification) or continuous (regression) values. This learned functional relationship is then used to predict the unseen labels from a new data set ("brain reading"). In this article, we describe the mathematical foundations of machine learning applications in fMRI. We focus on two methods, support vector machines and relevance vector machines, which are respectively suited for the classification and regression of fMRI patterns. Furthermore, by means of several examples and applications, we illustrate and discuss the methodological challenges of using machine learning algorithms in the context of fMRI data analysis.
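
    A minimal sketch of the classification setting described above, using a linear support vector machine on synthetic multivoxel patterns with cross-validation; the trial counts, voxel counts and effect size are arbitrary choices for the example.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Sketch: "brain reading" with a linear support vector machine, one of the two
# methods discussed above. Each row is a (synthetic) activity pattern across
# voxels; each label is the stimulus/cognitive state for that trial.
rng = np.random.default_rng(5)
n_trials, n_voxels = 120, 500
labels = np.repeat([0, 1], n_trials // 2)
patterns = rng.standard_normal((n_trials, n_voxels))
patterns[labels == 1, :20] += 0.4            # weak multivoxel effect in 20 voxels

clf = SVC(kernel="linear", C=1.0)
accuracy = cross_val_score(clf, patterns, labels, cv=10).mean()
```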

  14. Analysis of degree of nonlinearity and stochastic nature of HRV signal during meditation using delay vector variance method.

    PubMed

    Reddy, L Ram Gopal; Kuntamalla, Srinivas

    2011-01-01

    Heart rate variability analysis is fast gaining acceptance as a potential non-invasive means of autonomic nervous system assessment in research as well as clinical domains. In this study, a new nonlinear analysis method is used to detect the degree of nonlinearity and stochastic nature of heart rate variability signals during two forms of meditation (Chi and Kundalini). The data, obtained from an online and widely used public database (the MIT/BIH PhysioNet database), are used in this study. The method used is the delay vector variance (DVV) method, which is a unified method for detecting the presence of determinism and nonlinearity in a time series and is based upon the examination of local predictability of a signal. From the results it is clear that there is a significant change in the nonlinearity and stochastic nature of the signal before and during the meditation (p value > 0.01). During Chi meditation there is an increase in the stochastic nature and a decrease in the nonlinear nature of the signal. There is a significant decrease in the degree of nonlinearity and stochastic nature during Kundalini meditation.

  15. Sensor network based solar forecasting using a local vector autoregressive ridge framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, J.; Yoo, S.; Heiser, J.

    2016-04-04

    The significant improvements and falling costs of photovoltaic (PV) technology make solar energy a promising resource, yet the cloud induced variability of surface solar irradiance inhibits its effective use in grid-tied PV generation. Short-term irradiance forecasting, especially on the minute scale, is critically important for grid system stability and auxiliary power source management. Compared to the trending sky imaging devices, irradiance sensors are inexpensive and easy to deploy but related forecasting methods have not been well researched. The prominent challenge of applying classic time series models on a network of irradiance sensors is to address their varying spatio-temporal correlations due to local changes in cloud conditions. We propose a local vector autoregressive framework with ridge regularization to forecast irradiance without explicitly determining the wind field or cloud movement. By using local training data, our learned forecast model is adaptive to local cloud conditions and by using regularization, we overcome the risk of overfitting from the limited training data. Our systematic experimental results showed an average of 19.7% RMSE and 20.2% MAE improvement over the benchmark Persistent Model for 1-5 minute forecasts on a comprehensive 25-day dataset.
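
    The following is a minimal sketch of a ridge-regularised vector autoregression fitted on a short local window, in the spirit of the framework described above; the sensor count, lag order, window length and ridge penalty are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Sketch: forecast all sensors jointly from their lagged values with a ridge-
# regularised vector autoregression fitted on a short local training window.
rng = np.random.default_rng(6)
n_sensors, lags, n_times = 25, 3, 500
irr = np.cumsum(rng.standard_normal((n_times, n_sensors)) * 0.1, axis=0) + 800.0

def lagged_design(series, lags):
    """Stack the previous `lags` multivariate observations as predictors."""
    n = len(series)
    X = np.hstack([series[lags - 1 - k: n - 1 - k] for k in range(lags)])
    y = series[lags:]
    return X, y

window = 120                                    # local training window (time steps)
X, y = lagged_design(irr[-(window + lags):], lags)
model = Ridge(alpha=10.0).fit(X, y)             # fit on the local window only

x_new = np.hstack([irr[-1 - k] for k in range(lags)]).reshape(1, -1)
one_step_forecast = model.predict(x_new)        # next-step irradiance, all sensors
```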

  16. Universal series induced by approximate identities and some relevant applications

    PubMed Central

    Nestoridis, Vassili; Schmutzhard, Sebastian; Stefanopoulos, Vangelis

    2011-01-01

    We prove the existence of series $\sum a_n\psi_n$, whose coefficients $(a_n)$ are in $\bigcap_{p>1}\ell^p$ and whose terms $(\psi_n)$ are translates by rational vectors in $\mathbb{R}^d$ of a family of approximations to the identity, having the property that the partial sums are dense in various spaces of functions such as Wiener's algebra $W(C_0,\ell^1)$, $C_b(\mathbb{R}^d)$, $C_0(\mathbb{R}^d)$, $L^p(\mathbb{R}^d)$ for every $p\in[1,\infty)$, and the space of measurable functions. Applying this theory to particular situations, we establish approximations by such series to solutions of the heat and Laplace equations as well as to probability density functions. PMID:28298658

  17. INGN 007, an oncolytic adenovirus vector, replicates in Syrian hamsters but not mice: comparison of biodistribution studies.

    PubMed

    Ying, B; Toth, K; Spencer, J F; Meyer, J; Tollefson, A E; Patra, D; Dhar, D; Shashkova, E V; Kuppuswamy, M; Doronin, K; Thomas, M A; Zumstein, L A; Wold, W S M; Lichtenstein, D L

    2009-08-01

    Preclinical biodistribution studies with INGN 007, an oncolytic adenovirus (Ad) vector, supporting an early stage clinical trial were conducted in Syrian hamsters, which are permissive for Ad replication, and mice, which are a standard model for assessing toxicity and biodistribution of replication-defective (RD) Ad vectors. Vector dissemination and pharmacokinetics following intravenous administration were examined by real-time PCR in nine tissues and blood at five time points spanning 1 year. Select organs were also examined for the presence of infectious vector/virus. INGN 007 (VRX-007), wild-type Ad5 and AdCMVpA (an RD vector) were compared in the hamster model, whereas only INGN 007 was examined in mice. DNA of all vectors was widely disseminated early after injection, but decayed rapidly in most organs. In the hamster model, DNA of INGN 007 and Ad5 was more abundant than that of the RD vector AdCMVpA at early times after injection, but similar levels were seen later. An increased level of INGN 007 and Ad5 DNA but not AdCMVpA DNA in certain organs early after injection, and the presence of infectious INGN 007 and Ad5 in lung and liver samples at early times after injection, strongly suggests that replication of INGN 007 and Ad5 occurred in several Syrian hamster organs. There was no evidence of INGN 007 replication in mice. In addition to providing important information about INGN 007, the results underscore the utility of the Syrian hamster as a permissive immunocompetent model for Ad5 pathogenesis and oncolytic Ad vectors.

  18. Vectorization of a classical trajectory code on a floating point systems, Inc. Model 164 attached processor.

    PubMed

    Kraus, Wayne A; Wagner, Albert F

    1986-04-01

    A triatomic classical trajectory code has been modified by extensive vectorization of the algorithms to achieve much improved performance on an FPS 164 attached processor. Extensive timings on both the FPS 164 and a VAX 11/780 with floating point accelerator are presented as a function of the number of trajectories simultaneously run. The timing tests involve a potential energy surface of the LEPS variety and trajectories with 1000 time steps. The results indicate that vectorization results in timing improvements on both the VAX and the FPS. For larger numbers of trajectories run simultaneously, up to a factor of 25 improvement in speed occurs between VAX and FPS vectorized code. Copyright © 1986 John Wiley & Sons, Inc.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feldman, Steven; Valera-Leon, Carlos; Dechev, Damian

    The vector is a fundamental data structure, which provides constant-time access to a dynamically-resizable range of elements. Currently, there exist no wait-free vectors. The only non-blocking version supports only a subset of the sequential vector API and exhibits significant synchronization overhead caused by supporting opposing operations. Since many applications operate in phases of execution, where in each phase only a subset of operations is used, this overhead is unnecessary for the majority of the application. To address the limitations of the non-blocking version, we present a new design that is wait-free, supports more of the operations provided by the sequential vector, and provides alternative implementations of key operations. These alternatives allow the developer to balance the performance and functionality of the vector as requirements change throughout execution. Compared to the known non-blocking version and the concurrent vector found in Intel’s TBB library, our design outperforms or provides comparable performance in the majority of tested scenarios. Over all tested scenarios, the presented design performs an average of 4.97 times more operations per second than the non-blocking vector and 1.54 times more than the TBB vector. In a scenario designed to simulate the filling of a vector, the performance improvement increases to 13.38 and 1.16 times. This work presents the first ABA-free non-blocking vector. Finally, unlike the other non-blocking approach, all operations are wait-free and bounds-checked and elements are stored contiguously in memory.

  20. Spatiotemporal Dynamics of Dengue Epidemics, Southern Vietnam

    PubMed Central

    Cuong, Hoang Quoc; Vu, Nguyen Thanh; Cazelles, Bernard; Boni, Maciej F.; Thai, Khoa T.D.; Rabaa, Maia A.; Quang, Luong Chan; Simmons, Cameron P.; Huu, Tran Ngoc

    2013-01-01

    An improved understanding of heterogeneities in dengue virus transmission might provide insights into biological and ecologic drivers and facilitate predictions of the magnitude, timing, and location of future dengue epidemics. To investigate dengue dynamics in urban Ho Chi Minh City and neighboring rural provinces in Vietnam, we analyzed a 10-year monthly time series of dengue surveillance data from southern Vietnam. The per capita incidence of dengue was lower in Ho Chi Minh City than in most rural provinces; annual epidemics occurred 1–3 months later in Ho Chi Minh City than elsewhere. The timing and the magnitude of annual epidemics were significantly more correlated in nearby districts than in remote districts, suggesting that local biological and ecologic drivers operate at a scale of 50–100 km. Dengue incidence during the dry season accounted for 63% of variability in epidemic magnitude. These findings can aid the targeting of vector-control interventions and the planning for dengue vaccine implementation. PMID:23735713

  1. The maxillary palp of aedes aegypti, a model of multisensory integration

    USDA-ARS?s Scientific Manuscript database

    Female yellow-fever mosquitoes, Aedes aegypti, are obligate blood-feeders and vectors of the pathogens that cause dengue fever, yellow fever and Chikungunya. This feeding behavior concludes a series of multisensory events guiding the mosquito to its host from a distance. The antennae and maxillary...

  2. Synthesis and Mosquitocidal Activity of a Series of Hydrazone Derivatives against Aedes aegypti

    USDA-ARS?s Scientific Manuscript database

    Background: Aedes aegypti is an important mosquito vector for the transmission of several infectious diseases. Current insecticides play a vital role in controlling mosquitoes; however, the frequent use of insecticides has led to the development of insecticide resistance. In order to control mosquit...

  3. Atomic Spectra and the Vector Model

    NASA Astrophysics Data System (ADS)

    Candler, A. C.

    2015-05-01

    12. Displaced terms; 13. Combination of several electrons; 14. Short periods; 15. Long periods; 16. Rare earths; 17. Intensity relations; 18. Sum rules and (jj) coupling; 19. Series limit; 20. Hyperfine structure; 21. Quadripole radiation; 22. Fluorescent crystals; Appendix 5. Key to references; Appendix 6. Bibliography; Subject index; Author index.

  4. Quantification of scaling exponents and dynamical complexity of microwave refractivity in a tropical climate

    NASA Astrophysics Data System (ADS)

    Fuwape, Ibiyinka A.; Ogunjo, Samuel T.

    2016-12-01

    The radio refractivity index is used to quantify the effect of atmospheric parameters in communication systems. Scaling and dynamical complexities of radio refractivity across different climatic zones of Nigeria have been studied. The scaling property of the radio refractivity across Nigeria was estimated from the Hurst exponent obtained using two different scaling methods, namely the rescaled range (R/S) and detrended fluctuation analysis (DFA). The delay vector variance (DVV), Largest Lyapunov Exponent (λ1) and Correlation Dimension (D2) methods were used to investigate nonlinearity, and the results confirm the presence of a deterministic nonlinear profile in the radio refractivity time series. Recurrence quantification analysis (RQA) was used to quantify the degree of chaoticity in the radio refractivity across the different climatic zones. RQA was found to be a good measure for identifying a unique fingerprint and signature of chaotic time series data. Microwave radio refractivity was found to be persistent and chaotic in all the study locations. The dynamics of radio refractivity increase in complexity and chaoticity from the coastal region towards the Sahelian climate. The design, development and deployment of robust and reliable microwave communication links in the region will be greatly affected by the chaotic nature of radio refractivity in the region.
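
    For reference, a compact sketch of one of the two scaling estimators named above, detrended fluctuation analysis, is given below; the scale set and the toy refractivity series are illustrative.

```python
import numpy as np

# Sketch of detrended fluctuation analysis (DFA): the slope of log F(n) versus
# log n estimates the scaling exponent used to judge persistence.
def dfa_exponent(x, scales=(8, 16, 32, 64, 128)):
    y = np.cumsum(x - np.mean(x))                    # integrated profile
    F = []
    for n in scales:
        n_seg = len(y) // n
        segs = y[: n_seg * n].reshape(n_seg, n)
        t = np.arange(n)
        fluct = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)             # local linear trend
            fluct.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(fluct)))
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope

rng = np.random.default_rng(7)
refractivity = 300.0 + 0.1 * np.cumsum(rng.standard_normal(4096))  # toy N-unit series
alpha = dfa_exponent(refractivity)                   # > 0.5 indicates persistence
```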

  5. Estimating the Volterra Series Transfer Function over coherent optical OFDM for efficient monitoring of the fiber channel nonlinearity.

    PubMed

    Shulkind, Gal; Nazarathy, Moshe

    2012-12-17

    We present an efficient method for system identification (nonlinear channel estimation) of third order nonlinear Volterra Series Transfer Function (VSTF) characterizing the four-wave-mixing nonlinear process over a coherent OFDM fiber link. Despite the seemingly large number of degrees of freedom in the VSTF (cubic in the number of frequency points) we identified a compressed VSTF representation which does not entail loss of information. Additional slightly lossy compression may be obtained by discarding very low power VSTF coefficients associated with regions of destructive interference in the FWM phased array effect. Based on this two-staged VSTF compressed representation, we develop a robust and efficient algorithm of nonlinear system identification (optical performance monitoring) estimating the VSTF by transmission of an extended training sequence over the OFDM link, performing just a matrix-vector multiplication at the receiver by a pseudo-inverse matrix which is pre-evaluated offline. For 512 (1024) frequency samples per channel, the VSTF measurement takes less than 1 (10) msec to complete with computational complexity of one real-valued multiply-add operation per time sample. Relative to a naïve exhaustive three-tone-test, our algorithm is far more tolerant of ASE additive noise and its acquisition time is orders of magnitude faster.
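
    Stripped of the optical detail, the estimation step reduces to linear least squares with a pre-computed pseudo-inverse, as in the sketch below; the dimensions, the random mixing matrix and the noise level are placeholders, not the actual VSTF structure.

```python
import numpy as np

# Sketch of the estimation step in generic linear-algebra form: the received
# training symbols are modelled as y = A h + noise, where h collects the
# (compressed) coefficients and A is built offline from the known training
# sequence. Sizes and the random A are illustrative only.
rng = np.random.default_rng(8)
n_obs, n_coef = 4096, 300                       # training samples, coefficients
A = rng.standard_normal((n_obs, n_coef)) + 1j * rng.standard_normal((n_obs, n_coef))
h_true = 0.1 * (rng.standard_normal(n_coef) + 1j * rng.standard_normal(n_coef))

A_pinv = np.linalg.pinv(A)                      # pre-evaluated offline
y = A @ h_true + 0.01 * (rng.standard_normal(n_obs) + 1j * rng.standard_normal(n_obs))
h_est = A_pinv @ y                              # at run time: one matrix-vector product
```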

  6. Numerical Investigations of Capabilities and Limits of Photospheric Data Driven Magnetic Flux Emergence

    NASA Astrophysics Data System (ADS)

    Linton, Mark; Leake, James; Schuck, Peter W.

    2016-05-01

    The magnetic field of the solar atmosphere is the primary driver of solar activity. Understanding the magnetic state of the solar atmosphere is therefore of key importance to predicting solar activity. One promising means of studying the magnetic atmosphere is to dynamically build up and evolve this atmosphere from the time evolution of the magnetic field at the photosphere, where it can be measured with current solar vector magnetograms at high temporal and spatial resolution. We report here on a series of numerical experiments investigating the capabilities and limits of magnetohydrodynamical simulations of such a process, where a magnetic corona is dynamically built up and evolved from a time series of synthetic photospheric data. These synthetic data are composed of photospheric slices taken from self-consistent convection zone to corona simulations of flux emergence. The driven coronae are then quantitatively compared against the coronae of the original simulations. We investigate and report on the fidelity of these driven simulations, both as a function of the emergence timescale of the magnetic flux, and as a function of the driving cadence of the input data. This work was supported by the Chief of Naval Research and the NASA Living with a Star and Heliophysics Supporting Research programs.

  7. A BMI-based occupational therapy assist suit: asynchronous control by SSVEP

    PubMed Central

    Sakurada, Takeshi; Kawase, Toshihiro; Takano, Kouji; Komatsu, Tomoaki; Kansaku, Kenji

    2013-01-01

    A brain-machine interface (BMI) is an interface technology that uses neurophysiological signals from the brain to control external machines. Recent invasive BMI technologies have succeeded in the asynchronous control of robot arms for a useful series of actions, such as reaching and grasping. In this study, we developed non-invasive BMI technologies aiming to make such useful movements using the subject's own hands by preparing a BMI-based occupational therapy assist suit (BOTAS). We prepared a pre-recorded series of useful actions—a grasping-a-ball movement and a carrying-the-ball movement—and added asynchronous control using steady-state visual evoked potential (SSVEP) signals. A SSVEP signal was used to trigger the grasping-a-ball movement and another SSVEP signal was used to trigger the carrying-the-ball movement. A support vector machine was used to classify EEG signals recorded from the visual cortex (Oz) in real time. Untrained, able-bodied participants (n = 12) operated the system successfully. Classification accuracy and time required for SSVEP detection were ~88% and 3 s, respectively. We further recruited three patients with upper cervical spinal cord injuries (SCIs); they also succeeded in operating the system without training. These data suggest that our BOTAS system is potentially useful in terms of rehabilitation of patients with upper limb disabilities. PMID:24068982

  8. Definition of Contravariant Velocity Components

    NASA Technical Reports Server (NTRS)

    Hung, Ching-Mao; Kwak, Dochan (Technical Monitor)

    2002-01-01

    This is an old issue in computational fluid dynamics (CFD). What is the so-called contravariant velocity or contravariant velocity component? In the article, we review the basics of tensor analysis and give the contravariant velocity component a rigorous explanation. For a given coordinate system, there exist two uniquely determined sets of base vector systems - one is the covariant and another is the contravariant base vector system. The two base vector systems are reciprocal. The so-called contravariant velocity component is really the contravariant component of a velocity vector for a time-independent coordinate system, or the contravariant component of a relative velocity between fluid and coordinates, for a time-dependent coordinate system. The contravariant velocity components are not physical quantities of the velocity vector. Their magnitudes, dimensions, and associated directions are controlled by their corresponding covariant base vectors. Several 2-D (two-dimensional) linear examples and 2-D mass-conservation equation are used to illustrate the details of expressing a vector with respect to the covariant and contravariant base vector systems, respectively.
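
    In standard tensor notation (not quoted from the article), the relations being described are:

```latex
% Reciprocal (covariant/contravariant) bases and velocity components.
% For coordinates \xi^i with position vector \mathbf{r}:
%   covariant base vectors      \mathbf{g}_i = \partial \mathbf{r}/\partial \xi^i
%   contravariant base vectors  \mathbf{g}^i = \nabla \xi^i ,
% with the reciprocity condition \mathbf{g}^i \cdot \mathbf{g}_j = \delta^i_j.
\[
  \mathbf{V} = V^i \,\mathbf{g}_i = V_i \,\mathbf{g}^i,
  \qquad
  V^i = \mathbf{V}\cdot\mathbf{g}^i,
  \qquad
  V_i = \mathbf{V}\cdot\mathbf{g}_i .
\]
% For a time-dependent coordinate system, the abstract's "contravariant
% velocity component" is V^i evaluated for the relative velocity between
% fluid and coordinates.
```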

  9. A comparison of breeding and ensemble transform vectors for global ensemble generation

    NASA Astrophysics Data System (ADS)

    Deng, Guo; Tian, Hua; Li, Xiaoli; Chen, Jing; Gong, Jiandong; Jiao, Meiyan

    2012-02-01

    To compare the initial perturbation techniques using breeding vectors and ensemble transform vectors, three ensemble prediction systems using both initial perturbation methods but with different ensemble member sizes based on the spectral model T213/L31 are constructed at the National Meteorological Center, China Meteorological Administration (NMC/CMA). A series of ensemble verification scores such as forecast skill of the ensemble mean, ensemble resolution, and ensemble reliability are introduced to identify the most important attributes of ensemble forecast systems. The results indicate that the ensemble transform technique is superior to the breeding vector method in light of the evaluation of the anomaly correlation coefficient (ACC), a deterministic attribute of the ensemble mean; the root-mean-square error (RMSE) and spread, which are probabilistic attributes; and the continuous ranked probability score (CRPS) and its decomposition. The advantage of the ensemble transform approach is attributed to its orthogonality among ensemble perturbations as well as its consistency with the data assimilation system. Therefore, this study may serve as a reference for configuration of the best ensemble prediction system to be used in operation.

  10. Internal performance characteristics of vectored axisymmetric ejector nozzles

    NASA Technical Reports Server (NTRS)

    Lamb, Milton

    1993-01-01

    A series of vectoring axisymmetric ejector nozzles were designed and experimentally tested for internal performance and pumping characteristics at NASA-Langley Research Center. These ejector nozzles used convergent-divergent nozzles as the primary nozzles. The model geometric variables investigated were primary nozzle throat area, primary nozzle expansion ratio, effective ejector expansion ratio (ratio of shroud exit area to primary nozzle throat area), ratio of minimum ejector area to primary nozzle throat area, ratio of ejector upper slot height to lower slot height (measured on the vertical centerline), and thrust vector angle. The primary nozzle pressure ratio was varied from 2.0 to 10.0 depending upon primary nozzle throat area. The corrected ejector-to-primary nozzle weight-flow ratio was varied from 0 (no secondary flow) to approximately 0.21 (21 percent of primary weight-flow rate) depending on ejector nozzle configuration. In addition to the internal performance and pumping characteristics, static pressures were obtained on the shroud walls.

  11. A quasi-current representation for information needs inspired by Two-State Vector Formalism

    NASA Astrophysics Data System (ADS)

    Wang, Panpan; Hou, Yuexian; Li, Jingfei; Zhang, Yazhou; Song, Dawei; Li, Wenjie

    2017-09-01

    Recently, a number of quantum theory (QT)-based information retrieval (IR) models have been proposed for modeling the session search task, in which users issue queries continuously to describe their evolving information needs (IN). However, the standard formalism of QT cannot provide a complete description of users' current IN in the sense that it does not take 'future' information into consideration. Therefore, to seek a more suitable and complete representation of users' IN, we construct a representation of quasi-current IN inspired by an emerging Two-State Vector Formalism (TSVF). Motivated by the completeness of TSVF, a "two-state vector" derived from the 'future' (the current query) and the 'history' (the previous query) is employed to describe users' quasi-current IN in a more complete way. Extensive experiments are conducted on the session tracks of TREC 2013 & 2014, and show that our model outperforms a series of compared IR models.

  12. Motion Estimation Using the Firefly Algorithm in Ultrasonic Image Sequence of Soft Tissue

    PubMed Central

    Chao, Chih-Feng; Horng, Ming-Huwi; Chen, Yu-Chan

    2015-01-01

    Ultrasonic image sequences of soft tissue are widely used in disease diagnosis; however, speckle noise usually degrades the image quality. These images usually have a low signal-to-noise ratio, which makes traditional motion estimation algorithms unsuitable for measuring the motion vectors. In this paper, a new motion estimation algorithm is developed for assessing the velocity field of soft tissue in a sequence of ultrasonic B-mode images. The proposed iterative firefly algorithm (IFA) searches a small set of candidate points to obtain the optimal motion vector, and is then compared to the traditional iterative full search algorithm (IFSA) via a series of experiments on in vivo ultrasonic image sequences. The experimental results show that the IFA can assess the vector with better efficiency and almost equal estimation quality compared to the traditional IFSA method. PMID:25873987
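
    A compact sketch of a firefly-style search for a single block's displacement follows; the synthetic frames, block size, search range and algorithm constants are illustrative, and this is not the authors' IFA implementation.

```python
import numpy as np

# Sketch: firefly-style search for the displacement of one block between two
# frames, minimising the sum of absolute differences (SAD).
rng = np.random.default_rng(9)
yy, xx = np.mgrid[0:128, 0:128]
frame1 = np.sin(xx / 6.0) + np.cos(yy / 9.0) + 0.5 * np.sin((xx + yy) / 15.0)
frame2 = np.roll(frame1, (5, -3), axis=(0, 1)) + 0.01 * rng.random((128, 128))

y0, x0, B = 40, 40, 16                               # block location and size
block = frame1[y0:y0 + B, x0:x0 + B]

def sad(d):
    dy, dx = int(round(d[0])), int(round(d[1]))
    cand = frame2[y0 + dy:y0 + dy + B, x0 + dx:x0 + dx + B]
    return np.abs(cand - block).sum()

# Firefly algorithm: brighter (lower-SAD) fireflies attract dimmer ones.
n_fireflies, n_iter, search = 15, 30, 10
beta0, gamma, step = 1.0, 0.05, 1.0
pos = rng.uniform(-search, search, (n_fireflies, 2))
cost = np.array([sad(p) for p in pos])

for _ in range(n_iter):
    for i in range(n_fireflies):
        for j in range(n_fireflies):
            if cost[j] < cost[i]:
                r2 = np.sum((pos[i] - pos[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                pos[i] += beta * (pos[j] - pos[i]) + step * (rng.random(2) - 0.5)
                pos[i] = np.clip(pos[i], -search, search)
                cost[i] = sad(pos[i])
    step *= 0.95                                     # cool the random step

motion_vector = np.round(pos[np.argmin(cost)]).astype(int)   # estimated displacement
```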

  13. Motion estimation using the firefly algorithm in ultrasonic image sequence of soft tissue.

    PubMed

    Chao, Chih-Feng; Horng, Ming-Huwi; Chen, Yu-Chan

    2015-01-01

    Ultrasonic image sequences of soft tissue are widely used in disease diagnosis; however, speckle noise usually degrades the image quality. These images usually have a low signal-to-noise ratio, which makes traditional motion estimation algorithms unsuitable for measuring the motion vectors. In this paper, a new motion estimation algorithm is developed for assessing the velocity field of soft tissue in a sequence of ultrasonic B-mode images. The proposed iterative firefly algorithm (IFA) searches a small set of candidate points to obtain the optimal motion vector, and is then compared to the traditional iterative full search algorithm (IFSA) via a series of experiments on in vivo ultrasonic image sequences. The experimental results show that the IFA can assess the vector with better efficiency and almost equal estimation quality compared to the traditional IFSA method.

  14. Implementation of the block-Krylov boundary flexibility method of component synthesis

    NASA Technical Reports Server (NTRS)

    Carney, Kelly S.; Abdallah, Ayman A.; Hucklebridge, Arthur A.

    1993-01-01

    A method of dynamic substructuring is presented which utilizes a set of static Ritz vectors as a replacement for normal eigenvectors in component mode synthesis. This set of Ritz vectors is generated in a recurrence relationship, which has the form of a block-Krylov subspace. The initial seed to the recurrence algorithm is based on the boundary flexibility vectors of the component. This algorithm is not load-dependent, is applicable to both fixed and free-interface boundary components, and results in a general component model appropriate for any type of dynamic analysis. This methodology was implemented in the MSC/NASTRAN normal modes solution sequence using DMAP. The accuracy is found to be comparable to that of component synthesis based upon normal modes. The block-Krylov recurrence algorithm is a series of static solutions and so requires significantly less computation than solving the normal eigenspace problem.
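
    A small sketch of the recurrence is given below: the seed block is a set of boundary flexibility vectors, and each subsequent block is K^{-1}M times the previous one, orthonormalised against the blocks already generated. The matrices are random stand-ins, not a NASTRAN model.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve, qr

# Sketch of a block-Krylov recurrence seeded with boundary flexibility vectors.
# K, M and the boundary load block F are illustrative stand-ins for the
# component stiffness, mass and boundary loading.
rng = np.random.default_rng(10)
n, n_boundary, n_blocks = 60, 4, 3
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)                  # symmetric positive definite "stiffness"
M = np.diag(rng.uniform(1.0, 2.0, n))        # lumped "mass"
F = np.zeros((n, n_boundary))
F[:n_boundary, :] = np.eye(n_boundary)       # unit loads on the boundary DOFs

Kfac = cho_factor(K)
X = cho_solve(Kfac, F)                       # boundary flexibility vectors (seed block)
basis = [qr(X, mode="economic")[0]]
for _ in range(n_blocks - 1):
    Y = cho_solve(Kfac, M @ basis[-1])       # block-Krylov step: K^{-1} M X
    for Q in basis:                          # orthogonalise against previous blocks
        Y -= Q @ (Q.T @ Y)
    basis.append(qr(Y, mode="economic")[0])

ritz_vectors = np.hstack(basis)              # reduced basis for component synthesis
```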

  15. A Guided Tour of Mathematical Methods for the Physical Sciences

    NASA Astrophysics Data System (ADS)

    Snieder, Roel; van Wijk, Kasper

    2015-05-01

    1. Introduction; 2. Dimensional analysis; 3. Power series; 4. Spherical and cylindrical coordinates; 5. Gradient; 6. Divergence of a vector field; 7. Curl of a vector field; 8. Theorem of Gauss; 9. Theorem of Stokes; 10. The Laplacian; 11. Scale analysis; 12. Linear algebra; 13. Dirac delta function; 14. Fourier analysis; 15. Analytic functions; 16. Complex integration; 17. Green's functions: principles; 18. Green's functions: examples; 19. Normal modes; 20. Potential-field theory; 21. Probability and statistics; 22. Inverse problems; 23. Perturbation theory; 24. Asymptotic evaluation of integrals; 25. Conservation laws; 26. Cartesian tensors; 27. Variational calculus; 28. Epilogue on power and knowledge.

  16. Vector dark energy and high-z massive clusters

    NASA Astrophysics Data System (ADS)

    Carlesi, Edoardo; Knebe, Alexander; Yepes, Gustavo; Gottlöber, Stefan; Jiménez, Jose Beltrán.; Maroto, Antonio L.

    2011-12-01

    The detection of extremely massive clusters at z > 1 such as SPT-CL J0546-5345, SPT-CL J2106-5844 and XMMU J2235.3-2557 has been considered by some authors as a challenge to the standard Λ cold dark matter cosmology. In fact, assuming Gaussian initial conditions, the theoretical expectation of detecting such objects is as low as ≤1 per cent. In this paper we discuss the probability of the existence of such objects in the light of the vector dark energy paradigm, showing by means of a series of N-body simulations that chances of detection are substantially enhanced in this non-standard framework.

  17. Yellow fever: ecology, epidemiology, and role in the collapse of the Classic lowland Maya civilization.

    PubMed

    Wilkinson, R L

    1995-07-01

    Mystery has long surrounded the collapse of the Classic lowland Mayan civilization of the Peten region in Guatemala. Recent population reconstructions derived from archaeological evidence from the central lowlands show population declines from urban levels of between 2.5 and 3.5 million to around 536,000 in the two hundred year interval between 800 A.D. and 1000 A.D., the period known as the Classic Maya Collapse. A steady, but lesser rate of population decline continued until the time of European contact. When knowledge of the ecology and epidemiology of yellow fever and its known mosquito vectors are compared with what is known of the ecological conditions of lowland Guatemala as modified by the Classic Maya, provocative similarities are observed. When infection and mortality patterns of more recent urban yellow fever epidemics are used as models for a possible series of Classic Maya epidemics, a correlation is noted between the modeled rate of population decline for a series of epidemics, and population decline figures reconstructed from archaeological evidence.

  18. [Construction and selection of effective mouse Smad6 recombinant lenti-virus interference vectors].

    PubMed

    Yu, Jing; Qi, Mengchun; Deng, Jiupeng; Liu, Gang; Chen, Huaiqing

    2010-10-01

    This experiment was designed to construct mouse Smad6 recombinant RNA interference vectors and determine their interference effects on bone marrow mesenchymal stem cells (BMSCs). Three recombinant Smad6 RNA interference vectors were constructed by molecular cloning techniques with a lenti-virus vector expressing green fluorescent protein (GFP), and the correctness of the recombinant vectors was verified by DNA sequencing. Mouse BMSCs were used for transfection experiments and BMP-2 was used for osteogenic induction of the MSCs. The transfection efficiency of the recombinant vectors was examined by laser confocal scanning microscopy and the interference effect of the recombinant vectors on Smad6 gene expression was determined by real-time RT-PCR and Western blot, respectively. Three Smad6 recombinant RNA interference vectors were successfully constructed and their correctness was proved by DNA sequencing. After transfection, GFP was effectively expressed in the MSCs and all three recombinant vectors gained high transfection efficiency (> 95%). Both real-time PCR and Western blot examination indicated that, among the three recombinant vectors, the No. 2 vector had the best interference effect, with an interference effect of nearly 91% at the protein level. In conclusion, a mouse recombinant Smad6 RNA interference (RNAi) vector was successfully constructed, providing an effective tool for further studies on BMP signal pathways.

  19. Validation of SplitVectors Encoding for Quantitative Visualization of Large-Magnitude-Range Vector Fields

    PubMed Central

    Zhao, Henan; Bryant, Garnett W.; Griffin, Wesley; Terrill, Judith E.; Chen, Jian

    2017-01-01

    We designed and evaluated SplitVectors, a new vector field display approach to help scientists perform new discrimination tasks on large-magnitude-range scientific data shown in three-dimensional (3D) visualization environments. SplitVectors uses scientific notation to display vector magnitude, thus improving legibility. We present an empirical study comparing the SplitVectors approach with three other approaches - direct linear representation, logarithmic, and text display commonly used in scientific visualizations. Twenty participants performed three domain analysis tasks: reading numerical values (a discrimination task), finding the ratio between values (a discrimination task), and finding the larger of two vectors (a pattern detection task). Participants used both mono and stereo conditions. Our results suggest the following: (1) SplitVectors improve accuracy by about 10 times compared to linear mapping and by four times to logarithmic in discrimination tasks; (2) SplitVectors have no significant differences from the textual display approach, but reduce cluttering in the scene; (3) SplitVectors and textual display are less sensitive to data scale than linear and logarithmic approaches; (4) using logarithmic can be problematic as participants' confidence was as high as directly reading from the textual display, but their accuracy was poor; and (5) Stereoscopy improved performance, especially in more challenging discrimination tasks. PMID:28113469
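
    The magnitude-splitting idea can be sketched in a few lines; the base-10, single-leading-digit encoding below follows the description in the abstract and is not the paper's implementation.

```python
import math

# Sketch of the SplitVectors idea: display a vector's magnitude in scientific
# notation, i.e. as a (mantissa, exponent) pair, so that values spanning many
# orders of magnitude stay legible.
def split_magnitude(v):
    """Return (mantissa, exponent) with magnitude = mantissa * 10**exponent."""
    mag = math.sqrt(sum(c * c for c in v))
    if mag == 0.0:
        return 0.0, 0
    exponent = math.floor(math.log10(mag))
    return mag / 10 ** exponent, exponent

print(split_magnitude((3.0, 4.0)))          # (5.0, 0)
print(split_magnitude((3e5, 4e5)))          # (5.0, 5)
```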

  20. Validation of SplitVectors Encoding for Quantitative Visualization of Large-Magnitude-Range Vector Fields.

    PubMed

    Henan Zhao; Bryant, Garnett W; Griffin, Wesley; Terrill, Judith E; Jian Chen

    2017-06-01

    We designed and evaluated SplitVectors, a new vector field display approach to help scientists perform new discrimination tasks on large-magnitude-range scientific data shown in three-dimensional (3D) visualization environments. SplitVectors uses scientific notation to display vector magnitude, thus improving legibility. We present an empirical study comparing the SplitVectors approach with three other approaches - direct linear representation, logarithmic, and text display commonly used in scientific visualizations. Twenty participants performed three domain analysis tasks: reading numerical values (a discrimination task), finding the ratio between values (a discrimination task), and finding the larger of two vectors (a pattern detection task). Participants used both mono and stereo conditions. Our results suggest the following: (1) SplitVectors improve accuracy by about 10 times compared to linear mapping and by four times to logarithmic in discrimination tasks; (2) SplitVectors have no significant differences from the textual display approach, but reduce cluttering in the scene; (3) SplitVectors and textual display are less sensitive to data scale than linear and logarithmic approaches; (4) using logarithmic can be problematic as participants' confidence was as high as directly reading from the textual display, but their accuracy was poor; and (5) Stereoscopy improved performance, especially in more challenging discrimination tasks.
