Sample records for dimensional time series

  1. A Recurrent Probabilistic Neural Network with Dimensionality Reduction Based on Time-series Discriminant Component Analysis.

    PubMed

    Hayashi, Hideaki; Shibanoki, Taro; Shima, Keisuke; Kurita, Yuichi; Tsuji, Toshio

    2015-12-01

    This paper proposes a probabilistic neural network (NN) developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower dimensional space using a set of orthogonal transformations and the calculation of posterior probabilities based on a continuous-density hidden Markov model with a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into an NN, which is named a time-series discriminant component network (TSDCN), so that parameters of dimensionality reduction and classification can be obtained simultaneously as network coefficients according to a backpropagation through time-based learning algorithm with the Lagrange multiplier method. The TSDCN is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. The validity of the TSDCN is demonstrated for high-dimensional artificial data and electroencephalogram signals in the experiments conducted during the study.

  2. Defect-Repairable Latent Feature Extraction of Driving Behavior via a Deep Sparse Autoencoder

    PubMed Central

    Taniguchi, Tadahiro; Takenaka, Kazuhito; Bando, Takashi

    2018-01-01

    Data representing driving behavior, as measured by various sensors installed in a vehicle, are collected as multi-dimensional sensor time-series data. These data often include redundant information, e.g., both the speed of wheels and the engine speed represent the velocity of the vehicle. Redundant information can be expected to complicate the data analysis, e.g., more factors need to be analyzed; even varying the levels of redundancy can influence the results of the analysis. We assume that the measured multi-dimensional sensor time-series data of driving behavior are generated from low-dimensional data shared by the many types of one-dimensional data of which multi-dimensional time-series data are composed. Meanwhile, sensor time-series data may be defective because of sensor failure. Therefore, another important function is to reduce the negative effect of defective data when extracting low-dimensional time-series data. This study proposes a defect-repairable feature extraction method based on a deep sparse autoencoder (DSAE) to extract low-dimensional time-series data. In the experiments, we show that DSAE provides high-performance latent feature extraction for driving behavior, even for defective sensor time-series data. In addition, we show that the negative effect of defects on the driving behavior segmentation task could be reduced using the latent features extracted by DSAE. PMID:29462931
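    The following is a minimal sketch of the underlying idea: a sparse autoencoder trained to reconstruct multi-dimensional sensor frames, with a mask that excludes defective readings from the loss. It is not the authors' DSAE; the layer sizes, sparsity weight, and masking scheme are illustrative assumptions.

```python
# Minimal sparse autoencoder sketch (PyTorch). Not the authors' DSAE:
# layer sizes, the L1 sparsity weight, and the masking scheme for
# defective sensors are illustrative assumptions.
import torch
import torch.nn as nn

class SparseAE(nn.Module):
    def __init__(self, n_sensors, n_latent):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_sensors, 32), nn.Tanh(),
                                     nn.Linear(32, n_latent), nn.Tanh())
        self.decoder = nn.Sequential(nn.Linear(n_latent, 32), nn.Tanh(),
                                     nn.Linear(32, n_sensors))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def train(model, x, mask, epochs=200, l1=1e-3):
    # mask[i, j] = 0 marks a defective reading; it is excluded from the loss
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        recon, z = model(x)
        loss = (((recon - x) ** 2) * mask).mean() + l1 * z.abs().mean()
        loss.backward()
        opt.step()
    return model

# Usage: x is (samples, sensors) standardized driving data
x = torch.randn(1000, 12)
mask = torch.ones_like(x)              # 1 = valid, 0 = defective
model = train(SparseAE(12, 3), x, mask)
with torch.no_grad():
    latent = model.encoder(x)          # low-dimensional time series
```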

  3. Approximating high-dimensional dynamics by barycentric coordinates with linear programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirata, Yoshito, E-mail: yoshito@sat.t.u-tokyo.ac.jp; Aihara, Kazuyuki; Suzuki, Hideyuki

    The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit for the relatively short high-dimensional time series typically observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming, and allowing the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.

  4. Approximating high-dimensional dynamics by barycentric coordinates with linear programming.

    PubMed

    Hirata, Yoshito; Shiro, Masanori; Takahashi, Nozomu; Aihara, Kazuyuki; Suzuki, Hideyuki; Mas, Paloma

    2015-01-01

    The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit for the relatively short high-dimensional time series typically observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming, and allowing the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
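    A hedged sketch of the core computation: finding non-negative barycentric weights over a library of reference states by linear programming, with an explicit L1 approximation error. The library construction and error handling of the actual method are simplified assumptions here.

```python
# Sketch of approximating a target state as a barycentric (convex)
# combination of library states via linear programming, minimizing the
# L1 approximation error. A simplification of the paper's method; the
# library construction and error weighting are assumptions.
import numpy as np
from scipy.optimize import linprog

def barycentric_weights(library, target):
    """library: (k, d) reference states; target: (d,) state to approximate."""
    k, d = library.shape
    # variables: [w (k), e_plus (d), e_minus (d)], all >= 0
    c = np.concatenate([np.zeros(k), np.ones(2 * d)])   # minimize total error
    # equality: library.T @ w + e_plus - e_minus = target
    A_eq1 = np.hstack([library.T, np.eye(d), -np.eye(d)])
    # equality: sum(w) = 1
    A_eq2 = np.concatenate([np.ones(k), np.zeros(2 * d)])[None, :]
    res = linprog(c, A_eq=np.vstack([A_eq1, A_eq2]),
                  b_eq=np.concatenate([target, [1.0]]),
                  bounds=[(0, None)] * (k + 2 * d), method="highs")
    return res.x[:k]

# Usage: predict the successor of the current state as the weighted
# average of the successors of the library states.
rng = np.random.default_rng(0)
lib, succ = rng.normal(size=(50, 3)), rng.normal(size=(50, 3))
w = barycentric_weights(lib, lib[0] + 0.01)
prediction = w @ succ
```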

  5. Multivariate time series analysis of neuroscience data: some challenges and opportunities.

    PubMed

    Pourahmadi, Mohsen; Noorbaloochi, Siamak

    2016-04-01

    Neuroimaging data may be viewed as high-dimensional multivariate time series, and analyzed using techniques from regression analysis, time series analysis and spatiotemporal analysis. We discuss issues related to data quality, model specification, estimation, interpretation, dimensionality and causality. Some recent research areas addressing aspects of some recurring challenges are introduced. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Information mining over heterogeneous and high-dimensional time-series data in clinical trials databases.

    PubMed

    Altiparmak, Fatih; Ferhatosmanoglu, Hakan; Erdal, Selnur; Trost, Donald C

    2006-04-01

    An effective analysis of clinical trials data involves analyzing different types of data such as heterogeneous and high dimensional time series data. The current time series analysis methods generally assume that the series at hand have sufficient length to apply statistical techniques to them. Other ideal case assumptions are that data are collected in equal length intervals, and while comparing time series, the lengths are usually expected to be equal to each other. However, these assumptions are not valid for many real data sets, especially for the clinical trials data sets. In addition, the data sources are different from each other, the data are heterogeneous, and the sensitivity of the experiments varies by the source. Approaches for mining time series data need to be revisited, keeping the wide range of requirements in mind. In this paper, we propose a novel approach for information mining that involves two major steps: applying a data mining algorithm over homogeneous subsets of data, and identifying common or distinct patterns over the information gathered in the first step. Our approach is implemented specifically for heterogeneous and high dimensional time series clinical trials data. Using this framework, we propose a new way of utilizing frequent itemset mining, as well as clustering and declustering techniques with novel distance metrics for measuring similarity between time series data. By clustering the data, we find groups of analytes (substances in blood) that are most strongly correlated. Most of these relationships are already known and are verified by the clinical panels, and, in addition, we identify novel groups that need further biomedical analysis. A slight modification to our algorithm results in an effective declustering of high dimensional time series data, which is then used for "feature selection." Using industry-sponsored clinical trials data sets, we are able to identify a small set of analytes that effectively models the state of normal health.
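    As a toy illustration of the clustering step, analyte time series can be grouped with a correlation-based distance; the synthetic data, the distance choice, and the threshold below are assumptions, not the paper's novel metrics.

```python
# Illustrative sketch: group analyte time series by correlation, a simplified
# stand-in for the paper's clustering with novel distance metrics.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(1)
series = rng.normal(size=(8, 60))          # 8 analytes, 60 visits
corr = np.corrcoef(series)
dist = 1.0 - np.abs(corr)                  # correlation distance
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist, checks=False), method="average")
groups = fcluster(Z, t=0.6, criterion="distance")
print(groups)                              # cluster label per analyte
```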

  7. GATE: software for the analysis and visualization of high-dimensional time series expression data.

    PubMed

    MacArthur, Ben D; Lachmann, Alexander; Lemischka, Ihor R; Ma'ayan, Avi

    2010-01-01

    We present Grid Analysis of Time series Expression (GATE), an integrated computational software platform for the analysis and visualization of high-dimensional biomolecular time series. GATE uses a correlation-based clustering algorithm to arrange molecular time series on a two-dimensional hexagonal array and dynamically colors individual hexagons according to the expression level of the molecular component to which they are assigned, to create animated movies of systems-level molecular regulatory dynamics. In order to infer potential regulatory control mechanisms from patterns of correlation, GATE also allows interactive interrogation of movies against a wide variety of prior knowledge datasets. GATE movies can be paused and are interactive, allowing users to reconstruct networks and perform functional enrichment analyses. Movies created with GATE can be saved in Flash format and can be inserted directly into PDF manuscript files as interactive figures. GATE is available for download and is free for academic use from http://amp.pharm.mssm.edu/maayan-lab/gate.htm

  8. Analysis and generation of groundwater concentration time series

    NASA Astrophysics Data System (ADS)

    Crăciun, Maria; Vamoş, Călin; Suciu, Nicolae

    2018-01-01

    Concentration time series are provided by simulated concentrations of a nonreactive solute transported in groundwater, integrated over the transverse direction of a two-dimensional computational domain and recorded at the plume center of mass. The analysis of a statistical ensemble of time series reveals subtle features that are not captured by the first two moments which characterize the approximate Gaussian distribution of the two-dimensional concentration fields. The concentration time series exhibit a complex preasymptotic behavior driven by a nonstationary trend and correlated fluctuations with time-variable amplitude. Time series with almost the same statistics are generated by successively adding to a time-dependent trend a sum of linear regression terms, accounting for correlations between fluctuations around the trend and their increments in time, and terms of an amplitude modulated autoregressive noise of order one with time-varying parameter. The algorithm generalizes mixing models used in probability density function approaches. The well-known interaction by exchange with the mean mixing model is a special case consisting of a linear regression with constant coefficients.
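    A small sketch of the generation idea described above, adding amplitude-modulated AR(1) fluctuations with a time-varying coefficient to a deterministic trend; the specific trend, amplitude, and coefficient functions are illustrative assumptions.

```python
# Sketch of the generation idea: a deterministic trend plus an
# amplitude-modulated AR(1) noise with a time-varying coefficient.
import numpy as np

rng = np.random.default_rng(2)
n = 2000
t = np.arange(n)
trend = 1.0 / (1.0 + 0.01 * t)            # decaying trend (illustrative)
phi = 0.9 - 0.3 * t / n                   # time-varying AR(1) coefficient
amp = 0.2 * np.exp(-t / 800.0)            # time-varying noise amplitude

noise = np.zeros(n)
for i in range(1, n):
    noise[i] = phi[i] * noise[i - 1] + rng.normal()
series = trend + amp * noise
```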

  9. Emerging properties of financial time series in the ``Game of Life''

    NASA Astrophysics Data System (ADS)

    Hernández-Montoya, A. R.; Coronel-Brizio, H. F.; Stevens-Ramírez, G. A.; Rodríguez-Achach, M.; Politi, M.; Scalas, E.

    2011-12-01

    We explore the spatial complexity of Conway’s “Game of Life,” a prototypical cellular automaton, by means of a geometrical procedure generating a two-dimensional random walk from a bidimensional lattice with periodical boundaries. The one-dimensional projection of this process is analyzed and it turns out that some of its statistical properties resemble the so-called stylized facts observed in financial time series. The scope and meaning of this result are discussed from the viewpoint of complex systems. In particular, we stress how the supposed peculiarities of financial time series are, often, overrated in their importance.

  10. Influence analysis for high-dimensional time series with an application to epileptic seizure onset zone detection

    PubMed Central

    Flamm, Christoph; Graef, Andreas; Pirker, Susanne; Baumgartner, Christoph; Deistler, Manfred

    2013-01-01

    Granger causality is a useful concept for studying causal relations in networks. However, numerical problems occur when applying the corresponding methodology to high-dimensional time series showing co-movement, e.g. EEG recordings or economic data. In order to deal with these shortcomings, we propose a novel method for the causal analysis of such multivariate time series based on Granger causality and factor models. We present the theoretical background, successfully assess our methodology with the help of simulated data and show a potential application in EEG analysis of epileptic seizures. PMID:23354014
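    A hedged sketch in the same spirit: reduce co-moving channels to a few principal-component factors and run pairwise Granger tests on the factors. This is not the authors' exact estimator; the data sizes and lag order are illustrative.

```python
# Sketch: PCA factor extraction followed by pairwise Granger causality tests,
# mimicking the "Granger causality plus factor models" idea at a basic level.
import numpy as np
from sklearn.decomposition import PCA
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 20))                 # 500 samples, 20 channels
factors = PCA(n_components=3).fit_transform(X)

# Does factor 1 Granger-cause factor 0?  (column order: [caused, causing])
res = grangercausalitytests(factors[:, [0, 1]], maxlag=4)   # prints test summaries
pvals = {lag: r[0]["ssr_ftest"][1] for lag, r in res.items()}
print(pvals)
```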

  11. Emerging properties of financial time series in the "Game of Life".

    PubMed

    Hernández-Montoya, A R; Coronel-Brizio, H F; Stevens-Ramírez, G A; Rodríguez-Achach, M; Politi, M; Scalas, E

    2011-12-01

    We explore the spatial complexity of Conway's "Game of Life," a prototypical cellular automaton, by means of a geometrical procedure generating a two-dimensional random walk from a bidimensional lattice with periodical boundaries. The one-dimensional projection of this process is analyzed and it turns out that some of its statistical properties resemble the so-called stylized facts observed in financial time series. The scope and meaning of this result are discussed from the viewpoint of complex systems. In particular, we stress how the supposed peculiarities of financial time series are, often, overrated in their importance.

  12. Approximate series solution of multi-dimensional, time fractional-order (heat-like) diffusion equations using FRDTM.

    PubMed

    Singh, Brajesh K; Srivastava, Vineet K

    2015-04-01

    The main goal of this paper is to present a new approximate series solution of the multi-dimensional (heat-like) diffusion equation with time-fractional derivative in Caputo form using a semi-analytical approach: fractional-order reduced differential transform method (FRDTM). The efficiency of FRDTM is confirmed by considering four test problems of the multi-dimensional time fractional-order diffusion equation. FRDTM is a very efficient, effective and powerful mathematical tool which provides exact or very close approximate solutions for a wide range of real-world problems arising in engineering and natural sciences, modelled in terms of differential equations.

  13. Approximate series solution of multi-dimensional, time fractional-order (heat-like) diffusion equations using FRDTM

    PubMed Central

    Singh, Brajesh K.; Srivastava, Vineet K.

    2015-01-01

    The main goal of this paper is to present a new approximate series solution of the multi-dimensional (heat-like) diffusion equation with time-fractional derivative in Caputo form using a semi-analytical approach: fractional-order reduced differential transform method (FRDTM). The efficiency of FRDTM is confirmed by considering four test problems of the multi-dimensional time fractional-order diffusion equation. FRDTM is a very efficient, effective and powerful mathematical tool which provides exact or very close approximate solutions for a wide range of real-world problems arising in engineering and natural sciences, modelled in terms of differential equations. PMID:26064639
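    For orientation, a one-dimensional representative of this equation class and the power-series form sought by FRDTM can be written as follows; the recurrence shown is the standard reduced-differential-transform relation under the stated definitions, while the concrete multi-dimensional test problems are those of the paper.

```latex
% One-dimensional instance of the time-fractional (heat-like) diffusion
% equation in Caputo form, and the power-series form sought by FRDTM.
\[
  {}^{C}D_{t}^{\alpha} u(x,t) = a(x)\,\frac{\partial^{2} u}{\partial x^{2}},
  \qquad 0 < \alpha \le 1,
\]
\[
  u(x,t) = \sum_{k=0}^{\infty} U_{k}(x)\, t^{k\alpha},
  \qquad
  U_{k+1}(x) = \frac{\Gamma(k\alpha + 1)}{\Gamma((k+1)\alpha + 1)}
               \, a(x)\, \frac{\partial^{2} U_{k}(x)}{\partial x^{2}}.
\]
```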

  14. The application of time series models to cloud field morphology analysis

    NASA Technical Reports Server (NTRS)

    Chin, Roland T.; Jau, Jack Y. C.; Weinman, James A.

    1987-01-01

    A modeling method for the quantitative description of remotely sensed cloud field images is presented. A two-dimensional texture modeling scheme based on one-dimensional time series procedures is adopted for this purpose. The time series procedure used is the seasonal autoregressive moving average (ARMA) process of Box and Jenkins. Cloud field properties such as directionality, clustering and cloud coverage can be retrieved by this method. It has been demonstrated that a cloud field image can be quantitatively defined by a small set of parameters and synthesized surrogates can be reconstructed from these model parameters. This method enables cloud climatology to be studied quantitatively.

  15. Phase unwrapping in three dimensions with application to InSAR time series.

    PubMed

    Hooper, Andrew; Zebker, Howard A

    2007-09-01

    The problem of phase unwrapping in two dimensions has been studied extensively in the past two decades, but the three-dimensional (3D) problem has so far received relatively little attention. We develop here a theoretical framework for 3D phase unwrapping and also describe two algorithms for implementation, both of which can be applied to synthetic aperture radar interferometry (InSAR) time series. We test the algorithms on simulated data and find both give more accurate results than a two-dimensional algorithm. When applied to actual InSAR time series, we find good agreement both between the algorithms and with ground truth.

  16. Ince-Gaussian series representation of the two-dimensional fractional Fourier transform.

    PubMed

    Bandres, Miguel A; Gutiérrez-Vega, Julio C

    2005-03-01

    We introduce the Ince-Gaussian series representation of the two-dimensional fractional Fourier transform in elliptical coordinates. A physical interpretation is provided in terms of field propagation in quadratic graded-index media whose eigenmodes in elliptical coordinates are derived for the first time to our knowledge. The kernel of the new series representation is expressed in terms of Ince-Gaussian functions. The equivalence among the Hermite-Gaussian, Laguerre-Gaussian, and Ince-Gaussian series representations is verified by establishing the relation among the three definitions.

  17. TimesVector: a vectorized clustering approach to the analysis of time series transcriptome data from multiple phenotypes.

    PubMed

    Jung, Inuk; Jo, Kyuri; Kang, Hyejin; Ahn, Hongryul; Yu, Youngjae; Kim, Sun

    2017-12-01

    Identifying biologically meaningful gene expression patterns from time series gene expression data is important to understand the underlying biological mechanisms. To identify significantly perturbed gene sets between different phenotypes, analysis of time series transcriptome data requires consideration of time and sample dimensions. Thus, the analysis of such time series data seeks to find gene sets that exhibit similar or different expression patterns between two or more sample conditions, constituting the three-dimensional data, i.e. gene-time-condition. Computational complexity for analyzing such data is very high, compared to the already difficult, NP-hard two-dimensional biclustering algorithms. Because of this challenge, traditional time series clustering algorithms are designed to capture co-expressed genes with similar expression patterns in two sample conditions. We present a triclustering algorithm, TimesVector, specifically designed for clustering three-dimensional time series data to capture distinctively similar or different gene expression patterns between two or more sample conditions. TimesVector identifies clusters with distinctive expression patterns in three steps: (i) dimension reduction and clustering of time-condition concatenated vectors, (ii) post-processing clusters for detecting similar and distinct expression patterns and (iii) rescuing genes from unclassified clusters. Using four sets of time series gene expression data, generated by both microarray and high throughput sequencing platforms, we demonstrated that TimesVector successfully detected biologically meaningful clusters of high quality. TimesVector improved the clustering quality compared to existing triclustering tools and only TimesVector successfully detected clusters with differential expression patterns across conditions. The TimesVector software is available at http://biohealth.snu.ac.kr/software/TimesVector/. sunkim.bioinfo@snu.ac.kr. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
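    A simplified sketch of step (i) only: each gene's time profiles across conditions are concatenated into one vector, reduced, and clustered. The post-processing and rescue steps of the actual tool are omitted; array shapes and cluster counts are illustrative.

```python
# Simplified sketch of time-condition concatenation, reduction and clustering.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
genes, conditions, timepoints = 500, 3, 8
expr = rng.normal(size=(genes, conditions, timepoints))

vectors = expr.reshape(genes, conditions * timepoints)   # time-condition concatenation
reduced = PCA(n_components=10).fit_transform(vectors)
labels = KMeans(n_clusters=20, n_init=10, random_state=0).fit_predict(reduced)
```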

  18. From Networks to Time Series

    NASA Astrophysics Data System (ADS)

    Shimada, Yutaka; Ikeguchi, Tohru; Shigehara, Takaomi

    2012-10-01

    In this Letter, we propose a framework to transform a complex network to a time series. The transformation from complex networks to time series is realized by the classical multidimensional scaling. Applying the transformation method to a model proposed by Watts and Strogatz [Nature (London) 393, 440 (1998)], we show that ring lattices are transformed to periodic time series, small-world networks to noisy periodic time series, and random networks to random time series. We also show that these relationships are analytically held by using the circulant-matrix theory and the perturbation theory of linear operators. The results are generalized to several high-dimensional lattices.
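    A brief sketch of the transformation for the ring-lattice case: classical multidimensional scaling of shortest-path distances, with the leading coordinate read off along the ring yielding a roughly periodic sequence. Graph size and neighbourhood size are illustrative choices.

```python
# Sketch: network -> time series via classical multidimensional scaling (MDS)
# of shortest-path distances on a ring lattice.
import numpy as np
import networkx as nx

G = nx.watts_strogatz_graph(n=100, k=4, p=0.0)        # p=0 gives a ring lattice
D = np.asarray(nx.floyd_warshall_numpy(G))            # shortest-path distances

# Classical MDS: double-center the squared distance matrix and eigendecompose.
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
vals, vecs = np.linalg.eigh(B)
coords = vecs[:, ::-1] * np.sqrt(np.maximum(vals[::-1], 0.0))

series = coords[:, 0]      # nodes read off in ring order -> periodic series
```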

  19. Modeling Time Series Data for Supervised Learning

    ERIC Educational Resources Information Center

    Baydogan, Mustafa Gokce

    2012-01-01

    Temporal data are increasingly prevalent and important in analytics. Time series (TS) data are chronological sequences of observations and an important class of temporal data. Fields such as medicine, finance, learning science and multimedia naturally generate TS data. Each series provide a high-dimensional data vector that challenges the learning…

  20. A Multitaper, Causal Decomposition for Stochastic, Multivariate Time Series: Application to High-Frequency Calcium Imaging Data.

    PubMed

    Sornborger, Andrew T; Lauderdale, James D

    2016-11-01

    Neural data analysis has increasingly incorporated causal information to study circuit connectivity. Dimensional reduction forms the basis of most analyses of large multivariate time series. Here, we present a new, multitaper-based decomposition for stochastic, multivariate time series that acts on the covariance of the time series at all lags, C(τ), as opposed to standard methods that decompose the time series, X(t), using only information at zero lag. In both simulated and neural imaging examples, we demonstrate that methods that neglect the full causal structure may be discarding important dynamical information in a time series.

  1. Forecasting and analyzing high O3 time series in educational area through an improved chaotic approach

    NASA Astrophysics Data System (ADS)

    Hamid, Nor Zila Abd; Adenan, Nur Hamiza; Noorani, Mohd Salmi Md

    2017-08-01

    Forecasting and analyzing the ozone (O3) concentration time series is important because the pollutant is harmful to health. This study is a pilot study for forecasting and analyzing the O3 time series in a Malaysian educational area, namely Shah Alam, using a chaotic approach. Through this approach, the observed hourly scalar time series is reconstructed into a multi-dimensional phase space, which is then used to forecast the future time series through the local linear approximation method. The main purpose is to forecast the high O3 concentrations. The original method performed poorly, but the improved method addressed this weakness, thereby enabling the high concentrations to be successfully forecast. The correlation coefficient between the observed and forecasted time series through the improved method is 0.9159 and both the mean absolute error and root mean squared error are low. Thus, the improved method is advantageous. The time series analysis by means of the phase space plot and Cao method identified the presence of low-dimensional chaotic dynamics in the observed O3 time series. Results showed that at least seven factors affect the studied O3 time series, which is consistent with the listed factors from the diurnal variations investigation and the sensitivity analysis from past studies. In conclusion, the chaotic approach has successfully forecast and analyzed the O3 time series in the educational area of Shah Alam. These findings are expected to help stakeholders such as the Ministry of Education and the Department of Environment achieve better air pollution management.
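    A hedged sketch of the generic chaotic-approach forecast described here: delay embedding of a scalar series followed by a local linear approximation built from nearest neighbours. The embedding dimension, delay, and neighbour count are illustrative, not the values identified for the Shah Alam data.

```python
# Delay embedding plus local linear approximation for one-step forecasting.
import numpy as np

def embed(x, dim, tau):
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def local_linear_forecast(x, dim=7, tau=1, k=15):
    V = embed(x, dim, tau)
    current, library = V[-1], V[:-1]
    targets = x[(dim - 1) * tau + 1:]                     # successor of each library row
    idx = np.argsort(np.linalg.norm(library - current, axis=1))[:k]
    A = np.column_stack([library[idx], np.ones(k)])       # affine local model
    coef, *_ = np.linalg.lstsq(A, targets[idx], rcond=None)
    return np.append(current, 1.0) @ coef

rng = np.random.default_rng(5)
x = np.sin(0.2 * np.arange(2000)) + 0.05 * rng.normal(size=2000)
print(local_linear_forecast(x))   # one-step-ahead prediction
```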

  2. Multifractal detrended cross-correlation analysis for two nonstationary signals.

    PubMed

    Zhou, Wei-Xing

    2008-06-01

    We propose a method called multifractal detrended cross-correlation analysis to investigate the multifractal behaviors in the power-law cross-correlations between two time series or higher-dimensional quantities recorded simultaneously, which can be applied to diverse complex systems such as turbulence, finance, ecology, physiology, geophysics, and so on. The method is validated with cross-correlated one- and two-dimensional binomial measures and multifractal random walks. As an example, we illustrate the method by analyzing two financial time series.

  3. Low-dimensional chaos in magnetospheric activity from AE time series

    NASA Technical Reports Server (NTRS)

    Vassiliadis, D. V.; Sharma, A. S.; Eastman, T. E.; Papadopoulos, K.

    1990-01-01

    The magnetospheric response to the solar-wind input, as represented by the time-series measurements of the auroral electrojet (AE) index, has been examined using phase-space reconstruction techniques. The system was found to behave as a low-dimensional chaotic system with a fractal dimension of 3.6 and has Kolmogorov entropy less than 0.2/min. These indicate that the dynamics of the system can be adequately described by four independent variables, and that the corresponding intrinsic time scale is of the order of 5 min. The relevance of the results to magnetospheric modeling is discussed.
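    A minimal Grassberger-Procaccia style sketch for estimating a correlation dimension from a scalar series; the embedding parameters and the fit range of radii are assumptions and would need care for real AE data.

```python
# Correlation-dimension estimate from the slope of log C(r) versus log r.
import numpy as np
from scipy.spatial.distance import pdist

def correlation_dimension(x, dim=4, tau=5):
    n = len(x) - (dim - 1) * tau
    V = np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])
    d = pdist(V)
    radii = np.logspace(np.log10(np.percentile(d, 1)),
                        np.log10(np.percentile(d, 50)), 12)
    C = np.array([np.mean(d < r) for r in radii])     # correlation sum C(r)
    slope, _ = np.polyfit(np.log(radii), np.log(C), 1)
    return slope

rng = np.random.default_rng(6)
x = np.sin(0.05 * np.arange(2000)) + 0.1 * rng.normal(size=2000)
print(correlation_dimension(x))
```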

  4. Risk patterns and correlated brain activities. Multidimensional statistical analysis of FMRI data in economic decision making study.

    PubMed

    van Bömmel, Alena; Song, Song; Majer, Piotr; Mohr, Peter N C; Heekeren, Hauke R; Härdle, Wolfgang K

    2014-07-01

    Decision making usually involves uncertainty and risk. Understanding which parts of the human brain are activated during decisions under risk and which neural processes underlie (risky) investment decisions are important goals in neuroeconomics. Here, we analyze functional magnetic resonance imaging (fMRI) data on 17 subjects who were exposed to an investment decision task from Mohr, Biele, Krugel, Li, and Heekeren (in NeuroImage 49, 2556-2563, 2010b). We obtain a time series of three-dimensional images of the blood-oxygen-level dependent (BOLD) fMRI signals. We apply a panel version of the dynamic semiparametric factor model (DSFM) presented in Park, Mammen, Wolfgang, and Borak (in Journal of the American Statistical Association 104(485), 284-298, 2009) and identify task-related activations in space and dynamics in time. With the panel DSFM (PDSFM) we can capture the dynamic behavior of the specific brain regions common for all subjects and represent the high-dimensional time-series data in easily interpretable low-dimensional dynamic factors without large loss of variability. Further, we classify the risk attitudes of all subjects based on the estimated low-dimensional time series. Our classification analysis successfully confirms the estimated risk attitudes derived directly from subjects' decision behavior.

  5. Application of time series discretization using evolutionary programming for classification of precancerous cervical lesions.

    PubMed

    Acosta-Mesa, Héctor-Gabriel; Rechy-Ramírez, Fernando; Mezura-Montes, Efrén; Cruz-Ramírez, Nicandro; Hernández Jiménez, Rodolfo

    2014-06-01

    In this work, we present a novel application of time series discretization using evolutionary programming for the classification of precancerous cervical lesions. The approach optimizes the number of intervals in which the length and amplitude of the time series should be compressed, preserving the important information for classification purposes. Using evolutionary programming, the search for a good discretization scheme is guided by a cost function which considers three criteria: the entropy regarding the classification, the complexity measured as the number of different strings needed to represent the complete data set, and the compression rate assessed as the length of the discrete representation. This discretization approach is evaluated using time series data based on temporal patterns observed during a classical test used in cervical cancer detection; the classification accuracy reached by our method is compared with the well-known time series discretization algorithm SAX and the dimensionality reduction method PCA. Statistical analysis of the classification accuracy shows that the discrete representation is as efficient as the complete raw representation for the present application, reducing the dimensionality of the time series length by 97%. This representation is also very competitive in terms of classification accuracy when compared with similar approaches. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. Precisions Measurement for the Grasp of Welding Deformation amount of Time Series for Large-Scale Industrial Products

    NASA Astrophysics Data System (ADS)

    Abe, R.; Hamada, K.; Hirata, N.; Tamura, R.; Nishi, N.

    2015-05-01

    As with BIM-based quality management in the construction industry, there is growing demand in the shipbuilding field for quality management of the manufacturing process of structural members, and the three-dimensional deformation at each process step must be grasped accurately as a time series. In this study, we focus on the shipbuilding field and examine a three-dimensional measurement method. In a shipyard, large equipment and components are intricately arranged in a limited space, so the placement of measuring equipment and targets is restricted. Moreover, the elements to be measured are moved between process steps, so establishing reference points for time-series comparison requires careful design. This paper discusses a method for measuring welding deformation as a time series using a total station. In particular, multiple measurement data obtained with this approach are used to evaluate the amount of deformation at each process step.

  7. Time-dependent calculations of transfer ionization by fast proton-helium collision in one-dimensional kinematics

    NASA Astrophysics Data System (ADS)

    Serov, Vladislav V.; Kheifets, A. S.

    2014-12-01

    We analyze a transfer ionization (TI) reaction in the fast proton-helium collision H++He →H0+He2 ++ e- by solving a time-dependent Schrödinger equation (TDSE) under the classical projectile motion approximation in one-dimensional kinematics. In addition, we construct various time-independent analogs of our model using lowest-order perturbation theory in the form of the Born series. By comparing various aspects of the TDSE and the Born series calculations, we conclude that the recent discrepancies of experimental and theoretical data may be attributed to deficiency of the Born models used by other authors. We demonstrate that the correct Born series for TI should include the momentum-space overlap between the double-ionization amplitude and the wave function of the transferred electron.

  8. Extending nonlinear analysis to short ecological time series.

    PubMed

    Hsieh, Chih-hao; Anderson, Christian; Sugihara, George

    2008-01-01

    Nonlinearity is important and ubiquitous in ecology. Though detectable in principle, nonlinear behavior is often difficult to characterize, analyze, and incorporate mechanistically into models of ecosystem function. One obvious reason is that quantitative nonlinear analysis tools are data intensive (require long time series), and time series in ecology are generally short. Here we demonstrate a useful method that circumvents data limitation and reduces sampling error by combining ecologically similar multispecies time series into one long time series. With this technique, individual ecological time series containing as few as 20 data points can be mined for such important information as (1) significantly improved forecast ability, (2) the presence and location of nonlinearity, and (3) the effective dimensionality (the number of relevant variables) of an ecological system.

  9. Indispensable finite time corrections for Fokker-Planck equations from time series data.

    PubMed

    Ragwitz, M; Kantz, H

    2001-12-17

    The reconstruction of Fokker-Planck equations from observed time series data suffers strongly from finite sampling rates. We show that previously published results are degraded considerably by such effects. We present correction terms which yield a robust estimation of the diffusion terms, together with a novel method for one-dimensional problems. We apply these methods to time series data of local surface wind velocities, where the dependence of the diffusion constant on the state variable shows a different behavior than previously suggested.
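    The following sketches the basic (uncorrected) conditional-moment estimator that such analyses start from, so the role of the finite-time corrections can be appreciated; the bin count and the simulated process are illustrative assumptions.

```python
# Basic drift/diffusion estimation from a time series via binned conditional
# moments of the increments (Kramers-Moyal style), without finite-time corrections.
import numpy as np

def drift_diffusion(x, dt, bins=30):
    dx = np.diff(x)
    edges = np.histogram_bin_edges(x[:-1], bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    idx = np.digitize(x[:-1], edges) - 1
    drift, diff = np.full(bins, np.nan), np.full(bins, np.nan)
    for i in range(bins):
        m = idx == i
        if m.any():
            drift[i] = dx[m].mean() / dt                 # D1(x)
            diff[i] = (dx[m] ** 2).mean() / (2 * dt)     # D2(x)
    return centers, drift, diff

# Ornstein-Uhlenbeck test series: dx = -x dt + sqrt(2 D) dW
rng = np.random.default_rng(7)
dt, n, D = 0.01, 100_000, 0.5
x = np.zeros(n)
for i in range(1, n):
    x[i] = x[i - 1] - x[i - 1] * dt + np.sqrt(2 * D * dt) * rng.normal()
centers, drift, diff = drift_diffusion(x, dt)
```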

  10. Eisenstein series for infinite-dimensional U-duality groups

    NASA Astrophysics Data System (ADS)

    Fleig, Philipp; Kleinschmidt, Axel

    2012-06-01

    We consider Eisenstein series appearing as coefficients of curvature corrections in the low-energy expansion of type II string theory four-graviton scattering amplitudes. We define these Eisenstein series over all groups in the En series of string duality groups, and in particular for the infinite-dimensional Kac-Moody groups E9, E10 and E11. We show that, remarkably, the so-called constant term of Kac-Moody-Eisenstein series contains only a finite number of terms for particular choices of a parameter appearing in the definition of the series. This resonates with the idea that the constant term of the Eisenstein series encodes perturbative string corrections in BPS-protected sectors allowing only a finite number of corrections. We underpin our findings with an extensive discussion of physical degeneration limits in D < 3 space-time dimensions.

  11. Computation of canonical correlation and best predictable aspect of future for time series

    NASA Technical Reports Server (NTRS)

    Pourahmadi, Mohsen; Miamee, A. G.

    1989-01-01

    The canonical correlation between the (infinite) past and future of a stationary time series is shown to be the limit of the canonical correlation between the (infinite) past and (finite) future, and computation of the latter is reduced to a (generalized) eigenvalue problem involving (finite) matrices. This provides a convenient and essentially finite-dimensional algorithm for computing canonical correlations and components of a time series. An upper bound is conjectured for the largest canonical correlation.
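    A hedged numerical sketch of the finite-past/finite-future version: ordinary canonical correlation analysis applied to lagged blocks of a stationary series. The block lengths and the AR(1) test series are illustrative assumptions.

```python
# Canonical correlation between a finite past block and a finite future block.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(8)
n, p = 5000, 5                     # p past lags and p future leads
x = np.zeros(n)
for i in range(1, n):
    x[i] = 0.8 * x[i - 1] + rng.normal()

rows = n - 2 * p
past = np.column_stack([x[p - 1 - j: p - 1 - j + rows] for j in range(p)])
future = np.column_stack([x[p + j: p + j + rows] for j in range(p)])

cca = CCA(n_components=1).fit(past, future)
u, v = cca.transform(past, future)
print(np.corrcoef(u[:, 0], v[:, 0])[0, 1])   # largest canonical correlation
```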

  12. Faithfulness of Recurrence Plots: A Mathematical Proof

    NASA Astrophysics Data System (ADS)

    Hirata, Yoshito; Komuro, Motomasa; Horai, Shunsuke; Aihara, Kazuyuki

    It is practically known that a recurrence plot, a two-dimensional visualization of time series data, can contain almost all information related to the underlying dynamics except for its spatial scale because we can recover a rough shape for the original time series from the recurrence plot even if the original time series is multivariate. We here provide a mathematical proof that the metric defined by a recurrence plot [Hirata et al., 2008] is equivalent to the Euclidean metric under mild conditions.
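    A minimal recurrence-plot sketch for reference: R[i, j] = 1 when embedded states i and j are closer than a threshold. The embedding and threshold choices below are illustrative.

```python
# Construct a recurrence plot from a scalar series via delay embedding.
import numpy as np
from scipy.spatial.distance import cdist

def recurrence_plot(x, dim=3, tau=2, eps=None):
    n = len(x) - (dim - 1) * tau
    V = np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])
    D = cdist(V, V)
    if eps is None:
        eps = np.percentile(D, 10)      # keep ~10% of pairs as recurrences
    return (D < eps).astype(int)

x = np.sin(0.3 * np.arange(400))
R = recurrence_plot(x)
```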

  13. New Features for Neuron Classification.

    PubMed

    Hernández-Pérez, Leonardo A; Delgado-Castillo, Duniel; Martín-Pérez, Rainer; Orozco-Morales, Rubén; Lorenzo-Ginori, Juan V

    2018-04-28

    This paper addresses the problem of obtaining new neuron features capable of improving results of neuron classification. Most studies on neuron classification using morphological features have been based on Euclidean geometry. Here, three one-dimensional (1D) time series are instead derived from the three-dimensional (3D) structure of the neuron, and a spatial time series is then constructed from which the features are calculated. Digitally reconstructed neurons were separated into control and pathological sets, which are related to three categories of alterations caused by epilepsy, Alzheimer's disease (long and local projections), and ischemia. These neuron sets were then subjected to supervised classification and the results were compared considering three sets of features: morphological, features obtained from the time series and a combination of both. The best results were obtained using features from the time series, which outperformed the classification using only morphological features, showing higher correct classification rates with differences of 5.15%, 3.75%, and 5.33% for epilepsy and Alzheimer's disease (long and local projections), respectively. The morphological features were better for the ischemia set with a difference of 3.05%. Features like variance, Spearman auto-correlation, partial auto-correlation, mutual information, local minima and maxima, all related to the time series, exhibited the best performance. Also we compared different evaluators, among which ReliefF was the best ranked.

  14. Dimensionless embedding for nonlinear time series analysis

    NASA Astrophysics Data System (ADS)

    Hirata, Yoshito; Aihara, Kazuyuki

    2017-09-01

    Recently, infinite-dimensional delay coordinates (InDDeCs) have been proposed for predicting high-dimensional dynamics instead of conventional delay coordinates. Although InDDeCs can realize faster computation and more accurate short-term prediction, it is still not well-known whether InDDeCs can be used in other applications of nonlinear time series analysis in which reconstruction is needed for the underlying dynamics from a scalar time series generated from a dynamical system. Here, we give theoretical support for justifying the use of InDDeCs and provide numerical examples to show that InDDeCs can be used for various applications for obtaining the recurrence plots, correlation dimensions, and maximal Lyapunov exponents, as well as testing directional couplings and extracting slow-driving forces. We demonstrate performance of the InDDeCs using the weather data. Thus, InDDeCs can eventually realize "dimensionless embedding" while we enjoy faster and more reliable computations.

  15. Improvements of the two-dimensional FDTD method for the simulation of normal- and superconducting planar waveguides using time series analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hofschen, S.; Wolff, I.

    1996-08-01

    Time-domain simulation results of two-dimensional (2-D) planar waveguide finite-difference time-domain (FDTD) analysis are normally analyzed using the Fourier transform. The introduced method of time series analysis to extract propagation and attenuation constants reduces the required computation time drastically. Additionally, a nonequidistant discretization together with an adequate excitation technique is used to reduce the number of spatial grid points. Therefore, it is possible to simulate normal- and superconducting planar waveguide structures with very thin conductors and small dimensions, as they are used in MMIC technology. The simulation results are compared with measurements and show good agreement.

  16. A general framework for time series data mining based on event analysis: application to the medical domains of electroencephalography and stabilometry.

    PubMed

    Lara, Juan A; Lizcano, David; Pérez, Aurora; Valente, Juan P

    2014-10-01

    There are now domains where information is recorded over a period of time, leading to sequences of data known as time series. In many domains, like medicine, time series analysis requires focusing on certain regions of interest, known as events, rather than analyzing the whole time series. In this paper, we propose a framework for knowledge discovery in both one-dimensional and multidimensional time series containing events. We show how our approach can be used to classify medical time series by means of a process that identifies events in time series, generates time series reference models of representative events and compares two time series by analyzing the events they have in common. We have applied our framework on time series generated in the areas of electroencephalography (EEG) and stabilometry. Framework performance was evaluated in terms of classification accuracy, and the results confirmed that the proposed schema has potential for classifying EEG and stabilometric signals. The proposed framework is useful for discovering knowledge from medical time series containing events, such as stabilometric and electroencephalographic time series. These results would be equally applicable to other medical domains generating iconographic time series, such as, for example, electrocardiography (ECG). Copyright © 2014 Elsevier Inc. All rights reserved.

  17. Recurrence plots revisited

    NASA Astrophysics Data System (ADS)

    Casdagli, M. C.

    1997-09-01

    We show that recurrence plots (RPs) give detailed characterizations of time series generated by dynamical systems driven by slowly varying external forces. For deterministic systems we show that RPs of the time series can be used to reconstruct the RP of the driving force if it varies sufficiently slowly. If the driving force is one-dimensional, its functional form can then be inferred up to an invertible coordinate transformation. The same results hold for stochastic systems if the RP of the time series is suitably averaged and transformed. These results are used to investigate the nonlinear prediction of time series generated by dynamical systems driven by slowly varying external forces. We also consider the problem of detecting a small change in the driving force, and propose a surrogate data technique for assessing statistical significance. Numerically simulated time series and a time series of respiration rates recorded from a subject with sleep apnea are used as illustrative examples.

  18. Monitoring groundwater-surface water interaction using time-series and time-frequency analysis of transient three-dimensional electrical resistivity changes

    USGS Publications Warehouse

    Johnson, Timothy C.; Slater, Lee D.; Ntarlagiannis, Dimitris; Day-Lewis, Frederick D.; Elwaseif, Mehrez

    2012-01-01

    Time-lapse resistivity imaging is increasingly used to monitor hydrologic processes. Compared to conventional hydrologic measurements, surface time-lapse resistivity provides superior spatial coverage in two or three dimensions, potentially high-resolution information in time, and information in the absence of wells. However, interpretation of time-lapse electrical tomograms is complicated by the ever-increasing size and complexity of long-term, three-dimensional (3-D) time series conductivity data sets. Here we use 3-D surface time-lapse electrical imaging to monitor subsurface electrical conductivity variations associated with stage-driven groundwater-surface water interactions along a stretch of the Columbia River adjacent to the Hanford 300 near Richland, Washington, USA. We reduce the resulting 3-D conductivity time series using both time-series and time-frequency analyses to isolate a paleochannel causing enhanced groundwater-surface water interactions. Correlation analysis on the time-lapse imaging results concisely represents enhanced groundwater-surface water interactions within the paleochannel, and provides information concerning groundwater flow velocities. Time-frequency analysis using the Stockwell (S) transform provides additional information by identifying the stage periodicities driving groundwater-surface water interactions due to upstream dam operations, and identifying segments in time-frequency space when these interactions are most active. These results provide new insight into the distribution and timing of river water intrusion into the Hanford 300 Area, which has a governing influence on the behavior of a uranium plume left over from historical nuclear fuel processing operations.

  19. Results from field tests of the one-dimensional Time-Encoded Imaging System.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marleau, Peter; Brennan, James S.; Brubaker, Erik

    2014-09-01

    A series of field experiments were undertaken to evaluate the performance of the one-dimensional time-encoded imaging system. The significant detection of a Cf252 fission radiation source was demonstrated at a stand-off of 100 meters. Extrapolations to different quantities of plutonium equivalent at different distances are made. Hardware modifications to the system for follow-on work are suggested.

  20. Information extraction from dynamic PS-InSAR time series using machine learning

    NASA Astrophysics Data System (ADS)

    van de Kerkhof, B.; Pankratius, V.; Chang, L.; van Swol, R.; Hanssen, R. F.

    2017-12-01

    Due to the increasing number of SAR satellites, with shorter repeat intervals and higher resolutions, SAR data volumes are exploding. Time series analyses of SAR data, i.e. Persistent Scatterer (PS) InSAR, enable the deformation monitoring of the built environment at an unprecedented scale, with hundreds of scatterers per km2, updated weekly. Potential hazards, e.g. due to failure of aging infrastructure, can be detected at an early stage. Yet, this requires the operational data processing of billions of measurement points, over hundreds of epochs, updating this data set dynamically as new data come in, and testing whether points (start to) behave in an anomalous way. Moreover, the quality of PS-InSAR measurements is ambiguous and heterogeneous, which will yield false positives and false negatives. Such analyses are numerically challenging. Here we extract relevant information from PS-InSAR time series using machine learning algorithms. We cluster (group together) time series with similar behaviour, even though they may not be spatially close, such that the results can be used for further analysis. First we reduce the dimensionality of the dataset in order to be able to cluster the data, since applying clustering techniques on high-dimensional datasets often yields unsatisfactory results. Our approach is to apply t-distributed Stochastic Neighbor Embedding (t-SNE), a machine learning algorithm for dimensionality reduction of high-dimensional data to a 2D or 3D map, and cluster this result using Density-Based Spatial Clustering of Applications with Noise (DBSCAN). The results show that we are able to detect and cluster time series with similar behaviour, which is the starting point for more extensive analysis into the underlying driving mechanisms. The results of the methods are compared to conventional hypothesis testing as well as a Self-Organising Map (SOM) approach. Hypothesis testing is robust and takes the stochastic nature of the observations into account, but is time consuming. Therefore, we successively apply our machine learning approach with the hypothesis testing approach in order to benefit both from the reduced computation time of the machine learning approach and from the robust quality metrics of hypothesis testing. We acknowledge support from NASA AIST NNX15AG84G (PI V. Pankratius)
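    A small sketch of the described pipeline on synthetic deformation series: t-SNE to a 2D map followed by DBSCAN. The synthetic data and the perplexity/eps settings are illustrative, not tuned PS-InSAR values.

```python
# t-SNE dimensionality reduction followed by DBSCAN clustering of time series.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(9)
stable = rng.normal(0, 1, size=(300, 100))                           # noisy, stable scatterers
subsiding = np.cumsum(rng.normal(-0.3, 1, size=(100, 100)), axis=1)  # drifting scatterers
series = np.vstack([stable, subsiding])

embedded = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(series)
labels = DBSCAN(eps=3.0, min_samples=10).fit_predict(embedded)
print(np.unique(labels, return_counts=True))
```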

  1. Series expansion solutions for the multi-term time and space fractional partial differential equations in two- and three-dimensions

    NASA Astrophysics Data System (ADS)

    Ye, H.; Liu, F.; Turner, I.; Anh, V.; Burrage, K.

    2013-09-01

    Fractional partial differential equations with more than one fractional derivative in time describe some important physical phenomena, such as the telegraph equation, the power law wave equation, or the Szabo wave equation. In this paper, we consider two- and three-dimensional multi-term time and space fractional partial differential equations. The multi-term time-fractional derivative is defined in the Caputo sense, whose order belongs to the interval (1,2],(2,3],(3,4] or (0, m], and the space-fractional derivative is referred to as the fractional Laplacian form. We derive series expansion solutions based on a spectral representation of the Laplacian operator on a bounded region. Some applications are given for the two- and three-dimensional telegraph equation, power law wave equation and Szabo wave equation.

  2. Empirical intrinsic geometry for nonlinear modeling and time series filtering.

    PubMed

    Talmon, Ronen; Coifman, Ronald R

    2013-07-30

    In this paper, we present a method for time series analysis based on empirical intrinsic geometry (EIG). EIG enables one to reveal the low-dimensional parametric manifold as well as to infer the underlying dynamics of high-dimensional time series. By incorporating concepts of information geometry, this method extends existing geometric analysis tools to support stochastic settings and parametrizes the geometry of empirical distributions. However, the statistical models are not required as priors; hence, EIG may be applied to a wide range of real signals without existing definitive models. We show that the inferred model is noise-resilient and invariant under different observation and instrumental modalities. In addition, we show that it can be extended efficiently to newly acquired measurements in a sequential manner. These two advantages enable us to revisit the Bayesian approach and incorporate empirical dynamics and intrinsic geometry into a nonlinear filtering framework. We show applications to nonlinear and non-Gaussian tracking problems as well as to acoustic signal localization.

  3. Multi-label classification of chronically ill patients with bag of words and supervised dimensionality reduction algorithms.

    PubMed

    Bromuri, Stefano; Zufferey, Damien; Hennebert, Jean; Schumacher, Michael

    2014-10-01

    This research is motivated by the issue of classifying illnesses of chronically ill patients for decision support in clinical settings. Our main objective is to propose multi-label classification of multivariate time series contained in medical records of chronically ill patients, by means of quantization methods, such as bag of words (BoW), and multi-label classification algorithms. Our second objective is to compare supervised dimensionality reduction techniques to state-of-the-art multi-label classification algorithms. The hypothesis is that kernel methods and locality preserving projections make such algorithms good candidates to study multi-label medical time series. We combine BoW and supervised dimensionality reduction algorithms to perform multi-label classification on health records of chronically ill patients. The considered algorithms are compared with state-of-the-art multi-label classifiers in two real world datasets. Portavita dataset contains 525 diabetes type 2 (DT2) patients, with co-morbidities of DT2 such as hypertension, dyslipidemia, and microvascular or macrovascular issues. MIMIC II dataset contains 2635 patients affected by thyroid disease, diabetes mellitus, lipoid metabolism disease, fluid electrolyte disease, hypertensive disease, thrombosis, hypotension, chronic obstructive pulmonary disease (COPD), liver disease and kidney disease. The algorithms are evaluated using multi-label evaluation metrics such as hamming loss, one error, coverage, ranking loss, and average precision. Non-linear dimensionality reduction approaches behave well on medical time series quantized using the BoW algorithm, with results comparable to state-of-the-art multi-label classification algorithms. Chaining the projected features has a positive impact on the performance of the algorithm with respect to pure binary relevance approaches. The evaluation highlights the feasibility of representing medical health records using the BoW for multi-label classification tasks. The study also highlights that dimensionality reduction algorithms based on kernel methods, locality preserving projections or both are good candidates to deal with multi-label classification tasks in medical time series with many missing values and high label density. Copyright © 2014 Elsevier Inc. All rights reserved.
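    A hedged sketch of the bag-of-words quantization step only: sliding windows from each record are assigned to a learned codebook, and each series becomes a histogram of codeword counts. Window length and codebook size are assumptions; the multi-label classifiers and supervised projections are omitted.

```python
# Bag-of-words features for time series: windows -> codebook -> histograms.
import numpy as np
from sklearn.cluster import KMeans

def bow_features(series_list, window=10, codebook_size=32, seed=0):
    step = window // 2
    windows = np.asarray([s[i:i + window] for s in series_list
                          for i in range(0, len(s) - window + 1, step)])
    km = KMeans(n_clusters=codebook_size, n_init=10, random_state=seed).fit(windows)
    feats = []
    for s in series_list:
        w = np.asarray([s[i:i + window]
                        for i in range(0, len(s) - window + 1, step)])
        feats.append(np.bincount(km.predict(w), minlength=codebook_size))
    return np.asarray(feats)

rng = np.random.default_rng(10)
records = [rng.normal(size=200) for _ in range(50)]   # one series per patient
X = bow_features(records)                              # (patients, codebook_size)
```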

  4. Three-dimensional liver motion tracking using real-time two-dimensional MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brix, Lau, E-mail: lau.brix@stab.rm.dk; Ringgaard, Steffen; Sørensen, Thomas Sangild

    2014-04-15

    Purpose: Combined magnetic resonance imaging (MRI) systems and linear accelerators for radiotherapy (MR-Linacs) are currently under development. MRI is noninvasive and nonionizing and can produce images with high soft tissue contrast. However, new tracking methods are required to obtain fast real-time spatial target localization. This study develops and evaluates a method for tracking three-dimensional (3D) respiratory liver motion in two-dimensional (2D) real-time MRI image series with high temporal and spatial resolution. Methods: The proposed method for 3D tracking in 2D real-time MRI series has three steps: (1) Recording of a 3D MRI scan and selection of a blood vessel (or tumor) structure to be tracked in subsequent 2D MRI series. (2) Generation of a library of 2D image templates oriented parallel to the 2D MRI image series by reslicing and resampling the 3D MRI scan. (3) 3D tracking of the selected structure in each real-time 2D image by finding the template and template position that yield the highest normalized cross correlation coefficient with the image. Since the tracked structure has a known 3D position relative to each template, the selection and 2D localization of a specific template translates into quantification of both the through-plane and in-plane position of the structure. As a proof of principle, 3D tracking of liver blood vessel structures was performed in five healthy volunteers in two 5.4 Hz axial, sagittal, and coronal real-time 2D MRI series of 30 s duration. In each 2D MRI series, the 3D localization was carried out twice, using nonoverlapping template libraries, which resulted in a total of 12 estimated 3D trajectories per volunteer. Validation tests carried out to support the tracking algorithm included quantification of the breathing induced 3D liver motion and liver motion directionality for the volunteers, and comparison of 2D MRI estimated positions of a structure in a watermelon with the actual positions. Results: Axial, sagittal, and coronal 2D MRI series yielded 3D respiratory motion curves for all volunteers. The motion directionality and amplitude were very similar when measured directly as in-plane motion or estimated indirectly as through-plane motion. The mean peak-to-peak breathing amplitude was 1.6 mm (left-right), 11.0 mm (craniocaudal), and 2.5 mm (anterior-posterior). The position of the watermelon structure was estimated in 2D MRI images with a root-mean-square error of 0.52 mm (in-plane) and 0.87 mm (through-plane). Conclusions: A method for 3D tracking in 2D MRI series was developed and demonstrated for liver tracking in volunteers. The method would allow real-time 3D localization with integrated MR-Linac systems.
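    A brief sketch of the matching step only: selecting the template and in-plane position with the highest normalized cross-correlation against the current 2D image. Building the template library from the 3D scan is not reproduced here; image sizes are illustrative.

```python
# Exhaustive normalized cross-correlation (NCC) template matching over a 2D image.
import numpy as np

def ncc(a, b):
    a, b = a - a.mean(), b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def best_match(image, templates):
    """templates: list of equally sized 2D arrays; returns (index, row, col, score)."""
    best = (-1, 0, 0, -np.inf)
    th, tw = templates[0].shape
    for k, t in enumerate(templates):
        for r in range(image.shape[0] - th + 1):
            for c in range(image.shape[1] - tw + 1):
                s = ncc(image[r:r + th, c:c + tw], t)
                if s > best[3]:
                    best = (k, r, c, s)
    return best

rng = np.random.default_rng(11)
img = rng.normal(size=(64, 64))
tpl = img[20:36, 30:46].copy()       # embedded patch as a test template
print(best_match(img, [tpl]))        # -> (0, 20, 30, ~1.0)
```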

  5. Two-dimensional NMR spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farrar, T.C.

    1987-06-01

    This article is the second in a two-part series. In part one (ANALYTICAL CHEMISTRY, May 15) the authors discussed one-dimensional nuclear magnetic resonance (NMR) spectra and some relatively advanced nuclear spin gymnastics experiments that provide a capability for selective sensitivity enhancements. In this article, an overview and some applications of two-dimensional NMR experiments are presented. These powerful experiments are important complements to the one-dimensional experiments. As in the more sophisticated one-dimensional experiments, the two-dimensional experiments involve three distinct time periods: a preparation period, t₀; an evolution period, t₁; and a detection period, t₂.

  6. Generalized Hurst exponent and multifractal function of original and translated texts mapped into frequency and length time series

    NASA Astrophysics Data System (ADS)

    Ausloos, M.

    2012-09-01

    A nonlinear dynamics approach can be used in order to quantify complexity in written texts. As a first step, a one-dimensional system is examined: two written texts by one author (Lewis Carroll), together with one translation into an artificial language (i.e., Esperanto), are mapped into time series. Their corresponding shuffled versions are used for obtaining a baseline. Two different one-dimensional time series are used here: one based on word lengths (LTS), the other on word frequencies (FTS). It is shown that the generalized Hurst exponent h(q) and the derived f(α) curves of the original and translated texts show marked differences. The original texts are far from giving a parabolic f(α) function, in contrast to the shuffled texts. Moreover, the Esperanto text has more extreme values. This suggests cascade-model-like, multiscale time-asymmetric features in the finally written texts. A discussion of the difference and complementarity of mapping into a LTS or FTS is presented. The FTS f(α) curves are more open than the LTS ones.

  7. Generalized Hurst exponent and multifractal function of original and translated texts mapped into frequency and length time series.

    PubMed

    Ausloos, M

    2012-09-01

    A nonlinear dynamics approach can be used in order to quantify complexity in written texts. As a first step, a one-dimensional system is examined: two written texts by one author (Lewis Carroll), together with one translation into an artificial language (i.e., Esperanto), are mapped into time series. Their corresponding shuffled versions are used for obtaining a baseline. Two different one-dimensional time series are used here: one based on word lengths (LTS), the other on word frequencies (FTS). It is shown that the generalized Hurst exponent h(q) and the derived f(α) curves of the original and translated texts show marked differences. The original texts are far from giving a parabolic f(α) function, in contrast to the shuffled texts. Moreover, the Esperanto text has more extreme values. This suggests that finally written texts have a cascade-model-like structure with multiscale, time-asymmetric features. A discussion of the difference and complementarity of mapping into an LTS or FTS is presented. The FTS f(α) curves are more open than the LTS ones.

  8. An M-estimator for reduced-rank system identification.

    PubMed

    Chen, Shaojie; Liu, Kai; Yang, Yuguang; Xu, Yuting; Lee, Seonjoo; Lindquist, Martin; Caffo, Brian S; Vogelstein, Joshua T

    2017-01-15

    High-dimensional time-series data from a wide variety of domains, such as neuroscience, are being generated every day. Fitting statistical models to such data, to enable parameter estimation and time-series prediction, is an important computational primitive. Existing methods, however, are unable to cope with the high-dimensional nature of these data, due to both computational and statistical reasons. We mitigate both kinds of issues by proposing an M-estimator for Reduced-rank System IDentification (MR. SID). A combination of low-rank approximations, ℓ1 and ℓ2 penalties, and some numerical linear algebra tricks, yields an estimator that is computationally efficient and numerically stable. Simulations and real data examples demonstrate the usefulness of this approach in a variety of problems. In particular, we demonstrate that MR. SID can accurately estimate spatial filters, connectivity graphs, and time-courses from native resolution functional magnetic resonance imaging data. MR. SID therefore enables big time-series data to be analyzed using standard methods, readying the field for further generalizations including non-linear and non-Gaussian state-space models.

  9. An M-estimator for reduced-rank system identification

    PubMed Central

    Chen, Shaojie; Liu, Kai; Yang, Yuguang; Xu, Yuting; Lee, Seonjoo; Lindquist, Martin; Caffo, Brian S.; Vogelstein, Joshua T.

    2018-01-01

    High-dimensional time-series data from a wide variety of domains, such as neuroscience, are being generated every day. Fitting statistical models to such data, to enable parameter estimation and time-series prediction, is an important computational primitive. Existing methods, however, are unable to cope with the high-dimensional nature of these data, due to both computational and statistical reasons. We mitigate both kinds of issues by proposing an M-estimator for Reduced-rank System IDentification (MR. SID). A combination of low-rank approximations, ℓ1 and ℓ2 penalties, and some numerical linear algebra tricks, yields an estimator that is computationally efficient and numerically stable. Simulations and real data examples demonstrate the usefulness of this approach in a variety of problems. In particular, we demonstrate that MR. SID can accurately estimate spatial filters, connectivity graphs, and time-courses from native resolution functional magnetic resonance imaging data. MR. SID therefore enables big time-series data to be analyzed using standard methods, readying the field for further generalizations including non-linear and non-Gaussian state-space models. PMID:29391659

  10. Detrending moving average algorithm for multifractals

    NASA Astrophysics Data System (ADS)

    Gu, Gao-Feng; Zhou, Wei-Xing

    2010-07-01

    The detrending moving average (DMA) algorithm is a widely used technique to quantify the long-term correlations of nonstationary time series and the long-range correlations of fractal surfaces, which contains a parameter θ determining the position of the detrending window. We develop multifractal detrending moving average (MFDMA) algorithms for the analysis of one-dimensional multifractal measures and higher-dimensional multifractals, which is a generalization of the DMA method. The performance of the one-dimensional and two-dimensional MFDMA methods is investigated using synthetic multifractal measures with analytical solutions for backward (θ=0), centered (θ=0.5), and forward (θ=1) detrending windows. We find that the estimated multifractal scaling exponent τ(q) and the singularity spectrum f(α) are in good agreement with the theoretical values. In addition, the backward MFDMA method has the best performance, providing the most accurate estimates of the scaling exponents with the lowest error bars, while the centered MFDMA method has the worst performance. The backward MFDMA algorithm is also found to outperform multifractal detrended fluctuation analysis. The one-dimensional backward MFDMA method is applied to the time series of the Shanghai Stock Exchange Composite Index, and its multifractal nature is confirmed.
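    A minimal Python sketch of the detrending step with the window-position parameter θ is given below. It computes a q-th order fluctuation for a single window size from the residuals between the cumulative profile and its shifted moving average; the full MFDMA additionally partitions the residuals into segments and scans many window sizes, which is omitted here.

      import numpy as np

      def dma_fluctuation(x, window, theta=0.0, qs=(2,)):
          """Detrending-moving-average fluctuation for one window size.

          theta = 0 (backward), 0.5 (centered), or 1 (forward) shifts the moving
          average relative to the point being detrended. Simplified sketch: the
          moments are taken over all residuals rather than segment by segment.
          """
          y = np.cumsum(np.asarray(x, dtype=float) - np.mean(x))   # profile
          n = window
          kernel = np.ones(n) / n
          ma_full = np.convolve(y, kernel, mode="valid")           # length len(y)-n+1
          shift = int(round((n - 1) * theta))
          # residuals between the profile and its theta-shifted moving average
          resid = y[n - 1 - shift: len(y) - shift] - ma_full
          return {q: np.mean(np.abs(resid) ** q) ** (1.0 / q) for q in qs}

      # Example: backward (theta=0) fluctuation of a random walk at window 32.
      rw = np.cumsum(np.random.default_rng(1).standard_normal(4096))
      print(dma_fluctuation(rw, window=32, theta=0.0, qs=(1, 2, 4)))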

  11. Control and synchronisation of a novel seven-dimensional hyperchaotic system with active control

    NASA Astrophysics Data System (ADS)

    Varan, Metin; Akgul, Akif

    2018-04-01

    In this work, an active control method is proposed for controlling and synchronising seven-dimensional (7D) hyperchaotic systems. A 7D hyperchaotic system is considered for the implementation and is investigated via time series, phase portraits and bifurcation diagrams. To understand the impact of the active controllers on the global asymptotic stability of the synchronisation and control errors, a Lyapunov function is used. Numerical analysis is performed to reveal the effectiveness of the applied active control method, and the results are discussed.

  12. Characterizing Time Series Data Diversity for Wind Forecasting: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hodge, Brian S; Chartan, Erol Kevin; Feng, Cong

    Wind forecasting plays an important role in integrating variable and uncertain wind power into the power grid. Various forecasting models have been developed to improve the forecasting accuracy. However, it is challenging to accurately compare the true forecasting performances from different methods and forecasters due to the lack of diversity in forecasting test datasets. This paper proposes a time series characteristic analysis approach to visualize and quantify wind time series diversity. The developed method first calculates six time series characteristic indices from various perspectives. Then principal component analysis is performed to reduce the data dimension while preserving the important information. The diversity of the time series dataset is visualized by the geometric distribution of the newly constructed principal component space. The volume of the 3-dimensional (3D) convex polytope (or the length of the 1D number axis, or the area of the 2D convex polygon) is used to quantify the time series data diversity. The method is tested with five datasets with various degrees of diversity.

  13. Large-scale Granger causality analysis on resting-state functional MRI

    NASA Astrophysics Data System (ADS)

    D'Souza, Adora M.; Abidin, Anas Zainul; Leistritz, Lutz; Wismüller, Axel

    2016-03-01

    We demonstrate an approach to measure the information flow between each pair of time series in resting-state functional MRI (fMRI) data of the human brain and subsequently recover its underlying network structure. By integrating dimensionality reduction into predictive time series modeling, the large-scale Granger Causality (lsGC) analysis method can reveal directed information flow suggestive of causal influence at an individual voxel level, unlike other multivariate approaches. This method quantifies the influence each voxel time series has on every other voxel time series in a multivariate sense and hence contains information about the underlying dynamics of the whole system, which can be used to reveal functionally connected networks within the brain. To identify such networks, we perform non-metric network clustering, such as that accomplished by the Louvain method. We demonstrate the effectiveness of our approach by recovering the motor and visual cortex from resting-state human brain fMRI data and compare it with the network recovered from a visuomotor stimulation experiment, where the similarity is measured by the Dice Coefficient (DC). The best DC obtained was 0.59, implying a strong agreement between the two networks. In addition, we thoroughly study the effect of dimensionality reduction in lsGC analysis on network recovery. We conclude that our approach is capable of detecting causal influence between time series in a multivariate sense, which can be used to segment functionally connected networks in resting-state fMRI.
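    For readers unfamiliar with the underlying quantity, the sketch below shows a plain bivariate Granger-causality F-statistic in Python/NumPy. It is only the textbook construction (restricted vs. unrestricted linear models), not the lsGC method itself, which adds dimensionality reduction before the prediction step; zero-mean series and a fixed lag order are assumed.

      import numpy as np

      def granger_f_statistic(x, y, order=2):
          """F-statistic for 'y Granger-causes x' with lag-`order` linear models.

          Compares the residual sum of squares of an AR model of x (restricted)
          with a model that also uses lagged y (unrestricted). Larger F suggests
          information flow from y to x. No intercept: zero-mean series assumed.
          """
          x, y = np.asarray(x, float), np.asarray(y, float)
          rows = range(order, len(x))
          X_r = np.array([x[t - order:t][::-1] for t in rows])
          X_u = np.array([np.concatenate((x[t - order:t][::-1],
                                          y[t - order:t][::-1])) for t in rows])
          target = x[order:]
          rss_r = np.sum((target - X_r @ np.linalg.lstsq(X_r, target, rcond=None)[0]) ** 2)
          rss_u = np.sum((target - X_u @ np.linalg.lstsq(X_u, target, rcond=None)[0]) ** 2)
          df_num, df_den = order, len(target) - 2 * order
          return ((rss_r - rss_u) / df_num) / (rss_u / df_den)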

  14. Homotopy decomposition method for solving one-dimensional time-fractional diffusion equation

    NASA Astrophysics Data System (ADS)

    Abuasad, Salah; Hashim, Ishak

    2018-04-01

    In this paper, we present the homotopy decomposition method with a modified definition of the beta fractional derivative, for the first time, to find the exact solution of the one-dimensional time-fractional diffusion equation. In this method, the solution takes the form of a convergent series with easily computable terms. The exact solution obtained by the proposed method is compared with the exact solution obtained by using the fractional variational homotopy perturbation iteration method via a modified Riemann-Liouville derivative.

  15. Network structure of multivariate time series.

    PubMed

    Lacasa, Lucas; Nicosia, Vincenzo; Latora, Vito

    2015-10-21

    Our understanding of a variety of phenomena in physics, biology and economics crucially depends on the analysis of multivariate time series. While a wide range of tools and techniques for time series analysis already exists, the increasing availability of massive data structures calls for new approaches for multidimensional signal processing. We present here a non-parametric method to analyse multivariate time series, based on the mapping of a multidimensional time series into a multilayer network, which allows information on a high-dimensional dynamical system to be extracted through the analysis of the structure of the associated multiplex network. The method is simple to implement, general, scalable, does not require ad hoc phase space partitioning, and is thus suitable for the analysis of large, heterogeneous and non-stationary time series. We show that simple structural descriptors of the associated multiplex networks allow us to extract and quantify nontrivial properties of coupled chaotic maps, including the transition between different dynamical phases and the onset of various types of synchronization. As a concrete example we then study financial time series, showing that a multiplex network analysis can efficiently discriminate crises from periods of financial stability, where standard methods based on time-series symbolization often fail.

  16. Aggregate Measures of Watershed Health from Reconstructed ...

    EPA Pesticide Factsheets

    Risk-based indices such as reliability, resilience, and vulnerability (R-R-V) have the potential to serve as watershed health assessment tools. Recent research has demonstrated the applicability of such indices for water quality (WQ) constituents such as total suspended solids and nutrients on an individual basis. However, the calculations can become tedious when time-series data for several WQ constituents have to be evaluated individually. Also, comparisons between locations with different sets of constituent data can prove difficult. In this study, data reconstruction using a relevance vector machine algorithm was combined with dimensionality reduction via variational Bayesian noisy principal component analysis to reconstruct and condense sparse multidimensional WQ data sets into a single time series. The methodology allows incorporation of uncertainty in both the reconstruction and dimensionality-reduction steps. The R-R-V values were calculated using the aggregate time series at multiple locations within two Indiana watersheds, which serve as motivating examples. Results showed that uncertainty present in the reconstructed WQ data set propagates to the aggregate time series and subsequently to the aggregate R-R-V values as well. Locations with different WQ constituents and different standards for impairment were successfully combined to provide aggregate measures of R-R-V values. Comparisons with individual constituent R-R-V values showed that v

  17. Study of chaos in chaotic satellite systems

    NASA Astrophysics Data System (ADS)

    Khan, Ayub; Kumar, Sanjay

    2018-01-01

    In this paper, we study the qualitative behaviour of satellite systems using bifurcation diagrams, Poincaré sections, Lyapunov exponents, dissipation, equilibrium points, the Kaplan-Yorke dimension, etc. Bifurcation diagrams with respect to the known parameters of satellite systems are analysed. Poincaré sections with different sowing axes of the satellite are drawn. Eigenvalues of the Jacobian matrices for the satellite system at different equilibrium points are calculated to justify the unstable regions. Lyapunov exponents are estimated. From these studies, chaos in the satellite system has been established. Solutions of the equations of motion of the satellite system are drawn in the form of three-dimensional, two-dimensional and time series phase portraits. The phase portraits and time series display the chaotic nature of the considered system.

  18. A novel water quality data analysis framework based on time-series data mining.

    PubMed

    Deng, Weihui; Wang, Guoyin

    2017-07-01

    The rapid development of time-series data mining provides an emerging method for water resource management research. In this paper, based on the time-series data mining methodology, we propose a novel and general analysis framework for water quality time-series data. It consists of two parts: implementation components and common tasks of time-series data mining in water quality data. In the first part, we propose to granulate the time series into several two-dimensional normal clouds and calculate the similarities at the granulated level. On the basis of the similarity matrix, the similarity search, anomaly detection, and pattern discovery tasks in the water quality time-series instance dataset can be easily implemented in the second part. We present a case study of this analysis framework on weekly Dissolved Oxygen (DO) time-series data collected from five monitoring stations on the upper reaches of the Yangtze River, China. It discovered the relationship of water quality between the mainstream and a tributary, as well as the main changing patterns of DO. The experimental results show that the proposed analysis framework is a feasible and efficient method to mine hidden and valuable knowledge from historical water quality time-series data. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. On power series representing solutions of the one-dimensional time-independent Schrödinger equation

    NASA Astrophysics Data System (ADS)

    Trotsenko, N. P.

    2017-06-01

    For the equation χ″(x) = u(x)χ(x) with infinitely smooth u(x), the general solution χ(x) is found in the form of a power series. The coefficients of the series are expressed via all derivatives u^(m)(y) of the function u(x) at a fixed point y. Examples of solutions for particular functions u(x) are considered.

  20. The Effect of Three-Dimensional Freestream Disturbances on the Supersonic Flow Past a Wedge

    NASA Technical Reports Server (NTRS)

    Duck, Peter W.; Lasseigne, D. Glenn; Hussaini, M. Y.

    1997-01-01

    The interaction between a shock wave (attached to a wedge) and small-amplitude, three-dimensional disturbances of a uniform, supersonic, freestream flow is investigated. The paper extends the two-dimensional study of Duck et al. through the use of vector potentials, which render the problem tractable by the same techniques as in the two-dimensional case, in particular by expansion of the solution by means of a Fourier-Bessel series in appropriately chosen coordinates. Results are presented for specific classes of freestream disturbances, and the study shows conclusively that the shock is stable to all classes of disturbances (i.e. time-periodic perturbations to the shock do not grow downstream), provided the flow downstream of the shock is supersonic (loosely corresponding to the weak shock solution). This is shown from our numerical results and also by asymptotic analysis of the Fourier-Bessel series, valid far downstream of the shock.

  1. A convergent series expansion for hyperbolic systems of conservation laws

    NASA Technical Reports Server (NTRS)

    Harabetian, E.

    1985-01-01

    The discontinuous, piecewise-analytic initial value problem is considered for a wide class of conservation laws that includes the full three-dimensional Euler equations. The initial interaction at an arbitrary curved surface is resolved in time by a convergent series. Among other features the solution exhibits shock, contact, and expansion waves as well as sound waves propagating on characteristic surfaces. The expansion waves correspond to the one-dimensional rarefactions but have a more complicated structure. The sound waves are generated in place of zero-strength shocks, and they are caused by mismatches in derivatives.

  2. In-Situ Three-Dimensional Shape Rendering from Strain Values Obtained Through Optical Fiber Sensors

    NASA Technical Reports Server (NTRS)

    Chan, Hon Man (Inventor); Parker, Jr., Allen R. (Inventor)

    2015-01-01

    A method and system for rendering the shape of a multi-core optical fiber or multi-fiber bundle in three-dimensional space in real time based on measured fiber strain data. Three optical fiber cores are arranged in parallel at 120 degree intervals about a central axis. A series of longitudinally co-located strain sensor triplets, typically fiber Bragg gratings, are positioned along the length of each fiber at known intervals. A tunable laser interrogates the sensors to detect strain on the fiber cores. Software determines the strain magnitude (ΔL/L) for each fiber at a given triplet, then applies beam theory to calculate the curvature, bending angle and torsion of the fiber bundle, and from there determines the shape of the fiber in a Cartesian coordinate system by solving a series of ordinary differential equations expanded from the Frenet-Serret equations. This approach eliminates the need for computationally time-intensive curve fitting and allows the three-dimensional shape of the optical fiber assembly to be displayed in real time.

  3. Detecting unstable periodic orbits in chaotic time series using synchronization

    NASA Astrophysics Data System (ADS)

    Olyaei, Ali Azimi; Wu, Christine; Kinsner, Witold

    2017-07-01

    An alternative approach for detecting unstable periodic orbits in chaotic time series is proposed using synchronization techniques. A master-slave synchronization scheme is developed, in which the chaotic system drives a system of harmonic oscillators through a proper coupling condition. The proposed scheme is designed so that the power of the coupling signal exhibits notches that drop to zero once the system approaches an unstable orbit, yielding an explicit indication of the presence of a periodic motion. The results show that the proposed approach is particularly suitable in practical situations, where the time series is short and noisy, or is obtained from high-dimensional chaotic systems.

  4. Algorithm for Compressing Time-Series Data

    NASA Technical Reports Server (NTRS)

    Hawkins, S. Edward, III; Darlington, Edward Hugo

    2012-01-01

    An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
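    A hedged sketch of the block-wise Chebyshev idea is shown below using NumPy's Chebyshev utilities. Note that chebfit performs a least-squares fit rather than the near-minimax fit described above, so it stands in for the real algorithm only to illustrate the compression/reconstruction round trip; the block length and degree are arbitrary example values.

      import numpy as np
      from numpy.polynomial import chebyshev as C

      def compress_block(block, degree):
          """Fit a Chebyshev series of the given degree to one block of samples.

          The compressed form of the block is just the degree+1 coefficients;
          samples are assumed uniformly spaced over the fitting interval.
          """
          t = np.linspace(-1.0, 1.0, len(block))   # map fitting interval to [-1, 1]
          return C.chebfit(t, block, degree)

      def decompress_block(coeffs, n_samples):
          """Reconstruct an approximation of the original block from coefficients."""
          t = np.linspace(-1.0, 1.0, n_samples)
          return C.chebval(t, coeffs)

      # Example: a 64-sample block stored as 8 coefficients (8x compression).
      rng = np.random.default_rng(1)
      block = np.sin(np.linspace(0, 3, 64)) + 0.01 * rng.standard_normal(64)
      coeffs = compress_block(block, degree=7)
      recon = decompress_block(coeffs, 64)
      print("max abs error:", np.abs(block - recon).max())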

  5. Aggregated Indexing of Biomedical Time Series Data

    PubMed Central

    Woodbridge, Jonathan; Mortazavi, Bobak; Sarrafzadeh, Majid; Bui, Alex A.T.

    2016-01-01

    Remote and wearable medical sensing has the potential to create very large and high dimensional datasets. Medical time series databases must be able to efficiently store, index, and mine these datasets to enable medical professionals to effectively analyze data collected from their patients. Conventional high dimensional indexing methods are a two stage process. First, a superset of the true matches is efficiently extracted from the database. Second, supersets are pruned by comparing each of their objects to the query object and rejecting any objects falling outside a predetermined radius. This pruning stage heavily dominates the computational complexity of most conventional search algorithms. Therefore, indexing algorithms can be significantly improved by reducing the amount of pruning. This paper presents an online algorithm to aggregate biomedical time series data to significantly reduce the search space (index size) without compromising the quality of search results. The algorithm is built on the observation that biomedical time series signals are composed of cyclical and often similar patterns. It takes in a stream of segments and groups them into highly concentrated collections. Locality Sensitive Hashing (LSH) is used to reduce the overall complexity of the algorithm, allowing it to run online. The output of this aggregation is used to populate an index. The proposed algorithm yields logarithmic growth of the index (with respect to the total number of objects) while keeping sensitivity and specificity simultaneously above 98%. Both memory and runtime complexities of time series search are improved when using aggregated indexes. In addition, data mining tasks, such as clustering, exhibit runtimes that are orders of magnitude faster when run on aggregated indexes. PMID:27617298

  6. Time Series Analysis of the Bacillus subtilis Sporulation Network Reveals Low Dimensional Chaotic Dynamics.

    PubMed

    Lecca, Paola; Mura, Ivan; Re, Angela; Barker, Gary C; Ihekwaba, Adaoha E C

    2016-01-01

    Chaotic behavior refers to a behavior which, albeit irregular, is generated by an underlying deterministic process. Therefore, a chaotic behavior is potentially controllable. This possibility becomes practically amenable especially when chaos is shown to be low-dimensional, i.e., to be attributable to a small fraction of the total system's components. In this case, indeed, including the major drivers of chaos in a system into the modeling approach allows us to improve predictability of the system's dynamics. Here, we analyzed the numerical simulations of an accurate ordinary differential equation model of the gene network regulating sporulation initiation in Bacillus subtilis to explore whether the non-linearity underlying time series data is due to low-dimensional chaos. Low-dimensional chaos is expectedly common in systems with few degrees of freedom, but rare in systems with many degrees of freedom such as the B. subtilis sporulation network. The estimation of a number of indices, which reflect the chaotic nature of a system, indicates that the dynamics of this network is affected by deterministic chaos. The neat separation between the indices obtained from the time series simulated from the model and those obtained from time series generated by Gaussian white and colored noise confirmed that the B. subtilis sporulation network dynamics is affected by low-dimensional chaos rather than by noise. Furthermore, our analysis identifies the principal driver of the network's chaotic dynamics to be sporulation initiation phosphotransferase B (Spo0B). We then analyzed the parameters and the phase space of the system to characterize the instability points of the network dynamics, and, in turn, to identify the ranges of values of Spo0B and of the other drivers of the chaotic dynamics for which the whole system is highly sensitive to minimal perturbation. In summary, we described an unappreciated source of complexity in the B. subtilis sporulation network by gathering evidence for the chaotic behavior of the system, and by suggesting candidate molecules driving chaos in the system. The results of our chaos analysis can increase our understanding of the intricacies of the regulatory network under analysis, and suggest experimental work to refine our understanding of the mechanisms underlying B. subtilis sporulation initiation control.

  7. Generalizing DTW to the multi-dimensional case requires an adaptive approach

    PubMed Central

    Hu, Bing; Jin, Hongxia; Wang, Jun; Keogh, Eamonn

    2017-01-01

    In recent years Dynamic Time Warping (DTW) has emerged as the distance measure of choice for virtually all time series data mining applications. For example, virtually all applications that process data from wearable devices use DTW as a core sub-routine. This is the result of significant progress in improving DTW’s efficiency, together with multiple empirical studies showing that DTW-based classifiers at least equal (and generally surpass) the accuracy of all their rivals across dozens of datasets. Thus far, most of the research has considered only the one-dimensional case, with practitioners generalizing to the multi-dimensional case in one of two ways, dependent or independent warping. In general, it appears the community believes either that the two ways are equivalent, or that the choice is irrelevant. In this work, we show that this is not the case. The two most commonly used multi-dimensional DTW methods can produce different classifications, and neither one dominates over the other. This seems to suggest that one should learn the best method for a particular application. However, we will show that this is not necessary; a simple, principled rule can be used on a case-by-case basis to predict which of the two methods we should trust at the time of classification. Our method allows us to ensure that classification results are at least as accurate as the better of the two rival methods, and, in many cases, our method is significantly more accurate. We demonstrate our ideas with the most extensive set of multi-dimensional time series classification experiments ever attempted. PMID:29104448
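    The dependent/independent distinction can be made concrete with a small sketch. The Python below implements a basic DTW recursion and the two common multi-dimensional variants: DTW_D shares one warping path across dimensions, while DTW_I warps each dimension separately and sums the distances. This is a didactic illustration, not the adaptive selection rule proposed in the paper.

      import numpy as np

      def dtw(a, b, dist):
          """Classic O(len(a)*len(b)) dynamic-programming DTW with step cost `dist`."""
          n, m = len(a), len(b)
          D = np.full((n + 1, m + 1), np.inf)
          D[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  D[i, j] = dist(a[i - 1], b[j - 1]) + min(D[i - 1, j],
                                                           D[i, j - 1],
                                                           D[i - 1, j - 1])
          return D[n, m]

      def dtw_dependent(A, B):
          """DTW_D: one warping path shared by all dimensions (A, B are TxD arrays)."""
          return dtw(A, B, dist=lambda u, v: np.sum((u - v) ** 2))

      def dtw_independent(A, B):
          """DTW_I: each dimension warped separately, per-dimension distances summed."""
          return sum(dtw(A[:, d], B[:, d], dist=lambda u, v: (u - v) ** 2)
                     for d in range(A.shape[1]))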

  8. Interpretable Categorization of Heterogeneous Time Series Data

    NASA Technical Reports Server (NTRS)

    Lee, Ritchie; Kochenderfer, Mykel J.; Mengshoel, Ole J.; Silbermann, Joshua

    2017-01-01

    We analyze data from simulated aircraft encounters to validate and inform the development of a prototype aircraft collision avoidance system. The high-dimensional and heterogeneous time series dataset is analyzed to discover properties of near mid-air collisions (NMACs) and categorize the NMAC encounters. Domain experts use these properties to better organize and understand NMAC occurrences. Existing solutions either are not capable of handling high-dimensional and heterogeneous time series datasets or do not provide explanations that are interpretable by a domain expert. The latter is critical to the acceptance and deployment of safety-critical systems. To address this gap, we propose grammar-based decision trees along with a learning algorithm. Our approach extends decision trees with a grammar framework for classifying heterogeneous time series data. A context-free grammar is used to derive decision expressions that are interpretable, application-specific, and support heterogeneous data types. In addition to classification, we show how grammar-based decision trees can also be used for categorization, which is a combination of clustering and generating interpretable explanations for each cluster. We apply grammar-based decision trees to a simulated aircraft encounter dataset and evaluate the performance of four variants of our learning algorithm. The best algorithm is used to analyze and categorize near mid-air collisions in the aircraft encounter dataset. We describe each discovered category in detail and discuss its relevance to aircraft collision avoidance.

  9. Deconvolution of mixing time series on a graph

    PubMed Central

    Blocker, Alexander W.; Airoldi, Edoardo M.

    2013-01-01

    In many applications we are interested in making inference on latent time series from indirect measurements, which are often low-dimensional projections resulting from mixing or aggregation. Positron emission tomography, super-resolution, and network traffic monitoring are some examples. Inference in such settings requires solving a sequence of ill-posed inverse problems, y_t = A x_t, where the projection mechanism provides information on A. We consider problems in which A specifies mixing on a graph of time series that are bursty and sparse. We develop a multilevel state-space model for mixing time series and an efficient approach to inference. A simple model is used to calibrate regularization parameters that lead to efficient inference in the multilevel state-space model. We apply this method to the problem of estimating point-to-point traffic flows on a network from aggregate measurements. Our solution outperforms existing methods for this problem, and our two-stage approach suggests an efficient inference strategy for multilevel models of multivariate time series. PMID:25309135

  10. A Method for Comparing Multivariate Time Series with Different Dimensions

    PubMed Central

    Tapinos, Avraam; Mendes, Pedro

    2013-01-01

    In many situations it is desirable to compare dynamical systems based on their behavior. Similarity of behavior often implies similarity of internal mechanisms or dependency on common extrinsic factors. While there are widely used methods for comparing univariate time series, most dynamical systems are characterized by multivariate time series. Yet, comparison of multivariate time series has been limited to cases where they share a common dimensionality. A semi-metric is a distance function that has the properties of non-negativity, symmetry and reflexivity, but not sub-additivity. Here we develop a semi-metric – SMETS – that can be used for comparing groups of time series that may have different dimensions. To demonstrate its utility, the method is applied to dynamic models of biochemical networks and to portfolios of shares. The former is an example of a case where the dependencies between system variables are known, while in the latter the system is treated (and behaves) as a black box. PMID:23393554

  11. Chaotic behaviour of the short-term variations in ozone column observed in Arctic

    NASA Astrophysics Data System (ADS)

    Petkov, Boyan H.; Vitale, Vito; Mazzola, Mauro; Lanconelli, Christian; Lupi, Angelo

    2015-09-01

    The diurnal variations observed in the ozone column at Ny-Ålesund, Svalbard during different periods of 2009, 2010 and 2011 have been examined to test the hypothesis that they could be the result of a chaotic process. It was found that each of the attractors, reconstructed by applying the time delay technique and corresponding to any of the three time series, can be embedded in a 6-dimensional space. Recurrence plots, depicted to characterise the attractor features, revealed structures typical of a chaotic system. In addition, the two positive Lyapunov exponents found for the three attractors, the fractal Hausdorff dimension given by the Kaplan-Yorke estimator, and the feasibility of predicting the short-term ozone column variations within 10-20 h from the past behaviour make the assumption about their chaotic character more realistic. The similarities of the estimated parameters in all three cases allow us to hypothesise that the three time series under study likely present one-dimensional projections of the same chaotic system taken at different time intervals.
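    The attractor reconstruction mentioned here relies on standard time-delay embedding. A minimal sketch, assuming a uniformly sampled scalar series and example values for the delay and the embedding dimension (6, as found for the ozone series), is given below.

      import numpy as np

      def delay_embed(x, dim, tau):
          """Reconstruct a `dim`-dimensional attractor from a scalar series.

          Row k of the returned array is the delay vector
          (x[k], x[k+tau], ..., x[k+(dim-1)*tau]).
          """
          x = np.asarray(x, dtype=float)
          n = len(x) - (dim - 1) * tau
          return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

      # Example with a synthetic stand-in for an ozone-column series.
      series = np.sin(np.linspace(0, 60, 3000)) \
               + 0.1 * np.random.default_rng(2).standard_normal(3000)
      attractor = delay_embed(series, dim=6, tau=10)
      print(attractor.shape)   # (2950, 6)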

  12. Highlights from the previous volumes

    NASA Astrophysics Data System (ADS)

    Tong, Liu; al., Hadjihoseini Ali et; Jörg David, J.; al., Gao Zhong-Ke et; et al.

    2018-01-01

    Superconductivity at 7.3 K in quasi-one-dimensional RbCr3As3; Rogue waves as negative entropy events durations; Biological rhythms: what sets their amplitude?; Reconstructing multi-mode networks from multivariate time series

  13. Application of information-retrieval methods to the classification of physical data

    NASA Technical Reports Server (NTRS)

    Mamotko, Z. N.; Khorolskaya, S. K.; Shatrovskiy, L. I.

    1975-01-01

    Scientific data received from satellites are characterized as a multi-dimensional time series, whose terms are vector functions of a vector of measurement conditions. Information retrieval methods are used to construct lower dimensional samples on the basis of the condition vector, in order to obtain these data and to construct partial relations. The methods are applied to the joint Soviet-French Arkad project.

  14. Robust extrema features for time-series data analysis.

    PubMed

    Vemulapalli, Pramod K; Monga, Vishal; Brennan, Sean N

    2013-06-01

    The extraction of robust features for comparing and analyzing time series is a fundamentally important problem. Research efforts in this area encompass dimensionality reduction using popular signal analysis tools such as the discrete Fourier and wavelet transforms, various distance metrics, and the extraction of interest points from time series. Recently, extrema features for the analysis of time-series data have assumed increasing significance because of their natural robustness under a variety of practical distortions, their economy of representation, and their computational benefits. Invariably, the process of encoding extrema features is preceded by filtering of the time series with an intuitively motivated filter (e.g., for smoothing), and subsequent thresholding to identify robust extrema. We define the properties of robustness, uniqueness, and cardinality as a means to identify the design choices available in each step of the feature generation process. Unlike existing methods, which utilize filters "inspired" by either domain knowledge or intuition, we explicitly optimize the filter based on training time series to maximize the robustness of the extracted extrema features. We demonstrate further that the underlying filter optimization problem reduces to an eigenvalue problem and has a tractable solution. An encoding technique that enhances control over cardinality and uniqueness is also presented. Experimental results obtained for the problem of time series subsequence matching establish the merits of the proposed algorithm.
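    The generic filter-then-threshold pipeline that precedes extrema encoding can be sketched as follows; this shows the conventional, intuitively motivated approach described above, not the optimized filter learned via the eigenvalue problem, and both the kernel and the threshold are placeholder choices.

      import numpy as np

      def robust_extrema(x, kernel, threshold):
          """Smooth a series with `kernel`, then keep well-separated local extrema.

          Returns indices of local maxima/minima of the filtered series whose
          height difference from both immediate neighbours exceeds `threshold`.
          """
          x = np.asarray(x, dtype=float)
          k = np.asarray(kernel, dtype=float)
          k = k / k.sum()
          s = np.convolve(x, k, mode="same")                 # smoothing step
          idx = []
          for i in range(1, len(s) - 1):
              is_extremum = (s[i] - s[i - 1]) * (s[i] - s[i + 1]) > 0
              if is_extremum and min(abs(s[i] - s[i - 1]),
                                     abs(s[i] - s[i + 1])) > threshold:
                  idx.append(i)
          return np.array(idx)

      # Example with a simple moving-average kernel and an arbitrary threshold.
      sig = np.sin(np.linspace(0, 20, 500)) + 0.05 * np.random.default_rng(3).standard_normal(500)
      print(robust_extrema(sig, kernel=np.ones(5), threshold=0.002)[:10])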

  15. Multiscale synchrony behaviors of paired financial time series by 3D multi-continuum percolation

    NASA Astrophysics Data System (ADS)

    Wang, M.; Wang, J.; Wang, B. T.

    2018-02-01

    Multiscale synchrony behaviors and nonlinear dynamics of paired financial time series are investigated, in an attempt to study the cross-correlation relationships between two stock markets. A random stock price model is developed from a new system called the three-dimensional (3D) multi-continuum percolation system, which is utilized to imitate the formation mechanism of price dynamics and explain the nonlinear behaviors found in financial time series. We assume that the price fluctuations are caused by the spread of investment information. A cluster of the 3D multi-continuum percolation represents a cluster of investors who share the same investment attitude. In this paper, we focus on the paired return series, the paired volatility series, and the paired intrinsic mode functions which are decomposed by empirical mode decomposition. A new cross recurrence quantification analysis is put forward, combined with multiscale cross-sample entropy, to investigate the multiscale synchrony of these paired series from the proposed model. The corresponding research is also carried out for two China stock markets as a comparison.

  16. Transition Icons for Time-Series Visualization and Exploratory Analysis.

    PubMed

    Nickerson, Paul V; Baharloo, Raheleh; Wanigatunga, Amal A; Manini, Todd M; Tighe, Patrick J; Rashidi, Parisa

    2018-03-01

    The modern healthcare landscape has seen the rapid emergence of techniques and devices that temporally monitor and record physiological signals. The prevalence of time-series data within the healthcare field necessitates the development of methods that can analyze the data in order to draw meaningful conclusions. Time-series behavior is notoriously difficult to intuitively understand due to its intrinsic high-dimensionality, which is compounded in the case of analyzing groups of time series collected from different patients. Our framework, which we call transition icons, renders common patterns in a visual format useful for understanding the shared behavior within groups of time series. Transition icons are adept at detecting and displaying subtle differences and similarities, e.g., between measurements taken from patients receiving different treatment strategies or stratified by demographics. We introduce various methods that collectively allow for exploratory analysis of groups of time series, while being free of distribution assumptions and including simple heuristics for parameter determination. Our technique extracts discrete transition patterns from symbolic aggregate approXimation representations, and compiles transition frequencies into a bag of patterns constructed for each group. These transition frequencies are normalized and aligned in icon form to intuitively display the underlying patterns. We demonstrate the transition icon technique for two time-series datasets: postoperative pain scores and hip-worn accelerometer activity counts. We believe transition icons can be an important tool for researchers approaching time-series data, as they give rich and intuitive information about collective time-series behaviors.
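    The two ingredients named above, a SAX-style symbolization and transition-frequency counting, can be sketched in a few lines of Python. The breakpoints, alphabet size, and segment count below are illustrative defaults, not the parameters used for the transition icons themselves.

      import numpy as np

      def sax_symbols(x, n_segments, breakpoints=(-0.67, 0.0, 0.67)):
          """Convert a series to SAX symbols: z-normalize, PAA, then quantize.

          The default breakpoints split a standard normal into 4 roughly
          equiprobable bins (alphabet size 4); they are hard-coded for brevity.
          """
          x = np.asarray(x, dtype=float)
          z = (x - x.mean()) / (x.std() + 1e-12)
          paa = np.array([seg.mean() for seg in np.array_split(z, n_segments)])
          return np.digitize(paa, breakpoints)               # integers 0..3

      def transition_frequencies(symbols, alphabet=4):
          """Normalized counts of consecutive symbol pairs (the 'transitions')."""
          counts = np.zeros((alphabet, alphabet))
          for a, b in zip(symbols[:-1], symbols[1:]):
              counts[a, b] += 1
          return counts / max(counts.sum(), 1)

      # Example: one patient's synthetic activity-count series.
      activity = np.abs(np.random.default_rng(4).standard_normal(1440)).cumsum() % 50
      print(transition_frequencies(sax_symbols(activity, n_segments=96)))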

  17. The PEPR GeneChip data warehouse, and implementation of a dynamic time series query tool (SGQT) with graphical interface.

    PubMed

    Chen, Josephine; Zhao, Po; Massaro, Donald; Clerch, Linda B; Almon, Richard R; DuBois, Debra C; Jusko, William J; Hoffman, Eric P

    2004-01-01

    Publicly accessible DNA databases (genome browsers) are rapidly accelerating post-genomic research (see http://www.genome.ucsc.edu/), with integrated genomic DNA, gene structure, EST/splicing and cross-species ortholog data. DNA databases have relatively low dimensionality; the genome is a linear code that anchors all associated data. In contrast, RNA expression and protein databases need to be able to handle very high dimensional data, with time, tissue, cell type and genes as interrelated variables. The high dimensionality of microarray expression profile data, and the lack of a standard experimental platform, have complicated the development of web-accessible databases and analytical tools. We have designed and implemented a public resource of expression profile data containing 1024 human, mouse and rat Affymetrix GeneChip expression profiles, generated in the same laboratory, and subject to the same quality and procedural controls (Public Expression Profiling Resource; PEPR). Our Oracle-based PEPR data warehouse includes a novel time series query analysis tool (SGQT), enabling dynamic generation of graphs and spreadsheets showing the action of any transcript of interest over time. In this report, we demonstrate the utility of this tool using a 27 time point, in vivo muscle regeneration series. This data warehouse and associated analysis tools provide access to multidimensional microarray data through web-based interfaces, both for download of all types of raw data for independent analysis, and also for straightforward gene-based queries. Planned implementations of PEPR will include web-based remote entry of projects adhering to quality control and standard operating procedure (QC/SOP) criteria, and automated output of alternative probe set algorithms for each project (see http://microarray.cnmcresearch.org/pgadatatable.asp).

  18. The PEPR GeneChip data warehouse, and implementation of a dynamic time series query tool (SGQT) with graphical interface

    PubMed Central

    Chen, Josephine; Zhao, Po; Massaro, Donald; Clerch, Linda B.; Almon, Richard R.; DuBois, Debra C.; Jusko, William J.; Hoffman, Eric P.

    2004-01-01

    Publicly accessible DNA databases (genome browsers) are rapidly accelerating post-genomic research (see http://www.genome.ucsc.edu/), with integrated genomic DNA, gene structure, EST/splicing and cross-species ortholog data. DNA databases have relatively low dimensionality; the genome is a linear code that anchors all associated data. In contrast, RNA expression and protein databases need to be able to handle very high dimensional data, with time, tissue, cell type and genes as interrelated variables. The high dimensionality of microarray expression profile data, and the lack of a standard experimental platform, have complicated the development of web-accessible databases and analytical tools. We have designed and implemented a public resource of expression profile data containing 1024 human, mouse and rat Affymetrix GeneChip expression profiles, generated in the same laboratory, and subject to the same quality and procedural controls (Public Expression Profiling Resource; PEPR). Our Oracle-based PEPR data warehouse includes a novel time series query analysis tool (SGQT), enabling dynamic generation of graphs and spreadsheets showing the action of any transcript of interest over time. In this report, we demonstrate the utility of this tool using a 27 time point, in vivo muscle regeneration series. This data warehouse and associated analysis tools provide access to multidimensional microarray data through web-based interfaces, both for download of all types of raw data for independent analysis, and also for straightforward gene-based queries. Planned implementations of PEPR will include web-based remote entry of projects adhering to quality control and standard operating procedure (QC/SOP) criteria, and automated output of alternative probe set algorithms for each project (see http://microarray.cnmcresearch.org/pgadatatable.asp). PMID:14681485

  19. Novel Visualization Approaches in Environmental Mineralogy

    NASA Astrophysics Data System (ADS)

    Anderson, C. D.; Lopano, C. L.; Hummer, D. R.; Heaney, P. J.; Post, J. E.; Kubicki, J. D.; Sofo, J. O.

    2006-05-01

    Communicating the complexities of atomic scale reactions between minerals and fluids is fraught with intrinsic challenges. For example, an increasing number of techniques are now available for the interrogation of dynamical processes at the mineral-fluid interface. However, the time-dependent behavior of atomic interactions between a solid and a liquid is often not adequately captured by two-dimensional line drawings or images. At the same time, the necessity for describing these reactions to general audiences is growing more urgent, as funding agencies are amplifying their encouragement to scientists to reach across disciplines and to justify their studies to public audiences. To overcome the shortcomings of traditional graphical representations, the Center for Environmental Kinetics Analysis is creating three-dimensional visualizations of experimental and simulated mineral reactions. These visualizations are then displayed on a stereo 3D projection system called the GeoWall. Made possible (and affordable) by recent improvements in computer and data projector technology, the GeoWall system uses a combination of computer software and hardware, polarizing filters and polarizing glasses, to present visualizations in true 3D. The three-dimensional views greatly improve comprehension of complex multidimensional data, and animations of time series foster better understanding of the underlying processes. The visualizations also offer an effective means to communicate the complexities of environmental mineralogy to colleagues, students and the public. Here we present three different kinds of datasets that demonstrate the effectiveness of the GeoWall in clarifying complex environmental reactions at the atomic scale. First, a time-resolved series of diffraction patterns obtained during the hydrothermal synthesis of metal oxide phases from precursor solutions can be viewed as a surface with interactive controls for peak scaling and color mapping. Second, the results of Rietveld analysis of cation exchange reactions in Mn oxides have provided three-dimensional difference Fourier maps. When stitched together in a temporal series, these offer an animated view of changes in atomic configurations during the process of exchange. Finally, molecular dynamical simulations are visualized as three-dimensional reactions between vibrating atoms in both the solid and the aqueous phases.

  20. Multiscale Analysis of Time Irreversibility Based on Phase-Space Reconstruction and Horizontal Visibility Graph Approach

    NASA Astrophysics Data System (ADS)

    Zhang, Yongping; Shang, Pengjian; Xiong, Hui; Xia, Jianan

    Time irreversibility is an important property of nonequilibrium dynamic systems. A visibility graph approach was recently proposed, and this approach is generally effective for measuring the time irreversibility of time series. However, its results may be unreliable when dealing with high-dimensional systems. In this work, we consider the joint concept of time irreversibility and adopt the phase-space reconstruction technique to improve this visibility graph approach. Compared with the previous approach, the improved approach gives a more accurate estimate of the irreversibility of time series and is more effective at distinguishing irreversible from reversible stochastic processes. We also use this approach to extract the multiscale irreversibility to account for the multiple inherent dynamics of time series. Finally, we apply the approach to detect the multiscale irreversibility of financial time series, and succeed in distinguishing times of financial crisis from the plateau. In addition, the separation of Asian stock indexes from the other indexes is clearly visible at higher time scales. Simulations and real data support the effectiveness of the improved approach for detecting time irreversibility.
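    The horizontal visibility graph underlying this approach has a simple construction; a minimal O(n^2) Python sketch is given below. The paper combines such graphs with delay vectors from phase-space reconstruction, whereas only the basic HVG criterion for a scalar series is shown here.

      import numpy as np

      def horizontal_visibility_graph(x):
          """Adjacency matrix of the horizontal visibility graph of series x.

          Nodes i < j are linked iff every intermediate value is strictly below
          both x[i] and x[j] (adjacent samples are always linked).
          """
          x = np.asarray(x, dtype=float)
          n = len(x)
          A = np.zeros((n, n), dtype=int)
          for i in range(n - 1):
              for j in range(i + 1, n):
                  if j == i + 1 or np.all(x[i + 1:j] < min(x[i], x[j])):
                      A[i, j] = A[j, i] = 1
          return A

      # Example: HVG of a short synthetic series; degree sequences of the series
      # and of its time reverse can then be compared to probe irreversibility.
      x = np.random.default_rng(5).standard_normal(200)
      print(horizontal_visibility_graph(x).sum() // 2, "edges")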

  1. Use of a polar ionic liquid as second column for the comprehensive two-dimensional GC separation of PCBs.

    PubMed

    Zapadlo, Michal; Krupcík, Ján; Májek, Pavel; Armstrong, Daniel W; Sandra, Pat

    2010-09-10

    The orthogonality of three columns coupled in two series was studied for the congener-specific comprehensive two-dimensional GC separation of polychlorinated biphenyls (PCBs). A non-polar capillary column coated with poly(5%-phenyl-95%-methyl)siloxane was used as the first ((1)D) column in both series. A polar capillary column coated with 70% cyanopropyl-polysilphenylene-siloxane or a capillary column coated with the ionic liquid 1,12-di(tripropylphosphonium)dodecane bis(trifluoromethane-sulfonyl)imide was used as the second ((2)D) column. Nine multi-congener standard PCB solutions containing subsets of all 209 native PCBs, a mixture of all 209 PCBs, as well as Aroclor 1242 and 1260 formulations, were used to study the orthogonality of both column series. Retention times of the corresponding PCB congeners on the (1)D and (2)D columns were used to construct retention time dependences (apex plots) for assessing the orthogonality of both columns coupled in series. For a visual assessment of the peak density of PCB congeners on the retention plane, 2D images were compared. The degree of orthogonality of both column series was, along with the visual assessment of the distribution of PCBs on the retention plane, also evaluated by Pearson's correlation coefficient, found by correlating the retention times t(R,i,2D) and t(R,i,1D) of corresponding PCB congeners on both column series. It was demonstrated that the apolar + ionic liquid column series is almost orthogonal both for the 2D separation of PCBs present in the Aroclor 1242 and 1260 formulations and for the separation of all 209 PCBs. All toxic, dioxin-like PCBs, with the exception of PCB 118, which overlaps with PCB 106, were resolved by the apolar/ionic liquid series, while on the apolar/polar column series three toxic PCBs overlapped (105+127, 81+148 and 118+106). Copyright 2010 Elsevier B.V. All rights reserved.

  2. On the pressure field of nonlinear standing water waves

    NASA Technical Reports Server (NTRS)

    Schwartz, L. W.

    1980-01-01

    The pressure field produced by two-dimensional, nonlinear, time- and space-periodic standing waves was calculated as a series expansion in the wave height. The high-order series was summed by the use of Padé approximants. Calculations included the pressure variation at great depth, which was considered to be a likely cause of microseismic activity, and the pressure distribution on a vertical barrier or breakwater.

  3. Challenge Online Time Series Clustering For Demand Response A Theory to Break the ‘Curse of Dimensionality'

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pal, Ranjan; Chelmis, Charalampos; Aman, Saima

    The advent of smart meters and advanced communication infrastructures catalyzes numerous smart grid applications such as dynamic demand response, and paves the way to solve challenging research problems in sustainable energy consumption. The space of solution possibilities is restricted primarily by the huge amount of generated data requiring considerable computational resources and efficient algorithms. To overcome this Big Data challenge, data clustering techniques have been proposed. Current approaches, however, do not scale in the face of the "increasing dimensionality" problem, where a cluster point is represented by the entire customer consumption time series. To overcome this aspect we first rethink the way cluster points are created and designed, and then design an efficient online clustering technique for demand response (DR) in order to analyze high volume, high dimensional energy consumption time series data at scale, and on the fly. Our online algorithm is randomized in nature, and provides optimal performance guarantees in a computationally efficient manner. Unlike prior work we (i) study the consumption properties of the whole population simultaneously rather than developing individual models for each customer separately, claiming it to be a 'killer' approach that breaks the "curse of dimensionality" in online time series clustering, and (ii) provide tight performance guarantees in theory to validate our approach. Our insights are driven by the field of sociology, where collective behavior often emerges as the result of individual patterns and lifestyles.

  4. Heat transfer of phase-change materials in two-dimensional cylindrical coordinates

    NASA Technical Reports Server (NTRS)

    Labdon, M. B.; Guceri, S. I.

    1981-01-01

    The two-dimensional phase-change problem is numerically solved in cylindrical coordinates (r and z) by utilizing two Taylor series expansions for the temperature distributions in the neighborhood of the interface location. These two expansions form two polynomials in the r and z directions. For regions sufficiently far from the interface, the temperature field equations are numerically solved in the usual way and the results are coupled with the polynomials. The main advantages of this efficient approach include the ability to accept arbitrarily time-dependent boundary conditions of all types and arbitrarily specified initial temperature distributions. A modified approach using a single Taylor series expansion in two variables is also suggested.

  5. Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, L.M.; Hochstedler, R.D.

    1997-02-01

    Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).

  6. Experimental investigation of stress wave propagation in standing trees

    Treesearch

    Houjiang Zhang; Xiping Wang; Juan Su

    2011-01-01

    The objective of this study was to investigate how a stress wave travels in a standing tree as it is introduced into the tree trunk through a mechanical impact. A series of stress wave time-of-flight (TOF) data were obtained from three freshly-cut red pine (Pinus resinosa Ait.) logs by means of a two-probe stress wave timer. Two-dimensional (2D) and three-dimensional (...

  7. Decoupled ARX and RBF Neural Network Modeling Using PCA and GA Optimization for Nonlinear Distributed Parameter Systems.

    PubMed

    Zhang, Ridong; Tao, Jili; Lu, Renquan; Jin, Qibing

    2018-02-01

    Modeling of distributed parameter systems is difficult because of their nonlinearity and infinite-dimensional characteristics. Based on principal component analysis (PCA), a hybrid modeling strategy that consists of a decoupled linear autoregressive exogenous (ARX) model and a nonlinear radial basis function (RBF) neural network model is proposed. The spatial-temporal output is first divided into a few dominant spatial basis functions and finite-dimensional temporal series by PCA. Then, a decoupled ARX model is designed to model the linear dynamics of the dominant modes of the time series. The nonlinear residual part is subsequently parameterized by RBFs, where a genetic algorithm is utilized to optimize their hidden layer structure and the parameters. Finally, the nonlinear spatial-temporal dynamic system is obtained after the time/space reconstruction. Simulation results of a catalytic rod and a heat conduction equation demonstrate the effectiveness of the proposed strategy compared to several other methods.
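    A rough Python sketch of the first two stages, PCA separation of the spatio-temporal field and a linear autoregressive fit to each temporal mode, is given below. The exogenous input, the RBF residual model, and the genetic-algorithm tuning are omitted, and all array shapes are assumptions for illustration.

      import numpy as np

      def pca_temporal_modes(snapshots, n_modes):
          """Split a space-time field into spatial basis functions and time series.

          `snapshots` has shape (n_space, n_time). Returns (basis, modes) so that
          basis @ modes approximates the mean-removed field; modes[k] is the
          temporal coefficient series of the k-th dominant spatial pattern.
          """
          mean = snapshots.mean(axis=1, keepdims=True)
          U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
          basis = U[:, :n_modes]                      # dominant spatial patterns
          modes = s[:n_modes, None] * Vt[:n_modes]    # temporal coefficient series
          return basis, modes

      def fit_ar(mode, order=2):
          """Least-squares AR(order) coefficients for one temporal mode (linear part only)."""
          X = np.array([mode[t - order:t][::-1] for t in range(order, len(mode))])
          y = mode[order:]
          return np.linalg.lstsq(X, y, rcond=None)[0]

      # Example with a synthetic 50-point spatial grid observed over 400 time steps.
      field = np.random.default_rng(6).standard_normal((50, 400)).cumsum(axis=1)
      basis, modes = pca_temporal_modes(field, n_modes=3)
      print([fit_ar(m) for m in modes])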

  8. Normalization methods in time series of platelet function assays

    PubMed Central

    Van Poucke, Sven; Zhang, Zhongheng; Roest, Mark; Vukicevic, Milan; Beran, Maud; Lauwereins, Bart; Zheng, Ming-Hua; Henskens, Yvonne; Lancé, Marcus; Marcus, Abraham

    2016-01-01

    Platelet function can be quantitatively assessed by specific assays such as light-transmission aggregometry, multiple-electrode aggregometry measuring the response to adenosine diphosphate (ADP), arachidonic acid, collagen, and thrombin-receptor activating peptide, and viscoelastic tests such as rotational thromboelastometry (ROTEM). The task of extracting meaningful statistical and clinical information from high-dimensional data spaces in temporal multivariate clinical data represented in multivariate time series is complex. Building insightful visualizations for multivariate time series demands adequate usage of normalization techniques. In this article, various methods for data normalization (z-transformation, range transformation, proportion transformation, and interquartile range) are presented and visualized, and the most suitable approach for platelet function data series is discussed. Normalization was calculated per assay (test) for all time points and per time point for all tests. Interquartile range, range transformation, and z-transformation revealed the correlation, as calculated by the Spearman correlation test, when normalized per assay (test) for all time points. When normalizing per time point for all tests, no correlation could be extracted from the charts, as was also the case when using all data as one dataset for normalization. PMID:27428217
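    As a rough illustration of the four normalizations compared above, the following sketch applies each of them along either the per-assay or per-time-point axis of a small, hypothetical assay-by-time matrix.

```python
# Hedged sketch of the four normalizations discussed (z-transformation, range,
# proportion, interquartile range), applied along one axis of a time-by-assay matrix.
import numpy as np

def z_transform(x, axis=0):
    return (x - x.mean(axis=axis, keepdims=True)) / x.std(axis=axis, keepdims=True)

def range_transform(x, axis=0):
    lo, hi = x.min(axis=axis, keepdims=True), x.max(axis=axis, keepdims=True)
    return (x - lo) / (hi - lo)

def proportion_transform(x, axis=0):
    return x / x.sum(axis=axis, keepdims=True)

def iqr_transform(x, axis=0):
    q1, q3 = np.percentile(x, [25, 75], axis=axis, keepdims=True)
    return (x - np.median(x, axis=axis, keepdims=True)) / (q3 - q1)

# data: rows = time points, columns = assays (hypothetical values)
data = np.array([[52.0, 1.1], [61.0, 0.9], [47.0, 1.4], [58.0, 1.2]])
per_assay = z_transform(data, axis=0)     # normalize each assay over all time points
per_time  = z_transform(data, axis=1)     # normalize each time point over all assays
```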

  9. VPV--The velocity profile viewer user manual

    USGS Publications Warehouse

    Donovan, John M.

    2004-01-01

    The Velocity Profile Viewer (VPV), developed by the U.S. Geological Survey (USGS), is a tool for visualizing time series of velocity profiles. The USGS uses VPV to preview and present measured velocity data from acoustic Doppler current profilers and simulated velocity data from three-dimensional estuarine, river, and lake hydrodynamic models. The data can be viewed as an animated three-dimensional profile or as a stack of time-series graphs that each represents a location in the water column. The graphically displayed data are shown at each time step like frames of animation. The animation can play at several different speeds or can be suspended on one frame. The viewing angle and time can be manipulated using mouse interaction. A number of options control the appearance of the profile and the graphs. VPV cannot edit or save data, but it can create a PostScript file showing the velocity profile in three dimensions. This user manual describes how to use each of these features. VPV is available and can be downloaded for free from the World Wide Web at http://ca.water.usgs.gov/program/sfbay/vpv.

  10. Causality networks from multivariate time series and application to epilepsy.

    PubMed

    Siggiridou, Elsa; Koutlis, Christos; Tsimpiris, Alkiviadis; Kimiskidis, Vasilios K; Kugiumtzis, Dimitris

    2015-08-01

    Granger causality and variants of this concept allow the study of complex dynamical systems as networks constructed from multivariate time series. In this work, a large number of Granger causality measures used to form causality networks from multivariate time series are assessed. For this, realizations of high-dimensional coupled dynamical systems are considered and the performance of the Granger causality measures is evaluated, seeking the measures that form networks closest to the true network of the dynamical system. In particular, the comparison focuses on Granger causality measures that reduce the state space dimension when many variables are observed. Further, the linear and nonlinear Granger causality measures of dimension reduction are compared to a standard Granger causality measure on electroencephalographic (EEG) recordings containing episodes of epileptiform discharges.
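    As background, a minimal version of the standard linear bivariate Granger causality statistic (not the dimension-reduction variants assessed in the paper) can be written as a comparison of restricted and unrestricted autoregressions; the lag order and test data below are arbitrary.

```python
# Minimal sketch of the baseline linear, bivariate Granger causality statistic:
# does adding p lags of x reduce the residual variance of an AR(p) model for y?
import numpy as np

def granger_f(y, x, p=2):
    n = len(y)
    Y = y[p:]
    lags_y = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    lags_x = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])
    ones = np.ones((n - p, 1))
    Xr = np.hstack([ones, lags_y])             # restricted: own lags only
    Xu = np.hstack([ones, lags_y, lags_x])     # unrestricted: own lags plus lags of x
    rss_r = np.sum((Y - Xr @ np.linalg.lstsq(Xr, Y, rcond=None)[0]) ** 2)
    rss_u = np.sum((Y - Xu @ np.linalg.lstsq(Xu, Y, rcond=None)[0]) ** 2)
    df_u = (n - p) - Xu.shape[1]
    return ((rss_r - rss_u) / p) / (rss_u / df_u)   # F statistic; large => x Granger-causes y

rng = np.random.default_rng(1)
x = rng.standard_normal(500)
y = np.zeros(500)
for t in range(2, 500):                            # y is driven by lagged x
    y[t] = 0.4 * y[t - 1] + 0.6 * x[t - 1] + 0.1 * rng.standard_normal()
print(granger_f(y, x, p=2), granger_f(x, y, p=2))  # first value should be much larger
```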

  11. Using spatiotemporal source separation to identify prominent features in multichannel data without sinusoidal filters.

    PubMed

    Cohen, Michael X

    2017-09-27

    The number of simultaneously recorded electrodes in neuroscience is steadily increasing, providing new opportunities for understanding brain function, but also new challenges for appropriately dealing with the increase in dimensionality. Multivariate source separation analysis methods have been particularly effective at improving signal-to-noise ratio while reducing the dimensionality of the data and are widely used for cleaning, classifying and source-localizing multichannel neural time series data. Most source separation methods produce a spatial component (that is, a weighted combination of channels to produce one time series); here, this is extended to apply source separation to a time series, with the idea of obtaining a weighted combination of successive time points, such that the weights are optimized to satisfy some criteria. This is achieved via a two-stage source separation procedure, in which an optimal spatial filter is first constructed and then its optimal temporal basis function is computed. This second stage is achieved with a time-delay-embedding matrix, in which additional rows of a matrix are created from time-delayed versions of existing rows. The optimal spatial and temporal weights can be obtained by solving a generalized eigendecomposition of covariance matrices. The method is demonstrated in simulated data and in an empirical electroencephalogram study on theta-band activity during response conflict. Spatiotemporal source separation has several advantages, including defining empirical filters without the need to apply sinusoidal narrowband filters. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
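    The core linear-algebra step, a spatial filter obtained from a generalized eigendecomposition of two covariance matrices followed by a time-delay-embedding matrix for the temporal stage, can be sketched as follows on synthetic data; the matrix names, window split, and number of delays are assumptions for illustration only.

```python
# Hedged sketch of the two-stage idea described: a spatial filter from a generalized
# eigendecomposition of two covariance matrices (S vs. reference R), then a
# time-delay-embedding matrix of the resulting component. Data are synthetic.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
n_chan, n_time = 16, 2000
data = rng.standard_normal((n_chan, n_time))    # stand-in for multichannel EEG

S = np.cov(data[:, :1000])                      # "signal" covariance (e.g., task window)
R = np.cov(data[:, 1000:])                      # "reference" covariance (e.g., baseline)
evals, evecs = eigh(S, R)                       # generalized eigendecomposition
w = evecs[:, -1]                                # spatial filter with the largest S/R ratio
component = w @ data                            # one time series from weighted channels

# Temporal stage: time-delay-embedding matrix of the component (rows = delayed copies).
n_delays = 20
H = np.array([component[k:n_time - n_delays + k] for k in range(n_delays)])
# A second eigendecomposition of covariances built from H would yield the temporal weights.
```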

  12. Modeling multivariate time series on manifolds with skew radial basis functions.

    PubMed

    Jamshidi, Arta A; Kirby, Michael J

    2011-01-01

    We present an approach for constructing nonlinear empirical mappings from high-dimensional domains to multivariate ranges. We employ radial basis functions and skew radial basis functions for constructing a model using data that are potentially scattered or sparse. The algorithm progresses iteratively, adding a new function at each step to refine the model. The placement of the functions is driven by a statistical hypothesis test that accounts for correlation in the multivariate range variables. The test is applied on training and validation data and reveals nonstatistical or geometric structure when it fails. At each step, the added function is fit to data contained in a spatiotemporally defined local region to determine the parameters--in particular, the scale of the local model. The scale of the function is determined by the zero crossings of the autocorrelation function of the residuals. The model parameters and the number of basis functions are determined automatically from the given data, and there is no need to initialize any ad hoc parameters save for the selection of the skew radial basis functions. Compactly supported skew radial basis functions are employed to improve model accuracy, order, and convergence properties. The extension of the algorithm to higher-dimensional ranges produces reduced-order models by exploiting the existence of correlation in the range variable data. Structure is tested not just in a single time series but between all pairs of time series. We illustrate the new methodologies using several illustrative problems, including modeling data on manifolds and the prediction of chaotic time series.

  13. Estimating rainfall time series and model parameter distributions using model data reduction and inversion techniques

    NASA Astrophysics Data System (ADS)

    Wright, Ashley J.; Walker, Jeffrey P.; Pauwels, Valentijn R. N.

    2017-08-01

    Floods are devastating natural hazards. To provide accurate, precise, and timely flood forecasts, there is a need to understand the uncertainties associated with an entire rainfall time series, even when rainfall was not observed. The estimation of an entire rainfall time series and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows for the uncertainty of entire rainfall input time series to be considered when estimating model parameters, and provides the ability to improve rainfall estimates from poorly gauged catchments. Current methods to estimate entire rainfall time series from streamflow records are unable to adequately invert complex nonlinear hydrologic systems. This study aims to explore the use of wavelets in the estimation of rainfall time series from streamflow records. Using the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia, it is shown that model parameter distributions and an entire rainfall time series can be estimated. Including rainfall in the estimation process improves streamflow simulations by a factor of up to 1.78. This is achieved while estimating an entire rainfall time series, inclusive of days when none was observed. It is shown that the choice of wavelet can have a considerable impact on the robustness of the inversion. Combining the use of a likelihood function that considers rainfall and streamflow errors with the use of the DWT as a model data reduction technique allows the joint inference of hydrologic model parameters along with rainfall.
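    A minimal sketch of the data-reduction idea, assuming the PyWavelets package: a daily rainfall series is represented by a handful of coarse wavelet coefficients, which is the kind of reduced parameter set an inversion would estimate rather than the full series. The wavelet, level, and data below are arbitrary choices.

```python
# Hedged sketch (assuming PyWavelets is available): reduce the dimensionality of a
# rainfall series by keeping only the coarse approximation coefficients of a DWT,
# then reconstruct an approximate series from them.
import numpy as np
import pywt

rng = np.random.default_rng(3)
rain = np.maximum(rng.standard_normal(512), 0.0)        # hypothetical daily rainfall

coeffs = pywt.wavedec(rain, "db4", level=4)             # [cA4, cD4, cD3, cD2, cD1]
reduced = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]   # keep approximation only
rain_hat = pywt.waverec(reduced, "db4")[: len(rain)]

# In an inversion, the retained coefficients (here coeffs[0]) would be the parameters
# actually estimated from streamflow, not the full daily series.
print(len(rain), "->", len(coeffs[0]), "retained coefficients")
```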

  14. Koopman Operator Framework for Time Series Modeling and Analysis

    NASA Astrophysics Data System (ADS)

    Surana, Amit

    2018-01-01

    We propose an interdisciplinary framework for time series classification, forecasting, and anomaly detection by combining concepts from Koopman operator theory, machine learning, and linear systems and control theory. At the core of this framework is nonlinear dynamic generative modeling of time series using the Koopman operator which is an infinite-dimensional but linear operator. Rather than working with the underlying nonlinear model, we propose two simpler linear representations or model forms based on Koopman spectral properties. We show that these model forms are invariants of the generative model and can be readily identified directly from data using techniques for computing Koopman spectral properties without requiring the explicit knowledge of the generative model. We also introduce different notions of distances on the space of such model forms which is essential for model comparison/clustering. We employ the space of Koopman model forms equipped with distance in conjunction with classical machine learning techniques to develop a framework for automatic feature generation for time series classification. The forecasting/anomaly detection framework is based on using Koopman model forms along with classical linear systems and control approaches. We demonstrate the proposed framework for human activity classification, and for time series forecasting/anomaly detection in power grid application.
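    The abstract does not name a specific algorithm for identifying Koopman spectral properties from data; a common choice is dynamic mode decomposition (DMD), sketched below on a synthetic two-state oscillation as an illustration of the general idea only.

```python
# Hedged illustration: estimate eigenvalues/modes of a best-fit linear operator A with
# x_{k+1} ~= A x_k from snapshot pairs (dynamic mode decomposition), one standard route
# to data-driven Koopman spectral properties. Data and rank are arbitrary.
import numpy as np

def dmd(X, Y, r=2):
    """X, Y: (n_states, n_snapshots) with Y the one-step-advanced snapshots."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    U, s, Vt = U[:, :r], s[:r], Vt[:r]
    A_tilde = U.T @ Y @ Vt.T @ np.diag(1.0 / s)    # reduced linear operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vt.T @ np.diag(1.0 / s) @ W        # DMD modes
    return eigvals, modes

rng = np.random.default_rng(4)
t = np.linspace(0, 10, 400)
series = np.vstack([np.sin(2 * t), np.cos(2 * t)])      # exactly linear dynamics (rotation)
series = series + 0.01 * rng.standard_normal(series.shape)
eigvals, modes = dmd(series[:, :-1], series[:, 1:], r=2)
# |eigvals| ~ 1 with nonzero phase indicates oscillatory (marginally stable) eigenvalues.
print(np.abs(eigvals), np.angle(eigvals))
```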

  15. Stratified Shear Flows In Pipe Geometries

    NASA Astrophysics Data System (ADS)

    Harabin, George; Camassa, Roberto; McLaughlin, Richard; UNC Joint Fluids Lab Team

    2015-11-01

    Exact and series solutions to the full Navier-Stokes equations coupled to the advection diffusion equation are investigated in tilted three-dimensional pipe geometries. Analytic techniques for studying the three-dimensional problem provide a means for tackling interesting questions such as the optimal domain for mass transport, and provide new avenues for experimental investigation of diffusion driven flows. Both static and time dependent solutions will be discussed. NSF RTG DMS-0943851, NSF RTG ARC-1025523, NSF DMS-1009750.

  16. Panel data analysis of cardiotocograph (CTG) data.

    PubMed

    Horio, Hiroyuki; Kikuchi, Hitomi; Ikeda, Tomoaki

    2013-01-01

    Panel data analysis is a statistical method, widely used in econometrics, that deals with two-dimensional panel data collected over time and over individuals. The cardiotocograph (CTG), which monitors fetal heart rate (FHR) using Doppler ultrasound and uterine contraction by strain gage, is commonly used in the intrapartum treatment of pregnant women. Although the relationship between FHR waveform patterns and outcomes such as umbilical blood gas data at delivery has long been analyzed, there exists no accumulated collection of FHR patterns from a large number of cases. Because time-series economic fluctuations such as consumption trends have been studied in econometrics using panel data, which consist of time-series and cross-sectional data, we tried to apply this method to CTG data. Panel data composed of symbolized segments of the FHR pattern can be easily handled, and a perinatologist can obtain a whole-pattern view of the FHR from the microscopic level of time-series FHR data.

  17. Multifractality Signatures in Quasars Time Series. I. 3C 273

    NASA Astrophysics Data System (ADS)

    Belete, A. Bewketu; Bravo, J. P.; Canto Martins, B. L.; Leão, I. C.; De Araujo, J. M.; De Medeiros, J. R.

    2018-05-01

    The presence of multifractality in a time series shows different correlations for different time scales as well as intermittent behaviour that cannot be captured by a single scaling exponent. The identification of a multifractal nature allows for a characterization of the dynamics and of the intermittency of the fluctuations in non-linear and complex systems. In this study, we search for a possible multifractal structure (multifractality signature) of the flux variability in the quasar 3C 273 time series for all electromagnetic wavebands at different observation points, and the origins for the observed multifractality. This study is intended to highlight how the scaling behaves across the different bands of the selected candidate which can be used as an additional new technique to group quasars based on the fractal signature observed in their time series and determine whether quasars are non-linear physical systems or not. The Multifractal Detrended Moving Average algorithm (MFDMA) has been used to study the scaling in non-linear, complex and dynamic systems. To achieve this goal, we applied the backward (θ = 0) MFDMA method for one-dimensional signals. We observe weak multifractal (close to monofractal) behaviour in some of the time series of our candidate except in the mm, UV and X-ray bands. The non-linear temporal correlation is the main source of the observed multifractality in the time series whereas the heaviness of the distribution contributes less.

  18. Nonlinear analysis of solar cycles

    NASA Astrophysics Data System (ADS)

    Serre, T.; Nesme-Ribes, E.

    2000-08-01

    In this paper, the recent improvement of the Wolf sunspot time series by Hoyt and co-workers has been analysed with the Global Flow Reconstruction (GFR) method (Serre et al. 1996a and b). A nonlinear four-dimensional chaotic model that captures the principal characteristic features of the sunspot group time series has been extracted from the data. The hypothesis of interactions between magnetic modes is implicitly tested; presumably, this is the cause of the irregular variations of solar cycle amplitudes recorded since the year 1610. The present results indicate that interactions are occurring between a few global magnetic modes.

  19. Java 3D Interactive Visualization for Astrophysics

    NASA Astrophysics Data System (ADS)

    Chae, K.; Edirisinghe, D.; Lingerfelt, E. J.; Guidry, M. W.

    2003-05-01

    We are developing a series of interactive 3D visualization tools that employ the Java 3D API. We have applied this approach initially to a simple 3-dimensional galaxy collision model (restricted 3-body approximation), with quite satisfactory results. Running either as an applet under Web browser control, or as a Java standalone application, this program permits real-time zooming, panning, and 3-dimensional rotation of the galaxy collision simulation under user mouse and keyboard control. We shall also discuss applications of this technology to 3-dimensional visualization for other problems of astrophysical interest such as neutron star mergers and the time evolution of element/energy production networks in X-ray bursts. *Managed by UT-Battelle, LLC, for the U.S. Department of Energy under contract DE-AC05-00OR22725.

  20. Silicon/Carbon Anodes with One-Dimensional Pore Structure for Lithium-Ion Batteries

    DTIC Science & Technology

    2012-02-28

    Annual progress report for Grant # W911NF1110231: Silicon/Carbon Anodes with One-Dimensional Pore Structure for Lithium-Ion Batteries. A series of composite electrode materials have been synthesized and...

  1. An architecture for consolidating multidimensional time-series data onto a common coordinate grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shippert, Tim; Gaustad, Krista

    Consolidating measurement data for use by data models or in inter-comparison studies frequently requires transforming the data onto a common grid. Standard methods for interpolating multidimensional data are often not appropriate for data with non-homogenous dimensionality, and are hard to implement in a consistent manner for different datastreams. These challenges are increased when dealing with the automated procedures necessary for use with continuous, operational datastreams. In this paper we introduce a method of applying a series of one-dimensional transformations to merge data onto a common grid, examine the challenges of ensuring consistent application of data consolidation methods, present a framework for addressing those challenges, and describe the implementation of such a framework for the Atmospheric Radiation Measurement (ARM) program.
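    The idea of consolidating a field through a series of one-dimensional transformations can be illustrated, in very simplified form, by interpolating along one dimension at a time; the grids and field below are hypothetical and this is not the ARM implementation.

```python
# Hedged sketch of the general idea: consolidate a 2-D (time, height) field onto a
# common grid by applying one-dimensional interpolations one dimension at a time.
import numpy as np

rng = np.random.default_rng(5)
src_time = np.sort(rng.uniform(0, 24, 40))       # irregular source times (hours)
src_height = np.linspace(0, 3000, 25)            # source heights (m)
field = rng.standard_normal((40, 25))            # measurement on the source grid

dst_time = np.arange(0, 24, 0.5)                 # common time grid
dst_height = np.linspace(0, 3000, 60)            # common height grid

# Pass 1: 1-D transform along time for each source height column
tmp = np.empty((len(dst_time), field.shape[1]))
for j in range(field.shape[1]):
    tmp[:, j] = np.interp(dst_time, src_time, field[:, j])

# Pass 2: 1-D transform along height for each destination time row
out = np.empty((len(dst_time), len(dst_height)))
for i in range(tmp.shape[0]):
    out[i, :] = np.interp(dst_height, src_height, tmp[i, :])
```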

  2. Spectral analysis of a two-species competition model: Determining the effects of extreme conditions on the color of noise generated from simulated time series

    NASA Astrophysics Data System (ADS)

    Golinski, M. R.

    2006-07-01

    Ecologists have observed that environmental noise affects population variance in the logistic equation for one-species growth. Interactions between deterministic and stochastic dynamics in a one-dimensional system result in increased variance in species population density over time. Since natural populations do not live in isolation, the present paper simulates a discrete-time two-species competition model with environmental noise to determine the type of colored population noise generated by extreme conditions in the long-term population dynamics of competing populations. Discrete Fourier analysis is applied to the simulation results and the calculated Hurst exponent (H) is used to determine how the color of population noise for the two species corresponds to extreme conditions in population dynamics. To interpret the biological meaning of the color of noise generated by the two-species model, the paper determines the color of noise generated by three reference models: (1) a two-dimensional discrete-time white noise model (0 ⩽ H < 1/2); (2) a two-dimensional fractional Brownian motion model (H = 1/2); and (3) a two-dimensional discrete-time model with noise for unbounded growth of two uncoupled species (1/2 < H ⩽ 1).

  3. Three-dimensional reconstruction of single-cell chromosome structure using recurrence plots.

    PubMed

    Hirata, Yoshito; Oda, Arisa; Ohta, Kunihiro; Aihara, Kazuyuki

    2016-10-11

    Single-cell analysis of the three-dimensional (3D) chromosome structure can reveal cell-to-cell variability in genome activities. Here, we propose to apply recurrence plots, a mathematical method of nonlinear time series analysis, to reconstruct the 3D chromosome structure of a single cell based on information of chromosomal contacts from genome-wide chromosome conformation capture (Hi-C) data. This recurrence plot-based reconstruction (RPR) method enables rapid reconstruction of a unique structure in single cells, even from incomplete Hi-C information.

  4. Three-dimensional reconstruction of single-cell chromosome structure using recurrence plots

    NASA Astrophysics Data System (ADS)

    Hirata, Yoshito; Oda, Arisa; Ohta, Kunihiro; Aihara, Kazuyuki

    2016-10-01

    Single-cell analysis of the three-dimensional (3D) chromosome structure can reveal cell-to-cell variability in genome activities. Here, we propose to apply recurrence plots, a mathematical method of nonlinear time series analysis, to reconstruct the 3D chromosome structure of a single cell based on information of chromosomal contacts from genome-wide chromosome conformation capture (Hi-C) data. This recurrence plot-based reconstruction (RPR) method enables rapid reconstruction of a unique structure in single cells, even from incomplete Hi-C information.

  5. Topographic evolution of orogens: The long term perspective

    NASA Astrophysics Data System (ADS)

    Robl, Jörg; Hergarten, Stefan; Prasicek, Günther

    2017-04-01

    The landscape of mountain ranges reflects the competition of tectonics and climate, that build up and destroy topography, respectively. While there is a broad consensus on the acting processes, there is a vital debate whether the topography of individual orogens reflects stages of growth, steady-state or decay. This debate is fuelled by the million-year time scales hampering direct observations on landscape evolution in mountain ranges, the superposition of various process patterns and the complex interactions among different processes. In this presentation we focus on orogen-scale landscape evolution based on time-dependent numerical models and explore model time series to constrain the development of mountain range topography during an orogenic cycle. The erosional long term response of rivers and hillslopes to uplift can be mathematically formalised by the stream power and mass diffusion equations, respectively, which enables us to describe the time-dependent evolution of topography in orogens. Based on a simple one-dimensional model consisting of two rivers separated by a watershed we explain the influence of uplift rate and rock erodibility on steady-state channel profiles and show the time-dependent development of the channel - drainage divide system. The effect of dynamic drainage network reorganization adds additional complexity and its effect on topography is explored on the basis of two-dimensional models. Further complexity is introduced by coupling a mechanical model (thin viscous sheet approach) describing continental collision, crustal thickening and topography formation with a stream power-based landscape evolution model. Model time series show the impact of crustal deformation on drainage networks and consequently on the evolution of mountain range topography (Robl et al., in review). All model outcomes, from simple one-dimensional to coupled two dimensional models are presented as movies featuring a high spatial and temporal resolution. Robl, J., S. Hergarten, and G. Prasicek (in review), The topographic state of mountain ranges, Earth Science Reviews.
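    For readers unfamiliar with the stream-power formalism mentioned above, the following is a deliberately simple one-dimensional sketch of a detachment-limited channel profile evolving under constant uplift, dz/dt = U - K A^m S^n; the parameter values and Hack's-law drainage area are assumptions, and the authors' coupled two-dimensional and thin-viscous-sheet models are far richer.

```python
# Hedged 1-D sketch of a detachment-limited stream-power channel under constant uplift.
# K, U, m, n, and the Hack's-law area proxy are assumed values for illustration only.
import numpy as np

nx, dx, dt, nsteps = 200, 100.0, 50.0, 20000      # cells, cell size (m), step (yr), steps
U, K, m, n = 1e-3, 2e-5, 0.5, 1.0                 # uplift rate (m/yr), erodibility, exponents
x = np.arange(nx) * dx                            # distance from the drainage divide
A = 1.7 * (x + dx) ** 1.8                         # drainage area via a Hack's-law proxy
z = np.zeros(nx)                                  # initial flat profile; outlet at x[-1]

for _ in range(nsteps):
    S = np.zeros(nx)
    S[:-1] = np.maximum((z[:-1] - z[1:]) / dx, 0.0)   # downstream slope
    erosion = K * A**m * S**n
    z[:-1] += dt * (U - erosion[:-1])                 # interior nodes uplift and erode
    z[-1] = 0.0                                       # fixed base level at the outlet

# At steady state S ~ (U / (K A^m))**(1/n): channels steepen for faster uplift or lower K,
# the dependence on uplift rate and rock erodibility discussed in the abstract.
```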

  6. New View of Relativity Theory

    NASA Astrophysics Data System (ADS)

    Martini, Luiz Cesar

    2014-04-01

    This article results from Introducing the Dimensional Continuous Space-Time Theory that was published in reference 1. The Dimensional Continuous Space-Time Theory shows a series of facts relative to matter, energy, and space, and concludes that empty space is inelastic, absolutely stationary, motionless, and perpetual, without the possibility of deformation, nor can it be destroyed or created. An elementary cell of empty space, or a certain amount of empty space, can be occupied by any quantity of energy or matter without any alteration or deformation. As a consequence of these properties, and being an integral part of the theory, the principles of Relativity Theory must be changed to become simple and intuitive.

  7. Dynamical density delay maps: simple, new method for visualising the behaviour of complex systems

    PubMed Central

    2014-01-01

    Background Physiologic signals, such as cardiac interbeat intervals, exhibit complex fluctuations. However, capturing important dynamical properties, including nonstationarities may not be feasible from conventional time series graphical representations. Methods We introduce a simple-to-implement visualisation method, termed dynamical density delay mapping (“D3-Map” technique) that provides an animated representation of a system’s dynamics. The method is based on a generalization of conventional two-dimensional (2D) Poincaré plots, which are scatter plots where each data point, x(n), in a time series is plotted against the adjacent one, x(n + 1). First, we divide the original time series, x(n) (n = 1,…, N), into a sequence of segments (windows). Next, for each segment, a three-dimensional (3D) Poincaré surface plot of x(n), x(n + 1), h[x(n),x(n + 1)] is generated, in which the third dimension, h, represents the relative frequency of occurrence of each (x(n),x(n + 1)) point. This 3D Poincaré surface is then chromatised by mapping the relative frequency h values onto a colour scheme. We also generate a colourised 2D contour plot from each time series segment using the same colourmap scheme as for the 3D Poincaré surface. Finally, the original time series graph, the colourised 3D Poincaré surface plot, and its projection as a colourised 2D contour map for each segment, are animated to create the full “D3-Map.” Results We first exemplify the D3-Map method using the cardiac interbeat interval time series from a healthy subject during sleeping hours. The animations uncover complex dynamical changes, such as transitions between states, and the relative amount of time the system spends in each state. We also illustrate the utility of the method in detecting hidden temporal patterns in the heart rate dynamics of a patient with atrial fibrillation. The videos, as well as the source code, are made publicly available. Conclusions Animations based on density delay maps provide a new way of visualising dynamical properties of complex systems not apparent in time series graphs or standard Poincaré plot representations. Trainees in a variety of fields may find the animations useful as illustrations of fundamental but challenging concepts, such as nonstationarity and multistability. For investigators, the method may facilitate data exploration. PMID:24438439
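    A single frame of such a map can be sketched directly from the description above: a relative-frequency surface over the (x(n), x(n+1)) plane computed within one window, with successive windows providing the animation. The interbeat-interval series, bin count, and window length below are synthetic, illustrative choices.

```python
# Minimal sketch of one D3-Map frame: a density (2-D histogram) over the Poincare
# plane (x(n), x(n+1)) for a single window; animating successive windows gives the map.
import numpy as np

rng = np.random.default_rng(6)
rr = 0.8 + 0.05 * np.sin(np.linspace(0, 40, 5000)) + 0.02 * rng.standard_normal(5000)

def density_delay_map(x, bins=40, lim=(0.6, 1.0)):
    """Relative-frequency surface h over the (x(n), x(n+1)) plane for one window."""
    h, xe, ye = np.histogram2d(x[:-1], x[1:], bins=bins, range=[lim, lim])
    return h / h.sum(), xe, ye

window = 1000
frames = [density_delay_map(rr[i:i + window]) for i in range(0, len(rr) - window, window)]
# Each frames[k][0] is the h(x(n), x(n+1)) surface that would be colour-mapped and animated.
```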

  8. The Effect of Two-dimensional and Stereoscopic Presentation on Middle School Students' Performance of Spatial Cognition Tasks

    NASA Astrophysics Data System (ADS)

    Price, Aaron; Lee, Hee-Sun

    2010-02-01

    We investigated whether and how student performance on three types of spatial cognition tasks differs when worked with two-dimensional or stereoscopic representations. We recruited nineteen middle school students visiting a planetarium in a large Midwestern American city and analyzed their performance on a series of spatial cognition tasks in terms of response accuracy and task completion time. Results show that response accuracy did not differ between the two types of representations while task completion time was significantly greater with the stereoscopic representations. The completion time increased as the number of mental manipulations of 3D objects increased in the tasks. Post-interviews provide evidence that some students continued to think of stereoscopic representations as two-dimensional. Based on cognitive load and cue theories, we interpret that, in the absence of pictorial depth cues, students may need more time to be familiar with stereoscopic representations for optimal performance. In light of these results, we discuss potential uses of stereoscopic representations for science learning.

  9. Nonlinear analysis and dynamic structure in the energy market

    NASA Astrophysics Data System (ADS)

    Aghababa, Hajar

    This research assesses the dynamic structure of the energy sector of the aggregate economy in the context of nonlinear mechanisms. Earlier studies have focused mainly on the price of the energy products when detecting nonlinearities in time series data of the energy market, and there is little mention of the production side of the market. Moreover, there is a lack of exploration about the implication of high dimensionality and time aggregation when analyzing the market's fundamentals. This research will address these gaps by including the quantity side of the market in addition to the price and by systematically incorporating various frequencies for sample sizes in three essays. The goal of this research is to provide an inclusive and exhaustive examination of the dynamics in the energy markets. The first essay begins with the application of statistical techniques, and it incorporates the most well-known univariate tests for nonlinearity with distinct power functions over alternatives and tests different null hypotheses. It utilizes the daily spot price observations on five major products in the energy market. The results suggest that the time series daily spot prices of the energy products are highly nonlinear in their nature. They demonstrate apparent evidence of general nonlinear serial dependence in each individual series, as well as nonlinearity in the first, second, and third moments of the series. The second essay examines the underlying mechanism of crude oil production and identifies the nonlinear structure of the production market by utilizing various monthly time series observations of crude oil production: the U.S. field, Organization of the Petroleum Exporting Countries (OPEC), non-OPEC, and the world production of crude oil. The finding implies that the time series data of the U.S. field, OPEC, and the world production of crude oil exhibit deep nonlinearity in their structure and are generated by nonlinear mechanisms. However, the dynamics of the non-OPEC production time series data does not reveal signs of nonlinearity. The third essay explores nonlinear structure in the case of high dimensionality of the observations, different frequencies of sample sizes, and division of the samples into sub-samples. It systematically examines the robustness of the inference methods at various levels of time aggregation by employing daily spot prices on crude oil for 26 years as well as monthly spot price index on crude oil for 41 years. The daily and monthly samples are divided into sub-samples as well. All the tests detect strong evidence of nonlinear structure in the daily spot price of crude oil; whereas in monthly observations the evidence of nonlinear dependence is less dramatic, indicating that the nonlinear serial dependence will not be as intense when the time aggregation increase in time series observations.

  10. Investigation of Time Series Representations and Similarity Measures for Structural Damage Pattern Recognition

    PubMed Central

    Swartz, R. Andrew

    2013-01-01

    This paper investigates the time series representation methods and similarity measures for sensor data feature extraction and structural damage pattern recognition. Both model-based time series representation and dimensionality reduction methods are studied to compare the effectiveness of feature extraction for damage pattern recognition. The evaluation of feature extraction methods is performed by examining the separation of feature vectors among different damage patterns and the pattern recognition success rate. In addition, the impact of similarity measures on the pattern recognition success rate and the metrics for damage localization are also investigated. The test data used in this study are from the System Identification to Monitor Civil Engineering Structures (SIMCES) Z24 Bridge damage detection tests, a rigorous instrumentation campaign that recorded the dynamic performance of a concrete box-girder bridge under progressively increasing damage scenarios. A number of progressive damage test case datasets and damage test data with different damage modalities are used. The simulation results show that both time series representation methods and similarity measures have significant impact on the pattern recognition success rate. PMID:24191136

  11. Decoding Dynamic Brain Patterns from Evoked Responses: A Tutorial on Multivariate Pattern Analysis Applied to Time Series Neuroimaging Data.

    PubMed

    Grootswagers, Tijl; Wardle, Susan G; Carlson, Thomas A

    2017-04-01

    Multivariate pattern analysis (MVPA) or brain decoding methods have become standard practice in analyzing fMRI data. Although decoding methods have been extensively applied in brain-computer interfaces, these methods have only recently been applied to time series neuroimaging data such as MEG and EEG to address experimental questions in cognitive neuroscience. In a tutorial style review, we describe a broad set of options to inform future time series decoding studies from a cognitive neuroscience perspective. Using example MEG data, we illustrate the effects that different options in the decoding analysis pipeline can have on experimental results where the aim is to "decode" different perceptual stimuli or cognitive states over time from dynamic brain activation patterns. We show that decisions made at both preprocessing (e.g., dimensionality reduction, subsampling, trial averaging) and decoding (e.g., classifier selection, cross-validation design) stages of the analysis can significantly affect the results. In addition to standard decoding, we describe extensions to MVPA for time-varying neuroimaging data including representational similarity analysis, temporal generalization, and the interpretation of classifier weight maps. Finally, we outline important caveats in the design and interpretation of time series decoding experiments.
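    A minimal time-resolved decoding loop of the kind discussed, using scikit-learn on synthetic trials, might look as follows; the classifier, cross-validation scheme, and injected effect are arbitrary choices standing in for the options reviewed above.

```python
# Hedged sketch (scikit-learn, synthetic data) of basic time-resolved decoding:
# at each time point, cross-validate a classifier on a trials-by-channels matrix.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_trials, n_channels, n_times = 100, 32, 60
X = rng.standard_normal((n_trials, n_channels, n_times))
y = np.repeat([0, 1], n_trials // 2)
X[y == 1, :5, 20:40] += 0.8            # injected class difference in one time window

accuracy = np.array([
    cross_val_score(LinearDiscriminantAnalysis(), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
# Accuracy rises above chance (~0.5) only where the classes actually differ (t ~ 20-40);
# classifier choice, cv design, subsampling, and averaging are the options discussed above.
```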

  12. Big Data Analytics for Demand Response: Clustering Over Space and Time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chelmis, Charalampos; Kolte, Jahanvi; Prasanna, Viktor K.

    The pervasive deployment of advanced sensing infrastructure in Cyber-Physical systems, such as the Smart Grid, has resulted in an unprecedented data explosion. Such data exhibit both large volumes and high velocity characteristics, two of the three pillars of Big Data, and have a time-series notion as datasets in this context typically consist of successive measurements made over a time interval. Time-series data can be valuable for data mining and analytics tasks such as identifying the "right" customers among a diverse population, to target for Demand Response programs. However, time series are challenging to mine due to their high dimensionality. In this paper, we motivate this problem using a real application from the smart grid domain. We explore novel representations of time-series data for BigData analytics, and propose a clustering technique for determining natural segmentation of customers and identification of temporal consumption patterns. Our method is generalizable to large-scale, real-world scenarios, without making any assumptions about the data. We evaluate our technique using real datasets from smart meters, totaling ~ 18,200,000 data points, and show the efficacy of our technique in efficiently detecting the optimal number of clusters.

  13. Machine learning for cardiac ultrasound time series data

    NASA Astrophysics Data System (ADS)

    Yuan, Baichuan; Chitturi, Sathya R.; Iyer, Geoffrey; Li, Nuoyu; Xu, Xiaochuan; Zhan, Ruohan; Llerena, Rafael; Yen, Jesse T.; Bertozzi, Andrea L.

    2017-03-01

    We consider the problem of identifying frames in a cardiac ultrasound video associated with left ventricular chamber end-systolic (ES, contraction) and end-diastolic (ED, expansion) phases of the cardiac cycle. Our procedure involves a simple application of non-negative matrix factorization (NMF) to a series of frames of a video from a single patient. Rank-2 NMF is performed to compute two end-members. The end members are shown to be close representations of the actual heart morphology at the end of each phase of the heart function. Moreover, the entire time series can be represented as a linear combination of these two end-member states thus providing a very low dimensional representation of the time dynamics of the heart. Unlike previous work, our methods do not require any electrocardiogram (ECG) information in order to select the end-diastolic frame. Results are presented for a data set of 99 patients including both healthy and diseased examples.
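    The rank-2 NMF step can be sketched with scikit-learn on a synthetic frames-by-pixels matrix; the "video" below is fabricated so that two non-negative end-members and their per-frame weights are recoverable, and the initialization and iteration settings are arbitrary.

```python
# Hedged sketch (scikit-learn, synthetic frames) of the rank-2 NMF step: factor a
# frames-by-pixels matrix into two non-negative end-members and per-frame weights.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(8)
n_frames, n_pixels = 120, 64 * 64
t = np.linspace(0, 4 * np.pi, n_frames)
es, ed = rng.random(n_pixels), rng.random(n_pixels)       # stand-in ES/ED images
V = np.outer(0.5 * (1 + np.sin(t)), es) + np.outer(0.5 * (1 - np.sin(t)), ed)
V += 0.01 * rng.random(V.shape)                           # non-negative "video" matrix

model = NMF(n_components=2, init="nndsvda", max_iter=500)
W = model.fit_transform(V)          # (n_frames, 2): weight of each end-member over time
H = model.components_               # (2, n_pixels): the two end-member images
# Frames where W[:, 0] (or W[:, 1]) peaks are candidate end-systolic / end-diastolic frames.
```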

  14. Nonlinear modeling of chaotic time series: Theory and applications

    NASA Astrophysics Data System (ADS)

    Casdagli, M.; Eubank, S.; Farmer, J. D.; Gibson, J.; Desjardins, D.; Hunter, N.; Theiler, J.

    We review recent developments in the modeling and prediction of nonlinear time series. In some cases, apparent randomness in time series may be due to chaotic behavior of a nonlinear but deterministic system. In such cases, it is possible to exploit the determinism to make short term forecasts that are much more accurate than one could make from a linear stochastic model. This is done by first reconstructing a state space, and then using nonlinear function approximation methods to create a dynamical model. Nonlinear models are valuable not only as short term forecasters, but also as diagnostic tools for identifying and quantifying low-dimensional chaotic behavior. During the past few years, methods for nonlinear modeling have developed rapidly, and have already led to several applications where nonlinear models motivated by chaotic dynamics provide superior predictions to linear models. These applications include prediction of fluid flows, sunspots, mechanical vibrations, ice ages, measles epidemics, and human speech.
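    The two ingredients described, state-space reconstruction by delay embedding and a simple local (nearest-neighbour) forecaster, can be illustrated on the logistic map; the embedding dimension, delay, and neighbour count below are arbitrary choices.

```python
# Hedged sketch of the two steps described: reconstruct a state space by delay
# embedding, then forecast with a simple local nearest-neighbour model.
import numpy as np

def embed(x, dim=3, tau=1):
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def nn_forecast(x, dim=3, tau=1, k=5):
    """Predict the next value from the successors of the k nearest embedded states."""
    E = embed(x, dim, tau)
    query, library = E[-1], E[:-1]               # last state vs. all earlier states
    d = np.linalg.norm(library - query, axis=1)
    idx = np.argsort(d)[:k]
    return x[idx + (dim - 1) * tau + 1].mean()   # values following each neighbour

x = np.empty(1000)
x[0] = 0.3
for t in range(999):
    x[t + 1] = 3.9 * x[t] * (1 - x[t])           # chaotic but deterministic series
print("forecast:", nn_forecast(x[:-1]), "actual:", x[-1])
```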

  15. A complex systems analysis of stick-slip dynamics of a laboratory fault

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, David M.; Tordesillas, Antoinette, E-mail: atordesi@unimelb.edu.au; Small, Michael

    2014-03-15

    We study the stick-slip behavior of a granular bed of photoelastic disks sheared by a rough slider pulled along the surface. Time series of a proxy for granular friction are examined using complex systems methods to characterize the observed stick-slip dynamics of this laboratory fault. Nonlinear surrogate time series methods show that the stick-slip behavior appears more complex than a periodic dynamics description. Phase space embedding methods show that the dynamics can be locally captured within a four to six dimensional subspace. These slider time series also provide an experimental test for recent complex network methods. Phase space networks, constructed by connecting nearby phase space points, proved useful in capturing the key features of the dynamics. In particular, network communities could be associated to slip events and the ranking of small network subgraphs exhibited a heretofore unreported ordering.

  16. Volterra Series Approach for Nonlinear Aeroelastic Response of 2-D Lifting Surfaces

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.; Marzocca, Piergiovanni; Librescu, Liviu

    2001-01-01

    The problem of the determination of the subcritical aeroelastic response and flutter instability of nonlinear two-dimensional lifting surfaces in an incompressible flow-field via Volterra series approach is addressed. The related aeroelastic governing equations are based upon the inclusion of structural nonlinearities, of the linear unsteady aerodynamics and consideration of an arbitrary time-dependent external pressure pulse. Unsteady aeroelastic nonlinear kernels are determined, and based on these, frequency and time histories of the subcritical aeroelastic response are obtained, and in this context the influence of geometric nonlinearities is emphasized. Conclusions and results displaying the implications of the considered effects are supplied.

  17. Topics in Two-Dimensional Quantum Gravity and Chern-Simons Gauge Theories

    NASA Astrophysics Data System (ADS)

    Zemba, Guillermo Raul

    A series of studies in two and three dimensional theories is presented. The two dimensional problems are considered in the framework of String Theory. The first one determines the region of integration in the space of inequivalent tori of a tadpole diagram in Closed String Field Theory, using the naive Witten three-string vertex. It is shown that every surface is counted an infinite number of times and the source of this behavior is identified. The second study analyzes the behavior of the discrete matrix model of two dimensional gravity without matter using a mathematically well-defined construction, confirming several conjectures and partial results from the literature. The studies in three dimensions are based on Chern-Simons pure gauge theory. The first one deals with the projection of the theory onto a two-dimensional surface of constant time, whereas the second analyzes the large N behavior of the SU(N) theory and makes evident a duality symmetry between the only two parameters of the theory.

  18. An architecture for consolidating multidimensional time-series data onto a common coordinate grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shippert, Tim; Gaustad, Krista

    Consolidating measurement data for use by data models or in inter-comparison studies frequently requires transforming the data onto a common grid. Standard methods for interpolating multidimensional data are often not appropriate for data with non-homogenous dimensionality, and are hard to implement in a consistent manner for different datastreams. In addition, these challenges are increased when dealing with the automated procedures necessary for use with continuous, operational datastreams. In this paper we introduce a method of applying a series of one-dimensional transformations to merge data onto a common grid, examine the challenges of ensuring consistent application of data consolidation methods, present a framework for addressing those challenges, and describe the implementation of such a framework for the Atmospheric Radiation Measurement (ARM) program.

  19. An architecture for consolidating multidimensional time-series data onto a common coordinate grid

    DOE PAGES

    Shippert, Tim; Gaustad, Krista

    2016-12-16

    Consolidating measurement data for use by data models or in inter-comparison studies frequently requires transforming the data onto a common grid. Standard methods for interpolating multidimensional data are often not appropriate for data with non-homogenous dimensionality, and are hard to implement in a consistent manner for different datastreams. In addition, these challenges are increased when dealing with the automated procedures necessary for use with continuous, operational datastreams. In this paper we introduce a method of applying a series of one-dimensional transformations to merge data onto a common grid, examine the challenges of ensuring consistent application of data consolidation methods, present a framework for addressing those challenges, and describe the implementation of such a framework for the Atmospheric Radiation Measurement (ARM) program.

  20. A Nonlinear Dynamical Systems based Model for Stochastic Simulation of Streamflow

    NASA Astrophysics Data System (ADS)

    Erkyihun, S. T.; Rajagopalan, B.; Zagona, E. A.

    2014-12-01

    Traditional time series methods model the evolution of the underlying process as a linear or nonlinear function of the autocorrelation. These methods capture the distributional statistics but are incapable of providing insights into the dynamics of the process, the potential regimes, and predictability. This work develops a nonlinear dynamical model for stochastic simulation of streamflows. In this, first a wavelet spectral analysis is employed on the flow series to isolate dominant orthogonal quasi periodic timeseries components. The periodic bands are added denoting the 'signal' component of the time series and the residual being the 'noise' component. Next, the underlying nonlinear dynamics of this combined band time series is recovered. For this the univariate time series is embedded in a d-dimensional space with an appropriate lag T to recover the state space in which the dynamics unfolds. Predictability is assessed by quantifying the divergence of trajectories in the state space with time, as Lyapunov exponents. The nonlinear dynamics in conjunction with a K-nearest neighbor time resampling is used to simulate the combined band, to which the noise component is added to simulate the timeseries. We demonstrate this method by applying it to the data at Lees Ferry that comprises of both the paleo reconstructed and naturalized historic annual flow spanning 1490-2010. We identify interesting dynamics of the signal in the flow series and epochal behavior of predictability. These will be of immense use for water resources planning and management.

  1. River flow prediction using hybrid models of support vector regression with the wavelet transform, singular spectrum analysis and chaotic approach

    NASA Astrophysics Data System (ADS)

    Baydaroğlu, Özlem; Koçak, Kasım; Duran, Kemal

    2018-06-01

    Prediction of water amount that will enter the reservoirs in the following month is of vital importance especially for semi-arid countries like Turkey. Climate projections emphasize that water scarcity will be one of the serious problems in the future. This study presents a methodology for predicting river flow for the subsequent month based on the time series of observed monthly river flow with hybrid models of support vector regression (SVR). Monthly river flow over the period 1940-2012 observed for the Kızılırmak River in Turkey has been used for training the method, which then has been applied for predictions over a period of 3 years. SVR is a specific implementation of support vector machines (SVMs), which transforms the observed input data time series into a high-dimensional feature space (input matrix) by way of a kernel function and performs a linear regression in this space. SVR requires a special input matrix. The input matrix was produced by wavelet transforms (WT), singular spectrum analysis (SSA), and a chaotic approach (CA) applied to the input time series. WT convolutes the original time series into a series of wavelets, and SSA decomposes the time series into a trend, an oscillatory and a noise component by singular value decomposition. CA uses a phase space formed by trajectories, which represent the dynamics producing the time series. These three methods for producing the input matrix for the SVR proved successful, while the SVR-WT combination resulted in the highest coefficient of determination and the lowest mean absolute error.
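    A stripped-down version of the SVR step, using scikit-learn on synthetic monthly flows and a plain lagged input matrix (where the paper instead builds the input matrix with WT, SSA, or the chaotic approach), might look like this; the flow series, lag count, and SVR settings are assumptions.

```python
# Hedged sketch (scikit-learn, synthetic flow data) of the SVR step: build an input
# matrix from preceding months and regress the next month's flow.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(9)
months = np.arange(360)
flow = 100 + 40 * np.sin(2 * np.pi * months / 12) + 10 * rng.standard_normal(360)

lags = 6
X = np.column_stack([flow[i:len(flow) - lags + i] for i in range(lags)])  # input matrix
y = flow[lags:]                                                           # next-month target

split = 300
model = SVR(kernel="rbf", C=100.0, epsilon=1.0).fit(X[:split], y[:split])
pred = model.predict(X[split:])
mae = np.abs(pred - y[split:]).mean()
print("mean absolute error on held-out months:", round(float(mae), 2))
```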

  2. Applications and development of new algorithms for displacement analysis using InSAR time series

    NASA Astrophysics Data System (ADS)

    Osmanoglu, Batuhan

    Time series analysis of Synthetic Aperture Radar Interferometry (InSAR) data has become an important scientific tool for monitoring and measuring the displacement of Earth's surface due to a wide range of phenomena, including earthquakes, volcanoes, landslides, changes in ground water levels, and wetlands. Time series analysis is a product of interferometric phase measurements, which become ambiguous when the observed motion is larger than half of the radar wavelength. Thus, phase observations must first be unwrapped in order to obtain physically meaningful results. Persistent Scatterer Interferometry (PSI), Stanford Method for Persistent Scatterers (StaMPS), Short Baselines Interferometry (SBAS) and Small Temporal Baseline Subset (STBAS) algorithms solve for this ambiguity using a series of spatio-temporal unwrapping algorithms and filters. In this dissertation, I improve upon current phase unwrapping algorithms, and apply the PSI method to study subsidence in Mexico City. PSI was used to obtain unwrapped deformation rates in Mexico City (Chapter 3),where ground water withdrawal in excess of natural recharge causes subsurface, clay-rich sediments to compact. This study is based on 23 satellite SAR scenes acquired between January 2004 and July 2006. Time series analysis of the data reveals a maximum line-of-sight subsidence rate of 300mm/yr at a high enough resolution that individual subsidence rates for large buildings can be determined. Differential motion and related structural damage along an elevated metro rail was evident from the results. Comparison of PSI subsidence rates with data from permanent GPS stations indicate root mean square (RMS) agreement of 6.9 mm/yr, about the level expected based on joint data uncertainty. The Mexico City results suggest negligible recharge, implying continuing degradation and loss of the aquifer in the third largest metropolitan area in the world. Chapters 4 and 5 illustrate the link between time series analysis and three-dimensional (3-D) phase unwrapping. Chapter 4 focuses on the unwrapping path. Unwrapping algorithms can be divided into two groups, path-dependent and path-independent algorithms. Path-dependent algorithms use local unwrapping functions applied pixel-by-pixel to the dataset. In contrast, path-independent algorithms use global optimization methods such as least squares, and return a unique solution. However, when aliasing and noise are present, path-independent algorithms can underestimate the signal in some areas due to global fitting criteria. Path-dependent algorithms do not underestimate the signal, but, as the name implies, the unwrapping path can affect the result. Comparison between existing path algorithms and a newly developed algorithm based on Fisher information theory was conducted. Results indicate that Fisher information theory does indeed produce lower misfit results for most tested cases. Chapter 5 presents a new time series analysis method based on 3-D unwrapping of SAR data using extended Kalman filters. Existing methods for time series generation using InSAR data employ special filters to combine two-dimensional (2-D) spatial unwrapping with one-dimensional (1-D) temporal unwrapping results. The new method, however, combines observations in azimuth, range and time for repeat pass interferometry. Due to the pixel-by-pixel characteristic of the filter, the unwrapping path is selected based on a quality map. 
This unwrapping algorithm is the first application of extended Kalman filters to the 3-D unwrapping problem. Time series analyses of InSAR data are used in a variety of applications with different characteristics. Consequently, it is difficult to develop a single algorithm that can provide optimal results in all cases, given that different algorithms possess a unique set of strengths and weaknesses. Nonetheless, filter-based unwrapping algorithms such as the one presented in this dissertation have the capability of joining multiple observations into a uniform solution, which is becoming an important feature with continuously growing datasets.

  3. A Locally Optimal Algorithm for Estimating a Generating Partition from an Observed Time Series and Its Application to Anomaly Detection.

    PubMed

    Ghalyan, Najah F; Miller, David J; Ray, Asok

    2018-06-12

    Estimation of a generating partition is critical for symbolization of measurements from discrete-time dynamical systems, where a sequence of symbols from a (finite-cardinality) alphabet may uniquely specify the underlying time series. Such symbolization is useful for computing measures (e.g., Kolmogorov-Sinai entropy) to identify or characterize the (possibly unknown) dynamical system. It is also useful for time series classification and anomaly detection. The seminal work of Hirata, Judd, and Kilminster (2004) derives a novel objective function, akin to a clustering objective, that measures the discrepancy between a set of reconstruction values and the points from the time series. They cast estimation of a generating partition via the minimization of their objective function. Unfortunately, their proposed algorithm is nonconvergent, with no guarantee of finding even locally optimal solutions with respect to their objective. The difficulty is a heuristic-nearest neighbor symbol assignment step. Alternatively, we develop a novel, locally optimal algorithm for their objective. We apply iterative nearest-neighbor symbol assignments with guaranteed discrepancy descent, by which joint, locally optimal symbolization of the entire time series is achieved. While most previous approaches frame generating partition estimation as a state-space partitioning problem, we recognize that minimizing the Hirata et al. (2004) objective function does not induce an explicit partitioning of the state space, but rather the space consisting of the entire time series (effectively, clustering in a (countably) infinite-dimensional space). Our approach also amounts to a novel type of sliding block lossy source coding. Improvement, with respect to several measures, is demonstrated over popular methods for symbolizing chaotic maps. We also apply our approach to time-series anomaly detection, considering both chaotic maps and failure application in a polycrystalline alloy material.

  4. State and parameter estimation of spatiotemporally chaotic systems illustrated by an application to Rayleigh-Bénard convection.

    PubMed

    Cornick, Matthew; Hunt, Brian; Ott, Edward; Kurtuldu, Huseyin; Schatz, Michael F

    2009-03-01

    Data assimilation refers to the process of estimating a system's state from a time series of measurements (which may be noisy or incomplete) in conjunction with a model for the system's time evolution. Here we demonstrate the applicability of a recently developed data assimilation method, the local ensemble transform Kalman filter, to nonlinear, high-dimensional, spatiotemporally chaotic flows in Rayleigh-Bénard convection experiments. Using this technique we are able to extract the full temperature and velocity fields from a time series of shadowgraph measurements. In addition, we describe extensions of the algorithm for estimating model parameters. Our results suggest the potential usefulness of our data assimilation technique to a broad class of experimental situations exhibiting spatiotemporal chaos.

  5. Coronal Mass Ejection Data Clustering and Visualization of Decision Trees

    NASA Astrophysics Data System (ADS)

    Ma, Ruizhe; Angryk, Rafal A.; Riley, Pete; Filali Boubrahimi, Soukaina

    2018-05-01

    Coronal mass ejections (CMEs) can be categorized as either “magnetic clouds” (MCs) or non-MCs. Features such as a large magnetic field, low plasma-beta, and low proton temperature suggest that a CME event is also an MC event; however, so far there is neither a definitive method nor an automatic process to distinguish the two. Human labeling is time-consuming, and results can fluctuate owing to the imprecise definition of such events. In this study, we approach the problem of MC and non-MC distinction from a time series data analysis perspective and show how clustering can shed some light on this problem. Although many algorithms exist for traditional data clustering in the Euclidean space, they are not well suited for time series data. Problems such as inadequate distance measure, inaccurate cluster center description, and lack of intuitive cluster representations need to be addressed for effective time series clustering. Our data analysis in this work is twofold: clustering and visualization. For clustering we compared the results from the popular hierarchical agglomerative clustering technique to a distance density clustering heuristic we developed previously for time series data clustering. In both cases, dynamic time warping will be used for similarity measure. For classification as well as visualization, we use decision trees to aggregate single-dimensional clustering results to form a multidimensional time series decision tree, with averaged time series to present each decision. In this study, we achieved modest accuracy and, more importantly, an intuitive interpretation of how different parameters contribute to an MC event.
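    Since dynamic time warping is the similarity measure used for both clustering approaches above, a plain dynamic-programming implementation is sketched below for two one-dimensional series of different lengths; it is a generic textbook formulation, not the authors' code.

```python
# Hedged sketch of the dynamic-time-warping distance used as the similarity measure:
# a plain O(NM) dynamic-programming implementation for two 1-D series.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

a = np.sin(np.linspace(0, 2 * np.pi, 50))
b = np.sin(np.linspace(0, 2 * np.pi, 70))           # same shape, different length/speed
print(dtw_distance(a, b), dtw_distance(a, -b))      # warped match is far smaller than mismatch
```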

  6. Real time three dimensional sensing system

    DOEpatents

    Gordon, S.J.

    1996-12-31

    The invention is a three dimensional sensing system which utilizes two flexibly located cameras for receiving and recording visual information with respect to a sensed object illuminated by a series of light planes. Each pixel of each image is converted to a digital word and the words are grouped into stripes, each stripe comprising contiguous pixels. One pixel of each stripe in one image is selected and an epi-polar line of that point is drawn in the other image. The three dimensional coordinate of each selected point is determined by determining the point on said epi-polar line which also lies on a stripe in the second image and which is closest to a known light plane. 7 figs.

  7. Low-Dimensional Chaos in an Instance of Epilepsy

    NASA Astrophysics Data System (ADS)

    Babloyantz, A.; Destexhe, A.

    1986-05-01

    Using a time series obtained from the electroencephalogram recording of a human epileptic seizure, we show the existence of a chaotic attractor, the latter being the direct consequence of the deterministic nature of brain activity. This result is compared with other attractors seen in normal human brain dynamics. A sudden jump is observed between the dimensionalities of these brain attractors (4.05 ± 0.05 for deep sleep) and the very low dimensionality of the epileptic state (2.05 ± 0.09). The evaluation of the autocorrelation function and of the largest Lyapunov exponent allows us to sharpen further the main features of the underlying dynamics. Possible implications in biological and medical research are briefly discussed.

  8. Real time three dimensional sensing system

    DOEpatents

    Gordon, Steven J.

    1996-01-01

    The invention is a three dimensional sensing system which utilizes two flexibly located cameras for receiving and recording visual information with respect to a sensed object illuminated by a series of light planes. Each pixel of each image is converted to a digital word and the words are grouped into stripes, each stripe comprising contiguous pixels. One pixel of each stripe in one image is selected and an epi-polar line of that point is drawn in the other image. The three dimensional coordinate of each selected point is determined by determining the point on said epi-polar line which also lies on a stripe in the second image and which is closest to a known light plane.

  9. A Maple package for improved global mapping forecast

    NASA Astrophysics Data System (ADS)

    Carli, H.; Duarte, L. G. S.; da Mota, L. A. C. P.

    2014-03-01

    We present a Maple implementation of the well-known global approach to time series analysis, together with some further developments designed to improve the computational efficiency of the forecasting capabilities of the approach. This global approach can be summarized as a reconstruction of the phase space based on a time-ordered series of data obtained from the system. After that, using the reconstructed vectors, a portion of this space is used to produce a mapping, a polynomial fitting obtained through a minimization procedure, that represents the system and can be employed to forecast further entries for the series. In the present implementation, we introduce a set of commands (tools) to perform all these tasks. For example, the command VecTS deals mainly with the reconstruction of the vectors in the phase space. The command GfiTS deals with producing the minimization and the fitting. ForecasTS uses all these and produces the prediction of the next entries. For the non-standard algorithms, we present two commands, IforecasTS and NiforecasTS, that deal, respectively, with one-step and N-step forecasting. Finally, we introduce two further tools to aid the forecasting. The commands GfiTS and AnalysTS essentially analyze the behavior of each portion of a series with respect to the settings used in the commands mentioned above.
    Catalogue identifier: AERW_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERW_v1_0.html
    Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 3001
    No. of bytes in distributed program, including test data, etc.: 95018
    Distribution format: tar.gz
    Programming language: Maple 14
    Computer: Any capable of running Maple
    Operating system: Any capable of running Maple. Tested on Windows ME, Windows XP, Windows 7
    RAM: 128 MB
    Classification: 4.3, 4.9, 5
    Nature of problem: Time series analysis and improving forecast capability.
    Solution method: The method of solution is partially based on a result published in [1].
    Restrictions: If the time series being analyzed presents a great amount of noise, or if the dynamical system behind the time series is of high dimensionality (Dim≫3), then the method may not work well.
    Unusual features: In cases where the dynamics behind the time series is of low dimensionality, our implementation can greatly improve the forecast.
    Running time: This depends strongly on the command that is being used.
    References: [1] Barbosa, L.M.C.R., Duarte, L.G.S., Linhares, C.A. and da Mota, L.A.C.P., Improving the global fitting method on nonlinear time series analysis, Phys. Rev. E 74, 026702 (2006).
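
    The core global-mapping idea, namely delay-reconstruct the phase space, fit a polynomial map by least squares, and iterate it to forecast further entries, can be sketched in Python as follows. This is not a port of the Maple package; the embedding dimension, delay, and polynomial degree are arbitrary assumptions.

      import numpy as np
      from itertools import combinations_with_replacement

      def embed(x, dim, tau):
          """Delay-coordinate reconstruction of a scalar series."""
          N = len(x) - (dim - 1) * tau
          return np.column_stack([x[i * tau:i * tau + N] for i in range(dim)])

      def poly_features(V, degree):
          """Monomials of the embedded vectors up to the given degree."""
          cols = [np.ones(len(V))]
          for d in range(1, degree + 1):
              for idx in combinations_with_replacement(range(V.shape[1]), d):
                  cols.append(np.prod(V[:, idx], axis=1))
          return np.column_stack(cols)

      def fit_and_forecast(x, dim=3, tau=1, degree=2, steps=10):
          V = embed(x, dim, tau)
          targets = x[(dim - 1) * tau + 1:]          # next value after each vector
          A = poly_features(V[:-1], degree)
          coef, *_ = np.linalg.lstsq(A, targets, rcond=None)
          series = list(x)
          for _ in range(steps):                     # iterate the fitted map
              v = np.array([series[-1 - i * tau] for i in range(dim)][::-1])[None, :]
              series.append(float(poly_features(v, degree) @ coef))
          return np.array(series[len(x):])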

  10. Computing algebraic transfer entropy and coupling directions via transcripts

    NASA Astrophysics Data System (ADS)

    Amigó, José M.; Monetti, Roberto; Graff, Beata; Graff, Grzegorz

    2016-11-01

    Most random processes studied in nonlinear time series analysis take values on sets endowed with a group structure, e.g., the real and rational numbers, and the integers. This fact makes it possible to associate with each pair of group elements a third element, called their transcript, which is defined as the product of the second element in the pair times the inverse of the first one. The transfer entropy of two such processes is called algebraic transfer entropy. It measures the information transferred between two coupled processes whose values belong to a group. In this paper, we show that, subject to one constraint, the algebraic transfer entropy matches the (in general, conditional) mutual information of certain transcripts with one variable less. This property has interesting practical applications, especially to the analysis of short time series. We also derive weak conditions for the 3-dimensional algebraic transfer entropy to yield the same coupling direction as the corresponding mutual information of transcripts. A related issue concerns the use of mutual information of transcripts to determine coupling directions in cases where the conditions just mentioned are not fulfilled. We checked the latter possibility in the lowest dimensional case with numerical simulations and cardiovascular data, and obtained positive results.

  11. CauseMap: fast inference of causality from complex time series.

    PubMed

    Maher, M Cyrus; Hernandez, Ryan D

    2015-01-01

    Background. Establishing health-related causal relationships is a central pursuit in biomedical research. Yet, the interdependent non-linearity of biological systems renders causal dynamics laborious and at times impractical to disentangle. This pursuit is further impeded by the dearth of time series that are sufficiently long to observe and understand recurrent patterns of flux. However, as data generation costs plummet and technologies like wearable devices democratize data collection, we anticipate a coming surge in the availability of biomedically-relevant time series data. Given the life-saving potential of these burgeoning resources, it is critical to invest in the development of open source software tools that are capable of drawing meaningful insight from vast amounts of time series data. Results. Here we present CauseMap, the first open source implementation of convergent cross mapping (CCM), a method for establishing causality from long time series data (≳25 observations). Compared to existing time series methods, CCM has the advantage of being model-free and robust to unmeasured confounding that could otherwise induce spurious associations. CCM builds on Takens' Theorem, a well-established result from dynamical systems theory that requires only mild assumptions. This theorem allows us to reconstruct high dimensional system dynamics using a time series of only a single variable. These reconstructions can be thought of as shadows of the true causal system. If reconstructed shadows can predict points from opposing time series, we can infer that the corresponding variables are providing views of the same causal system, and so are causally related. Unlike traditional metrics, this test can establish the directionality of causation, even in the presence of feedback loops. Furthermore, since CCM can extract causal relationships from time series of, e.g., a single individual, it may be a valuable tool for personalized medicine. We implement CCM in Julia, a high-performance programming language designed for facile technical computing. Our software package, CauseMap, is platform-independent and freely available as an official Julia package. Conclusions. CauseMap is an efficient implementation of a state-of-the-art algorithm for detecting causality from time series data. We believe this tool will be a valuable resource for biomedical research and personalized medicine.
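
    To make the convergent cross mapping procedure concrete, the generic Python sketch below (not the CauseMap Julia code) reconstructs a shadow manifold from one variable and measures how well its nearest neighbors predict the other variable; the embedding dimension and delay are assumptions.

      import numpy as np

      def embed(x, dim=3, tau=1):
          N = len(x) - (dim - 1) * tau
          return np.column_stack([x[i * tau:i * tau + N] for i in range(dim)])

      def ccm_skill(x, y, dim=3, tau=1):
          """Cross-map skill of X onto Y: if X's shadow manifold predicts Y well,
          Y is inferred to causally influence X (Sugihara et al.'s convention)."""
          Mx = embed(x, dim, tau)
          y_al = y[(dim - 1) * tau:]                 # align y with the embedded vectors
          preds = np.empty(len(Mx))
          for t in range(len(Mx)):
              d = np.linalg.norm(Mx - Mx[t], axis=1)
              d[t] = np.inf                          # exclude the point itself
              nn = np.argsort(d)[:dim + 1]           # dim+1 nearest neighbors
              w = np.exp(-d[nn] / max(d[nn][0], 1e-12))
              preds[t] = np.sum(w * y_al[nn]) / w.sum()
          return np.corrcoef(preds, y_al)[0, 1]      # correlation = cross-map skill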

  12. Advanced Space Shuttle simulation model

    NASA Technical Reports Server (NTRS)

    Tatom, F. B.; Smith, S. R.

    1982-01-01

    A non-recursive model (based on von Karman spectra) for atmospheric turbulence along the flight path of the shuttle orbiter was developed. It provides for simulation of instantaneous vertical and horizontal gusts at the vehicle center-of-gravity, and also of instantaneous gust gradients. Based on this model, the time series for both gusts and gust gradients were generated and stored on a series of magnetic tapes, entitled Shuttle Simulation Turbulence Tapes (SSTT). The time series are designed to represent atmospheric turbulence from ground level to an altitude of 120,000 meters. A description of the turbulence generation procedure is provided. The results of validating the simulated turbulence are described. Conclusions and recommendations are presented. One-dimensional von Karman spectra are tabulated, and a discussion of the minimum frequency simulated is provided. The results of spectral and statistical analyses of the SSTT are presented.

  13. Development and Testing of Data Mining Algorithms for Earth Observation

    NASA Technical Reports Server (NTRS)

    Glymour, Clark

    2005-01-01

    The new algorithms developed under this project included a principled procedure for classification of objects, events or circumstances according to a target variable when a very large number of potential predictor variables is available but the number of cases that can be used for training a classifier is relatively small. These "high dimensional" problems require finding a minimal set of variables, called the Markov Blanket, sufficient for predicting the value of the target variable. An algorithm, the Markov Blanket Fan Search, was developed, implemented and tested on both simulated and real data in conjunction with a graphical model classifier, which was also implemented. Another algorithm developed and implemented in TETRAD IV for time series elaborated on work by C. Granger and N. Swanson, which in turn exploited some of our earlier work. The algorithms in question learn a linear time series model from data. Given such a time series, the simultaneous residual covariances, after factoring out time dependencies, may provide information about causal processes that occur more rapidly than the time series representation allows, so-called simultaneous or contemporaneous causal processes. Working with A. Monetta, a graduate student from Italy, we produced the correct statistics for estimating the contemporaneous causal structure from time series data using the TETRAD IV suite of algorithms. Two economists, David Bessler and Kevin Hoover, have independently published applications using TETRAD-style algorithms to the same purpose. These implementations and algorithmic developments were separately used in two kinds of studies of climate data: short time series of geographically proximate climate variables predicting agricultural effects in California, and longer-duration climate measurements of temperature teleconnections.

  14. Nonlinear dynamic analysis of D α signals for type I edge localized modes characterization on JET with a carbon wall

    NASA Astrophysics Data System (ADS)

    Cannas, Barbara; Fanni, Alessandra; Murari, Andrea; Pisano, Fabio; Contributors, JET

    2018-02-01

    In this paper, the dynamic characteristics of type-I ELM time series from the JET tokamak, the world’s largest magnetic confinement plasma physics experiment, have been investigated. The dynamic analysis has been focused on the detection of nonlinear structure in D α radiation time series. Firstly, the method of surrogate data has been applied to evaluate the statistical significance of the null hypothesis of a static nonlinear distortion of an underlying Gaussian linear process. Several nonlinear statistics have been evaluated, such as the time-delayed mutual information, the correlation dimension and the maximal Lyapunov exponent. The obtained results allow us to reject the null hypothesis, giving evidence of underlying nonlinear dynamics. Moreover, no evidence of low-dimensional chaos has been found; indeed, the analysed time series are better characterized by a power-law sensitivity to initial conditions, which can suggest motion at the ‘edge of chaos’, at the border between chaotic and regular non-chaotic dynamics. This uncertainty makes it necessary to further investigate the nature of the nonlinear dynamics. For this purpose, a second surrogate test, designed to distinguish chaotic orbits from pseudo-periodic orbits, has been applied. In this case, we cannot reject the null hypothesis, which means that the ELM time series is possibly pseudo-periodic. In order to reproduce pseudo-periodic dynamical properties, a periodic state-of-the-art model proposed to reproduce the ELM cycle has been corrupted by dynamical noise, obtaining time series qualitatively in agreement with the experimental time series.
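
    The surrogate-data step described above can be illustrated with phase-randomized surrogates and a rank test; the Python sketch below uses a single, caller-supplied discriminating statistic, whereas the JET study combined several statistics and surrogate types.

      import numpy as np

      def phase_randomized_surrogate(x, rng):
          """Surrogate with the same power spectrum but randomized Fourier phases,
          consistent with the null hypothesis of a Gaussian linear process."""
          X = np.fft.rfft(x - x.mean())
          phases = rng.uniform(0, 2 * np.pi, len(X))
          phases[0] = 0.0                            # keep the zero-frequency component real
          return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x)) + x.mean()

      def surrogate_test(x, statistic, n_surr=99, seed=0):
          """One-sided rank test: how extreme is the statistic on the data
          relative to its distribution over the surrogates?"""
          rng = np.random.default_rng(seed)
          s0 = statistic(x)
          surr = np.array([statistic(phase_randomized_surrogate(x, rng))
                           for _ in range(n_surr)])
          rank = (surr >= s0).sum()
          return s0, surr, (rank + 1) / (n_surr + 1)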

  15. Simulation Exploration through Immersive Parallel Planes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunhart-Lupo, Nicholas J; Bush, Brian W; Gruchalla, Kenny M

    We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest; a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selections, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.

  16. Simulation Exploration through Immersive Parallel Planes: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunhart-Lupo, Nicholas; Bush, Brian W.; Gruchalla, Kenny

    We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest; a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selections, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.

  17. Surface Layer Flux Processes During Cloud Intermittency and Advection above a Middle Rio Grande Riparian Forest, New Mexico

    NASA Astrophysics Data System (ADS)

    Cleverly, J. R.; Prueger, J.; Cooper, D. I.; Hipps, L.; Eichinger, W.

    2002-12-01

    An intensive field campaign was undertaken to bring together state-of-the-art methodologies for investigating surface layer physical characteristics over a desert riparian forest. Three-dimensional sonic eddy covariance (3SEC), LIDAR, SODAR, radiosonde, one-dimensional propeller eddy covariance (1PEC), heat dissipation sap flux, and leaf gas exchange were simultaneously in use 13-21 June 1999 at Bosque del Apache National Wildlife Refuge (NWR) in New Mexico. A one-hour period of intense advection was identified by \overline{v} >> 0 and \overline{u} = 0, indicating that the wind direction was transverse to the riparian corridor. The period of highest \overline{v} was 1400 h on 20 June; this hour experienced intermittent cloud cover and enhanced mesoscale forcing of surface fluxes. High-frequency (20 Hz) time series of u, v, w, q, θ, and T were collected for spectral, cospectral, and wavelet analyses. These time series analyses illustrate the scales at which processes co-occur. At high frequencies (> 0.015 Hz), \overline{T'q'} > 0 and K_H/K_W = 1. At low frequencies, however, \overline{T'q'} < 0 and K_H/K_W ≠ 1. Under these transient conditions, frequencies below 0.015 Hz are associated with advection. While power cospectra are useful in associating processes at certain frequencies, further analysis must be performed to determine whether such examples of aphasia are localized to transient events or constant through time. Continuous wavelet transformation (CWT) sacrifices localization in frequency space for localization in time. Mother wavelets were evaluated, and the Daubechies order-10 wavelet was found to reduce red noise and leakage near the spectral gap. The spectral gap is a frequency domain between synoptic and turbulent scales. Low-frequency turbulent structures near the spectral gap in the time series of \overline{T'q'}, \overline{w'T'}, and \overline{w'q'} followed a perturbation-relaxation pattern in response to cloud cover. Further cloud cover in the same hour did not produce the low-frequency variation associated with mesoscale forcing. Two-dimensional vertical LIDAR scans of eddy structure explain the observed frequency response patterns. Insight into the temporal progression of homeostatic processes in the surface layer will provide resources for water managers to better predict ET.

  18. A method for ensemble wildland fire simulation

    Treesearch

    Mark A. Finney; Isaac C. Grenfell; Charles W. McHugh; Robert C. Seli; Diane Trethewey; Richard D. Stratton; Stuart Brittain

    2011-01-01

    An ensemble simulation system that accounts for uncertainty in long-range weather conditions and two-dimensional wildland fire spread is described. Fuel moisture is expressed based on the energy release component, a US fire danger rating index, and its variation throughout the fire season is modeled using time series analysis of historical weather data. This analysis...

  19. Evidence of low dimensional chaos in renal blood flow control in genetic and experimental hypertension

    NASA Astrophysics Data System (ADS)

    Yip, K.-P.; Marsh, D. J.; Holstein-Rathlou, N.-H.

    1995-01-01

    We applied a surrogate data technique to test for nonlinear structure in spontaneous fluctuations of hydrostatic pressure in renal tubules of hypertensive rats. Tubular pressure oscillates at 0.03-0.05 Hz in animals with normal blood pressure, but the fluctuations become irregular with chronic hypertension. Using time series from rats with hypertension we produced surrogate data sets to test whether they represent linearly correlated noise or ‘static’ nonlinear transforms of a linear stochastic process. The correlation dimension and the forecasting error were used as discriminating statistics to compare surrogate with experimental data. The results show that the original experimental time series can be distinguished from both linearly and static nonlinearly correlated noise, indicating that the nonlinear behavior is due to the intrinsic dynamics of the system. Together with other evidence this strongly suggests that a low dimensional chaotic attractor governs renal hemodynamics in hypertension. This appears to be the first demonstration of a transition to chaotic dynamics in an integrated physiological control system occurring in association with a pathological condition.

  20. Uncovering low dimensional macroscopic chaotic dynamics of large finite size complex systems

    NASA Astrophysics Data System (ADS)

    Skardal, Per Sebastian; Restrepo, Juan G.; Ott, Edward

    2017-08-01

    In the last decade, it has been shown that a large class of phase oscillator models admit low dimensional descriptions for the macroscopic system dynamics in the limit of an infinite number N of oscillators. The question of whether the macroscopic dynamics of other similar systems also have a low dimensional description in the infinite N limit has, however, remained elusive. In this paper, we show how techniques originally designed to analyze noisy experimental chaotic time series can be used to identify effective low dimensional macroscopic descriptions from simulations with a finite number of elements. We illustrate and verify the effectiveness of our approach by applying it to the dynamics of an ensemble of globally coupled Landau-Stuart oscillators for which we demonstrate low dimensional macroscopic chaotic behavior with an effective 4-dimensional description. By using this description, we show that one can calculate dynamical invariants such as Lyapunov exponents and attractor dimensions. One could also use the reconstruction to generate short-term predictions of the macroscopic dynamics.

  1. Computing the multifractal spectrum from time series: an algorithmic approach.

    PubMed

    Harikrishnan, K P; Misra, R; Ambika, G; Amritkar, R E

    2009-12-01

    We show that the existing methods for computing the f(alpha) spectrum from a time series can be improved by using a new algorithmic scheme. The scheme relies on the basic idea that the smooth convex profile of a typical f(alpha) spectrum can be fitted with an analytic function involving a set of four independent parameters. While the standard existing schemes [P. Grassberger et al., J. Stat. Phys. 51, 135 (1988); A. Chhabra and R. V. Jensen, Phys. Rev. Lett. 62, 1327 (1989)] generally compute only an incomplete f(alpha) spectrum (usually the top portion), we show that this can be overcome by an algorithmic approach, which is automated to compute the D(q) and f(alpha) spectra from a time series for any embedding dimension. The scheme is first tested with the logistic attractor with known f(alpha) curve and subsequently applied to higher-dimensional cases. We also show that the scheme can be effectively adapted for analyzing practical time series involving noise, with examples from two widely different real world systems. Moreover, some preliminary results indicating that the set of four independent parameters may be used as diagnostic measures are also included.

  2. The application of neural networks to myoelectric signal analysis: a preliminary study.

    PubMed

    Kelly, M F; Parker, P A; Scott, R N

    1990-03-01

    Two neural network implementations are applied to myoelectric signal (MES) analysis tasks. The motivation behind this research is to explore more reliable methods of deriving control for multidegree of freedom arm prostheses. A discrete Hopfield network is used to calculate the time series parameters for a moving average MES model. It is demonstrated that the Hopfield network is capable of generating the same time series parameters as those produced by the conventional sequential least squares (SLS) algorithm. Furthermore, it can be extended to applications utilizing larger amounts of data, and possibly to higher order time series models, without significant degradation in computational efficiency. The second neural network implementation involves using a two-layer perceptron for classifying a single site MES based on two features, specifically the first time series parameter, and the signal power. Using these features, the perceptron is trained to distinguish between four separate arm functions. The two-dimensional decision boundaries used by the perceptron classifier are delineated. It is also demonstrated that the perceptron is able to rapidly compensate for variations when new data are incorporated into the training set. This adaptive quality suggests that perceptrons may provide a useful tool for future MES analysis.

  3. Aeroelastic Response of Nonlinear Wing Section By Functional Series Technique

    NASA Technical Reports Server (NTRS)

    Marzocca, Piergiovanni; Librescu, Liviu; Silva, Walter A.

    2000-01-01

    This paper addresses the problem of the determination of the subcritical aeroelastic response and flutter instability of nonlinear two-dimensional lifting surfaces in an incompressible flow-field via indicial functions and Volterra series approach. The related aeroelastic governing equations are based upon the inclusion of structural and damping nonlinearities in plunging and pitching, of the linear unsteady aerodynamics and consideration of an arbitrary time-dependent external pressure pulse. Unsteady aeroelastic nonlinear kernels are determined, and based on these, frequency and time histories of the subcritical aeroelastic response are obtained, and in this context the influence of the considered nonlinearities is emphasized. Conclusions and results displaying the implications of the considered effects are supplied.

  4. Aeroelastic Response of Nonlinear Wing Section by Functional Series Technique

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.; Marzocca, Piergiovanni

    2001-01-01

    This paper addresses the problem of the determination of the subcritical aeroelastic response and flutter instability of nonlinear two-dimensional lifting surfaces in an incompressible flow-field via indicial functions and Volterra series approach. The related aeroelastic governing equations are based upon the inclusion of structural and damping nonlinearities in plunging and pitching, of the linear unsteady aerodynamics and consideration of an arbitrary time-dependent external pressure pulse. Unsteady aeroelastic nonlinear kernels are determined, and based on these, frequency and time histories of the subcritical aeroelastic response are obtained, and in this context the influence of the considered nonlinearities is emphasized. Conclusions and results displaying the implications of the considered effects are supplied.

  5. A Seasonal Time-Series Model Based on Gene Expression Programming for Predicting Financial Distress

    PubMed Central

    2018-01-01

    The issue of financial distress prediction is an important and challenging research topic in the financial field. Currently, there are many methods for predicting firm bankruptcy and financial crisis, including artificial intelligence and traditional statistical methods, and past studies have shown that the prediction results of artificial intelligence methods are better than those of traditional statistical methods. Financial statements are quarterly reports; hence, the financial crisis of companies is seasonal time-series data, and the attribute data affecting the financial distress of companies is nonlinear and nonstationary time-series data with fluctuations. Therefore, this study employed a nonlinear attribute selection method to build a nonlinear financial distress prediction model: that is, this paper proposed a novel seasonal time-series gene expression programming model for predicting the financial distress of companies. The proposed model has several advantages, including the following: (i) unlike previous models, the proposed model incorporates the concept of time series; (ii) the proposed integrated attribute selection method can find the core attributes and reduce high-dimensional data; and (iii) the proposed model can generate rules and mathematical formulas of financial distress, providing references to investors and decision makers. The results show that the proposed method is better than the listed classifiers under three criteria; hence, the proposed model has competitive advantages in predicting the financial distress of companies. PMID:29765399

  6. A Seasonal Time-Series Model Based on Gene Expression Programming for Predicting Financial Distress.

    PubMed

    Cheng, Ching-Hsue; Chan, Chia-Pang; Yang, Jun-He

    2018-01-01

    The issue of financial distress prediction is an important and challenging research topic in the financial field. Currently, there are many methods for predicting firm bankruptcy and financial crisis, including artificial intelligence and traditional statistical methods, and past studies have shown that the prediction results of artificial intelligence methods are better than those of traditional statistical methods. Financial statements are quarterly reports; hence, the financial crisis of companies is seasonal time-series data, and the attribute data affecting the financial distress of companies is nonlinear and nonstationary time-series data with fluctuations. Therefore, this study employed a nonlinear attribute selection method to build a nonlinear financial distress prediction model: that is, this paper proposed a novel seasonal time-series gene expression programming model for predicting the financial distress of companies. The proposed model has several advantages, including the following: (i) unlike previous models, the proposed model incorporates the concept of time series; (ii) the proposed integrated attribute selection method can find the core attributes and reduce high-dimensional data; and (iii) the proposed model can generate rules and mathematical formulas of financial distress, providing references to investors and decision makers. The results show that the proposed method is better than the listed classifiers under three criteria; hence, the proposed model has competitive advantages in predicting the financial distress of companies.

  7. Time series analyses of breathing patterns of lung cancer patients using nonlinear dynamical system theory.

    PubMed

    Tewatia, D K; Tolakanahalli, R P; Paliwal, B R; Tomé, W A

    2011-04-07

    The underlying requirements for successful implementation of any efficient tumour motion management strategy are regularity and reproducibility of a patient's breathing pattern. The physiological act of breathing is controlled by multiple nonlinear feedback and feed-forward couplings. It would therefore be appropriate to analyse the breathing pattern of lung cancer patients in the light of nonlinear dynamical system theory. The purpose of this paper is to analyse the one-dimensional respiratory time series of lung cancer patients based on nonlinear dynamics and delay coordinate state space embedding. It is very important to select a suitable pair of embedding dimension 'm' and time delay 'τ' when performing a state space reconstruction. Appropriate time delay and embedding dimension were obtained using well-established methods, namely mutual information and the false nearest neighbour method, respectively. Establishing stationarity and determinism in a given scalar time series is a prerequisite to demonstrating that the nonlinear dynamical system that gave rise to the scalar time series exhibits a sensitive dependence on initial conditions, i.e. is chaotic. Hence, once an appropriate state space embedding of the dynamical system has been reconstructed, we show that the time series of the nonlinear dynamical systems under study are both stationary and deterministic in nature. Once both criteria are established, we proceed to calculate the largest Lyapunov exponent (LLE), which is an invariant quantity under time delay embedding. The LLE for all 16 patients is positive, which along with stationarity and determinism establishes the fact that the time series of a lung cancer patient's breathing pattern is not random or irregular, but rather it is deterministic in nature albeit chaotic. These results indicate that chaotic characteristics exist in the respiratory waveform and techniques based on state space dynamics should be employed for tumour motion management.
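
    The delay selection mentioned above, taking the first minimum of the time-delayed mutual information, can be sketched as follows; the histogram bin count and maximum lag are arbitrary assumptions, and the false-nearest-neighbour search for the embedding dimension follows the same pattern and is omitted here for brevity.

      import numpy as np

      def mutual_information(x, lag, bins=32):
          """Histogram estimate of I(x_t ; x_{t+lag})."""
          a, b = x[:-lag], x[lag:]
          pxy, _, _ = np.histogram2d(a, b, bins=bins)
          pxy /= pxy.sum()
          px, py = pxy.sum(axis=1), pxy.sum(axis=0)
          nz = pxy > 0
          return np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))

      def first_minimum_delay(x, max_lag=50):
          """Delay at the first local minimum of the mutual information curve."""
          mi = [mutual_information(x, lag) for lag in range(1, max_lag + 1)]
          for k in range(1, len(mi) - 1):
              if mi[k] < mi[k - 1] and mi[k] < mi[k + 1]:
                  return k + 1                       # lags start at 1
          return int(np.argmin(mi)) + 1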

  8. Hilbert-Schmidt and Sobol sensitivity indices for static and time series Wnt signaling measurements in colorectal cancer - part A.

    PubMed

    Sinha, Shriprakash

    2017-12-04

    Ever since the accidental discovery of Wingless [Sharma R.P., Drosophila information service, 1973, 50, p 134], research in the field of the Wnt signaling pathway has taken significant strides in wet lab experiments and various cancer clinical trials, augmented by recent developments in advanced computational modeling of the pathway. Information-rich gene expression profiles reveal various aspects of the signaling pathway and help in studying different issues simultaneously. Hitherto, not many computational studies exist which incorporate the simultaneous study of these issues. This manuscript ∙ explores the strength of contributing factors in the signaling pathway, ∙ analyzes the existing causal relations among the inter/extracellular factors affecting the pathway based on prior biological knowledge, and ∙ investigates the deviations in fold changes in the recently found prevalence of psychophysical laws working in the pathway. To achieve this goal, local and global sensitivity analysis is conducted on the (non)linear responses between the factors obtained from static and time series expression profiles using the density (Hilbert-Schmidt Information Criterion) and variance (Sobol) based sensitivity indices. The results show the advantage of using density-based indices over variance-based indices, mainly due to the former's employment of distance measures and the kernel trick via a reproducing kernel Hilbert space (RKHS), which capture nonlinear relations among various intra/extracellular factors of the pathway in a higher-dimensional space. In time series data, using these indices it is now possible to observe where in time particular factors are influenced and contribute to the pathway as the concentrations of the other factors are changed. This synergy of prior biological knowledge, sensitivity analysis and representations in higher-dimensional spaces can facilitate time-based administration of targeted therapeutic drugs and reveal hidden biological information within colorectal cancer samples.
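
    For readers unfamiliar with the density-based index, a biased empirical Hilbert-Schmidt Information Criterion estimate takes only a few lines of Python; the Gaussian kernel and median-distance bandwidth below are assumptions, not necessarily the kernels used in the study.

      import numpy as np

      def rbf_gram(v, bandwidth=None):
          """Gaussian (RBF) Gram matrix for a 1-D sample."""
          d2 = (v[:, None] - v[None, :]) ** 2
          if bandwidth is None:
              bandwidth = np.sqrt(np.median(d2[d2 > 0]) / 2)   # median heuristic
          return np.exp(-d2 / (2 * bandwidth ** 2))

      def hsic(x, y):
          """Biased empirical HSIC between two scalar samples of equal length."""
          n = len(x)
          K, L = rbf_gram(x), rbf_gram(y)
          H = np.eye(n) - np.ones((n, n)) / n        # centering matrix
          return np.trace(K @ H @ L @ H) / (n - 1) ** 2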

  9. Max CAPR: high-resolution 3D contrast-enhanced MR angiography with acquisition times under 5 seconds.

    PubMed

    Haider, Clifton R; Borisch, Eric A; Glockner, James F; Mostardi, Petrice M; Rossman, Phillip J; Young, Phillip M; Riederer, Stephen J

    2010-10-01

    High temporal and spatial resolution is desired in imaging of vascular abnormalities having short arterial-to-venous transit times. Methods that exploit temporal correlation to reduce the observed frame time demonstrate temporal blurring, obfuscating bolus dynamics. Previously, a Cartesian acquisition with projection reconstruction-like (CAPR) sampling method has been demonstrated for three-dimensional contrast-enhanced angiographic imaging of the lower legs using two-dimensional sensitivity-encoding acceleration and partial Fourier acceleration, providing 1 mm isotropic resolution of the calves, with 4.9-sec frame time and 17.6-sec temporal footprint. In this work, the CAPR acquisition is further undersampled to provide a net acceleration approaching 40 by eliminating all view sharing. The tradeoff of frame time and temporal footprint in view sharing is presented and characterized in phantom experiments. It is shown that the resulting three-dimensional image sets, with a 4.9-sec acquisition time, have sufficient spatial and temporal resolution to clearly portray arterial and venous phases of contrast passage. It is further hypothesized that these short-temporal-footprint sequences provide diagnostic quality images. This is tested and shown in a series of nine contrast-enhanced MR angiography patient studies performed with the new method.

  10. Fuzzy fractals, chaos, and noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zardecki, A.

    1997-05-01

    To distinguish between chaotic and noisy processes, the authors analyze one- and two-dimensional chaotic mappings, supplemented by the additive noise terms. The predictive power of a fuzzy rule-based system allows one to distinguish ergodic and chaotic time series: in an ergodic series the likelihood of finding large numbers is small compared to the likelihood of finding them in a chaotic series. In the case of two dimensions, they consider the fractal fuzzy sets whose {alpha}-cuts are fractals, arising in the context of a quadratic mapping in the extended complex plane. In an example provided by the Julia set, the concept of Hausdorff dimension enables one to decide in favor of chaotic or noisy evolution.

  11. Density-based clustering: A 'landscape view' of multi-channel neural data for inference and dynamic complexity analysis.

    PubMed

    Baglietto, Gabriel; Gigante, Guido; Del Giudice, Paolo

    2017-01-01

    Two partially interwoven hot topics in the analysis and statistical modeling of neural data are the development of efficient and informative representations of the time series derived from multiple neural recordings, and the extraction of information about the connectivity structure of the underlying neural network from the recorded neural activities. In the present paper we show that state-space clustering can provide an easy and effective option for reducing the dimensionality of multiple neural time series, that it can improve inference of synaptic couplings from neural activities, and that it can also allow the construction of a compact representation of the multi-dimensional dynamics, that easily lends itself to complexity measures. We apply a variant of the 'mean-shift' algorithm to perform state-space clustering, and validate it on a Hopfield network in the glassy phase, in which metastable states are largely uncorrelated from memories embedded in the synaptic matrix. In this context, we show that the neural states identified as clusters' centroids offer a parsimonious parametrization of the synaptic matrix, which allows a significant improvement in inferring the synaptic couplings from the neural activities. Moving to the more realistic case of a multi-modular spiking network, with spike-frequency adaptation inducing history-dependent effects, we propose a procedure inspired by Boltzmann learning, but extending its domain of application, to learn inter-module synaptic couplings so that the spiking network reproduces a prescribed pattern of spatial correlations; we then illustrate, in the spiking network, how clustering is effective in extracting relevant features of the network's state-space landscape. Finally, we show that the knowledge of the cluster structure allows casting the multi-dimensional neural dynamics in the form of a symbolic dynamics of transitions between clusters; as an illustration of the potential of such reduction, we define and analyze a measure of complexity of the neural time series.
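
    State-space clustering of binned multichannel activity with a mean-shift-style algorithm can be illustrated with scikit-learn; this is a generic sketch on synthetic data rather than the paper's own mean-shift variant, and the bandwidth quantile is an assumption.

      import numpy as np
      from sklearn.cluster import MeanShift, estimate_bandwidth

      # Toy stand-in for binned multi-channel activity: T time bins x C channels,
      # drawn around three synthetic "metastable states".
      rng = np.random.default_rng(0)
      states = np.vstack([rng.normal(loc=m, scale=0.3, size=(200, 10))
                          for m in (0.0, 1.0, 2.0)])

      bw = estimate_bandwidth(states, quantile=0.2)
      labels = MeanShift(bandwidth=bw).fit_predict(states)
      centroids = np.array([states[labels == k].mean(axis=0) for k in np.unique(labels)])

      # The sequence of cluster labels over time is a symbolic dynamics of the recording.
      symbolic_sequence = labels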

  12. Information jet: Handling noisy big data from weakly disconnected network

    NASA Astrophysics Data System (ADS)

    Aurongzeb, Deeder

    Sudden aggregation (information jet) of large amounts of data is ubiquitous around connected social networks, driven by sudden interacting and non-interacting events, network security threats, online sales channels, etc. Clustering of information jets based on time series analysis and graph theory is not new, but little work has been done to connect them with particle jet statistics. We show that pre-clustering based on context can element soft network or network of information, which is critical to minimizing the time to calculate results from noisy big data. We show the difference between stochastic gradient boosting and time series-graph clustering. For a disconnected higher-dimensional information jet, we use the Kallenberg representation theorem (Kallenberg, 2005, arXiv:1401.1137) to identify and eliminate jet similarities from dense or sparse graphs.

  13. Ocean rogue waves and their phase space dynamics in the limit of a linear interference model.

    PubMed

    Birkholz, Simon; Brée, Carsten; Veselić, Ivan; Demircan, Ayhan; Steinmeyer, Günter

    2016-10-12

    We reanalyse the probability for formation of extreme waves using the simple model of linear interference of a finite number of elementary waves with fixed amplitude and random phase fluctuations. Under these model assumptions no rogue waves appear when less than 10 elementary waves interfere with each other. Above this threshold rogue wave formation becomes increasingly likely, with appearance frequencies that may even exceed long-term observations by an order of magnitude. For estimation of the effective number of interfering waves, we suggest the Grassberger-Procaccia dimensional analysis of individual time series. For the ocean system, it is further shown that the resulting phase space dimension may vary, such that the threshold for rogue wave formation is not always reached. Time series analysis as well as the appearance of particular focusing wind conditions may enable an effective forecast of such rogue-wave prone situations. In particular, extracting the dimension from ocean time series allows much more specific estimation of the rogue wave probability.
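
    The Grassberger-Procaccia analysis suggested here reduces to estimating the slope of the correlation sum; a minimal Python sketch for a delay-embedded series follows. The embedding parameters and the fitting range of radii are assumptions, and the quadratic pairwise-distance cost limits this sketch to modest series lengths.

      import numpy as np

      def correlation_dimension(x, dim=6, tau=1, radii=None):
          """Slope of log C(r) versus log r for a delay-embedded series."""
          N = len(x) - (dim - 1) * tau
          V = np.column_stack([x[i * tau:i * tau + N] for i in range(dim)])
          D = np.linalg.norm(V[:, None, :] - V[None, :, :], axis=-1)
          D = D[np.triu_indices(N, k=1)]             # distinct pairs only
          if radii is None:
              radii = np.logspace(np.log10(np.percentile(D, 1)),
                                  np.log10(np.percentile(D, 50)), 12)
          C = np.array([np.mean(D < r) for r in radii])   # correlation sum
          slope, _ = np.polyfit(np.log(radii), np.log(C), 1)
          return slope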

  14. Nonlinear modeling of chaotic time series: Theory and applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Casdagli, M.; Eubank, S.; Farmer, J.D.

    1990-01-01

    We review recent developments in the modeling and prediction of nonlinear time series. In some cases apparent randomness in time series may be due to chaotic behavior of a nonlinear but deterministic system. In such cases it is possible to exploit the determinism to make short term forecasts that are much more accurate than one could make from a linear stochastic model. This is done by first reconstructing a state space, and then using nonlinear function approximation methods to create a dynamical model. Nonlinear models are valuable not only as short term forecasters, but also as diagnostic tools for identifying and quantifying low-dimensional chaotic behavior. During the past few years methods for nonlinear modeling have developed rapidly, and have already led to several applications where nonlinear models motivated by chaotic dynamics provide superior predictions to linear models. These applications include prediction of fluid flows, sunspots, mechanical vibrations, ice ages, measles epidemics and human speech. 162 refs., 13 figs.

  15. Cross-visit tumor sub-segmentation and registration with outlier rejection for dynamic contrast-enhanced MRI time series data.

    PubMed

    Buonaccorsi, G A; Rose, C J; O'Connor, J P B; Roberts, C; Watson, Y; Jackson, A; Jayson, G C; Parker, G J M

    2010-01-01

    Clinical trials of anti-angiogenic and vascular-disrupting agents often use biomarkers derived from DCE-MRI, typically reporting whole-tumor summary statistics and so overlooking spatial parameter variations caused by tissue heterogeneity. We present a data-driven segmentation method comprising tracer-kinetic model-driven registration for motion correction, conversion from MR signal intensity to contrast agent concentration for cross-visit normalization, iterative principal components analysis for imputation of missing data and dimensionality reduction, and statistical outlier detection using the minimum covariance determinant to obtain a robust Mahalanobis distance. After applying these techniques we cluster in the principal components space using k-means. We present results from a clinical trial of a VEGF inhibitor, using time-series data selected because of problems due to motion and outlier time series. We obtained spatially-contiguous clusters that map to regions with distinct microvascular characteristics. This methodology has the potential to uncover localized effects in trials using DCE-MRI-based biomarkers.
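
    A rough approximation of this pipeline (PCA for dimensionality reduction, a minimum-covariance-determinant Mahalanobis distance for outlier rejection, then k-means in the principal-component space) can be assembled from scikit-learn components. The component count, distance cutoff, and cluster number below are assumptions, and the registration, concentration-conversion, and imputation steps are not reproduced.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.covariance import MinCovDet
      from sklearn.cluster import KMeans

      def segment_voxel_curves(curves, n_components=3, n_clusters=4, chi2_cut=16.3):
          """curves: (n_voxels, n_timepoints) contrast-agent concentration series."""
          scores = PCA(n_components=n_components).fit_transform(curves)
          mcd = MinCovDet().fit(scores)
          d2 = mcd.mahalanobis(scores)               # robust squared Mahalanobis distance
          keep = d2 < chi2_cut                       # drop outlier time courses
          labels = np.full(len(curves), -1)
          labels[keep] = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(scores[keep])
          return labels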

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferraioli, Luigi; Hueller, Mauro; Vitale, Stefano

    The scientific objectives of the LISA Technology Package experiment on board the LISA Pathfinder mission demand accurate calibration and validation of the data analysis tools in advance of the mission launch. The level of confidence required in the mission outcomes can be reached only by intensively testing the tools on synthetically generated data. A flexible procedure allowing the generation of a cross-correlated stationary noise time series was set up. A multichannel time series with the desired cross-correlation behavior can be generated once a model for a multichannel cross-spectral matrix is provided. The core of the procedure comprises a noise coloring, multichannel filter designed via a frequency-by-frequency eigendecomposition of the model cross-spectral matrix and a subsequent fit in the Z domain. The common problem of initial transients in a filtered time series is solved with a proper initialization of the filter recursion equations. The noise generator performance was tested in a two-dimensional case study of the closed-loop LISA Technology Package dynamics along the two principal degrees of freedom.

  17. Ocean rogue waves and their phase space dynamics in the limit of a linear interference model

    PubMed Central

    Birkholz, Simon; Brée, Carsten; Veselić, Ivan; Demircan, Ayhan; Steinmeyer, Günter

    2016-01-01

    We reanalyse the probability for formation of extreme waves using the simple model of linear interference of a finite number of elementary waves with fixed amplitude and random phase fluctuations. Under these model assumptions no rogue waves appear when less than 10 elementary waves interfere with each other. Above this threshold rogue wave formation becomes increasingly likely, with appearance frequencies that may even exceed long-term observations by an order of magnitude. For estimation of the effective number of interfering waves, we suggest the Grassberger-Procaccia dimensional analysis of individual time series. For the ocean system, it is further shown that the resulting phase space dimension may vary, such that the threshold for rogue wave formation is not always reached. Time series analysis as well as the appearance of particular focusing wind conditions may enable an effective forecast of such rogue-wave prone situations. In particular, extracting the dimension from ocean time series allows much more specific estimation of the rogue wave probability. PMID:27731411

  18. Scale invariance in chaotic time series: Classical and quantum examples

    NASA Astrophysics Data System (ADS)

    Landa, Emmanuel; Morales, Irving O.; Stránský, Pavel; Fossion, Rubén; Velázquez, Victor; López Vieyra, J. C.; Frank, Alejandro

    Important aspects of chaotic behavior appear in systems of low dimension, as illustrated by the map module 1. It is indeed a remarkable fact that all systems that make a transition from order to disorder display common properties, irrespective of their exact functional form. We discuss evidence for 1/f power spectra in the chaotic time series associated with classical and quantum examples, the one-dimensional map module 1 and the spectrum of 48Ca. A detrended fluctuation analysis (DFA) method is applied to investigate the scaling properties of the energy fluctuations in the spectrum of 48Ca obtained with a large realistic shell model calculation (ANTOINE code) and with a random shell model (TBRE) calculation, as well as in the time series obtained with the map module 1. We compare the scale-invariant properties of the 48Ca nuclear spectrum with similar analyses applied to the RMT ensembles GOE and GDE. A comparison with the corresponding power spectra is made in both cases. The possible consequences of the results are discussed.
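
    Detrended fluctuation analysis of the kind applied here can be summarized in a short Python routine; linear detrending and the particular set of window sizes are assumptions.

      import numpy as np

      def dfa_exponent(x, scales=None):
          """Detrended fluctuation analysis: slope of log F(n) versus log n."""
          y = np.cumsum(x - np.mean(x))              # integrated (profile) series
          if scales is None:
              scales = np.unique(np.logspace(np.log10(8), np.log10(len(x) // 4), 12).astype(int))
          F = []
          for n in scales:
              nseg = len(y) // n
              segs = y[:nseg * n].reshape(nseg, n)
              t = np.arange(n)
              rms = []
              for seg in segs:
                  coef = np.polyfit(t, seg, 1)       # local linear trend
                  rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
              F.append(np.sqrt(np.mean(rms)))
          alpha, _ = np.polyfit(np.log(scales), np.log(F), 1)
          return alpha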

  19. ON THE GEOMETRY OF MEASURABLE SETS IN N-DIMENSIONAL SPACE ON WHICH GENERALIZED LOCALIZATION HOLDS FOR MULTIPLE FOURIER SERIES OF FUNCTIONS IN L_p, p>1

    NASA Astrophysics Data System (ADS)

    Bloshanskiĭ, I. L.

    1984-02-01

    The precise geometry is found of measurable sets in N-dimensional Euclidean space on which generalized localization almost everywhere holds for multiple Fourier series which are rectangularly summable.Bibliography: 14 titles.

  20. Development of a solution adaptive unstructured scheme for quasi-3D inviscid flows through advanced turbomachinery cascades

    NASA Technical Reports Server (NTRS)

    Usab, William J., Jr.; Jiang, Yi-Tsann

    1991-01-01

    The objective of the present research is to develop a general solution adaptive scheme for the accurate prediction of inviscid quasi-three-dimensional flow in advanced compressor and turbine designs. The adaptive solution scheme combines an explicit finite-volume time-marching scheme for unstructured triangular meshes and an advancing front triangular mesh scheme with a remeshing procedure for adapting the mesh as the solution evolves. The unstructured flow solver has been tested on a series of two-dimensional airfoil configurations including a three-element analytic test case presented here. Mesh adapted quasi-three-dimensional Euler solutions are presented for three spanwise stations of the NASA rotor 67 transonic fan. Computed solutions are compared with available experimental data.

  1. A finite element approach for solution of the 3D Euler equations

    NASA Technical Reports Server (NTRS)

    Thornton, E. A.; Ramakrishnan, R.; Dechaumphai, P.

    1986-01-01

    Prediction of thermal deformations and stresses has prime importance in the design of the next generation of high speed flight vehicles. Aerothermal load computations for complex three-dimensional shapes necessitate development of procedures to solve the full Navier-Stokes equations. This paper details the development of a three-dimensional inviscid flow approach which can be extended for three-dimensional viscous flows. A finite element formulation, based on a Taylor series expansion in time, is employed to solve the compressible Euler equations. Model generation and results display are done using a commercially available program, PATRAN, and vectorizing strategies are incorporated to ensure computational efficiency. Sample problems are presented to demonstrate the validity of the approach for analyzing high speed compressible flows.

  2. Enabling Web-Based Analysis of CUAHSI HIS Hydrologic Data Using R and Web Processing Services

    NASA Astrophysics Data System (ADS)

    Ames, D. P.; Kadlec, J.; Bayles, M.; Seul, M.; Hooper, R. P.; Cummings, B.

    2015-12-01

    The CUAHSI Hydrologic Information System (CUAHSI HIS) provides open access to a large number of hydrological time series observation and modeled data from many parts of the world. Several software tools have been designed to simplify searching and access to the CUAHSI HIS datasets. These software tools include: Desktop client software (HydroDesktop, HydroExcel), developer libraries (WaterML R Package, OWSLib, ulmo), and the new interactive search website, http://data.cuahsi.org. An issue with using the time series data from CUAHSI HIS for further analysis by hydrologists (for example for verification of hydrological and snowpack models) is the large heterogeneity of the time series data. The time series may be regular or irregular, contain missing data, have different time support, and be recorded in different units. R is a widely used computational environment for statistical analysis of time series and spatio-temporal data that can be used to assess fitness and perform scientific analyses on observation data. R includes the ability to record a data analysis in the form of a reusable script. The R script together with the input time series dataset can be shared with other users, making the analysis more reproducible. The major goal of this study is to examine the use of R as a Web Processing Service for transforming time series data from the CUAHSI HIS and sharing the results on the Internet within HydroShare. HydroShare is an online data repository and social network for sharing large hydrological data sets such as time series, raster datasets, and multi-dimensional data. It can be used as a permanent cloud storage space for saving the time series analysis results. We examine the issues associated with running R scripts online: including code validation, saving of outputs, reporting progress, and provenance management. An explicit goal is that the script which is run locally should produce exactly the same results as the script run on the Internet. Our design can be used as a model for other studies that need to run R scripts on the web.

  3. A method for analyzing temporal patterns of variability of a time series from Poincare plots.

    PubMed

    Fishman, Mikkel; Jacono, Frank J; Park, Soojin; Jamasebi, Reza; Thungtong, Anurak; Loparo, Kenneth A; Dick, Thomas E

    2012-07-01

    The Poincaré plot is a popular two-dimensional, time series analysis tool because of its intuitive display of dynamic system behavior. Poincaré plots have been used to visualize heart rate and respiratory pattern variabilities. However, conventional quantitative analysis relies primarily on statistical measurements of the cumulative distribution of points, making it difficult to interpret irregular or complex plots. Moreover, the plots are constructed to reflect highly correlated regions of the time series, reducing the amount of nonlinear information that is presented and thereby hiding potentially relevant features. We propose temporal Poincaré variability (TPV), a novel analysis methodology that uses standard techniques to quantify the temporal distribution of points and to detect nonlinear sources responsible for physiological variability. In addition, the analysis is applied across multiple time delays, yielding a richer insight into system dynamics than the traditional circle return plot. The method is applied to data sets of R-R intervals and to synthetic point process data extracted from the Lorenz time series. The results demonstrate that TPV complements the traditional analysis and can be applied more generally, including Poincaré plots with multiple clusters, and more consistently than the conventional measures and can address questions regarding potential structure underlying the variability of a data set.
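
    A conventional Poincaré (return) plot and its lagged generalization are straightforward to construct; the sketch below plots x(t) against x(t+lag) for several delays and reports the standard SD1/SD2 descriptors. It does not reimplement the temporal Poincaré variability measure itself, and the particular set of lags is an assumption.

      import numpy as np
      import matplotlib.pyplot as plt

      def poincare_descriptors(x, lag=1):
          """SD1 (short-term) and SD2 (long-term) widths of the lagged return plot."""
          a, b = x[:-lag], x[lag:]
          diff, summ = (b - a) / np.sqrt(2), (b + a) / np.sqrt(2)
          return np.std(diff), np.std(summ)

      def plot_poincare(x, lags=(1, 2, 5, 10)):
          fig, axes = plt.subplots(1, len(lags), figsize=(3 * len(lags), 3))
          for ax, lag in zip(axes, lags):
              ax.scatter(x[:-lag], x[lag:], s=4, alpha=0.4)
              sd1, sd2 = poincare_descriptors(x, lag)
              ax.set_title(f"lag {lag}: SD1={sd1:.2f}, SD2={sd2:.2f}")
              ax.set_xlabel("x(t)"); ax.set_ylabel(f"x(t+{lag})")
          fig.tight_layout()
          return fig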

  4. NeuroRhythmics: software for analyzing time-series measurements of saltatory movements in neuronal processes.

    PubMed

    Kerlin, Aaron M; Lindsley, Tara A

    2008-08-15

    Time-lapse imaging of living neurons both in vivo and in vitro has revealed that the growth of axons and dendrites is highly dynamic and characterized by alternating periods of extension and retraction. These growth dynamics are associated with important features of neuronal development and are differentially affected by experimental treatments, but the underlying cellular mechanisms are poorly understood. NeuroRhythmics was developed to semi-automate specific quantitative tasks involved in analysis of two-dimensional time-series images of processes that exhibit saltatory elongation. This software provides detailed information on periods of growth and nongrowth that it identifies by transitions in elongation (i.e. initiation time, average rate, duration) and information regarding the overall pattern of saltatory growth (i.e. time of pattern onset, frequency of transitions, relative time spent in a state of growth vs. nongrowth). Plots and numeric output are readily imported into other applications. The user has the option to specify criteria for identifying transitions in growth behavior, which extends the potential application of the software to neurons of different types or developmental stage and to other time-series phenomena that exhibit saltatory dynamics. NeuroRhythmics will facilitate mechanistic studies of periodic axonal and dendritic growth in neurons.

  5. Stochastic modeling of experimental chaotic time series.

    PubMed

    Stemler, Thomas; Werner, Johannes P; Benner, Hartmut; Just, Wolfram

    2007-01-26

    Methods developed recently to obtain stochastic models of low-dimensional chaotic systems are tested in electronic circuit experiments. We demonstrate that reliable drift and diffusion coefficients can be obtained even when no excessive time scale separation occurs. Crisis-induced intermittent motion can be described in terms of a stochastic model showing tunneling that is dominated by state-space-dependent diffusion. Analytical solutions of the corresponding Fokker-Planck equation are in excellent agreement with experimental data.
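    A minimal sketch of how drift and diffusion coefficients can be estimated from a time series by conditional (Kramers-Moyal) moments is given below; it uses a synthetic Ornstein-Uhlenbeck series rather than the circuit data, and the binning and time step are illustrative choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    dt, n = 1e-3, 100_000
    x = np.zeros(n)
    for i in range(n - 1):                       # synthetic Ornstein-Uhlenbeck series
        x[i + 1] = x[i] - 2.0 * x[i] * dt + 0.5 * np.sqrt(dt) * rng.standard_normal()

    # Conditional-moment (Kramers-Moyal) estimates of drift D1(x) and diffusion D2(x).
    bins = np.linspace(x.min(), x.max(), 30)
    idx = np.digitize(x[:-1], bins)
    dx = np.diff(x)
    for b in range(1, len(bins)):
        mask = idx == b
        if mask.sum() > 100:
            xc = 0.5 * (bins[b - 1] + bins[b])       # bin centre
            d1 = dx[mask].mean() / dt                # drift
            d2 = (dx[mask] ** 2).mean() / (2 * dt)   # diffusion
            print(f"x≈{xc: .2f}  D1≈{d1: .2f}  D2≈{d2: .3f}")
    ```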

  6. Computed tomography image-guided surgery in complex acetabular fractures.

    PubMed

    Brown, G A; Willis, M C; Firoozbakhsh, K; Barmada, A; Tessman, C L; Montgomery, A

    2000-01-01

    Eleven complex acetabular fractures in 10 patients were treated by open reduction with internal fixation incorporating computed tomography image guided software intraoperatively. Each of the implants placed under image guidance was found to be accurate and without penetration of the pelvis or joint space. The setup time for the system was minimal. Accuracy in the range of 1 mm was found when registration was precise (eight cases) and was in the range of 3.5 mm when registration was only approximate (three cases). Added benefits included reduced intraoperative fluoroscopic time, less need for more extensive dissection, and obviation of additional surgical approaches in some cases. Compared with a series of similar fractures treated before this image guided series, the reduction in operative time was significant. For patients with complex anterior and posterior combined fractures, the average operation times with and without application of three-dimensional imaging technique were, respectively, 5 hours 15 minutes and 6 hours 14 minutes, revealing 16% less operative time for those who had surgery using image guidance. In the single column fracture group, the operation time for those with three-dimensional imaging application, was 2 hours 58 minutes and for those with traditional surgery, 3 hours 42 minutes, indicating 20% less operative time for those with imaging modality. Intraoperative computed tomography guided imagery was found to be an accurate and suitable method for use in the operative treatment of complex acetabular fractures with substantial displacement.

  7. From analytical solutions of solute transport equations to multidimensional time-domain random walk (TDRW) algorithms

    NASA Astrophysics Data System (ADS)

    Bodin, Jacques

    2015-03-01

    In this study, new multi-dimensional time-domain random walk (TDRW) algorithms are derived from approximate one-dimensional (1-D), two-dimensional (2-D), and three-dimensional (3-D) analytical solutions of the advection-dispersion equation and from exact 1-D, 2-D, and 3-D analytical solutions of the pure-diffusion equation. These algorithms enable the calculation of both the time required for a particle to travel a specified distance in a homogeneous medium and the mass recovery at the observation point, which may be incomplete due to 2-D or 3-D transverse dispersion or diffusion. The method is extended to heterogeneous media, represented as a piecewise collection of homogeneous media. The particle motion is then decomposed along a series of intermediate checkpoints located on the medium interface boundaries. The accuracy of the multi-dimensional TDRW method is verified against (i) exact analytical solutions of solute transport in homogeneous media and (ii) finite-difference simulations in a synthetic 2-D heterogeneous medium of simple geometry. The results demonstrate that the method is ideally suited to purely diffusive transport and to advection-dispersion transport problems dominated by advection. Conversely, the method is not recommended for highly dispersive transport problems because the accuracy of the advection-dispersion TDRW algorithms degrades rapidly for a low Péclet number, consistent with the accuracy limit of the approximate analytical solutions. The proposed approach provides a unified methodology for deriving multi-dimensional time-domain particle equations and may be applicable to other mathematical transport models, provided that appropriate analytical solutions are available.
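    The elementary step of a 1-D TDRW for advection-dispersion is drawing the travel time over a fixed distance. The sketch below does this under the standard assumption that the first-passage time in a homogeneous 1-D medium is inverse-Gaussian distributed with mean L/v and shape L^2/(2D); the parameter values are illustrative, and the sketch is not the multi-dimensional algorithm derived in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    L = 1.0      # distance to the next checkpoint [m]        (illustrative)
    v = 0.5      # advection velocity [m/s]                    (illustrative)
    D = 0.01     # longitudinal dispersion coefficient [m^2/s] (illustrative)

    # 1-D TDRW step: travel time over L is inverse-Gaussian (Wald) distributed
    # with mean mu = L / v and shape lam = L**2 / (2 * D).
    mu, lam = L / v, L**2 / (2 * D)
    travel_times = rng.wald(mu, lam, size=100_000)

    print(travel_times.mean())   # ≈ L / v = 2.0 s
    ```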

  8. Transport properties of the two-dimensional electron gas in AlxGa1-xN/GaN heterostructures

    NASA Astrophysics Data System (ADS)

    Han, Xiuxun; Honda, Yoshio; Narita, Tetsuo; Yamaguchi, Masahito; Sawaki, Nobuhiko

    2007-01-01

    Magnetotransport measurements were performed on a series of AlxGa1-xN/GaN heterostructures with different Al compositions (x = 0.15, 0.20 and 0.30) at 4.2 K. Adopting a fast Fourier transform method, we analysed the Shubnikov-de Haas oscillations due to the two-dimensional electron gas to derive the quantum scattering time (τq). It was found that the quantum scattering time in the ground subband decreases with increasing Al composition: 0.194 ps (x = 0.15), 0.174 ps (x = 0.20) and 0.123 ps (x = 0.30). To discern the predominant scattering process, the scattering times limited by interface roughness, residual impurities and alloy disorder were investigated numerically by including inter-subband scattering. We found that enhanced interface roughness scattering dominates both the transport and quantum scattering times in the ground subband.

  9. Language time series analysis

    NASA Astrophysics Data System (ADS)

    Kosmidis, Kosmas; Kalampokis, Alkiviadis; Argyrakis, Panos

    2006-10-01

    We use the detrended fluctuation analysis (DFA) and the Grassberger-Procaccia analysis (GP) methods in order to study language characteristics. Although we construct our signals using only word lengths or word frequencies, thereby excluding a huge amount of information from language, the application of GP analysis indicates that linguistic signals may be considered as the manifestation of a complex system of high dimensionality, different from random signals or systems of low dimensionality such as the Earth climate. The DFA method is additionally able to distinguish a natural language signal from a computer code signal. This last result may be useful in the field of cryptography.
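    As a point of reference for the DFA method mentioned above, a minimal generic implementation is sketched below and applied to white noise (expected exponent near 0.5); it is not the word-length or word-frequency analysis of the study, and the scale choices are illustrative.

    ```python
    import numpy as np

    def dfa_exponent(x, scales=(16, 32, 64, 128, 256)):
        """Minimal detrended fluctuation analysis of a 1-D signal x."""
        y = np.cumsum(x - np.mean(x))            # integrated profile
        flucts = []
        for s in scales:
            n_seg = len(y) // s
            rms = []
            for i in range(n_seg):
                seg = y[i * s:(i + 1) * s]
                t = np.arange(s)
                coeff = np.polyfit(t, seg, 1)    # linear detrending per window
                rms.append(np.sqrt(np.mean((seg - np.polyval(coeff, t)) ** 2)))
            flucts.append(np.mean(rms))
        # Scaling exponent = slope of log F(s) versus log s.
        return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

    rng = np.random.default_rng(0)
    print(dfa_exponent(rng.standard_normal(4096)))   # ≈ 0.5 for white noise
    ```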

  10. A chaotic model for the plague epidemic that has occurred in Bombay at the end of the 19th century

    NASA Astrophysics Data System (ADS)

    Mangiarotti, Sylvain

    2015-04-01

    The plague epidemic that occurred in Bombay at the end of the 19th century was detected in 1896. One year before, an Advisory Committee had been appointed by the Secretary of State for India, the Royal Society, and the Lister Institute. This Committee made numerous investigations and gathered a large panel of data, including the number of people attacked by and dying from the plague, records of rat and flea populations, as well as meteorological records of temperature and humidity [1]. The global modeling technique [2] aims to obtain low-dimensional models able to simulate the observed cycles from time series. As far as we know, this technique has been applied to only one case of epidemiological analysis (the whooping cough infection), based on a discrete formulation [3]. In the present work, the continuous-time formulation of this technique is used to analyze the time evolution of the plague epidemic from this data set. One low-dimensional model (three variables) is obtained, exhibiting a limit cycle of period-5. A chaotic behavior could be derived from this model by tuning the model parameters. It provides a strong argument for a dynamical behavior that can be approximated by low-dimensional deterministic equations. This model also provides an empirical argument for chaos in epidemics. [1] Verjbitski D. T., Bannerman W. B. & Kápadiâ R. T., 1908. Reports on Plague Investigations in India (May, 1908), The Journal of Hygiene, 8(2), 161-308. [2] Mangiarotti S., Coudret R., Drapeau L. & Jarlan L., 2012. Polynomial search and Global modelling: two algorithms for modeling chaos. Physical Review E, 86(4), 046205. [3] Boudjema G. & Cazelles B., 2003. Extraction of nonlinear dynamics from short and noisy time series. Chaos, Solitons and Fractals, 12, 2051-2069.

  11. Three-dimensional time reversal communications in elastic media

    DOE PAGES

    Anderson, Brian E.; Ulrich, Timothy J.; Le Bas, Pierre-Yves; ...

    2016-02-23

    Our letter presents a series of vibrational communication experiments, using time reversal, conducted on a set of cast iron pipes. Time reversal has been used to provide robust, private, and clean communications in many underwater acoustic applications. Also, the use of time reversal to communicate along sections of pipes and through a wall is demonstrated here in order to overcome the complications of dispersion and multiple scattering. These demonstrations utilize a single source transducer and a single sensor, a triaxial accelerometer, enabling multiple channels of simultaneous communication streams to a single location.

  12. Three dimensional empirical mode decomposition analysis apparatus, method and article manufacture

    NASA Technical Reports Server (NTRS)

    Gloersen, Per (Inventor)

    2004-01-01

    An apparatus and method of analysis for three-dimensional (3D) physical phenomena. The physical phenomena may include any varying 3D phenomena such as time-varying polar ice flows. A representation of the 3D phenomena is passed through a Hilbert transform to convert the data into complex form. A spatial variable is separated from the complex representation by producing a time-based covariance matrix. The temporal parts of the principal components are produced by applying Singular Value Decomposition (SVD). Based on the rapidity with which the eigenvalues decay, the first 3-10 complex principal components (CPC) are selected for Empirical Mode Decomposition into intrinsic modes. The intrinsic modes produced are filtered in order to reconstruct the spatial part of the CPC. Finally, a filtered time series may be reconstructed from the first 3-10 filtered complex principal components.

  13. Data-Driven Modeling of Complex Systems by means of a Dynamical ANN

    NASA Astrophysics Data System (ADS)

    Seleznev, A.; Mukhin, D.; Gavrilov, A.; Loskutov, E.; Feigin, A.

    2017-12-01

    The data-driven methods for modeling and prognosis of complex dynamical systems become more and more popular in various fields due to the growth of high-resolution data. We distinguish two basic steps in such an approach: (i) determining the phase subspace of the system, or embedding, from available time series and (ii) constructing an evolution operator acting in this reduced subspace. In this work we suggest a novel approach combining these two steps by means of the construction of an artificial neural network (ANN) with a special topology. The proposed ANN-based model, on the one hand, projects the data onto a low-dimensional manifold, and, on the other hand, models a dynamical system on this manifold. Actually, this is a recurrent multilayer ANN which has internal dynamics and is capable of generating time series. A very important point of the proposed methodology is the optimization of the model, allowing us to avoid overfitting: we use a Bayesian criterion to optimize the ANN structure and estimate both the degree of evolution operator nonlinearity and the complexity of the nonlinear manifold onto which the data are projected. The proposed modeling technique will be applied to the analysis of high-dimensional dynamical systems: the Lorenz'96 model of atmospheric turbulence, producing high-dimensional space-time chaos, and a quasi-geostrophic three-layer model of the Earth's atmosphere with natural orography, describing the dynamics of synoptical vortexes as well as mesoscale blocking systems. The possibility of applying the proposed methodology to analyze real measured data is also discussed. The study was supported by the Russian Science Foundation (grant #16-12-10198).

  14. Data series embedding and scale invariant statistics.

    PubMed

    Michieli, I; Medved, B; Ristov, S

    2010-06-01

    Data sequences acquired from bio-systems such as human gait data, heart rate interbeat data, or DNA sequences exhibit complex dynamics that is frequently described by a long-memory or power-law decay of the autocorrelation function. One way of characterizing that dynamics is through scale invariant statistics or "fractal-like" behavior. Several methods have been proposed for quantifying scale invariant parameters of physiological signals. Among them the most common are detrended fluctuation analysis, sample mean variance analyses, power spectral density analysis, R/S analysis, and recently, in the realm of the multifractal approach, wavelet analysis. In this paper it is demonstrated that embedding the time series data in a high-dimensional pseudo-phase space reveals scale invariant statistics in a simple fashion. The procedure is applied to different stride interval data sets from human gait measurement time series (PhysioBank data library). Results show that the introduced mapping adequately separates long-memory from random behavior. Smaller gait data sets were analyzed and scale-free trends for limited scale intervals were successfully detected. The method was verified on artificially produced time series with known scaling behavior and with varying content of noise. The possibility for the method to falsely detect long-range dependence in artificially generated short-range-dependence series was investigated. (c) 2009 Elsevier B.V. All rights reserved.
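    A minimal sketch of the time-delay embedding step, mapping a scalar series into a pseudo-phase space, is given below; the stride-interval series is synthetic, and the embedding dimension and delay are illustrative choices.

    ```python
    import numpy as np

    def delay_embed(x, dim, tau):
        """Map a scalar series into a 'dim'-dimensional pseudo-phase space
        using time-delay coordinates (x_t, x_{t+tau}, ..., x_{t+(dim-1)tau})."""
        n = len(x) - (dim - 1) * tau
        return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

    rng = np.random.default_rng(0)
    stride = 1.1 + 0.05 * rng.standard_normal(2000)   # synthetic stride intervals [s]
    X = delay_embed(stride, dim=5, tau=2)
    print(X.shape)                                    # (1992, 5)
    ```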

  15. Improved nonlinear prediction method

    NASA Astrophysics Data System (ADS)

    Adenan, Nur Hamiza; Md Noorani, Mohd Salmi

    2014-06-01

    The analysis and prediction of time series data have been addressed by researchers. Many techniques have been developed to be applied in various areas, such as weather forecasting, financial markets and hydrological phenomena involving data that are contaminated by noise. Therefore, various techniques to improve the method have been introduced to analyze and predict time series data. In view of the importance of analysis and the accuracy of the prediction result, a study was undertaken to test the effectiveness of the improved nonlinear prediction method for data that contain noise. The improved nonlinear prediction method involves the formation of composite serial data based on the successive differences of the time series. Then, phase space reconstruction was performed on the composite (one-dimensional) data to reconstruct a number of space dimensions. Finally, the local linear approximation method was employed to make a prediction based on the phase space. This improved method was tested with logistic map data series containing 0%, 5%, 10%, 20% and 30% noise. The results show that, by using the improved method, the predictions were found to be in close agreement with the observed values. The correlation coefficient was close to one when the improved method was applied to data with up to 10% noise. Thus, an improved approach for analyzing and predicting noisy time series data, without involving any noise reduction method, was introduced.
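    The sketch below illustrates prediction in a phase space reconstructed from successive differences, using a zeroth-order (nearest-neighbour average) local approximation as a simplified stand-in for the local linear scheme described above; the logistic-map data, noise level, and embedding parameters are illustrative assumptions.

    ```python
    import numpy as np

    def local_predict(x, dim=3, tau=1, k=10):
        """Predict the next value of x from its k nearest neighbours in a
        delay-reconstructed phase space (zeroth-order local approximation)."""
        n = len(x) - (dim - 1) * tau
        states = np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])
        query = states[-1]
        # Neighbours must have a known successor, so exclude the last state.
        dists = np.linalg.norm(states[:-1] - query, axis=1)
        nn = np.argsort(dists)[:k]
        successors = x[nn + (dim - 1) * tau + 1]
        return successors.mean()

    # Composite series of successive differences of a noisy logistic map.
    rng = np.random.default_rng(3)
    y = np.empty(1500); y[0] = 0.4
    for i in range(1499):
        y[i + 1] = 3.8 * y[i] * (1 - y[i])
    y += 0.01 * rng.standard_normal(1500)
    d = np.diff(y)
    print(local_predict(d))      # predicted next difference
    ```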

  16. Cluster Analysis and Gaussian Mixture Estimation of Correlated Time-Series by Means of Multi-dimensional Scaling

    NASA Astrophysics Data System (ADS)

    Ibuki, Takero; Suzuki, Sei; Inoue, Jun-ichi

    We investigate cross-correlations between typical Japanese stocks collected through Yahoo!Japan website ( http://finance.yahoo.co.jp/ ). By making use of multi-dimensional scaling (MDS) for the cross-correlation matrices, we draw two-dimensional scattered plots in which each point corresponds to each stock. To make a clustering for these data plots, we utilize the mixture of Gaussians to fit the data set to several Gaussian densities. By minimizing the so-called Akaike Information Criterion (AIC) with respect to parameters in the mixture, we attempt to specify the best possible mixture of Gaussians. It might be naturally assumed that all the two-dimensional data points of stocks shrink into a single small region when some economic crisis takes place. The justification of this assumption is numerically checked for the empirical Japanese stock data, for instance, those around 11 March 2011.
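    A compact sketch of the analysis pipeline described above, using scikit-learn for the MDS projection and an AIC-selected Gaussian mixture, is given below; the return series are synthetic stand-ins for the Yahoo!Japan stock data, and the correlation-to-distance mapping is a common convention assumed here.

    ```python
    import numpy as np
    from sklearn.manifold import MDS
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    returns = rng.standard_normal((250, 20))          # synthetic daily returns, 20 stocks
    corr = np.corrcoef(returns, rowvar=False)

    # Correlation-based distance, then two-dimensional multi-dimensional scaling.
    dist = np.sqrt(2.0 * (1.0 - corr))
    coords = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(dist)

    # Fit Gaussian mixtures of increasing size and keep the one minimizing AIC.
    best = min((GaussianMixture(n_components=k, random_state=0).fit(coords)
                for k in range(1, 6)),
               key=lambda gm: gm.aic(coords))
    print(best.n_components, best.aic(coords))
    ```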

  17. Fast computation of the electrolyte-concentration transfer function of a lithium-ion cell model

    NASA Astrophysics Data System (ADS)

    Rodríguez, Albert; Plett, Gregory L.; Trimboli, M. Scott

    2017-08-01

    One approach to creating physics-based reduced-order models (ROMs) of battery-cell dynamics requires first generating linearized Laplace-domain transfer functions of all cell internal electrochemical variables of interest. Then, the resulting infinite-dimensional transfer functions can be reduced by various means in order to find an approximate low-dimensional model. These methods include Padé approximation or the Discrete-Time Realization algorithm. In a previous article, Lee and colleagues developed a transfer function of the electrolyte concentration for a porous-electrode pseudo-two-dimensional lithium-ion cell model. Their approach used separation of variables and Sturm-Liouville theory to compute an infinite-series solution to the transfer function, which they then truncated to a finite number of terms for reasons of practicality. Here, we instead use a variation-of-parameters approach to arrive at a different representation of the identical solution that does not require a series expansion. The primary benefits of the new approach are speed of computation of the transfer function and the removal of the requirement to approximate the transfer function by truncating the number of terms evaluated. Results show that the speedup of the new method can be more than 3800.

  18. A Two-dimensional Version of the Niblett-Bostick Transformation for Magnetotelluric Interpretations

    NASA Astrophysics Data System (ADS)

    Esparza, F.

    2005-05-01

    An imaging technique for two-dimensional magnetotelluric interpretations is developed following the well known Niblett-Bostick transformation for one-dimensional profiles. The algorithm uses a Hopfield artificial neural network to process series and parallel magnetotelluric impedances along with their analytical influence functions. The adaptive, weighted average approximation preserves part of the nonlinearity of the original problem. No initial model in the usual sense is required for the recovery of a functional model. Rather, the built-in relationship between model and data considers automatically, all at the same time, many half spaces whose electrical conductivities vary according to the data. The use of series and parallel impedances, a self-contained pair of invariants of the impedance tensor, avoids the need to decide on best angles of rotation for TE and TM separations. Field data from a given profile can thus be fed directly into the algorithm without much processing. The solutions offered by the Hopfield neural network correspond to spatial averages computed through rectangular windows that can be chosen at will. Applications of the algorithm to simple synthetic models and to the COPROD2 data set illustrate the performance of the approximation.

  19. Post-operative 3D CT feedback improves accuracy and precision in the learning curve of anatomic ACL femoral tunnel placement.

    PubMed

    Sirleo, Luigi; Innocenti, Massimo; Innocenti, Matteo; Civinini, Roberto; Carulli, Christian; Matassi, Fabrizio

    2018-02-01

    To evaluate the feedback from post-operative three-dimensional computed tomography (3D-CT) on femoral tunnel placement in the learning process, to obtain an anatomic anterior cruciate ligament (ACL) reconstruction. A series of 60 consecutive patients undergoing primary ACL reconstruction using an autologous hamstrings single-bundle outside-in technique were prospectively included in the study. ACL reconstructions were performed by the same trainee-surgeon during his learning phase of anatomic ACL femoral tunnel placement. A CT scan with a dedicated tunnel study was performed in all patients within 48 h after surgery. The data obtained from the CT scan were processed into a three-dimensional surface model, and a true medial view of the lateral femoral condyle was used for the femoral tunnel placement analysis. Two independent examiners analysed the tunnel placements. The centre of the femoral tunnel was measured using a quadrant method as described by Bernard and Hertel. The coordinates measured were compared with anatomic coordinate values described in the literature [deep-to-shallow distance (X-axis) 28.5%; high-to-low distance (Y-axis) 35.2%]. Tunnel placement was evaluated in terms of accuracy and precision. After each ACL reconstruction, results were shown to the surgeon as instant feedback in order to achieve accurate correction and improve tunnel placement for the next surgery. Complications and arthroscopic time were also recorded. Results were divided into three consecutive series (1, 2, 3) of 20 patients each. A trend towards placing the femoral tunnel slightly shallow in the deep-to-shallow distance and slightly high in the high-to-low distance was observed in the first and the second series. A progressive improvement in tunnel position was recorded from the first to the second series and from the second to the third series. Both accuracy (+52.4%) and precision (+55.7%) increased from the first to the third series (p < 0.001). Arthroscopic time decreased from a mean of 105 min in the first series to 57 min in the third series (p < 0.001). After 50 ACL reconstructions, a satisfactory anatomic femoral tunnel placement was reached. Feedback from post-operative 3D-CT is effective in the learning process for improving the accuracy and precision of femoral tunnel placement in order to obtain an anatomic ACL reconstruction, and it also helps to reduce arthroscopic time and shorten the learning curve. For clinical relevance, trainee-surgeons should use feedback from post-operative 3D-CT to learn anatomic ACL femoral tunnel placement and apply it appropriately. Consecutive case series, Level IV.

  20. Tidal and residual currents measured by an acoustic doppler current profiler at the west end of Carquinez Strait, San Francisco Bay, California, March to November 1988

    USGS Publications Warehouse

    Burau, J.R.; Simpson, M.R.; Cheng, R.T.

    1993-01-01

    Water-velocity profiles were collected at the west end of Carquinez Strait, San Francisco Bay, California, from March to November 1988, using an acoustic Doppler current profiler (ADCP). These data are a series of 10-minute-averaged water velocities collected at 1-meter vertical intervals (bins) in the 16.8-meter water column, beginning 2.1 meters above the estuary bed. To examine the vertical structure of the horizontal water velocities, the data are separated into individual time-series by bin and then used for time-series plots, harmonic analysis, and for input to digital filters. Three-dimensional graphic renditions of the filtered data are also used in the analysis. Harmonic analysis of the time-series data from each bin indicates that the dominant (12.42 hour or M2) partial tidal currents reverse direction near the bottom, on average, 20 minutes sooner than M2 partial tidal currents near the surface. Residual (nontidal) currents derived from the filtered data indicate that currents near the bottom are predominantly up-estuary during the neap tides and down-estuary during the more energetic spring tides.
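    A minimal sketch of the per-bin harmonic analysis step, fitting the M2 (12.42-hour) constituent to one velocity series by least squares, is shown below; the velocity record is synthetic, and the amplitude, phase, and mean values are illustrative assumptions rather than the Carquinez Strait data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    t_hours = np.arange(0, 24 * 30, 1 / 6)           # 10-minute averages over ~30 days
    omega = 2 * np.pi / 12.42                        # M2 frequency [rad/hour]

    # Synthetic along-channel velocity for one depth bin (cm/s), with noise.
    u = 40 * np.cos(omega * t_hours - 1.0) + 5 + 3 * rng.standard_normal(t_hours.size)

    # Least-squares harmonic analysis: u ≈ a*cos(wt) + b*sin(wt) + mean.
    A = np.column_stack([np.cos(omega * t_hours), np.sin(omega * t_hours),
                         np.ones_like(t_hours)])
    a, b, mean_u = np.linalg.lstsq(A, u, rcond=None)[0]
    amplitude = np.hypot(a, b)
    phase = np.arctan2(b, a)                         # phase lag in radians
    print(amplitude, phase, mean_u)                  # ≈ 40, 1.0, 5 (residual current)
    ```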

  1. Time-series animation techniques for visualizing urban growth

    USGS Publications Warehouse

    Acevedo, W.; Masuoka, P.

    1997-01-01

    Time-series animation is a visually intuitive way to display urban growth. Animations of land-use change for the Baltimore-Washington region were generated by showing a series of images one after the other in sequential order. Before creating an animation, various issues that will affect the appearance of the animation should be considered, including the number of original data frames to use, the optimal animation display speed, the number of intermediate frames to create between the known frames, and the output media on which the animations will be displayed. To create new frames between the known years of data, the change in each theme (i.e. urban development, water bodies, transportation routes) must be characterized and an algorithm developed to create the in-between frames. Example time-series animations were created using a temporal GIS database of the Baltimore-Washington area. Creating the animations involved generating raster images of the urban development, water bodies, and principal transportation routes; overlaying the raster images on a background image; and importing the frames to a movie file. Three-dimensional perspective animations were created by draping each image over digital elevation data prior to importing the frames to a movie file. © 1997 Elsevier Science Ltd.

  2. Two-Dimensional Numerical Model of coupled Heat and Moisture Transport in Frost Heaving Soils.

    DTIC Science & Technology

    1982-08-01

    The integrated relations and the exact solution (the well-known series expansion) give the complete mass balance formulation, obtained by integrating spatially and temporally on each element. The diffusivity model can be approximately linearized by using values of diffusivity assumed constant for small intervals of space and time, by a series expansion.

  3. Centrality measures in temporal networks with time series analysis

    NASA Astrophysics Data System (ADS)

    Huang, Qiangjuan; Zhao, Chengli; Zhang, Xue; Wang, Xiaojie; Yi, Dongyun

    2017-05-01

    The study of identifying important nodes in networks has wide application in different fields. However, current research is mostly based on static or aggregated networks. Recently, increasing attention to networks with time-varying structure has promoted the study of node centrality in temporal networks. In this paper, we define a supra-evolution matrix to depict the temporal network structure. Using time series analysis, the relationships between different time layers can be learned automatically. Based on the special form of the supra-evolution matrix, the eigenvector centrality calculation problem is turned into the calculation of eigenvectors of several low-dimensional matrices through iteration, which effectively reduces the computational complexity. Experiments are carried out on two real-world temporal networks, the Enron email communication network and the DBLP co-authorship network, the results of which show that our method is more efficient at discovering the important nodes than the common aggregating method.
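    For reference, the sketch below computes eigenvector centrality by plain power iteration on a small adjacency matrix; it illustrates the quantity being calculated, not the supra-evolution-matrix decomposition proposed in the paper, and the toy network is an assumption.

    ```python
    import numpy as np

    def eigenvector_centrality(adj, iters=200, tol=1e-10):
        """Eigenvector centrality by power iteration on an adjacency matrix."""
        c = np.ones(adj.shape[0]) / adj.shape[0]
        for _ in range(iters):
            nxt = adj @ c
            nxt /= np.linalg.norm(nxt)
            if np.linalg.norm(nxt - c) < tol:
                break
            c = nxt
        return c

    # Toy undirected network: node 0 is connected to every other node.
    A = np.array([[0, 1, 1, 1],
                  [1, 0, 1, 0],
                  [1, 1, 0, 0],
                  [1, 0, 0, 0]], dtype=float)
    print(eigenvector_centrality(A))
    ```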

  4. Appropriate use of the increment entropy for electrophysiological time series.

    PubMed

    Liu, Xiaofeng; Wang, Xue; Zhou, Xu; Jiang, Aimin

    2018-04-01

    The increment entropy (IncrEn) is a new measure for quantifying the complexity of a time series. There are three critical parameters in the IncrEn calculation: N (length of the time series), m (dimensionality), and q (quantifying precision). However, the question of how to choose the most appropriate combination of IncrEn parameters for short datasets has not been extensively explored. The purpose of this research was to provide guidance on choosing suitable IncrEn parameters for short datasets by exploring the effects of varying the parameter values. We used simulated data, epileptic EEG data and cardiac interbeat (RR) data to investigate the effects of the parameters on the calculated IncrEn values. The results reveal that IncrEn is sensitive to changes in m, q and N for short datasets (N≤500). However, IncrEn reaches stability at a data length of N=1000 with m=2 and q=2, and for short datasets (N=100), it shows better relative consistency with 2≤m≤6 and 2≤q≤8. We suggest that the value of N should be no less than 100. To enable a clear distinction between different classes based on IncrEn, we recommend that m and q should take values between 2 and 4. With appropriate parameters, IncrEn enables the effective detection of complexity variations in physiological time series, suggesting that IncrEn should be useful for the analysis of physiological time series in clinical applications. Copyright © 2018 Elsevier Ltd. All rights reserved.

  5. AIRS Ozone Burden During Antarctic Winter: Time Series from 8/1/2005 to 9/30/2005

    NASA Image and Video Library

    2007-07-24

    The Atmospheric Infrared Sounder (AIRS) provides a daily global 3-dimensional view of Earth's ozone layer. Since AIRS observes in the thermal infrared spectral range, it also allows scientists to view from space the Antarctic ozone hole for the first time continuously during polar winter. This image sequence captures the intensification of the annual ozone hole in the Antarctic Polar Vortex. http://photojournal.jpl.nasa.gov/catalog/PIA09938

  6. Traveltime delay relative to the maximum energy of the wave train for dispersive tsunamis propagating across the Pacific Ocean: the case of 2010 and 2015 Chilean Tsunamis

    NASA Astrophysics Data System (ADS)

    Poupardin, A.; Heinrich, P.; Hébert, H.; Schindelé, F.; Jamelot, A.; Reymond, D.; Sugioka, H.

    2018-05-01

    This paper evaluates the importance of frequency dispersion in the propagation of recent trans-Pacific tsunamis. Frequency dispersion induces a time delay for the most energetic waves, which increases for long propagation distances and short source dimensions. To calculate this time delay, propagation of tsunamis is simulated and analyzed from spectrograms of time-series at specific gauges in the Pacific Ocean. One- and two-dimensional simulations are performed by solving either shallow water or Boussinesq equations and by considering realistic seismic sources. One-dimensional sensitivity tests are first performed in a constant-depth channel to study the influence of the source width. Two-dimensional tests are then performed in a simulated Pacific Ocean with a 4000-m constant depth and by considering tectonic sources of 2010 and 2015 Chilean earthquakes. For these sources, both the azimuth and the distance play a major role in the frequency dispersion of tsunamis. Finally, simulations are performed considering the real bathymetry of the Pacific Ocean. Multiple reflections, refractions as well as shoaling of waves result in much more complex time series for which the effects of the frequency dispersion are hardly discernible. The main point of this study is to evaluate frequency dispersion in terms of traveltime delays by calculating spectrograms for a time window of 6 hours after the arrival of the first wave. Results of the spectral analysis show that the wave packets recorded by pressure and tide sensors in the Pacific Ocean seem to be better reproduced by the Boussinesq model than the shallow water model and approximately follow the theoretical dispersion relationship linking wave arrival times and frequencies. Additionally, a traveltime delay is determined above which effects of frequency dispersion are considered to be significant in terms of maximum surface elevations.

  7. Analytical Approach to (2+1)-Dimensional Boussinesq Equation and (3+1)-Dimensional Kadomtsev-Petviashvili Equation

    NASA Astrophysics Data System (ADS)

    Sarıaydın, Selin; Yıldırım, Ahmet

    2010-05-01

    In this paper, we studied the solitary wave solutions of the (2+1)-dimensional Boussinesq equation u_{tt} - u_{xx} - u_{yy} - (u^2)_{xx} - u_{xxxx} = 0 and the (3+1)-dimensional Kadomtsev-Petviashvili (KP) equation u_{xt} - 6u_x^2 + 6uu_{xx} - u_{xxxx} - u_{yy} - u_{zz} = 0. By using the homotopy perturbation method, an explicit numerical solution is calculated in the form of a convergent power series with easily computable components. To illustrate the application of this method, numerical results are derived by using the calculated components of the homotopy perturbation series. The numerical solutions are compared with the known analytical solutions. Results derived from our method are shown graphically.

  8. On the applicability of low-dimensional models for convective flow reversals at extreme Prandtl numbers

    NASA Astrophysics Data System (ADS)

    Mannattil, Manu; Pandey, Ambrish; Verma, Mahendra K.; Chakraborty, Sagar

    2017-12-01

    Constructing simpler models, either stochastic or deterministic, for exploring the phenomenon of flow reversals in fluid systems is in vogue across disciplines. Using direct numerical simulations and nonlinear time series analysis, we illustrate that the basic nature of flow reversals in convecting fluids can depend on the dimensionless parameters describing the system. Specifically, we find evidence of low-dimensional behavior in flow reversals occurring at zero Prandtl number, whereas we fail to find such signatures for reversals at infinite Prandtl number. Thus, even in a single system, as one varies the system parameters, one can encounter reversals that are fundamentally different in nature. Consequently, we conclude that a single general low-dimensional deterministic model cannot faithfully characterize flow reversals for every set of parameter values.

  9. A stochastic model for correlated protein motions

    NASA Astrophysics Data System (ADS)

    Karain, Wael I.; Qaraeen, Nael I.; Ajarmah, Basem

    2006-06-01

    A one-dimensional Langevin-type stochastic difference equation is used to find the deterministic and Gaussian contributions of time series representing the projections of a Bovine Pancreatic Trypsin Inhibitor (BPTI) protein molecular dynamics simulation along different eigenvector directions determined using principal component analysis. The deterministic part shows a distinct nonlinear behavior only for eigenvectors contributing significantly to the collective protein motion.

  10. Maxillary reaction patterns identified by three-dimensional analysis of casts from infants with unilateral cleft lip and palate.

    PubMed

    Neuschulz, J; Schaefer, I; Scheer, M; Christ, H; Braumann, B

    2013-07-01

    In order to visualize and quantify the direction and extent of morphological upper-jaw changes in infants with unilateral cleft lip and palate (UCLP) during early orthodontic treatment, a three-dimensional method of cast analysis for routine application was developed. In the present investigation, this method was used to identify reaction patterns associated with specific cleft forms. The study included a cast series reflecting the upper-jaw situations of 46 infants with complete (n=27) or incomplete (n=19) UCLP during week 1 and months 3, 6, and 12 of life. Three-dimensional datasets were acquired and visualized with scanning software (DigiModel®; OrthoProof, The Netherlands). Following interactive identification of landmarks on the digitized surface relief, a defined set of representative linear parameters were three-dimensionally measured. At the same time, the three-dimensional surfaces of one patient series were superimposed based on a defined reference plane. Morphometric differences were statistically analyzed. Thanks to the user-friendly software, all landmarks could be identified quickly and reproducibly, thus, allowing for simultaneous three-dimensional measurement of all defined parameters. The measured values revealed that significant morphometric differences were present in all three planes of space between the two patient groups. Patients with complete UCLP underwent significantly larger reductions in cleft width (p<0.001), and sagittal growth in the complete UCLP group exceeded sagittal growth in the incomplete UCLP group by almost 50% within the first year of life. Based on patients with incomplete versus complete UCLP, different reaction patterns were identified that depended not on apparent severities of malformation but on cleft forms.

  11. Fractal dimension and nonlinear dynamical processes

    NASA Astrophysics Data System (ADS)

    McCarty, Robert C.; Lindley, John P.

    1993-11-01

    Mandelbrot, Falconer and others have demonstrated the existence of dimensionally invariant geometrical properties of non-linear dynamical processes known as fractals. Barnsley defines fractal geometry as an extension of classical geometry. Such an extension, however, is not mathematically trivial. Of specific interest to those engaged in signal processing is the potential use of fractal geometry to facilitate the analysis of non-linear signal processes often referred to as non-linear time series. Fractal geometry has been used in the modeling of non-linear time series represented by radar signals in the presence of ground clutter or interference generated by spatially distributed reflections around the target of a radar system. It was recognized by Mandelbrot that the fractal geometries represented by man-made objects had different dimensions than the geometries of the familiar objects that abound in nature such as leaves, clouds, ferns, trees, etc. The invariant dimensional property of non-linear processes suggests that in the case of acoustic signals (active or passive) generated within a dispersive medium such as the ocean environment, there exists much rich structure that will aid in the detection and classification of various objects, man-made or natural, within the medium.

  12. Peak picking and the assessment of separation performance in two-dimensional high performance liquid chromatography.

    PubMed

    Stevenson, Paul G; Mnatsakanyan, Mariam; Guiochon, Georges; Shalliker, R Andrew

    2010-07-01

    An algorithm was developed for 2DHPLC that automated the process of recognizing peaks, measuring their retention times, and then plotting the information in a two-dimensional retention plane. Following the recognition of peaks, the software then performed a series of statistical assessments of the separation performance, measuring, for example, the correlation between dimensions, peak capacity and the percentage of usage of the separation space. Peak recognition was achieved by interpreting the first and second derivatives of each respective one-dimensional chromatogram to determine the 1D retention times of each solute and then compiling these retention times for each respective fraction 'cut'. Due to the nature of comprehensive 2DHPLC, adjacent cut fractions may contain peaks common to more than one cut fraction. The algorithm determined which components were common in adjacent cuts and subsequently calculated the peak maximum profile by interpolating the space between adjacent peaks. This algorithm was applied to the analysis of a two-dimensional separation of an apple flesh extract, with a first dimension comprising a cyano stationary phase and an aqueous/THF mobile phase and a second dimension comprising C18-Hydro with an aqueous/MeOH mobile phase. A total of 187 peaks were detected.
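    A minimal sketch of derivative-based peak picking on a one-dimensional chromatogram is given below; the Gaussian test signal and the height threshold are illustrative assumptions, not part of the published algorithm.

    ```python
    import numpy as np

    def pick_peaks(signal, min_height=0.05):
        """Locate peaks as downward zero crossings of the first derivative
        where the second derivative is negative (local maxima)."""
        d1 = np.gradient(signal)
        d2 = np.gradient(d1)
        idx = np.where((d1[:-1] > 0) & (d1[1:] <= 0) & (d2[:-1] < 0)
                       & (signal[:-1] > min_height))[0]
        return idx

    # Synthetic one-dimensional chromatogram with three Gaussian peaks.
    t = np.linspace(0, 10, 2000)
    chrom = sum(h * np.exp(-((t - c) / w) ** 2)
                for h, c, w in [(1.0, 2.0, 0.1), (0.6, 5.0, 0.15), (0.3, 7.5, 0.2)])
    peaks = pick_peaks(chrom)
    print(t[peaks])          # approximate retention times of the three peaks
    ```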

  13. Spatio-temporal phenomena in complex systems with time delays

    NASA Astrophysics Data System (ADS)

    Yanchuk, Serhiy; Giacomelli, Giovanni

    2017-03-01

    Real-world systems can be strongly influenced by time delays occurring in self-coupling interactions, due to unavoidable finite signal propagation velocities. When the delays become significantly long, complicated high-dimensional phenomena appear and a simple extension of the methods employed in low-dimensional dynamical systems is not feasible. We review the general theory developed in this case, describing the main destabilization mechanisms, the use of visualization tools, and commenting on the most important and effective dynamical indicators as well as their properties in different regimes. We show how a suitable approach, based on a comparison with spatio-temporal systems, represents a powerful instrument for disclosing the very basic mechanism of long-delay systems. Various examples from different models and a series of recent experiments are reported.

  14. Comparison of the performance of tracer kinetic model-driven registration for dynamic contrast enhanced MRI using different models of contrast enhancement.

    PubMed

    Buonaccorsi, Giovanni A; Roberts, Caleb; Cheung, Sue; Watson, Yvonne; O'Connor, James P B; Davies, Karen; Jackson, Alan; Jayson, Gordon C; Parker, Geoff J M

    2006-09-01

    The quantitative analysis of dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) data is subject to model fitting errors caused by motion during the time-series data acquisition. However, the time-varying features that occur as a result of contrast enhancement can confound motion correction techniques based on conventional registration similarity measures. We have therefore developed a heuristic, locally controlled tracer kinetic model-driven registration procedure, in which the model accounts for contrast enhancement, and applied it to the registration of abdominal DCE-MRI data at high temporal resolution. Using severely motion-corrupted data sets that had been excluded from analysis in a clinical trial of an antiangiogenic agent, we compared the results obtained when using different models to drive the tracer kinetic model-driven registration with those obtained when using a conventional registration against the time series mean image volume. Using tracer kinetic model-driven registration, it was possible to improve model fitting by reducing the sum of squared errors but the improvement was only realized when using a model that adequately described the features of the time series data. The registration against the time series mean significantly distorted the time series data, as did tracer kinetic model-driven registration using a simpler model of contrast enhancement. When an appropriate model is used, tracer kinetic model-driven registration influences motion-corrupted model fit parameter estimates and provides significant improvements in localization in three-dimensional parameter maps. This has positive implications for the use of quantitative DCE-MRI for example in clinical trials of antiangiogenic or antivascular agents.

  15. On computing the global time-optimal motions of robotic manipulators in the presence of obstacles

    NASA Technical Reports Server (NTRS)

    Shiller, Zvi; Dubowsky, Steven

    1991-01-01

    A method for computing the time-optimal motions of robotic manipulators is presented that considers the nonlinear manipulator dynamics, actuator constraints, joint limits, and obstacles. The optimization problem is reduced to a search for the time-optimal path in the n-dimensional position space. A small set of near-optimal paths is first efficiently selected from a grid, using a branch and bound search and a series of lower bound estimates on the traveling time along a given path. These paths are further optimized with a local path optimization to yield the global optimal solution. Obstacles are considered by eliminating the collision points from the tessellated space and by adding a penalty function to the motion time in the local optimization. The computational efficiency of the method stems from the reduced dimensionality of the searched space and from combining the grid search with a local optimization. The method is demonstrated in several examples for two- and six-degree-of-freedom manipulators with obstacles.

  16. Spectral analysis of finite-time correlation matrices near equilibrium phase transitions

    NASA Astrophysics Data System (ADS)

    Vinayak; Prosen, T.; Buča, B.; Seligman, T. H.

    2014-10-01

    We study spectral densities for systems on lattices which, at a phase transition, display power-law spatial correlations. Constructing the spatial correlation matrix, we prove that its eigenvalue density shows a power law that can be derived from the spatial correlations. In practice, time series are short in the sense that they are either not stationary over long time intervals or not available over long time intervals. Also, we usually do not have time series available for all variables. We perform numerical simulations on a two-dimensional Ising model with the usual Metropolis algorithm as the time evolution. Using all spins on a grid with periodic boundary conditions, we find a power law that is, for large grids, compatible with the analytic result. We still find a power law even if we choose a fairly small subset of grid points at random. The exponents of the power laws will be smaller under such circumstances. For very short time series leading to singular correlation matrices, we use a recently developed technique to lift the degeneracy at zero in the spectrum and find a significant signature of critical behavior even in this case, as compared to high-temperature results, which tend to those of random matrix models.
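    The sketch below shows the basic construction used in such an analysis: standardizing multivariate time series, forming the spatial correlation matrix, and examining its eigenvalue density; the data are weakly correlated Gaussian surrogates rather than Ising simulations, and the sample sizes are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    T, N = 500, 100                       # time samples and lattice sites (illustrative)
    # Synthetic spin-like time series with a weak common component.
    common = rng.standard_normal((T, 1))
    data = 0.3 * common + rng.standard_normal((T, N))

    # Standardize each site's series and build the spatial correlation matrix.
    z = (data - data.mean(axis=0)) / data.std(axis=0)
    C = (z.T @ z) / T

    eigvals = np.linalg.eigvalsh(C)
    hist, edges = np.histogram(eigvals, bins=40, density=True)
    print(eigvals.max(), hist[:5])        # largest eigenvalue and part of the density
    ```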

  17. Shaping ability of ProFile.04 Taper Series 29 rotary nickel-titanium instruments in simulated root canals. Part 1.

    PubMed

    Thompson, S A; Dummer, P M

    1997-01-01

    The aim of this study was to determine the shaping ability of ProFile.04 Taper Series 29 nickel-titanium instruments in simulated canals. A total of 40 simulated root canals made up of four different shapes in terms of angle and position of curvature were prepared by ProFile instruments using a step-down approach. Part 1 of this two-part report describes the efficacy of the instruments in terms of preparation time, instrument failure, canal blockages, loss of canal length and three-dimensional canal form. The time necessary for canal preparation was not influenced significantly by canal shape. No instrument fractures occurred but a total of 52 instruments deformed. Size 6 instruments deformed the most followed by sizes 5, 3 and 4. Canal shape did not influence significantly instrument deformation. None of the canals became blocked with debris and loss of working distance was on average 0.5 mm or less. Intracanal impressions of canal form demonstrated that most canals had definite apical stops, smooth canal walls and good flow and taper. Under the conditions of this study, ProFile.04 Taper Series 29 rotary nickel-titanium instruments prepared simulated canals rapidly and created good three-dimensional form. A substantial number of instruments deformed but it was not possible to determine whether this phenomenon occurred because of the nature of the experimental model or through an inherent design weakness in the instruments.

  18. Reconstruction of the dynamics of the climatic system from time-series data

    PubMed Central

    Nicolis, C.; Nicolis, G.

    1986-01-01

    The oxygen isotope record of the last million years, as provided by a deep sea core sediment, is analyzed by a method recently developed in the theory of dynamical systems. The analysis suggests that climatic variability is the manifestation of a chaotic dynamics described by an attractor of fractal dimensionality. A quantitative measure of the limited predictability of the climatic system is provided by the evaluation of the time-correlation function and the largest positive Lyapounov exponent of the system. PMID:16593650

  19. Computation of the radiation amplitude of oscillons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fodor, Gyula; Forgacs, Peter; LMPT, CNRS-UMR 6083, Universite de Tours, Parc de Grandmont, 37200 Tours

    2009-03-15

    The radiation loss of small-amplitude oscillons (very long-living, spatially localized, time-dependent solutions) in one-dimensional scalar field theories is computed in the small-amplitude expansion analytically using matched asymptotic series expansions and Borel summation. The amplitude of the radiation is beyond all orders in perturbation theory and the method used has been developed by Segur and Kruskal in Phys. Rev. Lett. 58, 747 (1987). Our results are in good agreement with those of long-time numerical simulations of oscillons.

  20. Multifractality and Network Analysis of Phase Transition

    PubMed Central

    Li, Wei; Yang, Chunbin; Han, Jihui; Su, Zhu; Zou, Yijiang

    2017-01-01

    Many models and real complex systems possess critical thresholds at which the systems shift dramatically from one state to another. The discovery of early warnings in the vicinity of critical points is of great importance for estimating how far the systems are away from the critical states. Multifractal Detrended Fluctuation Analysis (MF-DFA) and the visibility graph method have been employed to investigate the multifractal and geometrical properties of the magnetization time series of the two-dimensional Ising model. Multifractality of the time series near the critical point has been uncovered from the generalized Hurst exponents and the singularity spectrum. Both long-term correlation and a broad probability density function are identified as the sources of multifractality. The heterogeneous nature of the networks constructed from the magnetization time series has validated the fractal properties. The evolution of the topological quantities of the visibility graph, along with the variation of multifractality, serves as a new early warning of the phase transition. These methods and results may provide new insights into the analysis of phase transition problems and can be used as early warnings for a variety of complex systems. PMID:28107414
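    A minimal sketch of the natural visibility graph construction used above is given below; the input series is Gaussian noise standing in for a magnetization series, so the resulting degree statistics are only illustrative.

    ```python
    import numpy as np

    def visibility_graph(y):
        """Natural visibility graph of a time series: nodes are samples, and two
        samples (a, y[a]) and (b, y[b]) are linked if every point between them
        lies strictly below the straight line connecting them."""
        n = len(y)
        edges = set()
        for a in range(n - 1):
            for b in range(a + 1, n):
                visible = all(y[c] < y[b] + (y[a] - y[b]) * (b - c) / (b - a)
                              for c in range(a + 1, b))
                if visible:
                    edges.add((a, b))
        return edges

    rng = np.random.default_rng(0)
    series = rng.standard_normal(200)       # stand-in for a magnetization series
    g = visibility_graph(series)
    degrees = np.bincount(np.array(list(g)).ravel(), minlength=len(series))
    print(len(g), degrees.max())            # edge count and maximum node degree
    ```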

  1. Visualization of time series statistical data by shape analysis (GDP ratio changes among Asia countries)

    NASA Astrophysics Data System (ADS)

    Shirota, Yukari; Hashimoto, Takako; Fitri Sari, Riri

    2018-03-01

    It has become very important to visualize time series big data. In this paper we discuss a new analysis method called "statistical shape analysis" or "geometry driven statistics" applied to time series statistical data in economics. We analyse the changes in agriculture, value added and industry, value added (as percentages of GDP) from 2000 to 2010 in Asia. We handle the data as a set of landmarks on a two-dimensional image to see the deformation using the principal components. The point of the analysis method is the principal components of the given formation, which are eigenvectors of its bending energy matrix. The local deformation can be expressed as a set of non-Affine transformations. The transformations give us information about the local differences between 2000 and 2010. Because the non-Affine transformation can be decomposed into a set of partial warps, we present the partial warps visually. Statistical shape analysis is widely used in biology but, in economics, no application can be found. In this paper, we investigate its potential to analyse economic data.

  2. Toeplitz Inverse Covariance-Based Clustering of Multivariate Time Series Data

    PubMed Central

    Hallac, David; Vare, Sagar; Boyd, Stephen; Leskovec, Jure

    2018-01-01

    Subsequence clustering of multivariate time series is a useful tool for discovering repeated patterns in temporal data. Once these patterns have been discovered, seemingly complicated datasets can be interpreted as a temporal sequence of only a small number of states, or clusters. For example, raw sensor data from a fitness-tracking application can be expressed as a timeline of a select few actions (i.e., walking, sitting, running). However, discovering these patterns is challenging because it requires simultaneous segmentation and clustering of the time series. Furthermore, interpreting the resulting clusters is difficult, especially when the data is high-dimensional. Here we propose a new method of model-based clustering, which we call Toeplitz Inverse Covariance-based Clustering (TICC). Each cluster in the TICC method is defined by a correlation network, or Markov random field (MRF), characterizing the interdependencies between different observations in a typical subsequence of that cluster. Based on this graphical representation, TICC simultaneously segments and clusters the time series data. We solve the TICC problem through alternating minimization, using a variation of the expectation maximization (EM) algorithm. We derive closed-form solutions to efficiently solve the two resulting subproblems in a scalable way, through dynamic programming and the alternating direction method of multipliers (ADMM), respectively. We validate our approach by comparing TICC to several state-of-the-art baselines in a series of synthetic experiments, and we then demonstrate on an automobile sensor dataset how TICC can be used to learn interpretable clusters in real-world scenarios. PMID:29770257

  3. Normalizing the causality between time series.

    PubMed

    Liang, X San

    2015-08-01

    Recently, a rigorous yet concise formula was derived to evaluate information flow, and hence the causality in a quantitative sense, between time series. To assess the importance of a resulting causality, it needs to be normalized. The normalization is achieved through distinguishing a Lyapunov exponent-like, one-dimensional phase-space stretching rate and a noise-to-signal ratio from the rate of information flow in the balance of the marginal entropy evolution of the flow recipient. It is verified with autoregressive models and applied to a real financial analysis problem. An unusually strong one-way causality is identified from IBM (International Business Machines Corporation) to GE (General Electric Company) in their early era, revealing to us an old story, which has almost faded into oblivion, about "Seven Dwarfs" competing with a giant for the mainframe computer market.

  4. Time series analysis for minority game simulations of financial markets

    NASA Astrophysics Data System (ADS)

    Ferreira, Fernando F.; Francisco, Gerson; Machado, Birajara S.; Muruganandam, Paulsamy

    2003-04-01

    The minority game (MG) model introduced recently provides promising insights into the understanding of the evolution of prices, indices and rates in the financial markets. In this paper we perform a time series analysis of the model employing tools from statistics, dynamical systems theory and stochastic processes. Using benchmark systems and a financial index for comparison, several conclusions are obtained about the generating mechanism for this kind of evolution. The motion is deterministic, driven by occasional random external perturbation. When the interval between two successive perturbations is sufficiently large, one can find low dimensional chaos in this regime. However, the full motion of the MG model is found to be similar to that of the first differences of the SP500 index: stochastic, nonlinear and (unit root) stationary.

  5. Normalizing the causality between time series

    NASA Astrophysics Data System (ADS)

    Liang, X. San

    2015-08-01

    Recently, a rigorous yet concise formula was derived to evaluate information flow, and hence the causality in a quantitative sense, between time series. To assess the importance of a resulting causality, it needs to be normalized. The normalization is achieved through distinguishing a Lyapunov exponent-like, one-dimensional phase-space stretching rate and a noise-to-signal ratio from the rate of information flow in the balance of the marginal entropy evolution of the flow recipient. It is verified with autoregressive models and applied to a real financial analysis problem. An unusually strong one-way causality is identified from IBM (International Business Machines Corporation) to GE (General Electric Company) in their early era, revealing to us an old story, which has almost faded into oblivion, about "Seven Dwarfs" competing with a giant for the mainframe computer market.

  6. Multidimensional Recurrence Quantification Analysis (MdRQA) for the Analysis of Multidimensional Time-Series: A Software Implementation in MATLAB and Its Application to Group-Level Data in Joint Action

    PubMed Central

    Wallot, Sebastian; Roepstorff, Andreas; Mønster, Dan

    2016-01-01

    We introduce Multidimensional Recurrence Quantification Analysis (MdRQA) as a tool to analyze multidimensional time-series data. We show how MdRQA can be used to capture the dynamics of high-dimensional signals, and how MdRQA can be used to assess coupling between two or more variables. In particular, we describe applications of the method in research on joint and collective action, as it provides a coherent analysis framework to systematically investigate dynamics at different group levels—from individual dynamics, to dyadic dynamics, up to global group-level of arbitrary size. The Appendix in Supplementary Material contains a software implementation in MATLAB to calculate MdRQA measures. PMID:27920748
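
    As an illustration of the basic quantity MdRQA builds on, the sketch below computes a recurrence matrix and the recurrence rate directly from a multidimensional time series; it is not the authors' MATLAB implementation, and the radius, toy signal, and function name are arbitrary choices.

```python
import numpy as np

def recurrence_rate(signal, radius):
    """Recurrence matrix and recurrence rate for a multidimensional time series.

    signal : (T, d) array, one d-dimensional state per row
    radius : Euclidean distance below which two states count as recurrent
    """
    signal = np.asarray(signal, float)
    dists = np.linalg.norm(signal[:, None, :] - signal[None, :, :], axis=-1)
    rec = (dists <= radius).astype(int)
    np.fill_diagonal(rec, 0)                         # ignore trivial self-recurrences
    rate = rec.sum() / (rec.size - len(signal))      # fraction of recurrent state pairs
    return rec, rate

# toy usage: a noisy circular (two-dimensional) trajectory recurs strongly
rng = np.random.default_rng(2)
t = np.linspace(0, 20 * np.pi, 600)
xy = np.column_stack([np.sin(t), np.cos(t)]) + 0.05 * rng.normal(size=(600, 2))
_, rr = recurrence_rate(xy, radius=0.3)
print(f"recurrence rate: {rr:.3f}")
```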

  7. Multidimensional Recurrence Quantification Analysis (MdRQA) for the Analysis of Multidimensional Time-Series: A Software Implementation in MATLAB and Its Application to Group-Level Data in Joint Action.

    PubMed

    Wallot, Sebastian; Roepstorff, Andreas; Mønster, Dan

    2016-01-01

    We introduce Multidimensional Recurrence Quantification Analysis (MdRQA) as a tool to analyze multidimensional time-series data. We show how MdRQA can be used to capture the dynamics of high-dimensional signals, and how MdRQA can be used to assess coupling between two or more variables. In particular, we describe applications of the method in research on joint and collective action, as it provides a coherent analysis framework to systematically investigate dynamics at different group levels-from individual dynamics, to dyadic dynamics, up to global group-level of arbitrary size. The Appendix in Supplementary Material contains a software implementation in MATLAB to calculate MdRQA measures.

  8. A web service system supporting three-dimensional post-processing of medical images based on WADO protocol.

    PubMed

    He, Longjun; Xu, Lang; Ming, Xing; Liu, Qian

    2015-02-01

    Three-dimensional post-processing operations on the volume data generated by a series of CT or MR images are of great significance for image reading and diagnosis. As part of the DICOM standard, the WADO service defines how to access DICOM objects on the Web, but it does not cover three-dimensional post-processing operations on image series. This paper analyzed the technical features of three-dimensional post-processing operations on volume data, and then designed and implemented a web service system for three-dimensional post-processing of medical images based on the WADO protocol. In order to improve the scalability of the proposed system, the business tasks and calculation operations were separated into two modules. The results showed that the proposed system could support three-dimensional post-processing of medical images for multiple clients at the same time, which meets the demand for web-based access to three-dimensional post-processing operations on volume data.

  9. NEW SUNS IN THE COSMOS. III. MULTIFRACTAL SIGNATURE ANALYSIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freitas, D. B. de; Nepomuceno, M. M. F.; Junior, P. R. V. de Moraes

    2016-11-01

    In the present paper, we investigate the multifractality signatures in hourly time series extracted from the CoRoT spacecraft database. Our analysis is intended to highlight the possibility that astrophysical time series can be members of a particular class of complex and dynamic processes, which require several photometric variability diagnostics to characterize their structural and topological properties. To achieve this goal, we search for contributions due to a nonlinear temporal correlation and effects caused by heavier tails than the Gaussian distribution, using a detrending moving average algorithm for one-dimensional multifractal signals (MFDMA). We observe that the correlation structure is the main source of multifractality, while heavy-tailed distribution plays a minor role in generating the multifractal effects. Our work also reveals that the rotation period of stars is inherently scaled by the degree of multifractality. As a result, analyzing the multifractal degree of the referred series, we uncover an evolution of multifractality from shorter to larger periods.

  10. Fading channel simulator

    DOEpatents

    Argo, Paul E.; Fitzgerald, T. Joseph

    1993-01-01

    Fading channel effects on a transmitted communication signal are simulated with both frequency and time variations using a channel scattering function to affect the transmitted signal. A conventional channel scattering function is converted to a series of channel realizations by multiplying the square root of the channel scattering function by a complex number whose real and imaginary parts are each independent variables. The two-dimensional inverse FFT of this complex-valued channel realization yields a matrix of channel coefficients that provides a complete frequency-time description of the channel. The transmitted radio signal is segmented to provide a series of signal segments, and each segment is subjected to an FFT to generate a series of signal coefficient matrices. The channel coefficient matrices and signal coefficient matrices are then multiplied and subjected to an inverse FFT to output a signal representing the received, channel-affected radio signal. A variety of channel scattering functions can be used to characterize the response of a transmitter-receiver system to such atmospheric effects.
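
    A rough numerical reading of the procedure described above might look like the following sketch: the square root of a scattering function is weighted by independent complex Gaussians, a two-dimensional inverse FFT produces frequency-time channel coefficients, and the transmitted signal is faded segment by segment in the frequency domain. The array shapes, the flat scattering function, and the test tone are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)

def channel_realization(scattering):
    """Map a (delay x Doppler) scattering function to frequency-time channel coefficients."""
    w = rng.normal(size=scattering.shape) + 1j * rng.normal(size=scattering.shape)
    realization = np.sqrt(scattering) * w   # square root of the scattering function, randomly weighted
    return np.fft.ifft2(realization)        # 2-D inverse FFT -> (frequency, time) coefficient matrix

def apply_channel(signal, coeffs):
    """Fade the signal segment by segment in the frequency domain."""
    seg_len, n_segs = coeffs.shape
    out = []
    for i in range(n_segs):
        seg = signal[i * seg_len:(i + 1) * seg_len]
        if len(seg) < seg_len:
            break
        out.append(np.fft.ifft(np.fft.fft(seg) * coeffs[:, i]))
    return np.concatenate(out)

# toy usage: a flat scattering function and a single complex tone as the transmitted signal
scattering = np.ones((64, 32))
coeffs = channel_realization(scattering)
tx = np.exp(2j * np.pi * 0.05 * np.arange(64 * 32))
rx = apply_channel(tx, coeffs)
print(rx.shape)
```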

  11. Space shuttle simulation model

    NASA Technical Reports Server (NTRS)

    Tatom, F. B.; Smith, S. R.

    1980-01-01

    The effects of atmospheric turbulence in both horizontal and near horizontal flight, during the return of the space shuttle, are important for determining design, control, and 'pilot-in-the-loop' effects. A nonrecursive model (based on von Karman spectra) for atmospheric turbulence along the flight path of the shuttle orbiter was developed which provides for simulation of instantaneous vertical and horizontal gusts at the vehicle center-of-gravity, and also for simulation of instantaneous gust gradients. Based on this model, the time series for both gusts and gust gradients were generated and stored on a series of magnetic tapes which are entitled shuttle simulation turbulence tapes (SSTT). The time series are designed to represent atmospheric turbulence from ground level to an altitude of 10,000 meters. The turbulence generation procedure is described as well as the results of validating the simulated turbulence. Conclusions and recommendations are presented and references cited. The tabulated one dimensional von Karman spectra and the results of spectral and statistical analyses of the SSTT are contained in the appendix.

  12. A multiple-fan active control wind tunnel for outdoor wind speed and direction simulation

    NASA Astrophysics Data System (ADS)

    Wang, Jia-Ying; Meng, Qing-Hao; Luo, Bing; Zeng, Ming

    2018-03-01

    This article presents a new type of active controlled multiple-fan wind tunnel. The wind tunnel consists of swivel plates and arrays of direct current fans, and the rotation speed of each fan and the shaft angle of each swivel plate can be controlled independently for simulating different kinds of outdoor wind fields. To measure the similarity between the simulated wind field and the outdoor wind field, wind speed and direction time series of two kinds of wind fields are recorded by nine two-dimensional ultrasonic anemometers, and then statistical properties of the wind signals in different time scales are analyzed based on the empirical mode decomposition. In addition, the complexity of wind speed and direction time series is also investigated using multiscale entropy and multivariate multiscale entropy. Results suggest that the simulated wind field in the multiple-fan wind tunnel has a high degree of similarity with the outdoor wind field.

  13. Permutation entropy with vector embedding delays

    NASA Astrophysics Data System (ADS)

    Little, Douglas J.; Kane, Deb M.

    2017-12-01

    Permutation entropy (PE) is a statistic used widely for the detection of structure within a time series. Embedding delay times at which the PE is reduced are characteristic timescales for which such structure exists. Here, a generalized scheme is investigated where embedding delays are represented by vectors rather than scalars, permitting PE to be calculated over a (D-1)-dimensional space, where D is the embedding dimension. This scheme is applied to numerically generated noise, sine wave and logistic map series, and experimental data sets taken from a vertical-cavity surface emitting laser exhibiting temporally localized pulse structures within the round-trip time of the laser cavity. Results are visualized as PE maps as a function of embedding delay, with low PE values indicating combinations of embedding delays where correlation structure is present. It is demonstrated that vector embedding delays enable identification of structure that is ambiguous or masked when the embedding delay is constrained to scalar form.
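
    The sketch below computes ordinary permutation entropy with the delays supplied as a vector of successive gaps, which is one simple way to realise the vector-delay idea described above; the normalisation, the function name, and the toy signals are assumptions, and the published scheme may differ in detail.

```python
import numpy as np
from math import factorial, log

def permutation_entropy(x, delays):
    """Permutation entropy with a vector of embedding delays.

    delays : sequence of D-1 gaps; the embedded vector at time t is
             (x[t], x[t + delays[0]], x[t + delays[0] + delays[1]], ...)
    Returns PE normalised to [0, 1].
    """
    x = np.asarray(x, float)
    offsets = np.concatenate(([0], np.cumsum(delays))).astype(int)
    D = len(offsets)
    n = len(x) - offsets[-1]
    counts = {}
    for t in range(n):
        pattern = tuple(np.argsort(x[t + offsets]))      # ordinal pattern of the embedded vector
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), float) / n
    return float(-np.sum(p * np.log(p)) / log(factorial(D)))

# toy usage: white noise gives PE near 1, a slow sine gives a much lower value
rng = np.random.default_rng(4)
print(permutation_entropy(rng.normal(size=5000), delays=[1, 1]))
print(permutation_entropy(np.sin(0.05 * np.arange(5000)), delays=[5, 5]))
```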

  14. Microscopic Spin Model for the STOCK Market with Attractor Bubbling on Regular and Small-World Lattices

    NASA Astrophysics Data System (ADS)

    Krawiecki, A.

    A multi-agent spin model for changes of prices in the stock market based on the Ising-like cellular automaton with interactions between traders randomly varying in time is investigated by means of Monte Carlo simulations. The structure of interactions has topology of a small-world network obtained from regular two-dimensional square lattices with various coordination numbers by randomly cutting and rewiring edges. Simulations of the model on regular lattices do not yield time series of logarithmic price returns with statistical properties comparable with the empirical ones. In contrast, in the case of networks with a certain degree of randomness for a wide range of parameters the time series of the logarithmic price returns exhibit intermittent bursting typical of volatility clustering. Also the tails of distributions of returns obey a power scaling law with exponents comparable to those obtained from the empirical data.

  15. Experimental Control of Thermocapillary Convection in a Liquid Bridge

    NASA Technical Reports Server (NTRS)

    Petrov, Valery; Schatz, Michael F.; Muehlner, Kurt A.; VanHook, Stephen J.; McCormick, W. D.; Swift, Jack B.; Swinney, Harry L.

    1996-01-01

    We demonstrate the stabilization of an isolated unstable periodic orbit in a liquid bridge convection experiment. A model independent, nonlinear control algorithm uses temperature measurements near the liquid interface to compute control perturbations which are applied by a thermoelectric element. The algorithm employs a time series reconstruction of a nonlinear control surface in a high dimensional phase space to alter the system dynamics.

  16. TaiWan Ionospheric Model (TWIM) prediction based on time series autoregressive analysis

    NASA Astrophysics Data System (ADS)

    Tsai, L. C.; Macalalad, Ernest P.; Liu, C. H.

    2014-10-01

    As described in a previous paper, a three-dimensional ionospheric electron density (Ne) model has been constructed from vertical Ne profiles retrieved from the FormoSat3/Constellation Observing System for Meteorology, Ionosphere, and Climate GPS radio occultation measurements and worldwide ionosonde foF2 and foE data, and named the TaiWan Ionospheric Model (TWIM). The TWIM exhibits vertically fitted α-Chapman-type layers with distinct F2, F1, E, and D layers, and surface spherical harmonic approaches for the fitted layer parameters including peak density, peak density height, and scale height. To improve the TWIM into a real-time model, we have developed a time series autoregressive model to forecast short-term TWIM coefficients. The time series of TWIM coefficients are considered as realizations of stationary stochastic processes within a processing window of 30 days. The autocorrelation coefficients of these series are used to derive the autoregressive parameters and then forecast the TWIM coefficients, based on the least squares method and the Lagrange multiplier technique. The forecast root-mean-square relative TWIM coefficient errors are generally <30% for 1 day predictions. The forecast TWIM foE and foF2 values are also compared and evaluated using worldwide ionosonde data.
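
    A minimal sketch of the forecasting step, assuming the TWIM coefficient series can be treated as a stationary record within a 30-day window: fit AR coefficients by ordinary least squares and iterate one-step predictions. The order, the synthetic diurnal series, and the function name are illustrative; the paper's estimator additionally uses a Lagrange multiplier technique not reproduced here.

```python
import numpy as np

def ar_forecast(series, order, steps=1):
    """Fit an AR(order) model by least squares and iterate one-step forecasts."""
    x = np.asarray(series, float)
    # design matrix: row t holds [x[t-1], x[t-2], ..., x[t-order]]
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    y = x[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    history = list(x)
    preds = []
    for _ in range(steps):
        lags = history[-1:-order - 1:-1]           # most recent value first
        nxt = float(np.dot(coef, lags))
        preds.append(nxt)
        history.append(nxt)
    return np.array(preds)

# toy usage: 30 days of hourly values with a diurnal cycle, forecast the next 24 hours
rng = np.random.default_rng(5)
hours = np.arange(30 * 24)
series = 10 + 3 * np.sin(2 * np.pi * hours / 24) + rng.normal(scale=0.3, size=hours.size)
print(ar_forecast(series, order=48, steps=24).round(2))
```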

  17. Hierarchical Aligned Cluster Analysis for Temporal Clustering of Human Motion.

    PubMed

    Zhou, Feng; De la Torre, Fernando; Hodgins, Jessica K

    2013-03-01

    Temporal segmentation of human motion into plausible motion primitives is central to understanding and building computational models of human motion. Several issues contribute to the challenge of discovering motion primitives: the exponential nature of all possible movement combinations, the variability in the temporal scale of human actions, and the complexity of representing articulated motion. We pose the problem of learning motion primitives as one of temporal clustering, and derive an unsupervised hierarchical bottom-up framework called hierarchical aligned cluster analysis (HACA). HACA finds a partition of a given multidimensional time series into m disjoint segments such that each segment belongs to one of k clusters. HACA combines kernel k-means with the generalized dynamic time alignment kernel to cluster time series data. Moreover, it provides a natural framework to find a low-dimensional embedding for time series. HACA is efficiently optimized with a coordinate descent strategy and dynamic programming. Experimental results on motion capture and video data demonstrate the effectiveness of HACA for segmenting complex motions and as a visualization tool. We also compare the performance of HACA to state-of-the-art algorithms for temporal clustering on data of a honey bee dance. The HACA code is available online.

  18. Spatio-temporal Event Classification using Time-series Kernel based Structured Sparsity

    PubMed Central

    Jeni, László A.; Lőrincz, András; Szabó, Zoltán; Cohn, Jeffrey F.; Kanade, Takeo

    2016-01-01

    In many behavioral domains, such as facial expression and gesture, sparse structure is prevalent. This sparsity would be well suited for event detection but for one problem. Features typically are confounded by alignment error in space and time. As a consequence, high-dimensional representations such as SIFT and Gabor features have been favored despite their much greater computational cost and potential loss of information. We propose a Kernel Structured Sparsity (KSS) method that can handle both the temporal alignment problem and the structured sparse reconstruction within a common framework, and it can rely on simple features. We characterize spatio-temporal events as time-series of motion patterns and by utilizing time-series kernels we apply standard structured-sparse coding techniques to tackle this important problem. We evaluated the KSS method using both gesture and facial expression datasets that include spontaneous behavior and differ in degree of difficulty and type of ground truth coding. KSS outperformed both sparse and non-sparse methods that utilize complex image features and their temporal extensions. In the case of early facial event classification KSS had 10% higher accuracy as measured by F1 score over kernel SVM methods. PMID:27830214

  19. 75 FR 38061 - Airworthiness Directives; Airbus Model A300 B4-600 Series Airplanes; Model A300 B4-600R Series...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-01

    ... dimensional measurement of the holes, and doing corrective actions if necessary; doing an eddy current... dimensional measurement of the holes, doing an eddy current inspection of the holes for cracking, doing a cold... the effective date of this AD, prior to doing any cold working process, determine if an eddy current...

  20. Three-dimensional, automated, real-time video system for tracking limb motion in brain-machine interface studies.

    PubMed

    Peikon, Ian D; Fitzsimmons, Nathan A; Lebedev, Mikhail A; Nicolelis, Miguel A L

    2009-06-15

    Collection and analysis of limb kinematic data are essential components of the study of biological motion, including research into biomechanics, kinesiology, neurophysiology and brain-machine interfaces (BMIs). In particular, BMI research requires advanced, real-time systems capable of sampling limb kinematics with minimal contact to the subject's body. To answer this demand, we have developed an automated video tracking system for real-time tracking of multiple body parts in freely behaving primates. The system employs high-contrast markers painted on the animal's joints to continuously track the three-dimensional positions of their limbs during activity. Two-dimensional coordinates captured by each video camera are combined and converted to three-dimensional coordinates using a quadratic fitting algorithm. Real-time operation of the system is accomplished using direct memory access (DMA). The system tracks the markers at a rate of 52 frames per second (fps) in real-time and up to 100fps if video recordings are captured to be later analyzed off-line. The system has been tested in several BMI primate experiments, in which limb position was sampled simultaneously with chronic recordings of the extracellular activity of hundreds of cortical cells. During these recordings, multiple computational models were employed to extract a series of kinematic parameters from neuronal ensemble activity in real-time. The system operated reliably under these experimental conditions and was able to compensate for marker occlusions that occurred during natural movements. We propose that this system could also be extended to applications that include other classes of biological motion.

  1. Low dimensional temporal organization of spontaneous eye blinks in adults with developmental disabilities and stereotyped movement disorder.

    PubMed

    Lee, Mei-Hua; Bodfish, James W; Lewis, Mark H; Newell, Karl M

    2010-01-01

    This study investigated the mean rate and time-dependent sequential organization of spontaneous eye blinks in adults with intellectual and developmental disability (IDD) and individuals from this group who were additionally categorized with stereotypic movement disorder (IDD+SMD). The mean blink rate was lower in the IDD+SMD group than the IDD group and both of these groups had a lower blink rate than a contrast group of healthy adults. In the IDD group the n to n+1 sequential organization over time of the eye-blink durations showed a stronger compensatory organization than the contrast group suggesting decreased complexity/dimensionality of eye-blink behavior. Very low blink rate (and thus insufficient time series data) precluded analysis of time-dependent sequential properties in the IDD+SMD group. These findings support the hypothesis that both IDD and SMD are associated with a reduction in the dimension and adaptability of movement behavior and that this may serve as a risk factor for the expression of abnormal movements.

  2. Dynamic Cross-Entropy.

    PubMed

    Aur, Dorian; Vila-Rodriguez, Fidel

    2017-01-01

    Complexity measures for time series have been used in many applications to quantify the regularity of one-dimensional time series; however, many dynamical systems are spatially distributed, multidimensional systems. We introduced Dynamic Cross-Entropy (DCE), a novel multidimensional complexity measure that quantifies the degree of regularity of EEG signals in selected frequency bands. Time series generated by discrete logistic equations with varying control parameter r are used to test DCE measures. Sliding window DCE analyses are able to reveal specific period doubling bifurcations that lead to chaos. A similar behavior can be observed in seizures triggered by electroconvulsive therapy (ECT). Sample entropy data show the level of signal complexity in different phases of the ictal ECT. The transition to irregular activity is preceded by the occurrence of cyclic regular behavior. A significant increase of DCE values in successive order from high frequencies in gamma to low frequencies in the delta band reveals several phase transitions into less ordered states, possibly chaos, in the human brain. To our knowledge there are no reliable techniques able to reveal the transition to chaos in the case of multidimensional time series. In addition, DCE based on sample entropy appears to be robust to EEG artifacts compared to DCE based on Shannon entropy. The applied technique may offer new approaches to better understand nonlinear brain activity. Copyright © 2016 Elsevier B.V. All rights reserved.
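
    Since DCE is described as being built on sample entropy, the sketch below shows a plain sample-entropy estimator for a one-dimensional series; the template length, tolerance, and toy signals are conventional defaults rather than values taken from the paper, and the multidimensional cross-entropy extension is not shown.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy of a one-dimensional series (higher means more irregular)."""
    x = np.asarray(x, float)
    if r is None:
        r = 0.2 * x.std()                          # conventional tolerance

    def match_count(length):
        # all overlapping templates of the given length
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        # Chebyshev distance between every pair of templates
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=-1)
        return (d <= r).sum() - len(templates)     # drop self-matches

    B = match_count(m)
    A = match_count(m + 1)
    return -np.log(A / B)

# toy usage: a regular sine scores low, white noise scores high
rng = np.random.default_rng(6)
print(sample_entropy(np.sin(0.1 * np.arange(1000))))
print(sample_entropy(rng.normal(size=1000)))
```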

  3. Modelling spatiotemporal change using multidimensional arrays

    NASA Astrophysics Data System (ADS)

    Lu, Meng; Appel, Marius; Pebesma, Edzer

    2017-04-01

    The large variety of remote sensors, model simulations, and in-situ records provide great opportunities to model environmental change. The massive amount of high-dimensional data calls for methods to integrate data from various sources and to analyse spatiotemporal and thematic information jointly. An array is a collection of elements ordered and indexed in arbitrary dimensions, which naturally represents spatiotemporal phenomena that are identified by their geographic locations and recording time. In addition, array regridding (e.g., resampling, down-/up-scaling), dimension reduction, and spatiotemporal statistical algorithms are readily applicable to arrays. However, the role of arrays in big geoscientific data analysis has not been systematically studied: How can arrays discretise continuous spatiotemporal phenomena? How can arrays facilitate the extraction of multidimensional information? How can arrays provide a clean, scalable and reproducible change modelling process that is communicable between mathematicians, computer scientists, Earth system scientists and stakeholders? This study focuses on detecting spatiotemporal change using satellite image time series. Current change detection methods using satellite image time series commonly analyse data in separate steps: 1) forming a vegetation index, 2) conducting time series analysis on each pixel, and 3) post-processing and mapping time series analysis results; this does not consider spatiotemporal correlations and ignores much of the spectral information. Multidimensional information can be better extracted by jointly considering spatial, spectral, and temporal information. To approach this goal, we use principal component analysis to extract multispectral information and spatial autoregressive models to account for spatial correlation in residual-based time series structural change modelling. We also discuss the potential of multivariate non-parametric time series structural change methods, hierarchical modelling, and extreme event detection methods to model spatiotemporal change. We show how array operations can facilitate expressing these methods, and how the open-source array data management and analytics software SciDB and R can be used to scale the process and make it easily reproducible.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guiochon, Georges A; Shalliker, R. Andrew

    An algorithm was developed for 2DHPLC that automated the process of recognizing peaks, measuring their retention times, and then plotting the information in a two-dimensional retention plane. Following the recognition of peaks, the software then performed a series of statistical assessments of the separation performance, measuring, for example, correlation between dimensions, peak capacity and the percentage of usage of the separation space. Peak recognition was achieved by interpreting the first and second derivatives of each respective one-dimensional chromatogram to determine the 1D retention times of each solute and then compiling these retention times for each respective fraction 'cut'. Due to the nature of comprehensive 2DHPLC, adjacent cut fractions may contain peaks common to more than one cut fraction. The algorithm determined which components were common in adjacent cuts and subsequently calculated the peak maximum profile by interpolating the space between adjacent peaks. This algorithm was applied to the analysis of a two-dimensional separation of an apple flesh extract, with a first dimension comprising a cyano stationary phase and an aqueous/THF mobile phase, and a second dimension comprising C18-Hydro with an aqueous/MeOH mobile phase. A total of 187 peaks were detected.

  5. Inference of Vohradský's Models of Genetic Networks by Solving Two-Dimensional Function Optimization Problems

    PubMed Central

    Kimura, Shuhei; Sato, Masanao; Okada-Hatakeyama, Mariko

    2013-01-01

    The inference of a genetic network is a problem in which mutual interactions among genes are inferred from time-series of gene expression levels. While a number of models have been proposed to describe genetic networks, this study focuses on a mathematical model proposed by Vohradský. Because of its advantageous features, several researchers have proposed the inference methods based on Vohradský's model. When trying to analyze large-scale networks consisting of dozens of genes, however, these methods must solve high-dimensional non-linear function optimization problems. In order to resolve the difficulty of estimating the parameters of the Vohradský's model, this study proposes a new method that defines the problem as several two-dimensional function optimization problems. Through numerical experiments on artificial genetic network inference problems, we showed that, although the computation time of the proposed method is not the shortest, the method has the ability to estimate parameters of Vohradský's models more effectively with sufficiently short computation times. This study then applied the proposed method to an actual inference problem of the bacterial SOS DNA repair system, and succeeded in finding several reasonable regulations. PMID:24386175

  6. Virial series expansion and Monte Carlo studies of equation of state for hard spheres in narrow cylindrical pores

    NASA Astrophysics Data System (ADS)

    Mon, K. K.

    2018-05-01

    In this paper, the virial series expansion and constant pressure Monte Carlo method are used to study the longitudinal pressure equation of state for hard spheres in narrow cylindrical pores. We invoke dimensional reduction and map the model into an effective one-dimensional fluid model with interacting internal degrees of freedom. The one-dimensional model is extensive. The Euler relation holds, and longitudinal pressure can be probed with the standard virial series expansion method. Virial coefficients B2 and B3 were obtained analytically, and numerical quadrature was used for B4. A range of narrow pore widths 2Rp, with Rp < (√3 + 2)/4 = 0.9330... (in units of the hard sphere diameter), was used, corresponding to fluids in the important single-file formations. We have also computed the virial pressure series coefficients B2', B3', and B4' to compare a truncated virial pressure series equation of state with accurate constant pressure Monte Carlo data. We find very good agreement for a wide range of pressures for narrow pores. These results contribute toward increasing the rather limited understanding of virial coefficients and the equation of state of hard sphere fluids in narrow cylindrical pores.

  7. New method for solving inductive electric fields in the non-uniformly conducting ionosphere

    NASA Astrophysics Data System (ADS)

    Vanhamäki, H.; Amm, O.; Viljanen, A.

    2006-10-01

    We present a new calculation method for solving inductive electric fields in the ionosphere. The time series of the potential part of the ionospheric electric field, together with the Hall and Pedersen conductances serves as the input to this method. The output is the time series of the induced rotational part of the ionospheric electric field. The calculation method works in the time-domain and can be used with non-uniform, time-dependent conductances. In addition, no particular symmetry requirements are imposed on the input potential electric field. The presented method makes use of special non-local vector basis functions called the Cartesian Elementary Current Systems (CECS). This vector basis offers a convenient way of representing curl-free and divergence-free parts of 2-dimensional vector fields and makes it possible to solve the induction problem using simple linear algebra. The new calculation method is validated by comparing it with previously published results for Alfvén wave reflection from a uniformly conducting ionosphere.

  8. Ikeda-like chaos on a dynamically filtered supercontinuum light source

    NASA Astrophysics Data System (ADS)

    Chembo, Yanne K.; Jacquot, Maxime; Dudley, John M.; Larger, Laurent

    2016-08-01

    We demonstrate temporal chaos in a color-selection mechanism from the visible spectrum of a supercontinuum light source. The color-selection mechanism is governed by an acousto-optoelectronic nonlinear delayed-feedback scheme modeled by an Ikeda-like equation. Initially motivated by the design of a broad audience live demonstrator in the framework of the International Year of Light 2015, the setup also provides a different experimental tool to investigate the dynamical complexity of delayed-feedback dynamics. Deterministic hyperchaos is analyzed here from the experimental time series. A projection method identifies the delay parameter, for which the chaotic strange attractor originally evolving in an infinite-dimensional phase space can be revealed in a two-dimensional subspace.

  9. Estimating the decomposition of predictive information in multivariate systems

    NASA Astrophysics Data System (ADS)

    Faes, Luca; Kugiumtzis, Dimitris; Nollo, Giandomenico; Jurysta, Fabrice; Marinazzo, Daniele

    2015-03-01

    In the study of complex systems from observed multivariate time series, the evolution of the system under investigation can be explained in terms of the information storage within the system and the information transfer from other interacting systems. We present a framework for the model-free estimation of information storage and information transfer computed as the terms composing the predictive information about the target of a multivariate dynamical process. The approach tackles the curse of dimensionality by employing a nonuniform embedding scheme that selects progressively, among the past components of the multivariate process, only those that contribute most, in terms of conditional mutual information, to the present target process. Moreover, it computes all information-theoretic quantities using a nearest-neighbor technique designed to compensate for the bias due to the different dimensionality of individual entropy terms. The resulting estimators of prediction entropy, storage entropy, transfer entropy, and partial transfer entropy are tested on simulations of coupled linear stochastic and nonlinear deterministic dynamic processes, demonstrating the superiority of the proposed approach over the traditional estimators based on uniform embedding. The framework is then applied to multivariate physiologic time series, resulting in physiologically well-interpretable information decompositions of cardiovascular and cardiorespiratory interactions during head-up tilt and of joint brain-heart dynamics during sleep.

  10. Reliability of a Seven-Segment Foot Model with Medial and Lateral Midfoot and Forefoot Segments During Walking Gait.

    PubMed

    Cobb, Stephen C; Joshi, Mukta N; Pomeroy, Robin L

    2016-12-01

    In-vitro and invasive in-vivo studies have reported relatively independent motion in the medial and lateral forefoot segments during gait. However, most current surface-based models have not defined medial and lateral forefoot or midfoot segments. The purpose of the current study was to determine the reliability of a 7-segment foot model that includes medial and lateral midfoot and forefoot segments during walking gait. Three-dimensional positions of marker clusters located on the leg and 6 foot segments were tracked as 10 participants completed 5 walking trials. To examine the reliability of the foot model, coefficients of multiple correlation (CMC) were calculated across the trials for each participant. Three-dimensional stance time series and range of motion (ROM) during stance were also calculated for each functional articulation. CMCs for all of the functional articulations were ≥ 0.80. Overall, the rearfoot complex (leg-calcaneus segments) was the most reliable articulation and the medial midfoot complex (calcaneus-navicular segments) was the least reliable. With respect to ROM, reliability was greatest for plantarflexion/dorsiflexion and least for abduction/adduction. Further, the stance ROM and time-series patterns results between the current study and previous invasive in-vivo studies that have assessed actual bone motion were generally consistent.

  11. Using machine learning to replicate chaotic attractors and calculate Lyapunov exponents from data

    NASA Astrophysics Data System (ADS)

    Pathak, Jaideep; Lu, Zhixin; Hunt, Brian R.; Girvan, Michelle; Ott, Edward

    2017-12-01

    We use recent advances in the machine learning area known as "reservoir computing" to formulate a method for model-free estimation from data of the Lyapunov exponents of a chaotic process. The technique uses a limited time series of measurements as input to a high-dimensional dynamical system called a "reservoir." After the reservoir's response to the data is recorded, linear regression is used to learn a large set of parameters, called the "output weights." The learned output weights are then used to form a modified autonomous reservoir designed to be capable of producing an arbitrarily long time series whose ergodic properties approximate those of the input signal. When successful, we say that the autonomous reservoir reproduces the attractor's "climate." Since the reservoir equations and output weights are known, we can compute the derivatives needed to determine the Lyapunov exponents of the autonomous reservoir, which we then use as estimates of the Lyapunov exponents for the original input generating system. We illustrate the effectiveness of our technique with two examples, the Lorenz system and the Kuramoto-Sivashinsky (KS) equation. In the case of the KS equation, we note that the high dimensional nature of the system and the large number of Lyapunov exponents yield a challenging test of our method, which we find the method successfully passes.
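
    A compact sketch of the reservoir-computing workflow described above: drive a fixed random reservoir with the measured series, learn the output weights by ridge regression, and then run the trained reservoir autonomously. The reservoir size, spectral radius, regularisation, and the sine-wave demonstration are assumptions; extracting Lyapunov exponents from the autonomous reservoir's Jacobians, the paper's key step, is not shown.

```python
import numpy as np

rng = np.random.default_rng(7)

def train_esn(u, n_res=300, rho=0.9, ridge=1e-6):
    """Drive a fixed random reservoir with the series u and fit output weights by ridge regression."""
    n_in = u.shape[1]
    W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
    W = rng.normal(size=(n_res, n_res))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))       # set the spectral radius
    r = np.zeros(n_res)
    states = []
    for x in u[:-1]:
        r = np.tanh(W @ r + W_in @ x)
        states.append(r.copy())
    R = np.array(states)
    Y = u[1:]                                             # one-step-ahead targets
    W_out = np.linalg.solve(R.T @ R + ridge * np.eye(n_res), R.T @ Y).T
    return W, W_in, W_out, r

def free_run(W, W_in, W_out, r, x, steps):
    """Run the trained reservoir autonomously, feeding predictions back as input."""
    out = []
    for _ in range(steps):
        r = np.tanh(W @ r + W_in @ x)
        x = W_out @ r
        out.append(x)
    return np.array(out)

# toy usage: learn to continue a sine wave from its own past
t = np.arange(3000) * 0.05
u = np.sin(t).reshape(-1, 1)
W, W_in, W_out, r = train_esn(u)
prediction = free_run(W, W_in, W_out, r, u[-1], steps=200)
```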

  12. Observed wave characteristics during growth and decay: a case study

    NASA Astrophysics Data System (ADS)

    Prasada Rao, C. V. K.; Baba, M.

    1996-10-01

    Observed 1-h time series data on sea surface waves in the shelf waters off Goa, west coast of India (depth 80 m), during 17-24 March 1986, are analyzed with reference to the prevailing synoptic winds to understand wave growth and decay aspects. Wind speeds (U10) ranged from 0 to 11.5 m s-1, whereas significant wave height (Hs) varied between 0.6 and 2.3 m. Cross-correlation analysis between U10 and Hs revealed a time lag of 4 h. A relationship is obtained between wave steepness (Hs/L) and wave age (C/U10), viz. log10(Hs/L) = -0.53 log10(C/U10) - 1.385. Phillips' hypothesis of an f^-5 form for the equilibrium range of the wave spectrum and the relationship between non-dimensional energy (E* = Eg^2/U*^4) and non-dimensional peak frequency (v* = U*fm/g) are studied. Correlation of the non-dimensional wave parameters (E* and v*) using the present data showed better agreement with Hasselmann et al. (1976) than with Toba (1978).
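
    As a quick numerical illustration of the fitted steepness-wave-age relation quoted above, the snippet evaluates Hs/L for two arbitrary wave-age values (the values themselves are not taken from the paper).

```python
import numpy as np

# evaluate the reported regression log10(Hs/L) = -0.53 * log10(C/U10) - 1.385
for wave_age in (0.8, 2.5):                                   # arbitrary C/U10 values
    steepness = 10 ** (-0.53 * np.log10(wave_age) - 1.385)    # implied Hs/L
    print(f"C/U10 = {wave_age:>4}:  Hs/L = {steepness:.4f}")
```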

  13. Burst and inter-burst duration statistics as empirical test of long-range memory in the financial markets

    NASA Astrophysics Data System (ADS)

    Gontis, V.; Kononovicius, A.

    2017-10-01

    We address the problem of long-range memory in the financial markets. There are two conceptually different ways to reproduce power-law decay of the auto-correlation function: using fractional Brownian motion or non-linear stochastic differential equations. In this contribution we address this problem by analyzing empirical return and trading activity time series from the Forex. From the empirical time series we obtain probability density functions of burst and inter-burst duration. Our analysis reveals that the power-law exponents of the obtained probability density functions are close to 3/2, which is a characteristic feature of one-dimensional stochastic processes. This is in good agreement with the earlier proposed model of absolute return based on non-linear stochastic differential equations derived from the agent-based herding model.
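
    A sketch of the empirical procedure: threshold the absolute-return series, collect run lengths above the threshold as burst durations, and estimate the tail exponent from a log-log fit of the duration histogram. The threshold, the Student-t toy returns, and the crude fitting method are assumptions; i.i.d. noise will not reproduce the 3/2 exponent reported for Forex data, it merely exercises the code.

```python
import numpy as np

def run_lengths(mask):
    """Lengths of consecutive True runs in a boolean array (burst durations)."""
    lengths, count = [], 0
    for v in mask:
        if v:
            count += 1
        elif count:
            lengths.append(count)
            count = 0
    if count:
        lengths.append(count)
    return np.array(lengths)

# toy usage: burst durations of |returns| above one standard deviation
rng = np.random.default_rng(11)
returns = rng.standard_t(df=3, size=100_000)
durations = run_lengths(np.abs(returns) > returns.std())
# crude tail-exponent estimate from a log-log fit of the duration histogram
vals, counts = np.unique(durations, return_counts=True)
slope, _ = np.polyfit(np.log(vals), np.log(counts), 1)
print(f"estimated exponent: {-slope:.2f}")
```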

  14. Nonlinear multi-analysis of agent-based financial market dynamics by epidemic system

    NASA Astrophysics Data System (ADS)

    Lu, Yunfan; Wang, Jun; Niu, Hongli

    2015-10-01

    Based on the epidemic dynamical system, we construct a new agent-based financial time series model. To check and verify its rationality, we compare the statistical properties of the time series model with those of real stock market indices, the Shanghai Stock Exchange Composite Index and the Shenzhen Stock Exchange Component Index. For analyzing the statistical properties, we combine the multi-parameter analysis with the tail distribution analysis, the modified rescaled range analysis, and the multifractal detrended fluctuation analysis. For a better perspective, three-dimensional diagrams are used to present the analysis results. The empirical research in this paper indicates that the long-range dependence property and the multifractal phenomenon exist in the real returns and in the proposed model. Therefore, the new agent-based financial model can reproduce some important features of real stock markets.

  15. An agreement coefficient for image comparison

    USGS Publications Warehouse

    Ji, Lei; Gallo, Kevin

    2006-01-01

    Combination of datasets acquired from different sensor systems is necessary to construct a long time-series dataset for remotely sensed land-surface variables. Assessment of the agreement of the data derived from various sources is an important issue in understanding the data continuity through the time-series. Some traditional measures, including correlation coefficient, coefficient of determination, mean absolute error, and root mean square error, are not always optimal for evaluating the data agreement. For this reason, we developed a new agreement coefficient for comparing two different images. The agreement coefficient has the following properties: non-dimensional, bounded, symmetric, and distinguishable between systematic and unsystematic differences. The paper provides examples of agreement analyses for hypothetical data and actual remotely sensed data. The results demonstrate that the agreement coefficient does include the above properties, and therefore is a useful tool for image comparison.
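
    The sketch below implements one commonly quoted form of this agreement coefficient (sum of squared differences normalised by a "sum of potential differences"); the exact denominator is written from memory and should be verified against the paper before use.

```python
import numpy as np

def agreement_coefficient(x, y):
    """Bounded, symmetric agreement coefficient between two images (any array shape)."""
    x = np.asarray(x, float).ravel()
    y = np.asarray(y, float).ravel()
    ssd = np.sum((x - y) ** 2)                                 # sum of squared differences
    mx, my = x.mean(), y.mean()
    spod = np.sum((np.abs(mx - my) + np.abs(x - mx)) *
                  (np.abs(mx - my) + np.abs(y - my)))          # "sum of potential differences"
    return 1.0 - ssd / spod

# identical images give 1; noise or a systematic bias lowers the value symmetrically
rng = np.random.default_rng(8)
a = rng.uniform(0, 1, size=(64, 64))
print(agreement_coefficient(a, a), agreement_coefficient(a, a + 0.2))
```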

  16. Discovery and identification of a series of alkyl decalin isomers in petroleum geological samples.

    PubMed

    Wang, Huitong; Zhang, Shuichang; Weng, Na; Zhang, Bin; Zhu, Guangyou; Liu, Lingyan

    2015-07-07

    The comprehensive two-dimensional gas chromatography/time-of-flight mass spectrometry (GC × GC/TOFMS) has been used to characterize a crude oil and a source rock extract sample. During the process, a series of pairwise components between monocyclic alkanes and mono-aromatics have been discovered. After tentative assignments of decahydronaphthalene isomers, a series of alkyl decalin isomers have been synthesized and used for identification and validation of these petroleum compounds. From both the MS and chromatography information, these pairwise compounds were identified as 2-alkyl-decahydronaphthalenes and 1-alkyl-decahydronaphthalenes. The polarity of 1-alkyl-decahydronaphthalenes was stronger. Their long chain alkyl substituent groups may be due to bacterial transformation or different oil cracking events. This systematic profiling of alkyl-decahydronaphthalene isomers provides further understanding and recognition of these potential petroleum biomarkers.

  17. CMGTooL user's manual

    USGS Publications Warehouse

    Xu, Jingping; Lightsom, Fran; Noble, Marlene A.; Denham, Charles

    2002-01-01

    During the past several years, the sediment transport group in the Coastal and Marine Geology Program (CMGP) of the U. S. Geological Survey has made major revisions to its methodology of processing, analyzing, and maintaining the variety of oceanographic time-series data. First, CMGP completed the transition of its oceanographic time-series database to a self-documenting NetCDF (Rew et al., 1997) data format. Second, CMGP's oceanographic data variety and complexity have been greatly expanded from traditional 2-dimensional, single-point time-series measurements (e.g., electro-magnetic current meters, transmissometers) to more advanced 3-dimensional and profiling time-series measurements due to many new acquisitions of modern instruments such as the Acoustic Doppler Current Profiler (RDI, 1996), Acoustic Doppler Velocimeter, Pulse-Coherence Acoustic Doppler Profiler (SonTek, 2001), and Acoustic Backscatter Sensor (Aquatec). In order to accommodate the NetCDF format of data from the new instruments, a software package for processing, analyzing, and visualizing time-series oceanographic data was developed. It is named CMGTooL. The CMGTooL package contains two basic components: a user-friendly GUI for NetCDF file analysis, processing and manipulation; and a data analyzing program library. Most of the routines in the library are stand-alone programs suitable for batch processing. CMGTooL is written in the MATLAB computing language (The Mathworks, 1997); therefore, users must have MATLAB installed on their computer in order to use this software package. In addition, MATLAB's Signal Processing Toolbox is also required by some CMGTooL routines. Like most MATLAB programs, all CMGTooL codes are compatible with different computing platforms including PC, MAC, and UNIX machines (Note: CMGTooL has been tested on different platforms that run MATLAB 5.2 (Release 10) or lower versions. Some of the commands related to MAC may not be compatible with later releases of MATLAB). The GUI and some of the library routines call low-level NetCDF file I/O, variable and attribute functions. These NetCDF-exclusive functions are supported by a MATLAB toolbox named NetCDF, created by Dr. Charles Denham. This toolbox has to be installed in order to use the CMGTooL GUI. The CMGTooL GUI calls several routines that were initially developed by others. The authors would like to acknowledge the following scientists for their ideas and codes: Dr. Rich Signell (USGS), Dr. Chris Sherwood (USGS), and Dr. Bob Beardsley (WHOI). Many special terms that carry special meanings in either MATLAB or the NetCDF Toolbox are used in this manual. Users are encouraged to read the documents of MATLAB and NetCDF for references.

  18. Using the Graphing Calculator--in Two-Dimensional Motion Plots.

    ERIC Educational Resources Information Center

    Brueningsen, Chris; Bower, William

    1995-01-01

    Presents a series of simple activities involving generalized two-dimensional motion topics to prepare students to study projectile motion. Uses a pair of motion detectors, each connected to a calculator-based-laboratory (CBL) unit interfaced with a standard graphics calculator, to explore two-dimensional motion. (JRH)

  19. Exact three-dimensional spectral solution to surface-groundwater interactions with arbitrary surface topography

    USGS Publications Warehouse

    Worman, A.; Packman, A.I.; Marklund, L.; Harvey, J.W.; Stone, S.H.

    2006-01-01

    It has been long known that land surface topography governs both groundwater flow patterns at the regional-to-continental scale and on smaller scales such as in the hyporheic zone of streams. Here we show that the surface topography can be separated in a Fourier-series spectrum that provides an exact solution of the underlying three-dimensional groundwater flows. The new spectral solution offers a practical tool for fast calculation of subsurface flows in different hydrological applications and provides a theoretical platform for advancing conceptual understanding of the effect of landscape topography on subsurface flows. We also show how the spectrum of surface topography influences the residence time distribution for subsurface flows. The study indicates that the subsurface head variation decays exponentially with depth faster than it would with equivalent two-dimensional features, resulting in a shallower flow interaction. Copyright 2006 by the American Geophysical Union.

  20. Data-driven forecasting of high-dimensional chaotic systems with long short-term memory networks.

    PubMed

    Vlachas, Pantelis R; Byeon, Wonmin; Wan, Zhong Y; Sapsis, Themistoklis P; Koumoutsakos, Petros

    2018-05-01

    We introduce a data-driven forecasting method for high-dimensional chaotic systems using long short-term memory (LSTM) recurrent neural networks. The proposed LSTM neural networks perform inference of high-dimensional dynamical systems in their reduced order space and are shown to be an effective set of nonlinear approximators of their attractor. We demonstrate the forecasting performance of the LSTM and compare it with Gaussian processes (GPs) in time series obtained from the Lorenz 96 system, the Kuramoto-Sivashinsky equation and a prototype climate model. The LSTM networks outperform the GPs in short-term forecasting accuracy in all applications considered. A hybrid architecture, extending the LSTM with a mean stochastic model (MSM-LSTM), is proposed to ensure convergence to the invariant measure. This novel hybrid method is fully data-driven and extends the forecasting capabilities of LSTM networks.
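
    A minimal sketch of the forecasting setup, assuming a PyTorch environment: an LSTM reads a window of past states and a linear head predicts the next state, trained with mean-squared error. The window length, hidden size, and the noisy-sine stand-in for a reduced-order trajectory are assumptions; the paper's MSM-LSTM hybrid and the GP baseline are not reproduced.

```python
import torch
import torch.nn as nn

class Forecaster(nn.Module):
    """LSTM over a window of past states, linear readout for the next state."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, dim)

    def forward(self, x):                 # x: (batch, window, dim)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # prediction of the state one step ahead

# toy usage: a noisy sine as a stand-in for a reduced-order trajectory
torch.manual_seed(0)
t = torch.arange(0, 200, 0.1)
series = torch.sin(t).unsqueeze(-1) + 0.05 * torch.randn(len(t), 1)
window = 32
X = torch.stack([series[i:i + window] for i in range(len(series) - window)])
Y = series[window:]

model = Forecaster(dim=1)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                      # full-batch training, just to show the loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), Y)
    loss.backward()
    opt.step()
```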

  1. Nonlinear theory for laminated and thick plates and shells including the effects of transverse shearing

    NASA Technical Reports Server (NTRS)

    Stein, M.

    1985-01-01

    Nonlinear strain displacement relations for three-dimensional elasticity are determined in orthogonal curvilinear coordinates. To develop a two-dimensional theory, the displacements are expressed by trigonometric series representation through-the-thickness. The nonlinear strain-displacement relations are expanded into series which contain all first and second degree terms. In the series for the displacements only the first few terms are retained. Insertion of the expansions into the three-dimensional virtual work expression leads to nonlinear equations of equilibrium for laminated and thick plates and shells that include the effects of transverse shearing. Equations of equilibrium and buckling equations are derived for flat plates and cylindrical shells. The shell equations reduce to conventional transverse shearing shell equations when the effects of the trigonometric terms are omitted and to classical shell equations when the trigonometric terms are omitted and the shell is assumed to be thin.

  2. Multidimensional scaling analysis of financial time series based on modified cross-sample entropy methods

    NASA Astrophysics Data System (ADS)

    He, Jiayi; Shang, Pengjian; Xiong, Hui

    2018-06-01

    Stocks, as the concrete manifestation of financial time series with plenty of potential information, are often used in the study of financial time series. In this paper, we utilize the stock data to recognize their patterns through the dissimilarity matrix based on modified cross-sample entropy, and then three-dimensional perceptual maps of the results are provided through the multidimensional scaling method. Two modified multidimensional scaling methods are proposed in this paper, namely, multidimensional scaling based on Kronecker-delta cross-sample entropy (MDS-KCSE) and multidimensional scaling based on permutation cross-sample entropy (MDS-PCSE). These two methods use Kronecker-delta based cross-sample entropy and permutation based cross-sample entropy to replace the distance or dissimilarity measurement in classical multidimensional scaling (MDS). Multidimensional scaling based on Chebyshev distance (MDSC) is employed to provide a reference for comparisons. Our analysis reveals a clear clustering both in synthetic data and in 18 indices from diverse stock markets. It implies that time series generated by the same model are more likely to share similar irregularity than others, and that differences between stock indices, caused by country or region and by different financial policies, are reflected in the irregularity of the data. In the synthetic data experiments, not only can the time series generated by different models be distinguished, but those generated under different parameters of the same model can also be detected. In the financial data experiment, the stock indices are clearly divided into five groups. Through analysis, we find that they correspond to five regions, respectively, that is, Europe, North America, South America, Asia-Pacific (with the exception of mainland China), and mainland China and Russia. The results also demonstrate that MDS-KCSE and MDS-PCSE provide more effective divisions in experiments than MDSC.
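
    For reference, the embedding step common to MDSC, MDS-KCSE, and MDS-PCSE is classical multidimensional scaling of a dissimilarity matrix; the sketch below shows that step only, with Euclidean distances standing in for the cross-sample-entropy dissimilarities used in the paper.

```python
import numpy as np

def classical_mds(D, k=3):
    """Embed points in k dimensions from a symmetric dissimilarity matrix D."""
    D = np.asarray(D, float)
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n           # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                   # double-centered squared dissimilarities
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:k]              # keep the largest eigenvalues
    return vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0, None))

# toy usage: recover a 3-D configuration from its own pairwise distances
rng = np.random.default_rng(9)
pts = rng.normal(size=(18, 3))
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
coords = classical_mds(D, k=3)
print(coords.shape)
```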

  3. A metrics for soil hydrological processes and their intrinsic dimensionality in heterogeneous systems

    NASA Astrophysics Data System (ADS)

    Lischeid, G.; Hohenbrink, T.; Schindler, U.

    2012-04-01

    Hydrology is based on the observation that catchments process input signals, e.g., precipitation, in a highly deterministic way. Thus, the Darcy or the Richards equation can be applied to model water fluxes in the saturated or vadose zone, respectively. Soils and aquifers usually exhibit substantial spatial heterogeneities at different scales that can, in principle, be represented by corresponding parameterisations of the models. In practice, however, data are hardly available at the required spatial resolution, and accounting for observed heterogeneities of soil and aquifer structure renders models very time and CPU consuming. We hypothesize that the intrinsic dimensionality of soil hydrological processes, which is induced by spatial heterogeneities, actually is very low and that soil hydrological processes in heterogeneous soils follow approximately the same trajectory. That is, the way the soil transforms hydrological input signals is the same for different soil textures and structures. Different soils differ only with respect to the extent of transformation of the input signals. In a first step, we analysed the output of a soil hydrological model, based on the Richards equation, for homogeneous soils down to 5 m depth for different soil textures. A matrix of time series of soil matric potential and soil water content at 10 cm depth intervals was set up. The intrinsic dimensionality of that matrix was assessed using the Correlation Dimension and a non-linear principal component approach. The latter provided a metric for the extent of transformation ("damping") of the input signal. In a second step, model outputs for heterogeneous soils were analysed. In a last step, the same approaches were applied to 55 time series of observed soil water content from 15 sites and different depths. In all cases, the intrinsic dimensionality was in fact very close to unity, confirming our hypothesis. The metric provided a very efficient tool to quantify the observed behaviour, depending on depth and soil heterogeneity: different soils differed primarily with respect to the extent of damping per depth interval rather than the kind of damping. We will show how this metric can be used in a very efficient way for representing soil heterogeneities in simulation models.

  4. Peak clustering in two-dimensional gas chromatography with mass spectrometric detection based on theoretical calculation of two-dimensional peak shapes: the 2DAid approach.

    PubMed

    van Stee, Leo L P; Brinkman, Udo A Th

    2011-10-28

    A method is presented to facilitate the non-target analysis of data obtained in temperature-programmed comprehensive two-dimensional (2D) gas chromatography coupled to time-of-flight mass spectrometry (GC×GC-ToF-MS). One main difficulty of GC×GC data analysis is that each peak is usually modulated several times and therefore appears as a series of peaks (or peaklets) in the one-dimensionally recorded data. The proposed method, 2DAid, uses basic chromatographic laws to calculate the theoretical shape of a 2D peak (a cluster of peaklets originating from the same analyte) in order to define the area in which the peaklets of each individual compound can be expected to show up. Based on analyte-identity information obtained by means of mass spectral library searching, the individual peaklets are then combined into a single 2D peak. The method is applied, amongst others, to a complex mixture containing 362 analytes. It is demonstrated that the 2D peak shapes can be accurately predicted and that clustering and further processing can reduce the final peak list to a manageable size. Copyright © 2011 Elsevier B.V. All rights reserved.

  5. Recurrence network measures for hypothesis testing using surrogate data: Application to black hole light curves

    NASA Astrophysics Data System (ADS)

    Jacob, Rinku; Harikrishnan, K. P.; Misra, R.; Ambika, G.

    2018-01-01

    Recurrence networks and the associated statistical measures have become important tools in the analysis of time series data. In this work, we test how effective the recurrence network measures are in analyzing real world data involving two main types of noise, white noise and colored noise. We use two prominent network measures as discriminating statistic for hypothesis testing using surrogate data for a specific null hypothesis that the data is derived from a linear stochastic process. We show that the characteristic path length is especially efficient as a discriminating measure with the conclusions reasonably accurate even with limited number of data points in the time series. We also highlight an additional advantage of the network approach in identifying the dimensionality of the system underlying the time series through a convergence measure derived from the probability distribution of the local clustering coefficients. As examples of real world data, we use the light curves from a prominent black hole system and show that a combined analysis using three primary network measures can provide vital information regarding the nature of temporal variability of light curves from different spectroscopic classes.

  6. Nonlinear analysis of dynamic signature

    NASA Astrophysics Data System (ADS)

    Rashidi, S.; Fallah, A.; Towhidkhah, F.

    2013-12-01

    Signature is a long-trained motor skill resulting in a well-formed combination of segments such as strokes and loops. It is a physical manifestation of complex motor processes. The problem, generally stated, is how relative simplicity in behavior emerges from the considerable complexity of the perception-action system that produces behavior within an infinitely variable biomechanical and environmental context. To address this problem, we present evidence indicating that the motor control dynamics of the signing process are chaotic. This chaotic dynamic may explain a richer array of time series behavior in the motor skill of signature. Nonlinear analysis is a powerful approach and a suitable tool for characterizing dynamical systems through concepts such as fractal dimension and Lyapunov exponents. Accordingly, the signature data can be analyzed in both the horizontal and vertical directions as time series of position and velocity. We observed from the results that noninteger values of the correlation dimension indicate low-dimensional deterministic dynamics. This result was confirmed using surrogate data tests. We also used the time series to calculate the largest Lyapunov exponent and obtained a positive value. These results constitute significant evidence that signature data are the outcome of chaos in a nonlinear dynamical system of motor control.

  7. Analyses of mean and turbulent motion in the tropics with the use of unequally spaced data

    NASA Technical Reports Server (NTRS)

    Kao, S. K.; Nimmo, E. J.

    1979-01-01

    Wind velocities from 25 km to 60 km over Ascension Island, Fort Sherman and Kwajalein for the period January 1970 to December 1971 are analyzed in order to achieve a better understanding of the mean flow, the eddy kinetic energy and the Eulerian time spectra of the eddy kinetic energy. Since the data are unequally spaced in time, techniques of one-dimensional covariance theory were utilized and an unequally spaced time series analysis was carried out. The theoretical equations for two-dimensional analysis, or wavenumber-frequency analysis, of unequally spaced data were developed. Analysis of the turbulent winds and of the average seasonal variance and eddy kinetic energy of the turbulent winds indicated that the maximum total variance and energy are associated with the east-west velocity component. This is particularly true for long-period seasonal waves, which dominate the total energy spectrum. Additionally, there is a shift of east-west-component energy into longer-period waves as altitude increases from 30 km to 50 km.

  8. Analytical Modeling of a Novel Transverse Flux Machine for Direct Drive Wind Turbine Applications: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hasan, Iftekhar; Husain, Tausif; Uddin, Md Wasi

    2015-08-24

    This paper presents a nonlinear analytical model of a novel double-sided flux concentrating Transverse Flux Machine (TFM) based on the Magnetic Equivalent Circuit (MEC) model. The analytical model uses a series-parallel combination of flux tubes to predict the flux paths through different parts of the machine including air gaps, permanent magnets, stator, and rotor. The two-dimensional MEC model approximates the complex three-dimensional flux paths of the TFM and includes the effects of magnetic saturation. The model is capable of adapting to any geometry that makes it a good alternative for evaluating prospective designs of TFM compared to finite element solvers that are numerically intensive and require more computation time. A single-phase, 1-kW, 400-rpm machine is analytically modeled, and its resulting flux distribution, no-load EMF, and torque are verified with finite element analysis. The results are found to be in agreement, with less than 5% error, while reducing the computation time by 25 times.

  9. Analytical Modeling of a Novel Transverse Flux Machine for Direct Drive Wind Turbine Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hasan, Iftekhar; Husain, Tausif; Uddin, Md Wasi

    2015-09-02

    This paper presents a nonlinear analytical model of a novel double sided flux concentrating Transverse Flux Machine (TFM) based on the Magnetic Equivalent Circuit (MEC) model. The analytical model uses a series-parallel combination of flux tubes to predict the flux paths through different parts of the machine including air gaps, permanent magnets (PM), stator, and rotor. The two-dimensional MEC model approximates the complex three-dimensional flux paths of the TFM and includes the effects of magnetic saturation. The model is capable of adapting to any geometry which makes it a good alternative for evaluating prospective designs of TFM as compared to finite element solvers which are numerically intensive and require more computation time. A single phase, 1 kW, 400 rpm machine is analytically modeled and its resulting flux distribution, no-load EMF and torque, verified with Finite Element Analysis (FEA). The results are found to be in agreement with less than 5% error, while reducing the computation time by 25 times.

  10. Random-subset fitting of digital holograms for fast three-dimensional particle tracking [invited].

    PubMed

    Dimiduk, Thomas G; Perry, Rebecca W; Fung, Jerome; Manoharan, Vinothan N

    2014-09-20

    Fitting scattering solutions to time series of digital holograms is a precise way to measure three-dimensional dynamics of microscale objects such as colloidal particles. However, this inverse-problem approach is computationally expensive. We show that the computational time can be reduced by an order of magnitude or more by fitting to a random subset of the pixels in a hologram. We demonstrate our algorithm on experimentally measured holograms of micrometer-scale colloidal particles, and we show that 20-fold increases in speed, relative to fitting full frames, can be attained while introducing errors in the particle positions of 10 nm or less. The method is straightforward to implement and works for any scattering model. It also enables a parallelization strategy wherein random-subset fitting is used to quickly determine initial guesses that are subsequently used to fit full frames in parallel. This approach may prove particularly useful for studying rare events, such as nucleation, that can only be captured with high frame rates over long times.
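
    A minimal sketch of the random-subset idea is given below, with a two-dimensional Gaussian spot standing in for the hologram-scattering model used in the paper; the model function, image size, noise level, and the 5% subset fraction are all assumptions for illustration. Only a random fraction of the pixels enters the least-squares fit.

```python
# Hedged sketch: fit a parametric image model to a random subset of pixels
# (a toy Gaussian spot replaces the scattering model used in the paper).
import numpy as np
from scipy.optimize import least_squares

def model(params, xs, ys):
    x0, y0, sigma, amp = params
    return amp * np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2.0 * sigma ** 2))

ny, nx = 256, 256
yy, xx = np.mgrid[0:ny, 0:nx]
rng = np.random.default_rng(2)
truth = (130.0, 120.0, 9.0, 1.0)
frame = model(truth, xx, yy) + 0.02 * rng.standard_normal((ny, nx))

# Fit to a random 5% subset of pixels instead of the full frame.
idx = rng.choice(frame.size, size=frame.size // 20, replace=False)
xs, ys, data = xx.ravel()[idx], yy.ravel()[idx], frame.ravel()[idx]

def residuals(params):
    return model(params, xs, ys) - data

fit = least_squares(residuals, x0=(128.0, 128.0, 5.0, 0.8))
print(fit.x)   # recovered (x0, y0, sigma, amp)
```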

  11. Nonaxisymmetric evolution in protostellar disks

    NASA Technical Reports Server (NTRS)

    Laughlin, Gregory; Bodenheimer, Peter

    1994-01-01

    We present a two-dimensional, multigridded hydrodynamical simulation of the collapse of an axisymmetric, rotating, 1 solar mass protostellar cloud, which forms a resolved, hydrostatic disk. The code includes the effects of physical viscosity, radiative transfer and radiative acceleration but not magnetic fields. We examine how the disk is affected by the inclusion of turbulent viscosity by comparing a viscous simulation with an inviscid model evolved from the same initial conditions, and we derive a disk evolutionary timescale on the order of 300,000 years if alpha = 0.01. Effects arising from non-axisymmetric gravitational instabilities in the protostellar disk are followed with a three-dimensional SPH code, starting from the two-dimensional structure. We find that the disk is prone to a series of spiral instabilities with primary azimuthal mode numbers m = 1 and m = 2. The torques induced by these nonaxisymmetric structures drive the transport of angular momentum and mass through the disk, readjusting the surface density profile toward more stable configurations. We present a series of analyses which characterize both the development and the likely source of the instabilities. We speculate that an evolving disk which maintains a minimum Toomre Q-value of approximately 1.4 will have a total evolutionary span of several times 10^5 years, comparable to, but somewhat shorter than, the evolutionary timescale resulting from viscous turbulence alone. We compare the evolution resulting from nonaxisymmetric instabilities with solutions of a one-dimensional viscous diffusion equation applied to the initial surface density and temperature profile. We find that an effective alpha-value of 0.03 is a good fit to the results of the simulation. However, the effective alpha will depend on the minimum Q in the disk at the time the instability is activated. We argue that the major fraction of the transport characterized by the value of alpha is due to the action of gravitational torques, and does not arise from inherent viscosity within the smoothed particle hydrodynamics method.

  12. Use of forecasting signatures to help distinguish periodicity, randomness, and chaos in ripples and other spatial patterns

    USGS Publications Warehouse

    Rubin, D.M.

    1992-01-01

    Forecasting of one-dimensional time series previously has been used to help distinguish periodicity, chaos, and noise. This paper presents two-dimensional generalizations for making such distinctions for spatial patterns. The techniques are evaluated using synthetic spatial patterns and then are applied to a natural example: ripples formed in sand by blowing wind. Tests with the synthetic patterns demonstrate that the forecasting techniques can be applied to two-dimensional spatial patterns, with the same utility and limitations as when applied to one-dimensional time series. One limitation is that some combinations of periodicity and randomness exhibit forecasting signatures that mimic those of chaos. For example, sine waves distorted with correlated phase noise have forecasting errors that increase with forecasting distance, errors that are minimized using nonlinear models at moderate embedding dimensions, and forecasting properties that differ significantly between the original and surrogates. Ripples formed in sand by flowing air or water typically vary in geometry from one to another, even when formed in a flow that is uniform on a large scale; each ripple modifies the local flow or sand-transport field, thereby influencing the geometry of the next ripple downcurrent. Spatial forecasting was used to evaluate the hypothesis that such a deterministic process - rather than randomness or quasiperiodicity - is responsible for the variation between successive ripples. This hypothesis is supported by a forecasting error that increases with forecasting distance, a greater accuracy of nonlinear relative to linear models, and significant differences between forecasts made with the original ripples and those made with surrogate patterns. Forecasting signatures cannot be used to distinguish ripple geometry from sine waves with correlated phase noise, but this kind of structure can be ruled out by two geometric properties of the ripples: successive ripples are highly correlated in wavelength, and ripple crests display dislocations such as branchings and mergers. © 1992 American Institute of Physics.

  13. Statistical Downscaling in Multi-dimensional Wave Climate Forecast

    NASA Astrophysics Data System (ADS)

    Camus, P.; Méndez, F. J.; Medina, R.; Losada, I. J.; Cofiño, A. S.; Gutiérrez, J. M.

    2009-04-01

    Wave climate at a particular site is defined by the statistical distribution of sea state parameters, such as significant wave height, mean wave period, mean wave direction, wind velocity, wind direction and storm surge. Nowadays, long-term time series of these parameters are available from reanalysis databases obtained by numerical models. The Self-Organizing Map (SOM) technique is applied to characterize the multi-dimensional wave climate, obtaining the relevant "wave types" spanning the historical variability. This technique summarizes the multi-dimensional wave climate in terms of a set of clusters projected onto a low-dimensional lattice with a spatial organization, providing Probability Density Functions (PDFs) on the lattice. On the other hand, wind and storm surge depend on the instantaneous local large-scale sea level pressure (SLP) fields, while waves depend on the recent history of these fields (say, 1 to 5 days). Thus, these variables are associated with large-scale atmospheric circulation patterns. In this work, a nearest-neighbors analog method is used to predict the monthly multi-dimensional wave climate. This method establishes relationships between the large-scale atmospheric circulation patterns from numerical models (SLP fields as predictors) and local wave databases of observations (monthly wave climate SOM PDFs as predictands) to set up statistical models. A wave reanalysis database, developed by Puertos del Estado (Ministerio de Fomento), is considered as the historical time series of local variables. The simultaneous SLP fields calculated by the NCEP atmospheric reanalysis are used as predictors. Several applications with different sizes of the sea level pressure grid and different temporal resolutions are compared to obtain the optimal statistical model that best represents the monthly wave climate at a particular site. In this work we examine the potential skill of this downscaling approach under perfect-model conditions, but we also analyze the suitability of this methodology for seasonal forecasting and for long-term climate change scenario projections of wave climate.

  14. Spatiotemporal Permutation Entropy as a Measure for Complexity of Cardiac Arrhythmia

    NASA Astrophysics Data System (ADS)

    Schlemmer, Alexander; Berg, Sebastian; Lilienkamp, Thomas; Luther, Stefan; Parlitz, Ulrich

    2018-05-01

    Permutation entropy (PE) is a robust quantity for measuring the complexity of time series. In the cardiac community it is predominantly used in the context of electrocardiogram (ECG) signal analysis for diagnosis and prediction, with a major application found in heart rate variability parameters. In this article we combine spatial and temporal PE to form a spatiotemporal PE that captures both the complexity of spatial structures and temporal complexity at the same time. We demonstrate, using two datasets from simulated cardiac arrhythmia, that the spatiotemporal PE (STPE) quantifies complexity, and we compare it to phase singularity analysis and spatial PE (SPE). These datasets simulate ventricular fibrillation (VF) on a two-dimensional and a three-dimensional medium using the Fenton-Karma model. We show that SPE and STPE are robust against noise and demonstrate their usefulness for extracting complexity features at different spatial scales.
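
    For reference, the sketch below computes ordinary (temporal) permutation entropy of a one-dimensional series from the relative frequencies of ordinal patterns; the spatiotemporal extension described above additionally pools patterns over the spatial dimensions. The order and delay values are illustrative choices, not the paper's settings.

```python
# Minimal temporal permutation-entropy sketch, normalised to [0, 1].
import numpy as np
from math import factorial
from collections import Counter

def permutation_entropy(x, order=3, delay=1):
    n = len(x) - (order - 1) * delay
    patterns = Counter()
    for i in range(n):
        window = x[i : i + order * delay : delay]
        patterns[tuple(np.argsort(window))] += 1     # ordinal pattern of the window
    p = np.array(list(patterns.values()), dtype=float) / n
    return -np.sum(p * np.log(p)) / np.log(factorial(order))

rng = np.random.default_rng(3)
print(permutation_entropy(np.sin(0.1 * np.arange(1000))))   # low complexity
print(permutation_entropy(rng.standard_normal(1000)))       # near 1 for white noise
```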

  15. REVISITING EVIDENCE OF CHAOS IN X-RAY LIGHT CURVES: THE CASE OF GRS 1915+105

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mannattil, Manu; Gupta, Himanshu; Chakraborty, Sagar, E-mail: mmanu@iitk.ac.in, E-mail: hiugupta@iitk.ac.in, E-mail: sagarc@iitk.ac.in

    2016-12-20

    Nonlinear time series analysis has been widely used to search for signatures of low-dimensional chaos in light curves emanating from astrophysical bodies. A particularly popular example is the microquasar GRS 1915+105, whose irregular but systematic X-ray variability has been well studied using data acquired by the Rossi X-ray Timing Explorer . With a view to building simpler models of X-ray variability, attempts have been made to classify the light curves of GRS 1915+105 as chaotic or stochastic. Contrary to some of the earlier suggestions, after careful analysis, we find no evidence for chaos or determinism in any of the GRS 1915+105 classes. The dearth of long and stationary data sets representing all the different variability classes of GRS 1915+105 makes it a poor candidate for analysis using nonlinear time series techniques. We conclude that either very exhaustive data analysis with sufficiently long and stationary light curves should be performed, keeping all the pitfalls of nonlinear time series analysis in mind, or alternative schemes of classifying the light curves should be adopted. The generic limitations of the techniques that we point out in the context of GRS 1915+105 affect all similar investigations of light curves from other astrophysical sources.

  16. q-Gaussian distributions and multiplicative stochastic processes for analysis of multiple financial time series

    NASA Astrophysics Data System (ADS)

    Sato, Aki-Hiro

    2010-12-01

    This study considers q-Gaussian distributions and stochastic differential equations with both multiplicative and additive noises. In the M-dimensional case, a q-Gaussian distribution can be theoretically derived as the stationary probability distribution of a multiplicative stochastic differential equation with mutually independent multiplicative and additive noises. Using the proposed stochastic differential equation, a method to evaluate a default probability under a given risk buffer is presented.
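
    A minimal Euler-Maruyama sketch of this class of model is given below: a linear drift with independent multiplicative and additive Gaussian noise terms, which for suitable coefficients yields a heavy-tailed (q-Gaussian-like) stationary distribution. The coefficients, step size, and diagnostics are illustrative assumptions, not the paper's calibration.

```python
# Euler-Maruyama sketch of dx = -a*x dt + b*x dW1 + c dW2 with independent Wiener terms.
import numpy as np

def simulate(a=1.0, b=0.7, c=0.3, dt=1e-3, steps=200_000, seed=4):
    rng = np.random.default_rng(seed)
    x = np.empty(steps)
    x[0] = 0.0
    sqdt = np.sqrt(dt)
    for i in range(1, steps):
        dw1, dw2 = rng.standard_normal(2) * sqdt
        # multiplicative noise (b*x dW1) plus additive noise (c dW2)
        x[i] = x[i - 1] - a * x[i - 1] * dt + b * x[i - 1] * dw1 + c * dw2
    return x

samples = simulate()
# Heavy tails show up as a noticeable fraction of samples beyond 3 standard deviations.
print(np.std(samples), np.mean(np.abs(samples) > 3.0 * np.std(samples)))
```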

  17. Observing spatio-temporal dynamics of excitable media using reservoir computing

    NASA Astrophysics Data System (ADS)

    Zimmermann, Roland S.; Parlitz, Ulrich

    2018-04-01

    We present a dynamical observer for two-dimensional partial differential equation models describing excitable media, where the required cross-prediction from observed time series to unmeasured state variables is provided by Echo State Networks that receive input only from local regions in space. The efficacy of this approach is demonstrated for (noisy) data from a (cubic) Barkley model and from the Bueno-Orovio-Cherry-Fenton model describing chaotic electrical wave propagation in cardiac tissue.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aldridge, David F.; Bartel, Lewis C.

    Program LETS calculates the electric current distribution (in space and time) along an electrically energized steel-cased geologic borehole situated within the subsurface earth. The borehole is modeled as an electrical transmission line that “leaks” current into the surrounding geology. Parameters pertinent to the transmission line current calculation (i.e., series resistance and inductance, shunt capacitance and conductance) are obtained by sampling the electromagnetic (EM) properties of a three-dimensional (3D) geologic earth model along a (possibly deviated) well track.

  19. PENDISC: a simple method for constructing a mathematical model from time-series data of metabolite concentrations.

    PubMed

    Sriyudthsak, Kansuporn; Iwata, Michio; Hirai, Masami Yokota; Shiraishi, Fumihide

    2014-06-01

    The availability of large-scale datasets has led to more effort being made to understand characteristics of metabolic reaction networks. However, because the large-scale data are semi-quantitative, and may contain biological variations and/or analytical errors, it remains a challenge to construct a mathematical model with precise parameters using only these data. The present work proposes a simple method, referred to as PENDISC (Parameter Estimation in a Non-DImensionalized S-system with Constraints), to assist the complex process of parameter estimation in the construction of a mathematical model for a given metabolic reaction system. The PENDISC method was evaluated using two simple mathematical models: a linear metabolic pathway model with inhibition and a branched metabolic pathway model with inhibition and activation. The results indicate that a smaller number of data points and rate constant parameters enhances the agreement between calculated values and time-series data of metabolite concentrations, and leads to faster convergence when the same initial estimates are used for the fitting. This method is also shown to be applicable to noisy time-series data and to unmeasurable metabolite concentrations in a network, and to have the potential to handle metabolome data of a relatively large-scale metabolic reaction system. Furthermore, it was applied to aspartate-derived amino acid biosynthesis in the Arabidopsis thaliana plant. The result provides confirmation that the constructed mathematical model agrees satisfactorily with the time-series datasets of seven metabolite concentrations.

  20. A fast quadrature-based numerical method for the continuous spectrum biphasic poroviscoelastic model of articular cartilage.

    PubMed

    Stuebner, Michael; Haider, Mansoor A

    2010-06-18

    A new and efficient method for numerical solution of the continuous spectrum biphasic poroviscoelastic (BPVE) model of articular cartilage is presented. Development of the method is based on a composite Gauss-Legendre quadrature approximation of the continuous spectrum relaxation function that leads to an exponential series representation. The separability property of the exponential terms in the series is exploited to develop a numerical scheme that can be reduced to an update rule requiring retention of the strain history at only the previous time step. The cost of the resulting temporal discretization scheme is O(N) for N time steps. Application and calibration of the method is illustrated in the context of a finite difference solution of the one-dimensional confined compression BPVE stress-relaxation problem. Accuracy of the numerical method is demonstrated by comparison to a theoretical Laplace transform solution for a range of viscoelastic relaxation times that are representative of articular cartilage. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
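
    The sketch below illustrates the general idea behind such an update rule, in an assumed simple form rather than the paper's discretization: when the relaxation function is a sum of exponentials, each mode of the hereditary convolution integral can be updated recursively from the previous step only, giving O(N) cost for N time steps.

```python
# Recursive update of a hereditary integral with an exponential-series kernel.
import numpy as np

def hereditary_response(strain, dt, weights, taus):
    """I(t_n) ~ sum_k w_k * int_0^{t_n} exp(-(t_n - s)/tau_k) d(strain), first-order update."""
    weights = np.asarray(weights, dtype=float)
    decay = np.exp(-dt / np.asarray(taus, dtype=float))
    state = np.zeros_like(weights)                 # one scalar of history per exponential mode
    out = np.zeros(len(strain))
    for n in range(1, len(strain)):
        # Each mode decays independently and absorbs the new strain increment,
        # so only the previous step's state is retained: O(N) total cost.
        state = decay * state + weights * (strain[n] - strain[n - 1])
        out[n] = state.sum()
    return out

t = np.linspace(0.0, 10.0, 2001)
strain = np.minimum(t, 1.0)                        # ramp-and-hold stress-relaxation input
print(hereditary_response(strain, t[1] - t[0], weights=[0.5, 0.3], taus=[0.2, 2.0])[-1])
```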

  1. Rapid determination of thermodynamic parameters from one-dimensional programmed-temperature gas chromatography for use in retention time prediction in comprehensive multidimensional chromatography.

    PubMed

    McGinitie, Teague M; Ebrahimi-Najafabadi, Heshmatollah; Harynuk, James J

    2014-01-17

    A new method for estimating the thermodynamic parameters ΔH(T0), ΔS(T0), and ΔCP for use in thermodynamic modeling of GC×GC separations has been developed. The method is an alternative to the traditional isothermal separations required to fit a three-parameter thermodynamic model to retention data. Herein, a non-linear optimization technique is used to estimate the parameters from a series of temperature-programmed separations using the Nelder-Mead simplex algorithm. With this method, the time required to obtain estimates of thermodynamic parameters for a series of analytes is significantly reduced. This new method allows for precise predictions of retention time, with the average error being only 0.2 s for 1D separations. Predictions for GC×GC separations were also in agreement with experimental measurements, having an average relative error of 0.37% for (1)tr and 2.1% for (2)tr. Copyright © 2013 Elsevier B.V. All rights reserved.
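
    The following sketch shows, under strongly simplified assumptions, how such an estimation can be set up: a plug-flow retention-time predictor built from the three thermodynamic parameters is fitted to temperature-programmed retention times with the Nelder-Mead simplex in SciPy. The retention model, phase ratio, hold-up time, ramps, and "measured" values below are illustrative stand-ins, not the published method or data.

```python
# Hedged sketch: Nelder-Mead fit of (dH, dS, dCp) to temperature-programmed retention times.
import numpy as np
from scipy.optimize import minimize

R, T0, BETA, T_M = 8.314, 363.15, 250.0, 60.0    # gas constant, ref. T (K), phase ratio, hold-up (s)

def k_factor(T, dH, dS, dCp):
    """Retention factor from dH(T0), dS(T0), dCp via a Gibbs-Helmholtz-type relation."""
    dH_T = dH + dCp * (T - T0)
    dS_T = dS + dCp * np.log(T / T0)
    return np.exp(-dH_T / (R * T) + dS_T / R) / BETA

def retention_time(params, T_start=323.15, ramp=0.1667, dt=0.5):
    """Integrate the elution criterion  int dt / (t_M*(1+k(T(t)))) = 1  along a linear ramp."""
    progress, t = 0.0, 0.0
    while progress < 1.0 and t < 7200.0:
        T = T_start + ramp * t
        progress += dt / (T_M * (1.0 + k_factor(T, *params)))
        t += dt
    return t

measured = np.array([1460.0, 980.0, 650.0])      # hypothetical retention times (s)
ramps = [0.0833, 0.1667, 0.3333]                 # 5, 10, 20 K/min expressed in K/s

def objective(params):
    pred = [retention_time(params, ramp=r) for r in ramps]
    return float(np.sum((np.array(pred) - measured) ** 2))

fit = minimize(objective, x0=[-40000.0, -80.0, -50.0], method="Nelder-Mead",
               options={"maxiter": 300, "xatol": 1.0, "fatol": 1.0})
print(fit.x)                                     # estimated dH(T0), dS(T0), dCp
```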

  2. A three-dimensional semi-analytical solution for predicting drug release through the orifice of a spherical device.

    PubMed

    Simon, Laurent; Ospina, Juan

    2016-07-25

    Three-dimensional solute transport was investigated for a spherical device with a release hole. The governing equation was derived using Fick's second law. A mixed Neumann-Dirichlet condition was imposed at the boundary to represent diffusion through a small region on the surface of the device. The cumulative percentage of drug released was calculated in the Laplace domain and represented by the first term of an infinite series of Legendre and modified Bessel functions of the first kind. Application of the Zakian algorithm yielded the time-domain closed-form expression. The first-order solution closely matched a numerical solution generated by Mathematica(®). The proposed method allowed computation of the characteristic time. A larger surface pore resulted in a smaller effective time constant. The agreement between the numerical solution and the semi-analytical method improved noticeably as the size of the orifice increased. It took four time constants for the device to release approximately ninety-eight percent of its drug content. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Low Dimensional Temporal Organization of Spontaneous Eye Blinks in Adults with Developmental Disabilities and Stereotyped Movement Disorder

    PubMed Central

    Lee, Mei-Hua; Bodfish, James W.; Lewis, Mark H.; Newell, Karl M.

    2009-01-01

    This study investigated the mean rate and time-dependent sequential organization of spontaneous eye blinks in adults with intellectual and developmental disability (IDD) and individuals from this group that were additionally categorized with stereotypic movement disorder (IDD+SMD). The mean blink rate was lower in the IDD+SMD group than the IDD group and both of these groups had a lower blink rate than a contrast group of healthy adults. In the IDD group the n to n+1 sequential organization over time of the eye blink durations showed a stronger compensatory organization than the contrast group suggesting decreased complexity/dimensionality of eye-blink behavior. Very low blink rate (and thus insufficient time series data) precluded analysis of time-dependent sequential properties in the IDD+SMD group. These findings support the hypothesis that both IDD and SMD are associated with a reduction in the dimension and adaptability of movement behavior and that this may serve as a risk factor for the expression of abnormal movements. PMID:19819672

  4. Flood mapping in ungauged basins using fully continuous hydrologic-hydraulic modeling

    NASA Astrophysics Data System (ADS)

    Grimaldi, Salvatore; Petroselli, Andrea; Arcangeletti, Ettore; Nardi, Fernando

    2013-04-01

    In this work, a fully-continuous hydrologic-hydraulic modeling framework for flood mapping is introduced and tested. It is characterized by a simulation of a long rainfall time series at sub-daily resolution that feeds a continuous rainfall-runoff model producing a discharge time series that is directly given as an input to a bi-dimensional hydraulic model. The main advantage of the proposed approach is to avoid the use of the design hyetograph and the design hydrograph that constitute the main source of subjective analysis and uncertainty for standard methods. The proposed procedure is optimized for small and ungauged watersheds where empirical models are commonly applied. Results of a simple real case study confirm that this experimental fully-continuous framework may pave the way for the implementation of a less subjective and potentially automated procedure for flood hazard mapping.

  5. Evaluation of a Magneto-optical Filter and a Fabry-perot Interferometer for the Measurement of Solar Velocity Fields from Space

    NASA Technical Reports Server (NTRS)

    Rhodes, E. J., Jr.; Cacciani, A.; Blamont, J.; Tomczyk, S.; Ulrich, R. K.; Howard, R. F.

    1984-01-01

    A program was developed to evaluate the performance of three different devices as possible space-borne solar velocity field imagers. Two of these three devices, a magneto-optical filter and a molecular-adherence Fabry-Perot interferometer, were installed in a newly constructed observing system located at the 60-foot tower telescope at the Mt. Wilson Observatory. Time series of solar filtergrams and Dopplergrams lasting up to 10 hours per day were obtained with the filter, while shorter runs were obtained with the Fabry-Perot. Two-dimensional k_h-omega power spectra, which clearly show the well-known p-mode ridges, were computed from the time series obtained with the magneto-optical filter. These power spectra were compared with similar power spectra obtained recently with the 13.7-m McMath spectrograph at Kitt Peak.

  6. Cross over of recurrence networks to random graphs and random geometric graphs

    NASA Astrophysics Data System (ADS)

    Jacob, Rinku; Harikrishnan, K. P.; Misra, R.; Ambika, G.

    2017-02-01

    Recurrence networks are complex networks constructed from the time series of chaotic dynamical systems where the connection between two nodes is limited by the recurrence threshold. This condition makes the topology of every recurrence network unique, with the degree distribution determined by the probability density variations of the representative attractor from which it is constructed. Here we numerically investigate the properties of recurrence networks from standard low-dimensional chaotic attractors using some basic network measures and show how the recurrence networks are different from random and scale-free networks. In particular, we show that all recurrence networks can cross over to random geometric graphs by adding a sufficient amount of noise to the time series, and into classical random graphs by increasing the range of interaction to the system size. We also highlight the effectiveness of a combined plot of characteristic path length and clustering coefficient in capturing the small changes in the network characteristics.

  7. Assessment of Walking Stability of Elderly by Means of Nonlinear Time-Series Analysis and Simple Accelerometry

    NASA Astrophysics Data System (ADS)

    Ohtaki, Yasuaki; Arif, Muhammad; Suzuki, Akihiro; Fujita, Kazuki; Inooka, Hikaru; Nagatomi, Ryoichi; Tsuji, Ichiro

    This study presents an assessment of walking stability in elderly people, focusing on the local dynamic stability of walking. Its main objectives were to propose a technique to quantify local dynamic stability using nonlinear time-series analyses and a portable instrument, and to investigate its reliability in revealing the efficacy of an exercise training intervention for elderly people aimed at improving walking stability. The method involves measurement of the three-dimensional acceleration of the upper body and computation of Lyapunov exponents, thereby directly quantifying the local stability of the dynamic system. Straight level walking of young and elderly subjects was investigated in the experimental study. We compared Lyapunov exponents of the young and the elderly subjects, and of groups before and after the exercise intervention. The experimental results demonstrated that the exercise intervention improved the local dynamic stability of walking. The proposed method was useful in revealing the effects and efficacy of the exercise intervention for elderly people.

  8. Optimized stereo matching in binocular three-dimensional measurement system using structured light.

    PubMed

    Liu, Kun; Zhou, Changhe; Wei, Shengbin; Wang, Shaoqing; Fan, Xin; Ma, Jianyong

    2014-09-10

    In this paper, we develop an optimized stereo-matching method used in an active binocular three-dimensional measurement system. A traditional dense stereo-matching algorithm is time-consuming due to a long search range and the high complexity of the similarity evaluation. We project a binary fringe pattern in combination with a series of N binary band-limited patterns. In order to prune the search range, we execute an initial matching before the exhaustive matching and evaluate the similarity measure using logical comparison instead of complicated floating-point operations. Finally, an accurate point cloud can be obtained by triangulation and subpixel interpolation. The experimental results verify the computational efficiency and matching accuracy of the method.
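
    The sketch below illustrates matching by logical comparison on a single synthetic scanline: each pixel carries an N-bit code assembled from a series of binary patterns, and the correspondence is the candidate with the smallest Hamming distance inside a pruned disparity window. The code length, window size, and shifted scanline are assumptions for illustration, not the paper's implementation.

```python
# Toy one-scanline sketch: binary pattern codes compared by Hamming distance
# (bitwise XOR plus a popcount loop) instead of floating-point similarity measures.
import numpy as np

rng = np.random.default_rng(5)
width, n_bits, true_disp, max_disp = 640, 10, 37, 64

left = rng.integers(0, 2 ** n_bits, size=width, dtype=np.uint16)
right = np.roll(left, -true_disp)                  # synthetic shifted scanline

def hamming(code, candidates):
    x = np.bitwise_xor(code, candidates).astype(np.uint32)
    count = np.zeros_like(x)
    for _ in range(n_bits):                        # simple popcount loop over the bits
        count += x & 1
        x >>= 1
    return count

estimates = []
for px in range(max_disp, width):
    d = np.arange(max_disp)                        # pruned disparity search range
    cands = right[px - d]
    estimates.append(int(d[np.argmin(hamming(left[px], cands))]))
print("recovered disparity:", np.bincount(estimates).argmax())   # ~ true_disp
```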

  9. Two dimensional recursive digital filters for near real time image processing

    NASA Technical Reports Server (NTRS)

    Olson, D.; Sherrod, E.

    1980-01-01

    A program was designed to demonstrate the feasibility of using two-dimensional recursive digital filters for subjective image-processing applications that require rapid turnaround. The use of a dedicated minicomputer as the processor for this application was also demonstrated. The minicomputer used was the HP1000 series E with an RTE 2 disc operating system and 32K words of memory. A Grinnel 256 x 512 x 8 bit display system was used to display the images. Sample images were provided by NASA Goddard on an 800 BPI, 9-track tape. Four 512 x 512 images representing four spectral regions of the same scene were provided. These images were filtered with enhancement filters developed during this effort.

  10. The Effect of Three-Dimensional Simulations on the Understanding of Chemical Structures and Their Properties

    ERIC Educational Resources Information Center

    Urhahne, Detlef; Nick, Sabine; Schanze, Sascha

    2009-01-01

    In a series of three experimental studies, the effectiveness of three-dimensional computer simulations to aid the understanding of chemical structures and their properties was investigated. Arguments for the usefulness of three-dimensional simulations were derived from Mayer's generative theory of multimedia learning. Simulations might lead to a…

  11. Divergent series and memory of the initial condition in the long-time solution of some anomalous diffusion problems.

    PubMed

    Yuste, S Bravo; Borrego, R; Abad, E

    2010-02-01

    We consider various anomalous d-dimensional diffusion problems in the presence of an absorbing boundary with radial symmetry. The motion of particles is described by a fractional diffusion equation. Their mean-square displacement is given by ⟨r²⟩ ∝ t^γ (0 < γ < 1); the emergence of divergent series in the long-time domain is a specific feature of such subdiffusion problems. We present a method to regularize such series, and, in some cases, validate the procedure by using alternative techniques (the Laplace transform method and numerical simulations). In the normal diffusion case, we find that the signature of the initial condition on the approach to the steady state rapidly fades away and the solution approaches a single (the main) decay mode in the long-time regime. In remarkable contrast, long-time memory of the initial condition is present in the subdiffusive case, as the spatial part Ψ1(r) describing the long-time decay of the solution to the steady state is determined by a weighted superposition of all spatial modes characteristic of the normal diffusion problem, the weight being dependent on the initial condition. Interestingly, Ψ1(r) turns out to be independent of the anomalous diffusion exponent γ.

  12. Discovering significant evolution patterns from satellite image time series.

    PubMed

    Petitjean, François; Masseglia, Florent; Gançarski, Pierre; Forestier, Germain

    2011-12-01

    Satellite Image Time Series (SITS) provide us with precious information on land cover evolution. By studying these series of images we can both understand the changes of specific areas and discover global phenomena that spread over larger areas. Changes that occur throughout the sensing time can spread over very long periods and may have different start and end times depending on the location, which complicates the mining and analysis of series of images. This work focuses on frequent sequential pattern mining (FSPM) methods, since this family of methods fits the above-mentioned issues. These methods find the most frequent evolution behaviors and are able to extract long-term changes as well as short-term ones, whenever the change may start and end. However, applying FSPM methods to SITS implies confronting two main challenges related to the characteristics of SITS and the domain's constraints. First, satellite images associate multiple measures with a single pixel (the radiometric levels of different wavelengths corresponding to infra-red, red, etc.), which makes the search space multi-dimensional and thus requires specific mining algorithms. Furthermore, the non-evolving regions, which are the vast majority and overwhelm the evolving ones, challenge the discovery of these patterns. We propose a SITS mining framework that enables the discovery of these patterns despite these constraints and characteristics. Our proposal is inspired by FSPM and provides a relevant visualization principle. Experiments carried out on 35 images sensed over 20 years show that the proposed approach makes it possible to extract relevant evolution behaviors.

  13. Attributed graph distance measure for automatic detection of attention deficit hyperactive disordered subjects.

    PubMed

    Dey, Soumyabrata; Rao, A Ravishankar; Shah, Mubarak

    2014-01-01

    Attention Deficit Hyperactive Disorder (ADHD) is getting a lot of attention recently for two reasons. First, it is one of the most commonly found childhood disorders and second, the root cause of the problem is still unknown. Functional Magnetic Resonance Imaging (fMRI) data has become a popular tool for the analysis of ADHD, which is the focus of our current research. In this paper we propose a novel framework for the automatic classification of ADHD subjects using their resting state fMRI (rs-fMRI) data of the brain. We construct brain functional connectivity networks for all the subjects. The nodes of the network are constructed from clusters of highly active voxels, and edges between any pair of nodes represent the correlations between their average fMRI time series. The activity level of the voxels is measured based on the average power of their corresponding fMRI time series. For each node of the networks, a local descriptor comprising a set of attributes of the node is computed. Next, the Multi-Dimensional Scaling (MDS) technique is used to project all the subjects from the unknown graph-space to a low-dimensional space based on their inter-graph distance measures. Finally, the Support Vector Machine (SVM) classifier is used on the low-dimensional projected space for automatic classification of the ADHD subjects. Exhaustive experimental validation of the proposed method is performed using the data set released for the ADHD-200 competition. Our method shows promise as we achieve impressive classification accuracies on the training (70.49%) and test data sets (73.55%). Our results reveal that the detection rates are higher when classification is performed separately on the male and female groups of subjects.
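
    A compact sketch of this kind of pipeline is shown below using scikit-learn: a precomputed inter-subject distance matrix is embedded with metric MDS, and an SVM is cross-validated on the low-dimensional coordinates. The synthetic distances and labels are placeholders only; they are not the ADHD-200 data or the paper's attributed-graph distance.

```python
# Illustrative MDS + SVM pipeline on a precomputed distance matrix.
import numpy as np
from sklearn.manifold import MDS
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n_subjects = 60
labels = np.array([0] * 30 + [1] * 30)

# Stand-in for the inter-graph distance matrix: distances between two noisy clusters.
points = rng.standard_normal((n_subjects, 5)) + labels[:, None] * 1.5
dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)

# Project subjects into a low-dimensional space from the precomputed distances, then classify.
coords = MDS(n_components=3, dissimilarity="precomputed", random_state=0).fit_transform(dist)
print(cross_val_score(SVC(kernel="rbf"), coords, labels, cv=5).mean())
```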

  14. The hydrogen atom in D = 3 - 2ɛ dimensions

    NASA Astrophysics Data System (ADS)

    Adkins, Gregory S.

    2018-06-01

    The nonrelativistic hydrogen atom in D = 3 - 2ɛ dimensions is the reference system for perturbative schemes used in dimensionally regularized nonrelativistic effective field theories to describe hydrogen-like atoms. Solutions to the D-dimensional Schrödinger-Coulomb equation are given in the form of a double power series. Energies and normalization integrals are obtained numerically and also perturbatively in terms of ɛ. The utility of the series expansion is demonstrated by the calculation of the divergent expectation value ⟨(V′)²⟩.

  15. Detecting multiple moving objects in crowded environments with coherent motion regions

    DOEpatents

    Cheriyadat, Anil M.; Radke, Richard J.

    2013-06-11

    Coherent motion regions extend in time as well as space, enforcing consistency in detected objects over long time periods and making the algorithm robust to noisy or short point tracks. The algorithm enforces the constraint that selected coherent motion regions contain disjoint sets of tracks, defined in a three-dimensional space that includes a time dimension. It operates directly on raw, unconditioned low-level feature point tracks and minimizes a global measure of the coherent motion regions. At least one discrete moving object is identified in a time series of video images based on the trajectory similarity factors, which measure the maximum distance between a pair of feature point tracks.

  16. The effect of disinfectants on dimensional stability of addition and condensation silicone impressions.

    PubMed

    Sinobad, Tamara; Obradović-Djuricić, Kosovka; Nikolić, Zoran; Dodić, Slobodan; Lazić, Vojkan; Sinobad, Vladimir; Jesenko-Rokvić, Aleksandra

    2014-03-01

    Dimensional stability and accuracy of an impression after chemical disinfection by immersion in disinfectants are crucial for the accuracy of final prosthetic restorations. The aim of this study was to assess the deformation of addition and condensation silicone impressions after disinfection in antimicrobial solutions. A total of 120 impressions were made on the model of the upper arch representing three full metal-ceramic crown preparations. Four impression materials were used: two condensation silicones (Oranwash L - Zhermack and Xantopren L Blue - Heraeus Kulzer) and two addition silicones (Elite H-D + regular body - Zhermack and Flexitime correct flow - Heraeus Kulzer). After removal from the model the impressions were immediately immersed in the appropriate disinfectant (glutaraldehyde, benzalkonium chloride - Sterigum, or 5.25% NaOCl) for a period of 10 min. The control group consisted of samples that were not treated with a disinfectant solution. Consecutive measurements of identical impressions were made with a Canon G9 camera (12 megapixels, 2 fps, 6x/24x) and automated with an Asus Lamborghini VX-2R computer (Intel C2D 2.4 GHz) using the Remote Capture software package, so that time-dependent series of images of the same impression were obtained. The dimensional changes of all the samples were significant both as a function of time and of the applied disinfectant. The results show significant differences in the obtained dimensional changes between the group of condensation silicones and the group of addition silicones for the same time and the same applied disinfectant (p = 0.026, F = 3.95). The greatest dimensional changes of addition and condensation silicone impressions appear in the first hour after their separation from the model.

  17. Detecting recurrence domains of dynamical systems by symbolic dynamics.

    PubMed

    beim Graben, Peter; Hutt, Axel

    2013-04-12

    We propose an algorithm for the detection of recurrence domains of complex dynamical systems from time series. Our approach exploits the characteristic checkerboard texture of recurrence domains exhibited in recurrence plots. In phase space, recurrence plots yield intersecting balls around sampling points that could be merged into cells of a phase space partition. We construct this partition by a rewriting grammar applied to the symbolic dynamics of time indices. A maximum entropy principle defines the optimal size of intersecting balls. The final application to high-dimensional brain signals yields an optimal symbolic recurrence plot revealing functional components of the signal.

  18. Agatha: Disentangling period signals from correlated noise in a periodogram framework

    NASA Astrophysics Data System (ADS)

    Feng, F.; Tuomi, M.; Jones, H. R. A.

    2018-04-01

    Agatha is a framework of periodograms to disentangle periodic signals from correlated noise and to solve the two-dimensional model selection problem: signal dimension and noise model dimension. These periodograms are calculated by applying likelihood maximization and marginalization and combined in a self-consistent way. Agatha can be used to select the optimal noise model and to test the consistency of signals in time and can be applied to time series analyses in other astronomical and scientific disciplines. An interactive web implementation of the software is also available at http://agatha.herts.ac.uk/.

  19. Methodology for time-domain estimation of storm time geoelectric fields using the 3-D magnetotelluric response tensors

    USGS Publications Warehouse

    Kelbert, Anna; Balch, Christopher; Pulkkinen, Antti; Egbert, Gary D; Love, Jeffrey J.; Rigler, E. Joshua; Fujii, Ikuko

    2017-01-01

    Geoelectric fields at the Earth's surface caused by magnetic storms constitute a hazard to the operation of electric power grids and related infrastructure. The ability to estimate these geoelectric fields in close to real time and provide local predictions would better equip the industry to mitigate negative impacts on their operations. Here we report progress toward this goal: development of robust algorithms that convolve a magnetic storm time series with a frequency domain impedance for a realistic three-dimensional (3-D) Earth, to estimate the local, storm time geoelectric field. Both frequency domain and time domain approaches are presented and validated against storm time geoelectric field data measured in Japan. The methods are then compared in the context of a real-time application.

  20. Methodology for time-domain estimation of storm time geoelectric fields using the 3-D magnetotelluric response tensors

    NASA Astrophysics Data System (ADS)

    Kelbert, Anna; Balch, Christopher C.; Pulkkinen, Antti; Egbert, Gary D.; Love, Jeffrey J.; Rigler, E. Joshua; Fujii, Ikuko

    2017-07-01

    Geoelectric fields at the Earth's surface caused by magnetic storms constitute a hazard to the operation of electric power grids and related infrastructure. The ability to estimate these geoelectric fields in close to real time and provide local predictions would better equip the industry to mitigate negative impacts on their operations. Here we report progress toward this goal: development of robust algorithms that convolve a magnetic storm time series with a frequency domain impedance for a realistic three-dimensional (3-D) Earth, to estimate the local, storm time geoelectric field. Both frequency domain and time domain approaches are presented and validated against storm time geoelectric field data measured in Japan. The methods are then compared in the context of a real-time application.
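
    As a schematic illustration of the frequency-domain route (not the authors' algorithms, which use full three-dimensional magnetotelluric impedance tensors), the sketch below multiplies the FFT of a synthetic magnetic time series by a uniform half-space impedance and inverse-transforms to obtain a geoelectric-field estimate; the resistivity, sampling rate, and input series are assumptions.

```python
# Hedged sketch: E(w) = Z(w) * H(w) with a scalar half-space impedance stand-in.
import numpy as np

MU0 = 4e-7 * np.pi
dt = 60.0                                        # 1-minute sampling (s)
rng = np.random.default_rng(7)
t = np.arange(4096) * dt
b = 50e-9 * np.sin(2 * np.pi * t / 3600.0) + 5e-9 * rng.standard_normal(t.size)  # B in tesla

B = np.fft.rfft(b)
omega = 2 * np.pi * np.fft.rfftfreq(b.size, d=dt)
rho = 100.0                                      # assumed uniform half-space resistivity (ohm-m)
Z = np.sqrt(1j * omega * MU0 * rho)              # scalar half-space impedance (3-D case uses tensors)
E = np.fft.irfft(Z * B / MU0, n=b.size)          # geoelectric field estimate, with H = B / mu0
print(np.max(np.abs(E)) * 1e6, "mV/km")
```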

  1. Structure of (Ga2O3)2(ZnO)13 and a unified description of the homologous series (Ga2O3)2(ZnO)(2n + 1).

    PubMed

    Michiue, Yuichi; Kimizuka, Noboru; Kanke, Yasushi; Mori, Takao

    2012-06-01

    The structure of (Ga(2)O(3))(2)(ZnO)(13) has been determined by a single-crystal X-ray diffraction technique. In the monoclinic structure of the space group C2/m with cell parameters a = 19.66 (4), b = 3.2487 (5), c = 27.31 (2) Å, and β = 105.9 (1)°, a unit cell is constructed by combining the halves of the unit cells of Ga(2)O(3)(ZnO)(6) and Ga(2)O(3)(ZnO)(7) in the homologous series Ga(2)O(3)(ZnO)(m). The homologous series (Ga(2)O(3))(2)(ZnO)(2n + 1) is derived and a unified description of the structures in the series is presented using the (3+1)-dimensional superspace formalism. The phases are treated as compositely modulated structures consisting of two subsystems. One subsystem is constructed from the metal ions and the other from the O ions. In the (3 + 1)-dimensional model, displacive modulations of ions are described by the asymmetric zigzag function with large amplitudes, which was replaced by a combination of sawtooth functions in the refinements. Similarities and differences between the two homologous series (Ga(2)O(3))(2)(ZnO)(2n + 1) and Ga(2)O(3)(ZnO)(m) are clarified in (3 + 1)-dimensional superspace. The validity of the (3 + 1)-dimensional model is confirmed by the refinements of (Ga(2)O(3))(2)(ZnO)(13), while a few complex phenomena in the real structure are taken into account by modifying the model.

  2. Three-dimensional object recognition based on planar images

    NASA Astrophysics Data System (ADS)

    Mital, Dinesh P.; Teoh, Eam-Khwang; Au, K. C.; Chng, E. K.

    1993-01-01

    This paper presents the development and realization of a robotic vision system for the recognition of 3-dimensional (3-D) objects. The system can recognize a single object from among a group of known regular convex polyhedron objects that is constrained to lie on a calibrated flat platform. The approach adopted comprises a series of image processing operations on a single 2-dimensional (2-D) intensity image to derive an image line drawing. Subsequently, a feature matching technique is employed to determine 2-D spatial correspondences of the image line drawing with the model in the database. Besides its identification ability, the system can also provide important position and orientation information of the recognized object. The system was implemented on an IBM-PC AT machine executing at 8 MHz without the 80287 Maths Co-processor. In our overall performance evaluation based on a 600 recognition cycles test, the system demonstrated an accuracy of above 80% with recognition time well within 10 seconds. The recognition time is, however, indirectly dependent on the number of models in the database. The reliability of the system is also affected by illumination conditions which must be clinically controlled as in any industrial robotic vision system.

  3. Extracting Leading Nonlinear Modes of Changing Climate From Global SST Time Series

    NASA Astrophysics Data System (ADS)

    Mukhin, D.; Gavrilov, A.; Loskutov, E. M.; Feigin, A. M.; Kurths, J.

    2017-12-01

    Data-driven modeling of climate requires adequate principal variables extracted from observed high-dimensional data. Constructing such variables requires finding spatial-temporal patterns that explain a substantial part of the variability and comprise all dynamically related time series in the data. The difficulties of this task arise from the nonlinearity and non-stationarity of the climate dynamical system. The nonlinearity makes linear methods of data decomposition insufficient for separating the different processes entangled in the observed time series. On the other hand, various forcings, both anthropogenic and natural, make the dynamics non-stationary, and we should be able to describe the response of the system to such forcings in order to separate the modes explaining the internal variability. The method we present is aimed at overcoming both of these problems. The method is based on the Nonlinear Dynamical Mode (NDM) decomposition [1,2], but takes external forcing signals into account. Each mode depends on hidden time series, unknown a priori, which, together with the external forcing time series, are mapped onto the data space. Finding both the hidden signals and the mapping allows us to study the evolution of the modes' structure under changing external conditions and to compare the roles of internal variability and forcing in the observed behavior. The method is used to extract the principal modes of SST variability on inter-annual and multidecadal time scales, accounting for external forcings such as CO2, variations of solar activity, and volcanic activity. The structure of the revealed teleconnection patterns as well as their forecast under different CO2 emission scenarios are discussed. [1] Mukhin, D., Gavrilov, A., Feigin, A., Loskutov, E., & Kurths, J. (2015). Principal nonlinear dynamical modes of climate variability. Scientific Reports, 5, 15510. [2] Gavrilov, A., Mukhin, D., Loskutov, E., Volodin, E., Feigin, A., & Kurths, J. (2016). Method for reconstructing nonlinear modes with adaptive structure from multidimensional data. Chaos: An Interdisciplinary Journal of Nonlinear Science, 26(12), 123101.

  4. Mapping the Information Trace in Local Field Potentials by a Computational Method of Two-Dimensional Time-Shifting Synchronization Likelihood Based on Graphic Processing Unit Acceleration.

    PubMed

    Zhao, Zi-Fang; Li, Xue-Zhu; Wan, You

    2017-12-01

    The local field potential (LFP) is a signal reflecting the electrical activity of neurons surrounding the electrode tip. Synchronization between LFP signals provides important details about how neural networks are organized. Synchronization between two distant brain regions is hard to detect using linear synchronization algorithms like correlation and coherence. Synchronization likelihood (SL) is a non-linear synchronization-detecting algorithm widely used in studies of neural signals from two distant brain areas. One drawback of non-linear algorithms is the heavy computational burden. In the present study, we proposed a graphic processing unit (GPU)-accelerated implementation of an SL algorithm with optional 2-dimensional time-shifting. We tested the algorithm with both artificial data and raw LFP data. The results showed that this method revealed detailed information from original data with the synchronization values of two temporal axes, delay time and onset time, and thus can be used to reconstruct the temporal structure of a neural network. Our results suggest that this GPU-accelerated method can be extended to other algorithms for processing time-series signals (like EEG and fMRI) using similar recording techniques.

  5. Challenges in Extracting Information From Large Hydrogeophysical-monitoring Datasets

    NASA Astrophysics Data System (ADS)

    Day-Lewis, F. D.; Slater, L. D.; Johnson, T.

    2012-12-01

    Over the last decade, new automated geophysical data-acquisition systems have enabled collection of increasingly large and information-rich geophysical datasets. Concurrent advances in field instrumentation, web services, and high-performance computing have made real-time processing, inversion, and visualization of large three-dimensional tomographic datasets practical. Geophysical-monitoring datasets have provided high-resolution insights into diverse hydrologic processes including groundwater/surface-water exchange, infiltration, solute transport, and bioremediation. Despite the high information content of such datasets, extraction of quantitative or diagnostic hydrologic information is challenging. Visual inspection and interpretation for specific hydrologic processes is difficult for datasets that are large, complex, and (or) affected by forcings (e.g., seasonal variations) unrelated to the target hydrologic process. New strategies are needed to identify salient features in spatially distributed time-series data and to relate temporal changes in geophysical properties to hydrologic processes of interest while effectively filtering unrelated changes. Here, we review recent work using time-series and digital-signal-processing approaches in hydrogeophysics. Examples include applications of cross-correlation, spectral, and time-frequency (e.g., wavelet and Stockwell transforms) approaches to (1) identify salient features in large geophysical time series; (2) examine correlation or coherence between geophysical and hydrologic signals, even in the presence of non-stationarity; and (3) condense large datasets while preserving information of interest. Examples demonstrate analysis of large time-lapse electrical tomography and fiber-optic temperature datasets to extract information about groundwater/surface-water exchange and contaminant transport.
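
    The following small sketch illustrates the simplest of the approaches listed above: normalized cross-correlation between a hydrologic driver and a geophysical response series to estimate a time lag. The synthetic series, noise level, and lag are illustrative only, not data from the work reviewed here.

```python
# Toy lag estimation via normalized cross-correlation between two time series.
import numpy as np
from scipy import signal

rng = np.random.default_rng(9)
n, true_lag = 2000, 36
stage = np.sin(2 * np.pi * np.arange(n) / 240.0) + 0.2 * rng.standard_normal(n)
resistivity = np.roll(stage, true_lag) + 0.2 * rng.standard_normal(n)   # delayed, noisy copy

a = (stage - stage.mean()) / stage.std()
b = (resistivity - resistivity.mean()) / resistivity.std()
xcorr = signal.correlate(b, a, mode="full") / n
lags = signal.correlation_lags(b.size, a.size, mode="full")
print("estimated lag:", lags[np.argmax(xcorr)])   # ~ true_lag samples
```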

  6. Modeling The Shock Initiation of PBX-9501 in ALE3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leininger, L; Springer, H K; Mace, J

    The SMIS (Specific Munitions Impact Scenario) experimental series performed at Los Alamos National Laboratory has determined the 3-dimensional shock initiation behavior of the HMX-based heterogeneous high explosive, PBX 9501. A series of finite element impact calculations have been performed in the ALE3D [1] hydrodynamic code and compared to the SMIS results to validate the code predictions. The SMIS tests use a powder gun to shoot scaled NATO standard fragments at a cylinder of PBX 9501, which has a PMMA case and a steel impact cover. The SMIS real-world shot scenario creates a unique test-bed because many of the fragments arrive at the impact plate off-center and at an angle of impact. The goal of these model validation experiments is to demonstrate the predictive capability of the Tarver-Lee Ignition and Growth (I&G) reactive flow model [2] in this fully 3-dimensional regime of Shock to Detonation Transition (SDT). The 3-dimensional Arbitrary Lagrange Eulerian hydrodynamic model in ALE3D applies the Ignition and Growth (I&G) reactive flow model with PBX 9501 parameters derived from historical 1-dimensional experimental data. The model includes the off-center and angle-of-impact variations seen in the experiments. Qualitatively, the ALE3D I&G calculations accurately reproduce the 'Go/No-Go' threshold of the Shock to Detonation Transition (SDT) reaction in the explosive, as well as the case expansion recorded by a high-speed optical camera. Quantitatively, the calculations show good agreement with the shock time of arrival at internal and external diagnostic pins. This exercise demonstrates the utility of the Ignition and Growth model applied in a predictive fashion for the response of heterogeneous high explosives in the SDT regime.

  7. Detection of bifurcations in noisy coupled systems from multiple time series

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williamson, Mark S., E-mail: m.s.williamson@exeter.ac.uk; Lenton, Timothy M.

    We generalize a method of detecting an approaching bifurcation in a time series of a noisy system from the special case of one dynamical variable to multiple dynamical variables. For a system described by a stochastic differential equation consisting of an autonomous deterministic part with one dynamical variable and an additive white noise term, small perturbations away from the system's fixed point will decay slower the closer the system is to a bifurcation. This phenomenon is known as critical slowing down and all such systems exhibit this decay-type behaviour. However, when the deterministic part has multiple coupled dynamical variables, the possible dynamics can be much richer, exhibiting oscillatory and chaotic behaviour. In our generalization to the multi-variable case, we find additional indicators to decay rate, such as frequency of oscillation. In the case of approaching a homoclinic bifurcation, there is no change in decay rate but there is a decrease in frequency of oscillations. The expanded method therefore adds extra tools to help detect and classify approaching bifurcations given multiple time series, where the underlying dynamics are not fully known. Our generalisation also allows bifurcation detection to be applied spatially if one treats each spatial location as a new dynamical variable. One may then determine the unstable spatial mode(s). This is also something that has not been possible with the single variable method. The method is applicable to any set of time series regardless of its origin, but may be particularly useful when anticipating abrupt changes in the multi-dimensional climate system.

  8. Detection of bifurcations in noisy coupled systems from multiple time series

    NASA Astrophysics Data System (ADS)

    Williamson, Mark S.; Lenton, Timothy M.

    2015-03-01

    We generalize a method of detecting an approaching bifurcation in a time series of a noisy system from the special case of one dynamical variable to multiple dynamical variables. For a system described by a stochastic differential equation consisting of an autonomous deterministic part with one dynamical variable and an additive white noise term, small perturbations away from the system's fixed point will decay slower the closer the system is to a bifurcation. This phenomenon is known as critical slowing down and all such systems exhibit this decay-type behaviour. However, when the deterministic part has multiple coupled dynamical variables, the possible dynamics can be much richer, exhibiting oscillatory and chaotic behaviour. In our generalization to the multi-variable case, we find additional indicators to decay rate, such as frequency of oscillation. In the case of approaching a homoclinic bifurcation, there is no change in decay rate but there is a decrease in frequency of oscillations. The expanded method therefore adds extra tools to help detect and classify approaching bifurcations given multiple time series, where the underlying dynamics are not fully known. Our generalisation also allows bifurcation detection to be applied spatially if one treats each spatial location as a new dynamical variable. One may then determine the unstable spatial mode(s). This is also something that has not been possible with the single variable method. The method is applicable to any set of time series regardless of its origin, but may be particularly useful when anticipating abrupt changes in the multi-dimensional climate system.
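
    To illustrate the multivariate indicators described above, the Python sketch below (our own illustration, not the authors' code) fits a linear one-step propagator to fluctuations about a fixed point and reads decay rates and oscillation frequencies off its eigenvalues; all function and variable names are ours.

```python
import numpy as np

def estimate_indicators(x, dt=1.0):
    """Estimate decay rates and oscillation frequencies from a multivariate series
    x (n_samples x n_variables) of fluctuations about a fixed point, by fitting
    the linear propagator x[t+1] ~= A @ x[t] and taking log-eigenvalues."""
    x = x - x.mean(axis=0)                         # remove the (quasi-)fixed point
    x0, x1 = x[:-1], x[1:]
    M, *_ = np.linalg.lstsq(x0, x1, rcond=None)    # least-squares one-step map
    A = M.T
    lam = np.log(np.linalg.eigvals(A).astype(complex)) / dt
    decay_rates = -lam.real                        # slower decay -> nearer bifurcation
    frequencies = np.abs(lam.imag) / (2 * np.pi)   # extra indicator in the coupled case
    return decay_rates, frequencies

# Example: a noisy damped rotation (two coupled variables)
rng = np.random.default_rng(0)
theta, r = 0.1, 0.98
R = r * np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
x = np.zeros((5000, 2))
for i in range(1, len(x)):
    x[i] = R @ x[i - 1] + 0.01 * rng.standard_normal(2)
print(estimate_indicators(x))
```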

  9. A Three-Dimensional Statistical Average Skull: Application of Biometric Morphing in Generating Missing Anatomy.

    PubMed

    Teshima, Tara Lynn; Patel, Vaibhav; Mainprize, James G; Edwards, Glenn; Antonyshyn, Oleh M

    2015-07-01

    The utilization of three-dimensional modeling technology in craniomaxillofacial surgery has grown exponentially during the last decade. Future development, however, is hindered by the lack of a normative three-dimensional anatomic dataset and a statistical mean three-dimensional virtual model. The purpose of this study is to develop and validate a protocol to generate a statistical three-dimensional virtual model based on a normative dataset of adult skulls. Two hundred adult skull CT images were reviewed. The average three-dimensional skull was computed by processing each CT image in the series using thin-plate spline geometric morphometric protocol. Our statistical average three-dimensional skull was validated by reconstructing patient-specific topography in cranial defects. The experiment was repeated 4 times. In each case, computer-generated cranioplasties were compared directly to the original intact skull. The errors describing the difference between the prediction and the original were calculated. A normative database of 33 adult human skulls was collected. Using 21 anthropometric landmark points, a protocol for three-dimensional skull landmarking and data reduction was developed and a statistical average three-dimensional skull was generated. Our results show the root mean square error (RMSE) for restoration of a known defect using the native best match skull, our statistical average skull, and worst match skull was 0.58, 0.74, and 4.4  mm, respectively. The ability to statistically average craniofacial surface topography will be a valuable instrument for deriving missing anatomy in complex craniofacial defects and deficiencies as well as in evaluating morphologic results of surgery.
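
    The cranioplasty comparison reduces to a surface-error computation; a minimal sketch of the root-mean-square error metric, assuming the predicted and original surfaces are sampled at corresponding points (the registration and sampling steps of the actual protocol are not reproduced), is:

```python
import numpy as np

def surface_rmse(predicted_pts, original_pts):
    """RMSE between corresponding surface points (N x 3 arrays, e.g. in mm)."""
    d = np.linalg.norm(np.asarray(predicted_pts) - np.asarray(original_pts), axis=1)
    return float(np.sqrt(np.mean(d ** 2)))
```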

  10. On the Development of a Deterministic Three-Dimensional Radiation Transport Code

    NASA Technical Reports Server (NTRS)

    Rockell, Candice; Tweed, John

    2011-01-01

    Since astronauts on future deep space missions will be exposed to dangerous radiation, there is a need to accurately model the transport of radiation through shielding materials and to estimate the received radiation dose. In response to this need, a three-dimensional deterministic code for space radiation transport is now under development. The new code GRNTRN is based on a Green's function solution of the Boltzmann transport equation that is constructed in the form of a Neumann series. Analytical approximations will be obtained for the first three terms of the Neumann series and the remainder will be estimated by a non-perturbative technique. This work discusses progress made to date and exhibits some computations based on the first two Neumann series terms.
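
    The structure of the Neumann-series approach mentioned above can be sketched schematically as follows (our notation, not necessarily that used in GRNTRN): writing the transport equation as phi = S + K phi, with S the source term and K the integral operator,

```latex
\[
  \phi \;=\; \sum_{n=0}^{\infty} K^{n} S
  \;=\; \underbrace{S + K S + K^{2} S}_{\text{first three terms (analytical)}}
  \;+\; \underbrace{\sum_{n=3}^{\infty} K^{n} S}_{\text{remainder (non-perturbative estimate)}} .
\]
```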

  11. rasdaman Array Database: current status

    NASA Astrophysics Data System (ADS)

    Merticariu, George; Toader, Alexandru

    2015-04-01

    rasdaman (Raster Data Manager) is a Free Open Source Array Database Management System which provides functionality for storing and processing massive amounts of raster data in the form of multidimensional arrays. The user can access, process and delete the data using SQL. The key features of rasdaman are: flexibility (datasets of any dimensionality can be processed with the help of SQL queries), scalability (rasdaman's distributed architecture enables it to seamlessly run on cloud infrastructures while offering an increase in performance with the increase of computation resources), performance (real-time access, processing, mixing and filtering of arrays of any dimensionality) and reliability (legacy communication protocol replaced with a new one based on cutting edge technology - Google Protocol Buffers and ZeroMQ). Among the data with which the system works, we can count 1D time series, 2D remote sensing imagery, 3D image time series, 3D geophysical data, and 4D atmospheric and climate data. Most of these representations cannot be stored only in the form of raw arrays, as the location information of the contents is also important for having a correct geoposition on Earth. This is defined by ISO 19123 as coverage data. rasdaman provides coverage data support through the Petascope service. Extensions were added on top of rasdaman in order to provide support for the Geoscience community. The following OGC standards are currently supported: Web Map Service (WMS), Web Coverage Service (WCS), and Web Coverage Processing Service (WCPS). The Web Map Service is an extension which provides zoom and pan navigation over images provided by a map server. Starting with version 9.1, rasdaman supports WMS version 1.3. The Web Coverage Service provides capabilities for downloading multi-dimensional coverage data. Support is also provided for several extensions of this service: Subsetting Extension, Scaling Extension, and, starting with version 9.1, Transaction Extension, which defines request types for inserting, updating and deleting coverages. A web client, designed for both novice and experienced users, is also available for the service and its extensions. The client offers an intuitive interface that allows users to work with multi-dimensional coverages by abstracting the specifics of the standard definitions of the requests. The Web Coverage Processing Service defines a language for on-the-fly processing and filtering multi-dimensional raster coverages. rasdaman exposes this service through the WCS processing extension. Demonstrations are provided online via the Earthlook website (earthlook.org) which presents use-cases from a wide variety of application domains, using the rasdaman system as processing engine.

  12. Convolutionless Nakajima-Zwanzig equations for stochastic analysis in nonlinear dynamical systems.

    PubMed

    Venturi, D; Karniadakis, G E

    2014-06-08

    Determining the statistical properties of stochastic nonlinear systems is of major interest across many disciplines. Currently, there are no general efficient methods to deal with this challenging problem that involves high dimensionality, low regularity and random frequencies. We propose a framework for stochastic analysis in nonlinear dynamical systems based on goal-oriented probability density function (PDF) methods. The key idea stems from techniques of irreversible statistical mechanics, and it relies on deriving evolution equations for the PDF of quantities of interest, e.g. functionals of the solution to systems of stochastic ordinary and partial differential equations. Such quantities could be low-dimensional objects in infinite dimensional phase spaces. We develop the goal-oriented PDF method in the context of the time-convolutionless Nakajima-Zwanzig-Mori formalism. We address the question of approximation of reduced-order density equations by multi-level coarse graining, perturbation series and operator cumulant resummation. Numerical examples are presented for stochastic resonance and stochastic advection-reaction problems.

  13. The innovative concept of three-dimensional hybrid receptor modeling

    NASA Astrophysics Data System (ADS)

    Stojić, A.; Stanišić Stojić, S.

    2017-09-01

    The aim of this study was to improve the current understanding of air pollution transport processes at regional and long-range scale. For this purpose, three-dimensional (3D) potential source contribution function and concentration weighted trajectory models, as well as a new hybrid receptor model, concentration weighted boundary layer (CWBL), which uses a two-dimensional grid and a planetary boundary layer height as a frame of reference, are presented. The refined approach to hybrid receptor modeling has two advantages. First, it considers whether each trajectory endpoint meets the inclusion criteria based on planetary boundary layer height, which is expected to provide a more realistic representation of the spatial distribution of emission sources and pollutant transport pathways. Second, it includes pollutant time series preprocessing to make hybrid receptor models more applicable to suburban and urban locations. The 3D hybrid receptor models presented herein are designed to identify the altitude distribution of potential sources, whereas CWBL can be used for analyzing the vertical distribution of pollutant concentrations along the transport pathway.

  14. Convolutionless Nakajima–Zwanzig equations for stochastic analysis in nonlinear dynamical systems

    PubMed Central

    Venturi, D.; Karniadakis, G. E.

    2014-01-01

    Determining the statistical properties of stochastic nonlinear systems is of major interest across many disciplines. Currently, there are no general efficient methods to deal with this challenging problem that involves high dimensionality, low regularity and random frequencies. We propose a framework for stochastic analysis in nonlinear dynamical systems based on goal-oriented probability density function (PDF) methods. The key idea stems from techniques of irreversible statistical mechanics, and it relies on deriving evolution equations for the PDF of quantities of interest, e.g. functionals of the solution to systems of stochastic ordinary and partial differential equations. Such quantities could be low-dimensional objects in infinite dimensional phase spaces. We develop the goal-oriented PDF method in the context of the time-convolutionless Nakajima–Zwanzig–Mori formalism. We address the question of approximation of reduced-order density equations by multi-level coarse graining, perturbation series and operator cumulant resummation. Numerical examples are presented for stochastic resonance and stochastic advection–reaction problems. PMID:24910519
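
    Schematically (in our notation, and omitting initial-correlation terms), the contrast between the memory-convolution and convolutionless forms for the projected density referred to above reads:

```latex
% Nakajima-Zwanzig (memory convolution) versus time-convolutionless (TCL) form,
% with P the projection onto the quantity-of-interest PDF (notation illustrative)
\[
  \frac{\partial}{\partial t}\, P\rho(t) \;=\; \int_{0}^{t} \mathcal{K}(t-s)\, P\rho(s)\, \mathrm{d}s
  \qquad\longrightarrow\qquad
  \frac{\partial}{\partial t}\, P\rho(t) \;=\; \mathcal{K}_{\mathrm{TCL}}(t)\, P\rho(t) .
\]
```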

  15. A chaotic model for the epidemic of Ebola virus disease in West Africa (2013-2016)

    NASA Astrophysics Data System (ADS)

    Mangiarotti, Sylvain; Peyre, Marisa; Huc, Mireille

    2016-11-01

    An epidemic of Ebola Virus Disease (EVD) broke out in Guinea in December 2013. It was only identified in March 2014, by which time it had already spread to Liberia and Sierra Leone. The spillover of the disease became uncontrollable and the epidemic could not be stopped before 2016. The time evolution of this epidemic is revisited here with the global modeling technique, which was designed to obtain deterministic models from single time series. A generalized formulation of this technique for multivariate time series is introduced. It is applied to the epidemic of EVD in West Africa focusing on the period between March 2014 and January 2015, that is, before any detected signs of weakening. Data gathered by the World Health Organization, based on the official publications of the Ministries of Health of the three main countries involved in this epidemic, are considered in our analysis. Two observed time series are used: the daily numbers of infections and deaths. A four-dimensional model producing a very complex dynamical behavior is obtained. The model is tested in order to investigate its skills and drawbacks. Our global analysis clearly helps to distinguish three main stages during the epidemic. A characterization of the obtained attractor is also performed. In particular, the topology of the chaotic attractor is analyzed and a skeleton is obtained for its structure.
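
    A crude sketch of the global-modeling idea, fitting a polynomial vector field to a multivariate series by regressing finite-difference derivatives on a monomial library, is given below in Python; this is our own simplified illustration (ordinary least squares, fixed polynomial degree), not the authors' algorithm, and the function name is ours.

```python
import numpy as np
from itertools import combinations_with_replacement

def fit_polynomial_flow(X, dt, degree=2):
    """Regress finite-difference derivatives of a multivariate series X
    (n_samples x n_vars) on polynomial terms of the state, giving a crude
    empirical model dx/dt ~= Theta(x) @ W."""
    dXdt = (X[2:] - X[:-2]) / (2.0 * dt)           # centered differences
    Xm = X[1:-1]
    n_vars = X.shape[1]
    terms = [np.ones(len(Xm))]                     # constant term
    for d in range(1, degree + 1):                 # monomials up to 'degree'
        for idx in combinations_with_replacement(range(n_vars), d):
            terms.append(np.prod(Xm[:, list(idx)], axis=1))
    Theta = np.column_stack(terms)
    W, *_ = np.linalg.lstsq(Theta, dXdt, rcond=None)
    return W                                       # one coefficient column per variable
```

    In the setting above, X could hold suitably smoothed infection and death series together with derived variables; obtaining a well-behaved four-dimensional model, as in the study, requires considerably more care (structure selection, smoothing and validation) than this sketch conveys.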

  16. Robust evaluation of time series classification algorithms for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Harvey, Dustin Y.; Worden, Keith; Todd, Michael D.

    2014-03-01

    Structural health monitoring (SHM) systems provide real-time damage and performance information for civil, aerospace, and mechanical infrastructure through analysis of structural response measurements. The supervised learning methodology for data-driven SHM involves computation of low-dimensional, damage-sensitive features from raw measurement data that are then used in conjunction with machine learning algorithms to detect, classify, and quantify damage states. However, these systems often suffer from performance degradation in real-world applications due to varying operational and environmental conditions. Probabilistic approaches to robust SHM system design suffer from incomplete knowledge of all conditions a system will experience over its lifetime. Info-gap decision theory enables nonprobabilistic evaluation of the robustness of competing models and systems in a variety of decision making applications. Previous work employed info-gap models to handle feature uncertainty when selecting various components of a supervised learning system, namely features from a pre-selected family and classifiers. In this work, the info-gap framework is extended to robust feature design and classifier selection for general time series classification through an efficient, interval arithmetic implementation of an info-gap data model. Experimental results are presented for a damage type classification problem on a ball bearing in a rotating machine. The info-gap framework in conjunction with an evolutionary feature design system allows for fully automated design of a time series classifier to meet performance requirements under maximum allowable uncertainty.

  17. Three-Dimensional Flow of an Oldroyd-B Fluid with Variable Thermal Conductivity and Heat Generation/Absorption

    PubMed Central

    Shehzad, Sabir Ali; Alsaedi, Ahmed; Hayat, Tasawar; Alhuthali, M. Shahab

    2013-01-01

    This paper looks at the series solutions of three-dimensional boundary layer flow. An Oldroyd-B fluid with variable thermal conductivity is considered. The flow is induced due to stretching of a surface. Analysis has been carried out in the presence of heat generation/absorption. Homotopy analysis is implemented in developing the series solutions to the governing flow and energy equations. Graphs are presented and discussed for various parameters of interest. Comparison of the present study with the existing limiting solution is shown and examined. PMID:24223780

  18. Finite-dimensional integrable systems: A collection of research problems

    NASA Astrophysics Data System (ADS)

    Bolsinov, A. V.; Izosimov, A. M.; Tsonev, D. M.

    2017-05-01

    This article suggests a series of problems related to various algebraic and geometric aspects of integrability. They reflect some recent developments in the theory of finite-dimensional integrable systems such as bi-Poisson linear algebra, Jordan-Kronecker invariants of finite dimensional Lie algebras, the interplay between singularities of Lagrangian fibrations and compatible Poisson brackets, and new techniques in projective geometry.

  19. Modeling Three-Dimensional Shock Initiation of PBX 9501 in ALE3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leininger, L; Springer, H K; Mace, J

    A recent SMIS (Specific Munitions Impact Scenario) experimental series performed at Los Alamos National Laboratory has provided 3-dimensional shock initiation behavior of the HMX-based heterogeneous high explosive, PBX 9501. A series of finite element impact calculations have been performed in the ALE3D [1] hydrodynamic code and compared to the SMIS results to validate and study code predictions. These SMIS tests used a powder gun to shoot scaled NATO standard fragments into a cylinder of PBX 9501, which has a PMMA case and a steel impact cover. This SMIS real-world shot scenario creates a unique test-bed because (1) SMIS tests facilitate the investigation of 3D Shock to Detonation Transition (SDT) within the context of a considerable suite of diagnostics, and (2) many of the fragments arrive at the impact plate off-center and at an angle of impact. A particular goal of these model validation experiments is to demonstrate the predictive capability of the ALE3D implementation of the Tarver-Lee Ignition and Growth reactive flow model [2] within a fully 3-dimensional regime of SDT. The 3-dimensional Arbitrary Lagrange Eulerian (ALE) hydrodynamic model in ALE3D applies the Ignition and Growth (I&G) reactive flow model with PBX 9501 parameters derived from historical 1-dimensional experimental data. The model includes the off-center and angle of impact variations seen in the experiments. Qualitatively, the ALE3D I&G calculations reproduce observed 'Go/No-Go' 3D Shock to Detonation Transition (SDT) reaction in the explosive, as well as the case expansion recorded by a high-speed optical camera. Quantitatively, the calculations show good agreement with the shock time of arrival at internal and external diagnostic pins. This exercise demonstrates the utility of the Ignition and Growth model applied for the response of heterogeneous high explosives in the SDT regime.

  20. A lattice Boltzmann model for the Burgers-Fisher equation.

    PubMed

    Zhang, Jianying; Yan, Guangwu

    2010-06-01

    A lattice Boltzmann model is developed for the one- and two-dimensional Burgers-Fisher equation based on the method of the higher-order moment of equilibrium distribution functions and a series of partial differential equations in different time scales. In order to obtain the two-dimensional Burgers-Fisher equation, the vector sigma(j) has been used. In order to overcome the drawbacks of "error rebound," a new assumption of additional distribution is presented, in which two additional terms, of first order and second order separately, are used. Comparisons with the results obtained by other methods reveal that the numerical solutions obtained by the proposed method converge to the exact solutions. The model under the new assumption gives better results than that with the second-order assumption. (c) 2010 American Institute of Physics.

  1. Nonlinear dynamics of the magnetosphere and space weather

    NASA Technical Reports Server (NTRS)

    Sharma, A. Surjalal

    1996-01-01

    The solar wind-magnetosphere system exhibits coherence on the global scale and such behavior can arise from nonlinearity in the dynamics. The observational time series data were used together with phase space reconstruction techniques to analyze the magnetospheric dynamics. Analysis of the solar wind, auroral electrojet and Dst indices showed low dimensionality of the dynamics, and accurate predictions can be made with an input/output model. The predictability of the magnetosphere in spite of the apparent complexity arises from its dynamical synchronism with the solar wind. The electrodynamic coupling between different regions of the magnetosphere yields its coherent, low dimensional behavior. The data from multiple satellites and ground stations can be used to develop a spatio-temporal model that identifies the coupling between different regions. These nonlinear dynamical models provide space weather forecasting capabilities.
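
    A minimal sketch of phase-space-reconstruction-based prediction of the kind referred to above is given below in Python: a time-delay embedding followed by a nearest-neighbour (analog) forecast. It is a generic illustration, not the paper's input/output model, and all names are ours.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay embedding of a scalar series x into a dim-dimensional space."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def analog_forecast(x, dim=3, tau=1, k=5, horizon=1):
    """Average the futures of the k reconstructed states closest to the latest one."""
    x = np.asarray(x, dtype=float)
    E = delay_embed(x, dim, tau)
    current, history = E[-1], E[:-horizon - 1]
    nbrs = np.argsort(np.linalg.norm(history - current, axis=1))[:k]
    # E[i] ends at sample i + (dim - 1) * tau; its future lies 'horizon' steps later
    return float(np.mean([x[i + (dim - 1) * tau + horizon] for i in nbrs]))
```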

  2. Coherent diffractive imaging of time-evolving samples with improved temporal resolution

    DOE PAGES

    Ulvestad, A.; Tripathi, A.; Hruszkewycz, S. O.; ...

    2016-05-19

    Bragg coherent x-ray diffractive imaging is a powerful technique for investigating dynamic nanoscale processes in nanoparticles immersed in reactive, realistic environments. Its temporal resolution is limited, however, by the oversampling requirements of three-dimensional phase retrieval. Here, we show that incorporating the entire measurement time series, which is typically a continuous physical process, into phase retrieval allows the oversampling requirement at each time step to be reduced, leading to a subsequent improvement in the temporal resolution by a factor of 2-20 times. The increased time resolution will allow imaging of faster dynamics and of radiation-dose-sensitive samples. Furthermore, this approach, which we call "chrono CDI," may find use in improving the time resolution in other imaging techniques.

  3. Time-resolved computed tomography of the liver: retrospective, multi-phase image reconstruction derived from volumetric perfusion imaging.

    PubMed

    Fischer, Michael A; Leidner, Bertil; Kartalis, Nikolaos; Svensson, Anders; Aspelin, Peter; Albiin, Nils; Brismar, Torkel B

    2014-01-01

    To assess feasibility and image quality (IQ) of a new post-processing algorithm for retrospective extraction of an optimised multi-phase CT (time-resolved CT) of the liver from volumetric perfusion imaging. Sixteen patients underwent clinically indicated perfusion CT using the 4D spiral mode of a dual-source 128-slice CT. Three image sets were reconstructed: motion-corrected and noise-reduced (MCNR) images derived from 4D raw data; maximum and average intensity projections (time MIP/AVG) of the arterial/portal/portal-venous phases and all phases (total MIP/AVG) derived from retrospective fusion of dedicated MCNR split series. Two readers assessed the IQ, detection rate and evaluation time; one reader assessed image noise and lesion-to-liver contrast. Time-resolved CT was feasible in all patients. Each post-processing step yielded a significant reduction of image noise and evaluation time, maintaining lesion-to-liver contrast. Time MIPs/AVGs showed the highest overall IQ without relevant motion artefacts and the best depiction of the arterial and portal/portal-venous phases, respectively. Time MIPs demonstrated a significantly higher detection rate for arterialised liver lesions than total MIPs/AVGs and the raw data series. Time-resolved CT allows data from volumetric perfusion imaging to be condensed into an optimised multi-phase liver CT, yielding a superior IQ and higher detection rate for arterialised liver lesions than the raw data series. • Four-dimensional computed tomography is limited by motion artefacts and poor image quality. • Time-resolved CT facilitates 4D-CT data visualisation, segmentation and analysis by condensing raw data. • Time-resolved CT demonstrates better image quality than raw data images. • Time-resolved CT improves detection of arterialised liver lesions in cirrhotic patients.

  4. Critical and Griffiths-McCoy singularities in quantum Ising spin glasses on d-dimensional hypercubic lattices: A series expansion study.

    PubMed

    Singh, R R P; Young, A P

    2017-08-01

    We study the ±J transverse-field Ising spin-glass model at zero temperature on d-dimensional hypercubic lattices and in the Sherrington-Kirkpatrick (SK) model, by series expansions around the strong-field limit. In the SK model and in high dimensions our calculated critical properties are in excellent agreement with the exact mean-field results, surprisingly even down to dimension d=6, which is below the upper critical dimension of d=8. In contrast, at lower dimensions we find a rich singular behavior consisting of critical and Griffiths-McCoy singularities. The divergence of the equal-time structure factor allows us to locate the critical coupling where the correlation length diverges, implying the onset of a thermodynamic phase transition. We find that the spin-glass susceptibility as well as various power moments of the local susceptibility become singular in the paramagnetic phase before the critical point. Griffiths-McCoy singularities are very strong in two dimensions but decrease rapidly as the dimension increases. We present evidence that high enough powers of the local susceptibility may become singular at the pure-system critical point.

  5. Critical and Griffiths-McCoy singularities in quantum Ising spin glasses on d-dimensional hypercubic lattices: A series expansion study

    NASA Astrophysics Data System (ADS)

    Singh, R. R. P.; Young, A. P.

    2017-08-01

    We study the ±J transverse-field Ising spin-glass model at zero temperature on d-dimensional hypercubic lattices and in the Sherrington-Kirkpatrick (SK) model, by series expansions around the strong-field limit. In the SK model and in high dimensions our calculated critical properties are in excellent agreement with the exact mean-field results, surprisingly even down to dimension d = 6, which is below the upper critical dimension of d = 8. In contrast, at lower dimensions we find a rich singular behavior consisting of critical and Griffiths-McCoy singularities. The divergence of the equal-time structure factor allows us to locate the critical coupling where the correlation length diverges, implying the onset of a thermodynamic phase transition. We find that the spin-glass susceptibility as well as various power moments of the local susceptibility become singular in the paramagnetic phase before the critical point. Griffiths-McCoy singularities are very strong in two dimensions but decrease rapidly as the dimension increases. We present evidence that high enough powers of the local susceptibility may become singular at the pure-system critical point.

  6. Estimation of surface heat and moisture fluxes over a prairie grassland. II - Two-dimensional time filtering and site variability

    NASA Technical Reports Server (NTRS)

    Crosson, William L.; Smith, Eric A.

    1992-01-01

    The behavior of in situ measurements of surface fluxes obtained during FIFE 1987 is examined by using correlative and spectral techniques in order to assess the significance of fluctuations on various time scales, from subdiurnal up to synoptic, intraseasonal, and annual scales. The objectives of this analysis are: (1) to determine which temporal scales have a significant impact on areal averaged fluxes and (2) to design a procedure for filtering an extended flux time series that preserves the basic diurnal features and longer time scales while removing high frequency noise that cannot be attributed to site-induced variation. These objectives are accomplished through the use of a two-dimensional cross-time Fourier transform, which serves to separate processes inherently related to diurnal and subdiurnal variability from those which impact flux variations on the longer time scales. A filtering procedure is desirable before the measurements are utilized as input with an experimental biosphere model, to ensure that model-based intercomparisons at multiple sites are uncontaminated by input variance not related to true site behavior. Analysis of the spectral decomposition indicates that subdiurnal time scales having periods shorter than 6 hours have little site-to-site consistency and therefore little impact on areal integrated fluxes.
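
    As an illustration of the filtering step, the Python sketch below removes sub-6-hour-period variability from a flux series arranged as a days-by-samples array using a Fourier transform along the within-day axis; this is a simplified stand-in for the two-dimensional cross-time filter described above, with hypothetical sampling.

```python
import numpy as np

def filter_subdiurnal(flux, dt_hours, min_period_hours=6.0):
    """Zero out Fourier components with periods shorter than min_period_hours
    along the within-day axis of a (days x samples_per_day) flux array."""
    F = np.fft.fft2(flux)
    f_time = np.fft.fftfreq(flux.shape[1], d=dt_hours)   # cycles per hour
    F[:, np.abs(f_time) > 1.0 / min_period_hours] = 0.0  # drop sub-6-hour periods
    return np.fft.ifft2(F).real

# Example with hypothetical half-hourly fluxes over 30 days
fluxes = np.random.default_rng(1).standard_normal((30, 48))
smoothed = filter_subdiurnal(fluxes, dt_hours=0.5)
```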

  7. Relativistic parameters of senescence.

    PubMed

    Stathatos, Marios A

    2005-01-01

    The laws of biochemistry and biology are governed by parameters whose description in mathematical formulas is based on three-dimensional space. It is a fact, however, that the life span of a cell and its specific functions, though limited, can be extended or diminished depending on the genetic code but also on the natural pressure of the environment. The plasticity exhibited by a cellular system has been attributed to the change of the three-dimensional structure of the cell, with time being a simple measure of this change. The model of biological relativity proposed here considers time as a flexible fourth dimension that corresponds directly to the inertial status of the cells. Two types of clocks are defined: the relativistic biological clock (RBC) and the mechanical clock (MC). In contrast to the MCs, which show the astrological reference time, the time shown by the RBCs is delayed because it depends on cellular activity. The maximum and the expected life span of the cells and/or the organisms can therefore be related to time transformation. One of the most important factors that can affect time flow is the energy that is produced during metabolic work. Based on this observation, RBCs can be constructed following a series of theoretical experiments in order to assess biological time and life span changes.

  8. Price Formation Based on Particle-Cluster Aggregation

    NASA Astrophysics Data System (ADS)

    Wang, Shijun; Zhang, Changshui

    In the present work, we propose a microscopic model of financial markets based on particle-cluster aggregation on a two-dimensional small-world information network in order to simulate the dynamics of the stock markets. "Stylized facts" of the financial market time series, such as fat-tail distribution of returns, volatility clustering and multifractality, are observed in the model. The results of the model agree with empirical data taken from historical records of the daily closures of the NYSE composite index.

  9. Linear perturbations of a Schwarzschild black hole by a thin disc - convergence

    NASA Astrophysics Data System (ADS)

    Čížek, P.; Semerák, O.

    2012-07-01

    In order to find the perturbation of a Schwarzschild space-time due to a rotating thin disc, we try to adjust the method used by [4] in the case of perturbation by a one-dimensional ring. This involves the solution of stationary axisymmetric Einstein's equations in terms of spherical-harmonic expansions, whose convergence, however, turned out to be questionable in numerical examples. Here we show, analytically, that the series are almost everywhere convergent, but in some regions the convergence is not absolute.

  10. INTERNATIONAL CONFERENCE ON SEMICONDUCTOR INJECTION LASERS SELCO-87: Transient heat conduction in laser diodes

    NASA Astrophysics Data System (ADS)

    Enders, P.; Galley, J.

    1988-11-01

    The dynamics of heat transfer in stripe GaAlAs laser diodes is investigated by solving the linear diffusion equation for a quasi-two-dimensional multilayer structure. The calculations are rationalized drastically by the transfer matrix method and also by using, for the first time, the asymptotes of the decay constants. Special attention is given to the convergence of the Fourier series. A comparison with experimental results reveals, however, that this is essentially the Stefan problem (with moving boundary conditions).

  11. IUTAM Symposium and NATO Advanced Research Workshop on Interpretation of Time Series from Nonlinear Mechanical Systems Held in Coventry, England on 26-30 August 1991. Conference Abstracts

    DTIC Science & Technology

    1991-08-01

    Conference abstract fragment: "…day oscillation in the extratropical atmosphere as identified by multi-channel singular spectrum analysis" (M. Kimoto, M. Ghil and K.-C. Mo). The three-dimensional spatial structure of an oscillatory mode in the Northern Hemisphere (NH) extratropics will be described. The oscillation is…

  12. Reconstructing latent dynamical noise for better forecasting observables

    NASA Astrophysics Data System (ADS)

    Hirata, Yoshito

    2018-03-01

    I propose a method for reconstructing multi-dimensional dynamical noise inspired by the embedding theorem of Muldoon et al. [Dyn. Stab. Syst. 13, 175 (1998)] by regarding multiple predictions as different observables. Then, applying the embedding theorem by Stark et al. [J. Nonlinear Sci. 13, 519 (2003)] for a forced system, I produce time series forecasts by supplying the reconstructed past dynamical noise as auxiliary information. I demonstrate the proposed method on toy models driven by auto-regressive models or independent Gaussian noise.

  13. Skill Testing a Three-Dimensional Global Tide Model to Historical Current Meter Records

    DTIC Science & Technology

    2013-12-17

    …up to 20% weaker skill in the Southern Ocean. Citation: Timko, P. G., B. K. Arbic, J. G. Richman, R. B. Scott, E. J. Metzger, and A. J. Wallcraft (2013). …model were identified from a current meter archive (CMA) of approximately 9000 unique time series previously used by Scott et al. [2010] and Timko et al. [2012]. The CMA spans 40 years of observations. Some of the velocity records used in this study represent individual depth bins from ADCPs.

  14. Chaotic behaviour of the Rossler model and its analysis by using bifurcations of limit cycles and chaotic attractors

    NASA Astrophysics Data System (ADS)

    Ibrahim, K. M.; Jamal, R. K.; Ali, F. H.

    2018-05-01

    The behaviour of certain nonlinear dynamical systems is described as chaos, i.e., the systems' variables change with time while displaying great sensitivity to initial conditions. In this paper, we study archetype systems of ordinary differential equations in two-dimensional phase spaces of the Rössler model. The system displays continuous-time chaos and is described by three coupled nonlinear differential equations. We study its characteristics and determine the control parameters that lead to different behaviours of the system output: periodic, quasi-periodic and chaotic. The time series, attractor, Fast Fourier Transform and bifurcation diagram for different parameter values are described.
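
    A minimal Python sketch of the kind of analysis described, integrating the Rössler equations and examining the spectrum of the resulting time series, is shown below; the parameter values are the standard chaotic ones and are not necessarily those used by the authors.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rossler(t, u, a=0.2, b=0.2, c=5.7):
    """Rossler equations; c is the control parameter varied between regimes."""
    x, y, z = u
    return [-y - z, x + a * y, b + z * (x - c)]

sol = solve_ivp(rossler, (0.0, 300.0), [1.0, 1.0, 0.0], max_step=0.01)
x = sol.y[0, sol.t > 100.0]                     # discard the transient
spectrum = np.abs(np.fft.rfft(x - x.mean()))    # broadband spectrum indicates chaos
```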

  15. Tracking the Momentum Flux of a CME and Quantifying Its Influence on Geomagnetically Induced Currents at Earth

    NASA Technical Reports Server (NTRS)

    Savani, N. P.; Vourlidas, A.; Pulkkinen, A.; Nieves-Chinchilla, T.; Lavraud, B.; Owens, M. J.

    2013-01-01

    We investigate a coronal mass ejection (CME) propagating toward Earth on 29 March 2011. This event is specifically chosen for its predominately northward directed magnetic field, so that the influence from the momentum flux onto Earth can be isolated. We focus our study on understanding how a small Earth-directed segment propagates. Mass images are created from the white-light cameras onboard STEREO which are also converted into mass height-time maps (mass J-maps). The mass tracks on these J-maps correspond to the sheath region between the CME and its associated shockfront as detected by in situ measurements at L1. A time series of mass measurements from the STEREO COR-2A instrument is made along the Earth propagation direction. Qualitatively, this mass time series shows a remarkable resemblance to the L1 in situ density series. The in situ measurements are used as inputs into a three-dimensional (3-D) magnetospheric space weather simulation from the Community Coordinated Modeling Center. These simulations display a sudden compression of the magnetosphere from the large momentum flux at the leading edge of the CME, and predictions are made for the time derivative of the magnetic field (dB/dt) on the ground. The predicted dB/dt values were then compared with the observations from specific equatorially located ground stations and showed notable similarity. This study of the momentum of a CME from the Sun down to its influence on magnetic ground stations on Earth is presented as a preliminary proof of concept, such that future attempts may try to use remote sensing to create density and velocity time series as inputs to magnetospheric simulations.

  16. Ultrafast-based projection-reconstruction three-dimensional nuclear magnetic resonance spectroscopy.

    PubMed

    Mishkovsky, Mor; Kupce, Eriks; Frydman, Lucio

    2007-07-21

    Recent years have witnessed increased efforts toward the accelerated acquisition of multidimensional nuclear magnetic resonance (nD NMR) spectra. Among the methods proposed to speed up these NMR experiments is "projection reconstruction," a scheme based on the acquisition of a reduced number of two-dimensional (2D) NMR data sets constituting cross sections of the nD time domain being sought. Another proposition involves "ultrafast" spectroscopy, capable of completing nD NMR acquisitions within a single scan. Potential limitations of these approaches include the need for a relatively slow 2D-type serial data collection procedure in the former case, and a need for at least n high-performance, linearly independent gradients and a sufficiently high sensitivity in the latter. The present study introduces a new scheme that comes to address these limitations, by combining the basic features of the projection reconstruction and the ultrafast approaches into a single, unified nD NMR experiment. In the resulting method each member within the series of 2D cross sections required by projection reconstruction to deliver the nD NMR spectrum being sought, is acquired within a single scan with the aid of the 2D ultrafast protocol. Full nD NMR spectra can thus become available by backprojecting a small number of 2D sets, collected using a minimum number of scans. Principles, opportunities, and limitations of the resulting approach, together with demonstrations of its practical advantages, are here discussed and illustrated with a series of three-dimensional homo- and heteronuclear NMR correlation experiments.

  17. Thermoelastic damping in thin microrings with two-dimensional heat conduction

    NASA Astrophysics Data System (ADS)

    Fang, Yuming; Li, Pu

    2015-05-01

    Accurate determination of thermoelastic damping (TED) is very challenging in the design of micro-resonators. Microrings are widely used in many micro-resonators. In the past, to model the TED effect on the microrings, some analytical models have been developed. However, in the previous works, the heat conduction within the microring is modeled using a one-dimensional approach: the governing equation for heat conduction is solved only along the radial thickness of the microring. This paper presents a simple analytical model for TED in microrings. The two-dimensional heat conduction, with thermoelastic temperature gradients along the radial thickness and the circumferential direction, is considered in the present model. A two-dimensional heat conduction equation is developed. The solution of the equation is represented by the product of an assumed sine series along the radial thickness and an assumed trigonometric series along the circumferential direction. The analytical results obtained by the present 2-D model show good agreement with the numerical (FEM) results. The limitations of the previous 1-D model are assessed.

  18. Spherical harmonics analysis of surface density fluctuations of spherical ionic SDS and nonionic C12E8 micelles: A molecular dynamics study

    NASA Astrophysics Data System (ADS)

    Yoshii, Noriyuki; Nimura, Yuki; Fujimoto, Kazushi; Okazaki, Susumu

    2017-07-01

    The surface structure and its fluctuation of spherical micelles were investigated using a series of density correlation functions newly defined by spherical harmonics and Legendre polynomials based on the molecular dynamics calculations. To investigate the influence of head-group charges on the micelle surface structure, ionic sodium dodecyl sulfate and nonionic octaethyleneglycol monododecylether (C12E8) micelles were investigated as model systems. Large-scale density fluctuations were observed for both micelles in the calculated surface static structure factor. The area compressibility of the micelle surface evaluated by the surface static structure factor was tens-of-times larger than a typical value of a lipid membrane surface. The structural relaxation time, which was evaluated from the surface intermediate scattering function, indicates that the relaxation mechanism of the long-range surface structure can be well described by the hydrostatic approximation. The density fluctuation on the two-dimensional micelle surface has similar characteristics to that of three-dimensional fluids near the critical point.

  19. Spherical harmonics analysis of surface density fluctuations of spherical ionic SDS and nonionic C12E8 micelles: A molecular dynamics study.

    PubMed

    Yoshii, Noriyuki; Nimura, Yuki; Fujimoto, Kazushi; Okazaki, Susumu

    2017-07-21

    The surface structure and its fluctuation of spherical micelles were investigated using a series of density correlation functions newly defined by spherical harmonics and Legendre polynomials based on the molecular dynamics calculations. To investigate the influence of head-group charges on the micelle surface structure, ionic sodium dodecyl sulfate and nonionic octaethyleneglycol monododecylether (C12E8) micelles were investigated as model systems. Large-scale density fluctuations were observed for both micelles in the calculated surface static structure factor. The area compressibility of the micelle surface evaluated by the surface static structure factor was tens-of-times larger than a typical value of a lipid membrane surface. The structural relaxation time, which was evaluated from the surface intermediate scattering function, indicates that the relaxation mechanism of the long-range surface structure can be well described by the hydrostatic approximation. The density fluctuation on the two-dimensional micelle surface has similar characteristics to that of three-dimensional fluids near the critical point.

  20. Discontinuous Galerkin algorithms for fully kinetic plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Juno, J.; Hakim, A.; TenBarge, J.

    Here, we present a new algorithm for the discretization of the non-relativistic Vlasov–Maxwell system of equations for the study of plasmas in the kinetic regime. Using the discontinuous Galerkin finite element method for the spatial discretization, we obtain a high order accurate solution for the plasma's distribution function. Time stepping for the distribution function is done explicitly with a third order strong-stability preserving Runge–Kutta method. Since the Vlasov equation in the Vlasov–Maxwell system is a high dimensional transport equation, up to six dimensions plus time, we take special care to note various features we have implemented to reduce the cost while maintaining the integrity of the solution, including the use of a reduced high-order basis set. A series of benchmarks, from simple wave and shock calculations, to a five dimensional turbulence simulation, are presented to verify the efficacy of our set of numerical methods, as well as demonstrate the power of the implemented features.

  1. Discontinuous Galerkin algorithms for fully kinetic plasmas

    DOE PAGES

    Juno, J.; Hakim, A.; TenBarge, J.; ...

    2017-10-10

    Here, we present a new algorithm for the discretization of the non-relativistic Vlasov–Maxwell system of equations for the study of plasmas in the kinetic regime. Using the discontinuous Galerkin finite element method for the spatial discretization, we obtain a high order accurate solution for the plasma's distribution function. Time stepping for the distribution function is done explicitly with a third order strong-stability preserving Runge–Kutta method. Since the Vlasov equation in the Vlasov–Maxwell system is a high dimensional transport equation, up to six dimensions plus time, we take special care to note various features we have implemented to reduce the cost while maintaining the integrity of the solution, including the use of a reduced high-order basis set. A series of benchmarks, from simple wave and shock calculations, to a five dimensional turbulence simulation, are presented to verify the efficacy of our set of numerical methods, as well as demonstrate the power of the implemented features.
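
    The third-order strong-stability-preserving Runge-Kutta step mentioned above has the classical Shu-Osher form, sketched below in Python with a placeholder right-hand side standing in for the discontinuous Galerkin spatial operator; the example advection problem is ours, not one of the paper's benchmarks.

```python
import numpy as np

def ssp_rk3_step(u, dt, L):
    """One step of the classical third-order strong-stability-preserving
    Runge-Kutta scheme (Shu-Osher form); L(u) is the semi-discrete RHS."""
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))

# Example: linear advection on a periodic grid with a simple first-order upwind RHS
nx, c = 200, 1.0
dx = 1.0 / nx
u = np.exp(-200.0 * (np.linspace(0.0, 1.0, nx, endpoint=False) - 0.5) ** 2)
L = lambda v: -c * (v - np.roll(v, 1)) / dx
for _ in range(100):
    u = ssp_rk3_step(u, 0.4 * dx / c, L)
```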

  2. Rapid computation of directional wellbore drawdown in a confined aquifer via Poisson resummation

    NASA Astrophysics Data System (ADS)

    Blumenthal, Benjamin J.; Zhan, Hongbin

    2016-08-01

    We have derived a rapidly computed analytical solution for drawdown caused by a partially or fully penetrating directional wellbore (vertical, horizontal, or slant) via Green's function method. The mathematical model assumes an anisotropic, homogeneous, confined, box-shaped aquifer. Any dimension of the box can have one of six possible boundary conditions: 1) both sides no-flux; 2) one side no-flux - one side constant-head; 3) both sides constant-head; 4) one side no-flux; 5) one side constant-head; 6) free boundary conditions. The solution has been optimized for rapid computation via Poisson Resummation, derivation of convergence rates, and numerical optimization of integration techniques. Upon application of the Poisson Resummation method, we were able to derive two sets of solutions with inverse convergence rates, namely an early-time rapidly convergent series (solution-A) and a late-time rapidly convergent series (solution-B). From this work we were able to link Green's function method (solution-B) back to image well theory (solution-A). We then derived an equation defining when the convergence rate between solution-A and solution-B is the same, which we termed the switch time. Utilizing the more rapidly convergent solution at the appropriate time, we obtained rapid convergence at all times. We have also shown that one may simplify each of the three infinite series for the three-dimensional solution to 11 terms and still maintain a maximum relative error of less than 10^-14.
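
    The Poisson resummation step can be sketched schematically as follows (our notation): the summation formula trades a slowly convergent sum for its rapidly convergent dual, and for Gaussian (heat-kernel-type) terms it reduces to the classical theta-function identity, whose two sides converge quickly at late and early times respectively; this is the origin of the switch time between solution-A and solution-B.

```latex
\[
  \sum_{n=-\infty}^{\infty} f(n) \;=\; \sum_{k=-\infty}^{\infty} \hat{f}(k),
  \qquad
  \hat{f}(k) \;=\; \int_{-\infty}^{\infty} f(x)\, e^{-2\pi i k x}\, \mathrm{d}x,
\]
\[
  \text{e.g.}\qquad
  \sum_{n=-\infty}^{\infty} e^{-\pi n^{2} t}
  \;=\; \frac{1}{\sqrt{t}} \sum_{k=-\infty}^{\infty} e^{-\pi k^{2}/t}.
\]
```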

  3. Cosmological rotating black holes in five-dimensional fake supergravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nozawa, Masato; Maeda, Kei-ichi; Waseda Research Institute for Science and Engineering, Okubo 3-4-1, Shinjuku, Tokyo 169-8555

    2011-01-15

    In recent series of papers, we found an arbitrary dimensional, time-evolving, and spatially inhomogeneous solution in Einstein-Maxwell-dilaton gravity with particular couplings. Similar to the supersymmetric case, the solution can be arbitrarily superposed in spite of nontrivial time-dependence, since the metric is specified by a set of harmonic functions. When each harmonic has a single point source at the center, the solution describes a spherically symmetric black hole with regular Killing horizons and the spacetime approaches asymptotically to the Friedmann-Lemaitre-Robertson-Walker (FLRW) cosmology. We discuss in this paper that in 5 dimensions, this equilibrium condition traces back to the first-order 'Killing spinor' equation in 'fake supergravity' coupled to arbitrary U(1) gauge fields and scalars. We present a five-dimensional, asymptotically FLRW, rotating black-hole solution admitting a nontrivial 'Killing spinor', which is a spinning generalization of our previous solution. We argue that the solution admits nondegenerate and rotating Killing horizons in contrast with the supersymmetric solutions. It is shown that the present pseudo-supersymmetric solution admits closed timelike curves around the central singularities. When only one harmonic is time-dependent, the solution oxidizes to 11 dimensions and realizes the dynamically intersecting M2/M2/M2-branes in a rotating Kasner universe. The Kaluza-Klein-type black holes are also discussed.

  4. Application of adaptive gridding to magnetohydrodynamic flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schnack, D.D.; Lotatti, I.; Satyanarayana, P.

    1996-12-31

    The numerical simulation of the primitive, three-dimensional, time-dependent, resistive MHD equations on an unstructured, adaptive poloidal mesh using the TRIM code has been reported previously. The toroidal coordinate is approximated pseudo-spectrally with finite Fourier series and Fast-Fourier Transforms. The finite-volume algorithm preserves the magnetic field as solenoidal to round-off error, and also conserves mass, energy, and magnetic flux exactly. A semi-implicit method is used to allow for large time steps on the unstructured mesh. This is important for tokamak calculations where the relevant time scale is determined by the poloidal Alfven time. This also allows the viscosity to be treated implicitly. A conjugate-gradient method with pre-conditioning is used for matrix inversion. Applications to the growth and saturation of ideal instabilities in several toroidal fusion systems have been demonstrated. Recently we have concentrated on the details of the mesh adaption algorithm used in TRIM. We present several two-dimensional results relating to the use of grid adaptivity to track the evolution of hydrodynamic and MHD structures. Examples of plasma guns, opening switches, and supersonic flow over a magnetized sphere are presented. Issues relating to mesh adaption criteria are discussed.

  5. Interactive visualization of multi-data-set Rietveld analyses using Cinema:Debye-Scherrer.

    PubMed

    Vogel, Sven C; Biwer, Chris M; Rogers, David H; Ahrens, James P; Hackenberg, Robert E; Onken, Drew; Zhang, Jianzhong

    2018-06-01

    A tool named Cinema:Debye-Scherrer to visualize the results of a series of Rietveld analyses is presented. The multi-axis visualization of the high-dimensional data sets resulting from powder diffraction analyses allows identification of analysis problems, prediction of suitable starting values, identification of gaps in the experimental parameter space and acceleration of scientific insight from the experimental data. The tool is demonstrated with analysis results from 59 U-Nb alloy samples with different compositions, annealing times and annealing temperatures as well as with a high-temperature study of the crystal structure of CsPbBr3. A script to extract parameters from a series of Rietveld analyses employing the widely used GSAS Rietveld software is also described. Both software tools are available for download.

  6. Interactive visualization of multi-data-set Rietveld analyses using Cinema:Debye-Scherrer

    PubMed Central

    Biwer, Chris M.; Rogers, David H.; Ahrens, James P.; Hackenberg, Robert E.; Onken, Drew; Zhang, Jianzhong

    2018-01-01

    A tool named Cinema:Debye-Scherrer to visualize the results of a series of Rietveld analyses is presented. The multi-axis visualization of the high-dimensional data sets resulting from powder diffraction analyses allows identification of analysis problems, prediction of suitable starting values, identification of gaps in the experimental parameter space and acceleration of scientific insight from the experimental data. The tool is demonstrated with analysis results from 59 U–Nb alloy samples with different compositions, annealing times and annealing temperatures as well as with a high-temperature study of the crystal structure of CsPbBr3. A script to extract parameters from a series of Rietveld analyses employing the widely used GSAS Rietveld software is also described. Both software tools are available for download. PMID:29896062

  7. Localized motion in random matrix decomposition of complex financial systems

    NASA Astrophysics Data System (ADS)

    Jiang, Xiong-Fei; Zheng, Bo; Ren, Fei; Qiu, Tian

    2017-04-01

    With the random matrix theory, we decompose the multi-dimensional time series of complex financial systems into a set of orthogonal eigenmode functions, which are classified into the market mode, sector mode, and random mode. In particular, the localized motion generated by the business sectors, plays an important role in financial systems. Both the business sectors and their impact on the stock market are identified from the localized motion. We clarify that the localized motion induces different characteristics of the time correlations for the stock-market index and individual stocks. With a variation of a two-factor model, we reproduce the return-volatility correlations of the eigenmodes.
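
    A minimal Python sketch of the random-matrix decomposition described above is given below: eigenmodes of the equal-time correlation matrix are computed and those above the Marchenko-Pastur edge are flagged as non-random (market or sector) modes. This is a generic illustration, not the authors' exact procedure, and all names are ours.

```python
import numpy as np

def eigenmode_decomposition(returns):
    """Decompose a return series (n_times x n_stocks) into eigenmodes of the
    correlation matrix and flag those above the Marchenko-Pastur upper edge."""
    T, N = returns.shape
    z = (returns - returns.mean(axis=0)) / returns.std(axis=0)
    C = z.T @ z / T                              # equal-time correlation matrix
    eigval, eigvec = np.linalg.eigh(C)
    q = T / N
    lam_max = (1.0 + 1.0 / np.sqrt(q)) ** 2      # Marchenko-Pastur upper edge
    modes = z @ eigvec                           # eigenmode time series
    non_random = eigval > lam_max                # market and sector modes
    return eigval, modes, non_random
```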

  8. Multi-Spacecraft 3D differential emission measure tomography of the solar corona: STEREO results.

    NASA Astrophysics Data System (ADS)

    Vásquez, A. M.; Frazin, R. A.

    We have recently developed a novel technique (called DEMT) for the em- pirical determination of the three-dimensional (3D) distribution of the so- lar corona differential emission measure through multi-spacecraft solar ro- tational tomography of extreme-ultaviolet (EUV) image time series (like those provided by EIT/SOHO and EUVI/STEREO). The technique allows, for the first time, to develop global 3D empirical maps of the coronal elec- tron temperature and density, in the height range 1.0 to 1.25 RS . DEMT constitutes a simple and powerful 3D analysis tool that obviates the need for structure specific modeling.

  9. Synchrotron-based X-ray computed tomography during compression loading of cellular materials

    DOE PAGES

    Cordes, Nikolaus L.; Henderson, Kevin; Stannard, Tyler; ...

    2015-04-29

    Three-dimensional X-ray computed tomography (CT) of in situ dynamic processes provides internal snapshot images as a function of time. Tomograms are mathematically reconstructed from a series of radiographs taken in rapid succession as the specimen is rotated in small angular increments. In addition to spatial resolution, temporal resolution is important. Thus temporal resolution indicates how close together in time two distinct tomograms can be acquired. Tomograms taken in rapid succession allow detailed analyses of internal processes that cannot be obtained by other means. This article describes the state-of-the-art for such measurements acquired using synchrotron radiation as the X-ray source.

  10. Observatory geoelectric fields induced in a two-layer lithosphere during magnetic storms

    USGS Publications Warehouse

    Love, Jeffrey J.; Swidinsky, Andrei

    2015-01-01

    We report on the development and validation of an algorithm for estimating geoelectric fields induced in the lithosphere beneath an observatory during a magnetic storm. To accommodate induction in three-dimensional lithospheric electrical conductivity, we analyze a simple nine-parameter model: two horizontal layers, each with uniform electrical conductivity properties given by independent distortion tensors. With Laplace transformation of the induction equations into the complex frequency domain, we obtain a transfer function describing induction of observatory geoelectric fields having frequency-dependent polarization. Upon inverse transformation back to the time domain, the convolution of the corresponding impulse-response function with a geomagnetic time series yields an estimated geoelectric time series. We obtain an optimized set of conductivity parameters using 1-s resolution geomagnetic and geoelectric field data collected at the Kakioka, Japan, observatory for five different intense magnetic storms, including the October 2003 Halloween storm; our estimated geoelectric field accounts for 93% of that measured during the Halloween storm. This work demonstrates the need for detailed modeling of the Earth’s lithospheric conductivity structure and the utility of co-located geomagnetic and geoelectric monitoring.
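
    As a hedged illustration of the convolution step (for a uniform half-space rather than the paper's two-layer distorted model), the geoelectric field can be estimated from dB/dt with the plane-wave kernel 1/sqrt(pi*mu0*sigma*u); the Python sketch below uses an assumed conductivity and simple midpoint quadrature, and is not the authors' algorithm.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

def geoelectric_halfspace(dbdt, dt, sigma=0.001):
    """Estimate a geoelectric series (V/m) from dB/dt (T/s) sampled every dt seconds,
    for a uniform half-space of conductivity sigma (S/m):
    E(t) ~ 1/sqrt(pi*mu0*sigma) * integral of dB/dt(t-u) / sqrt(u) du."""
    u = (np.arange(len(dbdt)) + 0.5) * dt          # midpoint rule avoids u = 0
    kernel = 1.0 / (np.sqrt(np.pi * MU0 * sigma) * np.sqrt(u))
    return np.convolve(dbdt, kernel, mode="full")[: len(dbdt)] * dt
```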

  11. Numerical modeling of simultaneous tracer release and piscicide treatment for invasive species control in the Chicago Sanitary and Ship Canal, Chicago, Illinois

    USGS Publications Warehouse

    Zhu, Zhenduo; Motta, Davide; Jackson, P. Ryan; Garcia, Marcelo H.

    2017-01-01

    In December 2009, during a piscicide treatment targeting the invasive Asian carp in the Chicago Sanitary and Ship Canal, Rhodamine WT dye was released to track and document the transport and dispersion of the piscicide. In this study, two modeling approaches are presented to reproduce the advection and dispersion of the dye tracer (and piscicide): a one-dimensional analytical solution and a three-dimensional numerical model. The two approaches were compared with field measurements of concentration and their applicability is discussed. Acoustic Doppler current profiler measurements were used to estimate the longitudinal dispersion coefficients at ten cross sections, which were taken as reference for calibrating the longitudinal dispersion coefficient in the one-dimensional analytical solution. While the analytical solution is fast, relatively simple, and can fairly accurately predict the core of the observed concentration time series at points downstream, it does not capture the tail of the breakthrough curves. These tails are well reproduced by the three-dimensional model, because it accounts for the effects of dead zones and a power plant which withdraws nearly 80% of the water from the canal for cooling purposes before returning it back to the canal.
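
    The one-dimensional analytical solution referred to above is, for an instantaneous (slug) release, the standard advection-dispersion Gaussian; the Python sketch below evaluates it with hypothetical parameter values (the actual release was distributed in time, so this is only illustrative).

```python
import numpy as np

def slug_concentration(x, t, mass, area, velocity, dispersion):
    """Concentration (kg/m^3) at distance x (m) and time t (s) after an instantaneous
    release of 'mass' (kg) into a channel of cross-sectional area 'area' (m^2),
    advected at mean velocity (m/s) with longitudinal dispersion coefficient (m^2/s)."""
    return (mass / (area * np.sqrt(4.0 * np.pi * dispersion * t))
            * np.exp(-(x - velocity * t) ** 2 / (4.0 * dispersion * t)))

# Breakthrough curve 5 km downstream of a hypothetical 10 kg release
t = np.linspace(3600.0, 7 * 24 * 3600.0, 2000)
c = slug_concentration(5000.0, t, mass=10.0, area=300.0, velocity=0.1, dispersion=10.0)
```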

  12. On function classes related to strong approximation of double Fourier series

    NASA Astrophysics Data System (ADS)

    Baituyakova, Zhuldyz

    2015-09-01

    The investigation of embedding of function classes began a long time ago. After Alexits [1], Leindler [2], and Gogoladze [3] investigated estimates of strong approximation by Fourier series in 1965, G. Freud [4] raised the corresponding saturation problem in 1969. The list of authors dealing with embedding problems is also very long. It suffices to mention some names: V. G. Krotov, W. Lenski, S. M. Mazhar, J. Nemeth, E. M. Nikisin, K. I. Oskolkov, G. Sunouchi, J. Szabados, R. Taberski and V. Totik. Study of this topic has since been carried on for over a decade, but it seems that most of the results obtained are limited to the case of one dimension. In this paper, embedding results are considered which arise in strong approximation by double Fourier series. We prove a theorem on the interrelation between the classes Wr1,r2HS,M ω and H(λ, p, r1, r2, ω(δ1, δ2)), proved in the one-dimensional case by L. Leindler.

  13. Seasonality of vegetation types of South America depicted by moderate resolution imaging spectroradiometer (MODIS) time series

    NASA Astrophysics Data System (ADS)

    Adami, Marcos; Bernardes, Sérgio; Arai, Egidio; Freitas, Ramon M.; Shimabukuro, Yosio E.; Espírito-Santo, Fernando D. B.; Rudorff, Bernardo F. T.; Anderson, Liana O.

    2018-07-01

    The development, implementation and enforcement of policies involving the rational use of the land and the conservation of natural resources depend on an adequate characterization and understanding of the land cover, including its dynamics. This paper presents an approach for monitoring vegetation dynamics using high-quality time series of MODIS surface reflectance data by generating fraction images with a Linear Spectral Mixing Model (LSMM) over the South American continent. The approach uses physically-based fraction images, which highlight target information and reduce data dimensionality. Dimensionality was further reduced by using the vegetation fraction images as input to a Principal Component Analysis (PCA). The RGB composite of the first three PCA components, accounting for 92.9% of the dataset variability, showed good agreement with the main ecological regions of South America. The analysis of 21 temporal profiles of vegetation fraction values and precipitation data over South America showed the ability of vegetation fractions to represent phenological cycles over a variety of environments. Comparisons between vegetation fractions and precipitation data indicated the close relationship between water availability and leaf mass/chlorophyll content for several vegetation types. In addition, phenological changes and disturbance resulting from anthropogenic pressure were identified, particularly those associated with agricultural practices and forest removal. Therefore, the proposed method supports the management of natural and non-natural ecosystems, and can contribute to the understanding of key conservation issues in South America, including deforestation, disturbance and fire occurrence and management.
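
    A minimal sketch of the two dimensionality-reduction steps described above, least-squares linear spectral unmixing to obtain fraction images followed by a PCA of the fraction series, is given below. The endmember spectra and array shapes are assumptions for illustration; they are not the LSMM endmembers used in the study.

```python
import numpy as np

def unmix(pixels, endmembers):
    """Least-squares linear spectral unmixing.

    pixels     : (n_pixels, n_bands) reflectance values
    endmembers : (n_endmembers, n_bands) pure-target spectra (placeholders)
    Returns fraction values of shape (n_pixels, n_endmembers).
    """
    fractions, *_ = np.linalg.lstsq(endmembers.T, pixels.T, rcond=None)
    return fractions.T

def pca(data, n_components=3):
    """Principal components of an (n_samples, n_features) fraction data matrix."""
    centered = data - data.mean(axis=0)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    explained = (s ** 2) / np.sum(s ** 2)
    return centered @ vt[:n_components].T, explained[:n_components]

# Hypothetical usage: 1000 pixels, 7 bands, 3 assumed endmembers
pixels = np.random.rand(1000, 7)
endmembers = np.random.rand(3, 7)
fractions = unmix(pixels, endmembers)
scores, var_explained = pca(fractions, n_components=3)
```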

  14. Prehistoric earthquake history revealed by lacustrine slump deposits

    NASA Astrophysics Data System (ADS)

    Schnellmann, Michael; Anselmetti, Flavio S.; Giardini, Domenico; McKenzie, Judith A.; Ward, Steven N.

    2002-12-01

    Five strong paleoseismic events were recorded in the past 15 k.y. in a series of slump deposits in the subsurface of Lake Lucerne, central Switzerland, revealing for the first time the paleoseismic history of one of the most seismically active areas in central Europe. Although many slump deposits in marine and lacustrine environments were previously attributed to historic earthquakes, the lack of detailed three-dimensional stratigraphic correlation in combination with accurate dating hampered the use of multiple slump deposits as paleoseismic indicators. This study investigated the fingerprint of the well-described A.D. 1601 earthquake (I = VII-VIII, Mw ≈ 6.2) in the sediments of Lake Lucerne. The earthquake triggered numerous synchronous slumps and megaturbidites within different subbasins of the lake, producing a characteristic pattern that can be used to assign a seismic triggering mechanism to prehistoric slump events. For each seismic event horizon, the slump synchronicity was established by seismic-stratigraphic correlation between individual slump deposits through a quasi-three-dimensional high-resolution seismic survey grid. Four prehistoric events, dated by accelerator mass spectrometry (AMS) 14C measurements and tephrochronology on a series of long gravity cores, occurred at 2420, 9770, 13,910, and 14,560 calendar yr ago. These recurrence times are essential factors for assessing seismic hazard in the area. The seismic hazard for lakeshore communities is additionally amplified by slump-induced tsunami and seiche waves. Numerical modeling of such tsunami waves revealed wave heights of up to 3 m, indicating tsunami risk in lacustrine environments.

  15. Prospective MR image alignment between breath-holds: Application to renal BOLD MRI.

    PubMed

    Kalis, Inge M; Pilutti, David; Krafft, Axel J; Hennig, Jürgen; Bock, Michael

    2017-04-01

    To present an image registration method for renal blood oxygen level-dependent (BOLD) measurements that enables semiautomatic assessment of parenchymal and medullary R2* changes under a functional challenge. In a series of breath-hold acquisitions, three-dimensional data were acquired initially for prospective image registration of subsequent BOLD measurements. An algorithm for kidney alignment for BOLD renal imaging (KALIBRI) was implemented to detect the positions of the left and right kidney so that the kidneys were acquired in the subsequent BOLD measurement at consistent anatomical locations. Residual in-plane distortions were corrected retrospectively so that semiautomatic dynamic R2* measurements of the renal cortex and medulla become feasible. KALIBRI was tested in six healthy volunteers during a series of BOLD experiments, which included a 600- to 1000-mL water challenge. Prospective image registration and BOLD imaging of each kidney was achieved within a total measurement time of about 17 s, enabling its execution within a single breath-hold. KALIBRI improved the registration by up to 35% as found with mutual information measures. In four volunteers, a medullary R2* decrease of up to 40% was observed after water ingestion. KALIBRI improves the quality of two-dimensional time-resolved renal BOLD MRI by aligning local renal anatomy, which allows for consistent R2* measurements over many breath-holds. Magn Reson Med 77:1573-1582, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  16. Deformation Monitoring and Analysis of Lsp Landslide Based on Gbinsar

    NASA Astrophysics Data System (ADS)

    Zhou, L.; Guo, J.; Yang, F.

    2018-05-01

    Monitoring and analyzing the deformation of a riverside landslide in an urban area, in order to understand its deformation behaviour, is an important means of landslide safety assessment. This paper addresses the stability of the Liu Sha Peninsula Landslide during the reinforcement process that followed the landslide disaster. Continuous, high-precision deformation monitoring of the landslide was carried out with the GBInSAR technique, and two-dimensional deformation time-series maps of the landslide body were retrieved by the time series analysis method. The deformation monitoring and analysis results show that the reinforcement belt on the landslide body was basically stable and the deformation of most PS points on the reinforcement belt was within 1 mm. The deformation of most areas of the landslide body was within 4 mm, and the deformation presented obvious nonlinear changes. The GBInSAR technique can quickly and effectively capture the overall deformation of the river landslide and the evolution of the deformation process.

  17. Tipping point analysis of ocean acoustic noise

    NASA Astrophysics Data System (ADS)

    Livina, Valerie N.; Brouwer, Albert; Harris, Peter; Wang, Lian; Sotirakopoulos, Kostas; Robinson, Stephen

    2018-02-01

    We apply tipping point analysis to a large record of ocean acoustic data to identify the main components of the acoustic dynamical system and study possible bifurcations and transitions of the system. The analysis is based on a statistical physics framework with stochastic modelling, where we represent the observed data as a composition of deterministic and stochastic components estimated from the data using time-series techniques. We analyse long-term and seasonal trends, system states and acoustic fluctuations to reconstruct a one-dimensional stochastic equation to approximate the acoustic dynamical system. We apply potential analysis to acoustic fluctuations and detect several changes in the system states in the past 14 years. These are most likely caused by climatic phenomena. We analyse trends in sound pressure level within different frequency bands and hypothesize a possible anthropogenic impact on the acoustic environment. The tipping point analysis framework provides insight into the structure of the acoustic data and helps identify its dynamic phenomena, correctly reproducing the probability distribution and scaling properties (power-law correlations) of the time series.
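
    One ingredient of such an analysis, reconstructing a one-dimensional stochastic model from the fluctuations, can be sketched by estimating the drift from conditional mean increments and integrating its negative to obtain an effective potential. The binning, thresholds and units below are illustrative assumptions; this is not the exact potential-analysis procedure used in the study.

```python
import numpy as np

def drift_and_potential(x, dt, n_bins=50):
    """Estimate the drift of a 1-D stochastic process from conditional mean
    increments, then integrate -drift to obtain an effective potential.

    x  : 1-D array of fluctuation values
    dt : sampling interval
    """
    increments = np.diff(x)
    bins = np.linspace(x.min(), x.max(), n_bins + 1)
    centers = 0.5 * (bins[:-1] + bins[1:])
    idx = np.digitize(x[:-1], bins) - 1
    drift = np.full(n_bins, np.nan)
    for i in range(n_bins):
        mask = idx == i
        if mask.sum() > 10:               # require enough samples per bin
            drift[i] = increments[mask].mean() / dt
    # Effective potential U(x) = -integral of drift dx (empty bins contribute 0)
    valid = ~np.isnan(drift)
    potential = -np.cumsum(np.where(valid, drift, 0.0)) * (centers[1] - centers[0])
    return centers, drift, potential
```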

  18. Optimal estimation of recurrence structures from time series

    NASA Astrophysics Data System (ADS)

    beim Graben, Peter; Sellers, Kristin K.; Fröhlich, Flavio; Hutt, Axel

    2016-05-01

    Recurrent temporal dynamics is a phenomenon observed frequently in high-dimensional complex systems, and its detection is a challenging task. Recurrence quantification analysis utilizing recurrence plots may extract such dynamics; however, it still encounters an unsolved problem: the optimal selection of distance thresholds for estimating the recurrence structure of dynamical systems. The present work proposes a stochastic Markov model for the recurrent dynamics that allows for the analytical derivation of a criterion for the optimal distance threshold. The goodness of fit is assessed by a utility function which assumes a local maximum for that threshold, reflecting the optimal estimate of the system's recurrence structure. We validate our approach by means of the nonlinear Lorenz system and its linearized stochastic surrogates. The final application to neurophysiological time series obtained from anesthetized animals illustrates the method and reveals novel dynamic features of the underlying system. We propose the number of optimal recurrence domains as a statistic for classifying an animal's state of consciousness.
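
    The basic object behind this threshold-selection problem, the recurrence matrix at a given distance threshold, is easy to compute for an embedded trajectory. The sketch below is generic; the threshold value and any embedding choices are placeholders rather than the optimal settings derived in the paper.

```python
import numpy as np

def recurrence_matrix(x, threshold):
    """Binary recurrence plot R[i, j] = 1 if ||x_i - x_j|| <= threshold.

    x : (n_samples, n_dims) trajectory (a scalar series can be passed as
        shape (n, 1) or after delay embedding).
    """
    dists = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return (dists <= threshold).astype(int)

def recurrence_rate(r):
    """Fraction of recurrent pairs, one simple statistic that varies strongly
    with the chosen distance threshold."""
    return r.mean()

# Hypothetical usage on a short 2-D trajectory with an arbitrary threshold
traj = np.cumsum(np.random.randn(500, 2), axis=0)
rr = recurrence_rate(recurrence_matrix(traj, threshold=1.0))
```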

  19. Stochastic Forecasting of Labor Supply and Population: An Integrated Model.

    PubMed

    Fuchs, Johann; Söhnlein, Doris; Weber, Brigitte; Weber, Enzo

    2018-01-01

    This paper presents a stochastic model to forecast the German population and labor supply until 2060. Within a cohort-component approach, our population forecast applies principal components analysis to birth, mortality, emigration, and immigration rates, which allows for the reduction of dimensionality and accounts for correlation of the rates. Labor force participation rates are estimated by means of an econometric time series approach. All time series are forecast by stochastic simulation using the bootstrap method. As our model also distinguishes between German and foreign nationals, different developments in fertility, migration, and labor participation could be predicted. The results show that even rising birth rates and high levels of immigration cannot break the basic demographic trend in the long run. An important finding from an endogenous modeling of emigration rates is that high net migration in the long run will be difficult to achieve. Our stochastic perspective therefore suggests a high probability of a substantial decrease in the labor supply in Germany.

  20. Hidden discriminative features extraction for supervised high-order time series modeling.

    PubMed

    Nguyen, Ngoc Anh Thi; Yang, Hyung-Jeong; Kim, Sunhee

    2016-11-01

    In this paper, an orthogonal Tucker-decomposition-based extraction of high-order discriminative subspaces from a tensor-based time series data structure is presented, named Tensor Discriminative Feature Extraction (TDFE). TDFE relies on the employment of category information for the maximization of the between-class scatter and the minimization of the within-class scatter to extract optimal hidden discriminative feature subspaces that are simultaneously spanned by every modality for supervised tensor modeling. In this context, the proposed tensor-decomposition method provides the following benefits: i) it reduces dimensionality while robustly mining the underlying discriminative features, ii) it results in effective interpretable features that lead to improved classification and visualization, and iii) it reduces the processing time during the training stage and the filtering of the projection by solving the generalized eigenvalue problem at each alternation step. Two real third-order tensor structures of time series datasets (an epilepsy electroencephalogram (EEG) that is modeled as channel×frequency bin×time frame and a microarray dataset that is modeled as gene×sample×time) were used for the evaluation of the TDFE. The experiment results corroborate the advantages of the proposed method with averages of 98.26% and 89.63% for the classification accuracies of the epilepsy dataset and the microarray dataset, respectively. These performance averages represent an improvement on those of the matrix-based algorithms and recent tensor-based, discriminant-decomposition approaches; this is especially the case considering the small number of samples that are used in practice. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Interpretation of a compositional time series

    NASA Astrophysics Data System (ADS)

    Tolosana-Delgado, R.; van den Boogaart, K. G.

    2012-04-01

    Common methods for multivariate time series analysis use linear operations, from the definition of a time-lagged covariance/correlation to the prediction of new outcomes. However, when the time series response is a composition (a vector of positive components showing the relative importance of a set of parts in a total, like percentages and proportions), then linear operations suffer from several problems. For instance, it has long been recognised that (auto/cross-)correlations between raw percentages are spurious, more dependent on which other components are being considered than on any natural link between the components of interest. Also, a long-term forecast of a composition in models with a linear trend will ultimately predict negative components. In general terms, compositional data should not be treated on a raw scale, but after a log-ratio transformation (Aitchison, 1986: The statistical analysis of compositional data. Chapman and Hill). This is so because the information conveyed by compositional data is relative, as stated in their definition. The principle of working in coordinates allows any sort of multivariate analysis to be applied to a log-ratio transformed composition, as long as this transformation is invertible. This principle applies fully to time series analysis. We will discuss how results (both auto/cross-correlation functions and predictions) can be back-transformed, viewed and interpreted in a meaningful way. One view is to use the exhaustive set of all possible pairwise log-ratios, which allows the results to be expressed as D(D - 1)/2 separate, interpretable sets of one-dimensional models showing the behaviour of each possible pairwise log-ratio. Another view is the interpretation of estimated coefficients or correlations back-transformed in terms of compositions. These two views are compatible and complementary. These issues are illustrated with time series of seasonal precipitation patterns at different rain gauges of the USA. In this data set, the proportion of annual precipitation falling in winter, spring, summer and autumn is considered a 4-component time series. Three invertible log-ratios are defined for the calculations, balancing rainfall in autumn vs. winter, in summer vs. spring, and in autumn-winter vs. spring-summer. Results suggest a 2-year correlation range, and a certain oscillatory behaviour in the last balance, which does not occur in the other two.
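
    The exhaustive set of all possible pairwise log-ratios mentioned above is straightforward to generate. The sketch below uses hypothetical seasonal proportions; the part names and values are illustrative only.

```python
import numpy as np
from itertools import combinations

def pairwise_logratios(comp, names):
    """All D(D-1)/2 pairwise log-ratios of a compositional time series.

    comp  : (n_times, D) array of strictly positive parts (e.g. seasonal
            precipitation proportions per year)
    names : list of D part names
    Returns a dict mapping 'log(a/b)' to a 1-D time series.
    """
    out = {}
    for i, j in combinations(range(comp.shape[1]), 2):
        out[f"log({names[i]}/{names[j]})"] = np.log(comp[:, i] / comp[:, j])
    return out

# Hypothetical seasonal composition (each row sums to 1)
seasons = ["winter", "spring", "summer", "autumn"]
comp = np.array([[0.35, 0.25, 0.15, 0.25],
                 [0.30, 0.30, 0.20, 0.20]])
ratios = pairwise_logratios(comp, seasons)   # 6 pairwise log-ratio series for D = 4
```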

  2. Space-Pseudo-Time Method: Application to the One-Dimensional Coulomb Potential and Density Functional Theory

    NASA Astrophysics Data System (ADS)

    Weatherford, Charles; Gebremedhin, Daniel

    2016-03-01

    A new and efficient way of evolving a solution to an ordinary differential equation is presented. A finite element method is used in which we expand in a convenient local basis set of functions that enforce both function and first-derivative continuity across the boundaries of each element. We also implement an adaptive step-size choice for each element that is based on a Taylor series expansion. The method is applied to solve for the eigenpairs of the one-dimensional soft-Coulomb potential, and the hard-Coulomb limit is studied. The method is then used to calculate a numerical solution of the Kohn-Sham differential equation within the local density approximation, which is applied to the helium atom. Supported by the National Nuclear Security Administration, the Nuclear Regulatory Commission, and the Defense Threat Reduction Agency.

  3. Experiment and simulation on one-dimensional plasma photonic crystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Lin; Ouyang, Ji-Ting, E-mail: jtouyang@bit.edu.cn

    2014-10-15

    The transmission characteristics of microwaves passing through one-dimensional plasma photonic crystals (PPCs) have been investigated by experiment and simulation. The PPCs were formed by a series of discharge tubes filled with argon at 5 Torr, so that the plasma density in the tubes can be varied by adjusting the discharge current. The transmittance of X-band microwaves through the crystal structure was measured under different discharge currents and geometrical parameters. The finite-difference time-domain method was employed to analyze the detailed properties of the microwave propagation. The results show that bandgaps exist when the plasma is turned on. The properties of the bandgaps depend on the plasma density and the geometrical parameters of the PPCs structure. The PPCs can perform as a dynamical band-stop filter to control the transmission of microwaves within a wide frequency range.

  4. A coarse-grained Monte Carlo approach to diffusion processes in metallic nanoparticles

    NASA Astrophysics Data System (ADS)

    Hauser, Andreas W.; Schnedlitz, Martin; Ernst, Wolfgang E.

    2017-06-01

    A kinetic Monte Carlo approach on a coarse-grained lattice is developed for the simulation of surface diffusion processes of Ni, Pd and Au structures with diameters in the range of a few nanometers. Intensity information obtained via standard two-dimensional transmission electron microscopy imaging techniques is used to create three-dimensional structure models as input for a cellular automaton. A series of update rules based on reaction kinetics is defined to allow for a stepwise evolution in time with the aim to simulate surface diffusion phenomena such as Rayleigh breakup and surface wetting. The material flow, in our case represented by the hopping of discrete portions of metal on a given grid, is driven by the attempt to minimize the surface energy, which can be achieved by maximizing the number of filled neighbor cells.
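
    A minimal sketch of such a lattice update rule, moving occupied cells to empty neighbouring cells so as to favour a larger number of filled neighbours (lower surface energy), is given below. The 3-D occupancy grid, bond energy and temperature parameter are illustrative assumptions, not the calibrated reaction-kinetics rules of the study.

```python
import numpy as np

rng = np.random.default_rng(0)
MOVES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def filled_neighbors(grid, i, j, k):
    """Count occupied nearest neighbours of cell (i, j, k) on a 3-D grid."""
    count = 0
    for di, dj, dk in MOVES:
        ii, jj, kk = i + di, j + dj, k + dk
        if 0 <= ii < grid.shape[0] and 0 <= jj < grid.shape[1] and 0 <= kk < grid.shape[2]:
            count += grid[ii, jj, kk]
    return count

def hop_step(grid, bond_energy=0.1, kT=0.025):
    """One illustrative Metropolis hop: move a randomly chosen occupied cell to an
    empty neighbour, accepting moves that increase the number of filled-filled
    bonds (i.e. lower the surface energy) or pass a Boltzmann test."""
    occupied = np.argwhere(grid == 1)
    i, j, k = occupied[rng.integers(len(occupied))]
    di, dj, dk = MOVES[rng.integers(len(MOVES))]
    ii, jj, kk = i + di, j + dj, k + dk
    if not (0 <= ii < grid.shape[0] and 0 <= jj < grid.shape[1] and 0 <= kk < grid.shape[2]):
        return
    if grid[ii, jj, kk] == 1:
        return
    # Bond change: neighbours of the target site (minus the moving cell itself)
    # versus neighbours of the original site.
    d_bonds = (filled_neighbors(grid, ii, jj, kk) - 1) - filled_neighbors(grid, i, j, k)
    d_energy = -bond_energy * d_bonds
    if d_energy <= 0 or rng.random() < np.exp(-d_energy / kT):
        grid[i, j, k], grid[ii, jj, kk] = 0, 1

# Hypothetical usage: a rod-like structure relaxing on a 20^3 grid
grid = np.zeros((20, 20, 20), dtype=int)
grid[8:12, 8:12, 5:15] = 1
for _ in range(10000):
    hop_step(grid)
```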

  5. Prediction (early recognition) of emerging flu strain clusters

    NASA Astrophysics Data System (ADS)

    Li, X.; Phillips, J. C.

    2017-08-01

    Early detection of incipient dominant influenza strains is one of the key steps in the design and manufacture of an effective annual influenza vaccine. Here we report the most current results for pandemic H3N2 flu vaccine design. A 2006 model of dimensional reduction (compaction) of viral mutational complexity derives two-dimensional Cartesian mutational maps (2DMM) that exhibit an emergent dominant strain as a small and distinct cluster of as few as 10 strains. We show that recent extensions of this model can detect incipient strains one year or more in advance of their dominance in the human population. Our structural interpretation of our unexpectedly rich 2DMM involves sialic acid, and is based on nearly 6000 strains in a series of recent 3-year time windows. Vaccine effectiveness is predicted best by analyzing dominant mutational epitopes.

  6. Using spectrotemporal indices to improve the fruit-tree crop classification accuracy

    NASA Astrophysics Data System (ADS)

    Peña, M. A.; Liao, R.; Brenning, A.

    2017-06-01

    This study assesses the potential of spectrotemporal indices derived from satellite image time series (SITS) to improve the classification accuracy of fruit-tree crops. Six major fruit-tree crop types in the Aconcagua Valley, Chile, were classified by applying various linear discriminant analysis (LDA) techniques to a Landsat-8 time series of nine images corresponding to the 2014-15 growing season. As features we used not only the complete spectral resolution of the SITS, but also all possible normalized difference indices (NDIs) that can be constructed from any two bands of the time series, a novel approach to deriving features from SITS. Due to the high dimensionality of this "enhanced" feature set we used the lasso and ridge penalized variants of LDA (PLDA). Although classification accuracies yielded by the standard LDA applied to the full-band SITS were good (misclassification error rate, MER = 0.13), they were further improved by 23% (MER = 0.10) with ridge PLDA using the enhanced feature set. The most important bands for discriminating the crops of interest were mainly concentrated in the first two image dates of the time series, corresponding to the crops' greenup stage. Despite the high predictor weights provided by the red and near-infrared bands, typically used to construct greenness spectral indices, other spectral regions were also found important for the discrimination, such as the shortwave infrared band at 2.11-2.19 μm, which is sensitive to foliar water changes. These findings support the usefulness of spectrotemporal indices in the context of SITS-based crop type classifications, which until now have been mainly constructed by the arithmetic combination of two bands of the same image date in order to derive greenness temporal profiles like those from the normalized difference vegetation index.
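
    The "enhanced" feature set described above, every normalized difference index that can be formed from two features of the stacked time series, can be generated mechanically. The function below is a hedged sketch; the feature ordering and the small epsilon guard are assumptions, not details from the paper.

```python
import numpy as np
from itertools import combinations

def all_ndis(stack, eps=1e-9):
    """Build every pairwise normalized difference index (b_i - b_j) / (b_i + b_j).

    stack : (n_pixels, n_features) matrix whose columns are all spectral bands
            of all image dates in the time series.
    Returns an (n_pixels, n_features * (n_features - 1) / 2) feature matrix.
    """
    cols = []
    for i, j in combinations(range(stack.shape[1]), 2):
        cols.append((stack[:, i] - stack[:, j]) / (stack[:, i] + stack[:, j] + eps))
    return np.column_stack(cols)

# Hypothetical usage: 9 dates x 6 bands flattened to 54 features per pixel
stack = np.random.rand(1000, 54)
ndi_features = all_ndis(stack)   # 1431 NDI features per pixel
```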

  7. Dynamic phase differences based on quantitative phase imaging for the objective evaluation of cell behavior.

    PubMed

    Krizova, Aneta; Collakova, Jana; Dostal, Zbynek; Kvasnica, Lukas; Uhlirova, Hana; Zikmund, Tomas; Vesely, Pavel; Chmelik, Radim

    2015-01-01

    Quantitative phase imaging (QPI) brought innovation to noninvasive observation of live cell dynamics seen as cell behavior. Unlike the Zernike phase contrast or differential interference contrast, QPI provides quantitative information about cell dry mass distribution. We used such data for objective evaluation of live cell behavioral dynamics by the advanced method of dynamic phase differences (DPDs). The DPDs method is considered a rational instrument offered by QPI. By subtracting the antecedent from the subsequent image in a time-lapse series, only the changes in mass distribution in the cell are detected. The result is either visualized as a two-dimensional color-coded projection of these two states of the cell or as a time dependence of changes quantified in picograms. Then in a series of time-lapse recordings, the chain of cell mass distribution changes that would otherwise escape attention is revealed. Consequently, new salient features of live cell behavior should emerge. Construction of the DPDs method and results exhibiting the approach are presented. Advantage of the DPDs application is demonstrated on cells exposed to an osmotic challenge. For time-lapse acquisition of quantitative phase images, the recently developed coherence-controlled holographic microscope was employed.
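
    The core operation of the DPD method, subtracting the antecedent from the subsequent quantitative phase image and expressing the change as dry mass, can be sketched as follows. The phase-to-mass conversion factor is a placeholder; in practice it depends on the wavelength, pixel area and specific refraction increment, which are not given here.

```python
import numpy as np

def dynamic_phase_differences(stack, mass_per_phase=1.0):
    """Dynamic phase differences: subtract each quantitative phase image from
    the next one in a time-lapse stack.

    stack          : (n_frames, ny, nx) quantitative phase images
    mass_per_phase : placeholder conversion factor from summed phase difference
                     to dry mass in picograms (assumed, not from the paper)
    Returns the difference stack and the per-frame net mass change.
    """
    diffs = np.diff(stack, axis=0)                 # (n_frames - 1, ny, nx)
    mass_change = diffs.sum(axis=(1, 2)) * mass_per_phase
    return diffs, mass_change

# Hypothetical usage on a synthetic 10-frame stack
stack = np.random.rand(10, 256, 256)
diffs, mass_change = dynamic_phase_differences(stack)
```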

  8. Dynamic phase differences based on quantitative phase imaging for the objective evaluation of cell behavior

    NASA Astrophysics Data System (ADS)

    Krizova, Aneta; Collakova, Jana; Dostal, Zbynek; Kvasnica, Lukas; Uhlirova, Hana; Zikmund, Tomas; Vesely, Pavel; Chmelik, Radim

    2015-11-01

    Quantitative phase imaging (QPI) brought innovation to noninvasive observation of live cell dynamics seen as cell behavior. Unlike the Zernike phase contrast or differential interference contrast, QPI provides quantitative information about cell dry mass distribution. We used such data for objective evaluation of live cell behavioral dynamics by the advanced method of dynamic phase differences (DPDs). The DPDs method is considered a rational instrument offered by QPI. By subtracting the antecedent from the subsequent image in a time-lapse series, only the changes in mass distribution in the cell are detected. The result is either visualized as a two-dimensional color-coded projection of these two states of the cell or as a time dependence of changes quantified in picograms. Then in a series of time-lapse recordings, the chain of cell mass distribution changes that would otherwise escape attention is revealed. Consequently, new salient features of live cell behavior should emerge. Construction of the DPDs method and results exhibiting the approach are presented. Advantage of the DPDs application is demonstrated on cells exposed to an osmotic challenge. For time-lapse acquisition of quantitative phase images, the recently developed coherence-controlled holographic microscope was employed.

  9. Dimensional adjectives: factors affecting children's ability to compare objects using novel words.

    PubMed

    Ryalls, B O

    2000-05-01

    A series of 3 studies tested the hypothesis that children's difficulty acquiring dimensional adjectives, such as big, little, tall, and short, is a consequence of how these words are used by adults. Three- and 4-year-olds were asked to compare pairs of objects drawn from a novel stimulus series using real dimension words (taller and shorter; Study 1) and novel dimension words (maller and borger; Studies 1-3). Characteristics of testing, including the presence or absence of a categorization task, were manipulated. Findings indicated that children easily acquired novel dimension words when they were used in a strictly comparative fashion but had difficulty when also exposed to the categorical form of usage. It is concluded that having to learn both categorical and comparative meanings at once may impede acquisition of dimensional adjectives. Copyright 2000 Academic Press.

  10. PERSONALITY DISORDER RESEARCH AGENDA FOR THE DSM–V

    PubMed Central

    Widiger, Thomas A.; Simonsen, Erik; Krueger, Robert; Livesley, W. John; Verheul, Roel

    2008-01-01

    The American Psychiatric Association is sponsoring a series of international conferences to set a research agenda for the development of the next edition of the diagnostic manual. The first conference in this series, “Dimensional Models of Personality Disorder: Etiology, Pathology, Phenomenology, & Treatment,” was devoted to reviewing the existing research and setting a future research agenda that would be most effective in leading the field toward a dimensional classification of personality disorder. The purpose of this article, authored by the Steering Committee of this conference, was to provide a summary of the conference papers and their recommendations for research. Covered herein are the reviews and recommendations concerning alternative dimensional models of personality disorder, behavioral genetics and gene mapping, neurobiological mechanisms, childhood antecedents, cross–cultural issues, Axes I and II continuity, coverage and cutoff points for diagnosis, and clinical utility. PMID:16175740

  11. Dynamical behaviors of inter-out-of-equilibrium state intervals in Korean futures exchange markets

    NASA Astrophysics Data System (ADS)

    Lim, Gyuchang; Kim, SooYong; Kim, Kyungsik; Lee, Dong-In; Scalas, Enrico

    2008-05-01

    A recently discovered feature of financial markets, the two-phase phenomenon, is utilized to categorize a financial time series into two phases, namely equilibrium and out-of-equilibrium states. For out-of-equilibrium states, we analyze the time intervals at which the state is revisited. The power-law distribution of inter-out-of-equilibrium state intervals is shown, and we present an analogy with discrete-time heat bath dynamics, similar to random Ising systems. In the mean-field approximation, this model reduces to a one-dimensional multiplicative process. By varying global and local model parameters, the relationship between volatilities in financial markets and the interaction strengths between agents in the Ising model is investigated and discussed.

  12. Interannual variability in phytoplankton pigment distribution during the spring transition along the west coast of North America

    NASA Technical Reports Server (NTRS)

    Thomas, A. C.; Strub, P. T.

    1989-01-01

    A 5-year time series of coastal zone color scanner imagery (1980-1983, 1986) is used to examine changes in the large-scale pattern of chlorophyll pigment concentration coincident with the spring transition in winds and currents along the west coast of North America. The data show strong interannual variability in the timing and spatial patterns of pigment concentration at the time of the transition event. Interannual variability in the response of pigment concentration to the spring transition appears to be a function of spatial and temporal variability in vertical nutrient flux induced by wind mixing and/or the upwelling initiated at the time of the transition. Interannual differences in the mixing regime are illustrated with a one-dimensional mixing model.

  13. Flight investigation of a four-dimensional terminal area guidance system for STOL aircraft

    NASA Technical Reports Server (NTRS)

    Neuman, F.; Hardy, G. H.

    1981-01-01

    A series of flight tests and fast-time simulations were conducted, using the augmentor wing jet STOL research aircraft and the STOLAND 4D-RNAV system to add to the growing data base of 4D-RNAV system performance capabilities. To obtain statistically meaningful data a limited amount of flight data were supplemented by a statistically significant amount of data obtained from fast-time simulation. The results of these tests are reported. Included are comparisons of the 4D-RNAV estimated winds with actual winds encountered in flight, as well as data on along-track navigation and guidance errors, and time-of-arrival errors at the final approach waypoint. In addition, a slight improvement of the STOLAND 4D-RNAV system is proposed and demonstrated, using the fast-time simulation.

  14. Cross-entropy clustering framework for catchment classification

    NASA Astrophysics Data System (ADS)

    Tongal, Hakan; Sivakumar, Bellie

    2017-09-01

    There is an increasing interest in catchment classification and regionalization in hydrology, as they are useful for identification of appropriate model complexity and transfer of information from gauged catchments to ungauged ones, among others. This study introduces a nonlinear cross-entropy clustering (CEC) method for classification of catchments. The method specifically considers embedding dimension (m), sample entropy (SampEn), and coefficient of variation (CV) to represent dimensionality, complexity, and variability of the time series, respectively. The method is applied to daily streamflow time series from 217 gauging stations across Australia. The results suggest that a combination of linear and nonlinear parameters (i.e. m, SampEn, and CV), representing different aspects of the underlying dynamics of streamflows, could be useful for determining distinct patterns of flow generation mechanisms within a nonlinear clustering framework. For the 217 streamflow time series, nine hydrologically homogeneous clusters that have distinct patterns of flow regime characteristics and specific dominant hydrological attributes with different climatic features are obtained. Comparison of the results with those obtained using the widely employed k-means clustering method (which results in five clusters, with the loss of some information about the features of the clusters) suggests the superiority of the cross-entropy clustering method. The outcomes from this study provide a useful guideline for employing the nonlinear dynamic approaches based on hydrologic signatures and for gaining an improved understanding of streamflow variability at a large scale.
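
    Two of the three clustering features described above are simple to compute from a streamflow record: the coefficient of variation and the sample entropy. The sketch below uses common defaults (m = 2, r = 0.2 times the standard deviation) that are not necessarily the settings of the study, and it omits the embedding-dimension estimate.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy of a 1-D series: -ln(A/B), where B counts template pairs
    of length m within tolerance r (Chebyshev distance) and A counts pairs of
    length m + 1."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * x.std()

    def count_pairs(length):
        templates = np.array([x[i:i + length] for i in range(n - m)])
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=-1)
        return (np.sum(dist <= r) - len(templates)) / 2.0   # exclude self-matches

    b = count_pairs(m)
    a = count_pairs(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def flow_features(q):
    """Coefficient of variation and sample entropy for one streamflow series."""
    q = np.asarray(q, dtype=float)
    return {"cv": q.std() / q.mean(), "sampen": sample_entropy(q)}

# Hypothetical usage on a synthetic daily flow record
flow = np.abs(np.cumsum(np.random.randn(1000))) + 1.0
features = flow_features(flow)
```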

  15. Spatial-temporal forecasting the sunspot diagram

    NASA Astrophysics Data System (ADS)

    Covas, Eurico

    2017-09-01

    Aims: We attempt to forecast the Sun's sunspot butterfly diagram in both space (i.e., in latitude) and time, instead of the usual one-dimensional time series forecasts prevalent in the scientific literature. Methods: We use a prediction method based on the non-linear embedding of data series in high dimensions. We use this method to forecast both in latitude (space) and in time, using a full spatial-temporal series of the sunspot diagram from 1874 to 2015. Results: The analysis of the results shows that it is indeed possible to reconstruct the overall shape and amplitude of the spatial-temporal pattern of sunspots, but that the method in its current form does not have real predictive power. We also apply a metric called structural similarity to compare the forecasted and the observed butterfly cycles, showing that this metric can be a useful addition to the usual root mean square error metric when analysing the efficiency of different prediction methods. Conclusions: We conclude that it is in principle possible to reconstruct the full sunspot butterfly diagram for at least one cycle using this approach and that this method and others should be explored since just looking at metrics such as sunspot count number or sunspot total area coverage is too reductive given the spatial-temporal dynamical complexity of the sunspot butterfly diagram. However, more data and/or an improved approach is probably necessary to have true predictive power.

  16. A Regularized Linear Dynamical System Framework for Multivariate Time Series Analysis.

    PubMed

    Liu, Zitao; Hauskrecht, Milos

    2015-01-01

    Linear Dynamical System (LDS) is an elegant mathematical framework for modeling and learning Multivariate Time Series (MTS). However, in general, it is difficult to set the dimension of an LDS's hidden state space. A small number of hidden states may not be able to model the complexities of a MTS, while a large number of hidden states can lead to overfitting. In this paper, we study learning methods that impose various regularization penalties on the transition matrix of the LDS model and propose a regularized LDS learning framework (rLDS) which aims to (1) automatically shut down LDSs' spurious and unnecessary dimensions, and consequently, address the problem of choosing the optimal number of hidden states; (2) prevent the overfitting problem given a small amount of MTS data; and (3) support accurate MTS forecasting. To learn the regularized LDS from data we incorporate a second order cone program and a generalized gradient descent method into the Maximum a Posteriori framework and use Expectation Maximization to obtain a low-rank transition matrix of the LDS model. We propose two priors for modeling the matrix which lead to two instances of our rLDS. We show that our rLDS is able to recover well the intrinsic dimensionality of the time series dynamics and it improves the predictive performance when compared to baselines on both synthetic and real-world MTS datasets.

  17. Modelling Parsing Constraints with High-Dimensional Context Space.

    ERIC Educational Resources Information Center

    Burgess, Curt; Lund, Kevin

    1997-01-01

    Presents a model of high-dimensional context space, the Hyperspace Analogue to Language (HAL), with a series of simulations modelling human empirical results. Proposes that HAL's context space can be used to provide a basic categorization of semantic and grammatical concepts; model certain aspects of morphological ambiguity in verbs; and provide…

  18. Three-dimensional representation of curved nanowires.

    PubMed

    Huang, Z; Dikin, D A; Ding, W; Qiao, Y; Chen, X; Fridman, Y; Ruoff, R S

    2004-12-01

    Nanostructures, such as nanowires, nanotubes and nanocoils, can be described in many cases as quasi one-dimensional curved objects projecting in three-dimensional space. A parallax method to construct the correct three-dimensional geometry of such one-dimensional nanostructures is presented. A series of scanning electron microscope images was acquired at different view angles, thus providing a set of image pairs that were used to generate three-dimensional representations using a matlab program. An error analysis as a function of the view angle between the two images is presented and discussed. As an example application, the importance of knowing the true three-dimensional shape of boron nanowires is demonstrated; without the nanowire's correct length and diameter, mechanical resonance data cannot provide an accurate estimate of Young's modulus.

  19. A two-dimensional graphing program for the Tektronix 4050-series graphics computers

    USGS Publications Warehouse

    Kipp, K.L.

    1983-01-01

    A refined, two-dimensional graph-plotting program was developed for use on Tektronix 4050-series graphics computers. Important features of this program include: any combination of logarithmic and linear axes, optional automatic scaling and numbering of the axes, multiple-curve plots, character or drawn symbol-point plotting, optional cartridge-tape data input and plot-format storage, optional spline fitting for smooth curves, and built-in data-editing options. The program is run while the Tektronix is not connected to any large auxiliary computer, although data from files on an auxiliary computer easily can be transferred to data-cartridge for later plotting. The user is led through the plot-construction process by a series of questions and requests for data input. Five example plots are presented to illustrate program capability and the sequence of program operation. (USGS)

  20. Topology of large-scale structure. IV - Topology in two dimensions

    NASA Technical Reports Server (NTRS)

    Melott, Adrian L.; Cohen, Alexander P.; Hamilton, Andrew J. S.; Gott, J. Richard, III; Weinberg, David H.

    1989-01-01

    In a recent series of papers, an algorithm was developed for quantitatively measuring the topology of the large-scale structure of the universe and this algorithm was applied to numerical models and to three-dimensional observational data sets. In this paper, it is shown that topological information can be derived from a two-dimensional cross section of a density field, and analytic expressions are given for a Gaussian random field. The application of a two-dimensional numerical algorithm for measuring topology to cross sections of three-dimensional models is demonstrated.

  1. Evaluation of drought using SPEI drought class transitions and log-linear models for different agro-ecological regions of India

    NASA Astrophysics Data System (ADS)

    Alam, N. M.; Sharma, G. C.; Moreira, Elsa; Jana, C.; Mishra, P. K.; Sharma, N. K.; Mandal, D.

    2017-08-01

    Markov chain and 3-dimensional log-linear models were used to model drought class transitions derived from the newly developed drought index, the Standardized Precipitation Evapotranspiration Index (SPEI), at a 12-month time scale for six major drought-prone areas of India. A log-linear modelling approach has been used to investigate drought class transitions using SPEI-12 time series derived from 48 years of monthly rainfall and temperature data. In this study, the probabilities of drought class transition, the mean residence time, the 1-, 2- or 3-month-ahead prediction of average transition time between drought classes, and the drought severity class have been derived. Seasonality of precipitation has been derived for non-homogeneous Markov chains, which could be used to explain the effect of the potential retreat of drought. Quasi-association and quasi-symmetry log-linear models have been fitted to the drought class transitions derived from the SPEI-12 time series. The estimates of the odds, along with their confidence intervals, were obtained to explain the progression of drought and the estimation of drought class transition probabilities. For the initial months, the calculated odds are lower as the drought severity increases, and the odds decrease for the succeeding months. This indicates that, as the drought severity of the present class increases, the expected frequency of transition to the non-drought class decreases relative to transition to any drought class. The 3-dimensional log-linear model makes clear that during the last 24 years the drought probability has increased for almost all six regions. The findings from the present study will help to assess the impact of drought on gross primary production and to develop future contingency planning in similar regions worldwide.
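
    The first building block of the Markov-chain part of such an analysis, the drought class transition probability matrix, can be estimated by simple counting of consecutive class labels. The class coding below is illustrative and does not reproduce the SPEI class boundaries used in the study.

```python
import numpy as np

def transition_matrix(classes, n_classes):
    """Estimate a first-order Markov transition probability matrix from a
    sequence of drought class labels (integers 0..n_classes-1, e.g. obtained
    by thresholding SPEI-12 values)."""
    counts = np.zeros((n_classes, n_classes))
    for a, b in zip(classes[:-1], classes[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

# Hypothetical monthly class sequence: 0 = non-drought, 1 = moderate, 2 = severe
seq = np.array([0, 0, 1, 1, 2, 1, 0, 0, 0, 1])
p = transition_matrix(seq, n_classes=3)
```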

  2. Constructing the reduced dynamical models of interannual climate variability from spatial-distributed time series

    NASA Astrophysics Data System (ADS)

    Mukhin, Dmitry; Gavrilov, Andrey; Loskutov, Evgeny; Feigin, Alexander

    2016-04-01

    We suggest a method for empirical forecasting of climate dynamics based on the reconstruction of reduced dynamical models in the form of random dynamical systems [1,2] derived from observational time series. The construction of a proper embedding - the set of variables determining the phase space the model works in - is no doubt the most important step in such modeling, but this task is non-trivial due to the huge dimension of the time series of typical climatic fields. An appropriate expansion of the observational time series is needed, yielding a number of principal components that are considered as phase variables and are efficient for the construction of a low-dimensional evolution operator. We emphasize two main features the reduced models should have in order to capture the main dynamical properties of the system: (i) taking into account time-lagged teleconnections in the atmosphere-ocean system, and (ii) reflecting the nonlinear nature of these teleconnections. In accordance with these principles, in this report we present a methodology which combines a new way of constructing an embedding by spatio-temporal data expansion with nonlinear model construction on the basis of artificial neural networks. The methodology is applied to NCEP/NCAR reanalysis data, including fields of sea level pressure, geopotential height, and wind speed covering the Northern Hemisphere. Its efficiency for the interannual forecast of various climate phenomena, including ENSO, PDO, NAO and strong blocking conditions over the mid-latitudes, is demonstrated. We also investigate the ability of the models to reproduce and predict the evolution of qualitative features of the dynamics, such as spectral peaks, critical transitions and statistics of extremes. This research was supported by the Government of the Russian Federation (Agreement No. 14.Z50.31.0033 with the Institute of Applied Physics RAS). [1] Y. I. Molkov, E. M. Loskutov, D. N. Mukhin, and A. M. Feigin, "Random dynamical models from time series," Phys. Rev. E, vol. 85, no. 3, p. 036216, 2012. [2] D. Mukhin, D. Kondrashov, E. Loskutov, A. Gavrilov, A. Feigin, and M. Ghil, "Predicting Critical Transitions in ENSO models. Part II: Spatially Dependent Models," J. Clim., vol. 28, no. 5, pp. 1962-1976, 2015.

  3. Spectral Eclipse Timing

    NASA Astrophysics Data System (ADS)

    Dobbs-Dixon, Ian; Agol, Eric; Deming, Drake

    2015-12-01

    We utilize multi-dimensional simulations of varying equatorial jet strength to predict wavelength-dependent variations in the eclipse times of gas-giant planets. A displaced hot spot introduces an asymmetry in the secondary eclipse light curve that manifests itself as a measured offset in the timing of the center of eclipse. A multi-wavelength observation of secondary eclipse, one probing the timing of barycentric eclipse at short wavelengths and another probing at longer wavelengths, will reveal the longitudinal displacement of the hot spot and break the degeneracy between this effect and that associated with the asymmetry due to an eccentric orbit. The effect of time offsets was first explored in the IRAC wavebands by Williams et al. Here we improve upon their methodology, extend to a broad range of wavelengths, and demonstrate our technique on a series of multi-dimensional radiative-hydrodynamical simulations of HD 209458b with varying equatorial jet strength and hot-spot displacement. Simulations with the largest hot-spot displacement result in timing offsets of up to 100 s in the infrared. Though we utilize a particular radiative hydrodynamical model to demonstrate this effect, the technique is model independent. This technique should allow a much larger survey of hot-spot displacements with the James Webb Space Telescope than currently accessible with time-intensive phase curves, hopefully shedding light on the physical mechanisms associated with thermal energy advection in irradiated gas giants.

  4. TNT Prout-Tompkins Kinetics Calibration with PSUADE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wemhoff, A P; Hsieh, H

    2007-04-11

    We used the code PSUADE to calibrate Prout-Tompkins kinetic parameters for pure recrystallized TNT. The calibration was based on ALE3D simulations of a series of One Dimensional Time to Explosion (ODTX) experiments. The resultant kinetic parameters differed from TNT data points with an average error of 28%, which is slightly higher than the value of 23% previously calculated using a two-point optimization. The methodology described here provides a basis for future calibration studies using PSUADE. The files used in the procedure are listed in the Appendix.

  5. A high time and spatial resolution MRPC designed for muon tomography

    NASA Astrophysics Data System (ADS)

    Shi, L.; Wang, Y.; Huang, X.; Wang, X.; Zhu, W.; Li, Y.; Cheng, J.

    2014-12-01

    A prototype cosmic muon scattering tomography system has been set up at Tsinghua University in Beijing. Multi-gap Resistive Plate Chambers (MRPCs) are used in the system to obtain the muon tracks. Compared with other detectors, the MRPC provides not only the track but also the Time of Flight (ToF) between two detectors, which can be used to estimate the energy of the particles. To obtain more accurate tracks and higher efficiency of the tomography system, a new type of MRPC with high time resolution and two-dimensional spatial resolution has been developed. A series of experiments has been carried out to measure the efficiency, time resolution and spatial resolution. The results show that the efficiency can reach 95%, the time resolution is around 65 ps, the cluster size is around 4, and the spatial resolution can reach 200 μm.

  6. Clean Floquet Time Crystals: Models and Realizations in Cold Atoms

    NASA Astrophysics Data System (ADS)

    Huang, Biao; Wu, Ying-Hai; Liu, W. Vincent

    2018-03-01

    Time crystals, a phase showing spontaneous breaking of time-translation symmetry, have been an intriguing subject for systems far from equilibrium. Recent experiments found such a phase in both the presence and the absence of localization, while in theories localization by disorder is usually assumed a priori. In this work, we point out that time crystals can generally exist in systems without disorder. A series of clean quasi-one-dimensional models under Floquet driving is proposed to demonstrate this unexpected result in principle. Robust time crystalline orders are found in the strongly interacting regime along with the emergent integrals of motion in the dynamical system, which can be characterized by level statistics and the out-of-time-ordered correlators. We propose two cold atom experimental schemes to realize the clean Floquet time crystals, one by making use of dipolar gases and another by synthetic dimensions.

  7. A spatiotemporal-based scheme for efficient registration-based segmentation of thoracic 4-D MRI.

    PubMed

    Yang, Y; Van Reeth, E; Poh, C L; Tan, C H; Tham, I W K

    2014-05-01

    Dynamic three-dimensional (3-D) (four-dimensional, 4-D) magnetic resonance (MR) imaging is gaining importance in the study of pulmonary motion for respiratory diseases and pulmonary tumor motion for radiotherapy. To perform quantitative analysis using 4-D MR images, segmentation of anatomical structures such as the lung and pulmonary tumor is required. Manual segmentation of entire thoracic 4-D MRI data, which typically contains many 3-D volumes acquired over several breathing cycles, is extremely tedious, time consuming, and suffers from high user variability. This requires the development of new automated segmentation schemes for 4-D MRI data. Registration-based segmentation, which uses automatic registration methods for segmentation, has been shown to be an accurate way to segment structures in 4-D data series. However, directly applying registration-based segmentation to segment 4-D MRI series lacks efficiency. Here we propose an automated 4-D registration-based segmentation scheme that is based on spatiotemporal information for the segmentation of thoracic 4-D MR lung images. The proposed scheme saved up to 95% of the computation while achieving comparably accurate segmentations relative to directly applying registration-based segmentation to the 4-D dataset. The scheme facilitates rapid 3-D/4-D visualization of the lung and tumor motion and potentially the tracking of the tumor during radiation delivery.

  8. A 2-D/1-D transverse leakage approximation based on azimuthal, Fourier moments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stimpson, Shane G.; Collins, Benjamin S.; Downar, Thomas

    Here, the MPACT code being developed collaboratively by Oak Ridge National Laboratory and the University of Michigan is the primary deterministic neutron transport solver within the Virtual Environment for Reactor Applications Core Simulator (VERA-CS). In MPACT, the two-dimensional (2-D)/one-dimensional (1-D) scheme is the most commonly used method for solving neutron transport-based three-dimensional nuclear reactor core physics problems. Several axial solvers in this scheme assume isotropic transverse leakages, but work with the axial SN solver has extended these leakages to include both polar and azimuthal dependence. However, explicit angular representation can be burdensome for run-time and memory requirements. The work here alleviates this burden by assuming that the azimuthal dependence of the angular flux and transverse leakages is represented by a Fourier series expansion. At the heart of this is a new axial SN solver that takes in a Fourier-expanded radial transverse leakage and generates the angular fluxes used to construct the axial transverse leakages used in the 2-D Method of Characteristics calculations.

  9. A 2-D/1-D transverse leakage approximation based on azimuthal, Fourier moments

    DOE PAGES

    Stimpson, Shane G.; Collins, Benjamin S.; Downar, Thomas

    2017-01-12

    Here, the MPACT code being developed collaboratively by Oak Ridge National Laboratory and the University of Michigan is the primary deterministic neutron transport solver within the Virtual Environment for Reactor Applications Core Simulator (VERA-CS). In MPACT, the two-dimensional (2-D)/one-dimensional (1-D) scheme is the most commonly used method for solving neutron transport-based three-dimensional nuclear reactor core physics problems. Several axial solvers in this scheme assume isotropic transverse leakages, but work with the axial SN solver has extended these leakages to include both polar and azimuthal dependence. However, explicit angular representation can be burdensome for run-time and memory requirements. The work here alleviates this burden by assuming that the azimuthal dependence of the angular flux and transverse leakages is represented by a Fourier series expansion. At the heart of this is a new axial SN solver that takes in a Fourier-expanded radial transverse leakage and generates the angular fluxes used to construct the axial transverse leakages used in the 2-D Method of Characteristics calculations.
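
    The central idea of these two records, representing the azimuthal dependence of the transverse leakage by a truncated Fourier series, can be illustrated by projecting an azimuthally sampled quantity onto cosine and sine moments. The sampling, truncation order and test function below are placeholders, not details of the MPACT implementation.

```python
import numpy as np

def fourier_moments(values, order):
    """Project a function sampled uniformly in azimuth (0..2*pi) onto a
    truncated Fourier basis and return the expansion coefficients.

    values : samples f(phi_k) at phi_k = 2*pi*k/N
    order  : highest azimuthal order retained
    """
    n = len(values)
    phi = 2.0 * np.pi * np.arange(n) / n
    a0 = values.mean()
    a = np.array([2.0 * np.mean(values * np.cos(m * phi)) for m in range(1, order + 1)])
    b = np.array([2.0 * np.mean(values * np.sin(m * phi)) for m in range(1, order + 1)])
    return a0, a, b

def reconstruct(a0, a, b, phi):
    """Evaluate the truncated Fourier expansion at the angles phi."""
    out = np.full_like(phi, a0)
    for m, (am, bm) in enumerate(zip(a, b), start=1):
        out += am * np.cos(m * phi) + bm * np.sin(m * phi)
    return out

# Hypothetical usage: expand a sampled azimuthal leakage shape to order 2
phi = 2.0 * np.pi * np.arange(64) / 64
leakage = 1.0 + 0.3 * np.cos(phi) + 0.1 * np.sin(2 * phi)   # placeholder shape
a0, a, b = fourier_moments(leakage, order=2)
approx = reconstruct(a0, a, b, phi)
```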

  10. A simple method for in vivo measurement of implant rod three-dimensional geometry during scoliosis surgery.

    PubMed

    Salmingo, Remel A; Tadano, Shigeru; Fujisaki, Kazuhiro; Abe, Yuichiro; Ito, Manabu

    2012-05-01

    Scoliosis is defined as a spinal pathology characterized by a three-dimensional deformity of the spine combined with vertebral rotation. Treatment for severe scoliosis is achieved when the scoliotic spine is surgically corrected and fixed using implanted rods and screws. Several studies have performed biomechanical modeling and corrective force measurements of scoliosis correction. These studies were able to predict the clinical outcome and measure the corrective forces acting on the screws; however, they were not able to measure the intraoperative three-dimensional geometry of the spinal rod. As a result, the biomechanical models might not be realistic, and the corrective forces during the surgical correction procedure were difficult to measure intra-operatively. Projective geometry has been shown to be successful in the reconstruction of a three-dimensional structure using a series of images obtained from different views. In this study, we propose a new method to measure the three-dimensional geometry of an implant rod using two cameras. The reconstruction method requires only a few parameters: the included angle θ between the two cameras, the actual length of the rod in mm, and the location of points for curve fitting. The implant rod utilized in spine surgery was used to evaluate the accuracy of the current method. The three-dimensional geometry of the rod was measured from the image obtained by a scanner and compared to the proposed method using two cameras. The mean error in the reconstruction measurements ranged from 0.32 to 0.45 mm. The method presented here demonstrates the possibility of intra-operatively measuring the three-dimensional geometry of a spinal rod. The proposed method could be used in surgical procedures to better understand the biomechanics of scoliosis correction through real-time measurement of three-dimensional implant rod geometry in vivo.

  11. Two-Dimensional Air-Flow Tests of the Effect of ITA Flowliner Slot Modification by Grinding/Polishing on Edge Tone Generation Potential

    NASA Technical Reports Server (NTRS)

    Sutliff, Daniel L. (Technical Monitor); Walker, Bruce E.

    2004-01-01

    Hersh Walker Acoustics (HWA) has performed a series of wind tunnel tests to support crack-repair studies for ITA flowliner vent slots. The overall goal of these tests is to determine if slot shape details have a significant influence on the propensity of the flowliner to produce aero-acoustic oscillations that could increase unsteady stresses on the flowliner walls. The test series, conducted using a full-scale two-dimensional model of a six-slot segment of the 38 slot liner, was intended to investigate the effects of altering slot shape by grinding away cracked portions.

  12. Towards deconstruction of the Type D (2,0) theory

    NASA Astrophysics Data System (ADS)

    Bourget, Antoine; Pini, Alessandro; Rodriguez-Gomez, Diego

    2017-12-01

    We propose a four-dimensional supersymmetric theory that deconstructs, in a particular limit, the six-dimensional (2, 0) theory of type Dk. This 4d theory is defined by a necklace quiver with alternating gauge nodes O(2k) and Sp(k). We test this proposal by comparing the 6d half-BPS index to the Higgs branch Hilbert series of the 4d theory. In the process, we overcome several technical difficulties, such as Hilbert series calculations for non-complete intersections, and the choice of O versus SO gauge groups. Consistently, the result matches the Coulomb branch formula for the mirror theory upon reduction to 3d.

  13. PROTEUS two-dimensional Navier-Stokes computer code, version 1.0. Volume 1: Analysis description

    NASA Technical Reports Server (NTRS)

    Towne, Charles E.; Schwab, John R.; Benson, Thomas J.; Suresh, Ambady

    1990-01-01

    A new computer code was developed to solve the two-dimensional or axisymmetric, Reynolds averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The thin-layer or Euler equations may also be solved. Turbulence is modeled using an algebraic eddy viscosity model. The objective was to develop a code for aerospace applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The equations are written in nonorthogonal body-fitted coordinates, and solved by marching in time using a fully-coupled alternating direction-implicit procedure with generalized first- or second-order time differencing. All terms are linearized using second-order Taylor series. The boundary conditions are treated implicitly, and may be steady, unsteady, or spatially periodic. Simple Cartesian or polar grids may be generated internally by the program. More complex geometries require an externally generated computational coordinate system. The documentation is divided into three volumes. Volume 1 is the Analysis Description, and describes in detail the governing equations, the turbulence model, the linearization of the equations and boundary conditions, the time and space differencing formulas, the ADI solution procedure, and the artificial viscosity models.

  14. Impact of reduced near-field entrainment of overpressured volcanic jets on plume development

    USGS Publications Warehouse

    Saffaraval, Farhad; Solovitz, Stephen A.; Ogden, Darcy E.; Mastin, Larry G.

    2012-01-01

    Volcanic plumes are often studied using one-dimensional analytical models, which use an empirical entrainment ratio to close the equations. Although this ratio is typically treated as constant, its value near the vent is significantly reduced due to flow development and overpressured conditions. To improve the accuracy of these models, a series of experiments was performed using particle image velocimetry, a high-accuracy, full-field velocity measurement technique. Experiments considered a high-speed jet with Reynolds numbers up to 467,000 and exit pressures up to 2.93 times atmospheric. Exit gas densities were also varied from 0.18 to 1.4 times that of air. The measured velocity was integrated to determine entrainment directly. For jets with exit pressures near atmospheric, entrainment was approximately 30% less than the fully developed level at 20 diameters from the exit. At pressures nearly three times that of the atmosphere, entrainment was 60% less. These results were introduced into Plumeria, a one-dimensional plume model, to examine the impact of reduced entrainment. The maximum column height was only slightly modified, but the critical radius for collapse was significantly reduced, decreasing by nearly a factor of two at moderate eruptive pressures.

  15. FAILURE OF A NEUTRINO-DRIVEN EXPLOSION AFTER CORE-COLLAPSE MAY LEAD TO A THERMONUCLEAR SUPERNOVA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kushnir, Doron; Katz, Boaz, E-mail: kushnir@ias.edu

    We demonstrate that ∼10 s after the core-collapse of a massive star, a thermonuclear explosion of the outer shells is possible for some (tuned) initial density and composition profiles, assuming that the neutrinos failed to explode the star. The explosion may lead to a successful supernova, as first suggested by Burbidge et al. We perform a series of one-dimensional (1D) calculations of collapsing massive stars with simplified initial density profiles (similar to the results of stellar evolution calculations) and various compositions (not similar to 1D stellar evolution calculations). We assume that the neutrinos escaped with a negligible effect on the outer layers, which inevitably collapse. As the shells collapse, they compress and heat up adiabatically, enhancing the rate of thermonuclear burning. In some cases, where significant shells of mixed helium and oxygen are present with pre-collapse burning times of ≲100 s (≈10 times the free-fall time), a thermonuclear detonation wave is ignited, which unbinds the outer layers of the star, leading to a supernova. The energy released is small, ≲10^50 erg, and negligible amounts of synthesized material (including ^56Ni) are ejected, implying that these 1D simulations are unlikely to represent typical core-collapse supernovae. However, they do serve as a proof of concept that core-collapse-induced thermonuclear explosions are possible, and more realistic two-dimensional and three-dimensional simulations are within current computational capabilities.

  16. An analytical formulation of two‐dimensional groundwater dispersion induced by surficial recharge variability

    USGS Publications Warehouse

    Swain, Eric D.; Chin, David A.

    2003-01-01

    A predominant cause of dispersion in groundwater is advective mixing due to variability in seepage rates. Hydraulic conductivity variations have been extensively researched as a cause of this seepage variability. In this paper the effect of variations in surface recharge to a shallow surficial aquifer is investigated as an important additional effect. An analytical formulation has been developed that relates aquifer parameters and the statistics of recharge variability to increases in the dispersivity. This is accomplished by solving Fourier transforms of the small perturbation forms of the groundwater flow equations. Two field studies are presented in this paper to determine the statistics of recharge variability for input to the analytical formulation. A time series of water levels at a continuous groundwater recorder is used to investigate the temporal statistics of hydraulic head caused by recharge, and a series of infiltrometer measurements are used to define the spatial variability in the recharge parameters. With these field statistics representing head fluctuations due to recharge, the analytical formulation can be used to compute the dispersivity without an explicit representation of the recharge boundary. Results from a series of numerical experiments are used to define the limits of this analytical formulation and to provide some comparison. A sophisticated model has been developed using a particle‐tracking algorithm (modified to account for temporal variations) to estimate groundwater dispersion. Dispersivity increases of 9 percent are indicated by the analytical formulation for the aquifer at the field site. A comparison with numerical model results indicates that the analytical results are reasonable for shallow surficial aquifers in which two‐dimensional flow can be assumed.

  17. Liver DCE-MRI Registration in Manifold Space Based on Robust Principal Component Analysis.

    PubMed

    Feng, Qianjin; Zhou, Yujia; Li, Xueli; Mei, Yingjie; Lu, Zhentai; Zhang, Yu; Feng, Yanqiu; Liu, Yaqin; Yang, Wei; Chen, Wufan

    2016-09-29

    A technical challenge in the registration of dynamic contrast-enhanced magnetic resonance (DCE-MR) imaging in the liver is intensity variations caused by contrast agents. Such variations lead to the failure of the traditional intensity-based registration method. To address this problem, a manifold-based registration framework for liver DCE-MR time series is proposed. We assume that liver DCE-MR time series are located on a low-dimensional manifold and determine intrinsic similarities between frames. Based on the obtained manifold, the large deformation of two dissimilar images can be decomposed into a series of small deformations between adjacent images on the manifold through gradual deformation of each frame to the template image along the geodesic path. Furthermore, manifold construction is important in automating the selection of the template image, which is an approximation of the geodesic mean. Robust principal component analysis is performed to separate motion components from intensity changes induced by contrast agents; the components caused by motion are used to guide registration in eliminating the effect of contrast enhancement. Visual inspection and quantitative assessment are further performed on clinical dataset registration. Experiments show that the proposed method effectively reduces movements while preserving the topology of contrast-enhancing structures and provides improved registration performance.

  18. Functional Connectivity among Spikes in Low Dimensional Space during Working Memory Task in Rat

    PubMed Central

    Tian, Xin

    2014-01-01

    Working memory (WM) is critically important in cognitive tasks. Functional connectivity has been a powerful tool for understanding the mechanisms underlying information processing during WM tasks. The aim of this study is to investigate how to effectively characterize the dynamic variations of the functional connectivity in a low dimensional space, namely among the principal components (PCs) extracted from the instantaneous firing rate series. Spikes were obtained from the medial prefrontal cortex (mPFC) of rats with an implanted microelectrode array and then transformed into continuous series via the instantaneous firing rate method. The Granger causality method was used to study the functional connectivity. Three scalar metrics were then applied to identify the changes of the reduced-dimensionality functional network during working memory tasks: functional connectivity (GC), global efficiency (E) and causal density (CD). As a comparison, GC, E and CD were also calculated to describe the functional connectivity in the original space. The results showed that these network characteristics changed dynamically during correct WM tasks. The measure values increased to a maximum and then decreased, both in the original space and in the reduced-dimensionality space. Besides, the feature values in the reduced-dimensionality space were significantly higher during the WM tasks than those in the original space. These findings suggest that the functional connectivity among the spikes varied dynamically during the WM tasks and could be described effectively in the low dimensional space. PMID:24658291

  19. Application of State-Space Smoothing to fMRI Data for Calculation of Lagged Transinformation between Human Brain Activations

    NASA Astrophysics Data System (ADS)

    Watanabe, Jobu

    2009-09-01

    Mutual information can be given a directional sense by introducing a time lag in one of the variables. In the author's previous study, to investigate the network dynamics of human brain regions, lagged transinformation (LTI) was introduced using time-delayed mutual information. The LTI makes it possible to quantify the time course of dynamic information transfer between regions in the temporal domain. The LTI was applied to functional magnetic resonance imaging (fMRI) data involved in the neural processing of the transformation and comparison from three-dimensional (3D) visual information to a two-dimensional (2D) location, to calculate directed information flows between the activated brain regions. In the present study, for a more precise estimation of the LTI, Kalman filter smoothing was applied to the same fMRI data. Because the smoothing method exploits the full length of the time series data for the estimation, its application increases the precision. Large information flows were found from the bilateral prefrontal cortices to the parietal cortices. The results suggest that information on the 3D images stored as working memory was retrieved and transferred from the prefrontal cortices to the parietal cortices for comparison with information on the 2D images.

  20. Symbolic dynamics techniques for complex systems: Application to share price dynamics

    NASA Astrophysics Data System (ADS)

    Xu, Dan; Beck, Christian

    2017-05-01

    The symbolic dynamics technique is well known for low-dimensional dynamical systems and chaotic maps, and lies at the roots of the thermodynamic formalism of dynamical systems. Here we show that this technique can also be successfully applied to time series generated by complex systems of much higher dimensionality. Our main example is the investigation of share price returns in a coarse-grained way. A nontrivial spectrum of Rényi entropies is found. We study how the spectrum depends on the time scale of returns, the sector of stocks considered, as well as the number of symbols used for the symbolic description. Overall our analysis confirms that in the symbol space transition probabilities of observed share price returns depend on the entire history of previous symbols, thus emphasizing the need for a modelling based on non-Markovian stochastic processes. Our method allows for quantitative comparisons of entirely different complex systems, for example the statistics of symbol sequences generated by share price returns using 4 symbols can be compared with that of genomic sequences.
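
    The coarse-grained symbolic analysis described above can be illustrated with a short sketch: returns are mapped onto a small alphabet (here 4 symbols by quartile) and Rényi block entropies of order q are estimated from empirical block frequencies. The block length, alphabet size and synthetic Gaussian "returns" are arbitrary choices for illustration, not the paper's settings.

```python
import numpy as np
from collections import Counter

def renyi_block_entropy(symbols, q, block_len):
    """Renyi entropy of order q estimated from empirical block frequencies."""
    blocks = [tuple(symbols[i:i + block_len])
              for i in range(len(symbols) - block_len + 1)]
    p = np.array(list(Counter(blocks).values()), dtype=float)
    p /= p.sum()
    if np.isclose(q, 1.0):
        return float(-np.sum(p * np.log(p)))          # Shannon limit
    return float(np.log(np.sum(p**q)) / (1.0 - q))

# Encode synthetic coarse-grained "returns" into 4 symbols by quartile.
rng = np.random.default_rng(0)
returns = rng.standard_normal(10_000)
edges = np.quantile(returns, [0.25, 0.5, 0.75])
symbols = np.digitize(returns, edges)                 # symbols in {0, 1, 2, 3}
print([round(renyi_block_entropy(symbols, q, block_len=3), 3)
       for q in (0.5, 1.0, 2.0)])
```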

  1. A modified sparse reconstruction method for three-dimensional synthetic aperture radar image

    NASA Astrophysics Data System (ADS)

    Zhang, Ziqiang; Ji, Kefeng; Song, Haibo; Zou, Huanxin

    2018-03-01

    There is increasing interest in three-dimensional Synthetic Aperture Radar (3-D SAR) imaging from observed sparse scattering data. However, the existing 3-D sparse imaging method requires large computing time and storage capacity. In this paper, we propose a modified method for sparse 3-D SAR imaging. The method processes the collection of noisy SAR measurements, usually collected over nonlinear flight paths, and outputs 3-D SAR imagery. Firstly, the 3-D sparse reconstruction problem is transformed into a series of 2-D slice reconstruction problems by range compression. Then the slices are reconstructed by a modified SL0 (smoothed l0 norm) reconstruction algorithm. The improved algorithm uses a hyperbolic tangent function instead of the Gaussian function to approximate the l0 norm, and uses the Newton direction instead of the steepest descent direction, which speeds up the convergence of the SL0 algorithm. Finally, numerical simulation results are given to demonstrate the effectiveness of the proposed algorithm. It is shown that our method, compared with the existing 3-D sparse imaging method, performs better in both reconstruction quality and reconstruction time.
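
    The following sketch illustrates the smoothed-l0 idea with a hyperbolic-tangent surrogate of the l0 norm, as mentioned in the abstract. It is a simplified stand-in rather than the authors' algorithm: plain gradient steps are used instead of the Newton direction, and a generic random sensing matrix replaces the SAR range-compressed measurement model.

```python
import numpy as np

def sl0_tanh(A, y, sigma_min=1e-3, sigma_decrease=0.7, L=5, mu=1.0):
    """Smoothed-l0 sparse recovery with a tanh surrogate of the l0 norm.

    ||x||_0 is approximated by sum_i tanh(x_i^2 / (2 sigma^2)); small
    components are shrunk by gradient steps while feasibility (A x = y)
    is restored by projection, and sigma is gradually decreased.
    """
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                        # minimum-energy feasible start
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(L):
            # gradient of sum_i tanh(x_i^2 / (2 sigma^2)), overflow-safe sech^2
            u = x**2 / (2.0 * sigma**2)
            sech2 = (2.0 * np.exp(-u) / (1.0 + np.exp(-2.0 * u)))**2
            g = sech2 * x / sigma**2
            x = x - mu * sigma**2 * g     # push small components toward zero
            x = x - A_pinv @ (A @ x - y)  # project back onto {x : A x = y}
        sigma *= sigma_decrease
    return x

# Toy test: recover a 5-sparse vector from 40 random measurements.
rng = np.random.default_rng(0)
n, m, k = 100, 40, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
x_hat = sl0_tanh(A, y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```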

  2. Dynamical signature of localization-delocalization transition in a one-dimensional incommensurate lattice

    NASA Astrophysics Data System (ADS)

    Yang, Chao; Wang, Yucheng; Wang, Pei; Gao, Xianlong; Chen, Shu

    2017-05-01

    We investigate the quench dynamics of a one-dimensional incommensurate lattice described by the Aubry-André model under a sudden change of the strength of the incommensurate potential Δ, and unveil that the dynamical signature of the localization-delocalization transition can be characterized by the occurrence of zero points in the Loschmidt echo. For a quench between the two limits Δ = 0 and Δ = ∞, we give analytical expressions for the Loschmidt echo, which indicate the existence of a series of zero points. For a general quench process, we calculate the Loschmidt echo numerically and analyze its statistical behavior. Our results show that if both the initial and post-quench Hamiltonians are in the extended phase, or both are in the localized phase, the Loschmidt echo always remains above a positive bound; however, if they lie in different phases, the Loschmidt echo can come close to zero at certain time intervals.
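
    A minimal numerical sketch of the quantity discussed above: the single-particle Loschmidt echo for a quench of the Aubry-André potential strength Δ across the localization transition (at Δ = 2J). The chain length, quench values and open boundary conditions are illustrative choices, not those of the paper.

```python
import numpy as np

def aubry_andre(N, delta, J=1.0, beta=(np.sqrt(5) - 1) / 2, phi=0.0):
    """Single-particle Aubry-Andre Hamiltonian on an open chain of N sites."""
    H = np.zeros((N, N))
    for i in range(N - 1):
        H[i, i + 1] = H[i + 1, i] = -J
    for i in range(N):
        H[i, i] = delta * np.cos(2 * np.pi * beta * i + phi)
    return H

N = 144
H_i = aubry_andre(N, delta=0.5)   # pre-quench: extended phase (delta < 2J)
H_f = aubry_andre(N, delta=3.0)   # post-quench: localized phase (delta > 2J)

# Ground state of the pre-quench Hamiltonian
_, v0 = np.linalg.eigh(H_i)
psi0 = v0[:, 0]

# Loschmidt echo L(t) = |<psi0| exp(-i H_f t) |psi0>|^2
wf, vf = np.linalg.eigh(H_f)
c = vf.T @ psi0                   # overlaps with the post-quench eigenstates
times = np.linspace(0.0, 50.0, 500)
amp = (c**2 * np.exp(-1j * np.outer(times, wf))).sum(axis=1)
echo = np.abs(amp)**2
print(echo.min())   # values close to zero are expected for a cross-phase quench
```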

  3. Lagrangian coherent structures at the onset of hyperchaos in the two-dimensional Navier-Stokes equations.

    PubMed

    Miranda, Rodrigo A; Rempel, Erico L; Chian, Abraham C-L; Seehafer, Norbert; Toledo, Benjamin A; Muñoz, Pablo R

    2013-09-01

    We study a transition to hyperchaos in the two-dimensional incompressible Navier-Stokes equations with periodic boundary conditions and an external forcing term. Bifurcation diagrams are constructed by varying the Reynolds number, and a transition to hyperchaos (HC) is identified. Before the onset of HC, there is coexistence of two chaotic attractors and a hyperchaotic saddle. After the transition to HC, the two chaotic attractors merge with the hyperchaotic saddle, generating random switching between chaos and hyperchaos, which is responsible for intermittent bursts in the time series of energy and enstrophy. The chaotic mixing properties of the flow are characterized by detecting Lagrangian coherent structures. After the transition to HC, the flow displays complex Lagrangian patterns and an increase in the level of Lagrangian chaoticity during the bursty periods that can be predicted statistically by the hyperchaotic saddle prior to HC transition.

  4. A computer program to trace seismic ray distribution in complex two-dimensional geological models

    USGS Publications Warehouse

    Yacoub, Nazieh K.; Scott, James H.

    1970-01-01

    A computer program has been developed to trace seismic rays and their amplitudes and energies through complex two-dimensional geological models, for which boundaries between elastic units are defined by a series of digitized X-, Y-coordinate values. Input data for the program include problem identification, control parameters, model coordinates and elastic parameters for the elastic units. The program evaluates the partitioning of ray amplitude and energy at elastic boundaries, and computes the total travel time, total travel distance and other parameters for rays arriving at the earth's surface. Instructions are given for punching program control cards and data cards, and for arranging input card decks. An example of printer output for a simple problem is presented. The program is written in the FORTRAN IV language. The listing of the program is shown in the Appendix, with an example output from a CDC-6600 computer.

  5. Direct determination approach for the multifractal detrending moving average analysis

    NASA Astrophysics Data System (ADS)

    Xu, Hai-Chuan; Gu, Gao-Feng; Zhou, Wei-Xing

    2017-11-01

    In the canonical framework, we propose an alternative approach for multifractal analysis based on the detrending moving average method (MF-DMA). We define a canonical measure such that the multifractal mass exponent τ(q) is related to the partition function and the multifractal spectrum f(α) can be directly determined. The performances of the direct determination approach and the traditional approach of the MF-DMA are compared on three synthetic multifractal and monofractal measures generated from the one-dimensional p-model, the two-dimensional p-model, and fractional Brownian motions. We find that both approaches have comparable performance in unveiling the fractal and multifractal nature. In other words, without loss of accuracy, the multifractal spectrum f(α) can be directly determined using the new approach with less computational cost. We also apply the new MF-DMA approach to the volatility time series of stock prices and confirm the presence of multifractality.
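
    For reference, the sketch below implements the traditional (backward, θ = 0) MF-DMA fluctuation function F_q(n) and extracts generalized Hurst exponents h(q) by a log-log fit; the canonical direct determination of τ(q) and f(α) proposed in the paper is not reproduced. Scales, orders q and the white-noise test signal are arbitrary.

```python
import numpy as np

def mf_dma(x, scales, q_list):
    """Backward (theta = 0) MF-DMA fluctuation functions F_q(n).

    This is the traditional estimator; the canonical direct determination of
    tau(q) and f(alpha) described in the paper is not implemented here.
    """
    y = np.cumsum(x - np.mean(x))               # profile of the series
    out = np.zeros((len(q_list), len(scales)))
    for j, n in enumerate(scales):
        kernel = np.ones(n) / n
        ma = np.convolve(y, kernel, mode="full")[:len(y)]  # backward moving average
        resid = (y - ma)[n - 1:]                # detrended residual series
        n_seg = len(resid) // n
        seg = resid[:n_seg * n].reshape(n_seg, n)
        f2 = np.mean(seg**2, axis=1)            # variance in each segment
        for i, q in enumerate(q_list):
            if abs(q) < 1e-9:
                out[i, j] = np.exp(0.5 * np.mean(np.log(f2)))
            else:
                out[i, j] = np.mean(f2 ** (q / 2.0)) ** (1.0 / q)
    return out

# Monofractal check: white noise should give h(q) roughly 0.5 for every q.
rng = np.random.default_rng(1)
x = rng.standard_normal(2**14)
scales = np.unique(np.logspace(1, 3, 12).astype(int))
q_list = [-4, -2, 2, 4]
Fq = mf_dma(x, scales, q_list)
h = [np.polyfit(np.log(scales), np.log(Fq[i]), 1)[0] for i in range(len(q_list))]
print(np.round(h, 2))   # generalized Hurst exponents; tau(q) = q h(q) - 1
```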

  6. Simulation of two-dimensional turbulent flows in a rotating annulus

    NASA Astrophysics Data System (ADS)

    Storey, Brian D.

    2004-05-01

    Rotating water tank experiments have been used to study fundamental processes of atmospheric and geophysical turbulence in a controlled laboratory setting. When these tanks are undergoing strong rotation the forced turbulent flow becomes highly two dimensional along the axis of rotation. An efficient numerical method has been developed for simulating the forced quasi-geostrophic equations in an annular geometry to model current laboratory experiments. The algorithm employs a spectral method with Fourier series and Chebyshev polynomials as basis functions. The algorithm has been implemented on a parallel architecture to allow modelling of a wide range of spatial scales over long integration times. This paper describes the derivation of the model equations, numerical method, testing and performance of the algorithm. Results provide reasonable agreement with the experimental data, indicating that such computations can be used as a predictive tool to design future experiments.

  7. MINIMAL ENDOILLUMINATION LEVELS AND DISPLAY LUMINOUS EMITTANCE DURING THREE-DIMENSIONAL HEADS-UP VITREORETINAL SURGERY.

    PubMed

    Adam, Murtaza K; Thornton, Sarah; Regillo, Carl D; Park, Carl; Ho, Allen C; Hsu, Jason

    2017-09-01

    To determine minimal endoillumination levels required to perform 3-dimensional heads-up vitreoretinal surgery and to correlate endoillumination levels used for measurements of heads-up display (HUD) luminous emittance. Prospective, observational surgical case series of 10 patients undergoing vitreoretinal surgery. Endoillumination levels were set to 40% of maximum output and were decreased at set intervals until the illumination level was 0%. Corresponding luminous emittance (lux) of the HUD was measured 40 cm from the display using a luxmeter (Dr. Meter, Model #LX1010BS). In 9 of 10 cases, the surgeon felt that they could operate comfortably at an endoillumination level of 10% of maximum output with corresponding HUD emittance of 14.3 ± 9.5 lux. In the remaining case, the surgeon felt comfortable at a 3% endoillumination level with corresponding HUD emittance of 15 lux. Below this threshold, subjective image dimness and digital noise limited visibility. Endoillumination levels were correlated with luminous emittance from the 3-dimensional HUD (P < 0.01). The average coefficient of variation of HUD luminance was 0.546. There were no intraoperative complications. With real-time digital processing and automated brightness control, 3-dimensional HUD platforms may allow for reduced intraoperative endoillumination levels and a theoretically reduced risk of retinal phototoxicity during vitreoretinal surgery.

  8. Directional reversals enable Myxococcus xanthus cells to produce collective one-dimensional streams during fruiting-body formation

    PubMed Central

    Thutupalli, Shashi; Sun, Mingzhai; Bunyak, Filiz; Palaniappan, Kannappan; Shaevitz, Joshua W.

    2015-01-01

    The formation of a collectively moving group benefits individuals within a population in a variety of ways. The surface-dwelling bacterium Myxococcus xanthus forms dynamic collective groups both to feed on prey and to aggregate during times of starvation. The latter behaviour, termed fruiting-body formation, involves a complex, coordinated series of density changes that ultimately lead to three-dimensional aggregates comprising hundreds of thousands of cells and spores. How a loose, two-dimensional sheet of motile cells produces a fixed aggregate has remained a mystery as current models of aggregation are either inconsistent with experimental data or ultimately predict unstable structures that do not remain fixed in space. Here, we use high-resolution microscopy and computer vision software to spatio-temporally track the motion of thousands of individuals during the initial stages of fruiting-body formation. We find that cells undergo a phase transition from exploratory flocking, in which unstable cell groups move rapidly and coherently over long distances, to a reversal-mediated localization into one-dimensional growing streams that are inherently stable in space. These observations identify a new phase of active collective behaviour and answer a long-standing open question in Myxococcus development by describing how motile cell groups can remain statistically fixed in a spatial location. PMID:26246416

  9. Computer-generated 3D ultrasound images of the carotid artery

    NASA Technical Reports Server (NTRS)

    Selzer, Robert H.; Lee, Paul L.; Lai, June Y.; Frieden, Howard J.; Blankenhorn, David H.

    1989-01-01

    A method is under development to measure carotid artery lesions from a computer-generated three-dimensional ultrasound image. For each image, the position of the transducer in six coordinates (x, y, z, azimuth, elevation, and roll) is recorded and used to position each B-mode picture element in its proper spatial position in a three-dimensional memory array. After all B-mode images have been assembled in the memory, the three-dimensional image is filtered and resampled to produce a new series of parallel-plane two-dimensional images from which arterial boundaries are determined using edge tracking methods.
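
    A toy sketch of the assembly step described above: each pixel of a tracked B-mode frame is mapped through the recorded six-coordinate transducer pose into a 3D voxel array. The rotation convention, pixel and voxel sizes, and the synthetic frames are assumptions for illustration; the filtering, resampling and edge-tracking stages are omitted.

```python
import numpy as np

def pose_matrix(x, y, z, azimuth, elevation, roll):
    """4x4 transform built from the six recorded transducer coordinates.

    The rotation convention (Rz @ Ry @ Rx) is an assumption for illustration.
    """
    ca, sa = np.cos(azimuth), np.sin(azimuth)
    ce, se = np.cos(elevation), np.sin(elevation)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[ca, -sa, 0], [sa, ca, 0], [0, 0, 1]])
    Ry = np.array([[ce, 0, se], [0, 1, 0], [-se, 0, ce]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [x, y, z]
    return T

def insert_frame(volume, frame, T, pixel_mm, voxel_mm):
    """Place every pixel of one B-mode frame into a 3D voxel array."""
    h, w = frame.shape
    v, u = np.mgrid[0:h, 0:w]
    # pixel coordinates in the transducer plane (z = 0), in mm
    pts = np.stack([u.ravel() * pixel_mm, v.ravel() * pixel_mm,
                    np.zeros(h * w), np.ones(h * w)])
    world = (T @ pts)[:3] / voxel_mm              # world position -> voxel index
    idx = np.round(world).astype(int)
    ok = np.all((idx >= 0) & (idx < np.array(volume.shape)[:, None]), axis=0)
    volume[idx[0, ok], idx[1, ok], idx[2, ok]] = frame.ravel()[ok]

# Hypothetical use: fill a 128^3 voxel grid from a stack of tracked frames.
vol = np.zeros((128, 128, 128), dtype=np.float32)
rng = np.random.default_rng(0)
for k in range(50):
    frame = rng.random((64, 64)).astype(np.float32)
    T = pose_matrix(5.0, 5.0, 1.0 * k, 0.0, 0.0, 0.02 * k)
    insert_frame(vol, frame, T, pixel_mm=0.5, voxel_mm=0.5)
print(vol.max())
```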

  10. Computer-generated 3D ultrasound images of the carotid artery

    NASA Astrophysics Data System (ADS)

    Selzer, Robert H.; Lee, Paul L.; Lai, June Y.; Frieden, Howard J.; Blankenhorn, David H.

    A method is under development to measure carotid artery lesions from a computer-generated three-dimensional ultrasound image. For each image, the position of the transducer in six coordinates (x, y, z, azimuth, elevation, and roll) is recorded and used to position each B-mode picture element in its proper spatial position in a three-dimensional memory array. After all B-mode images have been assembled in the memory, the three-dimensional image is filtered and resampled to produce a new series of parallel-plane two-dimensional images from which arterial boundaries are determined using edge tracking methods.

  11. Concrete thawing studied by single-point ramped imaging.

    PubMed

    Prado, P J; Balcom, B J; Beyea, S D; Armstrong, R L; Bremner, T W

    1997-12-01

    A series of two-dimensional images of the proton distribution in a hardened concrete sample has been obtained during the thawing process (from -50 degrees C up to 11 degrees C). The SPRITE sequence is optimal for this study given the characteristically short relaxation times of water in this porous medium (T2* < 200 μs and T1 < 3.6 ms). The relaxation parameters of the sample were determined in order to optimize the time efficiency of the sequence, permitting a 4-scan 64 x 64 acquisition in under 3 min. The image acquisition is fast on the time scale of the temperature evolution of the specimen. The frozen water distribution is quantified through a position-based study of the image contrast. A multiple-point acquisition method is presented and the signal sensitivity improvement is discussed.

  12. A wrinkle in time: asymmetric valuation of past and future events.

    PubMed

    Caruso, Eugene M; Gilbert, Daniel T; Wilson, Timothy D

    2008-08-01

    A series of studies shows that people value future events more than equivalent events in the equidistant past. Whether people imagined being compensated or compensating others, they required and offered more compensation for events that would take place in the future than for identical events that had taken place in the past. This temporal value asymmetry (TVA) was robust in between-persons comparisons and absent in within-persons comparisons, which suggests that participants considered the TVA irrational. Contemplating future events produced greater affect than did contemplating past events, and this difference mediated the TVA. We suggest that the TVA, the gain-loss asymmetry, and hyperbolic time discounting can be unified in a three-dimensional value function that describes how people value gains and losses of different magnitudes at different moments in time.

  13. Correction of 3D rigid body motion in fMRI time series by independent estimation of rotational and translational effects in k-space.

    PubMed

    Costagli, Mauro; Waggoner, R Allen; Ueno, Kenichi; Tanaka, Keiji; Cheng, Kang

    2009-04-15

    In functional magnetic resonance imaging (fMRI), even subvoxel motion dramatically corrupts the blood oxygenation level-dependent (BOLD) signal, invalidating the assumption that intensity variation in time is primarily due to neuronal activity. Thus, correction of the subject's head movements is a fundamental step to be performed prior to data analysis. Most motion correction techniques register a series of volumes assuming that rigid body motion, characterized by rotational and translational parameters, occurs. Unlike the most widely used applications for fMRI data processing, which correct motion in the image domain by numerically estimating rotational and translational components simultaneously, the algorithm presented here operates in a three-dimensional k-space, to decouple and correct rotations and translations independently, offering new ways and more flexible procedures to estimate the parameters of interest. We developed an implementation of this method in MATLAB, and tested it on both simulated and experimental data. Its performance was quantified in terms of square differences and center of mass stability across time. Our data show that the algorithm proposed here successfully corrects for rigid-body motion, and its employment in future fMRI studies is feasible and promising.
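
    The decoupling the algorithm exploits rests on a basic Fourier property: a rigid translation of the image corresponds to a linear phase in k-space, whereas a rotation rotates the k-space sample positions themselves. The 2D sketch below (the method itself works in 3D k-space) shows the translation part as a phase multiplication; it is illustrative only and not the authors' implementation.

```python
import numpy as np

def translate_kspace(kspace, shift, shape):
    """Shift the underlying image by `shift` pixels via a linear k-space phase."""
    ny, nx = shape
    ky = np.fft.fftfreq(ny)[:, None]
    kx = np.fft.fftfreq(nx)[None, :]
    phase = np.exp(-2j * np.pi * (ky * shift[0] + kx * shift[1]))
    return kspace * phase

# Round trip on a toy 2D "image": shift by (3, -5) pixels, then undo it.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
k = np.fft.fft2(img)
k_shifted = translate_kspace(k, (3.0, -5.0), img.shape)
k_restored = translate_kspace(k_shifted, (-3.0, 5.0), img.shape)
print(np.allclose(np.fft.ifft2(k_restored).real, img))   # True
```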

  14. Semiparametric modeling: Correcting low-dimensional model error in parametric models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, Tyrus, E-mail: thb11@psu.edu; Harlim, John, E-mail: jharlim@psu.edu; Department of Meteorology, the Pennsylvania State University, 503 Walker Building, University Park, PA 16802-5013

    2016-03-01

    In this paper, a semiparametric modeling approach is introduced as a paradigm for addressing model error arising from unresolved physical phenomena. Our approach compensates for model error by learning an auxiliary dynamical model for the unknown parameters. Practically, the proposed approach consists of the following steps. Given a physics-based model and a noisy data set of historical observations, a Bayesian filtering algorithm is used to extract a time series of the parameter values. Subsequently, the diffusion forecast algorithm is applied to the retrieved time series in order to construct the auxiliary model for the time-evolving parameters. The semiparametric forecasting algorithm consists of integrating the existing physics-based model with an ensemble of parameters sampled from the probability density function of the diffusion forecast. To specify initial conditions for the diffusion forecast, a Bayesian semiparametric filtering method that extends the Kalman-based filtering framework is introduced. In difficult test examples, which introduce chaotically and stochastically evolving hidden parameters into the Lorenz-96 model, we show that our approach can effectively compensate for model error, with forecasting skill comparable to that of the perfect model.

  15. Hybrid Dion-Jacobson 2D Lead Iodide Perovskites.

    PubMed

    Mao, Lingling; Ke, Weijun; Pedesseau, Laurent; Wu, Yilei; Katan, Claudine; Even, Jacky; Wasielewski, Michael R; Stoumpos, Constantinos C; Kanatzidis, Mercouri G

    2018-03-14

    The three-dimensional hybrid organic-inorganic perovskites have shown huge potential for use in solar cells and other optoelectronic devices. Although these materials are under intense investigation, derivative materials with lower dimensionality are emerging, offering higher tunability of physical properties and new capabilities. Here, we present two new series of hybrid two-dimensional (2D) perovskites that adopt the Dion-Jacobson (DJ) structure type, which are the first complete homologous series reported in halide perovskite chemistry. Lead iodide DJ perovskites adopt the general formula A'A_{n-1}Pb_nI_{3n+1} (A' = 3-(aminomethyl)piperidinium (3AMP) or 4-(aminomethyl)piperidinium (4AMP), A = methylammonium (MA)). These materials have layered structures in which the stacking of the inorganic layers is unique, as the layers lie exactly on top of one another. With a slightly different position of the functional group in the templating cations 3AMP and 4AMP, the as-formed DJ perovskites show different optical properties, with the 3AMP series having smaller band gaps than the 4AMP series. Analysis of the crystal structures and density functional theory (DFT) calculations suggest that the origin of the systematic band gap shift is the strong but indirect influence of the organic cation on the inorganic framework. Fabrication of photovoltaic devices utilizing these materials as light absorbers reveals that (3AMP)(MA)_3Pb_4I_{13} has the best power conversion efficiency (PCE) of 7.32%, which is much higher than that of the corresponding (4AMP)(MA)_3Pb_4I_{13}.

  16. A model of Fe speciation and biogeochemistry at the Tropical Eastern North Atlantic Time-Series Observatory site

    NASA Astrophysics Data System (ADS)

    Ye, Y.; Völker, C.; Wolf-Gladrow, D. A.

    2009-10-01

    A one-dimensional model of Fe speciation and biogeochemistry, coupled with the General Ocean Turbulence Model (GOTM) and an NPZD-type ecosystem model, is applied to the Tropical Eastern North Atlantic Time-Series Observatory (TENATSO) site. Among the diverse processes affecting Fe speciation, this study focuses on investigating the role of dust particles in removing dissolved iron (DFe) using a more complex description of particle aggregation and sinking, and on explaining the abundance of organic Fe-binding ligands by modelling their origin and fate. The vertical distribution of different particle classes in the model shows high sensitivity to changing aggregation rates. Using the aggregation rates from the sensitivity study in this work, modelled particle fluxes are close to observations, with dust particles dominating near the surface and aggregates deeper in the water column. POC export at 1000 m is a little higher than regional sediment trap measurements, suggesting further improvement of the modelling of particle aggregation, sinking or remineralisation. Modelled strong ligands have a high abundance near the surface and decline rapidly below the deep chlorophyll maximum, showing qualitative similarity to observations. Without production of strong ligands, the phytoplankton concentration falls to 0 within the first 2 years of the model integration, owing to strong Fe limitation. A nudging of total weak ligands towards a constant value is required for reproducing the observed nutrient-like profiles, assuming a decay time of 7 years for weak ligands. This indicates that weak ligands have a longer decay time and therefore cannot be modelled adequately in a one-dimensional model. The modelled DFe profile is strongly influenced by particle concentration and vertical distribution, because the most important removal of DFe in deeper waters is colloid formation and aggregation. Redissolution of particulate iron is required to reproduce an observed DFe profile at the TENATSO site. Assuming colloidal iron is mainly composed of inorganic colloids, the modelled colloidal-to-soluble iron ratio is lower than the observations, indicating the importance of organic colloids.

  17. Theoretical and numerical studies of chaotic mixing

    NASA Astrophysics Data System (ADS)

    Kim, Ho Jun

    Theoretical and numerical studies of chaotic mixing are performed to circumvent the difficulties of efficient mixing, which come from the lack of turbulence in microfluidic devices. In order to carry out efficient and accurate parametric studies and to identify a fully chaotic state, a spectral element algorithm for solution of the incompressible Navier-Stokes and species transport equations is developed. Using Taylor series expansions in time marching, the new algorithm employs an algebraic factorization scheme on multi-dimensional staggered spectral element grids, and extends classical conforming Galerkin formulations to nonconforming spectral elements. Lagrangian particle tracking methods are utilized to study particle dispersion in the mixing device using spectral element and fourth-order Runge-Kutta discretizations in space and time, respectively. Comparative studies of five different techniques commonly employed to identify the chaotic strength and mixing efficiency in microfluidic systems are presented to demonstrate the competitive advantages and shortcomings of each method. These are the stirring index based on the box counting method, Poincare sections, finite-time Lyapunov exponents, the probability density function of the stretching field, and the mixing index inverse, based on the standard deviation of the scalar species distribution. A series of numerical simulations is performed by varying the Peclet number (Pe) at fixed kinematic conditions. The mixing length (lm) is characterized as a function of the Pe number, and lm ∝ ln(Pe) scaling is demonstrated for fully chaotic cases. Employing the aforementioned techniques, the optimum kinematic conditions and the actuation frequency of the stirrer that result in the highest mixing/stirring efficiency are identified in a zeta-potential-patterned straight microchannel, where a continuous flow is generated by superposition of a steady pressure-driven flow and a time-periodic electroosmotic flow induced by a stream-wise AC electric field. Finally, it is shown that the invariant manifold of the hyperbolic periodic point determines the geometry of fast mixing zones in oscillatory flows in a two-dimensional cavity.

  18. Determination of the temperature field of shell structures

    NASA Astrophysics Data System (ADS)

    Rodionov, N. G.

    1986-10-01

    A stationary heat conduction problem is formulated for the case of shell structures, such as those found in gas-turbine and jet engines. A two-dimensional elliptic differential equation of stationary heat conduction is obtained which allows, in an approximate manner, for temperature changes along a third variable, i.e., the shell thickness. The two-dimensional problem is reduced to a series of one-dimensional problems which are then solved using efficient difference schemes. The approach proposed here is illustrated by a specific example.

  19. Two-dimensional energy spectra in a high Reynolds number turbulent boundary layer

    NASA Astrophysics Data System (ADS)

    Chandran, Dileep; Baidya, Rio; Monty, Jason; Marusic, Ivan

    2016-11-01

    The current study measures the two-dimensional (2D) spectra of the streamwise velocity component (u) in a high Reynolds number turbulent boundary layer for the first time. A 2D spectrum shows the contribution of streamwise (λx) and spanwise (λy) length scales to the streamwise variance at a given wall height (z). 2D spectra can be a better tool for analysing spectral scaling laws, as they are free of the energy aliasing errors that can be present in one-dimensional spectra. A novel method is used to calculate the 2D spectra from the 2D correlation of u, which is obtained by measuring velocity time series at various spanwise locations using hot-wire anemometry. At low Reynolds number, the shape of the 2D spectra at a constant energy level follows λy ∝ √(zλx) at the larger scales, in agreement with the literature. However, at high Reynolds number, it is observed that the square-root relationship gradually transforms into a linear relationship (λy ∝ λx), which could be caused by large packets of eddies whose length grows in proportion to the growth of their width. Additionally, we show that this linear relationship observed at high Reynolds number is consistent with attached eddy predictions. The authors gratefully acknowledge the support from the Australian Research Council.
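
    A small sketch of how a 2D energy spectrum can be formed: by the Wiener-Khinchin theorem, the Fourier transform of the 2D correlation of u equals the squared magnitude of its 2D FFT, so the spectrum is computed here directly from a synthetic field. The grid, wavenumbers and test signal are invented for the check; the experimental multi-probe correlation procedure is not reproduced.

```python
import numpy as np

def two_dimensional_spectrum(u, dx, dy):
    """2D power spectrum of a fluctuating field u(y, x) via the FFT.

    By the Wiener-Khinchin theorem this equals the Fourier transform of the
    two-point correlation of u.
    """
    ny, nx = u.shape
    u = u - u.mean()
    phi = np.abs(np.fft.fft2(u))**2 / (nx * ny)
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    return np.fft.fftshift(phi), np.fft.fftshift(kx), np.fft.fftshift(ky)

# Synthetic check: a single plane wave concentrates energy at its wavenumber.
nx = ny = 256
x = np.linspace(0, 2 * np.pi, nx, endpoint=False)
y = np.linspace(0, 2 * np.pi, ny, endpoint=False)
X, Y = np.meshgrid(x, y)
u = np.sin(8 * X + 3 * Y)
phi, kx, ky = two_dimensional_spectrum(u, x[1] - x[0], y[1] - y[0])
iy, ix = np.unravel_index(np.argmax(phi), phi.shape)
print(kx[ix], ky[iy])   # approximately (8, 3), up to an overall sign
```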

  20. Application of the Virtual Fields Method to a relaxation behaviour of rubbers

    NASA Astrophysics Data System (ADS)

    Yoon, Sung-ho; Siviour, Clive R.

    2018-07-01

    This paper presents the application of the Virtual Fields Method (VFM) for the characterization of viscoelastic behaviour of rubbers. The relaxation behaviour of the rubbers following a dynamic loading event is characterized using the dynamic VFM in which full-field (two dimensional) strain and acceleration data, obtained from high-speed imaging, are analysed by the principle of virtual work without traction force data, instead using the acceleration fields in the specimen to provide stress information. Two (silicone and nitrile) rubbers were tested in tension using a drop-weight apparatus. It is assumed that the dynamic behaviour is described by the combination of hyperelastic and Prony series models. A VFM based procedure is designed and used to produce the identification of the modulus term of a hyperelastic model and the Prony series parameters within a time scale determined by two experimental factors: imaging speed and loading duration. Then, the time range of the data is extended using experiments at different temperatures combined with the time-temperature superposition principle. Prior to these experimental analyses, finite element simulations were performed to validate the application of the proposed VFM analysis. Therefore, for the first time, it has been possible to identify relaxation behaviour of a material following dynamic loading, using a technique that can be applied to both small and large deformations.

  1. Record statistics of a strongly correlated time series: random walks and Lévy flights

    NASA Astrophysics Data System (ADS)

    Godrèche, Claude; Majumdar, Satya N.; Schehr, Grégory

    2017-08-01

    We review recent advances on the record statistics of strongly correlated time series, whose entries denote the positions of a random walk or a Lévy flight on a line. After a brief survey of the theory of records for independent and identically distributed random variables, we focus on random walks. During the last few years, it was indeed realized that random walks are a very useful ‘laboratory’ to test the effects of correlations on the record statistics. We start with the simple one-dimensional random walk with symmetric jumps (both continuous and discrete) and discuss in detail the statistics of the number of records, as well as of the ages of the records, i.e. the lapses of time between two successive record breaking events. Then we review the results that were obtained for a wide variety of random walk models, including random walks with a linear drift, continuous time random walks, constrained random walks (like the random walk bridge) and the case of multiple independent random walkers. Finally, we discuss further observables related to records, like the record increments, as well as some questions raised by physical applications of record statistics, like the effects of measurement error and noise.
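
    The basic object of the review, the record number of a random-walk trajectory, is easy to estimate numerically. The sketch below counts upper records and compares the sample mean with the universal large-n behaviour ⟨R_n⟩ ≃ √(4n/π) that holds for symmetric walks with continuous jump distributions; the walk length and ensemble size are arbitrary.

```python
import numpy as np

def count_records(walk):
    """Upper records along one trajectory; the starting point counts as a record."""
    running_max = np.maximum.accumulate(walk)
    return int(np.count_nonzero(walk >= running_max))

# For symmetric walks with continuous jumps the mean record number is
# universal: <R_n> ~ sqrt(4 n / pi) for large n, independent of the jump law.
rng = np.random.default_rng(0)
n_steps, n_walks = 10_000, 2_000
jumps = rng.standard_normal((n_walks, n_steps))
walks = np.concatenate([np.zeros((n_walks, 1)), np.cumsum(jumps, axis=1)], axis=1)
mean_records = np.mean([count_records(w) for w in walks])
print(mean_records, np.sqrt(4 * n_steps / np.pi))   # the two should be close
```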

  2. Asymmetric collapse by dissolution or melting in a uniform flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rycroft, Chris H.; Bazant, Martin Z.

    An advection-diffusion-limited dissolution model of an object being eroded by a two-dimensional potential flow is presented. By taking advantage of the conformal invariance of the model, a numerical method is introduced that tracks the evolution of the object boundary in terms of a time-dependent Laurent series. Simulations of a variety of dissolving objects are shown, which shrink and collapse to a single point in finite time. The simulations reveal a surprising exact relationship, whereby the collapse point is the root of a non-analytic function given in terms of the flow velocity and the Laurent series coefficients describing the initial shape. This result is subsequently derived using residue calculus. The structure of the non-analytic function is examined for three different test cases, and a practical approach to determine the collapse point using a generalized Newton-Raphson root-finding algorithm is outlined. These examples also illustrate the possibility that the model breaks down in finite time prior to complete collapse, due to a topological singularity, as the dissolving boundary overlaps itself rather than breaking up into multiple domains (analogous to droplet pinch-off in fluid mechanics). In conclusion, the model raises fundamental mathematical questions about broken symmetries in finite-time singularities of both continuous and stochastic dynamical systems.

  3. Asymmetric collapse by dissolution or melting in a uniform flow

    PubMed Central

    Bazant, Martin Z.

    2016-01-01

    An advection–diffusion-limited dissolution model of an object being eroded by a two-dimensional potential flow is presented. By taking advantage of the conformal invariance of the model, a numerical method is introduced that tracks the evolution of the object boundary in terms of a time-dependent Laurent series. Simulations of a variety of dissolving objects are shown, which shrink and collapse to a single point in finite time. The simulations reveal a surprising exact relationship, whereby the collapse point is the root of a non-analytic function given in terms of the flow velocity and the Laurent series coefficients describing the initial shape. This result is subsequently derived using residue calculus. The structure of the non-analytic function is examined for three different test cases, and a practical approach to determine the collapse point using a generalized Newton–Raphson root-finding algorithm is outlined. These examples also illustrate the possibility that the model breaks down in finite time prior to complete collapse, due to a topological singularity, as the dissolving boundary overlaps itself rather than breaking up into multiple domains (analogous to droplet pinch-off in fluid mechanics). The model raises fundamental mathematical questions about broken symmetries in finite-time singularities of both continuous and stochastic dynamical systems. PMID:26997890

  4. Asymmetric collapse by dissolution or melting in a uniform flow

    DOE PAGES

    Rycroft, Chris H.; Bazant, Martin Z.

    2016-01-06

    An advection-diffusion-limited dissolution model of an object being eroded by a two-dimensional potential flow is presented. By taking advantage of the conformal invariance of the model, a numerical method is introduced that tracks the evolution of the object boundary in terms of a time-dependent Laurent series. Simulations of a variety of dissolving objects are shown, which shrink and collapse to a single point in finite time. The simulations reveal a surprising exact relationship, whereby the collapse point is the root of a non-analytic function given in terms of the flow velocity and the Laurent series coefficients describing the initial shape. This result is subsequently derived using residue calculus. The structure of the non-analytic function is examined for three different test cases, and a practical approach to determine the collapse point using a generalized Newton-Raphson root-finding algorithm is outlined. These examples also illustrate the possibility that the model breaks down in finite time prior to complete collapse, due to a topological singularity, as the dissolving boundary overlaps itself rather than breaking up into multiple domains (analogous to droplet pinch-off in fluid mechanics). In conclusion, the model raises fundamental mathematical questions about broken symmetries in finite-time singularities of both continuous and stochastic dynamical systems.

  5. Iterative Refinement of a Binding Pocket Model: Active Computational Steering of Lead Optimization

    PubMed Central

    2012-01-01

    Computational approaches for binding affinity prediction are most frequently demonstrated through cross-validation within a series of molecules or through performance shown on a blinded test set. Here, we show how such a system performs in an iterative, temporal lead optimization exercise. A series of gyrase inhibitors with known synthetic order formed the set of molecules that could be selected for “synthesis.” Beginning with a small number of molecules, based only on structures and activities, a model was constructed. Compound selection was done computationally, each time making five selections based on confident predictions of high activity and five selections based on a quantitative measure of three-dimensional structural novelty. Compound selection was followed by model refinement using the new data. Iterative computational candidate selection produced rapid improvements in selected compound activity, and incorporation of explicitly novel compounds uncovered much more diverse active inhibitors than strategies lacking active novelty selection. PMID:23046104

  6. Molecular surface representation using 3D Zernike descriptors for protein shape comparison and docking.

    PubMed

    Kihara, Daisuke; Sael, Lee; Chikhi, Rayan; Esquivel-Rodriguez, Juan

    2011-09-01

    The tertiary structures of proteins have been solved at an increasing pace in recent years. To capitalize on the enormous efforts paid for accumulating the structure data, efficient and effective computational methods need to be developed for comparing, searching, and investigating interactions of protein structures. We introduce the 3D Zernike descriptor (3DZD), an emerging technique for describing molecular surfaces. The 3DZD is a series expansion of a mathematical three-dimensional function, and thus a tertiary structure is represented compactly by a vector of the coefficients of the terms in the series. A strong advantage of the 3DZD is that it is invariant to rotation of the target object being represented. These two characteristics of the 3DZD allow rapid comparison of surface shapes, which is sufficient for real-time structure database screening. In this article, we review various applications of the 3DZD, which have been recently proposed.

  7. Guide to Using Onionskin Analysis Code (U)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fugate, Michael Lynn; Morzinski, Jerome Arthur

    2016-09-15

    This document is a guide to using R code written for the purpose of analyzing onionskin experiments. We expect the user to be very familiar with statistical methods and the R programming language. For more details about onionskin experiments and the statistical methods mentioned in this document see Storlie, Fugate, et al. (2013). Engineers at LANL experiment with detonators and high explosives to assess performance. The experimental unit, called an onionskin, is a hemisphere consisting of a detonator and a booster pellet surrounded by explosive material. When the detonator explodes, a streak camera mounted above the pole of the hemisphere records when the shock wave arrives at the surface. The output from the camera is a two-dimensional image that is transformed into a curve that shows the arrival time as a function of polar angle. The statistical challenge is to characterize a baseline population of arrival time curves and to compare the baseline curves to curves from a new, so-called test series. The hope is that the new test series of curves is statistically similar to the baseline population.

  8. Linking market interaction intensity of 3D Ising type financial model with market volatility

    NASA Astrophysics Data System (ADS)

    Fang, Wen; Ke, Jinchuan; Wang, Jun; Feng, Ling

    2016-11-01

    Microscopic interaction models in physics have been used to investigate the complex phenomena of economic systems. The simple interactions involved can lead to complex behaviors and help the understanding of mechanisms in the financial market at a systemic level. This article aims to develop a financial time series model based on the 3D (three-dimensional) Ising dynamic system, which is widely used as an interacting-spins model to explain ferromagnetism in physics. Through Monte Carlo simulations of the financial model and numerical analysis of both the simulated return time series and the historical return data of the Hushen 300 (HS300) index in the Chinese stock market, we show that despite its simplicity, this model displays stylized facts similar to those seen in real financial markets. We demonstrate a possible underlying link between the volatility fluctuations of the real stock market and changes in the interaction strengths of market participants in the financial model. In particular, the stochastic interaction strength in our model suggests that the real market may be consistently operating near the critical point of the system.
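
    A minimal sketch in the spirit of the interacting-spins construction described above: a small 3D Ising lattice is evolved by Metropolis sweeps near its critical temperature and a "return" series is read off from changes of the magnetization. The lattice size, the mapping from magnetization to returns, and the fixed (non-stochastic) interaction strength are simplifications, not the authors' model.

```python
import numpy as np

def metropolis_sweep(spins, beta, rng):
    """One Metropolis sweep of a 3D Ising lattice with periodic boundaries."""
    L = spins.shape[0]
    for _ in range(spins.size):
        i, j, k = rng.integers(0, L, size=3)
        nb = (spins[(i + 1) % L, j, k] + spins[(i - 1) % L, j, k] +
              spins[i, (j + 1) % L, k] + spins[i, (j - 1) % L, k] +
              spins[i, j, (k + 1) % L] + spins[i, j, (k - 1) % L])
        dE = 2.0 * spins[i, j, k] * nb          # energy cost of flipping this spin
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j, k] *= -1

rng = np.random.default_rng(0)
L, beta, n_days = 8, 0.22, 500    # beta near the 3D Ising critical value ~0.2217
spins = rng.choice([-1, 1], size=(L, L, L))
magnetization = []
for _ in range(n_days):
    metropolis_sweep(spins, beta, rng)
    magnetization.append(spins.mean())
returns = np.diff(magnetization)  # a crude proxy for daily returns
print(np.std(returns))
```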

  9. Decomposing Time Series Data by a Non-negative Matrix Factorization Algorithm with Temporally Constrained Coefficients

    PubMed Central

    Cheung, Vincent C. K.; Devarajan, Karthik; Severini, Giacomo; Turolla, Andrea; Bonato, Paolo

    2017-01-01

    The non-negative matrix factorization algorithm (NMF) decomposes a data matrix into a set of non-negative basis vectors, each scaled by a coefficient. In its original formulation, the NMF assumes the data samples and dimensions to be independently distributed, making it a less-than-ideal algorithm for the analysis of time series data with temporal correlations. Here, we seek to derive an NMF that accounts for temporal dependencies in the data by explicitly incorporating a very simple temporal constraint for the coefficients into the NMF update rules. We applied the modified algorithm to 2 multi-dimensional electromyographic data sets collected from the human upper-limb to identify muscle synergies. We found that because it reduced the number of free parameters in the model, our modified NMF made it possible to use the Akaike Information Criterion to objectively identify a model order (i.e., the number of muscle synergies composing the data) that is more functionally interpretable, and closer to the numbers previously determined using ad hoc measures. PMID:26737046
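
    The sketch below shows standard multiplicative NMF updates with a crude temporal constraint, namely a smoothing of the coefficient matrix along time between updates; this stands in for the paper's explicit constraint, which is not reproduced here. The toy two-synergy EMG-like data and all parameters are invented for illustration.

```python
import numpy as np

def nmf_temporal(V, k, n_iter=300, smooth=0.5, rng=None):
    """NMF (multiplicative updates) with a crude temporal constraint.

    V ~ W H is fitted with Lee-Seung updates; between updates each row of H
    is pulled toward its temporal neighbours, a stand-in for an explicit
    temporal constraint on the coefficients.
    """
    rng = rng or np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, k)) + 1e-3
    H = rng.random((k, n)) + 1e-3
    eps = 1e-12
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        # temporal constraint: average each coefficient with its neighbours
        H_pad = np.pad(H, ((0, 0), (1, 1)), mode="edge")
        H = (1 - smooth) * H + smooth * 0.5 * (H_pad[:, :-2] + H_pad[:, 2:])
        W *= (V @ H.T) / (W @ H @ H.T + eps)
        norms = np.linalg.norm(W, axis=0, keepdims=True) + eps
        W /= norms                      # unit-norm basis vectors ...
        H *= norms.T                    # ... with the scale moved into H
    return W, H

# Toy EMG-like data: two smooth "synergies" mixed into eight channels.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
H_true = np.vstack([np.exp(-((t - 0.3) / 0.1)**2),
                    np.exp(-((t - 0.7) / 0.1)**2)])
W_true = rng.random((8, 2))
V = W_true @ H_true + 0.01 * rng.random((8, 200))
W, H = nmf_temporal(V, k=2)
print(W.shape, H.shape)
```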

  10. Towards a novel look on low-frequency climate reconstructions

    NASA Astrophysics Data System (ADS)

    Kamenik, Christian; Goslar, Tomasz; Hicks, Sheila; Barnekow, Lena; Huusko, Antti

    2010-05-01

    Information on low-frequency (millennial to sub-centennial) climate change is often derived from sedimentary archives, such as peat profiles or lake sediments. Usually, these archives have non-annual and varying time resolution. Their dating is mainly based on radionuclides, which provide probabilistic age-depth relationships with complex error structures. Dating uncertainties impede the interpretation of sediment-based climate reconstructions. They complicate the calculation of time-dependent rates. In most cases, they make any calibration in time impossible. Sediment-based climate proxies are therefore often presented as a single, best-guess time series without proper calibration and error estimation. Errors along time and dating errors that propagate into the calculation of time-dependent rates are neglected. Our objective is to overcome the aforementioned limitations by using a 'swarm' or 'ensemble' of reconstructions instead of a single best-guess. The novelty of our approach is to take into account age-depth uncertainties by permuting through a large number of potential age-depth relationships of the archive of interest. For each individual permutation we can then calculate rates, calibrate proxies in time, and reconstruct the climate-state variable of interest. From the resulting swarm of reconstructions, we can derive realistic estimates of even complex error structures. The likelihood of reconstructions is visualized by a grid of two-dimensional kernels that take into account probabilities along time and the climate-state variable of interest simultaneously. For comparison and regional synthesis, likelihoods can be scored against other independent climate time series.

  11. BGFit: management and automated fitting of biological growth curves.

    PubMed

    Veríssimo, André; Paixão, Laura; Neves, Ana Rute; Vinga, Susana

    2013-09-25

    Existing tools to model cell growth curves do not offer a flexible integrative approach to manage large datasets and automatically estimate parameters. Due to the increase of experimental time series from microbiology and oncology, the need for software that allows researchers to easily organize experimental data and simultaneously extract relevant parameters in an efficient way is crucial. BGFit provides a web-based unified platform where a rich set of dynamic models can be fitted to experimental time-series data, further allowing the results to be managed efficiently in a structured and hierarchical way. The data managing system allows users to organize projects, experiments and measurement data, and also to define teams with different editing and viewing permissions. Several dynamic and algebraic models are already implemented, such as polynomial regression, Gompertz, Baranyi, Logistic and Live Cell Fraction models, and the user can easily add new models, thus expanding the current ones. BGFit allows users to easily manage their data and models in an integrated way, even if they are not familiar with databases or existing computational tools for parameter estimation. BGFit is designed with a flexible architecture that focuses on extensibility and leverages free software with existing tools and methods, allowing different data modeling techniques to be compared and evaluated. The application is described in the context of fitting bacterial and tumor cell growth data, but it is also applicable to any type of two-dimensional data, e.g. physical chemistry and macroeconomic time series, being fully scalable to a high number of projects, data and model complexity.
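
    As an independent sketch (not BGFit's code), one of the growth models named in the abstract, the modified Gompertz curve, can be fitted to a growth series with a standard least-squares routine, assuming SciPy is available; the parameter values and synthetic measurements below are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, mu, lam):
    """Modified Gompertz growth curve (Zwietering parameterization)."""
    return A * np.exp(-np.exp(mu * np.e / A * (lam - t) + 1))

# Hypothetical growth measurements (e.g. optical density) with a little noise.
t = np.linspace(0, 24, 49)                     # hours
rng = np.random.default_rng(0)
y = gompertz(t, A=1.8, mu=0.35, lam=4.0) + 0.02 * rng.standard_normal(t.size)

popt, _ = curve_fit(gompertz, t, y, p0=[1.0, 0.2, 2.0])
print(dict(zip(["A", "mu", "lam"], np.round(popt, 3))))
```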

  12. Spatial Pyramid Covariance based Compact Video Code for Robust Face Retrieval in TV-series.

    PubMed

    Li, Yan; Wang, Ruiping; Cui, Zhen; Shan, Shiguang; Chen, Xilin

    2016-10-10

    We address the problem of face video retrieval in TV-series, which searches video clips for the presence of a specific character given one face track of that character. This is tremendously challenging because, on one hand, faces in TV-series are captured in largely uncontrolled conditions with complex appearance variations, and, on the other hand, the retrieval task needs an efficient representation with low time and space complexity. To handle this problem, we propose a compact and discriminative representation for the huge body of video data, named Compact Video Code (CVC). Our method first models the face track by its sample (i.e., frame) covariance matrix to capture the video data variations in a statistical manner. To incorporate discriminative information and obtain a more compact video signature suitable for retrieval, the high-dimensional covariance representation is further encoded as a much lower-dimensional binary vector, which finally yields the proposed CVC. Specifically, each bit of the code, i.e., each dimension of the binary vector, is produced via supervised learning in a max-margin framework, which aims to balance the discriminability and stability of the code. Besides, we further extend the descriptive granularity of the covariance matrix from the traditional pixel level to the more general patch level, and propose a novel hierarchical video representation named Spatial Pyramid Covariance (SPC) along with a fast calculation method. Face retrieval experiments on two challenging TV-series video databases, i.e., the Big Bang Theory and Prison Break, demonstrate the competitiveness of the proposed CVC over state-of-the-art retrieval methods. In addition, as a general video matching algorithm, CVC is also evaluated in the traditional video face recognition task on a standard Internet database, i.e., YouTube Celebrities, showing quite promising performance with an extremely compact code of only 128 bits.
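
    A much-simplified illustration of the covariance-plus-binary-code idea, not the authors' learned max-margin hashing: a face track (here random feature vectors) is summarized by its frame covariance matrix, embedded as a vector via the matrix logarithm, and binarized with fixed random projections standing in for the supervised projections learned in the paper.

        import numpy as np
        from scipy.linalg import logm

        rng = np.random.default_rng(2)

        def track_to_code(frames, projections):
            """frames: (n_frames, d) matrix of per-frame features.
            Returns a binary code; the projections are random hyperplanes used here
            in place of the supervised max-margin projections of the paper."""
            cov = np.cov(frames, rowvar=False) + 1e-6 * np.eye(frames.shape[1])
            log_cov = logm(cov).real                       # log-Euclidean embedding of the SPD matrix
            vec = log_cov[np.triu_indices_from(log_cov)]   # vectorize the upper triangle
            return (projections @ vec > 0).astype(np.uint8)

        d, n_bits = 16, 128
        proj = rng.normal(size=(n_bits, d * (d + 1) // 2))
        track_a = rng.normal(size=(40, d))
        track_b = track_a + rng.normal(scale=0.1, size=(40, d))   # a similar track
        code_a, code_b = track_to_code(track_a, proj), track_to_code(track_b, proj)
        print("Hamming distance:", np.count_nonzero(code_a != code_b), "of", n_bits)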

  13. The spike trains of inhibited pacemaker neurons seen through the magnifying glass of nonlinear analyses.

    PubMed

    Segundo, J P; Sugihara, G; Dixon, P; Stiber, M; Bersier, L F

    1998-12-01

    This communication describes the new information that may be obtained by applying nonlinear analytical techniques to neurobiological time-series. Specifically, we consider the sequence of interspike intervals Ti (the "timing") of trains recorded from synaptically inhibited crayfish pacemaker neurons. As reported earlier, different postsynaptic spike train forms (sets of timings with shared properties) are generated by varying the average rate and/or pattern (implying interval dispersions and sequences) of presynaptic spike trains. When the presynaptic train is Poisson (independent exponentially distributed intervals), the form is "Poisson-driven" (unperturbed and lengthened intervals succeed each other irregularly). When presynaptic trains are pacemaker (intervals practically equal), forms are either "p:q locked" (intervals repeat periodically), "intermittent" (mostly almost locked but disrupted irregularly), "phase walk throughs" (intermittencies with briefer regular portions), or "messy" (difficult to predict or describe succinctly). Messy trains are either "erratic" (some intervals natural and others lengthened irregularly) or "stammerings" (intervals are integral multiples of presynaptic intervals). The individual spike train forms were analysed using attractor reconstruction methods based on the lagged coordinates provided by successive intervals from the time-series Ti. Numerous models were evaluated in terms of their predictive performance by a trial-and-error procedure: the most successful model was taken as best reflecting the true nature of the system's attractor. Each form was characterized in terms of its dimensionality, nonlinearity and predictability. (1) The dimensionality of the underlying dynamical attractor was estimated by the minimum number of variables (coordinates Ti) required to model acceptably the system's dynamics, i.e. by the system's degrees of freedom. Each model tested was based on a different number of Ti; the smallest number whose predictions were judged successful provided the best integer approximation of the attractor's true dimension (not necessarily an integer). Dimensionalities from three to five provided acceptable fits. (2) The degree of nonlinearity was estimated by: (i) comparing the correlations between experimental results and data from linear and nonlinear models, and (ii) tuning model nonlinearity via a distance-weighting function and identifying either a local or a global neighborhood size. Lockings were compatible with linear models and stammerings were marginal; nonlinear models were best for Poisson-driven, intermittent and erratic forms. (3) Finally, prediction accuracy was plotted against increasingly long sequences of intervals forecast: the accuracies for Poisson-driven, locked and stammering forms were invariant, revealing irregularities due to uncorrelated noise, but those of intermittent and messy erratic forms decayed rapidly, indicating an underlying deterministic process. The excellent reconstructions possible for messy erratic and for some intermittent forms are especially significant because of their relatively low dimensionality (around 4), high degree of nonlinearity and prediction decay with time. This is characteristic of chaotic systems, and provides evidence that nonlinear couplings between relatively few variables are the major source of the apparent complexity seen in these cases.
This demonstration of different dimensions, degrees of nonlinearity and predictabilities provides rigorous support for the categorization of different synaptically driven discharge forms proposed earlier on the basis of more heuristic criteria. This has significant implications. (1) It demonstrates that heterogeneous postsynaptic forms can indeed be induced by manipulating a few presynaptic variables. (2) Each presynaptic timing induces a form with characteristic dimensionality, thus breaking up the preparation into subsystems such that the physical variables in each operate as one
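
    A hedged sketch of the reconstruction-and-prediction workflow described above, with a synthetic interval series (a noisy logistic map) standing in for the crayfish recordings: successive intervals are embedded as lagged coordinates, the next interval is forecast from nearest neighbours, and embedding dimensions are compared by the out-of-sample correlation between predicted and observed intervals.

        import numpy as np

        rng = np.random.default_rng(3)
        # Synthetic interspike-interval series from the logistic map (stand-in for data).
        x = np.empty(2000); x[0] = 0.37
        for i in range(1, x.size):
            x[i] = 3.9 * x[i - 1] * (1.0 - x[i - 1])
        T = 0.5 + 0.5 * x + rng.normal(0.0, 0.005, x.size)

        def embed(series, dim):
            """Lagged-coordinate embedding: rows are (T_i, ..., T_{i+dim-1})."""
            return np.array([series[i:i + dim] for i in range(series.size - dim)])

        def nn_forecast_corr(series, dim, k=5, train=1500):
            X = embed(series, dim)
            y = series[dim:]
            Xtr, ytr, Xte, yte = X[:train], y[:train], X[train:], y[train:]
            preds = []
            for row in Xte:
                idx = np.argsort(np.linalg.norm(Xtr - row, axis=1))[:k]
                preds.append(ytr[idx].mean())   # local (zeroth-order) predictor
            return np.corrcoef(preds, yte)[0, 1]

        for dim in (1, 2, 3, 4, 5):
            print(dim, round(nn_forecast_corr(T, dim), 3))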

  14. Bayesian wavelet PCA methodology for turbomachinery damage diagnosis under uncertainty

    NASA Astrophysics Data System (ADS)

    Xu, Shengli; Jiang, Xiaomo; Huang, Jinzhi; Yang, Shuhua; Wang, Xiaofang

    2016-12-01

    Centrifugal compressors often suffer defects such as impeller cracking, resulting in forced outages of the entire plant. Damage diagnostics and condition monitoring of such turbomachinery systems have become increasingly important and powerful tools to prevent potential failure in components and reduce unplanned forced outages and further maintenance costs, while improving the reliability, availability and maintainability of a turbomachinery system. This paper presents a probabilistic signal processing methodology for damage diagnostics using multiple time history data collected from different locations of a turbomachine, considering data uncertainty and multivariate correlation. The proposed methodology is based on the integration of three state-of-the-art data mining techniques: discrete wavelet packet transform, Bayesian hypothesis testing, and probabilistic principal component analysis. The multiresolution wavelet analysis approach is employed to decompose a time series signal into different levels of wavelet coefficients. These coefficients represent multiple time-frequency resolutions of a signal. Bayesian hypothesis testing is then applied to each level of wavelet coefficients to remove possible imperfections. The Bayesian posterior-odds-ratio approach provides a direct means to assess whether there is imperfection in the decomposed coefficients, thus avoiding over-denoising. Power spectral density estimated by the Welch method is utilized to evaluate the effectiveness of the Bayesian wavelet cleansing method. Furthermore, the probabilistic principal component analysis approach is developed to reduce the dimensionality of multiple time series and to address multivariate correlation and data uncertainty for damage diagnostics. The proposed methodology and generalized framework are demonstrated with a set of sensor data collected from a real-world centrifugal compressor with impeller cracks, through both time series and contour analyses of vibration signals and principal components.
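
    Two of the three ingredients (wavelet decomposition and principal component analysis) can be sketched as below; the Bayesian hypothesis test on each coefficient level is replaced here by a simple universal threshold, the wavelet packet transform by an ordinary discrete wavelet transform, and probabilistic PCA by its deterministic SVD limit, so this is only a loose stand-in for the published method. PyWavelets is assumed to be available.

        import numpy as np
        import pywt

        rng = np.random.default_rng(4)
        t = np.linspace(0.0, 1.0, 1024)
        # Three correlated vibration-like channels with noise (stand-ins for sensor data).
        base = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
        signals = np.stack([base + rng.normal(0.0, 0.3, t.size) for _ in range(3)])

        def wavelet_denoise(x, wavelet="db4", level=4):
            """Wavelet decomposition, soft-threshold the detail coefficients, reconstruct.
            A universal threshold stands in for the Bayesian cleansing step of the paper."""
            coeffs = pywt.wavedec(x, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate from finest level
            thr = sigma * np.sqrt(2.0 * np.log(x.size))
            coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)[: x.size]

        cleaned = np.stack([wavelet_denoise(s) for s in signals])

        # PCA via SVD of the centred multichannel series (deterministic limit of PPCA).
        centred = cleaned - cleaned.mean(axis=1, keepdims=True)
        U, S, Vt = np.linalg.svd(centred, full_matrices=False)
        explained = S**2 / np.sum(S**2)
        print("variance explained by first component:", round(float(explained[0]), 3))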

  15. Efficient computation of PDF-based characteristics from diffusion MR signal.

    PubMed

    Assemlal, Haz-Edine; Tschumperlé, David; Brun, Luc

    2008-01-01

    We present a general method for the computation of PDF-based characteristics of the tissue micro-architecture in MR imaging. The approach relies on the approximation of the MR signal by a series expansion based on Spherical Harmonics and Laguerre-Gaussian functions, followed by a simple projection step that is efficiently done in a finite dimensional space. The resulting algorithm is generic and flexible, and is able to compute a large set of useful characteristics of the local tissue structure. We illustrate the effectiveness of this approach by showing results on synthetic and real MR datasets acquired in a clinical time-frame.

  16. Persistence and stochastic periodicity in the intensity dynamics of a fiber laser during the transition to optical turbulence

    NASA Astrophysics Data System (ADS)

    Carpi, Laura; Masoller, Cristina

    2018-02-01

    Many natural systems display transitions among different dynamical regimes, which are difficult to identify when the data are noisy and high dimensional. A technologically relevant example is a fiber laser, which can display complex dynamical behaviors that involve nonlinear interactions of millions of cavity modes. Here we study the laminar-turbulence transition that occurs when the laser pump power is increased. By applying various data analysis tools to empirical intensity time series we characterize their persistence and demonstrate that at the transition temporal correlations can be precisely represented by a surprisingly simple model.

  17. Low-grazing angle laser scans of foreshore topography, swash and inner surf-zone wave heights, and mean water level: validation and storm response

    NASA Astrophysics Data System (ADS)

    Brodie, K. L.; McNinch, J. E.; Forte, M.; Slocum, R.

    2010-12-01

    Accurately predicting beach evolution during storms requires models that correctly parameterize wave runup and inner surf-zone processes, the principal drivers of sediment exchange between the beach and surf-zone. Previous studies aimed at measuring wave runup and swash zone water levels have been restricted to analyzing water-elevation time series of (1) the shoreward-most swash excursion using video imaging or near-bed resistance wires, or (2) the free water surface at a particular location on the foreshore using pressure sensors. These data are often compared with wave forcing parameters in deeper water as well as with beach topography observed at finite intervals throughout the time series to identify links between foreshore evolution, wave spectra, and water level variations. These approaches have led to numerous parameterizations and empirical equations for wave runup but have difficulty providing adequate data to quantify and understand short-term spatial and temporal variations in foreshore evolution. As a result, modeling shoreline response and changes in sub-aerial beach volume during storms remains a substantial challenge. Here, we demonstrate a novel technique in which a terrestrial laser scanner is used to continuously measure beach and foreshore topography as well as water elevation (and wave height) in the swash and inner surf-zone during storms. The terrestrial laser scanner is mounted 2-m above the dune crest at the Field Research Facility in Duck, NC in line with cross-shore wave gauges located at 2-m, 3-m, 5-m, 6-m, and 8-m of water depth. The laser is automated to collect hourly, two-dimensional, 20-minute time series of data along a narrow swath in addition to an hourly three-dimensional laser scan of beach and dune topography +/- 250m alongshore from the laser. Low grazing-angle laser scans are found to reflect off of the surface of the water, providing spatially (e.g. dx <= 0.1 m) and temporally (e.g. dt = 3Hz) dense elevation data of the foreshore, swash, and inner-surf zone bore heights. Foreshore elevation precision is observed to be < 0.01m. Sea surface elevation data is confined to the breaking region and is more extensive in rough, fully-dissipative surf zones, with the fronts of breaking waves and dissipated bores resolved most clearly. Time series of swash front (runup) data will be compared with simultaneously collected video-imaged swash timestacks, and wave height data of the inner surf zone will be compared with wave data from an Aquadopp in 2m of water depth. In addition, analysis of the water level time series data at 10 cm intervals across the profile enables reconstruction of the shoreline setup profile as well as cross-shore variations in 1D wave spectra. Foreshore beach morphology evolution is analyzed using both the 2D cross-shore profile data and the 3D topographic data during multiple storm events. Potential sources of error in the measurements, such as shadowing of the wave troughs or reflectance off of wave spray, are identified and quantified.

  18. How are you feeling?: A personalized methodology for predicting mental states from temporally observable physical and behavioral information.

    PubMed

    Tuarob, Suppawong; Tucker, Conrad S; Kumara, Soundar; Giles, C Lee; Pincus, Aaron L; Conroy, David E; Ram, Nilam

    2017-04-01

    It is believed that anomalous mental states such as stress and anxiety not only cause suffering for individuals, but also lead to tragedies in some extreme cases. The ability to predict the mental state of an individual at both current and future time periods could prove critical to healthcare practitioners. Currently, the practical way to predict an individual's mental state is through mental examinations that involve psychological experts performing the evaluations. However, such methods can be time and resource consuming, limiting their applicability to a wide population. Furthermore, some individuals may be unaware of their mental states or may feel uncomfortable expressing themselves during the evaluations. Hence, their anomalous mental states could remain undetected for a prolonged period of time. The objective of this work is to demonstrate that advanced machine learning based approaches can be used to generate mathematical models that predict the current and future mental states of an individual. The problem of mental state prediction is transformed into a time series forecasting problem, where an individual is represented as a multivariate time series stream of monitored physical and behavioral attributes. A personalized mathematical model is then automatically generated to capture the dependencies among these attributes, which is used for the prediction of mental states for each individual. In particular, we first illustrate the drawbacks of traditional multivariate time series forecasting methodologies such as vector autoregression. Then, we show that such issues could be mitigated by using machine learning regression techniques modified to capture temporal dependencies in time series data. A case study using the data from 150 human participants illustrates that the proposed machine learning based forecasting methods are more suitable for high-dimensional psychological data than the traditional vector autoregressive model in terms of both magnitude of error and directional accuracy. These results not only present a successful usage of machine learning techniques in psychological studies, but also serve as a building block for multiple medical applications that could rely on an automated system to gauge individuals' mental states. Copyright © 2017 Elsevier Inc. All rights reserved.
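
    An illustrative sketch of the central idea, replacing vector autoregression with a machine-learning regressor trained on lagged features; the monitored attributes, the lag length and the random-forest choice are assumptions made for illustration, not details taken from the study.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(5)
        n, lags = 300, 3
        # Hypothetical daily attributes for one participant: sleep hours, step count
        # (in thousands), and a self-reported stress score to be forecast.
        sleep = 7 + rng.normal(0, 1, n)
        steps = 8 + rng.normal(0, 2, n)
        stress = np.zeros(n)
        for i in range(1, n):
            stress[i] = (0.6 * stress[i - 1] - 0.3 * sleep[i - 1]
                         + 0.1 * steps[i - 1] + rng.normal(0, 0.5))

        attrs = np.column_stack([sleep, steps, stress])
        X = np.array([attrs[i - lags:i].ravel() for i in range(lags, n)])   # lagged features
        y = stress[lags:]                                                   # one-step-ahead target

        split = 250
        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(X[:split], y[:split])
        pred = model.predict(X[split:])
        direction = np.mean(np.sign(np.diff(pred)) == np.sign(np.diff(y[split:])))
        print("directional accuracy on held-out days:", round(float(direction), 2))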

  19. Experimental identification of a comb-shaped chaotic region in multiple parameter spaces simulated by the Hindmarsh—Rose neuron model

    NASA Astrophysics Data System (ADS)

    Jia, Bing

    2014-03-01

    A comb-shaped chaotic region has been simulated in multiple two-dimensional parameter spaces using the Hindmarsh-Rose (HR) neuron model in many recent studies, which can interpret almost all of the previously simulated bifurcation processes with chaos in neural firing patterns. In the present paper, a comb-shaped chaotic region in a two-dimensional parameter space was reproduced; it exhibited different period-adding bifurcation processes with chaos as one parameter was changed while the other parameter was fixed at different levels. In biological experiments, different period-adding bifurcation scenarios with chaos, obtained by decreasing the extra-cellular calcium concentration, were observed in some neural pacemakers at different levels of extra-cellular 4-aminopyridine concentration and in other pacemakers at different levels of extra-cellular caesium concentration. Using nonlinear time series analysis methods, the deterministic dynamics of the experimental chaotic firings were investigated. The period-adding bifurcations with chaos observed in the experiments resembled those simulated in the comb-shaped chaotic region using the HR model. The experimental results show that period-adding bifurcations with chaos are preserved in different two-dimensional parameter spaces, which provides evidence of the existence of the comb-shaped chaotic region and a demonstration of the simulation results in different two-dimensional parameter spaces in the HR neuron model. The results also present relationships between different firing patterns in two-dimensional parameter spaces.
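
    The three-variable Hindmarsh-Rose equations referred to above can be integrated directly; the sketch below scans one parameter (the injected current I) while the others are held fixed, i.e. a one-parameter cut through a two-dimensional parameter space, and summarizes each run by its interspike intervals. The parameter values and spike threshold are conventional illustrative choices, not those used in the paper.

        import numpy as np
        from scipy.integrate import solve_ivp

        def hindmarsh_rose(t, state, I, r=0.006, s=4.0, x_rest=-1.6,
                           a=1.0, b=3.0, c=1.0, d=5.0):
            x, y, z = state
            dx = y - a * x**3 + b * x**2 - z + I
            dy = c - d * x**2 - y
            dz = r * (s * (x - x_rest) - z)
            return [dx, dy, dz]

        def interspike_intervals(I, t_end=1000.0):
            """Integrate the model and return intervals between upward threshold crossings."""
            sol = solve_ivp(hindmarsh_rose, (0.0, t_end), [-1.0, 0.0, 2.0], args=(I,),
                            max_step=0.05)
            x, t = sol.y[0], sol.t
            crossings = t[1:][(x[:-1] < 1.0) & (x[1:] >= 1.0)]   # spike threshold at x = 1
            return np.diff(crossings)

        for I in (1.5, 2.0, 2.5, 3.0, 3.2):
            isi = interspike_intervals(I)
            spread = round(float(np.std(isi)), 2) if isi.size > 1 else None
            print(I, isi.size, spread)   # number of spikes and interval spread per current level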

  20. Generalized Weierstrass-Mandelbrot Function Model for Actual Stocks Markets Indexes with Nonlinear Characteristics

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Yu, C.; Sun, J. Q.

    2015-03-01

    It is difficult to simulate the dynamical behavior of actual financial market indexes effectively, especially when they have nonlinear characteristics, so it is important to propose a mathematical model with these characteristics. In this paper, we first investigate a generalized Weierstrass-Mandelbrot function (WMF) model with two nonlinear characteristics: the fractal dimension D, where 1.5 < D < 2, and the Hurst exponent H, where 0.5 < H < 1. We then study the dynamical behavior of H for the WMF as D and the spectrum parameter of the time series, γ, change in three-dimensional space. Because the WMF and actual stock market indexes share two common features, fractal behavior (characterized by the fractal dimension) and a long-memory effect (characterized by the Hurst exponent), we study the relationship between the WMF and actual stock market indexes. We choose a random value of γ and a fixed value of D for the WMF to simulate the S&P 500 index over different time ranges. As shown by the simulation results in three-dimensional space, γ is important in the WMF model, and different values of γ may have the same effect on the nonlinearity of the WMF. We then calculate the skewness and kurtosis of the actual daily S&P 500 index over different time ranges, which can be used to choose the value of γ. Based on these results, we choose appropriate values of γ, D and the initial value for the WMF to simulate the daily S&P 500 index. Using the fit-line method in two-dimensional space for the simulated values, we find that the generalized WMF model is effective for simulating different actual stock market indexes over different time ranges. It may be useful for understanding the dynamical behavior of many different financial markets.
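
    A sketch of a real-valued Weierstrass-Mandelbrot series with random phases is given below; the truncation limits, the phase convention and the relation H = 2 - D are standard illustrative assumptions, and the paper's generalized form and its calibration to S&P 500 data are not reproduced here.

        import numpy as np

        def weierstrass_mandelbrot(t, D=1.7, gamma=1.5, n_min=-20, n_max=40, seed=0):
            """Real Weierstrass-Mandelbrot series with random phases.
            D is the fractal (graph) dimension, so H = 2 - D; gamma sets the frequency
            spacing; the sum is truncated to n_min..n_max for numerical evaluation."""
            rng = np.random.default_rng(seed)
            H = 2.0 - D
            w = np.zeros_like(t, dtype=float)
            for n in range(n_min, n_max + 1):
                phi = rng.uniform(0.0, 2.0 * np.pi)
                w += (np.cos(phi) - np.cos(gamma**n * t + phi)) / gamma**(n * H)
            return w

        t = np.linspace(0.0, 10.0, 2000)
        series = weierstrass_mandelbrot(t, D=1.7, gamma=1.5)
        print(round(float(series.min()), 3), round(float(series.max()), 3))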

  1. Left ventricular volume estimation in cardiac three-dimensional ultrasound: a semiautomatic border detection approach.

    PubMed

    van Stralen, Marijn; Bosch, Johan G; Voormolen, Marco M; van Burken, Gerard; Krenning, Boudewijn J; van Geuns, Robert-Jan M; Lancée, Charles T; de Jong, Nico; Reiber, Johan H C

    2005-10-01

    We propose a semiautomatic endocardial border detection method for three-dimensional (3D) time series of cardiac ultrasound (US) data based on pattern matching and dynamic programming, operating on two-dimensional (2D) slices of the 3D plus time data, for the estimation of full cycle left ventricular volume, with minimal user interaction. The presented method is generally applicable to 3D US data and evaluated on data acquired with the Fast Rotating Ultrasound (FRU-) Transducer, developed by Erasmus Medical Center (Rotterdam, the Netherlands), a conventional phased-array transducer, rotating at very high speed around its image axis. The detection is based on endocardial edge pattern matching using dynamic programming, which is constrained by a 3D plus time shape model. It is applied to an automatically selected subset of 2D images of the original data set, for typically 10 equidistant rotation angles and 16 cardiac phases (160 images). Initialization requires manually drawing four contours per patient. We evaluated this method on 14 patients against MRI end-diastolic (ED) and end-systolic (ES) volumes. The semiautomatic border detection approach shows good correlations with MRI ED/ES volumes (r = 0.938) and low interobserver variability (y = 1.005x - 16.7, r = 0.943) over full-cycle volume estimations. It shows a high consistency in tracking the user-defined initial borders over space and time. We show that the ease of the acquisition using the FRU-transducer and the semiautomatic endocardial border detection method together can provide a way to quickly estimate the left ventricular volume over the full cardiac cycle using little user interaction.

  2. Stereoscopic Imaging in Hypersonics Boundary Layers using Planar Laser-Induced Fluorescence

    NASA Technical Reports Server (NTRS)

    Danehy, Paul M.; Bathel, Brett; Inman, Jennifer A.; Alderfer, David W.; Jones, Stephen B.

    2008-01-01

    Stereoscopic time-resolved visualization of three-dimensional structures in a hypersonic flow has been performed for the first time. Nitric Oxide (NO) was seeded into hypersonic boundary layer flows that were designed to transition from laminar to turbulent. A thick laser sheet illuminated and excited the NO, causing spatially-varying fluorescence. Two cameras in a stereoscopic configuration were used to image the fluorescence. The images were processed in a computer visualization environment to provide stereoscopic image pairs. Two methods were used to display these image pairs: a cross-eyed viewing method which can be viewed by naked eyes, and red/blue anaglyphs, which require viewing through red/blue glasses. The images visualized three-dimensional information that would be lost if conventional planar laser-induced fluorescence imaging had been used. Two model configurations were studied in NASA Langley Research Center's 31-Inch Mach 10 Air Wind tunnel. One model was a 10 degree half-angle wedge containing a small protuberance to force the flow to transition. The other model was a 1/3-scale, truncated Hyper-X forebody model with blowing through a series of holes to force the boundary layer flow to transition to turbulence. In the former case, low flowrates of pure NO seeded and marked the boundary layer fluid. In the latter, a trace concentration of NO was seeded into the injected N2 gas. The three-dimensional visualizations have an effective time resolution of about 500 ns, which is fast enough to freeze this hypersonic flow. The 512x512 resolution of the resulting images is much higher than high-speed laser-sheet scanning systems with similar time response, which typically measure 10-20 planes.

  3. Fast, adaptive summation of point forces in the two-dimensional Poisson equation

    NASA Technical Reports Server (NTRS)

    Van Dommelen, Leon; Rundensteiner, Elke A.

    1989-01-01

    A comparatively simple procedure is presented for the direct summation of the velocity field introduced by point vortices which significantly reduces the required number of operations by replacing selected partial sums by asymptotic series. Tables are presented which demonstrate the speed of this algorithm in terms of the mere doubling of computational time in dealing with a doubling of the number of vortices; current methods involve a computational time extension by a factor of 4. This procedure need not be restricted to the solution of the Poisson equation, and may be applied to other problems involving groups of points in which the interaction between elements of different groups can be simplified when the distance between groups is sufficiently great.

  4. Three-dimensional pin-to-pin analyses of VVER-440 cores by the MOBY-DICK code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehmann, M.; Mikolas, P.

    1994-12-31

    Nuclear design for the Dukovany (EDU) VVER-440 nuclear power plant is routinely performed by the MOBY-DICK system. After its implementation on Hewlett Packard series 700 workstations, it can routinely perform three-dimensional pin-to-pin core analyses. For purposes of code validation, a benchmark prepared from EDU operational data was solved.

  5. The Impact of Three-Dimensional Computational Modeling on Student Understanding of Astronomy Concepts: A Qualitative Analysis. Research Report

    ERIC Educational Resources Information Center

    Hansen, John; Barnett, Michael; MaKinster, James; Keating, Thomas

    2004-01-01

    In this study, we explore an alternate mode for teaching and learning the dynamic, three-dimensional (3D) relationships that are central to understanding astronomical concepts. To this end, we implemented an innovative undergraduate course in which we used inexpensive computer modeling tools. As the second of a two-paper series, this report…

  6. Development and Assessment of a New 3D Neuroanatomy Teaching Tool for MRI Training

    ERIC Educational Resources Information Center

    Drapkin, Zachary A.; Lindgren, Kristen A.; Lopez, Michael J.; Stabio, Maureen E.

    2015-01-01

    A computerized three-dimensional (3D) neuroanatomy teaching tool was developed for training medical students to identify subcortical structures on a magnetic resonance imaging (MRI) series of the human brain. This program allows the user to transition rapidly between two-dimensional (2D) MRI slices, 3D object composites, and a combined model in…

  7. LSAT Dimensionality Analysis for the December 1991, June 1992, and October 1992 Administrations. Statistical Report. LSAC Research Report Series.

    ERIC Educational Resources Information Center

    Douglas, Jeff; Kim, Hae-Rim; Roussos, Louis; Stout, William; Zhang, Jinming

    An extensive nonparametric dimensionality analysis of latent structure was conducted on three forms of the Law School Admission Test (LSAT) (December 1991, June 1992, and October 1992) using the DIMTEST model in confirmatory analyses and using DIMTEST, FAC, DETECT, HCA, PROX, and a genetic algorithm in exploratory analyses. Results indicate that…

  8. [Extraction and recognition of attractors in three-dimensional Lorenz plot].

    PubMed

    Hu, Min; Jang, Chengfan; Wang, Suxia

    2018-02-01

    The Lorenz plot (LP) method, which gives a global view of long-time electrocardiogram signals, is an efficient and simple visualization tool for analyzing cardiac arrhythmias, and the morphologies and positions of the extracted attractors may reveal the underlying mechanisms of the onset and termination of arrhythmias. Automatic diagnosis has so far been impossible, however, because methods for extracting the attractors have been lacking. We present here a methodology for attractor extraction and recognition based upon homogeneous statistical properties of the location parameters of scatter points in a three-dimensional LP (3DLP), constructed from three successive RR intervals as the X, Y and Z axes of a Cartesian coordinate system. Validation experiments were performed on a group of RR-interval time series and tag data with frequent unifocal premature complexes exported from a 24-hour Holter system. The results showed that this method was highly effective not only for the extraction of attractors but also for their automatic recognition from location parameters such as the azimuth of the point of peak frequency (APF) of eccentric attractors after stereographic projection of the 3DLP along the space diagonal. Moreover, APF is a powerful index for the differential diagnosis of atrial and ventricular extrasystoles. Additional experiments showed that the method is also applicable to several other arrhythmias. There are, furthermore, close relationships between the 3DLP and two-dimensional LPs, which indicates that established results for conventional LPs can be carried over to the 3DLP. Integrating this method into conventional long-time electrocardiogram monitoring and analysis systems would have broad application prospects.
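
    A minimal sketch of constructing a three-dimensional Lorenz plot and projecting it along the space diagonal; the synthetic RR series with injected premature beats and the histogram-based peak azimuth are illustrative stand-ins for real Holter data and for the authors' location parameters such as APF.

        import numpy as np

        rng = np.random.default_rng(6)
        # Synthetic RR-interval series (ms) with occasional premature beats followed by
        # a compensatory pause, a crude stand-in for unifocal extrasystoles.
        rr = rng.normal(800.0, 25.0, 3000)
        ectopic = rng.random(rr.size) < 0.05
        rr[ectopic] *= 0.6
        idx = np.flatnonzero(ectopic)
        idx = idx[idx < rr.size - 1]
        rr[idx + 1] *= 1.3

        # 3D Lorenz plot: points (RR_i, RR_{i+1}, RR_{i+2}).
        pts = np.column_stack([rr[:-2], rr[1:-1], rr[2:]])

        # Project onto the plane perpendicular to the space diagonal (1,1,1)/sqrt(3)
        # and measure each point's azimuth in that plane.
        diag = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
        u = np.array([1.0, -1.0, 0.0]) / np.sqrt(2.0)
        v = np.cross(diag, u)
        centered = pts - pts.mean(axis=0)
        azimuth = np.degrees(np.arctan2(centered @ v, centered @ u)) % 360.0

        # The azimuth of the histogram peak is a simple analogue of the APF index.
        hist, edges = np.histogram(azimuth, bins=72)
        peak = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
        print("peak azimuth (deg):", round(float(peak), 1))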

  9. A new series of two-dimensional silicon crystals with versatile electronic properties

    NASA Astrophysics Data System (ADS)

    Chae, Kisung; Kim, Duck Young; Son, Young-Woo

    2018-04-01

    Silicon (Si) is one of the most extensively studied materials owing to its significance to semiconductor science and technology. While efforts to find a new three-dimensional (3D) Si crystal with unusual properties have made some progress, its two-dimensional (2D) phases have not yet been explored as much. Here, based on a newly developed systematic ab initio materials searching strategy, we report a series of novel 2D Si crystals with unprecedented structural and electronic properties. The new structures exhibit perfectly planar outermost surface layers of a distorted hexagonal network with their thicknesses varying with the atomic arrangement inside. Dramatic changes in electronic properties ranging from semimetal to semiconducting with indirect energy gaps and even to one with direct energy gaps are realized by varying thickness as well as by surface oxidation. Our predicted 2D Si crystals with flat surfaces and tunable electronic properties will shed light on the development of silicon-based 2D electronics technology.

  10. Real-time monitoring of the solution concentration variation during the crystallization process of protein-lysozyme by using digital holographic interferometry.

    PubMed

    Zhang, Yanyan; Zhao, Jianlin; Di, Jianglei; Jiang, Hongzhen; Wang, Qian; Wang, Jun; Guo, Yunzhu; Yin, Dachuan

    2012-07-30

    We report a real-time method for measuring the variation of solution concentration during the growth of protein (lysozyme) crystals based on digital holographic interferometry. A series of holograms containing the information on the solution concentration variation over the whole crystallization process is recorded by a CCD. Based on the principle of double-exposure holographic interferometry and the relationship between the phase difference of the reconstructed object wave and the solution concentration, the variation of solution concentration with time at an arbitrary point in the solution can be obtained, and the two-dimensional concentration distribution of the solution during the crystallization process can then be derived, under the precondition that the refractive index is constant along the light propagation direction. The experimental results show that this method is feasible for in situ, full-field, real-time monitoring of the crystal growth process.

  11. Finite element techniques in computational time series analysis of turbulent flows

    NASA Astrophysics Data System (ADS)

    Horenko, I.

    2009-04-01

    In recent years there has been a considerable increase of interest in the mathematical modeling and analysis of complex systems that undergo transitions between several phases or regimes. Such systems can be found, e.g., in weather forecasting (transitions between weather conditions), climate research (ice ages and warm ages), computational drug design (conformational transitions) and econometrics (e.g., transitions between different phases of the market). In all cases, the accumulation of sufficiently detailed time series has led to the formation of huge databases containing enormous but still undiscovered treasures of information. However, the extraction of the essential dynamics and the identification of the phases are usually hindered by the multidimensional nature of the signal, i.e., the information is "hidden" in the time series. Standard filtering approaches (e.g., wavelet-based spectral methods) generally have infeasible numerical complexity in high dimensions, while other standard methods (e.g., Kalman filters, MVAR, ARCH/GARCH) impose strong assumptions about the type of the underlying dynamics. An approach based on optimization of a specially constructed regularized functional (describing the quality of the data description in terms of a certain number of specified models) will be introduced. Based on this approach, several new adaptive mathematical methods for simultaneous EOF/SSA-like data-based dimension reduction and identification of hidden phases in high-dimensional time series will be presented. The methods exploit the topological structure of the analysed data and do not impose severe assumptions on the underlying dynamics. Special emphasis will be placed on the mathematical assumptions and the numerical cost of the constructed methods. The application of the presented methods will first be demonstrated on a toy example and the results compared with those obtained by standard approaches; the importance of accounting for the mathematical assumptions used in the analysis will be pointed out in this example. Finally, applications to the analysis of meteorological and climate data will be presented.
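
    The FEM-based framework itself is not reproduced here. As a deliberately crude stand-in for "simultaneous dimension reduction and identification of hidden phases", the sketch below projects a synthetic high-dimensional series onto its leading principal components and clusters short windows of the scores to label regimes; all data and parameter choices are invented for illustration.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(7)
        # Synthetic 20-dimensional series that switches between two hidden regimes.
        n, d = 1200, 20
        regime = (np.arange(n) // 300) % 2
        latent = np.where(regime == 0, np.sin(0.1 * np.arange(n)),
                          1.5 + rng.normal(0, 0.2, n))
        X = np.outer(latent, rng.normal(size=d)) + rng.normal(0, 0.5, (n, d))

        # Dimension reduction (EOF/PCA step), then k-means on short windows of the scores.
        scores = PCA(n_components=3).fit_transform(X)
        win = 20
        windows = np.array([scores[i:i + win].ravel() for i in range(0, n - win, win)])
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(windows)
        true = regime[np.arange(0, n - win, win)]
        agreement = max(np.mean(labels == true), np.mean(labels == 1 - true))
        print("regime labelling agreement:", round(float(agreement), 2))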

  12. Control and Measurement of an Xmon with the Quantum Socket

    NASA Astrophysics Data System (ADS)

    McConkey, T. G.; Bejanin, J. H.; Earnest, C. T.; McRae, C. R. H.; Rinehart, J. R.; Weides, M.; Mariantoni, M.

    The implementation of superconducting quantum processors is rapidly reaching scalability limitations. Extensible electronics and wiring solutions for superconducting quantum bits (qubits) are among the most pressing issues to be tackled. The necessity to substitute planar electrical interconnects (e.g., wire bonds) with three-dimensional wires is emerging as a fundamental pillar towards scalability. In a previous work, we have shown that three-dimensional wires housed in a suitable package, named the quantum socket, can be utilized to measure high-quality superconducting resonators. In this work, we set out to test the quantum socket with actual superconducting qubits to verify its suitability as a wiring solution in the development of an extensible quantum computing architecture. To this end, we have designed and fabricated a series of Xmon qubits. The qubits range in frequency from about 6 to 7 GHz with an anharmonicity of 200 MHz and can be tuned by means of Z pulses. Controlling tunable Xmons will allow us to verify whether the contact resistance of the three-dimensional wires is low enough for qubit operation. Qubit T1 and T2 times and single qubit gate fidelities are compared against current standards in the field.

  13. Modeling the basin of attraction as a two-dimensional manifold from experimental data: Applications to balance in humans

    NASA Astrophysics Data System (ADS)

    Zakynthinaki, Maria S.; Stirling, James R.; Cordente Martínez, Carlos A.; Díaz de Durana, Alfonso López; Quintana, Manuel Sillero; Romo, Gabriel Rodríguez; Molinuevo, Javier Sampedro

    2010-03-01

    We present a method of modeling the basin of attraction as a three-dimensional function describing a two-dimensional manifold on which the dynamics of the system evolves from experimental time series data. Our method is based on the density of the data set and uses numerical optimization and data modeling tools. We also show how to obtain analytic curves that describe both the contours and the boundary of the basin. Our method is applied to the problem of regaining balance after perturbation from quiet vertical stance using data of an elite athlete. Our method goes beyond the statistical description of the experimental data, providing a function that describes the shape of the basin of attraction. To test its robustness, our method has also been applied to two different data sets of a second subject and no significant differences were found between the contours of the calculated basin of attraction for the different data sets. The proposed method has many uses in a wide variety of areas, not just human balance for which there are many applications in medicine, rehabilitation, and sport.

  14. Simple Models of the Spatial Distribution of Cloud Radiative Properties for Remote Sensing Studies

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This project aimed to assess the degree to which estimates of three-dimensional cloud structure can be inferred from a time series of profiles obtained at a point. The work was motivated by the desire to understand the extent to which high-frequency profiles of the atmosphere (e.g. ARM data streams) can be used to assess the magnitude of non-plane-parallel transfer of radiation in the atmosphere. We accomplished this by performing an observing system simulation using a large-eddy simulation and a Monte Carlo radiative transfer model. We define the 3D effect as the part of the radiative transfer that is not captured by one-dimensional radiative transfer calculations. We assess the magnitude of the 3D effect in small cumulus clouds by using a fine-scale cloud model to simulate many hours of cloudiness over a continental site. We then use a Monte Carlo radiative transfer model to compute the broadband shortwave fluxes at the surface twice, once using the complete three-dimensional radiative transfer, F^3D, and once using the independent column approximation, F^ICA; the difference between them gives the 3D effect.

  15. Multidimensional brain activity dictated by winner-take-all mechanisms.

    PubMed

    Tozzi, Arturo; Peters, James F

    2018-06-21

    A novel demon-based architecture is introduced to elucidate brain functions such as pattern recognition during human perception and mental interpretation of visual scenes. Starting from the topological concepts of invariance and persistence, we introduce a Selfridge pandemonium variant of brain activity that takes into account a novel feature, namely, demons that recognize short straight-line segments, curved lines and scene shapes, such as shape interior, density and texture. Low-level representations of objects can be mapped to higher-level views (our mental interpretations): a series of transformations can be gradually applied to a pattern in a visual scene, without affecting its invariant properties. This makes it possible to construct a symbolic multi-dimensional representation of the environment. These representations can be projected continuously to an object that we have seen and continue to see, thanks to the mapping from shapes in our memory to shapes in Euclidean space. Although perceived shapes are 3-dimensional (plus time), the evaluation of shape features (volume, color, contour, closeness, texture, and so on) leads to n-dimensional brain landscapes. Here we discuss the advantages of our parallel, hierarchical model in pattern recognition, computer vision and biological nervous system's evolution. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. A new two-dimensional theory for vibrations of piezoelectric crystal plates with electroded faces

    NASA Astrophysics Data System (ADS)

    Lee, P. C. Y.; Yu, J. D.; Lin, W. S.

    1998-02-01

    A system of two-dimensional (2-D) governing equations for piezoelectric plates with general crystal symmetry and with electroded faces is deduced from the three-dimensional (3-D) equations of linear piezoelectricity by expansion in series of trigonometric functions of thickness coordinate. The essential difference of the present derivation from the earlier studies by trigonometrical series expansion is that the antisymmetric in-plane displacements induced by gradients of the bending deflection (the zero-order component of transverse displacement) are expressed by the linear functions of the thickness coordinate, and the rest of displacements are expanded in cosine series of the thickness coordinate. For the electric potential, a sine-series expansion is used for it is well suited for satisfying the electrical conditions at the faces covered with conductive electrodes. A system of approximate first-order equations is extracted from the infinite system of 2-D equations. Dispersion curves for thickness shear, flexure, and face-shear modes varying along x1 and those for thickness twist and face shear varying along x3 for AT-cut quartz plates are calculated from the present 2-D equations as well as from the 3-D equations, and comparison shows that the agreement is very close without introducing any corrections. Predicted frequency spectra by the present equations are shown to agree closely with the experimental data by Koga and Fukuyo [J. Inst. Elec. Comm. Engrs. of Japan 36, 59 (1953)] and those by Nakazawa, Horiuchi, and Ito [Proceedings of 1990 IEEE Ultrasonics Symposium (IEEE, New York, 1990)].

  17. Proteus two-dimensional Navier-Stokes computer code, version 2.0. Volume 1: Analysis description

    NASA Technical Reports Server (NTRS)

    Towne, Charles E.; Schwab, John R.; Bui, Trong T.

    1993-01-01

    A computer code called Proteus 2D was developed to solve the two-dimensional planar or axisymmetric, Reynolds-averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The objective in this effort was to develop a code for aerospace propulsion applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The governing equations are solved in generalized nonorthogonal body-fitted coordinates, by marching in time using a fully-coupled ADI solution procedure. The boundary conditions are treated implicitly. All terms, including the diffusion terms, are linearized using second-order Taylor series expansions. Turbulence is modeled using either an algebraic or two-equation eddy viscosity model. The thin-layer or Euler equations may also be solved. The energy equation may be eliminated by the assumption of constant total enthalpy. Explicit and implicit artificial viscosity may be used. Several time step options are available for convergence acceleration. The documentation is divided into three volumes. This is the Analysis Description, and presents the equations and solution procedure. The governing equations, the turbulence model, the linearization of the equations and boundary conditions, the time and space differencing formulas, the ADI solution procedure, and the artificial viscosity models are described in detail.

  18. Proteus three-dimensional Navier-Stokes computer code, version 1.0. Volume 1: Analysis description

    NASA Technical Reports Server (NTRS)

    Towne, Charles E.; Schwab, John R.; Bui, Trong T.

    1993-01-01

    A computer code called Proteus 3D has been developed to solve the three dimensional, Reynolds averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The objective in this effort has been to develop a code for aerospace propulsion applications that is easy to use and easy to modify. Code readability, modularity, and documentation have been emphasized. The governing equations are solved in generalized non-orthogonal body-fitted coordinates by marching in time using a fully-coupled ADI solution procedure. The boundary conditions are treated implicitly. All terms, including the diffusion terms, are linearized using second-order Taylor series expansions. Turbulence is modeled using either an algebraic or two-equation eddy viscosity model. The thin-layer or Euler equations may also be solved. The energy equation may be eliminated by the assumption of constant total enthalpy. Explicit and implicit artificial viscosity may be used. Several time step options are available for convergence acceleration. The documentation is divided into three volumes. This is the Analysis Description, and presents the equations and solution procedure. It describes in detail the governing equations, the turbulence model, the linearization of the equations and boundary conditions, the time and space differencing formulas, the ADI solution procedure, and the artificial viscosity models.

  19. Preliminary Results for the OECD/NEA Time Dependent Benchmark using Rattlesnake, Rattlesnake-IQS and TDKENO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeHart, Mark D.; Mausolff, Zander; Weems, Zach

    2016-08-01

    One goal of the MAMMOTH M&S project is to validate the analysis capabilities within MAMMOTH. Historical data has shown limited value for validation of full three-dimensional (3D) multi-physics methods. Initial analysis considered the TREAT startup minimum critical core and one of the startup transient tests. At present, validation is focusing on measurements taken during the M8CAL test calibration series. These exercises will be valuable for a preliminary assessment of the ability of MAMMOTH to perform coupled multi-physics calculations; calculations performed to date are being used to validate the neutron transport solver Rattlesnake and the fuels performance code BISON. Other validation projects outside of TREAT are available for single-physics benchmarking. Because the transient solution capability of Rattlesnake is one of the key attributes that makes it unique for TREAT transient simulations, validation of the transient solution of Rattlesnake using other time-dependent kinetics benchmarks has considerable value. The Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) has recently developed a computational benchmark for transient simulations. This benchmark considered both two-dimensional (2D) and 3D configurations for a total of 26 different transients. All are negative reactivity insertions, typically returning to the critical state after some time.

  20. Graphite Ablation and Thermal Response Simulation Under Arc-Jet Flow Conditions

    NASA Technical Reports Server (NTRS)

    Chen, Y.-K.; Milos, F. S.; Reda, D. C.; Stewart, D. A.; Venkatapathy, Ethiraj (Technical Monitor)

    2002-01-01

    The Two-dimensional Implicit Thermal Response and Ablation program, TITAN, was developed and integrated with a Navier-Stokes solver, GIANTS, for multidimensional ablation and shape change simulation of thermal protection systems in hypersonic flow environments. The governing equations in both codes are discretized using the same finite-volume approximation with a general body-fitted coordinate system. Time-dependent solutions are achieved by an implicit time marching technique using Gauss-Seidel line relaxation with alternating sweeps. As the first part of a code validation study, this paper compares TITAN-GIANTS predictions with thermal response and recession data obtained from arc-jet tests recently conducted in the Interaction Heating Facility (IHF) at NASA Ames Research Center. The test models are graphite sphere-cones. Graphite was selected as a test material to minimize the uncertainties from material properties. Recession and thermal response data were obtained from two separate arc-jet test series. The first series was at a heat flux where graphite ablation is mainly due to sublimation, and the second series was at a relatively low heat flux where recession is the result of diffusion-controlled oxidation. Ablation and thermal response solutions for both sets of conditions, as calculated by TITAN-GIANTS, are presented and discussed in detail. Predicted shape change and temperature histories generally agree well with the data obtained from the arc-jet tests.

  1. An advanced analysis and modelling the air pollutant concentration temporal dynamics in atmosphere of the industrial cities: Odessa city

    NASA Astrophysics Data System (ADS)

    Buyadzhi, V. V.; Glushkov, A. V.; Khetselius, O. Yu; Ternovsky, V. B.; Serga, I. N.; Bykowszczenko, N.

    2017-10-01

    Results of analyzing and modelling the temporal dynamics of the air pollutant (nitrogen dioxide) concentration in the atmosphere of the industrial city of Odessa are presented for the first time, based on computations using nonlinear methods from chaos and dynamical systems theory. Chaotic behaviour is discovered and investigated. To reconstruct the corresponding strange chaotic attractor, the time delay and the embedding dimension are computed. The former is determined by the autocorrelation function and average mutual information methods, and the latter by the correlation dimension method and the false nearest neighbours algorithm. It is shown that low-dimensional chaos exists in the nitrogen dioxide concentration time series under investigation. Further, the spectrum of Lyapunov exponents, the Kaplan-Yorke dimension and the Kolmogorov entropy are computed.
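
    The delay and embedding-dimension estimates mentioned above can be illustrated with standard recipes: choose the delay from the autocorrelation function and the embedding dimension from a false-nearest-neighbour criterion. The synthetic series below stands in for the NO2 record, and the thresholds are conventional illustrative choices rather than the values used in the study.

        import numpy as np

        rng = np.random.default_rng(8)
        # Stand-in for an hourly NO2 concentration record (no real monitoring data used).
        t = np.arange(5000)
        x = (40 + 10 * np.sin(2 * np.pi * t / 24)
             + 5 * np.sin(2 * np.pi * t / (24 * 7)) + rng.normal(0, 2, t.size))

        def autocorr_delay(series, max_lag=100):
            """First lag where the autocorrelation drops below 1/e (a common delay choice)."""
            s = series - series.mean()
            for lag in range(1, max_lag):
                r = np.corrcoef(s[:-lag], s[lag:])[0, 1]
                if r < 1.0 / np.e:
                    return lag
            return max_lag

        def false_nearest_fraction(series, delay, dim, rtol=10.0):
            """Fraction of nearest neighbours in dimension `dim` that separate strongly
            when a (dim+1)-th delayed coordinate is added."""
            n = series.size - (dim + 1) * delay
            emb = np.column_stack([series[i * delay:i * delay + n] for i in range(dim)])
            emb1 = np.column_stack([series[i * delay:i * delay + n] for i in range(dim + 1)])
            false = 0
            sample = range(0, n, 10)                       # subsample for speed
            for i in sample:
                d = np.linalg.norm(emb - emb[i], axis=1)
                d[i] = np.inf
                j = int(np.argmin(d))
                if np.linalg.norm(emb1[i] - emb1[j]) / d[j] > rtol:
                    false += 1
            return false / len(sample)

        tau = autocorr_delay(x)
        for m in (1, 2, 3, 4, 5):
            print(m, round(false_nearest_fraction(x, tau, m), 3))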

  2. 4D electron tomography.

    PubMed

    Kwon, Oh-Hoon; Zewail, Ahmed H

    2010-06-25

    Electron tomography provides three-dimensional (3D) imaging of noncrystalline and crystalline equilibrium structures, as well as elemental volume composition, of materials and biological specimens, including those of viruses and cells. We report the development of 4D electron tomography by integrating the fourth dimension (time resolution) with the 3D spatial resolution obtained from a complete tilt series of 2D projections of an object. The different time frames of tomograms constitute a movie of the object in motion, thus enabling studies of nonequilibrium structures and transient processes. The method was demonstrated using carbon nanotubes of a bracelet-like ring structure for which 4D tomograms display different modes of motion, such as breathing and wiggling, with resonance frequencies up to 30 megahertz. Applications can now make use of the full space-time range with the nanometer-femtosecond resolution of ultrafast electron tomography.

  3. 4D Electron Tomography

    NASA Astrophysics Data System (ADS)

    Kwon, Oh-Hoon; Zewail, Ahmed H.

    2010-06-01

    Electron tomography provides three-dimensional (3D) imaging of noncrystalline and crystalline equilibrium structures, as well as elemental volume composition, of materials and biological specimens, including those of viruses and cells. We report the development of 4D electron tomography by integrating the fourth dimension (time resolution) with the 3D spatial resolution obtained from a complete tilt series of 2D projections of an object. The different time frames of tomograms constitute a movie of the object in motion, thus enabling studies of nonequilibrium structures and transient processes. The method was demonstrated using carbon nanotubes of a bracelet-like ring structure for which 4D tomograms display different modes of motion, such as breathing and wiggling, with resonance frequencies up to 30 megahertz. Applications can now make use of the full space-time range with the nanometer-femtosecond resolution of ultrafast electron tomography.

  4. Electronic structure of disordered CuPd alloys: A two-dimensional positron-annihilation study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smedskjaer, L.C.; Benedek, R.; Siegel, R.W.

    1987-11-23

    Two-dimensional angular-correlation experiments using positron-annihilation spectroscopy were performed on a series of disordered Cu-rich CuPd-alloy single crystals. The results are compared with theoretical calculations based on the Korringa-Kohn-Rostoker coherent-potential approximation. Our experiments confirm the theoretically predicted flattening of the alloy Fermi surface near (110) with increasing Pd concentration. The momentum densities and the two-dimensional angular-correlation spectra around zero momentum exhibit a characteristic signature of the electronic states near the valence-band edge in the alloy.

  5. Diffraction mode terahertz tomography

    DOEpatents

    Ferguson, Bradley; Wang, Shaohong; Zhang, Xi-Cheng

    2006-10-31

    A method of obtaining a series of images of a three-dimensional object. The method includes the steps of transmitting pulsed terahertz (THz) radiation through the entire object from a plurality of angles, optically detecting changes in the transmitted THz radiation using pulsed laser radiation, and constructing a plurality of imaged slices of the three-dimensional object using the detected changes in the transmitted THz radiation. The THz radiation is transmitted through the object as a two-dimensional array of parallel rays. The optical detection is an array of detectors such as a CCD sensor.

  6. Exact Fourier expansion in cylindrical coordinates for the three-dimensional Helmholtz Green function

    NASA Astrophysics Data System (ADS)

    Conway, John T.; Cohl, Howard S.

    2010-06-01

    A new method is presented for Fourier decomposition of the Helmholtz Green function in cylindrical coordinates, which is equivalent to obtaining the solution of the Helmholtz equation for a general ring source. The Fourier coefficients of the Green function are split into their half-advanced plus half-retarded and half-advanced minus half-retarded components, and closed form solutions for these components are then obtained in terms of a Horn function and a Kampé de Fériet function respectively. Series solutions for the Fourier coefficients are given in terms of associated Legendre functions, Bessel and Hankel functions and a hypergeometric function. These series are derived either from the closed form 2-dimensional hypergeometric solutions or from an integral representation, or from both. A simple closed form far-field solution for the general Fourier coefficient is derived from the Hankel series. Numerical calculations comparing different methods of calculating the Fourier coefficients are presented. Fourth order ordinary differential equations for the Fourier coefficients are also given and discussed briefly.

  7. Adults' acquisition of novel dimension words: creating a semantic congruity effect.

    PubMed

    Ryalls, B O; Smith, L B

    2000-07-01

    The semantic congruity effect is exhibited when adults are asked to compare pairs of items from a series, and their response is faster when the direction of the comparison coincides with the location of the stimuli in the series. For example, people are faster at picking the bigger of 2 big items than the littler of 2 big items. In the 4 experiments presented, adults were taught new dimensional adjectives (mal/ler and borg/er). Characteristics of the learning situation, such as the nature of the stimulus series and the relative frequency of labeling, were varied. Results revealed that the participants who learned the relative meaning of the artificial dimensional adjectives also formed categories and developed a semantic congruity effect regardless of the characteristics of training. These findings have important implications for our understanding of adult acquisition of novel relational words, the relationship between learning such words and categorization, and the explanations of the semantic congruity effect.

  8. Local Modelling of Groundwater Flow Using the Analytic Element Method: Three-dimensional Transient Unconfined Groundwater Flow With Partially Penetrating Wells and Ellipsoidal Inhomogeneities

    NASA Astrophysics Data System (ADS)

    Jankovic, I.; Barnes, R. J.; Soule, R.

    2001-12-01

    The analytic element method is used to model local three-dimensional flow in the vicinity of partially penetrating wells. The flow domain is bounded by an impermeable horizontal base, a phreatic surface with recharge and a cylindrical lateral boundary. The analytic element solution for this problem contains (1) a fictitious source technique to satisfy the head and the discharge conditions along the phreatic surface, (2) a fictitious source technique to satisfy specified head conditions along the cylindrical boundary, (3) a method of imaging to satisfy the no-flow condition across the impermeable base, (4) the classical analytic solution for a well and (5) spheroidal harmonics to account for the influence of the inhomogeneities in hydraulic conductivity. Temporal variations of the flow system due to time-dependent recharge and pumping are represented by combining the analytic element method with a finite difference method: analytic element method is used to represent spatial changes in head and discharge, while the finite difference method represents temporal variations. The solution provides a very detailed description of local groundwater flow with an arbitrary number of wells of any orientation and an arbitrary number of ellipsoidal inhomogeneities of any size and conductivity. These inhomogeneities may be used to model local hydrogeologic features (such as gravel packs and clay lenses) that significantly influence the flow in the vicinity of partially penetrating wells. Several options for specifying head values along the lateral domain boundary are available. These options allow for inclusion of the model into steady and transient regional groundwater models. The head values along the lateral domain boundary may be specified directly (as time series). The head values along the lateral boundary may also be assigned by specifying the water-table gradient and a head value at a single point (as time series). A case study is included to demonstrate the application of the model in local modeling of the groundwater flow. Transient three-dimensional capture zones are delineated for a site on Prairie Island, MN. Prairie Island is located on the Mississippi River 40 miles south of the Twin Cities metropolitan area. The case study focuses on a well that has been known to contain viral DNA. The objective of the study was to assess the potential for pathogen migration toward the well.

  9. Probabilistic Solar Wind Forecasting Using Large Ensembles of Near-Sun Conditions With a Simple One-Dimensional "Upwind" Scheme

    NASA Astrophysics Data System (ADS)

    Owens, Mathew J.; Riley, Pete

    2017-11-01

    Long lead-time space-weather forecasting requires accurate prediction of the near-Earth solar wind. The current state of the art uses a coronal model to extrapolate the observed photospheric magnetic field to the upper corona, where it is related to solar wind speed through empirical relations. These near-Sun solar wind and magnetic field conditions provide the inner boundary condition to three-dimensional numerical magnetohydrodynamic (MHD) models of the heliosphere out to 1 AU. This physics-based approach can capture dynamic processes within the solar wind, which affect the resulting conditions in near-Earth space. However, this deterministic approach lacks a quantification of forecast uncertainty. Here we describe a complementary method to exploit the near-Sun solar wind information produced by coronal models and provide a quantitative estimate of forecast uncertainty. By sampling the near-Sun solar wind speed at a range of latitudes about the sub-Earth point, we produce a large ensemble (N = 576) of time series at the base of the Sun-Earth line. Propagating these conditions to Earth by a three-dimensional MHD model would be computationally prohibitive; thus, a computationally efficient one-dimensional "upwind" scheme is used. The variance in the resulting near-Earth solar wind speed ensemble is shown to provide an accurate measure of the forecast uncertainty. Applying this technique over 1996-2016, the upwind ensemble is found to provide a more "actionable" forecast than a single deterministic forecast; potential economic value is increased for all operational scenarios, but particularly when false alarms are important (i.e., where the cost of taking mitigating action is relatively large).
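
    As a rough illustration of the propagation step, the Python sketch below applies a generic first-order upwind discretization of dV/dt + V dV/dr = 0 to carry an inner-boundary speed time series out to 1 AU; the grid, variable names, and the omission of any residual acceleration term are assumptions for illustration, not the authors' implementation. The ensemble spread at the outer radius then plays the role of the forecast uncertainty.

        import numpy as np

        def upwind_solar_wind(v_boundary, r_km, dt_s):
            """Carry a solar wind speed time series v_boundary(t) [km/s], given at
            r_km[0], out to r_km[-1] with a first-order upwind step for
            dV/dt + V dV/dr = 0 (stable for V * dt_s / min(dr) <= 1)."""
            nt, nr = len(v_boundary), len(r_km)
            dr = np.diff(r_km)
            v = np.full((nt, nr), v_boundary[0], dtype=float)  # crude initial state (spin-up period)
            v[:, 0] = v_boundary                               # time-dependent inner boundary
            for n in range(nt - 1):
                v[n + 1, 1:] = v[n, 1:] - dt_s * v[n, 1:] * (v[n, 1:] - v[n, :-1]) / dr
                v[n + 1, 0] = v_boundary[n + 1]
            return v                                           # v[:, -1] is the near-Earth series

        # Ensemble use (sketch): propagate each member and take the spread at 1 AU.
        # members = [upwind_solar_wind(vb, r_km, dt_s)[:, -1] for vb in boundary_ensemble]
        # forecast, spread = np.mean(members, axis=0), np.std(members, axis=0)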

  10. Probabilistic Solar Wind Forecasting Using Large Ensembles of Near-Sun Conditions With a Simple One-Dimensional "Upwind" Scheme.

    PubMed

    Owens, Mathew J; Riley, Pete

    2017-11-01

    Long lead-time space-weather forecasting requires accurate prediction of the near-Earth solar wind. The current state of the art uses a coronal model to extrapolate the observed photospheric magnetic field to the upper corona, where it is related to solar wind speed through empirical relations. These near-Sun solar wind and magnetic field conditions provide the inner boundary condition to three-dimensional numerical magnetohydrodynamic (MHD) models of the heliosphere out to 1 AU. This physics-based approach can capture dynamic processes within the solar wind, which affect the resulting conditions in near-Earth space. However, this deterministic approach lacks a quantification of forecast uncertainty. Here we describe a complementary method to exploit the near-Sun solar wind information produced by coronal models and provide a quantitative estimate of forecast uncertainty. By sampling the near-Sun solar wind speed at a range of latitudes about the sub-Earth point, we produce a large ensemble (N = 576) of time series at the base of the Sun-Earth line. Propagating these conditions to Earth by a three-dimensional MHD model would be computationally prohibitive; thus, a computationally efficient one-dimensional "upwind" scheme is used. The variance in the resulting near-Earth solar wind speed ensemble is shown to provide an accurate measure of the forecast uncertainty. Applying this technique over 1996-2016, the upwind ensemble is found to provide a more "actionable" forecast than a single deterministic forecast; potential economic value is increased for all operational scenarios, but particularly when false alarms are important (i.e., where the cost of taking mitigating action is relatively large).

  11. Probabilistic Solar Wind Forecasting Using Large Ensembles of Near‐Sun Conditions With a Simple One‐Dimensional “Upwind” Scheme

    PubMed Central

    Riley, Pete

    2017-01-01

    Abstract Long lead‐time space‐weather forecasting requires accurate prediction of the near‐Earth solar wind. The current state of the art uses a coronal model to extrapolate the observed photospheric magnetic field to the upper corona, where it is related to solar wind speed through empirical relations. These near‐Sun solar wind and magnetic field conditions provide the inner boundary condition to three‐dimensional numerical magnetohydrodynamic (MHD) models of the heliosphere out to 1 AU. This physics‐based approach can capture dynamic processes within the solar wind, which affect the resulting conditions in near‐Earth space. However, this deterministic approach lacks a quantification of forecast uncertainty. Here we describe a complementary method to exploit the near‐Sun solar wind information produced by coronal models and provide a quantitative estimate of forecast uncertainty. By sampling the near‐Sun solar wind speed at a range of latitudes about the sub‐Earth point, we produce a large ensemble (N = 576) of time series at the base of the Sun‐Earth line. Propagating these conditions to Earth by a three‐dimensional MHD model would be computationally prohibitive; thus, a computationally efficient one‐dimensional “upwind” scheme is used. The variance in the resulting near‐Earth solar wind speed ensemble is shown to provide an accurate measure of the forecast uncertainty. Applying this technique over 1996–2016, the upwind ensemble is found to provide a more “actionable” forecast than a single deterministic forecast; potential economic value is increased for all operational scenarios, but particularly when false alarms are important (i.e., where the cost of taking mitigating action is relatively large). PMID:29398982

  12. The possible equilibrium shapes of static pendant drops

    NASA Astrophysics Data System (ADS)

    Sumesh, P. T.; Govindarajan, Rama

    2010-10-01

    Analytical and numerical studies are carried out on the shapes of two-dimensional and axisymmetric pendant drops hanging under gravity from a solid surface. Drop shapes with both pinned and equilibrium contact angles are obtained naturally from a single boundary condition in the analytical energy optimization procedure. The numerical procedure also yields optimum energy shapes, satisfying Young's equation without the explicit imposition of a boundary condition at the plate. It is shown analytically that a static pendant two-dimensional drop can never be longer than 3.42 times the capillary length. A related finding is that a range of existing solutions for long two-dimensional drops correspond to unphysical drop shapes. Therefore, two-dimensional drops of small volume display only one static solution. In contrast, it is known that axisymmetric drops can display multiple solutions for a given volume. We demonstrate numerically that there is no limit to the height of multiple-lobed Kelvin drops, but the total volume is finite, with the volume of successive lobes forming a convergent series. The stability of such drops is in question, though. Drops of small volume can attain large heights. A bifurcation is found within the one-parameter space of Laplacian shapes, with a range of longer drops displaying a minimum in energy in the investigated space. Axisymmetric Kelvin drops exhibit an infinite number of bifurcations.
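
    For reference, the capillary length that sets the quoted 3.42 upper bound is the standard ratio of surface tension to gravity (general background, not restated from the abstract):

        \ell_c = \sqrt{\frac{\gamma}{\rho g}}

    so the result states that a static two-dimensional pendant drop cannot be longer than about 3.42 \ell_c, with \gamma the surface tension, \rho the liquid density and g the gravitational acceleration.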

  13. Dynamic equations for an isotropic spherical shell using the power series method and surface differential operators

    NASA Astrophysics Data System (ADS)

    Okhovat, Reza; Boström, Anders

    2017-04-01

    Dynamic equations for an isotropic spherical shell are derived by using a series expansion technique. The displacement field is split into a scalar (radial) part and a vector (tangential) part. Surface differential operators are introduced to decrease the length of all equations. The starting point is a power series expansion of the displacement components in the thickness coordinate relative to the mid-surface of the shell. By using the expansions of the displacement components, the three-dimensional elastodynamic equations yield a set of recursion relations among the expansion functions that can be used to eliminate all but the four of lowest order and to express higher order expansion functions in terms of those of lowest orders. Applying the boundary conditions on the surfaces of the spherical shell and eliminating all but the four lowest order expansion functions give the shell equations as a power series in the shell thickness. After lengthy manipulations, the final four shell equations are obtained in a relatively compact form which are given to second order in shell thickness explicitly. The eigenfrequencies are compared to exact three-dimensional theory with excellent agreement and to membrane theory.
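
    A minimal sketch of the expansion underlying the recursion relations (generic form; the paper's exact notation is not reproduced here): each displacement component is expanded in powers of the thickness coordinate \xi measured from the mid-surface,

        u_i(\xi, \theta, \varphi, t) = \sum_{k=0}^{\infty} \xi^{k} \, u_{i,k}(\theta, \varphi, t), \qquad -h/2 \le \xi \le h/2

    Inserting this into the three-dimensional elastodynamic equations and collecting powers of \xi gives the recursion relations among the u_{i,k}; the surface boundary conditions then close the system in the four lowest-order expansion functions.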

  14. Analysis of temperature time series to estimate direction and magnitude of water fluxes in near-surface sediments

    NASA Astrophysics Data System (ADS)

    Munz, Matthias; Oswald, Sascha E.; Schmidt, Christian

    2017-04-01

    The application of heat as a hydrological tracer has become a standard method for quantifying water fluxes between groundwater and surface water. Typically, time series of temperatures in the surface water and in the sediment are observed and are subsequently evaluated by a vertical 1D representation of heat transport by advection and dispersion. Several analytical solutions, as well as their implementation into user-friendly software, exist to estimate water fluxes from the observed temperatures. The underlying assumption of a stationary, one-dimensional vertical flow field is frequently violated in natural systems, where subsurface water flow often has a significant horizontal component. We developed a methodology for identifying the geometry of the subsurface flow field based on the variation of diurnal temperature amplitudes with depth. For instance, purely vertical heat transport is characterized by an exponential decline of temperature amplitudes with increasing depth, whereas purely horizontal flow would be indicated by a constant, depth-independent amplitude profile. The decline of temperature amplitudes with depth was fitted by polynomials of different order, with the best fit selected by the Akaike Information Criterion. This stepwise model optimization and selection, evaluating the shape of the vertical amplitude-ratio profiles, was used to determine the predominant subsurface flow field, which could be systematically categorized into purely vertical and horizontal (hyporheic, parafluvial) components. Analytical solutions for estimating water fluxes from the observed temperatures are restricted to specific boundary conditions, such as a sinusoidal upper temperature boundary. In contrast, numerical solutions offer higher flexibility and can handle temperature data characterized by irregular variations, such as storm-event-induced temperature changes, that cannot readily be incorporated in analytical solutions. There are several numerical models that simulate heat transport in porous media (e.g. VS2DH, HydroGeoSphere, FEFLOW), but these modelling frameworks can have a steep learning curve and may therefore not be readily accessible for routinely inferring water fluxes between groundwater and surface water. We developed a user-friendly, straightforward software package to estimate water FLUXes Based On Temperatures: FLUX-BOT. FLUX-BOT is a numerical code written in MATLAB that calculates time-variable vertical water fluxes in saturated sediments based on the inversion of measured temperature time series observed at multiple depths. It applies a cell-centered Crank-Nicolson implicit finite difference scheme to solve the one-dimensional heat advection-conduction equation (FLUX-BOT can be downloaded from the following web site: https://bitbucket.org/flux-bot/flux-bot). We provide applications of FLUX-BOT to generic as well as to measured temperature data to demonstrate its performance. Both the empirical analysis of temperature amplitudes and the numerical inversion of measured temperature time series extend the suite of current heat-tracing methods and may provide insight into temperature data from an additional perspective.
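
    A small Python sketch of the amplitude diagnostic described above (the single-sinusoid fit and the ln-amplitude slope are illustrative simplifications, not the exact workflow of the study): purely vertical transport should give an approximately linear decline of ln A with depth, whereas a flat profile points to horizontal-flow dominance.

        import numpy as np

        def diurnal_amplitude(temp, dt_hours, period_hours=24.0):
            """Amplitude of the diurnal component, via least-squares fit of one sinusoid."""
            t = np.arange(len(temp)) * dt_hours
            w = 2.0 * np.pi / period_hours
            X = np.column_stack([np.ones_like(t), np.sin(w * t), np.cos(w * t)])
            _, s, c = np.linalg.lstsq(X, temp, rcond=None)[0]
            return np.hypot(s, c)

        def amplitude_profile(temps_by_depth, depths_m, dt_hours):
            """Amplitude-depth profile and the slope of ln(A) versus depth."""
            A = np.array([diurnal_amplitude(T, dt_hours) for T in temps_by_depth])
            slope, _ = np.polyfit(depths_m, np.log(A), 1)
            return A, slope   # strongly negative slope -> vertical advection/conduction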

  15. Land science with Sentinel-2 and Sentinel-3 data series synergy

    NASA Astrophysics Data System (ADS)

    Moreno, Jose; Guanter, Luis; Alonso, Luis; Gomez, Luis; Amoros, Julia; Camps, Gustavo; Delegido, Jesus

    2010-05-01

    Although the GMES/Sentinel satellite series were primarily designed to provide observations for operational services and routine applications, there is a growing interest in the scientific community in using Sentinel data for more advanced and innovative science. Apart from the improved spatial and spectral capabilities, the availability of consistent time series covering a period of over 20 years opens possibilities never explored before, such as systematic data assimilation approaches exploiting the time-series concept, or the incorporation into the modelling approaches of processes covering time scales from weeks to decades. Sentinel-3 will provide continuity to current ENVISAT MERIS/AATSR capabilities. The results already derived from MERIS/AATSR will be more systematically exploited by using OLCI in synergy with SLSTR. Particularly innovative is the case of Sentinel-2, which is specifically designed for land applications. Built on a constellation of two satellites operating simultaneously to provide a 5-day geometric revisit time, the Sentinel-2 system will provide global and systematic acquisitions at high spatial resolution with a high revisit time tailored towards the needs of land monitoring. Apart from providing continuity to the Landsat and SPOT time series, the Sentinel-2 Multi-Spectral Instrument (MSI) incorporates new narrow bands around the red edge for improved retrievals of biophysical parameters. The need for proper cloud screening and atmospheric correction has been a serious constraint for optical data in the past. The fact that both Sentinel-2 and Sentinel-3 have dedicated bands to allow such corrections represents an important step towards proper exploitation, guaranteeing consistent time series that show actual variability in land surface conditions without the artefacts introduced by the atmosphere. Expected operational products (such as Land Cover maps, Leaf Area Index, Fractional Vegetation Cover, Fraction of Absorbed Photosynthetically Active Radiation, and Leaf Chlorophyll and Water Contents) will be enhanced with new scientific applications. Higher-level products will also be provided by means of mosaicking, averaging, synthesising or compositing of spatially and temporally resampled data. A key element in the exploitation of the Sentinel series will be the adequate use of data synergy, which will open new possibilities for improved land models. This paper analyses in particular the possibilities offered by mosaicking and compositing information derived from high-spatial-resolution Sentinel-2 observations to complement dense time series derived from Sentinel-3 data with more frequent coverage. Interpolation of gaps in high-spatial-resolution time series (from Sentinel-2 data) by using medium/low resolution data from Sentinel-3 (OLCI and SLSTR) is also a way of making the series more temporally consistent at high spatial resolution. The primary goal of such temporal interpolation / spatial mosaicking techniques is to derive consistent surface reflectance data for virtually every date and geographical location, no matter the initial spatial/temporal coverage of the original data used to produce the composite. As a result, biophysical products can be derived in a more consistent way from the spectral information of Sentinel-3 data by making use of a description of surface heterogeneity derived from Sentinel-2 data. Using data from dedicated experiments (SEN2FLEX, CEFLES2, SEN3EXP), which include a large dataset of satellite and airborne data and ground-based measurements of atmospheric and vegetation parameters, different techniques are tested, ranging from empirical/statistical approaches that build nonlinear regressions by mapping spectra to a high-dimensional space, up to model inversion / data assimilation scenarios. Exploitation of the temporal domain and the spatial multi-scale domain then becomes a driver for the systematic exploitation of GMES/Sentinels data time series. This paper reviews the current status and identifies research priorities in this direction.

  16. Teaching Three-Dimensional Structural Chemistry Using Crystal Structure Databases. 4. Examples of Discovery-Based Learning Using the Complete Cambridge Structural Database

    ERIC Educational Resources Information Center

    Battle, Gary M.; Allen, Frank H.; Ferrence, Gregory M.

    2011-01-01

    Parts 1 and 2 of this series described the educational value of experimental three-dimensional (3D) chemical structures determined by X-ray crystallography and retrieved from the crystallographic databases. In part 1, we described the information content of the Cambridge Structural Database (CSD) and discussed a representative teaching subset of…

  17. Teaching Three-Dimensional Structural Chemistry Using Crystal Structure Databases. 3. The Cambridge Structural Database System: Information Content and Access Software in Educational Applications

    ERIC Educational Resources Information Center

    Battle, Gary M.; Allen, Frank H.; Ferrence, Gregory M.

    2011-01-01

    Parts 1 and 2 of this series described the educational value of experimental three-dimensional (3D) chemical structures determined by X-ray crystallography and retrieved from the crystallographic databases. In part 1, we described the information content of the Cambridge Structural Database (CSD) and discussed a representative teaching subset of…

  18. Exploration and exploitation of homologous series of bis(acrylamido)alkanes containing pyridyl and phenyl groups: β-sheet versus two-dimensional layers in solid-state photochemical [2 + 2] reactions.

    PubMed

    Garai, Mousumi; Biradha, Kumar

    2015-09-01

    The homologous series of phenyl- and pyridyl-substituted bis(acrylamido)alkanes has been synthesized with the aim of systematically analyzing their crystal structures and their solid-state [2 + 2] reactivities. The changes in the crystal structures with respect to a small change in the molecular structure, that is, by varying the alkyl spacers between acrylamides and/or by varying the end groups (phenyl, 2-pyridyl, 3-pyridyl, 4-pyridyl) on the C-terminal of the amide, were analyzed in terms of hydrogen-bonding interference (N-H⋯Npy versus N-H⋯O=C) and network geometries. In this series, a greater tendency towards the formation of N-H⋯O hydrogen bonds (β-sheets and two-dimensional networks) over N-H⋯N hydrogen bonds was observed. Among all the structures, seven were found to have the required alignment of double bonds for the [2 + 2] reaction, such that the formation of single dimers, double dimers and polymers is facilitated. However, only four structures were found to exhibit such a solid-state [2 + 2] reaction, forming a single dimer and polymers. The two-dimensional hydrogen-bonding layer via N-H⋯O hydrogen bonds was found to promote solid-state [2 + 2] photo-polymerization in a single-crystal-to-single-crystal manner. Such two-dimensional layers were encountered only when the spacer between acrylamide moieties is butyl. Only four of the 16 derivatives were found to form hydrates, two each from the 2-pyridyl and 4-pyridyl derivatives. The water molecules in these structures govern the hydrogen-bonding networks by forming an octameric water cluster and one-dimensional zigzag water chains. The trends in the melting points and densities were also analyzed.

  19. Screening of oil sources by using comprehensive two-dimensional gas chromatography/time-of-flight mass spectrometry and multivariate statistical analysis.

    PubMed

    Zhang, Wanfeng; Zhu, Shukui; He, Sheng; Wang, Yanxin

    2015-02-06

    Using comprehensive two-dimensional gas chromatography coupled to time-of-flight mass spectrometry (GC×GC/TOFMS), volatile and semi-volatile organic compounds in crude oil samples from different reservoirs or regions were analyzed for the development of a molecular fingerprint database. Based on the GC×GC/TOFMS fingerprints of crude oils, principal component analysis (PCA) and cluster analysis were used to distinguish the oil sources and find biomarkers. As a supervised step, geological characteristics of the crude oils, including thermal maturity and sedimentary environment, were assigned to the principal components. The results show that the tri-aromatic steroid (TAS) series are suitable marker compounds in crude oils for oil screening, and that the relative abundances of individual TAS compounds correlate well with oil sources. To correct for the effects of external factors other than oil source, the variables were defined as content ratios of selected target compounds, and 13 parameters were proposed for the screening of oil sources. With the developed model, the crude oils were easily discriminated, and the result is in good agreement with the practical geological setting. Copyright © 2014 Elsevier B.V. All rights reserved.
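
    A minimal Python sketch of the screening chain described (the file name, number of components and clusters are placeholders, not values from the study): autoscaled peak-ratio variables are projected by PCA and the scores are clustered to group oils by source.

        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.cluster import AgglomerativeClustering

        # rows = crude-oil samples, columns = content ratios of target compounds
        X = np.loadtxt("oil_peak_ratios.csv", delimiter=",")   # hypothetical input file

        Z = StandardScaler().fit_transform(X)                  # autoscale each ratio variable
        pca = PCA(n_components=3)
        scores = pca.fit_transform(Z)                          # sample scores on PC1-PC3
        print("explained variance:", pca.explained_variance_ratio_)

        families = AgglomerativeClustering(n_clusters=3).fit_predict(scores)  # oil groupings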

  20. Getting to the root of plant biology: impact of the Arabidopsis genome sequence on root research

    PubMed Central

    Benfey, Philip N.; Bennett, Malcolm; Schiefelbein, John

    2010-01-01

    Summary Prior to the availability of the genome sequence, the root of Arabidopsis had attracted a small but ardent group of researchers drawn to its accessibility and developmental simplicity. Roots are easily observed when grown on the surface of nutrient agar media, facilitating analysis of responses to stimuli such as gravity and touch. Developmental biologists were attracted to the simple radial organization of primary root tissues, which form a series of concentric cylinders around the central vascular tissue. Equally attractive was the mode of propagation, with stem cells at the tip giving rise to progeny that were confined to cell files. These properties of root development reduced the normal four-dimensional problem of development (three spatial dimensions and time) to a two-dimensional problem, with cell type on the radial axis and developmental time along the longitudinal axis. The availability of the complete Arabidopsis genome sequence has dramatically accelerated traditional genetic research on root biology, and has also enabled entirely new experimental strategies to be applied. Here we review examples of the ways in which availability of the Arabidopsis genome sequence has enhanced progress in understanding root biology. PMID:20409273

  1. Generalized time-dependent Schrödinger equation in two dimensions under constraints

    NASA Astrophysics Data System (ADS)

    Sandev, Trifce; Petreska, Irina; Lenzi, Ervin K.

    2018-01-01

    We investigate a generalized two-dimensional time-dependent Schrödinger equation on a comb with a memory kernel. A Dirac delta term is introduced in the Schrödinger equation so that the quantum motion along the x-direction is constrained at y = 0. The wave function is analyzed by using Green's function approach for several forms of the memory kernel, which are of particular interest. Closed form solutions for the cases of Dirac delta and power-law memory kernels in terms of Fox H-function, as well as for a distributed order memory kernel, are obtained. Further, a nonlocal term is also introduced and investigated analytically. It is shown that the solution for such a case can be represented in terms of infinite series in Fox H-functions. Green's functions for each of the considered cases are analyzed and plotted for the most representative ones. Anomalous diffusion signatures are evident from the presence of the power-law tails. The normalized Green's functions obtained in this work are of broader interest, as they are an important ingredient for further calculations and analyses of some interesting effects in the transport properties in low-dimensional heterogeneous media.
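
    Schematically, the class of equations studied has the structure below (a sketch consistent with the description above, not the paper's exact notation), where the Dirac delta confines transport along x to the backbone y = 0 and \gamma(t) is the memory kernel (Dirac delta, power law, or distributed order):

        i\hbar \int_{0}^{t} \gamma(t - t') \, \frac{\partial \psi(x, y, t')}{\partial t'} \, dt' = \left[ -\mathcal{D}_x \, \delta(y) \, \frac{\partial^2}{\partial x^2} - \mathcal{D}_y \, \frac{\partial^2}{\partial y^2} \right] \psi(x, y, t)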

  2. Characterization of sulfur and nitrogen compounds in Brazilian petroleum derivatives using ionic liquid capillary columns in comprehensive two-dimensional gas chromatography with time-of-flight mass spectrometric detection.

    PubMed

    Cappelli Fontanive, Fernando; Souza-Silva, Érica Aparecida; Macedo da Silva, Juliana; Bastos Caramão, Elina; Alcaraz Zini, Claudia

    2016-08-26

    Diesel and naphtha samples were analyzed using ionic liquid (IL) columns to evaluate the best column set for the investigation of organic sulfur compounds (OSC) and nitrogen (N)-containing compounds with comprehensive two-dimensional gas chromatography coupled to a time-of-flight mass spectrometry detector (GC×GC/TOFMS). Employing a series of stationary phase sets, namely DB-5MS/DB-17, DB-17/DB-5MS, DB-5MS/IL-59, and IL-59/DB-5MS, the following parameters were systematically evaluated: the number of tentatively identified OSC, 2D chromatographic space occupation, the number of polyaromatic hydrocarbon (PAH) and OSC co-elutions, and the percentage of asymmetric peaks. DB-5MS/IL-59 was chosen for OSC analysis, while IL-59/DB-5MS was chosen for nitrogen compounds, as each stationary phase set provided the best chromatographic efficiency for these two classes of compounds, respectively. Most compounds were tentatively identified by Lee and by Van den Dool and Kratz retention indices and by spectral matching against a library. Whenever available, compounds were also positively identified via injection of authentic standards. Copyright © 2016 Elsevier B.V. All rights reserved.
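
    For context, the Van den Dool and Kratz (temperature-programmed) retention index referred to above is conventionally computed from the retention times of bracketing n-alkanes with n and n + 1 carbons:

        RI_x = 100 \left[ n + \frac{t_R(x) - t_R(n)}{t_R(n+1) - t_R(n)} \right]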

  3. A Global Three-Dimensional Radiation Hydrodynamic Simulation of a Self-Gravitating Accretion Disk

    NASA Astrophysics Data System (ADS)

    Phillipson, Rebecca; Vogeley, Michael S.; McMillan, Stephen; Boyd, Patricia

    2018-01-01

    We present three-dimensional, radiation hydrodynamic simulations of initially thin accretion disks with self-gravity using the grid-based code PLUTO. We produce simulated light curves and spectral energy distributions and compare to observational data of X-ray binary (XRB) and active galactic nuclei (AGN) variability. These simulations are of interest for modeling the role of radiation in accretion physics across decades of mass and frequency. In particular, the characteristics of the time variability in various bandwidths can probe the timescales over which different physical processes dominate the accretion flow. For example, in the case of some XRBs, superorbital periods much longer than the companion orbital period have been observed. Smoothed particle hydrodynamics (SPH) calculations have shown that irradiation-driven warping could be the mechanism underlying these long periods. In the case of AGN, irradiation-driven warping is also predicted to occur in addition to strong outflows originating from thermal and radiation pressure driving forces, which are important processes in understanding feedback and star formation in active galaxies. We compare our simulations to various toy models via traditional time series analysis of our synthetic and observed light curves.

  4. AIRS Ozone Burden During Antarctic Winter: Time Series from 8/1/2005 to 9/30/2005

    NASA Technical Reports Server (NTRS)

    2007-01-01

    [Movie available at the original site: AIRS ozone burden during the Antarctic winter, 2005-08-01 to 2005-09-30.]

    AIRS provides a daily global 3-dimensional view of Earth's ozone layer. Since AIRS observes in the thermal infrared spectral range, it also allows scientists to view from space the Antarctic ozone hole for the first time continuously during polar winter. This image sequence captures the intensification of the annual ozone hole in the Antarctic Polar Vortex.

    The Atmospheric Infrared Sounder Experiment, with its visible, infrared, and microwave detectors, provides a three-dimensional look at Earth's weather. Working in tandem, the three instruments can make simultaneous observations all the way down to the Earth's surface, even in the presence of heavy clouds. With more than 2,000 channels sensing different regions of the atmosphere, the system creates a global, 3-D map of atmospheric temperature and humidity and provides information on clouds, greenhouse gases, and many other atmospheric phenomena. The AIRS Infrared Sounder Experiment flies onboard NASA's Aqua spacecraft and is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., under contract to NASA. JPL is a division of the California Institute of Technology in Pasadena.

  5. Proteus two-dimensional Navier-Stokes computer code, version 2.0. Volume 3: Programmer's reference

    NASA Technical Reports Server (NTRS)

    Towne, Charles E.; Schwab, John R.; Bui, Trong T.

    1993-01-01

    A computer code called Proteus 2D was developed to solve the two-dimensional planar or axisymmetric, Reynolds-averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The objective in this effort was to develop a code for aerospace propulsion applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The governing equations are solved in generalized nonorthogonal body-fitted coordinates, by marching in time using a fully-coupled ADI solution procedure. The boundary conditions are treated implicitly. All terms, including the diffusion terms, are linearized using second-order Taylor series expansions. Turbulence is modeled using either an algebraic or two-equation eddy viscosity model. The thin-layer or Euler equations may also be solved. The energy equation may be eliminated by the assumption of constant total enthalpy. Explicit and implicit artificial viscosity may be used. Several time step options are available for convergence acceleration. The documentation is divided into three volumes. The Programmer's Reference contains detailed information useful when modifying the program. The program structure, the Fortran variables stored in common blocks, and the details of each subprogram are described.
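
    As a generic illustration of the linearization strategy mentioned above (not the code's exact discretization), a nonlinear flux vector F(Q) at the new time level is expanded about the known level n,

        F(Q^{n+1}) \approx F(Q^{n}) + \left( \frac{\partial F}{\partial Q} \right)^{\!n} \left( Q^{n+1} - Q^{n} \right) + \mathcal{O}(\Delta t^{2})

    which keeps the implicit system linear in \Delta Q = Q^{n+1} - Q^{n} while preserving second-order temporal accuracy; the resulting block-banded system is then factored and marched by the ADI procedure.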

  6. PROTEUS two-dimensional Navier-Stokes computer code, version 1.0. Volume 2: User's guide

    NASA Technical Reports Server (NTRS)

    Towne, Charles E.; Schwab, John R.; Benson, Thomas J.; Suresh, Ambady

    1990-01-01

    A new computer code was developed to solve the two-dimensional or axisymmetric, Reynolds averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The thin-layer or Euler equations may also be solved. Turbulence is modeled using an algebraic eddy viscosity model. The objective was to develop a code for aerospace applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The equations are written in nonorthogonal body-fitted coordinates, and solved by marching in time using a fully-coupled alternating direction-implicit procedure with generalized first- or second-order time differencing. All terms are linearized using second-order Taylor series. The boundary conditions are treated implicitly, and may be steady, unsteady, or spatially periodic. Simple Cartesian or polar grids may be generated internally by the program. More complex geometries require an externally generated computational coordinate system. The documentation is divided into three volumes. Volume 2 is the User's Guide, and describes the program's general features, the input and output, the procedure for setting up initial conditions, the computer resource requirements, the diagnostic messages that may be generated, the job control language used to run the program, and several test cases.

  7. Proteus three-dimensional Navier-Stokes computer code, version 1.0. Volume 3: Programmer's reference

    NASA Technical Reports Server (NTRS)

    Towne, Charles E.; Schwab, John R.; Bui, Trong T.

    1993-01-01

    A computer code called Proteus 3D was developed to solve the three-dimensional, Reynolds-averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The objective in this effort was to develop a code for aerospace propulsion applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The governing equations are solved in generalized nonorthogonal body fitted coordinates, by marching in time using a fully-coupled ADI solution procedure. The boundary conditions are treated implicitly. All terms, including the diffusion terms, are linearized using second-order Taylor series expansions. Turbulence is modeled using either an algebraic or two-equation eddy viscosity model. The thin-layer or Euler equations may also be solved. The energy equation may be eliminated by the assumption of constant total enthalpy. Explicit and implicit artificial viscosity may be used. Several time step options are available for convergence acceleration. The documentation is divided into three volumes. The Programmer's Reference contains detailed information useful when modifying the program. The program structure, the Fortran variables stored in common blocks, and the details of each subprogram are described.

  8. Proteus three-dimensional Navier-Stokes computer code, version 1.0. Volume 2: User's guide

    NASA Technical Reports Server (NTRS)

    Towne, Charles E.; Schwab, John R.; Bui, Trong T.

    1993-01-01

    A computer code called Proteus 3D was developed to solve the three-dimensional, Reynolds-averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The objective in this effort was to develop a code for aerospace propulsion applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The governing equations are solved in generalized nonorthogonal body-fitted coordinates, by marching in time using a fully-coupled ADI solution procedure. The boundary conditions are treated implicitly. All terms, including the diffusion terms, are linearized using second-order Taylor series expansions. Turbulence is modeled using either an algebraic or two-equation eddy viscosity model. The thin-layer or Euler equations may also be solved. The energy equation may be eliminated by the assumption of constant total enthalpy. Explicit and implicit artificial viscosity may be used. Several time step options are available for convergence acceleration. The documentation is divided into three volumes. This User's Guide describes the program's features, the input and output, the procedure for setting up initial conditions, the computer resource requirements, the diagnostic messages that may be generated, the job control language used to run the program, and several test cases.

  9. Dynamical behavior of lean swirling premixed flame generated by change in gravitational orientation

    NASA Astrophysics Data System (ADS)

    Gotoda, Hiroshi; Miyano, Takaya; Shepherd, Ian

    2010-11-01

    The dynamic behavior of flame front instability in a lean swirling premixed flame generated by a change in gravitational orientation has been experimentally investigated in this work. When the gravitational direction is changed relative to the flame front, i.e., in inverted gravity, an unstably fluctuating flame (unstable flame) is formed in a limited domain of equivalence ratio and swirl number (Gotoda, H. et al., Physical Review E, vol. 81, 026211, 2010). The time history of flame front fluctuations shows that in the buoyancy-dominated region, a chaotic irregular fluctuation with low frequencies is superimposed on the dominant periodic oscillation of the unstable flame. This periodic oscillation is produced by unstable large-scale vortex motion in the combustion products generated by a change in the buoyancy/swirl interaction due to the inversion of gravitational orientation. As a result, the dynamic behavior of the unstable flame becomes low-dimensional deterministic chaos. Its dynamics maintains low-dimensional deterministic chaos even in the momentum-dominated region, in which vortex breakdown in the combustion products clearly occurs. These results were clearly demonstrated by the use of nonlinear time series analysis based on chaos theory, which has not been widely applied to the investigation of combustion phenomena.
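
    A compact Python sketch of the kind of chaos-theoretic diagnostics referred to (time-delay embedding and the Grassberger-Procaccia correlation sum; parameter choices are illustrative, not those of the study): a low, well-defined scaling slope of log C(r) versus log r is the usual signature of low-dimensional deterministic chaos.

        import numpy as np

        def delay_embed(x, dim, tau):
            """Time-delay embedding of a scalar series x into `dim` dimensions."""
            n = len(x) - (dim - 1) * tau
            return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

        def correlation_sum(X, r):
            """Fraction of point pairs of the embedded trajectory closer than r."""
            d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
            iu = np.triu_indices(len(X), k=1)
            return np.mean(d[iu] < r)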

  10. Combined scanning transmission electron microscopy tilt- and focal series.

    PubMed

    Dahmen, Tim; Baudoin, Jean-Pierre; Lupini, Andrew R; Kübel, Christian; Slusallek, Philipp; de Jonge, Niels

    2014-04-01

    In this study, a combined tilt- and focal series is proposed as a new recording scheme for high-angle annular dark-field scanning transmission electron microscopy (STEM) tomography. Three-dimensional (3D) data were acquired by mechanically tilting the specimen, and recording a through-focal series at each tilt direction. The sample was a whole-mount macrophage cell with embedded gold nanoparticles. The tilt-focal algebraic reconstruction technique (TF-ART) is introduced as a new algorithm to reconstruct tomograms from such combined tilt- and focal series. The feasibility of TF-ART was demonstrated by 3D reconstruction of the experimental 3D data. The results were compared with a conventional STEM tilt series of a similar sample. The combined tilt- and focal series led to smaller "missing wedge" artifacts, and a higher axial resolution than obtained for the STEM tilt series, thus improving on one of the main issues of tilt series-based electron tomography.
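
    For orientation, classical ART reconstructs the volume x from ray measurements b with relaxed Kaczmarz updates of the form below; TF-ART applies this style of update to rays drawn from both the tilt and the focal dimensions (a sketch only, not the authors' exact iteration):

        x^{(k+1)} = x^{(k)} + \lambda \, \frac{b_i - \langle a_i, x^{(k)} \rangle}{\lVert a_i \rVert^{2}} \, a_i

    where a_i is the i-th row of the projection operator and \lambda a relaxation parameter.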

  11. Detection of conveyance changes in St. Clair River using historical water-level and flow data with inverse one-dimensional hydrodynamic modeling

    USGS Publications Warehouse

    Holtschlag, David J.; Hoard, C.J.

    2009-01-01

    St. Clair River is a connecting channel that transports water from Lake Huron to the St. Clair River Delta and Lake St. Clair. A negative trend has been detected in differences between water levels on Lake Huron and Lake St. Clair. This trend may indicate a combination of flow and conveyance changes within St. Clair River. To identify where conveyance change may be taking place, eight water-level gaging stations along St. Clair River were selected to delimit seven reaches. Positive trends in water-level fall were detected in two reaches, and negative trends were detected in two other reaches. The presence of both positive and negative trends in water-level fall indicates that changes in conveyance are likely occurring among some reaches because all reaches transmit essentially the same flow. Annual water-level falls in reaches and reach lengths were used to compute conveyance ratios for all pairs of reaches by use of water-level data from 1962 to 2007. Positive and negative trends in conveyance ratios indicate that relative conveyance is changing among some reaches. Inverse one-dimensional (1-D) hydrodynamic modeling was used to estimate a partial annual series of effective channel-roughness parameters in reaches forming the St. Clair River for 21 years when flow measurements were sufficient to support parameter estimation. Monotonic, persistent but non-monotonic, and irregular changes in estimated effective channel roughness with time were interpreted as systematic changes in conveyances in five reaches. Time-varying parameter estimates were used to simulate flow throughout the St. Clair River and compute changes in conveyance with time. Based on the partial annual series of parameters, conveyance in the St. Clair River increased about 10 percent from 1962 to 2002. Conveyance decreased, however, about 4.1 percent from 2003 to 2007, so that conveyance was about 5.9 percent higher in 2007 than in 1962.
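
    A minimal sketch of how reach-to-reach conveyance ratios follow from the measured falls, assuming uniform flow so that Q = K \sqrt{\Delta h / L} in each reach and essentially the same Q through all reaches (the report's exact formulation may differ):

        \frac{K_i}{K_j} = \sqrt{\frac{\Delta h_j / L_j}{\Delta h_i / L_i}}

    so a reach whose annual fall \Delta h_i trends upward at fixed flow is losing conveyance relative to the others.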

  12. Is there evidence for the existence of nonlinear behavior within the interplanetary solar sector structure?

    NASA Astrophysics Data System (ADS)

    Brown, A. G.; Francis, N. M.; Broomhead, D. S.; Cannon, P. S.; Akram, A.

    1999-06-01

    Using data from the Sweden and Britain Radar Experiment (SABRE) VHF coherent radar, Yeoman et al. [1990] found evidence for two and four sector structures during the declining phase of solar cycle (SC) 21. No such obvious harmonic features were present during the ascending phase of SC 22. It was suggested that the structure of the heliospheric current sheet might exhibit nonlinear behavior during the latter period. A direct test of this suggestion, using established nonlinear methods, would require the computation of the fractal dimension of the data, for example. However, the quality of the SABRE data is insufficient for this purpose. Therefore we have tried to answer a simpler question: Is there any evidence that the SABRE data was generated by a (low-dimensional) nonlinear process? If this were the case, it would be a powerful indicator of nonlinear behavior in the solar current sheet. Our approach has been to use a system of orthogonal linear filters to separate the data into linearly uncorrelated time series. We then look for nonlinear dynamical relationships between these time series, using radial basis function models (which can be thought of as a class of neural networks). The presence of such a relationship, indicated by the ability to model one filter output given another, would equate to the presence of nonlinear properties within the data. Using this technique, evidence is found for the presence of low-level nonlinear behavior during both phases of the solar cycle investigated in this study. The evidence for nonlinear behavior is stronger during the descending phase of SC 21. However, it is not possible to distinguish between nonlinear dynamics and a nonlinearly transformed colored Gaussian noise process in either instance, using the available data. Therefore, in conclusion, we find insufficient evidence within the SABRE data set to support the suggestion of increased nonlinear dynamical behavior during the ascending phase of SC 22. In fact, nonlinear dynamics would seem to exert very little influence within the measurement time series at all, given the observed data. Therefore it is likely that stochastic or unresolved high-dimensional nonlinear mechanisms are responsible for the observed spectrum complexity during the ascending phase of SC 22.
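
    A small Python sketch of a Gaussian radial basis function model of the kind described (the centres, width and ridge term are illustrative choices): one filter output x is used to predict another, y, and a clear drop in out-of-sample error relative to a linear or surrogate model would indicate a nonlinear dynamical relationship.

        import numpy as np

        def rbf_design(x, centres, width):
            """Gaussian RBF design matrix for 1-D inputs."""
            return np.exp(-(x[:, None] - centres[None, :]) ** 2 / (2.0 * width ** 2))

        def fit_rbf(x, y, centres, width, ridge=1e-6):
            """Ridge-regularized least-squares weights for y ~ sum_k w_k phi_k(x)."""
            Phi = rbf_design(x, centres, width)
            return np.linalg.solve(Phi.T @ Phi + ridge * np.eye(len(centres)), Phi.T @ y)

        def predict_rbf(x, centres, width, w):
            return rbf_design(x, centres, width) @ w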

  13. Origin and structures of solar eruptions II: Magnetic modeling

    NASA Astrophysics Data System (ADS)

    Guo, Yang; Cheng, Xin; Ding, MingDe

    2017-07-01

    The topology and dynamics of the three-dimensional magnetic field in the solar atmosphere govern various solar eruptive phenomena and activities, such as flares, coronal mass ejections, and filaments/prominences. We have to observe and model the vector magnetic field to understand the structures and physical mechanisms of these solar activities. Vector magnetic fields on the photosphere are routinely observed via polarized light and inferred by inversion of Stokes profiles. To analyze these vector magnetic fields, we first need to remove the 180° ambiguity of the transverse components and correct for the projection effect. Then, after proper preprocessing, the vector magnetic field can serve as the boundary condition for force-free field modeling. The photospheric velocity field can also be derived from a time sequence of vector magnetic fields. The three-dimensional magnetic field can then be derived and studied with theoretical force-free field models, numerical nonlinear force-free field models, magnetohydrostatic models, and magnetohydrodynamic models. Magnetic energy can be computed from three-dimensional magnetic field models or from a time series of vector magnetic fields. The magnetic topology is analyzed by pinpointing the positions of magnetic null points, bald patches, and quasi-separatrix layers. As a well-conserved physical quantity, magnetic helicity can be computed with various methods, such as the finite volume method, discrete flux tube method, and helicity flux integration method. This quantity serves as a promising parameter characterizing the activity level of solar active regions.
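
    As one concrete instance of the helicity computations listed, the finite volume method usually evaluates the gauge-invariant relative helicity in the Finn-Antonsen form (stated here as general background, not as the paper's derivation):

        H_R = \int_V \left( \mathbf{A} + \mathbf{A}_p \right) \cdot \left( \mathbf{B} - \mathbf{B}_p \right) \, dV

    where \mathbf{B}_p is the potential field with the same normal component on the boundary of V, and \mathbf{A}, \mathbf{A}_p are vector potentials of \mathbf{B} and \mathbf{B}_p.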

  14. Empirical parameterization of setup, swash, and runup

    USGS Publications Warehouse

    Stockdon, H.F.; Holman, R.A.; Howd, P.A.; Sallenger, A.H.

    2006-01-01

    Using shoreline water-level time series collected during 10 dynamically diverse field experiments, an empirical parameterization for extreme runup, defined by the 2% exceedence value, has been developed for use on natural beaches over a wide range of conditions. Runup, the height of discrete water-level maxima, depends on two dynamically different processes: time-averaged wave setup and total swash excursion, each of which is parameterized separately. Setup at the shoreline was best parameterized using a dimensional form of the more common Iribarren-based setup expression that includes foreshore beach slope, offshore wave height, and deep-water wavelength. Significant swash can be decomposed into the incident and infragravity frequency bands. Incident swash is also best parameterized using a dimensional form of the Iribarren-based expression. Infragravity swash is best modeled dimensionally using offshore wave height and wavelength and shows no statistically significant linear dependence on either foreshore or surf-zone slope. On infragravity-dominated dissipative beaches, the magnitudes of both setup and swash, modeling both incident and infragravity frequency components together, are dependent only on offshore wave height and wavelength. Statistics of predicted runup averaged over all sites indicate a -17 cm bias and an rms error of 38 cm; the mean observed runup elevation for all experiments was 144 cm. On intermediate and reflective beaches with complex foreshore topography, the use of an alongshore-averaged beach slope in practical applications of the runup parameterization may result in a relative runup error equal to 51% of the fractional variability between the measured and the averaged slope.
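
    The resulting parameterization is usually quoted in the form below (coefficients as commonly cited for this empirical fit; on dissipative beaches the simpler dependence on \sqrt{H_0 L_0} noted above applies):

        R_{2\%} = 1.1 \left( 0.35 \, \beta_f \sqrt{H_0 L_0} + \frac{\sqrt{H_0 L_0 \left( 0.563 \, \beta_f^{2} + 0.004 \right)}}{2} \right)

    with \beta_f the foreshore beach slope, H_0 the deep-water wave height and L_0 the deep-water wavelength.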

  15. Dimensional analysis of acoustically propagated signals

    NASA Technical Reports Server (NTRS)

    Hansen, Scott D.; Thomson, Dennis W.

    1993-01-01

    Traditionally, long term measurements of atmospherically propagated sound signals have consisted of time series of multiminute averages. Only recently have continuous measurements with temporal resolution corresponding to turbulent time scales been available. With modern digital data acquisition systems we now have the capability to simultaneously record both acoustical and meteorological parameters with sufficient temporal resolution to allow us to examine in detail relationships between fluctuating sound and the meteorological variables, particularly wind and temperature, which locally determine the acoustic refractive index. The atmospheric acoustic propagation medium can be treated as a nonlinear dynamical system, a kind of signal processor whose innards depend on thermodynamic and turbulent processes in the atmosphere. The atmosphere is an inherently nonlinear dynamical system. In fact one simple model of atmospheric convection, the Lorenz system, may well be the most widely studied of all dynamical systems. In this paper we report some results of our having applied methods used to characterize nonlinear dynamical systems to study the characteristics of acoustical signals propagated through the atmosphere. For example, we investigate whether or not it is possible to parameterize signal fluctuations in terms of fractal dimensions. For time series one such parameter is the limit capacity dimension. Nicolis and Nicolis were among the first to use the kind of methods we have to study the properties of low dimension global attractors.
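
    The limit capacity dimension referred to is the box-counting dimension of the (embedded) signal trajectory:

        D_c = \lim_{\varepsilon \to 0} \frac{\log N(\varepsilon)}{\log (1/\varepsilon)}

    where N(\varepsilon) is the number of boxes of size \varepsilon needed to cover the set; in practice, the slope of \log N versus \log(1/\varepsilon) over the scaling range gives the estimate.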

  16. The inverse Numerical Computer Program FLUX-BOT for estimating Vertical Water Fluxes from Temperature Time-Series.

    NASA Astrophysics Data System (ADS)

    Trauth, N.; Schmidt, C.; Munz, M.

    2016-12-01

    The use of heat as a natural tracer to quantify water fluxes between groundwater and surface water has evolved into a standard hydrological method. Typically, time series of temperatures in the surface water and in the sediment are observed and are subsequently evaluated by a vertical 1D representation of heat transport by advection and dispersion. Several analytical solutions, as well as their implementation into user-friendly software, exist to estimate water fluxes from the observed temperatures. Analytical solutions can be easily implemented, but assumptions on the boundary conditions have to be made a priori, e.g. a sinusoidal upper temperature boundary. Numerical models offer more flexibility and can handle temperature data characterized by irregular variations, such as storm-event-induced temperature changes, that cannot readily be incorporated in analytical solutions. This also reduces the effort of data preprocessing, such as the extraction of the diurnal temperature variation. We developed software to estimate water FLUXes Based On Temperatures: FLUX-BOT. FLUX-BOT is a numerical code written in MATLAB that calculates vertical water fluxes in saturated sediments, based on the inversion of measured temperature time series observed at multiple depths. It applies a cell-centered Crank-Nicolson implicit finite difference scheme to solve the one-dimensional heat advection-conduction equation. Besides its core inverse numerical routines, FLUX-BOT includes functions for visualizing the results and for performing uncertainty analysis. We provide applications of FLUX-BOT to generic as well as to measured temperature data to demonstrate its performance.
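
    A minimal Python sketch of one Crank-Nicolson step for the 1-D heat advection-conduction equation that FLUX-BOT inverts (FLUX-BOT itself is MATLAB; the parameter names, boundary handling and values here are illustrative assumptions):

        import numpy as np

        def crank_nicolson_step(T, dz, dt, kappa, v_t):
            """Advance T(z) one step of dT/dt = kappa d2T/dz2 - v_t dT/dz.
            kappa: effective thermal diffusivity [m^2/s]; v_t: thermal front
            velocity [m/s], proportional to the vertical water flux; the two
            boundary values are simply held fixed here (measured temperatures)."""
            n = len(T)
            a = kappa * dt / (2.0 * dz ** 2)      # diffusion number (half step)
            c = v_t * dt / (4.0 * dz)             # advection number (central, half step)
            A = np.zeros((n, n)); B = np.zeros((n, n))
            for i in range(1, n - 1):
                A[i, i - 1], A[i, i], A[i, i + 1] = -(a + c), 1.0 + 2.0 * a, -(a - c)
                B[i, i - 1], B[i, i], B[i, i + 1] =  (a + c), 1.0 - 2.0 * a,  (a - c)
            A[0, 0] = A[-1, -1] = B[0, 0] = B[-1, -1] = 1.0   # Dirichlet boundaries
            return np.linalg.solve(A, B @ T)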

  17. A Composite View of Ozone Evolution in the 1995-1996 Northern Winter Polar Vortex Developed from Airborne Lidar and Satellite Observations

    NASA Technical Reports Server (NTRS)

    Douglass, A. R.; Schoeberl, M. R.; Kawa, S. R.; Browell, E. V.

    2000-01-01

    The processes which contribute to the ozone evolution in the high latitude northern lower stratosphere are evaluated using a three dimensional model simulation and ozone observations. The model uses winds and temperatures from the Goddard Earth Observing System Data Assimilation System. The simulation results are compared with ozone observations from three platforms: the differential absorption lidar (DIAL) which was flown on the NASA DC-8 as part of the Vortex Ozone Transport Experiment; the Microwave Limb Sounder (MLS); the Polar Ozone and Aerosol Measurement (POAM II) solar occultation instrument. Time series for the different data sets are consistent with each other, and diverge from model time series during December and January. The model ozone in December and January is shown to be much less sensitive to the model photochemistry than to the model vertical transport, which depends on the model vertical motion as well as the model vertical gradient. We evaluate the dependence of model ozone evolution on the model ozone gradient by comparing simulations with different initial conditions for ozone. The modeled ozone throughout December and January most closely resembles observed ozone when the vertical profiles between 12 and 20 km within the polar vortex closely match December DIAL observations. We make a quantitative estimate of the uncertainty in the vertical advection using diabatic trajectory calculations. The net transport uncertainty is significant, and should be accounted for when comparing observations with model ozone. The observed and modeled ozone time series during December and January are consistent when these transport uncertainties are taken into account.

  18. Nonlocal Reformulations of Water and Internal Waves and Asymptotic Reductions

    NASA Astrophysics Data System (ADS)

    Ablowitz, Mark J.

    2009-09-01

    Nonlocal reformulations of the classical equations of water waves and two ideal fluids separated by a free interface, bounded above by either a rigid lid or a free surface, are obtained. The kinematic equations may be written in terms of integral equations with a free parameter. By expressing the pressure, or Bernoulli, equation in terms of the surface/interface variables, a closed system is obtained. An advantage of this formulation, referred to as the nonlocal spectral (NSP) formulation, is that the vertical component is eliminated, thus reducing the dimensionality and fixing the domain in which the equations are posed. The NSP equations and the Dirichlet-Neumann operators associated with the water wave or two-fluid equations can be related to each other and the Dirichlet-Neumann series can be obtained from the NSP equations. Important asymptotic reductions obtained from the two-fluid nonlocal system include the generalizations of the Benney-Luke and Kadomtsev-Petviashvili (KP) equations, referred to as intermediate-long wave (ILW) generalizations. These 2+1 dimensional equations possess lump type solutions. In the water wave problem high-order asymptotic series are obtained for two and three dimensional gravity-capillary solitary waves. In two dimensions, the first term in the asymptotic series is the well-known hyperbolic secant squared solution of the KdV equation; in three dimensions, the first term is the rational lump solution of the KP equation.
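
    The "hyperbolic secant squared solution of the KdV equation" mentioned above is the classical soliton; for u_t + 6 u u_x + u_{xxx} = 0 it reads

        u(x, t) = \frac{c}{2} \, \operatorname{sech}^{2}\!\left( \frac{\sqrt{c}}{2} \left( x - c t - x_0 \right) \right)

    a right-moving wave of speed c whose amplitude and width are locked together; the rational lump solution of the KP equation plays the analogous leading-order role in three dimensions, decaying algebraically in all horizontal directions.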

  19. JADA: a graphical user interface for comprehensive internal dose assessment in nuclear medicine.

    PubMed

    Grimes, Joshua; Uribe, Carlos; Celler, Anna

    2013-07-01

    The main objective of this work was to design a comprehensive dosimetry package that would keep all aspects of internal dose calculation within the framework of a single software environment and that would be applicable for a variety of dose calculation approaches. Our MATLAB-based graphical user interface (GUI) can be used for processing data obtained using pure planar, pure SPECT, or hybrid planar/SPECT imaging. Time-activity data for source regions are obtained using a set of tools that allow the user to reconstruct SPECT images, load images, coregister a series of planar images, and perform two-dimensional and three-dimensional image segmentation. Curve fits are applied to the acquired time-activity data to construct time-activity curves, which are then integrated to obtain time-integrated activity coefficients. Subsequently, dose estimates are made using one of three methods. The organ-level dose calculation subGUI calculates mean organ doses that are equivalent to dose assessment performed by OLINDA/EXM. Voxelized dose calculation options, which include the voxel S value approach and Monte Carlo simulation using the EGSnrc user code DOSXYZnrc, are available within the process 3D image data subGUI. The developed internal dosimetry software package provides an assortment of tools for every step in the dose calculation process, eliminating the need for manual data transfer between programs. This saves time and minimizes user errors, while offering a versatility that can be used to efficiently perform patient-specific internal dose calculations in a variety of clinical situations.
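
    A minimal Python sketch of the organ-level chain described (the mono-exponential fit model and the S value are illustrative assumptions, not the package's internals): fit the time-activity data, integrate to obtain the time-integrated activity coefficient, and multiply by an organ S value.

        import numpy as np
        from scipy.optimize import curve_fit

        def mono_exp(t, A0, lam):
            return A0 * np.exp(-lam * t)

        def organ_dose(t_hours, activity_MBq, S_mGy_per_MBq_h):
            """Mean organ dose from a mono-exponential time-activity fit."""
            (A0, lam), _ = curve_fit(mono_exp, t_hours, activity_MBq,
                                     p0=(activity_MBq[0], 0.05))
            tia_MBq_h = A0 / lam            # integral of A0*exp(-lam*t) over 0..infinity
            return tia_MBq_h * S_mGy_per_MBq_h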

  20. Three-dimensional rearrangement of single atoms using actively controlled optical microtraps.

    PubMed

    Lee, Woojun; Kim, Hyosub; Ahn, Jaewook

    2016-05-02

    We propose and demonstrate three-dimensional rearrangements of single atoms. In experiments performed with single 87Rb atoms in optical microtraps actively controlled by a spatial light modulator, we demonstrate various dynamic rearrangements of up to N = 9 atoms including rotation, 2D vacancy filling, guiding, compactification, and 3D shuffling. With the capability of a phase-only Fourier mask to generate arbitrary shapes of the holographic microtraps, it was possible to place single atoms at arbitrary geometries of a few μm size and even continuously reconfigure them by conveying each atom. For this purpose, we loaded a series of computer-generated phase masks in the full frame rate of 60 Hz of the spatial light modulator, so the animation of phase mask transformed the holographic microtraps in real time, driving each atom along the assigned trajectory. Possible applications of this method of transformation of single atoms include preparation of scalable quantum platforms for quantum computation, quantum simulation, and quantum many-body physics.

  1. A near one-dimensional indirectly driven implosion at convergence ratio 30

    NASA Astrophysics Data System (ADS)

    MacLaren, S. A.; Masse, L. P.; Czajka, C. E.; Khan, S. F.; Kyrala, G. A.; Ma, T.; Ralph, J. E.; Salmonson, J. D.; Bachmann, B.; Benedetti, L. R.; Bhandarkar, S. D.; Bradley, P. A.; Hatarik, R.; Herrmann, H. W.; Mariscal, D. A.; Millot, M.; Patel, P. K.; Pino, J. E.; Ratledge, M.; Rice, N. G.; Tipton, R. E.; Tommasini, R.; Yeamans, C. B.

    2018-05-01

    Inertial confinement fusion cryogenic-layered implosions at the National Ignition Facility, while successfully demonstrating self-heating due to alpha-particle deposition, have fallen short of the performance predicted by one-dimensional (1D) multi-physics implosion simulations. The current understanding, from experimental evidence as well as simulations, suggests that engineering features such as the capsule tent and fill tube, as well as time-dependent low-mode asymmetry, are to blame for the lack of agreement. A short series of experiments designed specifically to avoid these degradations to the implosion are described here in order to understand if, once they are removed, a high-convergence cryogenic-layered deuterium-tritium implosion can achieve the 1D simulated performance. The result is a cryogenic layered implosion, round at stagnation, that matches closely the performance predicted by 1D simulations. This agreement can then be exploited to examine the sensitivity of approximations in the model to the constraints imposed by the data.

  2. Multigrid one shot methods for optimal control problems: Infinite dimensional control

    NASA Technical Reports Server (NTRS)

    Arian, Eyal; Taasan, Shlomo

    1994-01-01

    The multigrid one shot method for optimal control problems, governed by elliptic systems, is introduced for the infinite dimensional control space. In this case, the control variable is a function whose discrete representation involves an increasing number of variables with grid refinement. The minimization algorithm uses Lagrange multipliers to calculate sensitivity gradients. A preconditioned gradient descent algorithm is accelerated by a set of coarse grids. It optimizes for different scales in the representation of the control variable on different discretization levels. An analysis which reduces the problem to the boundary is introduced. It is used to approximate the two level asymptotic convergence rate, to determine the amplitude of the minimization steps, and to choose a high-pass filter to be used when necessary. The effectiveness of the method is demonstrated on a series of test problems. The new method enables the solution of optimal control problems at the cost of solving the corresponding analysis problems only a few times.

  3. Three-dimensional landing zone ladar

    NASA Astrophysics Data System (ADS)

    Savage, James; Goodrich, Shawn; Burns, H. N.

    2016-05-01

    Three-Dimensional Landing Zone (3D-LZ) refers to a series of Air Force Research Laboratory (AFRL) programs to develop high-resolution, imaging ladar to address helicopter approach and landing in degraded visual environments with emphasis on brownout; cable warning and obstacle avoidance; and controlled flight into terrain. Initial efforts adapted ladar systems built for munition seekers, and success led to the 3D-LZ Joint Capability Technology Demonstration (JCTD), a 27-month program to develop and demonstrate a ladar subsystem that could be housed with the AN/AAQ-29 FLIR turret flown on US Air Force Combat Search and Rescue (CSAR) HH-60G Pave Hawk helicopters. Following the JCTD flight demonstration, further development focused on reducing size, weight, and power while continuing to refine the real-time geo-referencing, dust rejection, obstacle and cable avoidance, and Helicopter Terrain Awareness and Warning (HTAWS) capability demonstrated under the JCTD. This paper summarizes significant ladar technology development milestones to date, individual ladar technologies within 3D-LZ, and results of the flight testing.

  4. Application of Dynamic Analysis in Semi-Analytical Finite Element Method.

    PubMed

    Liu, Pengfei; Xing, Qinyan; Wang, Dawei; Oeser, Markus

    2017-08-30

    Analyses of dynamic responses are significantly important for the design, maintenance and rehabilitation of asphalt pavement. In order to evaluate the dynamic responses of asphalt pavement under moving loads, a specific computational program, SAFEM, was developed based on a semi-analytical finite element method. This method is three-dimensional and only requires a two-dimensional FE discretization by incorporating Fourier series in the third dimension. In this paper, the algorithm to apply the dynamic analysis to SAFEM was introduced in detail. Asphalt pavement models under moving loads were built in SAFEM and the commercial finite element software ABAQUS to verify the accuracy and efficiency of SAFEM. The verification shows that the computational accuracy of SAFEM is high enough and its computational time is much shorter than that of ABAQUS. Moreover, experimental verification was carried out and the prediction derived from SAFEM is consistent with the measurement. Therefore, SAFEM can reliably predict the dynamic response of asphalt pavement under moving loads, which is beneficial to road administrations in assessing the pavement's state.
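    Schematically, the semi-analytical idea is to expand the response in a Fourier series along the third (here longitudinal) direction so that each harmonic requires only a two-dimensional finite element solution; a generic form (illustrative notation, not the exact SAFEM formulation) is

        \[
        u(x,y,z) \;=\; \sum_{n=1}^{N} u_n(x,y)\,\sin\!\left(\frac{n\pi z}{L}\right),
        \]

    where L is the length of the analyzed section and each coefficient field u_n(x,y) is obtained from an independent two-dimensional finite element problem, after which the harmonics are summed.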

  5. Single-shot full strain tensor determination with microbeam X-ray Laue diffraction and a two-dimensional energy-dispersive detector.

    PubMed

    Abboud, A; Kirchlechner, C; Keckes, J; Conka Nurdan, T; Send, S; Micha, J S; Ulrich, O; Hartmann, R; Strüder, L; Pietsch, U

    2017-06-01

    The full strain and stress tensor determination in a triaxially stressed single crystal using X-ray diffraction requires a series of lattice spacing measurements at different crystal orientations. This can be achieved using a tunable X-ray source. This article reports on a novel experimental procedure for single-shot full strain tensor determination using polychromatic synchrotron radiation with an energy range from 5 to 23 keV. Microbeam X-ray Laue diffraction patterns were collected from a copper micro-bending beam along the central axis (centroid of the cross section). Taking advantage of a two-dimensional energy-dispersive X-ray detector (pnCCD), the position and energy of the collected Laue spots were measured for multiple positions on the sample, allowing the measurement of variations in the local microstructure. At the same time, both the deviatoric and hydrostatic components of the elastic strain and stress tensors were calculated.

  6. HiSPoD: a program for high-speed polychromatic X-ray diffraction experiments and data analysis on polycrystalline samples

    DOE PAGES

    Sun, Tao; Fezzaa, Kamel

    2016-06-17

    Here, a high-speed X-ray diffraction technique was recently developed at the 32-ID-B beamline of the Advanced Photon Source for studying highly dynamic, yet non-repeatable and irreversible, materials processes. In experiments, the microstructure evolution in a single material event is probed by recording a series of diffraction patterns with extremely short exposure time and high frame rate. Owing to the limited flux in a short pulse and the polychromatic nature of the incident X-rays, analysis of the diffraction data is challenging. Here, HiSPoD, a stand-alone MATLAB-based software for analyzing the polychromatic X-ray diffraction data from polycrystalline samples, is described. With HiSPoD, researchers are able to perform diffraction peak indexing, extraction of one-dimensional intensity profiles by integrating a two-dimensional diffraction pattern, and, more importantly, quantitative numerical simulations to obtain precise sample structure information.
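    A minimal sketch of the azimuthal-integration step mentioned above (extracting a one-dimensional intensity profile from a two-dimensional pattern); this is generic Python, not HiSPoD itself, which is MATLAB-based, and the beam centre and image values below are synthetic:

        import numpy as np

        def radial_profile(image, center, n_bins=500):
            # azimuthally integrate a 2-D diffraction pattern about the beam centre
            # to obtain a 1-D intensity-versus-radius profile (pixel units)
            y, x = np.indices(image.shape)
            r = np.hypot(x - center[0], y - center[1])
            edges = np.linspace(0.0, r.max(), n_bins + 1)
            idx = np.clip(np.digitize(r.ravel(), edges) - 1, 0, n_bins - 1)
            sums = np.bincount(idx, weights=image.ravel(), minlength=n_bins)
            counts = np.bincount(idx, minlength=n_bins)
            radii = 0.5 * (edges[:-1] + edges[1:])
            return radii, sums / np.maximum(counts, 1)

        # synthetic example
        img = np.random.poisson(5.0, size=(512, 512)).astype(float)
        radii, profile = radial_profile(img, center=(256.0, 256.0))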

  7. Estimating soil hydraulic properties from soil moisture time series by inversion of a dual-permeability model

    NASA Astrophysics Data System (ADS)

    Dalla Valle, Nicolas; Wutzler, Thomas; Meyer, Stefanie; Potthast, Karin; Michalzik, Beate

    2017-04-01

    Dual-permeability type models are widely used to simulate water fluxes and solute transport in structured soils. These models contain two spatially overlapping flow domains with different parameterizations or even entirely different conceptual descriptions of flow processes. They are usually able to capture preferential flow phenomena, but a large set of parameters is needed, which are very laborious to obtain or cannot be measured at all. Therefore, model inversions are often used to derive the necessary parameters. Although these require sufficient input data themselves, they can use measurements of state variables instead, which are often easier to obtain and can be monitored by automated measurement systems. In this work we show a method to estimate soil hydraulic parameters from high frequency soil moisture time series data gathered at two different measurement depths by inversion of a simple one-dimensional dual-permeability model. The model uses an advection equation based on the kinematic wave theory to describe the flow in the fracture domain and a Richards equation for the flow in the matrix domain. The soil moisture time series data were measured in mesocosms during sprinkling experiments. The inversion consists of three consecutive steps: First, the parameters of the water retention function were assessed using vertical soil moisture profiles in hydraulic equilibrium. This was done using two different exponential retention functions and the Campbell function. Second, the soil sorptivity and diffusivity functions were estimated from Boltzmann-transformed soil moisture data, which allowed the calculation of the hydraulic conductivity function. Third, the parameters governing flow in the fracture domain were determined using the whole soil moisture time series. The resulting retention functions were within the range of values predicted by pedotransfer functions apart from very dry conditions, where all retention functions predicted lower matrix potentials. The diffusivity function predicted values of a similar range as shown in other studies. Overall, the model was able to emulate soil moisture time series at shallow measurement depths, but deviated increasingly at larger depths. This indicates that some of the model parameters are not constant throughout the profile. However, overall seepage fluxes were still predicted correctly. In the near future we will apply the inversion method to lower frequency soil moisture data from different sites to evaluate the model's ability to predict preferential flow seepage fluxes at the field scale.
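    As an illustration of the first inversion step (fitting a retention function to soil moisture profiles in hydraulic equilibrium), a generic sketch using the Campbell retention function; the suction/water-content pairs and starting values below are hypothetical, and this is not the authors' code:

        import numpy as np
        from scipy.optimize import curve_fit

        def campbell_theta(psi, theta_s, psi_e, b):
            # Campbell retention function: theta = theta_s * (psi / psi_e)**(-1/b),
            # with psi and psi_e expressed here as suction heads (positive, cm)
            return theta_s * (psi / psi_e) ** (-1.0 / b)

        # hypothetical equilibrium pairs of suction head (cm) and water content (-)
        suction = np.array([30.0, 45.0, 60.0, 90.0, 150.0, 300.0])
        theta = np.array([0.42, 0.40, 0.38, 0.35, 0.31, 0.26])

        (theta_s, psi_e, b), _ = curve_fit(
            campbell_theta, suction, theta, p0=(0.45, 10.0, 4.0), maxfev=10000
        )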

  8. New Method for Solving Inductive Electric Fields in the Ionosphere

    NASA Astrophysics Data System (ADS)

    Vanhamäki, H.

    2005-12-01

    We present a new method for calculating inductive electric fields in the ionosphere. It is well established that on large scales the ionospheric electric field is a potential field. This is understandable, since the temporal variations of large scale current systems are generally quite slow, in the timescales of several minutes, so inductive effects should be small. However, studies of Alfven wave reflection have indicated that in some situations inductive phenomena could well play a significant role in the reflection process, and thus modify the nature of ionosphere-magnetosphere coupling. The inputs to our calculation method are the time series of the potential part of the ionospheric electric field together with the Hall and Pedersen conductances. The output is the time series of the induced rotational part of the ionospheric electric field. The calculation method works in the time-domain and can be used with non-uniform, time-dependent conductances. In addition no particular symmetry requirements are imposed on the input potential electric field. The presented method makes use of special non-local vector basis functions called Cartesian Elementary Current Systems (CECS). This vector basis offers a convenient way of representing curl-free and divergence-free parts of 2-dimensional vector fields and makes it possible to solve the induction problem using simple linear algebra. The new calculation method is validated by comparing it with previously published results for Alfven wave reflection from a uniformly conducting ionosphere.
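    Schematically, the decomposition that such a basis implements can be written (illustrative notation, not the paper's) as

        \[
        \mathbf{E}(\mathbf{r},t) \;=\; \underbrace{-\nabla\phi}_{\text{curl-free (potential)}} \;+\; \underbrace{\nabla\times\left(\psi\,\hat{\mathbf{e}}_z\right)}_{\text{divergence-free (induced)}},
        \qquad
        \nabla\times\mathbf{E} \;=\; -\,\frac{\partial \mathbf{B}}{\partial t},
        \]

    so that, given the potential part and the conductances, Faraday's law provides the constraint from which the induced rotational part is recovered.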

  9. Development of an On-board Failure Diagnostics and Prognostics System for Solid Rocket Booster

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, Vadim N.; Luchinsky, Dmitry G.; Osipov, Vyatcheslav V.; Timucin, Dogan A.; Uckun, Serdar

    2009-01-01

    We develop a case breach model for the on-board fault diagnostics and prognostics system for subscale solid-rocket boosters (SRBs). The model development was motivated by recent ground firing tests, in which a deviation of measured time-traces from the predicted time-series was observed. A modified model takes into account the nozzle ablation, including the effect of roughness of the nozzle surface, the geometry of the fault, and erosion and burning of the walls of the hole in the metal case. The derived low-dimensional performance model (LDPM) of the fault can reproduce the observed time-series data very well. To verify the performance of the LDPM we build a FLUENT model of the case breach fault and demonstrate a good agreement between theoretical predictions based on the analytical solution of the model equations and the results of the FLUENT simulations. We then incorporate the derived LDPM into an inferential Bayesian framework and verify performance of the Bayesian algorithm for the diagnostics and prognostics of the case breach fault. It is shown that the obtained LDPM allows one to track parameters of the SRB during the flight in real time, to diagnose the case breach fault, and to predict its future evolution. The application of the method to fault diagnostics and prognostics (FD&P) of other SRB fault modes is discussed.

  10. Insights on correlation dimension from dynamics mapping of three experimental nonlinear laser systems.

    PubMed

    McMahon, Christopher J; Toomey, Joshua P; Kane, Deb M

    2017-01-01

    We have analysed large data sets consisting of tens of thousands of time series from three Type B laser systems: a semiconductor laser in a photonic integrated chip, a semiconductor laser subject to optical feedback from a long free-space-external-cavity, and a solid-state laser subject to optical injection from a master laser. The lasers can deliver either constant, periodic, pulsed, or chaotic outputs when parameters such as the injection current and the level of external perturbation are varied. The systems represent examples of experimental nonlinear systems more generally and cover a broad range of complexity including systematically varying complexity in some regions. In this work we have introduced a new procedure for semi-automatically interrogating experimental laser system output power time series to calculate the correlation dimension (CD) using the commonly adopted Grassberger-Procaccia algorithm. The new CD procedure is called the 'minimum gradient detection algorithm'. A value of minimum gradient is returned for all time series in a data set. In some cases this can be identified as a CD, with uncertainty. Applying the new 'minimum gradient detection algorithm' CD procedure, we obtained robust measurements of the correlation dimension for many of the time series measured from each laser system. By mapping the results across an extended parameter space for operation of each laser system, we were able to confidently identify regions of low CD (CD < 3) and assign these robust values for the correlation dimension. However, in all three laser systems, we were not able to measure the correlation dimension at all parts of the parameter space. Nevertheless, by mapping the staged progress of the algorithm, we were able to broadly classify the dynamical output of the lasers at all parts of their respective parameter spaces. For two of the laser systems this included displaying regions of high-complexity chaos and dynamic noise. These high-complexity regions are differentiated from regions where the time series are dominated by technical noise. This is the first time such differentiation has been achieved using a CD analysis approach. More can be known of the CD for a system when it is interrogated in a mapping context than from calculations using isolated time series. This has been shown for three laser systems and the approach is expected to be useful in other areas of nonlinear science where large data sets are available and need to be semi-automatically analysed to provide real dimensional information about the complex dynamics. The CD/minimum gradient algorithm measure provides additional information that complements other measures of complexity and relative complexity, such as the permutation entropy, and conventional physical measurements.
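    A bare-bones sketch of the correlation-sum calculation with a minimum-gradient readout (generic Python, not the authors' semi-automated procedure; the embedding parameters, the probed radius range, and the test signal are arbitrary choices):

        import numpy as np
        from scipy.spatial.distance import pdist

        def delay_embed(x, dim, tau):
            # time-delay embedding of a scalar time series
            n = len(x) - (dim - 1) * tau
            return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

        def minimum_gradient_cd(x, dim=6, tau=5, n_r=40):
            # local slope of log C(r) versus log r over a restricted radius range
            # (large radii are excluded to avoid the saturation region); the
            # minimum slope found is reported as a candidate correlation dimension
            emb = delay_embed(np.asarray(x, float), dim, tau)
            dists = pdist(emb)
            r_lo, r_hi = np.percentile(dists, [0.1, 25.0])
            radii = np.logspace(np.log10(r_lo), np.log10(r_hi), n_r)
            c = np.array([np.mean(dists < r) for r in radii])
            ok = c > 0
            slope = np.gradient(np.log(c[ok]), np.log(radii[ok]))
            return slope.min()

        # example on a noisy limit cycle (expected CD near 1)
        t = np.arange(4000) * 0.05
        cd = minimum_gradient_cd(np.sin(t) + 0.01 * np.random.randn(t.size))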

  11. Insights on correlation dimension from dynamics mapping of three experimental nonlinear laser systems

    PubMed Central

    McMahon, Christopher J.; Toomey, Joshua P.

    2017-01-01

    Background We have analysed large data sets consisting of tens of thousands of time series from three Type B laser systems: a semiconductor laser in a photonic integrated chip, a semiconductor laser subject to optical feedback from a long free-space-external-cavity, and a solid-state laser subject to optical injection from a master laser. The lasers can deliver either constant, periodic, pulsed, or chaotic outputs when parameters such as the injection current and the level of external perturbation are varied. The systems represent examples of experimental nonlinear systems more generally and cover a broad range of complexity including systematically varying complexity in some regions. Methods In this work we have introduced a new procedure for semi-automatically interrogating experimental laser system output power time series to calculate the correlation dimension (CD) using the commonly adopted Grassberger-Procaccia algorithm. The new CD procedure is called the ‘minimum gradient detection algorithm’. A value of minimum gradient is returned for all time series in a data set. In some cases this can be identified as a CD, with uncertainty. Findings Applying the new ‘minimum gradient detection algorithm’ CD procedure, we obtained robust measurements of the correlation dimension for many of the time series measured from each laser system. By mapping the results across an extended parameter space for operation of each laser system, we were able to confidently identify regions of low CD (CD < 3) and assign these robust values for the correlation dimension. However, in all three laser systems, we were not able to measure the correlation dimension at all parts of the parameter space. Nevertheless, by mapping the staged progress of the algorithm, we were able to broadly classify the dynamical output of the lasers at all parts of their respective parameter spaces. For two of the laser systems this included displaying regions of high-complexity chaos and dynamic noise. These high-complexity regions are differentiated from regions where the time series are dominated by technical noise. This is the first time such differentiation has been achieved using a CD analysis approach. Conclusions More can be known of the CD for a system when it is interrogated in a mapping context than from calculations using isolated time series. This has been shown for three laser systems and the approach is expected to be useful in other areas of nonlinear science where large data sets are available and need to be semi-automatically analysed to provide real dimensional information about the complex dynamics. The CD/minimum gradient algorithm measure provides additional information that complements other measures of complexity and relative complexity, such as the permutation entropy, and conventional physical measurements. PMID:28837602

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Yi, E-mail: zhouyihn@163.com; Huang, Yan; Li, Dang

    Graphical abstract: SEM images of the samples synthesized at different hydrothermal temperatures for 8 h: (a) 75; (b) 100; (c) 120; and (d) 140 °C, followed by calcination at 450 °C for 2 h. Highlights: ► Effects of calcination temperature on the phase transformation were studied. ► Effects of hydrothermal temperature and time on the morphology growth were studied. ► A two-stage reaction mechanism for the formation was presented. ► The photocatalytic activity was evaluated under sunlight irradiation. ► Effects of calcination temperature on the photocatalytic activity were studied. - Abstract: Novel three-dimensional sea-urchin-like hierarchical TiO₂ superstructures were synthesized on a Ti plate in a mixture of H₂O₂ and NaOH aqueous solution by a facile one-pot hydrothermal method at a low temperature, followed by protonation and calcination. The results of a series of electron microscopy characterizations suggested that the hierarchical TiO₂ superstructures consisted of numerous one-dimensional nanostructures. The microspheres were approximately 2–4 μm in diameter, and the one-dimensional TiO₂ nanostructures were up to 600–700 nm long. A two-stage reaction mechanism, i.e., initial growth and then assembly, was proposed for the formation of these architectures. The three-dimensional sea-urchin-like hierarchical TiO₂ microstructures showed excellent photocatalytic activity for the degradation of Rhodamine B aqueous solution under sunlight irradiation, which was attributed to the special three-dimensional hierarchical superstructure and an increased number of surface active sites. This novel superstructure has promising use in practical aqueous purification.

  13. A spatial length scale analysis of turbulent temperature and velocity fluctuations within and above an orchard canopy

    USGS Publications Warehouse

    Wang, Y.S.; Miller, D.R.; Anderson, D.E.; Cionco, R.M.; Lin, J.D.

    1992-01-01

    Turbulent flow within and above an almond orchard was measured with three-dimensional wind sensors and fine-wire thermocouple sensors arranged in a horizontal array. The data showed organized turbulent structures as indicated by coherent asymmetric ramp patterns in the time series traces across the sensor array. Space-time correlation analysis indicated that velocity and temperature fluctuations were significantly correlated over a transverse distance of more than 4 m. Integral length scales of velocity and temperature fluctuations were substantially greater in unstable conditions than those in stable conditions. The coherence spectral analysis indicated that Davenport's geometric similarity hypothesis was satisfied in the lower frequency region. From the geometric similarity hypothesis, the spatial extents of large ramp structures were also estimated with the coherence functions.
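    For context, a generic sketch of how an integral time scale can be estimated from a single-point velocity record and converted to a length scale with Taylor's frozen-turbulence hypothesis; this is illustrative only and does not reproduce the paper's space-time correlation analysis across the sensor array (the AR(1) test signal, mean wind, and sampling interval below are made up):

        import numpy as np

        def integral_length_scale(u, mean_speed, dt):
            # integral time scale from the autocorrelation of velocity fluctuations,
            # integrated up to its first zero crossing, then L = U * T (Taylor's hypothesis)
            f = np.asarray(u, float) - np.mean(u)
            acf = np.correlate(f, f, mode="full")[f.size - 1:]
            acf = acf / acf[0]
            zero = np.argmax(acf <= 0.0) if np.any(acf <= 0.0) else acf.size
            t_int = np.trapz(acf[:zero], dx=dt)
            return mean_speed * t_int

        # synthetic 20 Hz record: AR(1) fluctuations superposed on a 2 m/s mean wind
        rng = np.random.default_rng(1)
        f = np.zeros(12000)
        for i in range(1, f.size):
            f[i] = 0.95 * f[i - 1] + rng.standard_normal()
        L = integral_length_scale(2.0 + f, mean_speed=2.0, dt=0.05)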

  14. Stochastic modelling of intermittency.

    PubMed

    Stemler, Thomas; Werner, Johannes P; Benner, Hartmut; Just, Wolfram

    2010-01-13

    Recently, methods have been developed to model low-dimensional chaotic systems in terms of stochastic differential equations. We tested such methods in an electronic circuit experiment. We aimed to obtain reliable drift and diffusion coefficients even without a pronounced time-scale separation of the chaotic dynamics. By comparing the analytical solutions of the corresponding Fokker-Planck equation with experimental data, we show here that crisis-induced intermittency can be described in terms of a stochastic model which is dominated by state-space-dependent diffusion. Furthermore, we demonstrate and discuss some limits of these modelling approaches using numerical simulations. This enables us to state a criterion that can be used to decide whether a stochastic model will capture the essential features of a given time series.
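    A generic conditional-moment sketch of how state-dependent drift and diffusion coefficients can be estimated from a time series (this is not the authors' procedure for crisis-induced intermittency; the Ornstein-Uhlenbeck test signal, bin count, and time step are arbitrary):

        import numpy as np

        def drift_diffusion(x, dt, n_bins=40):
            # estimate drift D1(x) and diffusion D2(x) from the first and second
            # conditional moments of the increments, binned by the current state
            dx = np.diff(x)
            xs = x[:-1]
            edges = np.linspace(xs.min(), xs.max(), n_bins + 1)
            idx = np.clip(np.digitize(xs, edges) - 1, 0, n_bins - 1)
            centers = 0.5 * (edges[:-1] + edges[1:])
            d1 = np.full(n_bins, np.nan)
            d2 = np.full(n_bins, np.nan)
            for k in range(n_bins):
                m = idx == k
                if m.sum() > 10:
                    d1[k] = dx[m].mean() / dt
                    d2[k] = (dx[m] ** 2).mean() / (2.0 * dt)
            return centers, d1, d2

        # test on a simulated Ornstein-Uhlenbeck process, where D1(x) = -x and D2 is constant
        rng = np.random.default_rng(0)
        dt, n, diff = 0.01, 200000, 0.5
        x = np.zeros(n)
        for i in range(1, n):
            x[i] = x[i - 1] - x[i - 1] * dt + np.sqrt(2.0 * diff * dt) * rng.standard_normal()
        centers, d1, d2 = drift_diffusion(x, dt)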

  15. Numerical solution of the unsteady diffusion-convection-reaction equation based on improved spectral Galerkin method

    NASA Astrophysics Data System (ADS)

    Zhong, Jiaqi; Zeng, Cheng; Yuan, Yupeng; Zhang, Yuzhe; Zhang, Ye

    2018-04-01

    The aim of this paper is to present an explicit numerical algorithm, based on an improved spectral Galerkin method, for solving the unsteady diffusion-convection-reaction equation. The principal characteristic of this approach is that it yields explicit eigenvalues and eigenvectors through the time-space separation method and an analysis of the boundary conditions. With the help of Fourier series and Galerkin truncation, we obtain a finite-dimensional system of ordinary differential equations that facilitates system analysis and controller design. The numerical solutions are demonstrated via two examples and compared with the finite element method. It is shown that the proposed method is effective.
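    In outline (illustrative notation, not the paper's exact derivation), the Galerkin truncation replaces the PDE by a finite set of ODEs for the modal amplitudes:

        \[
        u(x,t) \;\approx\; \sum_{n=1}^{N} a_n(t)\,\varphi_n(x), \qquad
        \frac{\mathrm{d}a_n}{\mathrm{d}t} \;=\; \lambda_n\,a_n(t) + \big\langle f(\cdot,t),\,\varphi_n \big\rangle, \quad n = 1,\dots,N,
        \]

    where the \varphi_n are eigenfunctions of the spatial operator satisfying the boundary conditions (projection is against the adjoint eigenfunctions when the operator is not self-adjoint), \lambda_n are the corresponding eigenvalues, and f collects the source and control terms; the resulting finite-dimensional system is what enables standard controller design.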

  16. Terahertz spectroscopic polarimetry of generalized anisotropic media composed of Archimedean spiral arrays: Experiments and simulations.

    PubMed

    Aschaffenburg, Daniel J; Williams, Michael R C; Schmuttenmaer, Charles A

    2016-05-07

    Terahertz time-domain spectroscopic polarimetry has been used to measure the polarization state of all spectral components in a broadband THz pulse upon transmission through generalized anisotropic media consisting of two-dimensional arrays of lithographically defined Archimedean spirals. The technique allows a full determination of the frequency-dependent, complex-valued transmission matrix and eigenpolarizations of the spiral arrays. Measurements were made on a series of spiral array orientations. The frequency-dependent transmission matrix elements as well as the eigenpolarizations were determined, and the eigenpolarizations were found to be elliptically corotating, as expected from their symmetry. Numerical simulations are in quantitative agreement with measured spectra.

  17. Robotic thoracic surgery: technical considerations and learning curve for pulmonary resection.

    PubMed

    Veronesi, Giulia

    2014-05-01

    Retrospective series indicate that robot-assisted approaches to lung cancer resection offer comparable radicality and safety to video-assisted thoracic surgery or open surgery. More intuitive movements, greater flexibility, and high-definition three-dimensional vision overcome limitations of video-assisted thoracic surgery and may encourage wider adoption of robotic surgery for lung cancer, particularly as more early stage cases are diagnosed by screening. High capital and running costs, limited instrument availability, and long operating times are important disadvantages. Entry of competitor companies should drive down costs. Studies are required to assess quality of life, morbidity, oncologic radicality, and cost effectiveness.

  18. Three-dimensional gauge theories and gravitational instantons from string theory

    NASA Astrophysics Data System (ADS)

    Cherkis, Sergey Alexander

    Various realizations of gauge theories in string theory allow an identification of their spaces of vacua with gravitational instantons. Also, they provide a correspondence of vacua of gauge theories with nonabelian monopole configurations and solutions of a system of integrable equations called Nahm equations. These identifications make it possible to apply powerful techniques of differential and algebraic geometry to solve the gauge theories in question. In other words, it becomes possible to find the exact metrics on their moduli spaces of vacua with all quantum corrections included. As another outcome we obtain for the first time the description of a series of all Dk-type gravitational instantons.

  19. Dynamic properties of combustion instability in a lean premixed gas-turbine combustor.

    PubMed

    Gotoda, Hiroshi; Nikimoto, Hiroyuki; Miyano, Takaya; Tachibana, Shigeru

    2011-03-01

    We experimentally investigate the dynamic behavior of the combustion instability in a lean premixed gas-turbine combustor from the viewpoint of nonlinear dynamics. A nonlinear time series analysis in combination with a surrogate data method clearly reveals that as the equivalence ratio increases, the dynamic behavior of the combustion instability undergoes a significant transition from stochastic fluctuation to periodic oscillation through low-dimensional chaotic oscillation. We also show that a nonlinear forecasting method is useful for predicting the short-term dynamic behavior of the combustion instability in a lean premixed gas-turbine combustor, which has not been addressed in the fields of combustion science and physics.
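    For illustration, a generic phase-randomized (Fourier-transform) surrogate generator of the kind used in surrogate data tests for nonlinearity; this is a standard construction, not necessarily the exact surrogate scheme of the paper, and the discriminating statistic and test signal are left open:

        import numpy as np

        def phase_randomized_surrogate(x, rng=None):
            # Fourier-transform surrogate: preserve the amplitude spectrum (linear
            # correlations) while randomizing the phases, destroying any nonlinear
            # deterministic structure
            rng = np.random.default_rng() if rng is None else rng
            x = np.asarray(x, float)
            spec = np.fft.rfft(x)
            phases = rng.uniform(0.0, 2.0 * np.pi, size=spec.size)
            phases[0] = 0.0                     # keep the mean
            if x.size % 2 == 0:
                phases[-1] = 0.0                # keep the Nyquist bin real
            return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=x.size)

        # a statistic (e.g. forecast error or correlation dimension) computed on the
        # original series is then compared with its distribution over many surrogates
        x = np.sin(np.arange(5000) * 0.1) + 0.1 * np.random.randn(5000)
        surrogates = [phase_randomized_surrogate(x) for _ in range(39)]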

  20. Adomian decomposition method used to solve the one-dimensional acoustic equations

    NASA Astrophysics Data System (ADS)

    Dispini, Meta; Mungkasi, Sudi

    2017-05-01

    In this paper we propose the use of the Adomian decomposition method to solve the one-dimensional acoustic equations. The recursion can be computed easily, and the result is an approximation of the exact solution. We use the Maple software to compute the series in the Adomian decomposition. We find that the Adomian decomposition method solves the acoustic equations with the physically correct behavior.
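    As a schematic of the recursion being computed (generic notation, not the paper's exact system or normalization): for a linear evolution equation u_t = A[u] with initial data u(x,0), the Adomian decomposition writes

        \[
        u(x,t) \;=\; \sum_{n=0}^{\infty} u_n(x,t), \qquad
        u_0(x,t) = u(x,0), \qquad
        u_{n+1}(x,t) = \int_0^{t} A\big[u_n\big](x,s)\,\mathrm{d}s,
        \]

    and a truncated partial sum of the u_n serves as the approximation; nonlinear terms, when present, are handled through Adomian polynomials.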
