Multiple Indicator Stationary Time Series Models.
ERIC Educational Resources Information Center
Sivo, Stephen A.
2001-01-01
Discusses the propriety and practical advantages of specifying multivariate time series models in the context of structural equation modeling for time series and longitudinal panel data. For time series data, the multiple indicator model specification improves on classical time series analysis. For panel data, the multiple indicator model…
A time-series approach to dynamical systems from classical and quantum worlds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fossion, Ruben
2014-01-08
This contribution discusses some recent applications of time-series analysis in Random Matrix Theory (RMT), and applications of RMT in the statistical analysis of eigenspectra of correlation matrices of multivariate time series.
Neural network versus classical time series forecasting models
NASA Astrophysics Data System (ADS)
Nor, Maria Elena; Safuan, Hamizah Mohd; Shab, Noorzehan Fazahiyah Md; Asrul, Mohd; Abdullah, Affendi; Mohamad, Nurul Asmaa Izzati; Lee, Muhammad Hisyam
2017-05-01
Artificial neural networks (ANNs) have an advantage in time series forecasting because they have the potential to solve complex forecasting problems: an ANN is a data-driven approach that can be trained to map the past values of a time series. In this study, the forecast performance of a neural network and a classical time series forecasting method, namely the seasonal autoregressive integrated moving average (SARIMA) model, was compared using gold price data. Moreover, the effect of different data preprocessing on the forecast performance of the neural network was examined. Forecast accuracy was evaluated using mean absolute deviation, root mean square error and mean absolute percentage error. It was found that the ANN produced the most accurate forecast when a Box-Cox transformation was used for data preprocessing.
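The three accuracy measures named in this abstract are simple to compute. A minimal Python sketch (the two-point series is invented for illustration), with a Box-Cox transform included since the study found that preprocessing decisive:

```python
import math

def box_cox(x, lam):
    """Box-Cox power transform; lam = 0 reduces to the log transform."""
    return math.log(x) if lam == 0 else (x ** lam - 1) / lam

def forecast_errors(actual, forecast):
    """Return (MAD, RMSE, MAPE) for paired actual/forecast values."""
    n = len(actual)
    errs = [a - f for a, f in zip(actual, forecast)]
    mad = sum(abs(e) for e in errs) / n
    rmse = math.sqrt(sum(e * e for e in errs) / n)
    mape = 100.0 * sum(abs(e / a) for e, a in zip(errs, actual)) / n
    return mad, rmse, mape

# Toy illustration with two invented price points.
mad, rmse, mape = forecast_errors([100.0, 200.0], [90.0, 210.0])
```

In a workflow like the paper's, the series would be Box-Cox transformed before training and back-transformed before the errors are scored.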
Harmonic oscillators and resonance series generated by a periodic unstable classical orbit
NASA Technical Reports Server (NTRS)
Kazansky, A. K.; Ostrovsky, Valentin N.
1995-01-01
The presence of an unstable periodic classical orbit allows one to introduce the decay time as a purely classical quantity: the inverse of the Lyapunov index which characterizes the orbit instability. The Uncertainty Relation gives the corresponding resonance width, which is proportional to the Planck constant. A more elaborate analysis is based on the parabolic equation method, where the problem is effectively reduced to a multidimensional harmonic oscillator with time-dependent frequency. The resonances form series in the complex energy plane that are equidistant in the direction perpendicular to the real axis. Applications of the general approach to various problems in atomic physics are briefly outlined.
NASA Astrophysics Data System (ADS)
Shimada, Yutaka; Ikeguchi, Tohru; Shigehara, Takaomi
2012-10-01
In this Letter, we propose a framework to transform a complex network to a time series. The transformation from complex networks to time series is realized by classical multidimensional scaling. Applying the transformation method to a model proposed by Watts and Strogatz [Nature (London) 393, 440 (1998)], we show that ring lattices are transformed to periodic time series, small-world networks to noisy periodic time series, and random networks to random time series. We also show that these relationships hold analytically, using circulant-matrix theory and the perturbation theory of linear operators. The results are generalized to several high-dimensional lattices.
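The transformation described here, classical multidimensional scaling applied to graph distances, can be sketched with NumPy; this is not the authors' code, just the textbook Torgerson recipe. For a ring lattice the embedded points should, as the Letter claims, trace a periodic (circular) trajectory, so all points sit at the same radius:

```python
import numpy as np

def ring_distances(n):
    """Shortest-path distance matrix of a ring lattice with n nodes."""
    idx = np.arange(n)
    d = np.abs(idx[:, None] - idx[None, :])
    return np.minimum(d, n - d).astype(float)

def classical_mds(D, dim=2):
    """Classical (Torgerson) MDS: double-center -D^2/2, then eigen-embed."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J
    w, V = np.linalg.eigh(B)
    order = np.argsort(w)[::-1][:dim]            # top-dim eigenpairs
    return V[:, order] * np.sqrt(np.maximum(w[order], 0.0))

X = classical_mds(ring_distances(24), dim=2)
radii = np.linalg.norm(X, axis=1)                # circular embedding => equal radii
```

Reading the 2-D coordinates in node order then yields the periodic time series the paper studies.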
Onisko, Agnieszka; Druzdzel, Marek J; Austin, R Marshall
2016-01-01
Classical statistics is a well-established approach in the analysis of medical data. While the medical community seems to be familiar with the concept of a statistical analysis and its interpretation, the Bayesian approach, argued by many of its proponents to be superior to the classical frequentist approach, is still not well-recognized in the analysis of medical data. The goal of this study is to encourage data analysts to use the Bayesian approach, such as modeling with graphical probabilistic networks, as an insightful alternative to classical statistical analysis of medical data. This paper offers a comparison of two approaches to analysis of medical time series data: (1) classical statistical approach, such as the Kaplan-Meier estimator and the Cox proportional hazards regression model, and (2) dynamic Bayesian network modeling. Our comparison is based on time series cervical cancer screening data collected at Magee-Womens Hospital, University of Pittsburgh Medical Center over 10 years. The main outcomes of our comparison are cervical cancer risk assessments produced by the three approaches. However, our analysis discusses also several aspects of the comparison, such as modeling assumptions, model building, dealing with incomplete data, individualized risk assessment, results interpretation, and model validation. Our study shows that the Bayesian approach is (1) much more flexible in terms of modeling effort, and (2) it offers an individualized risk assessment, which is more cumbersome for classical statistical approaches.
Time series, correlation matrices and random matrix models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vinayak; Seligman, Thomas H.
2014-01-08
In this set of five lectures the authors have presented techniques to analyze open classical and quantum systems using correlation matrices. For diverse reasons we shall see that random matrices play an important role to describe a null hypothesis or a minimum information hypothesis for the description of a quantum system or subsystem. In the former case we consider various forms of correlation matrices of time series associated with the classical observables of some system. The fact that such series are necessarily finite inevitably introduces noise, and this finite-time influence leads to a random or stochastic component in these time series. By consequence random correlation matrices have a random component, and corresponding ensembles are used. In the latter case we use random matrices to describe a high-temperature environment or uncontrolled perturbations, ensembles of differing chaotic systems, etc. The common theme of the lectures is thus the importance of random matrix theory in a wide range of fields in and around physics.
NASA Astrophysics Data System (ADS)
Ningrum, R. W.; Surarso, B.; Farikhin; Safarudin, Y. M.
2018-03-01
This paper proposes a combination of the Firefly Algorithm (FA) and Chen's fuzzy time series forecasting. Most existing fuzzy forecasting methods based on fuzzy time series use a static length of intervals. Therefore, we apply an artificial-intelligence technique, the Firefly Algorithm (FA), to set a non-stationary length of intervals for each cluster in Chen's method. The method is evaluated by applying it to the Jakarta Composite Index (IHSG) and comparing it with classical Chen fuzzy time series forecasting. Its performance is verified through simulation using Matlab.
Koopman Operator Framework for Time Series Modeling and Analysis
NASA Astrophysics Data System (ADS)
Surana, Amit
2018-01-01
We propose an interdisciplinary framework for time series classification, forecasting, and anomaly detection by combining concepts from Koopman operator theory, machine learning, and linear systems and control theory. At the core of this framework is nonlinear dynamic generative modeling of time series using the Koopman operator, which is an infinite-dimensional but linear operator. Rather than working with the underlying nonlinear model, we propose two simpler linear representations or model forms based on Koopman spectral properties. We show that these model forms are invariants of the generative model and can be identified directly from data using techniques for computing Koopman spectral properties, without requiring explicit knowledge of the generative model. We also introduce different notions of distance on the space of such model forms, which are essential for model comparison/clustering. We employ the space of Koopman model forms equipped with distance in conjunction with classical machine learning techniques to develop a framework for automatic feature generation for time series classification. The forecasting/anomaly detection framework is based on using Koopman model forms along with classical linear systems and control approaches. We demonstrate the proposed framework for human activity classification, and for time series forecasting/anomaly detection in a power grid application.
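A standard finite-dimensional route to the Koopman spectral properties this abstract relies on is dynamic mode decomposition (DMD). The NumPy sketch below (not the authors' framework, just the textbook exact-DMD recipe) recovers the eigenvalues of a plain planar rotation from snapshot data:

```python
import numpy as np

def dmd_eigs(X, r=2):
    """Exact DMD: fit a linear map X2 ~= A X1 in a rank-r subspace and
    return its eigenvalues (approximate Koopman eigenvalues)."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
    return np.linalg.eigvals(A_tilde)

# A pure rotation by 0.1 rad per step: Koopman eigenvalues e^{+-0.1i}.
t = 0.1 * np.arange(100)
X = np.vstack([np.cos(t), np.sin(t)])
eigs = dmd_eigs(X, r=2)
```

The recovered eigenvalues sit on the unit circle at angles of plus and minus 0.1 rad, the kind of invariant the paper's model forms are built from.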
Time-Series Analysis of Intermittent Velocity Fluctuations in Turbulent Boundary Layers
NASA Astrophysics Data System (ADS)
Zayernouri, Mohsen; Samiee, Mehdi; Meerschaert, Mark M.; Klewicki, Joseph
2017-11-01
Classical turbulence theory is modified under the inhomogeneities produced by the presence of a wall. In this regard, we propose a new time series model for the streamwise velocity fluctuations in the inertial sub-layer of turbulent boundary layers. The new model employs tempered fractional calculus and seamlessly extends the classical 5/3 spectral model of Kolmogorov in the inertial subrange to the whole spectrum from large to small scales. Moreover, the proposed time-series model allows the quantification of data uncertainties in the underlying stochastic cascade of turbulent kinetic energy. The model is tested using well-resolved streamwise velocity measurements up to friction Reynolds numbers of about 20,000. The physics of the energy cascade are briefly described within the context of the determined model parameters. This work was supported by the AFOSR Young Investigator Program (YIP) award (FA9550-17-1-0150) and partially by MURI/ARO (W911NF-15-1-0562).
2017-01-01
…are the shear relaxation moduli and relaxation times, which make up the classical Prony series. A Prony-series expansion is a relaxation-function approximation for modeling time-dependent damping. Two scalar parameters control the nonlinearity of the Prony series. … Under the … Velodyne that best fit the experimental stress-strain data. To do so, the Design Analysis Kit for Optimization and Terascale Applications (DAKOTA
NASA Astrophysics Data System (ADS)
Fink, G.; Koch, M.
2010-12-01
An important aspect in water resources and hydrological engineering is the assessment of hydrological risk due to the occurrence of extreme events, e.g. droughts or floods. When dealing with the latter - as is the focus here - the classical methods of flood frequency analysis (FFA) are usually used for the proper dimensioning of a hydraulic structure, for the purpose of bringing down the flood risk to an acceptable level. FFA is based on extreme value statistics theory. Despite the progress of methods in this scientific branch, the development, choice, and fitting of an appropriate distribution function still remains a challenge, particularly when certain underlying assumptions of the theory are not met in real applications. This is, for example, the case when the stationarity condition for a random flood time series is no longer satisfied, as could be the situation when long-term hydrological impacts of future climate change are to be considered. The objective here is to verify the applicability of classical (stationary) FFA to predicted flood time series in the Fulda catchment in central Germany, as they may occur in the wake of climate change during the 21st century. These discharge time series at the outlet of the Fulda basin have been simulated with a distributed hydrological model (SWAT) that is forced by predicted climate variables of a regional climate model for Germany (REMO). From the simulated future daily time series, annual maximum (extreme) values are computed and analyzed for the purpose of risk evaluation. Although the 21st century estimated extreme flood series of the Fulda river turn out to be only mildly non-stationary, alleviating the need for further action and concern at first sight, the more detailed analysis of the risk, as quantified, for example, by the return period, shows non-negligible differences in the calculated risk levels.
This could be verified by employing a new method, the so-called flood series maximum analysis (FSMA) method, which consists in the stochastic simulation of numerous trajectories of a stochastic process with a given GEV distribution over a certain length of time (larger than the desired return period). Then the maximum value for each trajectory is computed, all of which are then used to determine the empirical distribution of this maximum series. Through graphical inversion of this distribution function, the size of the design flood for a given risk (quantile) and given life duration can be inferred. The results of numerous simulations show that for stationary flood series, the new FSMA method results, expectedly, in nearly identical risk values as the classical FFA approach. However, once the flood time series becomes slightly non-stationary - for reasons as discussed - and regardless of whether the trend is increasing or decreasing, large differences in the computed risk values for a given design flood occur. Or, in other words, for the same risk the new FSMA method would lead to different values of the design flood for a hydraulic structure than the classical FFA method. This, in turn, could lead to some cost savings in the realization of a hydraulic project.
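The FSMA procedure just described is easy to sketch for the stationary case. Under a Gumbel assumption (GEV with zero shape parameter), the simulated design value can be checked against the closed-form quantile of the trajectory maximum; the parameter values below are illustrative, not fitted to the Fulda data:

```python
import math
import random

def gumbel_sample(mu, sigma, rng):
    """Inverse-CDF draw from a Gumbel (GEV, shape = 0) distribution."""
    return mu - sigma * math.log(-math.log(rng.random()))

def fsma_design_value(mu, sigma, life, n_traj, risk, seed=1):
    """FSMA sketch: simulate n_traj trajectories of `life` annual maxima,
    take the maximum of each trajectory, and read off the (1 - risk)
    empirical quantile of those trajectory maxima."""
    rng = random.Random(seed)
    maxima = sorted(max(gumbel_sample(mu, sigma, rng) for _ in range(life))
                    for _ in range(n_traj))
    return maxima[int((1.0 - risk) * n_traj)]

# Stationary check: the max of `life` Gumbel(mu, sigma) draws is again
# Gumbel with location mu + sigma * ln(life), so its median is known.
est = fsma_design_value(0.0, 1.0, life=50, n_traj=20000, risk=0.5)
```

For stationary series the estimate matches the closed-form value, reproducing the paper's finding that FSMA and classical FFA agree in that regime; the interesting differences appear once a trend is injected into mu.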
Scale invariance in chaotic time series: Classical and quantum examples
NASA Astrophysics Data System (ADS)
Landa, Emmanuel; Morales, Irving O.; Stránský, Pavel; Fossion, Rubén; Velázquez, Victor; López Vieyra, J. C.; Frank, Alejandro
Important aspects of chaotic behavior appear in systems of low dimension, as illustrated by the map module 1. It is indeed a remarkable fact that all systems that make a transition from order to disorder display common properties, irrespective of their exact functional form. We discuss evidence for 1/f power spectra in chaotic time series associated with classical and quantum examples: the one-dimensional map module 1 and the spectrum of 48Ca. A Detrended Fluctuation Analysis (DFA) method is applied to investigate the scaling properties of the energy fluctuations in the spectrum of 48Ca, obtained with a large realistic shell model calculation (ANTOINE code) and with a random shell model (TBRE) calculation, as well as in the time series obtained with the map module 1. We compare the scale-invariant properties of the 48Ca nuclear spectrum with similar analyses applied to the RMT ensembles GOE and GDE. A comparison with the corresponding power spectra is made in both cases. The possible consequences of the results are discussed.
Improvements to surrogate data methods for nonstationary time series.
Lucio, J H; Valdés, R; Rodríguez, L R
2012-05-01
The method of surrogate data has been extensively applied to hypothesis testing of system linearity, when only one realization of the system, a time series, is known. Normally, surrogate data should preserve the linear stochastic structure and the amplitude distribution of the original series. Classical surrogate data methods (such as random permutation, amplitude adjusted Fourier transform, or iterative amplitude adjusted Fourier transform) are successful at preserving one or both of these features in stationary cases. However, they always produce stationary surrogates, hence existing nonstationarity could be interpreted as dynamic nonlinearity. Certain modifications have been proposed that additionally preserve some nonstationarity, at the expense of reproducing a great deal of nonlinearity. However, even those methods generally fail to preserve the trend (i.e., global nonstationarity in the mean) of the original series. This is the case of time series with unit roots in their autoregressive structure. Additionally, those methods, based on Fourier transform, either need first and last values in the original series to match, or they need to select a piece of the original series with matching ends. These conditions are often inapplicable and the resulting surrogates are adversely affected by the well-known artefact problem. In this study, we propose a simple technique that, applied within existing Fourier-transform-based methods, generates surrogate data that jointly preserve the aforementioned characteristics of the original series, including (even strong) trends. Moreover, our technique avoids the negative effects of end mismatch. Several artificial and real, stationary and nonstationary, linear and nonlinear time series are examined, in order to demonstrate the advantages of the methods. Corresponding surrogate data are produced with the classical and with the proposed methods, and the results are compared.
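The core of the Fourier-transform-based surrogate methods discussed in this abstract is phase randomization. A minimal NumPy sketch (without the amplitude-adjustment iterations of AAFT/IAAFT, and without the trend-preserving technique the paper proposes):

```python
import numpy as np

def ft_surrogate(x, seed=0):
    """Phase-randomized surrogate: keep the amplitude spectrum of x
    (hence its linear autocorrelation), scramble the Fourier phases."""
    rng = np.random.default_rng(seed)
    X = np.fft.rfft(x)
    Xs = np.abs(X) * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, X.size))
    Xs[0] = X[0]                    # keep the mean
    if x.size % 2 == 0:
        Xs[-1] = X[-1]              # keep the Nyquist bin real
    return np.fft.irfft(Xs, n=x.size)

x = np.sin(0.3 * np.arange(64)) + 0.1 * np.cos(1.1 * np.arange(64))
s = ft_surrogate(x)
```

Because the surrogate shares the periodogram of the original, any statistic that differs markedly between the series and an ensemble of such surrogates points to nonlinearity; the end-mismatch artefact the paper addresses arises when x's first and last values differ.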
Wilkinson, R L
1995-07-01
Mystery has long surrounded the collapse of the Classic lowland Mayan civilization of the Peten region in Guatemala. Recent population reconstructions derived from archaeological evidence from the central lowlands show population declines from urban levels of between 2.5 and 3.5 million to around 536,000 in the two-hundred-year interval between 800 A.D. and 1000 A.D., the period known as the Classic Maya Collapse. A steady, but lesser, rate of population decline continued until the time of European contact. When knowledge of the ecology and epidemiology of yellow fever and its known mosquito vectors is compared with what is known of the ecological conditions of lowland Guatemala as modified by the Classic Maya, provocative similarities are observed. When infection and mortality patterns of more recent urban yellow fever epidemics are used as models for a possible series of Classic Maya epidemics, a correlation is noted between the modeled rate of population decline for a series of epidemics and the population decline figures reconstructed from archaeological evidence.
Clustering of financial time series
NASA Astrophysics Data System (ADS)
D'Urso, Pierpaolo; Cappelli, Carmela; Di Lallo, Dario; Massari, Riccardo
2013-05-01
This paper addresses the topic of classifying financial time series in a fuzzy framework, proposing two fuzzy clustering models both based on GARCH models. In general, clustering of financial time series, due to their peculiar features, requires the definition of suitable distance measures. To this aim, the first fuzzy clustering model exploits the autoregressive representation of GARCH models and employs, in the framework of a partitioning-around-medoids algorithm, the classical autoregressive metric. The second fuzzy clustering model, also based on a partitioning-around-medoids algorithm, uses the Caiado distance, a Mahalanobis-like distance based on estimated GARCH parameters and covariances that takes into account the information about the volatility structure of time series. In order to illustrate the merits of the proposed fuzzy approaches, an application to the problem of classifying 29 time series of Euro exchange rates against international currencies is presented and discussed, also comparing the fuzzy models with their crisp version.
Advanced spectral methods for climatic time series
Ghil, M.; Allen, M.R.; Dettinger, M.D.; Ide, K.; Kondrashov, D.; Mann, M.E.; Robertson, A.W.; Saunders, A.; Tian, Y.; Varadi, F.; Yiou, P.
2002-01-01
The analysis of univariate or multivariate time series provides crucial information to describe, understand, and predict climatic variability. The discovery and implementation of a number of novel methods for extracting useful information from time series has recently revitalized this classical field of study. Considerable progress has also been made in interpreting the information so obtained in terms of dynamical systems theory. In this review we describe the connections between time series analysis and nonlinear dynamics, discuss signal-to-noise enhancement, and present some of the novel methods for spectral analysis. The various steps, as well as the advantages and disadvantages of these methods, are illustrated by their application to an important climatic time series, the Southern Oscillation Index. This index captures major features of interannual climate variability and is used extensively in its prediction. Regional and global sea surface temperature data sets are used to illustrate multivariate spectral methods. Open questions and further prospects conclude the review.
Homogenising time series: beliefs, dogmas and facts
NASA Astrophysics Data System (ADS)
Domonkos, P.
2011-06-01
In recent decades various homogenisation methods have been developed, but the real effects of their application on time series are still not sufficiently known. The ongoing COST action HOME (COST ES0601) is devoted to revealing the real impacts of homogenisation methods in more detail and with higher confidence than earlier. As a part of the COST activity, a benchmark dataset was built whose characteristics approach well the characteristics of real networks of observed time series. This dataset offers a much better opportunity than ever before to test the wide variety of homogenisation methods and to analyse the real effects of selected theoretical recommendations. Empirical results show that real observed time series usually include several inhomogeneities of different sizes. Small inhomogeneities often have statistical characteristics similar to natural changes caused by climatic variability; thus the pure application of the classic theory, that change-points of observed time series can be found and corrected one-by-one, is impossible. However, after homogenisation the linear trends, seasonal changes and long-term fluctuations of time series are usually much closer to reality than in the raw time series. Some problems around detecting multiple structures of inhomogeneities, as well as problems of time series comparison within homogenisation procedures, are discussed briefly in the study.
Application of the Allan Variance to Time Series Analysis in Astrometry and Geodesy: A Review.
Malkin, Zinovy
2016-04-01
The Allan variance (AVAR) was introduced 50 years ago as a statistical tool for assessing the stability of frequency standards. For the past decades, AVAR has increasingly been used in geodesy and astrometry to assess the noise characteristics in geodetic and astrometric time series. A specific feature of astrometric and geodetic measurements, as compared with clock measurements, is that they are generally associated with uncertainties; thus, an appropriate weighting should be applied during data analysis. In addition, some physically connected scalar time series naturally form series of multidimensional vectors. For example, the three station coordinate time series X, Y, and Z can be combined to analyze 3-D station position variations. The classical AVAR is not intended for processing unevenly weighted and/or multidimensional data. Therefore, AVAR modifications, namely weighted AVAR (WAVAR), multidimensional AVAR (MAVAR), and weighted multidimensional AVAR (WMAVAR), were introduced to overcome these deficiencies. In this paper, a brief review is given of the experience of using AVAR and its modifications in processing astrogeodetic time series.
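The classical (unweighted, scalar) AVAR that these modifications build on is a short formula; a pure-Python sketch with a deterministic check on an alternating series:

```python
def allan_variance(y, m):
    """Non-overlapping Allan variance of series y at averaging factor m:
    AVAR = (1 / (2 (M - 1))) * sum_k (ybar_{k+1} - ybar_k)^2,
    where ybar_k are averages over M = len(y) // m adjacent bins."""
    M = len(y) // m
    avg = [sum(y[k * m:(k + 1) * m]) / m for k in range(M)]
    return sum((avg[k + 1] - avg[k]) ** 2 for k in range(M - 1)) / (2 * (M - 1))

y = [0.0, 1.0] * 8            # alternating series, period 2
```

Averaging over bins of two cancels the alternation entirely, so AVAR drops from 0.5 at m = 1 to 0 at m = 2. The weighted and multidimensional variants in the paper replace the plain squared difference with weighted or vector-norm analogues.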
NASA Astrophysics Data System (ADS)
Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris
2018-02-01
We investigate the predictability of monthly temperature and precipitation by applying automatic univariate time series forecasting methods to a sample of 985 40-year-long monthly temperature and 1552 40-year-long monthly precipitation time series. The methods include a naïve one based on the monthly values of the last year, as well as the random walk (with drift), AutoRegressive Fractionally Integrated Moving Average (ARFIMA), exponential smoothing state-space model with Box-Cox transformation, ARMA errors, Trend and Seasonal components (BATS), simple exponential smoothing, Theta and Prophet methods. Prophet is a recently introduced model inspired by the nature of time series forecasted at Facebook and has not been applied to hydrometeorological time series before, while the use of random walk, BATS, simple exponential smoothing and Theta is rare in hydrology. The methods are tested in performing multi-step ahead forecasts for the last 48 months of the data. We further investigate how different choices of handling the seasonality and non-normality affect the performance of the models. The results indicate that: (a) all the examined methods apart from the naïve and random walk ones are accurate enough to be used in long-term applications; (b) monthly temperature and precipitation can be forecasted to a level of accuracy which can barely be improved using other methods; (c) the externally applied classical seasonal decomposition results mostly in better forecasts compared to the automatic seasonal decomposition used by the BATS and Prophet methods; and (d) Prophet is competitive, especially when it is combined with externally applied classical seasonal decomposition.
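The "externally applied classical seasonal decomposition" favoured in finding (c) is the textbook moving-average scheme. A pure-Python sketch for an even period (monthly data would use period 12; period 4 and an invented trend-plus-seasonal series keep the check short):

```python
def classical_decompose(y, period):
    """Classical additive decomposition (even period assumed):
    centered moving-average trend, season-averaged seasonal indices."""
    n, p = len(y), period
    half = p // 2
    trend = [None] * n
    for i in range(half, n - half):
        window = y[i - half:i + half + 1]
        # centered MA for even p: half weight on the two end points
        trend[i] = (0.5 * window[0] + sum(window[1:-1]) + 0.5 * window[-1]) / p
    detr = [y[i] - trend[i] for i in range(n) if trend[i] is not None]
    buckets = [[] for _ in range(p)]
    for j, d in enumerate(detr):
        buckets[(j + half) % p].append(d)
    idx = [sum(b) / len(b) for b in buckets]
    mean_idx = sum(idx) / p
    seasonal = [v - mean_idx for v in idx]   # force indices to sum to zero
    return trend, seasonal

# Linear trend plus an exact period-4 seasonal pattern [0, 1, 0, -1].
y = [i + [0, 1, 0, -1][i % 4] for i in range(16)]
trend, seasonal = classical_decompose(y, 4)
```

Because the seasonal pattern sums to zero over any full period, the centered moving average recovers the linear trend exactly and the seasonal indices come back as [0, 1, 0, -1]; the forecast is then made on the deseasonalized series and the indices are added back.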
Xue, Fangzheng; Li, Qian; Li, Xiumin
2017-01-01
Recently, the echo state network (ESN) has attracted a great deal of attention due to its high accuracy and efficient learning performance. Compared with the traditional random structure and classical sigmoid units, simple circle topology and leaky integrator neurons have more advantages for the reservoir computing of an ESN. In this paper, we propose a new model of ESN with both a circle reservoir structure and leaky integrator units. By comparing the prediction capability on the Mackey-Glass chaotic time series of four ESN models - classical ESN, circle ESN, traditional leaky integrator ESN, and circle leaky integrator ESN - we find that our circle leaky integrator ESN shows significantly better performance than the other ESNs, with a roughly two-order-of-magnitude reduction in predictive error. Moreover, this model has a stronger ability to approximate nonlinear dynamics and resist noise than a conventional ESN and ESNs with only a simple circle structure or leaky integrator neurons. Our results show that the combination of circle topology and leaky integrator neurons can remarkably increase dynamical diversity and meanwhile decrease the correlation of reservoir states, which contributes to the significant improvement in the computational performance of the echo state network on time series prediction.
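The two ingredients this paper combines, circle topology and leaky-integrator units, can be sketched as follows. Reservoir size, leak rate, and input scaling below are illustrative choices, and the ridge-regression readout that completes an ESN is omitted:

```python
import numpy as np

def circle_reservoir(n, r=0.9):
    """Circle-topology reservoir: unit i feeds only unit (i+1) mod n,
    so the spectral radius is exactly r."""
    W = np.zeros((n, n))
    for i in range(n):
        W[(i + 1) % n, i] = r
    return W

def esn_states(u, W, w_in, alpha=0.5):
    """Leaky-integrator update: x <- (1 - a) x + a tanh(W x + w_in u)."""
    x = np.zeros(W.shape[0])
    states = []
    for u_t in u:
        x = (1.0 - alpha) * x + alpha * np.tanh(W @ x + w_in * u_t)
        states.append(x.copy())
    return np.array(states)

W = circle_reservoir(8, r=0.9)
S = esn_states(np.sin(0.2 * np.arange(50)), W, w_in=0.5 * np.ones(8))
```

The ring's eigenvalues are r times the n-th roots of unity, which is why a single weight value pins the spectral radius; the leaky update keeps every state strictly inside (-1, 1).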
Time series modeling in traffic safety research.
Lavrenz, Steven M; Vlahogianni, Eleni I; Gkritza, Konstantina; Ke, Yue
2018-08-01
The use of statistical models for analyzing traffic safety (crash) data has been well-established. However, time series techniques have traditionally been underrepresented in the corresponding literature, due to challenges in data collection, along with a limited knowledge of proper methodology. In recent years, new types of high-resolution traffic safety data, especially in measuring driver behavior, have made time series modeling techniques an increasingly salient topic of study. Yet there remains a dearth of information to guide analysts in their use. This paper provides an overview of the state of the art in using time series models in traffic safety research, and discusses some of the fundamental techniques and considerations in classic time series modeling. It also presents ongoing and future opportunities for expanding the use of time series models, and explores newer modeling techniques, including computational intelligence models, which hold promise in effectively handling ever-larger data sets. The information contained herein is meant to guide safety researchers in understanding this broad area of transportation data analysis, and provide a framework for understanding safety trends that can influence policy-making.
Deriving the exact nonadiabatic quantum propagator in the mapping variable representation.
Hele, Timothy J H; Ananth, Nandini
2016-12-22
We derive an exact quantum propagator for nonadiabatic dynamics in multi-state systems using the mapping variable representation, where classical-like Cartesian variables are used to represent both continuous nuclear degrees of freedom and discrete electronic states. The resulting Liouvillian is a Moyal series that, when suitably approximated, can allow for the use of classical dynamics to efficiently model large systems. We demonstrate that different truncations of the exact Liouvillian lead to existing approximate semiclassical and mixed quantum-classical methods and we derive an associated error term for each method. Furthermore, by combining the imaginary-time path-integral representation of the Boltzmann operator with the exact Liouvillian, we obtain an analytic expression for thermal quantum real-time correlation functions. These results provide a rigorous theoretical foundation for the development of accurate and efficient classical-like dynamics to compute observables such as electron transfer reaction rates in complex quantized systems.
Forbidden patterns in financial time series
NASA Astrophysics Data System (ADS)
Zanin, Massimiliano
2008-03-01
The existence of forbidden patterns, i.e., certain missing sequences in a given time series, is a recently proposed instrument of potential application in the study of time series. Forbidden patterns are related to the permutation entropy, which has the basic properties of classic chaos indicators, such as the Lyapunov exponent or Kolmogorov entropy, thus allowing one to separate deterministic (usually chaotic) from random series; however, it requires fewer values of the series to be calculated, and it is suitable for use with small datasets. In this paper, the appearance of forbidden patterns is studied in different economic indicators such as stock indices (Dow Jones Industrial Average and Nasdaq Composite), NYSE stocks (IBM and Boeing), and others (ten-year bond interest rate), to find evidence of deterministic behavior in their evolutions. Moreover, the rate of appearance of the forbidden patterns is calculated, and some considerations about the underlying dynamics are suggested.
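Forbidden patterns are straightforward to count. A pure-Python sketch of ordinal-pattern counting (order d = 3), checked on a strictly monotone series, for which every pattern except the ascending one is forbidden:

```python
from itertools import permutations
from math import log

def ordinal_patterns(x, d=3):
    """Count ordinal patterns of order d in series x; a pattern with
    count zero is a forbidden pattern."""
    counts = {p: 0 for p in permutations(range(d))}
    for i in range(len(x) - d + 1):
        window = x[i:i + d]
        pattern = tuple(sorted(range(d), key=lambda k: window[k]))
        counts[pattern] += 1
    forbidden = [p for p, c in counts.items() if c == 0]
    return counts, forbidden

def permutation_entropy(counts):
    """Shannon entropy of the ordinal-pattern distribution."""
    total = sum(counts.values())
    probs = [c / total for c in counts.values() if c > 0]
    return -sum(p * log(p) for p in probs)

counts, forbidden = ordinal_patterns(list(range(10)), d=3)
```

A fully random series would eventually visit all d! patterns, so a persistent deficit of patterns, as found here for the financial series, is the fingerprint of deterministic structure.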
Numerical solution methods for viscoelastic orthotropic materials
NASA Technical Reports Server (NTRS)
Gramoll, K. C.; Dillard, D. A.; Brinson, H. F.
1988-01-01
Numerical solution methods for viscoelastic orthotropic materials, specifically fiber reinforced composite materials, are examined. The methods include classical lamination theory using time increments, direct solution of the Volterra integral, Zienkiewicz's linear Prony series method, and a new method called the Nonlinear Differential Equation Method (NDEM) which uses a nonlinear Prony series. The criteria used for comparison of the various methods include the stability of the solution technique, time step size stability, computer solution time length, and computer memory storage. The Volterra integral allowed the implementation of higher order solution techniques but had difficulties solving singular and weakly singular compliance functions. The Zienkiewicz solution technique, which requires the viscoelastic response to be modeled by a Prony series, works well for linear viscoelastic isotropic materials and small time steps. The new method, NDEM, uses a modified Prony series which allows nonlinear stress effects to be included and can be used with orthotropic nonlinear viscoelastic materials. The NDEM technique is shown to be accurate and stable for both linear and nonlinear conditions with minimal computer time.
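The Prony series that these methods rely on is simply a sum of decaying exponentials plus a long-time modulus; a minimal sketch (the coefficients are invented for illustration):

```python
import math

def prony_modulus(t, g_inf, terms):
    """Relaxation modulus G(t) = G_inf + sum_i G_i * exp(-t / tau_i),
    with terms = [(G_i, tau_i), ...]."""
    return g_inf + sum(g * math.exp(-t / tau) for g, tau in terms)

# One-term example: instantaneous modulus 3.0 relaxing toward 1.0.
g0 = prony_modulus(0.0, 1.0, [(2.0, 5.0)])
```

The NDEM approach described above would replace this fixed-coefficient series with one whose terms respond nonlinearly to the stress state.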
Riemannian multi-manifold modeling and clustering in brain networks
NASA Astrophysics Data System (ADS)
Slavakis, Konstantinos; Salsabilian, Shiva; Wack, David S.; Muldoon, Sarah F.; Baidoo-Williams, Henry E.; Vettel, Jean M.; Cieslak, Matthew; Grafton, Scott T.
2017-08-01
This paper introduces Riemannian multi-manifold modeling in the context of brain-network analytics: brain-network time series yield features which are modeled as points lying in or close to a union of a finite number of submanifolds within a known Riemannian manifold. Distinguishing disparate time series thus amounts to clustering multiple Riemannian submanifolds. To this end, two feature-generation schemes for brain-network time series are put forth. The first one is motivated by Granger-causality arguments and uses an auto-regressive moving average model to map low-rank linear vector subspaces, spanned by column vectors of appropriately defined observability matrices, to points in the Grassmann manifold. The second one utilizes (non-linear) dependencies among network nodes by introducing kernel-based partial correlations to generate points in the manifold of positive-definite matrices. Based on recently developed research on clustering Riemannian submanifolds, an algorithm is provided for distinguishing time series based on their Riemannian-geometry properties. Numerical tests on time series, synthetically generated from real brain-network structural connectivity matrices, reveal that the proposed scheme outperforms classical and state-of-the-art techniques in clustering brain-network states/structures.
NASA Astrophysics Data System (ADS)
Serov, Vladislav V.; Kheifets, A. S.
2014-12-01
We analyze a transfer ionization (TI) reaction in the fast proton-helium collision H++He →H0+He2 ++ e- by solving a time-dependent Schrödinger equation (TDSE) under the classical projectile motion approximation in one-dimensional kinematics. In addition, we construct various time-independent analogs of our model using lowest-order perturbation theory in the form of the Born series. By comparing various aspects of the TDSE and the Born series calculations, we conclude that the recent discrepancies between experimental and theoretical data may be attributed to a deficiency of the Born models used by other authors. We demonstrate that the correct Born series for TI should include the momentum-space overlap between the double-ionization amplitude and the wave function of the transferred electron.
NASA Technical Reports Server (NTRS)
Aires, Filipe; Rossow, William B.; Chedin, Alain; Hansen, James E. (Technical Monitor)
2001-01-01
The Independent Component Analysis is a recently developed technique for component extraction. This new method requires the statistical independence of the extracted components, a stronger constraint that uses higher-order statistics, instead of the classical decorrelation, a weaker constraint that uses only second-order statistics. This technique has been used recently for the analysis of geophysical time series with the goal of investigating the causes of variability in observed data (i.e. an exploratory approach). We demonstrate with a data simulation experiment that, if initialized with a Principal Component Analysis, the Independent Component Analysis performs a rotation of the classical PCA (or EOF) solution. This rotation uses no localization criterion, unlike other Rotation Techniques (RT); only the global generalization of decorrelation by statistical independence is used. This rotation of the PCA solution appears to overcome the tendency of PCA to mix several physical phenomena, even when the signal is just their linear sum.
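The core point, that decorrelation alone cannot identify components because any rotation of whitened data is still decorrelated, can be checked numerically. The mixing matrix and sources below are illustrative assumptions, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two independent non-Gaussian sources, linearly mixed.
s = rng.uniform(-1, 1, size=(2, 5000))
x = np.array([[1.0, 0.6], [0.4, 1.0]]) @ s

# PCA-style whitening: decorrelate and scale to unit variance.
cov = np.cov(x)
vals, vecs = np.linalg.eigh(cov)
white = np.diag(vals ** -0.5) @ vecs.T @ x
assert np.allclose(np.cov(white), np.eye(2), atol=1e-10)

# Any further rotation leaves the covariance untouched, so second-order
# statistics cannot pick out the true sources; ICA breaks this tie by
# additionally demanding statistical independence (higher-order statistics).
theta = np.pi / 6
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
assert np.allclose(np.cov(rot @ white), np.eye(2), atol=1e-10)
```

This is exactly the rotational freedom that ICA, initialized from the PCA solution, resolves.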
Zhang, Yatao; Wei, Shoushui; Liu, Hai; Zhao, Lina; Liu, Chengyu
2016-09-01
The Lempel-Ziv (LZ) complexity and its variants have been extensively used to analyze the irregularity of physiological time series. To date, these measures cannot explicitly discern between the irregularity and the chaotic characteristics of physiological time series. Our study compared the performance of an encoding LZ (ELZ) complexity algorithm, a novel variant of the LZ complexity algorithm, with those of the classic LZ (CLZ) and multistate LZ (MLZ) complexity algorithms. Simulation experiments on Gaussian noise, logistic chaotic, and periodic time series showed that only the ELZ algorithm monotonically declined with the reduction in irregularity in time series, whereas the CLZ and MLZ approaches yielded overlapped values for chaotic time series and time series mixed with Gaussian noise, demonstrating the accuracy of the proposed ELZ algorithm in capturing the irregularity, rather than the complexity, of physiological time series. In addition, the effect of sequence length on the ELZ algorithm was more stable compared with those on CLZ and MLZ, especially when the sequence length was longer than 300. A sensitivity analysis for all three LZ algorithms revealed that both the MLZ and the ELZ algorithms could respond to the change in time sequences, whereas the CLZ approach could not. Cardiac interbeat (RR) interval time series from the MIT-BIH database were also evaluated, and the results showed that the ELZ algorithm could accurately measure the inherent irregularity of the RR interval time series, as indicated by lower LZ values yielded from a congestive heart failure group versus those yielded from a normal sinus rhythm group (p < 0.01). Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
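For reference, the classic LZ complexity that the abstract's variants build on can be computed with the standard Kaspar-Schuster counting scheme. This is a sketch of CLZ only, not the paper's ELZ or MLZ variants:

```python
def lz76(s):
    """Classic Lempel-Ziv (LZ76) complexity of a symbol string,
    using the Kaspar-Schuster counting scheme."""
    n = len(s)
    i, k, l = 0, 1, 1
    c, k_max = 1, 1
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1
            if l + k > n:
                c += 1
                break
        else:
            k_max = max(k, k_max)
            i += 1
            if i == l:            # no copyable prefix: start a new phrase
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c

assert lz76('0' * 50) == 2   # constant sequence: minimal complexity
assert lz76('01' * 25) == 3  # periodic sequence: barely higher
```

Regular sequences get small counts, which is why raw LZ values conflate chaotic and stochastic irregularity, the limitation the ELZ variant targets.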
Comparison of time series models for predicting campylobacteriosis risk in New Zealand.
Al-Sakkaf, A; Jones, G
2014-05-01
Predicting campylobacteriosis cases is a matter of considerable concern in New Zealand, where the number of notified cases in 2006 was the highest among developed countries. There is thus a need for a model or tool that accurately predicts the number of campylobacteriosis cases, as the Microbial Risk Assessment Model previously used for this purpose failed to predict the actual case numbers accurately. We explore the appropriateness of classical time series modelling approaches for predicting campylobacteriosis. Finding the most appropriate time series model for New Zealand data has additional practical considerations given a possible structural change, that is, a specific and sudden change in response to the implemented interventions. A univariate methodological approach was used to predict monthly disease cases using New Zealand surveillance data of campylobacteriosis incidence from 1998 to 2009. The data from the years 1998 to 2008 were used to model the time series, with the year 2009 held out of the data set for model validation. The best two models were then fitted to the full 1998-2009 data and used to predict for each month of 2010. The Holt-Winters (multiplicative) and ARIMA (additive) intervention models were considered the best models for predicting campylobacteriosis in New Zealand. The prediction from the additive ARIMA model with intervention was slightly better than that from the Holt-Winters multiplicative method for the annual total in 2010, the former predicting only 23 cases fewer than the actual reported cases. It is confirmed that classical time series techniques such as ARIMA with intervention and Holt-Winters can provide good prediction performance for campylobacteriosis risk in New Zealand. The results reported by this study are useful to the New Zealand Health and Safety Authority's efforts in addressing the problem of the campylobacteriosis epidemic. © 2013 Blackwell Verlag GmbH.
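A minimal additive Holt-Winters sketch shows how the recursions produce seasonal forecasts; note the study's preferred variant was multiplicative, and the smoothing parameters and data below are illustrative assumptions:

```python
def holt_winters_additive(y, m, alpha=0.5, beta=0.5, gamma=0.5, h=1):
    """Additive Holt-Winters smoothing with season length m;
    returns h-step-ahead forecasts from the end of y."""
    # Initialise level, trend, and seasonals from the first two seasons.
    level = sum(y[:m]) / m
    trend = (sum(y[m:2 * m]) - sum(y[:m])) / m ** 2
    season = [y[i] - level for i in range(m)]
    for t in range(m, len(y)):
        last_level = level
        level = alpha * (y[t] - season[t % m]) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        season[t % m] = gamma * (y[t] - level) + (1 - gamma) * season[t % m]
    n = len(y)
    return [level + k * trend + season[(n + k - 1) % m] for k in range(1, h + 1)]

# Noiseless trend + seasonal series: forecasts should land near the truth.
seas = [2.0, -1.0, -3.0, 2.0]
y = [10 + 0.5 * t + seas[t % 4] for t in range(40)]
fc = holt_winters_additive(y, m=4, h=4)
truth = [10 + 0.5 * t + seas[t % 4] for t in range(40, 44)]
```

In practice one would tune the smoothing parameters (or use a library such as statsmodels) rather than fix them at 0.5.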
Multiscale entropy-based methods for heart rate variability complexity analysis
NASA Astrophysics Data System (ADS)
Silva, Luiz Eduardo Virgilio; Cabella, Brenno Caetano Troca; Neves, Ubiraci Pereira da Costa; Murta Junior, Luiz Otavio
2015-03-01
Physiologic complexity is an important concept for characterizing time series from biological systems, and combined with multiscale analysis it can contribute to the comprehension of many complex phenomena. Although multiscale entropy has been applied to physiological time series, it measures irregularity as a function of scale. In this study we propose and evaluate a set of three complexity metrics as functions of time scale. The complexity metrics are derived from nonadditive entropy supported by the generation of surrogate data, i.e. SDiffqmax, qmax and qzero. In order to assess the accuracy of the proposed complexity metrics, receiver operating characteristic (ROC) curves were built and the areas under the curves were computed for three physiological situations. Heart rate variability (HRV) time series from normal sinus rhythm, atrial fibrillation, and congestive heart failure data sets were analyzed. Results show that the proposed complexity metrics are accurate and robust when compared to classic entropic irregularity metrics. Furthermore, SDiffqmax is the most accurate for lower scales, whereas qmax and qzero are the most accurate when higher time scales are considered. The multiscale complexity analysis described here shows potential to assess complex physiological time series and deserves further investigation in a wider context.
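Although the paper's metrics derive from nonadditive entropy, the underlying multiscale machinery (coarse-graining plus an entropy estimate per scale) can be sketched with classic sample entropy; the m and r values below are conventional illustrative choices, not the paper's:

```python
import math

def coarse_grain(x, scale):
    """Non-overlapping window averages: the 'multiscale' step."""
    n = len(x) // scale
    return [sum(x[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def sample_entropy(x, m=2, r=0.2):
    """SampEn = -ln(A/B): template matches of length m+1 vs length m,
    Chebyshev distance, no self-matches, same template count for both."""
    n = len(x)
    def pairs(mm):
        t = [x[i:i + mm] for i in range(n - m)]
        return sum(1 for i in range(len(t)) for j in range(i + 1, len(t))
                   if max(abs(a - b) for a, b in zip(t[i], t[j])) <= r)
    b, a = pairs(m), pairs(m + 1)
    return -math.log(a / b) if a and b else float('inf')

assert coarse_grain([1, 2, 3, 4, 5, 6], 2) == [1.5, 3.5, 5.5]
# A constant series is perfectly regular: SampEn = 0 at every scale.
assert sample_entropy(coarse_grain([5.0] * 60, 3)) == 0.0
```

Multiscale entropy is then the curve of such an entropy value against the coarse-graining scale.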
MEASUREMENT OF TIME INTERVALS FOR TIME CORRELATED RADIOACTIVE DECAY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lindeman, H.; Mornel, E.; Galil, U.
1960-11-01
The distribution of time intervals between successive counts was measured for radioactive decay in the thorium series. The measurements showed that the classical Marsden-Barratt law does not apply to this case of time-correlated decay. They appeared, however, to be in agreement with the theory of Lindeman-Rosen, taking into account the fact that the counter receives only the radiation emitted in a solid angle close to 2 pi. (auth)
Acosta-Mesa, Héctor-Gabriel; Rechy-Ramírez, Fernando; Mezura-Montes, Efrén; Cruz-Ramírez, Nicandro; Hernández Jiménez, Rodolfo
2014-06-01
In this work, we present a novel application of time series discretization using evolutionary programming for the classification of precancerous cervical lesions. The approach optimizes the number of intervals into which the length and amplitude of the time series should be compressed, preserving the important information for classification purposes. Using evolutionary programming, the search for a good discretization scheme is guided by a cost function which considers three criteria: the entropy regarding the classification, the complexity measured as the number of different strings needed to represent the complete data set, and the compression rate assessed as the length of the discrete representation. This discretization approach is evaluated using time series data based on temporal patterns observed during a classical test used in cervical cancer detection; the classification accuracy reached by our method is compared with those of the well-known time series discretization algorithm SAX and the dimensionality reduction method PCA. Statistical analysis of the classification accuracy shows that the discrete representation is as efficient as the complete raw representation for the present application, reducing the dimensionality of the time series length by 97%. This representation is also very competitive in terms of classification accuracy when compared with similar approaches. Copyright © 2014 Elsevier Inc. All rights reserved.
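For comparison, the SAX baseline mentioned above discretizes a z-normalised series via piecewise aggregate approximation (PAA) and Gaussian breakpoints; a sketch with a quartile (four-letter) alphabet:

```python
import math

def sax(series, word_len, breakpoints=(-0.6745, 0.0, 0.6745)):
    """SAX: z-normalise, piecewise aggregate (PAA), then map each segment
    mean to a symbol via N(0,1) breakpoints (here: quartiles, alphabet a-d)."""
    n = len(series)
    mu = sum(series) / n
    sd = math.sqrt(sum((v - mu) ** 2 for v in series) / n)
    z = [(v - mu) / sd for v in series]
    seg = n // word_len
    paa = [sum(z[i * seg:(i + 1) * seg]) / seg for i in range(word_len)]
    alphabet = 'abcdefghijklmnopqrstuvwxyz'
    return ''.join(alphabet[sum(m > b for b in breakpoints)] for m in paa)

# A steadily rising series maps to monotonically increasing symbols.
assert sax(list(range(16)), 4) == 'abcd'
```

The evolutionary approach in the paper instead searches for the interval boundaries, rather than fixing them from the Gaussian assumption.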
Miao, Beibei; Dou, Chao; Jin, Xuebo
2016-01-01
The storage volume of an internet data center is a classic example of a time series, and predicting it is of considerable business value. However, the storage volume series from a data center is always "dirty," containing noise, missing data, and outliers, so it is necessary to extract the main trend of the storage volume series before further prediction processing. In this paper, we propose an irregular sampling estimation method to extract the main trend of the time series, in which a Kalman filter is used to remove the "dirty" data; then cubic spline interpolation and an averaging method are used to reconstruct the main trend. The developed method is applied to storage volume series from an internet data center. The experimental results show that the developed method can estimate the main trend of the storage volume series accurately and contributes greatly to predicting future volume values. PMID:28090205
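A scalar Kalman filter under a random-walk state model (a simplified stand-in for the paper's filter; the noise variances and data are assumptions) shows how gross outliers in a storage-volume-like series get discounted:

```python
def kalman_1d(obs, q=0.01, r=1.0, x0=0.0, p0=1.0):
    """Scalar Kalman filter for the model
    x_t = x_{t-1} + w_t (var q),  y_t = x_t + v_t (var r)."""
    x, p, out = x0, p0, []
    for y in obs:
        p = p + q                # predict
        k = p / (p + r)          # Kalman gain
        x = x + k * (y - x)      # update with the innovation
        p = (1 - k) * p
        out.append(x)
    return out

# A gross outlier in otherwise steady data is heavily discounted.
obs = [5.0] * 30 + [50.0] + [5.0] * 30
est = kalman_1d(obs)
```

With a small process variance q the steady-state gain is low, so the spike of 45 units above the level leaks only a few units into the estimate and decays away afterwards.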
NASA Astrophysics Data System (ADS)
Masselot, Pierre; Chebana, Fateh; Bélanger, Diane; St-Hilaire, André; Abdous, Belkacem; Gosselin, Pierre; Ouarda, Taha B. M. J.
2018-01-01
In a number of environmental studies, relationships between natural processes are often assessed through regression analyses using time series data. Such data are often multi-scale and non-stationary, leading to poor accuracy of the resulting regression models and therefore to results of moderate reliability. To deal with this issue, the present paper introduces the EMD-regression methodology, which consists in applying the empirical mode decomposition (EMD) algorithm to the data series and then using the resulting components in regression models. The proposed methodology presents a number of advantages. First, it accounts for the non-stationarity of the data series. Second, the approach acts as a scan of the relationship between a response variable and the predictors at different time scales, providing new insights about this relationship. To illustrate the proposed methodology, it is applied to study the relationship between weather and cardiovascular mortality in Montreal, Canada. The results provide new knowledge concerning the studied relationship. For instance, they show that humidity can cause excess mortality at the monthly time scale, a scale not visible in classical models. A comparison is also conducted with state-of-the-art methods, namely generalized additive models and distributed lag models, both widely used in weather-related health studies. The comparison shows that EMD-regression achieves better prediction performance and provides more detail than classical models concerning the relationship.
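True EMD sifting needs envelope splines, so as a rough stand-in the sketch below separates scales with a moving average and then regresses on the components. It illustrates why component-wise regression recovers scale-specific effects that a single regression on the raw predictor averages away; all signals and coefficients are synthetic assumptions, not EMD proper:

```python
import numpy as np

# A two-scale predictor and a response whose effect differs by scale.
t = np.arange(1000)
slow = np.sin(2 * np.pi * t / 200)   # slow oscillation
fast = np.sin(2 * np.pi * t / 8)     # fast oscillation
x = slow + fast
y = 2.0 * slow - 1.0 * fast          # scale-dependent relationship

# Crude scale separation by centred moving average (stand-in for EMD's IMFs).
w = 25
kernel = np.ones(w) / w
x_slow = np.convolve(x, kernel, mode='valid')
m = len(x_slow)
x_fast = x[w // 2 : w // 2 + m] - x_slow
y_v = y[w // 2 : w // 2 + m]

# Regressing y on the components recovers the scale-specific coefficients.
A = np.column_stack([x_slow, x_fast])
coef, *_ = np.linalg.lstsq(A, y_v, rcond=None)
```

The fitted coefficients land near +2 and -1 respectively, the kind of scale-resolved detail the EMD-regression methodology is after.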
Nonlinear Dynamics, Poor Data, and What to Make of Them?
NASA Astrophysics Data System (ADS)
Ghil, M.; Zaliapin, I. V.
2005-12-01
The analysis of univariate or multivariate time series provides crucial information to describe, understand, and predict variability in the geosciences. The discovery and implementation of a number of novel methods for extracting useful information from time series has recently revitalized this classical field of study. Considerable progress has also been made in interpreting the information so obtained in terms of dynamical systems theory. In this talk we will describe the connections between time series analysis and nonlinear dynamics, discuss signal-to-noise enhancement, and present some of the novel methods for spectral analysis. These fall into two broad categories: (i) methods that try to ferret out regularities of the time series; and (ii) methods aimed at describing the characteristics of irregular processes. The former include singular-spectrum analysis (SSA), the multi-taper method (MTM), and the maximum-entropy method (MEM). The various steps, as well as the advantages and disadvantages of these methods, will be illustrated by their application to several important climatic time series, such as the Southern Oscillation Index (SOI), paleoclimatic time series, and instrumental temperature time series. The SOI index captures major features of interannual climate variability and is used extensively in its prediction. The other time series cover interdecadal and millennial time scales. The second category includes the calculation of fractional dimension, leading Lyapunov exponents, and Hurst exponents. More recently, multi-trend analysis (MTA), binary-decomposition analysis (BDA), and related methods have attempted to describe the structure of time series that include both regular and irregular components. Within the time available, I will try to give a feeling for how these methods work, and how well.
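Of the spectral methods listed, SSA is the most compact to sketch: embed the series in a trajectory matrix, take the SVD, and reconstruct one additive component per singular triple by anti-diagonal averaging. By construction the components sum exactly back to the series; the test series below is an invented example:

```python
import numpy as np

def ssa_components(x, window):
    """Basic singular-spectrum analysis: embed, SVD, and reconstruct one
    additive component per singular triple by anti-diagonal averaging."""
    n = len(x)
    k = n - window + 1
    traj = np.column_stack([x[i:i + window] for i in range(k)])  # Hankel matrix
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    comps = []
    for j in range(len(s)):
        elem = s[j] * np.outer(u[:, j], vt[j])            # rank-one piece
        comp = np.array([np.mean(elem[::-1].diagonal(d - window + 1))
                         for d in range(n)])              # anti-diagonal means
        comps.append(comp)
    return comps

# The components form an exact additive decomposition of the series.
x = np.sin(np.arange(50) * 0.3) + 0.02 * np.arange(50)
comps = ssa_components(x, window=10)
assert np.allclose(sum(comps), x)
```

In practice the leading components capture trend and oscillations, while trailing ones collect noise, which is what makes SSA useful for signal-to-noise enhancement.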
NASA Astrophysics Data System (ADS)
Gábor Hatvani, István; Kern, Zoltán; Leél-Őssy, Szabolcs; Demény, Attila
2018-01-01
Uneven spacing is a common feature of sedimentary paleoclimate records, in many cases causing difficulties in the application of classical statistical and time series methods. Although special statistical tools do exist to assess unevenly spaced data directly, the transformation of such data into a temporally equidistant time series which may then be examined using commonly employed statistical tools remains, however, an unachieved goal. The present paper, therefore, introduces an approach to obtain evenly spaced time series (using cubic spline fitting) from unevenly spaced speleothem records with the application of a spectral guidance to avoid the spectral bias caused by interpolation and retain the original spectral characteristics of the data. The methodology was applied to stable carbon and oxygen isotope records derived from two stalagmites from the Baradla Cave (NE Hungary) dating back to the late 18th century. To show the benefit of the equally spaced records to climate studies, their coherence with climate parameters is explored using wavelet transform coherence and discussed. The obtained equally spaced time series are available at https://doi.org/10.1594/PANGAEA.875917.
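The resampling step can be sketched as follows; np.interp is linear, whereas the paper fits a cubic spline (scipy.interpolate.CubicSpline would be the drop-in), and the ages and values below are invented for the check:

```python
import numpy as np

# Unevenly dated record (hypothetical ages and proxy values).
age = np.array([0.0, 1.3, 2.1, 4.6, 5.0, 7.8, 9.9])
val = 2.0 * age + 1.0            # a signal we can verify exactly

# Resample onto an equidistant grid within the observed age range.
grid = np.arange(0.0, 10.0, 0.5)
even = np.interp(grid, age, val)
assert np.allclose(even, 2.0 * grid + 1.0)
```

The paper's contribution is precisely the extra spectral guidance on top of this step, since naive interpolation biases the spectrum of the resampled series.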
Long-time predictions in nonlinear dynamics
NASA Technical Reports Server (NTRS)
Szebehely, V.
1980-01-01
It is known that nonintegrable dynamical systems do not allow precise predictions concerning their behavior for arbitrarily long times. The available series solutions are not uniformly convergent, according to Poincare's theorem, and numerical integrations lose their meaningfulness after the elapse of arbitrarily long times. Two approaches are the use of existing global integrals and statistical methods. This paper presents a generalized method along the lines of the first approach. As examples, long-time predictions in the classical gravitational satellite and planetary problems are treated.
Homogenising time series: Beliefs, dogmas and facts
NASA Astrophysics Data System (ADS)
Domonkos, P.
2010-09-01
For obtaining reliable information about climate change and climate variability, the use of high quality data series is essential, and one basic tool for quality improvement is the statistical homogenisation of observed time series. In recent decades a large number of homogenisation methods have been developed, but the real effects of their application on time series are still not entirely known. The ongoing COST HOME project (COST ES0601) is devoted to revealing the real impacts of homogenisation methods in more detail and with higher confidence than before. As part of the COST activity, a benchmark dataset was built whose characteristics approach well the characteristics of real networks of observed time series. This dataset offers a much better opportunity than ever before to test the wide variety of homogenisation methods and analyse the real effects of selected theoretical recommendations. The author believes that several old theoretical rules have to be re-evaluated. Some examples of the open questions: a) Can statistically detected change-points be accepted only with the confirmation of metadata information? b) Do semi-hierarchic algorithms for detecting multiple change-points in time series function effectively in practice? c) Is it good to limit the spatial comparison of candidate series to up to five other series in the neighbourhood? Empirical results - those from the COST benchmark, and from other experiments too - show that real observed time series usually include several inhomogeneities of different sizes. Small inhomogeneities look like part of the climatic variability, so the pure application of the classic theory that change-points of observed time series can be found and corrected one-by-one is impossible. However, after homogenisation the linear trends, seasonal changes and long-term fluctuations of time series are usually much closer to reality than in the raw time series.
The developers and users of homogenisation methods have to bear in mind that the eventual purpose of homogenisation is not to find change-points, but to obtain observed time series whose statistical properties characterise well the climate change and climate variability.
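A minimal change-point detector in the spirit of the homogenisation tests discussed (a CUSUM excursion statistic, not any specific COST HOME method) can be sketched as:

```python
def cusum_changepoint(x):
    """Locate the most likely single mean shift via the maximum excursion
    of the cumulative sum of deviations from the overall mean."""
    mean = sum(x) / len(x)
    s, best_k, best_abs = 0.0, 0, 0.0
    for k, v in enumerate(x, start=1):
        s += v - mean
        if abs(s) > best_abs:
            best_abs, best_k = abs(s), k
    return best_k   # last index of the first segment (1-based)

# A clean step from 0 to 10 at position 30 is located exactly.
series = [0.0] * 30 + [10.0] * 30
assert cusum_changepoint(series) == 30
```

Real inhomogeneities are, as the abstract stresses, multiple and often small, which is why one-shot detectors like this are only a building block.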
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dékány, I.; Minniti, D.; Majaess, D.
2015-10-20
Solid insight into the physics of the inner Milky Way is key to understanding our Galaxy’s evolution, but extreme dust obscuration has historically hindered efforts to map the area along the Galactic mid-plane. New comprehensive near-infrared time-series photometry from the VVV Survey has revealed 35 classical Cepheids, tracing a previously unobserved component of the inner Galaxy, namely a ubiquitous inner thin disk of young stars along the Galactic mid-plane, traversing across the bulge. The discovered period (age) spread of these classical Cepheids implies a continuous supply of newly formed stars in the central region of the Galaxy over the last 100 million years.
Large-deviation probabilities for correlated Gaussian processes and intermittent dynamical systems
NASA Astrophysics Data System (ADS)
Massah, Mozhdeh; Nicol, Matthew; Kantz, Holger
2018-05-01
In its classical version, the theory of large deviations makes quantitative statements about the probability of outliers when estimating time averages, if the time series data are independent and identically distributed. We study large-deviation probabilities (LDPs) for time averages in short- and long-range correlated Gaussian processes and show that long-range correlations lead to subexponential decay of LDPs. A particular deterministic intermittent map can, depending on a control parameter, also generate long-range correlated time series. We illustrate numerically, in agreement with the mathematical literature, that this type of intermittency leads to a power law decay of LDPs. The power law decay holds irrespective of whether the correlation time is finite or infinite, and hence irrespective of whether the central limit theorem applies or not.
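For the iid Gaussian base case, the large-deviation rate can be computed exactly from the error function and compared with the Cramér rate I(a) = a²/2; the level a and sample sizes below are illustrative:

```python
import math

def tail_rate(a, n):
    """-(1/n) * log P(mean of n iid N(0,1) samples > a), computed exactly:
    the sample mean is N(0, 1/n), so the tail is 0.5 * erfc(a*sqrt(n/2))."""
    p = 0.5 * math.erfc(a * math.sqrt(n) / math.sqrt(2))
    return -math.log(p) / n

# Cramer's theorem: the empirical rate converges to I(a) = a^2 / 2
# (here 0.5 for a = 1) as n grows.
r100, r400 = tail_rate(1.0, 100), tail_rate(1.0, 400)
```

The slow 1/n-order corrections visible here are exactly what subexponential (correlated) decay destroys: for long-range correlated data, -(1/n)·log P no longer converges to a positive constant.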
NASA Astrophysics Data System (ADS)
Akhoondzadeh, M.
2013-04-01
In this paper, a number of classical and intelligent methods, including interquartile, autoregressive integrated moving average (ARIMA), artificial neural network (ANN) and support vector machine (SVM), have been proposed to quantify potential thermal anomalies around the time of the 11 August 2012 Varzeghan, Iran, earthquake (Mw = 6.4). The duration of the data set, which comprises Aqua-MODIS land surface temperature (LST) night-time snapshot images, is 62 days. In order to quantify variations of LST data obtained from satellite images, the air temperature (AT) data derived from the meteorological station close to the earthquake epicenter have been taken into account. For the models examined here, results indicate the following: (i) ARIMA models, which are the most widely used in the time series community for short-term forecasting, are quickly and easily implemented, and can efficiently act through linear solutions. (ii) A multilayer perceptron (MLP) feed-forward neural network can be a suitable non-parametric method to detect the anomalous changes of a non-linear time series such as variations of LST. (iii) Since SVMs are often used due to their many advantages for classification and regression tasks, it can be shown that, if the difference between the predicted value using the SVM method and the observed value exceeds the pre-defined threshold value, then the observed value could be regarded as an anomaly. (iv) ANN and SVM methods could be powerful tools in modeling complex phenomena such as earthquake precursor time series where we may not know what the underlying data generating process is. There is good agreement in the results obtained from the different methods for quantifying potential anomalies in a given LST time series. This paper indicates that the detection of the potential thermal anomalies derives credibility from the overall efficiencies and potentialities of the four integrated methods.
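The simplest of the four detectors, the interquartile method, can be sketched with Tukey's fences; the temperature values below are invented, not the study's LST data:

```python
import statistics

def iqr_anomalies(x, k=1.5):
    """Tukey's interquartile rule: flag values outside
    [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, _, q3 = statistics.quantiles(x, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in x if v < lo or v > hi]

# A single extreme night-time value stands out from a slowly drifting series.
lst = [20 + 0.1 * i for i in range(20)] + [35.0]
assert iqr_anomalies(lst) == [35.0]
```

The ARIMA/ANN/SVM detectors in the paper follow the same pattern at one remove: flag observations whose deviation from a model prediction exceeds a threshold.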
Fourier series expansion for nonlinear Hamiltonian oscillators.
Méndez, Vicenç; Sans, Cristina; Campos, Daniel; Llopis, Isaac
2010-06-01
The problem of nonlinear Hamiltonian oscillators is one of the classical questions in physics. When an analytic solution is not possible, one can resort to obtaining a numerical solution or using perturbation theory around the linear problem. We apply the Fourier series expansion to find approximate solutions for the oscillator position as a function of time as well as the period-amplitude relationship. We compare our results with other recent approaches such as variational methods or heuristic approximations, in particular Ren-He's method. Based on its application to the Duffing oscillator, the nonlinear pendulum and the eardrum equation, it is shown that the Fourier series expansion method is the most accurate.
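For the Duffing oscillator x'' + x + εx³ = 0, the lowest-order Fourier (harmonic balance) ansatz x = A·cos(ωt) gives ω² = 1 + (3/4)εA², which can be checked against a direct RK4 integration; the ε and A values below are illustrative:

```python
import math

def duffing_rhs(x, v, eps):
    # x'' + x + eps * x^3 = 0  as a first-order system
    return v, -(x + eps * x ** 3)

def quarter_period_rk4(amp, eps, dt=1e-3):
    """Integrate from (x=A, v=0) until x first crosses zero: that is T/4."""
    x, v, t = amp, 0.0, 0.0
    while x > 0:
        x_prev, t_prev = x, t
        k1x, k1v = duffing_rhs(x, v, eps)
        k2x, k2v = duffing_rhs(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v, eps)
        k3x, k3v = duffing_rhs(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v, eps)
        k4x, k4v = duffing_rhs(x + dt * k3x, v + dt * k3v, eps)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
        t += dt
    return t_prev + dt * x_prev / (x_prev - x)  # interpolate the zero crossing

eps, amp = 0.2, 1.0
t_num = 4 * quarter_period_rk4(amp, eps)
# One-term Fourier ansatz: w^2 = 1 + (3/4) * eps * A^2.
t_fourier = 2 * math.pi / math.sqrt(1 + 0.75 * eps * amp ** 2)
```

Even the one-term truncation agrees with the numerical period to well under a percent here; higher harmonics tighten it further.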
Deviations from uniform power law scaling in nonstationary time series
NASA Technical Reports Server (NTRS)
Viswanathan, G. M.; Peng, C. K.; Stanley, H. E.; Goldberger, A. L.
1997-01-01
A classic problem in physics is the analysis of highly nonstationary time series that typically exhibit long-range correlations. Here we test the hypothesis that the scaling properties of the dynamics of healthy physiological systems are more stable than those of pathological systems by studying beat-to-beat fluctuations in the human heart rate. We develop techniques based on the Fano factor and Allan factor functions, as well as on detrended fluctuation analysis, for quantifying deviations from uniform power-law scaling in nonstationary time series. By analyzing extremely long data sets of up to N = 10^5 beats for 11 healthy subjects, we find that the fluctuations in the heart rate scale approximately uniformly over several temporal orders of magnitude. By contrast, we find that in data sets of comparable length for 14 subjects with heart disease, the fluctuations grow erratically, indicating a loss of scaling stability.
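Detrended fluctuation analysis itself is short to sketch: integrate the series, remove a linear fit per window, and read the scaling exponent off a log-log regression. White noise should give α ≈ 0.5; the series and window sizes below are illustrative:

```python
import numpy as np

def dfa_exponent(x, scales):
    """DFA-1: integrate, split into windows of size n, detrend each window
    with a linear fit, and regress log F(n) on log n for the exponent."""
    y = np.cumsum(x - np.mean(x))
    logs = []
    for n in scales:
        m = len(y) // n
        f2 = []
        for i in range(m):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            f2.append(np.mean((seg - trend) ** 2))
        logs.append((np.log(n), 0.5 * np.log(np.mean(f2))))
    ln, lf = zip(*logs)
    return np.polyfit(ln, lf, 1)[0]

# Uncorrelated noise: the scaling exponent should sit near 0.5.
rng = np.random.default_rng(42)
alpha = dfa_exponent(rng.standard_normal(4000), [8, 16, 32, 64, 128, 256])
```

Deviations from a single straight line in the log-log plot are precisely the "non-uniform scaling" the paper quantifies.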
Levesque, Danielle L; Menzies, Allyson K; Landry-Cuerrier, Manuelle; Larocque, Guillaume; Humphries, Murray M
2017-07-01
Recent research is revealing incredible diversity in the thermoregulatory patterns of wild and captive endotherms. As a result of these findings, classic thermoregulatory categories of 'homeothermy', 'daily heterothermy', and 'hibernation' are becoming harder to delineate, impeding our understanding of the physiological and evolutionary significance of variation within and around these categories. However, we lack a generalized analytical approach for evaluating and comparing the complex and diversified nature of the full breadth of heterothermy expressed by individuals, populations, and species. Here we propose a new approach that decomposes body temperature time series into three inherent properties-waveform, amplitude, and period-using a non-stationary technique that accommodates the temporal variability of body temperature patterns. This approach quantifies circadian and seasonal variation in thermoregulatory patterns, and uses the distribution of observed thermoregulatory patterns as a basis for intra- and inter-specific comparisons. We analyse body temperature time series from multiple species, including classical hibernators, tropical heterotherms, and homeotherms, to highlight the approach's general usefulness and the major axes of thermoregulatory variation that it reveals.
Cui, Yiqian; Shi, Junyou; Wang, Zili
2015-11-01
Quantum Neural Network (QNN) models have attracted great attention since they introduce a new paradigm of neural computing based on quantum entanglement. However, the existing QNN models are mainly based on real quantum operations, and the potential of quantum entanglement is not fully exploited. In this paper, we propose a novel quantum neuron model called the Complex Quantum Neuron (CQN) that realizes deep quantum entanglement. A novel hybrid network model, Complex Rotation Quantum Dynamic Neural Networks (CRQDNN), is then proposed based on the CQN. CRQDNN is a three-layer model with both CQNs and classical neurons. An infinite impulse response (IIR) filter is embedded in the network model to provide the memory needed to process time series inputs. The Levenberg-Marquardt (LM) algorithm is used for fast parameter learning. The network model is developed to conduct time series predictions. Two application studies are presented in this paper: chaotic time series prediction and electronic remaining useful life (RUL) prediction. Copyright © 2015 Elsevier Ltd. All rights reserved.
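The memory mechanism attributed to the embedded IIR filter can be illustrated with a first-order recursion (the coefficient is an assumption for illustration, not the paper's filter):

```python
def iir_first_order(inputs, a=0.8):
    """y_t = a*y_{t-1} + (1-a)*x_t : a single feedback tap gives the
    network an exponentially fading memory of all past inputs."""
    y, out = 0.0, []
    for x in inputs:
        y = a * y + (1 - a) * x
        out.append(y)
    return out

# Impulse response: (1-a) * a^t, so each input echoes indefinitely but fades.
resp = iir_first_order([1.0] + [0.0] * 9, a=0.8)
assert abs(resp[0] - 0.2) < 1e-12
assert abs(resp[5] - 0.2 * 0.8 ** 5) < 1e-12
```

Unlike a finite tapped delay line (FIR), the feedback path lets a fixed-size model summarize an unbounded input history, which is what makes it useful for time series inputs.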
The Pythagorean Proposition, Classics in Mathematics Education Series.
ERIC Educational Resources Information Center
Loomis, Elisha Scott
This book is a reissue of the second edition which appeared in 1940. It has the distinction of being the first vintage mathematical work published in the NCTM series "Classics in Mathematics Education." The text includes a biography of Pythagoras and an account of historical data pertaining to his proposition. The remainder of the book shows 370…
Hybrid model for forecasting time series with trend, seasonal and calendar variation patterns
NASA Astrophysics Data System (ADS)
Suhartono; Rahayu, S. P.; Prastyo, D. D.; Wijayanti, D. G. P.; Juliyanto
2017-09-01
Most monthly time series data in economics and business in Indonesia and other Muslim countries not only contain trend and seasonal patterns, but are also affected by two types of calendar variation effects, i.e. the effect of the number of working or trading days, and holiday effects. The purpose of this research is to develop a hybrid model, i.e. a combination of several forecasting models, to predict time series that contain trend, seasonal and calendar variation patterns. This hybrid model combines classical models (namely time series regression and ARIMA models) and/or modern methods (an artificial intelligence method, i.e. Artificial Neural Networks). A simulation study was used to show that the proposed procedure for building the hybrid model works well for forecasting time series with trend, seasonal and calendar variation patterns. Furthermore, the proposed hybrid model is applied to forecasting real data, i.e. monthly data on the inflow and outflow of currency at Bank Indonesia. The results show that the hybrid model tends to provide more accurate forecasts than the individual forecasting models. Moreover, this result is in line with the third conclusion of the M3 competition, i.e. that combined models on average provide more accurate forecasts than individual models.
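The time-series-regression component of such a hybrid can be sketched as a least-squares fit with a trend, monthly dummies, and a calendar-variation dummy; the data, coefficients, and holiday positions below are synthetic assumptions:

```python
import numpy as np

n, m = 96, 12
t = np.arange(n)
month = t % m
holiday = np.zeros(n)
holiday[[8, 19, 31, 44, 55, 68, 80, 91]] = 1.0   # a drifting (moving) holiday

seas = np.array([3, 1, -2, 0, 4, -1, -3, 2, 0, -2, 1, -3], dtype=float)
y = 0.3 * t + seas[month] + 5.0 * holiday        # noiseless synthetic series

# Design matrix: linear trend, calendar-variation dummy, 12 monthly dummies.
X = np.column_stack([t, holiday] +
                    [(month == j).astype(float) for j in range(m)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
```

On noiseless data the fit recovers the trend slope (0.3), the holiday effect (5.0), and the seasonal pattern exactly; in the hybrid, the residuals of this regression would then feed an ARIMA or neural network stage.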
Multivariate stochastic analysis for Monthly hydrological time series at Cuyahoga River Basin
NASA Astrophysics Data System (ADS)
zhang, L.
2011-12-01
Copulas have become a very powerful statistical and stochastic methodology for multivariate analysis in environmental and water resources engineering. In recent years, the popular one-parameter Archimedean copulas, e.g. the Gumbel-Hougaard copula, Cook-Johnson copula, and Frank copula, and the meta-elliptical copulas, e.g. the Gaussian copula and Student-t copula, have been applied in multivariate hydrological analyses, e.g. multivariate rainfall (rainfall intensity, duration and depth), flood (peak discharge, duration and volume), and drought analyses (drought length, mean and minimum SPI values, and drought mean areal extent). Copulas have also been applied in flood frequency analysis at the confluences of river systems by taking into account the dependence among upstream gauge stations rather than by using the hydrological routing technique. In most of the studies above, the annual time series have been considered as stationary signals whose values are assumed to be independent identically distributed (i.i.d.) random variables. But in reality, hydrological time series, especially daily and monthly hydrological time series, cannot be considered i.i.d. random variables due to the periodicity present in the data structure. The stationarity assumption is also in question due to climate change and land use and land cover (LULC) change in recent years. To this end, it is necessary to re-evaluate the classic approach to the study of hydrological time series by relaxing the stationarity assumption through a nonstationary approach. As to the study of the dependence structure of hydrological time series, the assumption of the same type of univariate distribution also needs to be relaxed by adopting copula theory. In this paper, the univariate monthly hydrological time series will be studied through a nonstationary time series analysis approach.
The dependence structure of the multivariate monthly hydrological time series will be studied through the copula theory. As to the parameter estimation, the maximum likelihood estimation (MLE) will be applied. To illustrate the method, the univariate time series model and the dependence structure will be determined and tested using the monthly discharge time series of Cuyahoga River Basin.
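The copula MLE step described above can be sketched as follows; the paired "discharge" data are synthetic stand-ins, and the pseudo-observation approach (rank transform, then correlating normal scores) is a standard approximation to the full Gaussian-copula MLE, not the study's own code:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# synthetic stand-in for paired monthly discharges with dependence
z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.7], [0.7, 1.0]], size=500)
x, y = np.exp(z[:, 0]), np.exp(z[:, 1])      # lognormal margins

# pseudo-observations: empirical probability integral transform
u = stats.rankdata(x) / (len(x) + 1)
v = stats.rankdata(y) / (len(y) + 1)

# Gaussian-copula dependence parameter: correlation of the normal scores
zu, zv = stats.norm.ppf(u), stats.norm.ppf(v)
rho_hat = np.corrcoef(zu, zv)[0, 1]
print(round(rho_hat, 2))                     # close to the generating 0.7
```

Because the margins are handled by ranks, the estimate is unaffected by the lognormal transformation, which is the point of relaxing the common-distribution assumption.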
A time series analysis of the rabies control programme in Chile.
Ernst, S. N.; Fabrega, F.
1989-01-01
The classical time series decomposition method was used to compare the temporal pattern of rabies in Chile before and after the implementation of the control programme. In the years 1950-60, a period without control measures, rabies showed an increasing trend, a seasonal excess of cases in November and December and a cyclic behaviour with outbreaks occurring every 5 years. During 1961-1970 and 1971-86, a 26-year period that includes two different phases of the rabies programme which started in 1961, there was a general decline in the incidence of rabies. The seasonality disappeared when the disease reached a low frequency level and the cyclical component was not evident. PMID:2606167
NASA Technical Reports Server (NTRS)
Aires, Filipe; Rossow, William B.; Chedin, Alain; Hansen, James E. (Technical Monitor)
2000-01-01
The use of the Principal Component Analysis technique for the analysis of geophysical time series has been questioned, in particular for its tendency to extract components that mix several physical phenomena even when the signal is just their linear sum. We demonstrate with a data simulation experiment that Independent Component Analysis (ICA), a recently developed technique, is able to solve this problem. This new technique requires the statistical independence of components (a stronger constraint, relying on higher-order statistics) instead of the classical decorrelation (a weaker constraint, using only second-order statistics). Furthermore, ICA does not require additional a priori information such as the localization constraint used in rotational techniques.
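The contrast between second-order decorrelation and higher-order independence can be illustrated with a two-source toy example (the mixing matrix and source distributions are invented). After whitening, two-dimensional ICA reduces to finding the rotation that extremizes a higher-order statistic such as kurtosis:

```python
import numpy as np

rng = np.random.default_rng(10)
# two independent non-Gaussian (uniform) sources, linearly mixed
s = rng.uniform(-1, 1, size=(2, 5000))
A = np.array([[1.0, 0.6], [0.4, 1.0]])
x = A @ s

# decorrelation step (second-order statistics): whiten the mixtures
xc = x - x.mean(axis=1, keepdims=True)
vals, vecs = np.linalg.eigh(xc @ xc.T / xc.shape[1])
z = np.diag(vals ** -0.5) @ vecs.T @ xc

# independence step (higher-order statistics): rotate to extremize kurtosis
def kurt(y):
    return np.mean(y ** 4) - 3 * np.mean(y ** 2) ** 2

angles = np.linspace(0, np.pi / 2, 200)
best = max(angles, key=lambda a: abs(kurt(np.cos(a) * z[0] + np.sin(a) * z[1])))
y1 = np.cos(best) * z[0] + np.sin(best) * z[1]
r = max(abs(np.corrcoef(y1, s[0])[0, 1]), abs(np.corrcoef(y1, s[1])[0, 1]))
print(round(r, 2))                 # the chosen rotation recovers one source
```

Whitening alone leaves an arbitrary rotation undetermined, which is exactly the mixing ambiguity ICA resolves with higher-order statistics.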
NASA Astrophysics Data System (ADS)
Kbaier Ben Ismail, Dhouha; Lazure, Pascal; Puillat, Ingrid
2016-10-01
In marine sciences, many fields display high variability over a large range of spatial and temporal scales, from seconds to thousands of years. The long time series now recorded in this field, with ever-increasing sampling frequencies, are often nonlinear, nonstationary, multiscale and noisy. Their analysis faces new challenges and thus requires adequate and specific methods. The objective of this paper is to bring time series analysis methods already applied in econometrics, signal processing, health, etc. to the environmental marine domain, to assess their advantages and drawbacks, and to compare classical techniques with more recent ones. Temperature, turbidity and salinity are important quantities for ecosystem studies. The authors here consider the fluctuations of sea level, salinity, turbidity and temperature recorded by the MAREL Carnot system of Boulogne-sur-Mer (France), a moored buoy equipped with physico-chemical measuring devices working in continuous and autonomous conditions. In order to perform adequate statistical and spectral analyses, it is necessary to know the nature of the considered time series. For this purpose, the stationarity of the series and the occurrence of a unit root are addressed with Augmented Dickey-Fuller tests. As an example, harmonic analysis is not relevant for temperature, turbidity and salinity because of their nonstationarity, in contrast to the nearly stationary sea level datasets. In order to identify the dominant frequencies associated with the dynamics, the large number of data provided by the sensors enables Fourier spectral analysis. The different power spectra show a complex variability and reveal an influence of environmental factors such as tides. However, the classical spectral analysis above, namely the Blackman-Tukey method, requires not only linear and stationary data but also evenly spaced data.
Interpolating the time series introduces numerous artifacts into the data. The Lomb-Scargle algorithm is adapted to unevenly spaced data and is used as an alternative. The limits of the method are also set out: beyond 50% missing measurements, few significant frequencies are detected, several seasonalities are no longer visible, and a whole range of high frequencies progressively disappears. Furthermore, two time-frequency decomposition methods, namely wavelets and the Hilbert-Huang Transform (HHT), are applied to the analysis of the entire dataset. Using the Continuous Wavelet Transform (CWT), some properties of the time series are determined. Then, the inertial wave and several low-frequency tidal waves are identified by the application of Empirical Mode Decomposition (EMD). Finally, EMD-based Time Dependent Intrinsic Correlation (TDIC) analysis is applied to assess the correlation between two nonstationary time series.
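The Lomb-Scargle step for unevenly spaced data can be sketched with SciPy; the sampling times and the 12.42 h tidal-like component are invented for illustration:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(2)
# unevenly sampled record (hours) with a 12.42 h tidal-like component
t = np.sort(rng.uniform(0, 30 * 24, 2000))
y = np.sin(2 * np.pi * t / 12.42) + rng.normal(0, 0.5, t.size)

periods = np.linspace(6, 30, 500)            # candidate periods in hours
power = lombscargle(t, y - y.mean(), 2 * np.pi / periods)
best_period = periods[np.argmax(power)]
print(round(best_period, 2))                 # near 12.42
```

Unlike the Blackman-Tukey estimate, no interpolation onto a regular grid is required, so no interpolation artifacts enter the spectrum.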
Integrating Classical Music into the Elementary Social Studies Curriculum.
ERIC Educational Resources Information Center
Bracken, Khlare R.
1997-01-01
Provides a rationale for using classical music as the basis of interdisciplinary units in elementary social studies. Recommends beginning with a series of humorous pieces to familiarize students with classical music. Includes many examples for utilizing pieces related to geography and history. (MJP)
Kumaraswamy autoregressive moving average models for double bounded environmental data
NASA Astrophysics Data System (ADS)
Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme
2017-12-01
In this paper we introduce the Kumaraswamy autoregressive moving average models (KARMA), which is a dynamic class of models for time series taking values in the double bounded interval (a,b) following the Kumaraswamy distribution. The Kumaraswamy family of distributions is widely applied in many areas, especially hydrology and related fields. Classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing inference, diagnostic analysis and forecasting. In particular, we provide closed-form expressions for the conditional score vector and conditional Fisher information matrix. An application to environmental real data is presented and discussed.
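The Kumaraswamy distribution underlying KARMA can be sketched via inverse-CDF sampling; the shape parameters a = 2, b = 3 are arbitrary illustrations, and the closed-form median below is the quantity KARMA models dynamically:

```python
import numpy as np

rng = np.random.default_rng(12)
a, b = 2.0, 3.0                         # illustrative shape parameters
# inverse-CDF sampling: F(x) = 1 - (1 - x**a)**b on (0, 1)
u = rng.uniform(size=100_000)
x = (1 - (1 - u) ** (1 / b)) ** (1 / a)

# closed-form median, the quantity modeled dynamically in KARMA
median_theory = (1 - 2 ** (-1 / b)) ** (1 / a)
print(round(np.median(x), 3), round(median_theory, 3))
```

The availability of this closed-form quantile is one reason the median, rather than the mean, is the natural location parameter to model in this family.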
Off-diagonal expansion quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Albash, Tameem; Wagenbreth, Gene; Hen, Itay
2017-12-01
We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.
Hybrid wavelet-support vector machine approach for modelling rainfall-runoff process.
Komasi, Mehdi; Sharghi, Soroush
2016-01-01
Because of the importance of water resources management, the need for accurate modeling of the rainfall-runoff process has rapidly grown in the past decades. Recently, the support vector machine (SVM) approach has been used by hydrologists for rainfall-runoff modeling and in other fields of hydrology. Like other artificial intelligence models, such as the artificial neural network (ANN) and the adaptive neuro-fuzzy inference system, the SVM model relies on autoregressive properties. In this paper, wavelet analysis was linked to the SVM model concept for modeling the rainfall-runoff process of the Aghchai and Eel River watersheds. The main time series of the two variables, rainfall and runoff, were decomposed into multiple frequency-band time series by wavelet theory; these sub-series were then used as input data for the SVM model in order to predict the runoff discharge one day ahead. The results show that the wavelet-SVM model can predict both short- and long-term runoff discharges by capturing seasonality effects. The proposed hybrid model is also more appropriate than classical autoregressive alternatives such as ANN and SVM because it uses the multi-scale time series of rainfall and runoff data in the modeling process.
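The wavelet preprocessing step can be sketched with a hand-rolled Haar transform (a library such as pywt would normally be used; the synthetic "runoff" series and three-level depth are illustrative):

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar DWT: approximation + detail."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

rng = np.random.default_rng(11)
t = np.arange(256)
runoff = np.sin(2 * np.pi * t / 64) + 0.3 * rng.normal(size=256)

# decompose the series into multi-scale sub-series (the model inputs)
coeffs, a = [], runoff
for _ in range(3):
    a, d = haar_step(a)
    coeffs.append(d)
coeffs.append(a)
print([c.size for c in coeffs])              # [128, 64, 32, 32]
```

Each sub-series isolates one frequency band, which is what lets the downstream regressor handle seasonal and short-term behavior separately.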
Finding hidden periodic signals in time series - an application to stock prices
NASA Astrophysics Data System (ADS)
O'Shea, Michael
2014-03-01
Data in the form of time series appear in many areas of science. In cases where the periodicity is apparent and the only other contribution to the time series is stochastic in origin, the data can be `folded' to improve signal to noise and this has been done for light curves of variable stars with the folding resulting in a cleaner light curve signal. Stock index prices versus time are classic examples of time series. Repeating patterns have been claimed by many workers and include unusually large returns on small-cap stocks during the month of January, and small returns on the Dow Jones Industrial average (DJIA) in the months June through September compared to the rest of the year. Such observations imply that these prices have a periodic component. We investigate this for the DJIA. If such a component exists it is hidden in a large non-periodic variation and a large stochastic variation. We show how to extract this periodic component and for the first time reveal its yearly (averaged) shape. This periodic component leads directly to the `Sell in May and buy at Halloween' adage. We also drill down and show that this yearly variation emerges from approximately half of the underlying stocks making up the DJIA index.
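The "folding" operation can be sketched as phase-binning and averaging; the series below is a synthetic stand-in for an index with a weak yearly component, and the 252 trading days per "year" is an assumption:

```python
import numpy as np

rng = np.random.default_rng(3)
period = 252                              # assumed trading days per year
t = np.arange(period * 40)                # forty synthetic years
signal = 0.5 * np.sin(2 * np.pi * t / period)
y = signal + rng.normal(0, 2.0, t.size)   # noise dwarfs the periodic part

# fold: average all samples that share the same phase
phase = t % period
folded = np.array([y[phase == p].mean() for p in range(period)])

# averaging N cycles shrinks the noise by roughly sqrt(N)
resid = folded - 0.5 * np.sin(2 * np.pi * np.arange(period) / period)
print(round(resid.std(), 2))              # near 2.0 / sqrt(40)
```

With enough cycles, the folded curve reveals a periodic shape that is invisible in any single year of the raw series.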
Aerosol Index Dynamics over Athens and Beijing
NASA Astrophysics Data System (ADS)
Christodoulakis, J.; Varotsos, C.; Tzanis, C.; Xue, Y.
2014-11-01
We present the analysis of monthly mean Aerosol Index (AI) values over Athens, Greece, and Beijing, China, for the period 1979-2012. The aim of the analysis is the identification of time scaling in the AI time series, using a data analysis technique that is not affected by the non-stationarity of the data. The appropriate technique satisfying this criterion is Detrended Fluctuation Analysis (DFA). For the deseasonalization of the time series, the classic Wiener method was applied, filtering out the seasonal (3-month), semiannual (6-month) and annual (12-month) periods. The data analysis for both Athens and Beijing revealed that the exponents α for both time periods are greater than 0.5, indicating that persistence of the correlations in the fluctuations of the deseasonalized AI values exists for time scales between about 4 months and 3.5 years (for the period 1979-1993) or 4 years (for the period 1996-2012).
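A hedged sketch of DFA with first-order detrending; the white-noise input is synthetic, chosen because its known benchmark α ≈ 0.5 (the threshold quoted above for persistence) can be checked:

```python
import numpy as np

def dfa_exponent(x, scales):
    """DFA-1: slope of log F(n) versus log n."""
    profile = np.cumsum(x - x.mean())
    flucts = []
    for n in scales:
        n_seg = len(profile) // n
        segs = profile[:n_seg * n].reshape(n_seg, n)
        t = np.arange(n)
        f2 = []
        for seg in segs:                  # linearly detrend each segment
            coef = np.polyfit(t, seg, 1)
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

rng = np.random.default_rng(4)
white = rng.normal(size=10_000)
scales = np.unique(np.logspace(1, 3, 12).astype(int))
alpha = dfa_exponent(white, scales)
print(round(alpha, 2))                    # near 0.5 for uncorrelated noise
```

An exponent above 0.5, as reported for the AI series, indicates persistent long-range correlations; below 0.5 would indicate anti-persistence.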
NASA Astrophysics Data System (ADS)
Ausloos, M.; Ivanova, K.
2004-06-01
The classical technical analysis methods for financial time series based on the moving average and momentum are recalled. Illustrations use the IBM share price and Latin American (Argentinian MerVal, Brazilian Bovespa and Mexican IPC) market indices. We have also searched for scaling ranges and exponents in exchange rates between Latin American currencies (ARS, CLP, MXP) and other major currencies: DEM, GBP, JPY, USD, and SDRs. We have sorted out correlations and anticorrelations of such exchange rates with respect to DEM, GBP, JPY and USD. They indicate a very complex or speculative behavior.
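The moving-average and momentum indicators named above can be sketched on a synthetic random-walk price; the 20/50-day window lengths are conventional choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(5)
price = 100 + np.cumsum(rng.normal(0, 1, 500))   # synthetic daily closes

def moving_average(p, w):
    return np.convolve(p, np.ones(w) / w, mode="valid")

short = moving_average(price, 20)
long_ = moving_average(price, 50)
short = short[-long_.size:]               # align both to the last day

momentum = price[50:] - price[:-50]       # 50-day momentum

# classical crossover rule: long when the short MA is above the long MA
signal = np.where(short > long_, 1, -1)
print(signal.size, momentum.size)
```

The momentum series and the MA crossover signal are the two basic ingredients such technical rules combine.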
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xiaofeng, E-mail: xfyang@math.sc.edu; Han, Daozhi, E-mail: djhan@iu.edu
2017-02-01
In this paper, we develop a series of linear, unconditionally energy stable numerical schemes for solving the classical phase field crystal model. The temporal discretizations are based on the first order Euler method, the second order backward differentiation formula (BDF2) and the second order Crank-Nicolson method, respectively. The schemes lead to linear elliptic equations to be solved at each time step, and the induced linear systems are symmetric positive definite. We rigorously prove that all three schemes are unconditionally energy stable. Various classical numerical experiments in 2D and 3D are performed to validate the accuracy and efficiency of the proposed schemes.
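The appeal of unconditional stability can be illustrated on a far simpler problem than the phase field crystal model: backward Euler for the 1-D heat equation, where the time step can vastly exceed the explicit-scheme limit and the per-step linear system is symmetric positive definite (grid and step sizes below are arbitrary):

```python
import numpy as np

# backward Euler for u_t = u_xx with zero boundary values: each step
# solves a symmetric positive definite system and is stable even when
# dt greatly exceeds the explicit limit dx**2 / 2
n, dx, dt = 64, 1.0 / 65, 0.1
r = dt / dx ** 2                          # about 422: far beyond 0.5
A = (np.diag(np.full(n, 1 + 2 * r))
     + np.diag(np.full(n - 1, -r), 1)
     + np.diag(np.full(n - 1, -r), -1))

u = np.sin(np.pi * np.linspace(dx, 1 - dx, n))
for _ in range(50):
    u = np.linalg.solve(A, u)             # one implicit time step
print(u.max())                            # decays smoothly, no blow-up
```

An explicit scheme at this r would blow up immediately; the implicit update damps every mode at any step size, which is the discrete analogue of the energy stability proved in the paper.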
NASA Astrophysics Data System (ADS)
McCauley, Joseph L.
2009-09-01
Preface; 1. Econophysics: why and what; 2. Neo-classical economic theory; 3. Probability and stochastic processes; 4. Introduction to financial economics; 5. Introduction to portfolio selection theory; 6. Scaling, pair correlations, and conditional densities; 7. Statistical ensembles: deducing dynamics from time series; 8. Martingale option pricing; 9. FX market globalization: evolution of the dollar to worldwide reserve currency; 10. Macroeconomics and econometrics: regression models vs. empirically based modeling; 11. Complexity; Index.
A Neutral Odor May Become a Sexual Incentive through Classical Conditioning in Male Rats
ERIC Educational Resources Information Center
Kvitvik, Inger-Line; Berg, Kristine Marit; Agmo, Anders
2010-01-01
A neutral olfactory stimulus was employed as CS in a series of experiments with a sexually receptive female as UCS and the execution of an intromission as the UCR. Each experimental session lasted until the male ejaculated. The time the experimental subject spent in a zone adjacent to the source of the olfactory stimulus during the 10 s of CS…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamada, Atsushi; Kojima, Hidekazu; Okazaki, Susumu, E-mail: okazaki@apchem.nagoya-u.ac.jp
2014-08-28
In order to investigate proton transfer reactions in solution, mixed quantum-classical molecular dynamics calculations have been carried out based on our previously proposed quantum equation of motion for the reacting system [A. Yamada and S. Okazaki, J. Chem. Phys. 128, 044507 (2008)]. The surface hopping method was applied to describe forces acting on the solvent classical degrees of freedom. In a series of our studies, quantum and solvent effects on the reaction dynamics in solutions have been analysed in detail. Here, we report our mixed quantum-classical molecular dynamics calculations for the intramolecular proton transfer of malonaldehyde in water. The thermally activated proton transfer process, i.e., vibrational excitation in the reactant state followed by transition to the product state and vibrational relaxation in the product state, as well as the tunneling reaction, can be described by solving the equation of motion. Zero point energy is, of course, included too. The quantum simulation in water has been compared with the fully classical one and with the wave packet calculation in vacuum. The calculated quantum reaction rate in water was 0.70 ps⁻¹, about 2.5 times faster than that in vacuum, 0.27 ps⁻¹. This indicates that the solvent water accelerates the reaction. Further, the quantum calculation resulted in a reaction rate about 2 times faster than the fully classical calculation, which indicates that the quantum effect also enhances the reaction rate. The contribution from the three reaction mechanisms, i.e., tunneling, thermal activation, and barrier-vanishing reactions, is 33:46:21 in the mixed quantum-classical calculations. This clearly shows that the tunneling effect is important in the reaction.
Acuna-Soto, Rodolfo; Stahle, David W; Therrell, Matthew D; Gomez Chavez, Sergio; Cleaveland, Malcolm K
2005-01-01
The classical period in Mexico (AD 250-750) was an era of splendor. The city of Teotihuacan was one of the largest and most sophisticated human conglomerates of the pre-industrial world. The Mayan civilization in southeastern Mexico and the Yucatan peninsula reached an impressive degree of development at the same time. This time of prosperity came to an end during the Terminal Classic Period (AD 750-950), a time of massive population loss throughout Mesoamerica. A second episode of massive depopulation in the same area was experienced during the sixteenth century when, in less than one century, between 80% and 90% of the entire indigenous population was lost. The 16th century depopulation of Mexico constitutes one of the worst demographic catastrophes in human history. Although newly imported European and African diseases caused high mortality among the native population, the major 16th century population losses were caused by a series of epidemics of a hemorrhagic fever called Cocoliztli, a highly lethal disease unknown to both Aztec and European physicians during the colonial era. The Cocoliztli epidemics occurred during the 16th century megadrought, when severe drought extended at times from central Mexico to the boreal forest of Canada, and from the Pacific to the Atlantic coast. The collapse of the cultures of the Classic Period seems also to have occurred during a time of severe drought. Tree ring and lake sediment records indicate that some of the most severe and prolonged droughts to impact North America-Mesoamerica in the past 1000-4000 years occurred between AD 650 and 1000, particularly during the 8th and 9th centuries, a period of time that coincides with the Terminal Classic Period.
Based on the similarities of the climatic (severe drought) and demographic (massive population loss) events in Mesoamerica during the sixteenth century, we propose that drought-associated epidemics of hemorrhagic fever may have contributed to the massive population loss during the Terminal Classic Period.
NASA Astrophysics Data System (ADS)
Van Pelt, S.; Kohfeld, K. E.; Allen, D. M.
2015-12-01
The decline of the Mayan Civilization is thought to be caused by a series of droughts that affected the Yucatan Peninsula during the Terminal Classic Period (T.C.P.), 800-1000 AD. The goals of this study are two-fold: (a) to compare paleo-model simulations of the past 1000 years with a compilation of multiple proxies of changes in moisture conditions for the Yucatan Peninsula during the T.C.P. and (b) to use this comparison to inform the modeling of groundwater recharge in this region, with a focus on generating the daily climate data series needed as input to a groundwater recharge model. To achieve the first objective, we compiled a dataset of 5 proxies from seven locations across the Yucatan Peninsula, to be compared with temperature and precipitation output from the Community Climate System Model Version 4 (CCSM4), which is part of the Coupled Model Intercomparison Project Phase 5 (CMIP5) past1000 experiment. The proxy dataset includes oxygen isotopes from speleothems and gastropod/ostracod shells (11 records); and sediment density, mineralogy, and magnetic susceptibility records from lake sediment cores (3 records). The proxy dataset is supplemented by a compilation of reconstructed temperatures using pollen and tree ring records for North America (archived in the PAGES2k global network data). Our preliminary analysis suggests that many of these datasets show evidence of drier and warmer climate on the Yucatan Peninsula around the T.C.P. when compared to modern conditions, although the amplitude and timing of individual warming and drying events varies between sites. This comparison with modeled output will ultimately be used to inform backward shift factors that will be input to a stochastic weather generator. These shift factors will be based on monthly changes in temperature and precipitation and applied to a modern daily climate time series for the Yucatan Peninsula to produce a daily climate time series for the T.C.P.
Generalised Pareto distribution: impact of rounding on parameter estimation
NASA Astrophysics Data System (ADS)
Pasarić, Z.; Cindrić, K.
2018-05-01
Problems that occur when common methods (e.g. maximum likelihood and L-moments) for fitting a generalised Pareto (GP) distribution are applied to discrete (rounded) data sets are revealed by analysing real dry spell duration series. The analysis is subsequently performed on generalised Pareto time series obtained by systematic Monte Carlo (MC) simulations. The solution depends on the following: (1) the actual amount of rounding, as determined by the actual data range (measured by the scale parameter, σ) vs. the rounding increment (Δx), combined with (2) applying a certain (sufficiently high) threshold and considering the series of excesses instead of the original series. For a moderate amount of rounding (e.g. σ/Δx ≥ 4), which is commonly met in practice (at least for dry spell data), and where no threshold is applied, the classical methods work reasonably well. If cutting at a threshold is applied to rounded data, which is actually essential when dealing with a GP distribution, then classical methods applied in a standard way can lead to erroneous estimates, even if the rounding itself is moderate. In this case, it is necessary to adjust the theoretical location parameter for the series of excesses. The other solution is to add appropriate uniform noise to the rounded data (so-called "jittering"). This, in a sense, reverses the process of rounding; thereafter, it is straightforward to apply the common methods. Finally, if the rounding is too coarse (e.g. σ/Δx ≤ 1), then none of the above recipes works, and specific methods for rounded data should be applied.
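The jittering recipe can be sketched with SciPy; the GP parameters (σ/Δx = 8, a moderate amount of rounding) and sample size are illustrative, not the paper's simulation design:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
c_true, scale_true = 0.1, 8.0   # illustrative GP shape and scale
x = stats.genpareto.rvs(c_true, scale=scale_true, size=5000,
                        random_state=rng)

x_round = np.round(x)                       # rounding to whole "days"
# jittering: add U(-0.5, 0.5) noise to reverse the rounding before fitting
x_jit = x_round + rng.uniform(-0.5, 0.5, x_round.size)
x_jit = x_jit[x_jit > 0]                    # keep values inside the support
c_hat, _, scale_hat = stats.genpareto.fit(x_jit, floc=0.0)
print(round(c_hat, 2), round(scale_hat, 1))
```

Fitting the raw rounded values instead of the jittered ones is where the discreteness-induced bias described above would enter.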
Levavasseur, Etienne; Biacabe, Anne-Gaëlle; Comoy, Emmanuel; Culeux, Audrey; Grznarova, Katarina; Privat, Nicolas; Simoneau, Steve; Flan, Benoit; Sazdovitch, Véronique; Seilhean, Danielle; Baron, Thierry; Haïk, Stéphane
2017-01-01
The transmission of classical bovine spongiform encephalopathy (C-BSE) through contaminated meat product consumption is responsible for variant Creutzfeldt-Jakob disease (vCJD) in humans. More recent and atypical forms of BSE (L-BSE and H-BSE) have been identified in cattle since the C-BSE epidemic. Their low incidence and advanced age of onset are compatible with a sporadic origin, as are most cases of Creutzfeldt-Jakob disease (CJD) in humans. Transmission studies in primates and transgenic mice expressing a human prion protein (PrP) indicated that atypical forms of BSE may be associated with a higher zoonotic potential than classical BSE, and require particular attention for public health. Recently, methods designed to amplify misfolded forms of PrP have emerged as promising tools to detect prion strains and to study their diversity. Here, we validated the real-time quaking-induced conversion assay for the discrimination of atypical and classical BSE strains using a large series of bovine samples encompassing all the atypical BSE cases detected by the French Centre of Reference during 10 years of exhaustive active surveillance. We obtained 100% sensitivity and specificity for atypical BSE detection. In addition, the assay was able to discriminate atypical and classical BSE in non-human primates, and also sporadic CJD and vCJD in humans. The RT-QuIC assay appears to be a practical means for the reliable detection of atypical BSE strains in a homologous or heterologous PrP context.
Masking effects of speech and music: does the masker's hierarchical structure matter?
Shi, Lu-Feng; Law, Yvonne
2010-04-01
Speech and music are time-varying signals organized by parallel hierarchical rules. Through a series of four experiments, this study compared the masking effects of single-talker speech and instrumental music on speech perception while manipulating the complexity of hierarchical and temporal structures of the maskers. Listeners' word recognition was found to be similar between hierarchically intact and disrupted speech or classical music maskers (Experiment 1). When sentences served as the signal, significantly greater masking effects were observed with disrupted than intact speech or classical music maskers (Experiment 2), although not with jazz or serial music maskers, which differed from the classical music masker in their hierarchical structures (Experiment 3). Removing the classical music masker's temporal dynamics or partially restoring it affected listeners' sentence recognition; yet, differences in performance between intact and disrupted maskers remained robust (Experiment 4). Hence, the effect of structural expectancy was largely present across maskers when comparing them before and after their hierarchical structure was purposefully disrupted. This effect seemed to lend support to the auditory stream segregation theory.
NASA Astrophysics Data System (ADS)
Soszyński, I.; Udalski, A.; Szymański, M. K.; Wyrzykowski, Ł.; Ulaczyk, K.; Poleski, R.; Pietrukowicz, P.; Kozłowski, S.; Skowron, D. M.; Skowron, J.; Mróz, P.; Pawlak, M.; Rybicki, K.; Jacyszyn-Dobrzeniecka, A.
2017-12-01
We present a collection of classical, type II, and anomalous Cepheids detected in the OGLE fields toward the Galactic center. The sample contains 87 classical Cepheids pulsating in one, two or three radial modes, 924 type II Cepheids divided into BL Her, W Vir, peculiar W Vir, and RV Tau stars, and 20 anomalous Cepheids - the first such objects found in the Galactic bulge. Additionally, we upgrade the OGLE Collection of RR Lyr stars in the Galactic bulge by adding 828 newly identified variables. For all Cepheids and RR Lyr stars, we publish time-series VI photometry obtained during the OGLE-IV project, from 2010 through 2017. We discuss basic properties of our classical pulsators: their spatial distribution, light curve morphology, period-luminosity relations, and position in the Petersen diagram. We present the most interesting individual objects in our collection: a type II Cepheid with additional eclipsing modulation, W Vir stars with the period doubling effect and the RVb phenomenon, a mode-switching RR Lyr star, and a triple-mode anomalous RRd star.
Signal Processing for Time-Series Functions on a Graph
2018-02-01
as filtering to functions supported on graphs. These methods can be applied to scalar functions with a domain that can be described by a fixed...classical signal processing such as filtering to account for the graph domain. This work essentially divides into 2 basic approaches: graph Laplacian-based filtering and weighted adjacency matrix-based filtering. In Shuman et al. [11], and elaborated in Bronstein et al. [13], filtering operators are
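The graph Laplacian-based filtering approach can be sketched on a small path graph (the graph size and signal values are invented): the Laplacian eigenvectors act as a graph Fourier basis, and a low-pass filter keeps only the smooth modes:

```python
import numpy as np

# path graph on 6 nodes: adjacency A, degree D, Laplacian L = D - A
n = 6
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

# graph Fourier basis: Laplacian eigenvectors, ordered smooth -> rough
w, U = np.linalg.eigh(L)
f = np.array([0.0, 1.0, 0.0, 1.0, 0.0, 1.0])   # a rough signal on the nodes
f_hat = U.T @ f                                 # graph Fourier transform

# low-pass graph filter: keep only the three smoothest modes
f_smooth = U[:, :3] @ f_hat[:3]
print(np.round(f_smooth, 2))
```

The quadratic form f'Lf measures the signal's total variation over the edges, so low-pass filtering in this basis necessarily reduces it.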
About the cumulants of periodic signals
NASA Astrophysics Data System (ADS)
Barrau, Axel; El Badaoui, Mohammed
2018-01-01
This note studies the cumulants of time series. As these functions, which originate in probability theory, are commonly used as features of deterministic signals, their classical properties are examined in this modified framework. We show that the additivity of cumulants, ensured in the case of independent random variables, requires a different hypothesis here. Practical applications are proposed, in particular an analysis of the failure of the JADE algorithm to separate some specific periodic signals.
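The use of cumulants as features of deterministic signals can be sketched with SciPy's k-statistics; the sine and noise inputs are illustrative, and for a unit-amplitude sine the fourth cumulant works out to 3/8 − 3·(1/2)² = −0.375:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
t = np.arange(4096)
x = np.sin(2 * np.pi * t / 64)       # deterministic periodic signal
g = rng.normal(size=4096)            # Gaussian noise for comparison

# k-statistics: unbiased estimators of cumulants; the 4th cumulant of a
# unit sine is 3/8 - 3*(1/2)**2 = -0.375, while a Gaussian gives ~0
k4_sine = stats.kstat(x, 4)
k4_gauss = stats.kstat(g, 4)
print(round(k4_sine, 3), round(k4_gauss, 3))
```

The nonzero fourth cumulant is what makes the sine detectable by higher-order methods, while all cumulants beyond the second vanish for the Gaussian.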
Forecasting of natural gas consumption with neural network and neuro fuzzy system
NASA Astrophysics Data System (ADS)
Kaynar, Oguz; Yilmaz, Isik; Demirkoparan, Ferhan
2010-05-01
The prediction of natural gas consumption is crucial for Turkey, which follows a foreign-dependent policy in procuring natural gas and whose storage capacity is only 5% of total internal consumption. The accuracy of demand prediction influences sectoral investments and natural gas procurement agreements, and thus the development of the sector. In recent years, new techniques, such as artificial neural networks and fuzzy inference systems, have been widely used for natural gas consumption prediction in addition to classical time series analysis. In this study, the weekly natural gas consumption of Turkey has been predicted by means of three different approaches. The first is the Autoregressive Integrated Moving Average (ARIMA), a classical time series analysis method. The second approach is the artificial neural network: two different ANN models, the Multi-Layer Perceptron (MLP) and the Radial Basis Function Network (RBFN), are employed to predict natural gas consumption. The last is the Adaptive Neuro-Fuzzy Inference System (ANFIS), which combines an ANN with a fuzzy inference system. Different prediction models have been constructed, and the model with the best forecasting performance is determined for each method. Predictions are then made using these models and the results are compared. Keywords: ANN, ANFIS, ARIMA, Natural Gas, Forecasting
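The autoregressive core of the ARIMA baseline can be sketched as ordinary least squares on lagged values; the AR(2) coefficients 0.6 and 0.3 and the synthetic "consumption" series are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(8)
phi_true = (0.6, 0.3)                 # invented AR(2) coefficients
y = np.zeros(600)
for i in range(2, y.size):            # synthetic "weekly consumption"
    y[i] = phi_true[0] * y[i - 1] + phi_true[1] * y[i - 2] + rng.normal()

# fit AR(2) by ordinary least squares on the lagged values
X = np.column_stack([y[1:-1], y[:-2]])
phi_hat, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
print(np.round(phi_hat, 2))           # near (0.6, 0.3)
```

The neural and neuro-fuzzy models in the study use the same lagged inputs but replace this linear map with a learned nonlinear one.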
Nonlinear multivariate and time series analysis by neural network methods
NASA Astrophysics Data System (ADS)
Hsieh, William W.
2004-03-01
Methods in multivariate statistical analysis are essential for working with large amounts of geophysical data, data from observational arrays, from satellites, or from numerical model output. In classical multivariate statistical analysis, there is a hierarchy of methods, starting with linear regression at the base, followed by principal component analysis (PCA) and finally canonical correlation analysis (CCA). A multivariate time series method, the singular spectrum analysis (SSA), has been a fruitful extension of the PCA technique. The common drawback of these classical methods is that only linear structures can be correctly extracted from the data. Since the late 1980s, neural network methods have become popular for performing nonlinear regression and classification. More recently, neural network methods have been extended to perform nonlinear PCA (NLPCA), nonlinear CCA (NLCCA), and nonlinear SSA (NLSSA). This paper presents a unified view of the NLPCA, NLCCA, and NLSSA techniques and their applications to various data sets of the atmosphere and the ocean (especially for the El Niño-Southern Oscillation and the stratospheric quasi-biennial oscillation). These data sets reveal that the linear methods are often too simplistic to describe real-world systems, with a tendency to scatter a single oscillatory phenomenon into numerous unphysical modes or higher harmonics, which can be largely alleviated in the new nonlinear paradigm.
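The linear baseline of this hierarchy, PCA, can be sketched as an SVD of the anomaly matrix; the two spatial "patterns" and the noise level are invented, and the nonlinear variants discussed above replace this linear projection with a neural network bottleneck:

```python
import numpy as np

rng = np.random.default_rng(9)
# toy "field": two fixed spatial patterns with independent amplitudes
amps = rng.normal(size=(1000, 2))
patterns = np.array([[1.0, 1.0, 0.0, 0.0],
                     [0.0, 0.0, 1.0, -1.0]])
data = amps @ patterns + 0.01 * rng.normal(size=(1000, 4))

# PCA = SVD of the anomaly (mean-removed) data matrix
anom = data - data.mean(axis=0)
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
print(np.round(explained, 2))         # two modes carry ~all the variance
```

When the underlying structure is a curve rather than a plane, this linear projection scatters one phenomenon across several modes, which is the failure mode NLPCA is designed to avoid.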
Subtle flickering in Cepheids: Kepler and MOST
NASA Astrophysics Data System (ADS)
Evans, Nancy Remage; Szabó, Robert; Szabados, Laszlo; Derekas, Aliz; Matthews, Jaymie M.; Cameron, Chris; the MOST Team
2014-02-01
Fundamental mode classical Cepheids have light curves which repeat accurately enough that we can watch them evolve (change period). The new level of accuracy and quantity of data from the Kepler and MOST satellites probes this further. An intriguing result was found in the long time-series of Kepler data for V1154 Cyg, the one classical Cepheid (fundamental mode, P = 4.9 d) in the field, which shows short-term changes in period (~20 minutes), correlated for ~10 cycles (period jitter). To follow this up, we obtained a month-long series of observations of the fundamental mode Cepheid RT Aur and the first overtone pulsator SZ Tau. RT Aur shows the traditional strict repetition of the light curve, with the Fourier amplitude ratio R1/R2 remaining nearly constant. The light curve of SZ Tau, on the other hand, fluctuates in amplitude ratio at the level of approximately 50%. Furthermore, prewhitening the RT Aur data with 10 frequencies reduces the Fourier spectrum to noise. For SZ Tau, considerable power is left after this prewhitening, in a complicated variety of frequencies.
Linking environmental variability to population and community dynamics: Chapter 7
Pantel, Jelena H.; Pendleton, Daniel E.; Walters, Annika W.; Rogers, Lauren A.
2014-01-01
Linking population and community responses to environmental variability lies at the heart of ecology, yet methodological approaches vary and the existence of broad patterns spanning taxonomic groups remains unclear. We review the characteristics of environmental and biological variability. Classic approaches to linking environmental variability to population and community variability are discussed, as is the importance of biotic factors such as life history and community interactions. In addition to classic approaches, newer techniques such as information theory and artificial neural networks are reviewed. The establishment and expansion of observing networks will provide new long-term ecological time-series data and, with it, opportunities to incorporate environmental variability into research. This review can help guide future research in the field of ecological and environmental variability.
Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Strickland, Matthew J; Klein, Mitchel; Waller, Lance A; Tolbert, Paige E
2011-06-22
Two distinctly different types of measurement error are Berkson and classical. The impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on the error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e. additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates, both on a per measurement unit basis and on a per interquartile range (IQR) basis, in a time-series study in Atlanta. Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and the PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Measurement error resulted in reduced statistical significance of the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios on a per unit of measurement basis ranging from 18% to 92% and on an IQR basis ranging from 18% to 86%. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e. biased toward the null) by 5% to 34%. For CO, the modelled error amount was held fixed while a range of error types was simulated, and the effects on risk ratio bias and significance were observed. For multiplicative error, both the amount and the type of measurement error impact health effect estimates in air pollution epidemiology. By modelling instrument imprecision and spatial variability as different error types, we estimate the direction and magnitude of the effects of error over a range of error types.
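The qualitative contrast between classical and Berkson error that the abstract reports can be demonstrated in a few lines. This sketch uses a simplified linear model with additive error rather than the paper's log-scale multiplicative error and Poisson GLM, so the numbers are illustrative only; the true slope, sample size, and noise levels are assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20000
beta = 0.5                                     # true exposure-response slope
x_true = rng.normal(0, 1, n)
y = beta * x_true + rng.normal(0, 0.5, n)

def slope(x, y):
    """OLS slope of y on x."""
    return np.polyfit(x, y, 1)[0]

# Classical error: the measured exposure is the truth plus independent noise;
# regression on it attenuates the slope toward the null.
x_classical = x_true + rng.normal(0, 1, n)

# Berkson error: the truth scatters around an assigned (e.g. central-monitor)
# value; the slope on the assigned value stays unbiased in a linear model.
x_assigned = rng.normal(0, 1, n)
x_berkson_true = x_assigned + rng.normal(0, 1, n)
y_berkson = beta * x_berkson_true + rng.normal(0, 0.5, n)

print(f"true slope {beta}, classical {slope(x_classical, y):.3f}, "
      f"Berkson {slope(x_assigned, y_berkson):.3f}")
```

With equal error and exposure variances the classical-error slope attenuates toward beta/2, mirroring the attenuation the study quantifies for primary pollutants.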
Numerical method based on the lattice Boltzmann model for the Fisher equation.
Yan, Guangwu; Zhang, Jianying; Dong, Yinfeng
2008-06-01
In this paper, a lattice Boltzmann model for the Fisher equation is proposed. First, the Chapman-Enskog expansion and the multiscale time expansion are used to describe the higher-order moments of the equilibrium distribution functions and a series of partial differential equations on different time scales. Second, the modified partial differential equation of the Fisher equation with the higher-order truncation error is obtained. Third, a comparison between the numerical results of the lattice Boltzmann model and the exact solution is given. The numerical results agree well with the classical ones.
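The kind of scheme the abstract outlines can be illustrated with a minimal D1Q2 lattice-BGK solver for the Fisher equation u_t = D u_xx + r u(1-u). This is a generic textbook discretization (diffusion coefficient D = tau - 1/2 in lattice units, reaction added as a weighted source), not the authors' specific model, and all parameters are illustrative:

```python
import numpy as np

def fisher_lbm(u0, tau=1.0, r=0.02, steps=500):
    """D1Q2 lattice-BGK sketch for u_t = D u_xx + r u(1-u), D = tau - 0.5 (lattice units)."""
    f_pos = u0 / 2.0                    # population moving +1 cell per step
    f_neg = u0 / 2.0                    # population moving -1 cell per step
    for _ in range(steps):
        u = f_pos + f_neg               # macroscopic density
        reaction = r * u * (1.0 - u)    # logistic source term
        feq = u / 2.0                   # D1Q2 equilibrium (weights 1/2, 1/2)
        f_pos += -(f_pos - feq) / tau + 0.5 * reaction
        f_neg += -(f_neg - feq) / tau + 0.5 * reaction
        f_pos = np.roll(f_pos, 1)       # streaming step, periodic boundaries
        f_neg = np.roll(f_neg, -1)
    return f_pos + f_neg

nx = 200
u0 = np.zeros(nx)
u0[90:110] = 0.5                        # localized initial population
u = fisher_lbm(u0)
print(f"max u = {u.max():.3f}, cells above 0.01: {np.count_nonzero(u > 0.01)}")
```

The solution stays bounded in [0, 1], grows logistically, and spreads as a travelling front, which is the qualitative behaviour the paper validates against exact solutions.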
A recurrence-weighted prediction algorithm for musical analysis
NASA Astrophysics Data System (ADS)
Colucci, Renato; Leguizamon Cucunuba, Juan Sebastián; Lloyd, Simon
2018-03-01
Forecasting the future behaviour of a system using past data is an important topic. In this article we apply nonlinear time series analysis in the context of music, and present new algorithms for extending a sample of music while maintaining characteristics similar to the original piece. By using ideas from ergodic theory, we adapt the classical prediction method of Lorenz analogues so as to take into account recurrence times, and demonstrate with examples how the new algorithm can produce predictions with a high degree of similarity to the original sample.
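The classical method of Lorenz analogues that the paper builds on can be sketched as follows: embed the series, find past states near the current state, and average their successors. This sketch uses plain inverse-distance weighting; the paper's recurrence-time weighting is a refinement not reproduced here, and the sine test series, embedding parameters, and Theiler window are assumptions:

```python
import numpy as np

def embed(x, dim, lag):
    """Delay-coordinate embedding of a scalar series."""
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag:i * lag + n] for i in range(dim)])

def predict_analogues(x, steps, dim=3, lag=1, k=5, theiler=10):
    """Extend a series by the method of analogues: find past states nearest the
    current state and average their successors (inverse-distance weighted)."""
    x = list(x)
    for _ in range(steps):
        pts = embed(np.asarray(x), dim, lag)
        current = pts[-1]
        history = pts[:-(theiler + 1)]          # exclude temporally close states
        d = np.linalg.norm(history - current, axis=1)
        idx = np.argsort(d)[:k]
        w = 1.0 / (d[idx] + 1e-12)
        successors = [x[j + (dim - 1) * lag + 1] for j in idx]
        x.append(float(np.average(successors, weights=w)))
    return np.array(x)

t = np.linspace(0, 20 * np.pi, 1000)
series = np.sin(t)
extended = predict_analogues(series[:900], steps=50)
err = np.max(np.abs(extended[900:950] - series[900:950]))
print(f"max prediction error over 50 steps: {err:.3f}")
```

For a musical application, the scalar series would be replaced by a suitable representation of the sample being extended.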
Chemistry in the Comics: Part 2. Classic Chemistry.
ERIC Educational Resources Information Center
Carter, Henry A.
1989-01-01
Describes topics in chemistry as related in the Classics Illustrated publications. Provides a list from "The Pioneers of Science" series with issue date, number, and biographical topic. Lists references to topics in chemistry. Presents many pages from these comics. (MVL)
Shao, Chenxi; Xue, Yong; Fang, Fang; Bai, Fangzhou; Yin, Peifeng; Wang, Binghong
2015-07-01
The self-controlling feedback control method requires an external periodic oscillator with special design, which is technically challenging. This paper proposes a chaos control method based on time series non-uniform rational B-splines (SNURBS for short) signal feedback. It first builds the chaos phase diagram or chaotic attractor with the sampled chaotic time series and any target orbit can then be explicitly chosen according to the actual demand. Second, we use the discrete timing sequence selected from the specific target orbit to build the corresponding external SNURBS chaos periodic signal, whose difference from the system current output is used as the feedback control signal. Finally, by properly adjusting the feedback weight, we can quickly lead the system to an expected status. We demonstrate both the effectiveness and efficiency of our method by applying it to two classic chaotic systems, i.e., the Van der Pol oscillator and the Lorenz chaotic system. Further, our experimental results show that compared with delayed feedback control, our method takes less time to obtain the target point or periodic orbit (from the starting point) and that its parameters can be fine-tuned more easily.
NASA Astrophysics Data System (ADS)
Feng, L.; Vaulin, R.; Hewitt, J. N.; Remillard, R.; Kaplan, D. L.; Murphy, Tara; Kudryavtseva, N.; Hancock, P.; Bernardi, G.; Bowman, J. D.; Briggs, F.; Cappallo, R. J.; Deshpande, A. A.; Gaensler, B. M.; Greenhill, L. J.; Hazelton, B. J.; Johnston-Hollitt, M.; Lonsdale, C. J.; McWhirter, S. R.; Mitchell, D. A.; Morales, M. F.; Morgan, E.; Oberoi, D.; Ord, S. M.; Prabu, T.; Udaya Shankar, N.; Srivani, K. S.; Subrahmanyan, R.; Tingay, S. J.; Wayth, R. B.; Webster, R. L.; Williams, A.; Williams, C. L.
2017-03-01
Many astronomical sources produce transient phenomena at radio frequencies, but the transient sky at low frequencies (<300 MHz) remains relatively unexplored. Blind surveys with new wide-field radio instruments are setting increasingly stringent limits on the transient surface density on various timescales. Although many of these instruments are limited by classical confusion noise from an ensemble of faint, unresolved sources, one can in principle detect transients below the classical confusion limit to the extent that the classical confusion noise is independent of time. We develop a technique for detecting radio transients that is based on temporal matched filters applied directly to time series of images, rather than relying on source-finding algorithms applied to individual images. This technique has well-defined statistical properties and is applicable to variable and transient searches for both confusion-limited and non-confusion-limited instruments. Using the Murchison Widefield Array as an example, we demonstrate that the technique works well on real data despite the presence of classical confusion noise, sidelobe confusion noise, and other systematic errors. We searched for transients lasting between 2 minutes and 3 months. We found no transients and set improved upper limits on the transient surface density at 182 MHz for flux densities between ˜20 and 200 mJy, providing the best limits to date for hour- and month-long transients.
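The core of the technique, a temporal matched filter applied to an image-plane time series, can be sketched for a single pixel's light curve. The boxcar template, synthetic unit-variance noise, and injected transient below are all illustrative assumptions, not MWA data:

```python
import numpy as np

def matched_filter_detect(series, template):
    """Slide a unit-norm template along the time series and return the matched-filter
    statistic (in units of the noise sigma, for unit-variance noise) and its peak epoch."""
    tpl = template / np.linalg.norm(template)
    n = len(series) - len(tpl) + 1
    stat = np.array([np.dot(series[i:i + len(tpl)], tpl) for i in range(n)])
    return stat, int(np.argmax(stat))

rng = np.random.default_rng(7)
light_curve = rng.normal(0, 1.0, 500)     # one pixel's flux series: pure noise
light_curve[200:210] += 3.0               # injected 10-sample boxcar transient
stat, epoch = matched_filter_detect(light_curve, np.ones(10))
print(f"peak statistic {stat.max():.1f} at sample {epoch}")
```

Because the filter is applied to the time series at each pixel rather than to source lists from individual images, a steady classical-confusion background contributes no signal and transients below the confusion limit remain detectable, as the abstract argues.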
Electrostatic and structural similarity of classical and non-classical lactam compounds
NASA Astrophysics Data System (ADS)
Coll, Miguel; Frau, Juan; Vilanova, Bartolomé; Donoso, Josefa; Muñoz, Francisco
2001-09-01
Various electrostatic and structural parameters for a series of classical and non-classical β-lactams were determined and compared in order to ascertain whether some specific β-lactams possess antibacterial or β-lactamase inhibitory properties. The electrostatic parameters obtained, based on the Distributed Multipole Analysis (DMA) of high-quality wavefunctions for the studied structures, suggest that some non-classical β-lactams effectively inhibit the action of β-lactamases. As shown in this work, such electrostatic parameters provide much more reliable information about the antibacterial and inhibitory properties of β-lactams than do structural parameters.
Estimating and Comparing Dam Deformation Using Classical and GNSS Techniques.
Barzaghi, Riccardo; Cazzaniga, Noemi Emanuela; De Gaetani, Carlo Iapige; Pinto, Livio; Tornatore, Vincenza
2018-03-02
Global Navigation Satellite Systems (GNSS) receivers are nowadays commonly used in monitoring applications, e.g., in estimating crustal and infrastructure displacements. This is basically due to the recent improvements in GNSS instruments and methodologies that allow high-precision positioning, 24 h availability and semiautomatic data processing. In this paper, GNSS-estimated displacements on a dam structure have been analyzed and compared with pendulum data. This study has been carried out for the Eleonora D'Arborea (Cantoniera) dam, which is in Sardinia. Time series of pendulum and GNSS over a time span of 2.5 years have been aligned so as to be comparable. Analytical models fitting these time series have been estimated and compared. Those models were able to properly fit pendulum data and GNSS data, with standard deviation of residuals smaller than one millimeter. These encouraging results led to the conclusion that GNSS technique can be profitably applied to dam monitoring allowing a denser description, both in space and time, of the dam displacements than the one based on pendulum observations.
Ultimate open pit stochastic optimization
NASA Astrophysics Data System (ADS)
Marcotte, Denis; Caron, Josiane
2013-02-01
Classical open pit optimization (maximum closure problem) is made on block estimates, without directly considering the block grades uncertainty. We propose an alternative approach of stochastic optimization. The stochastic optimization is taken as the optimal pit computed on the block expected profits, rather than expected grades, computed from a series of conditional simulations. The stochastic optimization generates, by construction, larger ore and waste tonnages than the classical optimization. Contrary to the classical approach, the stochastic optimization is conditionally unbiased for the realized profit given the predicted profit. A series of simulated deposits with different variograms are used to compare the stochastic approach, the classical approach and the simulated approach that maximizes expected profit among simulated designs. Profits obtained with the stochastic optimization are generally larger than the classical or simulated pit. The main factor controlling the relative gain of stochastic optimization compared to classical approach and simulated pit is shown to be the information level as measured by the boreholes spacing/range ratio. The relative gains of the stochastic approach over the classical approach increase with the treatment costs but decrease with mining costs. The relative gains of the stochastic approach over the simulated pit approach increase both with the treatment and mining costs. At early stages of an open pit project, when uncertainty is large, the stochastic optimization approach appears preferable to the classical approach or the simulated pit approach for fair comparison of the values of alternative projects and for the initial design and planning of the open pit.
Quantifying non-linear dynamics of mass-springs in series oscillators via asymptotic approach
NASA Astrophysics Data System (ADS)
Starosta, Roman; Sypniewska-Kamińska, Grażyna; Awrejcewicz, Jan
2017-05-01
Dynamical regular response of an oscillator with two serially connected springs with nonlinear characteristics of cubic type and governed by a set of differential-algebraic equations (DAEs) is studied. The classical approach of the multiple scales method (MSM) in time domain has been employed and appropriately modified to solve the governing DAEs of two systems, i.e. with one- and two degrees-of-freedom. The approximate analytical solutions have been verified by numerical simulations.
NASA Astrophysics Data System (ADS)
Ivanov, Sergey V.; Buzykin, Oleg G.
2016-12-01
A classical approach is applied to calculate the pressure broadening coefficients of CO2 vibration-rotational spectral lines perturbed by Ar. Three types of spectra are examined: electric dipole (infrared) absorption, and isotropic and anisotropic Raman Q branches. Simple and explicit formulae of the classical impact theory are used along with exact 3D Hamilton equations for CO2-Ar molecular motion. The calculations utilize the most accurate vibrationally independent ab initio potential energy surface (PES) of Hutson et al., expanded in a Legendre polynomial series up to lmax = 24. A new, improved algorithm for classical rotational frequency selection is applied. The dependences of CO2 half-widths on the rotational quantum number J up to J = 100 are computed for temperatures between 77 and 765 K and compared with available experimental data as well as with the results of fully quantum dynamical calculations performed on the same PES. To make the picture complete, the predictions of two independent variants of the semi-classical Robert-Bonamy formalism for dipole absorption lines are included. This method, however, demonstrates poor accuracy at almost all temperatures. On the contrary, the classical broadening coefficients are in excellent agreement both with measurements and with quantum results at all temperatures. The classical impact theory in its present variant is capable of producing, quickly and accurately, the pressure broadening coefficients of spectral lines of linear molecules for any J value (including high J) using a full-dimensional ab initio-based PES in cases where other computational methods are either extremely time consuming (like the quantum close coupling method) or give erroneous results (like semi-classical methods).
An astronomer's guide to period searching
NASA Astrophysics Data System (ADS)
Schwarzenberg-Czerny, A.
2003-03-01
We concentrate on the analysis of unevenly sampled time series, interrupted by periodic gaps, as often encountered in astronomy. While some of our conclusions may appear surprising, all are based on the classical statistical principles of Fisher and his successors. Except for the discussion of resolution issues, it is best for the reader to forget temporarily about Fourier transforms and to concentrate on the problem of fitting a time series with a model curve. According to their statistical content we divide the issues into several sections, consisting of: (i) statistical and numerical aspects of model fitting; (ii) evaluation of fitted models as hypothesis testing; (iii) the role of orthogonal models in signal detection; (iv) conditions for equivalence of periodograms; and (v) rating sensitivity by test power. An experienced observer working with individual objects would benefit little from a formalized statistical approach. However, we demonstrate the usefulness of this approach in evaluating the performance of periodograms and in the quantitative design of large variability surveys.
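The model-fitting viewpoint the abstract recommends can be sketched as a least-squares periodogram: for each trial frequency, fit a harmonic model to the unevenly sampled data and record the fraction of variance it removes. The synthetic irregular sampling, noise level, and frequency grid below are assumptions:

```python
import numpy as np

def ls_periodogram(t, y, freqs):
    """Least-squares power: for each trial frequency fit y ~ a*cos + b*sin + c and
    record the reduction in residual variance (uneven sampling poses no problem)."""
    y = y - y.mean()
    var0 = np.sum(y ** 2)
    power = np.empty(len(freqs))
    for k, f in enumerate(freqs):
        X = np.column_stack([np.cos(2 * np.pi * f * t),
                             np.sin(2 * np.pi * f * t),
                             np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ coef
        power[k] = 1.0 - np.sum(resid ** 2) / var0
    return power

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 30, 200))       # irregular sampling times
true_f = 0.7
y = np.sin(2 * np.pi * true_f * t) + 0.3 * rng.normal(size=t.size)
freqs = np.linspace(0.05, 2.0, 800)
best = freqs[np.argmax(ls_periodogram(t, y, freqs))]
print(f"recovered frequency {best:.3f} (true {true_f})")
```

Evaluating the fitted model at each frequency as a hypothesis test, and comparing periodograms by their test power, follows the sections listed in the abstract.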
Cross over of recurrence networks to random graphs and random geometric graphs
NASA Astrophysics Data System (ADS)
Jacob, Rinku; Harikrishnan, K. P.; Misra, R.; Ambika, G.
2017-02-01
Recurrence networks are complex networks constructed from the time series of chaotic dynamical systems where the connection between two nodes is limited by the recurrence threshold. This condition makes the topology of every recurrence network unique with the degree distribution determined by the probability density variations of the representative attractor from which it is constructed. Here we numerically investigate the properties of recurrence networks from standard low-dimensional chaotic attractors using some basic network measures and show how the recurrence networks are different from random and scale-free networks. In particular, we show that all recurrence networks can cross over to random geometric graphs by adding sufficient amount of noise to the time series and into the classical random graphs by increasing the range of interaction to the system size. We also highlight the effectiveness of a combined plot of characteristic path length and clustering coefficient in capturing the small changes in the network characteristics.
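The construction the abstract describes can be sketched directly: embed the series, compute pairwise distances between state vectors, and link nodes closer than the recurrence threshold. The logistic-map source series, embedding parameters, and 10% recurrence rate below are illustrative assumptions:

```python
import numpy as np

def recurrence_network(series, dim=2, lag=1, eps=None):
    """Adjacency matrix of a recurrence network: nodes are embedded state vectors,
    linked when their distance lies below the recurrence threshold eps."""
    n = len(series) - (dim - 1) * lag
    pts = np.column_stack([series[i * lag:i * lag + n] for i in range(dim)])
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    if eps is None:
        # choose eps so that ~10% of node pairs recur (fixed recurrence rate)
        eps = np.percentile(d[np.triu_indices(n, 1)], 10)
    adj = (d <= eps).astype(int)
    np.fill_diagonal(adj, 0)          # no self-loops
    return adj

# Logistic map in its chaotic regime as the source time series.
x = np.empty(500)
x[0] = 0.4
for i in range(499):
    x[i + 1] = 4.0 * x[i] * (1.0 - x[i])
adj = recurrence_network(x)
degrees = adj.sum(axis=0)
print(f"nodes {adj.shape[0]}, mean degree {degrees.mean():.1f}")
```

The crossovers discussed in the paper correspond to perturbing this construction: adding noise to the series (toward random geometric graphs) or widening eps toward the attractor size (toward classical random graphs).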
Testing for detailed balance in a financial market
NASA Astrophysics Data System (ADS)
Fiebig, H. R.; Musgrove, D. P.
2015-06-01
We test a historical price-time series in a financial market (the NASDAQ 100 index) for a statistical property known as detailed balance. The presence of detailed balance would imply that the market can be modeled by a stochastic process based on a Markov chain, thus leading to equilibrium. In economic terms, a positive outcome of the test would support the efficient market hypothesis, a cornerstone of neo-classical economic theory. In contrast to its usage in prevalent economic theory, the term equilibrium is here tied to the returns, rather than the price-time series. The test is based on an action functional S constructed from the elements of the detailed balance condition and the historical data set, and on analyzing S by means of simulated annealing. Checks are performed to verify the validity of the analysis method. We discuss the outcome of this analysis.
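A crude finite-sample check of detailed balance, not the paper's action-functional and simulated-annealing machinery, can be sketched by discretizing returns into states and comparing forward and reverse transition counts n(i,j) versus n(j,i). The state count and the two synthetic series below are assumptions:

```python
import numpy as np

def detailed_balance_statistic(returns, n_states=4):
    """Discretize returns into quantile states, count transitions n[i,j], and
    measure the normalized asymmetry between n[i,j] and n[j,i] over state pairs:
    ~0 suggests detailed balance, 1 means fully one-way transitions."""
    edges = np.quantile(returns, np.linspace(0, 1, n_states + 1)[1:-1])
    states = np.digitize(returns, edges)
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    iu = np.triu_indices(n_states, 1)
    asym = np.abs(counts - counts.T)[iu].sum()
    total = (counts + counts.T)[iu].sum()
    return asym / max(total, 1e-12)

rng = np.random.default_rng(11)
iid_returns = rng.normal(size=5000)                     # reversible: balanced
sawtooth = np.tile(np.linspace(0, 1, 250, endpoint=False), 20)  # one-way cycle
stat_iid = detailed_balance_statistic(iid_returns)
stat_cycle = detailed_balance_statistic(sawtooth)
print(f"i.i.d. returns {stat_iid:.3f}, sawtooth cycle {stat_cycle:.3f}")
```

An i.i.d. return series is time-reversible and scores near zero, while a deterministic one-way cycle scores near one; the paper's test formalizes this idea for the NASDAQ 100 returns.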
Individualistic and Time-Varying Tree-Ring Growth to Climate Sensitivity
Carrer, Marco
2011-01-01
The development of dendrochronological time series in order to analyze climate-growth relationships usually involves first a rigorous selection of trees and then the computation of the mean tree-growth measurement series. This study suggests a change in the perspective, passing from an analysis of climate-growth relationships that typically focuses on the mean response of a species to investigating the whole range of individual responses among sample trees. Results highlight that this new approach, tested on a larch and stone pine tree-ring dataset, outperforms, in terms of information obtained, the classical one, with significant improvements regarding the strength, distribution and time-variability of the individual tree-ring growth response to climate. Moreover, a significant change over time of the tree sensitivity to climatic variability has been detected. Accordingly, the best-responder trees at any one time may not always have been the best-responders and may not continue to be so. With minor adjustments to current dendroecological protocol and adopting an individualistic approach, we can improve the quality and reliability of the ecological inferences derived from the climate-growth relationships. PMID:21829523
Connected Text Reading and Differences in Text Reading Fluency in Adult Readers
Wallot, Sebastian; Hollis, Geoff; van Rooij, Marieke
2013-01-01
The process of connected text reading has received very little attention in contemporary cognitive psychology. This lack of attention is in part due to a research tradition that emphasizes the role of basic lexical constituents, which can be studied in isolated words or sentences. However, it is also in part due to the lack of statistical analysis techniques that accommodate interdependent time series. In this study, we investigate text reading performance with traditional and nonlinear analysis techniques and show how outcomes from multiple analyses can be used to create a more detailed picture of the process of text reading. Specifically, we investigate the reading performance of groups of literate adult readers that differ in reading fluency during a self-paced text reading task. Our results indicate that classical metrics of reading (such as word frequency) do not capture text reading very well, and that classical measures of reading fluency (such as average reading time) distinguish relatively poorly between participant groups. Nonlinear analyses of distribution tails and reading time fluctuations provide more fine-grained information about the reading process and reading fluency. PMID:23977177
Reversibility in Quantum Models of Stochastic Processes
NASA Astrophysics Data System (ADS)
Gier, David; Crutchfield, James; Mahoney, John; James, Ryan
Natural phenomena such as time series of neural firing, orientation of layers in crystal stacking and successive measurements in spin-systems are inherently probabilistic. The provably minimal classical models of such stochastic processes are ɛ-machines, which consist of internal states, transition probabilities between states and output values. The topological properties of the ɛ-machine for a given process characterize the structure, memory and patterns of that process. However ɛ-machines are often not ideal because their statistical complexity (Cμ) is demonstrably greater than the excess entropy (E) of the processes they represent. Quantum models (q-machines) of the same processes can do better in that their statistical complexity (Cq) obeys the relation Cμ ≥ Cq ≥ E. q-machines can be constructed to consider longer lengths of strings, resulting in greater compression. With code-words of sufficiently long length, the statistical complexity becomes time-symmetric - a feature apparently novel to this quantum representation. This result has ramifications for compression of classical information in quantum computing and quantum communication technology.
NASA Astrophysics Data System (ADS)
Grohs, Jacob R.; Li, Yongqiang; Dillard, David A.; Case, Scott W.; Ellis, Michael W.; Lai, Yeh-Hung; Gittleman, Craig S.
Temperature and humidity fluctuations in operating fuel cells impose significant biaxial stresses in the constrained proton exchange membranes (PEMs) of a fuel cell stack. The strength of the PEM, and its ability to withstand cyclic environment-induced stresses, plays an important role in membrane integrity and consequently, fuel cell durability. In this study, a pressure loaded blister test is used to characterize the biaxial strength of Gore-Select® series 57 over a range of times and temperatures. Hencky's classical solution for a pressurized circular membrane is used to estimate biaxial strength values from burst pressure measurements. A hereditary integral is employed to construct the linear viscoelastic analog to Hencky's linear elastic exact solution. Biaxial strength master curves are constructed using traditional time-temperature superposition principle techniques and the associated temperature shift factors show good agreement with shift factors obtained from constitutive (stress relaxation) and fracture (knife slit) tests of the material.
Pi2 detection using Empirical Mode Decomposition (EMD)
NASA Astrophysics Data System (ADS)
Mieth, Johannes Z. D.; Frühauff, Dennis; Glassmeier, Karl-Heinz
2017-04-01
Empirical Mode Decomposition has been used as an alternative method to wavelet transformation to identify the onset times of Pi2 pulsations in data sets of the Scandinavian Magnetometer Array (SMA). Pi2 pulsations are magnetohydrodynamic waves occurring during magnetospheric substorms. Pi2 are almost always observed at substorm onset at mid to low latitudes on Earth's nightside. They are fed by the magnetic energy release caused by dipolarization processes. Their periods lie between 40 and 150 seconds. Usually, Pi2 are detected using wavelet transformation. Here, Empirical Mode Decomposition (EMD) is presented as an alternative to the traditional procedure. EMD is a young signal decomposition method designed for nonlinear and non-stationary time series. It provides an adaptive, data-driven, and complete decomposition of a time series into slow and fast oscillations. An optimized version using Monte-Carlo-type noise assistance is used here. By displaying the results in a time-frequency space, a characteristic frequency modulation is observed. This frequency modulation can be correlated with the onset of Pi2 pulsations. A basic algorithm to find the onset is presented. Finally, the results are compared to classical wavelet-based analysis. The use of different SMA stations furthermore allows the spatial analysis of Pi2 onset times. EMD mostly finds application in the fields of engineering and medicine. This work demonstrates the applicability of the method to geomagnetic time series.
Beyond Fractals and 1/f Noise: Multifractal Analysis of Complex Physiological Time Series
NASA Astrophysics Data System (ADS)
Ivanov, Plamen Ch.; Amaral, Luis A. N.; Ashkenazy, Yosef; Stanley, H. Eugene; Goldberger, Ary L.; Hausdorff, Jeffrey M.; Yoneyama, Mitsuru; Arai, Kuniharu
2001-03-01
We investigate time series with 1/f-like spectra generated by two physiologic control systems --- the human heartbeat and human gait. We show that physiological fluctuations exhibit unexpected ``hidden'' structures often described by scaling laws. In particular, our studies indicate that when analyzed on different time scales the heartbeat fluctuations exhibit cascades of branching patterns with self-similar (fractal) properties, characterized by long-range power-law anticorrelations. We find that these scaling features change during sleep and wake phases, and with pathological perturbations. Further, by means of a new wavelet-based technique, we find evidence of multifractality in the healthy human heartbeat even under resting conditions, and show that the multifractal character and nonlinear properties of the healthy heart are encoded in the Fourier phases. We uncover a loss of multifractality for a life-threatening condition, congestive heart failure. In contrast to the heartbeat, we find that the interstride interval time series of healthy human gait, a voluntary process under neural regulation, is described by a single fractal dimension (such as classical 1/f noise) indicating monofractal behavior. Thus our approach can help distinguish physiological and physical signals with comparable frequency spectra and two-point correlations, and guide modeling of their control mechanisms.
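The monofractal scaling the abstract contrasts with multifractality is commonly estimated by detrended fluctuation analysis (DFA); a minimal sketch follows. The scales, series lengths, and synthetic white-noise and Brownian test signals are illustrative assumptions, and the full multifractal (wavelet-based) analysis of the paper is not reproduced:

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis: returns the scaling exponent alpha
    (~0.5 for white noise, ~1.0 for classical 1/f noise, ~1.5 for Brownian motion)."""
    profile = np.cumsum(x - np.mean(x))            # integrated signal profile
    fluct = []
    for s in scales:
        n_seg = len(profile) // s
        segs = profile[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        ms = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)           # local linear trend
            ms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        fluct.append(np.sqrt(np.mean(ms)))         # RMS fluctuation at scale s
    return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

rng = np.random.default_rng(5)
scales = np.array([16, 32, 64, 128, 256])
white = rng.normal(size=8192)
brown = np.cumsum(rng.normal(size=8192))
alpha_w = dfa(white, scales)
alpha_b = dfa(brown, scales)
print(f"alpha(white) = {alpha_w:.2f}, alpha(brownian) = {alpha_b:.2f}")
```

A single well-defined exponent, as for healthy gait in the abstract, indicates monofractal behavior; a spectrum of exponents, as for the healthy heartbeat, indicates multifractality.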
NASA Astrophysics Data System (ADS)
He, Jiayi; Shang, Pengjian; Xiong, Hui
2018-06-01
Stocks, as the concrete manifestation of financial time series with plenty of potential information, are often used in the study of financial time series. In this paper, we utilize stock data to recognize patterns through the dissimilarity matrix based on modified cross-sample entropy; three-dimensional perceptual maps of the results are then provided through the multidimensional scaling method. Two modified multidimensional scaling methods are proposed in this paper: multidimensional scaling based on Kronecker-delta cross-sample entropy (MDS-KCSE) and multidimensional scaling based on permutation cross-sample entropy (MDS-PCSE). These two methods use Kronecker-delta-based cross-sample entropy and permutation-based cross-sample entropy to replace the distance or dissimilarity measurement in classical multidimensional scaling (MDS). Multidimensional scaling based on Chebyshev distance (MDSC) is employed to provide a reference for comparisons. Our analysis reveals clear clustering both in synthetic data and in 18 indices from diverse stock markets. It implies that time series generated by the same model are more likely to share similar irregularity than others, and that differences between stock indices, caused by country or region and by different financial policies, are reflected in the irregularity of the data. In the synthetic data experiments, not only can the time series generated by different models be distinguished, but those generated under different parameters of the same model can also be detected. In the financial data experiment, the stock indices are clearly divided into five groups corresponding to five regions: Europe, North America, South America, the Asia-Pacific region (excluding mainland China), and mainland China together with Russia. The results also demonstrate that MDS-KCSE and MDS-PCSE provide more effective divisions in these experiments than MDSC.
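The classical MDS step that the proposed methods modify can be sketched as the Torgerson procedure: double-center the squared dissimilarities and take the leading eigenvectors as map coordinates. For a self-contained check, the entropy-based dissimilarities of the paper are replaced here by Euclidean distances of known 2D points, which classical MDS should recover exactly up to rotation:

```python
import numpy as np

def classical_mds(dissim, n_dims=3):
    """Classical (Torgerson) MDS: double-center the squared dissimilarity matrix
    and use the leading eigenvectors as coordinates for the perceptual map."""
    n = dissim.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    b = -0.5 * j @ (dissim ** 2) @ j               # inner-product (Gram) matrix
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:n_dims]        # largest eigenvalues first
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 2.0]])
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
coords = classical_mds(d, n_dims=2)
d_rec = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
print(f"max distance error: {np.abs(d_rec - d).max():.2e}")
```

MDS-KCSE and MDS-PCSE keep this mapping step but feed it cross-sample-entropy dissimilarities between stock index series instead of geometric distances.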
NASA Astrophysics Data System (ADS)
Soja, B.; Krasna, H.; Boehm, J.; Gross, R. S.; Abbondanza, C.; Chin, T. M.; Heflin, M. B.; Parker, J. W.; Wu, X.
2017-12-01
The most recent realizations of the ITRS include several innovations, two of which are especially relevant to this study. On the one hand, the IERS ITRS combination center at DGFI-TUM introduced a two-level approach with DTRF2014, consisting of a classical deterministic frame based on normal equations and an optional coordinate time series of non-tidal displacements calculated from geophysical loading models. On the other hand, the JTRF2014 by the combination center at JPL is a time series representation of the ITRF determined by Kalman filtering. Both the JTRF2014 and the second level of the DTRF2014 are thus able to take into account short-term variations in the station coordinates. In this study, based on VLBI data, we combine these two approaches, applying them to the determination of both terrestrial and celestial reference frames. Our product has two levels like DTRF2014, with the second level being a Kalman filter solution like JTRF2014. First, we compute a classical TRF and CRF in a global least-squares adjustment by stacking normal equations from 5446 VLBI sessions between 1979 and 2016 using the Vienna VLBI and Satellite Software VieVS (solution level 1). Next, we obtain coordinate residuals from the global adjustment by applying the level-1 TRF and CRF in the single-session analysis and estimating coordinate offsets. These residuals are fed into a Kalman filter and smoother, taking into account the stochastic properties of the individual stations and radio sources. The resulting coordinate time series (solution level 2) serve as an additional layer representing irregular variations not considered in the first level of our approach. Both levels of our solution are implemented in VieVS in order to test their individual and combined performance regarding the repeatabilities of estimated baseline lengths, EOP, and radio source coordinates.
Frequency analysis via the method of moment functionals
NASA Technical Reports Server (NTRS)
Pearson, A. E.; Pan, J. Q.
1990-01-01
Several variants are presented of a linear-in-parameters least squares formulation for determining the transfer function of a stable linear system at specified frequencies, given a finite set of Fourier series coefficients calculated from transient nonstationary input-output data. The basis of the technique is Shinbrot's classical method of moment functionals, using complex Fourier-based modulating functions to convert a differential equation model on a finite time interval into an algebraic equation which depends linearly on frequency-related parameters.
Laser-driven planar Rayleigh-Taylor instability experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glendinning, S.G.; Weber, S.V.; Bell, P.
1992-08-24
We have performed a series of experiments on the Nova Laser Facility to examine the hydrodynamic behavior of directly driven planar foils with initial perturbations of varying wavelength. The foils were accelerated with a single, frequency-doubled, smoothed, and temporally shaped laser beam at 0.8×10^14 W/cm^2. The experiments are in good agreement with numerical simulations using the computer codes LASNEX and ORCHID, which show growth rates reduced to about 70% of classical for this nonlinear regime.
Classical Electrodynamics: Lecture notes
NASA Astrophysics Data System (ADS)
Likharev, Konstantin K.
2018-06-01
Essential Advanced Physics is a series comprising four parts: Classical Mechanics, Classical Electrodynamics, Quantum Mechanics and Statistical Mechanics. Each part consists of two volumes, Lecture notes and Problems with solutions, further supplemented by an additional collection of test problems and solutions available to qualifying university instructors. This volume, Classical Electrodynamics: Lecture notes, is intended to be the basis for a two-semester graduate-level course on electricity and magnetism, including not only the interaction and dynamics of charged point particles, but also properties of dielectric, conducting, and magnetic media. The course also covers special relativity, including its kinematics and particle-dynamics aspects, and electromagnetic radiation by relativistic particles.
The mass and age of the first SONG target: the red giant 46 LMi
NASA Astrophysics Data System (ADS)
Frandsen, S.; Fredslund Andersen, M.; Brogaard, K.; Jiang, C.; Arentoft, T.; Grundahl, F.; Kjeldsen, H.; Christensen-Dalsgaard, J.; Weiss, E.; Pallé, P.; Antoci, V.; Kjærgaard, P.; Sørensen, A. N.; Skottfelt, J.; Jørgensen, U. G.
2018-05-01
Context. The Stellar Observation Network Group (SONG) is an initiative to build a worldwide network of 1m telescopes with high-precision radial-velocity spectrographs. Here we analyse the first radial-velocity time series of a red-giant star measured by the SONG telescope at Tenerife. The asteroseismic results demonstrate a major increase in the achievable precision of the parameters for red-giant stars obtainable from ground-based observations. Reliable tests of the validity of these results are needed, however, before the accuracy of the parameters can be trusted. Aims: We analyse the first SONG time series for the star 46 LMi, which has a precise parallax and an angular diameter measured from interferometry, and therefore a good determination of the stellar radius. We use asteroseismic scaling relations to obtain an accurate mass, and modelling to determine the age. Methods: A 55-day time series of high-resolution, high S/N spectra was obtained with the first SONG telescope. We derive the asteroseismic parameters by analysing the power spectrum. To obtain a first estimate of the large separation of modes in the power spectrum, we applied a new method that scales Kepler red-giant stars to 46 LMi. Results: Several methods have been applied: classical estimates, seismic methods using the observed time series, and model calculations to derive the fundamental parameters of 46 LMi. Parameters determined using the different methods are consistent within the uncertainties. We find the following values for the mass M (scaling), radius R (classical), age (modelling), and surface gravity (combining mass and radius): M = 1.09 ± 0.04 M⊙, R = 7.95 ± 0.11 R⊙, age t = 8.2 ± 1.9 Gyr, and log g = 2.674 ± 0.013. Conclusions: The exciting possibilities for ground-based asteroseismology of solar-like oscillations with a fully robotic network have been illustrated by the results obtained from just a single site of the SONG network. 
The window function is still a severe problem which will be solved when there are more nodes in the network. Based on observations made with the Hertzsprung SONG telescope operated at the Spanish Observatorio del Teide on the island of Tenerife by the Aarhus and Copenhagen Universities and by the Instituto de Astrofísica de Canarias.
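The mass quoted above comes from asteroseismic scaling relations. A minimal sketch of the standard relations follows; the solar reference values are commonly used ones assumed here for illustration, not necessarily the paper's exact calibration.

```python
# Asteroseismic scaling relations (sketch). The solar reference
# values below are assumptions for illustration; the paper's
# exact calibration may differ.
NU_MAX_SUN = 3090.0   # frequency of maximum power, muHz
DNU_SUN = 135.1       # large frequency separation, muHz
TEFF_SUN = 5777.0     # effective temperature, K

def scaling_mass(nu_max, delta_nu, teff):
    """M/Msun from nu_max, the large separation delta_nu, and Teff."""
    return (nu_max / NU_MAX_SUN) ** 3 \
        * (delta_nu / DNU_SUN) ** -4 \
        * (teff / TEFF_SUN) ** 1.5

def scaling_radius(nu_max, delta_nu, teff):
    """R/Rsun from the same observables."""
    return (nu_max / NU_MAX_SUN) \
        * (delta_nu / DNU_SUN) ** -2 \
        * (teff / TEFF_SUN) ** 0.5

# Sanity check: solar inputs must return 1 solar mass and radius.
print(scaling_mass(NU_MAX_SUN, DNU_SUN, TEFF_SUN))    # 1.0
print(scaling_radius(NU_MAX_SUN, DNU_SUN, TEFF_SUN))  # 1.0
```

Combining the two relations gives the surface gravity quoted in the abstract, since g ∝ M/R².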
Generation of Classical DInSAR and PSI Ground Motion Maps on a Cloud Thematic Platform
NASA Astrophysics Data System (ADS)
Mora, Oscar; Ordoqui, Patrick; Romero, Laia
2016-08-01
This paper presents the experience of ALTAMIRA INFORMATION in deploying InSAR (Synthetic Aperture Radar Interferometry) services on the Geohazard Exploitation Platform (GEP), supported by ESA. Two processing chains are presented together with ground motion maps obtained from cloud computing: DIAPASON for classical DInSAR and SPN (Stable Point Network) for PSI (Persistent Scatterer Interferometry) processing. The product obtained from DIAPASON is the interferometric phase related to ground motion (phase fringes from a SAR pair). SPN provides motion data (mean velocity and time series) on high-quality pixels from a stack of SAR images. DIAPASON is already implemented, and SPN is under development to be exploited with historical data coming from the ERS-1/2 and ENVISAT satellites, and current acquisitions of SENTINEL-1 in SLC and TOPSAR modes.
Computation of solar perturbations with Poisson series
NASA Technical Reports Server (NTRS)
Broucke, R.
1974-01-01
Description of a project for computing first-order perturbations of natural or artificial satellites by integrating the equations of motion on a computer with automatic Poisson series expansions. A basic feature of the method of solution is that the classical variation-of-parameters formulation is used rather than rectangular coordinates. However, the variation-of-parameters formulation uses the three rectangular components of the disturbing force rather than the classical disturbing function, so that there is no problem in expanding the disturbing function in series. Another characteristic of the variation-of-parameters formulation employed is that six rather unusual variables are used in order to avoid singularities at zero eccentricity and zero (or 90 deg) inclination. The integration process starts by assuming that all the orbit elements present on the right-hand sides of the equations of motion are constants. These right-hand sides are then simple Poisson series which can be obtained with the use of the Bessel expansions of the two-body problem in conjunction with certain iteration methods. These Poisson series can then be integrated term by term, and a first-order solution is obtained.
Estimating and Comparing Dam Deformation Using Classical and GNSS Techniques
Barzaghi, Riccardo; De Gaetani, Carlo Iapige
2018-01-01
Global Navigation Satellite Systems (GNSS) receivers are nowadays commonly used in monitoring applications, e.g., in estimating crustal and infrastructure displacements. This is basically due to the recent improvements in GNSS instruments and methodologies that allow high-precision positioning, 24 h availability and semiautomatic data processing. In this paper, GNSS-estimated displacements on a dam structure have been analyzed and compared with pendulum data. This study has been carried out for the Eleonora D’Arborea (Cantoniera) dam, which is in Sardinia. Time series of pendulum and GNSS over a time span of 2.5 years have been aligned so as to be comparable. Analytical models fitting these time series have been estimated and compared. Those models were able to properly fit pendulum data and GNSS data, with standard deviation of residuals smaller than one millimeter. These encouraging results led to the conclusion that GNSS technique can be profitably applied to dam monitoring allowing a denser description, both in space and time, of the dam displacements than the one based on pendulum observations. PMID:29498650
Increasing temperature exacerbated Classic Maya conflict over the long term
NASA Astrophysics Data System (ADS)
Carleton, W. Christopher; Campbell, David; Collard, Mark
2017-05-01
The impact of climate change on conflict is an important but controversial topic. One issue that needs to be resolved is whether or not climate change exacerbates conflict over the long term. With this in mind, we investigated the relationship between climate change and conflict among Classic Maya polities over a period of several hundred years (363-888 CE). We compiled a list of conflicts recorded on dated monuments, and then located published temperature and rainfall records for the region. Subsequently, we used a recently developed time-series method to investigate the impact of the climatic variables on the frequency of conflict while controlling for trends in monument number. We found that there was a substantial increase in conflict in the approximately 500 years covered by the dataset. This increase could not be explained by change in the amount of rainfall. In contrast, the increase was strongly associated with an increase in summer temperature. These findings have implications not only for Classic Maya history but also for the debate about the likely effects of contemporary climate change.
Fernandes, Lohengrin Dias de Almeida; Fagundes Netto, Eduardo Barros; Coutinho, Ricardo
2017-01-01
Currently, spatial and temporal changes in nutrient availability, marine planktonic communities, and fish communities are best described on a shorter-than-inter-annual (seasonal) scale, primarily because the simultaneous year-to-year variations in physical, chemical, and biological parameters are very complex. The limited availability of time series datasets furnishing simultaneous evaluations of temperature, nutrients, plankton, and fish has limited our ability to describe and predict variability related to short-term processes, such as species-specific phenology and environmental seasonality. In the present study, we combine a computational time series analysis of a 15-year (1995–2009) weekly-sampled time series (a high-resolution long-term series of 780 weeks) with an autoregressive distributed lag model to track non-seasonal changes in 10 potentially related parameters: sea surface temperature, nutrient concentrations (NO2, NO3, NH4 and PO4), phytoplankton biomass (as in situ chlorophyll a biomass), meroplankton (barnacle and mussel larvae), and fish abundance (Mugil liza and Caranx latus). Our data demonstrate for the first time that highly intense and frequent upwelling years initiate a large energy flux that is not fully transmitted through the classical size-structured food web by bottom-up stimulus but through additional ontogenetic steps. A delayed inter-annual sequential effect from phytoplankton up to top predators such as carnivorous fishes is expected if most of the energy is trapped in benthic filter-feeding organisms and their larval forms. These sequential events can explain major changes in the ecosystem food web that were not predicted by previous short-term models. PMID:28886162
Projective limits of state spaces I. Classical formalism
NASA Astrophysics Data System (ADS)
Lanéry, Suzanne; Thiemann, Thomas
2017-01-01
In this series of papers, we investigate the projective framework initiated by Jerzy Kijowski (1977) and Andrzej Okołów (2009, 2013, 2014), which describes the states of a quantum (field) theory as projective families of density matrices. A short reading guide to the series can be found in [27]. The present first paper aims at clarifying the classical structures that underlies this formalism, namely projective limits of symplectic manifolds [27, subsection 2.1]. In particular, this allows us to discuss accurately the issues hindering an easy implementation of the dynamics in this context, and to formulate a strategy for overcoming them [27, subsection 4.1].
NASA Astrophysics Data System (ADS)
Polanco Martínez, Josue M.; Medina-Elizalde, Martin; Burns, Stephen J.; Jiang, Xiuyang; Shen, Chuan-Chou
2015-04-01
It has been widely accepted by the paleoclimate and archaeology communities that extreme climate events (especially droughts) and past climate change played an important role in the cultural changes that occurred in at least some parts of the Maya Lowlands, from the Pre-Classic (2000 BC to 250 AD) to Post-Classic periods (1000 to 1521 AD) [1, 2]. In particular, a large number of studies suggest that the decline of the Maya civilization in the Terminal Classic Period was greatly influenced by prolonged severe drought events that probably triggered significant societal disruptions [1, 3, 4, 5]. Building on these issues, the aim of this work is to detect climate regime shifts in several paleoclimate time series from the Yucatán Peninsula (México) that have been used as rainfall proxies [3, 5, 6, 7]. In order to extract information from the paleoclimate data studied, we have used a change point method [8] as implemented in the R package strucchange, as well as the RAMPFIT method [9]. The preliminary results show for all the records analysed a prominent regime shift between 400 and 200 BCE (from a noticeable increase to a remarkable fall in precipitation), which is strongest in the recently obtained stalagmite (Itzamna) δ18O precipitation record [7]. References [1] Gunn, J. D., Matheny, R. T., Folan, W. J., 2002. Climate-change studies in the Maya area. Ancient Mesoamerica, 13(01), 79-84. [2] Yaeger, J., Hodell, D. A., 2008. The collapse of Maya civilization: assessing the interaction of culture, climate, and environment. El Niño, Catastrophism, and Culture Change in Ancient America, 197-251. [3] Hodell, D. A., Curtis, J. H., Brenner, M., 1995. Possible role of climate in the collapse of Classic Maya civilization. Nature, 375(6530), 391-394. [4] Aimers, J., Hodell, D., 2011. Societal collapse: Drought and the Maya. Nature, 479(7371), 44-45. [5] Medina-Elizalde, M., Rohling, E. J., 2012. 
Collapse of Classic Maya civilization related to modest reduction in precipitation. Science, 335(6071), 956-959. [6] Medina-Elizalde, M., Burns, S. J., Lea, D. W., Asmerom, Y., von Gunten, L., Polyak, V., Vuille, M., Karmalkar, A., 2010. High resolution stalagmite climate record from the Yucatán Peninsula spanning the Maya terminal classic period. Earth and Planetary Science Letters, 298(1), 255-262. [7] Medina-Elizalde, M., Burns, S. J., Jiang, X., Shen, C. C., Lases-Hernandez, F., Polanco-Martinez, J. M. High-resolution stalagmite record from the Yucatan Peninsula spanning the Preclassic period, work in progress to be submitted to Global and Planetary Change (by invitation). [8] Zeileis, A., Leisch, F., Hornik, K., Kleiber, C., 2002. strucchange: An R Package for Testing for Structural Change in Linear Regression Models. Journal of Statistical Software, 7(2), 1-38. [9] Mudelsee, M., 2000. Ramp function regression: a tool for quantifying climate transitions. Computers & Geosciences, 26(3), 293-307.
Classical Electrodynamics: Problems with solutions
NASA Astrophysics Data System (ADS)
Likharev, Konstantin K.
2018-06-01
Essential Advanced Physics is a series comprising four parts: Classical Mechanics, Classical Electrodynamics, Quantum Mechanics and Statistical Mechanics. Each part consists of two volumes, Lecture notes and Problems with solutions, further supplemented by an additional collection of test problems and solutions available to qualifying university instructors. This volume, Classical Electrodynamics: Problems with solutions, accompanies the corresponding Lecture notes, which are intended to be the basis for a two-semester graduate-level course on electricity and magnetism, including not only the interaction and dynamics of charged point particles, but also properties of dielectric, conducting, and magnetic media. The course also covers special relativity, including its kinematics and particle-dynamics aspects, and electromagnetic radiation by relativistic particles.
Classical space-times from the S-matrix
NASA Astrophysics Data System (ADS)
Neill, Duff; Rothstein, Ira Z.
2013-12-01
We show that classical space-times can be derived directly from the S-matrix for a theory of massive particles coupled to a massless spin two particle. As an explicit example we derive the Schwarzschild space-time as a series in GN. At no point of the derivation is any use made of the Einstein-Hilbert action or the Einstein equations. The intermediate steps involve only on-shell S-matrix elements which are generated via BCFW recursion relations and unitarity sewing techniques. The notion of a space-time metric is only introduced at the end of the calculation, where it is extracted by matching the potential determined by the S-matrix to the geodesic motion of a test particle. Other static space-times such as Kerr follow in a similar manner. Furthermore, given that the procedure is action independent and depends only upon the choice of the representation of the little group, solutions to Yang-Mills (YM) theory can be generated in the same fashion. Moreover, the squaring relation between the YM and gravity three point functions shows that the seeds that generate solutions in the two theories are algebraically related. From a technical standpoint our methodology can also be utilized to calculate quantities relevant for the binary inspiral problem more efficiently than the more traditional Feynman diagram approach.
McCall, Patricia L; Parker, Karen F; MacDonald, John M
2008-09-01
After reaching their highest levels of the 20th century, homicide rates in the United States declined precipitously in the early 1990s. This study examines a number of factors that might have contributed to both the sharp increase and decline in homicide rates. We use a pooled cross-sectional time series model to assess the relationship between changes in structural conditions and the change in homicide rates over four decennial time points (1970, 1980, 1990, and 2000). We assess the extent to which structural covariates associated with social, economic and political conditions commonly used in homicide research (e.g., urban decay, poverty, and the weakening of family and social bonds) are related to the change in homicide rates. Along with these classic covariates, we incorporate some contemporary explanations (e.g., imprisonment rates and drug trafficking) that have been proposed to address the recent decline in urban homicide rates. Our results indicate that both classic and contemporary explanations are related to homicide trends over the last three decades of the 20th century. Specifically, changes in resource deprivation and in the relative size of the youth population are associated with changes in the homicide rate across these time points. Increased imprisonment is also significantly related to homicide changes. These findings lead us to conclude that efforts to understand the changing nature of homicide will require serious consideration, if not integration, of classic and contemporary explanations.
Time-Frequency Analyses of Tide-Gauge Sensor Data
Erol, Serdar
2011-01-01
The real-world phenomena observed by sensors are generally non-stationary in nature. The classical linear techniques for the analysis and modeling of natural time-series observations are inefficient and should be replaced by non-linear techniques, whose theoretical aspects and performance vary. Adopting the most appropriate technique and strategy is therefore essential in evaluating sensor data. In this study, two different time-series analysis approaches, namely least squares spectral analysis (LSSA) and wavelet analysis (continuous wavelet transform, cross wavelet transform and wavelet coherence algorithms as extensions of wavelet analysis), are applied to sea-level observations recorded by tide-gauge sensors, and the advantages and drawbacks of these methods are reviewed. The analyses were carried out using sea-level observations recorded at the Antalya-II and Erdek tide-gauge stations of the Turkish National Sea-Level Monitoring System. In the analyses, the useful information hidden in the noisy signals was detected, and the common features between the two sea-level time series were clarified. The tide-gauge records have data gaps in time because of issues such as instrumental shortcomings and power outages. Given the difficulties of time-frequency analysis of data with voids, the sea-level observations were preprocessed, and the missing parts were predicted using a neural network method prior to the analysis. In conclusion, the merits and limitations of the techniques in evaluating non-stationary observations from tide-gauge sensor records are documented, and an analysis strategy for sequential sensor observations is presented. PMID:22163829
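A key appeal of LSSA is that it handles unevenly spaced, gappy records directly. As a rough sketch of the idea (not the authors' implementation), SciPy's Lomb-Scargle periodogram recovers the dominant frequency of an irregularly sampled signal:

```python
import numpy as np
from scipy.signal import lombscargle

# Irregularly sampled sine, mimicking a gappy tide-gauge record.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 100.0, 400))   # uneven sample times
y = np.sin(2 * np.pi * 0.2 * t)             # true frequency: 0.2 cycles/unit
y = y - y.mean()                            # the method assumes a zero-mean series

w = np.linspace(0.1, 5.0, 1000)             # angular-frequency grid
power = lombscargle(t, y, w)
w_peak = w[np.argmax(power)]
print(w_peak)  # close to 2*pi*0.2 ~ 1.257
```

Unlike an FFT-based periodogram, no interpolation over the data gaps is required, although long voids still distort the spectral window.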
Moran, John L; Solomon, Patricia J
2013-05-24
Statistical process control (SPC), an initiative originating in industry, has recently been applied in health care and public health surveillance. SPC methods assume independent observations, and process autocorrelation has been associated with an increased false-alarm frequency. Monthly mean raw mortality (at hospital discharge) time series, 1995-2009, at the individual Intensive Care Unit (ICU) level, were generated from the Australia and New Zealand Intensive Care Society adult patient database. Evidence for (i) series autocorrelation and seasonality was demonstrated using (partial) autocorrelation ((P)ACF) function displays and classical series decomposition, and (ii) "in-control" status was sought using risk-adjusted (RA) exponentially weighted moving average (EWMA) control limits (3 sigma). Risk adjustment was achieved using a random coefficient (intercept as ICU site and slope as APACHE III score) logistic regression model, generating an expected mortality series. Time-series methods were applied to an exemplar complete ICU series (1995-(end)2009) via Box-Jenkins methodology: autoregressive moving average (ARMA) and (G)ARCH ((Generalised) Autoregressive Conditional Heteroscedasticity) models, the latter addressing volatility of the series variance. The overall data set, 1995-2009, consisted of 491324 records from 137 ICU sites; average raw mortality was 14.07%; average (SD) raw and expected mortalities ranged from 0.012 (0.113) and 0.013 (0.045) to 0.296 (0.457) and 0.278 (0.247), respectively. For the raw mortality series: 71 sites had continuous data for assessment up to or beyond lag 40, and 35% had autocorrelation through to lag 40; and of 36 sites with continuous data for ≥ 72 months, all demonstrated marked seasonality. Similar numbers and percentages were seen with the expected series. 
Out-of-control signalling was evident for the raw mortality series with respect to RA-EWMA control limits; a seasonal ARMA model, with GARCH effects, displayed white-noise residuals which were in-control with respect to EWMA control limits and one-step prediction error limits (3SE). The expected series was modelled with a multiplicative seasonal autoregressive model. The data generating process of monthly raw mortality series at the ICU level displayed autocorrelation, seasonality and volatility. False-positive signalling of the raw mortality series was evident with respect to RA-EWMA control limits. A time series approach using residual control charts resolved these issues.
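A minimal sketch of the EWMA control chart underlying the RA-EWMA limits discussed above. The smoothing constant, sigma, and 3-sigma width are illustrative choices; in the risk-adjusted version, `target` would be the model-expected mortality at each month rather than the constant zero of this toy example.

```python
import numpy as np

def ewma_chart(x, target, lam=0.2, sigma=1.0, L=3.0):
    """EWMA control chart sketch: smooth the series and flag points
    whose EWMA leaves the +/- L*sigma*sqrt(lam/(2-lam)) band around
    the target series."""
    x = np.asarray(x, dtype=float)
    target = np.asarray(target, dtype=float)
    z = np.empty_like(x)
    z[0] = lam * x[0] + (1.0 - lam) * target[0]
    for i in range(1, len(x)):
        z[i] = lam * x[i] + (1.0 - lam) * z[i - 1]
    halfwidth = L * sigma * np.sqrt(lam / (2.0 - lam))  # asymptotic limits
    flags = np.abs(z - target) > halfwidth
    return z, flags

# In-control segment followed by a sustained upward shift:
x = np.concatenate([np.zeros(30), np.full(20, 5.0)])
target = np.zeros(50)
z, flags = ewma_chart(x, target, lam=0.2, sigma=1.0)
print(flags[:30].any(), flags[-1])  # False True
```

The chart signals shortly after the shift, which is exactly the behaviour that autocorrelation and seasonality can corrupt: a correlated series drifts across the limits even when no real shift has occurred, motivating the residual-based charts in the abstract.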
Preparation, Characterization, and Selectivity Study of Mixed-Valence Sulfites
ERIC Educational Resources Information Center
Silva, Luciana A.; de Andrade, Jailson B.
2010-01-01
A project involving the synthesis of an isomorphic double sulfite series and characterization by classical inorganic chemical analyses is described. The project is performed by upper-level undergraduate students in the laboratory. This compound series is suitable for examining several chemical concepts and analytical techniques in inorganic…
NASA Astrophysics Data System (ADS)
Li, Yi; Xu, Yan Long
2018-05-01
When the dependence of a function on uncertain variables is non-monotonic over an interval, the interval of the function obtained by the classic interval extension based on the first-order Taylor series exhibits significant errors. In order to reduce these errors, an improved formulation of the interval extension with the first-order Taylor series is developed here, taking the monotonicity of the function into account. Two typical mathematical examples are given to illustrate this methodology. The vibration of a beam with lumped masses is studied to demonstrate the usefulness of this method in practical applications; the only necessary input data are the function value at the central point of the interval, the sensitivity, and the deviation of the function. The results of the above examples show that the interval of the function obtained by the method developed in this paper is more accurate than that obtained by the classic method.
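The idea can be sketched as follows: when the derivative keeps one sign over the interval, the endpoint values give tight bounds; otherwise one falls back on the classic first-order Taylor extension around the midpoint. This is an illustrative reconstruction, not the paper's exact formulation, and the same-sign endpoint check is only a heuristic for monotonicity.

```python
# Illustrative reconstruction (not the paper's exact method).
# The classic Taylor extension f(xc) -/+ |f'(xc)|*dx is a
# first-order approximation, not a guaranteed enclosure.

def interval_extension(f, df, lo, hi):
    xc = 0.5 * (lo + hi)            # interval midpoint
    dx = 0.5 * (hi - lo)            # interval radius (deviation)
    if df(lo) * df(hi) > 0.0:       # derivative keeps one sign: use endpoints
        a, b = f(lo), f(hi)
        return (min(a, b), max(a, b))
    fc, s = f(xc), abs(df(xc))      # classic first-order Taylor extension
    return (fc - s * dx, fc + s * dx)

# f(x) = x^2 is monotonic on [1, 2]: the exact range [1, 4] is returned,
# whereas the classic extension alone would give (2.25 - 3, 2.25 + 3).
print(interval_extension(lambda x: x * x, lambda x: 2.0 * x, 1.0, 2.0))
```

The monotonic branch is what removes the overestimation (and underestimation) of the midpoint-based formula whenever the sign condition holds.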
NASA Astrophysics Data System (ADS)
Rokita, Pawel
Classical portfolio diversification methods do not take account of any dependence between extreme returns (losses). Many researchers provide, however, empirical evidence for various assets that extreme losses co-occur. If the co-occurrence is frequent enough to be statistically significant, it may seriously influence portfolio risk. Such effects may result from a few different properties of financial time series, for instance: (1) extreme dependence in a (long-term) unconditional distribution, (2) extreme dependence in subsequent conditional distributions, (3) time-varying conditional covariance, (4) time-varying (long-term) unconditional covariance, (5) market contagion. Moreover, a mix of these properties may be present in return time series. Modeling each of them requires a different approach. It seems reasonable to investigate whether distinguishing between the properties is highly significant for portfolio risk measurement. If it is, identifying the effect responsible for high loss co-occurrence would be of great importance. If it is not, the best solution would be to select the easiest-to-apply model. This article concentrates on two of the aforementioned properties: extreme dependence (in a long-term unconditional distribution) and time-varying conditional covariance.
Scale-free avalanche dynamics in the stock market
NASA Astrophysics Data System (ADS)
Bartolozzi, M.; Leinweber, D. B.; Thomas, A. W.
2006-10-01
Self-organized criticality (SOC) has been claimed to play an important role in many natural and social systems. In the present work we empirically investigate the relevance of this theory to stock-market dynamics. Avalanches in stock-market indices are identified using a multi-scale wavelet-filtering analysis designed to remove Gaussian noise from the index. Here, new methods are developed to identify the optimal filtering parameters which maximize the noise removal. The filtered time series is reconstructed and compared with the original time series. A statistical analysis of both high-frequency Nasdaq E-mini Futures and daily Dow Jones data is performed. The results of this new analysis confirm earlier results revealing a robust power-law behaviour in the probability distribution function of the sizes, duration and laminar times between avalanches. This power-law behaviour holds the potential to be established as a stylized fact of stock market indices in general. While the memory process, implied by the power-law distribution of the laminar times, is not consistent with classical models for SOC, we note that a power-law distribution of the laminar times cannot be used to rule out self-organized critical behaviour.
An algebraic method for constructing stable and consistent autoregressive filters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harlim, John, E-mail: jharlim@psu.edu; Department of Meteorology, the Pennsylvania State University, University Park, PA 16802; Hong, Hoon, E-mail: hong@ncsu.edu
2015-02-15
In this paper, we introduce an algebraic method to construct stable and consistent univariate autoregressive (AR) models of low order for filtering and predicting nonlinear turbulent signals with memory depth. By stable, we refer to the classical stability condition for the AR model. By consistent, we refer to the classical consistency constraints of Adams–Bashforth methods of order two. One attractive feature of this algebraic method is that the model parameters can be obtained without directly knowing any training data set, as opposed to many standard, regression-based parameterization methods. It takes only long-time average statistics as inputs. The proposed method provides a discretization time step interval which guarantees the existence of a stable and consistent AR model and simultaneously produces the parameters for the AR models. In our numerical examples with two chaotic time series with different characteristics of decaying time scales, we find that the proposed AR models produce significantly more accurate short-term predictive skill and comparable filtering skill relative to the linear regression-based AR models. These encouraging results are robust across wide ranges of discretization times, observation times, and observation noise variances. Finally, we also find that the proposed model produces an improved short-time prediction relative to the linear regression-based AR models in forecasting a data set that characterizes the variability of the Madden–Julian Oscillation, a dominant tropical atmospheric wave pattern.
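The classical AR stability condition the abstract refers to can be checked directly from the characteristic polynomial: all roots must lie strictly inside the unit circle. This is a generic check, not the paper's algebraic construction; the function name is an assumption.

```python
import numpy as np

def ar_is_stable(coeffs):
    """Classical stationarity check for an AR(p) model
    x_t = a_1 x_{t-1} + ... + a_p x_{t-p} + e_t: all roots of the
    characteristic polynomial z^p - a_1 z^{p-1} - ... - a_p must lie
    strictly inside the unit circle."""
    a = np.asarray(coeffs, dtype=float)
    poly = np.concatenate(([1.0], -a))   # z^p - a_1 z^{p-1} - ... - a_p
    roots = np.roots(poly)
    return bool(np.all(np.abs(roots) < 1.0))
```

A unit-root model (a_1 = 1) fails the check, as expected.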
NASA Astrophysics Data System (ADS)
Masselot, Pierre; Chebana, Fateh; Bélanger, Diane; St-Hilaire, André; Abdous, Belkacem; Gosselin, Pierre; Ouarda, Taha B. M. J.
2018-07-01
In environmental epidemiology studies, health response data (e.g. hospitalization or mortality) are often noisy because of hospital organization and other social factors. The noise in the data can hide the true signal related to the exposure. The signal can be unveiled by performing a temporal aggregation on health data and then using it as the response in regression analysis. From aggregated series, a general methodology is introduced to account for the particularities of an aggregated response in a regression setting. This methodology can be used with usually applied regression models in weather-related health studies, such as generalized additive models (GAM) and distributed lag nonlinear models (DLNM). In particular, the residuals are modelled using an autoregressive-moving average (ARMA) model to account for the temporal dependence. The proposed methodology is illustrated by modelling the influence of temperature on cardiovascular mortality in Canada. A comparison with classical DLNMs is provided and several aggregation methods are compared. Results show that there is an increase in the fit quality when the response is aggregated, and that the estimated relationship focuses more on the outcome over several days than the classical DLNM. More precisely, among various investigated aggregation schemes, it was found that an aggregation with an asymmetric Epanechnikov kernel is more suited for studying the temperature-mortality relationship.
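The asymmetric Epanechnikov aggregation the study found best suited can be sketched as a one-sided kernel-weighted moving sum over the current and preceding days. This is an illustration under assumed conventions (window orientation, normalization, and names are assumptions), not the authors' implementation.

```python
import numpy as np

def epanechnikov_weights(width):
    """One-sided (asymmetric) Epanechnikov kernel over the current day
    and the `width - 1` preceding days, normalized to sum to one."""
    u = np.arange(width) / width               # 0 = today, increasing lag
    w = 0.75 * (1.0 - u ** 2)
    return w / w.sum()

def aggregate(y, width):
    """Kernel-weighted temporal aggregation of a daily health series;
    the first `width - 1` values are left undefined (NaN)."""
    w = epanechnikov_weights(width)
    out = np.full(len(y), np.nan)
    for t in range(width - 1, len(y)):
        out[t] = np.dot(w, y[t - width + 1:t + 1][::-1])  # w[0] weights today
    return out
```

The aggregated series would then replace the raw counts as the response in a GAM or DLNM, with ARMA residuals handling the induced temporal dependence.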
Universal dissymmetry and the origin of biomolecular chirality.
Mason, S F
1987-01-01
Handed systems are distributed over four general domains. These span the fundamental particles, the molecular enantiomers, the crystal enantiomorphs, and the spiral galaxies. The characterisation of the molecular enantiomers followed from the identification of the crystal enantiomorphs and revealed a chiral homogeneity in the biomolecules of the organic world. The origin of the homogeneity has been variously ascribed to a universal dissymmetric force, from Pasteur, or to a chance choice of the initial enantiomer perpetuated by the stereoselection of diastereomer production with recycling, from Fischer's "key and lock" hypothesis. The classical chiral fields identified by Curie require a particular time or location on the Earth's surface for a determinate molecular enantioselection, as do the weak charged current agencies of the non-classical weak interaction. The weak neutral current of the electroweak interaction provides a constant and uniform chiral agency which favours both the L-series of amino acids and polypeptides and the parent aldotriose of the D-series of sugars. The enantiomeric bias of the electroweak interaction is small at the molecular level: it may become significant either as a trigger-perturbation guiding the transition from a metastable autocatalytic racemic process to one of the two constituent enantiomeric reaction channels, or by cumulative amplification in a large chirally-homogeneous aggregate of enantiomer units.
NASA Astrophysics Data System (ADS)
Zhu, Yuxiang; Jiang, Jianmin; Huang, Changxing; Chen, Yongqin David; Zhang, Qiang
2018-04-01
This article, as part I, introduces three algorithms and applies them to the monthly streamflow and rainfall series of the Xijiang River, southern China. The three algorithms are (1) normalization of the probability distribution, (2) a scanning U test for change points in the correlation between two time series, and (3) a scanning F-test for change points in variances. The normalization algorithm adopts the quantile method to transform data from a non-normal into the normal probability distribution. The scanning U test and F-test have three common features: grafting the classical statistics onto the wavelet algorithm, adding corrections for independence to each statistical criterion at a given confidence level, and providing nearly objective, automatic detection across multiple time scales. In addition, coherency analyses between the two series are also carried out for changes in variance. The application results show that the changes of the monthly discharge are still controlled by natural precipitation variations in Xijiang's fluvial system. Human activities may have disturbed the ecological balance to a certain extent and over shorter spells, but have not so far violated the natural relationships of correlation and variance changes.
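The quantile method of normalization in algorithm (1) can be sketched by pushing empirical plotting positions through the standard normal inverse CDF. This is an illustrative sketch of the general technique, not the article's exact procedure; the plotting-position formula (rank - 0.5)/n and the function name are assumptions.

```python
import numpy as np
from statistics import NormalDist

def quantile_normalize(x):
    """Map a sample with an arbitrary (non-normal) distribution onto the
    standard normal via its empirical quantiles: each value's plotting
    position p_i = (rank_i - 0.5) / n goes through the normal inverse CDF."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    ranks = np.argsort(np.argsort(x)) + 1     # 1..n, ties broken by order
    p = (ranks - 0.5) / n
    inv = NormalDist().inv_cdf
    return np.array([inv(pi) for pi in p])
```

The transformed series is then suitable for classical tests that assume normality.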
NASA Astrophysics Data System (ADS)
Zhu, Liang; Wang, Youguo
2018-07-01
In this paper, a rumor diffusion model with uncertainty of human behavior under a spatio-temporal diffusion framework is established. Taking the physical significance of spatial diffusion into account, a diffusion threshold is set below which the rumor is not a trending topic and only spreads along determined physical connections. Heterogeneity of the degree distribution and the distance distribution is also considered in the theoretical model. The global existence and uniqueness of a classical solution are proved with a Lyapunov function, and an approximate classical solution in the form of an infinite series is constructed from a system of eigenfunctions. Simulations and numerical solutions on both Watts-Strogatz (WS) and Barabási-Albert (BA) networks display the variation of the density of infected connections along the spatial and temporal dimensions. The results show that the density of infected connections is dominated by network topology and the uncertainty of human behavior at the threshold time. As social capability increases, the rumor diffuses to the steady state at a higher speed, and the trends of diffusion size with uncertainty differ across the artificial networks.
Connecting the time domain community with the Virtual Astronomical Observatory
NASA Astrophysics Data System (ADS)
Graham, Matthew J.; Djorgovski, S. G.; Donalek, Ciro; Drake, Andrew J.; Mahabal, Ashish A.; Plante, Raymond L.; Kantor, Jeffrey; Good, John C.
2012-09-01
The time domain has been identified as one of the most important areas of astronomical research for the next decade. The Virtual Observatory is in the vanguard with dedicated tools and services that enable and facilitate the discovery, dissemination and analysis of time domain data. These range in scope from rapid notifications of time-critical astronomical transients to annotating long-term variables with the latest modelling results. In this paper, we review the prior art in these areas and focus on the capabilities that the VAO is bringing to bear in support of time domain science. In particular, we focus on the issues involved with the heterogeneous collections of (ancillary) data associated with astronomical transients, and the time series characterization and classification tools required by the next generation of sky surveys, such as LSST and SKA.
A unified nonlinear stochastic time series analysis for climate science
Moon, Woosok; Wettlaufer, John S.
2017-01-01
Earth’s orbit and axial tilt imprint a strong seasonal cycle on climatological data. Climate variability is typically viewed in terms of fluctuations in the seasonal cycle induced by higher frequency processes. We can interpret this as a competition between the orbitally enforced monthly stability and the fluctuations/noise induced by weather. Here we introduce a new time-series method that determines these contributions from monthly-averaged data. We find that the spatio-temporal distribution of the monthly stability and the magnitude of the noise reveal key fingerprints of several important climate phenomena, including the evolution of the Arctic sea ice cover, the El Niño Southern Oscillation (ENSO), the Atlantic Niño and the Indian Dipole Mode. In analogy with the classical destabilising influence of the ice-albedo feedback on summertime sea ice, we find that during some time interval of the season a destabilising process operates in all of these climate phenomena. The interaction between the destabilisation and the accumulation of noise, which we term the memory effect, underlies phase locking to the seasonal cycle and the statistical nature of seasonal predictability. PMID:28287128
Molenaar, Peter C M
2008-01-01
It is argued that general mathematical-statistical theorems imply that standard statistical analysis techniques of inter-individual variation are invalid to investigate developmental processes. Developmental processes have to be analyzed at the level of individual subjects, using time series data characterizing the patterns of intra-individual variation. It is shown that standard statistical techniques based on the analysis of inter-individual variation appear to be insensitive to the presence of arbitrarily large degrees of inter-individual heterogeneity in the population. An important class of nonlinear epigenetic models of neural growth is described which can explain the occurrence of such heterogeneity in brain structures and behavior. Links with models of developmental instability are discussed. A simulation study based on a chaotic growth model illustrates the invalidity of standard analysis of inter-individual variation, whereas time series analysis of intra-individual variation is able to recover the true state of affairs. (c) 2007 Wiley Periodicals, Inc.
Structural Time Series Model for El Niño Prediction
NASA Astrophysics Data System (ADS)
Petrova, Desislava; Koopman, Siem Jan; Ballester, Joan; Rodo, Xavier
2015-04-01
ENSO is a dominant feature of climate variability on inter-annual time scales, destabilizing weather patterns throughout the globe and having far-reaching socio-economic consequences. It not only leads to extensive rainfall and flooding in some regions of the world, and anomalous droughts in others, thus ruining local agriculture, but also substantially affects the marine ecosystems and the sustained exploitation of marine resources in particular coastal zones, especially the Pacific South American coast. As a result, forecasting of ENSO, and especially of the warm phase of the oscillation (El Niño/EN), has long been a subject of intense research and improvement. The present study explores a novel method for the prediction of the Niño 3.4 index. The advantageous statistical modeling approach of Structural Time Series Analysis has not previously been applied to this problem; we have therefore developed such a model using a State Space approach for the unobserved components of the time series. Its distinguishing feature is that observations consist of various components - level, seasonality, cycle, disturbance, and regression variables incorporated as explanatory covariates. These components are aimed at capturing the various modes of variability of the N3.4 time series. They are modeled separately, then combined in a single model for analysis and forecasting. Customary statistical ENSO prediction models essentially use SST, SLP and wind stress in the equatorial Pacific. We introduce new regression variables - subsurface ocean temperature in the western equatorial Pacific, motivated by recent (Ramesh and Murtugudde, 2012) and classical research (Jin, 1997), (Wyrtki, 1985), showing that subsurface processes and heat accumulation there are fundamental for initiation of an El Niño event; and a southern Pacific temperature-difference tracer, the Rossbell dipole, leading EN by about nine months (Ballester, 2011).
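The State Space treatment of unobserved components can be illustrated with the simplest structural model, the local level, filtered by a scalar Kalman recursion. This is a generic sketch, not the authors' full level + seasonality + cycle + regression specification; the variance parameters and the diffuse prior are assumptions.

```python
import numpy as np

def local_level_filter(y, sigma_eps2, sigma_eta2, a0=0.0, p0=1e7):
    """Kalman filter for the local level model
    y_t = mu_t + eps_t,  mu_{t+1} = mu_t + eta_t.
    Returns the filtered level and the one-step-ahead predictions."""
    n = len(y)
    a, p = a0, p0                 # state mean and variance (diffuse prior)
    level = np.empty(n)
    pred = np.empty(n)
    for t in range(n):
        pred[t] = a               # one-step-ahead forecast of y_t
        f = p + sigma_eps2        # prediction-error variance
        k = p / f                 # Kalman gain
        a = a + k * (y[t] - pred[t])
        p = p * (1.0 - k) + sigma_eta2
        level[t] = a
    return level, pred
```

Seasonal, cycle, and regression components extend the same recursion with a larger state vector.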
Radiation of a nonrelativistic particle during its finite motion in a central field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karnakov, B. M., E-mail: karnak@theor.mephi.ru; Korneev, Ph. A., E-mail: korneev@theor.mephi.ru; Popruzhenko, S. V.
The spectrum and expressions for the intensity of dipole radiation lines are obtained for a classical nonrelativistic charged particle that executes a finite aperiodic motion in an arbitrary central field along a non-closed trajectory. It is shown that, in this case of a conditionally periodic motion, the radiation spectrum consists of two series of equally spaced lines. It is pointed out that, according to the correspondence principle, the rise of two such series in the classical theory corresponds to the well-known selection rule |Δl| = 1 for dipole radiation in a central field in quantum theory, where l is the orbital angular momentum of the particle. The results obtained can be applied to the description of the radiation and the absorption of a classical collisionless electron plasma in nanoparticles irradiated by an intense laser field. As an example, the rate of collisionless absorption of electromagnetic wave energy in an equilibrium isotropic nanoplasma is calculated.
NASA Astrophysics Data System (ADS)
Kasatkina, T. I.; Dushkin, A. V.; Pavlov, V. A.; Shatovkin, R. R.
2018-03-01
Neural network methods have recently been applied in the development of information systems and software for forecasting dynamic series. They are more flexible than existing analogues and can take the nonlinearities of a series into account. In this paper, we propose a modified algorithm for predicting dynamic series which includes a method for training neural networks and an approach to describing and presenting input data, based on prediction with the multilayer perceptron method. To construct the neural network, the values of the series at its extremum points, and the time values corresponding to them, formed by the sliding window method, are used as input data. The proposed algorithm can act as an independent approach to predicting dynamic series, or as one part of a forecasting system. The efficiency of predicting the evolution of a dynamic series for a short-term one-step and a long-term multi-step forecast by the classical multilayer perceptron method and by the modified algorithm is compared using synthetic and real data. The result of this modification is the minimization of the iterative error that arises when previously predicted values are fed back as inputs to the neural network, as well as an increase in the accuracy of the network's iterative prediction.
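The extremum-point sliding-window construction of the network's inputs can be sketched as follows. This is one interpretation of the description above, not the authors' exact code; the strict-sign-change extremum definition and all names are assumptions.

```python
import numpy as np

def extrema(x):
    """Indices of interior local extrema (strict sign change of slope)."""
    d = np.diff(x)
    return [i for i in range(1, len(x) - 1) if d[i - 1] * d[i] < 0]

def extremum_windows(x, t, width):
    """Sliding windows over (value, time) pairs at extremum points;
    each window of `width` extrema is paired with the next extremum
    value as the prediction target."""
    pts = [(x[i], t[i]) for i in extrema(x)]
    X, y = [], []
    for k in range(len(pts) - width):
        window = pts[k:k + width]
        X.append([v for pair in window for v in pair])  # flatten pairs
        y.append(pts[k + width][0])                     # next extremum value
    return np.array(X), np.array(y)
```

The rows of `X` would be fed to a multilayer perceptron trained to predict `y`.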
ERIC Educational Resources Information Center
Gonch, William
2014-01-01
This policy brief is the sixth in a series of in-depth case studies exploring how top-performing charter schools have incorporated civic learning in their school curriculum and school culture. The brief reports on Ridgeview Classical Schools, an integrated K-12 public charter school in Fort Collins, Colorado. The school contains an elementary,…
ERIC Educational Resources Information Center
Schlingman, Wayne M.; Prather, Edward E.; Wallace, Colin S.; Brissenden, Gina; Rudolph, Alexander L.
2012-01-01
This paper is the first in a series of investigations into the data from the recent national study using the Light and Spectroscopy Concept Inventory (LSCI). In this paper, we use classical test theory to form a framework of results that will be used to evaluate individual item difficulties, item discriminations, and the overall reliability of the…
NASA Astrophysics Data System (ADS)
Mathevet, T.; Kuentz, A.; Gailhard, J.; Andreassian, V.
2013-12-01
Improving the understanding of the hydrological variability of mountain watersheds is a major scientific issue for both researchers and water resources managers such as Électricité de France (energy and hydropower company). The past and current context of climate variability enhances interest in this topic, since multi-purpose water resources management is highly sensitive to this variability. The Durance River watershed (14000 km2), situated in the French Alps, is a good example of the complexity of this issue. It is characterized by a variety of hydrological processes (from snowy to Mediterranean regimes) and a wide range of anthropogenic influences (hydropower, irrigation, flood control, tourism and water supply), mixing potential causes of changes in its hydrological regimes. As water-related stakes are numerous in this watershed, improving knowledge of the hydrological variability of the Durance River appears essential. In this presentation, we focus on a methodology we developed to build long-term historical hydrometeorological time series, based on an atmospheric reanalysis (20CR: 20th Century Reanalysis) and historical local observations. This methodology allowed us to generate precipitation, air temperature and streamflow time series at a daily time step for a sample of 22 watersheds over the 1883-2010 period. These long-term streamflow reconstructions were validated thanks to historical searches that brought to light ten long historical series of daily streamflows beginning in the early 20th century. The reconstructions have rather good statistical properties, with good correlation (greater than 0.8) and limited mean and variance bias (less than 5%). These long-term hydrometeorological time series then allowed us to characterize the past variability in terms of available water resources, droughts and hydrological regime.
These analyses help water resources managers to better know the range of hydrological variability, which is usually greatly underestimated with the classically available time series (less than 50 years).
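The reported validation statistics (correlation above 0.8, mean and variance bias below 5%) can be computed in a few lines; this is a sketch, and the function and variable names are assumptions.

```python
import numpy as np

def validate_reconstruction(obs, rec):
    """Correlation plus mean and variance bias (relative, in percent)
    between an observed streamflow series and its reconstruction."""
    obs = np.asarray(obs, dtype=float)
    rec = np.asarray(rec, dtype=float)
    r = np.corrcoef(obs, rec)[0, 1]
    mean_bias = 100.0 * (rec.mean() - obs.mean()) / obs.mean()
    var_bias = 100.0 * (rec.var() - obs.var()) / obs.var()
    return r, mean_bias, var_bias
```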
Analyses of GIMMS NDVI Time Series in Kogi State, Nigeria
NASA Astrophysics Data System (ADS)
Palka, Jessica; Wessollek, Christine; Karrasch, Pierre
2017-10-01
The value of remote sensing data is particularly evident where areal monitoring is needed to provide information on the development of the earth's surface. The use of temporally high-resolution time series data allows short-term changes to be detected. In Kogi State in Nigeria different vegetation types can be found. As the major population in this region lives in rural communities practising crop farming, the existing vegetation is slowly being altered. The expansion of agricultural land causes loss of natural vegetation, especially in the regions close to the rivers, which are suitable for crop production. Against this background, two questions covering different aspects of vegetation development in Kogi State can be addressed: the determination and evaluation of the general development of the vegetation in the study area (trend estimation), and analyses of the short-term behaviour of vegetation conditions, which can provide information about seasonal effects in vegetation development. For this purpose, the GIMMS-NDVI data set, provided by the NOAA, supplies the normalized difference vegetation index (NDVI) at a spatial resolution of approximately 8 km. Its temporal resolution of 15 days allows the analyses described above. For the presented analysis, data for the period 1981-2012 (31 years) were used. The implemented workflow mainly applies methods of time series analysis. The results show that in addition to the classical seasonal development, artefacts of different vegetation periods (several NDVI maxima) can be found in the data. The trend component of the time series shows a consistently positive development in the entire study area over the full investigation period of 31 years. However, the results also show that this development has not been continuous, and a simple linear modeling of the NDVI increase is only possible to a limited extent. For this reason, the trend modeling was extended by procedures for detecting structural breaks in the time series.
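A minimal form of structural-break detection in a trend is to choose the breakpoint that minimizes the total squared error of two separate line fits. This is a simple stand-in for the (unspecified) break-detection procedures mentioned above; the function name and the minimum segment length are assumptions.

```python
import numpy as np

def best_break(t, y, min_seg=8):
    """Single structural break in a linear trend: the breakpoint that
    minimizes the summed squared error of two independent line fits."""
    def sse(tt, yy):
        A = np.vstack([tt, np.ones_like(tt)]).T
        resid = yy - A @ np.linalg.lstsq(A, yy, rcond=None)[0]
        return float(resid @ resid)
    best, best_cost = None, np.inf
    for k in range(min_seg, len(t) - min_seg):
        cost = sse(t[:k], y[:k]) + sse(t[k:], y[k:])
        if cost < best_cost:
            best, best_cost = k, cost
    return best, best_cost
```

Multiple breaks are handled by applying the same idea recursively or with dynamic programming.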
NASA Astrophysics Data System (ADS)
Larrañeta, M.; Moreno-Tejera, S.; Lillo-Bravo, I.; Silva-Pérez, M. A.
2018-02-01
Many of the available solar radiation databases only provide global horizontal irradiance (GHI), while there is a growing need for extensive databases of direct normal irradiance (DNI), mainly for the development of concentrated solar power and concentrated photovoltaic technologies. In the present work, we propose a methodology for the generation of synthetic hourly DNI data from hourly average GHI values by dividing the irradiance into a deterministic and a stochastic component, intending to emulate the dynamics of solar radiation. The deterministic component is modeled through a simple classical model. The stochastic component is fitted to measured data in order to maintain the consistency of the synthetic data with the state of the sky, generating statistically significant DNI data with a cumulative frequency distribution very similar to that of the measured data. The adaptation and application of the model to the location of Seville shows significant improvements in terms of frequency distribution over the classical models. Applied to other locations with different climatological characteristics, the proposed methodology also yields better results than the classical models in terms of frequency distribution, reaching a reduction of 50% in the Finkelstein-Schafer (FS) and Kolmogorov-Smirnov test integral (KSI) statistics.
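The Finkelstein-Schafer statistic used for evaluation can be sketched as the mean absolute difference between two empirical CDFs evaluated on the pooled grid of values. This is an illustrative implementation of that common definition; the paper's exact grid and normalization may differ.

```python
import numpy as np

def fs_statistic(sample, reference):
    """Finkelstein-Schafer statistic: mean absolute difference between
    the empirical CDFs of a synthetic sample and a reference, evaluated
    on the pooled sorted values."""
    grid = np.sort(np.concatenate([sample, reference]))
    F_s = np.searchsorted(np.sort(sample), grid, side='right') / len(sample)
    F_r = np.searchsorted(np.sort(reference), grid, side='right') / len(reference)
    return float(np.mean(np.abs(F_s - F_r)))
```

The KSI statistic replaces the mean with an integral of the absolute CDF difference over the irradiance range.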
Classical subharmonic resonances in microwave ionization of lithium Rydberg atoms
NASA Astrophysics Data System (ADS)
Noel, Michael W.; Griffith, W. M.; Gallagher, T. F.
2000-12-01
We have studied the ionization of lithium Rydberg atoms by pulsed microwave fields in the regime in which the microwave frequency is equal to or a subharmonic of the classical Kepler frequency of the two-body Coulomb problem. We have observed a series of resonances where the atom is relatively stable against ionization. The resonances are similar to those seen previously in hydrogen, but with significant quantitative differences. We also present measurements of the distribution of states that remain bound after the microwave interaction for initial states near one of the classical subharmonic resonances.
3 CFR 8571 - Proclamation 8571 of October 1, 2010. National Arts and Humanities Month, 2010
Code of Federal Regulations, 2011 CFR
2011-01-01
... children with an education that inspires as it informs. Exposing our students to disciplines in music... host a White House Music Series, Dance Series, and Poetry Jam. We have been honored to bring students, workshops, and performers to “the People’s House;” to highlight jazz, country, Latin, and classical music...
NASA Astrophysics Data System (ADS)
Di Piazza, A.; Cordano, E.; Eccel, E.
2012-04-01
The issue of climate change detection is considered a major challenge. In particular, high temporal resolution climate change scenarios are required for evaluating the effects of climate change on agricultural management (crop suitability, yields, risk assessment, etc.), energy production and water management. In this work, a "Weather Generator" technique was used for downscaling climate change scenarios for temperature. An R package (RMAWGEN, Cordano and Eccel, 2011 - available on http://cran.r-project.org) was developed to generate synthetic daily weather conditions using the theory of vector autoregressive (VAR) models. The VAR model was chosen for its ability to maintain the temporal and spatial correlations among variables. In particular, observed time series of daily maximum and minimum temperature are transformed into "new" normally-distributed variable time series, which are used to calibrate the parameters of a VAR model by ordinary least squares. The implemented algorithm, applied to monthly mean climatic values downscaled from Global Climate Model predictions, can therefore generate several stochastic daily scenarios in which the statistical consistency among series is preserved. Further details are given in the RMAWGEN documentation. An application is presented here using a dataset of daily temperature time series recorded at 41 different sites in the Trentino region for the period 1958-2010. The temperature time series were pre-processed to fill missing values (by a site-specific calibrated Inverse Distance Weighting algorithm, corrected with elevation) and to remove inhomogeneities. Several climatic indices, useful for impact assessment applications, were taken into account, and their time trends within the time series were analyzed.
The indices go from the more classical ones, as annual mean temperatures, seasonal mean temperatures and their anomalies (from the reference period 1961-1990) to the climate change indices selected from the list recommended by the World Meteorological Organization Commission for Climatology (WMO-CCL) and the Research Programme on Climate Variability and Predictability (CLIVAR) project's Expert Team on Climate Change Detection, Monitoring and Indices (ETCCDMI). Each index was applied to both observed (and processed) data and to synthetic time series produced by the Weather Generator, over the thirty year reference period 1981-2010, in order to validate the procedure. Climate projections were statistically downscaled for a selection of sites for the two 30-year periods 2021-2050 and 2071-2099 of the European project "Ensembles" multi-model output (scenario A1B). The use of several climatic indices strengthens the trend analysis of both the generated synthetic series and future climate projections.
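A VAR model of the kind the weather generator builds on can be fitted by ordinary least squares. The minimal VAR(1) sketch below is generic, not the RMAWGEN implementation; all names are assumptions.

```python
import numpy as np

def fit_var1(X):
    """Ordinary-least-squares fit of a VAR(1) model
    X_t = A X_{t-1} + c + e_t for a (T, k) multi-site series.
    Returns the (k, k) coefficient matrix A and the intercept c."""
    Y, Z = X[1:], X[:-1]
    Z1 = np.hstack([Z, np.ones((len(Z), 1))])    # regressors plus intercept
    B, *_ = np.linalg.lstsq(Z1, Y, rcond=None)   # B stacks [A^T; c^T]
    return B[:-1].T, B[-1]
```

Synthetic scenarios are then generated by iterating the fitted recursion with resampled or Gaussian innovations.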
ERIC Educational Resources Information Center
Fincher, Cameron
This paper reviews what it calls a classic book in the field of higher education, 1952's "The Development and Scope of Higher Education," by Richard Hofstadter and C. DeWitt Hardy. The paper describes the main points of the book, which was written as part of a series, "Nature and Needs of Higher Education," prepared for the…
Lu, Wei-Zhen; Wang, Wen-Jian
2005-04-01
Monitoring and forecasting of air quality parameters are popular and important topics of atmospheric and environmental research today, due to the health impact caused by exposure to air pollutants in urban air. Accurate models for air pollutant prediction are needed because such models allow forecasting and diagnosing potential compliance or non-compliance in both the short and the long term. Artificial neural networks (ANN) are regarded as a reliable and cost-effective method for such tasks and have produced some promising results to date. Although ANN has attracted much attention from environmental researchers, its inherent drawbacks, e.g., local minima, over-fitting in training, poor generalization performance, and determination of the appropriate network architecture, impede its practical application. The support vector machine (SVM), a novel type of learning machine based on statistical learning theory, can be used for regression and time series prediction and has been reported to perform well, with some promising results. The work presented in this paper examines the feasibility of applying SVM to predict air pollutant levels in advancing time series, based on the monitored air pollutant database of the Hong Kong downtown area. At the same time, the functional characteristics of SVM are investigated. The experimental comparisons between the SVM model and the classical radial basis function (RBF) network demonstrate that the SVM is superior to the conventional RBF network in predicting air quality parameters over different time series, and of better generalization performance than the RBF model.
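Both SVM and RBF-network predictors operate on lag-embedded inputs: each training pair consists of p consecutive past values and the next value as target. The standard construction can be sketched as follows (an illustration, not the paper's setup; names are assumptions).

```python
import numpy as np

def lag_matrix(x, p):
    """Embed a univariate series into (X, y) regression pairs: each row
    of X holds p consecutive past values and y the next value, the usual
    setup for SVM or RBF-network time series predictors."""
    X = np.array([x[i:i + p] for i in range(len(x) - p)])
    y = np.asarray(x[p:])
    return X, y
```

Any regressor (an SVM with an RBF kernel, an RBF network, or a linear baseline) can then be trained on `(X, y)`.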
2013-01-01
Background Statistical process control (SPC), an initiative from the industrial sphere, has recently been applied in health care and public health surveillance. SPC methods assume independent observations, and process autocorrelation has been associated with an increase in false alarm frequency. Methods Monthly mean raw mortality (at hospital discharge) time series, 1995–2009, at the individual Intensive Care Unit (ICU) level, were generated from the Australia and New Zealand Intensive Care Society adult patient database. Evidence for (i) series autocorrelation and seasonality was demonstrated using (partial) autocorrelation ((P)ACF) function displays and classical series decomposition, and (ii) “in-control” status was sought using risk-adjusted (RA) exponentially weighted moving average (EWMA) control limits (3 sigma). Risk adjustment was achieved using a random coefficient (intercept as ICU site and slope as APACHE III score) logistic regression model, generating an expected mortality series. Application of time-series methods to an exemplar complete ICU series (1995-(end)2009) was via Box-Jenkins methodology: autoregressive moving average (ARMA) and (G)ARCH ((Generalised) Autoregressive Conditional Heteroscedasticity) models, the latter addressing volatility of the series variance. Results The overall data set, 1995-2009, consisted of 491324 records from 137 ICU sites; average raw mortality was 14.07%; average (SD) raw and expected mortalities ranged from 0.012 (0.113) and 0.013 (0.045) to 0.296 (0.457) and 0.278 (0.247) respectively. For the raw mortality series: 71 sites had continuous data for assessment up to or beyond lag 40 and 35% had autocorrelation through to lag 40; and of 36 sites with continuous data for ≥ 72 months, all demonstrated marked seasonality. Similar numbers and percentages were seen with the expected series. 
Out-of-control signalling was evident for the raw mortality series with respect to RA-EWMA control limits; a seasonal ARMA model, with GARCH effects, displayed white-noise residuals which were in-control with respect to EWMA control limits and one-step prediction error limits (3SE). The expected series was modelled with a multiplicative seasonal autoregressive model. Conclusions The data generating process of monthly raw mortality series at the ICU level displayed autocorrelation, seasonality and volatility. False-positive signalling of the raw mortality series was evident with respect to RA-EWMA control limits. A time series approach using residual control charts resolved these issues. PMID:23705957
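The EWMA control limits referred to above can be sketched with the standard time-varying 3-sigma formulation (the smoothing constant, target mean, sigma, and the toy mortality rates below are our illustrative assumptions, not values from the study):

```python
import numpy as np

def ewma_chart(x, mu0, sigma, lam=0.2, L=3.0):
    """EWMA control chart: returns the smoothed statistic and the
    time-varying control limits mu0 +/- L*sigma*sqrt(lam/(2-lam)*(1-(1-lam)^(2t)))."""
    x = np.asarray(x, dtype=float)
    z = np.empty(len(x))
    prev = mu0                      # chart is started at the target mean
    for t, xt in enumerate(x):
        prev = lam * xt + (1.0 - lam) * prev
        z[t] = prev
    t = np.arange(1, len(x) + 1)
    half_width = L * sigma * np.sqrt(lam / (2.0 - lam) * (1.0 - (1.0 - lam) ** (2 * t)))
    return z, mu0 - half_width, mu0 + half_width

# Hypothetical monthly mortality rates, with a jump in the final month.
rates = np.array([0.14, 0.15, 0.13, 0.14, 0.16, 0.30])
z, lcl, ucl = ewma_chart(rates, mu0=0.14, sigma=0.01)
signals = z > ucl   # out-of-control points flagged by the chart
```

In the study the limits are risk-adjusted, i.e. computed against an expected mortality series rather than a fixed mu0; the sketch shows only the generic chart mechanics.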
The Hurst Phenomenon in Error Estimates Related to Atmospheric Turbulence
NASA Astrophysics Data System (ADS)
Dias, Nelson Luís; Crivellaro, Bianca Luhm; Chamecki, Marcelo
2018-05-01
The Hurst phenomenon is a well-known feature of long-range persistence first observed in hydrological and geophysical time series by E. Hurst in the 1950s. It has also been found in several cases in turbulence time series measured in the wind tunnel, the atmosphere, and in rivers. Here, we conduct a systematic investigation of the value of the Hurst coefficient H in atmospheric surface-layer data, and its impact on the estimation of random errors. We show that usually H > 0.5, which implies the non-existence (in the statistical sense) of the integral time scale. Since the integral time scale appears in the Lumley-Panofsky equation for the estimation of random errors, this has important practical consequences. We estimated H in two principal ways: (1) with an extension of the recently proposed filtering method to estimate the random error (H_p), and (2) with the classical rescaled range introduced by Hurst (H_R). Other estimators were tried but were found to be less able to capture the statistical behaviour of the large scales of turbulence. Using data from three micrometeorological campaigns, we found that both first- and second-order turbulence statistics display the Hurst phenomenon. Usually, H_R is larger than H_p for the same dataset, raising the possibility that one, or even both, of these estimators may be biased. For the relative error, we found that the errors estimated with our approach, which we call the relaxed filtering method and which takes the occurrence of the Hurst phenomenon into account, are larger than both the filtering method and the classical Lumley-Panofsky estimates. Finally, we found no apparent relationship between H and the Obukhov stability parameter. The relative errors, however, do show stability dependence, particularly for the error of the kinematic momentum flux in unstable conditions and that of the kinematic sensible heat flux in stable conditions.
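The classical rescaled-range estimator H_R mentioned above can be sketched as follows (the window sizes, doubling scheme, and white-noise check are illustrative choices, not the paper's):

```python
import numpy as np

def hurst_rs(x, min_window=8):
    """Classical rescaled-range (R/S) estimate of the Hurst coefficient H:
    average R/S over non-overlapping windows of increasing size, then fit
    the slope of log(R/S) versus log(window size)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes, rs = [], []
    w = min_window
    while w <= n // 2:
        vals = []
        for start in range(0, n - w + 1, w):
            seg = x[start:start + w]
            dev = np.cumsum(seg - seg.mean())   # mean-adjusted cumulative sum
            r = dev.max() - dev.min()           # range of the cumulative deviations
            s = seg.std()                       # window standard deviation
            if s > 0:
                vals.append(r / s)
        sizes.append(w)
        rs.append(np.mean(vals))
        w *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return slope

rng = np.random.default_rng(0)
h_white = hurst_rs(rng.standard_normal(4096))  # white noise: H near 0.5
```

For finite samples the raw R/S estimate is known to be biased slightly above 0.5 for uncorrelated noise, which is one reason the paper compares it against the filtering-based estimator H_p.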
On analytic modeling of lunar perturbations of artificial satellites of the earth
NASA Astrophysics Data System (ADS)
Lane, M. T.
1989-06-01
Two different procedures for analytically modeling the effects of the moon's direct gravitational force on artificial earth satellites are discussed from theoretical and numerical viewpoints. One is developed using classical series expansions of inclination and eccentricity for both the satellite and the moon, and the other employs the method of averaging. Both solutions are seen to have advantages, but it is shown that while the former is more accurate in special situations, the latter is quicker and more practical for the general orbit determination problem where observed data are used to correct the orbit in near real time.
Molenaar, Peter C M
2007-03-01
I am in general agreement with Toomela's (Integrative Psychological and Behavioral Science doi:10.1007/s12124-007-9004-0, 2007) plea for an alternative psychological methodology inspired by his description of the German-Austrian orientation. I will argue, however, that this alternative methodology has to be based on the classical ergodic theorems, using state-of-the-art statistical time series analysis of intra-individual variation as its main tool. Some more specific points made by Toomela will be criticized, while for others a more extreme elaboration along the lines indicated by Toomela is proposed.
Masselot, Pierre; Chebana, Fateh; Bélanger, Diane; St-Hilaire, André; Abdous, Belkacem; Gosselin, Pierre; Ouarda, Taha B M J
2018-07-01
In environmental epidemiology studies, health response data (e.g. hospitalization or mortality) are often noisy because of hospital organization and other social factors. The noise in the data can hide the true signal related to the exposure. The signal can be unveiled by performing a temporal aggregation on the health data and then using it as the response in regression analysis. A general methodology is introduced for accounting for the particularities of an aggregated response in a regression setting. It can be used with the regression models commonly applied in weather-related health studies, such as generalized additive models (GAM) and distributed lag nonlinear models (DLNM). In particular, the residuals are modelled with an autoregressive moving average (ARMA) model to account for temporal dependence. The proposed methodology is illustrated by modelling the influence of temperature on cardiovascular mortality in Canada. A comparison with classical DLNMs is provided and several aggregation methods are compared. Results show an increase in fit quality when the response is aggregated, and that the estimated relationship focuses more on the outcome over several days than the classical DLNM does. More precisely, among the various aggregation schemes investigated, an aggregation with an asymmetric Epanechnikov kernel was found to be best suited for studying the temperature-mortality relationship.
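A one-sided (past-only) Epanechnikov-kernel aggregation of the kind favoured in the study might look like this; the span, the weight normalisation, and the function name are our assumptions, and the paper's exact asymmetric kernel may differ:

```python
import numpy as np

def epanechnikov_aggregate(y, span):
    """One-sided Epanechnikov-kernel aggregation of a daily health series:
    each aggregated value is a weighted mean of the current day and the
    previous span-1 days, with weights proportional to 0.75*(1 - u^2)."""
    y = np.asarray(y, dtype=float)
    u = np.arange(span) / span          # u = 0 for today, growing toward the oldest lag
    w = 0.75 * (1.0 - u ** 2)
    w /= w.sum()                        # normalise so the weights average, not sum
    out = np.full(len(y), np.nan)       # first span-1 values are undefined
    for t in range(span - 1, len(y)):
        out[t] = np.dot(w, y[t::-1][:span])   # y[t], y[t-1], ..., most recent first
    return out

agg = epanechnikov_aggregate(np.ones(10), span=4)   # a constant series stays constant
```

The aggregated series would then replace the raw counts as the response in a GAM or DLNM, with ARMA errors handling the serial dependence the aggregation introduces.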
DISCOVERY OF A PAIR OF CLASSICAL CEPHEIDS IN AN INVISIBLE CLUSTER BEYOND THE GALACTIC BULGE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dékány, I.; Palma, T.; Minniti, D.
2015-01-20
We report the discovery of a pair of extremely reddened classical Cepheid variable stars located in the Galactic plane behind the bulge, using near-infrared (NIR) time-series photometry from the VISTA Variables in the Vía Láctea Survey. This is the first time that such objects have been found on the far side of the Galactic plane. The Cepheids have almost identical periods, apparent brightnesses, and colors. From the NIR Leavitt law, we determine their distances with ∼1.5% precision and ∼8% accuracy. We find that they have the same total extinction of A(V)≃32 mag, are located at the same heliocentric distance of 〈d〉=11.4±0.9 kpc, and lie less than 1 pc from the true Galactic plane. Their similar periods indicate that the Cepheids are also coeval, with an age of ∼48±3 Myr according to theoretical models. They are separated by an angular distance of only 18.″3, corresponding to a projected separation of ∼1 pc. Their position coincides with the expected location of the Far 3 kpc Arm behind the bulge. Such a tight pair of similar classical Cepheids indicates the presence of an underlying young open cluster that is both hidden behind heavy extinction and disguised by the dense stellar field of the bulge. All our attempts to directly detect this “invisible cluster” have failed, and deeper observations are needed.
Measuring the self-similarity exponent in Lévy stable processes of financial time series
NASA Astrophysics Data System (ADS)
Fernández-Martínez, M.; Sánchez-Granero, M. A.; Trinidad Segovia, J. E.
2013-11-01
Geometric method-based procedures, which will be called GM algorithms herein, were introduced in [M.A. Sánchez Granero, J.E. Trinidad Segovia, J. García Pérez, Some comments on Hurst exponent and the long memory processes on capital markets, Phys. A 387 (2008) 5543-5551], to efficiently calculate the self-similarity exponent of a time series. In that paper, the authors showed empirically that these algorithms, based on a geometrical approach, are more accurate than the classical algorithms, especially with short length time series. The authors checked that GM algorithms are good when working with (fractional) Brownian motions. Moreover, in [J.E. Trinidad Segovia, M. Fernández-Martínez, M.A. Sánchez-Granero, A note on geometric method-based procedures to calculate the Hurst exponent, Phys. A 391 (2012) 2209-2214], a mathematical background for the validity of such procedures to estimate the self-similarity index of any random process with stationary and self-affine increments was provided. In particular, they proved theoretically that GM algorithms are also valid to explore long-memory in (fractional) Lévy stable motions. In this paper, we prove empirically by Monte Carlo simulation that GM algorithms are able to calculate accurately the self-similarity index in Lévy stable motions and find empirical evidence that they are more precise than the absolute value exponent (denoted by AVE onwards) and the multifractal detrended fluctuation analysis (MF-DFA) algorithms, especially with a short length time series. We also compare them with the generalized Hurst exponent (GHE) algorithm and conclude that both GM2 and GHE algorithms are the most accurate to study financial series. In addition to that, we provide empirical evidence, based on the accuracy of GM algorithms to estimate the self-similarity index in Lévy motions, that the evolution of the stocks of some international market indices, such as U.S. 
Small Cap and Nasdaq 100, cannot be modelled by means of a Brownian motion.
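The generalized Hurst exponent (GHE) algorithm compared above estimates H(q) from the scaling of the q-th absolute moment of the increments, E|x(t+tau)-x(t)|^q ~ tau^(qH(q)). A minimal sketch (the lag range and the Brownian-motion check are illustrative choices, not those of the paper):

```python
import numpy as np

def ghe(x, q=1, taus=range(1, 20)):
    """Generalized Hurst exponent H(q): slope of log E|x(t+tau)-x(t)|^q
    versus log tau, divided by q."""
    x = np.asarray(x, dtype=float)
    logs_k, logs_t = [], []
    for tau in taus:
        inc = np.abs(x[tau:] - x[:-tau]) ** q   # q-th absolute moment of increments
        logs_k.append(np.log(inc.mean()))
        logs_t.append(np.log(tau))
    slope, _ = np.polyfit(logs_t, logs_k, 1)
    return slope / q

rng = np.random.default_rng(1)
bm = np.cumsum(rng.standard_normal(8192))  # ordinary Brownian motion: H(q) ~ 0.5
h = ghe(bm, q=2)
```

For a self-similar process H(q) is constant in q; departures of H(q) from 0.5, or a q-dependence, are what distinguish Lévy-stable and multifractal behaviour from Brownian motion.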
Jones, Luke A; Allely, Clare S; Wearden, John H
2011-02-01
A series of experiments demonstrated that a 5-s train of clicks that have been shown in previous studies to increase the subjective duration of tones they precede (in a manner consistent with "speeding up" timing processes) could also have an effect on information-processing rate. Experiments used studies of simple and choice reaction time (Experiment 1), or mental arithmetic (Experiment 2). In general, preceding trials by clicks made response times significantly shorter than those for trials without clicks, but white noise had no effects on response times. Experiments 3 and 4 investigated the effects of clicks on performance on memory tasks, using variants of two classic experiments of cognitive psychology: Sperling's (1960) iconic memory task and Loftus, Johnson, and Shimamura's (1985) iconic masking task. In both experiments participants were able to recall or recognize significantly more information from stimuli preceded by clicks than those preceded by silence.
Effect of postmortem sampling technique on the clinical significance of autopsy blood cultures.
Hove, M; Pencil, S D
1998-02-01
Our objective was to investigate the value of postmortem autopsy blood cultures performed with an iodine-subclavian technique relative to the classical method of atrial heat searing and to antemortem blood cultures. The study consisted of a prospective autopsy series, with each case serving as its own control relative to subsequent testing, and a retrospective survey of patients coming to autopsy who had both autopsy blood cultures and premortem blood cultures. A busy academic autopsy service (600 cases per year) at University of Texas Medical Branch Hospitals, Galveston, Texas, served as the setting for this work. The incidence of non-clinically relevant (false-positive) culture results was compared using different methods for collecting blood samples in a prospective series of 38 adult autopsy specimens. One hundred eleven adult autopsy specimens in which both postmortem and antemortem blood cultures were obtained were studied retrospectively. For both studies, positive culture results were scored as either clinically relevant or false positive based on analysis of the autopsy findings and the clinical summary. The rate of false-positive culture results obtained by the iodine-subclavian technique from blood drawn soon after death was statistically significantly lower (13%) than that obtained with the classical method of drawing blood through the atrium after heat searing at the time of the autopsy (34%) in the same set of autopsy subjects. When autopsy results were compared with subjects' antemortem blood culture results, there was no significant difference in the rate of non-clinically relevant culture results in a paired retrospective series of antemortem and postmortem blood cultures using the iodine-subclavian postmortem method (11.7% vs 13.5%). The results indicate that autopsy blood cultures obtained using the iodine-subclavian technique have reliability equivalent to that of antemortem blood cultures.
Kulikov, Sergey N; Lisovskaya, Svetlana A; Zelenikhin, Pavel V; Bezrodnykh, Evgeniya A; Shakirova, Diana R; Blagodatskikh, Inesa V; Tikhonov, Vladimir E
2014-03-03
A series of oligochitosans (short-chain chitosans), prepared by acidic hydrolysis of chitosan and characterized by molecular weight, polydispersity, and degree of deacetylation, was tested for anticandidal activity. This study has demonstrated that oligochitosans show high fungistatic activity (MIC 8-512 μg/ml) against Candida species and clinical isolates of Candida albicans that are resistant to a series of classic antibiotics. Flow cytometry analysis showed that oligochitosan possessed high fungicidal activity as well. For the first time, it was shown that even sub-MIC oligochitosan concentrations suppressed the formation of C. albicans hyphal structures, caused severe cell wall alterations, and altered internal cell structure. These results indicate that oligochitosan should be considered as a possible alternative/additive to known anti-yeast agents in pharmaceutical compositions.
What does the structure of its visibility graph tell us about the nature of the time series?
NASA Astrophysics Data System (ADS)
Franke, Jasper G.; Donner, Reik V.
2017-04-01
Visibility graphs are a recently introduced method to construct complex network representations from univariate time series in order to study their dynamical characteristics [1]. In recent years, this approach has been successfully applied to a considerable variety of geoscientific research questions and data sets, including non-trivial temporal patterns in complex earthquake catalogs [2] and time-reversibility in climate time series [3]. It has been shown that several characteristic features of the networks constructed in this way differ between stochastic and deterministic (possibly chaotic) processes, which is, however, relatively hard to exploit in real-world applications. In this study, we propose two new measures related to the network complexity of visibility graphs constructed from time series: one is a special type of network entropy [4], the other a recently introduced measure of the heterogeneity of the network's degree distribution [5]. For paradigmatic model systems exhibiting bifurcation sequences between regular and chaotic dynamics, both properties clearly trace the transitions between the two types of regimes and exhibit marked quantitative differences for regular and chaotic dynamics. Moreover, for dynamical systems with a small amount of additive noise, the considered properties change gradually prior to the bifurcation point. This finding appears closely related to the loss of stability of the current state, which is known to lead to a critical slowing down as the transition point is approached. In this spirit, both visibility graph characteristics provide alternative tracers of dynamical early-warning signals consistent with classical indicators.
Our results demonstrate that measures of visibility graph complexity (i) provide a potentially useful means of tracing changes in the dynamical patterns encoded in a univariate time series that originate from increasing autocorrelation and (ii) allow one to systematically distinguish regular from deterministic-chaotic dynamics. We demonstrate the application of our method to different model systems as well as selected paleoclimate time series from the North Atlantic region. Notably, visibility graph based methods are particularly suited for studying the latter type of geoscientific data, since they do not impose intrinsic restrictions or assumptions on the nature of the time series under investigation in terms of noise process, linearity and sampling homogeneity. [1] Lacasa, Lucas, et al. "From time series to complex networks: The visibility graph." Proceedings of the National Academy of Sciences 105.13 (2008): 4972-4975. [2] Telesca, Luciano, and Michele Lovallo. "Analysis of seismic sequences by using the method of visibility graph." EPL (Europhysics Letters) 97.5 (2012): 50002. [3] Donges, Jonathan F., Reik V. Donner, and Jürgen Kurths. "Testing time series irreversibility using complex network methods." EPL (Europhysics Letters) 102.1 (2013): 10004. [4] Small, Michael. "Complex networks from time series: capturing dynamics." 2013 IEEE International Symposium on Circuits and Systems (ISCAS2013), Beijing (2013): 2509-2512. [5] Jacob, Rinku, K.P. Harikrishnan, Ranjeev Misra, and G. Ambika. "Measure for degree heterogeneity in complex networks and its application to recurrence network analysis." arXiv preprint 1605.06607 (2016).
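The natural visibility graph construction of [1] can be sketched directly, together with a degree-based complexity measure; the Shannon entropy of the degree distribution shown here is a generic stand-in in the spirit of [4] and [5], not necessarily the exact measure the study uses:

```python
import numpy as np

def visibility_degrees(y):
    """Natural visibility graph of a time series: nodes a < b are linked
    iff every intermediate sample lies strictly below the straight line
    joining (a, y[a]) and (b, y[b]). Returns the degree of each node."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    deg = np.zeros(n, dtype=int)
    for a in range(n):
        for b in range(a + 1, n):
            ts = np.arange(a + 1, b)
            line = y[b] + (y[a] - y[b]) * (b - ts) / (b - a)
            if ts.size == 0 or np.all(y[ts] < line):   # adjacent nodes always see each other
                deg[a] += 1
                deg[b] += 1
    return deg

def degree_entropy(deg):
    """Shannon entropy of the empirical degree distribution."""
    _, counts = np.unique(deg, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

deg = visibility_degrees([1.0, 2.0, 1.5, 3.0])   # degrees: [1, 3, 2, 2]
```

Tracking such a complexity measure in a sliding window over a time series is how it can serve as an early-warning indicator of an approaching transition.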
Edgar Buchanan: dentist and popular character actor in movies and television.
Christen, A G; Christen, J A
2001-07-01
Edgar Buchanan, D.D.S., pursued a diverse mix of careers during his lifetime: while he practiced dentistry, he also worked as a popular film and television actor. Although he eventually relinquished a full-time dental practice for acting, he continued his commitment to clinical dentistry. Acting in 100 films and four television series across a 35-year span (1939-1975), he personified a scheming yet well-meaning rustic who specialized in "cracker-barrel" philosophy. Typically, he was cast in classic western movies as a bewhiskered character actor. In several films he played a frontier dentist, always portrayed in a sympathetic and authentic manner. His unique gravelly voice, subtle facial expressions, folksy mannerisms and portly build enabled Buchanan to step into a wide variety of character roles. His most memorable television role was in the classic situation comedy "Petticoat Junction" (1963-1970), where he played Uncle Joe, a folksy, lovable freeloader whose many entertaining schemes created chaos.
The missing link between neurobiology and behavior in Aplysia conditioning.
Arvanitogiannis, A
1997-01-01
Over the past decades, a wealth of findings has led to a substantial change in the assumed complexity of classical conditioning. The combined evidence indicates that temporal pairing is neither necessary nor sufficient for the formation of an associative connection. At the same time, studies of model invertebrate nervous systems have allowed us to ask a series of questions about the molecular basis of associative conditioning. The discovery of a pairing-sensitive mechanism in the gill-withdrawal circuitry of Aplysia is regarded as the hallmark of the reductionist approach. This review outlines the insights gathered from behavioral and neurobiological studies. Furthermore, the conceptual frameworks guiding research at the 'what' and 'how' levels of analysis are compared and contrasted. I argue that a rich cognitive view of conditioning has emerged at the 'what' level, whereas the traditional notion of temporal pairing still drives research at the 'how' level. A complete account of classical conditioning has to await the resolving of this discordance.
Cavalié, Olivier; Vernotte, François
2016-04-01
The Allan variance was introduced 50 years ago for analyzing the stability of frequency standards. In addition to its metrological interest, it may also be considered an estimator of the large trends of the power spectral density (PSD) of frequency deviation. For instance, the Allan variance is able to discriminate different types of noise characterized by different power laws in the PSD. The Allan variance has also been used in fields other than time and frequency metrology: for more than 20 years, it has been applied in accelerometry, geophysics, geodesy, astrophysics, and even finance. However, it seems that up to now it has been applied exclusively to time series analysis. We propose here to use the Allan variance on spatial data. Interferometric synthetic aperture radar (InSAR) is used in geophysics to image ground displacements in space [over the synthetic aperture radar (SAR) image spatial coverage] and in time, thanks to the regular SAR image acquisitions by dedicated satellites. The main limitation of the technique is the atmospheric disturbances that affect the radar signal while traveling from the sensor to the ground and back. In this paper, we propose to use the Allan variance for analyzing spatial data from InSAR measurements. The Allan variance was computed in XY mode as well as in radial mode to detect different types of behavior at different space scales, in the same way as the different types of noise versus integration time in the classical time and frequency application. We found that the radial Allan variance is the more appropriate way to obtain an estimator insensitive to the spatial axis, and we applied it to SAR data acquired over eastern Turkey for the period 2003-2011. The spatial Allan variance allowed us to characterize noise features classically found in InSAR, such as phase decorrelation, which produces white noise, and atmospheric delays, which behave like a random-walk signal.
We finally applied the spatial Allan variance to an InSAR time series to detect when the geophysical signal, here the ground motion, emerges from the noise.
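A minimal sketch of the (non-overlapping) Allan variance underlying both the temporal and the spatial analysis; the white-noise check illustrates the classical 1/m scaling, and the sample sizes are arbitrary:

```python
import numpy as np

def allan_variance(x, m):
    """Non-overlapping Allan variance at averaging window m: half the
    mean squared difference of adjacent window means."""
    x = np.asarray(x, dtype=float)
    k = len(x) // m
    means = x[:k * m].reshape(k, m).mean(axis=1)   # adjacent window averages
    d = np.diff(means)
    return 0.5 * np.mean(d ** 2)

rng = np.random.default_rng(2)
white = rng.standard_normal(100000)
av1 = allan_variance(white, 1)
av4 = allan_variance(white, 4)   # white noise: Allan variance scales as 1/m
```

Plotting the Allan variance against m on log-log axes and reading the slope is what discriminates white noise (slope -1) from random-walk-like behaviour (slope +1), whether m indexes integration time or, as in the paper, a spatial scale.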
Faris, Callum; van der Eerden, Paul; Vuyk, Hade
2015-01-01
This study clarifies the pedicle geometry and vascular supply of a midline forehead flap for nasal reconstruction. It reports on the vascular reliability of this flap and its ability to reduce hair transposition to the nose, a major complicating factor of previous forehead flap designs. The objective was to compare the vascular reliability of 3 different pedicle designs of the forehead flap in nasal reconstruction (classic paramedian, glabellar paramedian, and central artery flap design) and to evaluate hair transposition rates and aesthetic results. We performed a retrospective analysis of patient data and outcomes retrieved from computer files generated at the time of surgery, supplemented by data from the patient medical records and photographic documentation, from a tertiary referral nasal reconstructive practice within a secondary-care hospital setting. The study population included all consecutive patients over a 19-year period who underwent primary forehead flap repair of nasal defects, with more than 3 months of postoperative follow-up and photographic documentation. Three sequential forehead flap patterns were used (classic paramedian flap, glabella flap, and central artery flap) for nasal reconstruction over the study duration. Data collected included patient characteristics, method of repair, complications, functional outcome, and patient satisfaction score. For cosmetic outcome, photographic documentation was scored by a medical juror. No forehead flap had vascular compromise in the first stage. Partial flap necrosis was reported in subsequent stages in 4 patients (1%), with no statistical difference in the rate of vascular compromise between the 3 flap designs. Hair transposition to the nose was lower with the central artery forehead flap (7%) than with the classic paramedian (23%) and glabellar paramedian (13%) flaps (P < .05). Photographic evaluation in 227 patients showed that brow position (98%) and color match (83%) were good in the majority of the patients.
In this series, the central artery forehead flap was as reliable (in terms of vascularity) as the glabellar and classic paramedian forehead flaps. Its use resulted in a statistically significant reduction in transfer of hair to the nose in our series.
Contraction Options and Optimal Multiple-Stopping in Spectrally Negative Lévy Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamazaki, Kazutoshi, E-mail: kyamazak@kansai-u.ac.jp
This paper studies the optimal multiple-stopping problem arising in the context of the timing option to withdraw from a project in stages. The profits are driven by a general spectrally negative Lévy process. This allows the model to incorporate sudden declines of the project values, greatly generalizing the classical geometric Brownian motion model. We solve the one-stage case as well as the extension to the multiple-stage case. The optimal stopping times are of threshold type and the value function admits an expression in terms of the scale function. A series of numerical experiments are conducted to verify the optimality and to evaluate the efficiency of the algorithm.
Naro, Daniel; Rummel, Christian; Schindler, Kaspar; Andrzejak, Ralph G
2014-09-01
The rank-based nonlinear predictability score was recently introduced as a test for determinism in point processes. We here adapt this measure to time series sampled from time-continuous flows. We use noisy Lorenz signals to compare this approach against a classical amplitude-based nonlinear prediction error. Both measures show an almost identical robustness against Gaussian white noise. In contrast, when the amplitude distribution of the noise has a narrower central peak and heavier tails than the normal distribution, the rank-based nonlinear predictability score outperforms the amplitude-based nonlinear prediction error. For this type of noise, the nonlinear predictability score has a higher sensitivity for deterministic structure in noisy signals. It also yields a higher statistical power in a surrogate test of the null hypothesis of linear stochastic correlated signals. We show the high relevance of this improved performance in an application to electroencephalographic (EEG) recordings from epilepsy patients. Here the nonlinear predictability score again appears of higher sensitivity to nonrandomness. Importantly, it yields an improved contrast between signals recorded from brain areas where the first ictal EEG signal changes were detected (focal EEG signals) versus signals recorded from brain areas that were not involved at seizure onset (nonfocal EEG signals).
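The amplitude-based nonlinear prediction error used as the baseline above can be sketched with a delay embedding and a nearest-neighbour predictor; the embedding parameters and the test signals (a sine instead of the Lorenz system) are illustrative choices, not those of the study:

```python
import numpy as np

def nonlinear_prediction_error(x, dim=3, lag=1, horizon=1, k=4):
    """Amplitude-based nonlinear prediction error: embed the series in
    delay coordinates, predict each point `horizon` steps ahead as the mean
    of the futures of its k nearest embedded neighbours (self excluded),
    and return the RMS prediction error normalised by the series std."""
    x = np.asarray(x, dtype=float)
    m = len(x) - (dim - 1) * lag - horizon
    emb = np.column_stack([x[i * lag:i * lag + m] for i in range(dim)])
    fut = x[(dim - 1) * lag + horizon:(dim - 1) * lag + horizon + m]
    err = []
    for i in range(m):
        d = np.linalg.norm(emb - emb[i], axis=1)
        d[i] = np.inf                       # exclude the self-match
        nbrs = np.argsort(d)[:k]
        err.append(fut[i] - fut[nbrs].mean())
    return np.sqrt(np.mean(np.square(err))) / x.std()

t = np.arange(2000)
deterministic = np.sin(0.07 * t)            # predictable: error well below 1
rng = np.random.default_rng(3)
noise = rng.standard_normal(2000)           # unpredictable: error near 1
e_det = nonlinear_prediction_error(deterministic)
e_noise = nonlinear_prediction_error(noise)
```

The rank-based score studied in the paper replaces these amplitude differences with ranks, which is what makes it robust to heavy-tailed noise.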
NASA Astrophysics Data System (ADS)
Wacławczyk, Marta; Ma, Yong-Feng; Kopeć, Jacek M.; Malinowski, Szymon P.
2017-11-01
In this paper we propose two approaches to estimating the turbulent kinetic energy (TKE) dissipation rate, based on the zero-crossing method of Sreenivasan et al. (1983). The original formulation requires a fine resolution of the measured signal, down to the smallest dissipative scales. However, due to finite sampling frequency, as well as measurement errors, velocity time series obtained from airborne experiments are characterized by the presence of effective spectral cutoffs. In contrast to the original formulation, the new approaches are suitable for signals originating from airborne experiments. Their suitability is tested using measurement data obtained during the Physics of Stratocumulus Top (POST) airborne research campaign as well as synthetic turbulence data. They appear useful and complementary to existing methods. We show that the number-of-crossings-based approaches respond differently to errors due to finite sampling and finite averaging than the classical power spectral method does. Hence, their application to short signals and small sampling frequencies is particularly interesting, as it can increase the robustness of TKE dissipation rate retrieval.
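The zero-crossing idea underlying both approaches starts from counting sign changes of the fluctuating velocity signal; for a Gaussian signal, Rice's formula links this crossing frequency to the ratio of the derivative and signal variances, and hence to the small scales that set the dissipation rate. A minimal sketch of the counting step only (the sine check verifies the counter, not the dissipation-rate retrieval itself):

```python
import numpy as np

def crossing_frequency(u, dt=1.0):
    """Zero-crossing frequency of a fluctuating signal (mean removed):
    number of sign changes per unit time."""
    u = np.asarray(u, dtype=float) - np.mean(u)
    crossings = np.count_nonzero(np.signbit(u[1:]) != np.signbit(u[:-1]))
    return crossings / (dt * (len(u) - 1))

# Check on a sine: a 0.5 Hz sine crosses zero twice per cycle,
# i.e. once per second.
dt = 0.01
t = np.arange(0, 100, dt)
u = np.sin(2 * np.pi * 0.5 * t)
nc = crossing_frequency(u, dt)
```

The paper's contribution is what to do when the signal is low-pass filtered by finite sampling, so that a fraction of the true fine-scale crossings is never observed.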
False Dichotomies and Health Policy Research Designs: Randomized Trials Are Not Always the Answer.
Soumerai, Stephen B; Ceccarelli, Rachel; Koppel, Ross
2017-02-01
Some medical scientists argue that only data from randomized controlled trials (RCTs) are trustworthy. They claim data from natural experiments and administrative data sets are always spurious and cannot be used to evaluate health policies and other population-wide phenomena in the real world. While many acknowledge biases caused by poor study designs, in this article we argue that several valid designs using administrative data can produce strong findings, particularly the interrupted time series (ITS) design. Many policy studies neither permit nor require an RCT for cause-and-effect inference. Framing our arguments using Campbell and Stanley's classic research design monograph, we show that several "quasi-experimental" designs, especially interrupted time series (ITS), can estimate valid effects (or non-effects) of health interventions and policies as diverse as public insurance coverage, speed limits, hospital safety programs, drug abuse regulation and withdrawal of drugs from the market. We further note the recent rapid uptake of ITS and argue for expanded training in quasi-experimental designs in medical and graduate schools and in post-doctoral curricula.
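The ITS design argued for here is usually estimated by segmented regression with level-change and slope-change terms; a minimal sketch on synthetic policy data (the variable names and the toy series are ours, not from the article):

```python
import numpy as np

def its_fit(y, t0):
    """Segmented (interrupted time series) regression: baseline intercept
    and slope, plus a level change and a slope change at intervention
    time t0. Returns (intercept, slope, level_change, slope_change)."""
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y), dtype=float)
    post = (t >= t0).astype(float)
    X = np.column_stack([np.ones_like(t), t, post, post * (t - t0)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Synthetic policy series: baseline trend of 0.5 units/month, then an
# abrupt level drop of 4 units at month 12 with the trend unchanged.
t = np.arange(24)
y = 10 + 0.5 * t - 4.0 * (t >= 12)
b0, b1, dlevel, dslope = its_fit(y, t0=12)
```

In practice the residuals of such a fit are checked for autocorrelation and seasonality (the issues raised in the ICU mortality abstract above) before the level- and slope-change estimates are interpreted.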
NASA Astrophysics Data System (ADS)
Sprott, J. C.
2003-04-01
In 1984 the University of Wisconsin began an outreach program called The Wonders of Physics. The program initially consisted of a series of public lectures intended to generate interest in physics through a series of fast-paced demonstrations suitable for a diverse audience. The demonstrations are organized around the areas of classical physics, including motion, heat, sound, electricity, magnetism, and light. The presentations include music, costumes, skits, and surprise appearances of special guests. The presentation has been given about 160 times on the Madison campus, nearly always to capacity crowds totaling over 50,000. Each year the program is videotaped and distributed to individuals, schools, and cable TV stations. In 1990, a Lecture Kit was produced and is widely distributed. A traveling version of the show was developed in 1988 and has been given about 800 times to a total audience of approximately 100,000, mostly school children in nineteen states and provinces. The program is funded by the Office of Fusion Energy Sciences of the Department of Energy and by donations from those for whom the presentations are made as well as a few corporations and benefactors.
A unified nonlinear stochastic time series analysis for climate science
NASA Astrophysics Data System (ADS)
Moon, Woosok; Wettlaufer, John
2017-04-01
Earth's orbit and axial tilt imprint a strong seasonal cycle on climatological data. Climate variability is typically viewed in terms of fluctuations in the seasonal cycle induced by higher frequency processes. We can interpret this as a competition between the orbitally enforced monthly stability and the fluctuations/noise induced by weather. Here we introduce a new time-series method that determines these contributions from monthly-averaged data. We find that the spatio-temporal distribution of the monthly stability and the magnitude of the noise reveal key fingerprints of several important climate phenomena, including the evolution of the Arctic sea ice cover, the El Niño Southern Oscillation (ENSO), the Atlantic Niño and the Indian Ocean Dipole. In analogy with the classical destabilising influence of the ice-albedo feedback on summertime sea ice, we find that during some period of the season a destabilising process operates in all of these climate phenomena. The interaction between the destabilisation and the accumulation of noise, which we term the memory effect, underlies phase locking to the seasonal cycle and the statistical nature of seasonal predictability.

Temporal turnover and the maintenance of diversity in ecological assemblages
Magurran, Anne E.; Henderson, Peter A.
2010-01-01
Temporal variation in species abundances occurs in all ecological communities. Here, we explore the role that this temporal turnover plays in maintaining assemblage diversity. We investigate a three-decade time series of estuarine fishes and show that the abundances of the individual species fluctuate asynchronously around their mean levels. We then use a time-series modelling approach to examine the consequences of different patterns of turnover, by asking how the correlation between the abundance of a species in a given year and its abundance in the previous year influences the structure of the overall assemblage. Classical diversity measures that ignore species identities reveal that the observed assemblage structure will persist under all but the most extreme conditions. However, metrics that track species identities indicate a narrower set of turnover scenarios under which the predicted assemblage resembles the natural one. Our study suggests that species diversity metrics are insensitive to change and that measures that track species ranks may provide better early warning that an assemblage is being perturbed. It also highlights the need to incorporate temporal turnover in investigations of assemblage structure and function. PMID:20980310
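The year-to-year turnover process the abstract describes — each species' abundance correlated with its value the previous year, fluctuating asynchronously around a fixed mean — can be sketched as independent first-order autoregressive (AR(1)) processes. This is purely illustrative and not the authors' fitted model; all parameters are mine:

```python
import numpy as np

rng = np.random.default_rng(1)
years, n_species, phi = 30, 5, 0.4    # phi: year-to-year autocorrelation
means = np.array([100.0, 60.0, 30.0, 15.0, 5.0])

# Each species fluctuates independently (asynchronously) around its mean level
x = np.zeros((years, n_species))
for t in range(1, years):
    x[t] = phi * x[t - 1] + rng.normal(0, 1, n_species)
abund = means * np.exp(0.3 * x)       # lognormal fluctuations around the means

# Identity-blind diversity (Shannon index) vs. identity-tracking (rank changes)
p = abund / abund.sum(axis=1, keepdims=True)
shannon = -(p * np.log(p)).sum(axis=1)
rank_flips = sum(
    not np.array_equal(np.argsort(abund[t]), np.argsort(abund[t - 1]))
    for t in range(1, years)
)
print(shannon.std(), rank_flips)
```

The Shannon index stays nearly constant while species ranks reshuffle — the contrast between identity-blind and identity-tracking metrics that the paper exploits.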
Toward a qualitative understanding of binge-watching behaviors: A focus group approach.
Flayelle, Maèva; Maurage, Pierre; Billieux, Joël
2017-12-01
Background and aims Binge-watching (i.e., watching multiple episodes of the same TV series in a row) now constitutes a widespread phenomenon. However, little is known about the psychological factors underlying this behavior, as reflected by the paucity of available studies, most merely focusing on its potential harmfulness by applying the classic criteria used for other addictive disorders without exploring the uniqueness of binge-watching. This study thus aimed to take the opposite approach as a first step toward a genuine understanding of binge-watching behaviors through a qualitative analysis of the phenomenological characteristics of TV series watching. Methods A focus group of regular TV series viewers (N = 7) was established to explore a wide range of aspects related to TV series watching (e.g., motives, viewing practices, and related behaviors). Results A content analysis identified binge-watching features across three dimensions: TV series watching motivations, TV series watching engagement, and structural characteristics of TV shows. Most participants acknowledged that TV series watching can become addictive, but they all agreed they had trouble recognizing themselves as truly being an "addict." Although obvious connections could be established with substance addiction criteria and symptoms, such parallelism appeared to be insufficient, as several distinctive facets emerged (e.g., positive view, transient overinvolvement, context dependency, and low everyday life impact). Discussion and conclusion The research should go beyond the classic biomedical and psychological models of addictive behaviors to account for binge-watching in order to explore its specificities and generate the first steps toward an adequate theoretical rationale for these emerging problematic behaviors.
Quintela-del-Río, Alejandro; Francisco-Fernández, Mario
2011-02-01
The study of extreme values and prediction of ozone data is an important topic of research when dealing with environmental problems. Classical extreme value theory is usually used in air-pollution studies. It consists of fitting a parametric generalised extreme value (GEV) distribution to a data set of extreme values, and using the estimated distribution to compute return levels and other quantities of interest. Here, we propose to estimate these values using nonparametric functional data methods. Functional data analysis is a relatively new statistical methodology that generally deals with data consisting of curves or multi-dimensional variables. In this paper, we use this technique, jointly with nonparametric curve estimation, to provide alternatives to the usual parametric statistical tools. The nonparametric estimators are applied to real samples of maximum ozone values obtained from several monitoring stations belonging to the Automatic Urban and Rural Network (AURN) in the UK. The results show that nonparametric estimators work satisfactorily, outperforming classical parametric estimators. Functional data analysis is also used to predict stratospheric ozone concentrations. We show an application, using the data set of mean monthly ozone concentrations in Arosa, Switzerland, and the results are compared with those obtained by classical time series (ARIMA) analysis. Copyright © 2010 Elsevier Ltd. All rights reserved.
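The classical parametric route the abstract contrasts with nonparametric methods — fit a GEV to block maxima, then read return levels off the fitted quantile function — can be sketched with SciPy. The data here are synthetic and the parameter values are mine; note that SciPy's shape parameter `c` is the negative of the usual GEV shape ξ:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(2)
# Synthetic "annual maximum ozone" sample (true parameters chosen for illustration)
true_c, loc, scale = -0.1, 80.0, 10.0
maxima = genextreme.rvs(true_c, loc=loc, scale=scale, size=200, random_state=rng)

# Parametric route: fit the GEV, then the T-year return level is the
# (1 - 1/T) quantile of the fitted distribution
c_hat, loc_hat, scale_hat = genextreme.fit(maxima)
rl_20 = genextreme.ppf(1 - 1 / 20, c_hat, loc=loc_hat, scale=scale_hat)

# Nonparametric alternative (in the spirit of the paper): empirical quantile
rl_20_np = np.quantile(maxima, 1 - 1 / 20)
print(rl_20, rl_20_np)
```

With ample data the two agree; the parametric estimate extrapolates beyond the sample, which is where the nonparametric alternatives studied in the paper become interesting.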
Fractal analyses reveal independent complexity and predictability of gait
Dierick, Frédéric; Nivard, Anne-Laure
2017-01-01
Locomotion is a natural task that has been assessed for decades and used as a proxy to highlight impairments of various origins. So far, most studies adopted classical linear analyses of spatio-temporal gait parameters. Here, we use more advanced, yet no less practical, non-linear techniques to analyse gait time series of healthy subjects. We aimed at finding more sensitive indexes related to spatio-temporal gait parameters than those previously used, with the hope to better identify abnormal locomotion. We analysed large-scale stride interval time series and mean step width in 34 participants while altering walking direction (forward vs. backward walking) and with or without galvanic vestibular stimulation. The Hurst exponent α and the Minkowski fractal dimension D were computed and interpreted as indexes expressing predictability and complexity of stride interval time series, respectively. These holistic indexes can easily be interpreted in the framework of optimal movement complexity. We show that α and D accurately capture stride interval changes as a function of the experimental condition. Walking forward exhibited maximal complexity (D) and hence, adaptability. In contrast, walking backward and/or stimulation of the vestibular system decreased D. Furthermore, walking backward increased predictability (α) through a more stereotyped pattern of the stride interval and galvanic vestibular stimulation reduced predictability. The present study demonstrates the complementary power of the Hurst exponent and the fractal dimension to improve walking classification. Our developments may have immediate applications in rehabilitation, diagnosis, and classification procedures. PMID:29182659
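A Hurst-type exponent α like the one above is commonly estimated by detrended fluctuation analysis (DFA): integrate the series, detrend it in windows of increasing size, and read α off the log-log slope of fluctuation versus scale. A minimal order-1 DFA sketch (not the authors' exact pipeline; scales and series lengths are my choices):

```python
import numpy as np

def dfa_exponent(x, scales=(8, 16, 32, 64, 128)):
    """Order-1 detrended fluctuation analysis; the slope is a Hurst-like alpha."""
    y = np.cumsum(x - np.mean(x))                  # integrated profile
    flucts = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[: n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        f2 = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)           # local linear detrend
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope

rng = np.random.default_rng(3)
white = rng.normal(size=4096)          # uncorrelated series: alpha near 0.5
persistent = np.cumsum(white)          # random walk: alpha near 1.5
print(dfa_exponent(white), dfa_exponent(persistent))
```

Values of α above 0.5 indicate the kind of persistent, predictable stride-interval structure the abstract refers to.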
NASA Astrophysics Data System (ADS)
de Sousa, J. Ricardo; de Albuquerque, Douglas F.
1997-02-01
By using two approaches of renormalization group (RG), mean field RG (MFRG) and effective field RG (EFRG), we study the critical properties of the simple cubic lattice classical XY and classical Heisenberg models. The methods are illustrated by employing their simplest approximation, in which small clusters with one (N′ = 1) and two (N = 2) spins are used. The thermal and magnetic critical exponents, Yt and Yh, and the critical parameter Kc are numerically obtained and are compared with more accurate methods (Monte Carlo, series expansion and ε-expansion). The results presented in this work are in excellent agreement with these sophisticated methods. We have also shown that the exponent Yh does not depend on the symmetry n of the Hamiltonian, hence the criterion of universality for this exponent is a function of the dimension d only.
NASA Astrophysics Data System (ADS)
Valencio, Arthur; Grebogi, Celso; Baptista, Murilo S.
2017-10-01
The presence of undesirable dominating signals in geophysical experimental data is a challenge in many subfields. One remarkable example is surface gravimetry, where frequencies from Earth tides correspond to time-series fluctuations up to a thousand times larger than the phenomena of major interest, such as hydrological gravity effects or co-seismic gravity changes. This work discusses general methods for the removal of unwanted dominating signals by applying them to 8 long-period gravity time-series of the International Geodynamics and Earth Tides Service, equivalent to the acquisition from 8 instruments in 5 locations representative of the network. We compare three different conceptual approaches for tide removal: frequency filtering, physical modelling, and data-based modelling. Each approach reveals a different limitation to be considered depending on the intended application. Vestiges of tides remain in the residues for the modelling procedures, whereas the signal was distorted in different ways by the filtering and data-based procedures. The linear techniques employed were power spectral density, spectrogram, cross-correlation, and classical harmonics decomposition, while the system dynamics was analysed by state-space reconstruction and estimation of the largest Lyapunov exponent. Although the tides could not be completely eliminated, they were sufficiently reduced to allow observation of geophysical events of interest above the 10 nm s-2 level, exemplified by a hydrology-related event of 60 nm s-2. The implementations adopted for each conceptual approach are general, so that their principles could be applied to other kinds of data affected by undesired signals composed mainly of periodic or quasi-periodic components.
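Of the three approaches compared, frequency filtering is the simplest to sketch: a band-stop filter centred on the principal tidal line, applied to a synthetic record whose amplitudes mimic the numbers in the abstract (a 1000 nm s-2 tide masking a 60 nm s-2 hydrology-like step). This is an illustration under my own assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1 / 3600.0                        # one sample per hour, in Hz
t = np.arange(0, 90 * 24) * 3600.0     # 90 days, in seconds
f_tide = 1 / (12.42 * 3600)            # M2 (principal lunar semidiurnal) frequency

rng = np.random.default_rng(4)
tide = 1000.0 * np.sin(2 * np.pi * f_tide * t)      # dominant tide (nm s^-2)
event = np.where(t > t[len(t) // 2], 60.0, 0.0)     # small hydrology-like step
signal = tide + event + rng.normal(0, 5.0, len(t))

# Frequency-filtering approach: zero-phase band-stop around the tidal line
band = [0.8 * f_tide / (fs / 2), 1.2 * f_tide / (fs / 2)]
b, a = butter(4, band, "bandstop")
residue = filtfilt(b, a, signal)

# The step, far below the stop band, survives the filtering
step_estimate = residue[-200:].mean() - residue[:200].mean()
print(step_estimate)
```

The sketch also hints at the limitation the paper reports for filtering: any geophysical signal with power near the tidal band would be distorted along with the tide.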
Off-diagonal series expansion for quantum partition functions
NASA Astrophysics Data System (ADS)
Hen, Itay
2018-05-01
We derive an integral-free thermodynamic perturbation series expansion for quantum partition functions which enables an analytical term-by-term calculation of the series. The expansion is carried out around the partition function of the classical component of the Hamiltonian with the expansion parameter being the strength of the off-diagonal, or quantum, portion. To demonstrate the usefulness of the technique we analytically compute to third order the partition functions of the 1D Ising model with longitudinal and transverse fields, and the quantum 1D Heisenberg model.
Introduction to the Neutrosophic Quantum Theory
NASA Astrophysics Data System (ADS)
Smarandache, Florentin
2014-10-01
Neutrosophic Quantum Theory (NQT) is the study of the principle that certain physical quantities can assume neutrosophic values, instead of discrete values as in quantum theory. These quantities are thus neutrosophically quantized. A neutrosophic value (neutrosophic amount) is expressed by a set (mostly an interval) that approximates (or includes) a discrete value. An oscillator can lose or gain energy by some neutrosophic amount (meaning neither continuously nor discretely, but as a series of integral sets: S, 2S, 3S, ..., where S is a set). In the most general form, one has an ensemble of sets of sets, i.e. R1S1, R2S2, R3S3, ..., where all Rn and Sn are sets that may vary as a function of time and of other parameters. Several such sets may be equal, or may be reduced to points, or may be empty. (The multiplication of two sets A and B is classically defined as AB = {ab : a ∈ A and b ∈ B}, and similarly a number n times a set A is defined as nA = {na : a ∈ A}.) The unit of neutrosophic energy is Hν, where H is a set (in particular an interval) that includes the Planck constant h, and ν is the frequency. Therefore, an oscillator could change its energy by a neutrosophic number of quanta: Hν, 2Hν, 3Hν, etc. For example, when H is an interval [h1, h2], with 0 ≤ h1 ≤ h2, that contains the Planck constant h, one has [h1ν, h2ν], [2h1ν, 2h2ν], [3h1ν, 3h2ν], ... as the series of intervals of energy change of the oscillator. The most general form of the units of neutrosophic energy is Hnνn, where all Hn and νn are sets that, as above, may vary as a function of time and of other oscillator and environment parameters. Neutrosophic quantum theory combines classical mechanics and quantum mechanics.
Fucci, D; Petrosino, L; Banks, M; Zaums, K; Wilcox, C
1996-08-01
The purpose of the present study was to assess the effect of preference for three different types of music on magnitude estimation scaling behavior in young adults. Three groups of college students, 10 who liked rock music, 10 who liked big band music, and 10 who liked classical music were tested. Subjects were instructed to assign numerical values to a random series of nine suprathreshold intensity levels of 10-sec samples of rock music, big band music, and classical music. Analysis indicated that subjects who liked rock music scaled that stimulus differently from those subjects who liked big band and classical music. Subjects who liked big band music scaled that stimulus differently from those subjects who liked rock music and classical music. All subjects scaled classical music similarly regardless of their musical preferences. Results are discussed in reference to the literature concerned with personality and preference as well as spectrographic analyses of the three different types of music used in this study.
Cumulants, free cumulants and half-shuffles
Ebrahimi-Fard, Kurusch; Patras, Frédéric
2015-01-01
Free cumulants were introduced as the proper analogue of classical cumulants in the theory of free probability. There is a mix of similarities and differences, when one considers the two families of cumulants. Whereas the combinatorics of classical cumulants is well expressed in terms of set partitions, that of free cumulants is described and often introduced in terms of non-crossing set partitions. The formal series approach to classical and free cumulants also largely differs. The purpose of this study is to put forward a different approach to these phenomena. Namely, we show that cumulants, whether classical or free, can be understood in terms of the algebra and combinatorics underlying commutative as well as non-commutative (half-)shuffles and (half-) unshuffles. As a corollary, cumulants and free cumulants can be characterized through linear fixed point equations. We study the exponential solutions of these linear fixed point equations, which display well the commutative, respectively non-commutative, character of classical and free cumulants. PMID:27547078
Gambini, R; Pullin, J
2000-12-18
We consider general relativity with a cosmological constant as a perturbative expansion around a completely solvable diffeomorphism invariant field theory. This theory is the λ → ∞ limit of general relativity. This allows an explicit perturbative computational setup in which the quantum states of the theory and the classical observables can be explicitly computed. An unexpected relationship arises at a quantum level between the discrete spectrum of the volume operator and the allowed values of the cosmological constant.
Effect of non-classical current paths in networks of 1-dimensional wires
NASA Astrophysics Data System (ADS)
Echternach, P. M.; Mikhalchuk, A. G.; Bozler, H. M.; Gershenson, M. E.; Bogdanov, A. L.; Nilsson, B.
1996-04-01
At low temperatures, the quantum corrections to the resistance due to weak localization and electron-electron interaction are affected by the shape and topology of samples. We observed these effects in the resistance of 2D percolation networks made from 1D wires and in a series of long 1D wires with regularly spaced side branches. Branches outside the classical current path strongly reduce the quantum corrections to the resistance and these reductions become a measure of the quantum lengths.
The classical equation of state of fully ionized plasmas
NASA Astrophysics Data System (ADS)
Eisa, Dalia Ahmed
2011-03-01
The aim of this paper is to calculate the analytical form of the equation of state up to the third virial coefficient for a classical system interacting via an effective potential of fully ionized plasmas. The excess osmotic pressure is represented in the form of convergent series expansions in terms of the plasma parameter μ_ab = e_a e_b χ / (DkT), where χ² is the square of the inverse Debye radius. We consider only the thermal-equilibrium plasma.
Maximum Entropy Calculations on a Discrete Probability Space
1986-01-01
constraints acting besides normalization. Statement 3: "The aim of this paper is to show that the die experiment just spoken of has solutions by classical ...analysis." Statement 4: "We shall solve this problem in a purely classical way, without the need for recourse to any exotic estimator, such as ME." Note... The Maximum Entropy Principle: In a remarkable series of papers beginning in 1957, E. T. Jaynes (1957) began a revolution in inductive
Bias and robustness of uncertainty components estimates in transient climate projections
NASA Astrophysics Data System (ADS)
Hingray, Benoit; Blanchet, Juliette; Jean-Philippe, Vidal
2016-04-01
A critical issue in climate change studies is the estimation of uncertainties in projections along with the contribution of the different uncertainty sources, including scenario uncertainty, the different components of model uncertainty and internal variability. Quantifying the different uncertainty sources actually raises different problems. For instance and for the sake of simplicity, an estimate of model uncertainty is classically obtained from the empirical variance of the climate responses obtained for the different modeling chains. These estimates are however biased. Another difficulty arises from the limited number of members that are classically available for most modeling chains. In this case, the climate response of one given chain and the effect of its internal variability may actually be difficult if not impossible to separate. Estimates of the scenario uncertainty, model uncertainty and internal variability components are thus unlikely to be robust. We explore the importance of the bias and the robustness of the estimates for two classical Analysis of Variance (ANOVA) approaches: a Single Time approach (STANOVA), based only on the data available for the considered projection lead time, and a time-series-based approach (QEANOVA), which assumes quasi-ergodicity of climate outputs over the whole available climate simulation period (Hingray and Saïd, 2014). We explore both issues for a simple but classical configuration where uncertainties in projections are composed of two single sources: model uncertainty and internal climate variability. The bias in model uncertainty estimates is explored from theoretical expressions of unbiased estimators developed for both ANOVA approaches. The robustness of uncertainty estimates is explored for multiple synthetic ensembles of time series projections generated with Monte Carlo simulations.
For both ANOVA approaches, when the empirical variance of climate responses is used to estimate model uncertainty, the bias is always positive. It can be especially high with STANOVA. In the most critical configurations, when the number of members available for each modeling chain is small (< 3) and when internal variability explains most of total uncertainty variance (75% or more), the overestimation is higher than 100% of the true model uncertainty variance. The bias can be considerably reduced with a time series ANOVA approach, owing to the multiple time steps accounted for. The longer the transient time period used for the analysis, the larger the reduction. When a quasi-ergodic ANOVA approach is applied to decadal data for the whole 1980-2100 period, the bias is reduced by a factor of 2.5 to 20 depending on the projection lead time. In all cases, the bias is likely to be non-negligible for a large number of climate impact studies, resulting in a likely large overestimation of the contribution of model uncertainty to total variance. For both approaches, the robustness of all uncertainty estimates is higher when more members are available, when internal variability is smaller and/or when the response-to-uncertainty ratio is higher. QEANOVA estimates are much more robust than STANOVA ones: QEANOVA simulated confidence intervals are roughly 3 to 5 times smaller than STANOVA ones. Except for STANOVA when fewer than 3 members are available, the robustness is rather high for total uncertainty and moderate for internal variability estimates. For model uncertainty or response-to-uncertainty ratio estimates, the robustness is conversely low for QEANOVA and very low for STANOVA. In the most critical configurations (small number of members, large internal variability), large over- or underestimation of uncertainty components is thus very likely.
To propose relevant uncertainty analyses and avoid misleading interpretations, estimates of uncertainty components should be therefore bias corrected and ideally come with estimates of their robustness. This work is part of the COMPLEX Project (European Collaborative Project FP7-ENV-2012 number: 308601; http://www.complex.ac.uk/). Hingray, B., Saïd, M., 2014. Partitioning internal variability and model uncertainty components in a multimodel multireplicate ensemble of climate projections. J.Climate. doi:10.1175/JCLI-D-13-00629.1 Hingray, B., Blanchet, J. (revision) Unbiased estimators for uncertainty components in transient climate projections. J. Climate Hingray, B., Blanchet, J., Vidal, J.P. (revision) Robustness of uncertainty components estimates in climate projections. J.Climate
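The bias described above has a classical one-way ANOVA form: the empirical variance of the per-chain mean responses overestimates model uncertainty by roughly the internal-variability variance divided by the number of members per chain. A toy sketch of the naive and bias-corrected estimators (synthetic numbers, not the study's ensemble; the correction shown is the textbook one-way ANOVA fix, which is the spirit but not necessarily the letter of the authors' estimators):

```python
import numpy as np

rng = np.random.default_rng(5)
n_chains, n_members = 10, 3
sigma_model, sigma_iv = 1.0, 2.0       # true model spread and internal variability

# Each chain has one true climate response; members add internal variability
responses = rng.normal(0, sigma_model, n_chains)
ens = responses[:, None] + rng.normal(0, sigma_iv, (n_chains, n_members))

chain_means = ens.mean(axis=1)
s2_iv = ens.var(axis=1, ddof=1).mean()             # internal-variability estimate

naive = chain_means.var(ddof=1)                    # biased high by s2_iv / n_members
unbiased = naive - s2_iv / n_members               # one-way ANOVA bias correction
print(naive, unbiased)
```

With only 3 members and internal variability twice the model spread, the naive estimate is inflated substantially — the regime the abstract flags as most critical.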
Chau, Thinh; Parsi, Kory K; Ogawa, Toru; Kiuru, Maija; Konia, Thomas; Li, Chin-Shang; Fung, Maxwell A
2017-12-01
Psoriasis is usually diagnosed clinically, so only non-classic or refractory cases tend to be biopsied. Diagnostic uncertainty persists when dermatopathologists encounter features regarded as non-classic for psoriasis. Our aim was to define and document classic and non-classic histologic features in skin biopsies from patients with clinically confirmed psoriasis. Minimal clinical diagnostic criteria were informally validated and applied to a consecutive series of biopsies histologically consistent with psoriasis. Clinical confirmation required 2 of the following criteria: (1) classic morphology, (2) classic distribution, (3) nail pitting, and (4) family history, with #1 and/or #2 as one criterion in every case. Fifty-one biopsies from 46 patients were examined. Classic features of psoriasis included hypogranulosis (96%), club-shaped rete ridges (96%), dermal papilla capillary ectasia (90%), Munro microabscess (78%), suprapapillary plate thinning (63%), spongiform pustules (53%), and regular acanthosis (14%). Non-classic features included irregular acanthosis (84%), junctional vacuolar alteration (76%), spongiosis (76%), dermal neutrophils (69%), necrotic keratinocytes (67%), hypergranulosis (65%), neutrophilic spongiosis (61%), dermal eosinophils (49%), compact orthokeratosis (37%), papillary dermal fibrosis (35%), lichenoid infiltrate (25%), plasma cells (16%), and eosinophilic spongiosis (8%). Psoriasis exhibits a broader histopathologic spectrum than its classic description suggests. The presence of some non-classic features does not necessarily exclude the possibility of psoriasis. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Delia García, Rosa; Cuevas, Emilio; García, Omaira Elena; Ramos, Ramón; Romero-Campos, Pedro Miguel; de Ory, Fernado; Cachorro, Victoria Eugenia; de Frutos, Angel
2017-03-01
A 1-year inter-comparison of classical and modern radiation and sunshine duration (SD) instruments has been performed at Izaña Atmospheric Observatory (IZO) located in Tenerife (Canary Islands, Spain) starting on 17 July 2014. We compare daily global solar radiation (GSRH) records measured with a Kipp & Zonen CM-21 pyranometer, taken in the framework of the Baseline Surface Radiation Network, with those measured with a multifilter rotating shadowband radiometer (MFRSR), a bimetallic pyranometer (PYR) and GSRH estimated from sunshine duration performed by a Campbell-Stokes sunshine recorder (CS) and a Kipp & Zonen sunshine duration sensor (CSD). Given that the BSRN GSRH records passed strict quality controls (based on principles of physical limits and comparison with the LibRadtran model), they have been used as reference in the inter-comparison study. We obtain an overall root mean square error (RMSE) of ˜ 0.9 MJm-2 (4 %) for PYR and MFRSR GSRH, 1.9 (7 %) and 1.2 MJm-2 (5 %) for CS and CSD GSRH, respectively. Factors such as temperature, relative humidity (RH) and the solar zenith angle (SZA) have been shown to moderately affect the GSRH observations. As an application of the methodology developed in this work, we have re-evaluated the GSRH data time series obtained at IZO with two PYRs between 1977 and 1991. Their high consistency and temporal stability have been proved by comparing with GSRH estimates obtained from SD observations. These results demonstrate that (1) the continuous-basis inter-comparison of different GSRH techniques offers important diagnostics for identifying inconsistencies between GSRH data records, and (2) the GSRH measurements performed with classical and more simple instruments are consistent with more modern techniques and, thus, valid to recover GSRH data time series and complete worldwide distributed GSRH data. 
The inter-comparison and quality assessment of these different techniques have allowed us to obtain a complete and consistent long-term global solar radiation series (1977-2015) at Izaña.
Does an alteration of dialyzer design and geometry affect biocompatibility parameters?
Opatrný, Karel; Krouzzecký, Ales; Polanská, Kamila; Mares, Jan; Tomsů, Martina; Bowry, Sudhir K; Vienken, Jörg
2006-04-01
The aim of the study was to assess the biocompatibility profile of a newly developed high-flux polysulfone dialyzer type (FX-class dialyzer). The new class of dialyzers incorporates a number of novel design features (including a new membrane) that have been developed specifically in order to enhance the removal of small- and middle-size molecules. The new FX dialyzer series was compared with the classical routinely used high-flux polysulfone F series of dialyzers. In an open prospective, randomized, crossover clinical study, concentrations of the C5a complement component, and leukocyte count in blood and various thrombogenicity parameters were evaluated before, and at 15 and 60 min of hemodialysis at both dialyzer inlet and outlet in 9 long-term hemodialysis patients using the FX60S dialyzers and, after crossover, the classical F60S, while in another 9 patients, the evaluation was made with the dialyzers used in reverse order. The comparison of dialyzers based on evaluation of the group including all procedures with the FX60S and the group including procedures with the F60S did not reveal significant differences in platelet count, activated partial thromboplastin times, plasma heparin levels, platelet factor-4, D-dimer, C5a, and leukocyte count at any point of the collecting period. Both dialyzer types showed a significant increase in the plasma levels of the thrombin-antithrombin III complexes; however, the measured levels were only slightly elevated compared with the upper end of the normal range. Biocompatibility parameters reflecting the behavior of platelets, fibrinolysis, complement activation, and leukopenia do not differ during dialysis with either the FX60S or the F60S despite their large differences in design and geometry features. 
Although coagulation activation, as evaluated by one of the parameters used, was slightly higher with the FX60S, it was still within the range seen with other highly biocompatible dialyzers and therefore is not indicative of any appreciable activation of the coagulation system. Thus, the incorporation of various performance-enhancing design features into the new FX class of dialyzers does not result in a deterioration of their biocompatibility profile, which is comparable to that of the classical F series of dialyzers.
Aeroelastic Flight Data Analysis with the Hilbert-Huang Algorithm
NASA Technical Reports Server (NTRS)
Brenner, Martin J.; Prazenica, Chad
2006-01-01
This report investigates the utility of the Hilbert Huang transform for the analysis of aeroelastic flight data. It is well known that the classical Hilbert transform can be used for time-frequency analysis of functions or signals. Unfortunately, the Hilbert transform can only be effectively applied to an extremely small class of signals, namely those that are characterized by a single frequency component at any instant in time. The recently-developed Hilbert Huang algorithm addresses the limitations of the classical Hilbert transform through a process known as empirical mode decomposition. Using this approach, the data is filtered into a series of intrinsic mode functions, each of which admits a well-behaved Hilbert transform. In this manner, the Hilbert Huang algorithm affords time-frequency analysis of a large class of signals. This powerful tool has been applied in the analysis of scientific data, structural system identification, mechanical system fault detection, and even image processing. The purpose of this report is to demonstrate the potential applications of the Hilbert Huang algorithm for the analysis of aeroelastic systems, with improvements such as localized online processing. Applications for correlations between system input and output, and amongst output sensors, are discussed to characterize the time-varying amplitude and frequency correlations present in the various components of multiple data channels. Online stability analyses and modal identification are also presented. Examples are given using aeroelastic test data from the F-18 Active Aeroelastic Wing airplane, an Aerostructures Test Wing, and pitch plunge simulation.
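The limitation of the classical Hilbert transform noted above — it behaves well only for signals with a single frequency component at any instant — is easy to see on a mono-component chirp, where the analytic signal directly yields the instantaneous frequency. A sketch using SciPy (signal parameters are mine, unrelated to the flight data):

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
f0, f1 = 5.0, 25.0
# Linear chirp, 5 -> 25 Hz over 2 s: phase = 2*pi*(f0*t + (f1 - f0)*t^2/(2*T))
x = np.cos(2 * np.pi * (f0 * t + (f1 - f0) * t**2 / 4))

analytic = hilbert(x)                            # x + i * Hilbert(x)
amplitude = np.abs(analytic)                     # instantaneous amplitude
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) / (2 * np.pi) * fs    # instantaneous frequency (Hz)
print(inst_freq[100], inst_freq[-100])
```

For a multi-component aeroelastic response, this direct route fails; empirical mode decomposition first splits the data into intrinsic mode functions so that each one admits this well-behaved analysis.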
Aeroelastic Flight Data Analysis with the Hilbert-Huang Algorithm
NASA Technical Reports Server (NTRS)
Brenner, Marty; Prazenica, Chad
2005-01-01
This paper investigates the utility of the Hilbert-Huang transform for the analysis of aeroelastic flight data. It is well known that the classical Hilbert transform can be used for time-frequency analysis of functions or signals. Unfortunately, the Hilbert transform can only be effectively applied to an extremely small class of signals, namely those that are characterized by a single frequency component at any instant in time. The recently-developed Hilbert-Huang algorithm addresses the limitations of the classical Hilbert transform through a process known as empirical mode decomposition. Using this approach, the data is filtered into a series of intrinsic mode functions, each of which admits a well-behaved Hilbert transform. In this manner, the Hilbert-Huang algorithm affords time-frequency analysis of a large class of signals. This powerful tool has been applied in the analysis of scientific data, structural system identification, mechanical system fault detection, and even image processing. The purpose of this paper is to demonstrate the potential applications of the Hilbert-Huang algorithm for the analysis of aeroelastic systems, with improvements such as localized/online processing. Applications for correlations between system input and output, and amongst output sensors, are discussed to characterize the time-varying amplitude and frequency correlations present in the various components of multiple data channels. Online stability analyses and modal identification are also presented. Examples are given using aeroelastic test data from the F/A-18 Active Aeroelastic Wing aircraft, an Aerostructures Test Wing, and pitch-plunge simulation.
NASA Technical Reports Server (NTRS)
Baysal, Oktay
1986-01-01
Explicit-implicit and implicit two-dimensional Navier-Stokes codes, along with various grid generation capabilities, were developed. A series of classical benchmark cases was simulated using these codes.
Visualization of system dynamics using phasegrams
Herbst, Christian T.; Herzel, Hanspeter; Švec, Jan G.; Wyman, Megan T.; Fitch, W. Tecumseh
2013-01-01
A new tool for visualization and analysis of system dynamics is introduced: the phasegram. Its application is illustrated with both classical nonlinear systems (logistic map and Lorenz system) and with biological voice signals. Phasegrams combine the advantages of sliding-window analysis (such as the spectrogram) with well-established visualization techniques from the domain of nonlinear dynamics. In a phasegram, time is mapped onto the x-axis, and various vibratory regimes, such as periodic oscillation, subharmonics or chaos, are identified within the generated graph by the number and stability of horizontal lines. A phasegram can be interpreted as a bifurcation diagram in time. In contrast to other analysis techniques, it can be automatically constructed from time-series data alone: no additional system parameter needs to be known. Phasegrams show great potential for signal classification and can act as the quantitative basis for further analysis of oscillating systems in many scientific fields, such as physics (particularly acoustics), biology or medicine. PMID:23697715
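The phasegram's core ingredient, counting the number of stable levels the orbit visits within a sliding window, can be sketched in a few lines. This is an illustrative approximation, not the authors' implementation; the logistic-map parameter values are standard textbook choices.

```python
import numpy as np

def logistic_series(r, n, x0=0.3, burn=500):
    """Logistic map orbit after discarding a transient."""
    x = x0
    for _ in range(burn):
        x = r * x * (1 - x)
    out = np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = x
    return out

def count_levels(window, tol=1e-3):
    """Number of distinct values the orbit visits: the 'horizontal lines'
    of a phasegram column, i.e. a period estimate for the window."""
    levels = []
    for v in np.sort(window):
        if not levels or v - levels[-1] > tol:
            levels.append(v)
    return len(levels)

# r = 3.2 gives a period-2 cycle, r = 3.5 a period-4 cycle
p2 = count_levels(logistic_series(3.2, 200))
p4 = count_levels(logistic_series(3.5, 200))
print(p2, p4)
```

Applying `count_levels` to consecutive windows of a signal with a slowly drifting parameter yields the period-versus-time picture that a phasegram visualizes.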
Universality classes of fluctuation dynamics in hierarchical complex systems
NASA Astrophysics Data System (ADS)
Macêdo, A. M. S.; González, Iván R. Roa; Salazar, D. S. P.; Vasconcelos, G. L.
2017-03-01
A unified approach is proposed to describe the statistics of the short-time dynamics of multiscale complex systems. The probability density function of the relevant time series (signal) is represented as a statistical superposition of a large time-scale distribution weighted by the distribution of certain internal variables that characterize the slowly changing background. The dynamics of the background is formulated as a hierarchical stochastic model whose form is derived from simple physical constraints, which in turn restrict the dynamics to only two possible classes. The probability distributions of both the signal and the background have simple representations in terms of Meijer G functions. The two universality classes for the background dynamics manifest themselves in the signal distribution as two types of tails: power law and stretched exponential, respectively. A detailed analysis of empirical data from classical turbulence and financial markets shows excellent agreement with the theory.
Du, Wenxiao; Zeng, Fanrong
2016-12-14
Adults of the lady beetle species Harmonia axyridis (Pallas) are bred artificially en masse for classic biological control, which requires egg-laying by the H. axyridis ovary. Development-related genes may impact the growth of the adult H. axyridis ovary but have not been reported. Here, we used integrative time-series RNA-seq analysis of the ovary in H. axyridis adults to detect development-related genes. A total of 28,558 unigenes were functionally annotated using seven types of databases to obtain an annotated unigene database for ovaries of H. axyridis adults. We also analysed differentially expressed genes (DEGs) between samples. Combining the results of this bioinformatics analysis with literature reports and with gene expression changes across four different stages, we focused on the development of oocyte reproductive stem cells and the yolk formation process, and identified 26 genes with high similarity to development-related genes. Twenty DEGs were randomly chosen for quantitative real-time PCR (qRT-PCR) to validate the accuracy of the RNA-seq results. This study establishes a robust pipeline for the discovery of key genes using high-throughput sequencing and identifies a class of development-related genes for characterization.
A stacking method and its applications to Lanzarote tide gauge records
NASA Astrophysics Data System (ADS)
Zhu, Ping; van Ruymbeke, Michel; Cadicheanu, Nicoleta
2009-12-01
A time-period analysis tool based on stacking is introduced in this paper. The original idea comes from the classical tidal analysis method. It is assumed that the period of each major tidal component is precisely determined from the astronomical constants and is unchangeable with time at a given point on the Earth. We sum the tidal records over a fixed tidal-component center period T and then take the mean. Stacking can significantly increase the signal-to-noise ratio (SNR) once a sufficient number of stacking cycles is reached. The stacking results were fitted with a sinusoidal function, whose amplitude and phase were computed by the least-squares method. The advantages of the method are that (1) an individual periodic signal can be isolated by stacking; (2) a linear Stacking-Spectrum (SSP) can be constructed by varying the stacking period Ts; and (3) the time-period distribution of a singular component can be approximated by a Sliding-Stacking approach. The shortcoming of the method is that isolating a low-energy frequency, or separating nearby frequencies, requires a sufficiently long series with a high sampling rate. The method was tested on a numerical series and then applied to 1788 days of Lanzarote tide gauge records as an example.
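The stacking procedure can be sketched as epoch folding: fold the series at a trial period, average each phase bin, and scan the trial period. This is a minimal illustration with a synthetic signal, not the authors' tidal code; the period, noise level, and bin count are invented.

```python
import numpy as np

def stack(series, period, n_bins):
    """Fold the series at `period` samples and average each phase bin."""
    idx = (np.arange(len(series)) % period) * n_bins // period
    sums = np.bincount(idx, weights=series, minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    return sums / counts

rng = np.random.default_rng(0)
n = 20000
true_period = 125                      # samples (illustrative)
t = np.arange(n)
noisy = np.sin(2 * np.pi * t / true_period) + 3.0 * rng.standard_normal(n)

# Stacking spectrum: peak-to-peak amplitude of the folded mean vs trial period.
# Averaging ~160 cycles suppresses the noise by ~sqrt(160) per bin.
trial_periods = np.arange(100, 151)
amps = [np.ptp(stack(noisy, p, 25)) for p in trial_periods]
best = trial_periods[int(np.argmax(amps))]
print(best)
```

The `amps` curve as a function of trial period plays the role of the linear Stacking-Spectrum (SSP) described in the abstract.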
New variables for classical and quantum gravity in all dimensions: I. Hamiltonian analysis
NASA Astrophysics Data System (ADS)
Bodendorfer, N.; Thiemann, T.; Thurn, A.
2013-02-01
Loop quantum gravity (LQG) relies heavily on a connection formulation of general relativity such that (1) the connection Poisson commutes with itself and (2) the corresponding gauge group is compact. This can be achieved starting from the Palatini or Holst action when imposing the time gauge. Unfortunately, this method is restricted to D + 1 = 4 spacetime dimensions. However, interesting string theories and supergravity theories require higher dimensions and it would therefore be desirable to have higher dimensional supergravity loop quantizations at one’s disposal in order to compare these approaches. In this series of papers we take first steps toward this goal. The present first paper develops a classical canonical platform for a higher dimensional connection formulation of the purely gravitational sector. The new ingredient is a different extension of the ADM phase space than the one used in LQG which does not require the time gauge and which generalizes to any dimension D > 1. The result is a Yang-Mills theory phase space subject to Gauß, spatial diffeomorphism and Hamiltonian constraint as well as one additional constraint, called the simplicity constraint. The structure group can be chosen to be SO(1, D) or SO(D + 1) and the latter choice is preferred for purposes of quantization.
Modification of hormonal secretion in clinically silent pituitary adenomas.
Daems, Tania; Verhelst, Johan; Michotte, Alex; Abrams, Pascale; De Ridder, Dirk; Abs, Roger
2009-01-01
Silent pituitary adenomas are a subtype of adenomas characterized by positive immunoreactivity for one or more hormones classically secreted by normal pituitary cells but without clinical expression, although on some occasions enhanced or changed secretory activity can develop over time. Silent corticotroph adenomas are the classical example of this phenomenon. A series of about 500 pituitary adenomas seen over a period of 20 years was screened for modification in hormonal secretion. Biochemical and immunohistochemical data were reviewed. Two cases were retrieved, one silent somatotroph adenoma and one silent thyrotroph adenoma, both without specific clinical features or biochemical abnormalities, which presented 20 years after initial surgery with evidence of acromegaly and hyperthyroidism, respectively. While the acromegaly was controlled by a combination of somatostatin analogs and growth hormone (GH) receptor antagonist therapy, neurosurgery was necessary to manage the thyrotroph adenoma. Immunohistochemical examination demonstrated an increase in the number of thyroid stimulating hormone (TSH)-immunoreactive cells compared to the first tissue. Apparently, the mechanisms responsible for the secretory modifications differ: a change in secretory capacity in the silent somatotroph adenoma and a quantitative change in the silent thyrotroph adenoma. These two cases, one somatotroph and one thyrotroph adenoma, illustrate that clinically silent pituitary adenomas may in rare circumstances evolve over time and become active, as previously demonstrated for silent corticotroph adenomas.
Imperfect pitch: Gabor's uncertainty principle and the pitch of extremely brief sounds.
Hsieh, I-Hui; Saberi, Kourosh
2016-02-01
How brief must a sound be before its pitch is no longer perceived? The uncertainty tradeoff between temporal and spectral resolution (Gabor's principle) limits the minimum duration required for accurate pitch identification or discrimination. Prior studies have reported that pitch can be extracted from sinusoidal pulses as brief as half a cycle. This finding has been used in a number of classic papers to develop models of pitch encoding. We have found that phase randomization, which eliminates timbre confounds, degrades this ability to chance, raising serious concerns over the foundation on which classic pitch models have been built. The current study investigated whether subthreshold pitch cues may still exist in partial-cycle pulses revealed through statistical integration in a time series containing multiple pulses. To this end, we measured frequency-discrimination thresholds in a two-interval forced-choice task for trains of partial-cycle random-phase tone pulses. We found that residual pitch cues exist in these pulses but discriminating them requires an order of magnitude (ten times) larger frequency difference than that reported previously, necessitating a re-evaluation of pitch models built on earlier findings. We also found that as pulse duration is decreased to less than two cycles its pitch becomes biased toward higher frequencies, consistent with predictions of an auto-correlation model of pitch extraction.
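The duration-pitch tradeoff invoked above is Gabor's uncertainty relation: for RMS widths, Δt·Δf ≥ 1/(4π), with equality for a Gaussian pulse. A short numerical check (illustrative only; the pulse parameters are invented):

```python
import numpy as np

def rms_width(axis, density):
    """RMS width of a (non-negative) density over the given axis."""
    density = density / density.sum()
    mean = (axis * density).sum()
    return np.sqrt(((axis - mean) ** 2 * density).sum())

fs = 2000.0
t = np.arange(-1.0, 1.0, 1.0 / fs)
sigma = 0.05
x = np.exp(-t**2 / (2 * sigma**2))          # Gaussian pulse attains the bound

f = np.fft.rfftfreq(len(t), 1.0 / fs)
X = np.abs(np.fft.rfft(x))

dt = rms_width(t, x**2)                     # RMS duration from |x(t)|^2
# rfft is one-sided; mirror the spectrum so the RMS width is about f = 0
f2 = np.concatenate([-f[:0:-1], f])
P2 = np.concatenate([X[:0:-1] ** 2, X ** 2])
df = rms_width(f2, P2)

product = dt * df
print(product, 1 / (4 * np.pi))
```

Any scheme that shortens the pulse below this limit necessarily smears its spectrum, which is why sub-cycle pulses cannot carry a sharply defined pitch cue.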
Change detection of polarimetric SAR images based on the KummerU Distribution
NASA Astrophysics Data System (ADS)
Chen, Quan; Zou, Pengfei; Li, Zhen; Zhang, Ping
2014-11-01
In the field of PolSAR image segmentation, change detection and classification, the classical Wishart distribution has been used for a long time, but it is especially suited to low-resolution SAR images, because with traditional sensors only a small number of scatterers are present in each resolution cell. With the improvement of SAR systems in recent years, the classical statistical models can therefore be reconsidered for the high resolution and polarimetric information contained in the images acquired by these advanced systems. In this study, SAR image segmentation based on the level-set method with distance regularized level-set evolution (DRLSE) is performed using Envisat/ASAR single-polarization data and Radarsat-2 polarimetric images, respectively. The KummerU heterogeneous clutter model is used in the latter to overcome the homogeneity hypothesis at high-resolution cells. An enhanced distance regularized level-set evolution (DRLSE-E) is also applied to the latter, to ensure accurate computation and stable level-set evolution. Finally, change detection based on four polarimetric Radarsat-2 time-series images is carried out in the Genhe area of the Inner Mongolia Autonomous Region, northeastern China, where a heavy flood occurred during the summer of 2013; the results show that the recommended segmentation method can detect changes in the watershed effectively.
Rigorous Combination of GNSS and VLBI: How it Improves Earth Orientation and Reference Frames
NASA Astrophysics Data System (ADS)
Lambert, S. B.; Richard, J. Y.; Bizouard, C.; Becker, O.
2017-12-01
Current reference series (C04) of the International Earth Rotation and Reference Systems Service (IERS) are produced by a weighted combination of Earth orientation parameter (EOP) time series built up by the combination centers of each technique (VLBI, GNSS, laser ranging, DORIS). In the future, we plan to derive EOP from a rigorous combination of the normal equation systems of the four techniques. We present here the results of a rigorous combination of VLBI and GNSS pre-reduced, constraint-free normal equations with the DYNAMO geodetic analysis software package developed and maintained by the French GRGS (Groupe de Recherche en Géodésie Spatiale). The normal equations used are those produced separately by the IVS and IGS combination centers, to which we apply our own minimal constraints. We address the usefulness of such a method with respect to the classical, a posteriori combination method, and we show whether EOP determinations are improved. In particular, we implement external validations of the EOP series based on comparison with geophysical excitation and examination of the covariance matrices. Finally, we address the potential of the technique for the next-generation celestial reference frames, which are currently determined by VLBI only.
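The difference between a rigorous combination and an a posteriori one can be sketched in a few lines: instead of averaging per-technique solutions, the pre-reduced normal systems are summed and solved once. The sketch below is illustrative only, with simulated observations and invented noise levels; it is not the DYNAMO software.

```python
import numpy as np

rng = np.random.default_rng(1)
x_true = np.array([0.3, -1.2, 2.5])      # common parameters (e.g. EOP offsets)

def normal_equations(n_obs, noise):
    """Simulated per-technique normal system N x = b from noisy observations."""
    A = rng.standard_normal((n_obs, 3))
    y = A @ x_true + noise * rng.standard_normal(n_obs)
    w = 1.0 / noise**2                   # weight = inverse observation variance
    return w * A.T @ A, w * A.T @ y

N_vlbi, b_vlbi = normal_equations(40, 0.5)
N_gnss, b_gnss = normal_equations(400, 0.1)

# Rigorous combination: stack the normal systems, then solve once.
x_comb = np.linalg.solve(N_vlbi + N_gnss, b_vlbi + b_gnss)
err = np.abs(x_comb - x_true).max()
print(x_comb, err)
```

Summing the normal systems preserves the full cross-parameter covariance information, which an a posteriori average of two separate solutions would discard.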
Modular Curriculum: English/Social Studies, Japanese Civilization.
ERIC Educational Resources Information Center
Spear, Richard L.
This independent study course for college credit is a study of Japanese civilization. The nine lessons that comprise the course are: 1. The Origins of the Civilization: From Primitive to Early Classical Times; 2. The Classical Tradition I: The Religion and Aesthetics of Classical Times; 3. The Classical Tradition II: A View of Court Life through…
OSMEAN - OSCULATING/MEAN CLASSICAL ORBIT ELEMENTS CONVERSION (HP9000/7XX VERSION)
NASA Technical Reports Server (NTRS)
Guinn, J. R.
1994-01-01
OSMEAN is a sophisticated FORTRAN algorithm that converts between osculating and mean classical orbit elements. Mean orbit elements are advantageous for trajectory design and maneuver planning since they can be propagated very quickly; however, mean elements cannot describe the exact orbit at any given time. Osculating elements will enable the engineer to give an exact description of an orbit; however, computation costs are significantly higher due to the numerical integration procedure required for propagation. By calculating accurate conversions between osculating and mean orbit elements, OSMEAN allows the engineer to exploit the advantages of each approach for the design and planning of orbital trajectories and maneuver planning. OSMEAN is capable of converting mean elements to osculating elements or vice versa. The conversion is based on modelling of all first order aspherical and lunar-solar gravitation perturbations as well as a second-order aspherical term based on the second degree central body zonal perturbation. OSMEAN is written in FORTRAN 77 for HP 9000 series computers running HP-UX (NPO-18796) and DEC VAX series computers running VMS (NPO-18741). The HP version requires 388K of RAM for execution and the DEC VAX version requires 254K of RAM for execution. Sample input and output are listed in the documentation. Sample input is also provided on the distribution medium. The standard distribution medium for the HP 9000 series version is a .25 inch streaming magnetic IOTAMAT tape cartridge in UNIX tar format. It is also available on a .25 inch streaming magnetic tape cartridge in UNIX tar format or on a 3.5 inch diskette in UNIX tar format. The standard distribution medium for the DEC VAX version is a 1600 BPI 9-track magnetic tape in DEC VAX BACKUP format. It is also available on a TK50 tape cartridge in DEC VAX BACKUP format. OSMEAN was developed on a VAX 6410 in 1989, and was ported to the HP 9000 series platform in 1991. 
It is a copyrighted work with all copyright vested in NASA.
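To illustrate why mean elements propagate so cheaply, the dominant first-order secular rates due to the J2 zonal term have closed forms. The sketch below is not OSMEAN's algorithm, which also models lunar-solar and second-order perturbations; constants are standard Earth values and the sun-synchronous example inclination is approximate.

```python
import math

MU = 398600.4418        # km^3/s^2, Earth GM
RE = 6378.137           # km, Earth equatorial radius
J2 = 1.08262668e-3

def j2_secular_rates(a, e, i):
    """First-order J2 secular rates (rad/s) of RAAN and argument of perigee
    for mean semi-major axis a (km), eccentricity e, inclination i (rad)."""
    n = math.sqrt(MU / a**3)                      # mean motion
    p = a * (1 - e**2)                            # semi-latus rectum
    k = 1.5 * J2 * (RE / p)**2 * n
    raan_dot = -k * math.cos(i)
    argp_dot = k * (2 - 2.5 * math.sin(i)**2)
    return raan_dot, argp_dot

# Sun-synchronous check: a ~700 km circular orbit near i = 98.2 deg should
# precess its node by roughly 0.9856 deg/day (360 deg/year).
a = RE + 700.0
raan_dot, _ = j2_secular_rates(a, 0.0, math.radians(98.19))
deg_per_day = math.degrees(raan_dot) * 86400.0
print(deg_per_day)
```

Propagating mean elements is then a handful of multiplications per epoch, versus the numerical integration that osculating elements require.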
OSMEAN - OSCULATING/MEAN CLASSICAL ORBIT ELEMENTS CONVERSION (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Guinn, J. R.
1994-01-01
Diverse Power Iteration Embeddings and Its Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang H.; Yoo S.; Yu, D.
2014-12-14
Spectral embedding is one of the most effective dimension reduction algorithms in data mining. However, its computational complexity has to be mitigated in order to apply it to real-world large-scale data analysis. Much research has focused on developing approximate spectral embeddings that are more efficient but far less effective. This paper proposes Diverse Power Iteration Embeddings (DPIE), which not only retains the efficiency of power iteration methods but also produces a series of diverse and more effective embedding vectors. We test this novel method by applying it to various data mining applications (e.g. clustering, anomaly detection and feature selection) and evaluating their performance improvements. The experimental results show that our proposed DPIE is more effective than popular spectral approximation methods and obtains similar quality to classic spectral embedding derived from eigen-decompositions. Moreover, it is extremely fast on big data applications: in terms of clustering results, for example, DPIE achieves as much as 95% of the quality of classic spectral clustering on complex datasets while being 4000+ times faster in a limited-memory environment.
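The power-iteration idea underlying such embeddings can be sketched in a few lines: iterating a row-normalized affinity matrix on a random vector collapses the vector toward a piecewise-constant cluster indicator long before it reaches the trivial uniform eigenvector. This is an illustrative sketch in the spirit of power-iteration embedding, not the DPIE algorithm itself; the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
# Two well-separated Gaussian blobs in 2-D
pts = np.vstack([rng.normal(0, 0.3, (30, 2)),
                 rng.normal(4, 0.3, (30, 2))])

# Row-normalized Gaussian affinity matrix W
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
A = np.exp(-d2)
W = A / A.sum(axis=1, keepdims=True)

# Power iteration with L1 renormalization
v = rng.random(len(pts))
for _ in range(200):
    v = W @ v
    v = v / np.abs(v).sum()

# Points in the same blob end up with near-identical embedding values,
# while the two blobs keep distinct values.
spread_within = max(v[:30].std(), v[30:].std())
gap_between = abs(v[:30].mean() - v[30:].mean())
print(spread_within, gap_between)
```

DPIE's contribution, per the abstract, is to extract several such diverse vectors rather than the single one this sketch produces.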
Socially acquired predator avoidance: is it just classical conditioning?
Griffin, Andrea S
2008-06-15
Associative learning theories presume the existence of a general-purpose learning process whose structure does not mirror the demands of any particular learning problem. In contrast, learning scientists working within an evolutionary biology tradition believe that learning processes have been shaped by ecological demands. One potential means of exploring how ecology may have modified properties of acquisition is to use associative learning theory as a framework within which to analyse a particular learning phenomenon. Recent work has used this approach to examine whether socially transmitted predator avoidance can be conceptualised as a classical conditioning process in which a novel predator stimulus acts as a conditioned stimulus (CS) and acquires control over an avoidance response after it has become associated with alarm signals of social companions, the unconditioned stimulus (US). I review here a series of studies examining the effect of CS/US presentation timing on the likelihood of acquisition. Results suggest that socially acquired predator avoidance may be less sensitive to forward relationships than traditional classical conditioning paradigms. I make the case that socially acquired predator avoidance is an exciting novel one-trial learning paradigm that could be studied alongside fear conditioning. Comparisons between social and non-social learning of danger at both the behavioural and neural level may yield a better understanding of how ecology might shape the properties and mechanisms of learning.
[Scimitar syndrome: a case series].
Jaramillo González, Carlos; Karam Bechara, José; Sáenz Gómez, Jessica; Siegert Olivares, Augusto; Jamaica Balderas, Lourdes
Scimitar syndrome is a rare and complex congenital anomaly of the lung with multiple variants, named for the radiological resemblance of the anomalous vein to the classical curved sword. Its defining feature is anomalous pulmonary venous drainage. It is associated with various cardiothoracic malformations and a wide spectrum of clinical manifestations. Nine patients diagnosed with scimitar syndrome found in the database of the Hospital Infantil de México between 2009 and 2013 were reviewed. Demographic records, clinical status and reported hemodynamic parameters were collected. This case series calls attention to certain differences between our group of patients and those reported in the international literature. Patients were predominantly female and were diagnosed between 1 and 20 months of life. All were asymptomatic at the time of the study. Half of the patients had a history of respiratory disease, and all patients had pulmonary hypertension. Surgical management was required in one-third of the patient group. Copyright © 2014 Hospital Infantil de México Federico Gómez. Published by Masson Doyma México S.A. All rights reserved.
Automatic Network Fingerprinting through Single-Node Motifs
Echtermeyer, Christoph; da Fontoura Costa, Luciano; Rodrigues, Francisco A.; Kaiser, Marcus
2011-01-01
Complex networks have been characterised by their specific connectivity patterns (network motifs), but their building blocks can also be identified and described by node-motifs—a combination of local network features. One technique to identify single node-motifs has been presented by Costa et al. (L. D. F. Costa, F. A. Rodrigues, C. C. Hilgetag, and M. Kaiser, Europhys. Lett., 87, 1, 2009). Here, we first suggest improvements to the method including how its parameters can be determined automatically. Such automatic routines make high-throughput studies of many networks feasible. Second, the new routines are validated in different network-series. Third, we provide an example of how the method can be used to analyse network time-series. In conclusion, we provide a robust method for systematically discovering and classifying characteristic nodes of a network. In contrast to classical motif analysis, our approach can identify individual components (here: nodes) that are specific to a network. Such special nodes, as hubs before, might be found to play critical roles in real-world networks. PMID:21297963
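The notion of a single-node motif, a vector of local features describing one node, can be illustrated with a toy graph. This is a hypothetical sketch: the feature choice of degree, clustering coefficient and mean neighbor degree follows the general idea, not the exact feature set of Costa et al.

```python
from itertools import combinations

# Toy undirected graph: a 5-clique (nodes 0-4) with a pendant chain 4-5-6
edges = [(a, b) for a, b in combinations(range(5), 2)] + [(4, 5), (5, 6)]
adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

def node_motif(n):
    """Local feature vector: degree, clustering coefficient, mean neighbor degree."""
    nbrs = adj[n]
    k = len(nbrs)
    links = sum(1 for u, v in combinations(nbrs, 2) if v in adj[u])
    cc = 2 * links / (k * (k - 1)) if k > 1 else 0.0
    mean_nbr_deg = sum(len(adj[m]) for m in nbrs) / k
    return (k, cc, mean_nbr_deg)

features = {n: node_motif(n) for n in adj}
print(features)
```

Clique members, the bridging node 4, and the chain nodes each get clearly distinct feature vectors, which is the raw material for classifying characteristic nodes.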
Unemployment and inflation dynamics prior to the economic downturn of 2007-2008.
Guastello, Stephen J; Myers, Adam
2009-10-01
This article revisits a long-standing theoretical issue as to whether a "natural rate" of unemployment exists in the sense of an exogenously driven fixed-point Walrasian equilibrium or attractor, or whether more complex dynamics such as hysteresis or chaos characterize an endogenous dynamical process instead. The same questions are posed regarding a possible natural rate of inflation, along with an investigation of the actual relationship between inflation and unemployment, on which extant theories differ. Time series of unemployment and inflation for US data were analyzed using the exponential model series and nonlinear regression for capturing Lyapunov exponents and transfer effects from other variables. The best explanation for unemployment was that it is a chaotic variable that is driven in part by inflation. The best explanation for inflation is that it is also a chaotic variable, driven in part by unemployment and the prices of treasury bills. Estimates of the attractors' epicenters were calculated in lieu of classical natural rates.
Fractal dimension and nonlinear dynamical processes
NASA Astrophysics Data System (ADS)
McCarty, Robert C.; Lindley, John P.
1993-11-01
Mandelbrot, Falconer and others have demonstrated the existence of dimensionally invariant geometrical properties of non-linear dynamical processes known as fractals. Barnsley defines fractal geometry as an extension of classical geometry. Such an extension, however, is not mathematically trivial. Of specific interest to those engaged in signal processing is the potential use of fractal geometry to facilitate the analysis of non-linear signal processes, often referred to as non-linear time series. Fractal geometry has been used in the modeling of non-linear time series represented by radar signals in the presence of ground clutter, or of interference generated by spatially distributed reflections around the target or the radar system. It was recognized by Mandelbrot that the fractal geometries represented by man-made objects had different dimensions than the geometries of the familiar objects that abound in nature, such as leaves, clouds, ferns and trees. The invariant dimensional property of non-linear processes suggests that in the case of acoustic signals (active or passive) generated within a dispersive medium such as the ocean environment, there exists much rich structure that will aid in the detection and classification of various objects, man-made or natural, within the medium.
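A standard way to estimate the invariant dimension mentioned above is box counting. A sketch for the middle-third Cantor set, whose dimension is known to be ln 2 / ln 3 ≈ 0.631 (illustrative; not taken from the paper):

```python
import math

# Left endpoints of the middle-third Cantor set at construction depth 10,
# offset to interval midpoints so no point sits exactly on a box boundary.
depth = 10
pts = [0.0]
for _ in range(depth):
    pts = [x / 3 for x in pts] + [2 / 3 + x / 3 for x in pts]
pts = [x + 0.5 * 3.0 ** -depth for x in pts]

def box_count(points, eps):
    """Number of boxes of size eps needed to cover the points."""
    return len({math.floor(p / eps) for p in points})

# Least-squares slope of log N(eps) against log(1/eps) over scales eps = 3^-k
pairs = [(k * math.log(3), math.log(box_count(pts, 3.0 ** -k)))
         for k in range(1, 8)]
n = len(pairs)
mx = sum(x for x, _ in pairs) / n
my = sum(y for _, y in pairs) / n
dim = sum((x - mx) * (y - my) for x, y in pairs) / \
      sum((x - mx) ** 2 for x, _ in pairs)
print(dim)
```

The same slope-of-log-counts recipe applies to delay-embedded signal data, where a non-integer slope is the signature of fractal structure.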
NASA Astrophysics Data System (ADS)
Hughes, S.; Gotoh, H.; Kamada, H.
2006-09-01
We present a theoretical study of photon-coupled single quantum dots in a semiconductor. A series of optical effects are demonstrated, including a subradiant dark resonance, superradiance, reversible spontaneous emission decay, and pronounced exciton entanglement. Both classical and quantum optical approaches are presented using a self-consistent formalism that treats real and virtual photon exchange on an equal footing and can account for different quantum dot properties, surface effects, and retardation in the dipole-dipole coupling, all of which are shown to play a non-negligible role.
["Secret causes": causality and determinism in the classical age].
Cléro, Jean-Pierre
2014-01-01
The notion of the "secret cause", which appears in many classical texts is tied to a particular practice of science and a conception of its methods where the "law" finds itself at the center of the nexus. If certain phenomena appear to escape the law, one is obliged to amend the law through the introduction of a series of "small equations." If the calculation of probabilities is deployed, this is to precisely reveal causes which are, at their origin, secret, but which will gradually become less so and eventually conform to laws.
Expansion of the gravitational potential with computerized Poisson series
NASA Technical Reports Server (NTRS)
Broucke, R.
1976-01-01
The paper describes a recursive formulation for the expansion of the gravitational potential valid for both the tesseral and zonal harmonics. The expansion is primarily in rectangular coordinates, but the classical orbit elements or equinoctial orbit elements can be easily substituted. The equations of motion for the zonal harmonics in both classical and equinoctial orbital elements are described in a form which will result in closed-form expressions for the first-order perturbations. In order to achieve this result, the true longitude or the true anomaly has to be used as the independent variable.
Immunomodulation of classical and non-classical HLA molecules by ionizing radiation.
Gallegos, Cristina E; Michelin, Severino; Dubner, Diana; Carosella, Edgardo D
2016-05-01
Radiotherapy has been employed for the treatment of oncological patients for nearly a century, and together with surgery and chemotherapy, radiation oncology constitutes one of the three pillars of cancer therapy. Ionizing radiation has complex effects on neoplastic cells and on the tumor microenvironment: beyond its action as a direct cytotoxic agent, tumor irradiation triggers a series of alterations in tumoral cells, which include the de novo synthesis of particular proteins and the up/down-regulation of cell surface molecules. Additionally, ionizing radiation may induce the release of "danger signals" which may, in turn, lead to cellular and molecular responses by the immune system. This immunomodulatory action of ionizing radiation highlights the importance of the combined use of radiotherapy plus immunotherapy for cancer healing. Major histocompatibility complex antigens (called Human Leukocyte Antigens, HLA, in humans) are among the molecules whose expression is modulated after irradiation. This review summarizes the modulatory properties of ionizing radiation on the expression of HLA class I (classical and non-classical) and class II molecules, with special emphasis on non-classical HLA-I molecules. Copyright © 2016 Elsevier Inc. All rights reserved.
SPAGETTA, a Gridded Weather Generator: Calibration, Validation and its Use for Future Climate
NASA Astrophysics Data System (ADS)
Dubrovsky, Martin; Rotach, Mathias W.; Huth, Radan
2017-04-01
Spagetta is a new (started in 2016) stochastic multi-site multi-variate weather generator (WG). It can produce realistic synthetic daily (or monthly, or annual) weather series representing both present and future climate conditions at multiple sites (grids or stations irregularly distributed in space). The generator, whose model is based on Wilks' (1999) multi-site extension of the parametric (Richardson-type) single-site M&Rfi generator, may be run in two modes. In the first mode, it is run as a classical generator: it is first calibrated using weather data from multiple sites, and only then may it produce arbitrarily long synthetic time series mimicking the spatial and temporal structure of the calibration weather data. To generate weather series representing the future climate, the WG parameters are modified according to a climate change scenario, typically derived from GCM or RCM simulations. In the second mode, the user provides only basic information (not necessarily realistic) on the temporal and spatial auto-correlation structure of the surface weather variables and their mean annual cycle; the generator itself derives the parameters of the underlying autoregressive model, which produces the multi-site weather series. In the latter mode of operation, the user is allowed to prescribe a spatially varying trend, which is superimposed on the values produced by the generator; this feature has been implemented for use in developing a methodology for assessing the significance of trends in multi-site weather series (for more details see another EGU-2017 contribution: Huth and Dubrovsky, 2017, Evaluating collective significance of climatic trends: A comparison of methods on synthetic data; EGU2017-4993). This contribution will focus on the first (classical) mode.
The poster will present (a) model of the generator, (b) results of the validation tests made in terms of the spatial hot/cold/dry/wet spells, and (c) results of the pilot climate change impact experiment, in which (i) the WG parameters representing the spatial and temporal variability are modified using the climate change scenarios and then (ii) the effect on the above spatial validation indices derived from the synthetic series produced by the modified WG is analysed. In this experiment, the generator is calibrated using the E-OBS gridded daily weather data for several European regions, and the climate change scenarios are derived from the selected RCM simulation (taken from the CORDEX database).
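The core of a Wilks-type multi-site generator, site-wise autoregression driven by spatially correlated innovations, can be sketched as follows. This is a minimal illustration, not the Spagetta code; all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sites, n_days = 4, 5000
rho_time = 0.7                      # lag-1 autocorrelation at each site
rho_space = 0.5                     # target inter-site correlation of the noise

# Spatially correlated innovations via a Cholesky factor of the
# (compound-symmetric) spatial correlation matrix
C = np.full((n_sites, n_sites), rho_space) + (1 - rho_space) * np.eye(n_sites)
L = np.linalg.cholesky(C)
eps = L @ rng.standard_normal((n_sites, n_days))

# Site-wise AR(1): x_t = rho_time * x_{t-1} + sqrt(1 - rho_time^2) * eps_t
x = np.zeros((n_sites, n_days))
for t in range(1, n_days):
    x[:, t] = rho_time * x[:, t - 1] + np.sqrt(1 - rho_time**2) * eps[:, t]

lag1 = np.corrcoef(x[0, :-1], x[0, 1:])[0, 1]   # temporal structure at site 0
cross = np.corrcoef(x[0], x[1])[0, 1]           # spatial structure, sites 0-1
print(lag1, cross)
```

Calibration, in this picture, amounts to estimating `rho_time` and the spatial correlation matrix from observed series; a climate change scenario then perturbs those parameters before regeneration.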
Allamani, Allaman; Mattiacci, Silvia
2015-03-01
This article constitutes a discovery journey into the world of drinking images, the pleasures and harms related to consuming alcoholic beverages, and the relationships between drinking and spirituality. These aspects are described historically and globally through a series of snapshots and mini-discussions of both visual and mental images from art, classical literature and operatic music. The images are interpreted according to how they represent the drinking culture within which they were created and sustained, and how they are able to involve the spectator in terms of either empathizing with, accepting and including the user, or distancing, stigmatizing and marginalizing the user.
NASA Astrophysics Data System (ADS)
Jiang, Kaili; Zhu, Jun; Tang, Bin
2017-12-01
Periodic nonuniform sampling occurs in many applications, and the Nyquist folding receiver (NYFR) is an efficient, low-complexity, broadband spectrum-sensing architecture. In this paper, we first show that the radio frequency (RF) sample clock function of the NYFR is periodically nonuniform. The classical results of periodic nonuniform sampling are then applied to the NYFR. We extend the spectral reconstruction algorithm of the time-series decomposition model to the subsampling case by using the spectral characteristics of the NYFR; the subsampling case is common in broadband spectrum surveillance. Finally, we take an LFM signal with large bandwidth as an example to verify the proposed algorithm and compare it with the orthogonal matching pursuit (OMP) algorithm.
Real-time dynamics of matrix quantum mechanics beyond the classical approximation
NASA Astrophysics Data System (ADS)
Buividovich, Pavel; Hanada, Masanori; Schäfer, Andreas
2018-03-01
We describe a numerical method which makes it possible to go beyond the classical approximation for the real-time dynamics of many-body systems by approximating the many-body Wigner function by the most general Gaussian function with time-dependent mean and dispersion. Using the simple example of a classically chaotic system with two degrees of freedom, we demonstrate that this Gaussian state approximation is accurate for significantly smaller field strengths and longer times than the classical one. Applying this approximation to matrix quantum mechanics, we demonstrate that the quantum Lyapunov exponents are in general smaller than their classical counterparts, and even seem to vanish below some temperature. This behavior resembles the finite-temperature phase transition which was found for this system in Monte Carlo simulations, and ensures that the system does not violate the Maldacena-Shenker-Stanford bound λL < 2πT, a violation which inevitably happens for classical dynamics at sufficiently small temperatures.
Further summation formulae related to generalized harmonic numbers
NASA Astrophysics Data System (ADS)
Zheng, De-Yin
2007-11-01
By employing the univariate series expansion of classical hypergeometric series formulae, Shen [L.-C. Shen, Remarks on some integrals and series involving the Stirling numbers and ζ(n), Trans. Amer. Math. Soc. 347 (1995) 1391-1399] and Choi and Srivastava [J. Choi, H.M. Srivastava, Certain classes of infinite series, Monatsh. Math. 127 (1999) 15-25; J. Choi, H.M. Srivastava, Explicit evaluation of Euler and related sums, Ramanujan J. 10 (2005) 51-70] investigated the evaluation of infinite series related to generalized harmonic numbers. More summation formulae have been derived systematically by Chu [W. Chu, Hypergeometric series and the Riemann Zeta function, Acta Arith. 82 (1997) 103-118], who developed this approach fully to the multivariate case. The present paper explores the hypergeometric series method further and establishes numerous summation formulae expressing infinite series related to generalized harmonic numbers in terms of the Riemann zeta function ζ(m) with m = 5, 6, 7, including several known ones as examples.
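For orientation, the generalized harmonic numbers and two classical Euler sums of the type underlying this line of work (both long-known results, stated here only as examples, not taken from the paper) read

```latex
H_n^{(m)} = \sum_{k=1}^{n} \frac{1}{k^m}, \qquad H_n := H_n^{(1)},
```

with, for instance, Euler's classical evaluations

```latex
\sum_{n=1}^{\infty} \frac{H_n}{n^2} = 2\,\zeta(3), \qquad
\sum_{n=1}^{\infty} \frac{H_n}{n^3} = \frac{5}{4}\,\zeta(4).
```

The paper's contribution is the systematic extension of such identities to series evaluated in terms of ζ(5), ζ(6) and ζ(7).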
NASA Astrophysics Data System (ADS)
Lavin, Alicia; Somavilla, Raquel; Cano, Daniel; Rodriguez, Carmen; Gonzalez-Pola, Cesar; Viloria, Amaia; Tel, Elena; Ruiz-Villareal, Manuel
2017-04-01
Long-term time series stations have been developed in order to document seasonal- to decadal-scale variations in key physical and biogeochemical parameters. Long-term time series measurements are crucial for determining the physical and biological mechanisms controlling the system. The Science and Technology Ministers of the G7 in their Tsukuba Communiqué have stated that 'many parts of the ocean interior are not sufficiently observed' and that 'it is crucial to develop far stronger scientific knowledge necessary to assess the ongoing changes in the ocean and their impact on economies.' Time series have classically been obtained by oceanographic ships that regularly cover standard sections and stations. Since 1991, shelf and slope waters of the southern Bay of Biscay have been regularly sampled in a monthly hydrographic line north of Santander, to a depth of 1000 m in the early stages and over the whole water column down to 2580 m in recent times. Nearby, in June 2007, the IEO deployed an oceanic-meteorological buoy (AGL Buoy, 43° 50.67'N, 3° 46.20'W, 40 km offshore; www.boya-agl.st.ieo.es). The Santander Atlantic Time Series Station is integrated in the Spanish Institute of Oceanography Observing System (IEOOS). The long-term hydrographic monitoring has made it possible to define the seasonality of the main oceanographic features, such as the upwelling, the Iberian Poleward Current and low-salinity incursions, as well as trends and interannual variability in the mixing layer and in the main water masses (North Atlantic Central Water and Mediterranean Water). The relation of these changes to the high-frequency surface conditions recorded by the Biscay AGL has been examined using satellite and reanalysis data as well. During the FIXO3 project (Fixed-point Open Ocean Observatories), and using these combined sources, several products and quality-controlled series of high interest and utility for scientific purposes have been developed.
Hourly products include sea surface temperature and salinity anomalies, significant wave height relative to the monthly average, and currents relative to seasonal averages. Ocean-atmosphere heat fluxes (latent and sensible) are computed from the buoy's atmospheric and oceanic measurements. Estimates of the mixed-layer depth and bulk series at different water levels are provided on a monthly basis. Quality-controlled series are distributed for sea surface salinity, oxygen and chlorophyll data. Some sensors are particularly affected by biofouling, and monthly visits to the buoy make it possible to track these sensors' behaviour. The chlorophyll-fluorescence sensor is the main concern, but the dissolved-oxygen sensor is also problematic: periods of realistic smooth variations exhibit strong offsets, which are corrected based on Winkler analysis of water samples. The buoy's wind, air temperature and humidity sensors are also compared monthly with the research vessel data. The next step will consist of a more thorough validation of the data, mainly the ten-year record from the Biscay AGL buoy but also the 25-year record of station 7, close to the buoy. The data will be cleaned and analysed, and the final products will be published and disseminated to promote their wider use.
Optimal processor assignment for pipeline computations
NASA Technical Reports Server (NTRS)
Nicol, David M.; Simha, Rahul; Choudhury, Alok N.; Narahari, Bhagirath
1991-01-01
The availability of large-scale multitasked parallel architectures introduces the following processor assignment problem for pipelined computations. Given a set of tasks, their precedence constraints, and their experimentally determined response times for different processor counts, find an assignment of processors to tasks. Two objectives are of interest: minimal response time given a throughput requirement, and maximal throughput given a response-time requirement. These assignment problems differ considerably from the classical mapping problem, in which several tasks share a processor; here it is assumed that a large number of processors are to be assigned to a relatively small number of tasks. Efficient assignment algorithms were developed for different classes of task structures. For a p-processor system and a series-parallel precedence graph with n constituent tasks, an O(np²) algorithm is given that finds the optimal assignment for the response-time optimization problem, and the assignment optimizing the constrained throughput is found in O(np² log p) time. The special cases of linear, independent, and tree graphs are also considered.
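A minimal sketch of the dynamic-programming idea for the special case of a linear pipeline of independent stages (the general series-parallel algorithm is more involved): allocate processors to stages so as to minimize the summed stage response times under a processor budget, with an optional per-stage time bound standing in for the throughput constraint. The cost table and function name are illustrative, not from the paper.

```python
def optimal_assignment(times, p, max_stage_time=float("inf")):
    """Assign up to p processors to a linear pipeline of independent stages.
    times[i][k-1] is the measured response time of stage i on k processors.
    Minimizes sum_i t_i(k_i) subject to sum_i k_i <= p and the throughput
    constraint t_i(k_i) <= max_stage_time."""
    n = len(times)
    INF = float("inf")
    # best[i][j]: minimal total response time for the first i stages using j procs
    best = [[INF] * (p + 1) for _ in range(n + 1)]
    pick = [[0] * (p + 1) for _ in range(n + 1)]
    best[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, p + 1):
            for k in range(1, min(j, len(times[i - 1])) + 1):
                t = times[i - 1][k - 1]
                if t > max_stage_time:
                    continue          # violates the throughput requirement
                cand = best[i - 1][j - k] + t
                if cand < best[i][j]:
                    best[i][j], pick[i][j] = cand, k
    # take the cheapest feasible total over all processor counts <= p
    j = min(range(p + 1), key=lambda q: best[n][q])
    total = best[n][j]
    alloc = []
    for i in range(n, 0, -1):         # backtrack the per-stage allocation
        alloc.append(pick[i][j])
        j -= pick[i][j]
    return alloc[::-1], total

# Two stages measured on 1..3 processors; 4 processors available
alloc, total = optimal_assignment([[10, 6, 4], [8, 5, 3]], p=4)
print(alloc, total)
```

With the inner loop over k bounded by j, the running time is O(np²) for a chain, matching the complexity quoted in the abstract for the response-time problem.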
An Approximate Markov Model for the Wright-Fisher Diffusion and Its Application to Time Series Data.
Ferrer-Admetlla, Anna; Leuenberger, Christoph; Jensen, Jeffrey D; Wegmann, Daniel
2016-06-01
The joint and accurate inference of selection and demography from genetic data is considered a particularly challenging question in population genetics, since both processes may lead to very similar patterns of genetic diversity. However, additional information for disentangling these effects may be obtained by observing changes in allele frequencies over multiple time points. Such data are common in experimental evolution studies, as well as in the comparison of ancient and contemporary samples. Leveraging this information, however, has been computationally challenging, particularly when considering multilocus data sets. To overcome these issues, we introduce a novel discrete approximation for diffusion processes, termed the mean transition time approximation, which preserves the long-term behavior of the underlying continuous diffusion process. We then derive this approximation for the particular case of inferring selection and demography from time series data under the classic Wright-Fisher model and demonstrate that our approximation is well suited to describe allele trajectories through time, even when only a few states are used. We then develop a Bayesian inference approach to jointly infer the population size and locus-specific selection coefficients with high accuracy and further extend this model to also infer the rates of sequencing errors and mutations. We finally apply our approach to recent experimental data on the evolution of drug resistance in influenza virus, identifying likely targets of selection and finding evidence for much larger viral population sizes than previously reported. Copyright © 2016 by the Genetics Society of America.
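For orientation, the classic Wright-Fisher model underlying the approximation can be forward-simulated in a few lines. This sketch is a plain simulator (deterministic selection followed by binomial drift), not the authors' mean transition time approximation, and all parameter values are illustrative.

```python
import numpy as np

def wright_fisher(n_gen, N, p0, s, rng):
    """Forward simulation of allele frequency under the classic Wright-Fisher
    model: a deterministic selection step, then binomial drift in 2N copies."""
    p, traj = p0, [p0]
    for _ in range(n_gen):
        p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))  # selection step
        p = rng.binomial(2 * N, p_sel) / (2 * N)        # genetic drift step
        traj.append(p)
    return np.array(traj)

rng = np.random.default_rng(1)
# One noisy allele-frequency time series, sampled every generation
traj = wright_fisher(100, N=500, p0=0.1, s=0.05, rng=rng)
# Many replicates: positive selection pushes the mean final frequency upward
finals = np.array([wright_fisher(50, 500, 0.1, 0.05, rng)[-1]
                   for _ in range(200)])
print(traj[-1], finals.mean())
```

Time-series inference methods such as the one in this paper effectively invert this generative process: given observed trajectories, they recover N and s.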
Delay-time distribution in the scattering of time-narrow wave packets (II)—quantum graphs
NASA Astrophysics Data System (ADS)
Smilansky, Uzy; Schanz, Holger
2018-02-01
We apply the framework developed in the preceding paper in this series (Smilansky 2017 J. Phys. A: Math. Theor. 50 215301) to compute the time-delay distribution in the scattering of ultra short radio frequency pulses on complex networks of transmission lines which are modeled by metric (quantum) graphs. We consider wave packets which are centered at high wave number and comprise many energy levels. In the limit of pulses of very short duration we compute upper and lower bounds to the actual time-delay distribution of the radiation emerging from the network using a simplified problem where time is replaced by the discrete count of vertex-scattering events. The classical limit of the time-delay distribution is also discussed and we show that for finite networks it decays exponentially, with a decay constant which depends on the graph connectivity and the distribution of its edge lengths. We illustrate and apply our theory to a simple model graph where an algebraic decay of the quantum time-delay distribution is established.
NASA Astrophysics Data System (ADS)
Gruszczynska, Marta; Rosat, Severine; Klos, Anna; Bogusz, Janusz
2017-04-01
Seasonal oscillations in GPS position time series can arise from both real geophysical effects and numerical artefacts. According to Dong et al. (2002), environmental loading effects can account for approximately 40% of the total variance of the annual signals in GPS time series; however, with the generally acknowledged methods of modelling seasonal signals (e.g. Least Squares Estimation, Wavelet Decomposition, Singular Spectrum Analysis) we are not able to separate real from spurious signals (effects of mismodelling aliased into the annual period, as well as draconitic effects). Therefore, we propose to use Multichannel Singular Spectrum Analysis (MSSA) to determine seasonal oscillations (with annual and semi-annual periods) from GPS position time series and environmental loading displacement models. The MSSA approach is an extension of the classical Karhunen-Loève method and a special case of SSA for multivariate time series. The main advantages of MSSA are the possibility of extracting seasonal signals common to stations from a selected area and of investigating the causality between a set of time series. In this research, we explored the ability of MSSA to separate real geophysical effects from spurious effects in GPS time series. For this purpose, we used GPS position changes and environmental loading models. We analysed the topocentric time series from 250 selected stations located worldwide, taken from the network solution provided by the International GNSS Service (IGS) as a contribution to the latest realization of the International Terrestrial Reference System (namely ITRF2014; Rebischung et al., 2016). We also used atmospheric, hydrological and non-tidal oceanic loading models provided by the EOST/IPGS Loading Service in the Centre-of-Figure (CF) reference frame. The analysed displacements were estimated from ERA-Interim (surface pressure), MERRA-land (soil moisture and snow) as well as ECCO2 ocean bottom pressure.
We used Multichannel Singular Spectrum Analysis to determine common seasonal signals in two case studies, adopting a 3-year lag window as the optimal window size. We also inferred the statistical significance of the oscillations through the Monte Carlo MSSA method (Allen and Robertson, 1996). In the first case study, we investigated the common spatio-temporal seasonal signals for all stations, dividing the selected stations by continent. For instance, for stations located in Europe, seasonal oscillations account for approximately 45% of the variance of the GPS-derived data. A much higher share of variance, about 92%, is explained by seasonal signals in the hydrological loadings, while the non-tidal oceanic loading accounts for 31% of the total variance. In the second case study, we analysed the capability of the MSSA method to establish a causality between several time series. Each estimated principal component represents a pattern of the signal common to all analysed data. For the ZIMM station (Zimmerwald, Switzerland), the 1st, 2nd and 9th, 10th principal components, which account for 35% of the variance, correspond to the annual and semi-annual signals. In this part, we applied the non-parametric MSSA approach to extract the common seasonal signals from the GPS time series and environmental loadings for each of the 250 stations, showing clearly that some part of the seasonal signal reflects real geophysical effects. REFERENCES: 1. Allen, M. and Robertson, A.: 1996, Distinguishing modulated oscillations from coloured noise in multivariate datasets. Climate Dynamics, 12, No. 11, 775-784. DOI: 10.1007/s003820050142. 2. Dong, D., Fang, P., Bock, Y., Cheng, M.K. and Miyazaki, S.: 2002, Anatomy of apparent seasonal variations from GPS-derived site position time series. Journal of Geophysical Research, 107, No. B4, 2075. DOI: 10.1029/2001JB000573. 3. Rebischung, P., Altamimi, Z., Ray, J. and Garayt, B.: 2016, The IGS contribution to ITRF2014. 
Journal of Geodesy, 90, No. 7, 611-630. DOI:10.1007/s00190-016-0897-6.
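The basic MSSA computation described above, lag-embedding several channels into one trajectory matrix and reading common oscillations off its leading singular vectors, can be sketched on synthetic data sharing an annual cycle. The window length, noise level and channel amplitudes below are illustrative, not the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(3)
n, M = 730, 90                 # two years of daily data, 90-day lag window
t = np.arange(n)
annual = np.sin(2 * np.pi * t / 365.25)                 # common annual signal
series = [1.0 * annual + 0.3 * rng.standard_normal(n),  # channel 1
          0.8 * annual + 0.3 * rng.standard_normal(n)]  # channel 2

# MSSA trajectory matrix: lag-embed each channel, then stack channels side by side
X = np.hstack([np.column_stack([s[i:i + n - M + 1] for i in range(M)])
               for s in series])
U, sv, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
explained = sv**2 / np.sum(sv**2)
print(explained[:4])   # an oscillation shows up as a dominant pair of modes
```

The shared sinusoid concentrates in a pair of leading modes whose explained variance is the multichannel analogue of the percentages quoted in the abstract.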
Larsen, Kristian Nørgaard; Kristensen, Søren Rud; Søgaard, Rikke
2018-01-01
Health care systems increasingly aim to create value for money by simultaneously incentivizing quality along with classical goals such as activity increase and cost containment. It has recently been suggested that letting health care professionals choose the performance metrics on which they are evaluated may improve value of care by facilitating greater employee initiative, especially in the quality domain. There is a risk that this strategy leads to a loss of performance as measured by the classical goals, if these goals are not prioritized by health care professionals. In this study we investigate the performance of eight hospital departments in the second-largest region of Denmark that were delegated the authority to choose their own performance focus during a three-year test period from 2013 to 2016. The usual activity-based remuneration was suspended, and departments were instructed to keep their global budgets and maintain activity levels while managing according to their newly chosen performance focuses. Our analysis is based on monthly observations from two years before to three years after delegation. We collected data for 32 new performance indicators chosen by hospital department managements; 11 new performance indicators chosen by a centre management under which 5 of the departments were organised; and 3 classical indicators of priority to the central administration (activity, productivity, and cost containment). Interrupted time series analysis is used to estimate the effect of delegation on these indicators. We find no evidence that this particular proposal for giving health care professionals greater autonomy leads to consistent quality improvements but, on the other hand, also no consistent evidence of harm to the classical goals. Future studies could consider alternative possibilities for creating greater autonomy for hospital departments. Copyright © 2017 Elsevier Ltd. All rights reserved.
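Interrupted time series analysis of the kind used here is, at its core, a segmented regression with level-change and slope-change terms at the intervention date. A minimal sketch on synthetic monthly data with a pure level shift follows; real analyses would additionally model autocorrelation and seasonality, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
n_pre, n_post = 24, 36                     # monthly observations
t = np.arange(n_pre + n_post, dtype=float)
post = (t >= n_pre).astype(float)          # 1 after the intervention starts
t_post = np.maximum(0.0, t - n_pre)        # months since the intervention
# Illustrative indicator: baseline trend plus a level shift of +4 units
y = 100 + 0.5 * t + 4.0 * post + rng.normal(0, 1.5, t.size)

# Segmented OLS: [intercept, baseline slope, level change, slope change]
X = np.column_stack([np.ones_like(t), t, post, t_post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)
```

The coefficients on `post` and `t_post` are the immediate level change and the change in trend attributable to the intervention, which is what the study estimates per indicator.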
Measurement error in time-series analysis: a simulation study comparing modelled and monitored data.
Butland, Barbara K; Armstrong, Ben; Atkinson, Richard W; Wilkinson, Paul; Heal, Mathew R; Doherty, Ruth M; Vieno, Massimo
2013-11-13
Assessing health effects from background exposure to air pollution is often hampered by the sparseness of pollution monitoring networks. However, regional atmospheric chemistry-transport models (CTMs) can provide pollution data with national coverage at fine geographical and temporal resolution. We used statistical simulation to compare the impact on epidemiological time-series analysis of additive measurement error in sparse monitor data as opposed to geographically and temporally complete model data. Statistical simulations were based on a theoretical area of 4 regions, each consisting of twenty-five 5 km × 5 km grid-squares. In the context of a 3-year Poisson regression time-series analysis of the association between mortality and a single pollutant, we compared the error impact of using daily grid-specific model data as opposed to daily regional-average monitor data. We investigated how this comparison was affected if we changed the number of grids per region containing a monitor. To inform the simulations, estimates (e.g. of pollutant means) were obtained from observed monitor data for 2003-2006 for national network sites across the UK and corresponding model data generated by the EMEP-WRF CTM. Average within-site correlations between observed monitor and model data were 0.73 and 0.76 for rural and urban daily maximum 8-hour ozone respectively, and 0.67 and 0.61 for rural and urban ln(daily 1-hour maximum NO2). When regional averages were based on 5 or 10 monitors per region, health effect estimates exhibited little bias. However, with only 1 monitor per region, the regression coefficient in our time-series analysis was attenuated by an estimated 6% for urban background ozone, 13% for rural ozone, 29% for urban background ln(NO2) and 38% for rural ln(NO2). For grid-specific model data the corresponding figures were 19%, 22%, 54% and 44% respectively, i.e. similar for rural ln(NO2) but more marked for urban ln(NO2). 
Even if correlations between model and monitor data appear reasonably strong, additive classical measurement error in model data may lead to appreciable bias in health effect estimates. As process-based air pollution models become more widely used in epidemiological time-series analysis, assessments of error impact that include statistical simulation may be useful.
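The attenuation mechanism studied here can be illustrated with a simpler linear model in place of Poisson regression: classical additive error in the exposure shrinks the estimated coefficient by roughly var(x)/(var(x) + var(error)). A toy simulation, with all parameters illustrative:

```python
import numpy as np

rng = np.random.default_rng(11)
n, beta = 20000, 0.3
x = rng.normal(0, 1, n)                   # true daily exposure series
y = 2.0 + beta * x + rng.normal(0, 1, n)  # outcome (linear toy model)

slopes = {}
for err_sd in (0.0, 0.5, 1.0):
    w = x + rng.normal(0, err_sd, n)      # classical additive measurement error
    slopes[err_sd] = np.polyfit(w, y, 1)[0]
    # theory: attenuation factor var(x) / (var(x) + var(error))
    print(err_sd, round(slopes[err_sd], 3), round(beta / (1 + err_sd**2), 3))
```

As in the paper, larger error variance relative to the true exposure variance produces stronger attenuation of the health effect estimate.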
NASA Astrophysics Data System (ADS)
Scholkmann, Felix; Cifra, Michal; Alexandre Moraes, Thiago; de Mello Gallep, Cristiano
2011-12-01
The aim of the present study was to test whether the multifractal properties of ultra-weak photon emission (UPE) from germinating wheat seedlings (Triticum aestivum) change when the seedlings are treated with different concentrations of the toxin potassium dichromate (PD). To this end, UPE was measured (50 seedlings in one Petri dish, duration: approx. 16.6-28 h) from samples of three groups: (i) control (group C, N = 9), (ii) treated with 25 ppm of PD (group G25, N = 32), and (iii) treated with 150 ppm of PD (group G150, N = 23). For the multifractal analysis, the following steps were performed: (i) each UPE time series was trimmed to a final length of 1000 min; (ii) each UPE time series was filtered, linearly detrended and normalized; (iii) the multifractal spectrum (f(α)) was calculated for every UPE time series using the backward multifractal detrended moving average (MFDMA) method; (iv) each multifractal spectrum was characterized by calculating the mode (αmode) of the spectrum and the degree of multifractality (Δα); (v) for every UPE time series its mean, skewness and kurtosis were also calculated; finally (vi) all obtained parameters were analyzed to determine their ability to differentiate between the three groups. This was based on Fisher's discriminant ratio (FDR), which was calculated for each parameter combination. Additionally, a non-parametric test was used to test whether the parameter values were significantly different. The analysis showed that when comparing all three groups, FDR had the highest values for the multifractal parameters (αmode, Δα). Furthermore, the differences in these parameters between the groups were statistically significant (p < 0.05). The classical parameters (mean, skewness and kurtosis) had lower FDR values than the multifractal parameters in all cases and showed no significant difference between the groups (except for the skewness between groups C and G150). 
In conclusion, multifractal analysis enables changes in UPE time series to be detected even when they are hidden from normal linear signal analysis methods. The analysis of changes in the multifractal properties might form the basis for a classification system enabling the intoxication of cell cultures to be quantified from UPE measurements.
Quantum and classical behavior in interacting bosonic systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hertzberg, Mark P.
It is understood that in free bosonic theories, the classical field theory accurately describes the full quantum theory when the occupancy numbers of systems are very large. However, the situation is less understood in interacting theories, especially on time scales longer than the dynamical relaxation time. Recently there have been claims that the quantum theory deviates spectacularly from the classical theory on this time scale, even if the occupancy numbers are extremely large. Furthermore, it is claimed that the quantum theory quickly thermalizes while the classical theory does not. The evidence for these claims comes from noticing a spectacular difference in the time evolution of expectation values of quantum operators compared to the classical micro-state evolution. If true, this would have dramatic consequences for many important phenomena, including laboratory studies of interacting BECs, dark matter axions, preheating after inflation, etc. In this work we critically examine these claims. We show that in fact the classical theory can describe the quantum behavior in the high occupancy regime, even when interactions are large. The connection is that the expectation values of quantum operators in a single quantum micro-state are approximated by a corresponding classical ensemble average over many classical micro-states. Furthermore, by the ergodic theorem, a classical ensemble average of local fields with statistical translation invariance is the spatial average of a single micro-state. So the correlation functions of the quantum and classical field theories of a single micro-state approximately agree at high occupancy, even in interacting systems. Furthermore, both quantum and classical field theories can thermalize, when appropriate coarse graining is introduced, with the classical case requiring a cutoff on low occupancy UV modes. We discuss applications of our results.
Analysis of MHC class I genes across horse MHC haplotypes
Tallmadge, Rebecca L.; Campbell, Julie A.; Miller, Donald C.; Antczak, Douglas F.
2010-01-01
The genomic sequences of 15 horse Major Histocompatibility Complex (MHC) class I genes and a collection of MHC class I homozygous horses of five different haplotypes were used to investigate the genomic structure and polymorphism of the equine MHC. A combination of conserved and locus-specific primers was used to amplify horse MHC class I genes with classical and non-classical characteristics. Multiple clones from each haplotype identified three to five classical sequences per homozygous animal, and two to three non-classical sequences. Phylogenetic analysis was applied to these sequences and groups were identified which appear to be allelic series, but some sequences were left ungrouped. Sequences determined from MHC class I heterozygous horses and previously described MHC class I sequences were then added, representing a total of ten horse MHC haplotypes. These results were consistent with those obtained from the MHC homozygous horses alone, and 30 classical sequences were assigned to four previously confirmed loci and three new provisional loci. The non-classical genes had few alleles and the classical genes had higher levels of allelic polymorphism. Alleles for two classical loci with the expected pattern of polymorphism were found in the majority of haplotypes tested, but alleles at two other commonly detected loci had more variation outside of the hypervariable region than within. Our data indicate that the equine Major Histocompatibility Complex is characterized by variation in the complement of class I genes expressed in different haplotypes in addition to the expected allelic polymorphism within loci. PMID:20099063
NASA Astrophysics Data System (ADS)
Nicolas, J.; Nocquet, J.; van Camp, M.; Coulot, D.
2003-12-01
Time-dependent displacements of stations usually have magnitudes close to the accuracy of each individual technique, and it remains difficult to separate true geophysical motion from possible artifacts inherent to each space geodetic technique. The Observatoire de la Côte d'Azur (OCA), located at Grasse, France, benefits from the collocation of several geodetic instruments and techniques (3 laser ranging stations and a permanent GPS), which allows a direct comparison of the time series. Moreover, absolute gravimetry measurement campaigns have also been performed regularly since 1997, first by the Ecole et Observatoire des Sciences de la Terre (EOST) of Strasbourg, France, and more recently by the Royal Observatory of Belgium. This study presents a comparison of the positioning time series of the vertical component derived from the SLR and GPS analyses with the gravimetric results from 1997 to 2003. The laser station coordinates are based on a combined LAGEOS-1 and -2 solution using reference 10-day arc orbits, the ITRF2000 reference frame, and the IERS96 conventions. Different GPS weekly global solutions provided by several IGS analysis centers are combined and compared to the SLR results. The absolute gravimetry measurements are converted into vertical displacements with a classical gradient. The laser time series indicate a strong annual signal, with a peak-to-peak amplitude of about 3-4 cm, on the vertical component. The absolute gravimetry data agree with the SLR results. GPS positioning solutions also indicate a significant annual term, but with a magnitude of only 50% of that shown by the SLR solution and by the gravimetry measurements. Similar annual terms are also observed at other SLR sites we processed, but usually with lower and varying amplitudes. These annual signals are also compared to the vertical positioning variations corresponding to an atmospheric loading model. 
We present the level of agreement between the different techniques and discuss possible explanations for the discrepancy noted between the signals. Finally, we offer explanations for the large annual term at Grasse: these annual variations could be partly due to a hydrological loading effect on the karstic massif on which the observatory is located.
Comparison of adaptive critic-based and classical wide-area controllers for power systems.
Ray, Swakshar; Venayagamoorthy, Ganesh Kumar; Chaudhuri, Balarko; Majumder, Rajat
2008-08-01
An adaptive critic design (ACD)-based damping controller is developed for a thyristor-controlled series capacitor (TCSC) installed in a power system with multiple poorly damped interarea modes. The performance of this ACD computational-intelligence-based method is compared with two classical techniques: observer-based state-feedback (SF) control and linear matrix inequality (LMI)-based H∞ robust control. Remote measurements are used as feedback signals to the wide-area damping controller for modulating the compensation of the TCSC. The classical methods use a linearized model of the system, whereas the ACD method is purely measurement-based, leading to a nonlinear controller with fixed parameters. A comparative analysis of the controllers' performances is carried out under different disturbance scenarios. The ACD-based design has shown promising performance with very little knowledge of the system compared to the classical model-based controllers. This paper also discusses the advantages and disadvantages of the ACD, SF, and LMI-H∞ approaches.
Plass, Dietrich; Chau, Patsy Yuen Kwan; Thach, Thuan Quoc; Jahn, Heiko J; Lai, Poh Chin; Wong, Chit Ming; Kraemer, Alexander
2013-09-18
To complement available information on mortality in a population, Standard Expected Years of Life Lost (SEYLL), an indicator of premature mortality, is increasingly used to calculate the mortality-associated disease burden. SEYLL considers the age at death and therefore allows a more accurate view of mortality patterns than routinely used measures (e.g. death counts). This study provides a comprehensive assessment of disease and injury SEYLL for Hong Kong in 2010. To estimate the SEYLL, life expectancy at birth was set according to the 2004 Global Burden of Disease study at 82.5 and 80 years for females and males, respectively. Cause-of-death data for 2010 were corrected for misclassification of cardiovascular and cancer causes. In addition to the baseline estimates, scenario analyses were performed using alternative assumptions on life expectancy (Hong Kong standard life expectancy), time discounting and age weighting. To estimate the trend in premature mortality, a time-series analysis from 2001 to 2010 was conducted. In 2010, 524,706.5 years were lost due to premature death in Hong Kong, with 58.3% of the SEYLL attributable to male deaths. The three overall leading single causes of SEYLL were "trachea, bronchus and lung cancers", "ischaemic heart disease" and "lower respiratory infections", together accounting for about 29% of the overall SEYLL. Further, self-inflicted injuries (5.6%; ranked 5th) and liver cancer (4.9%; ranked 7th) were identified as important causes not adequately captured by classical mortality measures. Scenario analyses highlighted that using a 3% time-discount rate and non-uniform age weights reduced the SEYLL by 51.6%. Using Hong Kong's standard life-expectancy values resulted in an overall increase in SEYLL of 10.8% compared to the baseline. The time-series analysis indicates an overall increase in SEYLL of 6.4%. 
In particular, group I (communicable, maternal, perinatal and nutritional) conditions showed highest increases with SEYLL-rates per 100,000 in 2010 being 1.4 times higher than 2001. The study stresses the mortality impact of diseases and injuries that occur in earlier stages of life and thus presents the SEYLL measure as a more sensitive indicator compared to classical mortality indicators. SEYLL provide useful additional information and supplement available death statistics.
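The baseline and discounting scenarios above follow standard years-of-life-lost arithmetic. A minimal sketch with hypothetical numbers (one stratum of 100 deaths with 21.8 residual life-expectancy years; continuous 3% discounting; GBD age-weights omitted for brevity):

```python
import math

def seyll(deaths, remaining_life, discount_rate=0.0):
    """Years of life lost for one cause/age stratum.

    With continuous discounting at rate r, the L remaining years are worth
    integral_0^L exp(-r*t) dt = (1 - exp(-r*L)) / r per death.
    """
    if discount_rate == 0.0:
        return deaths * remaining_life
    r = discount_rate
    return deaths * (1.0 - math.exp(-r * remaining_life)) / r

# Hypothetical stratum: 100 deaths at an age with 21.8 standard years remaining
baseline = seyll(100, 21.8)          # undiscounted: 100 * 21.8 = 2180 years
discounted = seyll(100, 21.8, 0.03)  # 3% time-discount scenario (smaller)
```

With r = 0.03 this stratum's contribution drops by roughly a quarter, illustrating why the discounting-plus-age-weighting scenario can halve the total SEYLL.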
ERIC Educational Resources Information Center
Redl, Fritz
1970-01-01
Discusses the individual and group psychology of preadolescence and offers suggestions for improving adult-child relationships. (Excerpt from "Preadolescents - What Makes Them Tick? by Dr. Fritz Redl, published in Child Study in 1943.) (DR)
Cognitive-Behavioral Therapy. Second Edition. Theories of Psychotherapy Series
ERIC Educational Resources Information Center
Craske, Michelle G.
2017-01-01
In this revised edition of "Cognitive-Behavioral Therapy," Michelle G. Craske discusses the history, theory, and practice of this commonly practiced therapy. Cognitive-behavioral therapy (CBT) originated in the science and theory of classical and instrumental conditioning when cognitive principles were adopted following dissatisfaction…
Molecular Dynamics Simulations of Simple Liquids
ERIC Educational Resources Information Center
Speer, Owner F.; Wengerter, Brian C.; Taylor, Ramona S.
2004-01-01
An experiment, in which students were given the opportunity to perform molecular dynamics simulations on a series of molecular liquids using the Amber suite of programs, is presented. They were introduced to both physical theories underlying classical mechanics simulations and to the atom-atom pair distribution function.
NASA Astrophysics Data System (ADS)
Brunner, R.; Akis, R.; Ferry, D. K.; Kuchar, F.; Meisels, R.
2008-07-01
We discuss a quantum system coupled to the environment, composed of an open array of billiards (dots) in series. Beside pointer states occurring in individual dots, we observe sets of robust states which arise only in the array. We define these new states as bipartite pointer states, since they cannot be described in terms of simple linear combinations of robust single-dot states. The classical existence of bipartite pointer states is confirmed by comparing the quantum-mechanical and classical results. The ability of the robust states to create “offspring” indicates that quantum Darwinism is in action.
Projective limits of state spaces III. Toy-models
NASA Astrophysics Data System (ADS)
Lanéry, Suzanne; Thiemann, Thomas
2018-01-01
In this series of papers, we investigate the projective framework initiated by Kijowski (1977) and Okołów (2009, 2014, 2013) [1,2], which describes the states of a quantum theory as projective families of density matrices. A short reading guide to the series can be found in Lanéry (2016). A strategy to implement the dynamics in this formalism was presented in our first paper Lanéry and Thiemann (2017) (see also Lanéry, 2016, section 4), which we now test in two simple toy-models. The first one is a very basic linear model, meant as an illustration of the general procedure, and we will only discuss it at the classical level. In the second one, we reformulate the Schrödinger equation, treated as a classical field theory, within this projective framework, and proceed to its (non-relativistic) second quantization. We are then able to reproduce the physical content of the usual Fock quantization.
Projective limits of state spaces II. Quantum formalism
NASA Astrophysics Data System (ADS)
Lanéry, Suzanne; Thiemann, Thomas
2017-06-01
In this series of papers, we investigate the projective framework initiated by Kijowski (1977) and Okołów (2009, 2014, 2013), which describes the states of a quantum theory as projective families of density matrices. A short reading guide to the series can be found in Lanéry (2016). After discussing the formalism at the classical level in a first paper (Lanéry, 2017), the present second paper is devoted to the quantum theory. In particular, we inspect in detail how such quantum projective state spaces relate to inductive limit Hilbert spaces and to infinite tensor product constructions (Lanéry, 2016, subsection 3.1) [1]. Regarding the quantization of classical projective structures into quantum ones, we extend the results by Okołów (2013), that were set up in the context of linear configuration spaces, to configuration spaces given by simply-connected Lie groups, and to holomorphic quantization of complex phase spaces (Lanéry, 2016, subsection 2.2) [1].
NASA Astrophysics Data System (ADS)
Igenberlina, Alua; Matin, Dauren; Turgumbayev, Mendybay
2017-09-01
In this paper, deviations of the partial sums of a multiple Fourier-Walsh series of a function in the metric L1(Qk) on a dyadic group are investigated. This estimate plays an important role in the study of equivalent normalizations in this space by means of a difference, oscillation, and best approximation by polynomials in the Walsh system. The classical Besov space and its equivalent normalizations are set forth in the well-known monographs of Nikolsky S.M., Besov O.V., Ilyin V.P., and Triebel H., and in the works of Kazakh scientists such as Amanov T.I., Mynbaev K.T., Otelbaev M.O., and Smailov E.S. Besov spaces on the dyadic group and the Vilenkin groups in the one-dimensional case are considered in works by Ombe H., Bloom Walter R., Fournier J., Onneweer C.W., Weyi S., and Jun Tateoka.
NASA Astrophysics Data System (ADS)
Lenoir, Guillaume; Crucifix, Michel
2018-03-01
Geophysical time series are sometimes sampled irregularly along the time axis. The situation is particularly frequent in palaeoclimatology. Yet, there is so far no general framework for handling the continuous wavelet transform when the time sampling is irregular. Here we provide such a framework. To this end, we define the scalogram as the continuous-wavelet-transform equivalent of the extended Lomb-Scargle periodogram defined in Part 1 of this study (Lenoir and Crucifix, 2018). The signal being analysed is modelled as the sum of a locally periodic component in the time-frequency plane, a polynomial trend, and a background noise. The mother wavelet adopted here is the Morlet wavelet classically used in geophysical applications. The background noise model is a stationary Gaussian continuous autoregressive-moving-average (CARMA) process, which is more general than the traditional Gaussian white and red noise processes. The scalogram is smoothed by averaging over neighbouring times in order to reduce its variance. The Shannon-Nyquist exclusion zone is however defined as the area corrupted by local aliasing issues. The local amplitude in the time-frequency plane is then estimated with least-squares methods. We also derive an approximate formula linking the squared amplitude and the scalogram. Based on this property, we define a new analysis tool: the weighted smoothed scalogram, which we recommend for most analyses. The estimated signal amplitude also gives access to band and ridge filtering. Finally, we design a test of significance for the weighted smoothed scalogram against the stationary Gaussian CARMA background noise, and provide algorithms for computing confidence levels, either analytically or with Monte Carlo Markov chain methods. All the analysis tools presented in this article are available to the reader in the Python package WAVEPAL.
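The extended Lomb-Scargle periodogram and CARMA significance testing above are specific to WAVEPAL, but the classical Lomb-Scargle estimate for irregularly sampled series is available in SciPy; a sketch on synthetic data (all values hypothetical):

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 100.0, 300))       # irregular sampling times
f_true = 0.2                                    # true frequency, cycles/unit
y = np.sin(2 * np.pi * f_true * t) + 0.3 * rng.standard_normal(t.size)

freqs = np.linspace(0.01, 0.5, 1000)            # trial frequencies, cycles/unit
# lombscargle expects angular frequencies, hence the 2*pi factor
power = lombscargle(t, y - y.mean(), 2 * np.pi * freqs, normalize=True)
f_peak = freqs[np.argmax(power)]                # estimate of f_true
```

The scalogram of the paper replaces the sinusoid basis with Morlet wavelets localized in time, but the least-squares fit against irregular sample times is the same idea.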
The relevance of the cross-wavelet transform in the analysis of human interaction – a tutorial
Issartel, Johann; Bardainne, Thomas; Gaillot, Philippe; Marin, Ludovic
2015-01-01
This article sheds light on a quantitative method allowing psychologists and behavioral scientists to take into account the specific characteristics emerging from the interaction between two sets of data in general and two individuals in particular. The current article outlines the practical elements of the cross-wavelet transform (CWT) method, highlighting WHY such a method is important in the analysis of time series in psychology. The idea is (1) to bridge the gap between the physical measurements classically used in physiology and neuroscience and those used in psychology, and (2) to demonstrate how the CWT method can be applied in psychology. One of the aims is to answer three important questions: WHO could use this method in psychology, WHEN it is appropriate to use it (suitable types of time series), and HOW to use it. Throughout these explanations, an example with simulated data is used. Finally, data from a real-life application are analyzed. These data correspond to a rating task in which the participants had to rate in real time the emotional expression of a person. The objectives of this practical example are (i) to point out how to manipulate the properties of the CWT method on real data, (ii) to show how to extract meaningful information from the results, and (iii) to provide a new way to analyze psychological attributes. PMID:25620949
Long-Term Transport of Cryptosporidium Parvum
NASA Astrophysics Data System (ADS)
Andrea, C.; Harter, T.; Hou, L.; Atwill, E. R.; Packman, A.; Woodrow-Mumford, K.; Maldonado, S.
2005-12-01
The protozoan pathogen Cryptosporidium parvum is a leading cause of waterborne disease. Subsurface transport and filtration in natural and artificial porous media are important components of the environmental pathway of this pathogen. It has been shown that the oocysts of C. parvum exhibit distinct colloidal properties. We conducted a series of laboratory studies on sand columns (column length: 10 cm - 60 cm, flow rates: 0.7 m/d - 30 m/d, ionic strength: 0.01 - 100 mM, filter grain size: 0.2 - 2 mm, various solution chemistries). Breakthrough curves were measured over relatively long time periods (hundreds to thousands of pore volumes). We show that classic colloid filtration theory is a reasonable tool for predicting the initial breakthrough, but it is inadequate to explain the significant tailing observed in the breakthrough of C. parvum oocysts through sand columns. We discuss the application of the Continuous Time Random Walk (CTRW) approach to account for the strong tailing that was observed in our experiments. The CTRW is a generalized transport modeling framework, which includes the classic advection-dispersion equation (ADE), the fractional ADE, and the multi-rate mass transfer model as special cases. Within this conceptual framework, it is possible to distinguish between the contributions of pore-scale geometrical (physical) disorder and of pore-scale physico-chemical heterogeneities (e.g., of the filtration, sorption, and desorption processes) to the transport of C. parvum oocysts.
Quantifying the range of cross-correlated fluctuations using a q-L-dependent AHXA coefficient
NASA Astrophysics Data System (ADS)
Wang, Fang; Wang, Lin; Chen, Yuming
2018-03-01
Recently, based on analogous height cross-correlation analysis (AHXA), a cross-correlation coefficient ρ×(L) has been proposed to quantify the level of cross-correlation on different temporal scales for bivariate series. A limitation of this coefficient is that it cannot capture the full information on the cross-correlations of fluctuation amplitudes. In fact, it only detects the cross-correlation at a specific fluctuation order, which might neglect important information carried by other orders. To overcome this disadvantage, in this work, based on the scaling of the qth-order covariance and the time delay L, we define a two-parameter cross-correlation coefficient ρq(L) to detect and quantify the range and level of cross-correlations. This new coefficient leads to the formation of a ρq(L) surface, which not only quantifies the level of cross-correlations but also allows us to identify the range of fluctuation amplitudes that are correlated in two given signals. Applications to the classical ARFIMA models and the binomial multifractal series illustrate the feasibility of the new coefficient ρq(L). In addition, a statistical test is proposed to assess the existence of cross-correlations between two given series. Applying our method to real-life empirical data from the 1999-2000 California electricity market, we find that the California power crisis in 2000 destroyed the cross-correlation between the price and load series but did not affect the correlation of the load series during and before the crisis.
Orthonormal aberration polynomials for anamorphic optical imaging systems with rectangular pupils.
Mahajan, Virendra N
2010-12-20
The classical aberrations of an anamorphic optical imaging system, representing the terms of a power-series expansion of its aberration function, are separable in the Cartesian coordinates of a point on its pupil. We discuss the balancing of a classical aberration of a certain order with one or more such aberrations of lower order to minimize its variance across a rectangular pupil of such a system. We show that the balanced aberrations are the products of two Legendre polynomials, one for each of the two Cartesian coordinates of the pupil point. The compound Legendre polynomials are orthogonal across a rectangular pupil and, like the classical aberrations, are inherently separable in the Cartesian coordinates of the pupil point. They are different from the balanced aberrations and the corresponding orthogonal polynomials for a system with rotational symmetry but a rectangular pupil.
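The orthogonality of the compound Legendre polynomials over a rectangular pupil can be checked numerically; a sketch using Gauss-Legendre quadrature (index choices arbitrary):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

# 20-point Gauss-Legendre rule: exact for 1-D polynomial integrands to degree 39
x, w = leggauss(20)

def inner(m1, n1, m2, n2):
    """<P_m1(x) P_n1(y), P_m2(x) P_n2(y)> over the square [-1, 1]^2.

    Because the polynomials are separable, the double integral factorizes
    into two 1-D Legendre integrals.
    """
    ix = w @ (Legendre.basis(m1)(x) * Legendre.basis(m2)(x))
    iy = w @ (Legendre.basis(n1)(x) * Legendre.basis(n2)(x))
    return ix * iy

orthogonal = inner(2, 3, 1, 3)   # distinct index pairs integrate to zero
norm_sq = inner(2, 3, 2, 3)      # (2/(2*2+1)) * (2/(2*3+1))
```

Distinct index pairs integrate to zero, while a pair against itself gives the product of the one-dimensional Legendre normalizations.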
NASA Astrophysics Data System (ADS)
Chiu, L.; Vongsaard, J.; El-Ghazawi, T.; Weinman, J.; Yang, R.; Kafatos, M.
Due to the poor temporal sampling by satellites, data gaps exist in satellite-derived time series of precipitation. This poses a challenge for assimilating rainfall data into forecast models. To yield a continuous time series, the classic image processing technique of digital image morphing has been used. However, the digital morphing technique was applied manually, which is time consuming. In order to avoid human intervention in the process, an automatic procedure for image morphing is needed for real-time operations. For this purpose, the Genetic Algorithm Based Image Registration Automatic Morphing (GRAM) model was developed and tested in this paper. Specifically, the automatic morphing technique was integrated with a Genetic Algorithm and the Feature Based Image Metamorphosis technique to fill in data gaps between satellite coverage. The technique was tested using NOWRAD data, which are generated from the network of NEXRAD radars. Time series of NOWRAD data from storm Floyd, which occurred in the US eastern region on September 16, 1999, at 00:00, 01:00, 02:00, 03:00, and 04:00 am were used. The GRAM technique was applied to data collected at 00:00 and 04:00 am. These images were also manually morphed. Images at 01:00, 02:00 and 03:00 am were interpolated from the GRAM and manual morphing and compared with the original NOWRAD rain rates. The results show that the GRAM technique outperforms manual morphing. The correlation coefficients for the images generated using manual morphing are 0.905, 0.900, and 0.905 for the images at 01:00, 02:00, and 03:00 am, while the corresponding correlation coefficients are 0.946, 0.911, and 0.913, respectively, based on the GRAM technique. Index terms: Remote Sensing, Image Registration, Hydrology, Genetic Algorithm, Morphing, NEXRAD
Time series analysis of Mexico City subsidence constrained by radar interferometry
NASA Astrophysics Data System (ADS)
López-Quiroz, Penélope; Doin, Marie-Pierre; Tupin, Florence; Briole, Pierre; Nicolas, Jean-Marie
2009-09-01
In Mexico City, subsidence rates reach up to 40 cm/yr, mainly due to soil compaction driven by the overexploitation of the Mexico Basin aquifer. In this paper, we map the spatial and temporal patterns of the Mexico City subsidence by differential radar interferometry, using 38 ENVISAT images acquired between the end of 2002 and the beginning of 2007. We present the severe interferogram unwrapping problems, partly due to coherence loss but mostly due to the high fringe rates. These difficulties are overcome by designing a new methodology that aids the unwrapping step. Our approach is based on the fact that the deformation shape is stable for similar time intervals during the studied period. As a result, a stack of the five best interferograms can be used to compute an average deformation rate for a fixed time interval. Before unwrapping, the number of fringes in wrapped interferograms is then decreased using a scaled version of the stack together with an estimate of the atmospheric phase contribution related to the vertical stratification of the troposphere. The residual phase, containing fewer fringes, is more easily unwrapped than the original interferogram. The unwrapping procedure is applied in three iterative steps. The 71 small-baseline unwrapped interferograms are inverted to obtain increments of radar propagation delay between the 38 acquisition dates. Based on the redundancy of the interferometric data base, we quantify the unwrapping errors and show that they are strongly decreased by iterating the unwrapping process. A map of the RMS interferometric system misclosure allows us to define the unwrapping reliability for each pixel. Finally, we present a new algorithm for time-series analysis that differs from classical SVD decomposition and is best suited to the present data base. Accurate deformation time series are then derived over the metropolitan area of the city with a spatial resolution of 30 × 30 m.
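The inversion of small-baseline interferograms into per-date delay increments reduces to an overdetermined linear system; a toy least-squares sketch (dates, pairs and increments all hypothetical and noiseless, unlike the redundant real data base above):

```python
import numpy as np

# Acquisition dates 0..4; each interferogram k measures the summed delay
# increments between its two dates: obs[k] = sum(incr[i:j]).
pairs = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4)]
true_incr = np.array([2.0, 1.5, -0.5, 3.0])      # increments between dates

A = np.zeros((len(pairs), true_incr.size))
for k, (i, j) in enumerate(pairs):
    A[k, i:j] = 1.0                               # design-matrix row for (i, j)

obs = A @ true_incr                               # synthetic observations
est, *_ = np.linalg.lstsq(A, obs, rcond=None)     # least-squares inversion
```

With redundant pairs, as here, the residual of this fit is exactly the misclosure the paper maps to assess unwrapping reliability.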
The disturbing function for polar Centaurs and transneptunian objects
NASA Astrophysics Data System (ADS)
Namouni, F.; Morais, M. H. M.
2017-10-01
The classical disturbing function of the three-body problem is based on an expansion of the gravitational interaction in the vicinity of nearly coplanar orbits. Consequently, it is not suitable for the identification and study of resonances of the Centaurs and transneptunian objects on nearly polar orbits with the Solar system planets. Here, we provide a series expansion algorithm of the gravitational interaction in the vicinity of polar orbits and produce explicitly the disturbing function to fourth order in eccentricity and inclination cosine. The properties of the polar series differ significantly from those of the classical disturbing function: the polar series can model any resonance, as the expansion order is not related to the resonance order. The powers of eccentricity and inclination of the force amplitude of a p:q resonance do not depend on the value of the resonance order |p - q| but only on its parity. Thus, all even resonance order eccentricity amplitudes are ∝ e² and odd ones ∝ e to lowest order in eccentricity e. With the new findings on the structure of the polar disturbing function and the possible resonant critical arguments, we illustrate the dynamics of the polar resonances 1:3, 3:1, 2:9 and 7:9 where transneptunian object 471325 could currently be locked.
Posterior segment involvement in cat-scratch disease: A case series.
Tolou, C; Mahieu, L; Martin-Blondel, G; Ollé, P; Matonti, F; Hamid, S; Benouaich, X; Debard, A; Cassagne, M; Soler, V
2015-12-01
Cat-scratch disease (CSD) is a systemic infectious disease. The best-known posterior segment presentation is neuroretinitis with a macular star. In this study, we present a case series emphasising the heterogeneity of the disease and the various posterior segment manifestations. A retrospective case series of consecutive patients presenting with posterior segment CSD over a 5-year period (2010 to 2015) at two ophthalmological centres in Midi-Pyrénées. Twelve patients (17 eyes) were included, of whom 11 (92%) presented with rapidly decreasing visual acuity, extremely abrupt in 6 of these (50%). CSD was bilateral in 5 (42% of all patients). Posterior manifestations were: 12 instances of optic nerve edema (100%), 8 of focal chorioretinitis (67%) and only 6 of the classic macular edema with macular star (25% at first examination, but 50% later). Other ophthalmological complications developed in three patients: one developed acute anterior ischemic optic neuropathy, one a retrohyaloid hemorrhage and one a branch retinal artery occlusion, all secondary to occlusive focal vasculitis adjacent to focal chorioretinitis. Classical neuroretinitis with a macular star is not the only clinical presentation of CSD. Practitioners should screen for Bartonella henselae in all patients with papillitis or focal chorioretinitis. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
Investigation on the coloured noise in GPS-derived position with time-varying seasonal signals
NASA Astrophysics Data System (ADS)
Gruszczynska, Marta; Klos, Anna; Bos, Machiel Simon; Bogusz, Janusz
2016-04-01
The seasonal signals in GPS-derived time series arise from real geophysical signals related to tidal (residual) or non-tidal effects (loadings from the atmosphere, ocean and continental hydrosphere, thermoelastic strain, etc.) and from numerical artefacts, including aliasing from mismodelling at short periods and the repeatability of the GPS satellite constellation with respect to the Sun (draconitics). Singular Spectrum Analysis (SSA) is a method for the investigation of nonlinear dynamics, suitable for either stationary or non-stationary data series without prior knowledge of their character. The aim of SSA is to mathematically decompose the original time series into a sum of a slowly varying trend, seasonal oscillations and noise. In this presentation we explore the ability of SSA to subtract the time-varying seasonal signals in GPS-derived North-East-Up topocentric components and show the properties of the coloured noise in the residuals. For this purpose we used data from globally distributed IGS (International GNSS Service) permanent stations processed by the JPL (Jet Propulsion Laboratory) in PPP (Precise Point Positioning) mode. After introducing a threshold of 13 years, 264 stations remained, with a maximum length reaching 23 years. The data were initially pre-processed for outliers, offsets and gaps. SSA was applied to the pre-processed series to estimate the time-varying seasonal signals. We adopted a 3-year window as the optimal dimension of its size, determined with Akaike Information Criterion (AIC) values. A Fisher-Snedecor test corrected for the presence of temporal correlation was used to determine the statistical significance of the reconstructed components. This procedure showed that the first four components, describing the annual and semi-annual signals, are significant at a 99.7% confidence level, which corresponds to the 3-sigma criterion.
We compared the non-parametric SSA approach with the commonly chosen parametric Least-Squares Estimation (LSE), which assumes constant amplitudes and phases over time. We noticed a maximum difference in seasonal oscillation of 3.5 mm and a maximum change in velocity of 0.15 mm/year for the Up component (YELL, Yellowknife, Canada) when SSA and LSE are compared. The annual signal has the greatest influence on data variability in the time series, while the semi-annual signal in the Up component makes a much smaller contribution to the total variance of the data. For some stations more than 35% of the total variance is explained by the annual signal. From the Power Spectral Densities (PSD) we showed that SSA is able to properly subtract the time-varying seasonal signals with almost no influence on the power-law character of the stochastic part. Then, the modified Maximum Likelihood Estimation (MLE) in the Hector software was applied to the SSA-filtered time series. We noticed a significant improvement in spectral indices and power-law amplitudes in comparison to those determined classically with LSE, which will be the main subject of this presentation.
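The SSA decomposition described above (embedding, SVD, grouping, diagonal averaging) can be sketched on a toy monthly series with a trend and an annual cycle; the window length and component grouping here are illustrative, not the 3-year window of the study:

```python
import numpy as np

def ssa_reconstruct(y, window, components):
    """Reconstruct a series from selected SSA components.

    Steps: embed into the trajectory matrix, take the SVD, keep the chosen
    rank-1 terms, then diagonal-average (Hankelize) back to a series.
    """
    n = len(y)
    k = n - window + 1
    X = np.column_stack([y[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = sum(s[c] * np.outer(U[:, c], Vt[c]) for c in components)
    rec = np.zeros(n)
    cnt = np.zeros(n)
    for i in range(window):          # diagonal averaging
        for j in range(k):
            rec[i + j] += Xr[i, j]
            cnt[i + j] += 1
    return rec / cnt

t = np.arange(240)                                # 20 "years", monthly
clean = 0.01 * t + np.sin(2 * np.pi * t / 12)     # trend + annual cycle
rng = np.random.default_rng(1)
noisy = clean + 0.2 * rng.standard_normal(t.size)
smooth = ssa_reconstruct(noisy, window=36, components=[0, 1, 2, 3])
```

The four leading components capture the trend and the annual oscillation pair, leaving mostly noise in the residual, which is the property the study exploits before noise analysis.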
Silvicultural systems for managing ponderosa pine
Andrew Youngblood
2005-01-01
Silviculturists have primarily relied on classical even-aged silvicultural systems (the planned series of treatments for tending, harvesting, and re-establishing a stand) for ponderosa pine, with uneven-aged systems used to a lesser degree. Current management practices involve greater innovation because of conflicting management objectives. Silvicultural systems used...
Photographs, Foxfire, and Flea-Markets.
ERIC Educational Resources Information Center
Buckley, Mary
An introductory literature course for college sophomores focuses on children's "classics," based on the premise that no book good for children is for children only. This document describes an approach to teaching the "Little House in the Big Woods" series of children's books. Three techniques are suggested for confirming the…
Mexican American Televison: Applied Anthropology and Public Television
ERIC Educational Resources Information Center
Eiselein, E. B.; Marshall, Wes
1976-01-01
Fiesta Project provides a classic example of action anthropology in broadcasting. The project involved the research and production of a Spanish language public television series designed to attract, retain, and realistically help a Mexican American audience in southern Arizona. The project used anthropological research in initial program…
Charting Relationships in American Popular Film. Part II.
ERIC Educational Resources Information Center
Burke, Ken
1998-01-01
Explores the concept of genre evolution through the experimental, classic, refinement, and deconstructivist phases of American films. A series of detailed diagrams present a synthesis of influences and developments in the western, supercop, detective, gangster, futuristic science fiction, fantasy, outer space science fiction, horror, musical, and…
One-Time Pad as a nonlinear dynamical system
NASA Astrophysics Data System (ADS)
Nagaraj, Nithin
2012-11-01
The One-Time Pad (OTP) is the only known unbreakable cipher, proved mathematically by Shannon in 1949. In spite of several practical drawbacks of using the OTP, it continues to be used in quantum cryptography, DNA cryptography and even in classical cryptography when the highest form of security is desired (other popular algorithms like RSA, ECC, AES are not even proven to be computationally secure). In this work, we prove that the OTP encryption and decryption is equivalent to finding the initial condition on a pair of binary maps (Bernoulli shift). The binary map belongs to a family of 1D nonlinear chaotic and ergodic dynamical systems known as Generalized Luröth Series (GLS). Having established these interesting connections, we construct other perfect secrecy systems on the GLS that are equivalent to the One-Time Pad, generalizing for larger alphabets. We further show that OTP encryption is related to Randomized Arithmetic Coding - a scheme for joint compression and encryption.
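The XOR form of the One-Time Pad, whose equivalence to finding initial conditions on a pair of Bernoulli-shift maps is the subject above, takes only a few lines; the GLS construction itself is not reproduced here:

```python
import secrets

def otp(data: bytes, pad: bytes) -> bytes:
    """XOR the message with the pad; applying it twice recovers the message."""
    assert len(pad) == len(data), "pad must be exactly as long as the message"
    return bytes(d ^ p for d, p in zip(data, pad))

msg = b"attack at dawn"
pad = secrets.token_bytes(len(msg))   # truly random, used once, kept secret
ct = otp(msg, pad)                    # ciphertext
pt = otp(ct, pad)                     # decryption is the same operation
```

Perfect secrecy holds only under the conditions Shannon stated: the pad is truly random, at least as long as the message, used once, and never reused.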
Radiotracer investigation in gold leaching tanks.
Dagadu, C P K; Akaho, E H K; Danso, K A; Stegowski, Z; Furman, L
2012-01-01
Measurement and analysis of residence time distribution (RTD) is a classical method to investigate the performance of chemical reactors. In the present investigation, the radioactive tracer technique was used to measure the RTD of the aqueous phase in a series of gold leaching tanks at the Damang gold processing plant in Ghana. The objective of the investigation was to measure the effective volume of each tank and validate the design data after recent process intensification or revamping of the plant. I-131 was used as a radioactive tracer; it was instantaneously injected into the feed stream of the first tank and monitored at the outlets of different tanks. Both sampling and online measurement methods were used to monitor the tracer concentration. The results of the measurements indicated that both methods provided identical RTD curves. The mean residence time (MRT) and effective volume of each tank were estimated. The tanks-in-series model with exchange between active and stagnant volumes was used and found suitable to describe the flow structure of the aqueous phase in the tanks. The estimated effective volumes of the tanks and the high degree of mixing in the tanks validated the design data and confirmed the expectations of the plant engineer after intensification of the process. Copyright © 2011 Elsevier Ltd. All rights reserved.
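The RTD quantities used above follow directly from the measured tracer curve: E(t) is the concentration normalized to unit area, and the MRT is its first moment. A sketch on a synthetic ideal-mixed-tank response (the time constant is hypothetical):

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoidal rule (kept explicit for portability)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def mean_residence_time(t, c):
    """First moment of the normalized tracer curve E(t) = C(t) / int C dt."""
    e = c / trapezoid(c, t)
    return trapezoid(t * e, t)

# Synthetic ideal mixed-tank (CSTR) impulse response with time constant tau
tau = 5.0
t = np.linspace(0.0, 80.0, 2000)
c = np.exp(-t / tau) / tau
mrt = mean_residence_time(t, c)       # approaches tau for an ideal tank
```

Multiplying the MRT by the volumetric flow rate gives the effective tank volume, which is how the measured curves validate the design data.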
Liu, Yangfan; Bolton, J Stuart
2016-08-01
The (Cartesian) multipole series, i.e., the series comprising monopole, dipoles, quadrupoles, etc., can be used, as an alternative to the spherical or cylindrical wave series, in representing sound fields in a wide range of problems, such as source radiation, sound scattering, etc. The proofs of the completeness of the spherical and cylindrical wave series in these problems are classical results, and it is also generally agreed that the Cartesian multipole series spans the same space as the spherical waves: a rigorous mathematical proof of that statement has, however, not been presented. In the present work, such a proof of the completeness of the Cartesian multipole series, both in two and three dimensions, is given, and the linear dependence relations among different orders of multipoles are discussed, which then allows one to easily extract a basis from the multipole series. In particular, it is concluded that the multipoles comprising the two highest orders in the series form a basis of the whole series, since the multipoles of all the lower source orders can be expressed as a linear combination of that basis.
NASA Astrophysics Data System (ADS)
Joshi, Vishal; Banerjee, D. P. K.; Srivastava, Mudit
2017-12-01
We present a series of near-infrared spectra of Nova Ophiuchus 2017 in the K band that record the evolution of the first overtone CO emission in unprecedented detail. Starting from 11.7 days after maximum, when CO is first detected at great strength, the spectra track the CO emission to +25.6 days, by which time it is found to have rapidly declined in strength by a factor of ∼35. The cause of the rapid destruction of CO is examined in the framework of different mechanisms for CO destruction, namely, an increase in photoionizing flux, chemical pathways of destruction, or destruction by energetic nonthermal particles created in shocks. From LTE modeling of the CO emission, the 12C/13C ratio is determined to be 1.6 ± 0.3. This is consistent with the expected value of this parameter from nucleosynthesis theory for a nova eruption occurring on a low-mass (∼ 0.6 {M}ȯ ) carbon–oxygen core white dwarf. The present 12C/13C estimate constitutes one of the most secure estimates of this ratio in a classical nova.
Oviedo de la Fuente, Manuel; Febrero-Bande, Manuel; Muñoz, María Pilar; Domínguez, Àngela
2018-01-01
This paper proposes a novel approach that uses meteorological information to predict the incidence of influenza in Galicia (Spain). It extends Generalized Least Squares (GLS) methods in the multivariate framework to functional regression models with dependent errors. These kinds of models are useful when the recent history of influenza incidence is not readily available (for instance, owing to delays in communication with health informants) and the prediction must be constructed by correcting for the temporal dependence of the residuals and using more accessible variables. A simulation study shows that the GLS estimators yield better estimates of the regression-model parameters than the classical models do. They obtain extremely good results from the predictive point of view and are competitive with the classical time series approach for the incidence of influenza. An iterative version of the GLS estimator (called iGLS) is also proposed that can help to model complicated dependence structures. For constructing the model, the distance correlation measure [Formula: see text] was employed to select relevant information for predicting the influenza rate, mixing multivariate and functional variables. These kinds of models are extremely useful to health managers in allocating resources in advance to manage influenza epidemics.
VizieR Online Data Catalog: LMC NIR survey. IV. Type II Cepheid variables (Bhardwaj+, 2017)
NASA Astrophysics Data System (ADS)
Bhardwaj, A.; Macri, L. M.; Rejkuba, M.; Kanbur, S. M.; Ngeow, C.-C.; Singh, H. P.
2018-05-01
This paper is the fourth in a series of articles based on observations obtained by the Large Magellanic Cloud Near-infrared Synoptic Survey (LMCNISS; Macri et al. 2015, J/AJ/149/117, hereafter Paper I). In Paper I we carried out a time-series survey of 18 deg2 in the central region of the LMC at JHKs wavelengths using the 1.5 m telescope at the Cerro Tololo Inter-American Observatory and the CPAPIR camera. Observations were carried out in queue mode by the SMARTS consortium during 32 nights from 2006 November to 2007 November. The survey products include measurements for more than 3.5x106 sources, including ~1500 Classical Cepheids. We cross-matched the LMCNISS catalog (Paper I) against OGLE-III (Soszynski et al. 2008, J/AcA/58/293) and identified 81 T2Cs with periods ranging from 1 to 68 days; 70 of these have JHKs measurements, while the remaining 11 only have data in J and/or H band. The sample consists of 16 BLH, 31 WVI, 12 PWV, and 22 RVT stars. (4 data files).
Solitary Fibrous Tumors of the Orbit and Central Nervous System: A Case Series Analysis
Brum, Marisa; Nzwalo, Hipólito; Oliveira, Edson; Pelejão, Maria Rita; Pereira, Pedro; Farias, João Paulo; Pimentel, José
2018-01-01
Introduction: Solitary fibrous tumor (SFT) is rarely diagnosed in clinical practice. Since its initial descriptions in the central nervous system (CNS) and the orbits, very few case reports and small case series have expanded its clinical and pathological characterization. We sought to describe a case series of SFT from a single laboratory of neuropathology belonging to a tertiary university hospital. Methods: Retrospective clinical and histopathological description of eight cases of CNS and orbital SFT diagnosed over a 21-year period. Results: Median age was 47.3 years and four patients were male. Clinical presentation was related to local mass effect in all cases. Tumors occurred in the orbits (5; 62.5%), attached to the intracranial dura (2), and in the spinal medulla (1). Neuropathology showed the presence of hemangiopericytoma type (2), classic type (3), and mixed type (3). Histological anaplasia was present in two cases. Widespread/total immunoreactivity for vimentin, CD34, and Bcl-2 was present in all. Gross total removal was achieved in the majority (6; 75%) and subtotal removal in 2 (25%). Three patients underwent adjuvant treatment (radiosurgery and radiotherapy). Recurrence occurred in four patients, 13–120 months after surgical intervention. Anaplasia was present in one case of recurrence. Conclusion: Our case series confirms the clinical and neuropathological diversity of CNS and orbital SFTs. Studies with longer follow-up periods are necessary to better understand the clinical behavior and prognosis of SFT in the CNS and orbits. PMID:29682031
Teaching Leadership: Innovative Approaches for the 21st Century. Leadership Horizons Series.
ERIC Educational Resources Information Center
Pillai, Rajnandini, Ed.; Stites-Doe, Susan, Ed.
This book provides a collection of strategies for teaching leadership. It includes the creative use of films, classics, and fiction in teaching leadership; teaching leadership to specific audiences; team teaching and collaboration; and assessing outcomes. Following are the chapter titles and authors: "Blockbuster Leadership: Teaching Leadership…
ERIC Educational Resources Information Center
Shore, Felice S.; Pascal, Matthew
2008-01-01
This article describes several distinct approaches taken by preservice elementary teachers to solving a classic rate problem. Their approaches incorporate a variety of mathematical concepts, ranging from proportions to infinite series, and illustrate the power of all five NCTM Process Standards. (Contains 8 figures.)
Activation of Premotor Vocal Areas during Musical Discrimination
ERIC Educational Resources Information Center
Brown, Steven; Martinez, Michael J.
2007-01-01
Two same/different discrimination tasks were performed by amateur-musician subjects in this functional magnetic resonance imaging study: Melody Discrimination and Harmony Discrimination. Both tasks led to activations not only in classic working memory areas--such as the cingulate gyrus and dorsolateral prefrontal cortex--but in a series of…
Adverse health risks from environmental agents are generally related to average (long term) exposures. We used results from a series of controlled human exposure tests and classical first order rate kinetics calculations to estimate how well spot measurements of methyl tertiary ...
Hi/Lo Supplements: Middle Grades to High School.
ERIC Educational Resources Information Center
Curriculum Review, 1980
1980-01-01
Reviews seven entertaining story collections which aim to keep readers reading rather than doing exercises. With reading levels well below interest levels, these stories lure reluctant readers with such topics as the outdoors, mystery, and science fiction. Two series, one with accompanying filmstrips, retell literary classics for below-grade…
Practical Application of Fundamental Concepts in Exercise Physiology
ERIC Educational Resources Information Center
Ramsbottom R.; Kinch, R. F. T.; Morris, M. G.; Dennis, A. M.
2007-01-01
The collection of primary data in laboratory classes enhances undergraduate practical and critical thinking skills. The present article describes the use of a lecture program, running in parallel with a series of linked practical classes, that emphasizes classical or standard concepts in exercise physiology. The academic and practical program ran…
Janik, M; Bossew, P; Kurihara, O
2018-07-15
Machine learning is a class of statistical techniques which has proven to be a powerful tool for modelling the behaviour of complex systems, in which response quantities depend on assumed controls or predictors in a complicated way. In this paper, as our first purpose, we propose the application of machine learning to reconstruct incomplete or irregularly sampled time series of indoor radon (222Rn). The physical assumption underlying the modelling is that the Rn concentration in the air is controlled by environmental variables such as air temperature and pressure. The algorithms "learn" from complete sections of the multivariate series, derive a dependence model and apply it to sections where the controls are available, but not the response (Rn), and in this way complete the Rn series. Three machine learning techniques are applied in this study, namely random forest, its extension called the gradient boosting machine, and deep learning. For comparison, we apply classical multiple regression in a generalized linear model version. Performance of the models is evaluated through different metrics. The performance of the gradient boosting machine is found to be superior to that of the other techniques. By applying machine learning, we show, as our second purpose, that missing data or periods of an Rn series can be reconstructed and resampled on a regular grid reasonably well, if data on appropriate physical controls are available. The techniques also identify to which degree the assumed controls contribute to imputing missing Rn values. Our third purpose, though no less important from the viewpoint of physics, is identifying to which degree physical (in this case environmental) variables are relevant as Rn predictors, or in other words, which predictors explain most of the temporal variability of Rn. We show that the variables which contribute most to the Rn series reconstruction are temperature, relative humidity and day of the year.
The first two are physical predictors, while "day of the year" is a statistical proxy or surrogate for missing or unknown predictors. Copyright © 2018 Elsevier B.V. All rights reserved.
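The "learn from complete sections, then predict where only the controls are available" workflow described above can be sketched in a few lines. The sketch below substitutes ordinary least squares (the paper's classical-regression baseline) for the gradient boosting machine, and the toy Rn/temperature/pressure relationship is invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic hourly series: Rn driven by temperature and pressure (assumed toy model).
n = 500
temp = 15 + 10 * np.sin(np.linspace(0, 20, n)) + rng.normal(0, 1, n)
pres = 1013 + rng.normal(0, 5, n)
rn = 40 - 1.5 * temp + 0.2 * (pres - 1013) + rng.normal(0, 2, n)

# Knock out a block of Rn values; the controls (temp, pres) remain complete.
missing = np.zeros(n, dtype=bool)
missing[200:260] = True

# Fit a dependence model on the complete sections ...
X = np.column_stack([np.ones(n), temp, pres])
coef, *_ = np.linalg.lstsq(X[~missing], rn[~missing], rcond=None)

# ... and apply it where the response is missing (the imputation step).
rn_filled = rn.copy()
rn_filled[missing] = X[missing] @ coef
```

A tree-based learner would replace the `lstsq` fit without changing the surrounding logic.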
From blackbirds to black holes: Investigating capture-recapture methods for time domain astronomy
NASA Astrophysics Data System (ADS)
Laycock, Silas G. T.
2017-07-01
In time domain astronomy, recurrent transients present a special problem: how to infer total populations from limited observations. Monitoring observations may give a biased view of the underlying population due to limitations on observing time, visibility and instrumental sensitivity. A similar problem exists in the life sciences, where animal populations (such as migratory birds) or disease prevalence must be estimated from sparse and incomplete data. The class of methods termed Capture-Recapture is used to reconstruct population estimates from time-series records of encounters with the study population. This paper investigates the performance of Capture-Recapture methods in astronomy via a series of numerical simulations. The Blackbirds code simulates monitoring of populations of transients, in this case accreting binary stars (a neutron star or black hole accreting from a stellar companion), under a range of observing strategies. We first generate realistic light-curves for populations of binaries with contrasting orbital period distributions. These models are then randomly sampled at observing cadences typical of existing and planned monitoring surveys. The classical Capture-Recapture methods (the Lincoln-Petersen and Schnabel estimators and related techniques) and newer methods implemented in the Rcapture package are compared. A general exponential model based on the radioactive decay law is introduced which is demonstrated to recover (at 95% confidence) the underlying population abundance and duty cycle in a fraction of the observing visits (10-50%) required to discover all the sources in the simulation. Capture-Recapture is a promising addition to the toolbox of time domain astronomy, and methods implemented in R by the biostats community can be readily called from within python.
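The Lincoln-Petersen estimator named above has a closed form: with n1 sources detected on the first visit, n2 on the second, and m on both, the classical estimate is N ≈ n1·n2/m. The sketch below uses Chapman's bias-corrected variant of that formula, with made-up visit counts.

```python
def lincoln_petersen(n1, n2, m):
    """Chapman's bias-corrected Lincoln-Petersen estimate of population size.

    n1: sources detected on the first visit ("marked"),
    n2: sources detected on the second visit,
    m:  sources detected on both visits ("recaptures").
    """
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# 40 transients seen in the first epoch, 35 in the second, 20 seen in both:
N_hat = lincoln_petersen(40, 35, 20)   # ~69.3 sources estimated in total
```

The intuition: the recapture fraction m/n2 estimates the detected fraction n1/N of the whole population.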
Drought, agricultural adaptation, and sociopolitical collapse in the Maya Lowlands
Douglas, Peter M. J.; Pagani, Mark; Canuto, Marcello A.; Brenner, Mark; Hodell, David A.; Eglinton, Timothy I.; Curtis, Jason H.
2015-01-01
Paleoclimate records indicate a series of severe droughts was associated with societal collapse of the Classic Maya during the Terminal Classic period (∼800–950 C.E.). Evidence for drought largely derives from the drier, less populated northern Maya Lowlands but does not explain more pronounced and earlier societal disruption in the relatively humid southern Maya Lowlands. Here we apply hydrogen and carbon isotope compositions of plant wax lipids in two lake sediment cores to assess changes in water availability and land use in both the northern and southern Maya lowlands. We show that relatively more intense drying occurred in the southern lowlands than in the northern lowlands during the Terminal Classic period, consistent with earlier and more persistent societal decline in the south. Our results also indicate a period of substantial drying in the southern Maya Lowlands from ∼200 C.E. to 500 C.E., during the Terminal Preclassic and Early Classic periods. Plant wax carbon isotope records indicate a decline in C4 plants in both lake catchments during the Early Classic period, interpreted to reflect a shift from extensive agriculture to intensive, water-conservative maize cultivation that was motivated by a drying climate. Our results imply that agricultural adaptations developed in response to earlier droughts were initially successful, but failed under the more severe droughts of the Terminal Classic period. PMID:25902508
NASA Astrophysics Data System (ADS)
Goodman, Joseph W.
2000-07-01
The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series:
T. W. Anderson, The Statistical Analysis of Time Series
T. S. Arthanari & Yadolah Dodge, Mathematical Programming in Statistics
Emil Artin, Geometric Algebra
Norman T. J. Bailey, The Elements of Stochastic Processes with Applications to the Natural Sciences
Robert G. Bartle, The Elements of Integration and Lebesgue Measure
George E. P. Box & Norman R. Draper, Evolutionary Operation: A Statistical Method for Process Improvement
George E. P. Box & George C. Tiao, Bayesian Inference in Statistical Analysis
R. W. Carter, Finite Groups of Lie Type: Conjugacy Classes and Complex Characters
R. W. Carter, Simple Groups of Lie Type
William G. Cochran & Gertrude M. Cox, Experimental Designs, Second Edition
Richard Courant, Differential and Integral Calculus, Volume I
Richard Courant, Differential and Integral Calculus, Volume II
Richard Courant & D. Hilbert, Methods of Mathematical Physics, Volume I
Richard Courant & D. Hilbert, Methods of Mathematical Physics, Volume II
D. R. Cox, Planning of Experiments
Harold S. M. Coxeter, Introduction to Geometry, Second Edition
Charles W. Curtis & Irving Reiner, Representation Theory of Finite Groups and Associative Algebras
Charles W. Curtis & Irving Reiner, Methods of Representation Theory with Applications to Finite Groups and Orders, Volume I
Charles W. Curtis & Irving Reiner, Methods of Representation Theory with Applications to Finite Groups and Orders, Volume II
Cuthbert Daniel, Fitting Equations to Data: Computer Analysis of Multifactor Data, Second Edition
Bruno de Finetti, Theory of Probability, Volume I
Bruno de Finetti, Theory of Probability, Volume II
W. Edwards Deming, Sample Design in Business Research
Banjak, Hussein; Grenier, Thomas; Epicier, Thierry; Koneti, Siddardha; Roiban, Lucian; Gay, Anne-Sophie; Magnin, Isabelle; Peyrin, Françoise; Maxim, Voichita
2018-06-01
Fast tomography in Environmental Transmission Electron Microscopy (ETEM) is of great interest for in situ experiments, where it allows observation of the real-time 3D evolution of nanomaterials under operating conditions. In this context, we are working on speeding up the acquisition step to a few seconds, mainly with applications to nanocatalysts. In order to accomplish such rapid acquisition of the required tilt series of projections, a modern 4K high-speed camera is used that can capture up to 100 images per second in a 2K binning mode. However, due to the fast rotation of the sample during the tilt procedure, noise and blur effects may occur in many projections, which in turn would lead to poor quality reconstructions. Blurred projections make classical reconstruction algorithms inappropriate and require the use of prior information. In this work, a regularized algebraic reconstruction algorithm named SIRT-FISTA-TV is proposed. The performance of this algorithm using blurred data is studied by means of a numerical blur introduced into simulated image series to mimic possible mechanical instabilities/drifts during fast acquisitions. We also present reconstruction results from noisy data to show the robustness of the algorithm to noise. Finally, we show reconstructions with experimental datasets and demonstrate the value of fast tomography with an ultra-fast acquisition performed under environmental conditions, i.e., gas and temperature, in the ETEM. Compared to the classically used SIRT and SART approaches, our proposed SIRT-FISTA-TV reconstruction algorithm provides higher quality tomograms, allowing easier segmentation of the reconstructed volume for better final processing and analysis. Copyright © 2018 Elsevier B.V. All rights reserved.
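For context, the classical SIRT baseline that the paper improves upon iterates x ← x + λ·C·Aᵀ·R·(b − A·x), where R and C hold inverse row and column sums of the system matrix. The sketch below applies that iteration to a small dense toy system standing in for a real projection operator; it is not the authors' SIRT-FISTA-TV implementation.

```python
import numpy as np

def sirt(A, b, n_iter=1000, lam=1.0):
    """Basic SIRT: x <- x + lam * C * A^T * R * (b - A x),
    with inverse row-sum (R) and column-sum (C) weights."""
    R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)   # per-row weights
    C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)   # per-column weights
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x += lam * C * (A.T @ (R * (b - A @ x)))
    return x

# Toy consistent system standing in for projection geometry:
rng = np.random.default_rng(2)
A = rng.uniform(0.1, 1.0, (30, 10))   # nonnegative "system matrix"
x_true = rng.uniform(0.0, 1.0, 10)
b = A @ x_true                        # noise-free "sinogram"
x_rec = sirt(A, b)
```

The FISTA-TV extension adds a total-variation proximal step between such iterations to suppress blur- and noise-induced artifacts.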
NASA Astrophysics Data System (ADS)
Schreck, M.
2016-05-01
This article is devoted to finding classical point-particle equivalents for the fermion sector of the nonminimal standard model extension (SME). For a series of nonminimal operators, such Lagrangians are derived at first order in Lorentz violation using the algebraic concept of Gröbner bases. Subsequently, the Lagrangians serve as a basis for reanalyzing the results of certain kinematic tests of special relativity that were carried out in the past century. Thereby, a number of new constraints on coefficients of the nonminimal SME is obtained. In the last part of the paper we point out connections to Finsler geometry.
On the effective field theory of intersecting D3-branes
NASA Astrophysics Data System (ADS)
Abbaspur, Reza
2018-05-01
We study the effective field theory of two intersecting D3-branes with one common dimension along the lines recently proposed in ref. [1]. We introduce a systematic way of deriving the classical effective action to arbitrary orders in perturbation theory. Using a proper renormalization prescription to handle logarithmic divergences arising at all orders in the perturbation series, we recover the first order renormalization group equation of ref. [1] plus an infinite set of higher order equations. We show the consistency of the higher order equations with the first order one and hence interpret the first order result as an exact RG flow equation in the classical theory.
Computational approach to Thornley's problem by bivariate operational calculus
NASA Astrophysics Data System (ADS)
Bazhlekova, E.; Dimovski, I.
2012-10-01
Thornley's problem is an initial-boundary value problem with a nonlocal boundary condition for a linear one-dimensional reaction-diffusion equation, used as a mathematical model of spiral phyllotaxis in botany. Applying a bivariate operational calculus, we find an explicit representation of the solution, containing two convolution products of special solutions and the arbitrary initial and boundary functions. We use a non-classical convolution with respect to the space variable, extending in this way the classical Duhamel principle. The special solutions involved are represented in the form of fast convergent series. Numerical examples are considered to show the application of the present technique and to analyze the character of the solution.
Causality as a Rigorous Notion and Quantitative Causality Analysis with Time Series
NASA Astrophysics Data System (ADS)
Liang, X. S.
2017-12-01
Given two time series, can one faithfully tell, in a rigorous and quantitative way, the cause and effect between them? Here we show that this important and challenging question (one of the major challenges in the science of big data), which is of interest in a wide variety of disciplines, has a positive answer. Particularly, for linear systems, the maximum likelihood estimator of the causality from a series X2 to another series X1, written T2→1, turns out to be concise in form: T2→1 = (C11C12C2,d1 − C12²C1,d1) / (C11²C22 − C11C12²), where Cij (i,j = 1,2) is the sample covariance between Xi and Xj, and Ci,dj the covariance between Xi and ΔXj/Δt, the difference approximation of dXj/dt using the Euler forward scheme. An immediate corollary is that causation implies correlation, but not vice versa, resolving the long-standing debate over causation versus correlation. The above formula has been validated with touchstone series purportedly generated with one-way causality that evades classical approaches such as the Granger causality test and transfer entropy analysis. It has also been applied successfully to the investigation of many real problems. Through a simple analysis with the stock series of IBM and GE, an unusually strong one-way causality is identified from the former to the latter in their early era, revealing to us an old story, which has almost faded into oblivion, about "Seven Dwarfs" competing with a "Giant" for the computer market. Another example presented here regards the cause-effect relation between the two climate modes, El Niño and the Indian Ocean Dipole (IOD). In general, these modes are mutually causal, but the causality is asymmetric. To El Niño, the information flowing from the IOD manifests itself as a propagation of uncertainty from the Indian Ocean. In the third example, an unambiguous one-way causality is found between CO2 and the global mean temperature anomaly.
While it is confirmed that CO2 indeed drives the recent global warming, on paleoclimate scales the cause-effect relation may be completely reversed. Key words: Causation, Information flow, Uncertainty Generation, El Niño, IOD, CO2/Global warming Reference : Liang, 2014: Unraveling the cause-effect relation between time series. PRE 90, 052150 News Report: http://scitation.aip.org/content/aip/magazine/physicstoday/news/10.1063/PT.5.7124
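The estimator quoted above can be computed directly from sample covariances. A sketch under the stated Euler-forward convention, exercised on an invented one-way-coupled pair of AR(1) series in which X2 drives X1 but not vice versa:

```python
import numpy as np

def liang_causality(x1, x2, dt=1.0):
    """Information flow T_{2->1} from series x2 to series x1 (Liang 2014, PRE 90, 052150)."""
    dx1 = (x1[1:] - x1[:-1]) / dt            # Euler forward difference of X1
    x1s, x2s = x1[:-1], x2[:-1]              # align samples with the differences
    C = np.cov(x1s, x2s)
    c11, c12, c22 = C[0, 0], C[0, 1], C[1, 1]
    c1d1 = np.cov(x1s, dx1)[0, 1]
    c2d1 = np.cov(x2s, dx1)[0, 1]
    return (c11 * c12 * c2d1 - c12**2 * c1d1) / (c11**2 * c22 - c11 * c12**2)

# One-way coupled toy system: X2 drives X1, not the reverse.
rng = np.random.default_rng(1)
n = 20000
x1, x2 = np.zeros(n), np.zeros(n)
for k in range(n - 1):
    x2[k + 1] = 0.7 * x2[k] + rng.normal()
    x1[k + 1] = 0.5 * x1[k] + 0.4 * x2[k] + rng.normal()

t21 = liang_causality(x1, x2)   # clearly nonzero: X2 causes X1
t12 = liang_causality(x2, x1)   # near zero: X1 does not cause X2
```

Note that when the sample covariance C12 is zero the flow vanishes identically, which is exactly the "causation implies correlation" corollary stated in the abstract.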
OSM-Classic : An optical imaging technique for accurately determining strain
NASA Astrophysics Data System (ADS)
Aldrich, Daniel R.; Ayranci, Cagri; Nobes, David S.
OSM-Classic is a program designed in MATLAB® to provide a method of accurately determining strain in a test sample using an optical imaging technique. Strain measurement for the mechanical characterization of materials is most commonly performed with extensometers, LVDTs (linear variable differential transformers), and strain gauges; however, these strain measurement methods suffer from their fragile nature, and it is not particularly easy to attach these devices to the material for testing. To alleviate these potential problems, an optical approach that does not require contact with the specimen can be implemented to measure the strain. OSM-Classic is a software package that interrogates a series of images to determine elongation in a test sample and hence the strain of the specimen. It was designed to provide a graphical user interface that includes image processing with a dynamic region of interest. Additionally, the strain is calculated directly, with active feedback provided during the processing.
f and g series solutions to a post-Newtonian two-body problem with parameters β and γ
NASA Astrophysics Data System (ADS)
Qin, Song-He; Liu, Jing-Xi; Zhong, Ze-Hao; Xie, Yi
2016-01-01
The classical Newtonian f and g series for a Keplerian two-body problem are extended to the case of a post-Newtonian two-body problem with parameters β and γ. These two parameters are introduced to parameterize the post-Newtonian approximation of alternative theories of gravity, and they are both equal to 1 in general relativity. Up to order 30, we obtain all of the coefficients of the series in their exact forms, without any cutoff for significant figures. The f and g series for the post-Newtonian two-body problem are also compared with a Runge-Kutta order 7 integrator. Although the f and g series have no superiority in terms of accuracy or efficiency at order 7, the discrepancy between the performances of the two methods is not large. However, the f and g series have the advantage of flexibility for going to higher orders. Some examples of relativistic advance of periastron are given, and the effect of gravitational radiation on the scheme of f and g series is evaluated.
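For reference, the classical Newtonian series being extended begins f = 1 − (u/2)τ² + …, g = τ − (u/6)τ³ + …, with u = μ/|r0|³, so that r(τ) ≈ f·r0 + g·v0. A low-order sketch (higher-order terms, which involve r0·v0, are dropped, so it is accurate only for small τ):

```python
import numpy as np

def fg_propagate(r0, v0, mu, tau):
    """Advance a Keplerian orbit by a small time step tau using the leading
    terms of the classical f and g series:
        f = 1 - (u/2) tau^2,   g = tau - (u/6) tau^3,   u = mu / |r0|^3.
    Higher-order terms are omitted, so this holds only for small tau."""
    r0 = np.asarray(r0, dtype=float)
    v0 = np.asarray(v0, dtype=float)
    u = mu / np.linalg.norm(r0) ** 3
    f = 1.0 - 0.5 * u * tau**2
    g = tau - (u / 6.0) * tau**3
    return f * r0 + g * v0

# Circular-orbit check: mu = 1, |r0| = |v0| = 1 gives r(tau) = (cos tau, sin tau).
r_new = fg_propagate([1.0, 0.0], [0.0, 1.0], 1.0, 0.1)
```

The exact f and g for this circular case are cos τ and sin τ, so the truncation above reproduces their Taylor expansions to the retained orders.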
Classical Limit and Quantum Logic
NASA Astrophysics Data System (ADS)
Losada, Marcelo; Fortin, Sebastian; Holik, Federico
2018-02-01
The analysis of the classical limit of quantum mechanics usually focuses on the state of the system. The general idea is to explain the disappearance of the interference terms of quantum states appealing to the decoherence process induced by the environment. However, in these approaches it is not explained how the structure of quantum properties becomes classical. In this paper, we consider the classical limit from a different perspective. We consider the set of properties of a quantum system and we study the quantum-to-classical transition of its logical structure. The aim is to open the door to a new study based on dynamical logics, that is, logics that change over time. In particular, we appeal to the notion of hybrid logics to describe semiclassical systems. Moreover, we consider systems with many characteristic decoherence times, whose sublattices of properties become distributive at different times.
NASA Astrophysics Data System (ADS)
Girardi, P.; Pastres, R.; Gaetan, C.; Mangin, A.; Taji, M. A.
2015-12-01
In this paper, we present the results of a classification of Adriatic waters based on spatial time series of remotely sensed Chlorophyll type-a. The study was carried out using a clustering procedure combining quantile smoothing with an agglomerative clustering algorithm. The smoothing function includes a seasonal term, thus allowing one to classify areas according to "similar" seasonal evolution, as well as according to "similar" trends. This methodology, which is here applied for the first time to Ocean Colour data, is more robust than other classical methods, as it does not require any assumption on the probability distribution of the data. This approach was applied to the classification of an eleven-year time series, from January 2002 to December 2012, of monthly values of Chlorophyll type-a concentrations covering the whole Adriatic Sea. The data set was made available by ACRI (http://hermes.acri.fr) in the framework of the Glob-Colour Project (http://www.globcolour.info). Data were obtained by calibrating Ocean Colour data provided by different satellite missions, such as MERIS, SeaWiFS and MODIS. The results clearly show the presence of North-South and West-East gradients in the level of Chlorophyll, which is consistent with literature findings. This analysis could provide a sound basis for the identification of "water bodies" and of Chlorophyll type-a thresholds which define their Good Ecological Status, in terms of trophic level, as required by the implementation of the Marine Strategy Framework Directive. The forthcoming availability of Sentinel-3 OLCI data, in continuity with the previous missions, and with the perspective of a monitoring system spanning more than 15 years, offers a real opportunity to expand our study in strong support of the implementation of both the EU Marine Strategy Framework Directive and the UNEP-MAP Ecosystem Approach in the Mediterranean.
Smoothed quantum-classical states in time-irreversible hybrid dynamics
NASA Astrophysics Data System (ADS)
Budini, Adrián A.
2017-09-01
We consider a quantum system continuously monitored in time which in turn is coupled to an arbitrary dissipative classical system (diagonal reduced density matrix). The quantum and classical dynamics can modify each other, being described by an arbitrary time-irreversible hybrid Lindblad equation. Given a measurement trajectory, a conditional bipartite stochastic state can be inferred by taking into account all previous recording information (filtering). Here, we demonstrate that the joint quantum-classical state can also be inferred by taking into account both past and future measurement results (smoothing). The smoothed hybrid state is estimated without involving information from unobserved measurement channels. Its average over recording realizations recovers the joint time-irreversible behavior. As an application we consider a fluorescent system monitored by an inefficient photon detector. This feature is taken into account through a fictitious classical two-level system. The average purity of the smoothed quantum state increases over that of the (mixed) state obtained from the standard quantum jump approach.
NASA Technical Reports Server (NTRS)
Tsue, Yasuhiko
1994-01-01
A general framework for time-dependent variational approach in terms of squeezed coherent states is constructed with the aim of describing quantal systems by means of classical mechanics including higher order quantal effects with the aid of canonicity conditions developed in the time-dependent Hartree-Fock theory. The Maslov phase occurring in a semi-classical quantization rule is investigated in this framework. In the limit of a semi-classical approximation in this approach, it is definitely shown that the Maslov phase has a geometric nature analogous to the Berry phase. It is also indicated that this squeezed coherent state approach is a possible way to go beyond the usual WKB approximation.
"Nearly Everybody Gets Twitterpated": The Disney Version of Mothering
ERIC Educational Resources Information Center
Fraustino, Lisa Rowe
2015-01-01
This essay makes the case that during the American cold-war era, Disney's animated film classics worked in tandem with their True-Life Adventure series of nature documentaries to reproduce traditional mothering ideology under patriarchy. The animated films do this not by animating the realities of marriage, childbirth, and mothering work for girls…
Reforming Educators: Teachers, Experts, and Advocates.
ERIC Educational Resources Information Center
Mitchell, Samuel
This textbook analyzes successful innovations in education. The first chapter provides an overview of the book, which is followed by a review of classical studies and disasters that have accompanied innovation. Chapter 3 offers a series of separate stories about the different ways teachers have responded to changes, and chapter 4 tries to reverse…
Rhetoric. The Bobbs-Merrill Series in Composition and Rhetoric.
ERIC Educational Resources Information Center
Larson, Richard L., Ed.
Reflecting the opinions of both classical theorists and recent authors, 16 papers on rhetorical theory are collected in this publication. Selections in Part 1, concerned with the definition and objectives of rhetoric, are by Plato, Aristotle, Cicero, Kenneth Burke, Donald C. Bryant, and Martin Steinmann, Jr. In Part 2, selections from the pedagogy…
Why Does My Cruorine Change Color? Using Classic Research Articles To Teach Biochemistry Topics.
ERIC Educational Resources Information Center
White, Harold B., III
2001-01-01
Uses the spectroscopic study by G.G. Stokes of the reversible "oxidation and reduction" of hemoglobin to illustrate how a series of open-ended group assignments and associated classroom demonstrations can be built around a single article in a way that integrates and illuminates basic concepts. (Author/MM)
Interrupting the Symphony: Unpacking the Importance Placed on Classical Concert Experiences
ERIC Educational Resources Information Center
Hess, Juliet
2018-01-01
The Toronto Symphony Orchestra presents a series of youth concerts each year to introduce and attract younger audiences to the symphony. Music teachers often attend these concerts with students, and the importance of such experiences is frequently emphasised and normalised. This article explores the historical roots of the following relations,…
NASA Astrophysics Data System (ADS)
Bose, Chandan; Sarkar, Sunetra
2018-04-01
The present study investigates the complex vortex interactions in the two-dimensional flow-field behind a symmetric NACA0012 airfoil undergoing a prescribed periodic pitching-plunging motion in the low-Reynolds-number regime. The flow-field transitions from periodic to chaotic through a quasi-periodic route as the plunge amplitude is gradually increased. This study unravels the role of the complex interactions that take place among the main vortex structures in making the unsteady flow-field transition from periodicity to chaos. The leading-edge separation plays a key role in providing the very first trigger for aperiodicity. Subsequent mechanisms like shredding, merging, splitting, and collision of vortices in the near-field that propagate and sustain the disturbance have also been followed and presented. These fundamental mechanisms are seen to give rise to spontaneous and irregular formation of new vortex couples at arbitrary locations, which are the primary agencies for sustaining chaos in the flow-field. The interactions have been studied for each dynamical state to understand the course of transition in the flow-field. The qualitative changes observed in the flow-field are manifestations of changes in the underlying dynamical system. The overall dynamics are established in the present study by means of robust quantitative measures derived from classical and non-classical tools of dynamical systems theory. As the present analysis involves a high-fidelity system with many unknowns, non-classical dynamical tools such as recurrence-based time series methods prove very efficient. Moreover, their application is novel in the context of pitch-plunge flapping flight.
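The recurrence-based time-series tools mentioned in the abstract above rest on a simple object: a thresholded distance (recurrence) matrix. The sketch below is only illustrative; it uses a synthetic sine signal with scalar states standing in for the delay-embedded state vectors a real analysis would use, and the threshold `eps` and the `mape`-style quantifier name `rr` are choices made here, not taken from the study.

```python
import math

# Synthetic periodic signal; a chaotic signal would show a far less
# regular recurrence structure than this one.
signal = [math.sin(0.2 * math.pi * n) for n in range(100)]

# Recurrence matrix: R[i][j] = 1 when states i and j are closer than eps.
# (Real recurrence analyses compare delay-embedded vectors, not scalars.)
eps = 0.1
R = [[1 if abs(x - y) < eps else 0 for y in signal] for x in signal]

# Recurrence rate: fraction of recurrent pairs, the most basic
# recurrence quantifier; periodic signals yield diagonal stripes in R.
rr = sum(map(sum, R)) / float(len(signal) ** 2)
print(rr)
```

Quantifiers such as determinism (fraction of recurrent points forming diagonal lines) are built on top of this same matrix and are what distinguish periodic, quasi-periodic, and chaotic regimes.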
Shara, Michael M.; Doyle, Trisha F.; Lauer, Tod R.; ...
2016-11-08
The Hubble Space Telescope has imaged the central part of M87 over a 10 week span, leading to the discovery of 32 classical novae (CNe) and nine fainter, likely very slow, and/or symbiotic novae. In this first paper of a series, we present the M87 nova finder charts, and the light and color curves of the novae. We demonstrate that the rise and decline times, and the colors of M87 novae are uncorrelated with each other and with position in the galaxy. The spatial distribution of the M87 novae follows the light of the galaxy, suggesting that novae accreted by M87 during cannibalistic episodes are well-mixed. Conservatively using only the 32 brightest CNe we derive a nova rate for M87: $363^{+33}_{-45}$ novae yr$^{-1}$. We also derive the luminosity-specific classical nova rate for this galaxy, which is $7.88^{+2.3}_{-2.6}\,\mathrm{yr}^{-1}/10^{10}\,L_{\odot,K}$. Both rates are 3–4 times higher than those reported for M87 in the past, and similarly higher than those reported for all other galaxies. As a result, we suggest that most previous ground-based surveys for novae in external galaxies, including M87, miss most faint, fast novae, and almost all slow novae near the centers of galaxies.
A Comparative Study between Universal Eclectic Septoplasty Technique and Cottle
Amaral Neto, Odim Ferreira do; Mizoguchi, Flavio Massao; Freitas, Renato da Silva; Maniglia, João Jairney; Maniglia, Fábio Fabrício; Maniglia, Ricardo Fabrício
2017-01-01
Introduction Since the last century, surgical correction of nasal septum deviation has been improved. The Universal Eclectic Technique was recently reported, and there are still few studies dedicated to this surgical approach. Objective The objective of this study is to compare the results of septal deviation correction achieved using the Universal Eclectic Technique (UET) with those obtained through Cottle's Technique. Methods This is a prospective study with two consecutive case series totaling 90 patients (40 women and 50 men), aged between 18 and 55 years. We divided patients into two groups according to the surgical approach: fifty-three patients underwent septoplasty through the Universal Eclectic Technique (UET) and thirty-seven patients underwent the classical Cottle septoplasty technique. All patients answered the Nasal Obstruction Symptom Evaluation Scale (NOSE) questionnaire to assess pre- and postoperative nasal obstruction. Results Statistical analysis showed a significantly shorter operating time for the UET group. Nasal edema assessment performed seven days after the surgery showed a prevalence of mild edema in the UET group and moderate edema in the Cottle's technique group. In regard to complication rates, UET presented a single case of septal hematoma, while in the Cottle's technique group we observed two cases of severe edema, one case of incapacitating headache, and one complaint of nasal pain. Conclusion The Universal Eclectic Technique (UET) has proven to be a safe and effective surgical technique with faster symptomatic improvement, low complication rates, and reduced surgical time when compared with the classical Cottle's technique. PMID:28680499
Neuroembryology of the Acupuncture Principal Meridians: Part 3. The Head and Neck.
Dorsher, Peter T; Chiang, Poney
2018-04-01
Background: Accumulating evidence from anatomical, physiologic, and neuroimaging research shows that Classical acupuncture points stimulate nerve trunks or their branches in the head, trunk, and extremities. The first part of this series revealed that phenomenon in the extremities. Principal meridian distributions mirror those of major peripheral nerves there and Classical acupuncture points are proximate to peripheral nerves there. These relationships were shown to be consistent with the linear neuroembryologic development of the extremities. The second part of this series revealed that, in the trunk, a neuroanatomical basis for the Principal meridians exists consistent with lateral folding in early fetal neuroembryologic development. Objective: The aim of this Part is to provide anatomical data that corroborates a neuroanatomical basis for the Principal meridians in the head and neck, which is consistent with the longitudinal and lateral folding that occurs in early fetal neuroembryologic development. Methods: Adobe Photoshop software was used to apply Classical acupuncture points and Principal meridians as layers superimposed on neuroanatomic images of the head and neck, allowing demonstration of their anatomical relationships. Results: The Principal meridian distributions in the head and neck region can be conceptualized as connecting branches of the cranial and/or cervical spinal nerves. Conclusions: Anatomical data support the conceptualization of acupuncture Principal meridians in the head and neck as connecting branches of the cranial and/or cervical spinal nerves and are consistent with neuroembryologic development. Overall, the acupuncture Principal meridians can be conceptualized to have a neuroanatomical substrate that is corroborated by developmental neuroembryology.
Angona, Anna; Alvarez-Larrán, Alberto; Bellosillo, Beatriz; Martínez-Avilés, Luz; Garcia-Pallarols, Francesc; Longarón, Raquel; Ancochea, Àgueda; Besses, Carles
2015-03-15
Two prognostic models to predict overall survival and thrombosis-free survival have been proposed: the International Prognostic Score for Essential Thrombocythemia (IPSET) and IPSET-Thrombosis, respectively, based on age, leukocyte count, history of previous thrombosis, the presence of cardiovascular risk factors, and JAK2 mutational status. The aim of the present study was to assess the clinical and biological characteristics at diagnosis and during evolution in essential thrombocythemia (ET) patients, as well as the factors associated with survival and thrombosis and the usefulness of these new prognostic models. We have evaluated the clinical data and the mutation status of JAK2, MPL, and calreticulin of 214 ET patients diagnosed in a single center between 1985 and 2012, classified according to the classical risk stratification, IPSET, and IPSET-Thrombosis. With a median follow-up of 6.9 years, overall survival was not associated with any variable by multivariate analysis. Thrombotic history and leukocytes >10×10⁹/L were associated with thrombosis-free survival (TFS). In our series, the IPSET survival and thrombosis prognostic systems did not provide more clinically relevant information than the classical thrombotic risk stratification. Thrombotic history and leukocytosis >10×10⁹/L were significantly associated with lower TFS, while the IPSET-Thrombosis prognostic system did not provide more information than classical thrombotic risk assessment. Copyright © 2014 Elsevier España, S.L.U. All rights reserved.
NASA Astrophysics Data System (ADS)
Nikolaev, A. S.
2015-03-01
We study the structure of the canonical Poincaré-Lindstedt perturbation series in the Deprit operator formalism and establish its connection to the Kato resolvent expansion. A discussion of invariant definitions for averaging and integrating perturbation operators and their canonical identities reveals a regular pattern in the series for the Deprit generator. This regularity is explained using Kato series and the relation of the perturbation operators to the Laurent coefficients for the resolvent of the Liouville operator. This purely canonical approach systematizes the series and leads to an explicit expression for the Deprit generator, in any order of perturbation theory, in terms of the partial pseudoinverse of the perturbed Liouville operator. The corresponding Kato series provides a reasonably effective computational algorithm. The canonical connection of the perturbed and unperturbed averaging operators allows describing ambiguities in the generator and transformed Hamiltonian, while Gustavson integrals turn out to be insensitive to the normalization style. We use nonperturbative examples for illustration.
NASA Astrophysics Data System (ADS)
Ogawa, Kazuhisa; Kobayashi, Hirokazu; Tomita, Akihisa
2018-02-01
The quantum interference of entangled photons forms a key phenomenon underlying various quantum-optical technologies. It is known that the quantum interference patterns of entangled photon pairs can be reconstructed classically by the time-reversal method; however, the time-reversal method has been applied only to time-frequency-entangled two-photon systems in previous experiments. Here, we apply the time-reversal method to the position-wave-vector-entangled two-photon systems: the two-photon Young interferometer and the two-photon beam focusing system. We experimentally demonstrate that the time-reversed systems classically reconstruct the same interference patterns as the position-wave-vector-entangled two-photon systems.
Hybrid quantum-classical hierarchy for mitigation of decoherence and determination of excited states
DOE Office of Scientific and Technical Information (OSTI.GOV)
McClean, Jarrod R.; Kimchi-Schwartz, Mollie E.; Carter, Jonathan
Using quantum devices supported by classical computational resources is a promising approach to quantum-enabled computation. One powerful example of such a hybrid quantum-classical approach, optimized for classically intractable eigenvalue problems, is the variational quantum eigensolver, built to utilize quantum resources for the solution of eigenvalue problems and optimizations with minimal coherence time requirements by leveraging classical computational resources. These algorithms are among the leading candidates to first achieve supremacy over classical computation. Here, we provide evidence for the conjecture that variational approaches can automatically suppress even nonsystematic decoherence errors by introducing an exactly solvable channel model of variational state preparation. Moreover, we develop a more general hierarchy of measurement and classical computation that allows one to obtain increasingly accurate solutions by leveraging additional measurements and classical resources. In conclusion, we demonstrate numerically on a sample electronic system that this method both allows for the accurate determination of excited electronic states and reduces the impact of decoherence, without using any additional quantum coherence time or formal error-correction codes.
Roma, Andres A; Barry, Jessica; Pai, Rish K; Billings, Steven D
2014-07-01
Sebaceous gland hyperplasia is a common skin condition, very rarely reported in the female genital region. We present 13 cases from 12 patients, the first case series of sebaceous gland hyperplasia of the vulva. Differences in age at presentation and clinical presentation compared with classic sebaceous gland hyperplasia from the head and neck region were noted. Also, it was rarely included in the clinical differential diagnosis. Immunohistochemical studies to determine any possible association with the Muir-Torre syndrome were performed and mismatch repair protein loss was not identified.
NASA Astrophysics Data System (ADS)
Cattaneo, Alberto S.; Schiavina, Michele
2017-02-01
This note describes the restoration of time in one-dimensional parameterization-invariant (hence timeless) models, namely, the classically equivalent Jacobi action and gravity coupled to matter. It also serves as a timely introduction by examples to the classical and quantum BV-BFV formalism as well as to the AKSZ method.
Principles of Discrete Time Mechanics
NASA Astrophysics Data System (ADS)
Jaroszkiewicz, George
2014-04-01
1. Introduction; 2. The physics of discreteness; 3. The road to calculus; 4. Temporal discretization; 5. Discrete time dynamics architecture; 6. Some models; 7. Classical cellular automata; 8. The action sum; 9. Worked examples; 10. Lee's approach to discrete time mechanics; 11. Elliptic billiards; 12. The construction of system functions; 13. The classical discrete time oscillator; 14. Type 2 temporal discretization; 15. Intermission; 16. Discrete time quantum mechanics; 17. The quantized discrete time oscillator; 18. Path integrals; 19. Quantum encoding; 20. Discrete time classical field equations; 21. The discrete time Schrodinger equation; 22. The discrete time Klein-Gordon equation; 23. The discrete time Dirac equation; 24. Discrete time Maxwell's equations; 25. The discrete time Skyrme model; 26. Discrete time quantum field theory; 27. Interacting discrete time scalar fields; 28. Space, time and gravitation; 29. Causality and observation; 30. Concluding remarks; Appendix A. Coherent states; Appendix B. The time-dependent oscillator; Appendix C. Quaternions; Appendix D. Quantum registers; References; Index.
Double-bosonization and Majid's conjecture, (I): Rank-inductions of ABCD
NASA Astrophysics Data System (ADS)
Hu, Hongmei; Hu, Naihong
2015-11-01
Majid developed in [S. Majid, Math. Proc. Cambridge Philos. Soc. 125, 151-192 (1999)] the double-bosonization theory to construct Uq(𝔤) and expected to generate inductively not just a line but a tree of quantum groups starting from a node. In this paper, the authors confirm Majid's first expectation (see p. 178 [S. Majid, Math. Proc. Cambridge Philos. Soc. 125, 151-192 (1999)]) through giving and verifying the full details of the inductive constructions of Uq(𝔤) for the classical types, i.e., the ABCD series. Some examples in low ranks are given to elucidate that any quantum group of classical type can be constructed from the node corresponding to Uq(𝔰𝔩2).
Frontiers in Relativistic Celestial Mechanics, Vol. 2, Applications and Experiments
NASA Astrophysics Data System (ADS)
Kopeikin, Sergei
2014-08-01
Relativistic celestial mechanics - investigating the motion of celestial bodies under the influence of general relativity - is a major tool of modern experimental gravitational physics. With a wide range of prominent authors from the field, this two-volume series consists of reviews on a multitude of advanced topics in the area of relativistic celestial mechanics - starting from more classical topics such as the regime of asymptotically-flat spacetime, light propagation and celestial ephemerides, but also including its role in cosmology and alternative theories of gravity as well as modern experiments in this area. This second volume of a two-volume series covers applications of the theory as well as experimental verifications. From tools to determine light travel times in curved space-time to laser ranging between Earth and Moon and between satellites, and impacts on the definition of time scales and clock comparison techniques, a variety of effects is discussed. On the occasion of his 80th birthday, these two volumes honor V. A. Brumberg - one of the pioneers in modern relativistic celestial mechanics. Contributions include: J. Simon, A. Fienga: Victor Brumberg and the French school of analytical celestial mechanics T. Fukushima: Elliptic functions and elliptic integrals for celestial mechanics and dynamical astronomy P. Teyssandier: New tools for determining the light travel time in static, spherically symmetric spacetimes beyond the order G2 J. Müller, L. Biskupek, F. Hofmann and E. Mai: Lunar laser ranging and relativity N. Wex: Testing relativistic celestial mechanics with radio pulsars I. Ciufolini et al.: Dragging of inertial frames, fundamental physics, and satellite laser ranging G. Petit, P. Wolf, P. Delva: Atomic time, clocks, and clock comparisons in relativistic spacetime: a review
Transport studies in high-performance field reversed configuration plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gupta, S., E-mail: sgupta@trialphaenergy.com; Barnes, D. C.; Dettrick, S. A.
2016-05-15
A significant improvement of field reversed configuration (FRC) lifetime and plasma confinement times in the C-2 plasma, called the High Performance FRC regime, has been observed with neutral beam injection (NBI), improved edge stability, and better wall conditioning [Binderbauer et al., Phys. Plasmas 22, 056110 (2015)]. A Quasi-1D (Q1D) fluid transport code has been developed and employed to carry out transport analysis of such C-2 plasma conditions. The Q1D code is coupled to a Monte-Carlo code to incorporate the effect of fast ions, due to NBI, on the background FRC plasma. Numerically, the Q1D transport behavior with enhanced transport coefficients (but with otherwise classical parametric dependencies), such as 5 times classical resistive diffusion, classical ion thermal conductivity, 20 times classical electron thermal conductivity, and classical fast ion behavior, fits the experimentally measured time evolution of the excluded flux radius, line-integrated density, and electron/ion temperature. The numerical study shows near sustainment of poloidal flux for nearly 1 ms in the presence of NBI.
NASA Astrophysics Data System (ADS)
Sitohang, Yosep Oktavianus; Darmawan, Gumgum
2017-08-01
This research compares two time series forecasting models for predicting the sales volume of motorcycles in Indonesia. The first forecasting model is the Autoregressive Fractionally Integrated Moving Average (ARFIMA). ARFIMA can handle non-stationary data and outperforms ARIMA in forecasting accuracy on long-memory data, because the fractional difference parameter can capture correlation structures in data with short memory, long memory, or both simultaneously. The second forecasting model is singular spectrum analysis (SSA). The advantage of this technique is that it decomposes time series data into the classic components, i.e., trend, cyclical, seasonal, and noise components, which makes its forecasting accuracy significantly better. Furthermore, SSA is a model-free technique, so it has a very wide range of application. Selection of the best model is based on the lowest MAPE value. Based on the calculations, the best ARFIMA model is ARFIMA(3, d = 0.63, 0), with a MAPE of 22.95 percent; SSA with a window length of 53 and 4 groups of reconstructed data yields a MAPE of 13.57 percent. Based on these results, it is concluded that SSA produces better forecasting accuracy.
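The selection criterion used in the abstract above, MAPE, is straightforward to compute. A minimal sketch follows; the sales figures and the two forecast series are made-up illustrations (the study's gold-price and motorcycle-sales data are not reproduced here), and the names `mape`, `model_a`, and `model_b` are choices made for this example.

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

# Hypothetical held-out observations and two competing forecasts.
actual  = [120, 135, 128, 150]
model_a = [110, 140, 120, 160]   # stand-in for, e.g., an ARFIMA forecast
model_b = [118, 133, 130, 148]   # stand-in for, e.g., an SSA forecast

# As in the study, the model with the lower MAPE is preferred.
best = min(("model_a", mape(actual, model_a)),
           ("model_b", mape(actual, model_b)),
           key=lambda t: t[1])
print(best[0])
```

Note that MAPE is undefined when an actual value is zero and penalizes over- and under-forecasts asymmetrically, which is why studies often report it alongside RMSE and MAD.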
An Extension of the Mean Value Theorem for Integrals
ERIC Educational Resources Information Center
Khalili, Parviz; Vasiliu, Daniel
2010-01-01
In this note we present an extension of the mean value theorem for integrals. The extension we consider is motivated by an older result (here referred to as Corollary 2), which is quite classical in the literature of Mathematical Analysis or Calculus. We also show an interesting application for computing the sum of a harmonic series.
Social Comparison: The End of a Theory and the Emergence of a Field
ERIC Educational Resources Information Center
Buunk, Abraham P.; Gibbons, Frederick X.
2007-01-01
The past and current states of research on social comparison are reviewed with regard to a series of major theoretical developments that have occurred in the past 5 decades. These are, in chronological order: (1) classic social comparison theory, (2) fear-affiliation theory, (3) downward comparison theory, (4) social comparison as social…
De Gustibus Non Disputandum (One does not Argue about Tastes).
ERIC Educational Resources Information Center
Strasheim, Lorraine A.
Taking the epigrams of Martial and some of the reading notes from the Loeb Classical Library, this document presents classroom-ready readings on foods, including a menu excerpted from Martial and a series of two- to four-line epigrams on a variety of foods: peppers, beans, flour, beets, lettuce, turnips, leeks, cheese, sausage, eggs, bread,…
Science in History, Volume 2: The Scientific and Industrial Revolutions.
ERIC Educational Resources Information Center
Bernal, J. D.
This volume, the second of four, includes parts four and five of the eight parts in the series. Part Four deals with what is called the Scientific Revolution from 1440-1690. This "revolution" is divided into three phases: Phase 1 (1440-1540) includes the Renaissance and the Reformation, during which the world-picture adopted from classical times…
Removing the Mystery of Entropy and Thermodynamics--Part I
ERIC Educational Resources Information Center
Leff, Harvey S.
2012-01-01
Energy and entropy are centerpieces of physics. Energy is typically introduced in the study of classical mechanics. Although energy in this context can be challenging, its use in thermodynamics and its connection with entropy seem to take on a special air of mystery. In this five-part series, I pinpoint ways around key areas of difficulty to…
Lesion Neuroanatomy of the Sustained Attention to Response Task
ERIC Educational Resources Information Center
Molenberghs, Pascal; Gillebert, Celine R.; Schoofs, Hanne; Dupont, Patrick; Peeters, Ronald; Vandenberghe, Rik
2009-01-01
The Sustained Attention to Response task is a classical neuropsychological test that has been used by many centres to characterize the attentional deficits in traumatic brain injury, ADHD, autism and other disorders. During the SART a random series of digits 1-9 is presented repeatedly and subjects have to respond to each digit (go trial) except…
True Merit: Ensuring Our Brightest Students Have Access to Our Best Colleges and Universities
ERIC Educational Resources Information Center
Giancola, Jennifer; Kahlenberg, Richard D.
2016-01-01
The admissions process used today in America's most selective colleges and universities is a classic case of interest group politics gone awry. Nobody champions or fights for smart, low-income students. The result is an admissions process reduced to a series of "preferences." Taken together with other widely-used admissions practices,…
Poverty, Social Capital, Parenting and Child Outcomes in Canada. Final Report. Working Paper Series
ERIC Educational Resources Information Center
Jones, Charles; Clark, Linn; Grusec, Joan; Hart, Randle; Plickert, Gabriele; Tepperman, Lorne
2002-01-01
The experience of long-term poverty affects many child outcomes, in part through a family stress process in which poverty is considered to be one of the major factors causing family dysfunction, depression among caregivers and inadequate parenting. Recent scholarship extends the classical Family Stress Model by researching the ways in which…
Neurons and the Process Standards
ERIC Educational Resources Information Center
Zambo, Ron; Zambo, Debby
2011-01-01
The classic Chickens and Pigs problem is considered to be an algebraic problem with two equations and two unknowns. In this article, the authors describe how third-grade teacher Maria is using it to develop a problem-based lesson because she is looking to her students' future needs. As Maria plans, she considers how a series of problems with the…
NOAA Photo Library Image - nssl0059: Tornado in mature stage of development. Photo #3 of a series of classic photographs of this
True Tales: From Bee Behavior to the Life of Buddha, Not All Comics Are Fiction
ERIC Educational Resources Information Center
Sanderson, Peter
2004-01-01
Although graphic novels are traditionally thought of as the domain of larger-than-life fantasies, for decades comics have also served as educational tools. One of the pioneers of American comic books, Will Eisner, worked for years creating instructional comics. Starting in 1941, the Classics Illustrated series introduced young readers to…
Education and Identity. Second Edition. The Jossey-Bass Higher and Adult Education Series.
ERIC Educational Resources Information Center
Chickering, Arthur W.; Reisser, Linda
Developing policies and practices to create higher education environments that will foster broad-based development of human talent and potentials is the focus of this fully revised and updated edition, which adds findings from the last 25 years to a classic work. The volume begins with "A Current Theoretical Context for Student Development," which…
Crossing Boundaries: Sharing Concepts of Music Teaching from Classroom to Studio
ERIC Educational Resources Information Center
McPhail, Graham J.
2010-01-01
This study demonstrates how action research can provide a means for teachers to undertake research for themselves to inform and enhance their work. The focus of the research was the self-critique of pedagogical practice in one-to-one classical instrumental music teaching within the context of the author's private studio. A series of lessons were…
Gene Concepts in Higher Education Cell and Molecular Biology Textbooks
ERIC Educational Resources Information Center
Albuquerque, Pitombo Maiana; de Almeida, Ana Maria Rocha; El-Hani, Nino Charbel
2008-01-01
Despite being a landmark of 20th century biology, the "classical molecular gene concept," according to which a gene is a stretch of DNA encoding a functional product, which may be a single polypeptide or RNA molecule, has been recently challenged by a series of findings (e.g., split genes, alternative splicing, overlapping and nested…
Tree decay an expanded concept
Alex L. Shigo
1979-01-01
This publication is the final one in a series on tree decay developed in cooperation with Harold G. Marx, Research Application Staff Assistant, U.S. Department of Agriculture, Forest Service, Washington, D.C. The purpose of this publication is to clarify further the tree decay concept that expands the classical concept to include the orderly response of the tree to...
Pleasurable Pedagogies: "Reading Lolita in Tehran" and the Rhetoric of Empathy
ERIC Educational Resources Information Center
Kulbaga, Theresa A.
2008-01-01
In her audio essay for the National Public Radio series "This I Believe," Iranian-American author and professor Azar Nafisi celebrates the affective power of empathy. In the essay, Nafisi refers to actual people in Darfur, Afghanistan, Iraq, Algeria, Rwanda, and North Korea, but she turns to a classic nineteenth-century American novel to…
Stokes phenomena in discrete Painlevé II.
Joshi, N; Lustri, C J; Luu, S
2017-02-01
We consider the asymptotic behaviour of the second discrete Painlevé equation in the limit as the independent variable becomes large. Using asymptotic power series, we find solutions that are asymptotically pole-free within some region of the complex plane. These asymptotic solutions exhibit Stokes phenomena, which are typically invisible to classical power series methods. We subsequently apply exponential asymptotic techniques to investigate such phenomena, and obtain mathematical descriptions of the rapid switching behaviour associated with Stokes curves. Through this analysis, we determine the regions of the complex plane in which the asymptotic behaviour is described by a power series expression, and find that the behaviour of these asymptotic solutions shares a number of features with the tronquée and tri-tronquée solutions of the second continuous Painlevé equation.
Coseismic flow of frictional melts: insights from mini-AMS measurements on pseudotachylyte
NASA Astrophysics Data System (ADS)
Geissman, J. W.; Leibovitz, N.; Meado, A.; Campbell, L.; Ferre, E. C.
2017-12-01
Fault pseudotachylytes, widely regarded as earthquake fossils, are fascinating rocks that may hold important clues on the physics of seismic rupture and the lubrication of fault planes. Forceful injection of rapidly produced melts along a friction zone typically forms a complex network of veins along the slip zone and at a high angle to the generation plane. The flow patterns of these pseudotachylyte melts remain, however, poorly constrained except in rare cases when billow-like folds or other flow structures are preserved. Recent modifications to the anisotropy of magnetic susceptibility (AMS) method allow new directions of investigation of melt kinematics in pseudotachylyte veins, regardless of whether they are generation or injection veins. Here we present new mini-AMS results based on a series of 3.5 mm cubes (≈200 times smaller than the classic sample size) of pseudotachylyte veins from the classic Val Gilba (Italian Alps), Cima di Gratera (Corsica), and Santa Rosa (California) localities. These preliminary analyses demonstrate the potential of this new mini-AMS method in tracking the complex coseismic movement of a low-viscosity magma through dynamically deformed conduits. The lack of plastic deformation in pseudotachylyte clasts and along the pseudotachylyte margins supports the hypothesis that the coseismic melt flow pattern is frozen in situ without significant subsolidus deformation.
NASA Technical Reports Server (NTRS)
Scargle, Jeffrey D.
2017-01-01
Questions in data analysis involving the concepts of time and measurement are often pushed into the background or reserved for a philosophical discussion. Some examples are: a) Is causality a consequence of the laws of physics, or can the arrow of time be reversed? b) Can we determine the arrow of time of an event? c) Do we need the continuum hypothesis for the underlying function in any measurement process? d) Can we say anything about the analyticity of the underlying process of an event? e) Would it be valid to model a non-analytical process as a function of time? f) What are the implications of all these questions for classical Fourier techniques? However, in the age of big data gathered either from space missions supplying ultra-precise long time series, or e.g. LIGO data from the ground, the moment to bring these questions to the foreground seems to have arrived. The limitations of our understanding of some fundamental processes are emphasized by the lack of solutions to problems that have remained open for more than two decades, such as the non-detection of solar g-modes, or the modal identification of main-sequence stellar pulsators like delta Scuti stars. Flicker noise or 1/f noise, for example, attributed in solar-like stars to granulation, is analyzed mostly only to apply noise reduction techniques, neither considering the classical problem of 1/f noise that was introduced a hundred years ago, nor taking into account ergodic or non-ergodic solutions that render spectral analysis techniques inapplicable in practice. This topic was discussed by Nicholas W. Watkins during the ITISE meeting held in Granada in 2016. There he presented preliminary results of his research on Mandelbrot's related work.
We reproduce here his quotation of Mandelbrot (1999): "There is a sharp contrast between a highly anomalous ("non-white") noise that proceeds in ordinary clock time and a noise whose principal anomaly is that it is restricted to fractal time", suggesting a connection with the above proposed topics that could be phrased as the following additional questions: a) Is self-organized criticality (SOC) frequent in astrophysical phenomena? b) Could all fractals in nature be considered stochastic? c) Could we establish mathematical/physical relationships between chaotic and fractal behaviors in time series? d) Could the differences between fractals and chaos in terms of analyticity be used to understand the residuals of the fitting of stellar light curves? In this meeting we would like to approach these problems from a holistic and multidisciplinary perspective, taking into account not only technical issues but also the deeper implications. In particular, the concept of connectivity (introduced in Pascual-Granado et al. A&A, 2015) could be used to implement, within the framework of ARMA processes, an "arrow of time" (see attached document), and thus to study the possible implications for the concept of time as envisaged by Watkins.
Tropospheric delays derived from Kalman-filtered VLBI observations
NASA Astrophysics Data System (ADS)
Soja, Benedikt; Nilsson, Tobias; Karbon, Maria; Balidakis, Kyriakos; Lu, Cuixian; Anderson, James; Glaser, Susanne; Liu, Li; Mora-Diaz, Julian A.; Raposo-Pulido, Virginia; Xu, Minghui; Heinkelmann, Robert; Schuh, Harald
2015-04-01
One of the most important error sources in the products of space geodetic techniques is the troposphere. Currently, it is not possible to model the rapid variations in the path delay caused by water vapor with sufficient accuracy; thus it is necessary to estimate these delays in the data analysis. Very long baseline interferometry (VLBI) is well suited to determine wet delays with high accuracy and precision. Compared to GNSS, the analysis does not need to deal with effects related to code biases, multipath, satellite orbit mismodeling, or antenna phase center variations that are inherent in GNSS processing. VLBI data are usually analyzed by estimating geodetic parameters in a least squares adjustment. However, once the VLBI Global Observing System (VGOS) becomes operational, algorithms providing real-time capability, for instance a Kalman filter, will be preferable for data analysis. Even today, certain advantages of such a filter, for example allowing stochastic modeling of geodetic parameters, warrant its application. The estimation of tropospheric wet delays, in particular, greatly benefits from the stochastic approach of the filter. In this work we have investigated the benefits of applying a Kalman filter in the VLBI data analysis for the determination of tropospheric parameters. The VLBI datasets considered are the CONT campaigns, which demonstrate state-of-the-art capabilities of the VLBI system. They are unique in following a continuous observation schedule over 15 days and in having data recorded at higher bandwidth than usual. The large amount of observations leads to a very high quality of geodetic products. CONT campaigns are held every three years; we have analyzed all CONT campaigns between 2002 and 2014 for this study. In our implementation of a Kalman filter in the VLBI software VieVS@GFZ, the zenith wet delays (ZWD) are modeled as random walk processes.
We have compared the resulting time series to corresponding ones obtained from other sources (water vapor radiometers, GNSS, ray-traced delays from numerical weather models) and from a classical least squares solution of the VLBI data. Taking the radiometer time series as a reference, the Kalman filter solution showed the smallest root mean square error. Due to the high correlation between the ZWD and station coordinates, investigations of the baseline lengths are of great interest in this context as well. Comparing baseline length repeatabilities from the classical least squares fit with those from the Kalman filter, the filter results perform better by up to 15%. To further improve the performance of the ZWD estimation, the noise parameters of the Kalman filter were modeled individually for each station. From ZWD time series at all involved VLBI sites, the power spectral densities of the white noise processes driving the random walk processes were derived. Applying this station-based model improves the baseline length repeatabilities by an additional 2-3%.
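The stochastic model at the heart of this approach, a random-walk state observed in noise and tracked by a Kalman filter, can be sketched in a few lines. This is a hedged scalar illustration, not the VieVS@GFZ implementation; the variances and the synthetic delay series are invented for the example.

```python
import numpy as np

def kalman_random_walk(obs, obs_var, q):
    """Scalar Kalman filter with a random-walk state model.

    obs     : observations (e.g. noisy zenith wet delays)
    obs_var : measurement noise variance
    q       : process noise variance per step (random-walk PSD * dt)
    """
    x, p = obs[0], obs_var          # initialize from the first observation
    estimates = [x]
    for z in obs[1:]:
        p = p + q                   # predict: the random walk inflates variance
        k = p / (p + obs_var)       # Kalman gain
        x = x + k * (z - x)         # update with the innovation
        p = (1.0 - k) * p           # posterior variance
        estimates.append(x)
    return np.array(estimates)

# Noisy observations of a slowly drifting delay (arbitrary units)
rng = np.random.default_rng(0)
truth = 100.0 + np.cumsum(rng.normal(0.0, 0.1, 200))   # true random-walk delay
obs = truth + rng.normal(0.0, 1.0, 200)                # noisy measurements
est = kalman_random_walk(obs, obs_var=1.0, q=0.01)
# The filtered error should be well below the raw measurement error
print(np.std(est - truth) < np.std(obs - truth))
```

Because the process-noise variance `q` here matches the simulated drift, the filter is near-optimal for this toy series; station-wise tuning of `q`, as in the abstract, plays the same role.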
Surgical considerations in FAP-related pouch surgery: Could we do better?
Möslein, Gabriela
2016-07-01
The ileoanal pouch has become the standard restorative procedure of choice for patients with the classical phenotype of FAP (familial adenomatous polyposis) and also for ulcerative colitis (UC). Whilst we tend to encounter descriptive analyses comparing functional outcome, fertility and quality of life (QOL) between series in the literature, there may be an urgent need to discuss the subtle technical modifications that may be pivotal for improving long-term QOL in FAP patients. Our aim is to review the current literature and discuss the aspects of ileal pouch-anal anastomosis that may require specific reevaluation for FAP. Surgical strategies aimed at minimizing post-interventional desmoid growth are among the most important aspects. For this study, the following topics of interest were selected: timing of surgery, IRA or ileoanal pouch for classical FAP, laparoscopic or conventional surgery, TME or mesenteric dissection, preservation of the ileocolic vessels, handsewn or double-stapled anastomosis, shape and size of pouch, protective ileostomy and, last but definitely not least, how to manage desmoid plaques or desmoids at the time of prophylactic surgery. For the depicted technicalities of the procedure, a review of recent literature was performed and evaluated. For the topics selected, only sparse references were identified that focused on the specific condition of FAP. Almost all pouch literature focuses on procedural aspects, and FAP patients always constitute a small minority. It therefore becomes obvious that this specific entity is not adequately taken into account. This is a serious bias for the identification of important steps in the procedure that may be beneficial for patients with either of the diseases. The results of this study demonstrate that several technical differences in the construction of ileoanal pouches in FAP patients deserve more attention and prospective evaluation, perhaps even randomized trials.
The role, importance and potential benefit or deterioration of outcome of most of the discussed technicalities remain unclear to date. Significant differences between the underlying diseases (UC and FAP) have not been taken into consideration, such as, specifically, the management of precursor desmoid lesions at the time of prophylactic surgery as well as the prevention of desmoid tumors. Several of the aspects discussed in this paper should be prospectively evaluated in larger and exclusive series of FAP patients.
Evaluating data-driven causal inference techniques in noisy physical and ecological systems
NASA Astrophysics Data System (ADS)
Tennant, C.; Larsen, L.
2016-12-01
Causal inference from observational time series challenges traditional approaches for understanding processes and offers exciting opportunities to gain new understanding of complex systems where nonlinearity, delayed forcing, and emergent behavior are common. We present a formal evaluation of the performance of convergent cross-mapping (CCM) and transfer entropy (TE) for data-driven causal inference under real-world conditions. CCM is based on nonlinear state-space reconstruction, and causality is determined by the convergence of prediction skill with an increasing number of observations of the system. TE is the uncertainty reduction based on transition probabilities of a pair of time-lagged variables. With TE, causal inference is based on asymmetry in information flow between the variables. Observational data and numerical simulations from a number of classical physical and ecological systems: atmospheric convection (the Lorenz system), species competition (patch-tournaments), and long-term climate change (Vostok ice core) were used to evaluate the ability of CCM and TE to infer causal relationships as data series become increasingly corrupted by observational (instrument-driven) or process (model- or stochastic-driven) noise. While both techniques show promise for causal inference, TE appears to be applicable to a wider range of systems, especially when the data series are of sufficient length to reliably estimate transition probabilities of system components. Both techniques also show a clear effect of observational noise on causal inference. For example, CCM exhibits a negative logarithmic decline in prediction skill as the noise level of the system increases. Changes in TE strongly depend on the noise type and on which variable the noise was added to. The ability of CCM and TE to detect driving influences suggests that their application to physical and ecological systems could be transformative for understanding driving mechanisms as Earth systems undergo change.
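Of the two techniques, transfer entropy has the simpler plug-in estimator: discretize both series, then compare the transition probabilities of the target with and without conditioning on the candidate driver. The sketch below is a minimal, hedged illustration with a lag of one and coarse binning, not the authors' implementation; the driven pair of series is synthetic.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=2):
    """Plug-in transfer entropy TE(X -> Y) with lag 1, in nats.

    Discretizes both series into equal-width bins and estimates
    TE = sum p(y1, y0, x0) * log[ p(y1 | y0, x0) / p(y1 | y0) ].
    """
    xd = np.digitize(x, np.linspace(x.min(), x.max(), bins + 1)[1:-1])
    yd = np.digitize(y, np.linspace(y.min(), y.max(), bins + 1)[1:-1])
    triples = Counter(zip(yd[1:], yd[:-1], xd[:-1]))
    pairs_yx = Counter(zip(yd[:-1], xd[:-1]))
    pairs_yy = Counter(zip(yd[1:], yd[:-1]))
    singles = Counter(yd[:-1])
    n = len(yd) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y0, x0)]        # p(y1 | y0, x0)
        p_cond_self = pairs_yy[(y1, y0)] / singles[y0]  # p(y1 | y0)
        te += p_joint * np.log(p_cond_full / p_cond_self)
    return te

# X drives Y with a one-step delay; Y does not drive X
rng = np.random.default_rng(1)
x = rng.normal(size=5000)
y = np.empty_like(x)
y[0] = rng.normal()
y[1:] = 0.8 * x[:-1] + 0.2 * rng.normal(size=4999)
te_xy = transfer_entropy(x, y)
te_yx = transfer_entropy(y, x)
print(te_xy > te_yx)  # directional asymmetry flags X as the driver
```

With finite data the estimate in the non-causal direction is biased slightly above zero, which is one reason the abstract stresses having series long enough to estimate the transition probabilities reliably.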
Chlyeh, G; Henry, P Y; Jarne, P
2003-09-01
The population biology of the schistosome-vector snail Bulinus truncatus was studied in an irrigation area near Marrakech, Morocco, using demographic approaches, in order to estimate life-history parameters. The survey was conducted using 2 capture-mark-recapture analyses in 2 separate sites from the irrigation area, the first one in 1999 and the second one in 2000. Individuals larger than 5 mm were considered. The capture probability varied through time and space in both analyses. Apparent survival (from 0.7 to 1 per period of 2-3 days) varied with time and space (a series of sinks was considered), as well as with a square function of size. These results suggest variation in the population intrinsic rate of increase. They also suggest that results from more classical analyses of population demography, aiming, for example, at estimating population size, should be interpreted with caution. Together with other results obtained in the same irrigation area, they also lead to some suggestions for population control.
Relativistic Newtonian dynamics for objects and particles
NASA Astrophysics Data System (ADS)
Friedman, Y.
2017-04-01
Relativistic Newtonian Dynamics (RND) was introduced in a series of recent papers by the author, in partial cooperation with J. M. Steiner. RND is capable of describing non-classical behavior of motion under a central attracting force. RND incorporates the influence of potential energy on spacetime in Newtonian dynamics, treating gravity as a force in flat spacetime. It was shown that this dynamics accurately predicts gravitational time dilation, the anomalous precession of Mercury and the periastron advance of any binary. In this paper the model is further refined and extended to describe the motion of both objects with non-zero mass and massless particles under a conservative attracting force. It is shown that for any conservative force a properly defined energy is conserved on the trajectories, and if this force is central, the angular momentum is also preserved. An RND equation of motion is derived for motion under a conservative force. As an application, it is shown that RND also accurately predicts the Shapiro time delay, the fourth test of GR.
Van Gosen, Bradley S.
2009-01-01
A similar version of this slide show was presented on three occasions during 2008: two times to local chapters of the Society for Mining, Metallurgy, and Exploration (SME), as part of SME's Henry Krumb lecture series, and the third time at the Northwest Mining Association's 114th Annual Meeting, held December 1-5, 2008, in Sparks (Reno), Nevada. In 2006, the U.S. Geological Survey (USGS) initiated a study of the diverse and uncommon mineral resources associated with carbonatites and associated alkaline igneous rocks. Most of these deposit types have not been studied by the USGS during the last 25 years, and many of these mineral resources have important applications in modern technology. The author chose to begin this study at Iron Hill in southwestern Colorado because it is the site of a classic carbonatite complex, which is thought to host the largest known resources of titanium and niobium in the United States.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giammichele, N.; Fontaine, G.; Bergeron, P.
2015-12-10
We present the first of a two-part seismic analysis of the two bright hot ZZ Ceti stars GD 165 and Ross 548. In this first part, we report the results of frequency extraction exercises based on time-series data sets of exceptional quality. We uncovered up to 13 independent pulsation modes in GD 165, regrouped into six main frequency multiplets. These include 9 secure (signal-to-noise ratio, S/N > 4) detections and 4 possible ones (4 ≥ S/N ≥ 3). Likewise, we isolated 11 independent modes in Ross 548 (9 secure and 2 possible detections), also regrouped into 6 multiplets. The multiplet structure is likely caused by rotational splitting. We also provide updated estimates of the time-averaged atmospheric properties of these two pulsators in the light of recent developments on the front of atmospheric modeling for DA white dwarfs.
NASA Astrophysics Data System (ADS)
Steinberg, R.; Siegel, E.
2010-03-01
``AUDIOMAPS'' music enjoyment/appreciation-via-understanding methodology, versus art, music-dynamics evolves, telling a story in (3+1)-dimensions: trails, frames, timbres, + dynamics amplitude vs. music-score time-series (formal-inverse power-spectrum) surprisingly closely parallels (3+1)-dimensional Einstein(1905) special-relativity ``+'' (with its enjoyment-expectations) a manifestation of quantum-theory expectation-values, together a music quantum-ACOUSTO/MUSICO-dynamics (QA/MD). Analysis via Derrida deconstruction enabled Siegel-Baez ``Category-Semantics'' ``FUZZYICS''=``CATEGORYICS (``SON of 'TRIZ") classic Aristotle ``Square-of-Opposition" (SoO) DEduction-logic, irrespective of Boon-Klimontovich versus Voss-Clark[PRL(77)] music power-spectrum analysis sampling-time/duration controversy: part versus whole, shows that ``AUDIOMAPS" QA/MD reigns supreme as THE music appreciation-via-analysis tool for the listener in musicology!!! Connection to Deutsch-Hartmann-Levitin[This is Your Brain on Music,(2006)] brain/mind-barrier brain/mind-music connection is both subtle and compelling and immediate!!!
Segregating gas from melt: an experimental study of the Ostwald ripening of vapor bubbles in magmas
Lautze, Nicole C.; Sisson, Thomas W.; Mangan, Margaret T.; Grove, Timothy L.
2011-01-01
Diffusive coarsening (Ostwald ripening) of H2O and H2O-CO2 bubbles in rhyolite and basaltic andesite melts was studied with elevated temperature–pressure experiments to investigate the rates and time spans over which vapor bubbles may enlarge and attain sufficient buoyancy to segregate in magmatic systems. Bubble growth and segregation are also considered in terms of classical steady-state and transient (non-steady-state) ripening theory. Experimental results are consistent with diffusive coarsening as the dominant mechanism of bubble growth. Ripening is faster in experiments saturated with pure H2O than in those with a CO2-rich mixed vapor, probably due to faster diffusion of H2O than CO2 through the melt. None of the experimental series followed the t^(1/3) increase in mean bubble radius and t^(-1) decrease in bubble number density predicted by classical steady-state ripening theory. Instead, products are interpreted as resulting from transient regime ripening. Application of transient regime theory suggests that bubbly magmas may require from days to 100 years to reach steady-state ripening conditions. Experimental results, as well as theory for steady-state ripening of bubbles that are immobile or undergoing buoyant ascent, indicate that diffusive coarsening efficiently eliminates micron-sized bubbles and would produce mm-sized bubbles in 10^2–10^4 years in crustal magma bodies. Once bubbles attain mm sizes, their calculated ascent rates are sufficient that they could transit multiple kilometers over hundreds to thousands of years through mafic and silicic melt, respectively. These results show that diffusive coarsening can facilitate transfer of volatiles through, and from, magmatic systems by creating bubbles sufficiently large for rapid ascent.
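For reference, the steady-state (LSW) ripening law the experiments are compared against, mean radius growing as R^3 = R0^3 + K*t and hence a t^(1/3) exponent at long times, can be checked numerically. This is a hedged sketch; the constants `r0` and `k` are arbitrary illustration values, not fitted to the experiments.

```python
import numpy as np

def lsw_radius(t, r0, k):
    """Steady-state (LSW) ripening: mean radius grows as R^3 = R0^3 + K*t."""
    return (r0**3 + k * t) ** (1.0 / 3.0)

# At long times (K*t >> R0^3) the log-log growth exponent approaches 1/3
t = np.logspace(2, 6, 50)              # arbitrary time units
r = lsw_radius(t, r0=1.0, k=1e-3)
slope = np.polyfit(np.log(t[-10:]), np.log(r[-10:]), 1)[0]
print(round(slope, 2))  # ≈ 0.33 once the initial radius is negligible
```

The companion prediction, bubble number density falling as t^(-1), follows from mass conservation under the same scaling.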
NASA Astrophysics Data System (ADS)
Holman, Benjamin R.
In recent years, revolutionary "hybrid" or "multi-physics" methods of medical imaging have emerged. By combining two or three different types of waves these methods overcome limitations of classical tomography techniques and deliver otherwise unavailable, potentially life-saving diagnostic information. Thermoacoustic (and photoacoustic) tomography is the most developed multi-physics imaging modality. Thermo- and photoacoustic tomography require reconstructing initial acoustic pressure in a body from time series of pressure measured on a surface surrounding the body. For the classical case of free space wave propagation, various reconstruction techniques are well known. However, some novel measurement schemes place the object of interest between reflecting walls that form a de facto resonant cavity. In this case, known methods cannot be used. In chapter 2 we present a fast iterative reconstruction algorithm for measurements made at the walls of a rectangular reverberant cavity with a constant speed of sound. We prove the convergence of the iterations under a certain sufficient condition, and demonstrate the effectiveness and efficiency of the algorithm in numerical simulations. In chapter 3 we consider the more general problem of an arbitrarily shaped resonant cavity with a non-constant speed of sound and present the gradual time reversal method for computing solutions to the inverse source problem. It consists in solving back in time on the interval [0, T] the initial/boundary value problem for the wave equation, with the Dirichlet boundary data multiplied by a smooth cutoff function. If T is sufficiently large one obtains a good approximation to the initial pressure; in the limit of large T such an approximation converges (under certain conditions) to the exact solution.
Smith, Kyle K G; Poulsen, Jens Aage; Nyman, Gunnar; Rossky, Peter J
2015-06-28
We develop two classes of quasi-classical dynamics that are shown to conserve the initial quantum ensemble when used in combination with the Feynman-Kleinert approximation of the density operator. These dynamics are used to improve the Feynman-Kleinert implementation of the classical Wigner approximation for the evaluation of quantum time correlation functions known as Feynman-Kleinert linearized path-integral. As shown, both classes of dynamics are able to recover the exact classical and high temperature limits of the quantum time correlation function, while a subset is able to recover the exact harmonic limit. A comparison of the approximate quantum time correlation functions obtained from both classes of dynamics is made with the exact results for the challenging model problems of the quartic and double-well potentials. It is found that these dynamics provide a great improvement over the classical Wigner approximation, in which purely classical dynamics are used. In a special case, our first method becomes identical to centroid molecular dynamics.
NASA Astrophysics Data System (ADS)
Niccolini, Gianni; Manuello, Amedeo; Marchis, Elena; Carpinteri, Alberto
2017-07-01
The stability of an arch as a structural element in the thermal bath of King Charles Albert (Carlo Alberto) in the Royal Castle of Racconigi (on the UNESCO World Heritage List since 1997) was assessed by the acoustic emission (AE) monitoring technique with application of classical inversion methods to recorded AE data. First, damage source location by means of triangulation techniques and signal frequency analysis were carried out. Then, the recently introduced method of natural-time analysis was preliminarily applied to the AE time series in order to reveal a possible entrance point to a critical state of the monitored structural element. Finally, possible influence of the local seismic and microseismic activity on the stability of the monitored structure was investigated. The criterion for selecting relevant earthquakes was based on the estimation of the size of earthquake preparation zones. The presented results suggest the use of the AE technique as a tool for detecting both ongoing structural damage processes and microseismic activity during preparation stages of seismic events.
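The natural-time analysis mentioned above reduces an event series to normalized indices chi_k = k/N, weights each event by its normalized energy, and watches the variance kappa_1 = <chi^2> - <chi>^2 for an approach to criticality (values near 0.070 in the natural-time literature). A minimal, hedged sketch with invented inputs, not the authors' AE pipeline:

```python
import numpy as np

def natural_time_kappa1(energies):
    """Natural-time variance kappa_1 = <chi^2> - <chi>^2, where chi_k = k/N
    and each event k is weighted by its normalized energy p_k."""
    q = np.asarray(energies, dtype=float)
    n = len(q)
    chi = np.arange(1, n + 1) / n       # natural time of each event
    p = q / q.sum()                     # energy weights
    return np.sum(p * chi**2) - np.sum(p * chi) ** 2

# For equal-energy events kappa_1 approaches the uniform value 1/12 ≈ 0.0833;
# departures toward ≈ 0.070 are commonly read as an approach to criticality.
print(round(natural_time_kappa1(np.ones(1000)), 3))  # → 0.083
```

In practice the energies would be the recorded AE event amplitudes (squared), and kappa_1 would be tracked as the event window grows.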
Probability evolution method for exit location distribution
NASA Astrophysics Data System (ADS)
Zhu, Jinjie; Chen, Zhen; Liu, Xianbin
2018-03-01
The exit problem in the framework of large deviation theory has been a hot topic in the past few decades. The most probable escape path in the weak-noise limit has been clarified by the Freidlin-Wentzell action functional. However, noise in real physical systems cannot be arbitrarily small, and noise with finite strength may induce nontrivial phenomena, such as noise-induced shift and noise-induced saddle-point avoidance. Traditional Monte Carlo simulation of noise-induced escape takes exponentially large time as the noise approaches zero, with the majority of the time wasted on uninteresting wandering around the attractors. In this paper, a new method is proposed to decrease the escape simulation time by an exponentially large factor by introducing a series of interfaces and applying reinjection on them. This method can be used to calculate the exit location distribution. It is verified on two classical examples and compared with theoretical predictions. The results show that the method performs well for weak noise but may exhibit certain deviations for large noise. Finally, some possible ways to improve our method are discussed.
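The weak-noise scaling underlying the exit problem is set by the Freidlin-Wentzell action; for a one-dimensional gradient system dx = -V'(x) dt + sqrt(eps) dW the action of the most probable escape path reduces to twice the potential barrier, so the mean exit time scales as exp(S/eps). A minimal check on the classical double well (this illustrates the theory the paper starts from, not the proposed interface-reinjection method):

```python
def fw_action(v, x_min, x_saddle):
    """Freidlin-Wentzell action of the most probable escape path in a 1D
    gradient system: S = 2 * (V(saddle) - V(minimum))."""
    return 2.0 * (v(x_saddle) - v(x_min))

v = lambda x: x**4 / 4 - x**2 / 2        # classical double-well potential
s = fw_action(v, x_min=-1.0, x_saddle=0.0)
print(s)  # 0.5 = 2 * (barrier height of 1/4)
```

The exp(S/eps) cost of waiting for an escape is exactly what interface methods sidestep by reinjecting trajectories instead of letting them wander near the attractor.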
Is the local linearity of space-time inherited from the linearity of probabilities?
NASA Astrophysics Data System (ADS)
Müller, Markus P.; Carrozza, Sylvain; Höhn, Philipp A.
2017-02-01
The appearance of linear spaces, describing physical quantities by vectors and tensors, is ubiquitous in all of physics, from classical mechanics to the modern notion of local Lorentz invariance. However, as natural as this seems to the physicist, most computer scientists would argue that something like a ‘local linear tangent space’ is not very typical and in fact a quite surprising property of any conceivable world or algorithm. In this paper, we take the perspective of the computer scientist seriously, and ask whether there could be any inherently information-theoretic reason to expect this notion of linearity to appear in physics. We give a series of simple arguments, spanning quantum information theory, group representation theory, and renormalization in quantum gravity, that supports a surprising thesis: namely, that the local linearity of space-time might ultimately be a consequence of the linearity of probabilities. While our arguments involve a fair amount of speculation, they have the virtue of being independent of any detailed assumptions on quantum gravity, and they are in harmony with several independent recent ideas on emergent space-time in high-energy physics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oberreuter, Johannes M., E-mail: johannes.oberreuter@theorie.physik.uni-goettingen.de; Homrighausen, Ingo; Kehrein, Stefan
We study the time evolution of entanglement in a new quantum version of the Kac ring, where two spin chains become dynamically entangled by quantum gates, which are used instead of the classical markers. The features of the entanglement evolution are best understood by using knowledge about the behavior of an ensemble of classical Kac rings. For instance, the recurrence time of the quantum many-body system is twice the length of the chain and “thermalization” only occurs on time scales much smaller than the dimension of the Hilbert space. The model thus elucidates the relation between the results of measurements in quantum and classical systems: While in classical systems repeated measurements are performed over an ensemble of systems, the corresponding result is obtained by measuring the same quantum system prepared in an appropriate superposition repeatedly.
Continuous-Time Classical and Quantum Random Walk on Direct Product of Cayley Graphs
NASA Astrophysics Data System (ADS)
Salimi, S.; Jafarizadeh, M. A.
2009-06-01
In this paper we define the direct product of graphs and give a recipe for obtaining the probability of observing the particle on the vertices in continuous-time classical and quantum random walks. In this recipe, the probability of observing the particle on the direct product graph is obtained by multiplying the probabilities on the corresponding subgraphs, a method useful for determining the probability of walks on complicated graphs. Using this method, we calculate the probabilities of continuous-time classical and quantum random walks on many finite direct products of Cayley graphs (complete cycle, complete Kn, charter and n-cube). We also find that the classical walk reaches the stationary uniform distribution as t → ∞, whereas the quantum walk does not always do so.
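The multiplicative recipe is easy to verify numerically for the continuous-time classical walk: if the Laplacian of the product graph is the Kronecker sum L1 ⊗ I + I ⊗ L2, the heat kernel exp(-Lt) factorizes and vertex probabilities multiply across factors. The sketch below checks this for two cycles; note that it uses the Cartesian-product Laplacian, which is an assumption about the product convention intended in the abstract.

```python
import numpy as np

def cycle_laplacian(n):
    """Graph Laplacian L = D - A of the n-cycle."""
    a = np.zeros((n, n))
    for i in range(n):
        a[i, (i + 1) % n] = a[i, (i - 1) % n] = 1.0
    return np.diag(a.sum(axis=1)) - a

def heat_kernel(lap, t):
    """exp(-L t) via the symmetric eigendecomposition."""
    w, v = np.linalg.eigh(lap)
    return (v * np.exp(-w * t)) @ v.T

t = 0.7
l1, l2 = cycle_laplacian(4), cycle_laplacian(5)
p1 = heat_kernel(l1, t)[0]          # walk started at vertex 0 of the 4-cycle
p2 = heat_kernel(l2, t)[0]          # walk started at vertex 0 of the 5-cycle
# Laplacian of the product graph: L = L1 (x) I + I (x) L2
lp = np.kron(l1, np.eye(5)) + np.kron(np.eye(4), l2)
p12 = heat_kernel(lp, t)[0]
print(np.allclose(p12, np.kron(p1, p2)))  # probabilities multiply across factors
```

The same factorization holds for the quantum walk amplitudes (with exp(-iHt) in place of exp(-Lt)), since the two Kronecker-sum terms commute.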
Chiang, H-S; Huang, R-Y; Weng, P-W; Mau, L-P; Tsai, Y-W C; Chung, M-P; Chung, C-H; Yeh, H-W; Shieh, Y-S; Cheng, W-C
2018-03-01
Current bibliometric analyses of evolving trends in research scope categories across different time periods using the H-classics method in implantology are considerably limited. The purpose of this study was to identify the classic articles in implantology and to analyse their bibliometric characteristics and associated factors over the past four decades. H-classics in implantology were identified within four time periods between 1977 and 2016, based on the h-index from the Scopus® database. For each article, the principal bibliometric parameters of authorship, geographic origin, country origin, institute origin, collaboration, centralisation, article type, scope of study and other associated factors were analysed in the four time periods. A significant increase in the mean number of authors per H-classic was found across time. Europe and North America were the most productive regions and steadily dominated the field in each time period. Author collaboration, both international and inter-institutional, increased significantly across time. A significant decentralisation in authorships, institutes and journals was noted over the past four decades. The journal Clinical Oral Implants Research has grown in importance for almost 30 years (1987-2016). Research on complications and peri-implant infection/pathology/therapy increased in production throughout each period. This is the first study to evaluate research trends in implantology over the past 40 years using the H-classics method, which, by analysing principal bibliometric characteristics, reflects a historical perspective on the evolutionary mainstream of the field. The prominence of research regarding complications may forecast innovative advancements in the future. © 2018 John Wiley & Sons Ltd.
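The H-classics selection rule is mechanical: compute the field's h-index, then keep the papers cited at least h times. A minimal sketch with made-up titles and citation counts:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def h_classics(papers):
    """H-classics: the papers cited at least as often as the field's h-index."""
    h = h_index([c for _, c in papers])
    return [title for title, c in papers if c >= h]

papers = [("A", 10), ("B", 8), ("C", 5), ("D", 4), ("E", 3)]
print(h_index([c for _, c in papers]))  # → 4
print(h_classics(papers))               # → ['A', 'B', 'C', 'D']
```

Applied per time period, as in the study, this yields the evolving core of classic articles whose bibliometric parameters are then compared.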
a Classical Isodual Theory of Antimatter and its Prediction of Antigravity
NASA Astrophysics Data System (ADS)
Santilli, Ruggero Maria
An inspection of the contemporary physics literature reveals that, while matter is treated at all levels of study, from Newtonian mechanics to quantum field theory, antimatter is solely treated at the level of second quantization. For the purpose of initiating the restoration of full equivalence in the treatment of matter and antimatter in due time, and as the classical foundations of an axiomatically consistent inclusion of gravitation in unified gauge theories recently appeared elsewhere, in this paper we present a classical representation of antimatter which begins at the primitive Newtonian level with corresponding formulations at all subsequent levels. By recalling that charge conjugation of particles into antiparticles is antiautomorphic, the proposed theory of antimatter is based on a new map, called isoduality, which is also antiautomorphic (and more generally, antiisomorphic), yet it is applicable beginning at the classical level and then persists at the quantum level where it becomes equivalent to charge conjugation. We therefore present, apparently for the first time, the classical isodual theory of antimatter, we identify the physical foundations of the theory as being the novel isodual Galilean, special and general relativities, and we show the compatibility of the theory with all available classical experimental data on antimatter. We identify the classical foundations of the prediction of antigravity for antimatter in the field of matter (or vice-versa) without any claim on its validity, and defer its resolution to specifically identified experiments. We identify the novel, classical, isodual electromagnetic waves which are predicted to be emitted by antimatter, the so-called space-time machine based on a novel non-Newtonian geometric propulsion, and other implications of the theory. 
We also introduce, apparently for the first time, the isodual space and time inversions and show that they are nontrivially different from the conventional ones, thus offering a possibility for the future resolution of whether far-away galaxies and quasars are made up of matter or of antimatter. The paper ends with the indication that these studies are in their infancy, and indicates some of the open problems. To avoid a prohibitive length, the paper is restricted to the classical treatment, while studies on operator profiles are treated elsewhere.
Large Impact Features on Europa: Results of the Galileo Nominal Mission
NASA Technical Reports Server (NTRS)
Moore, Jeffrey M.; Asphaug, Erik; Sullivan, Robert J.; Klemaszewski, James E.; Bender, Kelly C.; Greeley, Ronald; Geissler, Paul E.; McEwen, Alfred S.; Turtle, Elizabeth P.; Phillips, Cynthia B.
1998-01-01
The Galileo Orbiter examined several impact features on Europa at considerably better resolution than was possible from Voyager. The new data allow us to describe the morphology and infer the geology of the largest impact features on Europa, which are probes into the crust. We observe two basic types of large impact features: (1) "classic" impact craters that grossly resemble well-preserved lunar craters of similar size but are more topographically subdued (e.g., Pwyll) and (2) very flat circular features that lack the basic topographic structures of impact craters such as raised rims, a central depression, or central peaks, and which largely owe their identification as impact features to the field of secondary craters radially sprayed about them (e.g., Callanish). Our interpretation is that the classic craters (all <30 km diameter) formed entirely within a solid target at least 5 to 10 km thick that exhibited brittle behavior on time scales of the impact events. Some of the classic craters have a more subdued topography than fresh craters of similar size on other icy bodies such as Ganymede and Callisto, probably due to the enhanced viscous relaxation produced by a steeper thermal gradient on Europa. Pedestal ejecta facies on Europa (and Ganymede) may be produced by the relief-flattening movement of plastically deforming but otherwise solid ice that was warm at the time of emplacement. Callanish and Tyre do not appear to be larger and even more viscously relaxed versions of the classic craters; rather they display totally different morphologies such as distinctive textures and a series of large concentric structural rings cutting impact-feature-related materials. Impact simulations suggest that the distinctive morphologies would not be produced by impact into a solid ice target, but may be explained by impact into an ice layer approximately 10 to 15 km thick overlying a low-viscosity material such as water. 
The very wide (near antipodal) separation of Callanish and Tyre implies that approximately 10-15 km may have been the global average thickness of the rigid crust of Europa when these impacts occurred. The absence of detectable craters superposed on the interior deposits of Callanish suggests that it is geologically young (<10^8 years). Hence, it seems likely that our preliminary conclusions about the subsurface structure of Europa apply to the current day.
Large Impact Features on Europa: Results of the Galileo Nominal Mission
Moore, Jeffrey M.; Asphaug, E.; Sullivan, R.J.; Klemaszewski, J.E.; Bender, K.C.; Greeley, R.; Geissler, P.E.; McEwen, A.S.; Turtle, E.P.; Phillips, C.B.; Tufts, B.R.; Head, J. W.; Pappalardo, R.T.; Jones, K.B.; Chapman, C.R.; Belton, M.J.S.; Kirk, R.L.; Morrison, D.
1998-01-01
The Galileo Orbiter examined several impact features on Europa at considerably better resolution than was possible from Voyager. The new data allow us to describe the morphology and infer the geology of the largest impact features on Europa, which are probes into the crust. We observe two basic types of large impact features: (1) "classic" impact craters that grossly resemble well-preserved lunar craters of similar size but are more topographically subdued (e.g., Pwyll) and (2) very flat circular features that lack the basic topographic structures of impact craters such as raised rims, a central depression, or central peaks, and which largely owe their identification as impact features to the field of secondary craters radially sprayed about them (e.g., Callanish). Our interpretation is that the classic craters (all <30 km diameter) formed entirely within a solid target at least 5 to 10 km thick that exhibited brittle behavior on time scales of the impact events. Some of the classic craters have a more subdued topography than fresh craters of similar size on other icy bodies such as Ganymede and Callisto, probably due to the enhanced viscous relaxation produced by a steeper thermal gradient on Europa. Pedestal ejecta facies on Europa (and Ganymede) may be produced by the relief-flattening movement of plastically deforming but otherwise solid ice that was warm at the time of emplacement. Callanish and Tyre do not appear to be larger and even more viscously relaxed versions of the classic craters; rather they display totally different morphologies such as distinctive textures and a series of large concentric structural rings cutting impact-feature-related materials. Impact simulations suggest that the distinctive morphologies would not be produced by impact into a solid ice target, but may be explained by impact into an ice layer ~10 to 15 km thick overlying a low-viscosity material such as water. 
The very wide (near antipodal) separation of Callanish and Tyre implies that ~10-15 km may have been the global average thickness of the rigid crust of Europa when these impacts occurred. The absence of detectable craters superposed on the interior deposits of Callanish suggests that it is geologically young (<10^8 years). Hence, it seems likely that our preliminary conclusions about the subsurface structure of Europa apply to the current day. © 1998 Academic Press.
The Importance of Hydrological Signature and Its Recurring Dynamics
NASA Astrophysics Data System (ADS)
Wendi, D.; Marwan, N.; Merz, B.
2017-12-01
Temporal changes in hydrology are known to be challenging to detect and attribute, due to multiple drivers that include complex processes that are non-stationary and highly variable. These drivers, such as human-induced climate change, natural climate variability, implementation of flood defenses, river training, and land use change, can act variably across space-time scales and influence or mask each other. Moreover, data depicting these drivers are often not available. One conventional approach to analyzing change is based on discrete points of magnitude (e.g. the frequency of recurring extreme discharge), is often linearly quantified, and hence does not reveal potential change in the hydrological process. Moreover, discharge series are often subject to measurement errors, such as rating curve error, especially in the case of flood peaks, where observations are derived through extrapolation. In this study, the system dynamics inferred from the hydrological signature (i.e. the shape of the hydrograph) is emphasized. One example is to see whether certain flood dynamics (instead of flood peaks) in recent years had also occurred in the past (or whether they are extraordinary), and if so, what their recurrence rate is and whether there has been a shift in their occurrence in time or seasonality (e.g. an earlier snowmelt-dominated flood). The utilization of the hydrological signature here is extended beyond the classical hydrological indices such as base flow index, recession and rising limb slope, and time to peak. It is in fact all these characteristics combined, i.e. the hydrograph from start to end. A recurrence plot is used as a method to quantify and visualize the recurring hydrological signature through its phase space trajectories, usually in dimensions above 2. Such phase space trajectories are constructed by embedding the time series in a set of time-delayed coordinates (the embedding dimension).
Since the method is rather novel in the hydrological community, the study presents an overview of and a guideline to the method, with an application example analyzing change in the hydrological signature and a discussion of its benefits and limitations.
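The recurrence-plot construction described above can be sketched in a few lines: embed the series in time-delayed coordinates, then threshold pairwise distances. This is a minimal illustration assuming NumPy, with arbitrary dimension, delay, and threshold values (not the authors' code or settings):

```python
import numpy as np

def embed(x, dim, delay):
    """Time-delay embedding of a 1-D series into `dim`-dimensional phase space."""
    n = len(x) - (dim - 1) * delay
    return np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])

def recurrence_matrix(x, dim=3, delay=2, eps=0.2):
    """Binary recurrence matrix: R[i, j] = 1 when embedded states i and j
    are closer than the threshold eps (Euclidean norm)."""
    v = embed(np.asarray(x, dtype=float), dim, delay)
    d = np.linalg.norm(v[:, None, :] - v[None, :, :], axis=-1)
    return (d < eps).astype(int)

# A periodic "hydrograph-like" signal recurs along lines parallel to the
# main diagonal of the recurrence plot; a change in the underlying dynamics
# shows up as a change in this diagonal-line structure.
t = np.linspace(0, 8 * np.pi, 400)
R = recurrence_matrix(np.sin(t))
```

In practice the embedding dimension and delay are chosen from the data (e.g. via false nearest neighbours and mutual information), not fixed a priori as here.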
On the substructure of the cosmological constant
NASA Astrophysics Data System (ADS)
Dvali, G.; Gomez, C.; Zell, S.
We summarize the findings of our paper arXiv:1701.08776 [hep-th]. We start by defining the quantum break-time. Once one understands a classical solution as the expectation value of an underlying quantum state, the quantum break-time emerges as the time-scale after which the true quantum evolution departs from the classical mean field evolution. We apply this idea to de Sitter space. Following earlier work, we construct a simple model of a spin-2 field which for some time reproduces the de Sitter metric and simultaneously allows for its well-defined representation as a coherent quantum state of gravitons. The mean occupation number N of background gravitons turns out to be equal to the de Sitter horizon area in Planck units, while their frequency is given by the de Sitter Hubble parameter. In the semi-classical limit, we show that the model reproduces all semi-classical calculations in de Sitter, such as thermal Gibbons-Hawking radiation, all in the language of quantum S-matrix scatterings and decays of coherent state gravitons. Most importantly, this framework allows us to capture the 1/N effects of back reaction to which the usual semi-classical treatment is blind. They violate the de Sitter symmetry and lead to a finite quantum break-time of the de Sitter state equal to the de Sitter radius times N. We also point out that the quantum break-time is inversely proportional to the number of particle species in the theory. Thus, the quantum break-time imposes the following consistency condition: older and species-richer universes must have smaller cosmological constants. For the maximal, phenomenologically acceptable number of species, the observed cosmological constant would saturate this bound if our Universe were 10^100 years old in its entire classical history.
Tracer experiments in periodical heterogeneous model porous medium
NASA Astrophysics Data System (ADS)
Majdalani, Samer; Delenne, Carole; Guinot, Vincent
2017-06-01
It is established that solute transport in homogeneous porous media follows a classical 'S'-shaped breakthrough curve that can easily be modelled by a convection-dispersion equation. In this study, we designed a Model Heterogeneous Porous Medium (MHPM) with a high degree of heterogeneity, in which the breakthrough curve does not follow the classical 'S' shape. The contrast in porosity is obtained by placing a cylindrical cavity (100% porosity) inside a 40%-porosity medium composed of 1 mm glass beads. Step tracer experiments are carried out by injecting salty water into the study column, initially containing deionised water, until the outlet concentration stabilises at the input value. Several replicates of the experiment were conducted for n = 1 to 6 MHPM placed in series. The total of 116 experiments gives a high-quality database allowing the assessment of experimental uncertainty. The experimental results show that the breakthrough curve is very different from the 'S' shape for small values of n; as n increases, the classical shape is progressively recovered.
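For reference, the classical 'S'-shaped breakthrough curve mentioned above is the step-input solution of the 1-D convection-dispersion equation. A minimal sketch using the leading (Ogata-Banks) term, with illustrative parameter values that are assumptions, not figures from the paper:

```python
from math import erfc, sqrt

def breakthrough(t, x=0.5, v=1e-4, D=1e-7):
    """Relative outlet concentration C/C0 for a step tracer input, from the
    1-D convection-dispersion equation (leading Ogata-Banks term).
    t: time [s]; x: column length [m]; v: pore velocity [m/s];
    D: dispersion coefficient [m^2/s]. All values are illustrative."""
    if t <= 0:
        return 0.0
    return 0.5 * erfc((x - v * t) / (2.0 * sqrt(D * t)))

# The curve rises monotonically from 0 to 1, passing through C/C0 = 0.5
# at the mean advective arrival time t = x / v.
```

Departures of measured curves from this shape, as observed for the heterogeneous medium at small n, indicate that a single convection-dispersion equation no longer describes the transport.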
Secure Communication via a Recycling of Attenuated Classical Signals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, IV, Amos M.
We describe a simple method of interleaving a classical and a quantum signal in a secure communication system at a single wavelength. The system transmits data encrypted via a one-time pad on a classical signal and produces a single-photon reflection of the encrypted signal. This attenuated signal can be used to observe eavesdroppers and produce fresh secret bits. The system can be secured against eavesdroppers, detects simple tampering or classical bit errors, produces more secret bits than it consumes, and does not require any entanglement or complex wavelength division multiplexing, thus making continuous secure two-way communication via one-time pads practical.
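The one-time-pad encryption carried on the classical signal is a plain XOR of message bytes with pad bytes; decryption is the same operation. A minimal sketch (illustrative only, not the authors' implementation):

```python
import secrets

def otp_encrypt(data: bytes, pad: bytes) -> bytes:
    """One-time-pad: XOR each data byte with a pad byte. The pad must be
    truly random, at least as long as the message, and never reused;
    applying the same function again with the same pad decrypts."""
    assert len(pad) >= len(data), "pad must cover the whole message"
    return bytes(b ^ k for b, k in zip(data, pad))

# The scheme is information-theoretically secure only while fresh pad bits
# are available, which is why a system must produce more secret bits than
# it consumes to sustain continuous two-way communication.
msg = b"classical channel payload"
pad = secrets.token_bytes(len(msg))
ct = otp_encrypt(msg, pad)
```

The quantum-optical part of the system (the single-photon reflection used to detect eavesdropping and refresh the pad) is not modelled here.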
Secure Communication via a Recycling of Attenuated Classical Signals
Smith, IV, Amos M.
2017-01-12
We describe a simple method of interleaving a classical and a quantum signal in a secure communication system at a single wavelength. The system transmits data encrypted via a one-time pad on a classical signal and produces a single-photon reflection of the encrypted signal. This attenuated signal can be used to observe eavesdroppers and produce fresh secret bits. The system can be secured against eavesdroppers, detects simple tampering or classical bit errors, produces more secret bits than it consumes, and does not require any entanglement or complex wavelength division multiplexing, thus making continuous secure two-way communication via one-time pads practical.
Hidden Semi-Markov Models and Their Application
NASA Astrophysics Data System (ADS)
Beyreuther, M.; Wassermann, J.
2008-12-01
In the framework of detection and classification of seismic signals there are several different approaches. Our choice for a more robust detection and classification algorithm is to adopt Hidden Markov Models (HMM), a technique showing major success in speech recognition. HMM provide a powerful tool to describe highly variable time series based on a doubly stochastic model and therefore allow for a broader class description than, e.g., template-based pattern matching techniques. Being a fully probabilistic model, HMM directly provide a confidence measure for an estimated classification. Furthermore, and in contrast to classic artificial neural networks or support vector machines, HMM incorporate the time dependence explicitly in the models, thus providing an adequate representation of the seismic signal. Like the majority of detection algorithms, HMM are not based on the time- and amplitude-dependent seismogram itself but on features estimated from the seismogram which characterize the different classes. Features, or in other words characteristic functions, are e.g. the sonogram bands, instantaneous frequency, instantaneous bandwidth, or centroid time. In this study we apply continuous Hidden Semi-Markov Models (HSMM), an extension of continuous HMM. The state duration probability of an HMM is an exponentially decaying function of time, which is not a realistic representation of the duration of an earthquake. In contrast, HSMM use Gaussians as duration probabilities, which results in a more adequate model. The HSMM detection and classification system is running online as an EARTHWORM module at the Bavarian Earthquake Service. Here the signals to be classified differ simply in epicentral distance. This makes it possible to easily decide whether a classification is correct or wrong, and thus allows a better evaluation of the advantages and disadvantages of the proposed algorithm.
The evaluation is based on several months of continuous data, and the results are additionally compared to the previously published discrete HMM, continuous HMM, and a classic STA/LTA. The intermediate evaluation results are very promising.
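The duration-modelling difference between HMM and HSMM described above can be made concrete: a plain HMM's self-transition implies a geometric (exponentially decaying) state-duration law, while an HSMM models the duration explicitly. A minimal sketch, with illustrative parameter values that are assumptions, not the study's settings:

```python
import math

def hmm_duration_pmf(d, p_stay=0.8):
    """Implicit state-duration law of a plain HMM: geometric, i.e. an
    exponentially decaying function of the duration d (in samples)."""
    return (1.0 - p_stay) * p_stay ** (d - 1)

def hsmm_duration_pmf(d, mean=20.0, std=5.0):
    """Explicit HSMM duration model: a (discretized) Gaussian that peaks
    near a typical event length instead of at d = 1."""
    return math.exp(-0.5 * ((d - mean) / std) ** 2) / (std * math.sqrt(2.0 * math.pi))

# The geometric law is maximal at d = 1, so a plain HMM implicitly favours
# the shortest possible state duration; the Gaussian peaks at the expected
# event length, which better matches earthquake signal durations.
best_hmm = max(range(1, 100), key=hmm_duration_pmf)
best_hsmm = max(range(1, 100), key=hsmm_duration_pmf)
```

This is precisely why the abstract calls the Gaussian duration "a more adequate model": the most probable duration is no longer always one sample.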
Sweat, Noah W; Bates, Larry W; Hendricks, Peter S
2016-01-01
Developing methods for improving creativity is of broad interest. Classic psychedelics may enhance creativity; however, the underlying mechanisms of action are unknown. This study was designed to assess whether a relationship exists between naturalistic classic psychedelic use and heightened creative problem-solving ability and, if so, whether this is mediated by lifetime mystical experience. Participants (N = 68) completed a survey battery assessing lifetime mystical experience and circumstances surrounding the most memorable experience. They were then administered a functional fixedness task in which faster completion times indicate greater creative problem-solving ability. Participants reporting classic psychedelic use concurrent with mystical experience (n = 11) exhibited significantly faster times on the functional fixedness task (Cohen's d = -.87; large effect) and significantly greater lifetime mystical experience (Cohen's d = .93; large effect) than participants not reporting classic psychedelic use concurrent with mystical experience. However, lifetime mystical experience was unrelated to completion times on the functional fixedness task (standardized β = -.06), and was therefore not a significant mediator. Classic psychedelic use may increase creativity independent of its effects on mystical experience. Maximizing the likelihood of mystical experience need not be a goal of psychedelic interventions designed to boost creativity.
Historical Perspectives and Future Needs in the Development of the Soil Series Concept
NASA Astrophysics Data System (ADS)
Beaudette, Dylan E.; Brevik, Eric C.; Indorante, Samuel J.
2016-04-01
The soil series concept is an ever-evolving understanding of soil profile observations, their connection to the landscape, and functional limits on the range in characteristics that affect management. Historically, the soil series has played a pivotal role in the development of soil-landscape theory, modern soil survey methods, and concise delivery of soils information to the end-user; in other words, the soil series is the palette from which soil survey reports are crafted. Over the last 20 years the soil series has received considerable criticism as a means of soil information organization (soil survey development) and delivery (end-user application of soil survey data), with increasing pressure (internal and external) to retire the soil series. We propose that a modern re-examination of soil series information could help address several of the long-standing critiques of soil survey: consistency across survey vintages and political divisions, and more robust estimates of soil properties and associated uncertainty. A new library of soil series data would include classic narratives describing morphology and management, quantitative descriptions of soil properties and their ranges, graphical depiction of the relationships between associated soil series, block diagrams illustrating soil-landscape models, maps of series distribution, and a probabilistic representation of a "typical" soil profile. These data would be derived from re-correlation of existing morphologic and characterization data informed by modern statistical methods and regional expertise.
Nonequilibrium dynamics of the O(N) model on dS3 and AdS crunches
NASA Astrophysics Data System (ADS)
Kumar, S. Prem; Vaganov, Vladislav
2018-03-01
We study the nonperturbative quantum evolution of the interacting O(N) vector model at large N, formulated on a spatial two-sphere, with time-dependent couplings which diverge at finite time. This model, the so-called "E-frame" theory, is related via a conformal transformation to the interacting O(N) model in three-dimensional global de Sitter spacetime with time-independent couplings. We show that with a purely quartic, relevant deformation the quantum evolution of the E-frame model is regular even when the classical theory is rendered singular at the end of time by the diverging coupling. Time evolution drives the E-frame theory to the large-N Wilson-Fisher fixed point when the classical coupling diverges. We study the quantum evolution numerically for a variety of initial conditions and demonstrate the finiteness of the energy at the classical "end of time". With an additional (time-dependent) mass deformation, quantum backreaction lowers the mass, with a putative smooth time evolution only possible in the limit of infinite quartic coupling. We discuss the relevance of these results for the resolution of crunch singularities in AdS geometries dual to E-frame theories with a classical gravity dual.
Quantum Speed Limits across the Quantum-to-Classical Transition
NASA Astrophysics Data System (ADS)
Shanahan, B.; Chenu, A.; Margolus, N.; del Campo, A.
2018-02-01
Quantum speed limits set an upper bound to the rate at which a quantum system can evolve. Adopting a phase-space approach, we explore quantum speed limits across the quantum-to-classical transition and identify equivalent bounds in the classical world. As a result, and contrary to common belief, we show that speed limits exist for both quantum and classical systems. As in the quantum domain, classical speed limits are set by a given norm of the generator of time evolution.
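The bounds the abstract refers to can be stated compactly. The two textbook quantum speed limits, quoted here for context as standard results rather than formulas from the paper, bound the minimum time to evolve to an orthogonal state:

```latex
% Mandelstam-Tamm bound: set by the energy spread of the state
\tau_{\perp} \;\ge\; \frac{\pi \hbar}{2\,\Delta H},
\qquad \Delta H = \sqrt{\langle H^{2}\rangle - \langle H\rangle^{2}};
% Margolus-Levitin bound: set instead by the mean energy above the
% ground-state energy E_{0}
\tau_{\perp} \;\ge\; \frac{\pi \hbar}{2\,\bigl(\langle H\rangle - E_{0}\bigr)}.
```

The paper's phase-space approach identifies classical analogues of such bounds, with a norm of the generator of time evolution playing the role of the energy scale.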
Signal processing for molecular and cellular biological physics: an emerging field.
Little, Max A; Jones, Nick S
2013-02-13
Recent advances in our ability to watch the molecular and cellular processes of life in action, such as atomic force microscopy, optical tweezers and Förster fluorescence resonance energy transfer, raise challenges for digital signal processing (DSP) of the resulting experimental data. This article explores the unique properties of such biophysical time series that set them apart from other signals, such as the prevalence of abrupt jumps and steps, multi-modal distributions and autocorrelated noise. It exposes the problems with classical linear DSP algorithms applied to this kind of data, and describes new nonlinear and non-Gaussian algorithms that are able to extract information that is of direct relevance to biological physicists. It is argued that these new methods applied in this context typify the nascent field of biophysical DSP. Practical experimental examples are supplied.
Pore Pressure Pulse Drove the 2012 Emilia (Italy) Series of Earthquakes
NASA Astrophysics Data System (ADS)
Pezzo, Giuseppe; De Gori, Pasquale; Lucente, Francesco Pio; Chiarabba, Claudio
2018-01-01
The 2012 Emilia earthquakes sequence is the first debated case in Italy of destructive event possibly induced by anthropic activity. During this sequence, two main earthquakes occurred separated by 9 days on contiguous thrust faults. Scientific commissions engaged by the Italian government reported complementary scenarios on the potential trigger mechanism ascribable to exploitation of a nearby oil field. In this study, we combine a refined geodetic source model constrained by precise aftershock locations and an improved tomographic model of the area to define the geometrical relation between the activated faults and investigate possible triggering mechanisms. An aftershock decay rate that deviates from the classical Omori-like pattern and
K2 Reveals Pulsed Accretion Driven by the 2 Myr Old Hot Jupiter CI Tau b
NASA Astrophysics Data System (ADS)
Biddle, Lauren I.; Johns-Krull, Christopher M.; Llama, Joe; Prato, Lisa; Skiff, Brian A.
2018-02-01
CI Tau is a young (∼2 Myr) classical T Tauri star located in the Taurus star-forming region. Radial velocity observations indicate it hosts a Jupiter-sized planet with an orbital period of approximately 9 days. In this work, we analyze time series of CI Tau’s photometric variability as seen by K2. The light curve reveals the stellar rotation period to be ∼6.6 days. Although there is no evidence that CI Tau b transits the host star, a ∼9 day signature is also present in the light curve. We believe this is most likely caused by planet–disk interactions that perturb the accretion flow onto the star, resulting in a periodic modulation of the brightness with the ∼9 day period of the planet’s orbit.
Signal processing for molecular and cellular biological physics: an emerging field
Little, Max A.; Jones, Nick S.
2013-01-01
Recent advances in our ability to watch the molecular and cellular processes of life in action, such as atomic force microscopy, optical tweezers and Förster fluorescence resonance energy transfer, raise challenges for digital signal processing (DSP) of the resulting experimental data. This article explores the unique properties of such biophysical time series that set them apart from other signals, such as the prevalence of abrupt jumps and steps, multi-modal distributions and autocorrelated noise. It exposes the problems with classical linear DSP algorithms applied to this kind of data, and describes new nonlinear and non-Gaussian algorithms that are able to extract information that is of direct relevance to biological physicists. It is argued that these new methods applied in this context typify the nascent field of biophysical DSP. Practical experimental examples are supplied. PMID:23277603
NASA Astrophysics Data System (ADS)
Shi, Jinfei; Zhu, Songqing; Chen, Ruwen
2017-12-01
An order selection method based on multiple stepwise regressions is proposed for the General Expression of Nonlinear Autoregressive (GNAR) model, which converts the model order problem into variable selection for a multiple linear regression equation. The partial autocorrelation function is adopted to define the linear terms in the GNAR model. The resulting linear model is set as the initial model, and the nonlinear terms are then introduced gradually. Statistics are chosen to assess the improvements contributed by both the newly introduced and previously included variables to the model characteristics, and these are used to determine which model variables to retain or eliminate. The optimal model is thus obtained through measurement of the data-fitting quality or significance testing. Simulation and classic time-series data experiments show that the proposed method is simple, reliable, and applicable to practical engineering.
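The role of the partial autocorrelation function in fixing the linear-term order can be illustrated directly: the PACF at lag k is the last coefficient of a least-squares AR(k) fit, and it cuts off after the true order. A minimal NumPy sketch (an illustration of the standard PACF-based order choice, not the authors' algorithm):

```python
import numpy as np

def pacf(x, max_lag):
    """Partial autocorrelation via successive least-squares AR(k) fits:
    the PACF at lag k is the last coefficient of the fitted AR(k) model."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    out = []
    for k in range(1, max_lag + 1):
        # Columns are the lagged series x[t-1], ..., x[t-k]
        X = np.column_stack([x[k - j - 1 : n - j - 1] for j in range(k)])
        y = x[k:]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        out.append(coef[-1])
    return np.array(out)

# Simulated AR(2) series: the sample PACF is clearly nonzero at lags 1-2
# and drops toward zero afterwards, suggesting linear-term order 2.
# Lags whose |PACF| fall inside roughly +/- 2/sqrt(n) are deemed negligible.
rng = np.random.default_rng(0)
x = np.zeros(2000)
for t in range(2, len(x)):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()
p = pacf(x, 5)
```

In the proposed method this linear fit is only the starting point; nonlinear terms are then added and tested stepwise.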
VizieR Online Data Catalog: BVI photometry of LMC bar variables (Di Fabrizio+, 2005)
NASA Astrophysics Data System (ADS)
di Fabrizio, L.; Clementini, G.; Maio, M.; Bragaglia, A.; Carretta, E.; Gratton, R.; Montegriffo, P.; Zoccali, M.
2005-01-01
We present the Johnson-Cousins B,V and I time series data obtained for 162 variable stars (135 RR Lyrae, 4 candidate Anomalous Cepheids, 11 Classical Cepheids, 11 eclipsing binaries and 1 delta Scuti star) in two 13x13 square arcmin areas close to the bar of the Large Magellanic Cloud. The photometric observations presented in this paper were carried out at the 1.54m Danish telescope located in La Silla, Chile, on the nights 4-7 January 1999, UT, and 23-24 January 2001, UT, respectively. In the paper we give coordinates, finding charts, periods, epochs, amplitudes, and mean quantities (intensity- and magnitude-averaged luminosities) of the variables with full coverage of the light variations, along with a discussion of the pulsation properties of the RR Lyrae stars in the sample. (8 data files).
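The distinction drawn above between intensity-averaged and magnitude-averaged luminosities is a standard one for variable stars: the former averages fluxes before converting back to a magnitude, the latter averages the magnitudes directly. A minimal sketch (the sample magnitudes are illustrative, not catalog values):

```python
import math

def magnitude_average(mags):
    """Magnitude-averaged mean: the plain arithmetic mean of the magnitudes."""
    return sum(mags) / len(mags)

def intensity_average(mags):
    """Intensity-averaged mean: convert each magnitude to a flux, average
    the fluxes, then convert back. For a variable star this is brighter
    (numerically smaller) than or equal to the magnitude-averaged mean."""
    mean_flux = sum(10.0 ** (-0.4 * m) for m in mags) / len(mags)
    return -2.5 * math.log10(mean_flux)

# Illustrative V-band points sampled over a light curve: the two means
# coincide only when the light curve is constant.
mags = [19.0, 19.4, 19.8]
```

The difference between the two means grows with the amplitude of the light curve, which is why catalogs of pulsating variables report both.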
A Comparative Study for Flow of Viscoelastic Fluids with Cattaneo-Christov Heat Flux.
Hayat, Tasawar; Muhammad, Taseer; Alsaedi, Ahmed; Mustafa, Meraj
2016-01-01
This article examines the impact of the Cattaneo-Christov heat flux in flows of viscoelastic fluids. Flow is generated by a linear stretching sheet. The influence of the thermal relaxation time in the considered heat flux model is examined. The mathematical formulation is presented under the boundary layer approach. Suitable transformations lead to a nonlinear differential system. Convergent series solutions for the velocity and temperature are achieved. The impacts of various influential parameters on the velocity and temperature are sketched and discussed. Numerical computations are also performed for the skin friction coefficient and heat transfer rate. Our findings reveal that the temperature profile has an inverse relationship with the thermal relaxation parameter and the Prandtl number. Further, the temperature profile and thermal boundary layer thickness are lower for the Cattaneo-Christov heat flux model than for the classical Fourier law of heat conduction.
Sabet-Peyman, Esfandiar J; Woodward, Julie A
2014-01-01
Orofacial granulomatosis is a relapsing nonnecrotizing granulomatous syndrome that classically presents with lip and perioral swelling. Over the years, several patients have been referred to the Duke Eye Center Oculoplastics Department for severe, progressive, recurrent eyelid swelling interfering with both their functional vision and their appearance. In this IRB-approved retrospective case series, we describe the clinical course of 5 such patients, including their presenting symptoms, diagnosis, and response to treatment. We hope that oculoplastics specialists will consider this entity in the differential diagnosis of periorbital edema and consider initiating localized anti-inflammatory treatment once the diagnosis has been made.
NASA Technical Reports Server (NTRS)
Zimmerman, M.
1979-01-01
The classical mechanics results for free precession that are needed to calculate the weak-field, slow-motion, quadrupole-moment gravitational waves are reviewed. Within that formalism, algorithms are given for computing the exact gravitational power radiated and waveforms produced by arbitrary rigid-body freely-precessing sources. The dominant terms are presented in series expansions of the waveforms for the case of an almost spherical object precessing with a small wobble angle. These series expansions, which retain the precise frequency dependence of the waves, may be useful for gravitational astronomers when freely-precessing sources begin to be observed.
Classic Writings on Instructional Technology. Volume 2. Instructional Technology Series.
ERIC Educational Resources Information Center
Ely, Donald P.; Plomp, Tjeerd
Selected for their influence on the field, their continued reference over the years, and the reputation of the authors, these 15 seminal papers are considered to be foundations in the field of instructional technology. Extending the purpose of the first volume to primary writings of the 70s, 80s, and early 90s, this work continues to document the…
ERIC Educational Resources Information Center
Eemeren, F. H. van; Grootendorst, R.
Suitable methods can be developed and instructional devices can be designed for the teaching of argumentation analysis to students of varying interests, ages, and capacities. Until 1950, the study of argumentation in the Netherlands was either purely practical or a continuation of the classic logic and rhetoric traditions. A number of new research…
ERIC Educational Resources Information Center
Linse, Barbara; Judd, Dick
Mexican and Central American cultures are a blend of Native American influences and Spanish traditions and religions. These are seen in aspects of Mexican and Central American celebrations. This book explores those celebrations through activities in art, folk and classical music, dances and fiestas. The book is organized into two sections to…
ERIC Educational Resources Information Center
Leduc, Aimee
1980-01-01
The French language article is the second in a series and describes the principles of classic conditioning which underlie attitude formation and change. The article also notes the many functions of self-concept attitudes in order to guide the choices of intervention in attitude learning. (Author/SB)
ERIC Educational Resources Information Center
Jones, Elizabeth; Reynolds, Gretchen
2011-01-01
Responding to current debates on the place of play in schools, the authors have extensively revised their groundbreaking book. They explain how and why play is a critical part of children's development, as well as the central role adults have to promote it. This classic textbook and popular practitioner resource offers systematic descriptions and…
ERIC Educational Resources Information Center
Clay, Matthew D.; McLeod, Eric J.
2012-01-01
Salicylic acid and its derivative, acetylsalicylic acid, are often encountered in introductory organic chemistry experiments, and mention is often made that salicylic acid was originally isolated from the bark of the willow tree. This biological connection, however, is typically not further pursued, leaving students with an impression that biology…
Effective dynamics of a classical point charge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polonyi, Janos, E-mail: polonyi@iphc.cnrs.fr
2014-03-15
The effective Lagrangian of a point charge is derived by eliminating the electromagnetic field within the framework of the classical closed time path formalism. The short distance singularity of the electromagnetic field is regulated by a UV cutoff. The Abraham-Lorentz force is recovered and its similarity to quantum anomalies is underlined. The full cutoff-dependent linearized equation of motion is obtained; no runaway trajectories are found, but the effective dynamics shows acausality if the cutoff is beyond the classical charge radius. The strength of the radiation reaction force displays a pole in its cutoff-dependence in a manner reminiscent of the Landau pole of perturbative QED. The similarity between the dynamical breakdown of time reversal invariance and dynamical symmetry breaking is pointed out. -- Highlights: •Extension of the classical action principle for dissipative systems. •New derivation of the Abraham-Lorentz force for a point charge. •Absence of a runaway solution of the Abraham-Lorentz force. •Acausality in classical electrodynamics. •Renormalization of classical electrodynamics of point charges.
Quantum and classical dissipation of charged particles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ibarra-Sierra, V.G.; Anzaldo-Meneses, A.; Cardoso, J.L.
2013-08-15
A Hamiltonian approach is presented to study the two-dimensional motion of damped electric charges in time-dependent electromagnetic fields. The classical and the corresponding quantum mechanical problems are solved for particular cases using canonical transformations applied to Hamiltonians for a particle with variable mass. Green's function is constructed and, from it, the motion of a Gaussian wave packet is studied in detail. Highlights: •Hamiltonian of a damped charged particle in time-dependent electromagnetic fields. •Exact Green's function of a charged particle in time-dependent electromagnetic fields. •Time evolution of a Gaussian wave packet of a damped charged particle. •Classical and quantum dynamics of a damped electric charge.
Joyce, C A; Zorich, B; Pike, S J; Barber, J C; Dennis, N R
1996-01-01
Fluorescence in situ hybridisation (FISH) and conventional chromosome analysis were performed on a series of 52 patients with classical Williams-Beuren syndrome (WBS), suspected WBS, or supravalvular aortic stenosis (SVAS). In the classical WBS group, 22/23 (96%) had a submicroscopic deletion of the elastin locus on chromosome 7, but the remaining patient had a unique interstitial deletion of chromosome 11 (del(11)(q13.5q14.2)). In the suspected WBS group 2/22 (9%) patients had elastin deletions but a third patient had a complex karyotype including a ring chromosome 22 with a deletion of the long arm (r(22)(p11-->q13)). In the SVAS group, 1/7 (14%) had an elastin gene deletion, despite having normal development and minimal signs of WBS. Overall, some patients with submicroscopic elastin deletions have fewer features of Williams-Beuren syndrome than those with other cytogenetic abnormalities. These results, therefore, emphasise the importance of a combined conventional and molecular cytogenetic approach to diagnosis and suggest that the degree to which submicroscopic deletions of chromosome 7 extend beyond the elastin locus may explain some of the phenotypic variability found in Williams-Beuren syndrome. PMID:9004128
Determining the structure of an optimal personnel profile for a transformed commission
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graniere, R.J.
1998-06-01
In the classic sociological sense, an organization such as a public utility commission is a social unit consisting of specific groupings constructed and reconstructed deliberately and with forethought to achieve specific goals. These organizational groupings, determined on the basis of rational divisions of labor, power, and communication, are designed with the objective of placing individuals into positions where they are expected to make the largest contribution toward achieving the organization's goals. It is reasonable, then, to conclude that proponents of the classical view had in mind a readily identifiable common ground among the organization's members that the organization exploits as it selects its goals. Recently, it has been argued that metaphors are an acceptable shorthand for this common ground, providing insight into the types of personnel an organization would find most suitable for assisting its efforts to reach its goals. This report is one of a series of reports on the transformation of public utility commissions. Previous reports in the series have focused on the transformation of a commission's culture, roles, and activities. This report focuses on the staffing dimension of the personnel mix needed to support these changes.
Extended generalized geometry and a DBI-type effective action for branes ending on branes
NASA Astrophysics Data System (ADS)
Jurčo, Branislav; Schupp, Peter; Vysoký, Jan
2014-08-01
Starting from the Nambu-Goto bosonic membrane action, we develop a geometric description suitable for p-brane backgrounds. With tools of generalized geometry we derive the pertinent generalization of the string open-closed relations to the p-brane case. Nambu-Poisson structures are used in this context to generalize the concept of semi-classical noncommutativity of D-branes governed by a Poisson tensor. We find a natural description of the correspondence of recently proposed commutative and noncommutative versions of an effective action for p-branes ending on a p′-brane. We calculate the power series expansion of the action in background independent gauge. Leading terms in the double scaling limit are given by a generalization of a (semi-classical) matrix model.
Buckling of Low Arches or Curved Beams of Small Curvature
NASA Technical Reports Server (NTRS)
Fung, Y C; Kaplan, A
1952-01-01
A general solution, based on the classical buckling criterion, is given for the problem of buckling of low arches under a lateral loading acting toward the center of curvature. For a sinusoidal arch under sinusoidal loading, the critical load can be expressed exactly as a simple function of the beam dimension parameters. For other arch shapes and load distributions, approximate values of the critical load can be obtained by summing a few terms of a rapidly converging Fourier series. The effects of initial end thrust and axial and lateral elastic support are discussed. The buckling load based on energy criterion of Karman and Tsien is also calculated. Results for both the classical and the energy criteria are compared with experimental results.
MyDTW - Dynamic Time Warping program for stratigraphical time series
NASA Astrophysics Data System (ADS)
Kotov, Sergey; Paelike, Heiko
2017-04-01
One of the general tasks in many geological disciplines is matching one time or space signal to another. It can be a classical correlation between two cores or cross-sections in sedimentology or marine geology. For example, tuning a paleoclimatic signal to a target curve driven by variations in the astronomical parameters is a powerful technique to construct accurate time scales. However, these methods can be rather time-consuming and can take hours of routine work even with the help of special semi-automatic software. Therefore, different approaches to automate the process have been developed during the last decades. Some of them are based on classical statistical cross-correlations, such as the 'Correlator' after Olea [1]. Others use modern ideas of dynamic programming; good examples are the algorithm developed by Lisiecki and Lisiecki [2] and the dynamic time warping based algorithm after Pälike [3]. We introduce here an algorithm and computer program that also stem from the Dynamic Time Warping algorithm class. Unlike the algorithm of Lisiecki and Lisiecki, MyDTW does not lean on a set of penalties to follow geological logic, but on a special internal structure and specific constraints. It also differs from [3] in the basic ideas of implementation and constraint design. The algorithm is implemented as a computer program with a graphical user interface using Free Pascal and the Lazarus IDE, and is available for Windows, Mac OS, and Linux. Examples with synthetic and real data are demonstrated. The program is available for free download at http://www.marum.de/Sergey_Kotov.html . References: 1. Olea, R.A. Expert systems for automated correlation and interpretation of wireline logs // Math Geol (1994) 26: 879. doi:10.1007/BF02083420 2. Lisiecki L. and Lisiecki P. Application of dynamic programming to the correlation of paleoclimate records // Paleoceanography (2002), Volume 17, Issue 4, pp. 1-1, CiteID 1049, doi: 10.1029/2001PA000733 3. Pälike, H.
Extending the astronomical calibration of the Geological Time Scale. PhD thesis, University of Cambridge, 2002.
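The dynamic-programming recurrence shared by DTW-class algorithms like MyDTW can be sketched as follows. This is a generic textbook DTW in Python, without MyDTW's special internal structure or constraints; the function name is ours:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D series.

    D[i, j] holds the minimum cumulative cost of aligning a[:i] with
    b[:j]; each cell extends the best of match, insertion, or deletion.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # step pattern: diagonal match, insertion, deletion
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[n, m]
```

Because the warping path may duplicate samples, a series and a locally stretched copy of it (e.g. `[0, 1, 2]` vs `[0, 1, 1, 2]`) align at zero cost, which is exactly the property exploited when matching cores deposited at different sedimentation rates.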
Parameter and uncertainty estimation for mechanistic, spatially explicit epidemiological models
NASA Astrophysics Data System (ADS)
Finger, Flavio; Schaefli, Bettina; Bertuzzo, Enrico; Mari, Lorenzo; Rinaldo, Andrea
2014-05-01
Epidemiological models can be a crucially important tool for decision-making during disease outbreaks. The range of possible applications spans from real-time forecasting and allocation of health-care resources to testing alternative intervention mechanisms such as vaccines, antibiotics, or the improvement of sanitary conditions. Our spatially explicit, mechanistic models for cholera epidemics have been successfully applied to several epidemics, including the one that struck Haiti in late 2010 and is still ongoing. Calibration and parameter estimation of such models represent a major challenge because of properties unusual in traditional geoscientific domains such as hydrology. Firstly, the epidemiological data available might be subject to high uncertainties due to error-prone diagnosis as well as manual (and possibly incomplete) data collection. Secondly, long-term time series of epidemiological data are often unavailable. Finally, the spatially explicit character of the models requires the comparison of several time series of model outputs with their real-world counterparts, which calls for an appropriate weighting scheme. It follows that the usual assumption of a homoscedastic Gaussian error distribution, used in combination with classical calibration techniques based on Markov chain Monte Carlo algorithms, is likely to be violated, whereas the construction of an appropriate formal likelihood function seems close to impossible. Alternative calibration methods, which allow for accurate estimation of total model uncertainty, are thus needed, particularly regarding the envisaged use of the models for decision-making. Here we present the most recent developments regarding methods for parameter and uncertainty estimation to be used with our mechanistic, spatially explicit models for cholera epidemics, based on informal measures of goodness of fit.
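One hedged illustration of the kind of informal, multi-series goodness-of-fit measure alluded to above (entirely our own construction, not the authors' method): normalize each site's error by that series' own variability, so that no single time series dominates the aggregate score.

```python
import numpy as np

def informal_fit(sim, obs):
    """Informal goodness of fit across several sites: RMSE of each
    simulated vs. observed incidence series, normalized by the observed
    series' standard deviation, then averaged over sites.

    sim, obs: dicts mapping site name -> 1-D array of case counts.
    Lower is better; 0.0 means a perfect match.
    """
    scores = []
    for site, y in obs.items():
        y = np.asarray(y, dtype=float)
        yhat = np.asarray(sim[site], dtype=float)
        # per-site normalized RMSE acts as the weighting scheme
        scores.append(np.sqrt(np.mean((yhat - y) ** 2)) / y.std())
    return float(np.mean(scores))
```

Such a measure can drive informal calibration schemes (e.g. rejection-style sampling of parameter sets) without ever writing down a formal likelihood.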
Refined multiscale fuzzy entropy based on standard deviation for biomedical signal analysis.
Azami, Hamed; Fernández, Alberto; Escudero, Javier
2017-11-01
Multiscale entropy (MSE) has been a prevalent algorithm to quantify the complexity of biomedical time series. Recent developments in the field have tried to alleviate the problem of undefined MSE values for short signals. Moreover, there has been recent interest in using statistical moments other than the mean, i.e., the variance, in the coarse-graining step of the MSE. Building on these trends, here we introduce the so-called refined composite multiscale fuzzy entropy based on the standard deviation (RCMFEσ) and mean (RCMFEμ) to quantify the dynamical properties of spread and mean, respectively, over multiple time scales. We demonstrate the dependency of the RCMFEσ and RCMFEμ, in comparison with other multiscale approaches, on several straightforward signal processing concepts using a set of synthetic signals. The results demonstrated that the RCMFEσ and RCMFEμ values are more stable and reliable than the classical multiscale entropy ones. We also inspect the ability of using the standard deviation as well as the mean in the coarse-graining process using magnetoencephalograms in Alzheimer's disease and publicly available electroencephalograms recorded from focal and non-focal areas in epilepsy. Our results indicated that when the RCMFEμ cannot distinguish different types of dynamics of a particular time series at some scale factors, the RCMFEσ may do so, and vice versa. The results showed that RCMFEσ-based features lead to higher classification accuracies in comparison with the RCMFEμ-based ones. We also made freely available all the Matlab codes used in this study at http://dx.doi.org/10.7488/ds/1477 .
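The σ- vs μ-coarse-graining distinction is easy to state in code. A minimal sketch of the coarse-graining step at a given scale (the helper name is ours; this is not the authors' Matlab implementation, and the subsequent fuzzy-entropy computation and the "refined composite" averaging over window offsets are omitted):

```python
import numpy as np

def coarse_grain(x, scale, stat="mean"):
    """Split x into non-overlapping windows of length `scale` and
    summarize each window by its mean (the mu variant) or its
    standard deviation (the sigma variant)."""
    x = np.asarray(x, dtype=float)
    n = len(x) // scale
    blocks = x[: n * scale].reshape(n, scale)
    return blocks.mean(axis=1) if stat == "mean" else blocks.std(axis=1)
```

The entropy algorithm is then applied to the coarse-grained series at each scale; using the standard deviation makes the multiscale profile sensitive to the dynamics of spread rather than of the local mean.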
Classical dynamics on curved Snyder space
NASA Astrophysics Data System (ADS)
Ivetić, B.; Meljanac, S.; Mignemi, S.
2014-05-01
We study the classical dynamics of a particle in nonrelativistic Snyder-de Sitter space. We show that for spherically symmetric systems, parameterizing the solutions in terms of an auxiliary time variable, which is a function only of the physical time and of the energy and angular momentum of the particles, one can reduce the problem to the equivalent one in classical mechanics. We also discuss a relativistic extension of these results, and a generalization to the case in which the algebra is realized in flat space.
Hervella-Garcés, M; García-Gavín, J; Silvestre-Salvador, J F
2016-09-01
The Spanish standard patch test series, as recommended by the Spanish Contact Dermatitis and Skin Allergy Research Group (GEIDAC), has been updated for 2016. The new series replaces the 2012 version and contains the minimum set of allergens recommended for routine investigation of contact allergy in Spain from 2016 onwards. Four haptens -clioquinol, thimerosal, mercury, and primin- have been eliminated owing to a low frequency of relevant allergic reactions, while 3 new allergens -methylisothiazolinone, diazolidinyl urea, and imidazolidinyl urea- have been added. GEIDAC has also modified the recommended aqueous solution concentrations for the 2 classic major haptens: methylchloroisothiazolinone and methylisothiazolinone, which are now to be tested at 200 ppm in aqueous solution, and formaldehyde, which is now to be tested in a 2% aqueous solution. Updating the Spanish standard series is one of the functions of GEIDAC, which is responsible for ensuring that the standard series is suited to the country's epidemiological profile and pattern of contact sensitization. Copyright © 2016 AEDV. Published by Elsevier España, S.L.U. All rights reserved.
NASA Astrophysics Data System (ADS)
Austin, Rickey W.
In Einstein's theory of Special Relativity (SR), one method to derive relativistic kinetic energy is via applying the classical work-energy theorem to relativistic momentum. This approach starts with a classically based work-energy theorem and applies SR's momentum to the derivation. One outcome of this derivation is relativistic kinetic energy. From this derivation, it is rather straightforward to form a kinetic-energy-based time dilation function. In the derivation of General Relativity a common approach is to bypass classical laws as a starting point. Instead, a rigorous development of differential geometry and Riemannian space is constructed, from which classically based laws are derived. This is in contrast to SR's approach of starting with classical laws and applying the consequences of the universal speed of light for all observers. A possible method to derive time dilation due to Newtonian gravitational potential energy (NGPE) is to apply SR's approach to deriving relativistic kinetic energy. It will be shown that this method gives first order accuracy compared to Schwarzschild's metric. The SR kinetic energy and the newly derived NGPE derivation are combined to form a Riemannian metric based on these two energies. A geodesic is derived and calculations are compared to Schwarzschild's geodesic for an orbiting test mass about a central, non-rotating, non-charged massive body. The new metric results in high accuracy calculations when compared to the prediction of Einstein's General Relativity. The new method provides a candidate approach for starting with classical laws and deriving General Relativity effects. This approach mimics SR's method of starting with classical mechanics when deriving relativistic equations. As a complement to introducing General Relativity, it provides a plausible scaffolding method from classical physics when teaching introductory General Relativity. A straightforward path from classical laws to General Relativity will be derived. This derivation provides a minimum first order accuracy relative to Schwarzschild's solution to Einstein's field equations.
Light weight Heat-Sink, Based on Phase-Change-Material for a High powered - Time limited application
NASA Astrophysics Data System (ADS)
Leibovitz, Johnathan
2002-01-01
When designing components for an aerospace application, whether an aircraft, satellite, space station, or launcher, a major design parameter is weight. For a combat aircraft, the addition of such a lightweight heat sink to a high power component can significantly extend avionics performance at very high altitude, where cooling means are poor. For a satellite launcher, each pound saved from the launcher in favor of the satellite may contribute, for instance, several months of satellite life. The solution presented in this paper deals with an electronic device producing high power for a limited time that requires a relatively low temperature base plate. The requirements demand that the base plate temperature not exceed 70 °C while exposed to a heat flux of about 1.5 W/cm^2 from an electronic device during approximately 14 minutes. The classical solution for this transient process requires an aluminum block heat sink of about 1100 grams. The PCM-based heat sink solves this case with only about 400 grams in a compact package. It also includes an option for cooling the system by forced convection (and in principle by radiation) when those means of heat dissipation are available. The work includes a thermal analysis for the aluminum-PCM heat sink and a series of validation tests of a model. The paper presents results of the analysis and of the tests, including a comparison to the classical robust solution. A parametric performance envelope for customizing to other potential applications is presented as well.
NASA Astrophysics Data System (ADS)
Shara, Michael M.; Doyle, Trisha F.; Lauer, Tod R.; Zurek, David; Neill, J. D.; Madrid, Juan P.; Mikołajewska, Joanna; Welch, D. L.; Baltz, Edward A.
2016-11-01
The Hubble Space Telescope has imaged the central part of M87 over a 10 week span, leading to the discovery of 32 classical novae (CNe) and nine fainter, likely very slow, and/or symbiotic novae. In this first paper of a series, we present the M87 nova finder charts, and the light and color curves of the novae. We demonstrate that the rise and decline times, and the colors of M87 novae are uncorrelated with each other and with position in the galaxy. The spatial distribution of the M87 novae follows the light of the galaxy, suggesting that novae accreted by M87 during cannibalistic episodes are well-mixed. Conservatively using only the 32 brightest CNe we derive a nova rate for M87 of 363^{+33}_{-45} novae yr^{-1}. We also derive the luminosity-specific classical nova rate for this galaxy, 7.88^{+2.3}_{-2.6} yr^{-1} per 10^{10} L_{⊙,K}. Both rates are 3–4 times higher than those reported for M87 in the past, and similarly higher than those reported for all other galaxies. We suggest that most previous ground-based surveys for novae in external galaxies, including M87, miss most faint, fast novae, and almost all slow novae near the centers of galaxies. Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5-26555.
ZAHABIUN, Farzaneh; SADJJADI, Seyed Mahmoud; ESFANDIARI, Farideh
2015-01-01
Background: Permanent slide preparation of nematodes, especially small ones, is time consuming and difficult, and the margins become scarious. Regarding this problem, a modified double glass mounting method was developed and compared with the classic method. Methods: A total of 209 nematode samples of human and animal origin were fixed and stained with Formaldehyde Alcohol Azocarmine Lactophenol (FAAL), followed by double glass mounting and classic dehydration using Canada balsam as the mounting medium. The slides were evaluated at different dates and times over more than four years. Photos were taken at different magnifications during the evaluation period. Results: The double glass mounting method was stable during this time and comparable with the classic method. There were no changes in the morphologic structures of nematodes using the double glass mounting method, with well-defined and clear differentiation between the different organs of the nematodes. Conclusion: This method is cost effective and fast for mounting small nematodes compared to the classic method. PMID:26811729
Monopole operators and Hilbert series of Coulomb branches of 3d N = 4 gauge theories
NASA Astrophysics Data System (ADS)
Cremonesi, Stefano; Hanany, Amihay; Zaffaroni, Alberto
2014-01-01
This paper addresses a long-standing problem: to identify the chiral ring and moduli space (i.e., as an algebraic variety) on the Coulomb branch of an N = 4 superconformal field theory in 2+1 dimensions. Previous techniques involved a computation of the metric on the moduli space and/or mirror symmetry. These methods are limited to sufficiently small moduli spaces, with enough symmetry, or to Higgs branches of sufficiently small gauge theories. We introduce a simple formula for the Hilbert series of the Coulomb branch, which applies to any good or ugly three-dimensional N = 4 gauge theory. The formula counts monopole operators which are dressed by classical operators, the Casimir invariants of the residual gauge group that is left unbroken by the magnetic flux. We apply our formula to several classes of gauge theories. Along the way we make various tests of mirror symmetry, successfully comparing the Hilbert series of the Coulomb branch with the Hilbert series of the Higgs branch of the mirror theory.
NASA Astrophysics Data System (ADS)
Khosla, Kiran E.; Altamirano, Natacha
2017-05-01
The notion of time is given a different footing in quantum mechanics and general relativity, being treated as a parameter in the former and as an observer-dependent property in the latter. From an operational point of view, time is simply the correlation between a system and a clock, where an idealized clock can be modeled as a two-level system. We investigate the dynamics of clocks interacting gravitationally by treating the gravitational interaction as a classical information channel. This model, known as classical-channel gravity (CCG), postulates that gravity is mediated by a fundamentally classical force carrier and is therefore unable to entangle particles gravitationally. In particular, we focus on the decoherence rates and temporal resolution of arrays of N clocks, showing how the minimum dephasing rate scales with N and with the spatial configuration. Furthermore, we consider the gravitational redshift between a clock and a massive particle and show that a classical-channel model of gravity predicts a finite dephasing rate from the nonlocal interaction. In our model we obtain a fundamental limitation in time accuracy that is intrinsic to each clock.
Quantum Metrology beyond the Classical Limit under the Effect of Dephasing
NASA Astrophysics Data System (ADS)
Matsuzaki, Yuichiro; Benjamin, Simon; Nakayama, Shojun; Saito, Shiro; Munro, William J.
2018-04-01
Quantum sensors have the potential to outperform their classical counterparts. For classical sensing, the uncertainty of the estimation of the target fields scales inversely with the square root of the measurement time T. On the other hand, by using quantum resources, we can reduce this scaling of the uncertainty with time to 1/T. However, as quantum states are susceptible to dephasing, it has not been clear whether we can achieve sensitivities with a scaling of 1/T for a measurement time longer than the coherence time. Here, we propose a scheme that estimates the amplitude of globally applied fields with an uncertainty of 1/T for an arbitrary time scale under the effect of dephasing. We use one-way quantum-computing-based teleportation between qubits to prevent any increase in the correlation between the quantum state and its local environment from building up, and we show that such a teleportation protocol can suppress the local dephasing while the information from the target fields keeps growing. Our method has the potential to realize a quantum sensor with a sensitivity far beyond that of any classical sensor.
NASA Astrophysics Data System (ADS)
Ausloos, Marcel; Cerqueti, Roy; Lupi, Claudio
2017-03-01
This paper explores a large collection of about 377,000 observations, spanning more than 20 years with a frequency of 30 min, of the streamflow of the Paglia river in central Italy. We analyze the long-term persistence properties of the series by computing the Hurst exponent, not only in its original form but also from an evolutionary point of view, by computing the Hurst exponents on a rolling-window basis. The methodological tool adopted for the persistence analysis is detrended fluctuation analysis (DFA), classically known to be suitable for our purpose. As an ancillary exploration, we implement a control on data validity by assessing whether the data exhibit the regularity stated by Benford's law. The results are interesting from different viewpoints. First, we show that the Paglia river streamflow exhibits periodicities which broadly suggest the existence of some common behavior with El Niño and the North Atlantic Oscillations: this specifically points to a (not necessarily direct) effect of these oceanic phenomena on the hydrogeological equilibria of very distant geographical zones; however, such a hypothesis needs further analyses to be validated. Second, the series of streamflows shows an antipersistent behavior. Third, the data are not consistent with Benford's law: this suggests that the measurement criteria should be appropriately revised. Fourth, the streamflow distribution is well approximated by a discrete generalized Beta distribution, in accordance with the measured streamflows being the outcome of a complex system.
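The Benford check described above amounts to comparing first-significant-digit frequencies against P(d) = log10(1 + 1/d). A minimal sketch (helper names are ours; the paper's actual test statistic may differ):

```python
import math

def first_digit(v):
    # first significant digit, read off the scientific-notation mantissa
    return int(f"{abs(v):e}"[0])

def benford_chi2(values):
    """Pearson chi-square statistic of observed first-digit counts
    against the Benford expectation; large values flag data whose
    leading digits deviate from log10(1 + 1/d)."""
    data = [v for v in values if v != 0]
    obs = [0] * 9
    for v in data:
        obs[first_digit(v) - 1] += 1
    n = len(data)
    chi2 = 0.0
    for d in range(1, 10):
        expected = n * math.log10(1 + 1 / d)
        chi2 += (obs[d - 1] - expected) ** 2 / expected
    return chi2
```

Streamflow values spanning several orders of magnitude would normally be good Benford candidates, which is why a failed check points at the measurement procedure rather than at the river.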
Hee, Siew Wan; Parsons, Nicholas; Stallard, Nigel
2018-03-01
The motivation for the work in this article is a setting in which a number of treatments are available for evaluation in phase II clinical trials and it may be infeasible to try them concurrently because the intended population is small. This paper introduces an extension of previous work on decision-theoretic designs for a series of phase II trials. The program encompasses a series of sequential phase II trials with interim decision making and a single two-arm phase III trial. The design is based on a hybrid approach in which the final analysis of the phase III data is based on a classical frequentist hypothesis test, whereas the trials are designed using a Bayesian decision-theoretic approach in which the unknown treatment effect is assumed to follow a known prior distribution. In addition, as the treatments are intended for the same population, it is not unrealistic to consider the treatment effects to be correlated, and the prior distribution reflects this. Data from a randomized trial of severe arthritis of the hip are used to test the application of the design. We show that the design on average requires fewer patients in phase II than when the correlation is ignored. Correspondingly, the time required to recommend an efficacious treatment for phase III is shorter. © 2017 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Variable Stars in Large Magellanic Cloud Globular Clusters. II. NGC 1786
NASA Astrophysics Data System (ADS)
Kuehn, Charles A.; Smith, Horace A.; Catelan, Márcio; Pritzl, Barton J.; De Lee, Nathan; Borissova, Jura
2012-12-01
This is the second in a series of papers studying the variable stars in Large Magellanic Cloud globular clusters. The primary goal of this series is to study how RR Lyrae stars in Oosterhoff-intermediate systems compare to their counterparts in Oosterhoff I/II systems. In this paper, we present the results of our new time-series B-V photometric study of the globular cluster NGC 1786. A total of 65 variable stars were identified in our field of view. These variables include 53 RR Lyraes (27 RRab, 18 RRc, and 8 RRd), 3 classical Cepheids, 1 Type II Cepheid, 1 Anomalous Cepheid, 2 eclipsing binaries, 3 Delta Scuti/SX Phoenicis variables, and 2 variables of undetermined type. Photometric parameters for these variables are presented. We present physical properties for some of the RR Lyrae stars, derived from Fourier analysis of their light curves. We discuss several different indicators of Oosterhoff type which indicate that the Oosterhoff classification of NGC 1786 is not as clear cut as what is seen in most globular clusters. Based on observations taken with the SMARTS 1.3 m telescope operated by the SMARTS Consortium and observations taken at the Southern Astrophysical Research (SOAR) telescope, which is a joint project of the Ministério da Ciência, Tecnologia, e Inovação (MCTI) da República Federativa do Brasil, the U.S. National Optical Astronomy Observatory (NOAO), the University of North Carolina at Chapel Hill (UNC), and Michigan State University (MSU).
German Children's Classics: Heirs and Pretenders to an Eclectic Heritage
ERIC Educational Resources Information Center
Doderer, Klaus
1973-01-01
There are no classic children's books, if by classics we mean books that will last forever. Instead, it is a matter of constant reevaluation. At most, we have older works that are still valuable today because they touch upon the human and artistic problems of our time. (Author/SJ)
The Orbital Period of the Classical Nova V458 Vul
NASA Astrophysics Data System (ADS)
Goranskij, V. P.; Metlova, N. V.; Barsukova, E. A.; Burenkov, A. N.; Soloviev, V. Ya.
2008-07-01
Classical nova V458 Vul (N Vul 2007 No. 1) was detected as a supersoft X-ray source (SSS) by the Swift XRT several times between 2007 October 18 and 2008 June 18 (J. Drake et al., ATel #1246 and #1603). Our V photometry shows that the plateau in the light curve continued from January until June 2008. This feature usually accompanies the SSS phases in some classical novae. Fragmentary monitoring during the plateau shows night-to-night variability with amplitudes between 1.2 and 0.4 mag, and rapid variability of 0.1 mag on a time scale of an hour.
NASA Technical Reports Server (NTRS)
Barker, L. E., Jr.; Bowles, R. L.; Williams, L. H.
1973-01-01
High angular rates encountered in real-time flight simulation problems may require a more stable and accurate integration method than the classical methods normally used. A study was made to develop a general local linearization procedure of integrating dynamic system equations when using a digital computer in real-time. The procedure is specifically applied to the integration of the quaternion rate equations. For this application, results are compared to a classical second-order method. The local linearization approach is shown to have desirable stability characteristics and gives significant improvement in accuracy over the classical second-order integration methods.
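The local-linearization idea can be illustrated on the quaternion kinematics q̇ = ½Ω(ω)q: if the body rates are treated as constant over a step, the matrix exponential of the rate matrix has a closed form, so the step is exact for that linearized system and avoids the drift of low-order classical schemes at high rates. A sketch under that constant-rate assumption (not the paper's exact algorithm; the function name is ours):

```python
import numpy as np

def quat_step(q, w, dt):
    """One step of q_dot = 0.5 * Omega(w) * q for quaternion
    q = (q0, q1, q2, q3) and body rates w = (wx, wy, wz), assumed
    constant over the step, via the closed-form matrix exponential."""
    q = np.asarray(q, dtype=float)
    wx, wy, wz = w
    Omega = 0.5 * np.array([
        [0.0, -wx, -wy, -wz],
        [wx,  0.0,  wz, -wy],
        [wy, -wz,  0.0,  wx],
        [wz,  wy, -wx,  0.0],
    ])
    phi = 0.5 * np.linalg.norm(w) * dt  # half the rotation angle this step
    if phi < 1e-12:
        return q
    # (Omega*dt)^2 = -phi^2 * I, so exp(Omega*dt) = cos(phi) I + (sin(phi)/phi) Omega*dt
    return np.cos(phi) * q + (np.sin(phi) / phi) * (dt * Omega) @ q
```

Because the exponential is orthogonal, the quaternion norm is preserved to machine precision at every step, one of the stability properties a classical second-order method lacks.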
Time evolution of linearized gauge field fluctuations on a real-time lattice
NASA Astrophysics Data System (ADS)
Kurkela, A.; Lappi, T.; Peuron, J.
2016-12-01
Classical real-time lattice simulations play an important role in understanding non-equilibrium phenomena in gauge theories and are used in particular to model the prethermal evolution of heavy-ion collisions. Due to instabilities, small quantum fluctuations on top of the classical background may significantly affect the dynamics of the system. In this paper we argue for the need for a numerical calculation of a system of classical gauge fields and small linearized fluctuations in a way that keeps the separation between the two manifest. We derive and test an explicit algorithm to solve these equations on the lattice, maintaining gauge invariance and Gauss' law.
Tracking variable sedimentation rates in orbitally forced paleoclimate proxy series
NASA Astrophysics Data System (ADS)
Li, M.; Kump, L. R.; Hinnov, L.
2017-12-01
This study addresses two fundamental issues in cyclostratigraphy: quantitative testing of orbital forcing in cyclic sedimentary sequences and tracking variable sedimentation rates. The methodology proposed here treats these issues as an inverse problem and estimates the product-moment correlation coefficient between the frequency spectra of orbital solutions and paleoclimate proxy series over a range of "test" sedimentation rates. It is inspired by the ASM method (1). The number of orbital parameters involved in the estimation is also considered. The method relies on the hypothesis that orbital forcing had a significant impact on the paleoclimate proxy variations, which is therefore also tested. The null hypothesis of no astronomical forcing is evaluated using the Beta distribution, for which the shape parameters are estimated using a Monte Carlo simulation approach. We introduce a metric to estimate the most likely sedimentation rate using the product-moment correlation coefficient, H0 significance level, and the number of contributing orbital parameters, i.e., the CHO value. The CHO metric is applied with a sliding window to track variable sedimentation rates along the paleoclimate proxy series. Two forward models with uniform and variable sedimentation rates are evaluated to demonstrate the robustness of the method. The CHO method is applied to the classical Late Triassic Newark depth rank series; the estimated sedimentation rates match closely with previously published sedimentation rates and provide a more highly time-resolved estimate (2,3). References: (1) Meyers, S.R., Sageman, B.B., Amer. J. Sci., 307, 773-792, 2007; (2) Kent, D.V., Olsen, P.E., Muttoni, G., Earth-Sci. Rev., 166, 153-180, 2017; (3) Li, M., Zhang, Y., Huang, C., Ogg, J., Hinnov, L., Wang, Y., Zou, Z., Li, L., Earth Planet. Sci. Lett., doi:10.1016/j.epsl.2017.07.015, 2017.
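The core of the approach, correlating the spectrum of a depth-domain proxy against orbital target frequencies over a range of test sedimentation rates, can be illustrated with a toy version. This is a deliberate simplification: the Beta-distribution significance test, the count of contributing orbital parameters, and the sliding window of the actual CHO method are all omitted, and only one orbital period is used.

```python
import numpy as np

def spectrum(x):
    # one-sided amplitude spectrum of a demeaned, evenly sampled series
    x = x - x.mean()
    return np.abs(np.fft.rfft(x))

def correlate_at_rate(series, d_depth_cm, sed_rate_cm_per_kyr, orbital_periods_kyr):
    """Pearson correlation between the data spectrum and a synthetic orbital
    spectrum for one test sedimentation rate (toy version of the CHO idea)."""
    n = len(series)
    dt_kyr = d_depth_cm / sed_rate_cm_per_kyr          # implied sample interval in time
    freqs = np.fft.rfftfreq(n, d=dt_kyr)               # cycles/kyr
    amp = spectrum(series)
    # synthetic target spectrum: unit peaks at the orbital frequencies
    target = np.zeros_like(amp)
    for P in orbital_periods_kyr:
        target[np.argmin(np.abs(freqs - 1.0 / P))] = 1.0
    return np.corrcoef(amp, target)[0, 1]              # product-moment correlation

# synthetic example: a 20-kyr cycle deposited at 5 cm/kyr, sampled every 2 cm
rate_true = 5.0
depth = np.arange(0, 2000, 2.0)                        # cm
x = np.sin(2 * np.pi * (depth / rate_true) / 20.0)     # 20-kyr precession-like cycle
rates = np.arange(1.0, 11.0, 0.5)                      # test sedimentation rates
r = [correlate_at_rate(x, 2.0, s, [20.0]) for s in rates]
best = rates[int(np.argmax(r))]
print(best)
```

The correlation peaks at the test rate for which the data's spectral peak lines up with the orbital target frequency, recovering the true sedimentation rate.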
Nonclassicality of Temporal Correlations.
Brierley, Stephen; Kosowski, Adrian; Markiewicz, Marcin; Paterek, Tomasz; Przysiężna, Anna
2015-09-18
The results of spacelike separated measurements are independent of distant measurement settings, a property one might call two-way no-signaling. In contrast, timelike separated measurements are only one-way no-signaling since the past is independent of the future but not vice versa. For this reason some temporal correlations that are formally identical to nonclassical spatial correlations can still be modeled classically. We propose a new formulation of Bell's theorem for temporal correlations; namely, we define nonclassical temporal correlations as the ones which cannot be simulated by propagating in time the classical information content of a quantum system given by the Holevo bound. We first show that temporal correlations between results of any projective quantum measurements on a qubit can be simulated classically. Then we present a sequence of general measurements on a single m-level quantum system that cannot be explained by propagating in time an m-level classical system and using classical computers with unlimited memory.
Information transport in classical statistical systems
NASA Astrophysics Data System (ADS)
Wetterich, C.
2018-02-01
For "static memory materials" the bulk properties depend on boundary conditions. Such materials can be realized by classical statistical systems which admit no unique equilibrium state. We describe the propagation of information from the boundary to the bulk by classical wave functions. The dependence of wave functions on the location of hypersurfaces in the bulk is governed by a linear evolution equation that can be viewed as a generalized Schrödinger equation. Classical wave functions obey the superposition principle, with local probabilities realized as bilinears of wave functions. For static memory materials the evolution within a subsector is unitary, as characteristic for the time evolution in quantum mechanics. The space-dependence in static memory materials can be used as an analogue representation of the time evolution in quantum mechanics - such materials are "quantum simulators". For example, an asymmetric Ising model on a Euclidean two-dimensional lattice represents the time evolution of free relativistic fermions in two-dimensional Minkowski space.
Quantum Tic-Tac-Toe as Metaphor for Quantum Physics
NASA Astrophysics Data System (ADS)
Goff, Allan; Lehmann, Dale; Siegel, Joel
2004-02-01
Quantum Tic-Tac-Toe is presented as an abstract quantum system derived from the rules of Classical Tic-Tac-Toe. Abstract quantum systems can be constructed from classical systems by the addition of three types of rules: rules of Superposition, rules of Entanglement, and rules of Collapse. This is formally done for Quantum Tic-Tac-Toe. As a part of this construction it is shown that abstract quantum systems can be viewed as an ensemble of classical systems. That is, the state of a quantum game implies a set of simultaneous classical games. The number and evolution of the ensemble of classical games is driven by the superposition, entanglement, and collapse rules. Various aspects and play situations provide excellent metaphors for standard features of quantum mechanics. Several of the more significant metaphors are discussed, including a measurement mechanism, the correspondence principle, Everett's Many Worlds Hypothesis, an "ascertainty" principle, and spooky action at a distance. Abstract quantum systems also show the consistency of backwards-in-time causality, and the influence on the present of both pasts and futures that never happened. The strongest logical argument against faster-than-light (FTL) phenomena is that since FTL implies backwards-in-time causality, temporal paradox is an unavoidable consequence of FTL; hence FTL is impossible. Since abstract quantum systems support backwards-in-time causality but avoid temporal paradox through pruning of the classical ensemble, it may be that quantum-based FTL schemes are possible, allowing backwards-in-time causality but prohibiting temporal paradox.
Quantum break-time of de Sitter
NASA Astrophysics Data System (ADS)
Dvali, Gia; Gómez, César; Zell, Sebastian
2017-06-01
The quantum break-time of a system is the time scale after which its true quantum evolution departs from the classical mean-field evolution. For capturing it, a quantum resolution of the classical background, e.g., in terms of a coherent state, is required. In this paper, we first consider a simple scalar model with anharmonic oscillations and derive its quantum break-time. Next, following [1], we apply these ideas to de Sitter space. We formulate a simple model of a spin-2 field, which for some time reproduces the de Sitter metric and simultaneously allows for its well-defined representation as a quantum coherent state of gravitons. The mean occupation number N of background gravitons turns out to be equal to the de Sitter horizon area in Planck units, while their frequency is given by the de Sitter Hubble parameter. In the semi-classical limit, we show that the model reproduces all the known properties of de Sitter, such as the redshift of probe particles and thermal Gibbons-Hawking radiation, all in the language of quantum S-matrix scatterings and decays of coherent-state gravitons. Most importantly, this framework allows one to capture the 1/N effects to which the usual semi-classical treatment is blind. They violate the de Sitter symmetry and lead to a finite quantum break-time of the de Sitter state equal to the de Sitter radius times N. We also point out that the quantum break-time is inversely proportional to the number of particle species in the theory. Thus, the quantum break-time imposes the following consistency condition: older and species-richer universes must have smaller cosmological constants. For the maximal, phenomenologically acceptable number of species, the observed cosmological constant would saturate this bound if our Universe were 10^100 years old in its entire classical history.
Moore, Darrell; Van Nest, Byron N; Seier, Edith
2011-06-01
Classical experiments demonstrated that honey bee foragers trained to collect food at virtually any time of day will return to that food source on subsequent days with a remarkable degree of temporal accuracy. This versatile time-memory, based on an endogenous circadian clock, presumably enables foragers to schedule their reconnaissance flights to best take advantage of the daily rhythms of nectar and pollen availability in different species of flowers. It is commonly believed that the time-memory rapidly extinguishes if not reinforced daily, thus enabling foragers to switch quickly from relatively poor sources to more productive ones. On the other hand, it is also commonly thought that extinction of the time-memory is slow enough to permit foragers to 'remember' the food source over a day or two of bad weather. What exactly is the time-course of time-memory extinction? In a series of field experiments, we determined that the level of food-anticipatory activity (FAA) directed at a food source is not rapidly extinguished and, furthermore, the time-course of extinction is dependent upon the amount of experience accumulated by the forager at that source. We also found that FAA is prolonged in response to inclement weather, indicating that time-memory extinction is not a simple decay function but is responsive to environmental changes. These results provide insights into the adaptability of FAA under natural conditions.
NASA Astrophysics Data System (ADS)
Pedretti, Daniele; Beckie, Roger Daniel
2014-05-01
Missing data are ubiquitous in hydrological time-series databases, yet educated decisions must be made in problems requiring exhaustive time-series knowledge. This includes precipitation datasets, since recording or human failures can produce gaps in these time series. For applications directly involving the ratio between precipitation and some other quantity, lack of complete information can result in poor understanding of basic physical and chemical dynamics involving precipitated water. For instance, the ratio between precipitation (recharge) and outflow rates at a discharge point of an aquifer (e.g. rivers, pumping wells, lysimeters) can be used to obtain aquifer parameters and thus to constrain model-based predictions. We tested a suite of methodologies to reconstruct missing information in rainfall datasets. The goal was to obtain a suitable and versatile method to reduce the errors caused by the lack of data in specific time windows. Our analyses included both a classical chronological-pairing approach between rainfall stations and a probability-based approach, which accounted for the probability of exceedance of rain depths measured at two or multiple stations. Our analyses showed that it is not clear a priori which method performs best. Rather, the selection should be based on the specific statistical properties of the rainfall dataset. In this presentation, our emphasis is on the effects of a few typical parametric distributions used to model the behavior of rainfall. Specifically, we analyzed the role of distributional "tails", which have an important control on the occurrence of extreme rainfall events. The latter strongly affect several hydrological applications, including recharge-discharge relationships. The heavy-tailed distributions we considered were the parametric Log-Normal, Generalized Pareto, Generalized Extreme Value and Gamma distributions.
The methods were first tested on synthetic examples, to have complete control over the impact of several variables, such as the minimum amount of data required to obtain reliable statistical distributions from the selected parametric functions. Then, we applied the methodology to precipitation datasets collected in the Vancouver area and at a mining site in Peru.
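One probability-based pairing idea of the kind described above, matching probabilities of exceedance between a target and a reference station, can be sketched as empirical quantile mapping. This is a simplified stand-in with synthetic data; the actual study also fitted parametric heavy-tailed distributions such as the Generalized Pareto.

```python
import numpy as np

def fill_gap_quantile_mapping(target, reference):
    """Fill NaNs in `target` using `reference` by matching non-exceedance
    probabilities (empirical quantile mapping). Toy illustration only."""
    t = np.asarray(target, float)
    r = np.asarray(reference, float)
    ok = ~np.isnan(t)
    # empirical distributions built from the complete (overlapping) record
    t_sorted = np.sort(t[ok])
    r_sorted = np.sort(r[ok])
    out = t.copy()
    for i in np.where(~ok)[0]:
        # non-exceedance probability of the reference value on the missing day
        p = np.searchsorted(r_sorted, r[i]) / len(r_sorted)
        # map that probability onto the target station's own distribution
        out[i] = np.quantile(t_sorted, min(p, 1.0))
    return out

rng = np.random.default_rng(0)
ref = rng.gamma(shape=0.8, scale=10.0, size=500)      # skewed daily rain depths
tgt = np.clip(1.5 * ref + rng.normal(0, 1.0, 500), 0, None)  # correlated neighbor
tgt_gappy = tgt.copy()
tgt_gappy[100:120] = np.nan                           # simulated recorder failure
filled = fill_gap_quantile_mapping(tgt_gappy, ref)
print(int(np.isnan(filled).sum()))                    # all gaps filled
```

Chronological pairing would instead regress same-day values at the two stations; the quantile approach is less sensitive to day-to-day scatter but depends on how well the distribution tails are represented in the overlapping record.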
3D hand motion trajectory prediction from EEG mu and beta bandpower.
Korik, A; Sosnik, R; Siddique, N; Coyle, D
2016-01-01
A motion trajectory prediction (MTP)-based brain-computer interface (BCI) aims to reconstruct the three-dimensional (3D) trajectory of upper-limb movement using electroencephalography (EEG). The most common MTP BCI employs a time series of bandpass-filtered EEG potentials (referred to here as the potential time-series, PTS, model) for reconstructing the trajectory of a 3D limb movement using multiple linear regression. These studies report the best accuracy when a 0.5-2 Hz bandpass filter is applied to the EEG. In the present study, we show that the spatiotemporal power distributions of the theta (4-8 Hz), mu (8-12 Hz), and beta (12-28 Hz) bands are more robust for movement trajectory decoding when the standard PTS approach is replaced with time-varying bandpower values of a specified EEG band, i.e., with a bandpower time-series (BTS) model. A comprehensive analysis comprising three subjects performing pointing movements with the dominant right arm toward six targets is presented. Our results show that the BTS model produces significantly higher MTP accuracy (R~0.45) than the standard PTS model (R~0.2). In the case of the BTS model, the highest accuracy was achieved across the three subjects typically in the mu (8-12 Hz) and low-beta (12-18 Hz) bands. Additionally, we highlight a limitation of the commonly used PTS model and illustrate how this model may be suboptimal for decoding motion-trajectory-relevant information. Although our results, showing that the mu and beta bands are prominent for MTP, are not in line with other MTP studies, they are consistent with the extensive literature on classical multiclass sensorimotor-rhythm-based BCI studies (classification of limbs as opposed to motion trajectory prediction), which report the best accuracy of imagined limb movement classification using power values of the mu and beta frequency bands. The methods proposed here provide a positive step toward noninvasive decoding of imagined 3D hand movements for movement-free BCIs.
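A toy illustration of the bandpower time-series (BTS) idea: compute sliding-window band power per channel and regress the trajectory on it with multiple linear regression. The window length, band edges, and synthetic signal below are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def bandpower_series(eeg, fs, band, win=0.5):
    """Sliding-window power of `eeg` (channels x samples) within `band` Hz:
    a minimal bandpower time-series (BTS) feature."""
    nwin = int(win * fs)
    n_ch, n_s = eeg.shape
    n_out = n_s // nwin
    freqs = np.fft.rfftfreq(nwin, 1.0 / fs)
    sel = (freqs >= band[0]) & (freqs <= band[1])
    feats = np.empty((n_ch, n_out))
    for k in range(n_out):
        seg = eeg[:, k * nwin:(k + 1) * nwin]
        feats[:, k] = (np.abs(np.fft.rfft(seg, axis=1))[:, sel] ** 2).mean(axis=1)
    return feats

# synthetic demo: 4 "EEG" channels whose mu-band amplitude encodes hand position
fs, T = 100, 200.0
t = np.arange(0, T, 1 / fs)
x_true = np.sin(2 * np.pi * 0.05 * t)                 # slow hand trajectory
rng = np.random.default_rng(1)
carrier = np.sin(2 * np.pi * 10.0 * t)                # 10 Hz mu rhythm
eeg = np.stack([(1.0 + 0.5 * x_true) * carrier + 0.1 * rng.normal(size=t.size)
                for _ in range(4)])
X = bandpower_series(eeg, fs, (8, 12)).T              # windows x channels
y = x_true[::int(0.5 * fs)][:X.shape[0]]              # trajectory per window
# multiple linear regression from bandpower features to trajectory
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
r = np.corrcoef(A @ coef, y)[0, 1]
print(r)
```

Here the trajectory modulates the rhythm's amplitude rather than the low-frequency potential itself, so a bandpower feature recovers it while a 0.5-2 Hz potential feature would not.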
ERIC Educational Resources Information Center
Robson, Holly; Sage, Karen; Lambon Ralph, Matthew A.
2012-01-01
Wernicke's aphasia (WA) is the classical neurological model of comprehension impairment and, as a result, the posterior temporal lobe is assumed to be critical to semantic cognition. This conclusion is potentially confused by (a) the existence of patient groups with semantic impairment following damage to other brain regions (semantic dementia and…
nssl0057: Early stage of tornado formation. Photo #1 of a series of classic photographs of this tornado. (NOAA Photo Library, U.S. Department of Commerce, National Oceanic & Atmospheric Administration)
“Bible” of the hydrological sciences celebrates its 30th year
NASA Astrophysics Data System (ADS)
Berkowitz, Brian
Jacob Bear's Dynamics of Fluids in Porous Media, first published by Elsevier in 1972 and re-issued by Dover in 1988 as a classic in its physics and chemistry series, has reached the age of 30. And yet, the suffix "years old" is not applicable to the book, as it continues to be heavily referenced to this day by both academics and consultants.
ERIC Educational Resources Information Center
Doring, Richard; Hicks, Bruce
A brief review is presented of the characteristics of four maxicalculators (HP 9830, Wang 2200, IBM 5100, MCM/700) and two minicomputers (Classic, Altair 8800). The HP 9830 and the Wang 2200 are thought to be the best adapted to serve entire schools and their unique properties are discussed. Some criteria of what should be taken into account in…
ERIC Educational Resources Information Center
Mote, F.W., Ed.
The two Chinese Linguistics Conferences held at Princeton in October 1966 and 1967 treated respectively (1) the relevance and specific application of computer methods to the problems of Chinese Linguistics and (2) a series of interrelated problems in teaching Chinese--dictionaries, courses in classical Chinese, and methods of teaching and…
Dance-related concussion: a case series.
Stein, Cynthia J; Kinney, Susan A; McCrystal, Tara; Carew, Elizabeth A; Bottino, Nicole M; Meehan III, William P; Micheli, Lyle J
2014-01-01
Sport-related concussion is a topic of increasing public and media attention; the medical literature on this topic is growing rapidly. However, to our knowledge no published papers have described concussion specifically in the dancer. This case series involved a retrospective chart review at a large teaching hospital over a 5.5-year period. Eleven dancers (10 female, 1 male) were identified who experienced concussions while in dance class, rehearsal, or performance: 2 in classical ballet, 2 in modern dance, 2 in acro dance, 1 in hip hop, 1 in musical theater, and 3 were unspecified. Dancers were between 12 and 20 years old at the time of presentation. Three concussions occurred during stunting, diving, or flipping. Three resulted from unintentional drops while partnering. Two followed slips and falls. Two were due to direct blows to the head, and one dancer developed symptoms after repeatedly whipping her head and neck in a choreographed movement. Time to presentation in the sports medicine clinic ranged from the day of injury to 3 months. Duration of symptoms ranged from less than 3 weeks to greater than 2 years at last documented follow-up appointment. It is concluded that dancers do suffer dance-related concussions that can result in severe symptoms, limitations in dance participation, and difficulty with activities of daily living. Future studies are needed to evaluate dancers' recognition of concussion symptoms and care-seeking behaviors. Additional work is also necessary to tailor existing guidelines for gradual, progressive, safe return to dance.
Pancreaticoduodenectomy in a Latin American country: the transition to a high-volume center.
Chan, Carlos; Franssen, Bernardo; Rubio, Alethia; Uscanga, Luis
2008-03-01
To analyze data in a single institution series of pancreaticoduodenectomies (PD) performed in a 7-year period after the transition to a high-volume center for pancreatic surgery. PD has developed dramatically in the last century. Mortality is minimal yet complications are still frequent (around 40%). There are very few reports of PD in Latin America. Data on all PDs performed by a single surgeon from March 2000 to July 2006 in our institution were collected prospectively. During the study's time frame 122 PDs were performed; 84% were classical resections. Mean age was 57.9 years. Of the patients, 51% were female. Intraoperative mean values included blood loss 881 ml, operative time 5 h and 35 min, and vein resection in 14 cases. Both ampullary and pancreatic cancer accounted for 34% of cases (42 patients each), 5.7% were distal bile duct and 4% duodenal carcinomas. Benign pathology included chronic pancreatitis, neuroendocrine tumors, cystic lesions, and other miscellaneous tumors. Overall operative mortality was 6.5% in the 7-year period, 2.2% in the later 5 years. There was a total of 75 consecutive PDs without mortality. Of the patients, 41.8% had one or more complications. Mean survival for pancreatic cancer was 22.6 months and ampullary adenocarcinoma was 31.4 months. To our knowledge, this is the largest single surgeon series of PD performed in Latin America. It emphasizes the importance of experience and expertise at high-volume centers in developing countries.
Classical impurity ion confinement in a toroidal magnetized fusion plasma.
Kumar, S T A; Den Hartog, D J; Caspary, K J; Magee, R M; Mirnov, V V; Chapman, B E; Craig, D; Fiksel, G; Sarff, J S
2012-03-23
High-resolution measurements of impurity ion dynamics provide first-time evidence of classical ion confinement in a toroidal, magnetically confined plasma. The density profile evolution of fully stripped carbon is measured in MST reversed-field pinch plasmas with reduced magnetic turbulence to assess Coulomb-collisional transport without the neoclassical enhancement from particle drift effects. The impurity density profile evolves to a hollow shape, consistent with the temperature screening mechanism of classical transport. Corroborating methane pellet injection experiments expose the sensitivity of the impurity particle confinement time to the residual magnetic fluctuation amplitude.
"Antigone" on the Night Shift: Classics in the Contemporary Classroom.
ERIC Educational Resources Information Center
Devenish, Alan
2000-01-01
Examines community college students' choices of favorite works after a one-year composition and literature course. Finds "Antigone" was the favorite. Claims Greek classical works put students in contact with a distant culture that they find intriguing. Suggests juxtaposing a classical work with one from another time and culture to avoid…
Taking-On: A Grounded Theory of Addressing Barriers in Task Completion
ERIC Educational Resources Information Center
Austinson, Julie Ann
2011-01-01
This study of taking-on was conducted using classical grounded theory methodology (Glaser, 1978, 1992, 1998, 2001, 2005; Glaser & Strauss, 1967). Classical grounded theory is inductive, empirical, and naturalistic; it does not utilize manipulation or constrained time frames. Classical grounded theory is a systemic research method used to generate…
Tarnished Gold: Classical Music in America
ERIC Educational Resources Information Center
Asia, Daniel
2010-01-01
A few articles have appeared recently regarding the subject of the health of classical music (or more broadly, the fine arts) in America. These include "Classical Music's New Golden Age," by Heather Mac Donald, in the "City Journal" and "The Decline of the Audience," by Terry Teachout, in "Commentary." These articles appeared around the time of…
Exact treatment of the Jaynes-Cummings model under the action of an external classical field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdalla, M. Sebawe, E-mail: m.sebaweh@physics.org; Khalil, E.M.; Mathematics Department, College of Science, Taibah University, Al-MaDinah
2011-09-15
We consider the usual Jaynes-Cummings model (JCM) in the presence of an external classical field. Under a certain canonical transformation for the Pauli operators, the system is transformed into the usual JCM. Using the equations of motion in the Heisenberg picture, exact solutions for the time-dependent dynamical operators are obtained. In order to calculate the expectation values of these operators, the wave function has been constructed. It is shown that the classical field augments the atomic frequency ω0 and mixes the original atomic states. Changes of squeezing from one quadrature to another are also observed for a strong value of the coupling parameter of the classical field. Furthermore, the system in this case displays partial entanglement and the state of the field loses its purity. - Highlights: > The time-dependent JCM in the presence of a classical field is still one of the essential problems in quantum optics. > A new approach is applied through a certain canonical transformation. > The classical field augments the atomic frequency ω0 and mixes the original atomic states.
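For orientation, a common form of the driven JCM Hamiltonian reads (ħ = 1; the precise coupling conventions below are an assumption for illustration, not taken from the paper):

```latex
H = \omega\, a^{\dagger}a + \frac{\omega_{0}}{2}\,\sigma_{z}
    + g\left(a\,\sigma_{+} + a^{\dagger}\sigma_{-}\right)
    + \lambda\left(\sigma_{+} + \sigma_{-}\right).
```

Diagonalizing the atomic part (ω0/2)σz + λσx by a rotation of the Pauli operators yields dressed atomic states with a renormalized splitting Ω = √(ω0² + 4λ²), which is one way to see the "augmented" atomic frequency and the mixing of the original atomic states mentioned above.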
The hyperbolic step potential: Anti-bound states, SUSY partners and Wigner time delays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gadella, M.; Kuru, Ş.; Negro, J., E-mail: jnegro@fta.uva.es
We study the scattering produced by a one-dimensional hyperbolic step potential, which is exactly solvable and of unusual interest because of its asymmetric character. The analytic continuation of the scattering matrix in the momentum representation has a branch cut and an infinite number of simple poles on the negative imaginary axis, which are related to the so-called anti-bound states. This model does not show resonances. Using the wave functions of the anti-bound states, we obtain supersymmetric (SUSY) partners, which are the series of Rosen–Morse II potentials. We have computed the Wigner reflection and transmission time delays for the hyperbolic step and such SUSY partners. Our results show that the more bound states a partner Hamiltonian has, the smaller the time delay. We have also evaluated time delays for the hyperbolic step potential in the classical case and have obtained striking similarities with the quantum case. - Highlights: • The scattering matrix of the hyperbolic step potential is studied. • The scattering matrix has a branch cut and an infinite number of poles. • The poles are associated with anti-bound states. • SUSY partners using anti-bound states are computed. • Wigner time delays for the hyperbolic step and partner potentials are compared.
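For reference, the Wigner time delays invoked here are energy derivatives of the scattering phases: writing the one-dimensional transmission and reflection amplitudes as t(E) = |t|e^{iφ_t(E)} and r(E) = |r|e^{iφ_r(E)}, the standard definitions are

```latex
\tau_{t}(E) = \hbar\,\frac{d\varphi_{t}}{dE}, \qquad
\tau_{r}(E) = \hbar\,\frac{d\varphi_{r}}{dE}.
```

These measure the extra time a wave packet spends traversing or being reflected by the potential relative to free propagation.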
Fluorescence Time-lapse Imaging of the Complete S. venezuelae Life Cycle Using a Microfluidic Device
Schlimpert, Susan; Flärdh, Klas; Buttner, Mark
2016-01-01
Live-cell imaging of biological processes at the single cell level has been instrumental to our current understanding of the subcellular organization of bacterial cells. However, the application of time-lapse microscopy to study the cell biological processes underpinning development in the sporulating filamentous bacteria Streptomyces has been hampered by technical difficulties. Here we present a protocol to overcome these limitations by growing the new model species, Streptomyces venezuelae, in a commercially available microfluidic device which is connected to an inverted fluorescence widefield microscope. Unlike the classical model species, Streptomyces coelicolor, S. venezuelae sporulates in liquid, allowing the application of microfluidic growth chambers to cultivate and microscopically monitor the cellular development and differentiation of S. venezuelae over long time periods. In addition to monitoring morphological changes, the spatio-temporal distribution of fluorescently labeled target proteins can also be visualized by time-lapse microscopy. Moreover, the microfluidic platform offers the experimental flexibility to exchange the culture medium, which is used in the detailed protocol to stimulate sporulation of S. venezuelae in the microfluidic chamber. Images of the entire S. venezuelae life cycle are acquired at specific intervals and processed in the open-source software Fiji to produce movies of the recorded time-series. PMID:26967231
Bukovsky, Ivo; Homma, Noriyasu; Ichiji, Kei; Cejnek, Matous; Slama, Matous; Benes, Peter M; Bila, Jiri
2015-01-01
During radiotherapy treatment for thoracic and abdominal cancers, for example lung cancers, respiratory motion moves the target tumor and thus degrades the accuracy of radiation dose delivery into the target. A real-time image-guided technique can be used to monitor such lung tumor motion for accurate dose delivery, but the system latency of up to several hundred milliseconds for repositioning the radiation beam also affects the accuracy. In order to compensate for the latency, a neural network prediction technique with real-time retraining can be used. We have investigated real-time prediction of 3D time series of lung tumor motion using a classical linear model, a perceptron model, and a class of higher-order neural network models that has more attractive attributes regarding its optimization convergence and computational efficiency. The implemented static feed-forward neural architectures are compared when using gradient descent adaptation and primarily the Levenberg-Marquardt batch algorithm, as two of the most common and most comprehensible learning algorithms. The proposed technique resulted in fast real-time retraining, so the total computational time on a PC platform was equal to or even less than the real treatment time. For a one-second prediction horizon, the proposed techniques achieved a 3D mean absolute error of less than one millimeter over one hundred seconds of total treatment time.
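The "classical linear model with real-time retraining" idea can be sketched as a rolling multi-step-ahead predictor whose weights are updated by normalized LMS whenever a forecast's target arrives. This is an illustrative stand-in: the paper's architectures and its Levenberg-Marquardt batch retraining are more elaborate, and the breathing signal below is synthetic.

```python
import numpy as np

def online_predict(series, order=8, horizon=10, mu=0.5):
    """Linear predictor of series[n + horizon] from the last `order` samples,
    retrained online by normalized LMS as new targets become available."""
    w = np.zeros(order + 1)
    preds = np.full(len(series), np.nan)
    for n in range(order - 1, len(series)):
        x = np.append(series[n - order + 1:n + 1], 1.0)   # past window + bias
        if n + horizon < len(series):
            preds[n + horizon] = w @ x                    # forecast ahead
        m = n - horizon                                   # window whose target just arrived
        if m >= order - 1:
            x_old = np.append(series[m - order + 1:m + 1], 1.0)
            e = series[n] - w @ x_old                     # realized forecast error
            w += mu * e * x_old / (x_old @ x_old + 1e-8)  # NLMS retraining step
    return preds

# synthetic 1-D respiratory motion at 10 Hz: 4 s breathing period plus slow drift
fs = 10
t = np.arange(0, 200, 1 / fs)
y = np.sin(2 * np.pi * t / 4.0) + 0.1 * np.sin(2 * np.pi * t / 40.0)
p = online_predict(y, order=8, horizon=fs)                # 1-second horizon
mae = np.nanmean(np.abs(p[-500:] - y[-500:]))             # error after convergence
print(mae)
```

Because each update costs only a dot product, the per-sample retraining time is far below the sampling interval, which is the point of the real-time retraining scheme.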
Biphoton Generation Driven by Spatial Light Modulation: Parallel-to-Series Conversion
NASA Astrophysics Data System (ADS)
Zhao, Luwei; Guo, Xianxin; Sun, Yuan; Su, Yumian; Loy, M. M. T.; Du, Shengwang
2016-05-01
We demonstrate the generation of narrowband biphotons with controllable temporal waveform by spontaneous four-wave mixing in cold atoms. In the group-delay regime, we study the dependence of the biphoton temporal waveform on the spatial profile of the pump laser beam. By using a spatial light modulator, we manipulate the spatial profile of the pump laser and map it onto the two-photon entangled temporal wave function. This parallel-to-series conversion (or spatial-to-temporal mapping) enables coding the parallel classical information of the pump spatial profile to the sequential temporal waveform of the biphoton quantum state. The work was supported by the Hong Kong RGC (Project No. 601113).
Affinity Proteomics in the mountains: Alpbach 2015.
Taussig, Michael J
2016-09-25
The 2015 Alpbach Workshop on Affinity Proteomics, organised by the EU AFFINOMICS consortium, was the 7th workshop in this series. As in previous years, the focus of the event was the current state of affinity methods for proteome analysis, including complementarity with mass spectrometry, progress in recombinant binder production methods, alternatives to classical antibodies as affinity reagents, analysis of proteome targets, industry focus on biomarkers, and diagnostic and clinical applications. The combination of excellent science with Austrian mountain scenery and winter sports engenders an atmosphere that makes this series of workshops exceptional. The articles in this Special Issue represent a cross-section of the presentations at the 2015 meeting. Copyright © 2016 Elsevier B.V. All rights reserved.
Tracks detection from high-orbit space objects
NASA Astrophysics Data System (ADS)
Shumilov, Yu. P.; Vygon, V. G.; Grishin, E. A.; Konoplev, A. O.; Semichev, O. P.; Shargorodskii, V. D.
2017-05-01
The paper presents the results of studies of a complex algorithm for the detection of high-orbit space objects. Before the algorithm is applied, a series of frames with weak tracks of space objects, which can be discrete, is recorded. The algorithm includes pre-processing that is classical for astronomy, matched filtering of each frame and its threshold processing, a shear transformation, median filtering of the transformed series of frames, repeated threshold processing, and detection decision making. Weak tracks of space objects were simulated on real frames of the night starry sky obtained in the stationary-telescope regime. It is shown that the penetrating power (limiting magnitude) of the optoelectronic device increased by almost 2 magnitudes.
Matrix Transformations between Certain Sequence Spaces over the Non-Newtonian Complex Field
Efe, Hakan
2014-01-01
In some cases, the most general linear operator between two sequence spaces is given by an infinite matrix. So the theory of matrix transformations has always been of great interest in the study of sequence spaces. In the present paper, we introduce the matrix transformations in sequence spaces over the field ℂ* and characterize some classes of infinite matrices with respect to the non-Newtonian calculus. Also we give the necessary and sufficient conditions on an infinite matrix transforming one of the classical sets over ℂ* to another one. Furthermore, the concept for sequence-to-sequence and series-to-series methods of summability is given with some illustrated examples. PMID:25110740
Borgonovo, Andrea; Bianchi, Albino; Marchetti, Andrea; Censi, Rachele; Maiorana, Carlo
2012-05-01
After an inferior alveolar nerve (IAN) injury, the onset of altered sensation usually begins immediately after surgery. However, it sometimes begins after several days, which is referred to as delayed paresthesia. The authors considered three different etiologies that likely produce inflammation along the nerve trunk and cause delayed paresthesia: compression of the clot, fibrous reorganization of the clot, and nerve trauma caused by bone fragments during clot organization. The aim of this article was to evaluate the etiology of IAN delayed paresthesia, analyze the literature, present a case series related to three different causes of this pathology, and compare delayed paresthesia with the classic immediate symptomatic paresthesia.
NASA Astrophysics Data System (ADS)
Suh, Uhi Rinn
2018-02-01
The purpose of this article is to investigate relations between W-superalgebras and integrable super-Hamiltonian systems. To this end, we introduce the generalized Drinfel'd-Sokolov (D-S) reduction associated to a Lie superalgebra g and its even nilpotent element f, and we find a new definition of the classical affine W-superalgebra W(g,f,k) via the D-S reduction. This new construction allows us to find free generators of W(g,f,k), as a differential superalgebra, and two independent Lie brackets on W(g,f,k)/∂W(g,f,k). Moreover, we describe super-Hamiltonian systems with the Poisson vertex algebras theory. A W-superalgebra with certain properties can be understood as an underlying differential superalgebra of a series of integrable super-Hamiltonian systems.
Why didn't Box-Jenkins win (again)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pack, D.J.; Downing, D.J.
This paper focuses on the forecasting performance of the Box-Jenkins methodology applied to the 111 time series of the Makridakis competition. It considers the influence of the following factors: (1) time series length, (2) time-series information (autocorrelation) content, (3) time-series outliers or structural changes, (4) averaging results over time series, and (5) forecast time origin choice. It is found that the 111 time series contain substantial numbers of very short series, series with obvious structural change, and series whose histories are relatively uninformative. If these series are typical of those that one must face in practice, the real message of the competition is that univariate time series extrapolations will frequently fail regardless of the methodology employed to produce them.
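A univariate Box-Jenkins-style identification step can be sketched as an AR(p) fit via the Yule-Walker equations; this is a simplified stand-in for the full ARIMA methodology, and the simulated process and order below are illustrative:

```python
import numpy as np

def yule_walker_ar(x, p):
    """Fit AR(p) coefficients by the Yule-Walker equations.

    A minimal sketch of univariate identification: estimate the
    autocovariances r(0..p) and solve the Toeplitz system R a = r.
    """
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    r = np.array([x[:n - k] @ x[k:] / n for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, r[1:])

rng = np.random.default_rng(1)
# Simulate an AR(2) process x_t = 0.6 x_{t-1} - 0.3 x_{t-2} + e_t
x = np.zeros(5000)
for t in range(2, len(x)):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()
a = yule_walker_ar(x, 2)
```

With a long, informative series the estimated coefficients recover the true ones closely; the competition's point is precisely that short or structurally unstable series break this step.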
Multifractal analysis of visibility graph-based Ito-related connectivity time series.
Czechowski, Zbigniew; Lovallo, Michele; Telesca, Luciano
2016-02-01
In this study, we investigate multifractal properties of connectivity time series resulting from the visibility graph applied to normally distributed time series generated by the Ito equations with multiplicative power-law noise. We show that multifractality of the connectivity time series (i.e., the series of the number of links outgoing from each node) increases with the exponent of the power-law noise. The multifractality of the connectivity time series could be due to the width of the connectivity degree distribution, which can be related to the exit time of the associated Ito time series. Furthermore, the connectivity time series are characterized by persistence, although the original Ito time series are random; this is due to the visibility graph procedure which, in connecting the values of the time series, generates persistence but destroys most of the nonlinear correlations. Moreover, the visibility graph is sensitive in detecting wide "depressions" in the input time series.
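The connectivity (degree) series that the abstract analyzes can be computed with a direct O(n^2) implementation of the natural visibility criterion; this is a sketch in which the Ito-generated inputs are replaced by a tiny hand-checkable series:

```python
import numpy as np

def visibility_degrees(y):
    """Degree (connectivity) series of the natural visibility graph.

    Two samples (a, y[a]) and (b, y[b]) are linked if every intermediate
    sample lies strictly below the straight line joining them (the
    natural visibility criterion of Lacasa et al.).
    """
    n = len(y)
    deg = np.zeros(n, dtype=int)
    for a in range(n):
        for b in range(a + 1, n):
            cs = np.arange(a + 1, b)
            # height of the a-b sight line at each intermediate index
            line = y[b] + (y[a] - y[b]) * (b - cs) / (b - a)
            if cs.size == 0 or np.all(y[cs] < line):
                deg[a] += 1
                deg[b] += 1
    return deg

deg = visibility_degrees(np.array([1.0, 3.0, 2.0, 4.0, 1.0]))  # → [1 3 2 3 1]
```

The resulting degree series is what the multifractal analysis is then applied to; peaks in the input act as "hubs" with high connectivity, which is why wide depressions between them are readily detected.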
NASA Astrophysics Data System (ADS)
Ripepi, V.; Moretti, M. I.; Clementini, G.; Marconi, M.; Cioni, M. R.; Marquette, J. B.; Tisserand, P.
2012-09-01
The VISTA Magellanic Cloud (VMC, PI M.R. Cioni) survey is collecting K_S-band time series photometry of the system formed by the two Magellanic Clouds (MC) and the "bridge" that connects them. These data are used to build K_S-band light curves of the MC RR Lyrae stars and Classical Cepheids and to determine absolute distances and the 3D geometry of the whole system using the K-band period-luminosity (PL K_S), the period-luminosity-color (PLC) and the Wesenheit relations applicable to these types of variables. As an example of the survey's potential we present results from the VMC observations of two fields centered respectively on the South Ecliptic Pole and the 30 Doradus star-forming region of the Large Magellanic Cloud. The VMC K_S-band light curves of the RR Lyrae stars in these two regions have very good photometric quality, with typical errors for the individual data points in the range of ~0.02 to 0.05 mag. The Cepheids have excellent light curves (typical errors of ~0.01 mag). The average K_S magnitudes derived for both types of variables were used to derive PL K_S relations that are in general good agreement within the errors with the literature data, and show a smaller scatter than previous studies.
Classical Wigner method with an effective quantum force: application to reaction rates.
Poulsen, Jens Aage; Li, Huaqing; Nyman, Gunnar
2009-07-14
We construct an effective "quantum force" to be used in the classical molecular dynamics part of the classical Wigner method when determining correlation functions. The quantum force is obtained by estimating the most important short time separation of the Feynman paths that enter into the expression for the correlation function. The evaluation of the force is then as easy as classical potential energy evaluations. The ideas are tested on three reaction rate problems. The resulting transmission coefficients are in much better agreement with accurate results than transmission coefficients from the ordinary classical Wigner method.
The Intrinsic Pathway of Coagulation as a Target for Antithrombotic Therapy
Wheeler, Allison P.; Gailani, David
2016-01-01
Plasma coagulation in the activated partial thromboplastin time assay is initiated by sequential activation of coagulation factors XII, XI and IX – the classical intrinsic pathway of coagulation. It is well recognized that this series of proteolytic reactions is not an accurate model for hemostasis in vivo, as factor XII deficiency does not cause abnormal bleeding, and fXI deficiency causes a relatively mild propensity to bleed excessively with injury. Despite their limited roles in hemostasis, there is mounting evidence that fXI and fXII contribute to thrombosis, and that inhibiting them can produce an antithrombotic effect with a relatively small effect on hemostasis. In this chapter the contributions of components of the intrinsic pathway to thrombosis in animal models and humans are discussed, and results of early clinical trials of drugs targeting factors IX, XI and XII are presented. PMID:27637310
Emotion Recognition from EEG Signals Using Multidimensional Information in EMD Domain.
Zhuang, Ning; Zeng, Ying; Tong, Li; Zhang, Chi; Zhang, Hanming; Yan, Bin
2017-01-01
This paper introduces a method for feature extraction and emotion recognition based on empirical mode decomposition (EMD). Using EMD, EEG signals are decomposed into Intrinsic Mode Functions (IMFs) automatically. Multidimensional information from the IMFs is utilized as features: the first difference of the time series, the first difference of the phase, and the normalized energy. The performance of the proposed method is verified on a publicly available emotional database. The results show that the three features are effective for emotion recognition. The role of each IMF is investigated, and we find that the high-frequency component IMF1 has a significant effect on detecting different emotional states. The informative electrodes based on the EMD strategy are analyzed. In addition, the classification accuracy of the proposed method is compared with several classical techniques, including fractal dimension (FD), sample entropy, differential entropy, and discrete wavelet transform (DWT). Experiment results on the DEAP dataset demonstrate that our method can improve emotion recognition performance.
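The three IMF features named above can be sketched as follows. This is a hedged reconstruction: the paper's exact normalizations and sampling rate are assumptions, and the instantaneous phase is obtained from an FFT-based analytic signal (equivalent to a Hilbert transform):

```python
import numpy as np

def imf_features(imf):
    """Three features of one IMF: mean |first difference| of the series,
    mean |first difference| of the instantaneous phase, and energy
    normalized by length. Normalizations here are illustrative."""
    imf = np.asarray(imf, float)
    n = len(imf)
    d_t = np.mean(np.abs(np.diff(imf)))
    # FFT-based analytic signal (zero out negative frequencies)
    spec = np.fft.fft(imf)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(spec * h)
    phase = np.unwrap(np.angle(analytic))
    d_phase = np.mean(np.abs(np.diff(phase)))
    energy = np.sum(imf ** 2) / n
    return d_t, d_phase, energy

# A 10 Hz sinusoid sampled at 128 Hz stands in for an extracted IMF
t = np.arange(0, 1, 1 / 128.0)
imf = np.sin(2 * np.pi * 10 * t)
d_t, d_phase, energy = imf_features(imf)
```

For this pure tone the mean phase increment recovers 2π·10/128 rad/sample and the normalized energy is 0.5, which is a quick sanity check on the feature definitions.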
Functional imaging with low-resolution brain electromagnetic tomography (LORETA): a review.
Pascual-Marqui, R D; Esslen, M; Kochi, K; Lehmann, D
2002-01-01
This paper reviews several recent publications that have successfully used the functional brain imaging method known as LORETA. Emphasis is placed on the electrophysiological and neuroanatomical basis of the method, on the localization properties of the method, and on the validation of the method in real experimental human data. Papers that criticize LORETA are briefly discussed. LORETA publications in the 1994-1997 period based localization inference on images of raw electric neuronal activity. In 1998, a series of papers appeared that based localization inference on the statistical parametric mapping methodology applied to high-time resolution LORETA images. Starting in 1999, quantitative neuroanatomy was added to the methodology, based on the digitized Talairach atlas provided by the Brain Imaging Centre, Montreal Neurological Institute. The combination of these methodological developments has placed LORETA at a level that compares favorably to the more classical functional imaging methods, such as PET and fMRI.
Extreme values and fat tails of multifractal fluctuations
NASA Astrophysics Data System (ADS)
Muzy, J. F.; Bacry, E.; Kozhemyak, A.
2006-06-01
In this paper we discuss the problem of the estimation of extreme event occurrence probability for data drawn from some multifractal process. We also study the heavy (power-law) tail behavior of the probability density function associated with such data. We show that because of strong correlations, the standard extreme value approach is not valid and classical tail exponent estimators should be interpreted cautiously. Extreme statistics associated with multifractal random processes turn out to be characterized by non-self-averaging properties. Our considerations rely upon some analogy between random multiplicative cascades and the physics of disordered systems and also on recent mathematical results about the so-called multifractal formalism. Applied to financial time series, our findings allow us to propose a unified framework that accounts for the observed multiscaling properties of return fluctuations, the volatility clustering phenomenon and the observed “inverse cubic law” of the return pdf tails.
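For contrast with the abstract's caution, the classical i.i.d.-based tail estimator it refers to can be sketched with the Hill estimator on a synthetic Pareto sample; the tail exponent 3 echoes the "inverse cubic law", and the sample size and order-statistic count k are illustrative:

```python
import numpy as np

def hill_estimator(sample, k):
    """Classical Hill estimator of the tail exponent alpha.

    Uses the k largest order statistics:
    alpha_hat = 1 / mean(log(X_(i) / X_(k))), i = 1..k.
    This is the kind of i.i.d.-based estimator the abstract warns
    should be interpreted cautiously for strongly correlated data.
    """
    x = np.sort(np.asarray(sample, float))[::-1]
    logs = np.log(x[:k] / x[k])
    return 1.0 / np.mean(logs)

rng = np.random.default_rng(2)
# Classical Pareto sample (x_min = 1) with true tail exponent alpha = 3
alpha_true = 3.0
sample = rng.pareto(alpha_true, size=100_000) + 1.0
alpha_hat = hill_estimator(sample, k=2000)
```

On independent draws the estimator is accurate; the paper's point is that under the strong correlations of multifractal data this apparent reliability breaks down.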
Dissipative Effects on Inertial-Range Statistics at High Reynolds Numbers.
Sinhuber, Michael; Bewley, Gregory P; Bodenschatz, Eberhard
2017-09-29
Using the unique capabilities of the Variable Density Turbulence Tunnel at the Max Planck Institute for Dynamics and Self-Organization, Göttingen, we report experimental measurements in classical grid turbulence that uncover oscillations of the velocity structure functions in the inertial range. This was made possible by measuring extremely long time series of up to 10^{10} samples of the turbulent fluctuating velocity, which corresponds to O(10^{7}) integral length scales. The measurements were conducted in a well-controlled environment at a wide range of high Reynolds numbers from R_{λ}=110 up to R_{λ}=1600, using both traditional hot-wire probes as well as the nanoscale thermal anemometry probe developed at Princeton University. An implication of the observed oscillations is that dissipation influences the inertial-range statistics of turbulent flows at scales significantly larger than predicted by current models and theories.
NASA Astrophysics Data System (ADS)
Cao, Jin; Jiang, Zhibin; Wang, Kangzhou
2017-07-01
Many nonlinear customer satisfaction-related factors significantly influence the future customer demand for service-oriented manufacturing (SOM). To address this issue and enhance the prediction accuracy, this article develops a novel customer demand prediction approach for SOM. The approach combines the phase space reconstruction (PSR) technique with the optimized least square support vector machine (LSSVM). First, the prediction sample space is reconstructed by the PSR to enrich the time-series dynamics of the limited data sample. Then, the generalization and learning ability of the LSSVM are improved by the hybrid polynomial and radial basis function kernel. Finally, the key parameters of the LSSVM are optimized by the particle swarm optimization algorithm. In a real case study, the customer demand prediction of an air conditioner compressor is implemented. Furthermore, the effectiveness and validity of the proposed approach are demonstrated by comparison with other classical prediction approaches.
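The phase space reconstruction step can be sketched as a delay-coordinate (Takens) embedding feeding a linear ridge fit, used here as a stand-in for the kernel LSSVM; the embedding dimension, delay, and sine test signal are illustrative assumptions:

```python
import numpy as np

def embed(x, dim=3, tau=2, horizon=1):
    """Delay-coordinate phase space reconstruction.

    Turns a scalar series into supervised pairs: row t is
    [x(t), x(t+tau), ..., x(t+(dim-1)tau)] and the target is the value
    `horizon` steps past the last coordinate, enriching the sample
    space before a learner such as LSSVM is trained on it.
    """
    x = np.asarray(x, float)
    n = len(x) - (dim - 1) * tau - horizon
    X = np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])
    y = x[(dim - 1) * tau + horizon: (dim - 1) * tau + horizon + n]
    return X, y

X, y = embed(np.sin(0.3 * np.arange(500)))
# Linear least-squares fit with a small ridge term (LSSVM stand-in)
w = np.linalg.solve(X.T @ X + 1e-6 * np.eye(X.shape[1]), X.T @ y)
err = np.max(np.abs(X @ w - y))
```

Because a sinusoid lives on a low-dimensional attractor, even this linear model predicts it almost exactly after embedding, which is the intuition behind reconstructing the sample space before training.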
Effect of modulator sorption on gradient shape in ion-exchange chromatography
NASA Technical Reports Server (NTRS)
Velayudhan, A.; Ladisch, M. R.; Mitchell, C. A. (Principal Investigator)
1995-01-01
Mobile phase additives, or modulators, are used in gradient elution chromatography to facilitate separation and reduce separation time. The modulators are usually assumed to be linearly adsorbed or unadsorbed. Here, the consequences of nonlinear modulator adsorption are examined for ion-exchange gradient elution through a series of simulations. Even when the buffer salt is identical to the modulator salt, gradient deformation is observed; the extent of deformation increases as the volume of the feed is increased. When the modulator salt is different from the buffer salt, unusual effects are observed, and the chromatograms are quite different from those predicted by classical gradient elution theory. In particular, local increases in the buffer concentration are found between feed bands, and serve to improve the separation. These effects become more pronounced as the feed volume increases, and could therefore prove valuable in preparative applications.
PsyGlass: Capitalizing on Google Glass for naturalistic data collection.
Paxton, Alexandra; Rodriguez, Kevin; Dale, Rick
2015-09-01
As commercial technology moves further into wearable technologies, cognitive and psychological scientists can capitalize on these devices to facilitate naturalistic research designs while still maintaining strong experimental control. One such wearable technology is Google Glass (Google, Inc.: www.google.com/glass), which can present wearers with audio and visual stimuli while tracking a host of multimodal data. In this article, we introduce PsyGlass, a framework for incorporating Google Glass into experimental work that is freely available for download and community improvement over time (www.github.com/a-paxton/PsyGlass). As a proof of concept, we use this framework to investigate dual-task pressures on naturalistic interaction. The preliminary study demonstrates how designs from classic experimental psychology may be integrated in naturalistic interactive designs with emerging technologies. We close with a series of recommendations for using PsyGlass and a discussion of how wearable technology more broadly may contribute to new or adapted naturalistic research designs.
Correlation dimension and phase space contraction via extreme value theory
NASA Astrophysics Data System (ADS)
Faranda, Davide; Vaienti, Sandro
2018-04-01
We show how to obtain theoretical and numerical estimates of correlation dimension and phase space contraction by using the extreme value theory. The maxima of suitable observables sampled along the trajectory of a chaotic dynamical system converge asymptotically to classical extreme value laws where: (i) the inverse of the scale parameter gives the correlation dimension and (ii) the extremal index is associated with the rate of phase space contraction for backward iteration, which in dimension 1 and 2 is closely related to the positive Lyapunov exponent and in higher dimensions is related to the metric entropy. We call it the Dynamical Extremal Index. Numerical estimates are straightforward to obtain as they imply just a simple fit to a univariate distribution. Numerical tests range from low-dimensional maps to generalized Hénon maps and climate data. The estimates of the indicators are particularly robust even with relatively short time series.
Quantum communication through an unmodulated spin chain.
Bose, Sougato
2003-11-14
We propose a scheme for using an unmodulated and unmeasured spin chain as a channel for short distance quantum communications. The state to be transmitted is placed on one spin of the chain and received later on a distant spin with some fidelity. We first obtain simple expressions for the fidelity of quantum state transfer and the amount of entanglement sharable between any two sites of an arbitrary Heisenberg ferromagnet using our scheme. We then apply this to the realizable case of an open ended chain with nearest neighbor interactions. The fidelity of quantum state transfer is obtained as an inverse discrete cosine transform and as a Bessel function series. We find that in a reasonable time, a qubit can be directly transmitted with better than classical fidelity across the full length of chains of up to 80 spins. Moreover, our channel allows distillable entanglement to be shared over arbitrary distances.
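The transfer fidelity described above can be checked numerically: in the single-excitation sector an open nearest-neighbour chain reduces to a tridiagonal hopping matrix, and diagonalizing it reproduces the discrete-cosine/Bessel-series amplitude. Units with ħ = 1 and a uniform coupling J are assumed here:

```python
import numpy as np

def transfer_amplitude(N, t, J=1.0):
    """|<N| exp(-iHt) |1>| for one excitation on an open N-site chain.

    The one-magnon sector of a nearest-neighbour chain is an N x N
    tridiagonal hopping matrix; summing over its (cosine) eigenmodes
    gives the site-1-to-site-N transfer amplitude.
    """
    H = J * (np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1))
    E, V = np.linalg.eigh(H)
    amp = np.sum(V[0] * V[N - 1] * np.exp(-1j * E * t))
    return abs(amp)

# Amplitude from one end of a 5-site chain to the other over time
ts = np.linspace(0, 50, 2001)
amps = np.array([transfer_amplitude(5, t) for t in ts])
peak = amps.max()
```

For short chains the amplitude comes close to 1 at certain times even though uniform couplings give perfect transfer only for very small N, consistent with the abstract's "better than classical fidelity" over modest lengths.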
Imbibition with swelling: Capillary rise in thin deformable porous media
NASA Astrophysics Data System (ADS)
Kvick, Mathias; Martinez, D. Mark; Hewitt, Duncan R.; Balmforth, Neil J.
2017-07-01
The imbibition of a liquid into a thin deformable porous substrate driven by capillary suction is considered. The substrate is initially dry and has uniform porosity and thickness. Two-phase flow theory is used to describe how the liquid flows through the pore space behind the wetting front when out-of-plane deformation of the solid matrix is considered. Neglecting gravity and evaporation, standard shallow-layer scalings are used to construct a reduced model of the dynamics. The model predicts convergence to a self-similar behavior in all regions except near the wetting front, where a boundary layer arises whose structure narrows with the advance of the front. Over time, the rise height approaches the similarity scaling of t^{1/2}, as in the classical Washburn or BCLW law. The results are compared with a series of laboratory experiments using cellulose paper sheets, which provide qualitative agreement.
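The convergence to the Washburn t^{1/2} scaling can be illustrated with a nondimensional sketch of dh/dt = D/h, capillary suction at the front balancing viscous drag over the wetted length; the lumped coefficient, step size, and initial wetted length are illustrative assumptions:

```python
import numpy as np

# Classical Washburn/BCLW imbibition: dh/dt = D / h, with exact solution
# h(t) = sqrt(2 D t + h0^2) -> t^(1/2) at late times. D lumps surface
# tension, contact angle, pore radius and viscosity (nondimensional here).
D = 1.0
dt = 1.0e-3
h = 0.1               # small initial wetted length avoids the 1/h singularity
hs = [h]
for _ in range(100_000):
    h += dt * D / h   # explicit Euler step of dh/dt = D/h
    hs.append(h)
hs = np.array(hs)
ts = dt * np.arange(len(hs))

# Late-time scaling exponent from a log-log fit: should approach 1/2
mask = ts > 5.0
slope = np.polyfit(np.log(ts[mask]), np.log(hs[mask]), 1)[0]
```

The fitted late-time exponent sits very close to 1/2, the Washburn value the rise height approaches once the initial condition is forgotten.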
Suls, Jerry; Martin, René
2005-12-01
This article describes a series of studies using the daily process paradigm to describe and understand the affective dynamics of people who experience frequent and intense bouts of a wide range of negative emotions. In several studies, community residents reported on problem occurrence and affect several times a day or at the end of the day. We found reliable evidence that persons who scored high (vs. low) in Neuroticism reported more daily problems, tended to react with more severe emotions, experienced more mood spillover from prior occasions, and exhibited stronger reactions to recurring problems (the "neurotic cascade"). The susceptibility of neurotics to stress seems to extend to all types of problems while certain other dimensions of personality (e.g., Agreeableness) are associated with hyperreactivity to particular kinds of problems. The research demonstrates how daily process research can provide insight about classic problems in the field of individual differences.
NASA Astrophysics Data System (ADS)
Hsu, Kuo-Lin; Gupta, Hoshin V.; Gao, Xiaogang; Sorooshian, Soroosh; Imam, Bisher
2002-12-01
Artificial neural networks (ANNs) can be useful in the prediction of hydrologic variables, such as streamflow, particularly when the underlying processes have complex nonlinear interrelationships. However, conventional ANN structures suffer from network training issues that significantly limit their widespread application. This paper presents a multivariate ANN procedure entitled self-organizing linear output map (SOLO), whose structure has been designed for rapid, precise, and inexpensive estimation of network structure/parameters and system outputs. More important, SOLO provides features that facilitate insight into the underlying processes, thereby extending its usefulness beyond forecast applications as a tool for scientific investigations. These characteristics are demonstrated using a classic rainfall-runoff forecasting problem. Various aspects of model performance are evaluated in comparison with other commonly used modeling approaches, including multilayer feedforward ANNs, linear time series modeling, and conceptual rainfall-runoff modeling.
Synchronisation and Circuit Realisation of Chaotic Hartley System
NASA Astrophysics Data System (ADS)
Varan, Metin; Akgül, Akif; Güleryüz, Emre; Serbest, Kasım
2018-06-01
The Hartley chaotic system is topologically the simplest, but its dynamical behaviours are very rich and its synchronisation has not been reported in the literature. This paper aims to introduce a simple chaotic system which can be used as an alternative to classical chaotic systems in synchronisation fields. Time series, phase portraits, and bifurcation diagrams reveal the dynamics of the system. The chaotic Hartley model is also supported with electronic circuit model simulations. Its exponential dynamics are hard to realise in a circuit model; this paper is the first in the literature to handle such a complex modelling problem. Modelling, synchronisation, and circuit realisation of the Hartley system are implemented in MATLAB-Simulink and ORCAD environments, respectively. The effectiveness of the applied synchronisation method is revealed via numerical methods, and the results are discussed. The results show that this complex chaotic system can be used in secure communication fields.
Cobo, Carmen María Sarabia
2014-12-01
To evaluate the influence exercised by institutionalization on the autonomy and perceived quality of life of the institutionalized elderly. The study is quasi-experimental (interrupted time series) and longitudinal. The sample is composed of 104 elderly people admitted to three nursing homes in Santander, Spain. To assess quality of life and dependence, two scales were used: the Barthel Index and the Lawton Index. There was an important relationship between autonomy and independence and their deterioration due to institutionalization, in aspects such as the physical and social. It is important to point out that the dependence of the elderly is a complex phenomenon which admits many types of intervention, including the customary ones referring to more classic welfare actions, which tend to compensate for the absence of autonomy in everyday life by providing services and attention to meet this need, without having to resort to institutionalization.
Momentum conserving defects in affine Toda field theories
NASA Astrophysics Data System (ADS)
Bristow, Rebecca; Bowcock, Peter
2017-05-01
Type II integrable defects with more than one degree of freedom at the defect are investigated. A condition on the form of the Lagrangian for such defects is found which ensures the existence of a conserved momentum in the presence of the defect. In addition it is shown that for any Lagrangian satisfying this condition, the defect equations of motion, when taken to hold everywhere, can be extended to give a Bäcklund transformation between the bulk theories on either side of the defect. This strongly suggests that such systems are integrable. Momentum conserving defects and Bäcklund transformations for affine Toda field theories based on the A_n, B_n, C_n and D_n series of Lie algebras are found. The defect associated with the D_4 affine Toda field theory is examined in more detail. In particular classical time delays for solitons passing through the defect are calculated.
Experimental demonstration of chaotic scattering of microwaves
NASA Astrophysics Data System (ADS)
Doron, E.; Smilansky, U.; Frenkel, A.
1990-12-01
Reflection of microwaves from a cavity is measured in a frequency domain where the underlying classical chaotic scattering leaves a clear mark on the wave dynamics. We check the hypothesis that the fluctuations of the S matrix can be described in terms of parameters characterizing the chaotic classical scattering. Absorption of energy in the cavity walls is shown to significantly affect the results, and is linked to time-domain properties of the scattering in a general way. We also show that features whose origin is entirely due to wave dynamics (e.g., the enhancement of the Wigner time delay due to time-reversal symmetry) coexist with other features which characterize the underlying classical dynamics.
Time Reparametrization Group and the Long Time Behavior in Quantum Glassy Systems
NASA Astrophysics Data System (ADS)
Kennett, Malcolm P.; Chamon, Claudio
2001-02-01
We study the long time dynamics of a quantum version of the Sherrington-Kirkpatrick model. Time reparametrizations of the dynamical equations have a parallel with renormalization group transformations; in this language the long time behavior of this model is controlled by a reparametrization group (RpG) fixed point of the classical dynamics. The irrelevance of quantum terms in the dynamical equations in the aging regime explains the classical nature of the out-of-equilibrium fluctuation-dissipation relation.
Inductive reasoning and implicit memory: evidence from intact and impaired memory systems.
Girelli, Luisa; Semenza, Carlo; Delazer, Margarete
2004-01-01
In this study, we modified a classic problem solving task, number series completion, in order to explore the contribution of implicit memory to inductive reasoning. Participants were required to complete number series sharing the same underlying algorithm (e.g., +2), differing in both constituent elements (e.g., 2 4 6 8 versus 5 7 9 11) and correct answers (e.g., 10 versus 13). In Experiment 1, reliable priming effects emerged, whether primes and targets were separated by four or ten fillers. Experiment 2 provided direct evidence that the observed facilitation arises at central stages of problem solving, namely the identification of the algorithm and its subsequent extrapolation. The observation of analogous priming effects in a severely amnesic patient strongly supports the hypothesis that the facilitation in number series completion was largely determined by implicit memory processes. These findings demonstrate that the influence of implicit processes extends to higher-level cognitive domains such as inductive reasoning.
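The task's identify-then-extrapolate structure can be sketched for two classic rule families (constant additive step and constant ratio); the function and rule set are illustrative, not the study's materials:

```python
def complete_series(seq):
    """Infer the underlying algorithm of a number series and extrapolate.

    A sketch restricted to two simple rule families: a constant
    additive step (e.g. "+2") and a constant ratio. Returns the next
    element, or None if neither rule fits.
    """
    diffs = {b - a for a, b in zip(seq, seq[1:])}
    if len(diffs) == 1:                      # arithmetic rule, e.g. "+2"
        return seq[-1] + diffs.pop()
    if 0 not in seq[:-1]:
        ratios = {b / a for a, b in zip(seq, seq[1:])}
        if len(ratios) == 1:                 # geometric rule
            return seq[-1] * ratios.pop()
    return None

# The two example series above share the "+2" algorithm:
a = complete_series([2, 4, 6, 8])        # → 10
b = complete_series([5, 7, 9, 11])       # → 13
```

Both example series yield different answers from the same inferred algorithm, which is exactly the dissociation the priming manipulation exploits.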
NASA Technical Reports Server (NTRS)
Stein, M.
1985-01-01
Nonlinear strain-displacement relations for three-dimensional elasticity are determined in orthogonal curvilinear coordinates. To develop a two-dimensional theory, the displacements are expressed by a trigonometric series representation through the thickness. The nonlinear strain-displacement relations are expanded into series which contain all first and second degree terms. In the series for the displacements only the first few terms are retained. Insertion of the expansions into the three-dimensional virtual work expression leads to nonlinear equations of equilibrium for laminated and thick plates and shells that include the effects of transverse shearing. Equations of equilibrium and buckling equations are derived for flat plates and cylindrical shells. The shell equations reduce to conventional transverse shearing shell equations when the effects of the trigonometric terms are omitted and to classical shell equations when the trigonometric terms are omitted and the shell is assumed to be thin.
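The through-the-thickness representation described above can be written schematically as follows; this is a generic trigonometric shear-deformation form, and the paper's exact expansion and truncation may differ:

```latex
\begin{aligned}
u(x,y,z) &= u_0(x,y) - z\,\frac{\partial w_0}{\partial x}
            + \frac{h}{\pi}\sin\frac{\pi z}{h}\,\phi_x(x,y),\\
v(x,y,z) &= v_0(x,y) - z\,\frac{\partial w_0}{\partial y}
            + \frac{h}{\pi}\sin\frac{\pi z}{h}\,\phi_y(x,y),\\
w(x,y,z) &= w_0(x,y),
\end{aligned}
```

where h is the thickness and \phi_x, \phi_y are shear rotations. Dropping the sine terms recovers classical thin-shell kinematics, consistent with the abstract's remark that the equations reduce to classical shell equations when the trigonometric terms are omitted.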
The Parker-Sochacki Method of Solving Differential Equations: Applications and Limitations
NASA Astrophysics Data System (ADS)
Rudmin, Joseph W.
2006-11-01
The Parker-Sochacki method is a powerful but simple technique of solving systems of differential equations, giving either analytical or numerical results. It has been in use for about 10 years now since its discovery by G. Edgar Parker and James Sochacki of the James Madison University Dept. of Mathematics and Statistics. It is being presented here because it is still not widely known and can benefit the listeners. It is a method of rapidly generating the Maclaurin series to high order, non-iteratively. It has been successfully applied to more than a hundred systems of equations, including the classical many-body problem. Its advantages include its speed of calculation, its simplicity, and the fact that it uses only addition, subtraction and multiplication. It is not just a polynomial approximation, because it yields the Maclaurin series, and therefore exhibits the advantages and disadvantages of that series. A few applications will be presented.
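The method's core idea, generating Maclaurin coefficients from Cauchy products of the series already computed, can be sketched on the polynomial ODE y' = y^2; this is a standard textbook-style example, not necessarily one from the talk:

```python
def parker_sochacki_y_squared(y0, order):
    """Maclaurin coefficients of y' = y^2, y(0) = y0, to any order.

    Parker-Sochacki style recurrence: matching powers of t gives
    (n+1) c_{n+1} = [t^n] y^2 = sum_{k<=n} c_k c_{n-k},
    a Cauchy product, so each new coefficient needs only additions,
    multiplications, and a division by the integer n+1.
    """
    c = [float(y0)]
    for n in range(order):
        cauchy = sum(c[k] * c[n - k] for k in range(n + 1))
        c.append(cauchy / (n + 1))
    return c

# y' = y^2 with y(0) = 1 has exact solution 1/(1 - t): all coefficients 1
coeffs = parker_sochacki_y_squared(1.0, 8)  # → [1.0, 1.0, ..., 1.0]
```

Because the recurrence is exact, the generated series is the true Maclaurin series, so it inherits that series' radius of convergence (here |t| < 1), matching the abstract's caveat about the advantages and disadvantages of the series itself.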
Citation classics in periodontology: a controlled study.
Nieri, Michele; Saletta, Daniele; Guidi, Luisa; Buti, Jacopo; Franceschi, Debora; Mauro, Saverio; Pini-Prato, Giovanpaolo
2007-04-01
The aims of this study were to identify the most cited articles in Periodontology published from January 1990 to March 2005 and to analyse the differences between citation Classics and less cited articles. The search was carried out in four international periodontal journals: Journal of Periodontology, Journal of Clinical Periodontology, International Journal of Periodontics and Restorative Dentistry, and Journal of Periodontal Research. The Classics, defined as articles cited at least 100 times, were identified using the Science Citation Index database. From every issue of the journals that contained a Classic, another article was randomly selected and used as a Control. Fifty-five Classics and 55 Controls were identified. Classic articles were longer, used more images, had more authors, and contained more self-references than Controls. Moreover, Classics had, on average, a larger sample size and often dealt with etiopathogenesis and prognosis, but were rarely controlled or randomized studies. Classic articles play an instructive role, but are often non-controlled studies.
Fragrance contact allergy: a 4-year retrospective study.
Cuesta, Laura; Silvestre, Juan Francisco; Toledo, Fernando; Lucas, Ana; Pérez-Crespo, María; Ballester, Irene
2010-08-01
Fragrance chemicals are the second most frequent cause of contact allergy. The mandatory labelling of 26 fragrance chemicals when present in cosmetics has facilitated management of patients allergic to fragrances. The study aimed to define the characteristics of the population allergic to perfumes detected in our hospital district, to determine the usefulness of markers of fragrance allergy in the baseline GEIDAC series, and to describe the contribution made by the fragrance series to the data obtained with the baseline series. We performed a 4-year retrospective study of patients tested with the Spanish baseline series and/or fragrance series. There are four fragrance markers in the baseline series: fragrance mix I (FM I), Myroxylon pereirae, fragrance mix II (FM II), and hydroxyisohexyl 3-cyclohexene carboxaldehyde. A total of 1253 patients were patch tested, 117 (9.3%) of whom were positive to a fragrance marker. FM I and M. pereirae detected 92.5% of the cases of fragrance contact allergy. FM II and hydroxyisohexyl 3-cyclohexene carboxaldehyde detected 6 additional cases and provided further information in 8, enabling improved management. A fragrance series was tested in a selected group of 86 patients, and positive results were obtained in 45.3%. Geraniol was the allergen most frequently found in the group of patients tested with the fragrance series. Classic markers detect the majority of cases of fragrance contact allergy. We recommend incorporating FM II in the Spanish baseline series, as in the European baseline series, and using a specific fragrance series to study patients allergic to a fragrance marker.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warneke, Jonas; Hou, Gao-Lei; Aprà, Edoardo
2017-10-09
The relative stability and electron loss processes of multiply charged anions (MCAs) have traditionally been explained in terms of the classical Coulomb interaction between spatially separated charges. In this study we report the surprising properties of [B12X12]2-, X = F – At, which are counterintuitive compared with the prevailing classical description and justify their classification into a new class of MCAs. In this new class of MCAs, comprising a “Boron core” surrounded by a “Halogen shell”, the sign of the total charge in these two regions changes along the halogen series from F to At. With the aid of photoelectron spectroscopy and electronic structure calculations we demonstrate that the behavior of these MCAs is largely determined by quantum effects rather than classical electrostatics. The second excess electron is always taken from the most positively charged region, viz. the “Boron core” for F – Br and the surrounding “Halogen shell” for I and At.
Chaos in the classical mechanics of bound and quasi-bound HX-4He complexes with X = F, Cl, Br, CN.
Gamboa, Antonio; Hernández, Henar; Ramilowski, Jordan A; Losada, J C; Benito, R M; Borondo, F; Farrelly, David
2009-10-01
The classical dynamics of weakly bound floppy van der Waals complexes have been extensively studied in the past except for the weakest of all, i.e., those involving He atoms. These complexes are of considerable current interest in light of recent experimental work focussed on the study of molecules trapped in small droplets of the quantum solvent (4)He. Despite a number of quantum investigations, details on the dynamics of how quantum solvation occurs remain unclear. In this paper, the classical rotational dynamics of a series of van der Waals complexes, HX-(4)He with X = F, Cl, Br, CN, are studied. In all cases, the ground state dynamics are found to be almost entirely chaotic, in sharp contrast to other floppy complexes, such as HCl-Ar, for which chaos sets in only at relatively high energies. The consequences of this result for quantum solvation are discussed. We also investigate rotationally excited states with J = 1 which, except for HCN-(4)He, are actually resonances that decay by rotational pre-dissociation.
A historical review of classic articles in surgery field.
Long, Xiao; Huang, Jiu-Zuo; Ho, Yuh-Shan
2014-11-01
Surgery is one of the most rapidly developing specialties of the past century. Diagnostic methods, operative techniques, and knowledge of the diseases are changing continuously. Throughout this academic history, many classic papers have brought advances to surgery; they were accepted and widely cited by medical specialists all over the world. Citation analysis reflects the recognition a work has received from its peers in the scientific community. Articles in the field of surgery that had been cited at least 1,000 times from publication through 2011 were analyzed. By categorizing the publication year, journals, authors, institutions, countries, life citation cycles, level of evidence provided, and characteristics of the topmost articles, we intended to determine what qualities make articles important to the specialty. The methodology used in this study was based on the Science Citation Index Expanded database of Web of Science from Thomson Reuters. According to the Journal Citation Reports of 2011, it indexes 8,336 journals with citation references across 176 Web of Science categories in the science edition. The level of evidence of these articles was graded according to the standard provided by the Oxford Centre for Evidence-Based Medicine. In total, 36 articles had been cited at least 1,000 times from their publication to the year 2011. According to their citation histories, 35 articles were further evaluated. These topmost articles covered 8 subspecialties of surgery and were published in 17 journals. The publication years varied from 1940 to 1999, and the articles provided different levels of evidence, most being retrospective studies of case series. Six articles were research articles involving animal models, histology analysis, and laboratory research; the others were clinical articles. From the results of citation analysis, the classic articles are not always among the top citations. In addition, some of these articles had no citations for several years after their publication.
The introduction of a commonly used classification or scoring system is a major factor in propelling citation by other authors. The most cited articles in surgery present their long academic life in spite of their level of evidence and journal impact factor in which they were published. Copyright © 2014 Elsevier Inc. All rights reserved.
Regenerating time series from ordinal networks.
McCullough, Michael; Sakellariou, Konstantinos; Stemler, Thomas; Small, Michael
2017-03-01
Recently proposed ordinal networks not only afford novel methods of nonlinear time series analysis but also constitute stochastic approximations of the deterministic flow time series from which the network models are constructed. In this paper, we construct ordinal networks from discretely sampled continuous chaotic time series and then regenerate new time series by taking random walks on the ordinal network. We then investigate the extent to which the dynamics of the original time series are encoded in the ordinal networks and retained through the process of regenerating new time series, using several distinct quantitative approaches. First, we use recurrence quantification analysis on traditional recurrence plots and order recurrence plots to compare the temporal structure of the original time series with random walk surrogate time series. Second, we estimate the largest Lyapunov exponent from the original time series and investigate the extent to which this invariant measure can be estimated from the surrogate time series. Finally, estimates of correlation dimension are computed to compare the topological properties of the original and surrogate time series dynamics. Our findings show that ordinal networks constructed from univariate time series data constitute stochastic models which approximate important dynamical properties of the original systems.
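A minimal sketch of the construction (an assumed reading, not the authors' exact implementation; the pattern length m and unit-lag embedding are illustrative choices): map each length-m window to its ordinal (permutation) pattern, link successive patterns, and regenerate a surrogate symbol sequence by a weighted random walk.

```python
# Ordinal-network sketch: nodes are permutation patterns of length-m
# windows, edges count transitions between successive (overlapping)
# windows, and a weighted random walk regenerates a symbol sequence.
import random
from collections import defaultdict

def ordinal_pattern(window):
    """Permutation pattern of a window, e.g. (0.1, 0.5, 0.3) -> (0, 2, 1)."""
    return tuple(sorted(range(len(window)), key=lambda i: window[i]))

def build_ordinal_network(series, m=3):
    """Transition counts between ordinal patterns of successive windows."""
    patterns = [ordinal_pattern(series[i:i + m])
                for i in range(len(series) - m + 1)]
    edges = defaultdict(lambda: defaultdict(int))
    for a, b in zip(patterns, patterns[1:]):
        edges[a][b] += 1
    return edges

def random_walk(edges, start, steps, rng=random):
    """Regenerate a surrogate symbol sequence by a weighted random walk."""
    node, path = start, [start]
    for _ in range(steps):
        nbrs = edges.get(node)
        if not nbrs:
            break
        targets, weights = zip(*nbrs.items())
        node = rng.choices(targets, weights=weights)[0]
        path.append(node)
    return path
```

A monotone series collapses to the single ascending pattern, so the walk is trivially periodic; chaotic input produces a richer network whose walk statistics approximate the source dynamics.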
Speed and heart-rate profiles in skating and classical cross-country skiing competitions.
Bolger, Conor M; Kocbach, Jan; Hegge, Ann Magdalen; Sandbakk, Øyvind
2015-10-01
To compare the speed and heart-rate profiles during international skating and classical competitions in male and female world-class cross-country skiers. Four male and 5 female skiers performed individual time trials of 15 km (men) and 10 km (women) in the skating and classical techniques on 2 consecutive days. Races were performed on the same 5-km course. The course was mapped with GPS and a barometer to provide a valid course and elevation profile. Time, speed, and heart rate were determined for uphill, flat, and downhill terrains throughout the entire competition by wearing a GPS and a heart-rate monitor. Times in uphill, flat, and downhill terrain were ~55%, 15-20%, and 25-30%, respectively, of the total race time for both techniques and genders. The average speed differences between skating and classical skiing were 9% and 11% for men and women, respectively, and these values were 12% and 15% for uphill, 8% and 13% for flat (all P < .05), and 2% and 1% for downhill terrain. The average speeds for men were 9% and 11% faster than for women in skating and classical, respectively, with corresponding numbers of 11% and 14% for uphill, 6% and 11% for flat, and 4% and 5% for downhill terrain (all P < .05). Heart-rate profiles were relatively independent of technique and gender. The greatest performance differences between the skating and classical techniques and between the 2 genders were found on uphill terrain. Therefore, these speed differences could not be explained by variations in exercise intensity.
NASA Technical Reports Server (NTRS)
Norbury, John W.
1989-01-01
The invariance of classical electromagnetism under charge conjugation, parity, and time reversal (CPT) is studied by considering the motion of a charged particle in electric and magnetic fields. Applying CPT transformations to the relevant physical quantities and noting that the motion remains physical demonstrates this invariance.
Jeon, Jonggu; Lim, Joon Hyung; Kim, Seongheun; Kim, Heejae; Cho, Minhaeng
2015-05-28
A time series of kinetic energies (KE) from classical molecular dynamics (MD) simulation contains fundamental information on system dynamics. It can also be analyzed in the frequency domain through Fourier transformation (FT) of velocity correlation functions, providing the energy content of different spectral regions. By limiting the FT time span, we have previously shown that spectral resolution of KE evolution is possible in nonequilibrium situations [Jeon and Cho, J. Chem. Phys. 2011, 135, 214504]. In this paper, we refine the method by employing the concept of instantaneous power spectra, extending it to reflect an instantaneous time correlation of velocities with those in the future as well as those in the past, and present a new method to obtain the instantaneous spectral density of KE (iKESD). This approach enables the simultaneous spectral and temporal resolution of KE with unlimited time precision. We discuss the formal and novel properties of the new iKESD approach, how to optimize computational methods, and how to determine parameters for practical applications. The method is specifically applied to the nonequilibrium MD simulation of vibrational relaxation of the OD stretch mode in a hydrated HOD molecule by employing a hybrid quantum mechanical/molecular mechanical (QM/MM) potential. We directly compare the computational results with the OD band population relaxation time profiles extracted from IR pump-probe measurements for 5% HOD in water. The calculated iKESD yields an OD bond relaxation time scale ∼30% larger than the experimental value, and this decay is largely frequency-independent if the classical anharmonicity is accounted for. From the integrated iKESD over intra- and intermolecular bands, the major energy transfer pathways were found to involve the HOD bending mode in the sub-ps range, then the internal modes of the solvent until 5 ps after excitation, and eventually the solvent intermolecular modes. Also, strong hydrogen bonding of HOD is found to significantly hinder the initial intramolecular energy transfer process.
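The general idea of a time-resolved spectral density can be illustrated with a much simpler windowed (short-time) Fourier transform of a velocity trace; this is a generic sketch of time-frequency analysis, not the authors' iKESD formalism.

```python
# Windowed periodograms of a velocity time series: each Hann-windowed
# segment yields a power spectrum, giving spectral content as a
# function of time (a crude stand-in for an instantaneous KE spectrum).
import numpy as np

def short_time_power_spectrum(v, dt, window_len, hop):
    """Return (times, freqs, power); power[i] is the spectrum of the
    Hann-windowed segment centred on times[i]."""
    w = np.hanning(window_len)
    freqs = np.fft.rfftfreq(window_len, dt)
    times, power = [], []
    for s in range(0, len(v) - window_len + 1, hop):
        seg = v[s:s + window_len] * w
        times.append((s + window_len / 2) * dt)
        power.append(np.abs(np.fft.rfft(seg)) ** 2)
    return np.array(times), freqs, np.array(power)
```

For a signal whose frequency switches partway through, the spectral peak of the early windows sits at the first frequency and that of the late windows at the second, which is exactly the kind of time-frequency resolution the abstract describes.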
GPS Position Time Series @ JPL
NASA Technical Reports Server (NTRS)
Owen, Susan; Moore, Angelyn; Kedar, Sharon; Liu, Zhen; Webb, Frank; Heflin, Mike; Desai, Shailen
2013-01-01
Different flavors of GPS time series analysis at JPL all use the same GPS Precise Point Positioning analysis raw time series; variations in time series analysis and post-processing are driven by different users.
- JPL Global Time Series/Velocities: researchers studying the reference frame, combining with VLBI/SLR/DORIS.
- JPL/SOPAC Combined Time Series/Velocities: crustal deformation for tectonic, volcanic, and ground water studies.
- ARIA Time Series/Coseismic Data Products: hazard monitoring and response focused. The ARIA data system is designed to integrate GPS and InSAR: GPS tropospheric delay is used for correcting InSAR, and Caltech's GIANT time series analysis uses GPS to correct orbital errors in InSAR. (Zhen Liu is talking tomorrow on InSAR time series analysis.)
On the thermal efficiency of power cycles in finite time thermodynamics
NASA Astrophysics Data System (ADS)
Momeni, Farhang; Morad, Mohammad Reza; Mahmoudi, Ashkan
2016-09-01
The Carnot, Diesel, Otto, and Brayton power cycles are reconsidered endoreversibly in finite time thermodynamics (FTT). In particular, the thermal efficiency of these standard power cycles is compared with the well-known results of classical thermodynamics. The present analysis based on FTT modelling shows that a reduction in both the maximum and minimum temperatures of the cycle causes the thermal efficiency to increase. This is antithetical to the existing trend in the classical references. Under the assumption of endoreversibility, the relation between the efficiencies also changes to η_Carnot > η_Brayton > η_Diesel > η_Otto, which is again very different from the corresponding classical results. The present results contribute to a better understanding of the important role of irreversibility in heat engines in classical thermodynamics.
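For context, the best-known endoreversible FTT result is the Curzon-Ahlborn efficiency at maximum power, which always lies below the classical Carnot limit. A quick numerical check using the standard textbook formulas (not the paper's specific model):

```python
# Classical Carnot efficiency versus the endoreversible Curzon-Ahlborn
# efficiency at maximum power output.
import math

def eta_carnot(t_cold, t_hot):
    """Classical Carnot limit: 1 - Tc/Th."""
    return 1.0 - t_cold / t_hot

def eta_curzon_ahlborn(t_cold, t_hot):
    """Endoreversible efficiency at maximum power: 1 - sqrt(Tc/Th)."""
    return 1.0 - math.sqrt(t_cold / t_hot)

# Example: T_cold = 300 K, T_hot = 1200 K gives 0.75 (Carnot)
# versus 0.5 (Curzon-Ahlborn).
```

The gap between the two values is one concrete way finite-time constraints reshape the classical efficiency ordering discussed in the abstract.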
NASA Astrophysics Data System (ADS)
ten Veldhuis, Marie-Claire; Schleiss, Marc
2017-04-01
Urban catchments are typically characterised by a more flashy nature of the hydrological response compared to natural catchments. Predicting flow changes associated with urbanisation is not straightforward, as they are influenced by interactions between impervious cover, basin size, drainage connectivity and stormwater management infrastructure. In this study, we present an alternative approach to statistical analysis of hydrological response variability and basin flashiness, based on the distribution of inter-amount times. We analyse inter-amount time distributions of high-resolution streamflow time series for 17 (semi-)urbanised basins in North Carolina, USA, ranging from 13 to 238 km2 in size. We show that in the inter-amount-time framework, sampling frequency is tuned to the local variability of the flow pattern, resulting in a different representation and weighting of high and low flow periods in the statistical distribution. This leads to important differences in the way the distribution quantiles, mean, coefficient of variation and skewness vary across scales and results in lower mean intermittency and improved scaling. Moreover, we show that inter-amount-time distributions can be used to detect regulation effects on flow patterns, identify critical sampling scales and characterise flashiness of hydrological response. The possibility to use both the classical approach and the inter-amount-time framework to identify minimum observable scales and analyse flow data opens up interesting areas for future research.
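The inter-amount-time idea can be sketched as follows (an assumed reading of the method; the increment `delta` and the linear interpolation of the cumulative curve are illustrative choices): instead of sampling the flow at fixed time intervals, record the time needed to accumulate each successive fixed amount.

```python
# Inter-amount times: sample a flow record by *amount* rather than by
# time. Variable flow yields a distribution of these times; flashy
# basins produce highly skewed distributions.
import numpy as np

def inter_amount_times(flow, dt, delta):
    """Times required to accumulate each successive amount `delta`.

    flow  : non-negative flow rates at uniform spacing dt
    delta : amount increment defining the sampling scale
    """
    cum = np.concatenate(([0.0], np.cumsum(flow) * dt))  # cumulative amount
    t = np.arange(len(cum)) * dt
    levels = np.arange(delta, cum[-1], delta)            # amount thresholds
    # invert the monotone cumulative curve by linear interpolation
    crossing_times = np.interp(levels, cum, t)
    return np.diff(np.concatenate(([0.0], crossing_times)))
```

A constant flow q gives identical inter-amount times delta/q, while intermittent or flashy flow concentrates many short inter-amount times in wet periods and a few very long ones in dry periods, which is what reshapes the distribution statistics across scales.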
Effects of guided breath exercise on complex behaviour of heart rate dynamics.
Tavares, Bruna S; de Paula Vidigal, Giovanna; Garner, David M; Raimundo, Rodrigo D; de Abreu, Luiz Carlos; Valenti, Vitor E
2017-11-01
Cardiac autonomic regulation is influenced by changes in respiratory rate, as has been demonstrated by linear analysis of heart rate variability (HRV). Conversely, the complex behaviour of HRV during this physiological state is not well defined. In this sense, the Higuchi Fractal Dimension (HFD) can be applied directly to the time series: it estimates the fractal dimension of discrete time sequences and is simpler and faster than correlation dimension and many other classical measures derived from chaos theory. We investigated the chaotic behaviour of heart rate dynamics during guided breath exercises in 21 healthy male volunteers aged between 18 and 30 years. HRV was analysed 10 min before and 10 min during guided breath exercises, in the time and frequency domains for linear analysis and through HFD for non-linear analysis. Linear analysis indicated that SDNN, pNN50, RMSSD, LF, HF and LF/HF increased during guided breath exercises. HFD analysis illustrated that the fractal dimension, computed for K max values between 20 and 120, was enhanced during guided breath exercises. Guided breath exercises acutely increased the chaotic behaviour of HRV as measured by HFD. © 2016 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.
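Higuchi's algorithm itself is short. A sketch of the standard method (following the usual normalization from Higuchi's definition; the least-squares fit over all lags up to `k_max` is the conventional choice):

```python
# Higuchi's fractal dimension, computed directly from a time series:
# for each lag k, average the normalized curve length L(k) over the k
# possible offsets m; the slope of log L(k) versus log(1/k) estimates
# the fractal dimension (1 for a smooth trend, toward 2 for noise).
import math

def higuchi_fd(x, k_max):
    n = len(x)
    log_k, log_l = [], []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):
            n_steps = (n - 1 - m) // k
            if n_steps < 1:
                continue
            dist = sum(abs(x[m + i * k] - x[m + (i - 1) * k])
                       for i in range(1, n_steps + 1))
            norm = (n - 1) / (n_steps * k)   # Higuchi's normalization
            lengths.append(dist * norm / k)
        log_k.append(math.log(1.0 / k))
        log_l.append(math.log(sum(lengths) / len(lengths)))
    # least-squares slope of log L(k) on log(1/k)
    mean_x = sum(log_k) / len(log_k)
    mean_y = sum(log_l) / len(log_l)
    num = sum((a - mean_x) * (b - mean_y) for a, b in zip(log_k, log_l))
    den = sum((a - mean_x) ** 2 for a in log_k)
    return num / den
```

A purely linear trend has curve length L(k) proportional to 1/k at every lag, so the estimated dimension is exactly 1; RR-interval series fall between 1 and 2.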
Cache and energy efficient algorithms for Nussinov's RNA Folding.
Zhao, Chunchun; Sahni, Sartaj
2017-12-06
An RNA folding/RNA secondary structure prediction algorithm determines the non-nested/pseudoknot-free structure by maximizing the number of complementary base pairs and minimizing the energy. Several implementations of Nussinov's classical RNA folding algorithm have been proposed. Our focus is to obtain run-time and energy efficiency by reducing the number of cache misses. Three cache-efficient algorithms, ByRow, ByRowSegment and ByBox, for Nussinov's RNA folding are developed. Using a simple LRU cache model, we show that the Classical algorithm of Nussinov has the highest number of cache misses, followed by the algorithms Transpose (Li et al.), ByRow, ByRowSegment, and ByBox (in this order). Extensive experiments conducted on four computational platforms (Xeon E5, AMD Athlon 64 X2, Intel I7, and PowerPC A2) using two programming languages (C and Java) show that our cache-efficient algorithms are also efficient in terms of run time and energy. Our benchmarking shows that, depending on the computational platform and programming language, either ByRow or ByBox gives the best run-time and energy performance. The C versions of these algorithms reduce run time by as much as 97.2% and energy consumption by as much as 88.8% relative to Classical, and by as much as 56.3% and 57.8% relative to Transpose. The Java versions reduce run time by as much as 98.3% relative to Classical and by as much as 75.2% relative to Transpose. Transpose achieves run-time and energy efficiency at the expense of memory, as it takes twice the memory required by Classical. The memory required by ByRow, ByRowSegment, and ByBox is the same as that of Classical. As a result, using the same amount of memory, the algorithms proposed by us can solve problems up to 40% larger than those solvable by Transpose.
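For reference, the underlying "Classical" dynamic program looks like this (a sketch scoring by pair count only, with Watson-Crick plus G-U wobble pairs and no minimum loop length assumed; the cache-efficient variants in the paper reorder this same O(n^3) computation):

```python
# Nussinov dynamic program: maximize the number of complementary,
# non-crossing base pairs in an RNA sequence.
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"),
         ("C", "G"), ("G", "U"), ("U", "G")}

def nussinov_max_pairs(seq):
    n = len(seq)
    c = [[0] * n for _ in range(n)]        # c[i][j]: best for seq[i..j]
    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            best = c[i + 1][j]                         # i left unpaired
            if (seq[i], seq[j]) in PAIRS:
                best = max(best, c[i + 1][j - 1] + 1)  # i pairs with j
            for k in range(i + 1, j):                  # i pairs with k < j
                if (seq[i], seq[k]) in PAIRS:
                    best = max(best,
                               c[i + 1][k - 1] + 1 + c[k + 1][j])
            c[i][j] = best
    return c[0][n - 1]
```

The triangular traversal order over `(span, i)` is exactly what the ByRow/ByBox variants restructure: the computation is unchanged, only the order in which cells of `c` are visited (and hence the cache behavior) differs.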
Lara, Juan A; Lizcano, David; Pérez, Aurora; Valente, Juan P
2014-10-01
There are now domains where information is recorded over a period of time, leading to sequences of data known as time series. In many domains, like medicine, time series analysis requires focusing on certain regions of interest, known as events, rather than analyzing the whole time series. In this paper, we propose a framework for knowledge discovery in both one-dimensional and multidimensional time series containing events. We show how our approach can be used to classify medical time series by means of a process that identifies events in time series, generates time series reference models of representative events, and compares two time series by analyzing the events they have in common. We have applied our framework to time series generated in the areas of electroencephalography (EEG) and stabilometry. Framework performance was evaluated in terms of classification accuracy, and the results confirmed that the proposed schema has potential for classifying EEG and stabilometric signals. The proposed framework is useful for discovering knowledge from medical time series containing events, such as stabilometric and electroencephalographic time series. These results would be equally applicable to other medical domains generating iconographic time series, such as electrocardiography (ECG). Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Blackburn, L. B.
1986-01-01
The thermal response and aging behavior of three 2XXX-series powder metallurgy aluminum alloys have been investigated, using Rockwell B hardness measurements, optical and electron microscopy, and energy-dispersive chemical analysis, in order to correlate microstructure with measured mechanical properties. Results of the thermal response study indicated that an increased solution heat treatment temperature was effective in resolutionizing large primary constituents in the alloy bearing more copper but had no apparent effect on the microconstituents of the other two. Aging studies conducted at room temperature and at 120, 150, and 180 C for times ranging up to 60 days indicated that classic aging response curves, as determined by hardness measurements, occurred at lower aging temperatures than were previously studied for these alloys, as well as at lower aging temperatures than are commonly used for ingot metallurgy alloys of similar compositions. Microstructural examination and fracture surface analysis of peak-aged tension specimens indicated that the highest tensile strengths are associated with extremely fine and homogeneous distributions of theta-prime or S-prime phases combined with low levels of both large constituent particles and dispersoids. Examination of the results suggests that refined solution heat treatments and lower aging temperatures may be necessary to achieve optimum mechanical properties for these 2XXX-series alloys.
Artetxe, Beñat; Reinoso, Santiago; San Felices, Leire; Lezama, Luis; Gutiérrez-Zorrilla, Juan M; Vicent, Cristian; Haso, Fadi; Liu, Tianbo
2016-03-18
A series of nine [Sb7W36O133Ln3M2(OAc)(H2O)8](17-) heterometallic anions (Ln3M2; Ln=La-Gd, M=Co; Ln=Ce, M=Ni and Zn) have been obtained by reacting 3d-metal disubstituted Krebs-type tungstoantimonates(III) with early lanthanides. Their unique tetrameric structure contains a novel {MW9O33} capping unit formed by a planar {MW6O24} fragment to which three {WO2} groups are condensed to form a tungstate skeleton identical to that of a hypothetical trilacunary derivative of the ɛ-Keggin cluster. It is shown, for the first time, that classical Anderson-Evans {MW6O24} anions can act as building blocks to construct purely inorganic large frameworks. Unprecedented reactivity in the outer ring of these disk-shaped species is also revealed. The Ln3M2 anions possess chirality owing to a {Sb4O4} cluster being encapsulated in left- or right-handed orientations. Their ability to self-associate in blackberry-type vesicles in solution has been assessed for the Ce3Co2 derivative. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Barbier, O; Anract, P; Pluot, E; Larouserie, F; Sailhan, F; Babinet, A; Tomeno, B
2010-12-01
Extra-abdominal desmoid fibromatosis (EADF) is a benign tumoral condition, classically managed by more or less radical and sometimes mutilating excision. This treatment strategy is associated with a recurrence rate of nearly 50% according to various reports. EADF may show spontaneous stabilization over time. A retrospective series of 26 cases of EADF managed by simple observation was studied to assess spontaneous favorable evolution and identify possible factors impacting evolution. Eleven cases were of primary EADF with no treatment or surgery, and 15 of recurrence after surgery with no adjuvant treatment. MRI was the reference examination during follow-up. Twenty-four cases showed stabilization at a median 14 months; there were no cases of renewed evolution after stabilization. One primary tumor showed spontaneous regression, and one recurrence still showed evolution at end of follow-up (23 months). The sole factor impacting potential for evolution was prior surgery. No radiologic or pathologic criteria of evolution emerged from analysis. The present series, one of the largest dedicated to EADF managed by observation, confirmed recent literature findings: a conservative "wait-and-see" attitude is reasonable and should be considered when large-scale resection would entail significant functional or esthetic impairment. Level IV, retrospective study. Copyright © 2010 Elsevier Masson SAS. All rights reserved.
Diagrammar in classical scalar field theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cattaruzza, E., E-mail: Enrico.Cattaruzza@gmail.com; Gozzi, E., E-mail: gozzi@ts.infn.it; INFN, Sezione di Trieste
2011-09-15
In this paper we analyze perturbatively a gφ⁴ classical field theory with and without temperature. In order to do that, we make use of a path-integral approach developed some time ago for classical theories. It turns out that the diagrams appearing at the classical level are many more than at the quantum level, due to the presence of extra auxiliary fields in the classical formalism. We shall show that a universal supersymmetry present in the classical path integral mentioned above is responsible for the cancellation of various diagrams. The same supersymmetry allows the introduction of super-fields and super-diagrams which considerably simplify the calculations and make the classical perturbative calculations formally almost 'identical' to the quantum ones. Using the super-diagram technique, we develop the classical perturbation theory up to third order. We conclude the paper with a perturbative check of the fluctuation-dissipation theorem. Highlights: We provide the Feynman diagrams of perturbation theory for a classical field theory. We give a super-formalism which links the quantum diagrams to the classical ones. We check perturbatively the fluctuation-dissipation theorem.
Probabilities for time-dependent properties in classical and quantum mechanics
NASA Astrophysics Data System (ADS)
Losada, Marcelo; Vanni, Leonardo; Laura, Roberto
2013-05-01
We present a formalism which allows one to define probabilities for expressions that involve properties at different times for classical and quantum systems and we study its lattice structure. The formalism is based on the notion of time translation of properties. In the quantum case, the properties involved should satisfy compatibility conditions in order to obtain well-defined probabilities. The formalism is applied to describe the double-slit experiment.
Attitudes of Pediatric Nurse Practitioners Towards Parental Use of Corporal Punishment
1994-05-01
1974; Baumrind, 1967; Sears, Maccoby, & Levin, 1957). Sears, Maccoby, and Levin's (1957) classic study examining the child-rearing practices of 379..." (Sears, Maccoby, & Levin, 1957, p. 484). In a series of studies, Baumrind (1967) found that punishment, even corporal punishment, was an effective...1993; Lamb, Ketterlenus, & Fracasso, 1992; Baumrind, 1967). Baumrind's research assessed patterns of parental behavior using interviews, standardized
ERIC Educational Resources Information Center
Brod, Richard I.
This study, the tenth in a series, presents college language registration and student contact hour data for all modern and classical language programs in the United States. The body of the report consists of 24 tables summarizing the data, and a directory of the 2,353 institutions that reported registrations in one or more foreign languages.…
ERIC Educational Resources Information Center
Kouzes, James M.; Posner, Barry Z.
This book is written to assist people to lead others in getting extraordinary things done. The basic message is that the best leaders care. This is not about being soft or a cheerleader. In chapter 1, the research to support this point of view is examined. In chapter 2, a classic case study to illustrate the seven essentials of encouraging the…
The Institute for the Study of Non–Model Organisms and other fantasies
Sullivan, William
2015-01-01
In his classic novel Invisible Cities, Italo Calvino describes a series of fantastic imagined cities that fulfill core human needs that remain unmet in ordinary cities. In light of the recent founding of a number of high-profile biomedical institutes, Calvino's descriptions encourage us to consider the unmet needs of the biomedical community and imagine unorthodox institutes designed to fulfill these needs. PMID:25633358
State Defense Force Monograph Series. Winter 2006, Medical Support Teams
2006-01-01
Research Organization, 1981). By 1955, the escalating Cold War saw the formal revival of the classic all-volunteer state militia. But growth was...officer. It was at this point that the MDDF MRC project action officer petitioned the OSG for the formal audit that was required for official MRC...increased need (December 2006). Shortages of primary caregivers, acute care beds, ventilators, vaccines and antiviral medicines, coupled with the
Reliability of a Measure of Institutional Discrimination against Minorities
1979-12-01
samples are presented. The first is based upon classical statistical theory and the second derives from a series of computer-generated Monte Carlo...Institutional racism and sexism. Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1978. Hays, W. L. and Winkler, R. L. Statistics: probability, inference...statistical measure of the e of institutional discrimination are discussed. Two methods of dealing with the problem of reliability of the measure in small
Baumann, Marcus; Baxendale, Ian R; Kuratli, Christoph; Ley, Steven V; Martin, Rainer E; Schneider, Josef
2011-07-11
A combination of flow and batch chemistries has been successfully applied to the assembly of a series of trisubstituted drug-like pyrrolidines. This study demonstrates the efficient preparation of a focused library of these pharmaceutically important structures using microreactor technologies, as well as classical parallel synthesis techniques, and thus exemplifies the impact of integrating innovative enabling tools within the drug discovery process.
On the Solution of Elliptic Partial Differential Equations on Regions with Corners
2015-07-09
In this report we investigate the solution of boundary value problems on polygonal domains for elliptic partial differential equations. We observe...that when the problems are formulated as the boundary integral equations of classical potential theory, the solutions are representable by series of...efficient numerical algorithms. The results are illustrated by a number of numerical examples.
Reyes-Arellano, Alicia; Bucio-Cano, Alejandro; Montenegro-Sustaita, Mabel; Curiel-Quesada, Everardo; Salgado-Zamora, Héctor
2012-01-01
A series of selected 2-substituted imidazolines were synthesized in moderate to excellent yields by a modification of protocols reported in the literature. They were evaluated as potential non-classical bioisosteres of AHL with the aim of counteracting bacterial pathogenicity. Imidazolines 18a, 18e and 18f at various concentrations reduced the violacein production by Chromobacterium violaceum, suggesting an anti-quorum sensing profile against Gram-negative bacteria. Imidazoline 18b did not affect the production of violacein, but had a bacteriostatic effect at 100 μM and a bactericidal effect at 1 mM. Imidazoline 18a bearing a hexyl phenoxy moiety was the most active compound of the series, rendering a 72% inhibitory effect of quorum sensing at 100 μM. Imidazoline 18f bearing a phenyl nonamide substituent presented an inhibitory effect on quorum sensing at a very low concentration (1 nM), with a reduction percentage of 28%. This compound showed an irregular performance, decreasing inhibition at concentrations higher than 10 μM, until reaching 100 μM, at which concentration it increased the inhibitory effect with a 49% reduction percentage. When evaluated on Serratia marcescens, compound 18f inhibited the production of prodigiosin by 40% at 100 μM. PMID:22408391
Semi-quantum Dialogue Based on Single Photons
NASA Astrophysics Data System (ADS)
Ye, Tian-Yu; Ye, Chong-Qiang
2018-02-01
In this paper, we propose two semi-quantum dialogue (SQD) protocols by using single photons as the quantum carriers, where one requires the classical party to possess the measurement capability and the other does not have this requirement. The security toward active attacks from an outside Eve in the first SQD protocol is guaranteed by the complete robustness of present semi-quantum key distribution (SQKD) protocols, the classical one-time pad encryption, the classical party's randomization operation and the decoy photon technology. The information leakage problem of the first SQD protocol is overcome by the classical party's classical basis measurements on the single photons carrying messages, which makes him share their initial states with the quantum party. The security toward active attacks from Eve in the second SQD protocol is guaranteed by the classical party's randomization operation, the complete robustness of present SQKD protocol and the classical one-time pad encryption. The information leakage problem of the second SQD protocol is overcome by the quantum party's classical basis measurements on each two adjacent single photons carrying messages, which makes her share their initial states with the classical party. Compared with the traditional information leakage resistant QD protocols, the advantage of the proposed SQD protocols lies in that they only require one party to have quantum capabilities. Compared with the existing SQD protocol, the advantage of the proposed SQD protocols lies in that they only employ single photons rather than two-photon entangled states as the quantum carriers. The proposed SQD protocols can be implemented with present quantum technologies.
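One ingredient of both protocols is the classical one-time pad. As a minimal illustration of that classical building block only (not of the quantum protocol itself), an XOR-based one-time pad over byte strings can be sketched as follows; the function name is illustrative:

```python
import secrets

def otp_encrypt(message: bytes, key: bytes) -> bytes:
    # One-time pad: XOR each message byte with a key byte of equal length.
    assert len(key) == len(message), "key must be exactly as long as the message"
    return bytes(m ^ k for m, k in zip(message, key))

# Decryption is the same XOR, since (m ^ k) ^ k == m.
msg = b"dialogue"
key = secrets.token_bytes(len(msg))   # fresh uniformly random key, used once
cipher = otp_encrypt(msg, key)
recovered = otp_encrypt(cipher, key)
```

The information-theoretic security of this primitive holds only if the key is uniformly random, as long as the message, and never reused; in the SQD protocols the shared key material is established via the semi-quantum procedure.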
NASA Astrophysics Data System (ADS)
Zhang, Zhizheng; Wang, Tianze
2008-07-01
In this paper, we first give several operator identities involving the bivariate Rogers-Szegö polynomials. By applying the technique of parameter augmentation to the multiple q-binomial theorems given by Milne [S.C. Milne, Balanced summation theorems for U(n) basic hypergeometric series, Adv. Math. 131 (1997) 93-187], we obtain several new multiple q-series identities involving the bivariate Rogers-Szegö polynomials. These include multiple extensions of Mehler's formula and Rogers's formula. Our U(n+1) generalizations are quite natural as they are also a direct and immediate consequence of their (often classical) known one-variable cases and Milne's fundamental theorem for An or U(n+1) basic hypergeometric series in Theorem 1.49 of [S.C. Milne, An elementary proof of the Macdonald identities for , Adv. Math. 57 (1985) 34-70], as rewritten in Lemma 7.3 on p. 163 of [S.C. Milne, Balanced summation theorems for U(n) basic hypergeometric series, Adv. Math. 131 (1997) 93-187] or Corollary 4.4 on pp. 768-769 of [S.C. Milne, M. Schlosser, A new An extension of Ramanujan's summation with applications to multilateral An series, Rocky Mountain J. Math. 32 (2002) 759-792].
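For reference, the classical one-variable Rogers-Szegö polynomial underlying these identities is defined through Gaussian (q-binomial) coefficients; the bivariate polynomials of the paper generalize this with a second variable:

```latex
h_n(x \mid q) \;=\; \sum_{k=0}^{n} \begin{bmatrix} n \\ k \end{bmatrix}_q x^k,
\qquad
\begin{bmatrix} n \\ k \end{bmatrix}_q \;=\; \frac{(q;q)_n}{(q;q)_k\,(q;q)_{n-k}},
```

where $(a;q)_n = \prod_{j=0}^{n-1}(1-aq^j)$ is the q-Pochhammer symbol.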
Improving long time behavior of Poisson bracket mapping equation: A non-Hamiltonian approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Hyun Woo; Rhee, Young Min, E-mail: ymrhee@postech.ac.kr
2014-05-14
Understanding nonadiabatic dynamics in complex systems is a challenging subject. A series of semiclassical approaches have been proposed to tackle the problem in various settings. The Poisson bracket mapping equation (PBME) utilizes a partial Wigner transform and a mapping representation for its formulation, and has been developed to describe nonadiabatic processes in an efficient manner. Operationally, it is expressed as a set of Hamilton's equations of motion, similar to more conventional classical molecular dynamics. However, this original Hamiltonian PBME sometimes suffers from a large deviation in accuracy, especially in the long time limit. Here, we propose a non-Hamiltonian variant of PBME to improve its behavior especially in that limit. As a benchmark, we simulate spin-boson and photosynthetic model systems and find that it consistently outperforms the original PBME and its Ehrenfest style variant. We explain the source of this improvement by decomposing the components of the mapping Hamiltonian and by assessing the energy flow between the system and the bath. We discuss strengths and weaknesses of our scheme with a viewpoint of offering future prospects.
Hilbert-Huang transform analysis of long-term solar magnetic activity
NASA Astrophysics Data System (ADS)
Deng, Linhua
2018-04-01
Astronomical time series analysis is one of the most active and important problems in the field, and is a suitable way to probe the underlying dynamical behavior of the nonlinear systems considered. The quasi-periodic analysis of solar magnetic activity has been carried out by various authors during the past fifty years. In this work, the novel Hilbert-Huang transform approach is applied to investigate the yearly numbers of polar faculae in the time interval from 1705 to 1999. The detected periodicities can be allocated to three components: the first is the short-term variations with periods smaller than 11 years, the second is the mid-term variations with classical periods from 11 years to 50 years, and the last is the long-term variations with periods larger than 50 years. The analysis results improve our knowledge of the quasi-periodic variations of solar magnetic activity and could provide valuable constraints for solar dynamo theory. Furthermore, our analysis results could be useful for understanding the long-term variations of solar magnetic activity, providing crucial information to describe and forecast solar magnetic activity indicators.
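The Hilbert-Huang transform combines empirical mode decomposition with Hilbert spectral analysis. The sketch below illustrates only the second step, computing instantaneous amplitude and phase from the analytic signal via an FFT (a standard construction, assuming NumPy); a full HHT would first decompose the record into intrinsic mode functions:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT: zero negative frequencies, double positive ones."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[1:(n + 1) // 2] = 2
    return np.fft.ifft(X * h)

# Toy "solar cycle" proxy: 30 full cycles of an 11-year-like period, yearly sampling.
t = np.arange(330)
x = np.sin(2 * np.pi * t / 11.0)
z = analytic_signal(x)
amplitude = np.abs(z)                 # instantaneous amplitude (constant here)
inst_phase = np.unwrap(np.angle(z))   # instantaneous phase; slope gives frequency
```

For a pure sinusoid sampled over an integer number of periods, the recovered amplitude is constant and the phase increases linearly at the sinusoid's angular frequency.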
NASA Astrophysics Data System (ADS)
Mangiarotti, S.; Muddu, S.; Sharma, A. K.; Corgne, S.; Ruiz, L.; Hubert-Moy, L.
2015-12-01
Groundwater is one of the main water reservoirs used for irrigation in regions of scarce water resources. For this reason, crop irrigation is expected to have a direct influence on this reservoir. To understand the time evolution of the groundwater table and its storage changes, it is important to delineate irrigated crops, whose evaporative demand is relatively higher. Such delineation may be performed based on classical classification approaches using optical remote sensing. However, it remains a difficult problem in regions where plots do not exceed a few hectares and exhibit a very heterogeneous pattern with multiple crops. This difficulty is emphasized in South India where two or three months of cloudy conditions during the monsoon period can hide crop growth during the year. An alternative approach is introduced here that takes advantage of such scarce signal. Ten different crops are considered in the present study. A bank of crop models is first established based on the global modeling technique [1]. These models are then tested using original time series (from which models were obtained) in order to evaluate the information that can be deduced from these models in an inverse approach. The approach is then tested on an independent data set and is finally applied to a large ensemble of 10,000 time series of plot data extracted from the Berambadi catchment (AMBHAS site) part of the Kabini River basin CZO, South India. Results show that despite the important two-month gap in satellite observations in the visible band, interpolated vegetation index remains an interesting indicator for identification of crops in South India. [1] S. Mangiarotti, R. Coudret, L. Drapeau, & L. Jarlan, Polynomial search and global modeling: Two algorithms for modeling chaos, Phys. Rev. E, 86(4), 046205 (2012).
Increase in dust storm related PM10 concentrations: A time series analysis of 2001-2015.
Krasnov, Helena; Katra, Itzhak; Friger, Michael
2016-06-01
Over the last decades, changes in dust storms characteristics have been observed in different parts of the world. The changing frequency of dust storms in the southeastern Mediterranean has led to growing concern regarding atmospheric PM10 levels. A classic time series additive model was used in order to describe and evaluate the changes in PM10 concentrations during dust storm days in different cities in Israel, which is located at the margins of the global dust belt. The analysis revealed variations in the number of dust events and PM10 concentrations during 2001-2015. A significant increase in PM10 concentrations was identified since 2009 in the arid city of Beer Sheva, southern Israel. Average PM10 concentrations during dust days before 2009 were 406, 312, and 364 μg/m³ (median 337, 269, 302) for Beer Sheva, Rehovot (central Israel) and Modi'in (eastern Israel), respectively. After 2009 the average concentrations in these cities during dust storms were 536, 466, and 428 μg/m³ (median 382, 335, 338), respectively. Regression analysis revealed associations between PM10 variations and seasonality, wind speed, as well as relative humidity. The trends and periodicity are stronger in the southern part of Israel, where higher PM10 concentrations are found. Since 2009 dust events became more extreme with much higher daily and hourly levels. The findings demonstrate that in the arid area variations of dust storms can be quantified more easily through PM10 levels over a relatively short time scale of several years. Copyright © 2015 Elsevier Ltd. All rights reserved.
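The classic additive model referenced here decomposes a series into trend, seasonal, and residual components. A minimal sketch of that decomposition (centered moving-average trend, periodic-mean seasonal; the function name and synthetic data are illustrative, not the study's code):

```python
import numpy as np

def additive_decompose(x, period):
    """Classic additive decomposition: x = trend + seasonal + residual.
    Trend: centered moving average; seasonal: periodic means of the detrended series."""
    n = len(x)
    k = period // 2
    trend = np.full(n, np.nan)
    for i in range(k, n - k):
        if period % 2 == 0:               # even period: half-weight the window ends
            w = np.ones(period + 1)
            w[0] = w[-1] = 0.5
            trend[i] = np.dot(w, x[i - k:i + k + 1]) / period
        else:
            trend[i] = x[i - k:i + k + 1].mean()
    detrended = x - trend
    seasonal = np.array([np.nanmean(detrended[j::period]) for j in range(period)])
    seasonal -= seasonal.mean()           # seasonal effects sum to zero
    seasonal_full = np.tile(seasonal, n // period + 1)[:n]
    residual = x - trend - seasonal_full
    return trend, seasonal_full, residual

# Synthetic monthly-like series: linear trend plus a known seasonal cycle.
t = np.arange(120)
seas_true = np.sin(2 * np.pi * np.arange(12) / 12)
x = 0.1 * t + np.tile(seas_true, 10)
trend, seasonal, resid = additive_decompose(x, 12)
```

With a purely linear trend and an exact periodic component, the interior residual is numerically zero, which is a useful sanity check before applying the model to noisy PM10 data.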
NASA Astrophysics Data System (ADS)
Sokolov, Valentin V.; Zhirov, Oleg V.; Kharkov, Yaroslav A.
The extraordinary complexity of classical trajectories of typical nonlinear systems that manifest stochastic behavior is intimately connected with exponential sensitivity to small variations of initial conditions and/or weak external perturbations. In rigorous terms, such classical systems are characterized by positive algorithmic complexity described by the Lyapunov exponent or, alternatively, by the Kolmogorov-Sinai entropy. This implies that, although formally any trajectory of a perfectly isolated (closed) system, however complex, is unique and differentiable for given initial conditions and the motion is perfectly reversible, it is impractical to treat this sort of classical system as closed. Inevitably, an arbitrarily weak influence of an environment crucially impacts the dynamics. This influence, which can be considered as a noise, rapidly effaces the memory of initial conditions and turns the motion into an irreversible random process. In striking contrast, the quantum mechanics of classically chaotic systems exhibits much weaker sensitivity and strong memory of the initial state. Qualitatively, this crucial difference could be expected in view of the much simpler structure of quantum states as compared to the extraordinary complexity of random and unpredictable classical trajectories. However, the very notion of trajectories is absent in quantum mechanics, so the concept of exponential instability seems to be irrelevant in this case. The problem of a quantitative measure of complexity of a quantum state of motion, which is a very important and nontrivial issue of the theory of quantum dynamical chaos, is our concern here. With such a measure in hand, we quantitatively analyze the stability and reversibility of quantum dynamics in the presence of external noise. To solve this problem we point out that individual classical trajectories are of minor interest if the motion is chaotic.
Properties of all of them are alike in this case, and it is rather the behavior of their manifolds that carries really valuable information. Therefore the phase-space methods and, correspondingly, the Liouville form of classical mechanics become the most adequate. It is very important that, in contrast to classical trajectories, the classical phase-space distribution and the Liouville equation have direct quantum analogs. Hence, the analogy and difference of classical and quantum dynamics can be traced by comparing the classical (W(c)(I,θ;t)) and quantum (Wigner function W(I,θ;t)) phase-space distributions, both expressed in identical phase-space variables but ruled by different(!) linear equations. The paramount property of classical dynamical chaos is the exponentially fast structuring of the system's phase space on finer and finer scales. On the contrary, the degree of structuring of the corresponding Wigner function is restricted by the quantization of the phase space. This makes the Wigner function coarser and relatively "simple" as compared to its classical counterpart. Fourier analysis affords quite a suitable ground for analyzing the complexity of a phase-space distribution, one that is equally valid in the classical and quantum cases. We demonstrate that the typical number of Fourier harmonics is indeed a relevant measure of complexity of states of motion in both the classical and quantum cases. This allowed us to investigate in detail and introduce a quantitative measure of sensitivity to an external noisy environment and to formulate the conditions under which the quantum motion remains reversible.
It turns out that while the mean number of harmonics of the classical phase-space distribution of a non-integrable system grows with time exponentially during the whole time of the motion, the time of exponential growth of this number in the case of the corresponding quantum Wigner function is restricted to the Ehrenfest interval 0 < t < tE, just the interval within which the Wigner function still satisfies the classical Liouville equation. We showed that the number of harmonics increases beyond this interval only algebraically. This fact gains a crucial importance when the Ehrenfest time is so short that the exponential regime has no time to show up. Under this condition the quantum motion turns out to be quite stable and reversible.
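The idea of counting Fourier harmonics as a complexity measure can be illustrated on a one-dimensional periodic distribution. The sketch below uses the participation ratio of the Fourier power spectrum as a simple proxy for the "typical number of harmonics"; this is an illustrative stand-in, not the authors' exact definition:

```python
import numpy as np

def effective_harmonics(f):
    """Effective number of Fourier harmonics of a periodic distribution f(theta),
    measured as the participation ratio of its (mean-removed) power spectrum."""
    power = np.abs(np.fft.rfft(f)) ** 2
    power[0] = 0.0                      # drop the constant (mean) component
    p = power / power.sum()
    return 1.0 / np.sum(p ** 2)         # participation ratio: 1 for a single harmonic

theta = np.linspace(0, 2 * np.pi, 512, endpoint=False)
smooth = 1 + np.cos(theta)                                    # one harmonic
structured = 1 + sum(np.cos(m * theta) for m in range(1, 21)) # 20 equal harmonics
```

A smooth distribution yields an effective harmonic count of 1, while a finely structured one with 20 equal-weight harmonics yields 20, mirroring the contrast between a coarse Wigner function and a finely structured classical distribution.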
Detection of a sudden change of the field time series based on the Lorenz system.
Da, ChaoJiu; Li, Fang; Shen, BingLu; Yan, PengCheng; Song, Jian; Ma, DeShan
2017-01-01
We conducted an exploratory study of the detection of a sudden change of the field time series based on the numerical solution of the Lorenz system. First, the time when the Lorenz path jumped between the regions on the left and right of the equilibrium point of the Lorenz system was quantitatively marked, and the sudden change time of the Lorenz system was obtained. Second, the numerical solution of the Lorenz system was regarded as a vector; thus, this solution could be considered as a vector time series. We transformed the vector time series into a scalar time series using the vector inner product, considering the geometric and topological features of the Lorenz system path. Third, the sudden change of the resulting time series was detected using the sliding t-test method. Comparing the test results with the quantitatively marked time indicated that the method could detect every sudden change of the Lorenz path; thus, the method is effective. Finally, we used the method to detect the sudden change of the pressure field time series and temperature field time series, and obtained good results for both series, which indicates that the method can be applied to high-dimensional vector time series. Mathematically, there is no essential difference between the field time series and vector time series; thus, we provide a new method for the detection of the sudden change of the field time series.
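The sliding t-test used in the final detection step compares the means of the points just before and just after each candidate change point. A minimal sketch of that scalar detection step (the reduction of the vector series via inner products is omitted; data and window size are illustrative):

```python
import numpy as np

def sliding_t_test(x, window):
    """Two-sample t statistic between the `window` points before and after
    each candidate index; a large |t| flags a sudden change in the mean."""
    n = len(x)
    t_stat = np.full(n, np.nan)
    for i in range(window, n - window):
        a, b = x[i - window:i], x[i:i + window]
        s = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / window)
        t_stat[i] = (b.mean() - a.mean()) / s if s > 0 else np.inf
    return t_stat

# Toy series with an abrupt mean shift at index 100:
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100)])
tstat = sliding_t_test(x, window=20)
change = int(np.nanargmax(np.abs(tstat)))
```

The statistic peaks where the two windows fall entirely on opposite sides of the shift, locating the change point.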
Alignment of time-resolved data from high throughput experiments.
Abidi, Nada; Franke, Raimo; Findeisen, Peter; Klawonn, Frank
2016-12-01
To better understand the dynamics of the underlying processes in cells, it is necessary to take measurements over a time course. Modern high-throughput technologies are often used for this purpose to measure the behavior of cell products like metabolites, peptides, proteins, [Formula: see text]RNA or mRNA at different points in time. Compared to classical time series, the number of time points is usually very limited and the measurements are taken at irregular time intervals. The main reasons for this are the costs of the experiments and the fact that the dynamic behavior usually shows a strong reaction and fast changes shortly after a stimulus and then slowly converges to a certain stable state. Another reason might simply be missing values. It is common to repeat the experiments and to have replicates in order to carry out a more reliable analysis. The ideal assumptions that the initial stimulus really started at exactly the same time for all replicates and that the replicates are perfectly synchronized are seldom satisfied. Therefore, there is a need to first adjust or align the time-resolved data before further analysis is carried out. Dynamic time warping (DTW) is considered one of the common alignment techniques for time series data with equidistant time points. In this paper, we modified the DTW algorithm so that it can align sequences with measurements at different, non-equidistant time points with large gaps in between. This type of data is usually known as time-resolved data, characterized by irregular time intervals between measurements as well as non-identical time points for different replicates. This new algorithm can easily be used to align time-resolved data from high-throughput experiments and to overcome existing problems such as time scarcity and noise in the measurements. We propose a modified method of DTW adapted to the requirements imposed by time-resolved data through the use of monotone cubic interpolation splines.
Our presented approach provides a nonlinear alignment of two sequences that need to have neither equidistant time points nor measurements at identical time points. The proposed method is evaluated with artificial as well as real data. The software is available as an R package tra (Time-Resolved data Alignment), which is freely available at http://public.ostfalia.de/klawonn/tra.zip.
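For readers unfamiliar with the baseline being modified, the classical equidistant DTW algorithm can be sketched as follows (the paper's contribution — handling non-equidistant time points via monotone cubic splines — is not reproduced here):

```python
import numpy as np

def dtw_distance(a, b):
    """Classical dynamic time warping distance between two 1-D sequences,
    the unmodified equidistant-time variant, via dynamic programming."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of: insertion, deletion, or match.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A sequence and a phase-shifted copy align more cheaply under DTW
# than under pointwise comparison:
x = np.sin(np.linspace(0, 2 * np.pi, 50))
y = np.sin(np.linspace(0, 2 * np.pi, 50) + 0.3)
d = dtw_distance(x, y)
```

Since the identity (diagonal) path is one admissible warping, the DTW distance never exceeds the pointwise L1 distance, which is the property that makes it useful for misaligned replicates.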
Volatility of linear and nonlinear time series
NASA Astrophysics Data System (ADS)
Kalisky, Tomer; Ashkenazy, Yosef; Havlin, Shlomo
2005-07-01
Previous studies indicated that nonlinear properties of Gaussian distributed time series with long-range correlations, ui, can be detected and quantified by studying the correlations in the magnitude series ∣ui∣, the “volatility.” However, the origin for this empirical observation still remains unclear and the exact relation between the correlations in ui and the correlations in ∣ui∣ is still unknown. Here we develop analytical relations between the scaling exponent of a linear series ui and its magnitude series ∣ui∣. Moreover, we find that nonlinear time series exhibit stronger (or the same) correlations in the magnitude time series compared with linear time series with the same two-point correlations. Based on these results we propose a simple model that generates multifractal time series by explicitly inserting long range correlations in the magnitude series; the nonlinear multifractal time series is generated by multiplying a long-range correlated time series (that represents the magnitude series) with an uncorrelated time series [that represents the sign series sgn(ui)]. We apply our techniques on daily deep ocean temperature records from the equatorial Pacific, the region of the El-Niño phenomenon, and find: (i) long-range correlations from several days to several years with 1/f power spectrum, (ii) significant nonlinear behavior as expressed by long-range correlations of the volatility series, and (iii) broad multifractal spectrum.
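The proposed sign-times-magnitude construction can be sketched directly: generate a long-range correlated series by Fourier filtering (a standard technique, assuming NumPy), take its absolute value as the magnitude series, and multiply by uncorrelated random signs. Parameter values here are illustrative:

```python
import numpy as np

def long_range_correlated(n, beta, rng):
    """Gaussian series with power-law spectrum S(f) ~ 1/f**beta (Fourier filtering)."""
    freqs = np.fft.rfftfreq(n)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-beta / 2.0)          # spectral envelope, zero mean
    phases = rng.uniform(0, 2 * np.pi, len(freqs))
    x = np.fft.irfft(amp * np.exp(1j * phases), n)
    return (x - x.mean()) / x.std()

rng = np.random.default_rng(1)
magnitude = np.abs(long_range_correlated(4096, 0.8, rng))  # correlated magnitudes
sign = rng.choice([-1.0, 1.0], size=4096)                  # uncorrelated signs
series = sign * magnitude    # nonlinear series per the paper's model
```

By construction, all long-range correlation resides in |series| while the sign sequence carries none, which is exactly the mechanism the abstract identifies as the source of nonlinearity.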
Duality between Time Series and Networks
Campanharo, Andriana S. L. O.; Sirer, M. Irmak; Malmgren, R. Dean; Ramos, Fernando M.; Amaral, Luís A. Nunes.
2011-01-01
Studying the interaction between a system's components and the temporal evolution of the system are two common ways to uncover and characterize its internal workings. Recently, several maps from a time series to a network have been proposed with the intent of using network metrics to characterize time series. Although these maps demonstrate that different time series result in networks with distinct topological properties, it remains unclear how these topological properties relate to the original time series. Here, we propose a map from a time series to a network with an approximate inverse operation, making it possible to use network statistics to characterize time series and time series statistics to characterize networks. As a proof of concept, we generate an ensemble of time series ranging from periodic to random and confirm that application of the proposed map retains much of the information encoded in the original time series (or networks) after application of the map (or its inverse). Our results suggest that network analysis can be used to distinguish different dynamic regimes in time series and, perhaps more importantly, time series analysis can provide a powerful set of tools that augment the traditional network analysis toolkit to quantify networks in new and useful ways. PMID:21858093
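One simple way to realize such a map with an approximate inverse is a quantile-based transition network: values are binned into quantiles (the nodes), transitions between consecutive time points weight the edges, and a random walk on the network regenerates a series of quantile labels. This sketch follows that general scheme under stated assumptions (bin count and data are illustrative, and this is a simplified reading of the map, not the paper's exact construction):

```python
import numpy as np

def series_to_network(x, q):
    """Map a time series to a q-node weighted transition network:
    nodes are quantile bins, edge (i, j) is the probability of moving
    from bin i to bin j between consecutive time points."""
    edges = np.quantile(x, np.linspace(0, 1, q + 1))
    labels = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, q - 1)
    w = np.zeros((q, q))
    for a, b in zip(labels[:-1], labels[1:]):
        w[a, b] += 1
    return w / w.sum(axis=1, keepdims=True)

def network_to_series(w, length, rng):
    """Approximate inverse: a random walk on the network yields a label
    series with the same one-step transition statistics."""
    q = len(w)
    state = int(rng.integers(q))
    out = [state]
    for _ in range(length - 1):
        state = int(rng.choice(q, p=w[state]))
        out.append(state)
    return np.array(out)

rng = np.random.default_rng(2)
periodic = np.sin(np.arange(1000))
W = series_to_network(periodic, 4)
walk = network_to_series(W, 50, rng)
```

A periodic series yields a sparse, nearly deterministic transition matrix, while a random series yields a dense one, which is how network statistics separate dynamic regimes.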
The Strange World of Classical Physics
ERIC Educational Resources Information Center
Green, David
2010-01-01
We have heard many times that the commonsense world of classical physics was shattered by Einstein's revelation of the laws of relativity. This is certainly true; the shift from our everyday notions of time and space to those revealed by relativity is one of the greatest stretches the mind can make. What is seldom appreciated is that the laws of…
Provable classically intractable sampling with measurement-based computation in constant time
NASA Astrophysics Data System (ADS)
Sanders, Stephen; Miller, Jacob; Miyake, Akimasa
We present a constant-time measurement-based quantum computation (MQC) protocol to perform a classically intractable sampling problem. We sample from the output probability distribution of a subclass of the instantaneous quantum polynomial time circuits introduced by Bremner, Montanaro and Shepherd. In contrast with the usual circuit model, our MQC implementation includes additional randomness due to byproduct operators associated with the computation. Despite this additional randomness we show that our sampling task cannot be efficiently simulated by a classical computer. We extend previous results to verify the quantum supremacy of our sampling protocol efficiently using only single-qubit Pauli measurements. Center for Quantum Information and Control, Department of Physics and Astronomy, University of New Mexico, Albuquerque, NM 87131, USA.
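To make the circuit family concrete, a tiny diagonal-between-Hadamards circuit of the IQP type can be simulated by brute force for a few qubits; quantum supremacy arguments concern the scaling, not such small instances, and the phase convention below is one illustrative choice:

```python
import numpy as np
from itertools import product

def iqp_probs(n, terms):
    """Output distribution of a small circuit H^{(x)n} . D . H^{(x)n} |0...0>,
    where the diagonal D applies phase exp(i*theta) on basis states with odd
    parity over the qubit subset S. `terms` is a list of (S, theta). Brute force."""
    dim = 2 ** n
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Hn = H
    for _ in range(n - 1):
        Hn = np.kron(Hn, H)
    basis = np.array(list(product([0, 1], repeat=n)))
    phases = np.zeros(dim)
    for S, theta in terms:
        parity = basis[:, S].sum(axis=1) % 2
        phases += theta * parity
    D = np.diag(np.exp(1j * phases))
    psi = Hn @ D @ Hn @ np.eye(dim)[:, 0]
    return np.abs(psi) ** 2

p = iqp_probs(3, [([0, 1], np.pi / 4), ([1, 2], np.pi / 4), ([0, 1, 2], np.pi / 8)])
```

Sampling from `p` is trivial here; the hardness results apply to the same structure at sizes where the 2^n state vector is no longer tractable.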
Pilot installation for the thermo-chemical characterisation of solid wastes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marculescu, C.; Antonini, G.; Badea, A.
The increasing production and the large variety of wastes require operators of thermal treatment units to continuously adapt their installations or operating parameters to the different physical and chemical properties of the wastes. Usually, the treated waste is encountered in the form of heterogeneous mixtures. Classical tests such as thermogravimetry and the calorimetric bomb operate component by component, separately. In addition, they can analyse only small quantities of waste at a time (a few grams). These common tests are necessary but insufficient for the global analysis of waste in view of further thermal treatment. This paper presents an experimental installation, which was designed and built at the CNRS Science Division, Department of Industrial Methods, Compiegne University of Technology, France. It allows the determination of waste thermal and chemical properties by means of thermal treatment. It is also capable of continuously analysing significant quantities of waste (up to 50 kg/h) as compared to the classical tests, and it can work under various conditions: oxidant or reductive atmosphere (by choice); variable temperature between 400 and 1000 °C; independently set residence time of the treated sample in the installation and flow conditions. The installation reproduces the process conditions of incinerators or pyrolysis reactors. It also provides complete information on the kinetics of the waste thermal degradation and on the pollutant emissions. Using different mixtures of components present in municipal solid waste and also in reconstituted MSW samples, we defined a series of criteria for characterising waste behaviour during the stages of the main treatment process, such as: feeding, devolatilisation/oxidation, advancement, solid residue evacuation, and pollutant emission.
A Viscoplastic Constitutive Theory for Monolithic Ceramic Materials. Series 1
NASA Technical Reports Server (NTRS)
Janosik, Lesley A.; Duffy, Stephen F.
1997-01-01
With increasing use of ceramic materials in high temperature structural applications such as advanced heat engine components, the need arises to accurately predict thermomechanical behavior. This paper, which is the first of two in a series, will focus on inelastic deformation behavior associated with these service conditions by providing an overview of a viscoplastic constitutive model that accounts for time-dependent hereditary material deformation (e.g., creep, stress relaxation, etc.) in monolithic structural ceramics. Early work in the field of metal plasticity indicated that inelastic deformations are essentially unaffected by hydrostatic stress. This is not the case, however, for ceramic-based material systems, unless the ceramic is fully dense. The theory presented here allows for fully dense material behavior as a limiting case. In addition, ceramic materials exhibit different time-dependent behavior in tension and compression. Thus, inelastic deformation models for ceramics must be constructed in a fashion that admits both sensitivity to hydrostatic stress and differing behavior in tension and compression. A number of constitutive theories for materials that exhibit sensitivity to the hydrostatic component of stress have been proposed that characterize deformation using time-independent classical plasticity as a foundation. However, none of these theories allow different behavior in tension and compression. In addition, these theories are somewhat lacking in that they are unable to capture creep, relaxation, and rate-sensitive phenomena exhibited by ceramic materials at high temperature. When subjected to elevated service temperatures, ceramic materials exhibit complex thermomechanical behavior that is inherently time-dependent, and hereditary in the sense that current behavior depends not only on current conditions, but also on thermo-mechanical history. 
The objective of this work is to present the formulation of a macroscopic continuum theory that captures these time-dependent phenomena. Specifically, the overview contained in this paper focuses on the multiaxial derivation of the constitutive model, and examines the scalar threshold function and its attending geometrical implications.
NASA Astrophysics Data System (ADS)
Süveges, Maria; Anderson, Richard I.
2018-03-01
Context. Recent studies have revealed a hitherto unknown complexity of Cepheid pulsations by discovering irregular modulated variability using photometry, radial velocities, and interferometry. Aim. We aim to perform a statistically rigorous search and characterization of such phenomena in continuous time, applying it to 53 classical Cepheids from the OGLE-III catalog. Methods: We have used local kernel regression to search for both period and amplitude modulations simultaneously in continuous time and to investigate their detectability. We determined confidence intervals using parametric and non-parametric bootstrap sampling to estimate significance, and investigated multi-periodicity using a modified pre-whitening approach that relies on time-dependent light curve parameters. Results: We find a wide variety of period and amplitude modulations and confirm that first overtone pulsators are less stable than fundamental mode Cepheids. Significant temporal variations in period are more frequently detected than those in amplitude. We find a range of modulation intensities, suggesting that both amplitude and period modulations are ubiquitous among Cepheids. Over the 12-year baseline offered by OGLE-III, we find that period changes are often nonlinear, sometimes cyclic, suggesting physical origins beyond secular evolution. Our method detects modulations (period and amplitude) more efficiently than conventional methods that are reliant on certain features in the Fourier spectrum, and pre-whitens time series more accurately than using constant light curve parameters, removing spurious secondary peaks effectively. Conclusions: Period and amplitude modulations appear to be ubiquitous among Cepheids. Current detectability is limited by observational cadence and photometric precision: detection of amplitude modulation below 3 mmag requires space-based facilities. 
Recent and ongoing space missions (K2, BRITE, MOST, CoRoT) as well as upcoming ones (TESS, PLATO) will significantly improve detectability of fast modulations, such as cycle-to-cycle variations, by providing high-cadence high-precision photometry. High-quality long-term ground-based photometric time series will remain crucial to study longer-term modulations and to disentangle random fluctuations from secular evolution.
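The local kernel regression at the core of the method above can be sketched as a Nadaraya-Watson estimator: a kernel-weighted local mean of the observations, evaluated in continuous time. The following is a minimal illustration on a synthetic amplitude series; the Gaussian kernel, bandwidth, and toy light curve are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def kernel_regression(t, y, t_eval, bandwidth):
    """Nadaraya-Watson estimator with a Gaussian kernel: a locally
    weighted mean of y, evaluated at each point in t_eval."""
    t = np.asarray(t, float)
    y = np.asarray(y, float)
    out = np.empty(len(t_eval))
    for i, t0 in enumerate(np.asarray(t_eval, float)):
        w = np.exp(-0.5 * ((t - t0) / bandwidth) ** 2)
        out[i] = np.sum(w * y) / np.sum(w)
    return out

# Toy amplitude series: a slow modulation plus observational noise.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 100, 500))
amp = 1.0 + 0.3 * np.sin(2 * np.pi * t / 80)    # slow amplitude modulation
y = amp + 0.05 * rng.standard_normal(len(t))    # noisy observations
smooth = kernel_regression(t, y, t_eval=t, bandwidth=5.0)
```

The bandwidth acts as the smoothing time-scale: chosen well below the modulation period, the estimator recovers the slow modulation while averaging out the noise.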
Zoroquiain, Pablo; Mayo-Goldberg, Erin; Alghamdi, Sarah; Alhumaid, Sulaiman; Perlmann, Eduardo; Barros, Paulo; Mayo, Nancy; Burnier, Miguel N
2016-12-01
The cutoff presented in the current classification of canine melanocytic lesions by Wilcock and Pfeiffer is based on the clinical outcome rather than morphological concepts. Classification of tumors based on morphology or molecular signatures is the key to identifying new therapies or prognostic factors. Therefore, the aim of this study was to analyze morphological findings in canine melanocytic lesions based on classic malignant morphologic principles of neoplasia and to compare these features with human uveal melanoma (HUM) samples. In total, 64 canine and 111 human morphologically malignant melanocytic lesions were classified into two groups (melanocytoma-like or classic melanoma) based on the presence or absence of M cells, respectively. Histopathological characteristics were compared between the two groups using the χ²-test, t-test, and multivariate discriminant analysis. Among the 64 canine tumors, 28 (43.7%) were classic and 36 (56.3%) were melanocytoma-like melanomas. Smaller tumor size, a higher degree of pigmentation, and lower mitotic activity distinguished melanocytoma-like from classic tumors with an accuracy of 100% for melanocytoma-like lesions. From the human series, only one case showed melanocytoma-like features and had low-risk-for-metastasis characteristics. Canine uveal melanoma showed a morphological spectrum with features similar to its HUM counterpart (classic melanoma) and features overlapping between uveal melanoma and melanocytoma (melanocytoma-like melanoma). Recognition that the subgroup of melanocytoma-like melanoma may represent the missing link between benign and malignant lesions could help explain the progression of uveal melanoma in dogs; these findings can potentially be translated to HUM.
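The group comparison via the χ²-test can be illustrated with a contingency table of categorical histopathological features. The counts below are hypothetical, chosen only to show the mechanics of the test, and are not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x3 contingency table: pigmentation grade (low/medium/high)
# in melanocytoma-like vs classic tumors. Counts are illustrative only.
table = np.array([[4, 10, 22],    # melanocytoma-like
                  [15, 9, 4]])    # classic melanoma
chi2, p, dof, expected = chi2_contingency(table)
```

A small p-value here would indicate that the pigmentation distribution differs between the two groups, which is the kind of evidence the study combines with t-tests and discriminant analysis.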
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paavola, Janika; Hall, Michael J. W.; Paris, Matteo G. A.
The transition from quantum to classical, in the case of a quantum harmonic oscillator, is typically identified with the transition from a quantum superposition of macroscopically distinguishable states, such as the Schroedinger-cat state, into the corresponding statistical mixture. This transition is commonly characterized by the asymptotic loss of the interference term in the Wigner representation of the cat state. In this paper we show that the quantum-to-classical transition has different dynamical features depending on the measure for nonclassicality used. Measures based on an operatorial definition have well-defined physical meaning and allow a deeper understanding of the quantum-to-classical transition. Our analysis shows that, for most nonclassicality measures, the Schroedinger-cat state becomes classical after a finite time. Moreover, our results challenge the prevailing idea that more macroscopic states are more susceptible to decoherence in the sense that the transition from quantum to classical occurs faster. Since nonclassicality is a prerequisite for entanglement generation, our results also bridge the gap between decoherence, which is lost only asymptotically, and entanglement, which may show a "sudden death". In fact, whereas the loss of coherences still remains asymptotic, we emphasize that the transition from quantum to classical can indeed occur at a finite time.
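For the standard amplitude-damping channel, the Wigner interference fringes of an even cat state |α⟩ + |−α⟩ are suppressed by the factor exp(−2|α|²(1 − e^(−γt))); this is the conventional asymptotic-loss picture that the paper contrasts with operatorial nonclassicality measures. A minimal numeric sketch of this standard result (the rates and amplitudes are arbitrary illustrative values):

```python
import numpy as np

def cat_visibility(alpha, gamma, t):
    """Suppression factor of the Wigner interference fringes of an even
    cat state |alpha> + |-alpha> under amplitude damping at rate gamma:
    exp(-2|alpha|^2 (1 - exp(-gamma t))). Equals 1 at t = 0 and decays
    toward exp(-2|alpha|^2), i.e., the coherence is lost only asymptotically."""
    return np.exp(-2 * abs(alpha) ** 2 * (1 - np.exp(-gamma * t)))

t = np.linspace(0.0, 5.0, 1000)   # time in units of 1/gamma
v_small = cat_visibility(2.0, 1.0, t)
v_large = cat_visibility(4.0, 1.0, t)
```

The larger-amplitude cat loses fringe visibility faster at every time, reflecting the usual "more macroscopic decoheres faster" intuition that the paper re-examines with finite-time nonclassicality measures.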
Accurate expressions for solar cell fill factors including series and shunt resistances
NASA Astrophysics Data System (ADS)
Green, Martin A.
2016-02-01
Together with open-circuit voltage and short-circuit current, fill factor is a key solar cell parameter. In their classic paper on limiting efficiency, Shockley and Queisser first investigated this factor's analytical properties, showing that, for ideal cells, it can be expressed implicitly in terms of the maximum power point voltage. Subsequently, fill factors have usually been calculated iteratively from such implicit expressions or from analytical approximations. In the absence of detrimental series and shunt resistances, analytical fill factor expressions have recently been published in terms of the Lambert W function available in most mathematical computing software. Using a recently identified perturbative relationship, exact expressions in terms of this function are derived for the technically interesting cases when both series and shunt resistances are present but have limited impact, allowing a better understanding of their effect individually and in combination. Approximate expressions for arbitrary shunt and series resistances are then deduced, which are significantly more accurate than any previously published. A method based on the insights developed is also reported for deducing one-diode fits to experimental data.
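For the ideal-cell case the paper builds on, the maximum-power-point condition (1 + v_mp)·e^(v_mp) = e^(v_oc), with voltages normalized to nkT/q, has the exact solution v_mp = W(e^(v_oc + 1)) − 1 in terms of the Lambert W function. A minimal sketch using SciPy; this covers only the resistance-free case, not the paper's new expressions with series and shunt resistance.

```python
import numpy as np
from scipy.special import lambertw

def ideal_fill_factor(voc_norm):
    """Exact fill factor of an ideal one-diode cell (no series or shunt
    resistance). The maximum-power-point condition
    (1 + v_mp) * exp(v_mp) = exp(voc_norm) is solved in closed form via
    the Lambert W function; voltages are in units of nkT/q."""
    v_mp = float(np.real(lambertw(np.exp(voc_norm + 1.0)))) - 1.0
    i_mp_over_isc = v_mp * np.exp(v_mp) / (np.exp(voc_norm) - 1.0)
    return v_mp * i_mp_over_isc / voc_norm

ff_20 = ideal_fill_factor(20.0)   # voc_norm = 20 is ~0.52 V at room temperature
ff_25 = ideal_fill_factor(25.0)   # fill factor grows with normalized Voc
```

The closed-form value agrees closely with Green's well-known empirical approximation FF ≈ (v_oc − ln(v_oc + 0.72)) / (v_oc + 1) in this range.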
Allagui, Anis; Freeborn, Todd J.; Elwakil, Ahmed S.; Maundy, Brent J.
2016-01-01
The electric characteristics of electric-double layer capacitors (EDLCs) are determined by their capacitance, which is usually measured in the time domain from constant-current charging/discharging and cyclic voltammetry tests, and from the frequency domain using nonlinear least-squares fitting of spectral impedance. The time-voltage and current-voltage profiles from the first two techniques are commonly treated by assuming ideal RsC behavior in spite of the nonlinear response of the device, which in turn provides inaccurate values for its characteristic metrics. In this paper we revisit the calculation of capacitance, power and energy of EDLCs from the time domain constant-current step response and linear voltage waveform, under the assumption that the device behaves as an equivalent fractional-order circuit consisting of a resistance Rs in series with a constant phase element (CPE(Q, α), with Q being a pseudocapacitance and α a dispersion coefficient). In particular, we show with the derived (Rs, Q, α)-based expressions that the corresponding nonlinear effects in voltage-time and current-voltage can be encompassed through nonlinear terms that are functions of the coefficient α, which is not possible with the classical RsC model. We validate our formulae with the experimental measurements of different EDLCs. PMID:27934904
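The constant-current step response of the Rs-CPE series model has a simple closed form, v(t) = I·Rs + I·t^α / (Q·Γ(1 + α)), which reduces to the classical linear RsC ramp at α = 1. A minimal sketch of this fractional-order response (the component values are arbitrary illustrative choices, not fitted device parameters):

```python
import numpy as np
from math import gamma

def cc_charge_voltage(t, current, rs, q, alpha):
    """Voltage across a resistance Rs in series with a CPE(Q, alpha)
    under a constant charging current:
        v(t) = I*Rs + I * t**alpha / (Q * Gamma(1 + alpha)).
    alpha = 1 recovers the classical RsC ramp v = I*Rs + I*t/C with C = Q."""
    return current * rs + current * t ** alpha / (q * gamma(1.0 + alpha))

t = np.linspace(0.0, 10.0, 101)                        # s
v_ideal = cc_charge_voltage(t, 1e-3, 0.5, 1.0, 1.0)    # classical linear ramp
v_cpe   = cc_charge_voltage(t, 1e-3, 0.5, 1.0, 0.85)   # sub-linear, nonlinear rise
```

The α < 1 curve rises sub-linearly, which is the kind of nonlinearity in the time-voltage profile that the ideal RsC treatment cannot capture.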
Smoothing analysis of slug tests data for aquifer characterization at laboratory scale
NASA Astrophysics Data System (ADS)
Aristodemo, Francesco; Ianchello, Mario; Fallico, Carmine
2018-07-01
The present paper proposes a smoothing analysis of hydraulic head data sets obtained from different slug tests performed in a confined aquifer. Laboratory experiments were performed through a 3D large-scale physical model built at the University of Calabria. The hydraulic head data were obtained by a pressure transducer placed in the injection well and subjected to a processing operation to smooth out the high-frequency noise occurring in the recorded signals. The adopted smoothing techniques, working in the time, frequency, and time-frequency domains, are the Savitzky-Golay filter based on a third-order polynomial, the Fourier Transform, and two types of Wavelet Transform (Mexican hat and Morlet). The performance of the filtered time series of the hydraulic heads for different slug volumes and measurement frequencies was statistically analyzed in terms of the optimal fit to the classical Cooper's equation. For practical purposes, the hydraulic heads smoothed by the involved techniques were used to determine the hydraulic conductivity of the aquifer. The Wavelet Transform was also used to examine, in the time-frequency domain, the energy content and frequency oscillations of the hydraulic head variations in the aquifer, as well as the non-linear features of the observed oscillations around the theoretical Cooper's equation.
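The third-order Savitzky-Golay filtering step can be sketched with SciPy on a synthetic head-recovery signal. The window length, noise level, and decay constant below are assumptions for illustration, not the experiment's values.

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic slug-test-like head recovery: smooth exponential decay plus
# high-frequency noise (illustrative signal, not the paper's data).
rng = np.random.default_rng(1)
t = np.linspace(0.0, 60.0, 601)                  # s, 10 Hz sampling
head = 0.8 * np.exp(-t / 15.0)                   # m, noise-free decay
noisy = head + 0.02 * rng.standard_normal(len(t))

# Third-order polynomial fit over a sliding 51-sample window, matching the
# paper's choice of a third-order Savitzky-Golay filter (the window length
# is our assumption).
smoothed = savgol_filter(noisy, window_length=51, polyorder=3)
```

Because the local cubic fit tracks the slowly varying recovery curve while averaging the fast noise, the smoothed series sits much closer to the true head than the raw record, which is what enables a cleaner fit of Cooper's equation downstream.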
Quantum break-time of de Sitter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dvali, Gia; Gómez, César; Zell, Sebastian, E-mail: georgi.dvali@physik.uni-muenchen.de, E-mail: cesar.gomez@uam.es, E-mail: sebastian.zell@campus.lmu.de
The quantum break-time of a system is the time-scale after which its true quantum evolution departs from the classical mean-field evolution. Capturing it requires a quantum resolution of the classical background, e.g., in terms of a coherent state. In this paper, we first consider a simple scalar model with anharmonic oscillations and derive its quantum break-time. Next, following [1], we apply these ideas to de Sitter space. We formulate a simple model of a spin-2 field, which for some time reproduces the de Sitter metric and simultaneously allows for its well-defined representation as a quantum coherent state of gravitons. The mean occupation number N of background gravitons turns out to be equal to the de Sitter horizon area in Planck units, while their frequency is given by the de Sitter Hubble parameter. In the semi-classical limit, we show that the model reproduces all the known properties of de Sitter, such as the redshift of probe particles and thermal Gibbons-Hawking radiation, all in the language of quantum S-matrix scatterings and decays of coherent state gravitons. Most importantly, this framework allows us to capture the 1/N effects to which the usual semi-classical treatment is blind. They violate the de Sitter symmetry and lead to a finite quantum break-time of the de Sitter state equal to the de Sitter radius times N. We also point out that the quantum break-time is inversely proportional to the number of particle species in the theory. Thus, the quantum break-time imposes the following consistency condition: older and species-richer universes must have smaller cosmological constants. For the maximal, phenomenologically acceptable number of species, the observed cosmological constant would saturate this bound if our Universe were 10^100 years old in its entire classical history.
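The abstract's scaling relations admit a quick order-of-magnitude check: with N ~ (R/l_P)² (horizon area in Planck units, up to O(1) factors) and t_q ~ N·R/c (omitting the species factor), the observed Hubble radius gives N ~ 10^122 and a break-time vastly exceeding the age of the Universe. A rough numeric sketch with rounded constants:

```python
# Order-of-magnitude estimate of the de Sitter quantum break-time, using
# the abstract's relations: N ~ horizon area in Planck units ~ (R/l_P)^2,
# and t_q ~ N * R / c. O(1) factors and the species factor are omitted.
c   = 3.0e8        # speed of light, m/s
l_p = 1.6e-35      # Planck length, m
r   = 1.4e26       # ~ observed Hubble radius, m (rough)

n_gravitons = (r / l_p) ** 2               # ~ 10^122 background gravitons
t_quantum_break_s = n_gravitons * r / c    # seconds
t_quantum_break_yr = t_quantum_break_s / 3.15e7
```

The resulting break-time (~10^132 years) dwarfs the 10^100-year bound quoted in the abstract precisely because the species factor, which shortens the break-time, is left out here.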
Acoustical study of classical Peking Opera singing.
Sundberg, Johan; Gu, Lide; Huang, Qiang; Huang, Ping
2012-03-01
Acoustic characteristics of classical opera singing differ considerably between the Western and the Chinese cultures. Singers in the classical Peking opera tradition specialize in one of a limited number of standard roles. Audio and electroglottograph signals were recorded for four performers of the Old Man role and three performers of the Colorful Face role. Recordings were made of the singers' speech and of recitatives and songs from their roles. Sound pressure level, fundamental frequency, and spectrum characteristics were analyzed. Histograms showing the distribution of fundamental frequency showed marked peaks for the songs, suggesting a scale tone structure. Some of the intervals between these peaks were similar to those used in Western music. Vibrato rate was about 3.5 Hz, that is, considerably slower than in Western classical singing. Spectra of vibrato-free tones contained unbroken series of harmonic partials sometimes reaching up to 17,000 Hz. Long-term-average spectrum (LTAS) curves showed no trace of a singer's formant cluster. However, the Colorful Face role singers' LTAS showed a marked peak near 3300 Hz, somewhat similar to that found in Western pop music singers. The mean LTAS spectrum slope between 700 and 6000 Hz decreased by about 0.2 dB/octave per dB of equivalent sound level.