Sample records for rescaled range analysis

  1. Change of spatial information under rescaling: A case study using multi-resolution image series

    NASA Astrophysics Data System (ADS)

    Chen, Weirong; Henebry, Geoffrey M.

    Spatial structure in imagery depends on a complicated interaction between the observational regime and the types and arrangements of entities within the scene that the image portrays. Although block averaging of pixels has commonly been used to simulate coarser resolution imagery, relatively little attention has been paid to the effects of simple rescaling on spatial structure, to explaining those effects, or to possible remedies. Yet, if there are significant differences in spatial variance between rescaled and observed images, the reliability of retrieved biogeophysical quantities may be affected. To investigate these issues, a nested series of high spatial resolution digital imagery was collected at a research site in eastern Nebraska in 2001. An airborne Kodak DCS420IR camera acquired imagery at three altitudes, yielding nominal spatial resolutions ranging from 0.187 m to 1 m. The red and near infrared (NIR) bands of the co-registered image series were normalized using pseudo-invariant features, and the normalized difference vegetation index (NDVI) was calculated. Plots of grain sorghum planted in orthogonal crop row orientations were extracted from the image series. The finest spatial resolution data were then rescaled by averaging blocks of pixels to produce a rescaled image series that closely matched the spatial resolution of the observed image series. Spatial structures of the observed and rescaled image series were characterized using semivariogram analysis. Results for NDVI and its component bands show, as expected, that decreasing spatial resolution leads to decreasing spatial variability and increasing spatial dependence. However, compared to the observed data, the rescaled images contain more persistent spatial structure that exhibits limited variation in both spatial dependence and spatial heterogeneity. Rescaling via simple block averaging fails to consider the effect of scene object shape and extent on spatial information. Because the features portrayed by pixels are equally weighted regardless of the shape and extent of the underlying scene objects, the rescaled image retains more of the original spatial information than would survive direct observation at a coarser sensor spatial resolution. In contrast, for the observed images, the modulation transfer function (MTF) of the imaging system blurs or removes high frequency features such as edges as the pixel size increases, resulting in greater variation in spatial structure. Successive applications of a low-pass spatial convolution filter are shown to mimic an MTF. Accordingly, it is recommended that such a procedure be applied prior to rescaling by simple block averaging when insufficient image metadata exist to replicate the net MTF of the imaging system, as might be expected in land cover change analysis studies using historical imagery.
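
    A minimal sketch of the recommended procedure follows: low-pass filter the fine-resolution band a few times to mimic a net MTF, then block-average to the target pixel size. The 3x3 uniform kernel, the two filter passes, and the random stand-in data are illustrative assumptions, not values from the study.

    ```python
    # Sketch: repeated low-pass convolution (to mimic a net MTF) before
    # simple block averaging. Kernel size and pass count are illustrative.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def ndvi(nir, red):
        """Standard normalized difference vegetation index."""
        return (nir - red) / (nir + red)

    def rescale_with_mtf(img, block, n_passes=2):
        """Low-pass filter n_passes times, then block-average to coarser pixels."""
        out = img.astype(float)
        for _ in range(n_passes):
            out = uniform_filter(out, size=3)      # simple low-pass convolution
        h = (out.shape[0] // block) * block
        w = (out.shape[1] // block) * block
        blocks = out[:h, :w].reshape(h // block, block, w // block, block)
        return blocks.mean(axis=(1, 3))            # simple block averaging

    fine = np.random.rand(500, 500)                # stand-in for a fine NDVI band
    coarse = rescale_with_mtf(fine, block=5)       # e.g. 0.2 m -> 1 m pixels
    ```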

  2. Long-range correlation in cosmic microwave background radiation.

    PubMed

    Movahed, M Sadegh; Ghasemi, F; Rahvar, Sohrab; Tabar, M Reza Rahimi

    2011-08-01

    We investigate the statistical anisotropy and Gaussianity of temperature fluctuations of Cosmic Microwave Background (CMB) radiation data from the Wilkinson Microwave Anisotropy Probe survey, using the Multifractal Detrended Fluctuation Analysis, Rescaled Range, and Scaled Windowed Variance methods. Multifractal Detrended Fluctuation Analysis shows that the CMB fluctuations have a long-range correlation function with multifractal behavior. By comparing the shuffled and surrogate series of CMB data, we conclude that the multifractal nature of the temperature fluctuations of CMB radiation is mainly due to long-range correlations, and that the map is consistent with a Gaussian distribution.

  3. Testing the conditional mass function of dark matter haloes against numerical N-body simulations

    NASA Astrophysics Data System (ADS)

    Tramonte, D.; Rubiño-Martín, J. A.; Betancort-Rijo, J.; Dalla Vecchia, C.

    2017-05-01

    We compare the predicted conditional mass function (CMF) of dark matter haloes from two theoretical prescriptions against numerical N-body simulations, both in overdense and underdense regions and at different Eulerian scales ranging from 5 to 30 h⁻¹ Mpc. In particular, we consider in detail a locally implemented rescaling of the unconditional mass function (UMF) already discussed in the literature, and also a generalization of the standard rescaling method described in the extended Press-Schechter formalism. First, we test the consistency of these two rescalings by verifying the normalization of the CMF at different scales, and showing that none of the proposed cases provides a normalized CMF. In order to satisfy the normalization condition, we include a modification in the rescaling procedure. After this modification, the resulting CMF generally provides a better description of numerical results. We finally present an analytical fit to the ratio between the CMF and the UMF (also known as the matter-to-halo bias function) in underdense regions, which could be of special interest to speed up the computation of the halo abundance when studying void statistics. In this case, the CMF prescription based on the locally implemented rescaling provides a slightly better description of the numerical results when compared to the standard rescaling.

  4. Discretization analysis of bifurcation based nonlinear amplifiers

    NASA Astrophysics Data System (ADS)

    Feldkord, Sven; Reit, Marco; Mathis, Wolfgang

    2017-09-01

    Recently, for modeling biological amplification processes, nonlinear amplifiers based on the supercritical Andronov-Hopf bifurcation have been widely analyzed analytically. For technical realizations, digital systems have become the most relevant systems in signal processing applications. The underlying continuous-time systems are transferred to the discrete-time domain using numerical integration methods. Within this contribution, the effects of numerical integration methods on the qualitative behavior of Andronov-Hopf bifurcation based systems are analyzed. It is shown exemplarily that explicit Runge-Kutta methods transform the truncated normal form equation of the Andronov-Hopf bifurcation into the normal form equation of the Neimark-Sacker bifurcation. Depending on the order of the integration method, higher order terms are added during this transformation. A rescaled normal form equation of the Neimark-Sacker bifurcation is introduced that allows a parametric design of a discrete-time system corresponding to the rescaled Andronov-Hopf system. This system approximates the characteristics of the rescaled Hopf-type amplifier for a large range of parameters. The natural frequency and the peak amplitude are preserved for every set of parameters. The Neimark-Sacker bifurcation based systems avoid the large computational effort that would be caused by applying higher order integration methods to the continuous-time normal form equations.

  5. Electron swarm properties under the influence of a very strong attachment in SF6 and CF3I obtained by Monte Carlo rescaling procedures

    NASA Astrophysics Data System (ADS)

    Mirić, J.; Bošnjaković, D.; Simonović, I.; Petrović, Z. Lj; Dujko, S.

    2016-12-01

    Electron attachment often imposes practical difficulties in Monte Carlo simulations, particularly under conditions of extensive losses of seed electrons. In this paper, we discuss two rescaling procedures for Monte Carlo simulations of electron transport in strongly attaching gases: (1) discrete rescaling, and (2) continuous rescaling. The two procedures are implemented in our Monte Carlo code with the aim of analyzing electron transport processes and attachment-induced phenomena in sulfur hexafluoride (SF6) and trifluoroiodomethane (CF3I). Though calculations have been performed over the entire range of reduced electric fields E/n0 (where n0 is the gas number density) for which experimental data are available, the emphasis is placed on the analysis below the critical (electric gas breakdown) field and under conditions when transport properties are greatly affected by electron attachment. The present calculations of electron transport data for SF6 and CF3I at low E/n0 take into account the full extent of the influence of electron attachment and spatially selective electron losses along the electron swarm profile, and attempt to produce data that may be used to model this range of conditions. The results of Monte Carlo simulations are compared to those predicted by the publicly available two-term Boltzmann solver BOLSIG+. A multitude of kinetic phenomena in electron transport has been observed and discussed using physical arguments. In particular, we discuss two important phenomena: (1) the reduction of the mean energy with increasing E/n0 for electrons in SF6, and (2) the occurrence of negative differential conductivity (NDC) in the bulk drift velocity only, for electrons in both SF6 and CF3I. The electron energy distribution function, the spatial variation of the attachment rate coefficient and average energy, as well as the spatial profile of the swarm are calculated and used to understand these phenomena.
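
    The "discrete rescaling" idea lends itself to a minimal sketch: whenever attachment has depleted the ensemble below a threshold, every surviving electron is duplicated and a statistical weight is halved, so swarm-averaged quantities are unchanged. The threshold, constant loss probability, and ensemble representation below are illustrative assumptions, not the paper's implementation.

    ```python
    import numpy as np

    # Sketch of discrete rescaling in an electron-swarm Monte Carlo:
    # duplicate survivors and halve the statistical weight once attachment
    # has removed half of the ensemble.
    rng = np.random.default_rng(0)
    n0 = 10_000
    velocities = rng.normal(0.0, 1.0, size=(n0, 3))   # stand-in electron ensemble
    weight = 1.0

    for step in range(1000):
        survive = rng.random(len(velocities)) > 1e-3  # stand-in attachment losses
        velocities = velocities[survive]
        if len(velocities) < n0 // 2:                 # discrete rescaling event
            velocities = np.concatenate([velocities, velocities])
            weight /= 2.0                             # keep weighted number fixed

    n_eff = weight * len(velocities)                  # weighted electron number
    ```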

  6. A Monte Carlo simulation to the performance of the R/S and V/S methods—Statistical revisit and real world application

    NASA Astrophysics Data System (ADS)

    He, Ling-Yun; Qian, Wen-Bin

    2012-07-01

    A correct or precise estimation of the Hurst exponent is one of the fundamentally important problems in the financial economics literature. Three tools are widely used to estimate the Hurst exponent: the canonical rescaled range (R/S), the rescaled variance statistic (V/S) and the modified rescaled range (modified R/S). To clarify their performance, we compare them by Monte Carlo simulations: we generate many time series of fractional Brownian motion, of a Weierstrass-Mandelbrot cosine fractal function, and of a fractionally integrated process, whose theoretical Hurst exponents are known, and compare the Hurst exponents estimated by the three methods. To better understand their pragmatic performance, we further apply all of these methods empirically in real-world applications. Our results imply that it is not appropriate to conclude simply that one method is better than another: V/S performs better when the analyzed market is anti-persistent, while R/S seems to be a reliable tool in persistent markets.
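
    For reference, a minimal implementation of the canonical R/S estimator is sketched below; it can be driven by any synthetic generator in a Monte Carlo study of the kind described above. The doubling window schedule and white-noise test signal are illustrative choices; the classical estimator is known to be biased slightly upward at small window sizes.

    ```python
    import numpy as np

    def rs_hurst(x, min_n=8):
        """Classical rescaled-range (R/S) estimate of the Hurst exponent."""
        x = np.asarray(x, dtype=float)
        ns, rs = [], []
        n = min_n
        while n <= len(x) // 2:
            vals = []
            for start in range(0, len(x) - n + 1, n):
                w = x[start:start + n]
                z = np.cumsum(w - w.mean())        # cumulative deviations
                r = z.max() - z.min()              # range of the deviations
                s = w.std(ddof=1)                  # block standard deviation
                if s > 0:
                    vals.append(r / s)
            ns.append(n)
            rs.append(np.mean(vals))
            n *= 2
        slope, _ = np.polyfit(np.log(ns), np.log(rs), 1)
        return slope                               # H from log(R/S) ~ H log(n)

    # Monte Carlo check on white noise, whose theoretical H is 0.5.
    h = np.mean([rs_hurst(np.random.randn(4096)) for _ in range(20)])
    ```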

  7. On the assimilation set-up of ASCAT soil moisture data for improving streamflow catchment simulation

    NASA Astrophysics Data System (ADS)

    Loizu, Javier; Massari, Christian; Álvarez-Mozos, Jesús; Tarpanelli, Angelica; Brocca, Luca; Casalí, Javier

    2018-01-01

    Assimilation of remotely sensed surface soil moisture (SSM) data into hydrological catchment models has been identified as a means to improve streamflow simulations, but reported results vary markedly depending on the particular model, catchment and assimilation procedure used. In this study, the influence of key aspects, such as the type of model, the re-scaling technique and the SSM observation error considered, was evaluated. To this end, Advanced SCATterometer (ASCAT) SSM observations were assimilated through the ensemble Kalman filter into two hydrological models of different complexity (namely MISDc and TOPLATS) run on two Mediterranean catchments of similar size (750 km²). Three different re-scaling techniques were evaluated (linear re-scaling, variance matching and cumulative distribution function matching), and SSM observation error values ranging from 0.01% to 20% were considered. Four different efficiency measures were used to evaluate the results. Increases in Nash-Sutcliffe efficiency (0.03-0.15) and efficiency indices (10-45%) were obtained, especially when linear re-scaling and observation errors within 4-6% were considered. This study found that there is potential to improve streamflow prediction through data assimilation of remotely sensed SSM in catchments of different characteristics and with hydrological models of different conceptualization schemes, but this requires a careful evaluation of the observation error and of the re-scaling technique set-up used.
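
    Two of the re-scaling techniques named above are easy to sketch: cumulative distribution function matching (quantile mapping of observations onto the model climatology) and matching of the first two moments. The synthetic soil-moisture series are stand-ins, and the paper's exact linear re-scaling variant may differ from the moment-matching form shown here.

    ```python
    import numpy as np

    def cdf_match(obs, model):
        """Map observations onto the model climatology by matching empirical
        cumulative distribution functions (quantile mapping)."""
        obs_sorted, model_sorted = np.sort(obs), np.sort(model)
        ranks = np.searchsorted(obs_sorted, obs, side="right") / len(obs)
        return np.quantile(model_sorted, np.clip(ranks, 0.0, 1.0))

    def moment_match(obs, model):
        """Re-scale observations to the model mean and variance."""
        return model.mean() + (obs - obs.mean()) * model.std() / obs.std()

    # Stand-in series: satellite SSM with a climatology unlike the model's.
    rng = np.random.default_rng(1)
    model_sm = np.clip(rng.normal(0.25, 0.05, 1000), 0.0, 1.0)
    ascat_sm = np.clip(rng.normal(0.40, 0.10, 1000), 0.0, 1.0)
    matched = cdf_match(ascat_sm, model_sm)
    ```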

  8. Quantitative Assessment of Arrhythmia Using Non-linear Approach: A Non-invasive Prognostic Tool

    NASA Astrophysics Data System (ADS)

    Chakraborty, Monisha; Ghosh, Dipak

    2017-12-01

    An accurate prognostic tool to identify the severity of arrhythmia is yet to be developed, owing to the complexity of the ECG signal. In this paper, we show that quantitative assessment of arrhythmia is possible using a non-linear technique based on "Hurst Rescaled Range Analysis". Although the concept of applying "non-linearity" to the study of various cardiac dysfunctions is not entirely new, the novel objectives of this paper are to identify the severity of the disease, to monitor different medicines and their doses, and to assess the efficiency of different medicines. The approach presented in this work is simple, which in turn will help doctors in efficient disease management. In this work, arrhythmia ECG time series are collected from the MIT-BIH database, and normal ECG time series are acquired using the POLYPARA system. Both types of time series are analyzed in the light of the non-linear approach following the method of "Rescaled Range Analysis". The quantitative parameter, the "Fractal Dimension" (D), is obtained from both types of time series. The major finding is that arrhythmia ECG exhibits lower values of D than normal ECG. This information can be used to assess the severity of arrhythmia quantitatively, which is a new direction for prognosis, and suitable software may be developed for use in medical practice.

  9. Quantitative Assessment of Arrhythmia Using Non-linear Approach: A Non-invasive Prognostic Tool

    NASA Astrophysics Data System (ADS)

    Chakraborty, Monisha; Ghosh, Dipak

    2018-04-01

    An accurate prognostic tool to identify the severity of arrhythmia is yet to be developed, owing to the complexity of the ECG signal. In this paper, we show that quantitative assessment of arrhythmia is possible using a non-linear technique based on "Hurst Rescaled Range Analysis". Although the concept of applying "non-linearity" to the study of various cardiac dysfunctions is not entirely new, the novel objectives of this paper are to identify the severity of the disease, to monitor different medicines and their doses, and to assess the efficiency of different medicines. The approach presented in this work is simple, which in turn will help doctors in efficient disease management. In this work, arrhythmia ECG time series are collected from the MIT-BIH database, and normal ECG time series are acquired using the POLYPARA system. Both types of time series are analyzed in the light of the non-linear approach following the method of "Rescaled Range Analysis". The quantitative parameter, the "Fractal Dimension" (D), is obtained from both types of time series. The major finding is that arrhythmia ECG exhibits lower values of D than normal ECG. This information can be used to assess the severity of arrhythmia quantitatively, which is a new direction for prognosis, and suitable software may be developed for use in medical practice.

  10. Multiscale analysis of information dynamics for linear multivariate processes.

    PubMed

    Faes, Luca; Montalto, Alessandro; Stramaglia, Sebastiano; Nollo, Giandomenico; Marinazzo, Daniele

    2016-08-01

    In the study of complex physical and physiological systems represented by multivariate time series, an issue of great interest is the description of the system dynamics over a range of different temporal scales. While information-theoretic approaches to the multiscale analysis of complex dynamics are being increasingly used, the theoretical properties of the applied measures are poorly understood. This study introduces for the first time a framework for the analytical computation of information dynamics for linear multivariate stochastic processes explored at different time scales. After showing that the multiscale processing of a vector autoregressive (VAR) process introduces a moving average (MA) component, we describe how to represent the resulting VARMA process using state-space (SS) models and how to exploit the SS model parameters to compute analytical measures of information storage and information transfer for the original and rescaled processes. The framework is then used to quantify multiscale information dynamics for simulated unidirectionally and bidirectionally coupled VAR processes, showing that rescaling may lead to insightful patterns of information storage and transfer but also to potentially misleading behaviors.
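
    The multiscale rescaling step itself is simple to sketch: a moving average of length tau followed by downsampling, after which information storage can be computed from the one-step linear prediction error under the Gaussian assumption. The least-squares AR fit below is a stand-in for the paper's state-space treatment, and the AR(1) test process is illustrative.

    ```python
    import numpy as np

    def rescale(y, tau):
        """Coarse-grain a series: moving average of length tau, then downsample."""
        return np.convolve(y, np.ones(tau) / tau, mode="valid")[::tau]

    def storage(y, p=5):
        """Information storage 0.5*ln(var(y)/var(e)) from an AR(p) fit,
        where e is the one-step linear prediction error (Gaussian case)."""
        X = np.column_stack([y[p - k - 1:len(y) - k - 1] for k in range(p)])
        target = y[p:]
        coef, *_ = np.linalg.lstsq(X, target, rcond=None)
        resid = target - X @ coef
        return 0.5 * np.log(np.var(target) / np.var(resid))

    rng = np.random.default_rng(2)
    y = np.zeros(20_000)
    for t in range(1, len(y)):                     # AR(1) toy process
        y[t] = 0.8 * y[t - 1] + rng.normal()
    profile = [storage(rescale(y, tau)) for tau in (1, 2, 4, 8, 16)]
    ```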

  11. Treatment of systematic errors in land data assimilation systems

    NASA Astrophysics Data System (ADS)

    Crow, W. T.; Yilmaz, M.

    2012-12-01

    Data assimilation systems are generally designed to minimize the influence of random error on the estimation of system states. Yet, experience with land data assimilation systems has also revealed the presence of large systematic differences between model-derived and remotely-sensed estimates of land surface states. Such differences are commonly resolved prior to data assimilation through implementation of a pre-processing rescaling step whereby observations are scaled (or non-linearly transformed) to somehow "match" comparable predictions made by an assimilation model. While the rationale for removing systematic differences in means (i.e., bias) between models and observations is well-established, relatively little theoretical guidance is currently available to determine the appropriate treatment of higher-order moments during rescaling. This talk presents a simple analytical argument to define an optimal linear rescaling strategy for observations prior to their assimilation into a land surface model. While a technique based on triple collocation theory is shown to replicate this optimal strategy, commonly applied rescaling techniques (e.g., so-called "least-squares regression" and "variance matching" approaches) are shown to represent only sub-optimal approximations to it. Since the triple collocation approach is likely infeasible in many real-world circumstances, general advice for deciding between various feasible (yet sub-optimal) rescaling approaches will be presented, with an emphasis on the implications of this work for the case of directly assimilating satellite radiances. While the bulk of the analysis will deal with linear rescaling techniques, its extension to nonlinear cases will also be discussed.
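
    The three linear options discussed above can be put side by side in a short sketch: the instrumental-variable scaling suggested by triple collocation (using a third independent estimate z), least-squares regression, and variance matching. The synthetic model/observation/third-product series are illustrative assumptions, not data from the talk.

    ```python
    import numpy as np

    def rescale_tc(y, x, z):
        """Instrumental-variable scaling suggested by triple collocation,
        using a third independent estimate z of the same state."""
        beta = np.cov(x, z)[0, 1] / np.cov(y, z)[0, 1]
        return x.mean() + beta * (y - y.mean())

    def rescale_regression(y, x):
        """Least-squares regression slope of the model onto the observation."""
        beta = np.cov(x, y)[0, 1] / np.var(y)
        return x.mean() + beta * (y - y.mean())

    def rescale_variance_match(y, x):
        """Match observation mean and variance to the model's."""
        return x.mean() + (y - y.mean()) * x.std() / y.std()

    # Synthetic test: one truth observed by a model, a biased observation,
    # and a third independent product.
    rng = np.random.default_rng(3)
    truth = rng.normal(0, 1, 5000)
    x = truth + rng.normal(0, 0.3, truth.size)        # model
    y = 2.0 * truth + rng.normal(0, 0.8, truth.size)  # biased observation
    z = truth + rng.normal(0, 0.5, truth.size)        # independent third product
    y_tc = rescale_tc(y, x, z)
    ```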

  12. Memory and long-range correlations in chess games

    NASA Astrophysics Data System (ADS)

    Schaigorodsky, Ana L.; Perotti, Juan I.; Billoni, Orlando V.

    2014-01-01

    In this paper we report the existence of long-range memory in the opening moves of a chronologically ordered set of chess games using an extensive chess database. We used two mapping rules to build discrete time series and analyzed them using two methods for detecting long-range correlations: rescaled range analysis and detrended fluctuation analysis. We found that long-range memory is related to the level of the players. When the database is filtered according to player levels we found differences in the persistence of the different subsets. For high level players, correlations are stronger at long time scales, whereas in intermediate and low level players they reach their maximum value at shorter time scales. This can be interpreted as a signature of the different strategies used by players with different levels of expertise. These results are robust against the choice of mapping rule and the method employed in the analysis of the time series.
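
    Of the two detection methods used above, detrended fluctuation analysis is sketched below (an R/S sketch appears under an earlier record in this list). The scale grid and first-order detrending are common but illustrative choices.

    ```python
    import numpy as np

    def dfa(x, scales=(8, 16, 32, 64, 128), order=1):
        """Detrended fluctuation analysis; returns the scaling exponent alpha."""
        y = np.cumsum(x - np.mean(x))              # integrated profile
        fluct = []
        for n in scales:
            f2 = []
            for i in range(len(y) // n):
                seg = y[i * n:(i + 1) * n]
                t = np.arange(n)
                trend = np.polyval(np.polyfit(t, seg, order), t)
                f2.append(np.mean((seg - trend) ** 2))   # detrended variance
            fluct.append(np.sqrt(np.mean(f2)))
        alpha, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
        return alpha

    # White noise gives alpha near 0.5; persistent series give alpha > 0.5.
    alpha = dfa(np.random.randn(8192))
    ```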

  13. Changes in the Hurst exponent of heartbeat intervals during physical activity

    NASA Astrophysics Data System (ADS)

    Martinis, M.; Knežević, A.; Krstačić, G.; Vargović, E.

    2004-07-01

    The fractal scaling properties of the heartbeat time series are studied in different controlled ergometric regimes using both the improved Hurst rescaled range (R/S) analysis and detrended fluctuation analysis (DFA). The long-time “memory effect” quantified by a Hurst exponent H > 0.5 is found to increase during progressive physical activity in healthy subjects, in contrast to subjects with stable angina pectoris, where it decreases. The results are also supported by the detrended fluctuation analysis. We argue that this finding may serve as a useful new diagnostic parameter for short heartbeat time series.

  14. Multiscale rescaled range analysis of EEG recordings in sevoflurane anesthesia.

    PubMed

    Liang, Zhenhu; Li, Duan; Ouyang, Gaoxiang; Wang, Yinghua; Voss, Logan J; Sleigh, Jamie W; Li, Xiaoli

    2012-04-01

    The Hurst exponent (HE) is a nonlinear measure of the smoothness of a fractal time series. In this study we applied the HE index, extracted from electroencephalographic (EEG) recordings, as a measure of anesthetic drug effects on brain activity. In 19 adult patients undergoing sevoflurane general anesthesia, we calculated the HE of the raw EEG, comparing the maximal overlap discrete wavelet transform (MODWT) with the traditional rescaled range (R/S) analysis technique, and with a commercial index of depth of anesthesia, the response entropy (RE). We analyzed each wavelet-decomposed sub-band as well as the combined low frequency bands (HEOLFB). The methods were compared in regard to pharmacokinetic/pharmacodynamic (PK/PD) modeling and prediction probability. All the low frequency band HE indices decreased as anesthesia deepened. However, the HEOLFB was the best index: it was less sensitive to artifacts, most closely tracked the exact point of loss of consciousness, showed a better prediction probability in separating the awake and unconscious states, and tracked sevoflurane concentration best, as estimated by the PK/PD models. The HE is a useful measure for estimating the depth of anesthesia; the HEOLFB showed the best performance for tracking drug effect and could be used as an index for accurately estimating the effect of anesthesia on brain activity.

  15. Rescaled range analysis of streamflow records in the São Francisco River Basin, Brazil

    NASA Astrophysics Data System (ADS)

    Araujo, Marcelo Vitor Oliveira; Celeste, Alcigeimes B.

    2018-01-01

    Hydrological time series sometimes exhibit a distinctive behavior known as long-term persistence, in which subsequent values depend on each other even at very large time scales, implying multiyear runs of consecutive droughts or floods. Typical models used to generate synthetic hydrological scenarios, widely used in the planning and management of water resources, fail to preserve this kind of persistence in the generated data and may therefore have a major impact on projects whose design lives span long periods of time. This study evaluates long-term persistence in streamflow records by means of the rescaled range analysis proposed by British engineer Harold E. Hurst, who first observed the phenomenon in the mid-twentieth century. In this paper, Hurst's procedure is enhanced by a strategy based on statistical hypothesis testing. The case study comprises the six main hydroelectric power plants located in the São Francisco River Basin, part of the Brazilian National Grid. Historical time series of inflows to the major reservoirs of the system are investigated, and five of the six sites show significant persistence, with values of the so-called Hurst exponent near or greater than 0.7, i.e., around 40% above the value of 0.5 that represents a white noise process. This suggests that decision makers should take long-term persistence into consideration when conducting water resources planning and management studies in the region.
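
    The hypothesis-testing enhancement described above can be sketched as a surrogate test: shuffling the record destroys temporal dependence while preserving the marginal distribution, giving a null distribution for the Hurst exponent. The estimator is passed in as a function (for example the rs_hurst sketch given under an earlier record); the surrogate count and significance level are illustrative choices, not the paper's exact procedure.

    ```python
    import numpy as np

    def persistence_test(x, hurst_estimator, n_surrogates=200, alpha=0.05):
        """One-sided surrogate test for long-term persistence: shuffling
        destroys temporal dependence but keeps the marginal distribution."""
        h_obs = hurst_estimator(x)
        rng = np.random.default_rng(4)
        h_null = np.array([hurst_estimator(rng.permutation(x))
                           for _ in range(n_surrogates)])
        p_value = np.mean(h_null >= h_obs)     # fraction of null H >= observed
        return h_obs, p_value, p_value < alpha
    ```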

  16. Rescaled earthquake recurrence time statistics: application to microrepeaters

    NASA Astrophysics Data System (ADS)

    Goltz, Christian; Turcotte, Donald L.; Abaimov, Sergey G.; Nadeau, Robert M.; Uchida, Naoki; Matsuzawa, Toru

    2009-01-01

    Slip on major faults primarily occurs during `characteristic' earthquakes. The recurrence statistics of characteristic earthquakes play an important role in seismic hazard assessment. A major problem in determining applicable statistics is the shortness of the sequences of characteristic earthquakes available worldwide. In this paper, we introduce a rescaling technique in which sequences can be superimposed to establish larger numbers of data points. We consider the Weibull and log-normal distributions; in both cases we rescale the data using means and standard deviations. We test our approach utilizing sequences of microrepeaters, micro-earthquakes that recur at the same location on a fault. It seems plausible to regard these earthquakes as a miniature version of the classic characteristic earthquakes. Microrepeaters are much more frequent than major earthquakes, leading to longer sequences for analysis. In this paper, we present results for the analysis of recurrence times for several microrepeater sequences from Parkfield, CA, as well as NE Japan. We find that, once the respective sequence can be considered sufficiently stationary, the statistics can be well fitted by either a Weibull or a log-normal distribution. We demonstrate this clearly with our technique of rescaled combination. We conclude that the recurrence statistics of the microrepeater sequences we consider are similar to the recurrence statistics of characteristic earthquakes on major faults.

  17. Nonlinear multi-analysis of agent-based financial market dynamics by epidemic system

    NASA Astrophysics Data System (ADS)

    Lu, Yunfan; Wang, Jun; Niu, Hongli

    2015-10-01

    Based on an epidemic dynamical system, we construct a new agent-based financial time series model. To check and verify its validity, we compare the statistical properties of the time series model with those of two real stock market indices, the Shanghai Stock Exchange Composite Index and the Shenzhen Stock Exchange Component Index. For the analysis, we combine multi-parameter analysis with tail distribution analysis, modified rescaled range analysis, and multifractal detrended fluctuation analysis. For a better perspective, three-dimensional diagrams are used to present the results. The empirical research in this paper indicates that the long-range dependence property and the multifractal phenomenon exist in both the real returns and the proposed model. The new agent-based financial model can therefore reproduce important features of real stock markets.

  18. Discrete Time Rescaling Theorem: Determining Goodness of Fit for Discrete Time Statistical Models of Neural Spiking

    PubMed Central

    Haslinger, Robert; Pipa, Gordon; Brown, Emery

    2010-01-01

    One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time-rescaling theorem provides a goodness-of-fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model’s spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov-Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies upon assumptions of continuously defined time and instantaneous events. However, spikes have finite width, and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time-rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time-rescaling theorem which analytically corrects for the effects of finite resolution. This allows us to define a rescaled time which is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting Generalized Linear Models (GLMs) to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false positive rate of the KS test and greatly increasing the reliability of model evaluation based upon the time-rescaling theorem. PMID:20608868
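
    A minimal sketch of a discrete-time rescaling test follows: each bin contributes hazard mass q_i = -log(1 - p_i), and a uniform draw inside the spiking bin restores continuity before a KS test against the unit exponential. This follows the spirit of the corrections described above rather than either exact construction, and the constant-probability spike train is an illustrative stand-in.

    ```python
    import numpy as np
    from scipy.stats import kstest

    def rescaled_intervals(spikes, p, seed=5):
        """Rescale ISIs of a binned spike train with per-bin spike
        probability p; a uniform draw splits the spiking bin."""
        rng = np.random.default_rng(seed)
        q = -np.log1p(-np.asarray(p, dtype=float))   # per-bin hazard mass
        taus, acc = [], 0.0
        for spiked, qi in zip(spikes, q):
            if spiked:
                u = rng.random()                     # spike position in the bin
                taus.append(acc + u * qi)
                acc = (1.0 - u) * qi
            else:
                acc += qi
        return np.array(taus)

    # Under a correct model the rescaled intervals are unit exponential.
    rng = np.random.default_rng(6)
    p = np.full(50_000, 0.02)
    spikes = rng.random(p.size) < p
    stat, pval = kstest(rescaled_intervals(spikes, p), "expon")
    ```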

  19. Discrete time rescaling theorem: determining goodness of fit for discrete time statistical models of neural spiking.

    PubMed

    Haslinger, Robert; Pipa, Gordon; Brown, Emery

    2010-10-01

    One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time-rescaling theorem provides a goodness-of-fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model's spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov-Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies on assumptions of continuously defined time and instantaneous events. However, spikes have finite width, and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time-rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time-rescaling theorem that analytically corrects for the effects of finite resolution. This allows us to define a rescaled time that is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting generalized linear models to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false-positive rate of the KS test and greatly increasing the reliability of model evaluation based on the time-rescaling theorem.

  20. Multifractal features in stock and foreign exchange markets

    NASA Astrophysics Data System (ADS)

    Kim, Kyungsik; Yoon, Seong-Min

    2004-03-01

    We investigate the tick dynamical behavior of three assets (the yen-dollar exchange rate, the won-dollar exchange rate, and the KOSPI) using rescaled range analysis in stock and foreign exchange markets. Multifractal Hurst exponents with long-run memory effects can be obtained for these assets, and we discuss whether a crossover exists in the Hurst exponents at characteristic time scales. In particular, we find that the probability distribution of prices approaches a Lorentz distribution, differing from fat-tailed behavior.

  1. R/S analysis of reaction time in Neuron Type Test for human activity in civil aviation

    NASA Astrophysics Data System (ADS)

    Zhang, Hong-Yan; Kang, Ming-Cui; Li, Jing-Qiang; Liu, Hai-Tao

    2017-03-01

    Human factors have become the most serious problem leading to accidents in civil aviation, which motivates the design and analysis of the Neuron Type Test (NTT) system to explore the intrinsic properties and patterns behind the behaviors of professionals and students in civil aviation. In our experiment, normal practitioners' reaction time sequences collected from the NTT approximately follow a log-normal distribution. We apply the χ² test to compute the goodness of fit after transforming the time sequences with the Box-Cox transformation, in order to cluster practitioners. The long-term correlation of each individual practitioner's time sequence is characterized by the Hurst exponent obtained via rescaled range analysis, also known as range/standard deviation (R/S) analysis. Differences in the Hurst exponent suggest the existence of different collective behaviors and different intrinsic patterns of human factors in civil aviation.
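
    The distribution-fitting step described above can be sketched as a Box-Cox transform followed by a chi-square goodness-of-fit test against the fitted normal (when the estimated Box-Cox exponent is near zero, this amounts to testing the raw reaction times for log-normality). The bin count and the synthetic reaction-time sample are illustrative assumptions.

    ```python
    import numpy as np
    from scipy import stats

    def lognormal_fit_test(rt, bins=10):
        """Box-Cox transform, then chi-square goodness of fit against the
        fitted normal; equal-probability bins from sample quantiles."""
        z, _ = stats.boxcox(rt)                    # Box-Cox transformation
        edges = np.quantile(z, np.linspace(0, 1, bins + 1))
        observed, _ = np.histogram(z, bins=edges)
        cdf = stats.norm(z.mean(), z.std(ddof=1)).cdf(edges)
        expected = len(z) * np.diff(cdf)
        expected *= observed.sum() / expected.sum()  # match totals exactly
        return stats.chisquare(observed, expected, ddof=2)

    rt = np.random.lognormal(mean=-0.5, sigma=0.3, size=500)  # stand-in RTs
    chi2, p = lognormal_fit_test(rt)
    ```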

  2. Are pound and euro the same currency?

    NASA Astrophysics Data System (ADS)

    Matsushita, Raul; Gleria, Iram; Figueiredo, Annibal; da Silva, Sergio

    2007-08-01

    Based on long-range dependence, some analysts claim that the exchange rate time series of the pound sterling and of an artificially extended euro have been locked together for years despite daily changes [M. Ausloos, K. Ivanova, Physica A 286 (2000) 353; K. Ivanova, M. Ausloos, False EUR exchange rates vs DKK, CHF, JPY and USD. What is a strong currency? in: H. Takayasu (Ed.), Empirical Sciences in Financial Fluctuations: The Advent of Econophysics, Springer-Verlag, Berlin, 2002, pp. 62-76]. They conclude that the pound and the euro are in practice the same currency. We assess the long-range dependence over time through Hurst exponents of pound-dollar and extended euro-dollar exchange rates employing three alternative techniques, namely rescaled range analysis, detrended fluctuation analysis, and detrended moving average. We find the result above (which is based on detrended fluctuation analysis) not to be robust to changes of technique and parameterization.

  3. Moist Baroclinic Life Cycles in an Idealized Model with Varying Hydrostasy

    NASA Astrophysics Data System (ADS)

    Hsieh, T. L.; Garner, S.; Held, I.

    2016-12-01

    Baroclinic life cycles are simulated in a limited-area model having varying degrees of hydrostasy to examine their interaction with explicitly resolved moist convection. The life cycles are driven by an idealized sea surface temperature field in an f-plane channel, and no convective parameterization is used. The hydrostasy is controlled by rescaling the model equations following the hypohydrostatic rescaling and by changing the resolution. In experiments having the same ratio between the grid spacing and the rescaling factor, the simulated convection is shown to have the same hydrostasy, suggesting that the low-resolution models have been rescaled to be as nonhydrostatic as the high-resolution model without additional computational cost. The nonhydrostatic convective cells in the rescaled models are found to be wider and slower than those in the unscaled models, consistent with predictions of the similarity theory. For the same resolution, although the wider cells in the rescaled models have better resolved structure, the total latent heating is insensitive to the rescaling factor. This is because latent heating is constrained by long-wave cooling, which is found to be insensitive to the model hydrostasy, requiring a non-similarity in the frequency and distribution of convection. Consequently, the resolved nonhydrostatic convection maintains the same stability profile as the unresolved hydrostatic convection, so the statistics of the life cycles are also insensitive to the rescaling factor. The findings suggest that the mean climate and internal variability would be unaffected by the hypohydrostatic rescaling when the self-organization of convection is not important.

  4. A Real Space Cellular Automaton Laboratory

    NASA Astrophysics Data System (ADS)

    Rozier, O.; Narteau, C.

    2013-12-01

    Investigations in geomorphology may benefit from computer modelling approaches that rely entirely on self-organization principles. In the vast majority of numerical models, instead, points in space are characterised by a variety of physical variables (e.g. sediment transport rate, velocity, temperature) recalculated over time according to some predetermined set of laws. However, there is not always a satisfactory theoretical framework from which we can quantify the overall dynamics of the system. For these reasons, we prefer to concentrate on interaction patterns using a basic cellular automaton modelling framework, the Real Space Cellular Automaton Laboratory (ReSCAL), a powerful and versatile generator of 3D stochastic models. The objective of this software suite, released under a GNU license, is to foster interdisciplinary research collaboration investigating the dynamics of complex systems. The models in ReSCAL are essentially constructed from a small number of discrete states distributed on a cellular grid. An elementary cell is a real-space representation of the physical environment, and pairs of nearest-neighbour cells are called doublets. Each individual physical process is associated with a set of doublet transitions and characteristic transition rates. Using a modular approach, we can simulate and combine a wide range of physical, chemical and/or anthropological processes. Here, we present different ingredients of ReSCAL leading to applications in geomorphology: dune morphodynamics and landscape evolution. We also discuss how ReSCAL can be applied and developed across many disciplines in natural and human sciences.

  5. Adimensional theory of shielding in ultracold collisions of dipolar rotors

    NASA Astrophysics Data System (ADS)

    González-Martínez, Maykel L.; Bohn, John L.; Quéméner, Goulven

    2017-09-01

    We investigate the electric field shielding of ultracold collisions of dipolar rotors, initially in their first rotational excited state, using an adimensional approach. We establish a map of good and bad candidates for efficient evaporative cooling based on this shielding mechanism, by presenting the ratio of elastic over quenching processes as a function of a rescaled rotational constant B̃ = B/s_E3 and a rescaled electric field F̃ = dF/B, where B, d, F, and s_E3 are respectively the rotational constant, the full electric dipole moment of the molecules, the applied electric field, and a characteristic dipole-dipole energy. We identify two groups of bi-alkali-metal dipolar molecules. The first group, including RbCs, NaK, KCs, LiK, NaRb, LiRb, NaCs, and LiCs, is favorable, with a ratio over 1000 at collision energies equal to (or even higher than) their characteristic dipolar energy. The second group, including LiNa and KRb, is not favorable. More generally, for molecules well described by Hund's case b, our adimensional study provides the conditions for efficient evaporative cooling. The range of appropriate rescaled rotational constant and rescaled field is approximately B̃ ≥ 10^8 and 3.25 ≤ F̃ ≤ 3.8, with a maximum ratio reached for F̃ ≃ 3.4 for a given B̃. We also discuss the importance of the electronic van der Waals interaction on the adimensional character of our study.

  6. The Relation between Factor Score Estimates, Image Scores, and Principal Component Scores

    ERIC Educational Resources Information Center

    Velicer, Wayne F.

    1976-01-01

    Investigates the relation between factor score estimates, principal component scores, and image scores. The three methods compared are maximum likelihood factor analysis, principal component analysis, and a variant of rescaled image analysis. (RC)

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ogden, K; O’Dwyer, R; Bradford, T

    Purpose: To reduce differences in features calculated from MRI brain scans acquired at different field strengths with or without Gadolinium contrast. Methods: Brain scans were processed for 111 epilepsy patients to extract hippocampus and thalamus features. Scans were acquired on 1.5 T scanners with Gadolinium contrast (group A), 1.5 T scanners without Gd (group B), and 3.0 T scanners without Gd (group C). A total of 72 features were extracted. Features were extracted from original scans and from scans where the image pixel values were rescaled to the mean of the hippocampi and thalami values. For each data set, cluster analysis was performed on the raw feature set and on feature sets with normalization (conversion to Z scores). Two methods of normalization were used: the first normalized over all values of a given feature, and the second normalized within each patient group. The clustering software was configured to produce 3 clusters. Group fractions in each cluster were calculated. Results: For features calculated from both the non-rescaled and rescaled data, cluster membership was identical for both the non-normalized and normalized data sets. Cluster 1 consisted entirely of group A data, Cluster 2 contained data from all three groups, and Cluster 3 contained data from only groups A and B. For the categorically normalized data sets there was a more uniform distribution of group data across the three clusters. A less pronounced effect was seen in the rescaled image data features. Conclusion: Image rescaling and feature renormalization can have a significant effect on the results of clustering analysis. These effects are also likely to influence the results of supervised machine learning algorithms. It may be possible to partly remove the influence of scanner field strength and the presence of Gadolinium-based contrast in feature extraction for radiomics applications.
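
    The normalization-within-groups step can be sketched directly: z-score each feature inside its acquisition group before clustering, so that scanner- and contrast-specific offsets do not dominate the clusters. The random feature matrix, group labels, and use of k-means below are illustrative stand-ins for the report's pipeline.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def zscore_within_groups(features, groups):
        """Z-score each feature inside its acquisition group so that
        scanner- and contrast-specific offsets do not dominate clustering."""
        out = np.empty_like(features, dtype=float)
        for g in np.unique(groups):
            sel = groups == g
            mu = features[sel].mean(axis=0)
            sd = features[sel].std(axis=0, ddof=1)
            out[sel] = (features[sel] - mu) / sd
        return out

    rng = np.random.default_rng(7)
    features = rng.normal(size=(111, 72))           # 111 patients, 72 features
    groups = rng.choice(["A", "B", "C"], size=111)  # scanner/contrast group
    labels = KMeans(n_clusters=3, n_init=10).fit_predict(
        zscore_within_groups(features, groups))
    ```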

  8. R/S analysis based study on long memory about CODMn in Poyang Lake Inlet and Outlet

    NASA Astrophysics Data System (ADS)

    Wang, Lili

    2018-02-01

    Rescaled range (R/S) analysis is applied to study the long memory behavior of the water CODMn series at the Poyang Lake inlet and outlet in China. The results show that these CODMn series are characterized by long memory, with clear differences between the lake inlet and outlet. Our findings suggest an obvious scale invariance, extending over 13 weeks for the CODMn series at the lake inlet and over 17 weeks at the outlet. Both series displayed a two-segment power-law distribution and similarly strong long memory. We offer a preliminary explanation for the existence of the boundary point t_c in terms of self-organized criticality. This work can help improve the modelling of lake water quality.

  9. The Contradictions of Uneven Development for States and Firms: Capital and State Rescaling in Peripheral Regions

    ERIC Educational Resources Information Center

    Quark, Amy Adams

    2008-01-01

    Recent studies suggest that processes of capital and state rescaling are generating new socio-spatial inequalities within nation-states. I explore rescaling in the understudied context of a peripheral region through the case of a global apparel merchant, Lands' End, and its decision to relocate its call and distribution centers to Dodgeville,…

  10. Optimizing Complexity Measures for fMRI Data: Algorithm, Artifact, and Sensitivity

    PubMed Central

    Rubin, Denis; Fekete, Tomer; Mujica-Parodi, Lilianne R.

    2013-01-01

    Introduction: Complexity in the brain has been well-documented at both neuronal and hemodynamic scales, with increasing evidence supporting its use in sensitively differentiating between mental states and disorders. However, application of complexity measures to fMRI time-series, which are short, sparse, and have low signal/noise, requires careful modality-specific optimization. Methods: Here we use both simulated and real data to address two fundamental issues: choice of algorithm and degree/type of signal processing. Methods were evaluated with regard to resilience to acquisition artifacts common to fMRI as well as detection sensitivity. Detection sensitivity was quantified in terms of grey-white matter contrast and overlap with activation. We additionally investigated the variation of complexity with activation and emotional content, optimal task length, and the degree to which results scaled with scanner using the same paradigm with two 3T magnets made by different manufacturers. Methods for evaluating complexity were: power spectrum, structure function, wavelet decomposition, second derivative, rescaled range, Higuchi’s estimate of fractal dimension, aggregated variance, and detrended fluctuation analysis. To permit direct comparison across methods, all results were normalized to Hurst exponents. Results: Power-spectrum, Higuchi’s fractal dimension, and generalized Hurst exponent based estimates were most successful by all criteria; the poorest-performing measures were wavelet, detrended fluctuation analysis, aggregated variance, and rescaled range. Conclusions: Functional MRI data have artifacts that interact with complexity calculations in nontrivially distinct ways compared to other physiological data (such as EKG, EEG) for which these measures are typically used. Our results clearly demonstrate that decisions regarding choice of algorithm, signal processing, time-series length, and scanner have a significant impact on the reliability and sensitivity of complexity estimates. PMID:23700424
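
    One of the better-performing measures above, Higuchi's fractal dimension, is compact enough to sketch. The kmax choice and white-noise test signal are illustrative; for self-affine signals the dimension D relates to the Hurst exponent roughly as H = 2 - D, which is how such estimates can be normalized to Hurst exponents.

    ```python
    import numpy as np

    def higuchi_fd(x, kmax=16):
        """Higuchi's estimate of the fractal dimension of a time series."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        ks = np.arange(1, kmax + 1)
        lengths = []
        for k in ks:
            lk = []
            for m in range(k):
                idx = np.arange(m, n, k)
                d = np.abs(np.diff(x[idx])).sum()
                norm = (n - 1) / ((len(idx) - 1) * k)  # Higuchi normalization
                lk.append(d * norm / k)
            lengths.append(np.mean(lk))
        slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lengths), 1)
        return slope                           # D; white noise gives D near 2

    fd = higuchi_fd(np.random.randn(4096))
    ```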

  11. Fractal analysis on human dynamics of library loans

    NASA Astrophysics Data System (ADS)

    Fan, Chao; Guo, Jin-Li; Zha, Yi-Long

    2012-12-01

    In this paper, the fractal characteristic of human behaviors is investigated from the perspective of time series constructed from the amount of library loans. The values of the Hurst exponent and the length of the non-periodic cycle calculated through rescaled range analysis indicate that the time series of human behaviors and their sub-series are fractal, with self-similarity and long-range dependence. The time series are then converted into complex networks by the visibility algorithm. The topological properties of the networks, such as the scale-free property and the small-world effect, imply that there is a close relationship among the numbers of repetitious behaviors performed by people during certain periods of time. Our work implies that there is intrinsic regularity in human collective repetitious behaviors. The conclusions may help develop new approaches to investigating the fractal feature and mechanism of human dynamics, and provide some references for the management and forecasting of human collective behaviors.
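
    The natural visibility algorithm used above has a direct, if quadratic-time, sketch: two samples are linked when every intermediate sample lies strictly below the straight line joining them. The Poisson stand-in for daily loan counts is an illustrative assumption.

    ```python
    import numpy as np

    def visibility_edges(y):
        """Natural visibility algorithm: samples i and j are linked when every
        intermediate sample lies strictly below the line joining them."""
        y = np.asarray(y, dtype=float)
        edges = []
        for i in range(len(y) - 1):
            for j in range(i + 1, len(y)):
                k = np.arange(i + 1, j)
                line = y[j] + (y[i] - y[j]) * (j - k) / (j - i)
                if np.all(y[k] < line):
                    edges.append((i, j))
        return edges

    loans = np.random.poisson(50, size=200)        # stand-in daily loan counts
    graph = visibility_edges(loans)
    ```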

  12. Rescaling citations of publications in physics

    NASA Astrophysics Data System (ADS)

    Radicchi, Filippo; Castellano, Claudio

    2011-04-01

    We analyze the citation distributions of all papers published in Physical Review journals between 1985 and 2009. The average number of citations received by papers published in a given year and in a given field is computed. Large variations are found, showing that it is not fair to compare citation numbers across fields and years. However, when a rescaling procedure by the average is used, it is possible to compare impartially articles across years and fields. We make the rescaling factors available for use by the readers. We also show that rescaling citation numbers by the number of publication authors has strong effects and should therefore be taken into account when assessing the bibliometric performance of researchers.
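
    The rescaling itself reduces to dividing each paper's citation count by the average count c0 of papers from the same field and publication year. A minimal sketch with illustrative stand-in records:

    ```python
    import pandas as pd

    # Stand-in records: field, publication year, and raw citation counts.
    papers = pd.DataFrame({
        "field": ["nucl", "nucl", "cond-mat", "cond-mat", "cond-mat"],
        "year":  [1990, 1990, 2005, 2005, 2005],
        "cites": [12, 30, 45, 90, 15],
    })
    # c0: average citations of papers from the same field and year.
    papers["c0"] = papers.groupby(["field", "year"])["cites"].transform("mean")
    papers["cf"] = papers["cites"] / papers["c0"]  # rescaled indicator
    ```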

  13. Rescaling citations of publications in physics.

    PubMed

    Radicchi, Filippo; Castellano, Claudio

    2011-04-01

    We analyze the citation distributions of all papers published in Physical Review journals between 1985 and 2009. The average number of citations received by papers published in a given year and in a given field is computed. Large variations are found, showing that it is not fair to compare citation numbers across fields and years. However, when a rescaling procedure by the average is used, it is possible to compare impartially articles across years and fields. We make the rescaling factors available for use by the readers. We also show that rescaling citation numbers by the number of publication authors has strong effects and should therefore be taken into account when assessing the bibliometric performance of researchers.

  14. Mind the Costs: Rescaling and Multi-Level Environmental Governance in Venice Lagoon

    PubMed Central

    Roggero, Matteo; Fritsch, Oliver

    2010-01-01

    Competences over environmental matters are distributed across agencies at different scales on a national-to-local continuum. This article adopts a transaction costs economics perspective in order to explore the question whether, in the light of a particular problem, the scale at which a certain competence is attributed can be reconsidered. Specifically, it tests whether a presumption of least-cost operation concerning an agency at a given scale can hold. By doing so, it investigates whether the rescaling of certain tasks, aiming at solving a scale-related problem, is likely to produce an increase in costs for day-to-day agency operations as compared to the status quo. The article explores such a perspective for the case of Venice Lagoon. The negative aspects of the present arrangement concerning fishery management and morphological remediation are directly linked to the scale of the agencies involved. The analysis suggests that scales have been chosen correctly, at least from the point of view of the costs incurred to the agencies involved. Consequently, a rescaling of those agencies does not represent a viable option. PMID:20162274

  15. Mind the Costs: Rescaling and Multi-Level Environmental Governance in Venice Lagoon

    NASA Astrophysics Data System (ADS)

    Roggero, Matteo; Fritsch, Oliver

    2010-07-01

    Competences over environmental matters are distributed across agencies at different scales on a national-to-local continuum. This article adopts a transaction costs economics perspective in order to explore the question whether, in the light of a particular problem, the scale at which a certain competence is attributed can be reconsidered. Specifically, it tests whether a presumption of least-cost operation concerning an agency at a given scale can hold. By doing so, it investigates whether the rescaling of certain tasks, aiming at solving a scale-related problem, is likely to produce an increase in costs for day-to-day agency operations as compared to the status quo. The article explores such a perspective for the case of Venice Lagoon. The negative aspects of the present arrangement concerning fishery management and morphological remediation are directly linked to the scale of the agencies involved. The analysis suggests that scales have been chosen correctly, at least from the point of view of the costs incurred to the agencies involved. Consequently, a rescaling of those agencies does not represent a viable option.

  16. Mind the costs: rescaling and multi-level environmental governance in Venice lagoon.

    PubMed

    Roggero, Matteo; Fritsch, Oliver

    2010-07-01

    Competences over environmental matters are distributed across agencies at different scales on a national-to-local continuum. This article adopts a transaction costs economics perspective in order to explore the question whether, in the light of a particular problem, the scale at which a certain competence is attributed can be reconsidered. Specifically, it tests whether a presumption of least-cost operation concerning an agency at a given scale can hold. By doing so, it investigates whether the rescaling of certain tasks, aiming at solving a scale-related problem, is likely to produce an increase in costs for day-to-day agency operations as compared to the status quo. The article explores such a perspective for the case of Venice Lagoon. The negative aspects of the present arrangement concerning fishery management and morphological remediation are directly linked to the scale of the agencies involved. The analysis suggests that scales have been chosen correctly, at least from the point of view of the costs incurred to the agencies involved. Consequently, a rescaling of those agencies does not represent a viable option.

  17. Frequency domain technique for a two-dimensional mapping of optical tissue properties

    NASA Astrophysics Data System (ADS)

    Bocher, Thomas; Beuthan, Juergen; Minet, Olaf; Naber, Rolf-Dieter; Mueller, Gerhard J.

    1995-12-01

    Locally and individually varying optical tissue parameters μa, μs, and g are responsible for non-negligible uncertainties in the interpretation of spectroscopic data in optical biopsy techniques. The intrinsic fluorescence signal, for instance, depends not only on the fluorophore concentration but also on the amount of other background absorbers and on alterations of scattering properties. Therefore, neither a correct relative nor an absolute mapping of the lateral fluorophore concentration can be derived from the intrinsic fluorescence signal alone. Using Monte Carlo simulations it can be shown that, in time-resolved LIFS, the simultaneously measured backscattered signal at the excitation wavelength (UV) can be used to develop a special, linearized rescaling algorithm that takes into account the most dominant of these varying tissue parameters, namely μa,ex. In combination with biochemical calibration measurements we were able to perform fiber-based quantitative NADH-concentration measurements. In this paper a new rescaling method for VIS and IR light in the frequency domain is proposed. It can be applied within the validity range of the diffusion approximation and provides full μa and μs rescaling in a two-dimensional, non-contact mapping mode. The scanning device is planned to be used in combination with a standard operation microscope (ZEISS, Germany).

  18. Asymptotic analysis of the density of states in random matrix models associated with a slowly decaying weight

    NASA Astrophysics Data System (ADS)

    Kuijlaars, A. B. J.

    2001-08-01

    The asymptotic behavior of polynomials that are orthogonal with respect to a slowly decaying weight is very different from the asymptotic behavior of polynomials that are orthogonal with respect to a Freud-type weight. While the latter has been extensively studied, much less is known about the former. Following an earlier investigation into the zero behavior, we study here the asymptotics of the density of states in a unitary ensemble of random matrices with a slowly decaying weight. This measure is also naturally connected with the orthogonal polynomials. It is shown that, after suitable rescaling, the weak limit is the same as the weak limit of the rescaled zeros.

  19. Applying the multivariate time-rescaling theorem to neural population models

    PubMed Central

    Gerhard, Felipe; Haslinger, Robert; Pipa, Gordon

    2011-01-01

    Statistical models of neural activity are integral to modern neuroscience. Recently, interest has grown in modeling the spiking activity of populations of simultaneously recorded neurons to study the effects of correlations and functional connectivity on neural information processing. However, any statistical model must be validated by an appropriate goodness-of-fit test. Kolmogorov-Smirnov tests based upon the time-rescaling theorem have proven to be useful for evaluating point-process-based statistical models of single-neuron spike trains. Here we discuss the extension of the time-rescaling theorem to the multivariate (neural population) case. We show that even in the presence of strong correlations between spike trains, models that neglect couplings between neurons can be erroneously passed by the univariate time-rescaling test. We present the multivariate version of the time-rescaling theorem, and provide a practical step-by-step procedure for applying it to test the sufficiency of neural population models. Using several simple analytically tractable models and also more complex simulated and real data sets, we demonstrate that important features of the population activity can only be detected using the multivariate extension of the test. PMID:21395436

  20. Long-range correlations and charge transport properties of DNA sequences

    NASA Astrophysics Data System (ADS)

    Liu, Xiao-liang; Ren, Yi; Xie, Qiong-tao; Deng, Chao-sheng; Xu, Hui

    2010-04-01

    By using Hurst's rescaled range analysis and the transfer-matrix approach, the rescaled range functions and Hurst exponents of human chromosome 22 and enterobacteria phage lambda DNA sequences are investigated, and the transmission coefficients, Landauer resistances and Lyapunov coefficients of finite segments based on the above genomic DNA sequences are calculated. In a comparison with quasiperiodic and random artificial DNA sequences, we find that λ-DNA exhibits anticorrelated behavior characterized by a Hurst exponent H < 0.5.
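
    For reference, here is a minimal, self-contained sketch of the classical rescaled range (Hurst) estimator that recurs throughout these records, written in Python with numpy; the dyadic window scheme and helper names are illustrative choices, not those of any particular paper.

        import numpy as np

        def rescaled_range(series):
            """R/S statistic of one window: range of the cumulative
            mean-adjusted sum divided by the standard deviation."""
            x = np.asarray(series, dtype=float)
            z = np.cumsum(x - x.mean())
            return (z.max() - z.min()) / x.std(ddof=1)

        def hurst_rs(series, min_window=16):
            """Estimate the Hurst exponent H as the slope of
            log(R/S) against log(window length)."""
            x = np.asarray(series, dtype=float)
            n = len(x)
            sizes, rs_vals = [], []
            w = min_window
            while w <= n // 2:
                chunks = [x[i:i + w] for i in range(0, n - w + 1, w)]
                rs_vals.append(np.mean([rescaled_range(c) for c in chunks]))
                sizes.append(w)
                w *= 2
            slope, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
            return slope

        rng = np.random.default_rng(1)
        # White noise has H ~ 0.5 (with a small finite-size upward bias)
        print(hurst_rs(rng.standard_normal(4096)))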

  1. Acceleration and sensitivity analysis of lattice kinetic Monte Carlo simulations using parallel processing and rate constant rescaling

    NASA Astrophysics Data System (ADS)

    Núñez, M.; Robie, T.; Vlachos, D. G.

    2017-10-01

    Kinetic Monte Carlo (KMC) simulation provides insights into catalytic reactions unobtainable with either experiments or mean-field microkinetic models. Sensitivity analysis of KMC models assesses the robustness of the predictions to parametric perturbations and identifies rate determining steps in a chemical reaction network. Stiffness in the chemical reaction network, a ubiquitous feature, demands lengthy run times for KMC models and renders efficient sensitivity analysis based on the likelihood ratio method unusable. We address the challenge of efficiently conducting KMC simulations and performing accurate sensitivity analysis in systems with unknown time scales by employing two acceleration techniques: rate constant rescaling and parallel processing. We develop statistical criteria that ensure sufficient sampling of non-equilibrium steady state conditions. Our approach provides the twofold benefit of accelerating the simulation itself and enabling likelihood ratio sensitivity analysis, which provides further speedup relative to finite difference sensitivity analysis. As a result, the likelihood ratio method can be applied to real chemistry. We apply our methodology to the water-gas shift reaction on Pt(111).
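
    The rate-constant rescaling idea can be illustrated with a toy Gillespie-type simulation: a fast quasi-equilibrated pair A <=> B is scaled down by a factor delta, relieving stiffness while leaving the slow B -> C kinetics essentially unchanged. This is only a schematic sketch under assumed rate constants, not the authors' implementation (which adds statistical sampling criteria and parallel processing):

        import numpy as np

        rng = np.random.default_rng(2)

        def kmc_rescaled(k_fast_fwd, k_fast_rev, k_slow, delta, n_steps=20000):
            """Gillespie simulation of A <=> B (fast) and B -> C (slow).
            Rescaling both fast rate constants equally by 'delta' preserves
            the A/B quasi-equilibrium while reducing stiffness."""
            state = {"A": 200, "B": 0, "C": 0}
            t = 0.0
            for _ in range(n_steps):
                rates = np.array([
                    delta * k_fast_fwd * state["A"],   # A -> B (rescaled)
                    delta * k_fast_rev * state["B"],   # B -> A (rescaled)
                    k_slow * state["B"],               # B -> C (slow step)
                ])
                total = rates.sum()
                if total == 0.0:
                    break
                t += rng.exponential(1.0 / total)
                r = rng.choice(3, p=rates / total)
                if r == 0:
                    state["A"] -= 1; state["B"] += 1
                elif r == 1:
                    state["B"] -= 1; state["A"] += 1
                else:
                    state["B"] -= 1; state["C"] += 1
            return t, state

        # Illustrative rate constants: the fast pair is scaled down 1000-fold
        print(kmc_rescaled(1e6, 1e6, 1.0, delta=1e-3))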

  2. Shrinkage Degree in L2-Rescale Boosting for Regression.

    PubMed

    Xu, Lin; Lin, Shaobo; Wang, Yao; Xu, Zongben

    2017-08-01

    L2-rescale boosting (L2-RBoosting) is a variant of L2-Boosting that can essentially improve the generalization performance of L2-Boosting. The key feature of L2-RBoosting lies in introducing a shrinkage degree to rescale the ensemble estimate in each iteration. Thus, the shrinkage degree determines the performance of L2-RBoosting. The aim of this paper is to develop a concrete analysis concerning how to determine the shrinkage degree in L2-RBoosting. We propose two feasible ways to select the shrinkage degree: the first is to parameterize it, and the second is to develop a data-driven approach. After rigorously analyzing the importance of the shrinkage degree in L2-RBoosting, we compare the pros and cons of the proposed methods. We find that although these approaches reach the same learning rates, the structure of the final estimator of the parameterized approach is better, which sometimes yields a better generalization capability when the number of samples is finite. We therefore recommend parameterizing the shrinkage degree of L2-RBoosting. We also present an adaptive parameter-selection strategy for the shrinkage degree and verify its feasibility through both theoretical analysis and numerical verification. The obtained results enhance the understanding of L2-RBoosting and give guidance on how to use it for regression tasks.

  3. Long memory of abnormal investor attention and the cross-correlations between abnormal investor attention and trading volume, volatility respectively

    NASA Astrophysics Data System (ADS)

    Fan, Xiaoqian; Yuan, Ying; Zhuang, Xintian; Jin, Xiu

    2017-03-01

    Taking the Baidu Index as a proxy for abnormal investor attention (AIA), the long-memory property in the AIA of Shanghai Stock Exchange (SSE) 50 Index component stocks was empirically investigated using the detrended fluctuation analysis (DFA) method. The results show that abnormal investor attention is power-law correlated, with Hurst exponents between 0.64 and 0.98. Furthermore, the cross-correlations between abnormal investor attention and trading volume and volatility, respectively, are studied using detrended cross-correlation analysis (DCCA) and the DCCA cross-correlation coefficient (ρDCCA). The results suggest that there are positive correlations between AIA and trading volume and volatility, respectively, and that the correlations for trading volume are in general higher than those for volatility. By carrying out rescaled range (R/S) analysis and rolling-window analysis, we find that the results mentioned above are robust and significant.
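
    A compact sketch of the DFA method named in this record (Python with numpy; the window choices and helper names are illustrative assumptions):

        import numpy as np

        def dfa(series, scales=None, order=1):
            """Detrended fluctuation analysis: the slope of log F(s)
            against log s estimates the Hurst-like scaling exponent."""
            x = np.asarray(series, dtype=float)
            y = np.cumsum(x - x.mean())          # integrated profile
            n = len(y)
            if scales is None:
                scales = np.unique(
                    np.logspace(2, np.log10(n // 4), 12).astype(int))
            fluct = []
            for s in scales:
                n_seg = n // s
                segs = y[:n_seg * s].reshape(n_seg, s)
                t = np.arange(s)
                f2 = []
                for seg in segs:
                    coef = np.polyfit(t, seg, order)   # local polynomial trend
                    f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
                fluct.append(np.sqrt(np.mean(f2)))
            slope, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
            return slope

        rng = np.random.default_rng(3)
        print(dfa(rng.standard_normal(8192)))  # ~0.5 for uncorrelated noise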

  4. Experimental evidence of the self-similarity and long-range correlations of the edge fluctuations in HT-6M tokamak

    NASA Astrophysics Data System (ADS)

    Wang, Wen-hao; Yu, Chang-xuan; Wen, Yi-zhi; Xu, Yu-hong; Ling, Bi-li; Gong, Xian-zu; Liu, Bao-hua; Wan, Bao-nian

    2001-02-01

    For a better understanding of long-timescale transport dynamics, rescaled range analysis techniques, the autocorrelation function (ACF) and the probability distribution function (PDF) are used to investigate long-range dependences in edge plasma fluctuations in the HT-6M tokamak. The results reveal the self-similar character of the electrostatic fluctuations, with self-similarity parameters (Hurst exponents) ranging from 0.64 to 0.79, taking into consideration the Er×B sheared-rotation effect. Fluctuation ACFs of both the ion saturation current and the floating potential, as well as the PDF of the turbulence-induced particle flux, have two distinct timescales: one corresponds to the decorrelation timescale of local fluctuations (µs) and the other lasts up to the order of the confinement time (ms). All these experimental results suggest that some of the mechanisms of the underlying turbulence are consistent with plasma transport as characterized by self-organized criticality (SOC).

  5. On the Extraction of Components and the Applicability of the Factor Model.

    ERIC Educational Resources Information Center

    Dziuban, Charles D.; Harris, Chester W.

    A reanalysis of Shaycroft's matrix of intercorrelations of 10 test variables plus 4 random variables is discussed. Three different procedures were used in the reanalysis: (1) Image Component Analysis, (2) Uniqueness Rescaling Factor Analysis, and (3) Alpha Factor Analysis. The results of these analyses are presented in tables. It is concluded from…

  6. The causal perturbation expansion revisited: Rescaling the interacting Dirac sea

    NASA Astrophysics Data System (ADS)

    Finster, Felix; Grotz, Andreas

    2010-07-01

    The causal perturbation expansion defines the Dirac sea in the presence of a time-dependent external field. It yields an operator whose image generalizes the vacuum solutions of negative energy and thus gives a canonical splitting of the solution space into two subspaces. After giving a self-contained introduction to the ideas and techniques, we show that this operator is, in general, not idempotent. We modify the standard construction by a rescaling procedure giving a projector on the generalized negative-energy subspace. The resulting rescaled causal perturbation expansion uniquely defines the fermionic projector in terms of a series of distributional solutions of the Dirac equation. The technical core of the paper is to work out the combinatorics of the expansion in detail. It is also shown that the fermionic projector with interaction can be obtained from the free projector by a unitary transformation. We finally analyze the consequences of the rescaling procedure on the light-cone expansion.

  7. Scaling analysis and model estimation of solar corona index

    NASA Astrophysics Data System (ADS)

    Ray, Samujjwal; Ray, Rajdeep; Khondekar, Mofazzal Hossain; Ghosh, Koushik

    2018-04-01

    A monthly average solar green coronal index time series for the period from January 1939 to December 2008, collected from NOAA (the National Oceanic and Atmospheric Administration), is analysed in this paper from the perspective of scaling analysis and modelling. Smoothing and de-noising were performed using a suitable mother wavelet as a prerequisite. The Finite Variance Scaling Method (FVSM), the Higuchi method, rescaled range (R/S) analysis and a generalized method were applied to calculate the scaling exponents and fractal dimensions of the time series. The autocorrelation function (ACF) is used to identify an autoregressive (AR) process, and the partial autocorrelation function (PACF) is used to determine the order of the AR model. Finally, a best-fit model is proposed using the Yule-Walker method, supported by goodness-of-fit results and the wavelet spectrum. The results reveal an anti-persistent, short-range-dependent (SRD), self-similar property with signatures of non-causality, non-stationarity and nonlinearity in the data series. The model shows the best fit to the data under observation.
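
    The Yule-Walker step can be sketched in a few lines: estimate the autocovariance sequence, then solve the resulting Toeplitz system for the AR coefficients. This is a generic illustration in Python with scipy, not the paper's code; the simulated AR(2) check is an assumption for demonstration.

        import numpy as np
        from scipy.linalg import solve_toeplitz

        def yule_walker(x, order):
            """Yule-Walker AR(p) coefficient estimates from the
            sample autocovariance sequence."""
            x = np.asarray(x, dtype=float) - np.mean(x)
            n = len(x)
            acov = np.array([np.dot(x[:n - k], x[k:]) / n
                             for k in range(order + 1)])
            # Solve the Toeplitz system R a = r for the AR coefficients
            a = solve_toeplitz(acov[:-1], acov[1:])
            sigma2 = acov[0] - np.dot(a, acov[1:])   # innovation variance
            return a, sigma2

        # Toy check on a simulated AR(2) process
        rng = np.random.default_rng(4)
        true = np.array([0.6, -0.3])
        x = np.zeros(5000)
        for t in range(2, len(x)):
            x[t] = true @ x[t - 2:t][::-1] + rng.standard_normal()
        print(yule_walker(x, 2))  # coefficients should be close to (0.6, -0.3)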

  8. Enhanced identification of synergistic and antagonistic emergent interactions among three or more drugs

    PubMed Central

    White, Cynthia; Mao, Zhiyuan; Savage, Van M.

    2016-01-01

    Interactions among drugs play a critical role in the killing efficacy of multi-drug treatments. Recent advances in theory and experiment for three-drug interactions enable the search for emergent interactions—ones not predictable from pairwise interactions. Previous work has shown it is easier to detect synergies and antagonisms among pairwise interactions when a rescaling method is applied to the interaction metric. However, no study has carefully examined whether new types of normalization might be needed for emergence. Here, we propose several rescaling methods for enhancing the classification of the higher order drug interactions based on our conceptual framework. To choose the rescaling that best separates synergism, antagonism and additivity, we conducted bacterial growth experiments in the presence of single, pairwise and triple-drug combinations among 14 antibiotics. We found one of our rescaling methods is far better at distinguishing synergistic and antagonistic emergent interactions than any of the other methods. Using our new method, we find around 50% of emergent interactions are additive, much less than previous reports of greater than 90% additivity. We conclude that higher order emergent interactions are much more common than previously believed, and we argue these findings for drugs suggest that appropriate rescaling is crucial to infer higher order interactions. PMID:27278366

  9. Statistical analysis of Geopotential Height (GH) timeseries based on Tsallis non-extensive statistical mechanics

    NASA Astrophysics Data System (ADS)

    Karakatsanis, L. P.; Iliopoulos, A. C.; Pavlos, E. G.; Pavlos, G. P.

    2018-02-01

    In this paper, we perform statistical analysis of time series deriving from Earth's climate. The time series concern Geopotential Height (GH) and correspond to temporal and spatial components of the global distribution of monthly average values during the period 1948-2012. The analysis is based on Tsallis non-extensive statistical mechanics, and in particular on the estimation of Tsallis' q-triplet, namely {q_stat, q_sens, q_rel}, the reconstructed phase space, and the estimation of the correlation dimension and the Hurst exponent from rescaled range analysis (R/S). The deviation of the Tsallis q-triplet from unity indicates a non-Gaussian (Tsallis q-Gaussian) non-extensive character with heavy-tailed probability density functions (PDFs), multifractal behavior and long-range dependences for all time series considered. Noticeable differences in the estimated q-triplet are also found between time series from distinct spatial or temporal regions. Moreover, the reconstructed phase space reveals a lower-dimensional fractal set in the GH dynamical phase space (strong self-organization), and the estimated Hurst exponent indicates multifractality, non-Gaussianity and persistence. The analysis provides significant information for identifying and characterizing the dynamical characteristics of Earth's climate.

  10. Trends in currency’s return

    NASA Astrophysics Data System (ADS)

    Tan, A.; Shahrill, M.; Daud, S.; Leung, E.

    2018-03-01

    The purpose of this paper is to show that short-range dependence prevails in the Singapore-Malaysia exchange rate. Although some evidence of long-range dependence has been reported [1,2], it remains unclear whether the Singapore-Malaysia exchange rate indeed exhibits long-range dependence. In this paper, we focus on the currency rate for a fifteen-year period from September 2002 to September 2017. We estimate the Hurst parameter using the well-known rescaled range (R/S) statistic. From our analysis, we show that the Hurst parameter is approximately 0.5, which indicates short-range dependence. This short-memory property is further validated by a z-test of the null hypothesis that the Hurst parameter equals 0.5 at the 1% significance level; the null hypothesis cannot be rejected. The existence of short memory implies that the behaviour of the exchange rate is unpredictable, supporting the efficient market hypothesis, which states that price movements are random because today's prices already reflect all available information.
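
    The hypothesis test described here amounts to a simple z-test of H = 0.5. A minimal sketch follows, assuming an estimate of the Hurst parameter and of its standard error (e.g., obtained from Monte Carlo simulation) is available; the numbers are illustrative.

        from scipy import stats

        def hurst_z_test(h_hat, se_h, h0=0.5):
            """Two-sided z-test of H = h0 given an estimated Hurst
            exponent and its standard error."""
            z = (h_hat - h0) / se_h
            p = 2 * stats.norm.sf(abs(z))
            return z, p

        # Example: H-hat = 0.52 with SE 0.02 -> cannot reject H = 0.5 at 1%
        print(hurst_z_test(0.52, 0.02))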

  11. Rescaling Temperature and Entropy

    ERIC Educational Resources Information Center

    Olmsted, John, III

    2010-01-01

    Temperature and entropy traditionally are expressed in units of kelvin and joule/kelvin. These units obscure some important aspects of the natures of these thermodynamic quantities. Defining a rescaled temperature using the Boltzmann constant, T' = k_B T, expresses temperature in energy units, thereby emphasizing the close relationship…

  12. Comparative analysis of seismic persistence of Hindu Kush nests (Afghanistan) and Los Santos (Colombia) using fractal dimension

    NASA Astrophysics Data System (ADS)

    Prada, D. A.; Sanabria, M. P.; Torres, A. F.; Álvarez, M. A.; Gómez, J.

    2018-04-01

    The study of persistence in the time series of seismic events in two of the most important seismic nests, Hindu Kush in Afghanistan and Los Santos, Santander in Colombia, generates great interest due to their high levels of telluric activity. The data were taken from the global seismological network. The Jarque-Bera test was used to check for a Gaussian distribution; because the distributions of the series were asymmetric and not mesokurtic, the Hurst coefficient was calculated using the rescaled range method. From it, the fractal dimension associated with these time series was obtained, making it possible to determine the persistence, antipersistence and volatility of these phenomena.
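
    The Jarque-Bera step is directly available in scipy; a minimal illustration on a synthetic, clearly non-Gaussian series (the exponential waiting times are an assumed stand-in for real catalogue data):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        # Interevent times of a synthetic catalogue: heavy-tailed, non-Gaussian
        waiting = rng.exponential(scale=3.0, size=2000)

        jb_stat, p_value = stats.jarque_bera(waiting)
        print(f"JB = {jb_stat:.1f}, p = {p_value:.3g}")  # tiny p: reject normality
        print("skew:", stats.skew(waiting), "kurtosis:", stats.kurtosis(waiting))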

  13. Fluorescence Imaging Study of Transition in Underexpanded Free Jets

    NASA Technical Reports Server (NTRS)

    Wilkes, Jennifer A.; Danehy, Paul M.; Nowak, Robert J.

    2005-01-01

    Planar laser-induced fluorescence (PLIF) is demonstrated to be a valuable tool for studying the onset of transition to turbulence. For this study, we have used PLIF of nitric oxide (NO) to image underexpanded axisymmetric free jets issuing into a low-pressure chamber through a smooth converging nozzle with a sonic orifice. Flows were studied over a range of Reynolds numbers and nozzle-exit-to-ambient pressure ratios with the aim of empirically determining criteria governing the onset of turbulence. We have developed an image processing technique, involving calculation of the standard deviation of the intensity in PLIF images, in order to aid in the identification of turbulence. We have used the resulting images to identify laminar, transitional and turbulent flow regimes. Jet scaling parameters were used to define a rescaled Reynolds number that incorporates the influence of a varying pressure ratio. An empirical correlation was found between transition length and this rescaled Reynolds number for highly underexpanded jets.

  14. Maintaining Equivalent Cut Scores for Small Sample Test Forms

    ERIC Educational Resources Information Center

    Dwyer, Andrew C.

    2016-01-01

    This study examines the effectiveness of three approaches for maintaining equivalent performance standards across test forms with small samples: (1) common-item equating, (2) resetting the standard, and (3) rescaling the standard. Rescaling the standard (i.e., applying common-item equating methodology to standard setting ratings to account for…

  15. Universal Recurrence Time Statistics of Characteristic Earthquakes

    NASA Astrophysics Data System (ADS)

    Goltz, C.; Turcotte, D. L.; Abaimov, S.; Nadeau, R. M.

    2006-12-01

    Characteristic earthquakes are defined to occur quasi-periodically on major faults. Do the recurrence time statistics of such earthquakes follow a particular statistical distribution? If so, which one? The answer is fundamental and has important implications for hazard assessment. The problem cannot be solved by comparing the goodness of statistical fits, as the available sequences are too short. The Parkfield sequence of M ≈ 6 earthquakes, one of the most extensive reliable data sets available, has grown to merely seven events with the last earthquake in 2004, for example. Recently, however, advances in seismological monitoring and improved processing methods have unveiled so-called micro-repeaters, micro-earthquakes which recur in exactly the same location on a fault. It seems plausible to regard these earthquakes as a miniature version of the classic characteristic earthquakes. Micro-repeaters are much more frequent than major earthquakes, leading to longer sequences for analysis. Due to their recent discovery, however, available sequences contain fewer than 20 events at present. In this paper we present results of the analysis of recurrence times for several micro-repeater sequences from Parkfield and adjacent regions. To improve the statistical significance of our findings, we combine several sequences into one by rescaling the individual sets by their respective mean recurrence intervals and Weibull exponents. This novel approach of rescaled combination yields the most extensive data set possible. We find that the resulting statistics can be fitted well by an exponential distribution, confirming the universal applicability of the Weibull distribution to characteristic earthquakes. A similar result is obtained from rescaled combination with regard to the lognormal distribution, however.
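
    A simplified sketch of the rescaled-combination idea (Python with scipy): each sequence is rescaled to unit mean before pooling, and a Weibull distribution is fitted to the pooled intervals. The paper additionally rescales by the Weibull exponents, which is omitted here; the synthetic sequences are assumptions for illustration.

        import numpy as np
        from scipy import stats

        def rescaled_combine(sequences):
            """Combine several recurrence-interval sequences by rescaling
            each to unit mean, then fit a Weibull distribution."""
            pooled = np.concatenate([np.asarray(s) / np.mean(s)
                                     for s in sequences])
            # Fix the location at 0; fit shape and scale
            shape, loc, scale = stats.weibull_min.fit(pooled, floc=0.0)
            return pooled, shape, scale

        rng = np.random.default_rng(6)
        # Three short synthetic micro-repeater sequences, different mean intervals
        seqs = [rng.weibull(1.8, 15) * m for m in (2.0, 5.0, 11.0)]
        pooled, shape, scale = rescaled_combine(seqs)
        print(f"Weibull shape = {shape:.2f}, scale = {scale:.2f}")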

  16. Forward and backward tone mapping of high dynamic range images based on subband architecture

    NASA Astrophysics Data System (ADS)

    Bouzidi, Ines; Ouled Zaid, Azza

    2015-01-01

    This paper presents a novel High Dynamic Range (HDR) tone mapping (TM) system based on a sub-band architecture. Standard wavelet filters of the Daubechies, Symlets, Coiflets and Biorthogonal families were used to assess the proposed system's performance in terms of Low Dynamic Range (LDR) image quality and reconstructed HDR image fidelity. During the TM stage, the HDR image is first decomposed into sub-bands using a symmetrical analysis-synthesis filter bank, and the transform coefficients are then rescaled using a predefined gain map. The inverse Tone Mapping (iTM) stage is straightforward: the LDR image passes through the same sub-band architecture, but instead of reducing the dynamic range, the LDR content is boosted to an HDR representation. Moreover, our TM scheme includes an optimization module that selects the gain map components minimizing the reconstruction error, consequently yielding high-fidelity HDR content. Comparisons with recent state-of-the-art methods have shown that our method provides better results in terms of visual quality and HDR reconstruction fidelity under both objective and subjective evaluations.

  17. Long-range dependence in earthquake-moment release and implications for earthquake occurrence probability.

    PubMed

    Barani, Simone; Mascandola, Claudia; Riccomagno, Eva; Spallarossa, Daniele; Albarello, Dario; Ferretti, Gabriele; Scafidi, Davide; Augliera, Paolo; Massa, Marco

    2018-03-28

    Since the beginning of the 1980s, when Mandelbrot observed that earthquakes occur on 'fractal' self-similar sets, many studies have investigated the dynamical mechanisms that lead to self-similarities in the earthquake process. Interpreting seismicity as a self-similar process is undoubtedly convenient to bypass the physical complexities related to the actual process. Self-similar processes are indeed invariant under suitable scaling of space and time. In this study, we show that long-range dependence is an inherent feature of the seismic process, and is universal. Examination of series of cumulative seismic moment both in Italy and worldwide through Hurst's rescaled range analysis shows that seismicity is a memory process with a Hurst exponent H ≈ 0.87. We observe that H is substantially space- and time-invariant, except in cases of catalog incompleteness. This has implications for earthquake forecasting. Hence, we have developed a probability model for earthquake occurrence that allows for long-range dependence in the seismic process. Unlike the Poisson model, dependent events are allowed. This model can be easily transferred to other disciplines that deal with self-similar processes.

  18. Stress Corrosion Cracking Study of Aluminum Alloys Using Electrochemical Noise Analysis

    NASA Astrophysics Data System (ADS)

    Rathod, R. C.; Sapate, S. G.; Raman, R.; Rathod, W. S.

    2013-12-01

    Stress corrosion cracking studies of aluminum alloys AA2219, AA8090, and AA5456 in heat-treated and non-heat-treated conditions were carried out using the electrochemical noise technique at various applied stresses. Electrochemical noise time series data (corrosion potential vs. time) were obtained for the stressed tensile specimens in 3.5% NaCl aqueous solution at room temperature (27 °C). The values of the drop in corrosion potential, total corrosion potential, mean corrosion potential, and hydrogen overpotential were evaluated from the corrosion potential versus time data. The electrochemical noise time series data were further analyzed with the rescaled range (R/S) analysis proposed by Hurst to obtain the Hurst exponent. According to the results, higher values of the Hurst exponent with increased applied stress indicate greater susceptibility to stress corrosion cracking, as confirmed for alloys AA2219 and AA8090.

  19. A comment on measuring the Hurst exponent of financial time series

    NASA Astrophysics Data System (ADS)

    Couillard, Michel; Davison, Matt

    2005-03-01

    A fundamental hypothesis of quantitative finance is that stock price variations are independent and can be modeled using Brownian motion. In recent years, it was proposed to use rescaled range analysis and its characteristic value, the Hurst exponent, to test for independence in financial time series. Theoretically, independent time series should be characterized by a Hurst exponent of 1/2. However, finite Brownian motion data sets will always give a value of the Hurst exponent larger than 1/2 and without an appropriate statistical test such a value can mistakenly be interpreted as evidence of long term memory. We obtain a more precise statistical significance test for the Hurst exponent and apply it to real financial data sets. Our empirical analysis shows no long-term memory in some financial returns, suggesting that Brownian motion cannot be rejected as a model for price dynamics.
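
    The finite-sample effect warned about in this record can be reproduced with a quick Monte Carlo experiment: estimate H by R/S on many independent Gaussian series (the increments of Brownian motion) and inspect the null distribution, whose mean sits above 0.5. A self-contained sketch (Python with numpy; the window scheme and sample sizes are illustrative):

        import numpy as np

        def rs_hurst(x, min_window=16):
            """Plain R/S Hurst estimate: log-log slope over dyadic windows."""
            n = len(x)
            sizes, rs_vals = [], []
            w = min_window
            while w <= n // 2:
                chunks = x[: (n // w) * w].reshape(-1, w)
                z = np.cumsum(chunks - chunks.mean(axis=1, keepdims=True), axis=1)
                rs = ((z.max(axis=1) - z.min(axis=1)) /
                      chunks.std(axis=1, ddof=1)).mean()
                sizes.append(w); rs_vals.append(rs)
                w *= 2
            return np.polyfit(np.log(sizes), np.log(rs_vals), 1)[0]

        rng = np.random.default_rng(7)
        n, trials = 1024, 500
        null_h = np.array([rs_hurst(rng.standard_normal(n))
                           for _ in range(trials)])
        # The finite-sample mean exceeds 0.5, as the comment warns
        print(f"mean H under the null: {null_h.mean():.3f} +/- {null_h.std():.3f}")
        print("95% null band:", np.percentile(null_h, [2.5, 97.5]))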

  20. A space-time multifractal analysis on radar rainfall sequences from central Poland

    NASA Astrophysics Data System (ADS)

    Licznar, Paweł; Deidda, Roberto

    2014-05-01

    Rainfall downscaling belongs to most important tasks of modern hydrology. Especially from the perspective of urban hydrology there is real need for development of practical tools for possible rainfall scenarios generation. Rainfall scenarios of fine temporal scale reaching single minutes are indispensable as inputs for hydrological models. Assumption of probabilistic philosophy of drainage systems design and functioning leads to widespread application of hydrodynamic models in engineering practice. However models like these covering large areas could not be supplied with only uncorrelated point-rainfall time series. They should be rather supplied with space time rainfall scenarios displaying statistical properties of local natural rainfall fields. Implementation of a Space-Time Rainfall (STRAIN) model for hydrometeorological applications in Polish conditions, such as rainfall downscaling from the large scales of meteorological models to the scale of interest for rainfall-runoff processes is the long-distance aim of our research. As an introduction part of our study we verify the veracity of the following STRAIN model assumptions: rainfall fields are isotropic and statistically homogeneous in space; self-similarity holds (so that, after having rescaled the time by the advection velocity, rainfall is a fully homogeneous and isotropic process in the space-time domain); statistical properties of rainfall are characterized by an "a priori" known multifractal behavior. We conduct a space-time multifractal analysis on radar rainfall sequences selected from the Polish national radar system POLRAD. Radar rainfall sequences covering the area of 256 km x 256 km of original 2 km x 2 km spatial resolution and 15 minutes temporal resolution are used as study material. Attention is mainly focused on most severe summer convective rainfalls. It is shown that space-time rainfall can be considered with a good approximation to be a self-similar multifractal process. Multifractal analysis is carried out assuming Taylor's hypothesis to hold and the advection velocity needed to rescale the time dimension is assumed to be equal about 16 km/h. This assumption is verified by the analysis of autocorrelation functions along the x and y directions of "rainfall cubes" and along the time axis rescaled with assumed advection velocity. In general for analyzed rainfall sequences scaling is observed for spatial scales ranging from 4 to 256 km and for timescales from 15 min to 16 hours. However in most cases scaling break is identified for spatial scales between 4 and 8, corresponding to spatial dimensions of 16 km to 32 km. It is assumed that the scaling break occurrence at these particular scales in central Poland conditions could be at least partly explained by the rainfall mesoscale gap (on the edge of meso-gamma, storm-scale and meso-beta scale).

  1. Model accuracy impact through rescaled observations in hydrological data assimilation studies

    USDA-ARS?s Scientific Manuscript database

    Signal and noise time-series variability of soil moisture datasets (e.g. satellite-, model-, station-based) vary greatly. Optimality of the analysis obtained after observations are assimilated into the model depends on the degree that the differences between the signal variances of model and observa...

  2. Impact of model relative accuracy in framework of rescaling observations in hydrological data assimilation studies

    USDA-ARS?s Scientific Manuscript database

    Soil moisture datasets (e.g. satellite-, model-, station-based) vary greatly with respect to their signal, noise, and/or combined time-series variability. Minimizing differences in signal variances is particularly important in data assimilation techniques to optimize the accuracy of the analysis obt...

  3. Comparison of proposed alternative methods for rescaling dialysis dose: resting energy expenditure, high metabolic rate organ mass, liver size, and body surface area.

    PubMed

    Daugirdas, John T; Levin, Nathan W; Kotanko, Peter; Depner, Thomas A; Kuhlmann, Martin K; Chertow, Glenn M; Rocco, Michael V

    2008-01-01

    A number of denominators for scaling the dose of dialysis have been proposed as alternatives to the urea distribution volume (V). These include resting energy expenditure (REE), mass of high metabolic rate organs (HMRO), visceral mass, and body surface area. Metabolic rate is an unlikely denominator as it varies enormously among humans with different levels of activity and correlates poorly with the glomerular filtration rate. Similarly, scaling based on HMRO may not be optimal, as many organs with high metabolic rates such as spleen, brain, and heart are unlikely to generate unusually large amounts of uremic toxins. Visceral mass, in particular the liver and gut, has potential merit as a denominator for scaling; liver size is related to protein intake and the liver, along with the gut, is known to be responsible for the generation of suspected uremic toxins. Surface area is time-honored as a scaling method for glomerular filtration rate and scales similarly to liver size. How currently recommended dialysis doses might be affected by these alternative rescaling methods was modeled by applying anthropometric equations to a large group of dialysis patients who participated in the HEMO study. The data suggested that rescaling to REE would not be much different from scaling to V. Scaling to HMRO mass would mandate substantially higher dialysis doses for smaller patients of either gender. Rescaling to liver mass would require substantially more dialysis for women compared with men at all levels of body size. Rescaling to body surface area would require more dialysis for smaller patients of either gender and also more dialysis for women of any size. Of these proposed alternative rescaling measures, body surface area may be the best, because it reflects gender-based scaling of liver size and thereby the rate of generation of uremic toxins.
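
    As a concrete illustration of surface-area rescaling, the sketch below uses the standard Du Bois body surface area formula, BSA = 0.007184 · W^0.425 · H^0.725 (W in kg, H in cm), and compares a dose scaled to V with the same dose scaled to BSA for two hypothetical patients. All numbers are illustrative assumptions, not values from the HEMO study.

        def bsa_dubois(weight_kg, height_cm):
            """Du Bois body surface area in m^2."""
            return 0.007184 * weight_kg ** 0.425 * height_cm ** 0.725

        def rescaled_dose(kt, weight_kg, height_cm, v_liters):
            """Compare dose scaled to urea volume V (Kt/V) with dose
            scaled to body surface area (Kt/BSA)."""
            bsa = bsa_dubois(weight_kg, height_cm)
            return kt / v_liters, kt / bsa

        # A smaller and a larger hypothetical patient with the same Kt
        for w, h, v in [(55.0, 160.0, 28.0), (95.0, 185.0, 45.0)]:
            ktv, ktbsa = rescaled_dose(60.0, w, h, v)
            print(f"{w:5.1f} kg: Kt/V = {ktv:.2f}, Kt/BSA = {ktbsa:.1f} L/m^2")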

  4. Probe-Independent EEG Assessment of Mental Workload in Pilots

    DTIC Science & Technology

    2015-05-18

    [Garbled extract from the report's feature-ranking tables; the recoverable feature names include: Teager Energy Operator – Frequency-Modulated Component – z-score; Hurst Exponent – Discrete Second-Order Derivative; Line Length – Time Series (and z-score); Hurst Exponent – Wavelet-Based Adaptation; Hurst Exponent – Rescaled Range.]

  5. Remapping dark matter halo catalogues between cosmological simulations

    NASA Astrophysics Data System (ADS)

    Mead, A. J.; Peacock, J. A.

    2014-05-01

    We present and test a method for modifying the catalogue of dark matter haloes produced from a given cosmological simulation, so that it resembles the result of a simulation with an entirely different set of parameters. This extends the method of Angulo & White, which rescales the full particle distribution from a simulation. Working directly with the halo catalogue offers an advantage in speed, and also allows modifications of the internal structure of the haloes to account for non-linear differences between cosmologies. Our method can be used directly on a halo catalogue in a self-contained manner without any additional information about the overall density field; although the large-scale displacement field is required by the method, this can be inferred from the halo catalogue alone. We show proof of concept of our method by rescaling a matter-only simulation with no baryon acoustic oscillation (BAO) features to a more standard Λ cold dark matter model containing a cosmological constant and a BAO signal. In conjunction with the halo occupation approach, this method provides a basis for the rapid generation of mock galaxy samples spanning a wide range of cosmological parameters.

  6. Rescaled Range analysis of Induced Seismicity: rapid classification of clusters in seismic crisis

    NASA Astrophysics Data System (ADS)

    Bejar-Pizarro, M.; Perez Lopez, R.; Benito-Parejo, M.; Guardiola-Albert, C.; Herraiz, M.

    2017-12-01

    Different underground fluid operations, mainly gas storage, fracking and water pumping, can trigger induced seismicity (IS). This seismicity normally features small earthquakes (M < 2.5), although particular cases reach magnitudes as great as 5. It has been debated whether earthquakes greater than magnitude 5 can be triggered by IS, or whether this level of magnitude corresponds only to tectonic earthquakes caused by stress changes. Whatever the case, the characterization of IS for seismic clusters and seismic series recorded close to, but not within, the gas storage is still under discussion. Time series of earthquakes obey non-linear patterns, where the Hurst exponent describes the persistency or anti-persistency of the sequence. Natural seismic sequences have an H-exponent close to 0.7, which, combined with the b-value time evolution during the clusters, gives us valuable information about the stationarity of the phenomena. Tectonic sequences consist of a main shock with a decay in the time-occurrence of subsequent shocks obeying Omori's empirical law. By contrast, IS does not exhibit a main shock, and the time occurrence depends on the injection operations instead of on the tectonic energy released. In this context, the H-exponent can give information about the origin of the sequence. In 2013, a seismic crisis was declared at the Castor underground gas storage, located offshore in the Mediterranean Sea close to the northeastern Spanish coast. The largest induced earthquake had magnitude 3.7; however, a magnitude 4.2 earthquake, probably of tectonic origin, occurred a few days after the operations stopped. In this work, we compare the H-exponent and the b-value time evolution against the timeline of gas injection. Moreover, we divide the seismic sequence into two groups: (1) induced seismicity and (2) triggered seismicity. The rescaled range analysis allows the differentiation between natural and induced seismicity and gives information about the persistency and long-term memory of the seismic crisis. These results are part of the Spanish project SISMOSIMA (CGL2013-47412-C2-2P).

  7. Origins of the anomalous stress behavior in charged colloidal suspensions under shear.

    PubMed

    Kumar, Amit; Higdon, Jonathan J L

    2010-11-01

    Numerical simulations are conducted to determine the microstructure and rheology of sheared suspensions of charged colloidal particles at a volume fraction of φ = 0.33. Over broad ranges of repulsive force strength F0 and Péclet number Pe, dynamic simulations show coexistence of ordered and disordered stable states, with the state dependent on the initial condition. In contrast to the common view, at low shear rates the disordered phase exhibits a lower relative viscosity (μ_r) than the ordered phase, while this behavior is reversed at higher shear rates. Analysis shows the stress reversal is associated with different shear-induced microstructural distortions in the ordered and disordered systems. Viscosity vs. shear rate data over a wide range of F0 and Pe collapse well upon rescaling with the long-time self-diffusivity. The shear-thinning viscosity in the ordered phase scaled as μ_r ∼ Pe^(-0.81) at low shear rates. The microstructural dynamics revealed in these studies explains the anomalous behavior and hysteresis loops in stress data reported in the literature.

  8. Correlation between centre offsets and gas velocity dispersion of galaxy clusters in cosmological simulations

    NASA Astrophysics Data System (ADS)

    Li, Ming-Hua; Zhu, Weishan; Zhao, Dong

    2018-05-01

    The gas is the dominant component of baryonic matter in most galaxy groups and clusters. The spatial offset of the gas centre from the halo centre can be an indicator of the dynamical state of the cluster, and knowledge of such offsets is important for estimating the uncertainties involved when using clusters as cosmological probes. In this paper, we study the centre offsets r_off between the gas and all the matter within halo systems in ΛCDM cosmological hydrodynamic simulations. We focus on two kinds of centre offsets: the three-dimensional PB offsets between the gravitational potential minimum of the entire halo and the barycentre of the ICM, and the two-dimensional PX offsets between the potential minimum of the halo and the iterative centroid of the projected synthetic X-ray emission of the halo. Haloes at higher redshifts tend to have larger rescaled offsets r_off/r_200 and larger rescaled gas velocity dispersions σ_v^gas/σ_200. For both types of offsets, we find that the correlation between the rescaled centre offset r_off/r_200 and the rescaled 3D gas velocity dispersion σ_v^gas/σ_200 can be approximately described by the quadratic function r_off/r_200 ∝ (σ_v^gas/σ_200 − k_2)^2. A Bayesian analysis with an MCMC method is employed to estimate the model parameters. The dependence of the correlation on redshift and on the gas mass fraction is also investigated.

  9. "Gap Talk" and the Global Rescaling of Educational Accountability in Canada

    ERIC Educational Resources Information Center

    Martino, Wayne; Rezai-Rashti, Goli

    2013-01-01

    In this paper, we undertake a particular policy critique and analysis of the gender achievement gap discourse in Ontario and Canada, and situate it within the context of what has been termed "the governance turn" in educational policy with its focus on policy as numbers and its multi-scalar manifestations. We show how this "gap…

  10. Rescaling Education: Reconstructions of Scale in President Reagan's 1983 State of the Union Address

    ERIC Educational Resources Information Center

    Collin, Ross; Ferrare, Joseph J.

    2015-01-01

    This article presents a discourse analysis of President Ronald Reagan's 1983 State of the Union Address. Focusing on questions of scale, the article considers how and with what effects Reagan reconstructs education as a local, state, national and global endeavour. It is argued that by situating education in a competitive global economy, Reagan…

  11. A new combined approach on Hurst exponent estimate and its applications in realized volatility

    NASA Astrophysics Data System (ADS)

    Luo, Yi; Huang, Yirong

    2018-02-01

    The purpose of this paper is to propose a new estimator of the Hurst exponent based on the combined information of conventional rescaled range methods. We demonstrate the superiority of the proposed estimator through Monte Carlo simulations and through applications to estimating the Hurst exponent of daily volatility series in the Chinese stock market. Moreover, we show the impact of the type of estimator and of structural breaks on the estimation results for the Hurst exponent.

  12. How long will the traffic flow time series keep efficacious to forecast the future?

    NASA Astrophysics Data System (ADS)

    Yuan, PengCheng; Lin, XuXun

    2017-02-01

    This paper investigates how long a historical traffic flow time series remains efficacious for forecasting the future. In this framework, we first collect traffic flow time series data at different granularities. Then, using the modified rescaled range analysis method, we analyze the long-memory property of the traffic flow time series by computing the Hurst exponent. We calculate the long-term memory cycle and test its significance, and we compare it with the result of the maximum Lyapunov exponent method. Our results show that both the freeway and the ground-way traffic flow time series demonstrate a positively correlated trend (a long-term memory property), and both of their memory cycles are about 30 h. We believe this study is useful for short-term and long-term traffic flow prediction and management.

  13. Critical scaling analysis for displacive-type organic ferroelectrics around ferroelectric transition

    NASA Astrophysics Data System (ADS)

    Ding, L. J.

    2017-04-01

    The critical scaling properties of displacive-type organic ferroelectrics, in which the ferroelectric-paraelectric transition is induced by spin-Peierls instability, are investigated by Green's function theory through the modified Arrott plot, critical isotherm and electrocaloric effect (ECE) analysis around the transition temperature T_C. It is shown that the electric entropy change −ΔS follows a power-law dependence on the electric field E: −ΔS ∼ E^n, with n satisfying the Franco relation n(T_C) = 1 + (β − 1)/(β + γ) = 0.618, wherein the obtained critical exponents β = 0.440 and γ = 1.030 are not only corroborated by the Kouvel-Fisher method but also confirm the Widom critical relation δ = 1 + γ/β. The self-consistency and reliability of the obtained critical exponents are further verified by the scaling equations. Additionally, a universal curve of −ΔS is constructed by rescaling the temperature and electric field, so that one can extrapolate the ECE over a certain temperature and electric field range, which would be helpful in designing controlled electric refrigeration devices.

  14. Iterative simulated quenching for designing irregular-spot-array generators.

    PubMed

    Gillet, J N; Sheng, Y

    2000-07-10

    We propose a novel, to our knowledge, algorithm of iterative simulated quenching with temperature rescaling for designing diffractive optical elements, based on an analogy between simulated annealing and statistical thermodynamics. The temperature is iteratively rescaled at the end of each quenching process according to ensemble statistics to bring the system back from a frozen imperfect state with a local minimum of energy to a dynamic state in a Boltzmann heat bath in thermal equilibrium at the rescaled temperature. The new algorithm achieves much lower cost function and reconstruction error and higher diffraction efficiency than conventional simulated annealing with a fast exponential cooling schedule and is easy to program. The algorithm is used to design binary-phase generators of large irregular spot arrays. The diffractive phase elements have trapezoidal apertures of varying heights, which fit ideal arbitrary-shaped apertures better than do trapezoidal apertures of fixed heights.
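
    A heavily simplified sketch of iterative quenching with temperature rescaling: after each fast exponential-cooling quench, the temperature is reset from the late-stage energy fluctuations before the next quench. The toy cost function (a 1D Fourier-intensity error standing in for a spot-array reconstruction) and the rescaling rule used here are illustrative assumptions; the paper derives its rescaling from ensemble statistics of the Boltzmann heat bath.

        import numpy as np

        rng = np.random.default_rng(8)

        def energy(phase, target):
            """Toy cost: deviation of the Fourier intensity of a binary-phase
            element from a flat target pattern (illustrative only)."""
            field = np.fft.fft(np.exp(1j * phase))
            return float(np.sum((np.abs(field) - target) ** 2))

        def quench(phase, target, t0, n_iter=3000, cooling=0.995):
            """One fast exponential-cooling quench with single-pixel flips."""
            t, e = t0, energy(phase, target)
            history = []
            for _ in range(n_iter):
                i = rng.integers(len(phase))
                trial = phase.copy()
                trial[i] = (trial[i] + np.pi) % (2 * np.pi)  # binary phase flip
                e_new = energy(trial, target)
                if e_new < e or rng.random() < np.exp(-(e_new - e) / t):
                    phase, e = trial, e_new
                history.append(e)
                t *= cooling
            return phase, e, np.std(history[-200:])

        n = 64
        target = np.full(n, np.sqrt(n))       # flat spot-array intensity target
        phase = rng.choice([0.0, np.pi], n)   # random binary-phase start
        t = 1.0
        for cycle in range(5):
            phase, e, sigma_e = quench(phase, target, t)
            # Rescale T from late-stage energy fluctuations (a stand-in rule)
            t = max(sigma_e, 1e-6)
            print(f"cycle {cycle}: E = {e:.2f}, rescaled T = {t:.4f}")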

  15. Generic construction of efficient matrix product operators

    NASA Astrophysics Data System (ADS)

    Hubig, C.; McCulloch, I. P.; Schollwöck, U.

    2017-01-01

    Matrix product operators (MPOs) are at the heart of the second-generation density matrix renormalization group (DMRG) algorithm formulated in matrix product state language. We first summarize the widely known facts on MPO arithmetic and representations of single-site operators. Second, we introduce three compression methods (rescaled SVD, deparallelization, and delinearization) for MPOs and show that it is possible to construct efficient representations of arbitrary operators using MPO arithmetic and compression. As examples, we construct powers of a short-ranged spin-chain Hamiltonian, a complicated Hamiltonian of a two-dimensional system and, as proof of principle, the long-range four-body Hamiltonian from quantum chemistry.
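
    The rescaled-SVD compression step can be sketched on a single bond matrix: truncate small singular values, then rescale the retained ones so the Frobenius norm is preserved. This is an illustrative reading of the compression idea (Python with numpy), not the authors' MPO implementation, and it omits the deparallelization and delinearization steps.

        import numpy as np

        def svd_compress(m, rel_tol=1e-12):
            """Compress a bond matrix by discarding singular values below
            rel_tol relative to the largest, rescaling the kept part so it
            preserves the Frobenius norm of the original."""
            u, s, vt = np.linalg.svd(m, full_matrices=False)
            keep = s > rel_tol * s[0]
            u, s, vt = u[:, keep], s[keep], vt[keep]
            # Rescale retained singular values to preserve the norm
            s *= np.linalg.norm(m) / np.linalg.norm(s)
            return u * s, vt   # the left factor absorbs the singular values

        rng = np.random.default_rng(9)
        # A rank-deficient "MPO bond" matrix: rank 3 plus tiny numerical noise
        a = rng.standard_normal((40, 3)) @ rng.standard_normal((3, 40))
        a += 1e-14 * rng.standard_normal((40, 40))
        left, right = svd_compress(a)
        print("compressed bond dimension:", left.shape[1])
        print("reconstruction error:", np.linalg.norm(left @ right - a))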

  16. Water Balance in the Amazon Basin from a Land Surface Model Ensemble

    NASA Technical Reports Server (NTRS)

    Getirana, Augusto C. V.; Dutra, Emanuel; Guimberteau, Matthieu; Kam, Jonghun; Li, Hong-Yi; Decharme, Bertrand; Zhang, Zhengqiu; Ducharne, Agnes; Boone, Aaron; Balsamo, Gianpaolo

    2014-01-01

    Despite recent advances in land surface modeling and remote sensing, estimates of the global water budget are still fairly uncertain. This study aims to evaluate the water budget of the Amazon basin based on several state-of-the-art land surface model (LSM) outputs. Water budget variables (terrestrial water storage TWS, evapotranspiration ET, surface runoff R, and base flow B) are evaluated at the basin scale using both remote sensing and in situ data. Meteorological forcings at a 3-hourly time step and 1° spatial resolution were used to run 14 LSMs. Precipitation datasets that have been rescaled to match the monthly Global Precipitation Climatology Project (GPCP) and Global Precipitation Climatology Centre (GPCC) datasets and the daily Hydrologie du Bassin de l'Amazone (HYBAM) dataset were used to perform three experiments. The Hydrological Modeling and Analysis Platform (HyMAP) river routing scheme was forced with R and B, and simulated discharges were compared against observations at 165 gauges. Simulated ET and TWS were compared against FLUXNET and MOD16A2 evapotranspiration datasets and Gravity Recovery and Climate Experiment (GRACE) TWS estimates in two subcatchments of main tributaries (the Madeira and Negro Rivers). At the basin scale, simulated ET ranges from 2.39 to 3.26 mm day^-1, and a low spatial correlation between ET and precipitation indicates that evapotranspiration does not depend on water availability over most of the basin. Results also show that the other simulated water budget components vary significantly as a function of both the LSM and the precipitation dataset, but simulated TWS generally agrees with GRACE estimates at the basin scale. The best water budget simulations resulted from experiments using HYBAM, mostly explained by a denser rainfall gauge network and the rescaling at a finer temporal scale.

  17. Studies of short and long memory in mining-induced seismic processes

    NASA Astrophysics Data System (ADS)

    Węglarczyk, Stanisław; Lasocki, Stanisław

    2009-09-01

    Memory of a stochastic process implies its predictability, understood as a possibility to gain information on the future above the random guess level. Here we search for memory in the mining-induced seismic process (MIS), that is, a process induced or triggered by mining operations. Long memory is investigated by means of the Hurst rescaled range analysis, and the autocorrelation function estimate is used to test for short memory. Both methods are complemented with result uncertainty analyses based on different resampling techniques. The analyzed data comprise event series from Rudna copper mine in Poland. The studies show that the interevent time and interevent distance processes have both long and short memory. MIS occurrences and locations are internally interrelated. Internal relations among the sizes of MIS events are apparently weaker than those of other two studied parameterizations and are limited to long term interactions.

  18. A fractal comparison of real and Austrian business cycle models

    NASA Astrophysics Data System (ADS)

    Mulligan, Robert F.

    2010-06-01

    Rescaled range and power spectral density analysis are applied to examine a diverse set of macromonetary data for fractal character and stochastic dependence. Fractal statistics are used to evaluate two competing models of the business cycle, Austrian business cycle theory and real business cycle theory. Strong evidence is found for antipersistent stochastic dependence in transactions money (M1) and components of the monetary aggregates most directly concerned with transactions, which suggests an activist monetary policy. Savings assets exhibit persistent long memory, as do those monetary aggregates which include savings assets, such as savings money (M2), M2 minus small time deposits, and money of zero maturity (MZM). Virtually all measures of economic activity display antipersistence, and this finding is invariant to whether the measures are adjusted for inflation, including real gross domestic product, real consumption expenditures, real fixed private investment, and labor productivity. This strongly disconfirms real business cycle theory.
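
    The power-spectral route to the Hurst exponent used alongside R/S in this record can be sketched as follows: fit the low-frequency periodogram slope β in S(f) ∼ f^(−β) and use H = (β + 1)/2, a relation valid for stationary, fractional-Gaussian-noise-like series. The frequency-range choices below are illustrative assumptions.

        import numpy as np

        def spectral_hurst(x):
            """Hurst exponent of a stationary (fGn-like) series from the
            periodogram slope: S(f) ~ f^(-beta) with H = (beta + 1) / 2."""
            x = np.asarray(x, dtype=float) - np.mean(x)
            f = np.fft.rfftfreq(len(x))[1:]
            p = np.abs(np.fft.rfft(x))[1:] ** 2
            # Fit only the low-frequency half, where scaling is expected
            m = f < np.median(f)
            beta = -np.polyfit(np.log(f[m]), np.log(p[m]), 1)[0]
            return (beta + 1.0) / 2.0

        rng = np.random.default_rng(10)
        print(spectral_hurst(rng.standard_normal(8192)))  # ~0.5 for white noise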

  19. Height biases and scale variations in VLBI networks due to antenna gravitational deformations

    NASA Astrophysics Data System (ADS)

    Abbondanza, Claudio; Sarti, Pierguido; Petrov, Leonid; Negusini, Monia

    2010-05-01

    The impact of signal path variations (SPVs) caused by antenna gravity deformations on geodetic VLBI results is evaluated for the first time. Elevation-dependent models of SPV for Medicina and Noto (Italy) telescopes were derived from a combination of terrestrial surveying methods to account for gravitational deformations. After applying these models, estimates of the antenna reference point (ARP) positions are shifted upward by 8.9 mm and 6.7 mm, respectively. The impact on other parameters is negligible. To infer the impact of antenna gravity deformations on the entire VLBI network, lacking measurements for other telescopes, we rescaled the SPV models of Medicina and Noto for other antennas according to their size. The effects are changes in VLBI heights in the range [-3,73] mm and a significant net scale increase of 0.3 - 0.8 ppb. This demonstrates the need to include SPV models in routine VLBI data analysis.

  20. Inadequacy of internal covariance estimation for super-sample covariance

    NASA Astrophysics Data System (ADS)

    Lacasa, Fabien; Kunz, Martin

    2017-08-01

    We give an analytical interpretation of how subsample-based internal covariance estimators lead to biased estimates of the covariance, due to underestimating the super-sample covariance (SSC). This includes the jackknife and bootstrap methods as estimators for the full survey area, and subsampling as an estimator of the covariance of subsamples. The limitations of the jackknife covariance have been previously presented in the literature because it is effectively a rescaling of the covariance of the subsample area. However we point out that subsampling is also biased, but for a different reason: the subsamples are not independent, and the corresponding lack of power results in SSC underprediction. We develop the formalism in the case of cluster counts that allows the bias of each covariance estimator to be exactly predicted. We find significant effects for a small-scale area or when a low number of subsamples is used, with auto-redshift biases ranging from 0.4% to 15% for subsampling and from 5% to 75% for jackknife covariance estimates. The cross-redshift covariance is even more affected; biases range from 8% to 25% for subsampling and from 50% to 90% for jackknife. Owing to the redshift evolution of the probe, the covariances cannot be debiased by a simple rescaling factor, and an exact debiasing has the same requirements as the full SSC prediction. These results thus disfavour the use of internal covariance estimators on data itself or a single simulation, leaving analytical prediction and simulations suites as possible SSC predictors.

  1. An underdamped stochastic resonance method with stable-state matching for incipient fault diagnosis of rolling element bearings

    NASA Astrophysics Data System (ADS)

    Lei, Yaguo; Qiao, Zijian; Xu, Xuefang; Lin, Jing; Niu, Shantao

    2017-09-01

    Most traditional overdamped monostable, bistable and even tristable stochastic resonance (SR) methods have three shortcomings in weak-characteristic extraction: (1) their potential structures, characterized by a single stable-state type, are insufficient to match the complicated and diverse mechanical vibration signals; (2) they are vulnerable to interference from multiscale noise and largely depend on the help of highpass filters whose parameters are selected subjectively, possibly resulting in false detection; and (3) their rescaling factors are generally fixed as constants, thereby ignoring the synergistic effect among vibration signals, potential structures and rescaling factors. These three shortcomings have limited the enhancement ability of SR. To explore the SR potential, this paper first investigates SR in a multistable system by calculating its output spectral amplification, further analyzes its output frequency response numerically, then examines the effect of both damping and rescaling factors on output responses, and finally presents a promising underdamped SR method with stable-state matching for incipient bearing fault diagnosis. This method has three advantages: (1) the diversity of stable-state types in a multistable potential makes it easy to match various vibration signals; (2) the underdamped multistable SR, equivalent to a moving nonlinear bandpass filter dependent on the rescaling factors, is able to suppress multiscale noise; and (3) the synergistic effect among vibration signals, potential structures and rescaling and damping factors is achieved using quantum genetic algorithms whose fitness functions are a new weighted signal-to-noise ratio (WSNR) instead of the SNR. The proposed method is therefore expected to possess good enhancement ability. Simulated and experimental data from rolling element bearings demonstrate its effectiveness. The comparison results show that the proposed method obtains higher amplitude at the target frequency and larger output WSNR, and performs better than traditional SR methods.
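
    For orientation, here is a strongly simplified overdamped bistable SR sketch (Python with numpy), showing the role of a rescaling factor m that speeds up the well dynamics so a weak, noisy harmonic component is enhanced at the output. The underdamped multistable system and the quantum-genetic optimization of this record are well beyond this toy; all parameters are assumptions.

        import numpy as np

        def bistable_sr(signal, dt, a=1.0, b=1.0, m=1.0):
            """Euler integration of the rescaled bistable SR equation
            dx/dt = m * (a*x - b*x**3) + signal(t).
            'm' acts as a rescaling factor matching the high-frequency
            input to the slow dynamics of the potential well."""
            x = np.zeros(len(signal))
            for i in range(1, len(signal)):
                drift = m * (a * x[i - 1] - b * x[i - 1] ** 3)
                x[i] = x[i - 1] + dt * (drift + signal[i - 1])
            return x

        rng = np.random.default_rng(11)
        fs, f0, n = 2000.0, 8.0, 8192
        t = np.arange(n) / fs
        # Weak 8 Hz component buried in broadband noise
        weak = 0.25 * np.sin(2 * np.pi * f0 * t) + 1.2 * rng.standard_normal(n)
        out = bistable_sr(weak, dt=1.0 / fs, m=50.0)
        spec = np.abs(np.fft.rfft(out)) ** 2
        freqs = np.fft.rfftfreq(n, 1.0 / fs)
        peak = freqs[np.argmax(spec[1:]) + 1]   # skip the DC bin
        print(f"dominant output frequency: {peak:.2f} Hz (target {f0} Hz)")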

  2. Illusions of having small or large invisible bodies influence visual perception of object size

    PubMed Central

    van der Hoort, Björn; Ehrsson, H. Henrik

    2016-01-01

    The size of our body influences the perceived size of the world so that objects appear larger to children than to adults. The mechanisms underlying this effect remain unclear. It has been difficult to dissociate visual rescaling of the external environment based on an individual’s visible body from visual rescaling based on a central multisensory body representation. To differentiate these potential causal mechanisms, we manipulated body representation without a visible body by taking advantage of recent developments in body representation research. Participants experienced the illusion of having a small or large invisible body while object-size perception was tested. Our findings show that the perceived size of test-objects was determined by the size of the invisible body (inverse relation), and by the strength of the invisible body illusion. These findings demonstrate how central body representation directly influences visual size perception, without the need for a visible body, by rescaling the spatial representation of the environment. PMID:27708344

  3. Partial Adaptation of Obtained and Observed Value Signals Preserves Information about Gains and Losses

    PubMed Central

    Baddeley, Michelle; Tobler, Philippe N.; Schultz, Wolfram

    2016-01-01

    Given that the range of rewarding and punishing outcomes of actions is large but neural coding capacity is limited, efficient processing of outcomes by the brain is necessary. One mechanism to increase efficiency is to rescale neural output to the range of outcomes expected in the current context, and process only experienced deviations from this expectation. However, this mechanism comes at the cost of not being able to discriminate between unexpectedly low losses when times are bad versus unexpectedly high gains when times are good. Thus, too much adaptation would result in disregarding information about the nature and absolute magnitude of outcomes, preventing learning about the longer-term value structure of the environment. Here we investigate the degree of adaptation in outcome coding brain regions in humans, for directly experienced outcomes and observed outcomes. We scanned participants while they performed a social learning task in gain and loss blocks. Multivariate pattern analysis showed two distinct networks of brain regions adapt to the most likely outcomes within a block. Frontostriatal areas adapted to directly experienced outcomes, whereas lateral frontal and temporoparietal regions adapted to observed social outcomes. Critically, in both cases, adaptation was incomplete and information about whether the outcomes arose in a gain block or a loss block was retained. Univariate analysis confirmed incomplete adaptive coding in these regions but also detected nonadapting outcome signals. Thus, although neural areas rescale their responses to outcomes for efficient coding, they adapt incompletely and keep track of the longer-term incentives available in the environment. SIGNIFICANCE STATEMENT Optimal value-based choice requires that the brain precisely and efficiently represents positive and negative outcomes. One way to increase efficiency is to adapt responding to the most likely outcomes in a given context. However, too strong adaptation would result in loss of precise representation (e.g., when the avoidance of a loss in a loss-context is coded the same as receipt of a gain in a gain-context). We investigated an intermediate form of adaptation that is efficient while maintaining information about received gains and avoided losses. We found that frontostriatal areas adapted to directly experienced outcomes, whereas lateral frontal and temporoparietal regions adapted to observed social outcomes. Importantly, adaptation was intermediate, in line with influential models of reference dependence in behavioral economics. PMID:27683899

  4. Non-Cooperative Target Imaging and Parameter Estimation with Narrowband Radar Echoes.

    PubMed

    Yeh, Chun-mao; Zhou, Wei; Lu, Yao-bing; Yang, Jian

    2016-01-20

    This study focuses on rotating target imaging and parameter estimation with narrowband radar echoes, which are essential for radar target recognition. First, a two-dimensional (2D) imaging model with narrowband echoes is established, and two images of the target are formed on the velocity-acceleration plane at two neighboring coherent processing intervals (CPIs). Then, the rotating velocity (RV) is estimated by exploiting the relationship between the positions of the scattering centers in the two images. Finally, the target image is rescaled to the range-cross-range plane using the estimated rotational parameter. The validity of the proposed approach is confirmed by numerical simulations.

  5. Modeling fluid injection induced microseismicity in shales

    NASA Astrophysics Data System (ADS)

    Carcione, José M.; Currenti, Gilda; Johann, Lisa; Shapiro, Serge

    2018-02-01

    Hydraulic fracturing in shales generates a cloud of seismic (tensile and shear) events that can be used to evaluate the extent of the fracturing (event clouds) and to obtain the hydraulic properties of the medium, such as the degree of anisotropy and the permeability. Firstly, we investigate the suitability of novel semi-analytical reference solutions for the pore pressure evolution around a well after fluid injection in anisotropic media. To do so, we use cylindrical coordinates in the presence of a formation (a layer) and spherical coordinates for a homogeneous and unbounded medium. The differential equations involved are transformed to an isotropic diffusion equation by means of pseudo-spatial coordinates obtained from the spatial variables re-scaled by the permeability components. We consider pressure-dependent permeability components, which are independent of the spatial direction. The analytical solutions are compared to numerical solutions to verify their applicability. The comparison shows that the solutions are suitable for a limited permeability range and for moderate to minor pressure dependences of the permeability. Once the pressure evolution around the well has been established, we can model the microseismic events. Induced seismicity by failure due to fluid injection in a porous rock depends on the hydraulic and elastic properties of the medium and on the in situ stress conditions. Here, we define a tensile threshold pressure above which there is tensile emission, while the shear threshold is obtained by using the octahedral stress criterion and the in situ rock properties and conditions. Subsequently, we generate event clouds for both cases and study their spatio-temporal features. The model considers anisotropic permeability, and the results are spatially re-scaled to obtain an effective isotropic medium representation. For 3D diffusion in spherical coordinates and an exponential pressure dependence of the permeability, the results differ from those of the classical diffusion equation. Using the classical front to fit event clouds spatially provides good results, but with re-scaled values of the permeability components. Modeling is required to evaluate the scaling constant in real cases.
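
    The coordinate transformation invoked here can be written out schematically; a minimal sketch assuming principal diffusivities D_x, D_y, D_z proportional to the permeability components (the paper's exact normalization may differ):

    ```latex
    \frac{\partial p}{\partial t}
      = D_x \frac{\partial^2 p}{\partial x^2}
      + D_y \frac{\partial^2 p}{\partial y^2}
      + D_z \frac{\partial^2 p}{\partial z^2}
    \quad\xrightarrow{\ \tilde{x}_i \,=\, x_i \sqrt{\bar{D}/D_i},\ \ \bar{D} \,=\, (D_x D_y D_z)^{1/3}\ }\quad
    \frac{\partial p}{\partial t} = \bar{D}\,\tilde{\nabla}^2 p
    ```

    In the pseudo-spatial coordinates a single isotropic triggering front then describes the growth of the event cloud.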

  6. Accounting for data variability, a key factor in in vivo/in vitro relationships: application to the skin sensitization potency (in vivo LLNA versus in vitro DPRA) example.

    PubMed

    Dimitrov, S; Detroyer, A; Piroird, C; Gomes, C; Eilstein, J; Pauloin, T; Kuseva, C; Ivanova, H; Popova, I; Karakolev, Y; Ringeissen, S; Mekenyan, O

    2016-12-01

    When searching for alternative methods to animal testing, confidently rescaling an in vitro result to the corresponding in vivo classification is still a challenging problem. Although one of the most important factors affecting good correlation is sample characteristics, they are very rarely integrated into correlation studies. Usually, in these studies, it is implicitly assumed that both compared values are error-free numbers, which they are not. In this work, we propose a general methodology to analyze and integrate data variability, and thus confidence estimation, when rescaling from one test to another. The methodology is demonstrated through the case study of rescaling the in vitro Direct Peptide Reactivity Assay (DPRA) reactivity to the in vivo Local Lymph Node Assay (LLNA) skin sensitization potency classifications. In a first step, a comprehensive statistical analysis evaluating the reliability and variability of LLNA and DPRA as such was performed. These results allowed us to link the concept of gray zones with confidence probability, which in turn offers a new perspective for more precise knowledge of the classification of chemicals within their in vivo OR in vitro test. Next, the novelty and practical value of our methodology, which introduces variability into the threshold optimization between the in vitro AND in vivo tests, reside in the fact that it attributes a confidence probability to the predicted classification. The methodology, classification and screening approach presented in this study are not restricted to skin sensitization. They could also be helpful for fate, toxicity and health hazard assessment, where plenty of in vitro and in chemico assays and/or QSAR models are available. Copyright © 2016 John Wiley & Sons, Ltd.

  7. Fractal Tempo Fluctuation and Pulse Prediction

    PubMed Central

    Rankin, Summer K.; Large, Edward W.; Fink, Philip W.

    2010-01-01

    We investigated people's ability to adapt to the fluctuating tempi of music performance. In Experiment 1, four pieces from different musical styles were chosen, and performances were recorded from a skilled pianist who was instructed to play with natural expression. Spectral and rescaled range analyses on interbeat interval time-series revealed long-range (1/f type) serial correlations and fractal scaling in each piece. Stimuli for Experiment 2 included two of the performances from Experiment 1, with mechanical versions serving as controls. Participants tapped the beat at ¼- and ⅛-note metrical levels, successfully adapting to large tempo fluctuations in both performances. Participants predicted the structured tempo fluctuations, with superior performance at the ¼-note level. Thus, listeners may exploit long-range correlations and fractal scaling to predict tempo changes in music. PMID:25190901
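
    Because rescaled range analysis recurs throughout these records, a minimal illustration may help; the following sketch estimates the Hurst exponent H as the slope of log(R/S) against log(window size), with H ≈ 0.5 for uncorrelated noise and H > 0.5 for persistent, long-range correlated series such as the interbeat intervals above. The window choices and function name are illustrative, not those of any cited study.

    ```python
    import numpy as np

    def rescaled_range_hurst(x, min_window=8):
        """Estimate the Hurst exponent of a 1-D series via R/S analysis."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        windows = np.unique(np.logspace(np.log10(min_window), np.log10(n // 2), 20).astype(int))
        rs_means = []
        for w in windows:
            rs = []
            for start in range(0, n - w + 1, w):
                seg = x[start:start + w]
                z = np.cumsum(seg - seg.mean())   # cumulative deviations from the segment mean
                r = z.max() - z.min()             # range of the cumulative deviate series
                s = seg.std()                     # segment standard deviation
                if s > 0:
                    rs.append(r / s)
            rs_means.append(np.mean(rs))
        # H is the slope of log(R/S) versus log(window size)
        h, _ = np.polyfit(np.log(windows), np.log(rs_means), 1)
        return h
    ```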

  8. The "Chaos Theory" and nonlinear dynamics in heart rate variability analysis: does it work in short-time series in patients with coronary heart disease?

    PubMed

    Krstacic, Goran; Krstacic, Antonija; Smalcelj, Anton; Milicic, Davor; Jembrek-Gostovic, Mirjana

    2007-04-01

    Dynamic analysis techniques may quantify abnormalities in heart rate variability (HRV) based on nonlinear and fractal analysis (chaos theory). The article emphasizes the clinical and prognostic significance of dynamic changes in short-time series from patients with coronary heart disease (CHD) during the exercise electrocardiographic (ECG) test. Subjects were included in the series after a complete cardiovascular diagnostic workup. Series of R-R and ST-T intervals were obtained from exercise ECG data after digital sampling. The rescaled range analysis method was used to determine the fractal dimension of the intervals. To quantify the fractal long-range correlation properties of heart rate variability, the detrended fluctuation analysis technique was used. Approximate entropy (ApEn) was applied to quantify the regularity and complexity of the time series, as well as the unpredictability of their fluctuations. It was found that the short-term fractal scaling exponent (alpha(1)) is significantly lower in patients with CHD (0.93 +/- 0.07 vs 1.09 +/- 0.04; P < 0.001). The patients with CHD had a higher fractal dimension in each exercise test program separately, as well as in the exercise program overall. ApEn was significantly lower in the CHD group in both R-R and ST-T ECG intervals (P < 0.001). The nonlinear dynamic methods could thus have clinical and prognostic applicability also in short-time ECG series. Dynamic analysis based on chaos theory during the exercise ECG test points to multifractal time series in CHD patients, who lose normal fractal characteristics and regularity in HRV. Nonlinear analysis techniques may complement traditional ECG analysis.
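
    The detrended fluctuation analysis used alongside R/S in this record can be sketched just as briefly; this is a generic order-1 DFA with illustrative scale choices, not the authors' implementation. The exponent alpha plays a role analogous to the Hurst exponent (alpha ≈ 0.5 for white noise, alpha ≈ 1 for 1/f noise).

    ```python
    import numpy as np

    def dfa_exponent(x):
        """Order-1 detrended fluctuation analysis of a 1-D series."""
        x = np.asarray(x, dtype=float)
        y = np.cumsum(x - x.mean())                           # integrated profile
        scales = np.unique(np.logspace(np.log10(4), np.log10(len(x) // 4), 15).astype(int))
        flucts = []
        for s in scales:
            f2 = []
            for i in range(len(y) // s):
                seg = y[i * s:(i + 1) * s]
                t = np.arange(s)
                trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear trend
                f2.append(np.mean((seg - trend) ** 2))        # detrended variance
            flucts.append(np.sqrt(np.mean(f2)))
        alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
        return alpha
    ```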

  9. Reply to comment by Ma and Zhang on "Rescaling the complementary relationship for land surface evaporation"

    NASA Astrophysics Data System (ADS)

    Crago, Richard; Qualls, Russell; Szilagyi, Jozsef; Huntington, Justin

    2017-07-01

    Ma and Zhang (2017) note a concern they have with our rescaled Complementary Relationship (CR) for land surface evaporation when daily average wind speeds are very low (perhaps less than 1 m/s). We discuss conditions and specific formulations that lead to this concern, but ultimately argue that under these conditions, a key assumption behind the CR itself may not be satisfied at the daily time scale. Thus, careful consideration of the reliability of the CR is needed when wind speeds are very low.

  10. Lattice Boltzmann method for weakly ionized isothermal plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Huayu; Ki, Hyungson

    2007-12-15

    In this paper, a lattice Boltzmann method (LBM) for weakly ionized isothermal plasmas is presented by introducing a rescaling scheme for the Boltzmann transport equation. Without using this rescaling, we found that the nondimensional relaxation time used in the LBM is too large and the LBM does not produce physically realistic results. The developed model was applied to the electrostatic wave problem and the diffusion process of singly ionized helium plasmas with a 1-3% degree of ionization under an electric field. The obtained results agree well with theoretical values.
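
    The origin of the overly large relaxation time can be seen from the standard lattice-BGK relation between a transport coefficient and the relaxation time; a schematic constraint in lattice units (Δx = Δt = 1), not the paper's specific rescaling scheme:

    ```latex
    \nu \;=\; c_s^{2}\left(\tau - \tfrac{1}{2}\right)
    ```

    Mapping the large diffusivities of a weakly ionized plasma onto fixed lattice units therefore forces τ ≫ 1, outside the regime where the BGK collision step behaves physically; rescaling the transport equation restores τ to order one.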

  11. SMR Re-Scaling and Modeling for Load Following Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoover, K.; Wu, Q.; Bragg-Sitton, S.

    2016-11-01

    This study investigates the creation of a new set of scaling parameters for the Oregon State University Multi-Application Small Light Water Reactor (MASLWR) scaled thermal hydraulic test facility. As part of a study being undertaken by Idaho National Lab on nuclear reactor load-following characteristics, full-power operations need to be simulated, and therefore properly scaled. Presented here are the scaling analysis and plans for RELAP5-3D simulation.

  12. Height bias and scale effect induced by antenna gravitational deformations in geodetic VLBI data analysis

    NASA Astrophysics Data System (ADS)

    Sarti, Pierguido; Abbondanza, Claudio; Petrov, Leonid; Negusini, Monia

    2011-01-01

    The impact of signal path variations (SPVs) caused by antenna gravitational deformations on geodetic very long baseline interferometry (VLBI) results is evaluated for the first time. Elevation-dependent models of SPV for Medicina and Noto (Italy) telescopes were derived from a combination of terrestrial surveying methods to account for gravitational deformations. After applying these models in geodetic VLBI data analysis, estimates of the antenna reference point positions are shifted upward by 8.9 and 6.7 mm, respectively. The impact on other parameters is negligible. To simulate the impact of antenna gravitational deformations on the entire VLBI network, lacking measurements for other telescopes, we rescaled the SPV models of Medicina and Noto for other antennas according to their size. The effects of the simulations are changes in VLBI heights in the range [-3, 73] mm and a net scale increase of 0.3-0.8 ppb. The height bias is larger than random errors of VLBI position estimates, implying the possibility of significant scale distortions related to antenna gravitational deformations. This demonstrates the need to precisely measure gravitational deformations of other VLBI telescopes, to derive their precise SPV models and to apply them in routine geodetic data analysis.

  13. Analysis of the Seismicity Preceding Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Stallone, A.; Marzocchi, W.

    2016-12-01

    The most common earthquake forecasting models assume that the magnitude of the next earthquake is independent of the past. This feature is probably one of the most severe limitations of our capability to forecast large earthquakes. In this work, we investigate this aspect empirically, exploring whether spatio-temporal variations in seismicity encode some information on the magnitude of future earthquakes. For this purpose, and to verify the universality of the findings, we consider seismic catalogs covering quite different space-time-magnitude windows, such as the Alto Tiberina Near Fault Observatory (TABOO) catalog and the California and Japanese seismic catalogs. Our method is inspired by the statistical methodology proposed by Zaliapin (2013) to distinguish triggered and background earthquakes, using nearest-neighbor clustering analysis in a two-dimensional plane defined by rescaled time and space. In particular, we generalize the nearest-neighbor metric to a k-nearest-neighbors clustering analysis that allows us to consider the overall space-time-magnitude distribution of the k earthquakes (k foreshocks) that anticipate one target event (the mainshock); we then analyze the statistical properties of the clusters identified in this rescaled space. In essence, the main goal of this study is to verify whether different classes of mainshock magnitude are characterized by distinctive k-foreshock distributions. The final step is to show how the findings of this work may (or may not) improve the skill of existing earthquake forecasting models.
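
    The rescaled time-space metric underlying this clustering analysis can be sketched as follows; the parameter values (Gutenberg-Richter b-value, fractal dimension df, and time-space weight q) are typical literature choices, and the function is an illustration of the Zaliapin-style construction rather than the authors' code.

    ```python
    import numpy as np

    def rescaled_separation(t_parent, t_child, r_km, m_parent, b=1.0, df=1.6, q=0.5):
        """Rescaled time T, rescaled space R, and nearest-neighbor distance
        eta = T * R between a candidate parent event and a later event."""
        dt = t_child - t_parent                                 # inter-event time (> 0)
        T = dt * 10.0 ** (-q * b * m_parent)                    # magnitude-rescaled time
        R = (r_km ** df) * 10.0 ** (-(1.0 - q) * b * m_parent)  # magnitude-rescaled space
        return T, R, T * R
    ```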

  14. Complex Dynamic Processes in Sign Tracking With an Omission Contingency (Negative Automaintenance)

    PubMed Central

    Killeen, Peter R.

    2008-01-01

    Hungry pigeons received food periodically, signaled by the onset of a keylight. Key pecks aborted the feeding. Subjects responded for thousands of trials, despite the contingent nonreinforcement, with varying probability as the intertrial interval was varied. Hazard functions showed the dominant tendency to be perseveration in responding and not responding. Once perseveration was accounted for, a linear operator model of associative conditioning further improved predictions. Response rates during trials were correlated with the prior probabilities of a response. Rescaled range analyses showed that the behavioral trajectories were a kind of fractional Brownian motion. PMID:12561133

  15. Complex dynamic processes in sign tracking with an omission contingency (negative automaintenance).

    PubMed

    Killeen, Peter R

    2003-01-01

    Hungry pigeons received food periodically, signaled by the onset of a keylight. Key pecks aborted the feeding. Subjects responded for thousands of trials, despite the contingent nonreinforcement, with varying probability as the intertrial interval was varied. Hazard functions showed the dominant tendency to be perseveration in responding and not responding. Once perseveration was accounted for, a linear operator model of associative conditioning further improved predictions. Response rates during trials were correlated with the prior probabilities of a response. Rescaled range analyses showed that the behavioral trajectories were a kind of fractional Brownian motion.

  16. Statistical properties of solar Hα flare activity

    NASA Astrophysics Data System (ADS)

    Deng, Linhua; Zhang, Xiaojuan; An, Jianmei; Cai, Yunfang

    2017-12-01

    Magnetic field structures in the solar atmosphere are not symmetrically distributed between the northern and southern hemispheres, an important aspect of the quasi-cyclical evolution of magnetic activity indicators that is related to solar dynamo theories. Three standard analysis techniques are applied to analyze the hemispheric coupling (north-south asymmetry and phase asynchrony) of monthly averaged values of solar Hα flare activity over the past 49 years (from January 1966 to December 2014). The prominent results are as follows: (1) from a global point of view, solar Hα flare activity in the two hemispheres is strongly correlated, but the northern hemisphere precedes the southern one with a phase shift of 7 months; (2) long-range persistence indeed exists in solar Hα flare activity, but the dynamical complexities in the two hemispheres are not identical; (3) the prominent periodicities of Hα flare activity are the 17-year full-disk activity cycle and the 11-year Schwabe solar cycle, but short- and mid-term periodicities cannot be determined from monthly time series; (4) by comparing the non-parametric rescaling behavior on a point-by-point basis, the hemispheric asynchrony of solar Hα flare activity is estimated to range from several months to tens of months, with an average value of 8.7 months. These results could advance our knowledge of the long-range persistence, quasi-periodic variation and hemispheric asynchrony of solar Hα flare activity in the two hemispheres, and possibly provide valuable information on the hemispheric interrelation of solar magnetic activity.

  17. Recycling inflow method for simulations of spatially evolving turbulent boundary layers over rough surfaces

    NASA Astrophysics Data System (ADS)

    Yang, Xiang I. A.; Meneveau, Charles

    2016-01-01

    The technique by Lund et al. to generate turbulent inflow for simulations of developing boundary layers over smooth flat plates is extended to the case of surfaces with roughness elements. In the Lund et al. method, turbulent velocities on a sampling plane are rescaled and recycled back to the inlet as the inflow boundary condition. To rescale mean and fluctuating velocities, appropriate length scales need to be identified; for smooth surfaces, the viscous scale l_ν = ν/u_τ (where ν is the kinematic viscosity and u_τ is the friction velocity) is employed for the inner layer. Unlike for smooth surfaces, in rough-wall boundary layers the length scale of the inner layer, i.e. the roughness sub-layer scale l_d, must be determined by the geometric details of the surface roughness elements and the flow around them. In the proposed approach, it is determined by diagnosing the dispersive stresses that quantify the spatial inhomogeneity caused by the roughness elements in the flow. The scale l_d is used for rescaling in the inner layer, and the boundary layer thickness δ is used in the outer region. Both parts are then combined for recycling using a blending function. Unlike the blending function proposed by Lund et al., which transitions from the inner layer to the outer layer at approximately 0.2δ, here the location of blending is shifted upwards to enable simulations of very rough surfaces in which the roughness length may exceed the height of 0.2δ assumed in the traditional method. The extended rescaling-recycling method is tested in large eddy simulations of flow over surfaces with various types of roughness element shapes.
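
    A sketch of the final blending step, assuming the weighting function in the form popularized by Lund et al. (alpha = 4, b = 0.2 in the original smooth-wall method); this record's extension amounts to raising the blend location b for very rough walls. Names and arguments are illustrative.

    ```python
    import numpy as np

    def blend_profiles(u_inner, u_outer, eta, b=0.2, alpha=4.0):
        """Combine inner- and outer-layer rescaled velocities at heights
        eta = y/delta; W rises smoothly from 0 (inner) to 1 (outer) around eta = b."""
        W = 0.5 * (1.0 + np.tanh(alpha * (eta - b) / ((1.0 - 2.0 * b) * eta + b)) / np.tanh(alpha))
        return (1.0 - W) * u_inner + W * u_outer
    ```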

  18. More efficient parameter estimates for factor analysis of ordinal variables by ridge generalized least squares.

    PubMed

    Yuan, Ke-Hai; Jiang, Ge; Cheng, Ying

    2017-11-01

    Data in psychology are often collected using Likert-type scales, and it has been shown that factor analysis of Likert-type data is better performed on the polychoric correlation matrix than on the product-moment covariance matrix, especially when the distributions of the observed variables are skewed. In theory, factor analysis of the polychoric correlation matrix is best conducted using generalized least squares with an asymptotically correct weight matrix (AGLS). However, simulation studies showed that both least squares (LS) and diagonally weighted least squares (DWLS) perform better than AGLS, and thus LS or DWLS is routinely used in practice. In either LS or DWLS, the associations among the polychoric correlation coefficients are completely ignored. To mend such a gap between statistical theory and empirical work, this paper proposes new methods, called ridge GLS, for factor analysis of ordinal data. Monte Carlo results show that, for a wide range of sample sizes, ridge GLS methods yield uniformly more accurate parameter estimates than existing methods (LS, DWLS, AGLS). A real-data example indicates that estimates by ridge GLS are 9-20% more efficient than those by existing methods. Rescaled and adjusted test statistics as well as sandwich-type standard errors following the ridge GLS methods also perform reasonably well. © 2017 The British Psychological Society.

  19. Survey: interpolation methods for whole slide image processing.

    PubMed

    Roszkowiak, L; Korzynska, A; Zak, J; Pijanowska, D; Swiderska-Chadaj, Z; Markiewicz, T

    2017-02-01

    Evaluating whole slide images of histological and cytological samples is used in pathology for diagnostics, grading and prognosis. It is often necessary to rescale whole slide images of very large size. Image resizing is one of the most common applications of interpolation. We collect the advantages and drawbacks of nine interpolation methods, and as a result of our analysis, we try to select one interpolation method as the preferred solution. To compare the performance of the interpolation methods, test images were scaled and then rescaled to the original size using the same algorithm. The modified image was compared to the original image in various aspects. The time needed for calculations and the results of quantification performance on modified images were also compared. For evaluation purposes, we used four general test images and 12 specialized biological immunohistochemically stained tissue sample images. The purpose of this survey is to determine which method of interpolation is best for resizing whole slide images, so that they can be further processed using quantification methods. As a result, the interpolation method has to be selected depending on the task involving whole slide images. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
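
    The evaluation protocol described (scale down, rescale back to the original size with the same algorithm, compare) can be sketched with standard tools; scipy's spline-based zoom stands in here for the nine surveyed methods, so this illustrates the protocol rather than any specific algorithm from the paper.

    ```python
    import numpy as np
    from scipy import ndimage

    def roundtrip_rmse(image, factor=0.25, order=3):
        """Downscale then upscale a 2-D image with spline interpolation of the
        given order (0 = nearest, 1 = bilinear, 3 = bicubic) and return the RMSE
        against the original."""
        small = ndimage.zoom(image, factor, order=order)
        restored = ndimage.zoom(small, 1.0 / factor, order=order)
        h = min(image.shape[0], restored.shape[0])   # guard against rounding in shapes
        w = min(image.shape[1], restored.shape[1])
        return np.sqrt(np.mean((image[:h, :w] - restored[:h, :w]) ** 2))
    ```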

  20. Detection of long term persistence in time series of the Neuquen River (Argentina)

    NASA Astrophysics Data System (ADS)

    Seoane, Rafael; Paz González, Antonio

    2014-05-01

    In the Patagonian region (Argentina), previous hydrometeorological studies based on general circulation models show variations in annual mean flows. Future climate scenarios obtained from high-resolution models indicate decreases in total annual precipitation, and these decreases are most pronounced in the Neuquén river basin (23,000 km²). The aim of this study was to estimate long-term persistence in the Neuquén River basin (Argentina). Variations in the long-range dependence and long memory of the time series were evaluated with the Hurst exponent. We applied rescaled adjusted range analysis (R/S) to time series of river discharges measured from 1903 to 2011, dividing the record into two subperiods, 1903-1970 and 1970-2011. Results show a small increase in persistence for the second period. Our results are consistent with those obtained by Koch and Markovic (2007), who observed and estimated an increase of the H exponent for the period 1960-2000 in the Elbe River (Germany). References: Hurst, H. (1951). "Long-term storage capacity of reservoirs". Trans. Am. Soc. Civil Engrs., 116:776-808. Koch and Markovic (2007). "Evidences for Climate Change in Germany over the 20th Century from the Stochastic Analysis of hydro-meteorological Time Series". MODSIM07, International Congress on Modelling and Simulation, Christchurch, New Zealand.

  1. Self-Organized Criticality Properties of the Turbulence-Induced Particle Flux at the Plasma Edge of the HT-6M Tokamak

    NASA Astrophysics Data System (ADS)

    Wang, Wen-Hao; Yu, Chang-Xuan; Wen, Yi-Zhi; Xu, Yu-Hong; Ling, Bi-Li; Gong, Xian-Zu; Liu, Bao-Hua; Wan, Bao-Nian

    2001-06-01

    The power spectrum and the probability distribution function (PDF) of the turbulence-induced particle flux Γ in the velocity shear layer of the HT-6M edge region have been measured and analysed. Three regions of frequency dependence (f⁰, f⁻¹, f⁻⁴) have been observed in the spectrum of the flux. The PDF of the flux displays a Γ⁻¹ scaling over one decade in Γ. Using the rescaled-range statistical technique, we find that the degree of self-similarity (Hurst exponent) of the particle flux in the measured region ranges from 0.64 to 0.83. All of these results may mean that the plasma transport is in a state characterized by self-organized criticality.

  2. Application of the stochastic resonance algorithm to the simultaneous quantitative determination of multiple weak peaks of ultra-performance liquid chromatography coupled to time-of-flight mass spectrometry.

    PubMed

    Deng, Haishan; Shang, Erxin; Xiang, Bingren; Xie, Shaofei; Tang, Yuping; Duan, Jin-ao; Zhan, Ying; Chi, Yumei; Tan, Defei

    2011-03-15

    The stochastic resonance algorithm (SRA) has been developed in recent years as a potential tool for amplifying and determining weak chromatographic peaks. However, the conventional SRA cannot be applied directly to ultra-performance liquid chromatography/time-of-flight mass spectrometry (UPLC/TOFMS). The obstacle lies in the fact that the narrow peaks generated by UPLC contain high-frequency components which fall beyond the restrictions of the theory of stochastic resonance. Although there already exists an algorithm that allows a high-frequency weak signal to be detected, the sampling frequency of TOFMS is not fast enough to meet the requirement of that algorithm. Another problem is the depression of the weak peak of a compound with low concentration or weak detection response, which prevents the simultaneous determination of multi-component UPLC/TOFMS peaks. In order to lower the frequencies of the peaks, an interpolation and re-scaling frequency stochastic resonance (IRSR) is proposed, which re-scales the peak frequencies by numerical linear interpolation of sample points. The re-scaled UPLC/TOFMS peaks could then be amplified significantly. By introducing an external energy field upon the UPLC/TOFMS signals, the method of energy gain was developed to simultaneously amplify and determine weak peaks from multiple components. Subsequently, a multi-component stochastic resonance algorithm was constructed for the simultaneous quantitative determination of multiple weak UPLC/TOFMS peaks based on the two methods. The optimization of parameters is discussed in detail with simulated data sets, and the applicability of the algorithm was evaluated by quantitative analysis of three alkaloids in human plasma using UPLC/TOFMS. The new algorithm performed well in improving the signal-to-noise ratio (S/N) compared to several commonly used peak enhancement methods, including the Savitzky-Golay filter, the Whittaker-Eilers smoother and matched filtration. Copyright © 2011 John Wiley & Sons, Ltd.

  3. An empirical analysis of the Ebola outbreak in West Africa

    NASA Astrophysics Data System (ADS)

    Khaleque, Abdul; Sen, Parongama

    2017-02-01

    The data for the Ebola outbreak that occurred in 2014-2016 in three countries of West Africa are analysed within a common framework. The analysis is made using the results of an agent-based Susceptible-Infected-Removed (SIR) model on a Euclidean network, where nodes at a distance l are connected with probability P(l) ∝ l^(-δ), with δ determining the range of the interaction, in addition to nearest neighbors. The cumulative (total) density of the infected population here has a functional form whose parameters depend on δ and the infection probability q. This form is seen to fit the data well. Using the best-fitting parameters, the time at which the peak is reached is estimated and is shown to be consistent with the data. We also show that in the Euclidean model one can choose δ and q values which reproduce the data for the three countries qualitatively. These choices are correlated with population density, control schemes and other factors. Comparing the real data and the results from the model, one can also estimate the size of the actual population susceptible to the disease. Rescaling the real data, a reasonably good quantitative agreement with the simulation results is obtained.
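
    The key ingredient of the Euclidean network model, long-range links drawn with probability P(l) ∝ l^(-δ) in addition to nearest neighbors, can be sketched as follows (a 1-D ring for simplicity; the function and defaults are illustrative, not the authors' code):

    ```python
    import numpy as np

    def long_range_links(n_nodes, delta, n_links, seed=0):
        """Sample extra links on a ring of n_nodes sites, with link length l
        drawn from P(l) proportional to l**(-delta)."""
        rng = np.random.default_rng(seed)
        lengths = np.arange(2, n_nodes // 2)
        p = lengths.astype(float) ** (-delta)
        p /= p.sum()                                   # normalize P(l)
        starts = rng.integers(n_nodes, size=n_links)
        ls = rng.choice(lengths, size=n_links, p=p)
        return [(int(i), int((i + l) % n_nodes)) for i, l in zip(starts, ls)]
    ```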

  4. Study nonlinear dynamics of stratospheric ozone concentration at Pakistan Terrestrial region

    NASA Astrophysics Data System (ADS)

    Jan, Bulbul; Zai, Muhammad Ayub Khan Yousuf; Afradi, Faisal Khan; Aziz, Zohaib

    2018-03-01

    This study investigates the nonlinear dynamics of the stratospheric ozone layer over the Pakistan atmospheric region. Ozone is now considered one of the most important issues in the world because of its diverse effects on the Earth's biosphere, including human health, ecosystems, marine life, agricultural yield and climate change. This paper therefore deals with total monthly time series data of stratospheric ozone over the Pakistan atmospheric region from 1970 to 2013. Two approaches, basic statistical analysis and the fractal dimension (D), have been adopted to study the nature of the nonlinear dynamics of the stratospheric ozone level. The results show that the Hurst exponent values from both fractal dimension methods reveal anti-persistent (negatively correlated) behavior, i.e. a decreasing trend for all lags, and that rescaled range analysis is more appropriate than detrended fluctuation analysis. For the seasonal time series, all months follow anti-persistent behavior except November, which shows persistent behavior, i.e. an increasing trend. The normality test statistics also confirm the nonlinear behavior of ozone, and rejection of the hypothesis provides strong evidence of the complexity of the data. This study will be useful to future researchers in the field for verifying the complex nature of stratospheric ozone.

  5. Dynamic range adaptation in primary motor cortical populations

    PubMed Central

    Rasmussen, Robert G; Schwartz, Andrew; Chase, Steven M

    2017-01-01

    Neural populations from various sensory regions demonstrate dynamic range adaptation in response to changes in the statistical distribution of their input stimuli. These adaptations help optimize the transmission of information about sensory inputs. Here, we show a similar effect in the firing rates of primary motor cortical cells. We trained monkeys to operate a brain-computer interface in both two- and three-dimensional virtual environments. We found that neurons in primary motor cortex exhibited a change in the amplitude of their directional tuning curves between the two tasks. We then leveraged the simultaneous nature of the recordings to test several hypotheses about the population-based mechanisms driving these changes and found that the results are most consistent with dynamic range adaptation. Our results demonstrate that dynamic range adaptation is neither limited to sensory regions nor to rescaling of monotonic stimulus intensity tuning curves, but may rather represent a canonical feature of neural encoding. DOI: http://dx.doi.org/10.7554/eLife.21409.001 PMID:28417848

  6. Radial rescaling approach for the eigenvalue problem of a particle in an arbitrarily shaped box.

    PubMed

    Lijnen, Erwin; Chibotaru, Liviu F; Ceulemans, Arnout

    2008-01-01

    In the present work we introduce a methodology for solving a quantum billiard with Dirichlet boundary conditions. The procedure starts from the exactly known solutions for the particle in a circular disk, which are subsequently radially rescaled in such a way that they obey the new boundary conditions. In this way one constructs a complete basis set which can be used to obtain the eigenstates and eigenenergies of the corresponding quantum billiard to a high level of precision. Test calculations for several regular polygons show the efficiency of the method which often requires one or two basis functions to describe the lowest eigenstates with high accuracy.

  7. Vegetation Cover Analysis in Shaanxi Province of China Based on Grid Pixel Trend Analysis and Stability Evaluation

    NASA Astrophysics Data System (ADS)

    Yue, H.; Liu, Y.

    2018-04-01

    As a key factor in the biogeochemical cycles on which human existence depends, terrestrial vegetation is vulnerable to the natural environment and to human activities, with obvious temporal and spatial characteristics. Changes in vegetation cover affect ecological balance and environmental quality to a great extent, and research on the causes of and factors influencing vegetation cover has therefore become a focus of attention for scholars worldwide. As human activities and the natural environment have evolved, vegetation coverage in Shaanxi has changed accordingly. Using MODIS NDVI time series data for 2000-2014 and the methods of per-pixel trend analysis, stability evaluation, rescaled range analysis and correlation analysis, we studied the spatial and temporal variation of vegetation in Shaanxi province over the past 15 years and the influence of climatic factors on NDVI changes. The results show that NDVI in Shaanxi province increased by 0.081 over the past 15 years; the increase in northern Shaanxi was obvious, negative growth was found in some areas of Guanzhong, and NDVI in southern Shaanxi remained at a high overall level. The trend of vegetation change in Shaanxi province shows obvious spatial differences: most of the province shows a slight improving tendency, with many obviously improved areas in northern Shaanxi, a decrease in vegetated area in the Guanzhong region, and a small overall range of vegetation variation across the province. The most stable areas are mainly concentrated in the south, southern Yan'an and Yulin, while the Xi'an and Weinan areas changed greatly. Over the past 15 years, temperature and precipitation in Shaanxi have both shown increasing trends, of 0.48 °C per decade and 69.5 mm per year respectively, and vegetation NDVI is more closely related to the average annual rainfall.

  8. Historical foundations and future directions in macrosystems ecology.

    PubMed

    Rose, Kevin C; Graves, Rose A; Hansen, Winslow D; Harvey, Brian J; Qiu, Jiangxiao; Wood, Stephen A; Ziter, Carly; Turner, Monica G

    2017-02-01

    Macrosystems ecology is an effort to understand ecological processes and interactions at the broadest spatial scales and has potential to help solve globally important social and ecological challenges. It is important to understand the intellectual legacies underpinning macrosystems ecology: How the subdiscipline fits within, builds upon, differs from and extends previous theories. We trace the rise of macrosystems ecology with respect to preceding theories and present a new hypothesis that integrates the multiple components of macrosystems theory. The spatio-temporal anthropogenic rescaling (STAR) hypothesis suggests that human activities are altering the scales of ecological processes, resulting in interactions at novel space-time scale combinations that are diverse and predictable. We articulate four predictions about how human actions are "expanding", "shrinking", "speeding up" and "slowing down" ecological processes and interactions, and thereby generating new scaling relationships for ecological patterns and processes. We provide examples of these rescaling processes and describe ecological consequences across terrestrial, freshwater and marine ecosystems. Rescaling depends in part on characteristics including connectivity, stability and heterogeneity. Our STAR hypothesis challenges traditional assumptions about how the spatial and temporal scales of processes and interactions operate in different types of ecosystems and provides a lens through which to understand macrosystem-scale environmental change. © 2016 John Wiley & Sons Ltd/CNRS.

  9. Droplet breakup driven by shear thinning solutions in a microfluidic T-junction

    NASA Astrophysics Data System (ADS)

    Chiarello, Enrico; Gupta, Anupam; Mistura, Giampaolo; Sbragaglia, Mauro; Pierno, Matteo

    2017-12-01

    Droplet-based microfluidics has turned out to be an efficient and adjustable platform for digital analysis, encapsulation of cells, drug formulation, and polymerase chain reaction. Typically, for most biomedical applications, the handling of complex, non-Newtonian fluids is involved, e.g., synovial and salivary fluids, collagen, and gel scaffolds. In this study, we investigate the problem of droplet formation occurring in a microfluidic T-shaped junction when the continuous phase is made of shear thinning liquids. First, we review the breakup process in detail, providing extensive, side-by-side comparisons between Newtonian and non-Newtonian liquids over unexplored ranges of flow conditions and viscous responses. The non-Newtonian liquid carrying the droplets is made of Xanthan solutions, a stiff, rodlike polysaccharide displaying a marked shear thinning rheology. By defining an effective Capillary number, a simple yet effective methodology is used to account for the shear-dependent viscous response occurring at breakup. The droplet size can be predicted over a wide range of flow conditions simply by knowing the rheology of the bulk continuous phase. Experimental results are complemented with numerical simulations of purely shear thinning fluids using lattice Boltzmann models. The good agreement between the experimental and numerical data confirms the validity of the proposed rescaling with the effective Capillary number.

  10. Comparison of detrending methods for fluctuation analysis in hydrology

    NASA Astrophysics Data System (ADS)

    Zhang, Qiang; Zhou, Yu; Singh, Vijay P.; Chen, Yongqin David

    2011-03-01

    Trends within a hydrologic time series can significantly influence the scaling results of fluctuation analysis, such as rescaled range (RS) analysis and (multifractal) detrended fluctuation analysis (MF-DFA). Therefore, removal of trends is important in the study of scaling properties of the time series. In this study, three detrending methods, including the adaptive detrending algorithm (ADA), the Fourier-based method, and the average removing technique, were evaluated by analyzing numerically generated series and observed streamflow series with an obvious, relatively regular periodic trend. Results indicated that: (1) the Fourier-based detrending method and ADA are similar in detrending practice, and given proper parameters these two methods can produce similarly satisfactory results; (2) series detrended by the Fourier-based method and ADA lose the fluctuation information at larger time scales, and the location of crossover points is heavily impacted by the chosen parameters of these two methods; and (3) the average removing method has an advantage over the other two methods in that the fluctuation information at larger time scales is kept well, an indication of relatively reliable performance in detrending. In addition, the average removing method performed reasonably well in detrending a time series with regular periods or trends. In this sense, the average removing method should be preferred in the study of scaling properties of hydrometeorological series with relatively regular periodic trends using MF-DFA.
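
    The average removing technique favored by this comparison is easy to state: subtract the long-term mean of each phase of the periodic cycle (e.g., each calendar month in a monthly streamflow series) before applying MF-DFA. A minimal sketch, with illustrative names:

    ```python
    import numpy as np

    def remove_average_cycle(x, period=12):
        """Subtract the mean of each phase of a periodic cycle from a series."""
        x = np.asarray(x, dtype=float)
        out = x.copy()
        for phase in range(period):
            idx = np.arange(phase, len(x), period)
            out[idx] -= x[idx].mean()     # remove, e.g., the monthly climatology
        return out
    ```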

  11. Nonadiabatic laser-induced alignment of molecules: Reconstructing ⟨cos²θ⟩ directly from ⟨cos²θ_2D⟩ by Fourier analysis.

    PubMed

    Søndergaard, Anders Aspegren; Shepperson, Benjamin; Stapelfeldt, Henrik

    2017-07-07

    We present an efficient, noise-robust method based on Fourier analysis for reconstructing the three-dimensional measure of the alignment degree, ⟨cos²θ⟩, directly from its two-dimensional counterpart, ⟨cos²θ_2D⟩. The method applies to nonadiabatic alignment of linear molecules induced by a linearly polarized, nonresonant laser pulse. Our theoretical analysis shows that the Fourier transform of the time-dependent ⟨cos²θ_2D⟩ trace over one molecular rotational period contains additional frequency components compared to the Fourier transform of ⟨cos²θ⟩. These additional frequency components can be identified and removed from the Fourier spectrum of ⟨cos²θ_2D⟩. By rescaling of the remaining frequency components, the Fourier spectrum of ⟨cos²θ⟩ is obtained and, finally, ⟨cos²θ⟩ is reconstructed through inverse Fourier transformation. The method allows the reconstruction of the ⟨cos²θ⟩ trace from a measured ⟨cos²θ_2D⟩ trace, which is the typical observable of many experiments, and thereby provides direct comparison to calculated ⟨cos²θ⟩ traces, which is the commonly used alignment metric in theoretical descriptions. We illustrate our method by applying it to the measurement of nonadiabatic alignment of I₂ molecules. In addition, we present an efficient algorithm for calculating the matrix elements of cos²θ_2D and any other observable in the symmetric top basis. These matrix elements are required in the rescaling step, and they allow for highly efficient numerical calculation of ⟨cos²θ_2D⟩ and ⟨cos²θ⟩ in general.

  12. Discrete Self-Similarity in Interfacial Hydrodynamics and the Formation of Iterated Structures.

    PubMed

    Dallaston, Michael C; Fontelos, Marco A; Tseluiko, Dmitri; Kalliadasis, Serafim

    2018-01-19

    The formation of iterated structures, such as satellite and subsatellite drops, filaments, and bubbles, is a common feature in interfacial hydrodynamics. Here we undertake a computational and theoretical study of their origin in the case of thin films of viscous fluids that are destabilized by long-range molecular or other forces. We demonstrate that iterated structures appear as a consequence of discrete self-similarity, where certain patterns repeat themselves, subject to rescaling, periodically in a logarithmic time scale. The result is an infinite sequence of ridges and filaments with similarity properties. The character of these discretely self-similar solutions as the result of a Hopf bifurcation from ordinarily self-similar solutions is also described.
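
    Schematically, the distinction drawn here is between continuous and discrete self-similarity near a singularity time t_c; a generic formulation, not the paper's specific thin-film equations:

    ```latex
    h(x,t) = \tau^{\alpha}\, H\!\left(\frac{x}{\tau^{\beta}}\right),
    \qquad \tau = t_c - t
    \qquad \text{(continuous self-similarity)}

    h(x,t) = \tau^{\alpha}\, H\!\left(\frac{x}{\tau^{\beta}},\, \ln\tau\right),
    \qquad H(\,\cdot\,, s + P) = H(\,\cdot\,, s)
    \qquad \text{(discrete self-similarity)}
    ```

    In the discrete case the pattern repeats, rescaled, each time ln τ advances by the period P, which is exactly the periodic repetition in logarithmic time described above.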

  13. Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).

    PubMed

    Yang, Owen; Choi, Bernard

    2013-01-01

    To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach using the Graphics Processing Unit (GPU) to accelerate rescaling of single Monte Carlo runs so as to rapidly calculate diffuse reflectance values for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude as compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory currently is a limiting factor with GPU-based calculations. However, for calculation of multiple diffuse reflectance values, our GPU-based approach still can lead to processing that is ~3400 times faster than other GPU-based approaches.
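
    One widely used form of single-run rescaling ("white" Monte Carlo) stores the total path length of every remitted photon from one absorption-free simulation and reweights for any absorption coefficient via Beer-Lambert attenuation; a minimal sketch of that idea with illustrative names (the paper's GPU implementation and rescaling scheme may differ):

    ```python
    import numpy as np

    def rescaled_reflectance(path_lengths_cm, mu_a_per_cm, n_launched):
        """Diffuse reflectance for a given absorption coefficient, obtained by
        reweighting remitted photons from a single absorption-free run."""
        w = np.exp(-mu_a_per_cm * np.asarray(path_lengths_cm))  # Beer-Lambert weights
        return w.sum() / n_launched

    # One stored run can then be swept over many optical properties:
    # refl = [rescaled_reflectance(paths, mu_a, N) for mu_a in np.linspace(0.01, 1.0, 100)]
    ```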

  14. Halo mass and weak galaxy-galaxy lensing profiles in rescaled cosmological N-body simulations

    NASA Astrophysics Data System (ADS)

    Renneby, Malin; Hilbert, Stefan; Angulo, Raúl E.

    2018-05-01

    We investigate 3D density and weak lensing profiles of dark matter haloes predicted by a cosmology-rescaling algorithm for N-body simulations. We extend the rescaling method of Angulo & White (2010) and Angulo & Hilbert (2015) to improve its performance on intra-halo scales by using models for the concentration-mass-redshift relation based on excursion set theory. The accuracy of the method is tested with numerical simulations carried out with different cosmological parameters. We find that predictions for median density profiles are accurate to better than ~5% for haloes with masses of 10^12.0-10^14.5 h⁻¹ M⊙ for radii 0.05 < r/r_200m < 0.5, and for cosmologies with Ω_m ∈ [0.15, 0.40] and σ_8 ∈ [0.6, 1.0]. For larger radii, 0.5 < r/r_200m < 5, the accuracy degrades to ~20%, due to inaccurate modelling of the cosmological and redshift dependence of the splashback radius. For changes in cosmology allowed by current data, the residuals decrease to ≲2% up to scales twice the virial radius. We illustrate the usefulness of the method by estimating the mean halo mass of a mock galaxy group sample. We find that the algorithm's accuracy is sufficient for current data. Improvements in the algorithm, particularly in the modelling of baryons, are likely required for interpreting future (dark energy task force stage IV) experiments.

  15. An early prediction of 25th solar cycle using Hurst exponent

    NASA Astrophysics Data System (ADS)

    Singh, A. K.; Bhargawa, Asheesh

    2017-11-01

    The analysis of long memory processes in solar activity, space weather and other geophysical phenomena has remained a major issue even after the availability of sufficient data. We have examined the data of various solar parameters, namely sunspot numbers, 10.7 cm radio flux, solar magnetic field, proton flux and Alfven Mach number, observed for the years 1976-2016. We have performed a statistical test for the persistence of solar activity based on the value of the Hurst exponent (H), estimated with one of the most classical applied methods, rescaled range analysis. We have discussed the efficiency of this methodology as well as its predictive content for the next solar cycle based on long-term memory. In the present study, Hurst exponent analysis has been used to investigate the persistence of the five solar activity parameters mentioned above, and a simplex projection analysis has been used to predict the ascension time and the maximum number of counts for the 25th solar cycle. For the available dataset covering 1976-2016, we have calculated H = 0.86 and 0.82 for sunspot number and 10.7 cm radio flux, respectively. Further, we have calculated the maximum number of counts for sunspot numbers and the F10.7 cm index as 102.8 ± 24.6 and 137.25 ± 8.9, respectively. Using the simplex projection analysis, we forecast that solar cycle 25 will start in January 2021 and last until September 2031, with its maximum in June 2024.

  16. Pressure Effect on the Boson Peak in Deeply Cooled Confined Water: Evidence of a Liquid-Liquid Transition.

    PubMed

    Wang, Zhe; Kolesnikov, Alexander I; Ito, Kanae; Podlesnyak, Andrey; Chen, Sow-Hsin

    2015-12-04

    The boson peak in deeply cooled water confined in nanopores is studied to examine the liquid-liquid transition (LLT). Below ∼180 K, the boson peaks at pressures P higher than ∼3.5 kbar are evidently distinct from those at low pressures by higher mean frequencies and lower heights. Moreover, the higher-P boson peaks can be rescaled to a master curve while the lower-P boson peaks can be rescaled to a different one. These phenomena agree with the existence of two liquid phases with different densities and local structures and the associated LLT in the measured (P, T) region. In addition, the P dependence of the librational band also agrees with the above conclusion.

  17. Pressure Effect on the Boson Peak in Deeply Cooled Confined Water: Evidence of a Liquid-Liquid Transition

    DOE PAGES

    Wang, Zhe; Kolesnikov, Alexander I.; Ito, Kanae; ...

    2015-12-03

    We studied the boson peak in deeply cooled water confined in nanopores in order to examine the liquid-liquid transition (LLT). Below ~180 K, the boson peaks at pressures P higher than ~3.5 kbar are evidently distinct from those at low pressures by higher mean frequencies and lower heights. Moreover, the higher-P boson peaks can be rescaled to a master curve while the lower-P boson peaks can be rescaled to a different one. These phenomena agree with the existence of two liquid phases with different densities and local structures and the associated LLT in the measured (P, T) region. Additionally, the P dependence of the librational band also agrees with the above conclusion.

  18. A study of self organized criticality in ion temperature gradient mode driven gyrokinetic turbulence

    NASA Astrophysics Data System (ADS)

    Mavridis, M.; Isliker, H.; Vlahos, L.; Görler, T.; Jenko, F.; Told, D.

    2014-10-01

    An investigation of the characteristics of self organized criticality (Soc) in ITG mode driven turbulence is made, with the use of various statistical tools (histograms, power spectra, Hurst exponents estimated with the rescaled range analysis, and the structure function method). For this purpose, local non-linear gyrokinetic simulations of the cyclone base case scenario are performed with the GENE software package. Although most authors concentrate on global simulations, which seem to be a better choice for such an investigation, we use local simulations in an attempt to study the locally underlying mechanisms of Soc. We also study the structural properties of radially extended structures, with several tools (fractal dimension estimate, cluster analysis, and two-dimensional autocorrelation function), in order to explore whether they can be characterized as avalanches. We find that, for large enough driving temperature gradients, the local simulations exhibit most of the features of Soc, with the exception of the probability distributions of observables, which show a tail, yet not of power-law form. The radial structures have the same radial extent at all temperature gradients examined; radial motion (transport), though, appears only at large temperature gradients, in which case the radial structures can be interpreted as avalanches.

  19. Quantification of scaling exponents and dynamical complexity of microwave refractivity in a tropical climate

    NASA Astrophysics Data System (ADS)

    Fuwape, Ibiyinka A.; Ogunjo, Samuel T.

    2016-12-01

    The radio refractivity index is used to quantify the effect of atmospheric parameters on communication systems. The scaling and dynamical complexities of radio refractivity across different climatic zones of Nigeria have been studied. The scaling property of the radio refractivity across Nigeria was estimated from the Hurst exponent obtained using two different scaling methods, namely the rescaled range (R/S) analysis and the detrended fluctuation analysis (DFA). The delay vector variance (DVV), largest Lyapunov exponent (λ1) and correlation dimension (D2) methods were used to investigate nonlinearity, and the results confirm the presence of a deterministic nonlinear profile in the radio refractivity time series. The recurrence quantification analysis (RQA) was used to quantify the degree of chaoticity in the radio refractivity across the different climatic zones. RQA was found to be a good measure for identifying unique fingerprints and signatures of chaotic time series data. Microwave radio refractivity was found to be persistent and chaotic in all the study locations. The dynamics of radio refractivity increase in complexity and chaoticity from the coastal region towards the Sahelian climate. The design, development and deployment of robust and reliable microwave communication links in the region will be greatly affected by the chaotic nature of radio refractivity.

  20. A study of self organized criticality in ion temperature gradient mode driven gyrokinetic turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mavridis, M.; Isliker, H.; Vlahos, L.

    2014-10-15

    An investigation of the characteristics of self organized criticality (Soc) in ITG mode driven turbulence is made, with the use of various statistical tools (histograms, power spectra, Hurst exponents estimated with the rescaled range analysis, and the structure function method). For this purpose, local non-linear gyrokinetic simulations of the cyclone base case scenario are performed with the GENE software package. Although most authors concentrate on global simulations, which seem to be a better choice for such an investigation, we use local simulations in an attempt to study the locally underlying mechanisms of Soc. We also study the structural properties of radially extended structures, with several tools (fractal dimension estimate, cluster analysis, and two-dimensional autocorrelation function), in order to explore whether they can be characterized as avalanches. We find that, for large enough driving temperature gradients, the local simulations exhibit most of the features of Soc, with the exception of the probability distributions of observables, which show a tail, yet not of power-law form. The radial structures have the same radial extent at all temperature gradients examined; radial motion (transport), though, appears only at large temperature gradients, in which case the radial structures can be interpreted as avalanches.

  1. Domain wall motion in ferroelectrics: Barkhausen noise

    NASA Astrophysics Data System (ADS)

    Shur, V.; Rumyantsev, E.; Kozhevnikov, V.; Nikolaeva, E.; Shishkin, E.

    2002-03-01

    The switching current noise has been recorded during polarization reversal in single-crystalline gadolinium molybdate (GMO) and lithium tantalate (LT). Analysis of Barkhausen noise (BN) data allows the noise types to be classified by determining the critical indexes and fractal dimensions. BN is manifested as short pulses during polarization reversal. We have analyzed the BN data recorded in GMO and LT with various types of controlled domain structure. The data treatment in terms of the probability distributions of duration, area and energy of individual pulses reveals the critical behavior typical of fractal records in time. We used the Fourier transform and Hurst's rescaled range analysis to obtain the Hurst factor and fractal dimension and to classify the noise types. We investigated by computer simulation the mechanism of sideways motion of 180° domain walls by nucleation at the wall, taking into account the nucleus-nucleus interaction. It was shown that the moving domain walls display a fractal shape and that their motion is accompanied by flicker noise, which is in accord with the experimental data. The research was made possible in part by the Programs "Basic Research in Russian Universities" and "Priority Research in High School. Electronics", by Grant No. 01-02-17443 of RFBR, and by Award No. REC-005 of CRDF.

  2. Statistical persistence of air pollutants (O3, SO2, NO2 and PM10) in Mexico City

    NASA Astrophysics Data System (ADS)

    Meraz, M.; Rodriguez, E.; Femat, R.; Echeverria, J. C.; Alvarez-Ramirez, J.

    2015-06-01

    The rescaled range (R/S) analysis was used to analyze the statistical persistence of air pollutants in Mexico City. The air-pollution time series consisted of hourly observations of ozone, nitrogen dioxide, sulfur dioxide and particulate matter obtained at the Mexico City downtown monitoring station during 1999-2014. The results showed that long-range persistence is not a uniform property over a wide range of time scales, from days to months. In fact, although the air pollutant concentrations exhibit an average persistent behavior, environmental (e.g., daily and yearly) and socio-economic (e.g., daily and weekly) cycles are reflected in the dependence of the persistence strength as quantified in terms of the Hurst exponent. It was also found that the Hurst exponent exhibits time variations, with the ozone and nitrogen dioxide concentrations presenting some regularity, such as annual cycles. The persistence of the pollutant concentration dynamics increased during the rainy season and decreased during the dry season. The time and scale dependences of the persistence properties provide some insights into the mechanisms involved in the internal dynamics of the Mexico City atmosphere for accumulating and dissipating dangerous air pollutants. While in the short term the dynamics of individual pollutants seem to be governed by specific mechanisms, in the long term (at monthly and longer scales) meteorological and seasonal mechanisms involved in atmospheric recirculation seem to dominate the dynamics of all air pollutant concentrations.

  3. Particle identification with neural networks using a rotational invariant moment representation

    NASA Astrophysics Data System (ADS)

    Sinkus, R.; Voss, T.

    1997-02-01

    A feed-forward neural network is used to identify electromagnetic particles based upon their showering properties within a segmented calorimeter. The novel feature is the expansion of the energy distribution in terms of moments of the so-called Zernike functions, which are invariant under rotation. The multidimensional input distribution for the neural network is transformed via a principal component analysis and rescaled by the respective variances to ensure input values of the order of one. This results in better performance in identifying and separating electromagnetic from hadronic particles, especially at low energies.
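
    A minimal sketch of the preprocessing described (projection onto principal components, then rescaling each component by its variance so every network input is of order one); this is illustrative, not the authors' implementation.

```python
import numpy as np

def pca_variance_rescale(features):
    """Decorrelate input vectors with a principal component analysis and
    rescale each component by its standard deviation, so values fed to the
    network are of order one."""
    centered = features - features.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues ascending
    projected = centered @ eigvecs                # decorrelated components
    return projected / np.sqrt(eigvals + 1e-12)   # unit variance per component

# Example: 1000 correlated 8-dimensional "moment" vectors.
rng = np.random.default_rng(1)
raw = rng.standard_normal((1000, 8)) @ rng.standard_normal((8, 8))
print(pca_variance_rescale(raw).std(axis=0))      # ~1 in every component
```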

  4. Deviations in expected price impact for small transaction volumes under fee restructuring

    NASA Astrophysics Data System (ADS)

    Harvey, M.; Hendricks, D.; Gebbie, T.; Wilcox, D.

    2017-04-01

    We report on the occurrence of an anomaly in the price impacts of small transaction volumes following a change in the fee structure of an electronic market. We first review evidence for the existence of a master curve for price impact on the Johannesburg Stock Exchange (JSE). On attempting to re-estimate a master curve after fee reductions, it is found that the price impact corresponding to smaller volume trades is greater than expected relative to prior estimates for a range of listed stocks. We show that a master curve for price impact can be found following rescaling by an appropriate liquidity proxy, providing a means for practitioners to approximate price impact curves without onerous processing of tick data.

  5. Strongly nonlinear composite dielectrics: A perturbation method for finding the potential field and bulk effective properties

    NASA Astrophysics Data System (ADS)

    Blumenfeld, Raphael; Bergman, David J.

    1991-10-01

    A class of strongly nonlinear composite dielectrics is studied. We develop a general method to reduce the scalar-potential-field problem to the solution of a set of linear Poisson-type equations in rescaled coordinates. The method is applicable for a large variety of nonlinear materials. For a power-law relation between the displacement and the electric fields, it is used to solve explicitly for the value of the bulk effective dielectric constant ε_e to second order in the fluctuations of its local value. A similar procedure for the vector potential, whose curl is the displacement field, yields a quantity analogous to the inverse dielectric constant in linear dielectrics. The bulk effective dielectric constant is given by a set of linear integral expressions in the rescaled coordinates and exact bounds for it are derived.

  6. Use of an OSSE to Evaluate Background Error Covariances Estimated by the 'NMC Method'

    NASA Technical Reports Server (NTRS)

    Errico, Ronald M.; Prive, Nikki C.; Gu, Wei

    2014-01-01

    The NMC method has proven utility for prescribing approximate background-error covariances required by variational data assimilation systems. Here, untuned NMC-method estimates are compared with explicitly determined error covariances produced within an OSSE context by exploiting availability of the true simulated states. Such a comparison provides insights into what kind of rescaling is required to render the NMC method estimates usable. It is shown that rescaling of variances and directional correlation lengths depends greatly on both pressure and latitude. In particular, some scaling coefficients appropriate in the Tropics are the reciprocal of those in the Extratropics. Also, the degree of dynamic balance is grossly overestimated by the NMC method. These results agree with previous examinations of the NMC method which used ensembles as an alternative for estimating background-error statistics.

  7. Simulation of supersonic turbulent flow in the vicinity of an inclined backward-facing step

    NASA Astrophysics Data System (ADS)

    El-Askary, W. A.

    2011-08-01

    Large eddy simulation (LES) is a viable and powerful tool to analyse unsteady three-dimensional turbulent flows. In this article, the method of LES is used to compute a plane turbulent supersonic boundary layer subjected to different pressure gradients. The pressure gradients are generated by allowing the flow to pass in the vicinity of an expansion-compression ramp (inclined backward-facing step with leeward-face angle of 25°) for an upstream Mach number of 2.9. The inflow boundary condition is the main problem for all turbulent wall-bounded flows. An approach to solve this problem is to extract instantaneous velocity, temperature and density data from an auxiliary simulation (inflow generator). To generate an appropriate realistic inflow condition to the inflow generator itself, the rescaling technique for compressible flows is used. In this method, Morkovin's hypothesis, in which the total temperature fluctuations are neglected compared with the static temperature fluctuations, is applied to rescale and generate the temperature profile at the inlet. This technique was successfully developed and applied by the present author for an LES of a subsonic three-dimensional boundary layer over a smooth curved ramp. The present LES results are compared with the available experimental data as well as numerical data. The positive impact of the rescaling formulation of the temperature is proven by the convincing agreement of the obtained results with the experimental data, compared with published numerical work, and sheds light on the quality of the developed compressible inflow generator.

  8. Statistics of Smoothed Cosmic Fields in Perturbation Theory. I. Formulation and Useful Formulae in Second-Order Perturbation Theory

    NASA Astrophysics Data System (ADS)

    Matsubara, Takahiko

    2003-02-01

    We formulate a general method for perturbative evaluations of statistics of smoothed cosmic fields and provide useful formulae for application of the perturbation theory to various statistics. This formalism is an extensive generalization of the method used by Matsubara, who derived a weakly nonlinear formula of the genus statistic in a three-dimensional density field. After describing the general method, we apply the formalism to a series of statistics, including genus statistics, level-crossing statistics, Minkowski functionals, and a density extrema statistic, regardless of the dimensions in which each statistic is defined. The relation between the Minkowski functionals and other geometrical statistics is clarified. These statistics can be applied to several cosmic fields, including the three-dimensional density field, the three-dimensional velocity field, the two-dimensional projected density field, and so forth. The results are detailed for the second-order theory of the formalism. The effect of the bias is discussed. The statistics of smoothed cosmic fields as functions of the threshold rescaled by volume fraction are discussed in the framework of second-order perturbation theory. In CDM-like models, their functional deviations from linear predictions plotted against the rescaled threshold are generally much smaller than those plotted against the direct threshold. There is still a slight meatball shift against the rescaled threshold, which is characterized by asymmetry in the depths of troughs in the genus curve. A theory-motivated asymmetry factor in the genus curve is proposed.
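
    The threshold "rescaled by volume fraction" used above is conventionally defined as follows (the standard construction, restated here for the reader; notation ours): the direct threshold ν is replaced by a value ν_f chosen so that the fraction f of volume above the threshold equals the Gaussian expectation,

```latex
f = \frac{1}{\sqrt{2\pi}} \int_{\nu_f}^{\infty} e^{-t^{2}/2}\,\mathrm{d}t
  = \frac{1}{2}\,\operatorname{erfc}\!\left(\frac{\nu_f}{\sqrt{2}}\right),
```

    which makes statistics plotted against ν_f insensitive to any local monotonic transformation of the density field.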

  9. Version 3 of the SMAP Level 4 Soil Moisture Product

    NASA Technical Reports Server (NTRS)

    Reichle, Rolf; Liu, Qing; Ardizzone, Joe; Crow, Wade; De Lannoy, Gabrielle; Kolassa, Jana; Kimball, John; Koster, Randy

    2017-01-01

    The NASA Soil Moisture Active Passive (SMAP) Level 4 Soil Moisture (L4_SM) product provides 3-hourly, 9-km resolution, global estimates of surface (0-5 cm) and root zone (0-100 cm) soil moisture as well as related land surface states and fluxes from 31 March 2015 to present with a latency of 2.5 days. The ensemble-based L4_SM algorithm is a variant of the Goddard Earth Observing System version 5 (GEOS-5) land data assimilation system and ingests SMAP L-band (1.4 GHz) Level 1 brightness temperature observations into the Catchment land surface model. The soil moisture analysis is non-local (spatially distributed), performs downscaling from the 36-km resolution of the observations to that of the model, and respects the relative uncertainties of the modeled and observed brightness temperatures. Prior to assimilation, a climatological rescaling is applied to the assimilated brightness temperatures using a 6-year record of SMOS observations. A new feature in Version 3 of the L4_SM data product is the use of 2 years of SMAP observations for rescaling where SMOS observations are not available because of radio frequency interference, which expands the impact of SMAP observations on the L4_SM estimates into large regions of northern Africa and Asia. This presentation investigates the performance and data assimilation diagnostics of the Version 3 L4_SM data product. The L4_SM soil moisture estimates meet the 0.04 m³/m³ (unbiased) RMSE requirement. We further demonstrate that there is little bias in the soil moisture analysis. Finally, we illustrate where the assimilation system overestimates or underestimates the actual errors in the system.

  10. Emergent space-time via a geometric renormalization method

    NASA Astrophysics Data System (ADS)

    Rastgoo, Saeed; Requardt, Manfred

    2016-12-01

    We present a purely geometric renormalization scheme for metric spaces (including uncolored graphs), which consists of a coarse graining and a rescaling operation on such spaces. The coarse graining is based on the concept of quasi-isometry, which yields a sequence of discrete coarse grained spaces each having a continuum limit under the rescaling operation. We provide criteria under which such sequences do converge within a superspace of metric spaces, or may constitute the basin of attraction of a common continuum limit, which hopefully may represent our space-time continuum. We discuss some of the properties of these coarse grained spaces as well as their continuum limits, such as scale invariance and metric similarity, and show that different layers of space-time can carry different distance functions while being homeomorphic. Important tools in this analysis are the Gromov-Hausdorff distance functional for general metric spaces and the growth degree of graphs or networks. The whole construction is in the spirit of the Wilsonian renormalization group (RG). Furthermore, we introduce a physically relevant notion of dimension on the spaces of interest in our analysis, which, e.g., for regular lattices reduces to the ordinary lattice dimension. We show that this dimension is stable under the proposed coarse graining procedure as long as the latter is sufficiently local, i.e., quasi-isometric, and discuss the conditions under which this dimension is an integer. We comment on the possibility that the limit space may turn out to be fractal in case the dimension is noninteger. At the end of the paper we briefly mention the possibility that our network carries a translocal far order that leads to the concept of wormhole spaces and a scale dependent dimension if the coarse graining procedure is no longer local.

  11. Assessing Agreement Between Salivary Alpha Amylase Levels Collected by Passive Drool and Eluted Filter Paper in Adolescents With Cancer

    PubMed Central

    Ameringer, Suzanne; Munro, Cindy; Elswick, R.K.

    2014-01-01

    Purpose/Objectives To assess the validity of filter paper (FP) against the gold standard of passive drool (PD) for collecting salivary alpha amylase as a surrogate biomarker of psychological stress in adolescents with cancer. Design Part of a longitudinal, descriptive study of symptoms in adolescents with cancer during chemotherapy. Setting A pediatric hematology/oncology treatment center. Sample 33 saliva sample pairs from nine adolescents with cancer, aged 13–18 years. Methods Salivary alpha amylase was collected by PD and FP at four time points during a cycle of chemotherapy: days 1 (time 1) and 2 (time 2) of chemotherapy, days 7–10 (time 3), and day 1 of the next cycle (time 4). A random effects regression was used to assess the correlation between PD and FP values, and a Bland-Altman analysis was conducted to assess agreement between the values. Main Research Variables Salivary alpha amylase. Findings The estimated correlation between PD and FP values was r = 0.91, p < 0.001. Regression results were also used to rescale FP values to the levels of the PD values because the FP values were on a different scale than the PD values. The Bland-Altman analysis revealed that the agreement between the rescaled FP values and PD values was not satisfactory. Conclusions Eluted FP may not be a valid method for collecting salivary alpha amylase in adolescents with cancer. Implications for Nursing Psychological stress in adolescents with cancer may be linked to negative outcomes, such as greater symptom severity and post-traumatic stress disorder. Nurses need valid, efficient, biobehavioral measures to assess psychological stress in the clinical setting. PMID:22750901
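
    A minimal sketch of the Bland-Altman agreement computation referred to above, using the conventional 1.96 SD limits of agreement; the function name is illustrative.

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement statistics for paired measurements a and b:
    returns the mean difference (bias) and the 95% limits of agreement
    (bias +/- 1.96 * SD of the differences)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```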

  12. Developmental instability: measures of resistance and resilience using pumpkin (Cucurbita pepo L.)

    USGS Publications Warehouse

    Freeman, D. Carl; Brown, Michelle L.; Dobson, Melissa; Jordan, Yolanda; Kizy, Anne; Micallef, Chris; Hancock, Leandria C.; Graham, John H.; Emlen, John M.

    2003-01-01

    Fluctuating asymmetry measures random deviations from bilateral symmetry, and thus estimates developmental instability, the loss of ability by an organism to regulate its development. There have been few rigorous tests of this proposition. Regulation of bilateral symmetry must involve either feedback between the sides or independent regulation toward a symmetric set point. Either kind of regulation should decrease asymmetry over time, but only right–left feedback produces compensatory growth across sides, seen as antipersistent growth following perturbation. Here, we describe the developmental trajectories of perturbed and unperturbed leaves of pumpkin, Cucurbita pepo L., grown at three densities. Covering one side of a leaf with aluminium foil for 24 h perturbed leaf growth. Reduced growth on the perturbed side caused leaves to become more asymmetrical than unperturbed controls. After the treatment the size-corrected asymmetry decreased over time. In addition, rescaled range analysis showed that asymmetry was antipersistent rather than random, i.e. a fluctuation in one direction was likely to be followed by a fluctuation in the opposite direction. Development involves right–left feedback. This feedback reduced size-corrected asymmetry over time most strongly in the lowest density treatment, suggesting that developmental instability results from a lack of resilience rather than resistance.

  13. Seasonal differences in the subjective assessment of outdoor thermal conditions and the impact of analysis techniques on the obtained results

    NASA Astrophysics Data System (ADS)

    Kántor, Noémi; Kovács, Attila; Takács, Ágnes

    2016-11-01

    Wide research attention has been paid in the last two decades to the thermal comfort conditions of different outdoor and semi-outdoor urban spaces. Field studies were conducted in a wide range of geographical regions in order to investigate the relationship between the thermal sensation of people and thermal comfort indices. Researchers found that the original threshold values of these indices did not describe precisely the actual thermal sensation patterns of subjects, and they reported neutral temperatures that vary among nations and with the time of the year. For that reason, thresholds of some objective indices were rescaled and new thermal comfort categories were defined. This research investigates the outdoor thermal perception patterns of Hungarians regarding the Physiologically Equivalent Temperature (PET) index, based on more than 5800 questionnaires. The surveys were conducted in the city of Szeged on 78 days in spring, summer, and autumn. Various frequently applied analysis approaches (simple descriptive techniques, regression analysis, and probit models) were adopted to reveal seasonal differences in the thermal assessment of people. Thermal sensitivity and neutral temperatures were found to be significantly different, especially between summer and the two transient seasons. Challenges of international comparison are also emphasized, since the results prove that neutral temperatures obtained through different analysis techniques may be considerably different. The outcomes of this study underline the importance of the development of standard measurement and analysis methodologies in order to make future studies comparable, hereby facilitating the broadening of the common scientific knowledge about outdoor thermal comfort.
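
    One frequently applied approach mentioned above, regression analysis, yields the neutral temperature as the PET at which the fitted mean thermal sensation vote crosses zero; a minimal sketch follows, assuming a simple linear fit (the study also uses probit models, not shown here).

```python
import numpy as np

def neutral_temperature(pet, tsv):
    """Regress (mean) thermal sensation votes (TSV) on PET and return the
    neutral temperature, i.e. the PET at which the fitted TSV equals zero.
    The slope itself gauges thermal sensitivity."""
    slope, intercept = np.polyfit(pet, tsv, 1)
    return -intercept / slope
```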

  14. Bin Packing, Number Balancing, and Rescaling Linear Programs

    NASA Astrophysics Data System (ADS)

    Hoberg, Rebecca

    This thesis deals with several important algorithmic questions using techniques from diverse areas including discrepancy theory, machine learning and lattice theory. In Chapter 2, we construct an improved approximation algorithm for a classical NP-complete problem, the bin packing problem. In this problem, the goal is to pack items of sizes s_i ∈ [0,1] into as few bins as possible, where a set of items fits into a bin provided the sum of the item sizes is at most one. We give a polynomial-time rounding scheme for a standard linear programming relaxation of the problem, yielding a packing that uses at most OPT + O(log OPT) bins. This makes progress towards one of the "10 open problems in approximation algorithms" stated in the book of Shmoys and Williamson. In fact, based on related combinatorial lower bounds, Rothvoss conjectures that Θ(log OPT) may be a tight bound on the additive integrality gap of this LP relaxation. In Chapter 3, we give a new polynomial-time algorithm for linear programming. Our algorithm is based on the multiplicative weights update (MWU) method, which is a general framework that is currently of great interest in theoretical computer science. An algorithm for linear programming based on MWU was known previously, but was not polynomial time; we remedy this by alternating between an MWU phase and a rescaling phase. The rescaling methods we introduce improve upon previous methods by reducing the number of iterations needed until one can rescale, and they can be used for any algorithm with a similar rescaling structure. Finally, we note that the MWU phase of the algorithm has a simple interpretation as gradient descent of a particular potential function, and we show we can speed up this phase by walking in a direction that decreases both the potential function and its gradient. In Chapter 4, we show that an approximate oracle for Minkowski's theorem gives an approximate oracle for the number balancing problem (NBP), and conversely. Number balancing is the problem of minimizing |⟨a,x⟩| over x ∈ {-1,0,1}^n \ {0}, given a ∈ [0,1]^n. While an application of the pigeonhole principle shows that there always exists x with |⟨a,x⟩| ≤ O(√n/2^n), the best known algorithm only guarantees |⟨a,x⟩| ≤ 2^{-Θ(log² n)}. We show that an oracle for Minkowski's theorem with approximation factor ρ would give an algorithm for NBP that guarantees |⟨a,x⟩| ≤ 2^{-n^{Θ(1/ρ)}}. In particular, this would beat the bound of Karmarkar and Karp provided ρ ≤ O(log n/log log n). In the other direction, we prove that any polynomial-time algorithm for NBP that guarantees a solution of difference at most 2^{√n}/2^n would give a polynomial approximation for Minkowski's theorem as well as a polynomial-factor approximation algorithm for the Shortest Vector Problem.
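
    The pigeonhole step quoted above can be made explicit; the following is a simplified version that yields n/(2^n - 1) rather than the sharper O(√n/2^n), which requires an additional concentration argument.

```latex
% Since a \in [0,1]^n, every subset sum lies in [0, n]:
\langle a, \mathbf{1}_S \rangle \in [0, n] \qquad \text{for all } S \subseteq [n].
% There are 2^n subsets, so by pigeonhole two distinct S \neq T satisfy
\bigl|\langle a, \mathbf{1}_S \rangle - \langle a, \mathbf{1}_T \rangle\bigr|
  \le \frac{n}{2^n - 1},
% and x = \mathbf{1}_S - \mathbf{1}_T \in \{-1,0,1\}^n \setminus \{0\} achieves
|\langle a, x \rangle| \le \frac{n}{2^n - 1}.
```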

  15. Time rescaling and pattern formation in biological evolution.

    PubMed

    Igamberdiev, Abir U

    2014-09-01

    Biological evolution is analyzed as a process of continuous measurement in which biosystems interpret themselves in the environment resulting in changes of both. This leads to rescaling of internal time (heterochrony) followed by spatial reconstructions of morphology (heterotopy). The logical precondition of evolution is the incompleteness of biosystem's internal description, while the physical precondition is the uncertainty of quantum measurement. The process of evolution is based on perpetual changes in interpretation of information in the changing world. In this interpretation the external biospheric gradients are used for establishment of new features of organization. It is concluded that biological evolution involves the anticipatory epigenetic changes in the interpretation of genetic symbolism which cannot generally be forecasted but can provide canalization of structural transformations defined by the existing organization and leading to predictable patterns of form generation.

  16. Fast repurposing of high-resolution stereo video content for mobile use

    NASA Astrophysics Data System (ADS)

    Karaoglu, Ali; Lee, Bong Ho; Boev, Atanas; Cheong, Won-Sik; Gotchev, Atanas

    2012-06-01

    3D video content is captured and created mainly in high resolution targeting big cinema or home TV screens. For 3D mobile devices, equipped with small-size auto-stereoscopic displays, such content has to be properly repurposed, preferably in real time. The repurposing requires not only spatial resizing but also properly maintaining the output stereo disparity, as it should deliver realistic, pleasant and harmless 3D perception. In this paper, we propose an approach to adapt the disparity range of the source video to the comfort disparity zone of the target display. To achieve this, we adapt the scale and the aspect ratio of the source video. We aim at maximizing the disparity range of the retargeted content within the comfort zone, and minimizing the letterboxing of the cropped content. The proposed algorithm consists of five stages. First, we analyse the display profile, which characterises what 3D content can be comfortably observed on the target display. Then, we perform fast disparity analysis of the input stereoscopic content. Instead of returning the dense disparity map, it returns an estimate of the disparity statistics (min, max, mean and variance) per frame. Additionally, we detect scene cuts, where sharp transitions in disparities occur. Based on the estimated input and desired output disparity ranges, we derive the optimal cropping parameters and scale of the cropping window, which yield the targeted disparity range and minimize the area of cropped and letterboxed content. Once the rescaling and cropping parameters are known, we perform a resampling procedure using spline-based and perceptually optimized resampling (anti-aliasing) kernels, which also have a very efficient computational structure. Perceptual optimization is achieved through adjusting the cut-off frequency of the anti-aliasing filter to the throughput of the target display.
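
    The core arithmetic of the fourth stage (choosing the largest uniform scale at which the rescaled disparities still fit the comfort zone) can be sketched as follows; the sign convention (negative = crossed disparity, in front of the screen) and the function name are assumptions for illustration, since resizing a stereo pair by a factor s multiplies its pixel disparities by s.

```python
def retarget_scale(d_min_in, d_max_in, d_min_out, d_max_out):
    """Largest uniform image scale s such that the rescaled disparity range
    [s * d_min_in, s * d_max_in] stays inside the display comfort zone
    [d_min_out, d_max_out] (all disparities in pixels)."""
    candidates = []
    if d_max_in > 0:                              # uncrossed (behind screen)
        candidates.append(d_max_out / d_max_in)
    if d_min_in < 0:                              # crossed (in front of screen)
        candidates.append(d_min_out / d_min_in)   # both negative -> positive ratio
    return min(candidates) if candidates else 1.0

# Example: content with disparities [-30, 12] px, comfort zone [-10, 8] px.
print(retarget_scale(-30, 12, -10, 8))            # 1/3: limited by crossed range
```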

  17. A Dynamic Approach to Addressing Observation-Minus-Forecast Mean Differences in a Land Surface Skin Temperature Data Assimilation System

    NASA Technical Reports Server (NTRS)

    Draper, Clara; Reichle, Rolf; De Lannoy, Gabrielle; Scarino, Benjamin

    2015-01-01

    In land data assimilation, bias in the observation-minus-forecast (O-F) residuals is typically removed from the observations prior to assimilation by rescaling the observations to have the same long-term mean (and higher-order moments) as the corresponding model forecasts. Such observation rescaling approaches require a long record of observed and forecast estimates, and an assumption that the O-F mean differences are stationary. A two-stage observation bias and state estimation filter is presented as an alternative to observation rescaling that does not require a long data record or assume stationary O-F mean differences. The two-stage filter removes dynamic (nonstationary) estimates of the seasonal-scale O-F mean difference from the assimilated observations, allowing the assimilation to correct the model for synoptic-scale errors without adverse effects from observation biases. The two-stage filter is demonstrated by assimilating geostationary skin temperature (Tsk) observations into the Catchment land surface model. Global maps of the O-F mean differences are presented, and the two-stage filter is evaluated for one year over the Americas. The two-stage filter effectively removed the Tsk O-F mean differences; for example, the GOES-West O-F mean difference at 21:00 UTC was reduced from 5.1 K for a bias-blind assimilation to 0.3 K. Compared to independent in situ and remotely sensed Tsk observations, the two-stage assimilation reduced the unbiased Root Mean Square Difference (ubRMSD) of the modeled Tsk by 10% of the open-loop values.

  18. Initial conditions for accurate N-body simulations of massive neutrino cosmologies

    NASA Astrophysics Data System (ADS)

    Zennaro, M.; Bel, J.; Villaescusa-Navarro, F.; Carbone, C.; Sefusatti, E.; Guzzo, L.

    2017-04-01

    The set-up of the initial conditions in cosmological N-body simulations is usually implemented by rescaling the desired low-redshift linear power spectrum to the required starting redshift consistently with the Newtonian evolution of the simulation. The implementation of this practical solution requires more care in the context of massive neutrino cosmologies, mainly because of the non-trivial scale-dependence of the linear growth that characterizes these models. In this work, we consider a simple two-fluid, Newtonian approximation for cold dark matter and massive neutrino perturbations that can reproduce the cold matter linear evolution predicted by Boltzmann codes such as CAMB or CLASS to an accuracy of 0.1 per cent or better for all redshifts relevant to non-linear structure formation. We use this description, in the first place, to quantify the systematic errors induced by several approximations often assumed in numerical simulations, including the typical set-up of the initial conditions for massive neutrino cosmologies adopted in previous works. We then take advantage of the flexibility of this approach to rescale the late-time linear power spectra to the simulation initial redshift, in order to be as consistent as possible with the dynamics of the N-body code and the approximations it assumes. We implement our method in a public code (REPS, "rescaled power spectra for initial conditions with massive neutrinos", https://github.com/matteozennaro/reps) providing the initial displacements and velocities for cold dark matter and neutrino particles that allow accurate, i.e. 1 per cent level, numerical simulations for this cosmological scenario.
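
    Schematically, the rescaling of the late-time spectra described above amounts to the following (our summary notation, not the paper's):

```latex
P_X(k, z_{\mathrm{ini}}) = P_X(k, z_{\mathrm{target}})
\left[ \frac{D_X(k, z_{\mathrm{ini}})}{D_X(k, z_{\mathrm{target}})} \right]^{2},
\qquad X \in \{\mathrm{cdm}, \nu\},
```

    where D_X(k, z) is the scale-dependent growth factor obtained from the two-fluid system, chosen so that evolving the initial conditions with the Newtonian N-body dynamics reproduces the desired low-redshift spectra.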

  19. Stationary Random Metrics on Hierarchical Graphs Via (min,+)-type Recursive Distributional Equations

    NASA Astrophysics Data System (ADS)

    Khristoforov, Mikhail; Kleptsyn, Victor; Triestino, Michele

    2016-07-01

    This paper is inspired by the problem of understanding in a mathematical sense the Liouville quantum gravity on surfaces. Here we show how to define a stationary random metric on self-similar spaces which are the limit of nice finite graphs: these are the so-called hierarchical graphs. They possess a well-defined level structure and any level is built using a simple recursion. Stopping the construction at any finite level, we have a discrete random metric space when we set the edges to have random length (using a multiplicative cascade with fixed law m). We introduce a tool, the cut-off process, by means of which one finds that, renormalizing the sequence of metrics by an exponential factor, they converge in law to a non-trivial metric on the limit space. Such a limit law is stationary, in the sense that, glueing together a certain number of copies of the random limit space according to the combinatorics of the brick graph, the obtained random metric has the same law when rescaled by a random factor of law m. In other words, the stationary random metric is the solution of a distributional equation. When the measure m has continuous positive density on R_+, the stationary law is unique up to rescaling and any other distribution tends to a rescaled stationary law under the iterations of the hierarchical transformation. We also investigate topological and geometric properties of the random space when m is log-normal, detecting a phase transition influenced by the branching random walk associated to the multiplicative cascade.

  20. Fluid-driven cracks in an elastic matrix in the toughness-dominated limit

    PubMed Central

    Lai, Ching-Yao; Zheng, Zhong; Dressaire, Emilie

    2016-01-01

    The dynamics of fluid-driven cracks in an elastic matrix is studied experimentally. We report the crack radius R(t) as a function of time, as well as the crack shapes w(r,t) as a function of space and time. A dimensionless parameter, the pressure ratio Δp_f/Δp_v, is identified to gauge the relative importance of toughness (Δp_f) and viscous (Δp_v) effects. In our previous paper (Lai et al. 2015 Proc. R. Soc. A 471, 20150255. (doi:10.1098/rspa.2015.0255)), we investigated the viscous limit experimentally, where the toughness-related stresses are negligible for the crack propagation. In this paper, the experimental parameters, i.e. Young's modulus E of the gelatin, viscosity μ of the fracturing liquid and the injection flow rate Q, were chosen so that the viscous effects in the flow are negligible compared with the toughness effects, i.e. Δp_f/Δp_v ≫ 1. In this limit, the crack dynamics can be described by the toughness-dominated scaling laws, which give the crack radius R(t) ∝ t^{2/5} and the half maximum crack thickness W(t) ∝ t^{1/5}. The experimental results are in good agreement with the predictions of the toughness scaling laws: the experimental data for the crack radius R(t) for a wide range of parameters (E, μ, Q) collapse after being rescaled by the toughness scaling laws, and the rescaled crack shapes w(r,t) also collapse to a dimensionless shape, which demonstrates the self-similarity of the crack shape. The appropriate choice of the viscous or toughness scaling laws is important to accurately describe the crack dynamics. This article is part of the themed issue ‘Energy and the subsurface’. PMID:27597782

  1. Probabilistic Learning by Rodent Grid Cells

    PubMed Central

    Cheung, Allen

    2016-01-01

    Mounting evidence shows mammalian brains are probabilistic computers, but the specific cells involved remain elusive. Parallel research suggests that grid cells of the mammalian hippocampal formation are fundamental to spatial cognition, but their diverse response properties still defy explanation. No plausible model exists which explains stable grids in darkness for twenty minutes or longer, despite this being one of the first results ever published on grid cells. Similarly, no current explanation can tie together grid fragmentation and grid rescaling, which show very different forms of flexibility in grid responses when the environment is varied. Other properties such as attractor dynamics and grid anisotropy seem to be at odds with one another unless additional properties are assumed, such as a varying velocity gain. Modelling efforts have largely ignored the breadth of response patterns, while also failing to account for the disastrous effects of sensory noise during spatial learning and recall, especially in darkness. Here, published electrophysiological evidence from a range of experiments is reinterpreted using a novel probabilistic learning model, which shows that grid cell responses are accurately predicted by a probabilistic learning process. Diverse response properties of probabilistic grid cells are statistically indistinguishable from rat grid cells across key manipulations. A simple coherent set of probabilistic computations explains stable grid fields in darkness, partial grid rescaling in resized arenas, low-dimensional attractor grid cell dynamics, and grid fragmentation in hairpin mazes. The same computations also reconcile oscillatory dynamics at the single cell level with attractor dynamics at the cell ensemble level. Additionally, a clear functional role for boundary cells is proposed for spatial learning. These findings provide a parsimonious and unified explanation of grid cell function, and implicate grid cells as an accessible neuronal population readout of a set of probabilistic spatial computations. PMID:27792723

  2. Effect of the depreciation of public goods in spatial public goods games

    NASA Astrophysics Data System (ADS)

    Shi, Dong-Mei; Zhuang, Yong; Wang, Bing-Hong

    2012-02-01

    In this work, the depreciation effect of public goods is considered in public goods games, which is realized by rescaling the multiplication factor r of each group as r′ = r(n_c/G)^β (β ≥ 0), where n_c is the number of cooperators in a group of size G. It is assumed that each individual enjoys the full profit r of the public goods if all the players of this group are cooperators. Otherwise, the value of the public goods is reduced to r′. It is found that, compared with the original version (β = 0), the emergence of cooperation is remarkably promoted for β > 0, and there exist intermediate values of β inducing the best cooperation. In particular, there exists a range of β inducing the highest cooperative level, and this range of β broadens as r increases. It is further presented that the variation of cooperator density with noise has close relations with the values of β and r, and cooperation at an intermediate value of β = 1.0 is most tolerant to noise.
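
    A minimal sketch of the depreciated group payoff under this rescaling; note that the (n_c/G)^β form is reconstructed from the description (the full r is recovered only when all G members cooperate), so treat it as an assumption rather than the paper's exact formula.

```python
def group_payoff_share(n_c, G, r, beta, c=1.0):
    """Per-member share of one public goods group with depreciation: with n_c
    cooperators out of G players, the multiplication factor is reduced from r
    to r * (n_c / G)**beta; beta = 0 recovers the standard game."""
    r_eff = r * (n_c / G) ** beta
    pool = r_eff * n_c * c      # each cooperator contributes a cost c
    return pool / G             # equal share for every group member

# Example: half the group cooperates; depreciation (beta=1) halves the factor.
print(group_payoff_share(n_c=2, G=4, r=4.0, beta=0.0))  # 2.0 (no depreciation)
print(group_payoff_share(n_c=2, G=4, r=4.0, beta=1.0))  # 1.0 (depreciated)
```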

  3. Bound states of dipolar bosons in one-dimensional systems

    NASA Astrophysics Data System (ADS)

    Volosniev, A. G.; Armstrong, J. R.; Fedorov, D. V.; Jensen, A. S.; Valiente, M.; Zinner, N. T.

    2013-04-01

    We consider one-dimensional tubes containing bosonic polar molecules. The long-range dipole-dipole interactions act both within a single tube and between different tubes. We consider arbitrary values of the externally aligned dipole moments with respect to the symmetry axis of the tubes. The few-body structures in this geometry are determined as a function of polarization angles and dipole strength by using both essentially exact stochastic variational methods and the harmonic approximation. The main focus is on the three-, four- and five-body problems in two or more tubes. Our results indicate that in the weakly coupled limit the intertube interaction is similar to a zero-range term with a suitable rescaled strength. This allows us to address the corresponding many-body physics of the system by constructing a model where bound chains with one molecule in each tube are the effective degrees of freedom. This model can be mapped onto one-dimensional Hamiltonians for which exact solutions are known.

  4. Comparison of the order of magnetic phase transitions in several magnetocaloric materials using the rescaled universal curve, Banerjee and mean field theory criteria

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burrola-Gándara, L. A., E-mail: andres.burrola@gmail.com; Santillan-Rodriguez, C. R.; Rivera-Gomez, F. J.

    2015-05-07

    Magnetocaloric materials with a second order phase transition near the Curie temperature can be described by critical phenomena theory. In this theory, scaling, universality, and renormalization are key concepts from which several phase transition order criteria are derived. In this work, the rescaled universal curve, Banerjee and mean field theory criteria were used to make a comparison for several magnetocaloric materials including pure Gd, SmCo1.8Fe0.2, MnFeP0.46As0.54, and La0.7Ca0.15Sr0.15MnO3. Pure Gd, SmCo1.8Fe0.2, and La0.7Ca0.15Sr0.15MnO3 present a collapse of the rescaled magnetic entropy change curves onto a universal curve, which indicates a second order phase transition; applying the Banerjee criterion to H/σ vs σ² Arrott plots and the mean field theory relation |ΔS_M| ∝ (μ_0H/T_c)^{2/3} for the same materials also determines a second order phase transition. However, in the MnFeP0.46As0.54 sample, the Banerjee criterion applied to the H/σ vs σ² Arrott plot indicates a first order magnetic phase transition, while the mean field theory prediction |ΔS_M| ∝ (μ_0H/T_c)^{2/3} describes a second order behavior. Also, a mixture of first and second order behavior was indicated by the rescaled universal curve criterion. The diverse results obtained for each criterion in MnFeP0.46As0.54 are apparently related to the magnetoelastic effect and to the simultaneous presence of weak and strong magnetism in Fe (3f) and Mn (3g) alternate atomic layers, respectively. The simultaneous application of the universal curve, Banerjee and mean field theory criteria has allowed a better understanding of the nature of the order of the phase transitions in different magnetocaloric materials.
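
    For reference, the rescaled universal curve criterion is commonly implemented with a piecewise temperature rescaling of the kind introduced by Franco and co-workers; the construction is sketched below as a general method, not as this paper's specific numbers (T_r1 and T_r2 are reference temperatures at which ΔS_M reaches a chosen fraction of its peak value):

```latex
\theta =
\begin{cases}
  -\dfrac{T - T_c}{T_{r1} - T_c}, & T \le T_c,\\[2mm]
  \phantom{-}\dfrac{T - T_c}{T_{r2} - T_c}, & T > T_c.
\end{cases}
```

    Curves of ΔS_M/ΔS_M^peak versus θ measured at different applied fields collapse onto a single master curve for a second-order transition, while a first-order transition breaks the collapse.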

  5. Quantitative analysis of the correlations in the Boltzmann-Grad limit for hard spheres

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pulvirenti, M.

    2014-12-09

    In this contribution I consider the problem of the validity of the Boltzmann equation for a system of hard spheres in the Boltzmann-Grad limit. I briefly review the results available nowadays, with a particular emphasis on the celebrated Lanford validity theorem. Finally I present some recent results, obtained in collaboration with S. Simonella, concerning a quantitative analysis of the propagation of chaos. More precisely, we introduce a quantity (the correlation error) measuring how far a j-particle rescaled correlation function at time t (sufficiently small) is from full statistical independence. Roughly speaking, a correlation error of order k measures (in the context of the BBGKY hierarchy) the event in which k tagged particles form a recolliding group.

  6. Temporal and spatial variation of hydrological condition in the Ziwu River Basin of the Han River in China

    NASA Astrophysics Data System (ADS)

    Li, Ziyan; Liu, Dengfeng; Huang, Qiang; Bai, Tao; Zhou, Shuai; Lin, Mu

    2018-06-01

    The middle route of the South-To-North Water Diversion in China transfers water from the Han River, and the Han-To-Wei Water Diversion project of Shaanxi Province will transfer water from the Ziwu River, which is a tributary of the Han River. In order to gain a better understanding of future changes in the hydrological conditions within the Ziwu River basin, a Mann-Kendall (M-K) trend analysis is coupled with a persistence analysis using the rescaled range (R/S) method. The future change in the hydrological characteristics of the Ziwu River basin is obtained by analysing the change of meteorological factors. The results show that the future precipitation and potential evaporation are seasonal, and the spatial variation is significant. The proportion of the basin area where spring, summer, autumn and winter precipitation is predicted to continue increasing is 0.00%, 100.00%, 19.00% and 16.00%, respectively, while the proportion of the basin area where it will continue to decrease is 100.00%, 0.00%, 81.00% and 74.00%. The future potential evapotranspiration of the four seasons in the basin shows a decreasing trend. The future water supply situation in the spring and autumn of the Ziwu River basin will degrade, and the future water supply situation in the summer and winter will improve. In addition, the areas with the same water supply situation are relatively concentrated. The results will provide a scientific basis for the planning and management of river basin water resources and for socio-hydrological process analysis.
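
    A minimal sketch of the Mann-Kendall trend test used above (normal approximation, ties ignored for brevity; the function name is illustrative). The R/S persistence step is sketched earlier in this list, after the Mexico City record.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Mann-Kendall trend test: returns the S statistic, the Z score from the
    normal approximation (with continuity correction), and the two-sided
    p-value. Positive S indicates an increasing trend."""
    x = np.asarray(x, float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return s, z, 2 * (1 - norm.cdf(abs(z)))

# Example: a noisy increasing series should give a significantly positive Z.
rng = np.random.default_rng(2)
print(mann_kendall(np.arange(50) * 0.1 + rng.standard_normal(50)))
```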

  7. ASCAT soil moisture data assimilation through the Ensemble Kalman Filter for improving streamflow simulation in Mediterranean catchments

    NASA Astrophysics Data System (ADS)

    Loizu, Javier; Massari, Christian; Álvarez-Mozos, Jesús; Casalí, Javier; Goñi, Mikel

    2016-04-01

    Assimilation of Surface Soil Moisture (SSM) observations obtained from remote sensing techniques has been shown to improve streamflow prediction at different time scales of hydrological modeling. Different sensors and methods have been tested for their application in SSM estimation, especially in the microwave region of the electromagnetic spectrum. The available observation devices include passive microwave sensors such as the Advanced Microwave Scanning Radiometer - Earth Observation System (AMSR-E) onboard the Aqua satellite and the Soil Moisture and Ocean Salinity (SMOS) mission. On the other hand, active microwave systems include Scatterometers (SCAT) onboard the European Remote Sensing satellites (ERS-1/2) and the Advanced Scatterometer (ASCAT) onboard the MetOp-A satellite. Data assimilation (DA) encompasses different techniques that have been applied in hydrology and other fields for decades, including, among others, Kalman filtering (KF), variational assimilation and particle filtering. From the initial KF method, different techniques were developed to suit its application to different systems. The Ensemble Kalman Filter (EnKF), extensively applied in hydrological modeling improvement, has as its main advantage the capability to deal with nonlinear model dynamics without linearizing the model equations. The objective of this study was to investigate whether data assimilation of ASCAT SSM observations, through the EnKF method, could improve streamflow simulation of Mediterranean catchments with the complex TOPLATS hydrological model. The DA technique was programmed in FORTRAN and applied to hourly simulations of the TOPLATS catchment model. TOPLATS (TOPMODEL-based Land-Atmosphere Transfer Scheme) was applied in its lumped version for two Mediterranean catchments of similar size, located in northern Spain (Arga, 741 km²) and central Italy (Nestore, 720 km²). The model performs a separated computation of energy and water balances. In those balances, the soil is divided into two layers, the upper Surface Zone (SZ) and the deeper Transmission Zone (TZ). In this study, the SZ depth was fixed to 5 cm, for adequate assimilation of observed data. Available data were distributed as follows: first, the model was calibrated for the 2001-2007 period; then the 2007-2010 period was used for satellite data rescaling purposes. Finally, data assimilation was applied during the validation (2010-2013) period. Application of the EnKF required the following steps: 1) rescaling of satellite data, 2) transformation of rescaled data into a Soil Water Index (SWI) through a moving average filter, where a calibrated value of T = 9 was applied, 3) generation of a 50-member ensemble through perturbation of inputs (rainfall and temperature) and three selected parameters, 4) validation of the ensemble through the compliance of two criteria based on the ensemble's spread, mean square error and skill, and 5) Kalman gain calculation. In this work, a comparison of three satellite data rescaling techniques was also performed: 1) cumulative distribution function (CDF) matching, 2) variance matching and 3) linear least squares regression. Results obtained in this study showed slight improvements of hourly Nash-Sutcliffe Efficiency (NSE) in both catchments with the different rescaling methods evaluated. Larger improvements were found in terms of seasonal simulated volume error reduction.
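
    Of the three rescaling techniques compared, CDF matching is the most general; a minimal sketch follows (the function name and the mid-rank plotting positions are illustrative choices, not the study's code).

```python
import numpy as np

def cdf_match(sat, model):
    """Rescale a satellite soil-moisture series so its empirical CDF matches
    the model's: each satellite value is mapped to the model value that has
    the same non-exceedance probability."""
    sat = np.asarray(sat, float)
    sat_sorted = np.sort(sat)
    model_sorted = np.sort(np.asarray(model, float))
    sat_probs = (np.arange(len(sat_sorted)) + 0.5) / len(sat_sorted)
    model_probs = (np.arange(len(model_sorted)) + 0.5) / len(model_sorted)
    p = np.interp(sat, sat_sorted, sat_probs)   # empirical CDF of each value
    return np.interp(p, model_probs, model_sorted)  # inverse model CDF
```

    Variance matching and linear regression are special cases in the sense that they only adjust the first two moments, whereas CDF matching corrects the full distribution.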

  8. A manual for microcomputer image analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rich, P.M.; Ranken, D.M.; George, J.S.

    1989-12-01

    This manual is intended to serve three basic purposes: as a primer in microcomputer image analysis theory and techniques, as a guide to the use of IMAGE©, a public domain microcomputer program for image analysis, and as a stimulus to encourage programmers to develop microcomputer software suited for scientific use. Topics discussed include the principles of image processing and analysis, use of standard video for input and display, spatial measurement techniques, and the future of microcomputer image analysis. A complete reference guide that lists the commands for IMAGE is provided. IMAGE includes capabilities for digitization, input and output of images, hardware display lookup table control, editing, edge detection, histogram calculation, measurement along lines and curves, measurement of areas, examination of intensity values, output of analytical results, conversion between raster and vector formats, and region movement and rescaling. The control structure of IMAGE emphasizes efficiency, precision of measurement, and scientific utility. 18 refs., 18 figs., 2 tabs.

  9. Beyond triple collocation: Applications to satellite soil moisture

    USDA-ARS?s Scientific Manuscript database

    Triple collocation is now routinely used to resolve the exact (linear) relationships between multiple measurements and/or representations of a geophysical variable that are subject to errors. It has been utilized in the context of calibration, rescaling and error characterisation to allow comparison...
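
    For context, the classical covariance-based triple collocation estimator that such work builds on can be sketched as follows; the variable names are illustrative, the derivation assumes mutually independent (orthogonal) errors, and small or noisy samples can produce negative variance estimates.

```python
import numpy as np

def triple_collocation_errors(x, y, z):
    """Classical covariance-notation triple collocation: error variances of
    three collocated series of the same geophysical variable, assuming their
    errors are mutually independent; results are in the scale of each series."""
    c = np.cov(np.vstack([x, y, z]))
    ex2 = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
    ey2 = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
    ez2 = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
    return ex2, ey2, ez2

# Example: one common signal observed by three products with different noise.
rng = np.random.default_rng(3)
truth = rng.standard_normal(5000)
obs = [truth + s * rng.standard_normal(5000) for s in (0.1, 0.2, 0.3)]
print(triple_collocation_errors(*obs))  # ~ (0.01, 0.04, 0.09)
```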

  10. Effects of different regional climate model resolution and forcing scales on projected hydrologic changes

    NASA Astrophysics Data System (ADS)

    Mendoza, Pablo A.; Mizukami, Naoki; Ikeda, Kyoko; Clark, Martyn P.; Gutmann, Ethan D.; Arnold, Jeffrey R.; Brekke, Levi D.; Rajagopalan, Balaji

    2016-10-01

    We examine the effects of regional climate model (RCM) horizontal resolution and forcing scaling (i.e., spatial aggregation of meteorological datasets) on the portrayal of climate change impacts. Specifically, we assess how the above decisions affect: (i) historical simulation of signature measures of hydrologic behavior, and (ii) projected changes in terms of annual water balance and hydrologic signature measures. To this end, we conduct our study in three catchments located in the headwaters of the Colorado River basin. Meteorological forcings for current and a future climate projection are obtained at three spatial resolutions (4-, 12- and 36-km) from dynamical downscaling with the Weather Research and Forecasting (WRF) regional climate model, and hydrologic changes are computed using four different hydrologic model structures. These projected changes are compared to those obtained from running hydrologic simulations with current and future 4-km WRF climate outputs re-scaled to 12- and 36-km. The results show that the horizontal resolution of WRF simulations heavily affects basin-averaged precipitation amounts, propagating into large differences in simulated signature measures across model structures. The implications of re-scaled forcing datasets on historical performance were primarily observed on simulated runoff seasonality. We also found that the effects of WRF grid resolution on projected changes in mean annual runoff and evapotranspiration may be larger than the effects of hydrologic model choice, which surpasses the effects from re-scaled forcings. Scaling effects on projected variations in hydrologic signature measures were found to be generally smaller than those coming from WRF resolution; however, forcing aggregation in many cases reversed the direction of projected changes in hydrologic behavior.

  11. Disentangling Puzzles of Spatial Scales and Participation in Environmental Governance—The Case of Governance Re-scaling Through the European Water Framework Directive

    NASA Astrophysics Data System (ADS)

    Newig, Jens; Schulz, Daniel; Jager, Nicolas W.

    2016-12-01

    This article attempts to shed new light on prevailing puzzles of spatial scales in multi-level, participatory governance as regards the democratic legitimacy and environmental effectiveness of governance systems. We focus on the governance re-scaling by the European Water Framework Directive, which introduced new governance scales (mandated river basin management) and demands consultation of citizens and encourages `active involvement' of stakeholders. This allows to examine whether and how re-scaling through deliberate governance interventions impacts on democratic legitimacy and effective environmental policy delivery. To guide the enquiry, this article organizes existing—partly contradictory—claims on the relation of scale, democratic legitimacy, and environmental effectiveness into three clusters of mechanisms, integrating insights from multi-level governance, social-ecological systems, and public participation. We empirically examine Water Framework Directive implementation in a comparative case study of multi-level systems in the light of the suggested mechanisms. We compare two planning areas in Germany: North Rhine Westphalia and Lower Saxony. Findings suggest that the Water Framework Directive did have some impact on institutionalizing hydrological scales and participation. Local participation appears generally both more effective and legitimate than on higher levels, pointing to the need for yet more tailored multi-level governance approaches, depending on whether environmental knowledge or advocacy is sought. We find mixed results regarding the potential of participation to bridge spatial `misfits' between ecological and administrative scales of governance, depending on the historical institutionalization of governance on ecological scales. Polycentricity, finally, appeared somewhat favorable in effectiveness terms with some distinct differences regarding polycentricity in planning vs. polycentricity in implementation.

  13. A conformal approach for the analysis of the non-linear stability of radiation cosmologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luebbe, Christian, E-mail: c.luebbe@ucl.ac.uk; Department of Mathematics, University of Leicester, University Road, LE1 8RH; Valiente Kroon, Juan Antonio, E-mail: j.a.valiente-kroon@qmul.ac.uk

    2013-01-15

    The conformal Einstein equations for a trace-free (radiation) perfect fluid are derived in terms of the Levi-Civita connection of a conformally rescaled metric. These equations are used to provide a non-linear stability result for de Sitter-like trace-free (radiation) perfect fluid Friedmann-Lemaître-Robertson-Walker cosmological models. The solutions thus obtained exist globally towards the future and are future geodesically complete. Highlights: We study the Einstein-Euler system in General Relativity using conformal methods. We analyze the structural properties of the associated evolution equations. We establish the non-linear stability of pure radiation cosmological models.

  14. Law of corresponding states for open collaborations

    NASA Astrophysics Data System (ADS)

    Gherardi, Marco; Bassetti, Federico; Cosentino Lagomarsino, Marco

    2016-04-01

    We study the relation between number of contributors and product size in Wikipedia and GitHub. In contrast to traditional production, this is strongly probabilistic, but is characterized by two quantitative nonlinear laws: a power-law bound to product size for increasing number of contributors, and the universal collapse of rescaled distributions. A variant of the random-energy model shows that both laws are due to the heterogeneity of contributors, and displays an intriguing finite-size scaling property with no equivalent in standard systems. The analysis uncovers the right intensive densities, enabling the comparison of projects with different numbers of contributors on equal grounds. We use this property to expose the detrimental effects of conflicting interactions in Wikipedia.

  15. Large space structure damping design

    NASA Technical Reports Server (NTRS)

    Pilkey, W. D.; Haviland, J. K.

    1983-01-01

    Several FORTRAN subroutines and programs were developed which compute complex eigenvalues of a damped system using different approaches, and which rescale mode shapes to unit generalized mass and make rigid bodies orthogonal to each other. An analytical proof of a Minimum Constrained Frequency Criterion (MCFC) for a single damper is presented. A method to minimize the effect of control spillover for large space structures is proposed. The characteristic equation of an undamped system with a generalized control law is derived using reanalysis theory. This equation can be implemented in computer programs for efficient eigenvalue analysis or control quasi-synthesis. Methods to control vibrations in large space structures are reviewed and analyzed. The resulting prototype, using an electromagnetic actuator, is described.
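
    A minimal sketch of one of the operations described, rescaling mode shapes to unit generalized mass; numpy stands in for the original FORTRAN, and the rigid-body orthogonalization step is not shown.

```python
import numpy as np

def rescale_to_unit_generalized_mass(phi, M):
    """Rescale each mode shape (column of phi) so that its generalized mass
    phi_i^T M phi_i equals one, where M is the structural mass matrix."""
    gm = np.einsum('ij,jk,ki->i', phi.T, M, phi)  # diagonal of phi^T M phi
    return phi / np.sqrt(gm)

# Example: after rescaling, phi^T M phi has ones on the diagonal.
M = np.diag([2.0, 3.0, 5.0])
phi = np.random.default_rng(4).standard_normal((3, 2))
phi_n = rescale_to_unit_generalized_mass(phi, M)
print(np.diag(phi_n.T @ M @ phi_n))  # ~ [1.0, 1.0]
```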

  16. NLO Higgs+jet at Large Transverse Momenta Including Top Quark Mass Effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neumann, Tobias

    We present a next-to-leading order calculation of H+jet in gluon fusion including the effect of a finite top quark mass m_t at large transverse momenta. Using the recently published two-loop amplitudes in the high energy expansion and our previous setup that includes finite m_t effects in a low energy expansion, we are able to obtain m_t-finite results for transverse momenta below 225 GeV and above 500 GeV with negligible remaining top quark mass uncertainty. The only remaining region that has to rely on the common leading order rescaling approach is the threshold region √ŝ ≃ 2m_t. We demonstrate that this rescaling provides an excellent approximation in the high p_T region. Our calculation settles the issue of top quark mass effects at large transverse momenta. It is implemented in the parton level Monte Carlo code MCFM and is publicly available immediately in version 8.2.
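
    The "leading order rescaling approach" mentioned above is commonly understood as the Born-improved prescription sketched below (our paraphrase of the generic method, not a formula quoted from the paper), in which the NLO distribution computed in the m_t → ∞ effective theory (EFT) is reweighted by the exact-to-EFT ratio of LO distributions:

```latex
\left.\frac{\mathrm{d}\sigma}{\mathrm{d}p_T}\right|_{\mathrm{NLO}}^{\mathrm{rescaled}}
=
\left.\frac{\mathrm{d}\sigma}{\mathrm{d}p_T}\right|_{\mathrm{NLO}}^{\mathrm{EFT}}
\times
\frac{\left.\mathrm{d}\sigma/\mathrm{d}p_T\right|_{\mathrm{LO}}^{m_t}}
     {\left.\mathrm{d}\sigma/\mathrm{d}p_T\right|_{\mathrm{LO}}^{\mathrm{EFT}}}.
```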

  17. Quantitative body DW-MRI biomarkers uncertainty estimation using unscented wild-bootstrap.

    PubMed

    Freiman, M; Voss, S D; Mulkern, R V; Perez-Rossello, J M; Warfield, S K

    2011-01-01

    We present a new method for the uncertainty estimation of diffusion parameters for quantitative body DW-MRI assessment. Diffusion parameter uncertainty estimation from DW-MRI is necessary for clinical applications that use these parameters to assess pathology. However, uncertainty estimation using traditional techniques requires repeated acquisitions, which is undesirable in routine clinical use. Model-based bootstrap techniques, for example, assume an underlying linear model for residuals rescaling and cannot be utilized directly for body diffusion parameter uncertainty estimation due to the non-linearity of the body diffusion model. To offset this limitation, our method uses the unscented transform to compute the residuals-rescaling parameters from the non-linear body diffusion model, and then applies the wild-bootstrap method to infer the body diffusion parameter uncertainty. Validation through phantom and human subject experiments shows that our method correctly identifies the regions with higher uncertainty in body DW-MRI model parameters, with a relative error of -36% in the uncertainty values.

  18. The challenges of rescaling South African water resources management: Catchment Management Agencies and interbasin transfers

    NASA Astrophysics Data System (ADS)

    Bourblanc, Magalie; Blanchon, David

    2014-11-01

    The implementation of Catchment Management Agencies (CMAs) was supposed to be the cornerstone of the rescaling process of the South African water reform policy. Yet, less than 10 years after the adoption of the National Water Act, the process was suspended for 4 years and by 2012 only two CMAs had been established. Combining approaches in geography and political science, this paper investigates the reasons for the delays in CMAs' implementation in South Africa. It shows that the construction of interbasin transfers (IBTs) since the 1950s by the apartheid regime and nowadays the power struggles between CMAs and the Department of Water Affairs (DWA) are two of the main obstacles to the creation of CMAs planned by the 1998 National Water Act (NWA). Finally, the paper advocates taking the "hydrosocial cycle" as an analytical framework for designing new institutional arrangements that will include both rectifying the legacy of the past (the specific role of DWA) and acknowledging legitimate local interests.

  19. Compressibility effect on thermal coherent structures in spatially-developing turbulent boundary layers via DNS

    NASA Astrophysics Data System (ADS)

    Araya, Guillermo; Jansen, Kenneth

    2017-11-01

    DNS of a compressible spatially-developing turbulent boundary layer is performed at a Mach number of 2.5 over an isothermal flat plate. Turbulent inflow information is generated by following the rescaling-recycling approach introduced by Lund et al. (J. Comp. Phys. 140, 233-258, 1998), extended here to compressible flows. Furthermore, a dynamic approach is employed to connect the friction velocities at the inlet and recycle stations (i.e., there is no need for an empirical correlation as in Lund et al.). Additionally, Morkovin's Strong Reynolds Analogy (SRA) is used in the rescaling of the thermal fluctuations from the recycle plane. Low/high-order flow statistics are compared with direct simulations of an incompressible isothermal ZPG boundary layer at similar Reynolds numbers, with temperature regarded as a passive scalar. Focus is given to assessing the effect of flow compressibility on the dynamics of thermal coherent structures. AFOSR #FA9550-17-1-0051.

  20. Differentiating induced and natural seismicity using space-time-magnitude statistics applied to the Coso Geothermal field

    USGS Publications Warehouse

    Schoenball, Martin; Davatzes, Nicholas C.; Glen, Jonathan M. G.

    2015-01-01

    A remarkable characteristic of earthquakes is their clustering in time and space, displaying their self-similarity. It remains to be tested whether natural and induced earthquakes share the same behavior. We study natural and induced earthquakes comparatively in the same tectonic setting at the Coso Geothermal Field. Covering the preproduction and coproduction periods from 1981 to 2013, we analyze interevent times, spatial dimension, and frequency-size distributions for natural and induced earthquakes. Individually, these distributions are statistically indistinguishable. Determining the distribution of nearest-neighbor distances in a combined space-time-magnitude metric lets us identify clear differences between the two kinds of seismicity. Compared to natural earthquakes, induced earthquakes feature a larger population of background seismicity and nearest neighbors at large magnitude-rescaled times and small magnitude-rescaled distances. Local stress perturbations induced by field operations appear to be strong enough to drive local faults through several seismic cycles and reactivate them after time periods on the order of a year.
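
    The sketch below illustrates one common form of such a combined space-time-magnitude nearest-neighbor metric (a Zaliapin-style rescaling); the parameter values (b-value, fractal dimension df, time-distance weight q) are assumptions for illustration, not values taken from this record.

      import numpy as np

      def nearest_neighbor_metric(t, x, y, m, b=1.0, df=1.6, q=0.5):
          """Magnitude-rescaled time/distance to each event's nearest neighbor.

          t: event times (days), x/y: epicenters (km), m: magnitudes.
          b, df, q are illustrative (b-value, fractal dimension, weight).
          """
          n = len(t)
          T = np.full(n, np.nan)
          R = np.full(n, np.nan)
          for j in range(1, n):
              dt = t[j] - t[:j]                      # times to all earlier events
              r = np.hypot(x[j] - x[:j], y[j] - y[:j])
              eta = dt * r**df * 10.0**(-b * m[:j])  # combined proximity metric
              i = np.argmin(eta)                     # nearest neighbor ("parent")
              T[j] = dt[i] * 10.0**(-q * b * m[i])            # rescaled time
              R[j] = r[i]**df * 10.0**(-(1.0 - q) * b * m[i]) # rescaled distance
          return T, R

    Plotting R against T for the natural and induced populations is then the kind of comparison in which the differences described above become visible.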

  1. Joint refinement model for the spin resolved one-electron reduced density matrix of YTiO3 using magnetic structure factors and magnetic Compton profiles data.

    PubMed

    Gueddida, Saber; Yan, Zeyin; Kibalin, Iurii; Voufack, Ariste Bolivard; Claiser, Nicolas; Souhassou, Mohamed; Lecomte, Claude; Gillon, Béatrice; Gillet, Jean-Michel

    2018-04-28

    In this paper, we propose a simple cluster model with limited basis sets to reproduce the unpaired electron distributions in a YTiO3 ferromagnetic crystal. The spin-resolved one-electron-reduced density matrix is reconstructed simultaneously from theoretical magnetic structure factors and directional magnetic Compton profiles using our joint refinement algorithm. This algorithm is guided by the rescaling of basis functions and the adjustment of the spin population matrix. The resulting spin electron density in both position and momentum spaces from the joint refinement model is in agreement with theoretical and experimental results. Benefits brought from magnetic Compton profiles to the entire spin density matrix are illustrated. We studied the magnetic properties of the YTiO3 crystal along the Ti-O1-Ti bonding. We found that the basis functions are mostly rescaled by means of magnetic Compton profiles, while the molecular occupation numbers are mainly modified by the magnetic structure factors.

  2. Inferring Lévy walks from curved trajectories: A rescaling method

    NASA Astrophysics Data System (ADS)

    Tromer, R. M.; Barbosa, M. B.; Bartumeus, F.; Catalan, J.; da Luz, M. G. E.; Raposo, E. P.; Viswanathan, G. M.

    2015-08-01

    An important problem in the study of anomalous diffusion and transport concerns the proper analysis of trajectory data. The analysis and inference of Lévy walk patterns from empirical or simulated trajectories of particles in two- and three-dimensional spaces (2D and 3D) are much more difficult than in 1D because path curvature is nonexistent in 1D but quite common in higher dimensions. Recently, a new method for detecting Lévy walks, which considers 1D projections of 2D or 3D trajectory data, has been proposed by Humphries et al. The key new idea is to exploit the fact that the 1D projection of a high-dimensional Lévy walk is itself a Lévy walk. Here, we ask whether or not this projection method is powerful enough to cleanly distinguish a 2D Lévy walk with added curvature from a simple Markovian correlated random walk. We study the especially challenging case in which both 2D walks have exactly identical probability density functions (pdf) of step sizes as well as of turning angles between successive steps. Our approach extends the original projection method by introducing a rescaling of the projected data. Upon projection and coarse-graining, the renormalized pdf for the travel distances between successive turnings is seen to possess a fat tail when there is an underlying Lévy process. We exploit this effect to infer a Lévy walk process in the original high-dimensional curved trajectory. In contrast, no fat tail appears when a (Markovian) correlated random walk is analyzed in this way. We show that this procedure works extremely well in clearly identifying a Lévy walk even when there is noise from curvature. The present protocol may be useful in realistic contexts involving ongoing debates on the presence (or not) of Lévy walks related to animal movement on land (2D) and in air and oceans (3D).
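
    A minimal sketch of the projection idea follows: simulate a 2D Lévy walk, project it onto one axis, and collect the travel distances between successive turning points of the projected motion. A fat tail in these distances signals the underlying Lévy process; the paper's specific rescaling and coarse-graining of the projected data are not reproduced here, and parameter values are illustrative.

      import numpy as np

      def levy_walk_2d(n_steps, mu=2.0, l0=1.0, seed=None):
          """2D Levy walk: power-law step lengths p(l) ~ l^(-mu), uniform headings."""
          rng = np.random.default_rng(seed)
          u = rng.random(n_steps)
          lengths = l0 * (1.0 - u) ** (-1.0 / (mu - 1.0))  # inverse-CDF sampling
          angles = rng.uniform(0.0, 2.0 * np.pi, n_steps)
          return np.cumsum(lengths * np.cos(angles)), np.cumsum(lengths * np.sin(angles))

      def projected_move_lengths(x):
          """Distances travelled between sign changes of the projected velocity."""
          v = np.diff(x)
          turns = np.where(np.sign(v[1:]) != np.sign(v[:-1]))[0] + 1
          bounds = np.concatenate(([0], turns, [len(v)]))
          return np.abs(np.diff(x[bounds]))

      x, _ = levy_walk_2d(100000, mu=2.0, seed=0)
      moves = projected_move_lengths(x)
      # A fat (power-law) tail in the distribution of `moves` signals an
      # underlying Levy process; a correlated random walk shows no such tail.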

  3. Mobility of power-law and Carreau fluids through fibrous media.

    PubMed

    Shahsavari, Setareh; McKinley, Gareth H

    2015-12-01

    The flow of generalized Newtonian fluids with a rate-dependent viscosity through fibrous media is studied, with a focus on developing relationships for evaluating the effective fluid mobility. Three methods are used here: (i) a numerical solution of the Cauchy momentum equation with the Carreau or power-law constitutive equations for pressure-driven flow in a fiber bed consisting of a periodic array of cylindrical fibers, (ii) an analytical solution for a unit cell model representing the flow characteristics of a periodic fibrous medium, and (iii) a scaling analysis of characteristic bulk parameters such as the effective shear rate, the effective viscosity, geometrical parameters of the system, and the fluid rheology. Our scaling analysis yields simple expressions for evaluating the transverse mobility functions for each model, which can be used for a wide range of medium porosity and fluid rheological parameters. While the dimensionless mobility is, in general, a function of the Carreau number and the medium porosity, our results show that for porosities less than ɛ≃0.65, the dimensionless mobility becomes independent of the Carreau number and the mobility function exhibits power-law characteristics as a result of the high shear rates at the pore scale. We derive a suitable criterion for determining the flow regime and the transition from a constant viscosity Newtonian response to a power-law regime in terms of a new Carreau number rescaled with a dimensionless function which incorporates the medium porosity and the arrangement of fibers.
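
    For reference, the Carreau constitutive model the study builds on can be sketched as follows; parameter names are generic, and the rescaled Carreau number and porosity function derived in the paper are not reproduced here.

      import numpy as np

      def carreau_viscosity(gamma_dot, eta0, eta_inf, lam, n):
          # eta(g) = eta_inf + (eta0 - eta_inf) * (1 + (lam*g)^2)^((n-1)/2)
          return eta_inf + (eta0 - eta_inf) * (1.0 + (lam * gamma_dot) ** 2) ** ((n - 1.0) / 2.0)

      # The Carreau number is built on a characteristic (pore-scale) shear rate,
      # Cu = lam * gamma_eff: for Cu << 1 the response is Newtonian (eta -> eta0),
      # while for Cu >> 1 it is power-law-like, matching the regimes described above.
      gamma = np.logspace(-2, 4, 50)
      eta = carreau_viscosity(gamma, eta0=10.0, eta_inf=0.001, lam=1.0, n=0.5)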

  4. Texturing Space-Times in the Australian Curriculum: Cross-Curriculum Priorities

    ERIC Educational Resources Information Center

    Peacock, David; Lingard, Robert; Sellar, Sam

    2015-01-01

    The Australian curriculum, as a policy imagining what learning should take place in schools, and what that learning should achieve, involves the imagining and rescaling of social relations amongst students, their schools, the nation-state and the globe. Following David Harvey's theorisations of space-time and Norman Fairclough's operationalisation…

  5. Interaction Rescaled: How Monastic Debate Became a Diasporic Pedagogy

    ERIC Educational Resources Information Center

    Lempert, Michael

    2012-01-01

    Rather than assume the relevance of "a priori" scalar distinctions (micro-, macro-, meso-), this article examines scale as an emergent dimension of sociospatial practice in educational institutions. Focusing on Buddhist debate at Tibetan monasteries in India, I describe how this educational practice has been placed as a rite of…

  6. Impact of rescaling anomaly and seasonal components of soil moisture on hydrologic data assimilation

    USDA-ARS?s Scientific Manuscript database

    In hydrological sciences many observations and model simulations have moderate linear association due to the noise in the datasets and/or the systematic differences between their seasonality components. This degrades the performance of model-observation integration algorithms, such as the Kalman Fil...

  7. Rescaling Vocational Education: Workforce Development in a Metropolitan Region

    ERIC Educational Resources Information Center

    Lakes, Richard D.

    2008-01-01

    This article profiles a vocational charter school located in Atlanta as an institutional model for customized industry training in the high-tech production firms located nearby. Social partnerships with business and industry, parents and educators, and elected officials will be illuminated, exhibiting new forms of neoliberalism that reconstitute…

  8. Derringer desirability and kinetic plot LC-column comparison approach for MS-compatible lipopeptide analysis.

    PubMed

    D'Hondt, Matthias; Verbeke, Frederick; Stalmans, Sofie; Gevaert, Bert; Wynendaele, Evelien; De Spiegeleer, Bart

    2014-06-01

    Lipopeptides are currently re-emerging as an interesting subgroup in the peptide research field, having historical applications as antibacterial and antifungal agents and new potential applications as antiviral, antitumor, immune-modulating and cell-penetrating compounds. However, due to their specific structure, chromatographic analysis often requires special buffer systems or the use of trifluoroacetic acid, limiting mass spectrometry detection. Therefore, we used a traditional aqueous/acetonitrile based gradient system, containing 0.1% (m/v) formic acid, to separate four pharmaceutically relevant lipopeptides (polymyxin B1, caspofungin, daptomycin and gramicidin A1), which were selected based upon hierarchical cluster analysis (HCA) and principal component analysis (PCA). In total, the performance of four different C18 columns, including one UPLC column, was evaluated using two parallel approaches. First, a Derringer desirability function was used, whereby six single and multiple chromatographic response values were rescaled into one overall D-value per column. Using this approach, the YMC Pack Pro C18 column was ranked as the best column for general MS-compatible lipopeptide separation. Secondly, the kinetic plot approach was used to compare the different columns over different flow rate ranges. As the optimal kinetic column performance is obtained at its maximal pressure, the length elongation factor λ (Pmax/Pexp) was used to transform the obtained experimental data (retention times and peak capacities) and construct kinetic performance limit (KPL) curves, allowing a direct visual and unbiased comparison of the selected columns, whereby the YMC Triart C18 UPLC and ACE C18 columns performed best. Finally, differences in column performance and the (dis)advantages of both approaches are discussed.
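
    A minimal sketch of the Derringer desirability aggregation is shown below, assuming simple one-sided (larger-is-better) desirability functions; the response values, ranges and weights are illustrative, not the ones used in the study.

      import numpy as np

      def desirability_larger_is_better(y, y_min, y_max, weight=1.0):
          """Derringer-Suich one-sided desirability: 0 below y_min, 1 above y_max."""
          d = np.clip((y - y_min) / (y_max - y_min), 0.0, 1.0)
          return d ** weight

      def overall_desirability(ds):
          """Overall D: geometric mean of the individual desirabilities."""
          ds = np.asarray(ds, dtype=float)
          return ds.prod() ** (1.0 / len(ds))

      # Illustrative column scores for three responses (values are made up):
      d_values = [desirability_larger_is_better(y, lo, hi)
                  for y, lo, hi in [(0.8, 0.0, 1.0), (0.6, 0.2, 0.9), (0.9, 0.5, 1.0)]]
      D = overall_desirability(d_values)   # one overall D-value per column

    Because D is a geometric mean, any single response with zero desirability drives the whole column score to zero, which is the intended behavior of the approach.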

  9. Topological phase transition and unexpected mass acquisition of Dirac fermion in TlBi(S1-xSex)2

    NASA Astrophysics Data System (ADS)

    Niu, Chengwang; Dai, Ying; Zhu, Yingtao; Lu, Jibao; Ma, Yandong; Huang, Baibiao

    2012-10-01

    Based on first-principles calculations and effective Hamiltonian analysis, we predict a topological phase transition from normal to topological insulators and the opening of a gap without breaking the time-reversal symmetry in TlBi(S1-xSex)2. The transition can be driven by modulating the Se concentration, and the rescaled spin-orbit coupling and lattice parameters are the key ingredients for the transition. For the topological surface states, the Dirac cone evolves differently upon explicit breaking of inversion symmetry, and an energy gap can open at an asymmetric surface. Our results present theoretical evidence for experimental observations [Xu et al., Science 332, 560 (2011); Sato et al., Nat. Phys. 7, 840 (2011)].

  10. Drag reduction in the turbulent Kolmogorov flow.

    PubMed

    Boffetta, Guido; Celani, Antonio; Mazzino, Andrea

    2005-03-01

    We investigate the phenomenon of drag reduction in a viscoelastic fluid model of dilute polymer solutions. By means of direct numerical simulations of the three-dimensional turbulent Kolmogorov flow we show that drag reduction takes place above a critical Reynolds number Re(c). An explicit expression for the dependence of Re(c) on polymer elasticity and diffusivity is derived. The values of the drag coefficient obtained for different fluid parameters collapse onto a universal curve when plotted as a function of the rescaled Reynolds number Re/Re(c). The analysis of the momentum budget allows us to gain some insight into the physics of drag reduction, and suggests the existence of a Re-independent value of the drag coefficient, lower than the Newtonian one, for large Reynolds numbers.

  11. Quantum cryptography with an ideal local relay

    NASA Astrophysics Data System (ADS)

    Spedalieri, Gaetana; Ottaviani, Carlo; Braunstein, Samuel L.; Gehring, Tobias; Jacobsen, Christian S.; Andersen, Ulrik L.; Pirandola, Stefano

    2015-10-01

    We consider two remote parties connected to a relay by two quantum channels. To generate a secret key, they transmit coherent states to the relay, where the states are subject to a continuous-variable (CV) Bell detection. We study the ideal case where Alice's channel is lossless, i.e., the relay is locally in her lab and the Bell detection is performed with unit efficiency. This configuration allows us to explore the optimal performances achievable by CV measurement-device-independent quantum key distribution. This corresponds to the limit of a trusted local relay, where the detection loss can be re-scaled. Our theoretical analysis is confirmed by an experimental simulation where 10^-4 secret bits per use can potentially be distributed over 170 km assuming ideal reconciliation.

  12. Efficient Power Network Analysis with Modeling of Inductive Effects

    NASA Astrophysics Data System (ADS)

    Zeng, Shan; Yu, Wenjian; Hong, Xianlong; Cheng, Chung-Kuan

    In this paper, an efficient method is proposed to accurately analyze large-scale power/ground (P/G) networks, where inductive parasitics are modeled with the partial reluctance. The method is based on frequency-domain circuit analysis and the technique of vector fitting [14], and obtains the time-domain voltage response at given P/G nodes. The frequency-domain circuit equation including partial reluctances is derived, and then solved with the GMRES algorithm with rescaling, preconditioning and recycling techniques. With the merit of a sparsified reluctance matrix and iterative solving techniques for the frequency-domain circuit equations, the proposed method is able to handle large-scale P/G networks with complete inductive modeling. Numerical results show that the proposed method is orders of magnitude faster than HSPICE, several times faster than INDUCTWISE [4], and capable of handling inductive P/G structures with more than 100,000 wire segments.

  13. Universality in voting behavior: an empirical analysis

    PubMed Central

    Chatterjee, Arnab; Mitrović, Marija; Fortunato, Santo

    2013-01-01

    Election data represent a precious source of information to study human behavior at a large scale. In proportional elections with open lists, the number of votes received by a candidate, rescaled by the average performance of all competitors in the same party list, has the same distribution regardless of the country and the year of the election. Here we provide the first thorough assessment of this claim. We analyzed election datasets of 15 countries with proportional systems. We confirm that a class of nations with similar election rules fulfills the universality claim. Discrepancies from this trend in other countries with open-list elections are always associated with peculiar differences in the election rules, which matter more than differences between countries and historical periods. Our analysis shows that the role of parties in the electoral performance of candidates is crucial: alternative scalings not taking into account party affiliations lead to poor results. PMID:23308342
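
    The rescaling step at the heart of the universality claim is simple to state in code; the sketch below assumes a flat array of vote counts with party labels and is illustrative only.

      import numpy as np

      def rescaled_votes(votes, party_ids):
          """Rescale each candidate's votes by the mean of their party list."""
          votes = np.asarray(votes, dtype=float)
          party_ids = np.asarray(party_ids)
          out = np.empty_like(votes)
          for p in np.unique(party_ids):
              mask = party_ids == p
              out[mask] = votes[mask] / votes[mask].mean()
          return out

      # Under the universality claim discussed above, histograms of
      # rescaled_votes(...) from different countries and years should
      # collapse onto a single curve.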

  14. Roughness in the Kolmogorov Johnson Mehl Avrami framework: extension to (2+1)D of the Trofimov Park model

    NASA Astrophysics Data System (ADS)

    Pacchiarotti, Barbara; Fanfoni, Massimo; Tomellini, Massimo

    2005-12-01

    In this paper a reformulation of the Trofimov-Park (TP) model [V.I. Trofimov, Appl. Surf. Sci. 219 (2003) 93] of thin film roughness evolution during nucleation and growth of islands, in the case of simultaneous nucleation, is presented. The TP calculation, restricted to one-dimensional triangular islands, has been extended to both the one-dimensional elliptical case and the two-dimensional pyramidal one. The kinetics of the interface width, w, and of the height-height autocorrelation function G, through which the correlation length ξ has been defined, have been estimated. Moreover, w(Θ) and ξ(Θ), where Θ is the fraction of the covered substrate, are universal functions if properly rescaled to the density of nuclei N and to the aspect ratio of the islands; over a conspicuous range of Θ, they obey a power law with an exponent depending upon island shape.

  15. Contrast Gain Control in Auditory Cortex

    PubMed Central

    Rabinowitz, Neil C.; Willmore, Ben D.B.; Schnupp, Jan W.H.; King, Andrew J.

    2011-01-01

    Summary The auditory system must represent sounds with a wide range of statistical properties. One important property is the spectrotemporal contrast in the acoustic environment: the variation in sound pressure in each frequency band, relative to the mean pressure. We show that neurons in ferret auditory cortex rescale their gain to partially compensate for the spectrotemporal contrast of recent stimulation. When contrast is low, neurons increase their gain, becoming more sensitive to small changes in the stimulus, although the effectiveness of contrast gain control is reduced at low mean levels. Gain is primarily determined by contrast near each neuron's preferred frequency, but there is also a contribution from contrast in more distant frequency bands. Neural responses are modulated by contrast over timescales of ∼100 ms. By using contrast gain control to expand or compress the representation of its inputs, the auditory system may be seeking an efficient coding of natural sounds. PMID:21689603

  16. Timing the start of division in E. coli: a single-cell study

    NASA Astrophysics Data System (ADS)

    Reshes, G.; Vanounou, S.; Fishov, I.; Feingold, M.

    2008-12-01

    We monitor the shape dynamics of individual E. coli cells using time-lapse microscopy together with accurate image analysis. This allows measuring the dynamics of single-cell parameters throughout the cell cycle. In previous work, we have used this approach to characterize the main features of single-cell morphogenesis between successive divisions. Here, we focus on the behavior of the parameters that are related to cell division and study their variation over a population of 30 cells. In particular, we show that the single-cell data for the constriction width dynamics collapse onto a unique curve following appropriate rescaling of the corresponding variables. This suggests the presence of an underlying time scale that determines the rate at which the cell cycle advances in each individual cell. For the case of cell length dynamics a similar rescaling of variables emphasizes the presence of a breakpoint in the growth rate at the time when division starts, τc. We also find that the τc of individual cells is correlated with their generation time, τg, and inversely correlated with the corresponding length at birth, L0. Moreover, the extent of the T-period, τg - τc, is apparently independent of τg. The relations between τc, τg and L0 indicate possible compensation mechanisms that maintain cell length variability at about 10%. Similar behavior was observed for both fast-growing cells in a rich medium (LB) and for slower growth in a minimal medium (M9-glucose). To reveal the molecular mechanisms that lead to the observed organization of the cell cycle, we should further extend our approach to monitor the formation of the divisome.

  17. Transforming Functions by Rescaling Axes

    ERIC Educational Resources Information Center

    Ferguson, Robert

    2017-01-01

    Students are often asked to plot a generalised parent function from their knowledge of a parent function. One approach is to sketch the parent function, choose a few points on the parent function curve, transform and plot these points, and use the transformed points as a guide to sketching the generalised parent function. Another approach is to…

  18. "Representing Your Country": Scotland, PISA and New Spatialities of Educational Governance

    ERIC Educational Resources Information Center

    Lingard, Bob; Sellar, Sam

    2014-01-01

    This paper focuses on the rescaling and re-spatialization of policy and governance in education, including the constitution of a global education policy field. It deals with the changing education policy work of the OECD, particularly the influential Programme for International Student Assessment (PISA). We argue that PISA has become the most…

  19. Some Factor Analytic Approximations to Latent Class Structure.

    ERIC Educational Resources Information Center

    Dziuban, Charles D.; Denton, William T.

    Three procedures, alpha, image, and uniqueness rescaling, were applied to a joint occurrence probability matrix. That matrix was the basis of a well-known latent class structure. The values of the recurring subscript elements were varied as follows: Case 1 - The known elements were input; Case 2 - The upper bounds to the recurring subscript…

  20. Rescaling the complementary relationship for land surface evaporation

    NASA Astrophysics Data System (ADS)

    Crago, R.; Szilagyi, J.; Qualls, R.; Huntington, J.

    2016-11-01

    Recent research into the complementary relationship (CR) between actual and apparent potential evaporation has resulted in numerous alternative forms for the CR. Inspired by Brutsaert (2015), who derived a general CR in the form y = function (x), where x is the ratio of potential evaporation to apparent potential evaporation and y is the ratio of actual to apparent potential evaporation, an equation is proposed to calculate the value of x at which y goes to zero, denoted xmin. The value of xmin varies even at an individual observation site, but can be calculated using only the data required for the Penman (1948) equation as expressed here, so no calibration of xmin is required. It is shown that the scatter in x-y plots using experimental data is reduced when x is replaced by X = (x - xmin)/(1 - xmin). This rescaling results in data falling along the line y = X, which is proposed as a new version of the CR. While a reinterpretation of the fundamental boundary conditions proposed by Brutsaert (2015) is required, the physical constraints behind them are still met. An alternative formulation relating y to X is also discussed.
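
    The proposed rescaling is a one-line transformation; a minimal numeric sketch, with made-up values for x and xmin, is:

      import numpy as np

      def rescale_cr(x, x_min):
          """Rescaled wetness index X = (x - x_min) / (1 - x_min)."""
          return (np.asarray(x) - np.asarray(x_min)) / (1.0 - np.asarray(x_min))

      x = np.array([0.30, 0.55, 0.80])      # potential / apparent potential evaporation
      x_min = np.array([0.20, 0.20, 0.20])  # site- and time-specific zero-y point
      X = rescale_cr(x, x_min)
      y_predicted = X                        # the rescaled complementary relationship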

  1. Vibrational properties of nanocrystals from the Debye Scattering Equation

    DOE PAGES

    Scardi, P.; Gelisio, L.

    2016-02-26

    One hundred years after the original formulation by Petrus J.W. Debije (aka Peter Debye), the Debye Scattering Equation (DSE) is still the most accurate expression to model the diffraction pattern from nanoparticle systems. A major limitation of the original form of the DSE is that it refers to a static domain, so that including thermal disorder usually requires rescaling the equation by a Debye-Waller thermal factor. The latter is taken from the traditional diffraction theory developed in Reciprocal Space (RS), which is opposed to the atomistic paradigm of the DSE, usually referred to as the Direct Space (DS) approach. Besides being a hybrid of DS and RS expressions, rescaling the DSE by the Debye-Waller factor is an approximation which completely misses the contribution of Thermal Diffuse Scattering (TDS). The present work proposes a solution to include thermal effects coherently with the atomistic approach of the DSE. A deeper insight into the vibrational dynamics of nanostructured materials can thereby be obtained with few changes with respect to the standard formulation of the DSE, providing information on the correlated displacement of vibrating atoms.
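
    For orientation, the static DSE and the conventional Debye-Waller rescaling it criticizes can be sketched as follows; the atomic form factor is taken as a constant and the mean-square displacement u2 is an assumed illustrative value.

      import numpy as np

      def debye_scattering(q, coords, f=1.0):
          """Static DSE: I(q) = sum_ij f^2 sin(q r_ij) / (q r_ij)."""
          q = np.asarray(q, dtype=float)
          d = coords[:, None, :] - coords[None, :, :]
          r = np.sqrt((d ** 2).sum(-1))
          I = np.empty_like(q)
          for k, qk in enumerate(q):
              I[k] = (f * f * np.sinc(qk * r / np.pi)).sum()  # sinc handles r = 0
          return I

      def dw_rescaled(q, intensity, u2=0.01):
          # Conventional Debye-Waller rescaling, the hybrid approximation the
          # record criticizes; u2 = <u^2> is an assumed illustrative value.
          return intensity * np.exp(-np.asarray(q) ** 2 * u2)

      q = np.linspace(0.5, 10.0, 200)                               # 1/angstrom
      coords = np.random.default_rng(0).normal(0.0, 5.0, (100, 3))  # toy cluster
      I_dw = dw_rescaled(q, debye_scattering(q, coords), u2=0.02)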

  2. Phenomenology of stochastic exponential growth

    NASA Astrophysics Data System (ADS)

    Pirjol, Dan; Jafarpour, Farshid; Iyer-Biswas, Srividya

    2017-06-01

    Stochastic exponential growth is observed in a variety of contexts, including molecular autocatalysis, nuclear fission, population growth, inflation of the universe, viral social media posts, and financial markets. Yet literature on modeling the phenomenology of these stochastic dynamics has predominantly focused on one model, geometric Brownian motion (GBM), which can be described as the solution of a Langevin equation with linear drift and linear multiplicative noise. Using recent experimental results on stochastic exponential growth of individual bacterial cell sizes, we motivate the need for a more general class of phenomenological models of stochastic exponential growth, which are consistent with the observation that the mean-rescaled distributions are approximately stationary at long times. We show that this behavior is not consistent with GBM, instead it is consistent with power-law multiplicative noise with positive fractional powers. Therefore, we consider this general class of phenomenological models for stochastic exponential growth, provide analytical solutions, and identify the important dimensionless combination of model parameters, which determines the shape of the mean-rescaled distribution. We also provide a prescription for robustly inferring model parameters from experimentally observed stochastic growth trajectories.
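
    A minimal simulation sketch of this model class is given below, assuming a Langevin equation dX = aX dt + bX^α dW integrated by Euler-Maruyama; parameter values are illustrative, and the positivity clipping is a numerical convenience rather than part of the model.

      import numpy as np

      def simulate_growth(a=1.0, b=0.3, alpha=0.75, x0=1.0, T=5.0, dt=1e-3,
                          n_traj=2000, seed=None):
          """Euler-Maruyama for dX = a X dt + b X^alpha dW (alpha = 1 is GBM)."""
          rng = np.random.default_rng(seed)
          x = np.full(n_traj, x0)
          for _ in range(int(T / dt)):
              dw = rng.normal(0.0, np.sqrt(dt), n_traj)
              x = x + a * x * dt + b * x ** alpha * dw
              x = np.maximum(x, 1e-12)   # keep trajectories positive (sketch only)
          return x

      x_end = simulate_growth(seed=0)
      rescaled = x_end / x_end.mean()
      # For alpha < 1 the mean-rescaled distribution approaches a stationary
      # shape at long times, unlike GBM (alpha = 1), as argued in the record.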

  3. Dissipative gravitational bouncer on a vibrating surface

    NASA Astrophysics Data System (ADS)

    Espinoza Ortiz, J. S.; Lagos, R. E.

    2017-12-01

    We study the dynamical behavior of a particle flying under the influence of a gravitational field, with dissipation constant λ (Stokes-like), colliding successive times against a rigid surface vibrating harmonically with restitution coefficient α. We define re-scaled dimensionless dynamical variables, such as the relative particle velocity Ω with respect to the surface’s velocity; and the real parameter τ accounting for the temporal evolution of the system. At the particle-surface contact point and for the k‧th collision, we construct the mapping described by (τk ; Ω k ) in order to analyze the system’s nonlinear dynamical behavior. From the dynamical mapping, the fixed point trajectory is computed and its stability is analyzed. We find the dynamical behavior of the fixed point trajectory to be stable or unstable, depending on the values of the re-scaled vibrating surface amplitude Γ, the restitution coefficient α and the damping constant λ. Other important dynamical aspects such as the phase space volume and the one cycle vibrating surface (decomposed into absorbing and transmitting regions) are also discussed. Furthermore, the model rescues well known results in the limit λ = 0.

  4. Multi-Frequency Signal Detection Based on Frequency Exchange and Re-Scaling Stochastic Resonance and Its Application to Weak Fault Diagnosis.

    PubMed

    Liu, Jinjun; Leng, Yonggang; Lai, Zhihui; Fan, Shengbo

    2018-04-25

    Mechanical fault diagnosis usually requires not only identification of the fault characteristic frequency, but also detection of its second and/or higher harmonics. However, it is difficult to detect a multi-frequency fault signal through the existing Stochastic Resonance (SR) methods, because the characteristic frequency of the fault signal as well as its second and higher harmonics frequencies tend to be large parameters. To solve the problem, this paper proposes a multi-frequency signal detection method based on Frequency Exchange and Re-scaling Stochastic Resonance (FERSR). In the method, frequency exchange is implemented using filtering technique and Single SideBand (SSB) modulation. This new method can overcome the limitation of "sampling ratio" which is the ratio of the sampling frequency to the frequency of target signal. It also ensures that the multi-frequency target signals can be processed to meet the small-parameter conditions. Simulation results demonstrate that the method shows good performance for detecting a multi-frequency signal with low sampling ratio. Two practical cases are employed to further validate the effectiveness and applicability of this method.
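
    The frequency-exchange step can be sketched with a generic analytic-signal (SSB-style) frequency shift, as below; this is not the authors' implementation, and the band edges, fault frequency, and shift target are assumed values.

      import numpy as np
      from scipy.signal import hilbert, butter, filtfilt

      def frequency_shift(x, f_shift, fs):
          """Shift a band-limited real signal down by f_shift Hz using the
          analytic signal (an SSB-modulation-style frequency exchange)."""
          t = np.arange(len(x)) / fs
          analytic = hilbert(x)               # suppresses negative frequencies
          return np.real(analytic * np.exp(-2j * np.pi * f_shift * t))

      # Isolate a band around the large target frequency, then shift it down
      # so it satisfies the small-parameter SR conditions.
      fs, f0 = 2000.0, 180.0                  # sampling rate, fault frequency (assumed)
      t = np.arange(0, 2.0, 1.0 / fs)
      x = np.sin(2 * np.pi * f0 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)
      bb, aa = butter(4, [f0 - 10.0, f0 + 10.0], btype="band", fs=fs)
      x_band = filtfilt(bb, aa, x)
      x_low = frequency_shift(x_band, f0 - 2.0, fs)   # target now sits near 2 Hz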

  5. Linear and nonlinear characteristics of the runoff response to regional climate factors in the Qira River basin, Xinjiang, Northwest China.

    PubMed

    Xue, Jie; Gui, Dongwei

    2015-01-01

    The inland river watersheds of arid Northwest China represent an example of how, in recent times, climatic warming has increased the complexity of Earth's hydrological processes. In the present study, the linear and nonlinear characteristics of the runoff response to temperature and precipitation were investigated in the Qira River basin, located on the northern slope of the Kunlun Mountains. The results showed that average temperature on annual and seasonal scales has displayed a significantly increasing trend, but this has not been reflected in accumulated precipitation and runoff. Using path analysis, a positive link between precipitation and runoff was found both annually and in the summer season. Conversely, it was found that the impact of temperature on runoff has been negative since the 1960s, attributable to higher evaporation and infiltration in the Qira River basin. Over the past 50 years, abrupt changes in annual temperature, precipitation and runoff occurred in 1997, 1987 and 1995, respectively. Combined with analysis using the correlation dimension method, it was found that temperature, precipitation and runoff, both annually and seasonally, possess chaotic dynamic characteristics, implying that additional variables must be introduced into models to describe the dynamics of these complex hydro-climatic processes. In addition, rescaled range analysis indicates a consistent annual and seasonal decreasing trend in runoff under future increasing temperature and precipitation conditions, which should be taken into account. This work may provide a theoretical perspective that can be applied to the proper use and management of oasis water resources in the lower reaches of river basins like that of the Qira River.
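
    Since rescaled range (R/S) analysis is the method invoked here, a minimal self-contained sketch of the Hurst-exponent estimate is given below; the dyadic window schedule and minimum window size are conventional choices, not specifics from the study.

      import numpy as np

      def rescaled_range(series):
          """R/S statistic of one window: range of cumulative deviations over std."""
          x = np.asarray(series, dtype=float)
          z = np.cumsum(x - x.mean())
          s = x.std(ddof=0)
          return (z.max() - z.min()) / s if s > 0 else np.nan

      def hurst_exponent(series, min_window=8):
          """Estimate H from the slope of log(R/S) versus log(window length)."""
          x = np.asarray(series, dtype=float)
          sizes, rs = [], []
          w = min_window
          while w <= len(x) // 2:
              chunks = [x[i:i + w] for i in range(0, len(x) - w + 1, w)]
              rs.append(np.nanmean([rescaled_range(c) for c in chunks]))
              sizes.append(w)
              w *= 2
          slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
          return slope   # H > 0.5: persistence; H < 0.5: anti-persistence

      # e.g. hurst_exponent(annual_runoff_anomalies) > 0.5 would support a
      # persistent runoff trend of the kind inferred in the study above.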

  6. Linear and nonlinear characteristics of the runoff response to regional climate factors in the Qira River basin, Xinjiang, Northwest China

    PubMed Central

    Xue, Jie

    2015-01-01

    The inland river watersheds of arid Northwest China represent an example of how, in recent times, climatic warming has increased the complexity of Earth’s hydrological processes. In the present study, the linear and nonlinear characteristics of the runoff response to temperature and precipitation were investigated in the Qira River basin, located on the northern slope of the Kunlun Mountains. The results showed that average temperature on annual and seasonal scales has displayed a significantly increasing trend, but this has not been reflected in accumulated precipitation and runoff. Using path analysis, a positive link between precipitation and runoff was found both annually and in the summer season. Conversely, it was found that the impact of temperature on runoff has been negative since the 1960s, attributable to higher evaporation and infiltration in the Qira River basin. Over the past 50 years, abrupt changes in annual temperature, precipitation and runoff occurred in 1997, 1987 and 1995, respectively. Combined with analysis using the correlation dimension method, it was found that temperature, precipitation and runoff, both annually and seasonally, possess chaotic dynamic characteristics, implying that additional variables must be introduced into models to describe the dynamics of these complex hydro-climatic processes. In addition, rescaled range analysis indicates a consistent annual and seasonal decreasing trend in runoff under future increasing temperature and precipitation conditions, which should be taken into account. This work may provide a theoretical perspective that can be applied to the proper use and management of oasis water resources in the lower reaches of river basins like that of the Qira River. PMID:26244113

  7. Using the Graded Response Model to Control Spurious Interactions in Moderated Multiple Regression

    ERIC Educational Resources Information Center

    Morse, Brendan J.; Johanson, George A.; Griffeth, Rodger W.

    2012-01-01

    Recent simulation research has demonstrated that using simple raw score to operationalize a latent construct can result in inflated Type I error rates for the interaction term of a moderated statistical model when the interaction (or lack thereof) is proposed at the latent variable level. Rescaling the scores using an appropriate item response…

  8. Relative Performance of Rescaling and Resampling Approaches to Model Chi Square and Parameter Standard Error Estimation in Structural Equation Modeling.

    ERIC Educational Resources Information Center

    Nevitt, Johnathan; Hancock, Gregory R.

    Though common structural equation modeling (SEM) methods are predicated upon the assumption of multivariate normality, applied researchers often find themselves with data clearly violating this assumption and without sufficient sample size to use distribution-free estimation methods. Fortunately, promising alternatives are being integrated into…

  9. pyMOOGi - python wrapper for MOOG

    NASA Astrophysics Data System (ADS)

    Adamow, Monika M.

    2017-06-01

    pyMOOGi is a python wrapper for MOOG. It allows MOOG to be used in the classical, interactive way, but with all graphics handled by python libraries. Some MOOG features have been redesigned, such as plotting with the abfind driver. In addition, new functions have been added, such as automatic rescaling of the stellar spectrum for the synth driver. pyMOOGi is an open source project.

  10. Rescaling the Local: Multi-Academy Trusts, Private Monopoly and Statecraft in England

    ERIC Educational Resources Information Center

    Wilkins, Andrew

    2017-01-01

    For the past six years successive UK governments in England have introduced reforms intended to usher in less aggregated, top-down, bureaucratically overloaded models of service delivery. Yet the "hollowing out" of local government has not resulted in less bureaucracy on the ground or less regulation from above, nor has it diminished…

  11. Towards a Topological Re-Assemblage of Education Policy? Observing the Implementation of Performance Data Infrastructures and "Centers of Calculation" in Germany

    ERIC Educational Resources Information Center

    Hartong, Sigrid

    2018-01-01

    The ongoing trend towards educational globalisation has brought about various dynamics of education policy "rescaling," resulting in a growing number of governmental arrangements, which are operating across traditional scales, levels or sectors of policy. This contribution takes up the conceptual frameworks of topological spatialisation…

  12. Electron Alfvén waves in collisionless magnetic reconnection with a guide field

    NASA Astrophysics Data System (ADS)

    Zhao, S.; Wang, X.; Xiao, C.; Pu, Z.

    2017-12-01

    It is well known that many wave modes may be related to important reconnection issues, such as particle acceleration, the reconnection trigger, the reconnection rate, etc. Here a new wave mode, the electron Alfvén wave, is introduced for the first time, with both theoretical derivations and observational data analysis. First, we present a theoretical derivation of the dispersion relations of the electron Alfvén mode in a rescaled 'Electron Fluid' model. Second, based on in situ measurements of the Magnetospheric Multiscale Mission (MMS) spacecraft, an electron Alfvén wave is identified in the electron dissipation region of a reconnection event at the magnetopause. In the last part, the excitation of electron Alfvén waves and some related reconnection issues are discussed.

  13. Mean-field theory of active electrolytes: Dynamic adsorption and overscreening

    NASA Astrophysics Data System (ADS)

    Frydel, Derek; Podgornik, Rudolf

    2018-05-01

    We investigate active electrolytes within the mean-field level of description. The focus is on how the double-layer structure of passive, thermalized charges is affected by active dynamics of constituting ions. One feature of active dynamics is that particles adhere to hard surfaces, regardless of chemical properties of a surface and specifically in complete absence of any chemisorption or physisorption. To carry out the mean-field analysis of the system that is out of equilibrium, we develop the "mean-field simulation" technique, where the simulated system consists of charged parallel sheets moving on a line and obeying active dynamics, with the interaction strength rescaled by the number of sheets. The mean-field limit becomes exact in the limit of an infinite number of movable sheets.

  14. Analysis of dynamic cerebral autoregulation using an ARX model based on arterial blood pressure and middle cerebral artery velocity simulation.

    PubMed

    Liu, Y; Allen, R

    2002-09-01

    The study aimed to model the cerebrovascular system, using a linear ARX model based on data simulated by a comprehensive physiological model, and to assess the range of applicability of linear parametric models. Arterial blood pressure (ABP) and middle cerebral arterial blood flow velocity (MCAV) were measured non-invasively from 11 subjects, following step changes in ABP, using the thigh cuff technique. By optimising parameters associated with autoregulation, using a non-linear optimisation technique, the physiological model showed a good performance (r = 0.83 ± 0.14) in fitting MCAV. An additional five sets of measured ABP of length 236 ± 154 s were acquired from a subject at rest. These were normalised and rescaled to coefficients of variation (CV = SD/mean) of 2% and 10% for model comparisons. Randomly generated Gaussian noise with standard deviation (SD) from 1% to 5% was added to both ABP and the physiologically simulated MCAV (SMCAV), with 'normal' and 'impaired' cerebral autoregulation, to simulate real measurement conditions. ABP and SMCAV were fitted by ARX modelling, and cerebral autoregulation was quantified by a 5 s recovery percentage R5% of the step responses of the ARX models. The study suggests that cerebral autoregulation can be assessed by computing the R5% of the step response of an ARX model of appropriate order, even when measurement noise is considerable.
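
    A minimal least-squares ARX sketch in the spirit of this study is shown below; the model orders are illustrative, and the R5% index would then be read off the simulated step response at t = 5 s given the sampling interval.

      import numpy as np

      def fit_arx(u, y, na=2, nb=3):
          """Least-squares ARX fit: y[t] = sum_i a_i y[t-i] + sum_j b_j u[t-j]."""
          u, y = np.asarray(u, float), np.asarray(y, float)
          n0 = max(na, nb)
          rows = [np.concatenate([y[t - na:t][::-1], u[t - nb + 1:t + 1][::-1]])
                  for t in range(n0, len(y))]
          theta, *_ = np.linalg.lstsq(np.array(rows), y[n0:], rcond=None)
          return theta[:na], theta[na:]

      def step_response(a, b, n_steps):
          """Response of the fitted ARX model to a unit step in the input u."""
          y = np.zeros(n_steps)
          for t in range(n_steps):
              ar = sum(a[i] * y[t - 1 - i] for i in range(len(a)) if t - 1 - i >= 0)
              ex = sum(b[j] for j in range(len(b)) if t - j >= 0)  # u == 1 after step
              y[t] = ar + ex
          return y

      # With sampling interval dt (s), the 5 s recovery percentage is read off
      # the step response at sample index int(round(5.0 / dt)).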

  15. Constraining the equation of state with identified particle spectra

    NASA Astrophysics Data System (ADS)

    Monnai, Akihiko; Ollitrault, Jean-Yves

    2017-10-01

    We show that in a central nucleus-nucleus collision, the variation of the mean transverse mass with the multiplicity is determined, up to a rescaling, by the variation of the energy over entropy ratio as a function of the entropy density, thus providing a direct link between experimental data and the equation of state. Each colliding energy thus probes the equation of state at an effective entropy density, whose approximate value is 19 fm⁻³ for Au+Au collisions at 200 GeV and 41 fm⁻³ for Pb+Pb collisions at 2.76 TeV, corresponding to temperatures of 227 and 279 MeV if the equation of state is taken from lattice calculations. The relative change of the mean transverse mass as a function of the colliding energy gives a direct measure of the pressure over energy density ratio P/ɛ at the corresponding effective density. Using Relativistic Heavy Ion Collider (RHIC) and Large Hadron Collider (LHC) data, we obtain P/ɛ = 0.21 ± 0.10, in agreement with the lattice value P/ɛ = 0.23 in the corresponding temperature range. Measurements over a wide range of colliding energies using a single detector with good particle identification would help reduce the error.

  16. Interaction of chiral rafts in self-assembled colloidal membranes

    NASA Astrophysics Data System (ADS)

    Xie, Sheng; Hagan, Michael F.; Pelcovits, Robert A.

    2016-03-01

    Colloidal membranes are monolayer assemblies of rodlike particles that capture the long-wavelength properties of lipid bilayer membranes on the colloidal scale. Recent experiments on colloidal membranes formed by chiral rodlike viruses showed that introducing a second species of virus with different length and opposite chirality leads to the formation of rafts—micron-sized domains of one virus species floating in a background of the other viruses [Sharma et al., Nature (London) 513, 77 (2014), 10.1038/nature13694]. In this article we study the interaction of such rafts using liquid crystal elasticity theory. By numerically minimizing the director elastic free energy, we predict the tilt angle profile for both a single raft and two rafts in a background membrane, and the interaction between two rafts as a function of their separation. We find that the chiral penetration depth in the background membrane sets the scale for the range of the interaction. We compare our results with the experimental data and find good agreement for the strength and range of the interaction. Unlike the experiments, however, we do not observe a complete collapse of the data when rescaled by the tilt angle at the raft edge.

  17. Self-organized criticality: An interplay between stable and turbulent regimes of multiple anodic double layers in glow discharge plasma

    NASA Astrophysics Data System (ADS)

    Alex, Prince; Carreras, Benjamin Andres; Arumugam, Saravanan; Sinha, Suraj Kumar

    2018-05-01

    The role of self-organized criticality (SOC) in the transformation of multiple anodic double layers (MADLs) from the stable to the turbulent regime has been investigated experimentally as the system approaches critical behavior. The experiment was performed in a modified glow discharge plasma setup, and the initial stable state of MADL comprising three concentric perceptible layers was produced when the drift velocity of electrons towards the anode exceeds the electron thermal velocity (νd ≥ 1.3νte). The macroscopic arrangement of both positive and negative charges in opposite layers of MADL is attributed to the self-organization scenario. Beyond νd ≥ 3νte, MADL begins to collapse and approaches critical and supercritical states through layer reduction, which continues until the last remaining layer of the double layer is transformed into a highly unstable radiant anode glow. The avalanche resulting from the collapse of MADL leads to the rise of turbulence in the system. Long-range correlations, a key signature of SOC, have been explored in the turbulent floating potential fluctuations using the rescaled-range analysis technique. The results show the existence of a self-similarity regime, with self-similarity parameter H varying between 0.55 and 0.91 for time lags longer than the decorrelation time. The power-law tail in the rank function, the slowly decaying tail of the autocorrelation function, and the 1/f behavior of the power spectra of the fluctuations are consistent with the fact that SOC plays a conclusive role in the transformation of MADL from the stable to the turbulent regime. Since the existence of SOC gives a measure of complexity in the system, the result provides the condition under which complexity arises in cold plasma.

  18. Fokker-Planck description for the queue dynamics of large tick stocks.

    PubMed

    Garèche, A; Disdier, G; Kockelkoren, J; Bouchaud, J-P

    2013-09-01

    Motivated by empirical data, we develop a statistical description of the queue dynamics for large tick assets based on a two-dimensional Fokker-Planck (diffusion) equation. Our description explicitly includes state dependence, i.e., the fact that the drift and diffusion depend on the volume present on both sides of the spread. "Jump" events, corresponding to sudden changes of the best limit price, must also be included as birth-death terms in the Fokker-Planck equation. All quantities involved in the equation can be calibrated using high-frequency data on the best quotes. One of our central findings is that the dynamical process is approximately scale invariant, i.e., the only relevant variable is the ratio of the current volume in the queue to its average value. While the latter shows intraday seasonalities and strong variability across stocks and time periods, the dynamics of the rescaled volumes is universal. In terms of rescaled volumes, we found that the drift has a complex two-dimensional structure, which is a sum of a gradient contribution and a rotational contribution, both stable across stocks and time. This drift term is entirely responsible for the dynamical correlations between the ask queue and the bid queue.
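
    The calibration step described here, estimating state-dependent drift and diffusion from conditional increment moments, can be sketched in one dimension as follows; bin counts and thresholds are illustrative, and the paper works with the full two-dimensional bid/ask state.

      import numpy as np

      def conditional_drift_diffusion(v, dt, n_bins=30):
          """Bin a rescaled-volume series and estimate state-dependent drift and
          diffusion from conditional increment moments (1D slice of the 2D case)."""
          v = np.asarray(v, dtype=float)
          dv = np.diff(v)
          state = v[:-1]
          edges = np.linspace(state.min(), state.max(), n_bins + 1)
          idx = np.clip(np.digitize(state, edges) - 1, 0, n_bins - 1)
          drift = np.full(n_bins, np.nan)
          diff = np.full(n_bins, np.nan)
          for k in range(n_bins):
              inc = dv[idx == k]
              if inc.size > 10:
                  drift[k] = inc.mean() / dt           # F(v) ~ <dV | V=v> / dt
                  diff[k] = inc.var() / (2.0 * dt)     # D(v) ~ Var(dV | V=v) / 2dt
          centers = 0.5 * (edges[:-1] + edges[1:])
          return centers, drift, diff

      # Applied to ask- and bid-queue volumes rescaled by their means, this is
      # the kind of calibration from best-quote data the record describes.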

  19. Fokker-Planck description for the queue dynamics of large tick stocks

    NASA Astrophysics Data System (ADS)

    Garèche, A.; Disdier, G.; Kockelkoren, J.; Bouchaud, J.-P.

    2013-09-01

    Motivated by empirical data, we develop a statistical description of the queue dynamics for large tick assets based on a two-dimensional Fokker-Planck (diffusion) equation. Our description explicitly includes state dependence, i.e., the fact that the drift and diffusion depend on the volume present on both sides of the spread. “Jump” events, corresponding to sudden changes of the best limit price, must also be included as birth-death terms in the Fokker-Planck equation. All quantities involved in the equation can be calibrated using high-frequency data on the best quotes. One of our central findings is that the dynamical process is approximately scale invariant, i.e., the only relevant variable is the ratio of the current volume in the queue to its average value. While the latter shows intraday seasonalities and strong variability across stocks and time periods, the dynamics of the rescaled volumes is universal. In terms of rescaled volumes, we found that the drift has a complex two-dimensional structure, which is a sum of a gradient contribution and a rotational contribution, both stable across stocks and time. This drift term is entirely responsible for the dynamical correlations between the ask queue and the bid queue.

  20. Lumen-based detection of prostate cancer via convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Kwak, Jin Tae; Hewitt, Stephen M.

    2017-03-01

    We present a deep learning approach for detecting prostate cancers. The approach consists of two steps. In the first step, we perform tissue segmentation that identifies lumens within digitized prostate tissue specimen images. Intensity- and texture-based image features are computed at five different scales, and a multiview boosting method is adopted to cooperatively combine the image features from differing scales and to identify lumens. In the second step, we utilize convolutional neural networks (CNN) to automatically extract high-level image features of lumens and to predict cancers. The segmented lumens are rescaled to reduce computational complexity and data augmentation by scaling, rotating, and flipping the rescaled image is applied to avoid overfitting. We evaluate the proposed method using two tissue microarrays (TMA) - TMA1 includes 162 tissue specimens (73 Benign and 89 Cancer) and TMA2 comprises 185 tissue specimens (70 Benign and 115 Cancer). In cross-validation on TMA1, the proposed method achieved an AUC of 0.95 (CI: 0.93-0.98). Trained on TMA1 and tested on TMA2, CNN obtained an AUC of 0.95 (CI: 0.92-0.98). This demonstrates that the proposed method can potentially improve prostate cancer pathology.
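
    The augmentation described (rotating and flipping the rescaled lumen images) amounts to the eight dihedral transforms of each patch; a minimal sketch, with hypothetical variable names:

      import numpy as np

      def augment_dihedral(img):
          """Eight augmented copies of a (rescaled) lumen image: 4 rotations x flip."""
          out = []
          for k in range(4):
              r = np.rot90(img, k)
              out.append(r)
              out.append(np.fliplr(r))
          return out

      # e.g. patches = [p for img in rescaled_lumens for p in augment_dihedral(img)]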

  1. A new algorithm for detection of apnea in infants in neonatal intensive care units

    NASA Astrophysics Data System (ADS)

    Lee, Hoshik; Vergales, Brooke; Paget-Brown, Alix; Rusin, Craig; Moorman, Randall; Kattwinkel, John; Delos, John

    2011-03-01

    Apnea is a very common problem for premature infants: apnea of prematurity (AOP) occurs in >50% of babies whose birth weight is less than 1500 g, and AOP is found in almost all babies who are < 1000 g at birth. Current respiration detectors often fail to detect apnea and also give many false alarms. We have created a new algorithm for detection of apnea. Respiration is monitored by continuous measurement of chest impedance (CI). However, the pulsing of the heart also causes fluctuations in CI. We developed a new adaptive filtering system to remove heart activity from CI, thereby giving much more reliable measurements of respiration. The new approach is to rescale the impedance measurement to heartbeat time, sampling 30 times per interbeat interval. We take the Fourier transform of the rescaled signal, apply a bandstop filter at 1 cycle per beat to remove fluctuations due to heartbeats, and then take the inverse transform. The filtered signal retains all properties except the impedance changes due to cardiac filling and emptying. We convert the variance of CI into an estimated likelihood of apnea. This work is supported by NICHD 5RCZHD064488.
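
    A minimal sketch of the heartbeat-time filtering idea is given below, assuming beat times are already detected; the interpolation-based time rescaling and the stop-band half-width are illustrative choices, not details from the record.

      import numpy as np

      def remove_cardiac_artifact(ci, t, beat_times, samples_per_beat=30, half_width=0.2):
          """Filter the heartbeat artifact out of chest impedance (CI) by the
          rescale-to-heartbeat-time idea above; half_width is in cycles/beat."""
          # 1) Rescale to heartbeat time: express each sample as a beat phase,
          #    then resample uniformly at 30 samples per interbeat interval.
          phase = np.interp(t, beat_times, np.arange(len(beat_times), dtype=float))
          grid = np.arange(phase[0], phase[-1], 1.0 / samples_per_beat)
          ci_beat = np.interp(grid, phase, ci)
          # 2) Bandstop at 1 cycle per beat in the Fourier domain.
          spec = np.fft.rfft(ci_beat)
          f = np.fft.rfftfreq(len(ci_beat), d=1.0 / samples_per_beat)  # cycles/beat
          spec[np.abs(f - 1.0) < half_width] = 0.0
          ci_clean = np.fft.irfft(spec, n=len(ci_beat))
          # 3) Map the cleaned signal back onto the original time grid.
          return np.interp(phase, grid, ci_clean)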

  2. Multi-Frequency Signal Detection Based on Frequency Exchange and Re-Scaling Stochastic Resonance and Its Application to Weak Fault Diagnosis

    PubMed Central

    Leng, Yonggang; Fan, Shengbo

    2018-01-01

    Mechanical fault diagnosis usually requires not only identification of the fault characteristic frequency, but also detection of its second and/or higher harmonics. However, it is difficult to detect a multi-frequency fault signal through the existing Stochastic Resonance (SR) methods, because the characteristic frequency of the fault signal as well as its second and higher harmonics frequencies tend to be large parameters. To solve the problem, this paper proposes a multi-frequency signal detection method based on Frequency Exchange and Re-scaling Stochastic Resonance (FERSR). In the method, frequency exchange is implemented using filtering technique and Single SideBand (SSB) modulation. This new method can overcome the limitation of "sampling ratio" which is the ratio of the sampling frequency to the frequency of target signal. It also ensures that the multi-frequency target signals can be processed to meet the small-parameter conditions. Simulation results demonstrate that the method shows good performance for detecting a multi-frequency signal with low sampling ratio. Two practical cases are employed to further validate the effectiveness and applicability of this method. PMID:29693577

  3. Recalibration in functional perceptual-motor tasks: A systematic review.

    PubMed

    Brand, Milou Tessa; de Oliveira, Rita Ferraz

    2017-12-01

    Skilled actions are the result of a perceptual-motor system being well-calibrated to the appropriate information variables. Changes to the perceptual or motor system initiate recalibration, which is the rescaling of the perceptual-motor system to informational variables. For example, a professional baseball player may need to rescale their throws due to fatigue. The aim of this systematic review is to analyse how recalibration can be and has been measured, and to evaluate the literature on recalibration. Five databases were systematically screened to identify literature reporting experiments where a disturbance was applied to the perceptual-motor system in functional perceptual-motor tasks. Each of the 91 experiments reported the immediate effects of a disturbance and/or the effects of removing that disturbance after recalibration. The results showed that experiments applied disturbances to either perception or action, and used either direct or indirect measures of recalibration. In contrast with previous conclusions, active exploration was only sufficient for fast recalibration when the relevant information source was available. Further research into recalibration mechanisms should include the study of information sources as well as skill expertise.

  4. Implications of Secondary Aftershocks for Failure Processes

    NASA Astrophysics Data System (ADS)

    Gross, S. J.

    2001-12-01

    When a seismic sequence with more than one mainshock or an unusually large aftershock occurs, there is a compound aftershock sequence. The secondary aftershocks need not have exactly the same decay as the primary sequence, and the differences have implications for the failure process. When the stress step from the secondary mainshock is positive but not large enough to cause immediate failure of all the remaining primary aftershocks, failure processes which involve accelerating slip will produce secondary aftershocks that decay more rapidly than primary aftershocks. This is because the primary aftershocks are an accelerated version of the background seismicity, and secondary aftershocks are an accelerated version of the primary aftershocks. Real stress perturbations may be negative, and heterogeneities in mainshock stress fields mean that the real-world situation is quite complicated. I will first describe and verify my picture of secondary aftershock decay with reference to a simple numerical model of slipping faults which obeys rate- and state-dependent friction and lacks stress heterogeneity. With such a model, it is possible to generate secondary aftershock sequences with perturbed decay patterns, quantify those patterns, and develop an analysis technique capable of correcting for the effect in real data. The secondary aftershocks are defined in terms of the frequency-linearized time s(T), equal to the number of primary aftershocks expected by a time T: s(T) ≡ ∫_{t=0}^{T} n(t) dt, where the start time t = 0 is the time of the primary mainshock, and the primary aftershock decay function n(t) is extrapolated forward to the times of the secondary aftershocks. In the absence of secondary sequences the function s(T) rescales the time so that approximately one event occurs per new time unit; the aftershock sequence is gone. If this rescaling is applied in the presence of a secondary sequence, the secondary sequence is shaped like a primary aftershock sequence, and can be fit by the same modeling techniques applied to simple sequences. The later part of the presentation will concern the decay of Hector Mine aftershocks as influenced by the Landers aftershocks. Although attempts to predict the abundance of Hector aftershocks based on stress overlap analysis are not very successful, the analysis does a good job of fitting the decay of secondary sequences.
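
    If the primary decay n(t) is modeled by the modified Omori law, the frequency-linearized time has a closed form; the sketch below assumes that parametrization, with K, c, p as illustrative fit parameters.

      import numpy as np

      def omori_rate(t, K, c, p):
          """Modified Omori law for the primary aftershock rate n(t)."""
          return K / (t + c) ** p

      def frequency_linearized_time(event_times, K, c, p):
          """s(T) = integral_0^T n(t) dt, evaluated at each event time; in s-time
          a simple primary sequence occurs at roughly one event per unit."""
          t = np.asarray(event_times, dtype=float)
          if np.isclose(p, 1.0):
              return K * np.log((t + c) / c)
          return K * ((t + c) ** (1.0 - p) - c ** (1.0 - p)) / (1.0 - p)

      # A secondary sequence plotted against s rather than t takes the shape of
      # a primary aftershock sequence and can be fit with the same decay models.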

  5. A numerical study of attraction/repulsion collective behavior models: 3D particle analyses and 1D kinetic simulations

    NASA Astrophysics Data System (ADS)

    Vecil, Francesco; Lafitte, Pauline; Rosado Linares, Jesús

    2013-10-01

    We study, at the particle and kinetic levels, a collective behavior model based on three phenomena: self-propulsion, friction (Rayleigh effect) and an attractive/repulsive (Morse) potential rescaled so that the total mass of the system remains constant independently of the number of particles N. In the first part of the paper, we introduce the particle model: the agents are numbered and described by their position and velocity. We identify five parameters that govern the possible asymptotic states for this system (clumps, spheres, dispersion, mills, rigid-body rotation, flocks) and perform a numerical analysis in the 3D setting. Then, in the second part of the paper, we describe the kinetic system derived as the limit of the particle model as N tends to infinity; we propose, in 1D, a numerical scheme for the simulations, and perform a numerical analysis devoted to recovering, asymptotically, patterns similar to those emerging from the equivalent particle systems, when the particles originally evolved on a circle.

  6. Empirical analysis of online human dynamics

    NASA Astrophysics Data System (ADS)

    Zhao, Zhi-Dan; Zhou, Tao

    2012-06-01

    Patterns of human activities have attracted increasing academic interest, since the quantitative understanding of human behavior is helpful to uncover the origins of many socioeconomic phenomena. This paper focuses on the behaviors of Internet users. Six large-scale systems are studied in our experiments, including movie-watching in Netflix and MovieLens, transactions in Ebay, bookmark-collecting in Delicious, and posting in FriendFeed and Twitter. Empirical analysis reveals some common statistical features of online human behavior: (1) The total number of a user's actions, the user's activity, and the interevent time all follow heavy-tailed distributions. (2) There exists a strongly positive correlation between a user's activity and the total number of the user's actions, and a significantly negative correlation between the user's activity and the width of the interevent time distribution. We further study the rescaling method and show that it can, to some extent, eliminate differences in statistics among users caused by their different activity levels, although its effectiveness depends on the data sets.

  7. Spatial and temporal stability of temperature in the first-level basins of China during 1951-2013

    NASA Astrophysics Data System (ADS)

    Cheng, Yuting; Li, Peng; Xu, Guoce; Li, Zhanbin; Cheng, Shengdong; Wang, Bin; Zhao, Binhua

    2018-05-01

    In recent years, global warming has attracted great attention around the world. Temperature change is not only involved in global climate change but also closely linked to economic development, the ecological environment, and agricultural production. In this study, based on temperature data recorded by 756 meteorological stations in China during 1951-2013, the spatial and temporal stability characteristics of annual temperature in China and its first-level basins were investigated using the rank correlation coefficient method, the relative difference method, rescaled range (R/S) analysis, and wavelet transforms. The results showed that during 1951-2013, the spatial variation of annual temperature showed moderate variability at the national level. Among the first-level basins, the largest variation coefficient was 114% in the Songhuajiang basin and the smallest was 10% in the Huaihe basin. During 1951-2013, the spatial distribution pattern of annual temperature presented extremely strong spatial and temporal stability characteristics at the national level. The variation range of Spearman's rank correlation coefficient was 0.97-0.99, and the spatial distribution pattern of annual temperature showed an increasing trend. At the national level, the Liaohe basin, the rivers in the southwestern region, the Haihe basin, the Yellow River basin, the Yangtze River basin, the Huaihe basin, the rivers in the southeastern region, and the Pearl River basin all had representative meteorological stations for annual temperature. In the Songhuajiang basin and the rivers in the northwestern region, there was no representative meteorological station. R/S analysis, the Mann-Kendall test, and Morlet wavelet analysis of annual temperature showed that the best representative meteorological station could reflect the variation trend and the main periodic changes of annual temperature in the region. Therefore, strong temporal stability characteristics exist for annual temperature in China and its first-level basins. It is therefore feasible to estimate the annual average temperature from the annual temperature recorded by the representative meteorological station in the region. Moreover, this is of great significance for assessing average temperature changes quickly and forecasting future change tendencies in the region.
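
    Of the methods listed above, rescaled range (R/S) analysis is the one that gives this collection its theme. A minimal NumPy sketch (a generic implementation, not the authors' code) estimates the Hurst exponent by averaging R/S over non-overlapping windows and regressing log(R/S) against log(window size):

```python
import numpy as np

def rescaled_range(series):
    """R/S statistic of one window: range of the cumulative
    mean-adjusted sum divided by the standard deviation."""
    x = np.asarray(series, dtype=float)
    z = np.cumsum(x - x.mean())
    r = z.max() - z.min()
    s = x.std()
    return r / s if s > 0 else np.nan

def hurst_rs(series, min_window=8):
    """Estimate the Hurst exponent from the slope of log(R/S) vs log(window)."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    sizes, rs_vals = [], []
    w = min_window
    while w <= n // 2:
        chunks = [x[i:i + w] for i in range(0, n - w + 1, w)]
        sizes.append(w)
        rs_vals.append(np.nanmean([rescaled_range(c) for c in chunks]))
        w *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return slope  # H > 0.5 suggests a persistent (long-range correlated) series

# Synthetic stand-in for a station's temperature series:
h = hurst_rs(np.random.randn(512).cumsum())  # near 1 for integrated noise
print(h)
```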

  8. Histogram-based normalization technique on human brain magnetic resonance images from different acquisitions.

    PubMed

    Sun, Xiaofei; Shi, Lin; Luo, Yishan; Yang, Wei; Li, Hongpeng; Liang, Peipeng; Li, Kuncheng; Mok, Vincent C T; Chu, Winnie C W; Wang, Defeng

    2015-07-28

    Intensity normalization is an important preprocessing step in brain magnetic resonance image (MRI) analysis. During MR image acquisition, different scanners or parameters may be used for scanning different subjects or the same subject at a different time, which may result in large intensity variations. This intensity variation will greatly undermine the performance of subsequent MRI processing and population analysis, such as image registration, segmentation, and tissue volume measurement. In this work, we proposed a new histogram normalization method to reduce the intensity variation between MRIs obtained from different acquisitions. In our experiment, we scanned each subject twice on two different scanners using different imaging parameters. With noise estimation, the image with the lower noise level was determined and treated as the high-quality reference image. Then the histogram of the low-quality image was normalized to the histogram of the high-quality image. The normalization algorithm includes two main steps: (1) intensity scaling (IS), where, for the high-quality reference image, the intensities of the image are first rescaled to a range between the low intensity region (LIR) value and the high intensity region (HIR) value; and (2) histogram normalization (HN), where the histogram of the low-quality input image is stretched to match the histogram of the reference image, so that the intensity range in the normalized image will also lie between LIR and HIR. We performed three sets of experiments to evaluate the proposed method, i.e., image registration, segmentation, and tissue volume measurement, and compared it with an existing intensity normalization method. The results validate that our histogram normalization framework achieves better results in all the experiments. It is also demonstrated that the brain template built with normalization preprocessing is of higher quality than the template built without normalization. In summary, we have proposed a histogram-based MRI intensity normalization method. The method can normalize scans acquired on different MRI units. We have validated that the method can greatly improve image analysis performance. Furthermore, it is demonstrated that, with the help of our normalization method, we can create a higher quality Chinese brain template.
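
    The two steps lend themselves to a compact sketch. Below, intensity scaling uses placeholder LIR/HIR values and histogram normalization is implemented as standard quantile matching; this illustrates the idea under those assumptions and is not the authors' implementation:

```python
import numpy as np

def intensity_scale(img, lir=0.0, hir=255.0):
    """IS step: linearly rescale reference intensities into [LIR, HIR].
    The LIR/HIR values here are placeholders, not the paper's."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) * (hir - lir) + lir

def histogram_normalize(low_q, ref):
    """HN step: map each low-quality intensity to the reference intensity
    at the same quantile (classic histogram matching)."""
    q = np.linspace(0.0, 1.0, 256)
    src_q = np.quantile(low_q, q)
    ref_q = np.quantile(ref, q)
    return np.interp(low_q, src_q, ref_q)

# Synthetic stand-ins for the reference and low-quality scans:
ref = intensity_scale(np.random.gamma(2.0, 30.0, (64, 64)))
out = histogram_normalize(np.random.gamma(3.0, 20.0, (64, 64)), ref)
```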

  9. Element analysis: a wavelet-based method for analysing time-localized events in noisy time series.

    PubMed

    Lilly, Jonathan M

    2017-04-01

    A method is derived for the quantitative analysis of signals that are composed of superpositions of isolated, time-localized 'events'. Here, these events are taken to be well represented as rescaled and phase-rotated versions of generalized Morse wavelets, a broad family of continuous analytic functions. Analysing a signal composed of replicates of such a function using another Morse wavelet allows one to directly estimate the properties of events from the values of the wavelet transform at its own maxima. The distribution of events in general power-law noise is determined in order to establish significance based on an expected false detection rate. Finally, an expression for an event's 'region of influence' within the wavelet transform permits the formation of a criterion for rejecting spurious maxima due to numerical artefacts or other unsuitable events. Signals can then be reconstructed based on a small number of isolated points on the time/scale plane. This method, termed element analysis, is applied to the identification of long-lived eddy structures in ocean currents as observed by along-track measurements of sea surface elevation from satellite altimetry.

  10. Faraday rotation data analysis with least-squares elliptical fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Adam D.; McHale, G. Brent; Goerz, David A.

    2010-10-15

    A method of analyzing Faraday rotation data from pulsed magnetic field measurements is described. The method uses direct least-squares elliptical fitting to measured data. The least-squares fit conic parameters are used to rotate, translate, and rescale the measured data. Interpretation of the transformed data provides improved accuracy and time-resolution characteristics compared with many existing methods of analyzing Faraday rotation data. The method is especially useful when linear birefringence is present at the input or output of the sensing medium, or when the relative angle of the polarizers used in analysis is not aligned with precision; under these circumstances the method is shown to return the analytically correct input signal. The method may be pertinent to other applications where analysis of Lissajous figures is required, such as the velocity interferometer system for any reflector (VISAR) diagnostics. The entire algorithm is fully automated and requires no user interaction. An example of algorithm execution is shown, using data from a fiber-based Faraday rotation sensor on a capacitive discharge experiment.
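
    The core of such a method, a direct least-squares conic fit to the Lissajous-like (x, y) data, can be sketched as follows. This version minimizes the algebraic distance under a unit-norm constraint via the SVD, which is one common choice and not necessarily the paper's exact constraint:

```python
import numpy as np

def fit_conic(x, y):
    """Least-squares conic fit a x^2 + b xy + c y^2 + d x + e y + f = 0.
    Minimizes the algebraic distance ||D p|| subject to ||p|| = 1; the
    solution is the right singular vector of D with smallest singular value."""
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(D, full_matrices=False)
    return vt[-1]  # conic coefficients, defined up to scale

# Noisy Lissajous-like ellipse, as produced by two polarizer signals:
t = np.linspace(0, 2 * np.pi, 400)
x = 1.3 * np.cos(t) + 0.01 * np.random.randn(t.size)
y = 0.7 * np.sin(t + 0.4) + 0.01 * np.random.randn(t.size)
a, b, c, d, e, f = fit_conic(x, y)
# The conic parameters can then be used to rotate, translate, and rescale
# the measured (x, y) data back onto a unit circle.
```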

  11. GREAT: a gradient-based color-sampling scheme for Retinex.

    PubMed

    Lecca, Michela; Rizzi, Alessandro; Serapioni, Raul Paolo

    2017-04-01

    Modeling the local color spatial distribution is a crucial step for the algorithms of the Milano Retinex family. Here we present GREAT, a novel, noise-free Milano Retinex implementation based on an image-aware spatial color sampling. For each channel of a color input image, GREAT computes a 2D set of edges whose magnitude exceeds a pre-defined threshold. Then GREAT re-scales the channel intensity of each image pixel, called target, by the average of the intensities of the selected edges weighted by a function of their positions, gradient magnitudes, and intensities relative to the target. In this way, GREAT enhances the input image, adjusting its brightness, contrast and dynamic range. The use of the edges as pixels relevant to color filtering is justified by the importance that edges play in human color sensation. The name GREAT comes from the expression "Gradient RElevAnce for ReTinex," which refers to the threshold-based definition of a gradient relevance map for edge selection and thus for image color filtering.

  12. Interacting Multiscale Acoustic Vortices as Coherent Excitations in Dust Acoustic Wave Turbulence

    NASA Astrophysics Data System (ADS)

    Lin, Po-Cheng; I, Lin

    2018-03-01

    In this work, using three-dimensional intermittent dust acoustic wave turbulence in a dusty plasma as a platform and multidimensional empirical mode decomposition into different-scale modes in the 2 +1 D spatiotemporal space, we demonstrate the experimental observation of the interacting multiscale acoustic vortices, winding around wormlike amplitude hole filaments coinciding with defect filaments, as the basic coherent excitations for acoustic-type wave turbulence. For different decomposed modes, the self-similar rescaled stretched exponential lifetime histograms of amplitude hole filaments, and the self-similar power spectra of dust density fluctuations, indicate that similar dynamical rules are followed over a wide range of scales. In addition to the intermode acoustic vortex pair generation, propagation, or annihilation, the intra- and intermode interactions of acoustic vortices with the same or opposite helicity, their entanglement and synchronization, are found to be the key dynamical processes in acoustic wave turbulence, akin to the interacting multiscale vortices around wormlike cores observed in hydrodynamic turbulence.

  13. Coherent quantum dynamics in steady-state manifolds of strongly dissipative systems.

    PubMed

    Zanardi, Paolo; Campos Venuti, Lorenzo

    2014-12-12

    Recently, it has been realized that dissipative processes can be harnessed and exploited to the end of coherent quantum control and information processing. In this spirit, we consider strongly dissipative quantum systems admitting a nontrivial manifold of steady states. We show how one can enact adiabatic coherent unitary manipulations, e.g., quantum logical gates, inside this steady-state manifold by adding a weak, time-rescaled, Hamiltonian term to the system's Liouvillian. The effective long-time dynamics is governed by a projected Hamiltonian which results from the interplay between the weak unitary control and the fast relaxation process. The leakage outside the steady-state manifold entailed by the Hamiltonian term is suppressed by an environment-induced symmetrization of the dynamics. We present applications to quantum computation in decoherence-free subspaces and noiseless subsystems, and a numerical analysis of nonadiabatic errors.

  14. nCTEQ15 - Global analysis of nuclear parton distributions with uncertainties in the CTEQ framework

    DOE PAGES

    Kovarik, K.; Kusina, A.; Jezo, T.; ...

    2016-04-28

    We present the new nCTEQ15 set of nuclear parton distribution functions with uncertainties. This fit extends the CTEQ proton PDFs to include the nuclear dependence using data on nuclei all the way up to 208Pb. The uncertainties are determined using the Hessian method with an optimal rescaling of the eigenvectors to accurately represent the uncertainties for the chosen tolerance criteria. In addition to the Deep Inelastic Scattering (DIS) and Drell-Yan (DY) processes, we also include inclusive pion production data from RHIC to help constrain the nuclear gluon PDF. Here, we investigate the correlation of the data sets with specific nPDF flavor components, and assess the impact of individual experiments. We also provide comparisons of the nCTEQ15 set with recent fits from other groups.

  15. Particle identification with neural networks using a rotational invariant moment representation

    NASA Astrophysics Data System (ADS)

    Sinkus, Ralph; Voss, Thomas

    1997-02-01

    A feed-forward neural network is used to identify electromagnetic particles based upon their showering properties within a segmented calorimeter. A preprocessing procedure is applied to the spatial energy distribution of the particle shower in order to account for the varying geometry of the calorimeter. The novel feature is the expansion of the energy distribution in terms of moments of the so-called Zernike functions which are invariant under rotation. The distributions of moments exhibit very different scales, thus the multidimensional input distribution for the neural network is transformed via a principal component analysis and rescaled by its respective variances to ensure input values of the order of one. This increases the sensitivity of the network and thus results in better performance in identifying and separating electromagnetic from hadronic particles, especially at low energies.
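
    The preprocessing chain described here, principal component projection followed by rescaling with the per-component variances, amounts to PCA whitening. A generic sketch with synthetic stand-in moment data:

```python
import numpy as np

def pca_whiten(X):
    """Project features onto principal components and divide by the
    per-component standard deviation so all inputs are of order one."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    proj = Xc @ eigvec
    return proj / np.sqrt(eigval + 1e-12)  # rescale by component variances

# Hypothetical Zernike-moment features for 1000 showers, 20 moments each,
# spanning very different scales (hence the lognormal stand-in):
moments = np.random.lognormal(mean=0.0, sigma=2.0, size=(1000, 20))
net_input = pca_whiten(moments)  # roughly zero-mean, unit-variance inputs
```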

  16. Critical behavior of the ideal-gas Bose-Einstein condensation in the Apollonian network.

    PubMed

    de Oliveira, I N; dos Santos, T B; de Moura, F A B F; Lyra, M L; Serva, M

    2013-08-01

    We show that the ideal Boson gas displays a finite-temperature Bose-Einstein condensation transition in the complex Apollonian network exhibiting scale-free, small-world, and hierarchical properties. The single-particle tight-binding Hamiltonian with properly rescaled hopping amplitudes has a fractal-like energy spectrum. The energy spectrum is analytically demonstrated to be generated by a nonlinear mapping transformation. A finite-size scaling analysis over several orders of magnitude of network sizes is shown to provide precise estimates for the exponents characterizing the condensed fraction, correlation size, and specific heat. The critical exponents, as well as the power-law behavior of the density of states at the bottom of the band, are similar to those of the ideal Boson gas in lattices with spectral dimension d_s = 2ln(3)/ln(9/5) ≈ 3.74.

  17. Fractional Brownian motion time-changed by gamma and inverse gamma process

    NASA Astrophysics Data System (ADS)

    Kumar, A.; Wyłomańska, A.; Połoczański, R.; Sundar, S.

    2017-02-01

    Many real time series exhibit behavior characteristic of long-range dependent data. Moreover, these time series often contain constant time periods and exhibit characteristics similar to Gaussian processes, although they are not Gaussian. Therefore there is a need to consider new classes of systems to model these kinds of empirical behavior. Motivated by this fact, in this paper we analyze two processes which exhibit the long range dependence property and have additional interesting characteristics which may be observed in real phenomena. Both of them are constructed as the superposition of fractional Brownian motion (FBM) and another process. In the first case the internal process, which plays the role of time, is the gamma process, while in the second case the internal process is its inverse. We present their main properties in detail, paying particular attention to the long range dependence property. Moreover, we show how to simulate these processes and estimate their parameters. We propose a novel method based on the rescaled modified cumulative distribution function for the estimation of parameters of the second considered process. This method is very useful in the description of rounded data, like waiting times of subordinated processes delayed by inverse subordinators. Using the Monte Carlo method, we show the effectiveness of the proposed estimation procedures. Finally, we present applications of the proposed models to real time series.
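
    A time-changed FBM of the first kind can be simulated directly from its covariance. The sketch below draws an exact FBM sample at the (random) times of a gamma subordinator via a Cholesky factorization; it is a generic construction under assumed parameters, not the authors' simulation code:

```python
import numpy as np

def fbm_at_times(times, H=0.7):
    """Draw an exact FBM sample at arbitrary positive time points via a
    Cholesky factor of the covariance K(s,t) = 0.5(s^2H + t^2H - |s-t|^2H)."""
    t = np.asarray(times, dtype=float)
    s, u = np.meshgrid(t, t)
    K = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
    L = np.linalg.cholesky(K + 1e-8 * np.eye(len(t)))  # jitter for stability
    return L @ np.random.randn(len(t))

# Gamma subordinator: a strictly increasing time-change with gamma increments
# (the paper's second model uses the *inverse* of such a process instead).
n = 400
gamma_time = np.cumsum(np.random.gamma(shape=1.0, scale=0.01, size=n))
Y = fbm_at_times(gamma_time, H=0.7)  # FBM time-changed by the gamma process
```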

  18. Calibration of context-specific survey items to assess youth physical activity behaviour.

    PubMed

    Saint-Maurice, Pedro F; Welk, Gregory J; Bartee, R Todd; Heelan, Kate

    2017-05-01

    This study tests calibration models to re-scale context-specific physical activity (PA) items to accelerometer-derived PA. A total of 195 4th-12th grade children wore an Actigraph monitor and completed the Physical Activity Questionnaire (PAQ) one week later. The relative time spent in moderate-to-vigorous PA (MVPA%) obtained from the Actigraph at recess, PE, lunch, after-school, evening and weekend periods was matched with the respective item score obtained from the PAQ. Item scores from 145 participants were calibrated against objective MVPA% using multiple linear regression with age and sex as additional predictors. Predicted minutes of MVPA for school, out-of-school and the total week were tested in the remaining sample (n = 50) using equivalence testing. The results showed that PAQ β-weights ranged from 0.06 (lunch) to 4.94 (PE) MVPA% (P < 0.05) and the models' root-mean-square error ranged from 4.2% (evening) to 20.2% (recess). When applied to an independent sample, differences between PAQ and accelerometer MVPA at school and out-of-school ranged from -15.6 to +3.8 min, and the PAQ was within 10-15% of accelerometer-measured activity. This study demonstrated that context-specific items can be calibrated to predict minutes of MVPA in groups of youth during in- and out-of-school periods.

  19. Investigating different filter and rescaling methods on simulated GRACE-like TWS variations for hydrological applications

    NASA Astrophysics Data System (ADS)

    Zhang, Liangjing; Dobslaw, Henryk; Dahle, Christoph; Thomas, Maik; Neumayer, Karl-Hans; Flechtner, Frank

    2017-04-01

    Having operated for more than a decade, the GRACE satellite mission provides valuable information on total water storage (TWS) for hydrological and hydro-meteorological applications. The increasing interest in the use of GRACE-based TWS requires an in-depth assessment of the reliability of the outputs and of their uncertainties. Over years of development, different post-processing methods have been suggested for TWS estimation. However, since GRACE offers a unique way to provide TWS at high spatial and temporal scales, there are no global ground truth data available to fully validate the results. In this contribution, we re-assess a number of commonly used post-processing methods using a simulated GRACE-type gravity field time series based on realistic orbits and instrument error assumptions as well as background error assumptions from the updated ESA Earth System Model. Three non-isotropic filter methods from Kusche (2007) and a combined filter from DDK1 and DDK3 based on the ground tracks are tested. Rescaling factors estimated from five different hydrological models and the ensemble median are applied to the post-processed simulated GRACE-type TWS estimates to correct the bias and leakage. Time-variant rescaling factors, both as monthly scaling factors and as separate scaling factors for seasonal and long-term variations, are investigated as well. Since TWS anomalies out of the post-processed simulation results can be readily compared to the time-variable Earth System Model initially used as "truth" during the forward simulation step, we are able to thoroughly check the plausibility of our error estimation assessment (Zhang et al., 2016) and will subsequently recommend a processing strategy that shall also be applied to planned GRACE and GRACE-FO Level-3 products for terrestrial applications provided by GFZ. Kusche, J., 2007: Approximate decorrelation and non-isotropic smoothing of time-variable GRACE-type gravity field models. J. Geodesy, 81 (11), 733-749, doi:10.1007/s00190-007-0143-3. Zhang, L., Dobslaw, H., Thomas, M., 2016: Globally gridded terrestrial water storage variations from GRACE satellite gravimetry for hydrometeorological applications. Geophysical Journal International, 206(1), 368-378, doi:10.1093/gji/ggw153.

  20. On the scatter in the relation between stellar mass and halo mass: random or halo formation time dependent?

    NASA Astrophysics Data System (ADS)

    Wang, Lan; De Lucia, Gabriella; Weinmann, Simone M.

    2013-05-01

    The empirical, traditional halo occupation distribution (HOD) model of Wang et al. fits, by construction, both the stellar mass function and the correlation function of galaxies in the local Universe. In contrast, the semi-analytical models of De Lucia & Blaizot (hereafter DLB07) and Guo et al. (hereafter Guo11), built on the same dark matter halo merger trees as the empirical model, still have difficulties in reproducing these observational data simultaneously. We compare the relations between the stellar mass of galaxies and their host halo mass in the three models, and find that they are different. When the relations are rescaled to have the same median values and the same scatter as in Wang et al., the rescaled DLB07 model can fit both the measured galaxy stellar mass function and the correlation function measured in different galaxy stellar mass bins. In contrast, the rescaled Guo11 model still overpredicts the clustering of low-mass galaxies. This indicates that the detail of how galaxies populate the scatter in the stellar mass-halo mass relation does play an important role in determining the correlation functions of galaxies. While the stellar mass of galaxies in the Wang et al. model depends only on halo mass and is randomly distributed within the scatter, galaxy stellar mass also depends on the halo formation time in the semi-analytical models. At a fixed value of infall mass, galaxies that lie above the median stellar mass-halo mass relation reside in haloes that formed earlier, while galaxies that lie below the median relation reside in haloes that formed later. This effect is much stronger in Guo11 than in DLB07, which explains the overclustering of low-mass galaxies in Guo11. Assembly bias in the Guo11 model might be overly strong. Nevertheless, if significant assembly bias indeed exists in the real Universe, one needs to use caution when applying current HOD and abundance matching models that assume random scatter in the relation between stellar and halo mass.

  1. Nonlinear rescaling of control values simplifies fuzzy control

    NASA Technical Reports Server (NTRS)

    VanLandingham, H.; Tsoukkas, A.; Kreinovich, V.; Quintana, C.

    1993-01-01

    Traditional control theory is well developed mainly for linear control situations. In non-linear cases there is no general method of generating a good control, so we have to rely on the ability of experts (operators) to control them. If we want to automate their control, we must acquire their knowledge and translate it into a precise control strategy. The experts' knowledge is usually represented in non-numeric terms, namely, in terms of uncertain statements of the type 'if the obstacle is straight ahead, the distance to it is small, and the velocity of the car is medium, press the brakes hard'. Fuzzy control is a methodology that translates such statements into precise formulas for control. The necessary first step of this strategy consists of assigning membership functions to all the terms that the expert uses in his rules (in our sample phrase these words are 'small', 'medium', and 'hard'). The appropriate choice of a membership function can drastically improve the quality of a fuzzy control. In the simplest cases, we can take functions whose domains have equally spaced endpoints. Because of that, many software packages for fuzzy control are based on this choice of membership functions. This choice is not very efficient in more complicated cases. Therefore, methods have been developed that use neural networks or genetic algorithms to 'tune' membership functions. But this tuning takes a lot of time (for example, several thousand iterations are typical for neural networks). In some cases there are evident physical reasons why equally spaced domains do not work: e.g., if the control variable u is always positive (i.e., if we control temperature in a reactor), then negative values (that are generated by equal spacing) simply make no sense. In this case it sounds reasonable to choose another scale u' = f(u) to represent u, so that equal spacing will work fine for u'. In the present paper we formulate the problem of finding the best rescaling function, solve this problem, and show (on a real-life example) that after an optimal rescaling, the un-tuned fuzzy control can be as good as the best state-of-the-art traditional non-linear controls.
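
    The idea of rescaling before equal spacing is easy to illustrate. The sketch below uses a logarithmic rescaling u' = log(u) for a strictly positive control variable; the log is one plausible example of f, not the optimal rescaling derived in the paper:

```python
import numpy as np

def triangular(u, left, center, right):
    """Standard triangular membership function."""
    return np.maximum(0.0, np.minimum((u - left) / (center - left),
                                      (right - u) / (right - center)))

# For a strictly positive control variable (e.g., reactor temperature),
# equally spaced endpoints in u' = log(u) avoid meaningless negative values
# that equal spacing in u itself would generate.
u = np.linspace(0.1, 1000.0, 2000)        # physical control values
u_prime = np.log(u)                        # hypothetical rescaling u' = f(u)
endpoints = np.linspace(u_prime.min(), u_prime.max(), 5)  # equal spacing in u'
memberships = [triangular(u_prime, endpoints[i], endpoints[i + 1], endpoints[i + 2])
               for i in range(3)]          # 'small', 'medium', 'large' in u'
```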

  2. Classification of emotional states from electrocardiogram signals: a non-linear approach based on hurst

    PubMed Central

    2013-01-01

    Background Identifying the emotional state is helpful in applications involving patients with autism and other intellectual disabilities, computer-based training, human-computer interaction, etc. Electrocardiogram (ECG) signals, being an activity of the autonomous nervous system (ANS), reflect the underlying true emotional state of a person. However, the performance of the various methods developed so far lacks accuracy, and more robust methods need to be developed to identify the emotional pattern associated with ECG signals. Methods Emotional ECG data were obtained from sixty participants by inducing the six basic emotional states (happiness, sadness, fear, disgust, surprise and neutral) using audio-visual stimuli. The non-linear feature ‘Hurst’ was computed using the Rescaled Range Statistics (RRS) and Finite Variance Scaling (FVS) methods. New Hurst features were proposed by combining the existing RRS and FVS methods with Higher Order Statistics (HOS). The features were then classified using four classifiers – Bayesian Classifier, Regression Tree, K-nearest neighbor and Fuzzy K-nearest neighbor. Seventy percent of the features were used for training and thirty percent for testing the algorithm. Results Analysis of Variance (ANOVA) conveyed that Hurst and the proposed features were statistically significant (p < 0.001). Hurst computed using the RRS and FVS methods showed similar classification accuracy. The features obtained by combining FVS and HOS performed better, with maximum accuracies of 92.87% and 76.45% for classifying the six emotional states using random and subject-independent validation respectively. Conclusions The results indicate that the combination of non-linear analysis and HOS tends to capture the finer emotional changes that can be seen in healthy ECG data. This work can be further fine-tuned to develop a real-time system. PMID:23680041

  3. P04.19 Recommendations for computation of textural measures obtained from 3D brain tumor MRIs: A robustness analysis points out the need for standardization.

    PubMed Central

    Molina, D.; Pérez-Beteta, J.; Martínez-González, A.; Velásquez, C.; Martino, J.; Luque, B.; Revert, A.; Herruzo, I.; Arana, E.; Pérez-García, V. M.

    2017-01-01

    Abstract Introduction: Textural analysis refers to a variety of mathematical methods used to quantify the spatial variations in grey levels within images. In brain tumors, textural features have great potential as imaging biomarkers, having been shown to correlate with survival, tumor grade, tumor type, etc. However, these measures should be reproducible under dynamic range and matrix size changes for their clinical use. Our aim is to study this robustness in brain tumors with 3D magnetic resonance imaging, not previously reported in the literature. Materials and methods: 3D T1-weighted images of 20 patients with glioblastoma (64.80 ± 9.12 years old) obtained from a 3T scanner were analyzed. Tumors were segmented using an in-house semi-automatic 3D procedure. A set of 16 3D textural features of the most common types (co-occurrence and run-length matrices) was selected, providing regional (run-length based measures) and local (co-occurrence matrices) information on tumor heterogeneity. Feature robustness was assessed by means of the coefficient of variation (CV) under dynamic range (16, 32 and 64 grey levels) and/or matrix size (256x256 and 432x432) changes. Results: None of the textural features considered was robust under dynamic range changes. The co-occurrence matrix feature Entropy was the only textural feature robust (CV < 10%) under spatial resolution changes. Conclusions: In general, textural measures of three-dimensional brain tumor images are robust neither under dynamic range nor under matrix size changes. Thus, it becomes mandatory to fix standards for image rescaling after acquisition, before the textural features are computed, if they are to be used as imaging biomarkers. For T1-weighted images a dynamic range of 16 grey levels and a matrix size of 256x256 (and isotropic voxel) is found to provide reliable and comparable results and is feasible with current MRI scanners. The implications of this work go beyond the specific tumor type and MRI sequence studied here and pose the need for standardization in textural feature calculation of oncological images. FUNDING: James S. McDonnell Foundation (USA) 21st Century Science Initiative in Mathematical and Complex Systems Approaches for Brain Cancer [Collaborative award 220020450 and planning grant 220020420], MINECO/FEDER [MTM2015-71200-R], JCCM [PEII-2014-031-P].
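
    The dynamic-range dependence is easy to reproduce with a toy grey-level co-occurrence computation. The sketch below (plain NumPy, horizontal neighbors only, not the study's full 3D feature set) shows how the entropy feature shifts as the image is re-quantized to 16, 32, or 64 levels:

```python
import numpy as np

def cooccurrence_entropy(img, levels=16):
    """Quantize to `levels` grey values, build a horizontal co-occurrence
    matrix, and return its entropy. Varying `levels` (16/32/64) mimics the
    dynamic-range changes tested in the study."""
    edges = np.linspace(img.min(), img.max() + 1e-9, levels + 1)
    q = np.digitize(img, edges[1:-1])            # values in 0..levels-1
    pairs = np.stack([q[:, :-1].ravel(), q[:, 1:].ravel()], axis=1)
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (pairs[:, 0], pairs[:, 1]), 1.0)
    p = glcm / glcm.sum()
    nz = p[p > 0]
    return -np.sum(nz * np.log2(nz))

img = np.random.rand(128, 128)
print([cooccurrence_entropy(img, L) for L in (16, 32, 64)])
# The entropy value grows with the number of grey levels, i.e., the
# feature is not robust under dynamic range changes.
```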

  4. Simulated gamma-ray pulse profile of the Crab pulsar with the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Burtovoi, A.; Zampieri, L.

    2016-07-01

    We present simulations of the very high energy (VHE) gamma-ray light curve of the Crab pulsar as observed by the Cherenkov Telescope Array (CTA). The CTA pulse profile of the Crab pulsar is simulated with the specific goal of determining the accuracy of the position of the interpulse. We fit the pulse shape obtained by the Major Atmospheric Gamma Imaging Cherenkov (MAGIC) telescope with a three-Gaussian template and rescale it to account for the different CTA instrumental and observational configurations. Simulations are performed for different configurations of CTA and for the ASTRI (Astrofisica con Specchi a Tecnologia Replicante Italiana) mini-array. The northern CTA configuration will provide an improvement of a factor of ˜3 in accuracy with an observing time comparable to that of MAGIC (73 h). Unless the VHE spectrum above 1 TeV behaves differently from what we presently know, unreasonably long observing times are required for a significant detection of the pulsations of the Crab pulsar with the high-energy-range sub-arrays. We also found that an independent VHE timing analysis is feasible with the Large Size Telescopes. CTA will provide a significant improvement in determining the VHE pulse shape parameters necessary to constrain theoretical models of the gamma-ray emission of the Crab pulsar. One such parameter is the shift in phase between the peaks in the pulse profile at VHE and in other energy bands which, if detected, may point to different locations of the emission regions.

  5. Equivalence classes of Fibonacci lattices and their similarity properties

    NASA Astrophysics Data System (ADS)

    Lo Gullo, N.; Vittadello, L.; Bazzan, M.; Dell'Anna, L.

    2016-08-01

    We investigate, theoretically and experimentally, the properties of Fibonacci lattices with arbitrary spacings. Unlike periodic structures, the reciprocal lattice and the dynamical properties of Fibonacci lattices depend strongly on the lengths of their lattice parameters, even if the sequence of long and short segments, the Fibonacci string, is the same. In this work we show that, by exploiting a self-similarity property of Fibonacci strings under a suitable composition rule, it is possible to define equivalence classes of Fibonacci lattices. We show that the diffraction patterns generated by Fibonacci lattices belonging to the same equivalence class can be rescaled to a common pattern of strong diffraction peaks, thus giving this classification a precise meaning. Furthermore we show that, through the gap labeling theorem, gaps in the energy spectra of Fibonacci crystals belonging to the same class can be labeled by the same momenta (up to a proper rescaling) and that the larger gaps correspond to the strong peaks of the diffraction spectra. This observation makes the definition of equivalence classes meaningful also for the spectral, and therefore dynamical and thermodynamical, properties of quasicrystals. Our results apply to the more general class of quasiperiodic lattices for which similarity under a suitable deflation rule is in order.
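
    The self-similarity under composition can be demonstrated on the strings themselves. In the sketch below, the substitution L -> LS, S -> L generates Fibonacci strings, and the inverse substitution acts as one plausible composition (deflation) step of the kind such a classification relies on:

```python
def fibonacci_string(n):
    """Generate the n-th Fibonacci string by the substitution L -> LS, S -> L."""
    s = "L"
    for _ in range(n):
        s = "".join("LS" if ch == "L" else "L" for ch in s)
    return s

def compose(s):
    """One composition (deflation) step: each 'LS' pair becomes a long
    segment and each remaining 'L' a short one. This inverts the
    substitution above; it is a sketch of the idea, not necessarily the
    paper's exact rule."""
    out, i = [], 0
    while i < len(s):
        if s[i:i + 2] == "LS":
            out.append("L"); i += 2
        else:
            out.append("S"); i += 1
    return "".join(out)

s = fibonacci_string(8)
assert compose(s) == fibonacci_string(7)  # self-similarity under composition
```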

  6. Is the firewall consistent? Gedanken experiments on black hole complementarity and firewall proposal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hwang, Dong-il; Lee, Bum-Hoon; Yeom, Dong-han, E-mail: dongil.j.hwang@gmail.com, E-mail: bhl@sogang.ac.kr, E-mail: innocent.yeom@gmail.com

    2013-01-01

    In this paper, we discuss the black hole complementarity and the firewall proposal at length. Black hole complementarity is inevitable if we assume the following five things: unitarity, entropy-area formula, existence of an information observer, semi-classical quantum field theory for an asymptotic observer, and the general relativity for an in-falling observer. However, large N rescaling and the AMPS argument show that black hole complementarity is inconsistent. To salvage the basic philosophy of the black hole complementarity, AMPS introduced a firewall around the horizon. According to large N rescaling, the firewall should be located close to the apparent horizon. We investigate the consistency of the firewall with the two critical conditions: the firewall should be near the time-like apparent horizon and it should not affect the future infinity. Concerning this, we have introduced a gravitational collapse with a false vacuum lump which can generate a spacetime structure with disconnected apparent horizons. This reveals a situation in which there is a firewall outside of the event horizon, while the apparent horizon is absent. Therefore, the firewall, if it exists, not only modifies the general relativity for an in-falling observer, but also modifies the semi-classical quantum field theory for an asymptotic observer.

  7. Time step rescaling recovers continuous-time dynamical properties for discrete-time Langevin integration of nonequilibrium systems.

    PubMed

    Sivak, David A; Chodera, John D; Crooks, Gavin E

    2014-06-19

    When simulating molecular systems using deterministic equations of motion (e.g., Newtonian dynamics), such equations are generally numerically integrated according to a well-developed set of algorithms that share commonly agreed-upon desirable properties. However, for stochastic equations of motion (e.g., Langevin dynamics), there is still broad disagreement over which integration algorithms are most appropriate. While multiple desiderata have been proposed throughout the literature, consensus on which criteria are important is absent, and no published integration scheme satisfies all desiderata simultaneously. Additional nontrivial complications stem from simulating systems driven out of equilibrium using existing stochastic integration schemes in conjunction with recently developed nonequilibrium fluctuation theorems. Here, we examine a family of discrete time integration schemes for Langevin dynamics, assessing how each member satisfies a variety of desiderata that have been enumerated in prior efforts to construct suitable Langevin integrators. We show that the incorporation of a novel time step rescaling in the deterministic updates of position and velocity can correct a number of dynamical defects in these integrators. Finally, we identify a particular splitting (related to the velocity Verlet discretization) that has essentially universally appropriate properties for the simulation of Langevin dynamics for molecular systems in equilibrium, nonequilibrium, and path sampling contexts.
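
    For orientation, the sketch below implements one well-known member of this splitting family (BAOAB: kick, drift, Ornstein-Uhlenbeck, drift, kick). It is a related scheme offered for illustration, not the specific rescaled splitting identified in the paper:

```python
import numpy as np

def baoab_step(x, v, dt, force, mass=1.0, gamma=1.0, kT=1.0):
    """One BAOAB step for Langevin dynamics: half kicks (B), half drifts (A),
    and an exact Ornstein-Uhlenbeck velocity update (O)."""
    v += 0.5 * dt * force(x) / mass                  # B: half kick
    x += 0.5 * dt * v                                # A: half drift
    c = np.exp(-gamma * dt)                          # O: exact OU damping
    v = c * v + np.sqrt(kT / mass * (1 - c ** 2)) * np.random.randn()
    x += 0.5 * dt * v                                # A: half drift
    v += 0.5 * dt * force(x) / mass                  # B: half kick
    return x, v

# Harmonic oscillator check: the long-run <x^2> should approach kT/k = 1.
force = lambda x: -x
x, v, samples = 0.0, 0.0, []
for _ in range(100000):
    x, v = baoab_step(x, v, 0.05, force)
    samples.append(x * x)
print(np.mean(samples[10000:]))  # close to 1.0
```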

  8. Time rescaling reproduces EEG behavior during transition from propofol anesthesia-induced unconsciousness to consciousness.

    PubMed

    Boussen, S; Spiegler, A; Benar, C; Carrère, M; Bartolomei, F; Metellus, P; Voituriez, R; Velly, L; Bruder, N; Trébuchon, A

    2018-04-16

    General anesthesia (GA) is a reversible manipulation of consciousness whose mechanism is mysterious at the level of neural networks, leaving space for several competing hypotheses. We recorded electrocorticography (ECoG) signals in patients who underwent intracranial monitoring during awake surgery for the treatment of cerebral tumors in functional areas of the brain. We therefore recorded the transition from unconsciousness to consciousness directly on the brain surface. Using frequency-resolved interferometry, we studied the intermediate ECoG frequencies (4-40 Hz). In the theoretical study, we used a computational Jansen and Rit neural mass model to simulate recovery of consciousness (ROC). During ROC, we found that f increased by a factor of 1.62 ± 0.09, and δf varied by the same factor (1.61 ± 0.09), suggesting the existence of a scaling factor. We accelerated the time course of an unconscious EEG trace by an approximate factor of 1.6 and showed that the resulting EEG trace matches the conscious state. Using the theoretical model, we successfully reproduced this behavior. We show that the recovery of consciousness corresponds to a transition in the frequency (f, δf) space, which is exactly reproduced by a simple time rescaling. These findings may perhaps be applied to other altered consciousness states.

  9. Is the firewall consistent? Gedanken experiments on black hole complementarity and firewall proposal

    NASA Astrophysics Data System (ADS)

    Hwang, Dong-il; Lee, Bum-Hoon; Yeom, Dong-han

    2013-01-01

    In this paper, we discuss the black hole complementarity and the firewall proposal at length. Black hole complementarity is inevitable if we assume the following five things: unitarity, entropy-area formula, existence of an information observer, semi-classical quantum field theory for an asymptotic observer, and the general relativity for an in-falling observer. However, large N rescaling and the AMPS argument show that black hole complementarity is inconsistent. To salvage the basic philosophy of the black hole complementarity, AMPS introduced a firewall around the horizon. According to large N rescaling, the firewall should be located close to the apparent horizon. We investigate the consistency of the firewall with the two critical conditions: the firewall should be near the time-like apparent horizon and it should not affect the future infinity. Concerning this, we have introduced a gravitational collapse with a false vacuum lump which can generate a spacetime structure with disconnected apparent horizons. This reveals a situation in which there is a firewall outside of the event horizon, while the apparent horizon is absent. Therefore, the firewall, if it exists, not only modifies the general relativity for an in-falling observer, but also modifies the semi-classical quantum field theory for an asymptotic observer.

  10. Rescaling of temporal expectations during extinction

    PubMed Central

    Drew, Michael R.; Walsh, Carolyn; Balsam, Peter D

    2016-01-01

    Previous research suggests that extinction learning is temporally specific. Changing the CS duration between training and extinction can facilitate the loss of the CR within the extinction session but impairs long-term retention of extinction. In two experiments using conditioned magazine approach with rats, we examined the relation between temporal specificity of extinction and CR timing. In Experiment 1 rats were trained on a 12-s, fixed CS-US interval and then extinguished with CS presentations that were 6, 12, or 24 s in duration. The design of Experiment 2 was the same except rats were trained using partial rather than continuous reinforcement. In both experiments, extending the CS duration in extinction facilitated the diminution of CRs during the extinction session, but shortening the CS duration failed to slow extinction. In addition, extending (but not shortening) the CS duration caused temporal rescaling of the CR, in that the peak CR rate migrated later into the trial over the course of extinction training. This migration partially accounted for the faster loss of the CR when the CS duration was extended. Results are incompatible with the hypothesis that extinction is driven by cumulative CS exposure and suggest that temporally extended nonreinforced CS exposure reduces conditioned responding via temporal displacement rather than through extinction per se. PMID:28045291

  11. Covariance, correlation matrix, and the multiscale community structure of networks.

    PubMed

    Shen, Hua-Wei; Cheng, Xue-Qi; Fang, Bin-Xing

    2010-07-01

    Empirical studies show that real-world networks often exhibit multiple scales of topological description. However, how to identify the intrinsic multiple scales of a network is still an open problem. In this paper, we consider detecting the multiscale community structure of a network from the perspective of dimension reduction. According to this perspective, a covariance matrix of the network is defined to uncover the multiscale community structure through translation and rotation transformations. It is proved that the covariance matrix is the unbiased version of the well-known modularity matrix. We then point out that the translation and rotation transformations fail to deal with heterogeneous networks, which are very common in nature and society. To address this problem, a correlation matrix is proposed by introducing a rescaling transformation into the covariance matrix. Extensive tests on real-world and artificial networks demonstrate that the correlation matrix significantly outperforms the covariance matrix (equivalently, the modularity matrix) in identifying the multiscale community structure of a network. This work provides a novel perspective on the identification of community structure, and thus various dimension reduction methods might be used for it. Through introducing the correlation matrix, we further conclude that the rescaling transformation, together with the translation and rotation transformations, is crucial to identifying the multiscale community structure of a network.
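
    Under the paper's identification of the covariance matrix with the (bias-corrected) modularity matrix, the rescaling transformation amounts to a correlation-style normalization. A hedged sketch on a toy two-clique network, using Newman's standard modularity matrix as the covariance-like object:

```python
import numpy as np

def modularity_matrix(A):
    """Newman's modularity matrix B = A - k k^T / (2m), which the paper
    identifies (up to a bias correction) with a network covariance matrix."""
    k = A.sum(axis=1)
    two_m = k.sum()
    return A - np.outer(k, k) / two_m

def correlation_matrix(C):
    """Rescaling transformation: normalize by the diagonal, as one does for
    a correlation. |.| is a pragmatic choice since B's diagonal is negative."""
    d = np.sqrt(np.abs(np.diag(C)))
    return C / np.outer(d, d)

# Toy heterogeneous network: two 4-node cliques joined by a single edge.
A = np.zeros((8, 8))
A[:4, :4] = 1
A[4:, 4:] = 1
np.fill_diagonal(A, 0)
A[3, 4] = A[4, 3] = 1
R = correlation_matrix(modularity_matrix(A))
# Leading eigenvectors of R reveal the community structure across scales.
```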

  12. Hierarchical complexity and the size limits of life.

    PubMed

    Heim, Noel A; Payne, Jonathan L; Finnegan, Seth; Knope, Matthew L; Kowalewski, Michał; Lyons, S Kathleen; McShea, Daniel W; Novack-Gottshall, Philip M; Smith, Felisa A; Wang, Steve C

    2017-06-28

    Over the past 3.8 billion years, the maximum size of life has increased by approximately 18 orders of magnitude. Much of this increase is associated with two major evolutionary innovations: the evolution of eukaryotes from prokaryotic cells approximately 1.9 billion years ago (Ga), and multicellular life diversifying from unicellular ancestors approximately 0.6 Ga. However, the quantitative relationship between organismal size and structural complexity remains poorly documented. We assessed this relationship using a comprehensive dataset that includes organismal size and level of biological complexity for 11 172 extant genera. We find that the distributions of sizes within complexity levels are unimodal, whereas the aggregate distribution is multimodal. Moreover, both the mean size and the range of sizes occupied increase with each additional level of complexity. Increases in size range are non-symmetric: the maximum organismal size increases more than the minimum. The majority of the observed increase in organismal size over the history of life on the Earth is accounted for by two discrete jumps in complexity rather than evolutionary trends within levels of complexity. Our results provide quantitative support for an evolutionary expansion away from a minimal size constraint and suggest a fundamental rescaling of the constraints on minimal and maximal size as biological complexity increases. © 2017 The Author(s).

  13. Self-organized chiral colloidal crystals of Brownian square crosses.

    PubMed

    Zhao, Kun; Mason, Thomas G

    2014-04-16

    We study aqueous Brownian dispersions of microscale, hard, monodisperse platelets, shaped as achiral square crosses, in two dimensions (2D). When slowly concentrated while experiencing thermal excitations, the crosses self-organize into fluctuating 2D colloidal crystals. As the particle area fraction φA is raised, an achiral rhombic crystal phase forms at φA ≈ 0.52. Above φA ≈ 0.56, the rhombic crystal gives way to a square crystal phase that exhibits long-range chiral symmetry breaking (CSB) via a crystal-crystal phase transition; the observed chirality in a particular square crystallite has either a positive or a negative enantiomeric sense. By contrast to triangles and rhombs, which exhibit weak CSB as a result of total entropy maximization, square crosses display robust long-range CSB that is primarily dictated by how they tile space at high densities. We measure the thermal distribution of orientation angles γ of the crosses' arms relative to the diagonal bisector of the local square crystal lattice as a function of φA, and the average measured γ (φA) agrees with a re-scaled model involving efficient packing of rotated cross shapes. Our findings imply that a variety of hard achiral shapes can be designed to form equilibrium chiral phases by considering their tiling at high densities.

  14. Multilayered analog optical differentiating device: performance analysis on structural parameters.

    PubMed

    Wu, Wenhui; Jiang, Wei; Yang, Jiang; Gong, Shaoxiang; Ma, Yungui

    2017-12-15

    Analog optical devices (AODs) able to perform mathematical computations have recently gained strong research interest for their potential application as accelerating hardware in traditional electronic computers. The performance of these wavefront-processing devices is primarily decided by the accuracy of the angular spectral engineering. In this Letter, we show that the multilayer technique can be a promising method to flexibly design AODs according to the input wavefront conditions. As examples, various Si-SiO2-based multilayer films are designed that can precisely perform second-order differentiation for input wavefronts of different Fourier spectrum widths. The minimum number and thickness uncertainty of sublayers required for the device performance are discussed. A technique of rescaling the Fourier spectrum intensity is proposed in order to further improve the practical feasibility. These results are thought to be instrumental for the development of AODs.
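
    Ideal second-order differentiation corresponds to multiplying the angular spectrum by (ik)^2 = -k^2, which is the transfer function such a film approximates over a finite spectrum width. A minimal spectral check:

```python
import numpy as np

def second_derivative_fft(f, dx):
    """Spectral second derivative of a periodic, well-sampled signal:
    multiply the Fourier spectrum by (ik)^2 = -k^2 and invert."""
    k = 2 * np.pi * np.fft.fftfreq(f.size, d=dx)
    return np.fft.ifft(-(k ** 2) * np.fft.fft(f)).real

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
# d^2/dx^2 sin(x) = -sin(x), so the sum below should vanish:
err = np.max(np.abs(second_derivative_fft(np.sin(x), x[1] - x[0]) + np.sin(x)))
print(err)  # machine-precision agreement
```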

  15. Measuring the health-related Sustainable Development Goals in 188 countries: a baseline analysis from the Global Burden of Disease Study 2015.

    PubMed

    2016-10-08

    In September, 2015, the UN General Assembly established the Sustainable Development Goals (SDGs). The SDGs specify 17 universal goals, 169 targets, and 230 indicators leading up to 2030. We provide an analysis of 33 health-related SDG indicators based on the Global Burden of Diseases, Injuries, and Risk Factors Study 2015 (GBD 2015). We applied statistical methods to systematically compiled data to estimate the performance of 33 health-related SDG indicators for 188 countries from 1990 to 2015. We rescaled each indicator on a scale from 0 (worst observed value between 1990 and 2015) to 100 (best observed). Indices representing all 33 health-related SDG indicators (health-related SDG index), health-related SDG indicators included in the Millennium Development Goals (MDG index), and health-related indicators not included in the MDGs (non-MDG index) were computed as the geometric mean of the rescaled indicators by SDG target. We used spline regressions to examine the relations between the Socio-demographic Index (SDI, a summary measure based on average income per person, educational attainment, and total fertility rate) and each of the health-related SDG indicators and indices. In 2015, the median health-related SDG index was 59·3 (95% uncertainty interval 56·8-61·8) and varied widely by country, ranging from 85·5 (84·2-86·5) in Iceland to 20·4 (15·4-24·9) in Central African Republic. SDI was a good predictor of the health-related SDG index (r²=0·88) and the MDG index (r²=0·92), whereas the non-MDG index had a weaker relation with SDI (r²=0·79). Between 2000 and 2015, the health-related SDG index improved by a median of 7·9 (IQR 5·0-10·4), and gains on the MDG index (a median change of 10·0 [6·7-13·1]) exceeded those of the non-MDG index (a median change of 5·5 [2·1-8·9]). Since 2000, pronounced progress occurred for indicators such as met need with modern contraception, under-5 mortality, and neonatal mortality, as well as the indicator for universal health coverage tracer interventions. Moderate improvements were found for indicators such as HIV and tuberculosis incidence, minimal changes for hepatitis B incidence took place, and childhood overweight considerably worsened. GBD provides an independent, comparable avenue for monitoring progress towards the health-related SDGs. Our analysis not only highlights the importance of income, education, and fertility as drivers of health improvement but also emphasises that investments in these areas alone will not be sufficient. Although considerable progress on the health-related MDG indicators has been made, these gains will need to be sustained and, in many cases, accelerated to achieve the ambitious SDG targets. The minimal improvement in or worsening of health-related indicators beyond the MDGs highlights the need for additional resources to effectively address the expanded scope of the health-related SDGs. Bill & Melinda Gates Foundation. Copyright © 2016 The Author(s). Published by Elsevier Ltd. This is an Open Access article under the CC BY license.
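
    The rescaling and aggregation steps can be sketched compactly. The indicator values and bounds below are hypothetical, and the clipping floor is an assumption added to keep the geometric mean defined:

```python
import numpy as np

def rescale_indicator(x, worst, best):
    """Map an indicator onto 0 (worst observed 1990-2015) to 100 (best);
    works whether higher or lower raw values are better."""
    return 100.0 * (x - worst) / (best - worst)

def sdg_index(rescaled):
    """Index as the geometric mean of the rescaled indicators; values are
    clipped away from zero so the geometric mean stays defined."""
    r = np.clip(np.asarray(rescaled, dtype=float), 1e-6, 100.0)
    return np.exp(np.mean(np.log(r)))

# Hypothetical country values with their observed worst/best bounds
# (for mortality-type indicators, the worst bound is the larger number):
vals  = np.array([45.0, 120.0, 0.82])
worst = np.array([80.0, 400.0, 0.20])
best  = np.array([2.0, 10.0, 0.99])
print(sdg_index(rescale_indicator(vals, worst, best)))
```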

  16. Turbulent cascade in a two-ion plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qiu, Xin; Faculty of Information Engineering, Jiangxi University of Science and Technology, Ganzhou 341000; Liu, San-Qiu, E-mail: sqlgroup@ncu.edu.cn

    2014-11-15

    It is shown that small but finite-amplitude drift wave turbulence in a two-ion-species plasma can be modeled by a Hasegawa-Mima equation. The mode cascade process and resulting turbulent spectrum are investigated. The spectrum is found to be similar to that of a two-component plasma, but the space and time scales of the turbulent cascade process can be quite different since they are rescaled by the presence of the second ion species.

  17. Symmetries and conservation laws of a nonlinear sigma model with gravitino

    NASA Astrophysics Data System (ADS)

    Jost, Jürgen; Keßler, Enno; Tolksdorf, Jürgen; Wu, Ruijun; Zhu, Miaomiao

    2018-06-01

    We study the symmetries and invariances of a version of the action functional of the nonlinear sigma model with gravitino, as considered in Jost et al. (2017). The action is invariant under rescaled conformal transformations, super Weyl transformations, and diffeomorphisms. In particular cases the functional possesses a degenerate supersymmetry. The corresponding conservation laws lead to a geometric interpretation of the energy-momentum tensor and supercurrent as holomorphic sections of appropriate bundles.

  18. Multivariate analysis of longitudinal rates of change.

    PubMed

    Bryan, Matthew; Heagerty, Patrick J

    2016-12-10

    Longitudinal data allow direct comparison of the change in patient outcomes associated with treatment or exposure. Frequently, several longitudinal measures are collected that either reflect a common underlying health status, or characterize processes that are influenced in a similar way by covariates such as exposure or demographic characteristics. Statistical methods that can combine multivariate response variables into common measures of covariate effects have been proposed in the literature. Current methods for characterizing the relationship between covariates and the rate of change in multivariate outcomes are limited to select models. For example, 'accelerated time' methods have been developed which assume that covariates rescale time in longitudinal models for disease progression. In this manuscript, we detail an alternative multivariate model formulation that directly structures longitudinal rates of change and that permits a common covariate effect across multiple outcomes. We detail maximum likelihood estimation for a multivariate longitudinal mixed model. We show via asymptotic calculations the potential gain in power that may be achieved with a common analysis of multiple outcomes. We apply the proposed methods to the analysis of a trivariate outcome for infant growth and compare rates of change for HIV infected and uninfected infants. Copyright © 2016 John Wiley & Sons, Ltd.

  19. Element analysis: a wavelet-based method for analysing time-localized events in noisy time series

    PubMed Central

    2017-01-01

    A method is derived for the quantitative analysis of signals that are composed of superpositions of isolated, time-localized ‘events’. Here, these events are taken to be well represented as rescaled and phase-rotated versions of generalized Morse wavelets, a broad family of continuous analytic functions. Analysing a signal composed of replicates of such a function using another Morse wavelet allows one to directly estimate the properties of events from the values of the wavelet transform at its own maxima. The distribution of events in general power-law noise is determined in order to establish significance based on an expected false detection rate. Finally, an expression for an event’s ‘region of influence’ within the wavelet transform permits the formation of a criterion for rejecting spurious maxima due to numerical artefacts or other unsuitable events. Signals can then be reconstructed based on a small number of isolated points on the time/scale plane. This method, termed element analysis, is applied to the identification of long-lived eddy structures in ocean currents as observed by along-track measurements of sea surface elevation from satellite altimetry. PMID:28484325

  20. Inverse statistics in the foreign exchange market

    NASA Astrophysics Data System (ADS)

    Jensen, M. H.; Johansen, A.; Petroni, F.; Simonsen, I.

    2004-09-01

    We investigate intra-day foreign exchange (FX) time series using the inverse statistic analysis developed by Simonsen et al. (Eur. Phys. J. 27 (2002) 583) and Jensen et al. (Physica A 324 (2003) 338). Specifically, we study the time-averaged distributions of waiting times needed to obtain a certain increase (decrease) ρ in the price of an investment. The analysis is performed for the Deutsche Mark (DM) against the US dollar for the full year of 1998, but similar results are obtained for the Japanese Yen against the US dollar. With high statistical significance, the presence of “resonance peaks” in the waiting time distributions is established. Such peaks are a consequence of the trading habits of the market participants, as they are not present in the corresponding tick (business) waiting time distributions. Furthermore, a new stylized fact is observed for the (normalized) waiting time distribution in the form of a power-law pdf. This result is achieved by rescaling the physical waiting time by the corresponding tick time, thereby partially removing scale-dependent features of the market activity.
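
    A minimal sketch of the inverse-statistics quantity: for each starting tick, the waiting time is the first passage of the log-return to a target gain ρ. The function and the synthetic random-walk example are illustrative, not the authors' code.

```python
import numpy as np

def waiting_times(log_price: np.ndarray, rho: float) -> np.ndarray:
    """First-passage times (in ticks) for the return to reach +rho."""
    waits = []
    for t in range(len(log_price) - 1):
        hits = np.nonzero(log_price[t + 1:] - log_price[t] >= rho)[0]
        if hits.size:                      # starts that never hit are censored
            waits.append(hits[0] + 1)
    return np.asarray(waits)

rng = np.random.default_rng(0)
s = np.cumsum(0.001 * rng.standard_normal(20_000))   # synthetic log-price
pdf, edges = np.histogram(waiting_times(s, rho=0.01), bins=50, density=True)
```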

  1. Arctic lead detection using a waveform mixture algorithm from CryoSat-2 data

    NASA Astrophysics Data System (ADS)

    Lee, Sanggyun; Kim, Hyun-cheol; Im, Jungho

    2018-05-01

    We propose a waveform mixture algorithm to detect leads from CryoSat-2 data, which is novel and different from the existing threshold-based lead detection methods. The waveform mixture algorithm adopts the concept of spectral mixture analysis, which is widely used in the field of hyperspectral image analysis. This lead detection method was evaluated with high-resolution (250 m) MODIS images and showed comparable and promising performance in detecting leads relative to the previous methods. The robustness of the proposed approach also lies in the fact that it does not require the rescaling of parameters (i.e., stack standard deviation, stack skewness, stack kurtosis, pulse peakiness, and backscatter σ0), as it directly uses L1B waveform data, unlike the existing threshold-based methods. Monthly lead fraction maps were produced by the waveform mixture algorithm, which show the interannual variability of sea ice cover during 2011-2016, excluding the summer season (i.e., June to September). We also compared these lead fraction maps to maps generated from previously published data sets, finding similar spatiotemporal patterns.
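
    The waveform-mixture idea can be sketched as constrained linear unmixing: each CryoSat-2 echo is modeled as a nonnegative combination of reference ("endmember") waveforms. The endmember ordering and the 0.5 decision threshold below are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import nnls

def unmix(waveform: np.ndarray, endmembers: np.ndarray) -> np.ndarray:
    """Abundances a >= 0 with waveform ~= endmembers @ a.
    endmembers: (n_bins, n_classes), e.g. columns for lead, sea ice, ocean."""
    a, _residual = nnls(endmembers, waveform)
    return a / a.sum()

# Flag an echo as a lead when the lead abundance dominates (illustrative rule):
# a = unmix(w, E); is_lead = a[0] > 0.5
```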

  2. The performance-variability paradox, financial decision making, and the curious case of negative Hurst exponents.

    PubMed

    Guastello, Stephen J; Reiter, Katherine; Shircel, Anton; Timm, Paul; Malon, Matthew; Fabisch, Megan

    2014-07-01

    This study examined the relationship between performance variability and actual performance of financial decision makers who were working under experimental conditions of increasing workload and fatigue. The rescaled range statistic, also known as the Hurst exponent (H), was used as an index of variability. Although H is defined as having a range between 0 and 1, 45% of the 172 time series generated by undergraduates yielded negative estimates of H. Participants in the study chose the optimum investment out of sets of 3 to 5 options that were presented in a series of 350 displays. The sets of options varied in both the complexity of the options and the number of options under simultaneous consideration. One experimental condition required participants to make their choices within 15 sec, and the other condition required them to choose within 7.5 sec. Results showed that (a) negative H was possible and not a result of psychometric error; (b) negative H was associated with negative autocorrelations in a time series; (c) H was the best predictor of performance of the variables studied; (d) three other significant predictors were scores on an anagrams test and ratings of physical demands and performance demands; and (e) persistence as evidenced by the autocorrelations was associated with ratings of greater time pressure. It was concluded, furthermore, that persistence and overall performance were correlated, that 'healthy' variability only exists within a limited range, and that other individual differences related to ability and resistance to stress or fatigue are also involved in the prediction of performance.
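
    For reference, a minimal rescaled-range (R/S) sketch: H is estimated as the slope of log(R/S) against log(window size). On strongly anti-persistent data the fitted slope of this regression can indeed come out negative, which is the sense in which "negative H" arises.

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Slope of log(R/S) vs log(n) over dyadic window sizes n."""
    x = np.asarray(x, dtype=float)
    sizes, rs = [], []
    n = min_chunk
    while n <= len(x) // 2:
        ratios = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            dev = np.cumsum(w - w.mean())   # cumulative deviations from mean
            s = w.std()
            if s > 0:
                ratios.append((dev.max() - dev.min()) / s)   # rescaled range
        sizes.append(n)
        rs.append(np.mean(ratios))
        n *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return slope
```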

  3. Study of the daily and seasonal atmospheric CH4 mixing ratio variability in a rural Spanish region using 222Rn tracer

    NASA Astrophysics Data System (ADS)

    Grossi, Claudia; Vogel, Felix R.; Curcoll, Roger; Àgueda, Alba; Vargas, Arturo; Rodó, Xavier; Morguí, Josep-Anton

    2018-04-01

    The ClimaDat station at Gredos (GIC3) has been continuously measuring atmospheric (dry air) mixing ratios of carbon dioxide (CO2) and methane (CH4), as well as meteorological parameters, since November 2012. In this study we investigate the atmospheric variability of CH4 mixing ratios between 2013 and 2015 at GIC3 with the help of co-located observations of 222Rn concentrations, modelled 222Rn fluxes and modelled planetary boundary layer heights (PBLHs). Both daily and seasonal changes in atmospheric CH4 can be better understood with the help of atmospheric concentrations of 222Rn (and the corresponding fluxes). On a daily timescale, the variation in the PBLH is the main driver for 222Rn and CH4 variability while, on monthly timescales, their atmospheric variability seems to depend on emission changes. To understand (changing) CH4 emissions, nocturnal fluxes of CH4 were estimated using two methods: the radon tracer method (RTM) and a method based on the EDGARv4.2 bottom-up emission inventory, both using FLEXPARTv9.0.2 footprints. The mean value of RTM-based methane fluxes (FR_CH4) is 0.11 mg CH4 m-2 h-1 with a standard deviation of 0.09 mg CH4 m-2 h-1, or 0.29 mg CH4 m-2 h-1 with a standard deviation of 0.23 mg CH4 m-2 h-1 when using a rescaled 222Rn map (FR_CH4_rescale). For our observational period, the mean value of methane fluxes based on the bottom-up inventory (FE_CH4) is 0.33 mg CH4 m-2 h-1 with a standard deviation of 0.08 mg CH4 m-2 h-1. Monthly CH4 fluxes based on RTM (both FR_CH4 and FR_CH4_rescale) show a seasonality which is not observed for monthly FE_CH4 fluxes. During January-May, RTM-based CH4 fluxes present mean values 25 % lower than during June-December. This seasonal increase in methane fluxes calculated by RTM for the GIC3 area appears to coincide with the arrival of transhumant livestock at GIC3 in the second half of the year.
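
    A sketch of the radon tracer method under its standard assumption that CH4 and 222Rn accumulate identically in the stable nocturnal boundary layer, so the CH4 flux is the 222Rn flux scaled by the slope of CH4 against 222Rn; variable names and units follow the abstract.

```python
import numpy as np

def rtm_flux(ch4, rn, rn_flux):
    """Radon tracer method for one night.
    ch4:     CH4 concentration (mg CH4 m-3)
    rn:      222Rn activity concentration (Bq m-3)
    rn_flux: 222Rn surface flux (Bq m-2 h-1)
    Returns the CH4 flux in mg CH4 m-2 h-1."""
    slope, _intercept = np.polyfit(rn, ch4, 1)   # d(CH4)/d(222Rn)
    return rn_flux * slope
```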

  4. Spatial patterns and scale freedom in Prisoner's Dilemma cellular automata with Pavlovian strategies

    NASA Astrophysics Data System (ADS)

    Fort, H.; Viola, S.

    2005-01-01

    A cellular automaton in which cells represent agents playing the Prisoner's Dilemma (PD) game following the simple 'win-stay, lose-shift' strategy is studied. Individuals with binary behaviour, such that they can either cooperate (C) or defect (D), play repeatedly with their neighbours (von Neumann's and Moore's neighbourhoods). Their utilities in each round of the game are given by a rescaled pay-off matrix described by a single parameter τ, which measures the ratio of temptation to defect to reward for cooperation. Depending on the region of the parameter space τ, the system self-organizes, after a transient, into dynamical equilibrium states characterized by different definite fractions of C agents c̄_∞ (two states for the von Neumann neighbourhood and four for the Moore neighbourhood). For some ranges of τ the cluster size distributions, the power spectra P(f) and the perimeter-area curves follow power law scalings. Percolation below threshold is also found for D agent clusters. We also analyse the asynchronous dynamics version of this model and compare results.
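
    A minimal sketch of such an automaton, assuming an aspiration-based win-stay, lose-shift update and an illustrative rescaled payoff matrix; the exact payoffs and update rule of Fort and Viola may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
L, tau, aspiration = 50, 1.3, 1.0        # lattice size, T/R ratio, aspiration (assumed)
R, T, S, P = 1.0, tau, 0.0, 0.1          # rescaled PD payoffs (assumed)
state = rng.integers(0, 2, size=(L, L))  # 1 = cooperate, 0 = defect

def payoff(a, b):
    """Payoff of action a against action b."""
    return np.where(a == 1, np.where(b == 1, R, S), np.where(b == 1, T, P))

def step(state):
    total = np.zeros(state.shape)
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # von Neumann neighbours
        total += payoff(state, np.roll(state, shift, axis=(0, 1)))
    # win-stay, lose-shift: switch action if mean payoff missed the aspiration
    return np.where(total / 4.0 < aspiration, 1 - state, state)

for _ in range(200):
    state = step(state)
print("fraction of cooperators:", state.mean())
```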

  5. A scalable population code for time in the striatum.

    PubMed

    Mello, Gustavo B M; Soares, Sofia; Paton, Joseph J

    2015-05-04

    To guide behavior and learn from its consequences, the brain must represent time over many scales. Yet, the neural signals used to encode time in the seconds-to-minute range are not known. The striatum is a major input area of the basal ganglia associated with learning and motor function. Previous studies have also shown that the striatum is necessary for normal timing behavior. To address how striatal signals might be involved in timing, we recorded from striatal neurons in rats performing an interval timing task. We found that neurons fired at delays spanning tens of seconds and that this pattern of responding reflected the interaction between time and the animals' ongoing sensorimotor state. Surprisingly, cells rescaled responses in time when intervals changed, indicating that striatal populations encoded relative time. Moreover, time estimates decoded from activity predicted timing behavior as animals adjusted to new intervals, and disrupting striatal function led to a decrease in timing performance. These results suggest that striatal activity forms a scalable population code for time, providing timing signals that animals use to guide their actions. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. On better estimating and normalizing the relationship between clinical parameters: comparing respiratory modulations in the photoplethysmogram and blood pressure signal (DPOP versus PPV).

    PubMed

    Addison, Paul S; Wang, Rui; Uribe, Alberto A; Bergese, Sergio D

    2015-01-01

    DPOP (ΔPOP or Delta-POP) is a noninvasive parameter which measures the strength of respiratory modulations present in the pulse oximeter waveform. It has been proposed as a noninvasive alternative to pulse pressure variation (PPV) used in the prediction of the response to volume expansion in hypovolemic patients. We considered a number of simple techniques for better determining the underlying relationship between the two parameters. It was shown numerically that baseline-induced signal errors were asymmetric in nature, which corresponded to observation, and we proposed a method which combines a least-median-of-squares estimator with the requirement that the relationship passes through the origin (the LMSO method). We further developed a method of normalization of the parameters through rescaling DPOP using the inverse gradient of the linear fitted relationship. We propose that this normalization method (LMSO-N) is applicable to the matching of a wide range of clinical parameters. It is also generally applicable to the self-normalizing of parameters whose behaviour may change slightly due to algorithmic improvements.

  7. On Better Estimating and Normalizing the Relationship between Clinical Parameters: Comparing Respiratory Modulations in the Photoplethysmogram and Blood Pressure Signal (DPOP versus PPV)

    PubMed Central

    Addison, Paul S.; Wang, Rui; Uribe, Alberto A.; Bergese, Sergio D.

    2015-01-01

    DPOP (ΔPOP or Delta-POP) is a noninvasive parameter which measures the strength of respiratory modulations present in the pulse oximeter waveform. It has been proposed as a noninvasive alternative to pulse pressure variation (PPV) used in the prediction of the response to volume expansion in hypovolemic patients. We considered a number of simple techniques for better determining the underlying relationship between the two parameters. It was shown numerically that baseline-induced signal errors were asymmetric in nature, which corresponded to observation, and we proposed a method which combines a least-median-of-squares estimator with the requirement that the relationship passes through the origin (the LMSO method). We further developed a method of normalization of the parameters through rescaling DPOP using the inverse gradient of the linear fitted relationship. We propose that this normalization method (LMSO-N) is applicable to the matching of a wide range of clinical parameters. It is also generally applicable to the self-normalizing of parameters whose behaviour may change slightly due to algorithmic improvements. PMID:25691912
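
    A sketch of the LMSO step: constrain the fit through the origin and pick the slope that minimizes the median squared residual, with candidate slopes taken from the data ratios (an assumption of this sketch, not necessarily the authors' search strategy). The LMSO-N normalization then divides DPOP by the fitted gradient.

```python
import numpy as np

def lmso_slope(x, y):
    """Least-median-of-squares line through the origin, y ~= m * x.
    Candidate slopes are the pairwise ratios y_i / x_i (x must be nonzero)."""
    candidates = np.asarray(y, float) / np.asarray(x, float)
    return float(min(candidates, key=lambda m: np.median((y - m * x) ** 2)))

# LMSO-N: rescale DPOP onto PPV's scale with the inverse gradient, e.g.
# dpop_normalized = dpop / lmso_slope(ppv, dpop)
```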

  8. Power-law versus log-law in wall-bounded turbulence: A large-eddy simulation perspective

    NASA Astrophysics Data System (ADS)

    Cheng, W.; Samtaney, R.

    2014-01-01

    The debate whether the mean streamwise velocity in wall-bounded turbulent flows obeys a log-law or a power-law scaling originated over two decades ago, and continues to ferment in recent years. As experiments and direct numerical simulation cannot provide sufficient clues, in this study we present an insight into this debate from a large-eddy simulation (LES) viewpoint. The LES organically combines state-of-the-art models (the stretched-vortex model and inflow rescaling method) with a virtual-wall model derived under different scaling law assumptions (the log-law or the power-law by George and Castillo ["Zero-pressure-gradient turbulent boundary layer," Appl. Mech. Rev. 50, 689 (1997)]). Comparisons of LES results for Reθ ranging from 10^5 to 10^11 for zero-pressure-gradient turbulent boundary layer flows are carried out for the mean streamwise velocity, its gradient and its scaled gradient. Our results provide strong evidence that for both sets of modeling assumptions (log law or power law), the turbulence gravitates naturally towards the log-law scaling at extremely large Reynolds numbers.

  9. Accounting for rate variation among lineages in comparative demographic analyses

    USGS Publications Warehouse

    Hope, Andrew G.; Ho, Simon Y. W.; Malaney, Jason L.; Cook, Joseph A.; Talbot, Sandra L.

    2014-01-01

    Genetic analyses of contemporary populations can be used to estimate the demographic histories of species within an ecological community. Comparison of these demographic histories can shed light on community responses to past climatic events. However, species experience different rates of molecular evolution, and this presents a major obstacle to comparative demographic analyses. We address this problem by using a Bayesian relaxed-clock method to estimate the relative evolutionary rates of 22 small mammal taxa distributed across northwestern North America. We found that estimates of the relative molecular substitution rate for each taxon were consistent across the range of sampling schemes that we compared. Using three different reference rates, we rescaled the relative rates so that they could be used to estimate absolute evolutionary timescales. Accounting for rate variation among taxa led to temporal shifts in our skyline-plot estimates of demographic history, highlighting both uniform and idiosyncratic evolutionary responses to directional climate trends for distinct ecological subsets of the small mammal community. Our approach can be used in evolutionary analyses of populations from multiple species, including comparative demographic studies.

  10. A Method for Studying Piston Friction

    DTIC Science & Technology

    1943-03-01

    ... through the labyrinth, and assure atmospheric pressure on the upper diaphragm. Reduction of gas leakage through the labyrinth is provided by a ... machined so as to form a labyrinth seal ... (6) to the combustion gases and yet not restrain the cylinder-sleeve motion. The labyrinth section of the ... lever (8) ... sealed off from the jacket cooling water by means of flexible neoprene seals (9). These seals exert no appreciable constraint

  11. Strengthening and Plastic Flow of Ni3Al Alloy Microcrystals (Preprint)

    DTIC Science & Technology

    2012-08-01

    the degree they can be resolved), with essentially no slip-band thickening. Note that the image of Fig. 4b has been digitally enhanced to better ... solution hardening stress. The second term in Eqn. (2) represents a forest hardening contribution. Solving for the microcrystal flow stress, one ... but the truncated glide lengths associated with the mean-field dislocation dynamics forces the stress to increase to re-scale the processes to the

  12. The isotropic-nematic phase transition of tangent hard-sphere chain fluids—Pure components

    NASA Astrophysics Data System (ADS)

    van Westen, Thijs; Oyarzún, Bernardo; Vlugt, Thijs J. H.; Gross, Joachim

    2013-07-01

    An extension of Onsager's second virial theory is developed to describe the isotropic-nematic phase transition of tangent hard-sphere chain fluids. Flexibility is introduced by the rod-coil model. The effect of chain-flexibility on the second virial coefficient is described using an accurate, analytical approximation for the orientation-dependent pair-excluded volume. The use of this approximation allows for an analytical treatment of intramolecular flexibility by using a single pure-component parameter. Two approaches to approximate the effect of the higher virial coefficients are considered, i.e., the Vega-Lago rescaling and Scaled Particle Theory (SPT). The Onsager trial function is employed to describe the orientational distribution function. Theoretical predictions for the equation of state and orientational order parameter are tested against the results from Monte Carlo (MC) simulations. For linear chains of length 9 and longer, theoretical results are in excellent agreement with MC data. For smaller chain lengths, small errors introduced by the approximation of the higher virial coefficients become apparent, leading to a small under- and overestimation of the pressure and density difference at the phase transition, respectively. For rod-coil fluids of reasonable rigidity, a quantitative comparison between theory and MC simulations is obtained. For more flexible chains, however, both the Vega-Lago rescaling and SPT lead to a small underestimation of the location of the phase transition.

  13. Single-coil properties and concentration effects for polyelectrolyte-like wormlike micelles: a Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Cannavacciuolo, Luigi; Skov Pedersen, Jan; Schurtenberger, Peter

    2002-03-01

    Results of an extensive Monte Carlo (MC) study on both single and many semiflexible charged chains with excluded volume (EV) are summarized. The model employed has been tailored to mimic wormlike micelles in solution. Simulations have been performed at different ionic strengths of added salt, charge densities, chain lengths and volume fractions Φ, covering the dilute to concentrated regime. At infinite dilution the scattering functions can be fitted by the same fitting functions as for uncharged semiflexible chains with EV, provided that an electrostatic contribution b_el is added to the bare Kuhn length. The scaling of b_el is found to be more complex than the Odijk-Skolnick-Fixman predictions, and qualitatively compatible with more recent variational calculations. Universality in the scaling of the radius of gyration is found if all lengths are rescaled by the total Kuhn length. At finite concentrations, the simple model used is able to reproduce the structural peak in the scattering function S(q) observed in many experiments, as well as other properties of polyelectrolytes (PELs) in solution. Universal behaviour of the forward scattering S(0) is established after a rescaling of Φ. MC data are found to be in very good agreement with experimental scattering measurements with equilibrium PELs, which are giant wormlike micelles formed in mixtures of nonionic and ionic surfactants in dilute aqueous solution, with added salt.

  14. A generic implementation of replica exchange with solute tempering (REST2) algorithm in NAMD for complex biophysical simulations

    NASA Astrophysics Data System (ADS)

    Jo, Sunhwan; Jiang, Wei

    2015-12-01

    Replica Exchange with Solute Tempering (REST2) is a powerful sampling enhancement algorithm for molecular dynamics (MD): it needs a significantly smaller number of replicas yet achieves higher sampling efficiency relative to the standard temperature exchange algorithm. In this paper, we extend the applicability of REST2 to quantitative biophysical simulations through a robust and generic implementation in the highly scalable MD software NAMD. The rescaling procedure for the force field parameters controlling the REST2 "hot region" is implemented in NAMD at the source code level. A user can conveniently select the hot region through VMD and write the selection information into a PDB file. The rescaling keyword/parameter is exposed through the NAMD Tcl scripting interface, which enables on-the-fly simulation parameter changes. Our implementation of REST2 is within communication-enabled Tcl script built on top of Charm++, so the communication overhead of an exchange attempt is vanishingly small. Such a generic implementation facilitates seamless cooperation between REST2 and other modules of NAMD to provide enhanced sampling for complex biomolecular simulations. Three challenging applications, including native REST2 simulation of a peptide folding-unfolding transition, free energy perturbation/REST2 for the absolute binding affinity of a protein-ligand complex, and umbrella sampling/REST2 Hamiltonian exchange for free energy landscape calculation, were carried out on an IBM Blue Gene/Q supercomputer to demonstrate the efficacy of REST2 based on the present implementation.
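
    The rescaling at the heart of REST2 multiplies hot-region force-field terms by powers of λ = T0/Teff; a minimal sketch of those standard factors, with a hypothetical parameter dictionary standing in for NAMD's source-level implementation.

```python
import math

def rest2_scale(params, t0=300.0, t_eff=450.0):
    """Scale hot-region parameters for effective temperature t_eff.
    With lam = t0/t_eff, solute-solute interactions end up scaled by lam
    and solute-solvent by sqrt(lam) via the combination rules."""
    lam = t0 / t_eff
    return {
        "charge":     params["charge"] * math.sqrt(lam),  # electrostatics
        "lj_epsilon": params["lj_epsilon"] * lam,         # Lennard-Jones wells
        "dihedral_k": params["dihedral_k"] * lam,         # torsional barriers
    }
```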

  15. Lévy/Anomalous Diffusion as a Mean-Field Theory for 3D Cloud Effects in SW-RT: Empirical Support, New Analytical Formulation, and Impact on Atmospheric Absorption

    NASA Astrophysics Data System (ADS)

    Pfeilsticker, K.; Davis, A.; Marshak, A.; Suszcynsky, D. M.; Buldryrev, S.; Barker, H.

    2001-12-01

    2-stream RT models, as used in all current GCMs, are mathematically equivalent to standard diffusion theory where the physical picture is a slow propagation of the diffuse radiation by Gaussian random walks. In other words, after the conventional van de Hulst rescaling by 1/(1-g) in R^3 and also by (1-g) in t, solar photons follow convoluted fractal trajectories in the atmosphere. For instance, we know that transmitted light is typically scattered about (1-g)τ^2 times while reflected light is scattered on average about τ times, where τ is the optical depth of the column. The space/time spread of this diffusion process is described exactly by a Gaussian distribution; from the statistical physics viewpoint, this follows from the convergence of the sum of many (rescaled) steps between scattering events with a finite variance. This Gaussian picture follows directly from first principles (the RT equation) under the assumptions of horizontal uniformity and large optical depth, i.e., there is a homogeneous plane-parallel cloud somewhere in the column. The first-order effect of 3D variability of cloudiness, the main source of scattering, is to perturb the distribution of single steps between scatterings which, modulo the '1-g' rescaling, can be assumed effectively isotropic. The most natural generalization of the Gaussian distribution is the 1-parameter family of symmetric Lévy-stable distributions because sums of many zero-mean random variables with infinite variance, but finite moments of order q < α (0 < α < 2), converge to them. It has been shown on heuristic grounds that for these Lévy-based random walks the typical number of scatterings is now (1-g)τ^α for transmitted light. The appearance of a non-rational exponent is why this is referred to as anomalous diffusion. Note that standard/Gaussian diffusion is recovered in the limit α → 2⁻. Lévy transport theory has been successfully used in statistical physics to investigate a wide variety of systems with strongly nonlinear dynamics; these applications range from random advection in turbulent fluids to the erratic behavior of financial time-series and, most recently, self-regulating ecological systems. We will briefly survey the state-of-the-art observations that offer compelling empirical support for the Lévy/anomalous diffusion model in atmospheric radiation: (1) high-resolution spectroscopy of differential absorption in the O2 A-band from the ground; (2) temporal transient records of lightning strokes transmitted through clouds to a sensitive detector in space; and (3) the Gamma-distributions of optical depths derived from Landsat cloud scenes at 30-m resolution. We will then introduce a rigorous analytical formulation of anomalous transport through finite media based on fractional derivatives and Sonin calculus. A remarkable result from this new theoretical development is an extremal property of the α = 1+ case (divergent mean free path), as is observed in the cloudy atmosphere. Finally, we will discuss the implications of anomalous transport theory for bulk 3D effects on the current enhanced absorption problem as well as its role as the basis of a next-generation GCM RT parameterization.
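
    The Gaussian-versus-Lévy contrast comes down to summing steps with finite versus infinite variance; a short numerical illustration (α = 1.2 is an arbitrary choice in the Lévy range).

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(2)
n = 50_000
walks = {
    "Gaussian (alpha=2)": rng.standard_normal(n),
    "Levy (alpha=1.2)": levy_stable.rvs(1.2, 0.0, size=n, random_state=rng),
}
for name, steps in walks.items():
    # robust spread measure, since the Levy walk's variance diverges
    print(name, np.percentile(np.abs(np.cumsum(steps)), 68))
```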

  16. Association Between Mortality and Heritability of the Scale of Aging Vigor in Epidemiology.

    PubMed

    Sanders, Jason L; Singh, Jatinder; Minster, Ryan L; Walston, Jeremy D; Matteini, Amy M; Christensen, Kaare; Mayeux, Richard; Borecki, Ingrid B; Perls, Thomas; Newman, Anne B

    2016-08-01

    To investigate the association between mortality and heritability of a rescaled Fried frailty index, the Scale of Aging Vigor in Epidemiology (SAVE), to determine its value for genetic analyses. Longitudinal, community-based cohort study. The Long Life Family Study (LLFS) in the United States and Denmark. Long-lived individuals (N = 4,875, including 4,075 genetically related individuals) and their families (N = 551). The SAVE was administered to 3,599 participants and included weight change, weakness (grip strength), fatigue (questionnaire), physical activity (days walked in prior 2 weeks), and slowness (gait speed); each component was scored 0, 1, or 2 using approximate tertiles, and summed (range 0 (vigorous) to 10 (frail)). Heritability was determined using a variance component-based family analysis using a polygenic model. Association with mortality in the proband generation (N = 1,421) was calculated using Cox proportional hazards mixed-effect models. Heritability of the SAVE was 0.23 (P < .001) overall (n = 3,599), 0.31 (P < .001) in probands (n = 1,479), and 0.26 (P < .001) in offspring (n = 2,120). In adjusted models, higher SAVE scores were associated with higher mortality (score 5-6: hazard ratio (HR) = 2.83, 95% confidence interval (CI) = 1.46-5.51; score 7-10: HR = 3.40, 95% CI = 1.72-6.71) than lower scores (0-2). The SAVE was associated with mortality and was moderately heritable in the LLFS, suggesting a genetic component to age-related vigor and frailty and supporting its use for further genetic analyses. © 2016, Copyright the Authors Journal compilation © 2016, The American Geriatrics Society.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Getirana, Augusto; Dutra, Emanuel; Guimberteau, Matthieu

    Despite recent advances in modeling and remote sensing of land surfaces, estimates of the global water budget are still fairly uncertain. The objective of this study is to evaluate the water budget of the Amazon basin based on several state-of-the-art land surface model (LSM) outputs. Water budget variables [total water storage (TWS), evapotranspiration (ET), surface runoff (R) and baseflow (B)] are evaluated at the basin scale using both remote sensing and in situ data. Fourteen LSMs were run using meteorological forcings at a 3-hourly time step and 1-degree spatial resolution. Three experiments are performed using precipitation which has been rescaled to match the monthly global GPCP and GPCC datasets and the daily HYBAM dataset for the Amazon basin. R and B are used to force the Hydrological Modeling and Analysis Platform (HyMAP) river routing scheme, and simulated discharges are compared against observations at 165 gauges. Simulated ET and TWS are compared against FLUXNET and MOD16A2 evapotranspiration, and GRACE TWS estimates in different catchments. At the basin scale, simulated ET ranges from 2.39 mm.d-1 to 3.26 mm.d-1, and a low spatial correlation between ET and P indicates that evapotranspiration does not depend on water availability over most of the basin. Results also show that other simulated water budget variables vary significantly as a function of both the LSM and the precipitation used, but simulated TWS generally agree at the basin scale. The best water budget simulations resulted from experiments using the HYBAM dataset, mostly explained by a denser rainfall gauge network and the daily rescaling.

  18. Evaluation of the grand-canonical partition function using expanded Wang-Landau simulations. V. Impact of an electric field on the thermodynamic properties and ideality contours of water

    NASA Astrophysics Data System (ADS)

    Desgranges, Caroline; Delhommelle, Jerome

    2016-11-01

    Using molecular simulation, we assess the impact of an electric field on the properties of water, modeled with the SPC/E potential, over a wide range of states and conditions. Electric fields of the order of 0.1 V/Å and beyond are found to have a significant impact on the grand-canonical partition function of water, resulting in shifts in the chemical potential at the vapor-liquid coexistence of up to 20%. This, in turn, leads to an increase in the critical temperatures by close to 7% for a field of 0.2 V/Å, to lower vapor pressures, and to much larger entropies of vaporization (by up to 35%). We interpret these results in terms of the greater density change at the transition and of the increased structural order resulting from the applied field. The thermodynamics of compressed liquids and of supercritical water are also analyzed over a wide range of pressures, leading to the determination of the Zeno line and of the curve of ideal enthalpy that span the supercritical region of the phase diagram. Rescaling the phase diagrams obtained for the different field strengths by their respective critical properties allows us to draw a correspondence between these systems for fields of up to 0.2 V/Å.

  19. Tetrahedron deformation and alignment of perceived vorticity and strain in a turbulent flow

    NASA Astrophysics Data System (ADS)

    Pumir, Alain; Bodenschatz, Eberhard; Xu, Haitao

    2013-03-01

    We describe the structure and dynamics of turbulence by the scale-dependent perceived velocity gradient tensor obtained by following four tracers, i.e., fluid particles, that initially form a regular tetrahedron. We report results from experiments in a von Kármán swirling water flow and from numerical simulations of the incompressible Navier-Stokes equation. We analyze the statistics and the dynamics of the perceived rate of strain tensor and vorticity for initially regular tetrahedra of size r_0 from the dissipative to the integral scale. Just as for the true velocity gradient, at any instant, the perceived vorticity is also preferentially aligned with the intermediate eigenvector of the perceived rate of strain. However, in the perceived rate of strain eigenframe fixed at a given time t = 0, the perceived vorticity evolves in time such as to align with the strongest eigendirection at t = 0. This also applies to the true velocity gradient. The experimental data at the higher Reynolds number suggest the existence of a self-similar regime in the inertial range. In particular, the dynamics of alignment of the perceived vorticity and strain can be rescaled by t_0, the turbulence time scale of the flow when the scale r_0 is in the inertial range. For smaller Reynolds numbers we found the dynamics to be scale dependent.

  20. Correcting the Relative Bias of Light Obscuration and Flow Imaging Particle Counters.

    PubMed

    Ripple, Dean C; Hu, Zhishang

    2016-03-01

    Industry and regulatory bodies desire more accurate methods for counting and characterizing particles. Measurements of proteinaceous-particle concentrations by light obscuration and flow imaging can differ by factors of ten or more. We propose methods to correct the diameters reported by light obscuration and flow imaging instruments. For light obscuration, diameters were rescaled based on characterization of the refractive index of typical particles and a light scattering model for the extinction efficiency factor. The light obscuration models are applicable for either homogeneous materials (e.g., silicone oil) or for chemically homogeneous, but spatially non-uniform aggregates (e.g., protein aggregates). For flow imaging, the method relied on calibration of the instrument with silica beads suspended in water-glycerol mixtures. These methods were applied to a silicone-oil droplet suspension and four particle suspensions containing particles produced from heat stressed and agitated human serum albumin, agitated polyclonal immunoglobulin, and abraded ethylene tetrafluoroethylene polymer. All suspensions were measured by two flow imaging and one light obscuration apparatus. Prior to correction, results from the three instruments disagreed by a factor ranging from 3.1 to 48 in particle concentration over the size range from 2 to 20 μm. Bias corrections reduced the disagreement from an average factor of 14 down to an average factor of 1.5. The methods presented show promise in reducing the relative bias between light obscuration and flow imaging.

  1. Dimensionality Reduction in Big Data with Nonnegative Matrix Factorization

    DTIC Science & Technology

    2017-06-20

    applications of data mining, signal processing, computer vision, bioinformatics, etc. Fundamentally, NMF has two main purposes. First, it reduces ... shape of the function becomes more spherical because ∂²g/∂y_i² = 1, ∀i, and g(y) is convex. This part aims to make the post-processing parts more ... maxStop = 0 for each thread of computation */; /* Re-scaling variables */ Q = H / √(diag(H) diag(H)^T); q = h / √diag(H); /* Solving NQP: minimizing f(x

  2. An Estimation of the Logarithmic Timescale in Ergodic Dynamics

    NASA Astrophysics Data System (ADS)

    Gomez, Ignacio S.

    An estimation of the logarithmic timescale in quantum systems having an ergodic dynamics in the semiclassical limit is presented. The estimation is based on an extension of Krieger's finite generator theorem for discretized σ-algebras and uses the time rescaling property of the Kolmogorov-Sinai entropy. The results are in agreement with those obtained in the literature but with simpler mathematics and within the context of ergodic theory. Moreover, some consequences of Poincaré's recurrence theorem are also explored.

  3. Averages of b-hadron, c-hadron, and τ-lepton properties as of summer 2014

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amhis, Y.; et al.

    2014-12-23

    This article reports world averages of measurements of b-hadron, c-hadron, and τ-lepton properties obtained by the Heavy Flavor Averaging Group (HFAG) using results available through summer 2014. For the averaging, common input parameters used in the various analyses are adjusted (rescaled) to common values, and known correlations are taken into account. The averages include branching fractions, lifetimes, neutral meson mixing parameters, CP violation parameters, parameters of semileptonic decays and CKM matrix elements.

  4. Power laws for gravity and topography of Solar System bodies

    NASA Astrophysics Data System (ADS)

    Ermakov, A.; Park, R. S.; Bills, B. G.

    2017-12-01

    When a spacecraft visits a planetary body, it is useful to be able to predict its gravitational and topographic properties. This knowledge is important for determining the level of perturbations in the spacecraft's motion as well as for planning the observation campaign. It has been known for the Earth that the power spectrum of gravity follows a power law, also known as the Kaula rule (Kaula, 1963; Rapp, 1989). A similar rule was derived for topography (Vening-Meinesz, 1951). The goal of this paper is to generalize the power law that can characterize the gravity and topography power spectra for bodies across a wide range of sizes. We have analyzed the shape power spectra of bodies that have their global shape and gravity field measured. These bodies span five orders of magnitude in radius and surface gravity and include terrestrial planets, icy moons and minor bodies. We have found that, despite having different internal structure, composition and mechanical properties, the topography power spectra of these bodies' shapes can be modeled with a similar power law rescaled by the surface gravity. Having empirically found a power law for topography, we can map it to a gravity power law. Special care should be taken with low-degree harmonic coefficients due to potential isostatic compensation. For minor bodies, uniform density can be assumed. The gravity coefficients are a linear function of the shape coefficients for close-to-spherical bodies. In this case, the power law for gravity will be steeper than the power law of topography due to the factor (2n+1) in the gravity expansion (e.g. Eq. 10 in Wieczorek & Phillips, 1998). Higher powers of topography must be retained for irregularly shaped bodies, which breaks the linearity. Therefore, we propose the following procedure to derive an a priori constraint for gravity. First, a surface gravity needs to be determined assuming a typical density for the relevant class of bodies. Second, the scaling coefficient of the power law can be found by rescaling the values known for other bodies. Third, an ensemble of synthetic shapes that follow the defined power law can be generated and gravity-from-shape can be found. The averaged power spectrum can be used as an a priori constraint for the gravity field, and the variance of power can be computed for individual degrees.
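
    The first two steps of the proposed procedure amount to fitting, and then rescaling, a power law for the degree-power spectrum; a minimal sketch (the Kaula-rule numbers in the comment are just the familiar Earth example).

```python
import numpy as np

def fit_power_law(degrees, power):
    """Fit power(n) ~= A * n**(-b) in log-log space; the caller should
    exclude low degrees that may be isostatically compensated."""
    slope, log_a = np.polyfit(np.log(degrees), np.log(power), 1)
    return np.exp(log_a), -slope   # amplitude A, spectral exponent b

# For Earth's gravity, Kaula's rule is often quoted as power ~ 1e-10 / n**3;
# the abstract's generalization rescales the amplitude by surface gravity.
```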

  5. Quantization of Time-Like Energy for Wave Maps into Spheres

    NASA Astrophysics Data System (ADS)

    Grinis, Roland

    2017-06-01

    In this article we consider large energy wave maps in dimension 2+1, as in the resolution of the threshold conjecture by Sterbenz and Tataru (Commun. Math. Phys. 298(1):139-230, 2010; Commun. Math. Phys. 298(1):231-264, 2010), but more specifically into the unit Euclidean sphere S^{n-1} ⊂ R^n with n ≥ 2, and study further the dynamics of the sequence of wave maps that are obtained in Sterbenz and Tataru (Commun. Math. Phys. 298(1):231-264, 2010) at the final rescaling for a first, finite or infinite, time singularity. We prove that, on a suitably chosen sequence of time slices at this scaling, there is a decomposition of the map, up to an error with asymptotically vanishing energy, into a decoupled sum of rescaled solitons concentrating in the interior of the light cone and a term having asymptotically vanishing energy dispersion norm, concentrating on the null boundary and converging to a constant locally in the interior of the cone, in the energy space. Similar and stronger results have been recently obtained in the equivariant setting by several authors (Côte, Commun. Pure Appl. Math. 68(11):1946-2004, 2015; Côte, Commun. Pure Appl. Math. 69(4):609-612, 2016; Côte, Am. J. Math. 137(1):139-207, 2015; Côte et al., Am. J. Math. 137(1):209-250, 2015; Krieger, Commun. Math. Phys. 250(3):507-580, 2004), where better control on the dispersive term concentrating on the null boundary of the cone is provided, and in some cases the asymptotic decomposition is shown to hold for all time. Here, however, we do not impose any symmetry condition on the map itself and our strategy follows the one from bubbling analysis of harmonic maps into spheres in the supercritical regime due to Lin and Rivière (Ann. Math. 149(2):785-829, 1999; Duke Math. J. 111:177-193, 2002), which we make work here in the hyperbolic context of Sterbenz and Tataru (Commun. Math. Phys. 298(1), 231-264, 2010).

  6. Full-range k-domain linearization in spectral-domain optical coherence tomography.

    PubMed

    Jeon, Mansik; Kim, Jeehyun; Jung, Unsang; Lee, Changho; Jung, Woonggyu; Boppart, Stephen A

    2011-03-10

    A full-bandwidth k-domain linearization method for spectral-domain optical coherence tomography (SD-OCT) is demonstrated. The method uses information of the wavenumber-pixel-position provided by a translating-slit-based wavelength filter. For calibration purposes, the filter is placed either after a broadband source or at the end of the sample path, and the filtered spectrum with a narrowed line width (∼0.5 nm) is incident on a line-scan camera in the detection path. The wavelength-swept spectra are co-registered with the pixel positions according to their central wavelengths, which can be automatically measured with an optical spectrum analyzer. For imaging, the method does not require a filter or a software recalibration algorithm; it simply resamples the OCT signal from the detector array without employing rescaling or interpolation methods. The accuracy of k-linearization is maximized by increasing the k-linearization order, which is known to be a crucial parameter for maintaining a narrow point-spread function (PSF) width at increasing depths. The broadening effect is studied by changing the k-linearization order by undersampling to search for the optimal value. The system provides more position information, surpassing the optimum without compromising the imaging speed. The proposed full-range k-domain linearization method can be applied to SD-OCT systems to simplify their hardware/software, increase their speed, and improve the axial image resolution. The experimentally measured width of PSF in air has an FWHM of 8 μm at the edge of the axial measurement range. At an imaging depth of 2.5 mm, the sensitivity of the full-range calibration case drops less than 10 dB compared with the uncompensated case.

  7. Accuracy of Snow Water Equivalent Estimated From GPS Vertical Displacements: A Synthetic Loading Case Study for Western U.S. Mountains

    NASA Astrophysics Data System (ADS)

    Enzminger, Thomas L.; Small, Eric E.; Borsa, Adrian A.

    2018-01-01

    GPS monitoring of solid Earth deformation due to surface loading is an independent approach for estimating seasonal changes in terrestrial water storage (TWS). In western United States (WUSA) mountain ranges, snow water equivalent (SWE) is the dominant component of TWS and an essential water resource. While several studies have estimated SWE from GPS-measured vertical displacements, the error associated with this method remains poorly constrained. We examine the accuracy of SWE estimated from synthetic displacements at 1,395 continuous GPS station locations in the WUSA. Displacement at each station is calculated from the predicted elastic response to variations in SWE from SNODAS and soil moisture from the NLDAS-2 Noah model. We invert synthetic displacements for TWS, showing that both seasonal accumulation and melt as well as year-to-year fluctuations in peak SWE can be estimated from data recorded by the existing GPS network. Because we impose a smoothness constraint in the inversion, recovered TWS exhibits mass leakage from mountain ranges to surrounding areas. This leakage bias is removed via linear rescaling in which the magnitude of the gain factor depends on station distribution and TWS anomaly patterns. The synthetic GPS-derived estimates reproduce approximately half of the spatial variability (unbiased root mean square error ~50%) of TWS loading within mountain ranges, a considerable improvement over GRACE. The inclusion of additional simulated GPS stations improves representation of spatial variations. GPS data can be used to estimate mountain-range-scale SWE, but effects of soil moisture and other TWS components must first be subtracted from the GPS-derived load estimates.

  8. Improving the Global Precipitation Record: GPCP Version 2.1

    NASA Technical Reports Server (NTRS)

    Huffman, George J.; Adler, Robert F.; Bolvin, David t.; Gu, Guojun

    2009-01-01

    The GPCP has developed Version 2.1 of its long-term (1979-present) global Satellite-Gauge (SG) data sets to take advantage of the improved GPCC gauge analysis, which is one key input. In addition, the OPI estimates used in the pre-SSM/I era have been rescaled to 20 years of the SSM/I-era SG. The monthly, pentad, and daily GPCP products have been entirely reprocessed, continuing to enforce consistency of the submonthly estimates with the monthly. Version 2.1 is close to Version 2, with the global ocean, land, and total values about 0%, 6%, and 2% higher, respectively. The revised long-term global precipitation rate is 2.68 mm/d. The corresponding tropical (25°N-25°S) increases are 0%, 7%, and 3%. Long-term linear changes in the data tend to be smaller in Version 2.1, but the statistics are sensitive to the threshold for land/ocean separation and to the use of the pre-SSM/I part of the record.
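
    The pre-SSM/I adjustment is, in essence, a multiplicative rescaling over a common reference period; a minimal sketch with hypothetical inputs.

```python
import numpy as np

def rescale_to_reference(series, reference, overlap):
    """Scale `series` so its mean over the overlap period matches the
    reference's (the spirit of tying pre-SSM/I OPI estimates to the
    SSM/I-era satellite-gauge record). `overlap` is an index or slice."""
    gain = reference[overlap].mean() / series[overlap].mean()
    return np.asarray(series) * gain
```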

  9. Causal mediation analysis with a binary outcome and multiple continuous or ordinal mediators: Simulations and application to an alcohol intervention.

    PubMed

    Nguyen, Trang Quynh; Webb-Vargas, Yenny; Koning, Ina M; Stuart, Elizabeth A

    We investigate a method to estimate the combined effect of multiple continuous/ordinal mediators on a binary outcome: 1) fit a structural equation model with probit link for the outcome and identity/probit link for continuous/ordinal mediators, 2) predict potential outcome probabilities, and 3) compute natural direct and indirect effects. Step 2 involves rescaling the latent continuous variable underlying the outcome to address residual mediator variance/covariance. We evaluate the estimation of risk-difference- and risk-ratio-based effects (RDs, RRs) using the ML, WLSMV and Bayes estimators in Mplus. Across most variations in path-coefficient and mediator-residual-correlation signs and strengths, and confounding situations investigated, the method performs well with all estimators, but favors ML/WLSMV for RDs with continuous mediators, and Bayes for RRs with ordinal mediators. Bayes outperforms WLSMV/ML regardless of mediator type when estimating RRs with small potential outcome probabilities and in two other special cases. An adolescent alcohol prevention study is used for illustration.
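
    Step 3 is simple arithmetic on the potential-outcome probabilities from step 2; in the sketch below, p[a][a_star] stands for P(Y(a, M(a_star)) = 1) and the numbers are placeholders.

```python
# Placeholder potential-outcome probabilities P(Y(a, M(a_star)) = 1).
p = {1: {1: 0.30, 0: 0.22}, 0: {0: 0.15}}

nde_rd = p[1][0] - p[0][0]    # natural direct effect, risk difference
nie_rd = p[1][1] - p[1][0]    # natural indirect effect, risk difference
nde_rr = p[1][0] / p[0][0]    # risk-ratio versions
nie_rr = p[1][1] / p[1][0]
total_rd = p[1][1] - p[0][0]  # total effect decomposes as NDE + NIE
```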

  10. Entropic multiple-relaxation-time multirange pseudopotential lattice Boltzmann model for two-phase flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qin, Feifei; Mazloomi Moqaddam, Ali; Kang, Qinjun

    Here, an entropic multiple-relaxation-time lattice Boltzmann approach is coupled to a multirange Shan-Chen pseudopotential model to study the two-phase flow. Compared with previous multiple-relaxation-time multiphase models, this model is stable and accurate for the simulation of a two-phase flow in a much wider range of viscosity and surface tension at a high liquid-vapor density ratio. A stationary droplet surrounded by equilibrium vapor is first simulated to validate this model using the coexistence curve and Laplace’s law. Then, two series of droplet impact behavior, on a liquid film and a flat surface, are simulated in comparison with theoretical or experimental results. Droplet impact on a liquid film is simulated for different Reynolds numbers at high Weber numbers. With the increase of the Sommerfeld parameter, onset of splashing is observed and multiple secondary droplets occur. The droplet spreading ratio agrees well with the square root of time law and is found to be independent of Reynolds number. Moreover, shapes of simulated droplets impacting hydrophilic and superhydrophobic flat surfaces show good agreement with experimental observations through the entire dynamic process. The maximum spreading ratio of a droplet impacting the superhydrophobic flat surface is studied for a large range of Weber numbers. Results show that the rescaled maximum spreading ratios are in good agreement with a universal scaling law. This series of simulations demonstrates that the proposed model accurately captures the complex fluid-fluid and fluid-solid interfacial physical processes for a wide range of Reynolds and Weber numbers at high density ratios.

  11. Entropic multiple-relaxation-time multirange pseudopotential lattice Boltzmann model for two-phase flow

    NASA Astrophysics Data System (ADS)

    Qin, Feifei; Mazloomi Moqaddam, Ali; Kang, Qinjun; Derome, Dominique; Carmeliet, Jan

    2018-03-01

    An entropic multiple-relaxation-time lattice Boltzmann approach is coupled to a multirange Shan-Chen pseudopotential model to study the two-phase flow. Compared with previous multiple-relaxation-time multiphase models, this model is stable and accurate for the simulation of a two-phase flow in a much wider range of viscosity and surface tension at a high liquid-vapor density ratio. A stationary droplet surrounded by equilibrium vapor is first simulated to validate this model using the coexistence curve and Laplace's law. Then, two series of droplet impact behavior, on a liquid film and a flat surface, are simulated in comparison with theoretical or experimental results. Droplet impact on a liquid film is simulated for different Reynolds numbers at high Weber numbers. With the increase of the Sommerfeld parameter, onset of splashing is observed and multiple secondary droplets occur. The droplet spreading ratio agrees well with the square root of time law and is found to be independent of Reynolds number. Moreover, shapes of simulated droplets impacting hydrophilic and superhydrophobic flat surfaces show good agreement with experimental observations through the entire dynamic process. The maximum spreading ratio of a droplet impacting the superhydrophobic flat surface is studied for a large range of Weber numbers. Results show that the rescaled maximum spreading ratios are in good agreement with a universal scaling law. This series of simulations demonstrates that the proposed model accurately captures the complex fluid-fluid and fluid-solid interfacial physical processes for a wide range of Reynolds and Weber numbers at high density ratios.

  12. Logarithmic sensing in Bacillus subtilis aerotaxis.

    PubMed

    Menolascina, Filippo; Rusconi, Roberto; Fernandez, Vicente I; Smriga, Steven; Aminzare, Zahra; Sontag, Eduardo D; Stocker, Roman

    2017-01-01

    Aerotaxis, the directed migration along oxygen gradients, allows many microorganisms to locate favorable oxygen concentrations. Despite oxygen's fundamental role for life, even key aspects of aerotaxis remain poorly understood. In Bacillus subtilis, for example, there is conflicting evidence of whether migration occurs to the maximal oxygen concentration available or to an optimal intermediate one, and how aerotaxis can be maintained over a broad range of conditions. Using precisely controlled oxygen gradients in a microfluidic device, spanning the full spectrum of conditions from quasi-anoxic to oxic (60 nmol/l to 1 mmol/l), we resolved B. subtilis' 'oxygen preference conundrum' by demonstrating consistent migration towards maximum oxygen concentrations ('monotonic aerotaxis'). Surprisingly, the strength of aerotaxis was largely unchanged over three decades in oxygen concentration (131 nmol/l to 196 μmol/l). We discovered that in this range B. subtilis responds to the gradient of the logarithm of the oxygen concentration, a rescaling strategy called 'log-sensing' that affords organisms high sensitivity over a wide range of conditions. In these experiments, high-throughput single-cell imaging yielded the best signal-to-noise ratio of any microbial taxis study to date, enabling the robust identification of the first mathematical model for aerotaxis among a broad class of alternative models. The model passed the stringent test of predicting the transient aerotactic response despite being developed on steady-state data, and quantitatively captures both monotonic aerotaxis and log-sensing. Taken together, these results shed new light on the oxygen-seeking capabilities of B. subtilis and provide a blueprint for the quantitative investigation of the many other forms of microbial taxis.
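
    Log-sensing means the response tracks the gradient of log C, i.e. (dC/dx)/C, so multiplying the whole oxygen profile by a constant leaves the response unchanged; a small numerical illustration with assumed profiles.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200)          # position along the gradient (mm)
for scale in (131e-9, 196e-6):          # O2 levels spanning ~3 decades (mol/l)
    c = scale * np.exp(3.0 * x)         # assumed exponential oxygen profile
    response = np.gradient(np.log(c), x)
    print(f"{scale:.3g} mol/l -> mean log-slope {response.mean():.3f}")
# Both profiles give the same log-slope (3.0): the hallmark of log-sensing.
```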

  13. Entropic multiple-relaxation-time multirange pseudopotential lattice Boltzmann model for two-phase flow

    DOE PAGES

    Qin, Feifei; Mazloomi Moqaddam, Ali; Kang, Qinjun; ...

    2018-03-22

    Here, an entropic multiple-relaxation-time lattice Boltzmann approach is coupled to a multirange Shan-Chen pseudopotential model to study the two-phase flow. Compared with previous multiple-relaxation-time multiphase models, this model is stable and accurate for the simulation of a two-phase flow in a much wider range of viscosity and surface tension at a high liquid-vapor density ratio. A stationary droplet surrounded by equilibrium vapor is first simulated to validate this model using the coexistence curve and Laplace’s law. Then, two series of droplet impact behavior, on a liquid film and a flat surface, are simulated in comparison with theoretical or experimental results. Droplet impact on a liquid film is simulated for different Reynolds numbers at high Weber numbers. With the increase of the Sommerfeld parameter, onset of splashing is observed and multiple secondary droplets occur. The droplet spreading ratio agrees well with the square root of time law and is found to be independent of Reynolds number. Moreover, shapes of simulated droplets impacting hydrophilic and superhydrophobic flat surfaces show good agreement with experimental observations through the entire dynamic process. The maximum spreading ratio of a droplet impacting the superhydrophobic flat surface is studied for a large range of Weber numbers. Results show that the rescaled maximum spreading ratios are in good agreement with a universal scaling law. This series of simulations demonstrates that the proposed model accurately captures the complex fluid-fluid and fluid-solid interfacial physical processes for a wide range of Reynolds and Weber numbers at high density ratios.

  14. Multivariate Analysis of Longitudinal Rates of Change

    PubMed Central

    Bryan, Matthew; Heagerty, Patrick J.

    2016-01-01

    Longitudinal data allow direct comparison of the change in patient outcomes associated with treatment or exposure. Frequently, several longitudinal measures are collected that either reflect a common underlying health status, or characterize processes that are influenced in a similar way by covariates such as exposure or demographic characteristics. Statistical methods that can combine multivariate response variables into common measures of covariate effects have been proposed by Roy and Lin [1]; Proust-Lima, Letenneur and Jacqmin-Gadda [2]; and Gray and Brookmeyer [3] among others. Current methods for characterizing the relationship between covariates and the rate of change in multivariate outcomes are limited to select models. For example, Gray and Brookmeyer [3] introduce an “accelerated time” method which assumes that covariates rescale time in longitudinal models for disease progression. In this manuscript we detail an alternative multivariate model formulation that directly structures longitudinal rates of change, and that permits a common covariate effect across multiple outcomes. We detail maximum likelihood estimation for a multivariate longitudinal mixed model. We show via asymptotic calculations the potential gain in power that may be achieved with a common analysis of multiple outcomes. We apply the proposed methods to the analysis of a trivariate outcome for infant growth and compare rates of change for HIV infected and uninfected infants. PMID:27417129

  15. Combined quantitative and qualitative two-channel optical biopsy technique for discrimination of tumor borders

    NASA Astrophysics Data System (ADS)

    Bocher, Thomas; Beuthan, Juergen; Scheller, M.; Hopf, Juergen U. G.; Linnarz, Marietta; Naber, Rolf-Dieter; Minet, Olaf; Becker, Wolfgang; Mueller, Gerhard J.

    1995-12-01

    Conventional laser-induced fluorescence spectroscopy (LIFS) of endogenous chromophores like NADH (Nicotinamide Adenine Dinucleotide, reduced form) and PP IX (Protoporphyrin IX) provides information about the relative amounts of these metabolites in the observed cells. But for diagnostic applications the concentrations of these chromophores have to be determined quantitatively to establish tissue-independent differentiation criteria. It is well-known that the individually and locally varying optical tissue parameters are major obstacles to the determination of the true chromophore concentrations by simple fluorescence spectroscopy. To overcome these problems a fiber-based, 2-channel technique including a rescaled NADH channel (delivering quantitative values) and a relative PP IX channel was developed. Using the accumulated information of both channels can provide good tissue state separation. Ex-vivo studies were performed with resected and LN2-frozen samples of squamous cells in the histologically confirmed states normal, tumor border, inflammation, and hyperplasia. Each state was represented in this series by at least 7 samples. At the identical tissue spot both the rescaled NADH fluorescence and the relative PP IX fluorescence were determined. For the former a nitrogen laser (337 nm, 500 ps, 200 μJ, 10 Hz) and for the latter a diode laser (633 nm, 15 mW, cw) were used as excitation sources. In this ex-vivo study a good separation between the different tissue states was achieved. With a device constructed for clinical usage, one quantitative in-vivo NADH measurement was recently performed, showing similar separation capabilities.

  16. Energy Bounds for a Compressed Elastic Film on a Substrate

    NASA Astrophysics Data System (ADS)

    Bourne, David P.; Conti, Sergio; Müller, Stefan

    2017-04-01

    We study pattern formation in a compressed elastic film which delaminates from a substrate. Our key tool is the determination of rigorous upper and lower bounds on the minimum value of a suitable energy functional. The energy consists of two parts, describing the two main physical effects. The first part represents the elastic energy of the film, which is approximated using the von Kármán plate theory. The second part represents the fracture or delamination energy, which is approximated using the Griffith model of fracture. A simpler model containing the first term alone was previously studied with similar methods by several authors, assuming that the delaminated region is fixed. We include the fracture term, transforming the elastic minimisation into a free boundary problem, and opening the way for patterns which result from the interplay of elasticity and delamination. After rescaling, the energy depends on only two parameters: the rescaled film thickness, σ, and a measure of the bonding strength between the film and substrate, γ. We prove upper bounds on the minimum energy of the form σ^a γ^b and find that there are four different parameter regimes corresponding to different values of a and b and to different folding patterns of the film. In some cases, the upper bounds are attained by self-similar folding patterns as observed in experiments. Moreover, for two of the four parameter regimes we prove matching, optimal lower bounds.

  17. Liquefied Bleed for Stability and Efficiency of High Speed Inlets

    NASA Technical Reports Server (NTRS)

    Saunders, J. David; Davis, David; Barsi, Stephen J.; Deans, Matthew C.; Weir, Lois J.; Sanders, Bobby W.

    2014-01-01

    A mission analysis code was developed to perform a trade study on the effectiveness of liquefying bleed for the inlet of the first stage of a TSTO vehicle. By liquefying bleed, the vehicle takeoff gross weight (TOGW) could be reduced by 7 to 23%. Numerous simplifying assumptions were made and lessons were learned. Increased accuracy in future analyses can be achieved by: including a higher fidelity model to capture the effect of rescaling (variable vehicle TOGW); refining the specific thrust and specific impulse models (T/ṁa and Isp) to preserve the fuel-to-air ratio; implementing LH2 for T/ṁa and Isp; correlating the baseline design to other mission analyses and correcting vehicle design elements; implementing angle-of-attack effects on inlet characteristics; refining aerodynamic performance (to improve the L/D ratio at higher Mach numbers); examining the benefit of partial cooling or densification of the bleed air stream; incorporating higher fidelity weight estimates for the liquefied bleed system (heat exchanger and liquid storage versus bleed duct weights) as these become more fully developed; adding trim drag or 6-degree-of-freedom trajectory analysis for higher fidelity; and investigating vehicle optimization for each of the bleed configurations.

  18. Behavioral and Emotional Dynamics of Two People Struggling to Reach Consensus about a Topic on Which They Disagree

    PubMed Central

    Kurt, Levent; Kugler, Katharina G.; Coleman, Peter T.; Liebovitch, Larry S.

    2014-01-01

    We studied the behavioral and emotional dynamics displayed by two people trying to resolve a conflict. 59 groups of two people were asked to talk for 20 minutes to try to reach a consensus about a topic on which they disagreed. The topics were abortion, affirmative action, death penalty, and euthanasia. Behavior data were determined from audio recordings where each second of the conversation was assessed as proself, neutral, or prosocial. We determined the probability density function of the durations of time spent in each behavioral state. These durations were well fit by a stretched exponential distribution with an exponent of approximately 0.3. This indicates that the switching between behavioral states is not a random Markov process, but one where the probability to switch behavioral states decreases with the time already spent in that behavioral state. The degree of this “memory” was stronger in those groups who did not reach a consensus, and where the conflict grew more destructive, than in those that did. Emotion data were measured by having each person listen to the audio recording and move a computer mouse to recall their negative or positive emotional valence at each moment in the conversation. We used the Hurst rescaled range analysis and the power spectrum to determine the correlations in the fluctuations of the emotional valence. The emotional valence was well described by a random walk whose increments were uncorrelated. Thus, the behavior data demonstrated a “memory” of the duration already spent in a behavioral state, while the emotion data fluctuated as a random walk whose steps did not have a “memory” of previous steps. This work demonstrates that statistical analysis, more commonly used to analyze physical phenomena, can also shed interesting light on the dynamics of processes in social psychology and conflict management. PMID:24427290
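
    As an illustration of the rescaled range technique applied here to the emotion series, a minimal Python sketch (function names and the synthetic test series are ours, not the study's):

        import numpy as np

        def rescaled_range(x):
            """R/S statistic: range of the cumulative mean-adjusted sum,
            divided by the standard deviation of the series."""
            x = np.asarray(x, dtype=float)
            z = np.cumsum(x - x.mean())
            return (z.max() - z.min()) / x.std()

        def hurst_exponent(x, min_window=8):
            """Estimate H as the slope of log(R/S) versus log(n) over
            non-overlapping windows of size n."""
            x = np.asarray(x, dtype=float)
            sizes, rs = [], []
            n = min_window
            while n <= len(x) // 2:
                windows = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
                vals = [rescaled_range(w) for w in windows if w.std() > 0]
                if vals:
                    sizes.append(n)
                    rs.append(np.mean(vals))
                n *= 2
            slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
            return slope

        # white noise gives H near 0.5, the signature of increments
        # without memory, as reported for the valence fluctuations
        print(hurst_exponent(np.random.randn(4096)))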

  19. Critical mass of public goods and its coevolution with cooperation

    NASA Astrophysics Data System (ADS)

    Shi, Dong-Mei; Wang, Bing-Hong

    2017-07-01

    In this study, the enhancing parameter in the public goods game, which represents the value of the public goods to the public, was rescaled to a Fermi-Dirac distribution function of the critical mass. Public goods were divided into two categories, consumable and reusable public goods, and their coevolution with cooperative behavior was studied. We observed that for both types of public goods, cooperation was promoted as the enhancing parameter increased when the value of the critical mass was not very large. An optimal value of the critical mass which led to the best cooperation was identified. We also found that cooperation emerged earlier for reusable public goods, and defection became extinct earlier for consumable public goods. Moreover, we observed that a moderate depreciation rate for public goods resulted in optimal cooperation, and this range widened as the enhancing parameter increased. The influence of noise on cooperation was studied, and it was shown that the cooperation density varied non-monotonically as the noise amplitude increased for reusable public goods, whereas it decreased monotonically for consumable public goods. Furthermore, the existence of the optimal critical mass was also identified in three other regular networks. Finally, simulation results were utilized to analyze the provision of public goods in detail.
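
    A hedged reading of the rescaling step (our notation; the paper's exact parameterisation may differ): the enhancement factor is a Fermi-Dirac function of the number of contributors relative to the critical mass M, so it is near zero below the threshold and saturates above it.

        import numpy as np

        def enhancement(n_contributors, r_max, M, kappa):
            # Fermi-Dirac-type threshold: ~0 well below the critical
            # mass M, ~r_max well above it; kappa sets the sharpness
            return r_max / (1.0 + np.exp((M - n_contributors) / kappa))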

  20. Accounting for rate variation among lineages in comparative demographic analyses.

    PubMed

    Hope, Andrew G; Ho, Simon Y W; Malaney, Jason L; Cook, Joseph A; Talbot, Sandra L

    2014-09-01

    Genetic analyses of contemporary populations can be used to estimate the demographic histories of species within an ecological community. Comparison of these demographic histories can shed light on community responses to past climatic events. However, species experience different rates of molecular evolution, and this presents a major obstacle to comparative demographic analyses. We address this problem by using a Bayesian relaxed-clock method to estimate the relative evolutionary rates of 22 small mammal taxa distributed across northwestern North America. We found that estimates of the relative molecular substitution rate for each taxon were consistent across the range of sampling schemes that we compared. Using three different reference rates, we rescaled the relative rates so that they could be used to estimate absolute evolutionary timescales. Accounting for rate variation among taxa led to temporal shifts in our skyline-plot estimates of demographic history, highlighting both uniform and idiosyncratic evolutionary responses to directional climate trends for distinct ecological subsets of the small mammal community. Our approach can be used in evolutionary analyses of populations from multiple species, including comparative demographic studies.
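
    The rescaling step itself is simple arithmetic; a sketch under our own naming, with invented numbers (the reference rate and its units are assumptions for illustration):

        # relative rates from the relaxed-clock analysis (mean ~ 1)
        relative_rates = {"taxon_A": 0.82, "taxon_B": 1.31}  # hypothetical

        # one reference absolute rate, e.g. substitutions/site/Myr
        reference_rate = 0.045  # hypothetical

        absolute_rates = {t: r * reference_rate
                          for t, r in relative_rates.items()}
        # node age = genetic distance / absolute rate, so an error in the
        # rate shifts the inferred demographic timescale proportionally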

  1. On the nature of persistence in dendrochronologic records with implications for hydrology

    USGS Publications Warehouse

    Landwehr, J.M.; Matalas, N.C.

    1986-01-01

    Hydrologic processes are generally held to be persistent and not secularly independent. Impetus for this view was given by Hurst in his work dealing with properties of the rescaled range of many types of long geophysical records, in particular dendrochronologic records, in addition to hydrologic records. Mandelbrot introduced an infinite memory stationary process, the fractional Gaussian noise process (fGn), as an explanation for Hurst's observations. This is in contrast to other explanations which have been predicated on the implicit non-stationarity of the process underlying the construction of the records. In this work, we introduce a stationary finite memory process which arises naturally from a physical concept and show that it can accommodate the persistence structures observed for dendrochronologic records more successfully than an fGn or any other of a family of related processes examined herein. Further, some question arises as to the empirical plausibility of an fGn process. Dendrochronologic records are used because they are widely held to be surrogates for records of average hydrologic phenomena, and the length of these records allows one to explore questions of stochastic process structure which cannot be explored with great validity in the case of generally much shorter hydrologic records.

  2. Relationship between thermodynamic parameter and thermodynamic scaling parameter for orientational relaxation time for flip-flop motion of nematic liquid crystals.

    PubMed

    Satoh, Katsuhiko

    2013-03-07

    The thermodynamic parameter Γ and the thermodynamic scaling parameter γ for the low-frequency relaxation time, which characterize flip-flop motion in a nematic phase, were verified by molecular dynamics simulation with a simple potential based on the Maier-Saupe theory. The parameter Γ, the slope of the logarithm of temperature versus the logarithm of volume, was evaluated under various conditions over a wide range of temperatures, pressures, and volumes. To simulate thermodynamic scaling, whereby experimental data at isobaric, isothermal, and isochoric conditions can be rescaled onto a master curve with these parameters for some liquid crystal (LC) compounds, the relaxation time was evaluated from the first-rank orientational correlation function in the simulations, and thermodynamic scaling was verified with the simple potential representing small clusters. A possible equivalence relationship between Γ and γ determined from the relaxation time in the simulation was assessed against available data from experiments and simulations. In addition, an argument was proposed for the discrepancy between Γ and γ for some LCs in experiments: the discrepancy arises from disagreement in the value of the order parameter P2 rather than from the constancy of the relaxation time τ1* with pressure.

  3. Averages of b-hadron, c-hadron, and τ-lepton properties as of summer 2016

    DOE PAGES

    Amhis, Y.; Banerjee, Sw.; Ben-Haim, E.; ...

    2017-12-21

    Here, this article reports world averages of measurements of b-hadron, c-hadron, and τ-lepton properties obtained by the Heavy Flavor Averaging Group using results available through summer 2016. For the averaging, common input parameters used in the various analyses are adjusted (rescaled) to common values, and known correlations are taken into account. The averages include branching fractions, lifetimes, neutral meson mixing parameters, CP violation parameters, parameters of semileptonic decays, and Cabibbo–Kobayashi–Maskawa matrix elements.

  4. Resonant pairing between fermions with unequal masses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Shin-Tza; Pao, C.-H.; Yip, S.-K.

    We study via mean-field theory the pairing between fermions of different masses, especially at the unitary limit. At equal populations, the thermodynamic properties are identical to the equal-mass case provided an appropriate rescaling is made. At unequal populations, for sufficiently light majority species, the system does not phase separate. For sufficiently heavy majority species, the phase-separated normal phase has a density larger than that of the superfluid. For atoms in harmonic traps, the density profiles for unequal mass fermions can be drastically different from their equal-mass counterparts.

  5. Exact Asymptotics of the Freezing Transition of a Logarithmically Correlated Random Energy Model

    NASA Astrophysics Data System (ADS)

    Webb, Christian

    2011-12-01

    We consider a logarithmically correlated random energy model, namely a model for directed polymers on a Cayley tree, which was introduced by Derrida and Spohn. We prove asymptotic properties of a generating function of the partition function of the model by studying a discrete-time analogue of the KPP equation—thus translating Bramson's work on the KPP equation to the discrete-time case. We also discuss connections to extreme value statistics of a branching random walk and a rescaled multiplicative cascade measure beyond the critical point.

  6. Averages of b-hadron, c-hadron, and τ-lepton properties as of summer 2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amhis, Y.; Banerjee, Sw.; Ben-Haim, E.

    Here, this article reports world averages of measurements of b-hadron, c-hadron, and τ-lepton properties obtained by the Heavy Flavor Averaging Group using results available through summer 2016. For the averaging, common input parameters used in the various analyses are adjusted (rescaled) to common values, and known correlations are taken into account. The averages include branching fractions, lifetimes, neutral meson mixing parameters, CP violation parameters, parameters of semileptonic decays, and Cabibbo–Kobayashi–Maskawa matrix elements.

  7. False vacuum decay in Jordan-Brans-Dicke cosmologies

    NASA Technical Reports Server (NTRS)

    Holman, Richard; Kolb, Edward W.; Vadas, Sharon L.; Wang, Yun; Weinberg, Erick J.

    1989-01-01

    The bubble nucleation rate in a first-order phase transition taking place in a background Jordan-Brans-Dicke cosmology is examined. The leading order terms in the nucleation rate when the Jordan-Brans-Dicke field is large (i.e., at late times) are computed by means of a Weyl rescaling of the fields in the theory. It is found that, despite the fact that the Jordan-Brans-Dicke field (and hence the effective gravitational constant) is time dependent in the false vacuum at late times, the nucleation rate is time independent.

  8. Investigation of Local Hydrogen Uptake in Rescaled Model Occluded Sites Using Crevice Scaling Laws

    DTIC Science & Technology

    2005-04-01

  9. Discrete disorder models for many-body localization

    NASA Astrophysics Data System (ADS)

    Janarek, Jakub; Delande, Dominique; Zakrzewski, Jakub

    2018-04-01

    Using the exact diagonalization technique, we investigate the many-body localization phenomenon in the 1D Heisenberg chain, comparing several disorder models. In particular, we consider a family of discrete distributions of disorder strengths and compare the results with the standard uniform distribution. Both statistical properties of energy levels and the long-time nonergodic behavior are discussed. The results for different discrete distributions are essentially identical to those obtained for the continuous distribution, provided the disorder strength is rescaled by the standard deviation of the random distribution. Only for the binary distribution are significant deviations observed.

  10. Halo-independent direct detection analyses without mass assumptions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Adam J.; Fox, Patrick J.; Kahn, Yonatan

    2015-10-01

    Results from direct detection experiments are typically interpreted by employing an assumption about the dark matter velocity distribution, with results presented in the m_χ−σ_n plane. Recently, methods which are independent of the DM halo velocity distribution have been developed which present results in the v_min−g-tilde plane, but these in turn require an assumption on the dark matter mass. Here we present an extension of these halo-independent methods for dark matter direct detection which does not require a fiducial choice of the dark matter mass. With a change of variables from v_min to nuclear recoil momentum (p_R), the full halo-independent content of an experimental result for any dark matter mass can be condensed into a single plot as a function of a new halo integral variable, which we call h-tilde(p_R). The entire family of conventional halo-independent g-tilde(v_min) plots for all DM masses is directly found from the single h-tilde(p_R) plot through a simple rescaling of axes. By considering results in h-tilde(p_R) space, one can determine whether two experiments are inconsistent for all masses and all physically possible halos, or for what range of dark matter masses the results are inconsistent for all halos, without the necessity of multiple g-tilde(v_min) plots for different DM masses. We conduct a sample analysis comparing the CDMS II Si events to the null results from LUX, XENON10, and SuperCDMS using our method and discuss how the results can be strengthened by imposing the physically reasonable requirement of a finite halo escape velocity.
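
    The change of variables rests on standard elastic-scattering kinematics: v_min = p_R / (2 μ_χN), where μ_χN is the DM-nucleus reduced mass, so one h-tilde(p_R) curve maps to a g-tilde(v_min) curve for any trial mass by rescaling the horizontal axis. A sketch (names and grids ours):

        import numpy as np

        def v_min(p_R, m_chi, m_N):
            """Minimum DM speed for an elastic recoil of momentum p_R
            (consistent units assumed throughout)."""
            mu = m_chi * m_N / (m_chi + m_N)  # reduced mass
            return p_R / (2.0 * mu)

        p_R_grid = np.linspace(1.0, 100.0, 50)  # hypothetical grid
        # one curve in p_R space yields the v_min-space curve for
        # every candidate mass by a pure axis rescaling:
        curves = {m_chi: v_min(p_R_grid, m_chi, m_N=70.0)
                  for m_chi in (10.0, 50.0)}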

  11. Composite Measures of Health Care Provider Performance: A Description of Approaches

    PubMed Central

    Shwartz, Michael; Restuccia, Joseph D; Rosen, Amy K

    2015-01-01

    Context: Since the Institute of Medicine’s 2001 report Crossing the Quality Chasm, there has been a rapid proliferation of quality measures used in quality-monitoring, provider-profiling, and pay-for-performance (P4P) programs. Although individual performance measures are useful for identifying specific processes and outcomes for improvement and tracking progress, they do not easily provide an accessible overview of performance. Composite measures aggregate individual performance measures into a summary score. By reducing the amount of data that must be processed, they facilitate (1) benchmarking of an organization’s performance, encouraging quality improvement initiatives to match performance against high-performing organizations, and (2) profiling and P4P programs based on an organization’s overall performance.

    Methods: We describe different approaches to creating composite measures, discuss their advantages and disadvantages, and provide examples of their use.

    Findings: The major issues in creating composite measures are (1) whether to aggregate measures at the patient level through all-or-none approaches or at the facility level, using one of several possible weighting schemes; (2) when combining measures on different scales, how to rescale measures (using z scores, range percentages, ranks, or 5-star categorizations); and (3) whether to use shrinkage estimators, which increase precision by smoothing rates from smaller facilities but also decrease transparency.

    Conclusions: Because provider rankings and rewards under P4P programs may be sensitive to both context and the data, careful analysis is warranted before deciding to implement a particular method. A better understanding of both when and where to use composite measures and the incentives created by composite measures are likely to be important areas of research as the use of composite measures grows. PMID:26626986
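
    To make the rescaling choices above concrete, a small sketch comparing the three simplest families mentioned (z scores, range percentages, ranks); the data and weights are invented:

        import numpy as np

        # rows = facilities, columns = performance measures (hypothetical)
        X = np.array([[0.91, 0.62, 0.80],
                      [0.78, 0.70, 0.91],
                      [0.85, 0.55, 0.75]])

        z = (X - X.mean(axis=0)) / X.std(axis=0)       # z scores
        rng = (X - X.min(axis=0)) / np.ptp(X, axis=0)  # range percentage
        ranks = X.argsort(axis=0).argsort(axis=0) + 1  # ranks (ties ignored)

        weights = np.array([0.5, 0.25, 0.25])  # invented weighting scheme
        composite = z @ weights                # one summary score per facility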

  12. Halo-independent direct detection analyses without mass assumptions

    DOE PAGES

    Anderson, Adam J.; Fox, Patrick J.; Kahn, Yonatan; ...

    2015-10-06

    Results from direct detection experiments are typically interpreted by employing an assumption about the dark matter velocity distribution, with results presented in the m_χ−σ_n plane. Recently, methods which are independent of the DM halo velocity distribution have been developed which present results in the v_min−g-tilde plane, but these in turn require an assumption on the dark matter mass. Here we present an extension of these halo-independent methods for dark matter direct detection which does not require a fiducial choice of the dark matter mass. With a change of variables from v_min to nuclear recoil momentum (p_R), the full halo-independent content of an experimental result for any dark matter mass can be condensed into a single plot as a function of a new halo integral variable, which we call h-tilde(p_R). The entire family of conventional halo-independent g-tilde(v_min) plots for all DM masses is directly found from the single h-tilde(p_R) plot through a simple rescaling of axes. By considering results in h-tilde(p_R) space, one can determine whether two experiments are inconsistent for all masses and all physically possible halos, or for what range of dark matter masses the results are inconsistent for all halos, without the necessity of multiple g-tilde(v_min) plots for different DM masses. We conduct a sample analysis comparing the CDMS II Si events to the null results from LUX, XENON10, and SuperCDMS using our method and discuss how the results can be strengthened by imposing the physically reasonable requirement of a finite halo escape velocity.

  13. Critical behaviour and field dependence of magnetic entropy change in K-doped manganites Pr0.8Na0.2-xKxMnO3 (x = 0.10 and 0.15)

    NASA Astrophysics Data System (ADS)

    Ben Khlifa, H.; M'nassri, R.; Tarhouni, S.; Regaieg, Y.; Cheikhrouhou-Koubaa, W.; Chniba-Boudjada, N.; Cheikhrouhou, A.

    2018-01-01

    The orthorhombic Pr0.8Na0.2-xKxMnO3 (x = 0.10 and 0.15) manganites are prepared using the solid state reaction at high temperatures. The critical exponents (β, γ, δ) are investigated through various techniques such as the modified Arrott plot, the Kouvel-Fisher method, and critical isotherm analysis, based on magnetic measurements recorded around the Curie temperature. The critical exponents derived from the magnetization data using the Kouvel-Fisher method are found to be β = 0.32(4) and γ = 1.29(2) at TC ≈ 123 K for x = 0.10, and β = 0.31(1) and γ = 1.25(2) at TC ≈ 133 K for x = 0.15. The critical exponent values obtained for both samples are comparable to the values predicted by the 3D-Ising model, and have also been verified by the scaling equation of state. Such results demonstrate the existence of ferromagnetic short-range order in our materials. The magnetic entropy changes of polycrystalline samples with a second-order phase transition are investigated. A large magnetic entropy change, deduced from isothermal magnetization curves, is observed in our samples, with a peak centered on their respective Curie temperatures (TC). The field dependence of the magnetic entropy change is analyzed and shows a power-law dependence ΔSmax ≈ a(μ0H)^n at the transition temperature. The values of n obey the Curie-Weiss law above the transition temperature. It is shown that, for the investigated materials, the magnetic entropy change follows a master curve behaviour. The rescaled magnetic entropy change curves for different applied fields collapse onto a single curve for both samples.

  14. High excitation rovibrational molecular analysis in warm environments

    NASA Astrophysics Data System (ADS)

    Zhang, Ziwei; Stancil, Phillip C.; Cumbee, Renata; Ferland, Gary J.

    2017-06-01

    Inspired by advances in infrared observation (e.g., Spitzer, Herschel and ALMA), we investigate rovibrational emission of CO and SiO in warm astrophysical environments. With recent innovations in collisional rate coefficients and rescaling methods, we are able to construct more comprehensive collisional data with high rovibrational states (vibration up to v=5 and rotation up to J=40) and multiple colliders (H2, H and He). These comprehensive data sets are used in spectral simulations with the radiative transfer codes RADEX and Cloudy. We obtained line ratio diagnostic plots and line spectra for both near- and far-infrared emission lines over a broad range of density and temperature for the case of a uniform medium. Considering the importance of both molecules in probing conditions and activities of UV-irradiated interstellar gas, we model rovibrational emission in photodissociation regions (PDRs) and AGB star envelopes (such as VY Canis Majoris, IK Tau and IRC +10216) with Cloudy. Rotational diagrams, energy distribution diagrams, and spectra are produced to examine relative state abundances, line emission intensity, and other properties. With these diverse models, we expect to gain a better understanding of PDRs and to expand our scope on the chemical architecture and evolution of AGB stars and other UV-irradiated regions. The soon-to-be-launched James Webb Space Telescope (JWST) will provide high resolution observations at near- to mid-infrared wavelengths, opening a new window for studying molecular vibrational emission and calling for more detailed chemical modeling and comprehensive laboratory astrophysics data on more molecules. This work was partially supported by NASA grants NNX12AF42G and NNX15AI61G. We thank Benhui Yang, Kyle Walker, Robert Forrey, and N. Balakrishnan for collaborating on the collisional data adopted in the current work.

  15. Halo-independent direct detection analyses without mass assumptions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Adam J.; Fox, Patrick J.; Kahn, Yonatan

    Results from direct detection experiments are typically interpreted by employing an assumption about the dark matter velocity distribution, with results presented in the m_χ−σ_n plane. Recently, methods which are independent of the DM halo velocity distribution have been developed which present results in the v_min−g-tilde plane, but these in turn require an assumption on the dark matter mass. Here we present an extension of these halo-independent methods for dark matter direct detection which does not require a fiducial choice of the dark matter mass. With a change of variables from v_min to nuclear recoil momentum (p_R), the full halo-independent content of an experimental result for any dark matter mass can be condensed into a single plot as a function of a new halo integral variable, which we call h-tilde(p_R). The entire family of conventional halo-independent g-tilde(v_min) plots for all DM masses is directly found from the single h-tilde(p_R) plot through a simple rescaling of axes. By considering results in h-tilde(p_R) space, one can determine whether two experiments are inconsistent for all masses and all physically possible halos, or for what range of dark matter masses the results are inconsistent for all halos, without the necessity of multiple g-tilde(v_min) plots for different DM masses. We conduct a sample analysis comparing the CDMS II Si events to the null results from LUX, XENON10, and SuperCDMS using our method and discuss how the results can be strengthened by imposing the physically reasonable requirement of a finite halo escape velocity.

  16. Cross-borehole slug test analysis in a fractured limestone aquifer

    NASA Astrophysics Data System (ADS)

    Audouin, Olivier; Bodin, Jacques

    2008-01-01

    Summary: This work proposes new semi-analytical solutions for the interpretation of cross-borehole slug tests in fractured media. Our model is an extension of previous work by Barker [Barker, J.A., 1988. A generalized radial flow model for hydraulic tests in fractured rock. Water Resources Research 24 (10), 1796-1804] and Butler and Zhan [Butler Jr., J.J., Zhan, X., 2004. Hydraulic tests in highly permeable aquifers. Water Resources Research 40, W12402. doi:10.1029/2003/WR002998]. It includes inertial effects at both the test and observation wells and a fractional flow dimension in the aquifer. The model has five fitting parameters: flow dimension n, hydraulic conductivity K, specific storage coefficient Ss, and the effective lengths of the test well Le and of the observation well Leo. The results of a sensitivity analysis show that the most sensitive parameter is the flow dimension n. The model sensitivity to the other parameters may be ranked as follows: K > Le ≈ Leo > Ss. The sensitivity to aquifer storage remains one or two orders of magnitude lower than that to the other parameters. The model has been coupled to an automatic inversion algorithm to facilitate the interpretation of real field data. This inversion algorithm is based on a Gauss-Newton optimization procedure conditioned by re-scaled sensitivities. It has been used to successfully interpret cross-borehole slug test data from the Hydrogeological Experimental Site (HES) of Poitiers, France, consisting of fractured and karstic limestones. HES data provide flow dimension values ranging between 1.6 and 2.5, and hydraulic conductivity values ranging between 4.4 × 10^-5 and 7.7 × 10^-4 m s^-1. These values are consistent with previous interpretations of single-well slug tests. The results of the sensitivity analysis are confirmed by calculations of relative errors on the parameter estimates, which show that the accuracy on n and K is below 20% and that on Ss is about one order of magnitude. The K-values interpreted from cross-borehole slug tests are one order of magnitude higher than those previously interpreted from interference pumping tests. These findings suggest that cross-borehole slug tests focus on preferential flowpath networks made by fractures and karstic channels, i.e. the head perturbation induced by a slug test propagates only through those flowpaths with the lowest hydraulic resistance. As a result, cross-borehole slug tests are expected to identify the hydrodynamic properties of karstic-channel and fracture flowpaths, and may be considered complementary to pumping tests, which more likely provide bulk properties of the whole fracture/karstic-channel/matrix system.

  17. Non-Newtonian effects of blood flow on hemodynamics in distal vascular graft anastomoses.

    PubMed

    Chen, Jie; Lu, Xi-Yun; Wang, Wen

    2006-01-01

    Non-Newtonian fluid flow in a stenosed coronary bypass is investigated numerically using the Carreau-Yasuda model for the shear thinning behavior of the blood. End-to-side coronary bypass anastomosis is considered in a simplified model geometry where the host coronary artery has a 75% severity stenosis. Different locations of the bypass graft to the stenosis and different flow rates in the graft and in the host artery are studied. Particular attention is given to the non-Newtonian effect of the blood on the primary and secondary flow patterns in the host coronary artery and the wall shear stress (WSS) distribution there. Interaction between the jet flow from the stenosed artery and the flow from the graft is simulated by solving the three-dimensional Navier-Stokes equation coupled with the non-Newtonian constitutive model. Results for the non-Newtonian flow, the Newtonian flow and the rescaled Newtonian flow are presented. Significant differences in axial velocity profiles, secondary flow streamlines and WSS between the non-Newtonian and Newtonian fluid flows are revealed. However, reasonable agreement between the non-Newtonian and the rescaled Newtonian flows is found. Results from this study support the view that the residual flow in a partially occluded coronary artery interacts with flow in the bypass graft and may have significant hemodynamic effects in the host vessel downstream of the graft. Non-Newtonian property of the blood alters the flow pattern and WSS distribution and is an important factor to be considered in simulating hemodynamic effects of blood flow in arterial bypass grafts.

  18. Comparing solutions to the expectancy-value muddle in the theory of planned behaviour.

    PubMed

    O' Sullivan, B; McGee, H; Keegan, O

    2008-11-01

    The authors of the Theories of Reasoned Action (TRA) and Planned Behaviour (TPB) recommended a method for statistically analysing the relationship between the indirect belief-based measures and the direct measures of attitude, subjective norm, and perceived behavioural control (PBC). However, there is a growing awareness that this yields statistically uninterpretable results. This study's objective was to compare two solutions to what has been called the 'expectancy-value muddle'. These solutions were (i) optimal scoring of modal beliefs and (ii) individual beliefs without multiplicative composites. Cross-sectional data were collected by telephone interview. Participants were 110 first-degree relatives (FDRs) of patients diagnosed with colorectal cancer (CRC), who were offered CRC screening in the study hospital (83% response rate). Participants were asked to rate the TPB constructs in relation to attending for CRC screening. There was no significant difference in the correlation between behavioural beliefs and attitude for rescaled modal and individual beliefs. This was also the case for control beliefs and PBC. By contrast, there was a large correlation between rescaled modal normative beliefs and subjective norm, whereas individual normative beliefs did not correlate with subjective norm. Using individual beliefs without multiplicative composites allows for a fairly unproblematic interpretation of the relationship between the indirect and direct TPB constructs (French & Hankins, 2003). Therefore, it is recommended that future studies consider using individual measures of behavioural and control beliefs without multiplicative composites and examine a different way of measuring individual normative beliefs without multiplicative composites to that used in this study.

  19. Methodological issues in volumetric magnetic resonance imaging of the brain in the Edinburgh High Risk Project.

    PubMed

    Whalley, H C; Kestelman, J N; Rimmington, J E; Kelso, A; Abukmeil, S S; Best, J J; Johnstone, E C; Lawrie, S M

    1999-07-30

    The Edinburgh High Risk Project is a longitudinal study of brain structure (and function) in subjects at high risk of developing schizophrenia in the next 5-10 years for genetic reasons. In this article we describe the methods of volumetric analysis of structural magnetic resonance images used in the study. We also consider potential sources of error in these methods: the validity of our image analysis techniques; inter- and intra-rater reliability; possible positional variation; and thresholding criteria used in separating brain from cerebro-spinal fluid (CSF). Investigation with a phantom test object (of similar imaging characteristics to the brain) provided evidence for the validity of our image acquisition and analysis techniques. Both inter- and intra-rater reliability were found to be good in whole brain measures but less so for smaller regions. There were no statistically significant differences in positioning across the three study groups (patients with schizophrenia, high risk subjects and normal volunteers). A new technique for thresholding MRI scans longitudinally is described (the 'rescale' method) and compared with our established method (thresholding by eye). Few differences between the two techniques were seen at 3- and 6-month follow-up. These findings demonstrate the validity and reliability of the structural MRI analysis techniques used in the Edinburgh High Risk Project, and highlight methodological issues of general concern in cross-sectional and longitudinal studies of brain structure in healthy control subjects and neuropsychiatric populations.

  20. Can nuclear physics explain the anomaly observed in the internal pair production in the Beryllium-8 nucleus?

    NASA Astrophysics Data System (ADS)

    Zhang, Xilin; Miller, Gerald A.

    2017-10-01

    Recently, the experimentalists in Krasznahorkay (2016) [1] announced the observation of an unexpected enhancement of the e+e- pair production signal in one of the 8Be nuclear transitions. Subsequent studies have focused on possible explanations based on introducing new types of particles. In this work, we improve the nuclear physics modeling of the reaction by studying the pair emission anisotropy and the interferences between different multipoles in an effective-field-theory-inspired framework, and examine their possible relevance to the anomaly. The connection between the previously measured on-shell photon production and the pair production in the same nuclear transitions is established. These improvements, absent in the original experimental analysis, should be included when extracting new particle properties from experiments of this type. However, the improvements cannot explain the anomaly. We then explore the nuclear transition form factor as a possible origin of the anomaly, and find the required form factor to be unrealistic for the 8Be nucleus. The reduction of the anomaly's significance by simply rescaling our predicted event count is also investigated.

  1. From heavy-tailed to exponential distribution of interevent time in cellphone top-up behavior

    NASA Astrophysics Data System (ADS)

    Wang, Peng; Ma, Qiang

    2017-05-01

    Cellphone top-up is a kind of activity driven, to a great extent, by individual consumption rather than personal interest, and common sense suggests this behavior should be stable. However, our research finds heavy tails in both the interevent time distribution and the purchase frequency distribution at the global level. Moreover, we find that the memories of both the interevent time and unit price series are negative, which differs from previously studied bursty activities. We divide individuals into five groups according to purchase frequency and average unit price respectively. Group analysis then shows significant heterogeneity in this behavior. On the one hand, only the individuals with high purchase frequency show the heavy-tailed nature in the interevent time distribution; conversely, the negative memory is caused only by low purchase-frequency individuals without burstiness. On the other hand, individuals with different preferential prices also have different power-law exponents at the group level, and there is no data collapse after rescaling between these distributions. Our findings provide evidence for significant heterogeneity of human activity in many aspects.
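
    The "memory" referred to here is commonly quantified (following Goh and Barabási) as the correlation between consecutive interevent times; negative memory means short gaps tend to be followed by long ones. A minimal sketch:

        import numpy as np

        def memory_coefficient(tau):
            """Correlation of consecutive interevent times tau_i, tau_{i+1}."""
            tau = np.asarray(tau, dtype=float)
            a, b = tau[:-1], tau[1:]
            return np.mean((a - a.mean()) * (b - b.mean())) / (a.std() * b.std())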

  2. Effective theories of universal theories

    DOE PAGES

    Wells, James D.; Zhang, Zhengkang

    2016-01-20

    It is well-known but sometimes overlooked that constraints on the oblique parameters (most notably the S and T parameters) are, generally speaking, only applicable to a special class of new physics scenarios known as universal theories. The oblique parameters should not be associated with Wilson coefficients in a particular operator basis in the effective field theory (EFT) framework, unless restrictions have been imposed on the EFT so that it describes universal theories. Here, we work out these restrictions and present a detailed EFT analysis of universal theories. We find that at the dimension-6 level, universal theories are completely characterized by 16 parameters. They are conveniently chosen to be: 5 oblique parameters that agree with the commonly adopted ones, 4 anomalous triple-gauge couplings, 3 rescaling factors for the h^3, hff, and hVV vertices, 3 parameters for hVV vertices absent in the Standard Model, and 1 four-fermion coupling of order y_f^2. Furthermore, all these parameters are defined in an unambiguous and basis-independent way, allowing for consistent constraints on the universal theories parameter space from precision electroweak and Higgs data.

  3. Effective theories of universal theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wells, James D.; Zhang, Zhengkang

    It is well-known but sometimes overlooked that constraints on the oblique parameters (most notably the S and T parameters) are, generally speaking, only applicable to a special class of new physics scenarios known as universal theories. The oblique parameters should not be associated with Wilson coefficients in a particular operator basis in the effective field theory (EFT) framework, unless restrictions have been imposed on the EFT so that it describes universal theories. Here, we work out these restrictions and present a detailed EFT analysis of universal theories. We find that at the dimension-6 level, universal theories are completely characterized by 16 parameters. They are conveniently chosen to be: 5 oblique parameters that agree with the commonly adopted ones, 4 anomalous triple-gauge couplings, 3 rescaling factors for the h^3, hff, and hVV vertices, 3 parameters for hVV vertices absent in the Standard Model, and 1 four-fermion coupling of order y_f^2. Furthermore, all these parameters are defined in an unambiguous and basis-independent way, allowing for consistent constraints on the universal theories parameter space from precision electroweak and Higgs data.

  4. Universality of Citation Distributions for Academic Institutions and Journals

    PubMed Central

    Chatterjee, Arnab; Ghosh, Asim; Chakrabarti, Bikas K.

    2016-01-01

    Citations measure the importance of a publication, and may serve as a proxy for its popularity and the quality of its contents. Here we study the distributions of citations to publications from individual academic institutions for a single year. The average number of citations varies widely between institutions across the world, but the probability distributions of citations for individual institutions can be rescaled to a common form by scaling the citations by the average number of citations for that institution. This feature appears to be universal for a broad selection of institutions, irrespective of the average number of citations per article. A similar analysis for citations to publications in a particular journal in a single year reveals similar results. We find high absolute inequality for both these sets, with Gini coefficients of around 0.66 and 0.58 for institutions and journals respectively. We also find that the top 25% of articles hold about 75% of the total citations for institutions, and the top 29% of articles hold about 71% of the total citations for journals. PMID:26751563
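
    The rescaling described is simply dividing each institution's citation counts by that institution's mean; a sketch with invented counts, including the Gini coefficient used above as the inequality measure:

        import numpy as np

        institutions = {
            "inst_A": np.array([0, 3, 12, 5, 40, 2]),   # hypothetical counts
            "inst_B": np.array([1, 7, 30, 90, 4, 11]),  # hypothetical counts
        }

        # c / <c> per institution; the universality claim is that these
        # rescaled distributions collapse onto a common curve
        rescaled = {name: c / c.mean() for name, c in institutions.items()}

        def gini(c):
            """Gini coefficient of a sample of citation counts."""
            c = np.sort(np.asarray(c, dtype=float))
            n = len(c)
            return 2 * np.sum(np.arange(1, n + 1) * c) / (n * c.sum()) - (n + 1) / n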

  5. Causal mediation analysis with a binary outcome and multiple continuous or ordinal mediators: Simulations and application to an alcohol intervention

    PubMed Central

    Nguyen, Trang Quynh; Webb-Vargas, Yenny; Koning, Ina M.; Stuart, Elizabeth A.

    2016-01-01

    We investigate a method to estimate the combined effect of multiple continuous/ordinal mediators on a binary outcome: 1) fit a structural equation model with probit link for the outcome and identity/probit link for continuous/ordinal mediators, 2) predict potential outcome probabilities, and 3) compute natural direct and indirect effects. Step 2 involves rescaling the latent continuous variable underlying the outcome to address residual mediator variance/covariance. We evaluate the estimation of risk-difference- and risk-ratio-based effects (RDs, RRs) using the ML, WLSMV and Bayes estimators in Mplus. Across most variations in path-coefficient and mediator-residual-correlation signs and strengths, and confounding situations investigated, the method performs well with all estimators, but favors ML/WLSMV for RDs with continuous mediators, and Bayes for RRs with ordinal mediators. Bayes outperforms WLSMV/ML regardless of mediator type when estimating RRs with small potential outcome probabilities and in two other special cases. An adolescent alcohol prevention study is used for illustration. PMID:27158217

  6. Universality of Citation Distributions for Academic Institutions and Journals.

    PubMed

    Chatterjee, Arnab; Ghosh, Asim; Chakrabarti, Bikas K

    2016-01-01

    Citations measure the importance of a publication, and may serve as a proxy for its popularity and the quality of its contents. Here we study the distributions of citations to publications from individual academic institutions for a single year. The average number of citations varies widely between institutions across the world, but the probability distributions of citations for individual institutions can be rescaled to a common form by scaling the citations by the average number of citations for that institution. This feature appears to be universal for a broad selection of institutions, irrespective of the average number of citations per article. A similar analysis for citations to publications in a particular journal in a single year reveals similar results. We find high absolute inequality for both these sets, with Gini coefficients of around 0.66 and 0.58 for institutions and journals respectively. We also find that the top 25% of articles hold about 75% of the total citations for institutions, and the top 29% of articles hold about 71% of the total citations for journals.

  7. The Kadomtsev–Petviashvili equation as a source of integrable model equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maccari, A.

    1996-12-01

    A new integrable and nonlinear partial differential equation (PDE) in 2+1 dimensions is obtained, by an asymptotically exact reduction method based on Fourier expansion and spatiotemporal rescaling, from the Kadomtsev–Petviashvili equation. The integrability property is explicitly demonstrated, by exhibiting the corresponding Lax pair, that is obtained by applying the reduction technique to the Lax pair of the Kadomtsev–Petviashvili equation. This model equation is likely to be of applicative relevance, because it may be considered a consistent approximation of a large class of nonlinear evolution PDEs. © 1996 American Institute of Physics.

  8. Wall-pressure fluctuations beneath a spatially evolving turbulent boundary layer

    NASA Astrophysics Data System (ADS)

    Mahesh, Krishnan; Kumar, Praveen

    2016-11-01

    Wall-pressure fluctuations beneath a turbulent boundary layer are important in applications dealing with structural deformation and acoustics. Simulations are performed for flat-plate and axisymmetric, spatially evolving zero-pressure-gradient turbulent boundary layers at inflow Reynolds numbers of 1400 and 2200 based on momentum thickness. The simulations generate their own inflow using the recycle-rescale method. The results for mean velocity and second-order statistics show excellent agreement with the data available in the literature. The spectral characteristics of wall-pressure fluctuations and their relation to flow structure will be discussed. This work is supported by ONR.

  9. The Effective Dynamics of the Volume Preserving Mean Curvature Flow

    NASA Astrophysics Data System (ADS)

    Chenn, Ilias; Fournodavlos, G.; Sigal, I. M.

    2018-04-01

    We consider the dynamics of small closed submanifolds ('bubbles') under the volume preserving mean curvature flow. We construct a map from (n+1)-dimensional Euclidean space into a given (n+1)-dimensional Riemannian manifold which characterizes the existence, stability and dynamics of constant mean curvature submanifolds. This is done in terms of a reduced area function on the Euclidean space, which is given constructively and can be computed perturbatively. This allows us to derive adiabatic and effective dynamics of the bubbles. The results can be mapped by rescaling to the dynamics of fixed size bubbles in almost Euclidean Riemannian manifolds.

  10. Averages of B-Hadron, C-Hadron, and tau-lepton properties as of early 2012

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amhis, Y.; et al.

    2012-07-01

    This article reports world averages of measurements of b-hadron, c-hadron, and tau-lepton properties obtained by the Heavy Flavor Averaging Group (HFAG) using results available through the end of 2011. In some cases results available in the early part of 2012 are included. For the averaging, common input parameters used in the various analyses are adjusted (rescaled) to common values, and known correlations are taken into account. The averages include branching fractions, lifetimes, neutral meson mixing parameters, CP violation parameters, parameters of semileptonic decays and CKM matrix elements.

  11. Cosmological models with a hybrid scale factor in an extended gravity theory

    NASA Astrophysics Data System (ADS)

    Mishra, B.; Tripathy, S. K.; Tarai, Sankarsan

    2018-03-01

    A general formalism to investigate Bianchi type VIh universes is developed in an extended theory of gravity. A minimally coupled geometry and matter field is considered, with a rescaled function f(R,T) substituted in place of the Ricci scalar R in the geometrical action. Dynamical aspects of the models are discussed using a hybrid scale factor (HSF) that behaves as a power law at an initial epoch and as an exponential at a late epoch. The power-law behavior and the exponential behavior appear as two extreme cases of the present model.

  12. Non-equilibrium scale invariance and shortcuts to adiabaticity in a one-dimensional Bose gas

    PubMed Central

    Rohringer, W.; Fischer, D.; Steiner, F.; Mazets, I. E.; Schmiedmayer, J.; Trupke, M.

    2015-01-01

    We present experimental evidence for scale invariant behaviour of the excitation spectrum in phase-fluctuating quasi-1d Bose gases after a rapid change of the external trapping potential. Probing density correlations in free expansion, we find that the temperature of an initial thermal state scales with the spatial extension of the cloud as predicted by a model based on adiabatic rescaling of initial eigenmodes with conserved quasiparticle occupation numbers. Based on this result, we demonstrate that shortcuts to adiabaticity for the rapid expansion or compression of the gas do not induce additional heating. PMID:25867640

  13. Reduction of Radiometric Miscalibration—Applications to Pushbroom Sensors

    PubMed Central

    Rogaß, Christian; Spengler, Daniel; Bochow, Mathias; Segl, Karl; Lausch, Angela; Doktor, Daniel; Roessner, Sigrid; Behling, Robert; Wetzel, Hans-Ulrich; Kaufmann, Hermann

    2011-01-01

    The analysis of hyperspectral images is an important task in Remote Sensing. Foregoing radiometric calibration results in the assignment of incident electromagnetic radiation to digital numbers and reduces the striping caused by slightly different responses of the pixel detectors. However, due to uncertainties in the calibration some striping remains. This publication presents a new reduction framework that efficiently reduces linear and nonlinear miscalibrations by an image-driven, radiometric recalibration and rescaling. The proposed framework—Reduction Of Miscalibration Effects (ROME)—considering spectral and spatial probability distributions, is constrained by specific minimisation and maximisation principles and incorporates image processing techniques such as Minkowski metrics and convolution. To objectively evaluate the performance of the new approach, the technique was applied to a variety of commonly used image examples and to one simulated and miscalibrated EnMAP (Environmental Mapping and Analysis Program) scene. Other examples consist of miscalibrated AISA/Eagle VNIR (Visible and Near Infrared) and Hawk SWIR (Short Wave Infrared) scenes of rural areas of the region Fichtwald in Germany and Hyperion scenes of the Jalal-Abad district in Southern Kyrgyzstan. Recovery rates of approximately 97% for linear and approximately 94% for nonlinear miscalibrated data were achieved, clearly demonstrating the benefits of the new approach and its potential for broad applicability to miscalibrated pushbroom sensor data. PMID:22163960

  14. Anisotropic magnetocaloric effect in single crystals of CrI3

    NASA Astrophysics Data System (ADS)

    Liu, Yu; Petrovic, C.

    2018-05-01

    We report a systematic investigation of dc magnetization and ac susceptibility, as well as the anisotropic magnetocaloric effect, in bulk CrI3 single crystals. A second-stage magnetic transition was observed just below the Curie temperature Tc, indicating a two-step magnetic ordering. The low temperature thermal demagnetization could be well fitted by the spin-wave model rather than the single-particle model, confirming its localized magnetism. A maximum magnetic entropy change -ΔSM^max ≈ 5.65 J kg^-1 K^-1 and a corresponding adiabatic temperature change ΔTad ≈ 2.34 K are obtained from heat capacity analysis with magnetic fields up to 9 T. The anisotropy of ΔSM(T, H) was further investigated by isothermal magnetization, showing that the difference in -ΔSM^max between the ab plane and the c axis reaches a maximum value of about 1.56 J kg^-1 K^-1 for a field change of 5 T. With the scaling analysis of ΔSM, the rescaled ΔSM(T, H) curves collapse onto a universal curve, indicating a second-order magnetic transition. Furthermore, -ΔSM^max follows the power law H^n with n = 0.64(1), and the relative cooling power depends on H^m with m = 1.12(1).
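
    The exponent n in the relation ΔS_max ∝ H^n is typically extracted as a log-log slope; a sketch with invented numbers (the paper reports n = 0.64(1); mean-field theory predicts n = 2/3 at the Curie temperature):

        import numpy as np

        H = np.array([1.0, 2.0, 3.0, 5.0, 7.0, 9.0])       # field (T)
        dS_max = np.array([1.1, 1.7, 2.2, 3.1, 3.9, 4.6])  # hypothetical peaks

        # slope of log|dS_max| versus log H gives the exponent n
        n, log_a = np.polyfit(np.log(H), np.log(dS_max), 1)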

  15. Automated x-ray/light field congruence using the LINAC EPID panel.

    PubMed

    Polak, Wojciech; O'Doherty, Jim; Jones, Matt

    2013-03-01

    X-ray/light field alignment is a test described in many guidelines for the routine quality control of clinical linear accelerators (LINAC). Currently, the gold standard method for measuring alignment is through utilization of radiographic film. However, many modern LINACs are equipped with an electronic portal imaging device (EPID) that may be used to perform this test and thus subsequently reducing overall cost, processing, and analysis time, removing operator dependency and the requirement to sustain the departmental film processor. This work describes a novel method of utilizing the EPID together with a custom inhouse designed jig and automatic image processing software allowing measurement of the light field size, x-ray field size, and congruence between them. The authors present results of testing the method for aS1000 and aS500 Varian EPID detectors for six LINACs at a range of energies (6, 10, and 15 MV) in comparison with the results obtained from the use of radiographic film. Reproducibility of the software in fully automatic operation under a range of operating conditions for a single image showed a congruence of 0.01 cm with a coefficient of variation of 0. Slight variation in congruence repeatability was noted through semiautomatic processing by four independent operators due to manual marking of positions on the jig. Testing of the methodology using the automatic method shows a high precision of 0.02 mm compared to a maximum of 0.06 mm determined by film processing. Intraindividual examination of operator measurements of congruence was shown to vary as much as 0.75 mm. Similar congruence measurements of 0.02 mm were also determined for a lower resolution EPID (aS500 model), after rescaling of the image to the aS1000 image size. The designed methodology was proven to be time efficient, cost effective, and at least as accurate as using the gold standard radiographic film. Additionally, congruence testing can be easily performed for all four cardinal gantry angles which can be difficult when using radiographic film. Therefore, the authors propose it can be used as an alternative to the radiographic film method allowing decommissioning of the film processor.

  16. Implementation of the Jacobian-free Newton-Krylov method for solving the first-order ice sheet momentum balance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salinger, Andy; Evans, Katherine J; Lemieux, Jean-Francois

    2011-01-01

    We have implemented the Jacobian-free Newton-Krylov (JFNK) method for solving the first-order ice sheet momentum equation in order to improve the numerical performance of the Community Ice Sheet Model (CISM), the land ice component of the Community Earth System Model (CESM). Our JFNK implementation is based on significant re-use of existing code. For example, our physics-based preconditioner uses the original Picard linear solver in CISM. For several test cases spanning a range of geometries and boundary conditions, our JFNK implementation is 1.84-3.62 times more efficient than the standard Picard solver in CISM. Importantly, this computational gain of JFNK over the Picard solver increases when refining the grid. Global convergence of the JFNK solver has been significantly improved by rescaling the equation for the basal boundary condition and through the use of an inexact Newton method. While a diverse set of test cases shows that our JFNK implementation is usually robust, for some problems it may fail to converge with increasing resolution (as does the Picard solver). Globalization through parameter continuation did not remedy this problem, and future work to improve robustness will explore a combination of Picard and JFNK and the use of homotopy methods.
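
    For readers unfamiliar with the technique, a generic, minimal JFNK sketch (this is not CISM code; the toy residual is ours). The Jacobian-vector products needed by the Krylov solver are approximated by finite differences of the residual, so no Jacobian matrix is ever assembled:

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        def jfnk_step(F, u, eps=1e-7):
            """One Newton step for F(u) = 0 with a matrix-free Jacobian:
            J v is approximated by (F(u + eps*v) - F(u)) / eps."""
            Fu = F(u)

            def jv(v):
                return (F(u + eps * v) - Fu) / eps

            J = LinearOperator((u.size, u.size), matvec=jv)
            du, _ = gmres(J, -Fu)  # Krylov solve, Jacobian never formed
            return u + du

        # toy nonlinear system: F(u) = u**3 - 1 componentwise
        u = np.full(4, 2.0)
        for _ in range(20):
            u = jfnk_step(lambda x: x**3 - 1.0, u)
        print(u)  # approaches [1, 1, 1, 1]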

  17. Time-Warp–Invariant Neuronal Processing

    PubMed Central

    Gütig, Robert; Sompolinsky, Haim

    2009-01-01

    Fluctuations in the temporal durations of sensory signals constitute a major source of variability within natural stimulus ensembles. The neuronal mechanisms through which sensory systems can stabilize perception against such fluctuations are largely unknown. An intriguing instantiation of such robustness occurs in human speech perception, which relies critically on temporal acoustic cues that are embedded in signals with highly variable duration. Across different instances of natural speech, auditory cues can undergo temporal warping that ranges from 2-fold compression to 2-fold dilation without significant perceptual impairment. Here, we report that time-warp–invariant neuronal processing can be subserved by the shunting action of synaptic conductances that automatically rescales the effective integration time of postsynaptic neurons. We propose a novel spike-based learning rule for synaptic conductances that adjusts the degree of synaptic shunting to the temporal processing requirements of a given task. Applying this general biophysical mechanism to the example of speech processing, we propose a neuronal network model for time-warp–invariant word discrimination and demonstrate its excellent performance on a standard benchmark speech-recognition task. Our results demonstrate the important functional role of synaptic conductances in spike-based neuronal information processing and learning. The biophysics of temporal integration at neuronal membranes can endow sensory pathways with powerful time-warp–invariant computational capabilities. PMID:19582146

  18. A technique to detect microclimatic inhomogeneities in historical temperature records

    NASA Astrophysics Data System (ADS)

    Runnalls, K. E.; Oke, T. R.

    2003-04-01

    A technique is presented for identifying inhomogeneities in historical temperature records caused by microclimatic changes to the surroundings of a climate station (e.g., minor instrument relocations, vegetation growth/removal, construction of houses, roads, runways). The technique uses daily maximum and minimum temperatures to estimate the magnitude of nocturnal cooling. The test station is compared to a nearby reference station by constructing time series of monthly "cooling ratios". It is argued that the cooling ratio is a particularly sensitive measure of microclimatic differences between neighbouring climate stations: firstly, because microclimatic character is best expressed at night in stable conditions; secondly, because larger-scale climatic influences common to both stations are removed by the use of a ratio, and because the ratio can be shown to be invariant in the mean with weather variables such as wind and cloud. Inflections (change points) in time series of cooling ratios therefore signal microclimatic change in one of the station records. Hurst rescaling is applied to the time series to aid in the identification of change points, which can then be compared to documented station history events, if sufficient metadata are available. Results for a variety of air temperature records, ranging from rural to urban stations, are presented to illustrate the applicability of the technique.
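
    As a sketch of the two steps above, nocturnal cooling can be approximated from daily extremes, the test/reference ratio averaged by month, and the resulting series passed through Hurst rescaling (cumulative departures from the mean). The cooling definition and variable names here are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def monthly_cooling_ratio(tmax_t, tmin_t, tmax_r, tmin_r, month):
    """Monthly mean ratio of nocturnal cooling (here: afternoon max
    minus the next morning's min) at a test vs. a reference station."""
    cool_t = tmax_t[:-1] - tmin_t[1:]
    cool_r = tmax_r[:-1] - tmin_r[1:]
    m = month[:-1]
    return np.array([cool_t[m == k].mean() / cool_r[m == k].mean()
                     for k in np.unique(m)])

def hurst_rescale(series):
    """Hurst rescaling: cumulative departures from the series mean,
    normalized by the standard deviation. Inflections in the returned
    curve flag candidate change points."""
    x = np.asarray(series, dtype=float)
    return np.cumsum(x - x.mean()) / x.std()
```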

  19. Global Assessment of New GRACE Mascons Solutions for Hydrologic Applications

    NASA Astrophysics Data System (ADS)

    Save, H.; Zhang, Z.; Scanlon, B. R.; Wiese, D. N.; Landerer, F. W.; Long, D.; Longuevergne, L.; Chen, J.

    2016-12-01

    Advances in GRACE (Gravity Recovery and Climate Experiment) satellite data processing using new mass concentration (mascon) solutions have greatly increased the spatial localization and amplitude of recovered total Terrestrial Water Storage (TWS) signals; however, limited testing has been conducted for land hydrologic applications. In this study we compared TWS anomalies from (1) the Center for Space Research mascon (CSR-M) solution with (2) the NASA JPL mascon (JPL-M) solution, and with (3) a CSR gridded spherical harmonic rescaled (sf) solution from Tellus (CSRT-GSH.sf) in 176 river basins covering 80% of the global land area. There is good correspondence in TWS anomalies from mascons (CSR-M and JPL-M) and SH solutions based on high correlations between time series (rank correlation coefficients mostly >0.9). The long-term trends in basin TWS anomalies represent a relatively small signal (up to ±20 mm/yr), with differences among GRACE solutions and inter-basin variability increasing with decreasing basin size. Long-term TWS declines are greatest in (semi)arid and irrigated basins. Annual and semiannual signals have much larger amplitudes (up to ±250 mm). There is generally good agreement among GRACE solutions, increasing confidence in seasonal fluctuations from GRACE data. Rescaling spherical harmonics to restore lost signal increases agreement with mascon solutions for long-term trends and seasonal fluctuations. There are many advantages to using GRACE mascon solutions relative to SH solutions, such as reduced leakage from land to ocean (increasing signal amplitude) and the application of geophysical data constraints during processing, with little or no post-processing required, making mascons more user-friendly for non-geodetic users. This inter-comparison of various GRACE solutions should allow hydrologists to better select suitable GRACE products for hydrologic applications.

  20. Global evaluation of new GRACE mascon products for hydrologic applications

    NASA Astrophysics Data System (ADS)

    Scanlon, Bridget R.; Zhang, Zizhan; Save, Himanshu; Wiese, David N.; Landerer, Felix W.; Long, Di; Longuevergne, Laurent; Chen, Jianli

    2016-12-01

    Recent developments in mascon (mass concentration) solutions for GRACE (Gravity Recovery and Climate Experiment) satellite data have significantly increased the spatial localization and amplitude of recovered terrestrial Total Water Storage anomalies (TWSA); however, land hydrology applications have been limited. Here we compare TWSA from April 2002 through March 2015 from (1) newly released GRACE mascons from the Center for Space Research (CSR-M) with (2) NASA JPL mascons (JPL-M), and with (3) CSR Tellus gridded spherical harmonics rescaled (sf) (CSRT-GSH.sf) in 176 river basins, ~60% of the global land area. Time series in TWSA mascons (CSR-M and JPL-M) and spherical harmonics are highly correlated (rank correlation coefficients mostly >0.9). The signal from long-term trends (up to ±20 mm/yr) is much less than that from seasonal amplitudes (up to 250 mm). Net long-term trends, summed over all 176 basins, are similar for CSR and JPL mascons (66-69 km³/yr) but are lower for spherical harmonics (~14 km³/yr). Long-term TWSA declines are found mostly in irrigated basins (-41 to -69 km³/yr). Seasonal amplitudes agree among GRACE solutions, increasing confidence in GRACE-based seasonal fluctuations. Rescaling spherical harmonics significantly increases agreement with mascons for seasonal fluctuations, but less for long-term trends. Mascons provide advantages relative to spherical harmonics, including (1) reduced leakage from land to ocean increasing signal amplitude, and (2) application of geophysical data constraints during processing with little empirical postprocessing requirements, making it easier for nongeodetic users. Results of this product intercomparison should allow hydrologists to better select suitable GRACE solutions for hydrologic applications.
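
    The "rescaled (sf)" spherical-harmonic product is built by multiplying the filtered TWSA grid by gridded gain (scale) factors, restoring amplitude lost to filtering and leakage, before basin averaging. A minimal sketch of those two steps; array names, shapes, and the cos(latitude) weighting are illustrative assumptions.

```python
import numpy as np

def apply_gain(twsa_sh, gain):
    """Restore filter/leakage-damped spherical-harmonic TWSA by
    multiplying each grid cell by its gain (scale) factor, in the
    spirit of the Tellus 'sf' products. Grids must share shape,
    masks, and units."""
    return twsa_sh * gain

def basin_average(twsa, basin_mask, lat_deg):
    """Area-weighted basin mean of a lat x lon TWSA grid, using
    cos(latitude) as the cell-area weight."""
    w = np.cos(np.deg2rad(lat_deg))[:, None] * basin_mask
    return np.nansum(twsa * w) / np.nansum(w)
```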

  1. Dealing with the health state 'dead' when using discrete choice experiments to obtain values for EQ-5D-5L health states.

    PubMed

    Ramos-Goñi, Juan Manuel; Rivero-Arias, Oliver; Errea, María; Stolk, Elly A; Herdman, Michael; Cabasés, Juan Manuel

    2013-07-01

    To evaluate two different methods to obtain a dead (0) to full health (1) scale for EQ-5D-5L valuation studies when using discrete choice (DC) modeling. The study was carried out among 400 respondents from Barcelona who were representative of the Spanish population in terms of age, sex, and level of education. The DC design included 50 pairs of health states in five blocks. Participants were forced to choose between two EQ-5D-5L states (A and B). Two extra questions concerned whether A and B were considered worse than dead. Each participant performed ten choice exercises. In addition, values were collected using lead-time trade-off (lead-time TTO), for which 100 states in ten blocks were selected. Each participant performed five lead-time TTO exercises. Two DC models were then estimated: one offering the health state 'dead' as one of the choices, using all participants' responses (DCdead), and one including only the responses of participants who considered at least one state worse than dead (WTD) (DCWTD). The study also estimated DC models rescaled with lead-time TTO data and a lead-time TTO linear model. The DCdead and DCWTD models produced relatively similar results, although the coefficients in the DCdead model were slightly lower. The DC model rescaled with lead-time TTO data produced higher utility decrements. Lead-time TTO produced the highest utility decrements. The incorporation of the state 'dead' in the DC models produces results in concordance with DC models that do not include 'dead'.

  2. Zirconium Evaluations for ENDF/B-VII.2 for the Fast Region

    NASA Astrophysics Data System (ADS)

    Brown, D. A.; Arcilla, R.; Capote, R.; Mughabghab, S. F.; Herman, M. W.; Trkov, A.; Kim, H. I.

    2014-04-01

    We have performed a new combined set of evaluations for 90-96Zr, including new resolved resonance parameterizations from Said Mughabghab for 90,91,92,94,96Zr and fast region calculations made with EMPIRE-3.1. Because 90Zr is a magic nucleus, stable Zr isotopes are nearly spherical. A new soft-rotor optical model potential is used, allowing calculations of the inelastic scattering on low-lying coupled levels of vibrational nature. A soft rotor model describes dynamical deformations of the nucleus around the spherical shape and is implemented in the EMPIRE/OPTMAN code. The same potential is used with rigid rotor couplings for odd-A nuclei. This led to improved elastic angular distributions, helping to resolve improper leakage in the older ENDF/B-VII.1β evaluation in KAPL proprietary, ZPR, and TRIGA benchmarks. Another consequence of 90Zr being a magic nucleus is that the level densities in both 90Zr and 91Zr are unusually low, causing the (n,el) and (n,tot) cross sections to exhibit large fluctuations above the resolved resonance region. To accommodate these fluctuations, we performed a simultaneous constrained generalized least-squares fit to (n,tot) for all isotopic and elemental Zr data in EXFOR, using EMPIRE's TOTRED scaling factor. TOTRED rescales total cross sections so that the optical model calculations are unaltered by the rescaling and the correct competition between channels is maintained. In this fit, all (n,tot) data in EXFOR were used for Ein > 100 keV, provided the target isotopic makeup could be correctly understood, including spectrum-averaged data and data with broad energy resolution. As a result of our fitting procedure, we will have full cross-material and cross-reaction covariance for all Zr isotopes and reactions.

  3. Target switching in curved human arm movements is predicted by changing a single control parameter.

    PubMed

    Hoffmann, Heiko

    2011-01-01

    Straight-line movements have been studied extensively in the human motor-control literature, but little is known about how to generate curved movements and how to adjust them in a dynamic environment. The present work studied, for the first time to my knowledge, how humans adjust curved hand movements to a target that switches location. Subjects (n = 8) sat in front of a drawing tablet and looked at a screen. They moved a cursor on a curved trajectory (spiral or oval shaped) toward a goal point. In half of the trials, this goal switched 200 ms after movement onset to either one of two alternative positions, and subjects smoothly adjusted their movements to the new goal. To explain this adjustment, we compared three computational models: a superposition of curved and minimum-jerk movements (Flash and Henis in J Cogn Neurosci 3(3):220-230, 1991), Vector Planning (Gordon et al. in Exp Brain Res 99(1):97-111, 1994) adapted to curved movements (Rescale), and a nonlinear dynamical system, which could generate arbitrarily curved smooth movements and had a point attractor at the goal. For each model, we predicted the trajectory adjustment to the target switch by changing only the goal position in the model. As a result, the dynamical model could explain the observed switch behavior significantly better than the two alternative models (spiral: P = 0.0002 vs. Flash, P = 0.002 vs. Rescale; oval: P = 0.04 vs. Flash; P values obtained from Wilcoxon tests on R² values). We conclude that generalizing arbitrary hand trajectories to new targets may be explained by switching a single control command, without the need to re-plan or re-optimize the whole movement or superimpose movements.

  4. Anthropometry and Biomechanics Facility Presentation to Open EVA Research Forum

    NASA Technical Reports Server (NTRS)

    Rajulu, Sudhakar

    2017-01-01

    NASA is required to accommodate individuals who fall within a 1st to 99th percentile range on a variety of critical dimensions. The hardware the crew interacts with must therefore be designed and verified to allow these selected individuals to complete critical mission tasks safely and at an optimal performance level. Until now, designers have been provided with simpler univariate critical dimensional analyses. The multivariate characteristics of intra-individual and inter-individual size variation must be accounted for, since an individual who is 1st percentile in one body dimension will not be 1st percentile in all other dimensions. A more simplistic approach, assuming every measurement of an individual will fall within the same percentile range, can lead to a model that does not represent realistic members of the population. In other words, there is no '1st percentile female' or '99th percentile male', and designing for these unrealistic body types can lead to hardware issues down the road. Furthermore, due to budget considerations, designers are normally limited to providing only one size of a prototype suit, thus requiring other possible means to ensure that a given suit architecture would yield the necessary suit sizes to accommodate the entire user population. Fortunately, modeling tools can be used to more accurately model the types of human body sizes and shapes that will be encountered in a population. Anthropometry toolkits have been designed with a variety of capabilities, including grouping the population into clusters based on critical dimensions, providing percentile information given test subject measurements, and listing measurement ranges for critical dimensions in the 1st-99th percentile range. These toolkits can be combined with full body laser scans to allow designers to build human models that better represent the astronaut population. More recently, some rescaling and reposing capabilities have been developed to allow reshaping of these static laser scans into more representative postures, such as an abducted shoulder. All of the hardware designed for use with the crew must be sized to accommodate the user population, but the interaction between subject size and hardware fit is complicated with multi-component, complex systems like a space suit. Again, prototype suits are normally only provided in a limited size range, and suited testing is an expensive endeavor; both of these factors limit the number and size of people who can be used to benchmark a spacesuit. However, modeling tools for assessing suit-human interaction can allow potential issues to be modeled and visualized. These types of modeling tools can be used for analysis of a larger combination of anthropometries and hardware types than could feasibly be done with actual human subjects and physical mockups.

  5. On the Choice of Variable for Atmospheric Moisture Analysis

    NASA Technical Reports Server (NTRS)

    Dee, Dick P.; DaSilva, Arlindo M.; Atlas, Robert (Technical Monitor)

    2002-01-01

    The implications of using different control variables for the analysis of moisture observations in a global atmospheric data assimilation system are investigated. A moisture analysis based on either mixing ratio or specific humidity is prone to large extrapolation errors, due to the high variability in space and time of these parameters and to the difficulties in modeling their error covariances. Using the logarithm of specific humidity does not alleviate these problems, and has the further disadvantage that very dry background estimates cannot be effectively corrected by observations. Relative humidity is a better choice from a statistical point of view, because this field is spatially and temporally more coherent and error statistics are therefore easier to obtain. If, however, the analysis is designed to preserve relative humidity in the absence of moisture observations, then the analyzed specific humidity field depends entirely on analyzed temperature changes. If the model has a cool bias in the stratosphere this will lead to an unstable accumulation of excess moisture there. A pseudo-relative humidity can be defined by scaling the mixing ratio by the background saturation mixing ratio. A univariate pseudo-relative humidity analysis will preserve the specific humidity field in the absence of moisture observations. A pseudo-relative humidity analysis is shown to be equivalent to a mixing ratio analysis with flow-dependent covariances. In the presence of multivariate (temperature-moisture) observations it produces analyzed relative humidity values that are nearly identical to those produced by a relative humidity analysis. Based on a time series analysis of radiosonde observed-minus-background differences it appears to be more justifiable to neglect specific humidity-temperature correlations (in a univariate pseudo-relative humidity analysis) than to neglect relative humidity-temperature correlations (in a univariate relative humidity analysis). A pseudo-relative humidity analysis is easily implemented in an existing moisture analysis system, by simply scaling observed-minus-background moisture residuals prior to solving the analysis equation, and rescaling the analyzed increments afterward.
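
    The implementation recipe in the final sentence is just two scalings wrapped around the existing analysis step. A minimal sketch, with array names assumed:

```python
import numpy as np

def to_pseudo_rh_residuals(q_obs, q_bkg, qsat_bkg):
    """Scale observed-minus-background mixing-ratio residuals by the
    background saturation mixing ratio (pseudo-relative-humidity
    residuals) before solving the analysis equation."""
    return (q_obs - q_bkg) / qsat_bkg

def to_q_increments(pseudo_rh_increment, qsat_bkg):
    """Rescale the analyzed pseudo-RH increments back to specific
    humidity increments after the analysis equation is solved."""
    return pseudo_rh_increment * qsat_bkg
```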

  6. Stages and Discharges of the Mississippi River and Tributaries and Other Watersheds in the New Orleans District for 1981.

    DTIC Science & Technology

    1981-01-01

    [The abstract for this record is OCR residue from scanned gauging-station tables, e.g., "Mississippi River at Chalmette, LA. Location: lat. 29-56-42, long. 90-00-12 (co-ordinates rescaled)" and an entry for a waterway station at the west end of a lock southwest of Lake Charles, LA; the underlying stage and discharge tables are not recoverable.]

  7. Econophysics: Master curve for price-impact function

    NASA Astrophysics Data System (ADS)

    Lillo, Fabrizio; Farmer, J. Doyne; Mantegna, Rosario N.

    2003-01-01

    The price reaction to a single transaction depends on transaction volume, the identity of the stock, and possibly many other factors. Here we show that, by taking into account the differences in liquidity for stocks of different size classes of market capitalization, we can rescale both the average price shift and the transaction volume to obtain a uniform price-impact curve for all size classes of firm for four different years (1995-98). This single-curve collapse of the price-impact function suggests that fluctuations from the supply-and-demand equilibrium for many financial assets, differing in economic sectors of activity and market capitalization, are governed by the same statistical rule.
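
    In practice, the collapse amounts to dividing each transaction volume and each average price shift by a liquidity scale tied to the stock's market-capitalization class before plotting. A schematic of that rescaling step; the names and the choice of scales are illustrative, not the paper's fitted normalizations.

```python
import numpy as np

def collapse(volume, price_shift, size_class, scales):
    """Rescale per-class price-impact data onto one master curve by
    dividing volumes and price shifts by class-dependent liquidity
    scales. `scales` maps class -> (volume_scale, impact_scale)."""
    v = np.asarray(volume, dtype=float)
    dp = np.asarray(price_shift, dtype=float)
    vs = np.array([scales[c][0] for c in size_class])
    ps = np.array([scales[c][1] for c in size_class])
    return v / vs, dp / ps

# Toy usage with two market-cap classes:
scales = {"small": (1e4, 5e-3), "large": (1e6, 5e-4)}
v_r, dp_r = collapse([2e4, 3e6], [6e-3, 7e-4], ["small", "large"], scales)
```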

  8. Universal scaling law in human behavioral organization.

    PubMed

    Nakamura, Toru; Kiyono, Ken; Yoshiuchi, Kazuhiro; Nakahara, Rika; Struzik, Zbigniew R; Yamamoto, Yoshiharu

    2007-09-28

    We describe the nature of human behavioral organization, specifically how resting and active periods are interwoven throughout daily life. Active period durations with physical activity count successively above a predefined threshold, when rescaled with individual means, follow a universal stretched exponential (gamma-type) cumulative distribution with characteristic time, both in healthy individuals and in patients with major depressive disorder. On the other hand, resting period durations below the threshold for both groups obey a scale-free power-law cumulative distribution over two decades, with significantly lower scaling exponents in the patients. We thus find universal distribution laws governing human behavioral organization, with a parameter altered in depression.

  9. A new chaotic oscillator with free control

    NASA Astrophysics Data System (ADS)

    Li, Chunbiao; Sprott, Julien Clinton; Akgul, Akif; Iu, Herbert H. C.; Zhao, Yibo

    2017-08-01

    A novel chaotic system is explored in which all terms are quadratic except for a linear function. The slope of the linear function rescales the amplitude and frequency of the variables linearly while its zero intercept allows offset boosting for one of the variables. Therefore, a free-controlled chaotic oscillation can be obtained with any desired amplitude, frequency, and offset by an easy modification of the linear function. When implemented as an electronic circuit, the corresponding chaotic signal can be controlled by two independent potentiometers, which is convenient for constructing a chaos-based application system. To the best of our knowledge, this class of chaotic oscillators has never been reported.

  10. Fast downscaled inverses for images compressed with M-channel lapped transforms.

    PubMed

    de Queiroz, R L; Eschbach, R

    1997-01-01

    Compressed images may be decompressed and displayed or printed using different devices at different resolutions. Full decompression and rescaling in space domain is a very expensive method. We studied downscaled inverses where the image is decompressed partially, and a reduced inverse transform is used to recover the image. In this fashion, fewer transform coefficients are used and the synthesis process is simplified. We studied the design of fast inverses, for a given forward transform. General solutions are presented for M-channel finite impulse response (FIR) filterbanks, of which block and lapped transforms are a subset. Designs of faster inverses are presented for popular block and lapped transforms.

  11. Distribution of G concurrence of random pure states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cappellini, Valerio; Sommers, Hans-Juergen; Zyczkowski, Karol

    2006-12-15

    The average entanglement of random pure states of an N×N composite system is analyzed. We compute the average value of the determinant D of the reduced state, which forms an entanglement monotone. Calculating higher moments of the determinant, we characterize the probability distribution P(D). Similar results are obtained for the rescaled Nth root of the determinant, called the G concurrence. We show that in the limit N→∞ this quantity becomes concentrated at a single point G* = 1/e. The position of the concentration point changes if one considers an arbitrary N×K bipartite system, in the joint limit N,K→∞ with K/N fixed.
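
    For a random pure state drawn from the unitarily invariant measure, the reduced state can be sampled from a normalized complex Gaussian coefficient matrix, and the G concurrence read off as the rescaled Nth root of its determinant. A small numerical sketch of the concentration near 1/e; the sample sizes are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def g_concurrence(n):
    """G concurrence G = n * det(rho)**(1/n) of a random pure state
    of an n x n bipartite system; rho is the reduced density matrix."""
    c = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    c /= np.linalg.norm(c)                 # normalize the pure state
    rho = c @ c.conj().T                   # reduced state of one subsystem
    sign, logdet = np.linalg.slogdet(rho)  # stable for tiny determinants
    return n * np.exp(logdet / n)

# Concentration toward G* = 1/e ~ 0.3679 as n grows:
print([round(np.mean([g_concurrence(n) for _ in range(50)]), 3)
       for n in (4, 16, 64)])
```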

  12. Universal Scaling Law in Human Behavioral Organization

    NASA Astrophysics Data System (ADS)

    Nakamura, Toru; Kiyono, Ken; Yoshiuchi, Kazuhiro; Nakahara, Rika; Struzik, Zbigniew R.; Yamamoto, Yoshiharu

    2007-09-01

    We describe the nature of human behavioral organization, specifically how resting and active periods are interwoven throughout daily life. Active period durations with physical activity count successively above a predefined threshold, when rescaled with individual means, follow a universal stretched exponential (gamma-type) cumulative distribution with characteristic time, both in healthy individuals and in patients with major depressive disorder. On the other hand, resting period durations below the threshold for both groups obey a scale-free power-law cumulative distribution over two decades, with significantly lower scaling exponents in the patients. We thus find universal distribution laws governing human behavioral organization, with a parameter altered in depression.

  13. Calcium ions in aqueous solutions: Accurate force field description aided by ab initio molecular dynamics and neutron scattering

    NASA Astrophysics Data System (ADS)

    Martinek, Tomas; Duboué-Dijon, Elise; Timr, Štěpán; Mason, Philip E.; Baxová, Katarina; Fischer, Henry E.; Schmidt, Burkhard; Pluhařová, Eva; Jungwirth, Pavel

    2018-06-01

    We present a combination of force field and ab initio molecular dynamics simulations together with neutron scattering experiments with isotopic substitution that aim at characterizing ion hydration and pairing in aqueous calcium chloride and formate/acetate solutions. Benchmarking against neutron scattering data on concentrated solutions together with ion pairing free energy profiles from ab initio molecular dynamics allows us to develop an accurate calcium force field which accounts in a mean-field way for electronic polarization effects via charge rescaling. This refined calcium parameterization is directly usable for standard molecular dynamics simulations of processes involving this key biological signaling ion.
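
    Charge rescaling in this mean-field spirit (the electronic continuum correction) divides ionic charges by the square root of the solvent's electronic (high-frequency) dielectric constant, roughly n² ≈ 1.78 for water, i.e., a factor of about 0.75. The sketch below shows only that scaling step under those stated assumptions; the paper's final parameters come from benchmarking against neutron scattering and ab initio MD.

```python
def ecc_rescale_charges(charges, eps_el=1.78):
    """Electronic continuum correction sketch: divide ionic charges by
    sqrt(eps_el), the electronic part of the solvent dielectric
    constant (~1.78 for water, giving a factor of ~0.75)."""
    f = eps_el ** -0.5
    return [round(q * f, 3) for q in charges]

# Ca2+ and two Cl-: nominal charges +2, -1, -1 -> ~ +1.5, -0.75, -0.75
print(ecc_rescale_charges([2.0, -1.0, -1.0]))
```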

  14. Strategic Environmental Assessment Framework for Landscape-Based, Temporal Analysis of Wetland Change in Urban Environments.

    PubMed

    Sizo, Anton; Noble, Bram F; Bell, Scott

    2016-03-01

    This paper presents and demonstrates a spatial framework for the application of strategic environmental assessment (SEA) in the context of change analysis for urban wetland environments. The proposed framework is focused on two key stages of the SEA process: scoping and environmental baseline assessment. These stages are arguably the most information-intense phases of SEA and have a significant effect on the quality of the SEA results. The study aims to meet the needs for proactive frameworks to assess and protect wetland habitat and services more efficiently, toward the goal of advancing more intelligent urban planning and development design. The proposed framework, adopting geographic information system and remote sensing tools and applications, supports the temporal evaluation of wetland change and sustainability assessment based on landscape indicator analysis. The framework was applied to a rapidly developing urban environment in the City of Saskatoon, Saskatchewan, Canada, analyzing wetland change and land-use pressures from 1985 to 2011. The SEA spatial scale was rescaled from administrative urban planning units to an ecologically meaningful area. Landscape change assessed was based on a suite of indicators that were subsequently rolled up into a single, multi-dimensional, and easy to understand and communicate index to examine the implications of land-use change for wetland sustainability. The results show that despite the recent extremely wet period in the Canadian prairie region, land-use change contributed to increasing threats to wetland sustainability.

  15. Strategic Environmental Assessment Framework for Landscape-Based, Temporal Analysis of Wetland Change in Urban Environments

    NASA Astrophysics Data System (ADS)

    Sizo, Anton; Noble, Bram F.; Bell, Scott

    2016-03-01

    This paper presents and demonstrates a spatial framework for the application of strategic environmental assessment (SEA) in the context of change analysis for urban wetland environments. The proposed framework is focused on two key stages of the SEA process: scoping and environmental baseline assessment. These stages are arguably the most information-intense phases of SEA and have a significant effect on the quality of the SEA results. The study aims to meet the needs for proactive frameworks to assess and protect wetland habitat and services more efficiently, toward the goal of advancing more intelligent urban planning and development design. The proposed framework, adopting geographic information system and remote sensing tools and applications, supports the temporal evaluation of wetland change and sustainability assessment based on landscape indicator analysis. The framework was applied to a rapidly developing urban environment in the City of Saskatoon, Saskatchewan, Canada, analyzing wetland change and land-use pressures from 1985 to 2011. The SEA spatial scale was rescaled from administrative urban planning units to an ecologically meaningful area. Landscape change assessed was based on a suite of indicators that were subsequently rolled up into a single, multi-dimensional, and easy to understand and communicate index to examine the implications of land-use change for wetland sustainability. The results show that despite the recent extremely wet period in the Canadian prairie region, land-use change contributed to increasing threats to wetland sustainability.

  16. Can superhorizon cosmological perturbations explain the acceleration of the universe?

    NASA Astrophysics Data System (ADS)

    Hirata, Christopher M.; Seljak, Uroš

    2005-10-01

    We investigate the recent suggestions by Barausse et al. and Kolb et al. that the acceleration of the universe could be explained by large superhorizon fluctuations generated by inflation. We show that no acceleration can be produced by this mechanism. We begin by showing how the application of Raychaudhuri equation to inhomogeneous cosmologies results in several “no go” theorems for accelerated expansion. Next we derive an exact solution for a specific case of initial perturbations, for which application of the Kolb et al. expressions leads to an acceleration, while the exact solution reveals that no acceleration is present. We show that the discrepancy can be traced to higher-order terms that were dropped in the Kolb et al. analysis. We proceed with the analysis of initial value formulation of general relativity to argue that causality severely limits what observable effects can be derived from superhorizon perturbations. By constructing a Riemann normal coordinate system on initial slice we show that no infrared divergence terms arise in this coordinate system. Thus any divergences found previously can be eliminated by a local rescaling of coordinates and are unobservable. We perform an explicit analysis of the variance of the deceleration parameter for the case of single-field inflation using usual coordinates and show that the infrared-divergent terms found by Barausse et al. and Kolb et al. cancel against several additional terms not considered in their analysis. Finally, we argue that introducing isocurvature perturbations does not alter our conclusion that the accelerating expansion of the universe cannot be explained by superhorizon modes.

  17. Rigorous embedding of cell dynamics simulations in the Cahn-Hilliard-Cook framework: Imposing stability and isotropy

    NASA Astrophysics Data System (ADS)

    Sevink, G. J. A.

    2015-05-01

    We have rigorously analyzed the stability of the efficient cell dynamics simulations (CDS) method by making use of the special properties of the local averaging operator ⟨⟨·⟩⟩ − · in matrix form. Besides resolving a theoretical issue that has puzzled many over the past three decades, this analysis has considerable practical value: It relates CDS directly to finite-difference approximations of the Cahn-Hilliard-Cook equations and provides a straightforward recipe for replacing the original two- or three-dimensional (2D or 3D) averaging operators in CDS by an equivalent (in terms of stability) discrete Laplacian with superior isotropy and scaling behavior. As such, we open up a route to suppress the unphysical reflection of the computational grid in CDS results (grid artifacts). We found that proper rescaling of discrete Laplacians, needed to employ them in CDS, is equivalent to introducing a well-chosen time step in CDS. In turn, our analysis provides stability conditions for phase-field simulations based on the Cahn-Hilliard-Cook equations. Subsequently, we have quantitatively compared the isotropy and scaling behavior of several discrete 2D or 3D Laplacians, thereby extending the significance of this work to general field-based methodology. We found that all considered discrete Laplacians have equivalent scaling behavior along the Cartesian directions. In addition, and somewhat surprisingly, known "isotropic" discrete Laplacians, i.e., isotropic up to fourth order in |k|, become quite anisotropic for larger wave vectors, whereas "less isotropic" discrete Laplacians (second order) are only slightly anisotropic on the whole |k| range. We identified a hard limit to the accuracy with which the discrete Laplacian can emulate the two important properties of the optimal (continuum) Laplacian, as an improvement of the isotropy, by introducing additional points to the stencil, will negatively affect the scaling behavior. Within this limitation, the discrete compact Laplacians in the DnQm class known from lattice hydrodynamics, D2Q9 in 2D and D3Q19 in 3D, are found to be optimal in terms of isotropy. However, by being only slightly anisotropic on the whole range and enabling larger time steps, the discrete Laplacians that relate to the local averaging operator of Oono and Puri (2D) and Shinozaki and Oono (3D) as well as the less familiar 3D discrete BvV Laplacian developed for dynamic density functional theory are valid alternatives.
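
    The isotropy comparison becomes concrete by evaluating a stencil's Fourier symbol along different directions at the same |k|: the continuum Laplacian gives -(kx² + ky²) in every direction, so directional spread in the symbol measures anisotropy. The sketch below does this for the standard nine-point (D2Q9-type) Laplacian; the weights are the common isotropic nine-point choice and stand in for, rather than reproduce, the paper's operators.

```python
import numpy as np

# Nine-point discrete Laplacian stencil (grid spacing 1); the weights
# are the standard isotropic nine-point choice (row sum is zero).
W = np.array([[1,   4, 1],
              [4, -20, 4],
              [1,   4, 1]]) / 6.0

def symbol(kx, ky, w=W):
    """Fourier symbol of the stencil; for the exact (continuum)
    Laplacian this would equal -(kx**2 + ky**2) in all directions."""
    s = 0.0
    for i in (-1, 0, 1):
        for j in (-1, 0, 1):
            s += w[i + 1, j + 1] * np.cos(kx * i + ky * j)
    return s

# Compare axis and diagonal directions at the same |k| to gauge anisotropy.
k = 2.0
print(symbol(k, 0.0), symbol(k / np.sqrt(2), k / np.sqrt(2)))
```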

  18. Dynamics of large-scale solar wind streams obtained by the double superposed epoch analysis

    NASA Astrophysics Data System (ADS)

    Yermolaev, Yu. I.; Lodkina, I. G.; Nikolaeva, N. S.; Yermolaev, M. Yu.

    2015-09-01

    Using the OMNI data for the period 1976-2000, we investigate the temporal profiles of 20 plasma and field parameters in the disturbed large-scale types of solar wind (SW): corotating interaction regions (CIR), interplanetary coronal mass ejections (ICME) (both magnetic cloud (MC) and Ejecta), and Sheath, as well as the interplanetary shock (IS). To take into account the different durations of SW types, we use the double superposed epoch analysis (DSEA) method: the durations of the intervals are rescaled so that the beginnings and ends of all intervals of a selected type coincide. As the analyzed SW types can interact with each other and change parameters as a result of such interaction, we separately investigate eight sequences of SW types: (1) CIR, (2) IS/CIR, (3) Ejecta, (4) Sheath/Ejecta, (5) IS/Sheath/Ejecta, (6) MC, (7) Sheath/MC, and (8) IS/Sheath/MC. The main conclusion is that the behavior of parameters in Sheath and in CIR is very similar both qualitatively and quantitatively. Both the high-speed stream (HSS) and the fast ICME play the role of pistons which push the plasma located ahead of them. The increase of speed in HSS and ICME leads at first to the formation of compression regions (CIR and Sheath, respectively) and then to IS. The occurrence of compression regions and IS increases the probability of growth of magnetospheric activity.
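
    The DSEA rescaling step is simply interpolating every interval, whatever its length, onto a common epoch axis before averaging. A minimal sketch, with the grid size and toy data as illustrative choices:

```python
import numpy as np

def double_superposed_epoch(intervals, n_points=100):
    """Rescale each interval (a 1-D parameter series of arbitrary
    length) onto a common 0..1 epoch axis by linear interpolation,
    so that beginnings and ends coincide, then average across events."""
    grid = np.linspace(0.0, 1.0, n_points)
    stack = [np.interp(grid, np.linspace(0.0, 1.0, len(s)), s)
             for s in intervals]
    return grid, np.mean(stack, axis=0)

# Toy usage: average solar-wind speed profiles over three "CIR"
# intervals of different durations.
toy = [np.linspace(350, 600, n) + np.random.randn(n) for n in (40, 55, 72)]
epoch, mean_profile = double_superposed_epoch(toy)
```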

  19. Source data supported high resolution carbon emissions inventory for urban areas of the Beijing-Tianjin-Hebei region: Spatial patterns, decomposition and policy implications.

    PubMed

    Cai, Bofeng; Li, Wanxin; Dhakal, Shobhakar; Wang, Jianghao

    2018-01-15

    This paper developed internationally compatible methods for delineating boundaries of urban areas in China. By integrating emission source data with existing official statistics and using a rescaling methodology to map the data onto a 1 km grid, the authors constructed high resolution gridded emission data for the Beijing-Tianjin-Hebei (Jing-Jin-Ji) region of China for 2012. Comparisons of carbon emissions from industry, agriculture, households, and transport between urban and non-urban areas exhibited regional disparities as well as sectoral differences. Except for the Hebei province, per capita total direct carbon emissions from urban extents in Beijing and Tianjin were both lower than provincial averages, indicating the climate benefit of urbanization, comparable to results from developed countries. Urban extents in the Hebei province were mainly industrial centers while those in Beijing and Tianjin were more service oriented. Further decomposition analysis revealed population to be a common major driver for increased carbon emissions, but the climate implications of urban design, economic productivity of land use, and carbon intensity of GDP were both cluster- and sector-specific. This study argues against a one-size-fits-all solution for carbon mitigation and calls for down-scaled analysis of carbon emissions and the formulation of localized carbon reduction strategies in the Jing-Jin-Ji as well as other regions in China.

  20. MULTIFRACTAL STRUCTURES DETECTED BY VOYAGER 1 AT THE HELIOSPHERIC BOUNDARIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macek, W. M.; Wawrzaszek, A.; Burlaga, L. F., E-mail: macek@cbk.waw.pl, E-mail: anna.wawrzaszek@cbk.waw.pl, E-mail: lburlagahsp@verizon.net

    To better understand the dynamics of turbulent systems, we have proposed a phenomenological model based on a generalized Cantor set with two rescaling and one weight parameters. In this Letter, using recent Voyager 1 magnetic field data, we extend our two-scale multifractal analysis further in the heliosheath beyond the heliospheric termination shock, and even now near the heliopause, when entering the interstellar medium for the first time in human history. We have identified the scaling inertial region for magnetized heliospheric plasma between the termination shock and the heliopause. We also show that the degree of multifractality decreases with the heliocentric distance and is still modulated by the phases of the solar cycle in the entire heliosphere including the heliosheath. Moreover, we observe the change of scaling toward a nonintermittent (nonmultifractal) behavior in the nearby interstellar medium, just beyond the heliopause. We argue that this loss of multifractal behavior could be a signature of the expected crossing of the heliopause by Voyager 2 in the near future. The results obtained demonstrate that our phenomenological multifractal model exhibits some properties of intermittent turbulence in the solar system plasmas, and we hope that it could shed light on universal characteristics of turbulence.

  1. Closer look at time averages of the logistic map at the edge of chaos

    NASA Astrophysics Data System (ADS)

    Tirnakli, Ugur; Tsallis, Constantino; Beck, Christian

    2009-05-01

    The probability distribution of sums of iterates of the logistic map at the edge of chaos has been recently shown [U. Tirnakli, Phys. Rev. E 75, 040106(R) (2007)] to be numerically consistent with a q-Gaussian, the distribution which, under appropriate constraints, maximizes the nonadditive entropy Sq, which is the basis of nonextensive statistical mechanics. This analysis was based on a study of the tails of the distribution. We now check the entire distribution, in particular, its central part. This is important in view of a recent q-generalization of the central limit theorem, which states that for certain classes of strongly correlated random variables the rescaled sum approaches a q-Gaussian limit distribution. We numerically investigate for the logistic map with a parameter in a small vicinity of the critical point under which conditions there is convergence to a q-Gaussian both in the central region and in the tail region and find a scaling law involving the Feigenbaum constant δ. Our results are consistent with a large number of already available analytical and numerical evidences that the edge of chaos is well described in terms of the entropy Sq and its associated concepts.
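
    The basic numerical experiment is easy to reproduce in outline: iterate the map near the edge of chaos, sum blocks of iterates over many initial conditions, and rescale the sums before comparing with a q-Gaussian. A sketch; the parameter value, block length, and ensemble size are illustrative choices, not those of the paper.

```python
import numpy as np

# Sums of N iterates of the logistic map x_{n+1} = 1 - a*x_n**2 at a
# parameter near the edge of chaos (Feigenbaum point a* ~ 1.4011551890),
# rescaled to zero mean and unit variance.
a, N, n_ic = 1.4011551890, 2**12, 1000
rng = np.random.default_rng(0)
sums = np.empty(n_ic)
for k in range(n_ic):
    x = rng.uniform(-1.0, 1.0)   # uniformly sampled initial condition
    s = 0.0
    for _ in range(N):
        x = 1.0 - a * x * x
        s += x
    sums[k] = s
y = (sums - sums.mean()) / sums.std()   # rescaled sums
# A histogram of y can then be compared with a q-Gaussian,
# P(y) ~ (1 + (q - 1) * beta * y**2) ** (1 / (1 - q)),
# in both the central region and the tails.
```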

  2. Oceanic ensemble forecasting in the Gulf of Mexico: An application to the case of the Deep Water Horizon oil spill

    NASA Astrophysics Data System (ADS)

    Khade, Vikram; Kurian, Jaison; Chang, Ping; Szunyogh, Istvan; Thyng, Kristen; Montuoro, Raffaele

    2017-05-01

    This paper demonstrates the potential of ocean ensemble forecasting in the Gulf of Mexico (GoM). The Bred Vector (BV) technique with a one-week rescaling frequency is implemented on a 9 km resolution version of the Regional Ocean Modelling System (ROMS). Numerical experiments are carried out by using the HYCOM analysis products to define the initial conditions and the lateral boundary conditions. The growth rates of the forecast uncertainty are estimated to be about 10% of the initial amplitude per week. By carrying out ensemble forecast experiments with and without perturbed surface forcing, it is demonstrated that in the coastal regions accounting for uncertainties in the atmospheric forcing is more important than accounting for uncertainties in the ocean initial conditions. In the Loop Current region, the initial condition uncertainties are the dominant source of the forecast uncertainty. The root-mean-square error of the Lagrangian track forecasts at the 15-day forecast lead time can be reduced by about 10-50 km by using the ensemble mean Eulerian forecast of the oceanic flow for the computation of the tracks, instead of the single-initial-condition Eulerian forecast.
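
    The BV rescaling cycle itself is simple: run a control and a perturbed forecast over each rescaling interval, take the difference, and shrink it back to a prescribed amplitude to seed the next cycle. A schematic sketch under those assumptions; `model`, the state vectors, and the norm are all illustrative, not ROMS-specific.

```python
import numpy as np

def breed(model, state, pert0, amplitude, n_cycles):
    """Bred Vector sketch: `model` advances a state vector over one
    rescaling interval (one week in the paper). Each cycle, the
    control/perturbed difference is rescaled to the fixed amplitude."""
    pert = pert0
    for _ in range(n_cycles):
        control = model(state)
        perturbed = model(state + pert)
        diff = perturbed - control
        pert = amplitude * diff / np.linalg.norm(diff)  # rescale
        state = control
    return pert   # the bred vector: a flow-dependent perturbation
```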

  3. Spatial evolutionary games with weak selection.

    PubMed

    Nanda, Mridu; Durrett, Richard

    2017-06-06

    Recently, a rigorous mathematical theory has been developed for spatial games with weak selection, i.e., when the payoff differences between strategies are small. The key to the analysis is that when space and time are suitably rescaled, the spatial model converges to the solution of a partial differential equation (PDE). This approach can be used to analyze all 2×2 games, but there are a number of 3×3 games for which the behavior of the limiting PDE is not known. In this paper, we give rules for determining the behavior of a large class of 3×3 games and check their validity using simulation. In words, the effect of space is equivalent to making changes in the payoff matrix, and once this is done, the behavior of the spatial game can be predicted from the behavior of the replicator equation for the modified game. We say predicted here because in some cases the behavior of the spatial game is different from that of the replicator equation for the modified game. For example, if a rock-paper-scissors game has a replicator equation that spirals out to the boundary, space stabilizes the system and produces an equilibrium.
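
    The prediction recipe described above reduces to integrating the replicator ODE, dx_i/dt = x_i((Ax)_i - x·Ax), for a payoff matrix adjusted for the effect of space. A minimal sketch of that final step; the modified matrix below is a placeholder standing in for a spatially corrected rock-paper-scissors game, not the paper's actual correction.

```python
import numpy as np
from scipy.integrate import solve_ivp

def replicator(t, x, A):
    """Replicator dynamics: dx_i/dt = x_i * ((A x)_i - x . A x)."""
    f = A @ x
    return x * (f - x @ f)

# Placeholder 3x3 payoff matrix for a modified rock-paper-scissors game.
A_mod = np.array([[0.0, -0.9, 1.1],
                  [1.1, 0.0, -0.9],
                  [-0.9, 1.1, 0.0]])
sol = solve_ivp(replicator, (0.0, 200.0), [0.5, 0.3, 0.2], args=(A_mod,),
                rtol=1e-8, atol=1e-10)
print(sol.y[:, -1])   # long-time strategy frequencies
```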

  4. Spatial evolutionary games with weak selection

    PubMed Central

    Nanda, Mridu; Durrett, Richard

    2017-01-01

    Recently, a rigorous mathematical theory has been developed for spatial games with weak selection, i.e., when the payoff differences between strategies are small. The key to the analysis is that when space and time are suitably rescaled, the spatial model converges to the solution of a partial differential equation (PDE). This approach can be used to analyze all 2×2 games, but there are a number of 3×3 games for which the behavior of the limiting PDE is not known. In this paper, we give rules for determining the behavior of a large class of 3×3 games and check their validity using simulation. In words, the effect of space is equivalent to making changes in the payoff matrix, and once this is done, the behavior of the spatial game can be predicted from the behavior of the replicator equation for the modified game. We say predicted here because in some cases the behavior of the spatial game is different from that of the replicator equation for the modified game. For example, if a rock–paper–scissors game has a replicator equation that spirals out to the boundary, space stabilizes the system and produces an equilibrium. PMID:28533405

  5. Integrable subsectors from holography

    NASA Astrophysics Data System (ADS)

    de Mello Koch, Robert; Kim, Minkyoo; Van Zyl, Hendrik J. R.

    2018-05-01

    We consider operators in N=4 super Yang-Mills theory dual to closed string states propagating on a class of LLM geometries. The LLM geometries we consider are specified by a boundary condition that is a set of black rings on the LLM plane. When projected to the LLM plane, the closed strings are polygons with all corners lying on the outer edge of a single ring. The large N limit of correlators of these operators receives contributions from non-planar diagrams even for the leading large N dynamics. Our interest in these fluctuations is because a previous weak coupling analysis argues that the net effect of summing the huge set of non-planar diagrams, is a simple rescaling of the 't Hooft coupling. We carry out some nontrivial checks of this proposal. Using the su(2|2)² symmetry we determine the two magnon S-matrix and demonstrate that it agrees, up to two loops, with a weak coupling computation performed in the CFT. We also compute the first finite size corrections to both the magnon and the dyonic magnon by constructing solutions to the Nambu-Goto action that carry finite angular momentum. These finite size computations constitute a strong coupling confirmation of the proposal.

  6. Bose-Einstein condensation in diamond hierarchical lattices.

    PubMed

    Lyra, M L; de Moura, F A B F; de Oliveira, I N; Serva, M

    2014-05-01

    The Bose-Einstein condensation of noninteracting particles restricted to move on the sites of hierarchical diamond lattices is investigated. Using a tight-binding single-particle Hamiltonian with properly rescaled hopping amplitudes, we are able to employ an orthogonal basis transformation to exactly map it on a set of decoupled linear chains with sizes and degeneracies written in terms of the network branching parameter q and generation number n. The integrated density of states is shown to have a fractal structure of gaps and degeneracies with a power-law decay at the band bottom. The spectral dimension d_s coincides with the network topological dimension d_f = ln(2q)/ln(2). We perform a finite-size scaling analysis of the fraction of condensed particles and specific heat to characterize the critical behavior of the BEC transition that occurs for q > 2 (d_s > 2). The critical exponents are shown to follow those for lattices with a pure power-law spectral density, with non-mean-field values for q < 8 (d_s < 4). The transition temperature is shown to grow monotonically with the branching parameter, obeying the relation 1/T_c = a + b/(q - 2).

  7. Far-UV Spectroscopy of the Planet-hosting Star WASP-13: High-energy Irradiance, Distance, Age, Planetary Mass-loss Rate, and Circumstellar Environment

    NASA Astrophysics Data System (ADS)

    Fossati, L.; France, K.; Koskinen, T.; Juvan, I. G.; Haswell, C. A.; Lendl, M.

    2015-12-01

    Several transiting hot Jupiters orbit relatively inactive main-sequence stars. For some of those, the log R′HK activity parameter lies below the basal level (-5.1). Two explanations have been proposed so far: (i) the planet affects the stellar dynamo, (ii) the log R′HK measurements are biased by extrinsic absorption, either by the interstellar medium (ISM) or by material local to the system. We present here Hubble Space Telescope/COS far-UV spectra of WASP-13, which hosts an inflated hot Jupiter and has a measured log R′HK value (-5.26), well below the basal level. From the star's spectral energy distribution we obtain an extinction E(B - V) = 0.045 ± 0.025 mag and a distance d = 232 ± 8 pc. We detect at ≳4σ lines belonging to three different ionization states of carbon (C i, C ii, and C iv) and the Si iv doublet at ~3σ. Using far-UV spectra of nearby early G-type stars of known age, we derive a C iv/C i flux ratio-age relation, from which we estimate WASP-13's age to be 5.1 ± 2.0 Gyr. We rescale the solar irradiance reference spectrum to match the flux of the C iv 1548 doublet. By integrating the rescaled solar spectrum, we obtain an XUV flux at 1 AU of 5.4 erg s⁻¹ cm⁻². We use a detailed model of the planet's upper atmosphere, deriving a mass-loss rate of 1.5 × 10¹¹ g s⁻¹. Despite the low log R′HK value, the star shows a far-UV spectrum typical of middle-aged solar-type stars, pointing toward the presence of significant extrinsic absorption. The analysis of a high-resolution spectrum of the Ca ii H&K lines indicates that ISM absorption could be the origin of the low log R′HK value. Nevertheless, the large uncertainty in the Ca ii ISM abundance does not allow us to firmly exclude the presence of circumstellar gas. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained from MAST at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with program #13859.

  8. Fractal And Multi-fractal Analysis Of The Hydraulic Property Variations Of Karst Aquifers

    NASA Astrophysics Data System (ADS)

    Majone, B.; Bellin, A.; Borsato, A.

    Karst aquifers are very heterogeneous systems with hydraulic property variations acting at several continuous and discrete scales, as a result of the fact that macro-structural elements, such as faults and karst channels, and fractures are intertwined in a complex, and largely unknown, manner. Many experimental studies on karst springs showed that the recession limb of the typical storm hydrograph can be divided into several regions with different decreasing rates, suggesting that the discharge is composed of contributions experiencing different travel times. Although the importance of karst aquifers as a source of fresh water for most Mediterranean countries has fostered the attention of scientists and practitioners, the mechanisms controlling runoff production in such a complex subsurface environment need to be further explored. A detailed survey, lasting for one year and conducted by the Museo Tridentino di Scienze Naturali of Trento, represents a unique opportunity to analyze the imprint of hydraulic property variations on the hydrological signal recorded at the spring of Prese Val, located in the Dolomiti group near Trento. Data include water discharge (Q), temperature (T) and electric conductivity of water (E). Analysis of the data revealed that the power spectrum of E scales as 1/f^α, with α slightly, but significantly, smaller than 1. The scaling nature of the E signal has been confirmed by rescaled range analysis of the time series. Since the electric conductivity is proportional to the concentration of ions in the spring water, which increases with the residence time, one may conclude that the fractal structure of the E signal is the consequence of a similar structure in the hydraulic property variations. This finding confirms previous results of Kirchner et al. (2000), who reported a similar behavior for chloride concentration in the streamflow of three small Welsh catchments. A more detailed analysis revealed that E and T are both multifractal signals, suggesting that transport is controlled by hydraulic property variations spanning several scales of variability. However, the travel time distribution is also shaped by the spatial variability of the dissolution rate and of the rainfall, as well as by the occurrence of rate-limited dissolution processes. These phenomena may conspire to hide the imprint of the hydraulic property variations on the observed signal, complicating the inference of the geostatistical model of hydraulic property variations from the E signal. The discharge at Prese Val shows a multiscale power spectrum with convexity directed upward, such that the low frequency, long range contributions to discharge are characterized by a much smaller slope than the high frequency contributions, which are characterized by much shorter travel times. This interpretation is consistent with the overall structure of karst aquifers, which is composed of the intertwined arrangement of macro-structures, such as faults and karstic channels, and small-scale diffused fractures, the latter showing a fractal dimension much smaller than that of the former.
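
    Rescaled range analysis of the kind used here estimates long-range persistence by comparing the range of cumulative departures from the mean with the standard deviation over windows of increasing length; the slope of log(R/S) against log(window) estimates the Hurst exponent H, with H > 0.5 indicating persistence. A self-contained sketch; the window sizes and the white-noise test signal are arbitrary choices.

```python
import numpy as np

def rescaled_range(series, window):
    """Classical Hurst rescaled-range statistic R/S for one window size."""
    x = np.asarray(series, dtype=float)
    n = len(x) // window
    rs = []
    for seg in x[:n * window].reshape(n, window):
        dev = np.cumsum(seg - seg.mean())      # cumulative departures
        r = dev.max() - dev.min()              # adjusted range
        s = seg.std()
        if s > 0:
            rs.append(r / s)
    return np.mean(rs)

def hurst_exponent(series, windows=(16, 32, 64, 128, 256)):
    """Slope of log(R/S) vs log(window); H > 0.5 signals long-range
    persistence of the kind discussed above."""
    lw, lrs = zip(*[(np.log(w), np.log(rescaled_range(series, w)))
                    for w in windows])
    return np.polyfit(lw, lrs, 1)[0]

# White noise should give H near 0.5.
print(hurst_exponent(np.random.randn(4096)))
```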

  9. Shifting stream planform state decreases stream productivity yet increases riparian animal production.

    PubMed

    Venarsky, Michael P; Walters, David M; Hall, Robert O; Livers, Bridget; Wohl, Ellen

    2018-05-01

    In the Colorado Front Range (USA), disturbance history dictates stream planform. Undisturbed, old-growth streams have multiple channels and large amounts of wood and depositional habitat. Disturbed streams (wildfires and logging < 200 years ago) are single-channeled with mostly erosional habitat. We tested how these opposing stream states influenced organic matter, benthic macroinvertebrate secondary production, emerging aquatic insect flux, and riparian spider biomass. Organic matter and macroinvertebrate production did not differ among sites per unit area (m⁻²), but values were 2×-21× higher in undisturbed reaches per unit of stream valley (m⁻¹ valley) because total stream area was higher in undisturbed reaches. Insect emergence was similar among streams at the per unit area and per unit of stream valley. However, rescaling insect emergence to per meter of stream bank showed that the emerging insect biomass reaching the stream bank was lower in undisturbed sites because multi-channel reaches had 3× more stream bank than single-channel reaches. Riparian spider biomass followed the same pattern as emerging aquatic insects, and we attribute this to bottom-up limitation caused by the multi-channeled undisturbed sites diluting prey quantity (emerging insects) reaching the stream bank (riparian spider habitat). These results show that historic landscape disturbances continue to influence stream and riparian communities in the Colorado Front Range. However, these legacy effects are only weakly influencing habitat-specific function and instead are primarily influencing stream-riparian community productivity by dictating both stream planform (total stream area, total stream bank length) and the proportional distribution of specific habitat types (pools vs riffles).

  10. Rugged, Low Cost, Environmental Sensors for a Turbulent World

    NASA Astrophysics Data System (ADS)

    Schulz, B.; Sandell, C. T.; Wickert, A. D.

    2017-12-01

    Ongoing scientific research and resource management require a diverse range of high-quality and low-cost sensors to maximize the number and type of measurements that can be obtained. To accomplish this, we have developed a series of diversified sensors for common environmental applications. The TP-DownHole is an ultra-compact temperature and pressure sensor designed for use in CMT (Continuous Multi-channel Tubing) multi-level wells. Its 1 mm water depth resolution, 30 cm altitude resolution, and rugged design make it ideal for both water level measurements and monitoring barometric pressure and associated temperature changes. The TP-DownHole sensor has also been incorporated into a self-contained, fully independent data recorder for extreme and remote environments. This device (the TP-Solo) is based around the TP-DownHole design, but has self-contained power and data storage and is designed to collect data independently for up to 6 months (logging once an hour), creating a specialized tool for extreme environment data collection. To gather spectral information, we have also developed a very low cost photodiode-based Lux sensor to measure spectral irradiance; while this does not measure the entire solar radiation spectrum, simple modeling to rescale the remainder of the solar spectrum makes this a cost-effective alternative to a thermopile pyranometer. Lastly, we have developed an instrumentation amplifier which is designed to interface a wide range of sensitive instruments to common data logging systems, such as thermopile pyranometers, thermocouples, and many other analog output sensors. These three instruments are the first in a diverse family aimed to give researchers a set of powerful and low-cost tools for environmental instrumentation.

  11. Shifting stream planform state decreases stream productivity yet increases riparian animal production

    USGS Publications Warehouse

    Venarsky, Michael P.; Walters, David M.; Hall, Robert O.; Livers, Bridget; Wohl, Ellen

    2018-01-01

    In the Colorado Front Range (USA), disturbance history dictates stream planform. Undisturbed, old-growth streams have multiple channels and large amounts of wood and depositional habitat. Disturbed streams (wildfires and logging < 200 years ago) are single-channeled with mostly erosional habitat. We tested how these opposing stream states influenced organic matter, benthic macroinvertebrate secondary production, emerging aquatic insect flux, and riparian spider biomass. Organic matter and macroinvertebrate production did not differ among sites per unit area (m−2), but values were 2×–21× higher in undisturbed reaches per unit of stream valley (m−1 valley) because total stream area was higher in undisturbed reaches. Insect emergence was similar among streams at both the per-unit-area and per-unit-of-stream-valley scales. However, rescaling insect emergence to per meter of stream bank showed that the emerging insect biomass reaching the stream bank was lower in undisturbed sites because multi-channel reaches had 3× more stream bank than single-channel reaches. Riparian spider biomass followed the same pattern as emerging aquatic insects, and we attribute this to bottom-up limitation caused by the multi-channeled undisturbed sites diluting prey quantity (emerging insects) reaching the stream bank (riparian spider habitat). These results show that historic landscape disturbances continue to influence stream and riparian communities in the Colorado Front Range. However, these legacy effects are only weakly influencing habitat-specific function and instead are primarily influencing stream–riparian community productivity by dictating both stream planform (total stream area, total stream bank length) and the proportional distribution of specific habitat types (pools vs riffles).

  12. Inverse curvature flows in asymptotically Robertson Walker spaces

    NASA Astrophysics Data System (ADS)

    Kröner, Heiko

    2018-04-01

    In this paper we consider inverse curvature flows in a Lorentzian manifold N which is the topological product of the real numbers with a closed Riemannian manifold and equipped with a Lorentzian metric having a future singularity so that N is asymptotically Robertson Walker. The flow speeds are future directed and given by 1/F, where F is a homogeneous degree one curvature function of class (K*) of the principal curvatures, e.g. the n-th root of the Gauss curvature. We prove long-time existence of these flows and show that the flow hypersurfaces converge to smooth functions when they are rescaled with a proper factor which results from the asymptotics of the metric.

  13. Summary of current radiometric calibration coefficients for Landsat MSS, TM, ETM+, and EO-1 ALI sensors

    USGS Publications Warehouse

    Chander, G.; Markham, B.L.; Helder, D.L.

    2009-01-01

    This paper provides a summary of the current equations and rescaling factors for converting calibrated Digital Numbers (DNs) to absolute units of at-sensor spectral radiance, Top-Of-Atmosphere (TOA) reflectance, and at-sensor brightness temperature. It tabulates the necessary constants for the Multispectral Scanner (MSS), Thematic Mapper (TM), Enhanced Thematic Mapper Plus (ETM+), and Advanced Land Imager (ALI) sensors. These conversions provide a basis for standardized comparison of data in a single scene or between images acquired on different dates or by different sensors. This paper forms a needed guide for Landsat data users who now have access to the entire Landsat archive at no cost.
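
    In code, the conversions the paper tabulates reduce to a linear rescaling of the digital number followed by the standard TOA-reflectance formula. A minimal sketch follows; the gain, bias, ESUN, and geometry values are illustrative placeholders, not the coefficients from the paper's tables:

    ```python
    import math

    def dn_to_radiance(dn, g_rescale, b_rescale):
        """At-sensor spectral radiance from a calibrated digital number:
        L = G_rescale * DN + B_rescale  (W m^-2 sr^-1 um^-1)."""
        return g_rescale * dn + b_rescale

    def radiance_to_toa_reflectance(radiance, esun, sun_elev_deg, d_au):
        """TOA reflectance: rho = pi * L * d^2 / (ESUN * cos(theta_s)),
        with d the Earth-Sun distance in AU and theta_s the solar zenith angle."""
        theta_s = math.radians(90.0 - sun_elev_deg)
        return math.pi * radiance * d_au ** 2 / (esun * math.cos(theta_s))

    # Placeholder values; real ones come from the scene metadata and the
    # paper's tables for the specific sensor and band.
    L = dn_to_radiance(dn=128, g_rescale=0.7757, b_rescale=-6.2)
    rho = radiance_to_toa_reflectance(L, esun=1983.0, sun_elev_deg=45.0, d_au=1.01)
    print(f"radiance = {L:.2f}, TOA reflectance = {rho:.4f}")
    ```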

  14. Summary of Current Radiometric Calibration Coefficients for Landsat MSS, TM, ETM+, and EO-1 ALI Sensors

    NASA Technical Reports Server (NTRS)

    Chander, Gyanesh; Markham, Brian L.; Helder, Dennis L.

    2009-01-01

    This paper provides a summary of the current equations and rescaling factors for converting calibrated Digital Numbers (DNs) to absolute units of at-sensor spectral radiance, Top-Of-Atmosphere (TOA) reflectance, and at-sensor brightness temperature. It tabulates the necessary constants for the Multispectral Scanner (MSS), Thematic Mapper (TM), Enhanced Thematic Mapper Plus (ETM+), and Advanced Land Imager (ALI) sensors. These conversions provide a basis for standardized comparison of data in a single scene or between images acquired on different dates or by different sensors. This paper forms a needed guide for Landsat data users who now have access to the entire Landsat archive at no cost.

  15. Multi-fidelity methods for uncertainty quantification in transport problems

    NASA Astrophysics Data System (ADS)

    Tartakovsky, G.; Yang, X.; Tartakovsky, A. M.; Barajas-Solano, D. A.; Scheibe, T. D.; Dai, H.; Chen, X.

    2016-12-01

    We compare several multi-fidelity approaches for uncertainty quantification in flow and transport simulations that have a lower computational cost than the standard Monte Carlo method. The cost reduction is achieved by combining a small number of high-resolution (high-fidelity) simulations with a large number of low-resolution (low-fidelity) simulations. We propose a new method, the re-scaled Multi Level Monte Carlo (rMLMC) method. The rMLMC method is based on the idea that the statistics of quantities of interest depend on scale/resolution. We compare rMLMC with existing multi-fidelity methods such as Multi Level Monte Carlo (MLMC) and reduced basis methods and discuss the advantages of each approach.
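
    For context, a minimal two-level sketch of the standard MLMC estimator that rMLMC generalizes; the solve function below is a hypothetical stand-in for a flow/transport solver, not the authors' code:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def solve(xi, resolution):
        # Stand-in for a flow/transport solver: returns a quantity of
        # interest for random input xi at a given grid resolution.
        # (Toy model: finer grids add a small correction term.)
        return np.sin(xi) + np.cos(3 * xi) / resolution

    # Level 0: many cheap low-resolution runs
    xi0 = rng.normal(size=10_000)
    q0 = solve(xi0, resolution=16)

    # Level 1: few expensive runs of *both* fidelities on the same inputs,
    # estimating the correction E[Q_fine - Q_coarse]
    xi1 = rng.normal(size=100)
    corr = solve(xi1, resolution=256) - solve(xi1, resolution=16)

    mlmc_estimate = q0.mean() + corr.mean()
    print(f"two-level MLMC estimate of E[Q]: {mlmc_estimate:.4f}")
    ```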

  16. Condensate oscillations in a Penrose tiling lattice

    NASA Astrophysics Data System (ADS)

    Akdeniz, Z.; Vignolo, P.

    2017-07-01

    We study the dynamics of a Bose-Einstein condensate subject to a particular Penrose tiling lattice. In such a lattice, the potential energy at each site depends on the neighbour sites, according to the model introduced by Sutherland [16]. The Bose-Einstein wavepacket, initially at rest at the lattice symmetry center, is released. We observe a very complex time-evolution that strongly depends on the symmetry center (two choices are possible), on the potential energy landscape dispersion, and on the interaction strength. The condensate width oscillates at different frequencies, and we can identify large-frequency reshaping oscillations and low-frequency rescaling oscillations. We discuss under which conditions these oscillations are spatially bounded, denoting a self-trapping dynamics.

  17. Reducing charging effects in scanning electron microscope images by Rayleigh contrast stretching method (RCS).

    PubMed

    Wan Ismail, W Z; Sim, K S; Tso, C P; Ting, H Y

    2011-01-01

    To reduce undesirable charging effects in scanning electron microscope images, Rayleigh contrast stretching is developed and employed. First, re-scaling is performed on the input image histograms with the Rayleigh algorithm. Then, contrast stretching or contrast adjustment is implemented to improve the images while reducing the contrast charging artifacts. This technique has been compared with some existing histogram equalization (HE) extension techniques: recursive sub-image HE, contrast stretching dynamic HE, multipeak HE, and recursive mean separate HE. Other post-processing methods, such as the wavelet approach, spatial filtering, and exponential contrast stretching, are compared as well. Overall, the proposed method produces better image compensation in reducing charging artifacts.
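
    A minimal sketch of the core re-scaling step, mapping the image's empirical histogram onto a Rayleigh distribution before stretching; the sigma parameter and the synthetic test image are illustrative assumptions, not the paper's settings:

    ```python
    import numpy as np

    def rayleigh_stretch(img, sigma=0.4):
        """Re-scale an image's histogram toward a Rayleigh distribution,
        then stretch the result to the full 8-bit display range."""
        flat = img.ravel().astype(np.float64)
        # Empirical CDF of the input grey levels (rank-based, in (0, 1))
        ranks = flat.argsort().argsort()
        cdf = (ranks + 1) / (flat.size + 1)
        # Inverse Rayleigh CDF: x = sigma * sqrt(-2 ln(1 - F))
        mapped = sigma * np.sqrt(-2.0 * np.log1p(-cdf))
        # Contrast stretch to [0, 255]
        mapped = (mapped - mapped.min()) / (mapped.max() - mapped.min())
        return (255 * mapped).reshape(img.shape).astype(np.uint8)

    # Usage on a synthetic SEM-like image with a bright "charged" region:
    img = np.clip(np.random.normal(100, 20, (256, 256)), 0, 255)
    img[64:128, 64:128] += 80
    out = rayleigh_stretch(img)
    ```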

  18. Mechanical design principles of a mitotic spindle.

    PubMed

    Ward, Jonathan J; Roque, Hélio; Antony, Claude; Nédélec, François

    2014-12-18

    An organised spindle is crucial to the fidelity of chromosome segregation, but the relationship between spindle structure and function is not well understood in any cell type. The anaphase B spindle in fission yeast has a slender morphology and must elongate against compressive forces. This 'pushing' mode of chromosome transport renders the spindle susceptible to breakage, as observed in cells with a variety of defects. Here we perform electron tomographic analyses of the spindle, which suggest that it organises a limited supply of structural components to increase its compressive strength. Structural integrity is maintained throughout the spindle's fourfold elongation by organising microtubules into a rigid transverse array, preserving correct microtubule number and dynamically rescaling microtubule length.

  19. Alignment theory of parallel-beam computed tomography image reconstruction for elastic-type objects using virtual focusing method.

    PubMed

    Jun, Kyungtaek; Kim, Dongwook

    2018-01-01

    X-ray computed tomography has been studied in various fields. Considerable effort has been focused on reconstructing the projection image set from a rigid-type specimen. However, reconstruction of images projected from an object showing elastic motion has received minimal attention. In this paper, a mathematical solution to reconstructing the projection image set obtained from an object with specific elastic motions (periodically, regularly, and elliptically expanded or contracted specimens) is proposed. To reconstruct the projection image set from expanded or contracted specimens, methods are presented for detection of the sample's motion modes, mathematical rescaling of pixel values, and conversion of the projection angle for a common layer.

  20. QX MAN: Q and X file manipulation

    NASA Technical Reports Server (NTRS)

    Krein, Mark A.

    1992-01-01

    QX MAN is a grid and solution file manipulation program written primarily for the PARC code and the GRIDGEN family of grid generation codes. QX MAN combines many of the features frequently encountered in grid generation, grid refinement, the setting-up of initial conditions, and post processing. QX MAN allows the user to manipulate single block and multi-block grids (and their accompanying solution files) by splitting, concatenating, rotating, translating, re-scaling, and stripping or adding points. In addition, QX MAN can be used to generate an initial solution file for the PARC code. The code was written to provide several formats for input and output in order for it to be useful in a broad spectrum of applications.

  1. Superintegrable three-body systems on the line

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chanu, Claudia; Degiovanni, Luca; Rastelli, Giovanni

    2008-11-15

    We consider classical three-body interactions on a Euclidean line depending on the reciprocal distance of the particles and admitting four functionally independent quadratic in the momentum first integrals. These systems are multiseparable, superintegrable, and equivalent (up to rescalings) to a one-particle system in the three-dimensional Euclidean space. Common features of the dynamics are discussed. We show how to determine quantum symmetry operators associated with the first integrals considered here but do not analyze the corresponding quantum dynamics. The conformal multiseparability is discussed and examples of conformal first integrals are given. The systems considered here in generality include the Calogero, Wolfes, and other three-body interactions widely studied in mathematical physics.

  2. Random Walks in a One-Dimensional Lévy Random Environment

    NASA Astrophysics Data System (ADS)

    Bianchi, Alessandra; Cristadoro, Giampaolo; Lenci, Marco; Ligabò, Marilena

    2016-04-01

    We consider a generalization of a one-dimensional stochastic process known in the physical literature as Lévy-Lorentz gas. The process describes the motion of a particle on the real line in the presence of a random array of marked points, whose nearest-neighbor distances are i.i.d. and long-tailed (with finite mean but possibly infinite variance). The motion is a continuous-time, constant-speed interpolation of a symmetric random walk on the marked points. We first study the quenched random walk on the point process, proving the CLT and the convergence of all the accordingly rescaled moments. Then we derive the quenched and annealed CLTs for the continuous-time process.

  3. The FLUKA Monte Carlo code coupled with the NIRS approach for clinical dose calculations in carbon ion therapy

    NASA Astrophysics Data System (ADS)

    Magro, G.; Dahle, T. J.; Molinelli, S.; Ciocca, M.; Fossati, P.; Ferrari, A.; Inaniwa, T.; Matsufuji, N.; Ytre-Hauge, K. S.; Mairani, A.

    2017-05-01

    Particle therapy facilities often require Monte Carlo (MC) simulations to overcome intrinsic limitations of analytical treatment planning systems (TPS) related to the description of the mixed radiation field and beam interaction with tissue inhomogeneities. Some of these uncertainties may affect the computation of effective dose distributions; therefore, particle therapy dedicated MC codes should provide both absorbed and biological doses. Two biophysical models are currently applied clinically in particle therapy: the local effect model (LEM) and the microdosimetric kinetic model (MKM). In this paper, we describe the coupling of the NIRS (National Institute for Radiological Sciences, Japan) approach for clinical dose calculations to the FLUKA MC code. We moved from the implementation of the model itself to its application in clinical cases, following the NIRS approach, in which a scaling factor is introduced to rescale the (carbon-equivalent) biological dose to a clinical dose level. A high level of agreement was found with published data by exploring a range of values for the MKM input parameters, while some differences were registered in forward recalculations of NIRS patient plans, mainly attributable to differences with the analytical TPS dose engine (taken as reference) in describing the mixed radiation field (lateral spread and fragmentation). We present a tool which is being used at the Italian National Center for Oncological Hadrontherapy to support the comparison study between the NIRS clinical dose level and the LEM dose specification.

  4. Computation of Reacting Flows in Combustion Processes

    NASA Technical Reports Server (NTRS)

    Keith, Theo G., Jr.; Chen, Kuo-Huey

    1997-01-01

    The main objective of this research was to develop an efficient three-dimensional computer code for chemically reacting flows. The main computer code developed is ALLSPD-3D. The ALLSPD-3D computer program is developed for the calculation of three-dimensional, chemically reacting flows with sprays. The ALLSPD code employs a coupled, strongly implicit solution procedure for turbulent spray combustion flows. A stochastic droplet model and an efficient method for treatment of the spray source terms in the gas-phase equations are used to calculate the evaporating liquid sprays. The chemistry treatment in the code is general enough that an arbitrary number of reactions and species can be defined by the user. Also, it is written in generalized curvilinear coordinates with both multi-block and flexible internal blockage capabilities to handle complex geometries. In addition, for general industrial combustion applications, the code provides both dilution and transpiration cooling capabilities. The ALLSPD algorithm, which employs preconditioning and eigenvalue rescaling techniques, is capable of providing efficient solutions for flows with a wide range of Mach numbers. Although written for three-dimensional flows in general, the code can be used for two-dimensional and axisymmetric flow computations as well. The code is written in such a way that it can be run on various computer platforms (supercomputers, workstations, and parallel processors), and the GUI (Graphical User Interface) provides a user-friendly tool for setting up and running the code.

  5. Score distributions of gapped multiple sequence alignments down to the low-probability tail

    NASA Astrophysics Data System (ADS)

    Fieth, Pascal; Hartmann, Alexander K.

    2016-08-01

    Assessing the significance of alignment scores of optimally aligned DNA or amino acid sequences can be achieved via knowledge of the score distribution of random sequences. But this requires obtaining the distribution in the biologically relevant high-scoring region, where the probabilities are exponentially small. For gapless local alignments of infinitely long sequences this distribution is known analytically to follow a Gumbel distribution. Distributions for gapped local alignments and global alignments of finite lengths can only be obtained numerically. To obtain results for the small-probability region, specific statistical mechanics-based rare-event algorithms can be applied. In previous studies, this was achieved for pairwise alignments. They showed that, contrary to results from previous simple sampling studies, strong deviations from the Gumbel distribution occur in the case of finite sequence lengths. Here we extend the studies to multiple sequence alignments with gaps, which are much more relevant for practical applications in molecular biology. We study the distributions of scores over a large range of the support, reaching probabilities as small as 10^-160, for global and local (sum-of-pair scores) multiple alignments. We find that even after suitable rescaling, eliminating the sequence-length dependence, the distributions for multiple alignments differ from the pairwise alignment case. Furthermore, we also show that the previously discussed Gaussian correction to the Gumbel distribution needs to be refined, also for the case of pairwise alignments.

  6. Evidence for water vapor in Titan's atmosphere from ISO/SWS data

    NASA Astrophysics Data System (ADS)

    Coustenis, A.; Salama, A.; Lellouch, E.; Encrenaz, Th.; Bjoraker, G. L.; Samuelson, R. E.; de Graauw, Th.; Feuchtgruber, H.; Kessler, M. F.

    1998-08-01

    The infrared spectrum of Titan around 40 μm was recorded in the grating mode of the Short Wavelength Spectrometer (SWS) of ISO, with a resolving power of about 1900. Two emission features appear at 43.9 and 39.4 μm, where pure rotational water lines are expected. Line strengths are about 8 times the 1σ statistical noise level. The H2O vertical profile suggested by the photochemical model of Lara et al. (1996), rescaled by a factor of about 0.4 (+0.3/-0.2), is compatible with the data. The associated water mole fraction is about 8 (+6/-4) × 10^-9 at an altitude of 400 km (column density of 2.6 (+1.9/-1.6) × 10^14 mol cm^-2 above the surface). The inferred water influx at 700 km in Titan's atmosphere is in the range (0.8-2.8) × 10^6 mol cm^-2 s^-1. Based on observations with ISO, an ESA project with instruments funded by ESA Member States (especially the PI countries: France, Germany, the Netherlands and the United Kingdom) with the participation of NASA and ISAS. The SWS instrument (P.I. Th. de Graauw) is a joint project of SRON and MPE. RES would like to thank D. Hamilton for illuminating discussions regarding dust transport in the Saturn system.

  7. Quantitative Earthquake Prediction on Global and Regional Scales

    NASA Astrophysics Data System (ADS)

    Kossobokov, Vladimir G.

    2006-03-01

    The Earth is a hierarchy of volumes of different size. Driven by planetary convection, these volumes are involved in joint and relative movement. The movement is controlled by a wide variety of processes on and around the fractal mesh of boundary zones, and does produce earthquakes. This hierarchy of movable volumes composes a large non-linear dynamical system. Prediction of such a system in the sense of extrapolation of its trajectory into the future is futile. However, upon coarse-graining, integral empirical regularities emerge, opening possibilities of prediction in the sense of the commonly accepted consensus definition worked out in 1976 by the US National Research Council. Understanding the hierarchical nature of the lithosphere and its dynamics, based on systematic monitoring and evidence of its unified space-energy similarity at different scales, helps avoid basic errors in earthquake prediction claims and suggests rules and recipes for adequate earthquake prediction classification, comparison, and optimization. The approach has already led to the design of a reproducible intermediate-term middle-range earthquake prediction technique. Its real-time testing, aimed at prediction of the largest earthquakes worldwide, has proved beyond any reasonable doubt the effectiveness of practical earthquake forecasting. In the first approximation, the accuracy is about 1-5 years and 5-10 times the anticipated source dimension. Further analysis allows reducing the spatial uncertainty down to 1-3 source dimensions, although at a cost of additional failures-to-predict. Despite limited accuracy, considerable damage could be prevented by timely, knowledgeable use of the existing predictions and earthquake prediction strategies. The December 26, 2004 Indian Ocean disaster seems to be the first indication that the methodology, designed for prediction of M8.0+ earthquakes, can be rescaled for prediction of both smaller magnitude earthquakes (e.g., down to M5.5+ in Italy) and mega-earthquakes of M9.0+. The monitoring at regional scales may require application of a recently proposed scheme for the spatial stabilization of the intermediate-term middle-range predictions. The scheme guarantees a more objective and reliable diagnosis of times of increased probability and is less restrictive of input seismic data. It makes feasible the reestablishment of seismic monitoring aimed at prediction of large magnitude earthquakes in the Caucasus and Central Asia, which, to our regret, was discontinued in 1991. The first results of that monitoring (1986-1990) were encouraging, at least for M6.5+.

  8. Using pan-sharpened high resolution satellite data to improve impervious surfaces estimation

    NASA Astrophysics Data System (ADS)

    Xu, Ru; Zhang, Hongsheng; Wang, Ting; Lin, Hui

    2017-05-01

    Impervious surface is an important environmental and socio-economic indicator for numerous urban studies. While many studies have been conducted to estimate the area and distribution of impervious surface from satellite data, the accuracy of impervious surface estimation (ISE) is insufficient due to the high diversity of urban land cover types. This study evaluated the use of panchromatic (PAN) data in very high resolution satellite imagery for improving the accuracy of ISE by various pan-sharpening approaches, with a further comprehensive analysis of its scale effects. Three benchmark pan-sharpening approaches, Gram-Schmidt (GS), PANSHARP, and principal component analysis (PCA), were applied to WorldView-2 in three spots of Hong Kong. On-screen digitization was carried out based on Google Map, and the results were treated as reference impervious surfaces. The reference impervious surfaces and the ISE results were then re-scaled to various spatial resolutions to obtain the percentage of impervious surfaces. The correlation coefficient (CC) and root mean square error (RMSE) were adopted as quantitative accuracy indicators. The accuracy differences between the three research areas were further illustrated by the average local variance (ALV), which was used for landscape pattern analysis. The experimental results suggested that 1) the three research regions have various landscape patterns; 2) ISE accuracy extracted from pan-sharpened data was better than ISE from the original multispectral (MS) data; and 3) this improvement has noticeable scale effects across resolutions, reducing slightly as the resolution becomes coarser.
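
    A minimal sketch of the rescaling-and-scoring step: block-averaging fine binary maps to impervious-surface percentages at several cell sizes, then computing CC and RMSE (synthetic stand-in data, not the study's imagery):

    ```python
    import numpy as np

    def to_fraction(binary_map, block):
        """Re-scale a fine-resolution binary impervious-surface map to a
        coarser grid of impervious-surface percentages by block averaging."""
        h, w = binary_map.shape
        h2, w2 = h // block * block, w // block * block
        blocks = binary_map[:h2, :w2].reshape(h2 // block, block, w2 // block, block)
        return blocks.mean(axis=(1, 3)) * 100.0

    def cc_rmse(estimated, reference):
        cc = np.corrcoef(estimated.ravel(), reference.ravel())[0, 1]
        rmse = np.sqrt(np.mean((estimated - reference) ** 2))
        return cc, rmse

    # Hypothetical reference (digitized) and classified maps at fine resolution:
    rng = np.random.default_rng(1)
    ref = (rng.random((512, 512)) > 0.6).astype(float)
    est = np.clip(ref + (rng.random((512, 512)) > 0.9), 0, 1)

    for block in (5, 10, 25):  # progressively coarser cell sizes
        print(block, cc_rmse(to_fraction(est, block), to_fraction(ref, block)))
    ```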

  9. DNA Translator and Aligner: HyperCard utilities to aid phylogenetic analysis of molecules.

    PubMed

    Eernisse, D J

    1992-04-01

    DNA Translator and Aligner are molecular phylogenetics HyperCard stacks for Macintosh computers. They manipulate sequence data to provide graphical gene mapping, conversions, translations and manual multiple-sequence alignment editing. DNA Translator is able to convert documented GenBank or EMBL sequences into linearized, rescalable gene maps whose gene sequences are extractable by clicking on the corresponding map button or by selection from a scrolling list. The provided gene maps, complete with extractable sequences, consist of nine metazoan, one yeast, and one ciliate mitochondrial DNAs and three green plant chloroplast DNAs. Single or multiple sequences can be manipulated to aid in phylogenetic analysis. Sequences can be translated between nucleic acids and proteins in either direction with flexible support of alternate genetic codes and ambiguous nucleotide symbols. Multiple aligned sequence output from diverse sources can be converted to Nexus, Hennig86 or PHYLIP format for subsequent phylogenetic analysis. Input or output alignments can be examined with Aligner, a convenient accessory stack included in the DNA Translator package. Aligner is an editor for the manual alignment of up to 100 sequences that toggles between display of matched characters and normal unmatched sequences. DNA Translator also generates graphic displays of amino acid coding and codon usage frequency relative to all other, or only synonymous, codons for approximately 70 select organism-organelle combinations. Codon usage data are compatible with spreadsheet or UWGCG formats for incorporation of additional molecules of interest. The complete package is available via anonymous ftp and is free for non-commercial uses.

  10. Utility of the Mayo-Portland adaptability inventory-4 for self-reported outcomes in a military sample with traumatic brain injury.

    PubMed

    Kean, Jacob; Malec, James F; Cooper, Douglas B; Bowles, Amy O

    2013-12-01

    Objective: To investigate the psychometric properties of the Mayo-Portland Adaptability Inventory-4 (MPAI-4) obtained by self-report in a large sample of active duty military personnel with traumatic brain injury (TBI). Design: Consecutive cohort who completed the MPAI-4 as a part of a larger battery of clinical outcome measures at the time of intake to an outpatient brain injury clinic. Setting: Medical center. Participants: Consecutively referred sample of active duty military personnel (N=404) who suffered predominantly mild (n=355), but also moderate (n=37) and severe (n=12), TBI. Interventions: Not applicable. Main outcome measure: MPAI-4. Results: Initial factor analysis suggested 2 salient dimensions. In subsequent analysis, the ratio of the first and second eigenvalues (6.84:1) and parallel analysis indicated sufficient unidimensionality in 26 retained items. Iterative Rasch analysis resulted in the rescaling of the measure and the removal of 5 additional items for poor fit. The items of the final 21-item Mayo-Portland Adaptability Inventory-military were locally independent, demonstrated monotonically increasing responses, adequately fit the item response model, and permitted the identification of nearly 5 statistically distinct levels of disability in the study population. Slight mistargeting of the population resulted in the global outcome, as measured by the Mayo-Portland Adaptability Inventory-military, tending to be less reflective of very mild levels of disability. Conclusions: These data, collected in a relatively large sample of active duty service members with TBI, provide insight into the ability of patients to self-report functional impairment and the distinct effects of military deployment on outcome, providing important guidance for the meaningful measurement of outcome in this population.

  11. Revealing the Physics of Galactic Winds Through Massively-Parallel Hydrodynamics Simulations

    NASA Astrophysics Data System (ADS)

    Schneider, Evan Elizabeth

    This thesis documents the hydrodynamics code Cholla and a numerical study of multiphase galactic winds. Cholla is a massively-parallel, GPU-based code designed for astrophysical simulations that is freely available to the astrophysics community. A static-mesh Eulerian code, Cholla is ideally suited to carrying out massive simulations (> 2048^3 cells) that require very high resolution. The code incorporates state-of-the-art hydrodynamics algorithms including third-order spatial reconstruction, exact and linearized Riemann solvers, and unsplit integration algorithms that account for transverse fluxes on multidimensional grids. Operator-split radiative cooling and a dual-energy formalism for high Mach number flows are also included. An extensive test suite demonstrates Cholla's superior ability to model shocks and discontinuities, while the GPU-native design makes the code extremely computationally efficient - speeds of 5-10 million cell updates per GPU-second are typical on current hardware for 3D simulations with all of the aforementioned physics. The latter half of this work comprises a comprehensive study of the mixing between a hot, supernova-driven wind and cooler clouds representative of those observed in multiphase galactic winds. Both adiabatic and radiatively-cooling clouds are investigated. The analytic theory of cloud-crushing is applied to the problem, and adiabatic turbulent clouds are found to be mixed with the hot wind on similar timescales as the classic spherical case (4-5 t_cc) with an appropriate rescaling of the cloud-crushing time. Radiatively cooling clouds survive considerably longer, and the differences in evolution between turbulent and spherical clouds cannot be reconciled with a simple rescaling. The rapid incorporation of low-density material into the hot wind implies efficient mass-loading of hot phases of galactic winds. At the same time, the extreme compression of high-density cloud material leads to long-lived but slow-moving clumps that are unlikely to escape the galaxy.
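
    As context for the rescaling mentioned above, a minimal sketch of the classic cloud-crushing timescale, t_cc = sqrt(chi) * r_cloud / v_wind, with chi the cloud-to-wind density ratio (the standard definition in the cloud-crushing literature; the parameter values below are hypothetical):

    ```python
    import math

    def cloud_crushing_time(chi, r_cloud_pc, v_wind_km_s):
        """t_cc = sqrt(chi) * r_cloud / v_wind, returned in Myr
        (1 pc / (1 km/s) ~= 0.978 Myr)."""
        return math.sqrt(chi) * r_cloud_pc / v_wind_km_s * 0.978

    # Hypothetical wind-tunnel parameters:
    t_cc = cloud_crushing_time(chi=1000.0, r_cloud_pc=5.0, v_wind_km_s=1000.0)
    print(f"t_cc ~= {t_cc:.2f} Myr; adiabatic clouds mix in ~4-5 t_cc")
    ```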

  12. Drought monitoring with soil moisture active passive (SMAP) measurements

    NASA Astrophysics Data System (ADS)

    Mishra, Ashok; Vu, Tue; Veettil, Anoop Valiya; Entekhabi, Dara

    2017-09-01

    The recent launch of space-borne systems to estimate surface soil moisture may expand the capability to map soil moisture deficit and drought with global coverage. In this study, we use Soil Moisture Active Passive (SMAP) soil moisture geophysical retrieval products from the passive L-band radiometer to evaluate their applicability to forming agricultural drought indices. Agricultural drought is quantified using the Soil Water Deficit Index (SWDI) based on SMAP and soil properties (field capacity and available water content) information. The soil properties are computed using pedo-transfer functions with soil characteristics derived from the Harmonized World Soil Database. The SMAP soil moisture product needs to be rescaled to be compatible with the soil parameters derived from the in situ stations. In most locations, the rescaled SMAP information captured the dynamics of in situ soil moisture well and shows the expected lag between accumulation of precipitation and the delayed increase in surface soil moisture. However, the SMAP soil moisture itself does not reveal the drought information. Therefore, the SMAP-based SWDI (SMAP_SWDI) was computed to improve agricultural drought monitoring by using the latest soil moisture retrieval satellite technology. The formulation of SWDI does not depend on long data records, which overcomes the limited (short) length of SMAP data for agricultural drought studies. The SMAP_SWDI is further compared with the in situ Atmospheric Water Deficit (AWD) Index. The comparison shows close agreement between SMAP_SWDI and AWD in drought monitoring over the Contiguous United States (CONUS), especially in terms of drought characteristics. The SMAP_SWDI was used to construct drought maps for CONUS and compared with well-known drought indices, such as AWD, Palmer Z-Index, sc-PDSI, and SPEI. Overall, SMAP_SWDI is an effective agricultural drought indicator; it provides continuity and introduces new spatial mapping capability for drought monitoring. As an agricultural drought index, SMAP_SWDI has the potential to capture short-term moisture information similar to AWD and related drought indices.
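
    A minimal sketch of the two steps described above: rescaling the satellite series toward the in situ climatology (simple mean/variance matching as a stand-in for full CDF matching) and forming the SWDI from field capacity and wilting point. The SWDI formula below is a common formulation from the soil-moisture drought literature, and all numbers are hypothetical:

    ```python
    import numpy as np

    def rescale_linear(smap, insitu):
        """Linearly rescale SMAP soil moisture to the in situ climatology
        (mean/variance matching)."""
        return (smap - smap.mean()) / smap.std() * insitu.std() + insitu.mean()

    def swdi(theta, theta_fc, theta_wp):
        """SWDI = (theta - theta_fc) / (theta_fc - theta_wp) * 10.
        SWDI >= 0: no water deficit; increasingly negative values
        indicate increasing agricultural drought severity."""
        return (theta - theta_fc) / (theta_fc - theta_wp) * 10.0

    # Hypothetical soil moisture series (m^3/m^3):
    smap = np.array([0.18, 0.22, 0.15, 0.12, 0.25])
    insitu = np.array([0.24, 0.29, 0.21, 0.17, 0.31])
    theta = rescale_linear(smap, insitu)
    print(swdi(theta, theta_fc=0.30, theta_wp=0.13))
    ```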

  13. Investigating different filter and rescaling methods on simulated GRACE-like TWS variations for hydrological applications

    NASA Astrophysics Data System (ADS)

    Zhang, Liangjing; Dahle, Christoph; Neumayer, Karl-Hans; Dobslaw, Henryk; Flechtner, Frank; Thomas, Maik

    2016-04-01

    Terrestrial water storage (TWS) variations obtained from GRACE play an increasingly important role in various hydrological and hydro-meteorological applications. Since monthly-mean gravity fields are contaminated by errors caused by a number of sources with distinct spatial correlation structures, filtering is needed to remove in particular high frequency noise. Subsequently, bias and leakage caused by the filtering need to be corrected before the final results are interpreted as GRACE-based observations of TWS. Knowledge about the reliability and performance of different post-processing methods is highly important for GRACE users. In this contribution, we re-assess a number of commonly used post-processing methods using a simulated GRACE-like gravity field time-series based on realistic orbits and instrument error assumptions as well as background error assumptions out of the updated ESA Earth System Model. Two non-isotropic filter methods, from Kusche (2007) and Swenson and Wahr (2006), are tested. Rescaling factors estimated from five different hydrological models and the ensemble median are applied to the post-processed simulated GRACE-like TWS estimates to correct the bias and leakage. Since TWS anomalies out of the post-processed simulation results can be readily compared to the time-variable Earth System Model initially used as "truth" during the forward simulation step, we are able to thoroughly check the plausibility of our error estimation assessment and will subsequently recommend a processing strategy that shall also be applied to planned GRACE and GRACE-FO Level-3 products for hydrological applications provided by GFZ. References: Kusche, J. (2007): Approximate decorrelation and non-isotropic smoothing of time-variable GRACE-type gravity field models. J. Geodesy, 81(11), 733-749, doi:10.1007/s00190-007-0143-3. Swenson, S. and Wahr, J. (2006): Post-processing removal of correlated errors in GRACE data. Geophysical Research Letters, 33(8), L08402.
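
    The bias/leakage correction step usually amounts to a per-grid-cell rescaling factor estimated from a hydrological model and its filtered counterpart. A minimal least-squares sketch of that idea (an assumed, generic approach; not the authors' code):

    ```python
    import numpy as np

    def gain_factor(model_tws, filtered_model_tws):
        """Least-squares rescaling factor k minimizing
        sum((model - k * filtered_model)^2) over the time series of one
        grid cell; k is then applied to the filtered GRACE estimate."""
        f = filtered_model_tws
        return np.sum(model_tws * f) / np.sum(f * f)

    # Hypothetical monthly TWS anomalies (cm) for one grid cell:
    model = np.array([5.0, 3.2, -1.1, -4.8, -2.0, 2.9])
    filtered = 0.6 * model + np.random.default_rng(2).normal(0, 0.3, 6)
    k = gain_factor(model, filtered)
    print(f"scale factor k = {k:.2f}; corrected TWS = k * filtered GRACE TWS")
    ```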

  14. A multiresolution processing method for contrast enhancement in portal imaging.

    PubMed

    Gonzalez-Lopez, Antonio

    2018-06-18

    Portal images have a unique feature among the imaging modalities used in radiotherapy: they provide direct visualization of the irradiated volumes. However, contrast and spatial resolution are strongly limited due to the high energy of the radiation sources. Because of this, imaging modalities using x-ray energy beams have gained importance in the verification of patient positioning, replacing portal imaging. The purpose of this work was to develop a method for the enhancement of local contrast in portal images. The method operates in the subbands of a wavelet decomposition of the image, re-scaling them in such a way that coefficients in the high and medium resolution subbands are amplified, an approach totally different from the widely used methods that operate on the image histogram. Portal images of an anthropomorphic phantom were acquired with an electronic portal imaging device (EPID). Then, different re-scaling strategies were investigated, studying the effects of the scaling parameters on the enhanced images. Also, the effect of using different types of transforms was studied. Finally, the implemented methods were combined with histogram equalization methods like contrast limited adaptive histogram equalization (CLAHE), and these combinations were compared. Uniform amplification of the detail subbands shows the best results in contrast enhancement. On the other hand, linear re-scaling of the high resolution subbands increases the visibility of fine detail in the images, at the expense of an increase in noise levels. Also, since processing is applied only to the detail subbands, not to the approximation, the mean gray level of the image is minimally modified and no further display adjustments are required. It is shown that re-scaling the detail subbands of portal images can be used as an efficient method for enhancing both the local contrast and the resolution of these images.
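
    A minimal sketch of the subband re-scaling idea using PyWavelets; the wavelet choice, decomposition level, and gain are illustrative assumptions, not the paper's settings:

    ```python
    import numpy as np
    import pywt

    def enhance_portal_image(img, wavelet="db2", levels=3, gain=2.0):
        """Amplify the detail subbands of a 2-D wavelet decomposition while
        leaving the approximation untouched, so the mean grey level of the
        image is essentially preserved."""
        coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=levels)
        enhanced = [coeffs[0]]                  # approximation unchanged
        for (cH, cV, cD) in coeffs[1:]:         # coarse -> fine details
            enhanced.append((gain * cH, gain * cV, gain * cD))
        return pywt.waverec2(enhanced, wavelet)

    # Usage on a synthetic low-contrast image:
    img = np.random.poisson(200, (256, 256)).astype(float)
    out = enhance_portal_image(img, gain=2.5)
    ```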

  15. Rescaling quality of life values from discrete choice experiments for use as QALYs: a cautionary tale

    PubMed Central

    Flynn, Terry N; Louviere, Jordan J; Marley, Anthony AJ; Coast, Joanna; Peters, Tim J

    2008-01-01

    Background Researchers are increasingly investigating the potential for ordinal tasks such as ranking and discrete choice experiments to estimate QALY health state values. However, the assumptions of random utility theory, which underpin the statistical models used to provide these estimates, have received insufficient attention. In particular, the assumptions made about the decisions between living states and the death state are not satisfied, at least for some people. Estimated values are likely to be incorrectly anchored with respect to death (zero) in such circumstances. Methods Data from the Investigating Choice Experiments for the preferences of older people CAPability instrument (ICECAP) valuation exercise were analysed. The values (previously anchored to the worst possible state) were rescaled using an ordinal model proposed previously to estimate QALY-like values. Bootstrapping was conducted to vary artificially the proportion of people who conformed to the conventional random utility model underpinning the analyses. Results Only 26% of respondents conformed unequivocally to the assumptions of conventional random utility theory. At least 14% of respondents unequivocally violated the assumptions. Varying the relative proportions of conforming respondents in sensitivity analyses led to large changes in the estimated QALY values, particularly for lower-valued states. As a result these values could be either positive (considered to be better than death) or negative (considered to be worse than death). Conclusion Use of a statistical model such as conditional (multinomial) regression to anchor quality of life values from ordinal data to death is inappropriate in the presence of respondents who do not conform to the assumptions of conventional random utility theory. This is clearest when estimating values for that group of respondents observed in valuation samples who refuse to consider any living state to be worse than death: in such circumstances the model cannot be estimated. Only a valuation task requiring respondents to make choices in which both length and quality of life vary can produce estimates that properly reflect the preferences of all respondents. PMID:18945358

  16. Limited-memory fast gradient descent method for graph regularized nonnegative matrix factorization.

    PubMed

    Guan, Naiyang; Wei, Lei; Luo, Zhigang; Tao, Dacheng

    2013-01-01

    Graph regularized nonnegative matrix factorization (GNMF) decomposes a nonnegative data matrix X ∈ R^(m×n) into the product of two lower-rank nonnegative factor matrices, i.e., W ∈ R^(m×r) and H ∈ R^(r×n) (r < min{m,n}), and aims to preserve the local geometric structure of the dataset by minimizing the squared Euclidean distance or Kullback-Leibler (KL) divergence between X and WH. The multiplicative update rule (MUR) is usually applied to optimize GNMF, but it suffers from slow convergence because it intrinsically advances one step along the rescaled negative gradient direction with a non-optimal step size. Recently, a multiple step-sizes fast gradient descent (MFGD) method has been proposed for optimizing NMF which accelerates MUR by searching for the optimal step-size along the rescaled negative gradient direction with Newton's method. However, the computational cost of MFGD is high because 1) the high-dimensional Hessian matrix is dense and costs too much memory; and 2) the Hessian inverse operator and its multiplication with the gradient cost too much time. To overcome these deficiencies of MFGD, we propose an efficient limited-memory FGD (L-FGD) method for optimizing GNMF. In particular, we apply the limited-memory BFGS (L-BFGS) method to directly approximate the multiplication of the inverse Hessian and the gradient for searching the optimal step size in MFGD. The preliminary results on real-world datasets show that L-FGD is more efficient than both MFGD and MUR. To evaluate the effectiveness of L-FGD, we validate its clustering performance for optimizing KL-divergence based GNMF on two popular face image datasets, ORL and PIE, and two text corpora, Reuters and TDT2. The experimental results confirm the effectiveness of L-FGD by comparing it with representative GNMF solvers.
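
    For reference, the plain multiplicative update rule that MFGD and L-FGD accelerate, here for squared-Euclidean NMF (a minimal sketch; GNMF's graph-Laplacian term in the H update is omitted for brevity):

    ```python
    import numpy as np

    def nmf_mur(X, r, iters=200, eps=1e-9, seed=0):
        """Lee-Seung multiplicative updates for X ~ W H with W, H >= 0.
        Each update moves along the rescaled negative gradient direction
        with the fixed, non-optimal step size that MUR implies."""
        rng = np.random.default_rng(seed)
        m, n = X.shape
        W = rng.random((m, r)) + eps
        H = rng.random((r, n)) + eps
        for _ in range(iters):
            H *= (W.T @ X) / (W.T @ W @ H + eps)
            W *= (X @ H.T) / (W @ H @ H.T + eps)
        return W, H

    X = np.abs(np.random.default_rng(1).random((50, 40)))
    W, H = nmf_mur(X, r=5)
    print("reconstruction error:", np.linalg.norm(X - W @ H))
    ```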

  17. X-cube model on generic lattices: Fracton phases and geometric order

    NASA Astrophysics Data System (ADS)

    Slagle, Kevin; Kim, Yong Baek

    2018-04-01

    Fracton order is a new kind of quantum order characterized by topological excitations that exhibit remarkable mobility restrictions and a robust ground-state degeneracy (GSD) which can increase exponentially with system size. In this paper, we present a generic lattice construction (in three dimensions) for a generalized X-cube model of fracton order, where the mobility restrictions of the subdimensional particles inherit the geometry of the lattice. This helps explain a previous result that lattice curvature can produce a robust GSD, even on a manifold with trivial topology. We provide explicit examples to show that the (zero-temperature) phase of matter is sensitive to the lattice geometry. In one example, the lattice geometry confines the dimension-1 particles to small loops, which allows the fractons to be fully mobile charges, and the resulting phase is equivalent to (3+1)-dimensional toric code. However, the phase is sensitive to more than just lattice curvature; different lattices without curvature (e.g., cubic or stacked kagome lattices) also result in different phases of matter, which are separated by phase transitions. Unintuitively, however, according to a previous definition of phase [X. Chen et al., Phys. Rev. B 82, 155138 (2010), 10.1103/PhysRevB.82.155138], even just a rotated or rescaled cubic lattice results in different phases of matter, which motivates us to propose a coarser definition of phase for gapped ground states and fracton order. This equivalence relation between ground states is given by the composition of a local unitary transformation and a quasi-isometry (which can rotate and rescale the lattice); equivalently, ground states are in the same phase if they can be adiabatically connected by varying both the Hamiltonian and the positions of the degrees of freedom (via a quasi-isometry). In light of the importance of geometry, we further propose that fracton orders should be regarded as a geometric order.

  18. Disorder profile of nebulin encodes a vernierlike position sensor for the sliding thin and thick filaments of the skeletal muscle sarcomere

    NASA Astrophysics Data System (ADS)

    Wu, Ming-Chya; Forbes, Jeffrey G.; Wang, Kuan

    2016-06-01

    Nebulin is an approximately 1 μm long intrinsically disordered scaffold for the thin filaments of the skeletal muscle sarcomere. It is a multifunctional elastic protein that wraps around the actin filament, stabilizes thin filaments, and regulates Ca-dependent actomyosin interactions. This study investigates whether the disorder profile of nebulin might encode guidelines for thin and thick filament interactions in the sarcomere of the skeletal muscle. The question was addressed computationally by analyzing the predicted disorder profile of human nebulin (6669 residues, ~200 actin-binding repeats) by PONDR and the periodicity of the A-band stripes (reflecting the locations of myosin-associated proteins) in electron micrographs of the sarcomere. Using detrended fluctuation analysis, a scale factor for the A-band stripe image data with respect to the nebulin disorder profile was determined to align the thin and thick filaments for maximum correlation. The empirical mode decomposition method was then applied to identify hidden periodicities in both the nebulin disorder profile and the rescaled A-band data. The decomposition reveals three characteristic length scales (45 nm, 100 nm, and 200 nm) that are relevant for correlational analysis. The dynamical cross-correlation analyses with moving windows at various sarcomere lengths depict a vernierlike design for both periodicities, thus enabling nebulin to sense position and fine-tune sarcomere overlap. This shows that the disorder profile of scaffolding proteins may encode a guideline for cellular architecture.
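
    Detrended fluctuation analysis, used above to set the scale factor, is also a core tool of this collection's theme. A minimal generic sketch of DFA (not the study's implementation):

    ```python
    import numpy as np

    def dfa(x, scales):
        """Detrended fluctuation analysis: returns F(s) for each window
        size s. The slope of log F(s) vs log s estimates the scaling
        exponent (0.5 for uncorrelated noise, > 0.5 for persistence)."""
        y = np.cumsum(x - np.mean(x))           # integrated profile
        F = []
        for s in scales:
            n = len(y) // s
            segments = y[:n * s].reshape(n, s)
            t = np.arange(s)
            ms_residuals = []
            for seg in segments:                # linear detrend per window
                a, b = np.polyfit(t, seg, 1)
                ms_residuals.append(np.mean((seg - (a * t + b)) ** 2))
            F.append(np.sqrt(np.mean(ms_residuals)))
        return np.array(F)

    x = np.random.default_rng(0).normal(size=4096)
    scales = np.array([16, 32, 64, 128, 256])
    alpha = np.polyfit(np.log(scales), np.log(dfa(x, scales)), 1)[0]
    print(f"DFA exponent ~ {alpha:.2f} (expect ~0.5 for white noise)")
    ```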

  19. Quasinormal modes of modified gravity (MOG) black holes

    NASA Astrophysics Data System (ADS)

    Manfredi, Luciano; Mureika, Jonas; Moffat, John

    2018-04-01

    Quasinormal modes (QNMs) for gravitational and electromagnetic perturbations are calculated in a Scalar-Tensor-Vector (Modified Gravity, MOG) spacetime, which was initially proposed to obtain the correct dynamics of galaxies and galaxy clusters without the need for dark matter. It is found that for increasing model parameter α, both the real and imaginary parts of the QNMs decrease compared to those for a standard Schwarzschild black hole. On the other hand, when taking into account the 1/(1 + α) mass re-scaling factor present in MOG, Im(ω) matches that of GR almost identically, while Re(ω) is higher. These results can be identified in the ringdown phase of massive compact object mergers, and are thus timely in light of the recent gravitational wave detections by LIGO.

  20. Copy-move forgery detection utilizing Fourier-Mellin transform log-polar features

    NASA Astrophysics Data System (ADS)

    Dixit, Rahul; Naskar, Ruchira

    2018-03-01

    In this work, we address the problem of region duplication or copy-move forgery detection in digital images, along with detection of geometric transforms (rotation and rescaling) and postprocessing-based attacks (noise, blur, and brightness adjustment). Detection of region duplication, following conventional techniques, becomes more challenging when an intelligent adversary applies such additional transforms to the duplicated regions. In this work, we utilize the Fourier-Mellin transform with log-polar mapping and a color-based segmentation technique using K-means clustering, which together achieve invariance to all the above forms of attack in copy-move forgery detection of digital images. Our experimental results prove the efficiency of the proposed method and its superiority to the current state of the art.
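
    A minimal sketch of why the Fourier-Mellin/log-polar combination yields rotation- and rescale-invariant matching: rotation and scaling of an image block become translations of the log-polar map of its FFT magnitude. This is an illustrative implementation of the general technique, not the authors' code:

    ```python
    import numpy as np
    from scipy.ndimage import map_coordinates

    def logpolar_of_spectrum(block, n_r=128, n_theta=128):
        """Log-polar resampling of the centred FFT magnitude. Under this
        mapping, rescaling the block shifts the result along the
        log-radius axis and rotation shifts it along the angle axis."""
        F = np.abs(np.fft.fftshift(np.fft.fft2(block)))
        cy, cx = np.array(F.shape) / 2.0
        log_r = np.linspace(0.0, np.log(min(cy, cx)), n_r)
        theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
        rr = np.exp(log_r)[:, None]
        yy = cy + rr * np.sin(theta)[None, :]
        xx = cx + rr * np.cos(theta)[None, :]
        return map_coordinates(F, [yy, xx], order=1)

    # Feature maps of two candidate blocks could then be matched, e.g.
    # by phase correlation along the (log r, theta) axes.
    block = np.random.default_rng(0).random((64, 64))
    feat = logpolar_of_spectrum(block)
    ```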

  1. Wake excited in plasma by an ultrarelativistic pointlike bunch

    DOE PAGES

    Stupakov, G.; Breizman, B.; Khudik, V.; ...

    2016-10-05

    We study propagation of a relativistic electron bunch through a cold plasma assuming that the transverse and longitudinal dimensions of the bunch are much smaller than the plasma collisionless skin depth. Treating the bunch as a point charge and assuming that its charge is small, we derive a simplified system of equations for the plasma electrons and show that, through a simple rescaling of variables, the bunch charge can be eliminated from the equations. The equations demonstrate an ion cavity formed behind the driver. They are solved numerically and the scaling of the cavity parameters with the driver charge is obtained. As a result, a numerical solution for the case of a positively charged driver is also found.

  2. Scaling Law of Urban Ride Sharing.

    PubMed

    Tachet, R; Sagarra, O; Santi, P; Resta, G; Szell, M; Strogatz, S H; Ratti, C

    2017-03-06

    Sharing rides could drastically improve the efficiency of car and taxi transportation. Unleashing such potential, however, requires understanding how urban parameters affect the fraction of individual trips that can be shared, a quantity that we call shareability. Using data on millions of taxi trips in New York City, San Francisco, Singapore, and Vienna, we compute the shareability curves for each city, and find that a natural rescaling collapses them onto a single, universal curve. We explain this scaling law theoretically with a simple model that predicts the potential for ride sharing in any city, using a few basic urban quantities and no adjustable parameters. Accurate extrapolations of this type will help planners, transportation companies, and society at large to shape a sustainable path for urban growth.

  3. The Oseen-Frank Limit of Onsager's Molecular Theory for Liquid Crystals

    NASA Astrophysics Data System (ADS)

    Liu, Yuning; Wang, Wei

    2018-03-01

    We study the relationship between Onsager's molecular theory, which involves the effects of nonlocal molecular interactions and the Oseen-Frank theory for nematic liquid crystals. Under the molecular setting, we prove the existence of global minimizers for the generalized Onsager's free energy, subject to a nonlocal boundary condition which prescribes the second moment of the number density function near the boundary. Moreover, when the re-scaled interaction distance tends to zero, the global minimizers will converge to a uniaxial distribution predicted by a minimizing harmonic map. This is achieved through the investigations of the compactness property and the boundary behaviors of the corresponding second moments. A similar result is established for critical points of the free energy that fulfill a natural energy bound.

  4. Subnormalized states and trace-nonincreasing maps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cappellini, Valerio; Sommers, Hans-Juergen; Zyczkowski, Karol

    2007-05-15

    We investigate the set of completely positive, trace-nonincreasing linear maps acting on the set M_N of mixed quantum states of size N. Extremal points of this set of maps are characterized and its volume with respect to the Hilbert-Schmidt (HS) (Euclidean) measure is computed explicitly for an arbitrary N. The spectra of partially reduced rescaled dynamical matrices associated with trace-nonincreasing completely positive maps belong to the N-cube inscribed in the set of subnormalized states of size N. As a by-product we derive the measure in M_N induced by partial trace of mixed quantum states distributed uniformly with respect to the HS measure in M_(N^2).

  5. Scaling and Multifractality in Road Accidental Distances

    NASA Astrophysics Data System (ADS)

    Qiu, Tian; Wan, Chi; Zou, Xiang-Xiang; Wang, Xiao-Fan

    Accidental distance dynamics is investigated, based on road accident data from Great Britain. The distance distribution of all the districts taken as an ensemble presents a power-law tail, which differs from that of the individual districts. A universal distribution is found for different districts by rescaling the distribution functions of individual districts; it can be well fitted by the Weibull distribution. Male and female drivers behave similarly in the distance distribution. The multifractal characteristics are further studied for individual districts and for all the districts as an ensemble, and different behaviors are again revealed: the accidental distances of individual districts show weak multifractality, whereas all the districts taken as an ensemble present strong multifractality.
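
    A minimal sketch of the rescale-collapse-and-fit procedure on synthetic district data; the shapes, scales, and sample sizes are hypothetical, with scipy's weibull_min standing in for the Weibull fit:

    ```python
    import numpy as np
    from scipy.stats import weibull_min

    # Hypothetical accident distances for three districts sharing a common
    # shape but having different characteristic scales:
    districts = [weibull_min.rvs(1.4, scale=s, size=2000, random_state=i)
                 for i, s in enumerate((2.0, 5.0, 11.0))]

    # Rescale each district by its own mean; the rescaled samples should
    # then collapse onto a single universal curve
    rescaled = np.concatenate([d / d.mean() for d in districts])

    shape, loc, scale = weibull_min.fit(rescaled, floc=0)
    print(f"fitted Weibull shape k = {shape:.2f}, scale = {scale:.2f}")
    ```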

  6. Gauge theory for finite-dimensional dynamical systems.

    PubMed

    Gurfil, Pini

    2007-06-01

    Gauge theory is a well-established concept in quantum physics, electrodynamics, and cosmology. This concept has recently proliferated into new areas, such as mechanics and astrodynamics. In this paper, we discuss a few applications of gauge theory in finite-dimensional dynamical systems. We focus on the concept of rescriptive gauge symmetry, which is, in essence, rescaling of an independent variable. We show that a simple gauge transformation of multiple harmonic oscillators driven by chaotic processes can render an apparently "disordered" flow into a regular dynamical process, and that there exists a strong connection between gauge transformations and reduction theory of ordinary differential equations. Throughout the discussion, we demonstrate the main ideas by considering examples from diverse fields, including quantum mechanics, chemistry, rigid-body dynamics, and information theory.

  7. Role of volcanic and anthropogenic aerosols in the recent global surface warming slowdown

    NASA Astrophysics Data System (ADS)

    Smith, Doug M.; Booth, Ben B. B.; Dunstone, Nick J.; Eade, Rosie; Hermanson, Leon; Jones, Gareth S.; Scaife, Adam A.; Sheen, Katy L.; Thompson, Vikki

    2016-10-01

    The rate of global mean surface temperature (GMST) warming has slowed this century despite the increasing concentrations of greenhouse gases. Climate model experiments show that this slowdown was largely driven by a negative phase of the Pacific Decadal Oscillation (PDO), with a smaller external contribution from solar variability, and volcanic and anthropogenic aerosols. The prevailing view is that this negative PDO occurred through internal variability. However, here we show that coupled models from the Fifth Coupled Model Intercomparison Project robustly simulate a negative PDO in response to anthropogenic aerosols implying a potentially important role for external human influences. The recovery from the eruption of Mount Pinatubo in 1991 also contributed to the slowdown in GMST trends. Our results suggest that a slowdown in GMST trends could have been predicted in advance, and that future reduction of anthropogenic aerosol emissions, particularly from China, would promote a positive PDO and increased GMST trends over the coming years. Furthermore, the overestimation of the magnitude of recent warming by models is substantially reduced by using detection and attribution analysis to rescale their response to external factors, especially cooling following volcanic eruptions. Improved understanding of external influences on climate is therefore crucial to constrain near-term climate predictions.

  8. Explorations in statistics: the log transformation.

    PubMed

    Curran-Everett, Douglas

    2018-06-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This thirteenth installment of Explorations in Statistics explores the log transformation, an established technique that rescales the actual observations from an experiment so that the assumptions of some statistical analysis are better met. A general assumption in statistics is that the variability of some response Y is homogeneous across groups or across some predictor variable X. If the variability (the standard deviation) varies in rough proportion to the mean value of Y, a log transformation can equalize the standard deviations. Moreover, if the actual observations from an experiment conform to a skewed distribution, then a log transformation can make the theoretical distribution of the sample mean more consistent with a normal distribution. This is important: the results of a one-sample t test are meaningful only if the theoretical distribution of the sample mean is roughly normal. If we log-transform our observations, then we want to confirm the transformation was useful. We can do this if we use the Box-Cox method, if we bootstrap the sample mean and the statistic t itself, and if we assess the residual plots from the statistical model of the actual and transformed sample observations.
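
    A short numerical illustration of both points: the SD-equalizing effect of the log, and a Box-Cox check (an estimated lambda near 0 supports the log transform). The data are simulated:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Two groups whose standard deviations are roughly proportional to
    # their means (a common pattern in biological data):
    low = rng.lognormal(mean=1.0, sigma=0.4, size=50)
    high = rng.lognormal(mean=2.5, sigma=0.4, size=50)

    print("raw SDs:", low.std(ddof=1), high.std(ddof=1))       # very unequal
    print("log SDs:", np.log(low).std(ddof=1), np.log(high).std(ddof=1))

    # Box-Cox check: lambda near 0 indicates the log transform is apt
    _, lam = stats.boxcox(np.concatenate([low, high]))
    print("Box-Cox lambda:", lam)
    ```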

  9. The Development of Models for Carbon Dioxide Reduction Technologies for Spacecraft Air Revitalization

    NASA Technical Reports Server (NTRS)

    Swickrath, Michael J.; Anderson, Molly

    2012-01-01

    Through the respiration process, humans consume oxygen (O2) while producing carbon dioxide (CO2) and water (H2O) as byproducts. For long term space exploration, CO2 concentration in the atmosphere must be managed to prevent hypercapnia. Moreover, CO2 can be used as a source of oxygen through chemical reduction, serving to minimize the amount of oxygen required at launch. Reduction can be achieved through a number of techniques. NASA is currently exploring the Sabatier reaction, the Bosch reaction, and co-electrolysis of CO2 and H2O for this process. Proof-of-concept experiments and prototype units for all three processes have proven capable of returning useful commodities for space exploration. All three techniques have demonstrated the capacity to reduce CO2 in the laboratory, yet there is interest in understanding how all three techniques would perform at a system level within a spacecraft. Consequently, there is an impetus to develop predictive models for these processes that can be readily rescaled and integrated into larger system models. Such analysis tools provide the ability to evaluate each technique on a comparable basis with respect to processing rates. This manuscript describes the current models for the carbon dioxide reduction processes under parallel development efforts. Comparison to experimental data is provided, where available, for verification purposes.
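
    As a flavor of the mass-balance bookkeeping such models rescale, the Sabatier stoichiometry CO2 + 4 H2 -> CH4 + 2 H2O per kilogram of CO2 processed (simple arithmetic, not NASA's system model):

    ```python
    # Molar masses in g/mol
    M = {"CO2": 44.01, "H2": 2.016, "CH4": 16.04, "H2O": 18.02}

    kg_co2 = 1.0
    mol_co2 = 1000 * kg_co2 / M["CO2"]           # ~22.7 mol CO2

    print(f"H2 required : {4 * mol_co2 * M['H2'] / 1000:.3f} kg")
    print(f"H2O produced: {2 * mol_co2 * M['H2O'] / 1000:.3f} kg")
    print(f"CH4 produced: {mol_co2 * M['CH4'] / 1000:.3f} kg")
    # The product water can then be electrolyzed to recover O2 and recycle H2.
    ```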

  10. Optimal Representation of Anuran Call Spectrum in Environmental Monitoring Systems Using Wireless Sensor Networks.

    PubMed

    Luque, Amalia; Gómez-Bellido, Jesús; Carrasco, Alejandro; Barbancho, Julio

    2018-06-03

    The analysis and classification of the sounds produced by certain animal species, notably anurans, have revealed these amphibians to be a potentially strong indicator of temperature fluctuations and therefore of the existence of climate change. Environmental monitoring systems using Wireless Sensor Networks are therefore of interest to obtain indicators of global warming. For the automatic classification of the sounds recorded on such systems, the proper representation of the sound spectrum is essential, since it contains the information required for cataloguing anuran calls. The present paper focuses on this process of feature extraction by exploring three alternatives: the standardized MPEG-7, the Filter Bank Energy (FBE), and the Mel Frequency Cepstral Coefficients (MFCC). Moreover, a range of parameter values for each feature-extraction option has been considered. Throughout the paper, it is shown that representing the frame spectrum with pure FBE offers slightly worse results than using the MPEG-7 features. This performance can easily be increased, however, by rescaling the FBE in a double dimension: vertically, by taking the logarithm of the energies; and horizontally, by applying mel scaling in the filter banks. On the other hand, representing the spectrum in the cepstral domain, as in MFCC, has shown additional marginal improvements in classification performance.
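
    The double rescaling described above can be sketched as follows; this is a generic log-mel implementation in NumPy/SciPy, not the authors' code, and the test frame, sampling rate, and filter count are illustrative:

        # Filter-bank energies rescaled vertically (log) and horizontally (mel),
        # with an optional DCT to move to the cepstral domain (MFCC).
        import numpy as np
        from scipy.fft import dct

        def mel(f):  return 2595.0 * np.log10(1.0 + f / 700.0)
        def imel(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

        def log_mel_fbe(frame, fs, n_filters=24):
            spec = np.abs(np.fft.rfft(frame)) ** 2        # frame power spectrum
            freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
            edges = imel(np.linspace(mel(0.0), mel(fs / 2.0), n_filters + 2))
            fbe = np.empty(n_filters)
            for i in range(n_filters):                    # triangular mel filters
                lo, c, hi = edges[i], edges[i + 1], edges[i + 2]
                w = np.clip(np.minimum((freqs - lo) / (c - lo),
                                       (hi - freqs) / (hi - c)), 0.0, None)
                fbe[i] = np.sum(w * spec)
            return np.log(fbe + 1e-12)                    # vertical rescaling

        fs = 16000
        frame = np.sin(2 * np.pi * 1200 * np.arange(512) / fs)  # toy "call" frame
        logmel = log_mel_fbe(frame, fs)
        mfcc = dct(logmel, norm="ortho")[:13]             # cepstral domain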

  11. Robust Bayesian linear regression with application to an analysis of the CODATA values for the Planck constant

    NASA Astrophysics Data System (ADS)

    Wübbeler, Gerd; Bodnar, Olha; Elster, Clemens

    2018-02-01

    Weighted least-squares estimation is commonly applied in metrology to fit models to measurements that are accompanied by quoted uncertainties. The weights are chosen according to the quoted uncertainties. However, when data and model are inconsistent in view of the quoted uncertainties, this procedure does not yield adequate results. When it can be assumed that all uncertainties ought to be rescaled by a common factor, weighted least-squares estimation may still be used, provided that a simple correction of the uncertainty obtained for the estimated model is applied. We show that these uncertainties and credible intervals are robust, as they do not rely on the assumption of a Gaussian distribution of the data. Hence, common software for weighted least-squares estimation may still safely be employed in such a case, followed by a simple modification of the uncertainties obtained by that software. We also provide means of checking the assumptions of such an approach. The Bayesian regression procedure is applied to analyze the CODATA values for the Planck constant published over the past decades in terms of three different models: a constant model, a straight-line model and a spline model. Our results indicate that the CODATA values may not have yet stabilized.
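
    A minimal sketch of the common-factor rescaling after weighted least squares (a Birge-ratio-style correction, consistent with the procedure outlined above; the measurements are invented, not CODATA values):

        import numpy as np

        x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
        y = np.array([1.02, 1.19, 1.50, 1.58, 1.90])   # measurements
        u = np.array([0.05, 0.05, 0.10, 0.05, 0.10])   # quoted uncertainties

        A = np.vstack([np.ones_like(x), x]).T          # straight-line model
        W = np.diag(1.0 / u**2)
        cov = np.linalg.inv(A.T @ W @ A)
        beta = cov @ A.T @ W @ y                       # WLS estimate

        r = y - A @ beta
        chi2_red = float(r @ W @ r) / (len(y) - A.shape[1])
        cov_rescaled = cov * max(chi2_red, 1.0)        # common rescaling factor
        print("slope:", beta[1], "+/-", float(np.sqrt(cov_rescaled[1, 1])))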

  12. Lévy/Anomalous Diffusion as a Mean-Field Theory for 3D Cloud Effects in Shortwave Radiative Transfer: Empirical Support, New Analytical Formulation, and Impact on Atmospheric Absorption

    NASA Astrophysics Data System (ADS)

    Buldyrev, S.; Davis, A.; Marshak, A.; Stanley, H. E.

    2001-12-01

    Two-stream radiation transport models, as used in all current GCM parameterization schemes, are mathematically equivalent to "standard" diffusion theory, where the physical picture is a slow propagation of the diffuse radiation by Gaussian random walks. The space/time spread (technically, the Green function) of this diffusion process is described exactly by a Gaussian distribution; from the statistical physics viewpoint, this follows from the convergence of the sum of many (rescaled) steps between scattering events with a finite variance. This Gaussian picture follows directly from first principles (the radiative transfer equation) under the assumptions of horizontal uniformity and large optical depth, i.e., there is a homogeneous plane-parallel cloud somewhere in the column. The first-order effect of 3D variability of cloudiness, the main source of scattering, is to perturb the distribution of single steps between scatterings which, modulo the "1-g" rescaling, can be assumed effectively isotropic. The most natural generalization of the Gaussian distribution is the 1-parameter family of symmetric Lévy-stable distributions, because the sum of many zero-mean random variables with infinite variance, but finite moments of order q < α (0 < α < 2), converges to them. It has been shown on heuristic grounds that for these Lévy-based random walks the typical number of scatterings for transmitted light is now [(1-g)τ]^α. The appearance of a non-rational exponent is why this is referred to as "anomalous" diffusion. Note that standard/Gaussian diffusion is retrieved in the limit α → 2 from below. Lévy transport theory has been successfully used in the statistical physics literature to investigate a wide variety of systems with strongly nonlinear dynamics; these applications range from random advection in turbulent fluids to the erratic behavior of financial time series and, most recently, self-regulating ecological systems. We will briefly survey the state-of-the-art observations that offer compelling empirical support for the Lévy/anomalous diffusion model in atmospheric radiation: (1) high-resolution spectroscopy of differential absorption in the O2 A-band from the ground; (2) temporal transient records of lightning strokes transmitted through clouds to a sensitive detector in space; and (3) the Gamma distributions of optical depths derived from Landsat cloud scenes at 30-m resolution. We will then introduce a rigorous analytical formulation of Lévy/anomalous transport through finite media based on fractional derivatives and Sonin calculus. A remarkable result from this new theoretical development is an extremal property of the α = 1+ case (divergent mean free path), as is observed in the cloudy atmosphere. Finally, we will discuss the implications of anomalous transport theory for bulk 3D effects on the current enhanced absorption problem, as well as its role as the basis of a next-generation GCM radiation parameterization.
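
    As a rough illustration of the contrast drawn above, a minimal sketch comparing Gaussian random walks with walks whose steps follow a symmetric Lévy-stable law, using SciPy's levy_stable distribution (the walk counts and α = 1.2 are illustrative choices):

        import numpy as np
        from scipy.stats import levy_stable

        rng = np.random.default_rng(0)
        n_walks, n_steps = 500, 400

        gauss = rng.normal(size=(n_walks, n_steps)).cumsum(axis=1)
        levy = levy_stable.rvs(alpha=1.2, beta=0.0,
                               size=(n_walks, n_steps)).cumsum(axis=1)

        # Typical displacement grows like t**(1/2) for Gaussian walks and
        # like t**(1/alpha) for Levy walks, hence the much faster spread.
        for name, walks in [("gaussian", gauss), ("levy a=1.2", levy)]:
            print(f"{name:12s} median |x(T)| = {np.median(np.abs(walks[:, -1])):.1f}")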

  13. Exploring 4D Flow Data in an Immersive Virtual Environment

    NASA Astrophysics Data System (ADS)

    Stevens, A. H.; Butkiewicz, T.

    2017-12-01

    Ocean models help us to understand and predict a wide range of intricate physical processes which comprise the atmospheric and oceanic systems of the Earth. Because these models output an abundance of complex time-varying three-dimensional (i.e., 4D) data, effectively conveying the wealth of information from a given model poses a significant visualization challenge. The majority of the research effort into this problem has concentrated on synthesizing and examining methods for representing the data itself; by comparison, relatively few studies have looked into the potential merits of various viewing conditions and virtual environments. We seek to improve our understanding of the benefits offered by current consumer-grade virtual reality (VR) systems through an immersive, interactive 4D flow visualization system. Our dataset is a Regional Ocean Modeling System (ROMS) model representing a 12-hour tidal cycle of the currents within New Hampshire's Great Bay estuary. The model data was loaded into a custom VR particle system application using the OpenVR software library and the HTC Vive hardware, which tracks a headset and two six-degree-of-freedom (6DOF) controllers within a 5m-by-5m area. The resulting visualization system allows the user to coexist in the same virtual space as the data, enabling rapid and intuitive analysis of the flow model through natural interactions with the dataset and within the virtual environment. Whereas a traditional computer screen typically requires the user to reposition a virtual camera in the scene to obtain the desired view of the data, in virtual reality the user can simply move their head to the desired viewpoint, completely eliminating the mental context switches from data exploration/analysis to view adjustment and back. The tracked controllers become tools to quickly manipulate (reposition, reorient, and rescale) the dataset and to interrogate it by, e.g., releasing dye particles into the flow field, probing scalar velocities, placing a cutting plane through a region of interest, etc. It is hypothesized that the advantages afforded by head-tracked viewing and 6DOF interaction devices will lead to faster and more efficient examination of 4D flow data. A human factors study is currently being prepared to empirically evaluate this method of visualization and interaction.

  14. Magnetic hierarchical deposition

    NASA Astrophysics Data System (ADS)

    Posazhennikova, Anna I.; Indekeu, Joseph O.

    2014-11-01

    We consider random deposition of debris or blocks on a line, with block sizes following a rigorous hierarchy: the linear size equals 1/λ^n in generation n, in terms of a rescaling factor λ. Without interactions between the blocks, this model is described by a logarithmic fractal, studied previously, which is characterized by a constant increment of the length, area or volume upon proliferation. We study to what extent the logarithmic fractality survives if each block is equipped with an Ising (pseudo-)spin s = ±1 and the interactions between those spins are switched on (ranging from antiferromagnetic to ferromagnetic). It turns out that the dependence of the surface topology on the interaction sign and strength is not trivial. For instance, deep in the ferromagnetic regime, our numerical experiments and analytical results reveal a sharp crossover from a Euclidean transient, consisting of aggregated domains of aligned spins, to an asymptotic logarithmic fractal growth. In contrast, deep in the antiferromagnetic regime the surface roughness is important and is shown analytically to be controlled by vacancies induced by frustrated spins. Finally, in the weak interaction regime, we demonstrate that the non-interacting model is extremal in the sense that the effect of the introduction of interactions is only quadratic in the magnetic coupling strength. In all regimes, we demonstrate the adequacy of a mean-field approximation whenever vacancies are rare. In sum, the logarithmic fractal character is robust with respect to the introduction of spatial correlations in the hierarchical deposition process.

  15. Deploying a quantum annealing processor to detect tree cover in aerial imagery of California

    PubMed Central

    Basu, Saikat; Ganguly, Sangram; Michaelis, Andrew; Mukhopadhyay, Supratik; Nemani, Ramakrishna R.

    2017-01-01

    Quantum annealing is an experimental and potentially breakthrough computational technology for handling hard optimization problems, including problems of computer vision. We present a case study in training a production-scale classifier of tree cover in remote sensing imagery, using early-generation quantum annealing hardware built by D-Wave Systems, Inc. Beginning within a known boosting framework, we train decision stumps on texture features and vegetation indices extracted from four-band, one-meter-resolution aerial imagery from the state of California. We then impose a regulated quadratic training objective to select an optimal voting subset from among these stumps. The votes of the subset define the classifier. For optimization, the logical variables in the objective function map to quantum bits in the hardware device, while quadratic couplings encode as the strength of physical interactions between the quantum bits. Hardware design limits the number of couplings between these basic physical entities to five or six. To account for this limitation in mapping large problems to the hardware architecture, we propose a truncation and rescaling of the training objective through a trainable metaparameter. The boosting process on our basic 108- and 508-variable problems, thus constituted, returns classifiers that incorporate a diverse range of color- and texture-based metrics and discriminate tree cover with accuracies as high as 92% in validation and 90% on a test scene encompassing the open space preserves and dense suburban build of Mill Valley, CA. PMID:28241028

  16. Coalescent Processes with Skewed Offspring Distributions and Nonequilibrium Demography.

    PubMed

    Matuszewski, Sebastian; Hildebrandt, Marcel E; Achaz, Guillaume; Jensen, Jeffrey D

    2018-01-01

    Nonequilibrium demography impacts coalescent genealogies, leaving detectable, well-studied signatures of variation. However, similar genomic footprints are also expected under models of large reproductive skew, posing a serious problem when trying to make inference. Furthermore, current approaches consider only one of the two processes at a time, neglecting any genomic signal that could arise from their simultaneous effects, preventing the possibility of jointly inferring parameters relating to both offspring distribution and population history. Here, we develop an extended Moran model with exponential population growth, and demonstrate that the underlying ancestral process converges to a time-inhomogeneous psi-coalescent. However, by applying a nonlinear change of time scale, analogous to the Kingman coalescent, we find that the ancestral process can be rescaled to its time-homogeneous analog, allowing the process to be simulated quickly and efficiently. Furthermore, we derive analytical expressions for the expected site-frequency spectrum under the time-inhomogeneous psi-coalescent, and develop an approximate-likelihood framework for the joint estimation of the coalescent and growth parameters. By means of extensive simulation, we demonstrate that both can be estimated accurately from whole-genome data. In addition, not accounting for demography can lead to serious biases in the inferred coalescent model, with broad implications for genomic studies ranging from ecology to conservation biology. Finally, we use our method to analyze sequence data from Japanese sardine populations, and find evidence of high variation in individual reproductive success, but few signs of a recent demographic expansion. Copyright © 2018 by the Genetics Society of America.

  17. Temporal information partitioning: Characterizing synergy, uniqueness, and redundancy in interacting environmental variables

    NASA Astrophysics Data System (ADS)

    Goodwell, Allison E.; Kumar, Praveen

    2017-07-01

    Information theoretic measures can be used to identify nonlinear interactions between source and target variables through reductions in uncertainty. In information partitioning, multivariate mutual information is decomposed into synergistic, unique, and redundant components. Synergy is information shared only when sources influence a target together, uniqueness is information only provided by one source, and redundancy is overlapping shared information from multiple sources. While this partitioning has been applied to provide insights into complex dependencies, several proposed partitioning methods overestimate redundant information and omit a component of unique information because they do not account for source dependencies. Additionally, information partitioning has only been applied to time-series data in a limited context, using basic pdf estimation techniques or a Gaussian assumption. We develop a Rescaled Redundancy measure (Rs) to solve the source dependency issue, and present Gaussian, autoregressive, and chaotic test cases to demonstrate its advantages over existing techniques in the presence of noise, various source correlations, and different types of interactions. This study constitutes the first rigorous application of information partitioning to environmental time-series data, and addresses how noise, pdf estimation technique, or source dependencies can influence detected measures. We illustrate how our techniques can unravel the complex nature of forcing and feedback within an ecohydrologic system with an application to 1-min environmental signals of air temperature, relative humidity, and wind speed. The methods presented here are applicable to the study of a broad range of complex systems composed of interacting variables.
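
    A minimal sketch of the rescaling idea, under our reading of the measure: the Rescaled Redundancy interpolates between the minimum admissible redundancy and the maximum (minimum-mutual-information) redundancy in proportion to the normalized mutual information of the sources. Both this interpolation form and the toy distribution are assumptions to be checked against the paper:

        import numpy as np

        def entropy(p):
            p = p[p > 0]
            return -np.sum(p * np.log2(p))

        def mutual_info(joint):                 # joint: 2D pmf array
            return (entropy(joint.sum(1)) + entropy(joint.sum(0))
                    - entropy(joint.ravel()))

        def rescaled_redundancy(p):             # p: 3D joint pmf over (S1, S2, T)
            i1t = mutual_info(p.sum(axis=1))    # I(S1;T)
            i2t = mutual_info(p.sum(axis=0))    # I(S2;T)
            p12 = p.sum(axis=2)
            i12t = (entropy(p12.ravel()) + entropy(p.sum((0, 1)))
                    - entropy(p.ravel()))       # I(S1,S2;T)
            r_mmi = min(i1t, i2t)               # maximum admissible redundancy
            r_min = max(0.0, i1t + i2t - i12t)  # minimum admissible redundancy
            h1, h2 = entropy(p.sum((1, 2))), entropy(p.sum((0, 2)))
            i_s = mutual_info(p12) / min(h1, h2)  # normalized source dependency
            return r_min + i_s * (r_mmi - r_min)

        p = np.zeros((2, 2, 2)); p[0, 0, 0] = p[1, 1, 1] = 0.5  # fully redundant
        print(rescaled_redundancy(p))           # -> 1.0 bit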

  18. Constraining continuous rainfall simulations for derived design flood estimation

    NASA Astrophysics Data System (ADS)

    Woldemeskel, F. M.; Sharma, A.; Mehrotra, R.; Westra, S.

    2016-11-01

    Stochastic rainfall generation is important for a range of hydrologic and water resources applications. Stochastic rainfall can be generated using a number of models; however, preserving relevant attributes of the observed rainfall-including rainfall occurrence, variability and the magnitude of extremes-continues to be difficult. This paper develops an approach to constrain stochastically generated rainfall with the aim of preserving the intensity-duration-frequency (IFD) relationships of the observed data. Two main steps are involved. First, the generated annual maximum rainfall is corrected recursively by matching the generated intensity-frequency relationships to the target (observed) relationships. Second, the remaining (non-annual maximum) rainfall is rescaled such that the mass balance of the generated rain before and after scaling is maintained. The recursive correction is performed at selected storm durations to minimise the dependence between annual maximum values of higher and lower durations for the same year. This ensures that the resulting sequences remain true to the observed rainfall as well as represent the design extremes that may have been developed separately and are needed for compliance reasons. The method is tested on simulated 6-min rainfall series across five Australian stations with different climatic characteristics. The results suggest that the annual maximum and the IFD relationships are well reproduced after constraining the simulated rainfall. While our presentation focusses on the representation of design rainfall attributes (IFDs), the proposed approach can also be easily extended to constrain other attributes of the generated rainfall, providing an effective platform for post-processing of stochastic rainfall generators.
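
    A minimal sketch of the two constraining steps for a single storm duration (the series are invented, and the correction is reduced to rank-based matching of annual maxima followed by a mass-balance rescaling of the remaining values):

        import numpy as np

        rng = np.random.default_rng(7)
        years, steps = 30, 1000
        sim = rng.gamma(0.3, 2.0, size=(years, steps))       # simulated rainfall
        obs_amax = np.sort(rng.gamma(5.0, 3.0, size=years))  # target annual maxima

        amax_idx = sim.argmax(axis=1)
        order = np.argsort(sim[np.arange(years), amax_idx])  # rank simulated maxima

        corrected = sim.copy()
        for rank, yr in enumerate(order):
            new_max = obs_amax[rank]            # quantile-by-quantile matching
            corrected[yr, amax_idx[yr]] = new_max
            rest = np.ones(steps, bool); rest[amax_idx[yr]] = False
            total, rest_sum = sim[yr].sum(), sim[yr, rest].sum()
            if rest_sum > 0:                    # rescale the rest: keep annual mass
                corrected[yr, rest] *= (total - new_max) / rest_sum

        assert np.allclose(corrected.sum(axis=1), sim.sum(axis=1))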

  19. Maturity of lumped element kinetic inductance detectors for space-borne instruments in the range between 80 and 180 GHz

    NASA Astrophysics Data System (ADS)

    Catalano, A.; Benoit, A.; Bourrion, O.; Calvo, M.; Coiffard, G.; D'Addabbo, A.; Goupy, J.; Le Sueur, H.; Macías-Pérez, J.; Monfardini, A.

    2016-07-01

    This work intends to give the state of the art of our knowledge of the performance of lumped element kinetic inductance detectors (LEKIDs) at millimetre wavelengths (from 80 to 180 GHz). We evaluate their optical sensitivity under typical background conditions that are representative of a space environment and their interaction with ionising particles. Two LEKID arrays, originally designed for ground-based applications and composed of a few hundred pixels each, operate at a central frequency of 100 and 150 GHz (Δν/ν about 0.3). Their sensitivities were characterised in the laboratory using a dedicated closed-cycle 100 mK dilution cryostat and a sky simulator, allowing for the reproduction of realistic, space-like observation conditions. The impact of cosmic rays was evaluated by exposing the LEKID arrays to alpha particles (²⁴¹Am) and X-ray sources (¹⁰⁹Cd), with a read-out sampling frequency similar to those used for Planck HFI (about 200 Hz), and also with a high-resolution sampling level (up to 2 MHz) to better characterise and interpret the observed glitches. In parallel, we developed an analytical model to rescale the results to what would be observed by such a LEKID array at the second Lagrangian point. We show that LEKID arrays behave adequately in space-like conditions, with a measured noise equivalent power close to the cosmic microwave background photon noise and an impact of cosmic rays smaller than that observed with the Planck satellite detectors.

  20. Empirical Recommendations for Improving the Stability of the Dot-Probe Task in Clinical Research

    PubMed Central

    Price, Rebecca B.; Kuckertz, Jennie M.; Siegle, Greg J.; Ladouceur, Cecile D.; Silk, Jennifer S.; Ryan, Neal D.; Dahl, Ronald E.; Amir, Nader

    2014-01-01

    The dot-probe task has been widely used in research to produce an index of biased attention based on reaction times (RTs). Despite its popularity, very few published studies have examined psychometric properties of the task, including test-retest reliability, and no previous study has examined reliability in clinically anxious samples or systematically explored the effects of task design and analysis decisions on reliability. In the current analysis, we utilized dot-probe data from three studies where attention bias towards threat-related faces was assessed at multiple (≥5) timepoints. Two of the studies were similar (adults with Social Anxiety Disorder, similar design features) while one was much more disparate (pediatric healthy volunteers, distinct task design). We explored the effects of analysis choices (e.g., bias score calculation formula, methods for outlier handling) on reliability and searched for convergence of findings across the three studies. We found that, when considering the three studies concurrently, the most reliable RT bias index utilized data from dot-bottom trials, comparing congruent to incongruent trials, with rescaled outliers, particularly after averaging across more than one assessment point. Although reliability of RT bias indices was moderate to low under most circumstances, within-session variability in bias (attention bias variability; ABV), a recently proposed RT index, was more reliable across sessions. Several eyetracking-based indices of attention bias (available in the pediatric healthy sample only) showed reliability that matched the optimal RT index (ABV). On the basis of these findings, we make specific recommendations to researchers using the dot probe, particularly those wishing to investigate individual differences and/or single-patient applications. PMID:25419646
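
    A minimal sketch of one such RT bias index (incongruent minus congruent mean RT, with outliers rescaled, here by winsorizing at ±3 SD per condition). The trial data and the 3-SD rule are illustrative choices, not the exact pipeline of the three studies:

        import numpy as np

        def winsorize_3sd(rt):
            lo, hi = rt.mean() - 3 * rt.std(), rt.mean() + 3 * rt.std()
            return np.clip(rt, lo, hi)          # rescale extreme RTs inward

        def bias_score(rt_congruent, rt_incongruent):
            """Positive score = attention biased toward the threat stimulus."""
            return (winsorize_3sd(rt_incongruent).mean()
                    - winsorize_3sd(rt_congruent).mean())

        rng = np.random.default_rng(3)
        congruent = rng.normal(480, 60, 48)     # reaction times in ms, invented
        incongruent = rng.normal(505, 60, 48)
        print(f"bias: {bias_score(congruent, incongruent):.1f} ms")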

  1. Holographic non-Fermi liquid in a background magnetic field

    NASA Astrophysics Data System (ADS)

    Basu, Pallab; He, Jianyang; Mukherjee, Anindya; Shieh, Hsien-Hang

    2010-08-01

    We study the effects of a nonzero magnetic field on a class of 2+1 dimensional non-Fermi liquids, recently found in [Hong Liu, John McGreevy, and David Vegh, arXiv:0903.2477] by considering properties of a fermionic probe in an extremal AdS4 black hole background. Introducing a similar fermionic probe in a dyonic AdS4 black hole geometry, we find that the effect of a magnetic field can be incorporated into a rescaling of the probe fermion's charge. From this simple fact, we observe interesting effects such as the gradual disappearance of the Fermi surface and quasiparticle peaks at large magnetic fields, and changes in other properties of the system. We also find Landau-level-like structures and oscillatory phenomena similar to the de Haas-van Alphen effect.

  2. Oscillations and chaos in neural networks: an exactly solvable model.

    PubMed Central

    Wang, L P; Pichler, E E; Ross, J

    1990-01-01

    We consider a randomly diluted higher-order network with noise, consisting of McCulloch-Pitts neurons that interact by Hebbian-type connections. For this model, exact dynamical equations are derived and solved for both parallel and random sequential updating algorithms. For parallel dynamics, we find a rich spectrum of different behaviors including static retrieving and oscillatory and chaotic phenomena in different parts of the parameter space. The bifurcation parameters include first- and second-order neuronal interaction coefficients and a rescaled noise level, which represents the combined effects of the random synaptic dilution, interference between stored patterns, and additional background noise. We show that a marked difference in terms of the occurrence of oscillations or chaos exists between neural networks with parallel and random sequential dynamics. PMID:2251287

  3. Calculating corner singularities by boundary integral equations.

    PubMed

    Shi, Hualiang; Lu, Ya Yan; Du, Qiang

    2017-06-01

    Accurate numerical solutions for electromagnetic fields near sharp corners and edges are important for nanophotonics applications that rely on strong near fields to enhance light-matter interactions. For cylindrical structures, the singularity exponents of electromagnetic fields near sharp edges can be solved analytically, but in general the actual fields can only be calculated numerically. In this paper, we use a boundary integral equation method to compute electromagnetic fields near sharp edges, and construct the leading terms in asymptotic expansions based on numerical solutions. Our integral equations are formulated for rescaled unknown functions to avoid unbounded field components, and are discretized with a graded mesh and properly chosen quadrature schemes. The numerically found singularity exponents agree well with the exact values in all the test cases presented here, indicating that the numerical solutions are accurate.
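
    A minimal sketch of an algebraically graded mesh of the kind mentioned above, clustering nodes toward a corner placed at x = 0 (the grading exponent is an illustrative choice, not the paper's scheme):

        import numpy as np

        def graded_mesh(n, beta=3.0):
            """n+1 nodes on [0, 1]; cell size near 0 shrinks like (i/n)**beta."""
            return (np.arange(n + 1) / n) ** beta

        x = graded_mesh(16)
        print(np.diff(x)[:3])   # very fine cells near the singular corner
        print(np.diff(x)[-2:])  # coarse cells away from it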

  4. Universal and integrable nonlinear evolution systems of equations in 2+1 dimensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maccari, A.

    1997-08-01

    Integrable systems of nonlinear partial differential equations (PDEs) are obtained from integrable equations in 2+1 dimensions, by means of a reduction method of broad applicability based on Fourier expansion and spatio-temporal rescalings, which is asymptotically exact in the limit of weak nonlinearity. The integrability by the spectral transform is explicitly demonstrated, because the corresponding Lax pairs have been derived, applying the same reduction method to the Lax pair of the initial equation. These systems of nonlinear PDEs are likely to be of applicative relevance and have a "universal" character, inasmuch as they may be derived from a very large class of nonlinear evolution equations with a linear dispersive part. © 1997 American Institute of Physics.

  5. Spectral ratio method for measuring emissivity

    USGS Publications Warehouse

    Watson, K.

    1992-01-01

    The spectral ratio method is based on the concept that although the spectral radiances are very sensitive to small changes in temperature, the ratios are not. Only an approximate estimate of temperature is required; thus, for example, we can determine the emissivity ratio to an accuracy of 1% with a temperature estimate that is only accurate to 12.5 K. Selecting the maximum value of the channel brightness temperatures provides an unbiased estimate. Laboratory and field spectral data are easily converted into spectral ratio plots. The ratio method is limited by system signal-to-noise and spectral bandwidth. The images can appear quite noisy, because ratios enhance high frequencies, and may require spatial filtering. Atmospheric effects tend to rescale the ratios and require using an atmospheric model or a calibration site. © 1992.

  6. Neural reactivations during sleep determine network credit assignment

    PubMed Central

    Gulati, Tanuj; Guo, Ling; Ramanathan, Dhakshin S.; Bodepudi, Anitha; Ganguly, Karunesh

    2018-01-01

    A fundamental goal of motor learning is to establish neural patterns that produce a desired behavioral outcome. It remains unclear how and when the nervous system solves this "credit-assignment" problem. Using neuroprosthetic learning, where we could control the causal relationship between neurons and behavior, here we show that sleep-dependent processing is required for credit assignment and the establishment of task-related functional connectivity reflecting the causal neuron-behavior relationship. Importantly, we found a strong link between the microstructure of sleep reactivations and credit assignment, with downscaling of non-causal activity. Strikingly, decoupling of spiking from slow oscillations using optogenetic methods eliminated rescaling. Thus, our results suggest that coordinated firing during sleep plays an essential role in establishing sparse activation patterns that reflect the causal neuron-behavior relationship. PMID:28692062

  7. Mechanical design principles of a mitotic spindle

    PubMed Central

    Ward, Jonathan J; Roque, Hélio; Antony, Claude; Nédélec, François

    2014-01-01

    An organised spindle is crucial to the fidelity of chromosome segregation, but the relationship between spindle structure and function is not well understood in any cell type. The anaphase B spindle in fission yeast has a slender morphology and must elongate against compressive forces. This ‘pushing’ mode of chromosome transport renders the spindle susceptible to breakage, as observed in cells with a variety of defects. Here we perform electron tomographic analyses of the spindle, which suggest that it organises a limited supply of structural components to increase its compressive strength. Structural integrity is maintained throughout the spindle's fourfold elongation by organising microtubules into a rigid transverse array, preserving correct microtubule number and dynamically rescaling microtubule length. DOI: http://dx.doi.org/10.7554/eLife.03398.001 PMID:25521247

  8. Tightness Entropic Uncertainty Relation in Quantum Markovian-Davies Environment

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Liu, Liang; Han, Yan

    2018-05-01

    In this paper, we investigate the tightness of the entropic uncertainty relation in the absence (presence) of a quantum memory, with the memory particle weakly coupled to a decohering Davies-type Markovian environment. The results show that the tightness of the quantum uncertainty relation can be controlled by the energy relaxation time F, the dephasing time G and the rescaled temperature p; perfect tightness is reached when the dephasing and energy relaxation satisfy F = 2G and p = 1/2. In addition, the tightness of the memory-assisted entropic uncertainty relation and of the entropic uncertainty relation is influenced mainly by the purity. In the memory-assisted model, the purity and quantum correlations also strongly influence the tightness, whereas quantum entanglement influences it only slightly.

  9. Compressible Boundary Layer Predictions at High Reynolds Number using Hybrid LES/RANS Methods

    NASA Technical Reports Server (NTRS)

    Choi, Jung-Il; Edwards, Jack R.; Baurle, Robert A.

    2008-01-01

    Simulations of compressible boundary layer flow at three different Reynolds numbers (Re_δ = 5.59×10⁴, 1.78×10⁵, and 1.58×10⁶) are performed using a hybrid large-eddy/Reynolds-averaged Navier-Stokes method. Variations in the recycling/rescaling method, the higher-order extension, the choice of primitive variables, the RANS/LES transition parameters, and the mesh resolution are considered in order to assess the model. The results indicate that the present model can provide good predictions of the mean flow properties and second-moment statistics of the boundary layers considered. Normalized Reynolds stresses in the outer layer are found to be independent of Reynolds number, similar to incompressible turbulent boundary layers.

  10. Gauge theory for finite-dimensional dynamical systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gurfil, Pini

    2007-06-15

    Gauge theory is a well-established concept in quantum physics, electrodynamics, and cosmology. This concept has recently proliferated into new areas, such as mechanics and astrodynamics. In this paper, we discuss a few applications of gauge theory in finite-dimensional dynamical systems. We focus on the concept of rescriptive gauge symmetry, which is, in essence, rescaling of an independent variable. We show that a simple gauge transformation of multiple harmonic oscillators driven by chaotic processes can render an apparently "disordered" flow into a regular dynamical process, and that there exists a strong connection between gauge transformations and reduction theory of ordinary differential equations. Throughout the discussion, we demonstrate the main ideas by considering examples from diverse fields, including quantum mechanics, chemistry, rigid-body dynamics, and information theory.

  11. Flux of granular particles through a shaken sieve plate

    PubMed Central

    Wen, Pingping; Zheng, Ning; Nian, Junwei; Li, Liangsheng; Shi, Qingfan

    2015-01-01

    We experimentally investigate a discharging flux of granular particles through a sieve plate subject to vertical vibrations. The mean mass flux shows a non-monotonic relation with the vibration strength. High-speed photography reveals that two stages, the free flight of the particles’ bulk over the plate and the adhesion of the particles’ bulk with the plate, alternately appear, where only the adhesion stage contributes to the flow. With two independent methods, we then measure the adhesion time under different vibration conditions, and define an adhesion flux. The adhesion flux monotonically increases with increasing vibration strength. By rescaling the adhesion flux, we find that the adhesion flux is approximately determined by the peak vibration velocity of the shaker. The conclusion is examined with other sieve geometries. PMID:26056080

  12. On some Approximation Schemes for Steady Compressible Viscous Flow

    NASA Astrophysics Data System (ADS)

    Bause, M.; Heywood, J. G.; Novotny, A.; Padula, M.

    This paper continues our development of approximation schemes for steady compressible viscous flow based on an iteration between a Stokes-like problem for the velocity and a transport equation for the density, with the aim of improving their suitability for computations. Such schemes seem attractive for computations because they offer a reduction to standard problems for which there is already highly refined software, and because of the guidance that can be drawn from an existence theory based on them. Our objective here is to modify a recent scheme of Heywood and Padula [12], to improve its convergence properties. This scheme improved upon an earlier scheme of Padula [21], [23] through the use of a special "effective pressure" in linking the Stokes and transport problems. However, its convergence is limited for several reasons. Firstly, the steady transport equation itself is only solvable for general velocity fields if they satisfy certain smallness conditions. These conditions are met here by using a rescaled variant of the steady transport equation based on a pseudo time step for the equation of continuity. Another matter limiting the convergence of the scheme in [12] is that the Stokes linearization, which is a linearization about zero, has an inevitably small range of convergence. We replace it here with an Oseen or Newton linearization, either of which has a wider range of convergence, and converges more rapidly. The simplicity of the scheme offered in [12] was conducive to a relatively simple and clearly organized proof of its convergence. The proofs of convergence for the more complicated schemes proposed here are structured along the same lines. They strengthen the theorems of existence and uniqueness in [12] by weakening the smallness conditions that are needed. The expected improvement in the computational performance of the modified schemes has been confirmed by Bause [2], in an ongoing investigation.

  13. Relationship between specific surface area and the dry end of the water retention curve for soils with varying clay and organic carbon contents

    NASA Astrophysics Data System (ADS)

    Resurreccion, Augustus C.; Moldrup, Per; Tuller, Markus; Ferré, T. P. A.; Kawamoto, Ken; Komatsu, Toshiko; de Jonge, Lis Wollesen

    2011-06-01

    Accurate description of the soil water retention curve (SWRC) at low water contents is important for simulating water dynamics and biochemical vadose zone processes in arid environments. Soil water retention data corresponding to matric potentials of less than -10 MPa, where adsorptive forces dominate over capillary forces, have also been used to estimate soil specific surface area (SA). In the present study, the dry end of the SWRC was measured with a chilled-mirror dew point psychrometer for 41 Danish soils covering a wide range of clay (CL) and organic carbon (OC) contents. The 41 soils were classified into four groups on the basis of the Dexter number (n = CL/OC), and the Tuller-Or (TO) general scaling model describing water film thickness at a given matric potential (<-10 MPa) was evaluated. The SA estimated from the dry end of the SWRC (SA_SWRC) was in good agreement with the SA measured with ethylene glycol monoethyl ether (SA_EGME) only for organic soils with n > 10. A strong correlation between the ratio of the two surface area estimates and the Dexter number was observed and applied as an additional scaling function in the TO model to rescale the soil water retention curve at low water contents. However, the TO model still overestimated water film thickness at potentials approaching oven-dry conditions (about -800 MPa). The semi-log linear Campbell-Shiozawa-Rossi-Nimmo (CSRN) model showed better fits for all investigated soils from -10 to -800 MPa and yielded high correlations with CL and SA. It is therefore recommended to apply the empirical CSRN model for predicting the dry part of the water retention curve (-10 to -800 MPa) from measured soil texture or surface area. Further research should aim to modify the more physically based TO model to obtain better descriptions of the SWRC in the very dry range (-300 to -800 MPa).

  14. The Hurst Phenomenon in Error Estimates Related to Atmospheric Turbulence

    NASA Astrophysics Data System (ADS)

    Dias, Nelson Luís; Crivellaro, Bianca Luhm; Chamecki, Marcelo

    2018-05-01

    The Hurst phenomenon is a well-known feature of long-range persistence first observed in hydrological and geophysical time series by E. Hurst in the 1950s. It has also been found in several cases in turbulence time series measured in the wind tunnel, the atmosphere, and in rivers. Here, we conduct a systematic investigation of the value of the Hurst coefficient H in atmospheric surface-layer data, and its impact on the estimation of random errors. We show that usually H > 0.5, which implies the non-existence (in the statistical sense) of the integral time scale. Since the integral time scale is present in the Lumley-Panofsky equation for the estimation of random errors, this has important practical consequences. We estimated H in two principal ways: (1) with an extension of the recently proposed filtering method to estimate the random error (H_p), and (2) with the classical rescaled range introduced by Hurst (H_R). Other estimators were tried but were found less able to capture the statistical behaviour of the large scales of turbulence. Using data from three micrometeorological campaigns, we found that both first- and second-order turbulence statistics display the Hurst phenomenon. Usually, H_R is larger than H_p for the same dataset, raising the possibility that one, or even both, of these estimators may be biased. For the relative error, we found that the errors estimated with the approach adopted by us, which we call the relaxed filtering method, and which takes into account the occurrence of the Hurst phenomenon, are larger than both the filtering method and the classical Lumley-Panofsky estimates. Finally, we found that there is no apparent relationship between H and the Obukhov stability parameter. The relative errors, however, do show stability dependence, particularly in the case of the error of the kinematic momentum flux in unstable conditions, and that of the kinematic sensible heat flux in stable conditions.
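
    A minimal sketch of the classical rescaled-range (R/S) estimator of the Hurst coefficient used above, with dyadically spaced, non-overlapping windows (the window sizes are an illustrative choice):

        import numpy as np

        def hurst_rs(x, min_window=8):
            """Slope of log(R/S) vs log(window size) over dyadic windows."""
            x = np.asarray(x, float)
            n = len(x)
            sizes, rs = [], []
            w = min_window
            while w <= n // 2:
                vals = []
                for start in range(0, n - w + 1, w):
                    seg = x[start:start + w]
                    z = np.cumsum(seg - seg.mean())   # cumulative departures
                    r = z.max() - z.min()             # range
                    s = seg.std()                     # standard deviation
                    if s > 0:
                        vals.append(r / s)
                sizes.append(w); rs.append(np.mean(vals))
                w *= 2
            h, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
            return h

        rng = np.random.default_rng(0)
        # roughly 0.5 for uncorrelated noise (R/S has a known small-sample bias)
        print(hurst_rs(rng.normal(size=4096)))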

  15. CONSTRAINTS ON CHARON'S ORBITAL ELEMENTS FROM THE DOUBLE STELLAR OCCULTATION OF 2008 JUNE 22

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sicardy, B.; Lecacheux, J.; Boissel, Y.

    Pluto and its main satellite, Charon, occulted the same star on 2008 June 22. This event was observed from Australia and La Réunion Island, providing the east and north Charon Plutocentric offsets in the sky plane (J2000): X = +12,070.5 ± 4 km (+546.2 ± 0.2 mas), Y = +4,576.3 ± 24 km (+207.1 ± 1.1 mas) at 19:20:33.82 UT on Earth, corresponding to JD 2454640.129964 at Pluto. This yields Charon's true longitude L = 153.483° ± 0.071° in the satellite orbital plane (counted from the ascending node on the J2000 mean equator) and orbital radius r = 19,564 ± 14 km at that time. We compare this position to that predicted by (1) the orbital solution of Tholen and Buie (the 'TB97' solution), (2) the PLU017 Charon ephemeris, and (3) the solution of Tholen et al. (the 'T08' solution). We conclude that (1) our result rules out solution TB97, (2) our position agrees with PLU017, with differences of ΔL = +0.073° ± 0.071° in longitude and Δr = +0.6 ± 14 km in radius, and (3) while the difference with the T08 ephemeris amounts to only ΔL = 0.033° ± 0.071° in longitude, it exhibits a significant radial discrepancy of Δr = 61.3 ± 14 km. We discuss this difference in terms of a possible image scale relative error of 3.35 × 10⁻³ in the 2002-2003 Hubble Space Telescope images upon which the T08 solution is mostly based. Rescaling the T08 Charon semi-major axis, a = 19,570.45 km, to the TB97 value, a = 19,636 km, all other orbital elements remaining the same (the 'T08/TB97' solution), we reconcile our position with the rescaled solution to better than 12 km (or 0.55 mas) for Charon's position in its orbital plane, thus making T08/TB97 our preferred solution.

  16. A Statistical Framework for Microbial Source Attribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Velsko, S P; Allen, J E; Cunningham, C T

    2009-04-28

    This report presents a general approach to inferring transmission and source relationships among microbial isolates from their genetic sequences. The outbreak transmission graph (also called the transmission tree or transmission network) is the fundamental structure which determines the statistical distributions relevant to source attribution. The nodes of this graph are infected individuals or aggregated sub-populations of individuals in which transmitted bacteria or viruses undergo clonal expansion, leading to a genetically heterogeneous population. Each edge of the graph represents a transmission event in which one or a small number of bacteria or virions infects another node, thus increasing the size of the transmission network. Recombination and re-assortment events originate in nodes which are common to two distinct networks. In order to calculate the probability that one node was infected by another, given the observed genetic sequences of microbial isolates sampled from them, we require two fundamental probability distributions. The first is the probability of obtaining the observed mutational differences between two isolates given that they are separated by M steps in a transmission network. The second is the probability that two nodes sampled randomly from an outbreak transmission network are separated by M transmission events. We show how these distributions can be obtained from the genetic sequences of isolates obtained by sampling from past outbreaks, combined with data from contact tracing studies. Realistic examples are drawn from the SARS outbreak of 2003, the FMDV outbreak in Great Britain in 2001, and HIV transmission cases. The likelihood estimators derived in this report, and the underlying probability distribution functions required to calculate them, possess certain compelling general properties in the context of microbial forensics. These include the ability to quantify the significance of a sequence 'match' or 'mismatch' between two isolates; the ability to capture non-intuitive effects of network structure on inferential power, including the 'small world' effect; the insensitivity of inferences to uncertainties in the underlying distributions; and the concept of rescaling, i.e., the ability to collapse sub-networks into single nodes and examine transmission inferences on the rescaled network.
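
    A minimal sketch of how the two distributions named above combine, via Bayes' rule, into a posterior over the number of transmission steps M separating two isolates. The Poisson mutation model, the per-step rate, and the prior over M are invented placeholders:

        import numpy as np
        from scipy.stats import poisson

        mu = 2.0                                  # assumed mutations per step
        p_M = np.array([0.30, 0.25, 0.20, 0.15, 0.10])  # assumed P(M), M = 1..5

        def posterior_steps(d):
            """P(M | observed mutational distance d) for M = 1..5."""
            like = np.array([poisson.pmf(d, mu * m) for m in range(1, 6)])
            post = like * p_M
            return post / post.sum()

        print(posterior_steps(d=3))               # a near-match favours small M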

  17. Health- and vision-related quality of life in intellectually disabled children.

    PubMed

    Cui, Yu; Stapleton, Fiona; Suttle, Catherine; Bundy, Anita

    2010-01-01

    To investigate the psychometric properties of instruments for the assessment of self-reported functional vision performance and health-related quality of life in children with intellectual disabilities (IDs). Two instruments [Autoquestionnaire Enfant Image (AUQUEI), LV Prasad-Functional Vision Questionnaire (LVP-FVQ)] designed for the assessment of functional vision and health-related quality of life were adapted and administered to 168 school children with ID, aged 8 to 18 years. Rasch analysis was used to determine the appropriateness of the rating scales of these instruments and to identify any redundant items. Redundant items were excluded based on descriptive statistics and Rasch analysis, leaving 17 of 23 items in the revised AUQUEI and 16 of 22 in the LVP-FVQ. The AUQUEI items showed disordered thresholds on the rating scale. A modified step calibration (collapsed from four categories to three categories) resulted in ordered response thresholds for all items. The adjusted instrument produced an overall fit to the model (mean item infit = 1.06, SD = 0.32; mean item outfit = 1.11, SD = 0.35), indicating good construct validity. After Rasch analysis, the AUQUEI showed good content validity (person separation = 2.18; item reliability = 0.99; Cronbach alpha = 0.89). The increased similarity of person and item means and SDs on the logit scale after modification indicates that the instrument is more applicable to the target population in its modified form. In contrast, the LVP-FVQ had a low person separation (1.35), suggesting that a more appropriate instrument is needed for assessment of vision-related quality of life in children with ID. The psychometric properties of two instruments were explored using Rasch analysis. By rescaling and reduction of items, the instruments were modified for use in a population of children with at least mild to moderate ID. However, an alternative instrument is needed for the assessment of vision-related quality of life in intellectually disabled children with normal vision or mild visual abnormalities.

  18. Community detection in sequence similarity networks based on attribute clustering

    DOE PAGES

    Chowdhary, Janamejaya; Loeffler, Frank E.; Smith, Jeremy C.

    2017-07-24

    Networks are powerful tools for the presentation and analysis of interactions in multi-component systems. A commonly studied mesoscopic feature of networks is their community structure, which arises from grouping together similar nodes into one community and dissimilar nodes into separate communities. Here, the community structure of protein sequence similarity networks is determined with a new method: Attribute Clustering Dependent Communities (ACDC). Sequence similarity has hitherto typically been quantified by the alignment score or its expectation value. However, pair alignments with the same score or expectation value cannot thus be differentiated. To overcome this deficiency, the method constructs, for pair alignments, an extended alignment metric, the link attribute vector, which includes the score and other alignment characteristics. Rescaling components of the attribute vectors qualitatively identifies a systematic variation of sequence similarity within protein superfamilies. The problem of community detection is then mapped to clustering the link attribute vectors, selection of an optimal subset of links, and community structure refinement based on the partition density of the network. ACDC-predicted communities are found to be in good agreement with gold standard sequence databases for which the "ground truth" community structures (or families) are known. ACDC is therefore a community detection method for sequence similarity networks based entirely on pair similarity information. A serial implementation of ACDC is available from https://cmb.ornl.gov/resources/developments

  19. ICRS Recommendation Document

    PubMed Central

    Roos, Ewa M.; Engelhart, Luella; Ranstam, Jonas; Anderson, Allen F.; Irrgang, Jay J.; Marx, Robert G.; Tegner, Yelverton; Davis, Aileen M.

    2011-01-01

    Objective: The purpose of this article is to describe and recommend patient-reported outcome instruments for use in patients with articular cartilage lesions undergoing cartilage repair interventions. Methods: Nonsystematic literature search identifying measures addressing pain and function evaluated for validity and psychometric properties in patients with articular cartilage lesions. Results: The knee-specific instruments, titled the International Knee Documentation Committee Subjective Knee Form and the Knee injury and Osteoarthritis Outcome Score, both fulfill the basic requirements for reliability, validity, and responsiveness in cartilage repair patients. A major difference between them is that the former results in a single score and the latter results in 5 subscores. A single score is preferred for simplicity’s sake, whereas subscores allow for evaluation of separate constructs at all levels according to the International Classification of Functioning. Conclusions: Because there is no obvious superiority of either instrument at this time, both outcome measures are recommended for use in cartilage repair. Rescaling of the Lysholm Scoring Scale has been suggested, and confirmatory longitudinal studies are needed prior to recommending this scale for use in cartilage repair. Inclusion of a generic measure is feasible in cartilage repair studies and allows analysis of health-related quality of life and health economic outcomes. The Marx or Tegner Activity Rating Scales are feasible and have been evaluated in patients with knee injuries. However, activity measures require age and sex adjustment, and data are lacking in people with cartilage repair. PMID:26069575

  20. Universal analytical scattering form factor for shell-, core-shell, or homogeneous particles with continuously variable density profile shape.

    PubMed

    Foster, Tobias

    2011-09-01

    A novel analytical and continuous density distribution function with a widely variable shape is reported and used to derive an analytical scattering form factor that allows us to universally describe the scattering from particles with the radial density profile of homogeneous spheres, shells, or core-shell particles. Composed by the sum of two Fermi-Dirac distribution functions, the shape of the density profile can be altered continuously from step-like via Gaussian-like or parabolic to asymptotically hyperbolic by varying a single "shape parameter", d. Using this density profile, the scattering form factor can be calculated numerically. An analytical form factor can be derived using an approximate expression for the original Fermi-Dirac distribution function. This approximation is accurate for sufficiently small rescaled shape parameters, d/R (R being the particle radius), up to values of d/R ≈ 0.1, and thus captures step-like, Gaussian-like, and parabolic as well as asymptotically hyperbolic profile shapes. It is expected that this form factor is particularly useful in a model-dependent analysis of small-angle scattering data since the applied continuous and analytical function for the particle density profile can be compared directly with the density profile extracted from the data by model-free approaches like the generalized inverse Fourier transform method. © 2011 American Chemical Society
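
    A minimal numerical sketch of the profile family described above, here a shell built from the difference of two Fermi-Dirac steps, with the form factor computed by direct radial integration rather than the paper's analytical approximation (all parameter values are illustrative):

        import numpy as np

        def fd(r, R, d):                      # Fermi-Dirac step centred at R
            z = np.clip((r - R) / d, -60.0, 60.0)   # clip to avoid overflow
            return 1.0 / (np.exp(z) + 1.0)

        def density(r, r_in, r_out, d):       # shell: outer step minus inner step
            return fd(r, r_out, d) - fd(r, r_in, d)

        def form_factor(q, r_in=60.0, r_out=100.0, d=5.0):
            r = np.linspace(1e-3, 3.0 * r_out, 4000)
            dr = r[1] - r[0]
            rho = density(r, r_in, r_out, d)
            kernel = np.sinc(q * r / np.pi)   # sin(qr)/(qr)
            amp = np.sum(rho * r**2 * kernel) * dr
            amp0 = np.sum(rho * r**2) * dr
            return (amp / amp0) ** 2

        for q in (0.01, 0.03, 0.1):           # small d/R: nearly step-like shell
            print(q, form_factor(q))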

  1. The Development of Models for Carbon Dioxide Reduction Technologies for Spacecraft Air Revitalization

    NASA Technical Reports Server (NTRS)

    Swickrath, Michael J.; Anderson, Molly

    2011-01-01

    Through the respiration process, humans consume oxygen (O2) while producing carbon dioxide (CO2) and water (H2O) as byproducts. For long-term space exploration, CO2 concentration in the atmosphere must be managed to prevent hypercapnia. Moreover, CO2 can be used as a source of oxygen through chemical reduction, serving to minimize the amount of oxygen required at launch. Reduction can be achieved through a number of techniques. The National Aeronautics and Space Administration (NASA) is currently exploring the Sabatier reaction, the Bosch reaction, and co-electrolysis of CO2 and H2O for this process. Proof-of-concept experiments and prototype units for all three processes have proven capable of returning useful commodities for space exploration. While all three techniques have demonstrated the capacity to reduce CO2 in the laboratory, there is interest in understanding how all three techniques would perform at a system level within a spacecraft. Consequently, there is an impetus to develop predictive models for these processes that can be readily re-scaled and integrated into larger system models. Such analysis tools provide the ability to evaluate each technique on a comparable basis with respect to processing rates. This manuscript describes the current models for the carbon dioxide reduction processes under parallel developmental efforts. Comparison to experimental data is provided where available for verification purposes.

  2. Community detection in sequence similarity networks based on attribute clustering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chowdhary, Janamejaya; Loeffler, Frank E.; Smith, Jeremy C.

    Networks are powerful tools for the presentation and analysis of interactions in multi-component systems. A commonly studied mesoscopic feature of networks is their community structure, which arises from grouping together similar nodes into one community and dissimilar nodes into separate communities. Here, the community structure of protein sequence similarity networks is determined with a new method: Attribute Clustering Dependent Communities (ACDC). Sequence similarity has hitherto typically been quantified by the alignment score or its expectation value. However, pair alignments with the same score or expectation value cannot thus be differentiated. To overcome this deficiency, the method constructs, for pair alignments, an extended alignment metric, the link attribute vector, which includes the score and other alignment characteristics. Rescaling components of the attribute vectors qualitatively identifies a systematic variation of sequence similarity within protein superfamilies. The problem of community detection is then mapped to clustering the link attribute vectors, selection of an optimal subset of links, and community structure refinement based on the partition density of the network. ACDC-predicted communities are found to be in good agreement with gold standard sequence databases for which the "ground truth" community structures (or families) are known. ACDC is therefore a community detection method for sequence similarity networks based entirely on pair similarity information. A serial implementation of ACDC is available from https://cmb.ornl.gov/resources/developments

  3. KiDS-450: tomographic cross-correlation of galaxy shear with Planck lensing

    NASA Astrophysics Data System (ADS)

    Harnois-Déraps, Joachim; Tröster, Tilman; Chisari, Nora Elisa; Heymans, Catherine; van Waerbeke, Ludovic; Asgari, Marika; Bilicki, Maciej; Choi, Ami; Erben, Thomas; Hildebrandt, Hendrik; Hoekstra, Henk; Joudaki, Shahab; Kuijken, Konrad; Merten, Julian; Miller, Lance; Robertson, Naomi; Schneider, Peter; Viola, Massimo

    2017-10-01

    We present the tomographic cross-correlation between galaxy lensing measured in the Kilo Degree Survey (KiDS-450) with overlapping lensing measurements of the cosmic microwave background (CMB), as detected by Planck 2015. We compare our joint probe measurement to the theoretical expectation for a flat Λ cold dark matter cosmology, assuming the best-fitting cosmological parameters from the KiDS-450 cosmic shear and Planck CMB analyses. We find that our results are consistent within 1σ with the KiDS-450 cosmology, with an amplitude re-scaling parameter A_KiDS = 0.86 ± 0.19. Adopting a Planck cosmology, we find our results are consistent within 2σ, with A_Planck = 0.68 ± 0.15. We show that the agreement is improved in both cases when the contamination to the signal by intrinsic galaxy alignments is accounted for, increasing A by ∼0.1. This is the first tomographic analysis of the galaxy lensing - CMB lensing cross-correlation signal, and is based on five photometric redshift bins. We use this measurement as an independent validation of the multiplicative shear calibration and of the calibrated source redshift distribution at high redshifts. We find that constraints on these two quantities are strongly correlated when obtained from this technique, which should therefore not be considered as a stand-alone competitive calibration tool.
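
    A minimal sketch of fitting a single amplitude rescaling parameter A to a measured band-power vector given a fixed theory prediction (the data, theory vector, and covariance below are invented, not KiDS or Planck products):

        import numpy as np

        def fit_amplitude(data, theory, cov):
            """Best-fit A and 1-sigma error for the model data ~ A * theory."""
            icov = np.linalg.inv(cov)
            var = 1.0 / float(theory @ icov @ theory)
            return var * float(theory @ icov @ data), np.sqrt(var)

        rng = np.random.default_rng(4)
        theory = np.array([1.0, 0.8, 0.6, 0.4, 0.2])
        cov = np.diag(np.full(5, 0.15**2))
        data = 0.86 * theory + rng.multivariate_normal(np.zeros(5), cov)
        A, sigma_A = fit_amplitude(data, theory, cov)
        print(f"A = {A:.2f} +/- {sigma_A:.2f}")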

  4. The efficacy of calibrating hydrologic model using remotely sensed evapotranspiration and soil moisture for streamflow prediction

    NASA Astrophysics Data System (ADS)

    Kunnath-Poovakka, A.; Ryu, D.; Renzullo, L. J.; George, B.

    2016-04-01

    Calibration of spatially distributed hydrologic models is frequently limited by the availability of ground observations. Remotely sensed (RS) hydrologic information provides an alternative source of observations to inform models and extend modelling capability beyond the limits of ground observations. This study examines the capability of RS evapotranspiration (ET) and soil moisture (SM) to calibrate a hydrologic model, and their efficacy in improving streamflow predictions. SM retrievals from the Advanced Microwave Scanning Radiometer-EOS (AMSR-E) and daily ET estimates from the CSIRO MODIS ReScaled potential ET (CMRSET) are used to calibrate a simplified Australian Water Resource Assessment - Landscape model (AWRA-L) for a selection of parameters. The Shuffled Complex Evolution Uncertainty Algorithm (SCE-UA) is employed for parameter estimation at eleven catchments in eastern Australia. A subset of parameters for calibration is selected based on the variance-based Sobol' sensitivity analysis. The efficacy of 15 objective functions for calibration is assessed based on streamflow predictions relative to control cases, and the relative merits of each are discussed. Synthetic experiments were conducted to examine the effect of bias in RS ET observations on calibration. The objective function containing the root mean square deviation (RMSD) of ET results in the best streamflow predictions, and its efficacy is superior for catchments with medium to high average runoff. The synthetic experiments revealed that an accurate ET product can improve streamflow predictions in catchments with low average runoff.
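
    As a sketch of how such a calibration objective can be assembled, the function below combines RMSD terms for RS ET and SM. The weights, the `model` interface, and the single-objective form are assumptions for illustration; the study's 15 objective functions and the SCE-UA optimizer itself are not reproduced.

```python
import numpy as np

def rmsd(sim, obs):
    """Root mean square deviation between simulated and observed series."""
    sim, obs = np.asarray(sim), np.asarray(obs)
    return np.sqrt(np.mean((sim - obs) ** 2))

def objective(params, model, et_obs, sm_obs, w_et=1.0, w_sm=1.0):
    """Hypothetical calibration objective combining ET and SM misfits.
    `model(params)` is assumed to return (et_sim, sm_sim) time series;
    an optimizer such as SCE-UA would minimize this over `params`."""
    et_sim, sm_sim = model(params)
    return w_et * rmsd(et_sim, et_obs) + w_sm * rmsd(sm_sim, sm_obs)
```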

  5. Testing Dissipative Magnetosphere Model Light Curves and Spectra with Fermi Pulsars

    NASA Technical Reports Server (NTRS)

    Brambilla, Gabriele; Kalapotharakos, Constantinos; Harding, Alice K.; Kazanas, Demosthenes

    2015-01-01

    We explore the emission properties of a dissipative pulsar magnetosphere model introduced by Kalapotharakos et al., comparing its high-energy light curves and spectra, due to curvature radiation, with data collected by the Fermi LAT. The magnetosphere structure is assumed to be near the force-free solution. The accelerating electric field inside the light cylinder (LC) is assumed to be negligible, while outside the LC it rescales with a finite conductivity (σ). In our approach we calculate the corresponding high-energy emission by integrating the trajectories of test particles that originate from the stellar surface, taking into account both the accelerating electric field components and the radiation reaction forces. First, we explore the parameter space assuming different value sets for the stellar magnetic field, stellar period, and conductivity. We show that the general properties of the model are in good agreement with the observed emission characteristics of young gamma-ray pulsars, including features of the phase-resolved spectra. Second, we find model parameters that fit each pulsar belonging to a group of eight bright pulsars that have a published phase-resolved spectrum. The σ values that best describe each of the pulsars in this group show an increase with the spin-down rate (Ė) and a decrease with the pulsar age, as expected if pair cascades are providing the magnetospheric conductivity. Finally, we explore the limits of our analysis and suggest future directions for improving such models.

  6. Finite-size scaling of survival probability in branching processes

    NASA Astrophysics Data System (ADS)

    Garcia-Millan, Rosalba; Font-Clos, Francesc; Corral, Álvaro

    2015-04-01

    Branching processes pervade many models in statistical physics. We investigate the survival probability of a Galton-Watson branching process after a finite number of generations. We derive analytically the existence of finite-size scaling for the survival probability as a function of the control parameter and the maximum number of generations, obtaining the critical exponents as well as the exact scaling function, which is G(y) = 2y e^y / (e^y − 1), with y the rescaled distance to the critical point. Our findings are valid for any branching process of the Galton-Watson type, independently of the distribution of the number of offspring, provided its variance is finite. This proves the universal behavior of the finite-size effects in branching processes, including the universality of the metric factors. The direct relation to mean-field percolation is also discussed.
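
    The scaling collapse can be checked numerically by iterating the offspring probability generating function. The sketch below does this for Poisson(m) offspring, for which the variance at the critical point is 1, so the metric factors reduce to unity; the choice of offspring law and parameter values is illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

def survival(m, n_gen):
    """Survival probability after each of n_gen generations of a
    Galton-Watson process with Poisson(m) offspring, obtained by
    iterating the pgf f(s) = exp(m*(s - 1)) on the extinction probability."""
    ext, probs = 0.0, []
    for _ in range(n_gen):
        ext = np.exp(m * (ext - 1.0))
        probs.append(1.0 - ext)
    return np.array(probs)

G = lambda y: 2.0 * y * np.exp(y) / (np.exp(y) - 1.0)

n = np.arange(1, 2001)
for m in (0.999, 1.0005, 1.002):
    P = survival(m, 2000)
    y = (m - 1.0) * n            # rescaled distance to the critical point
    plt.plot(y, n * P, ".", ms=2, label=f"m = {m}")
yy = np.linspace(-1.5, 3.0, 200)
plt.plot(yy, G(yy), "k-", label="G(y) = 2y e^y/(e^y - 1)")
plt.xlabel("y = (m - 1) n")
plt.ylabel("n * P_n")
plt.legend()
plt.show()
```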

  7. DBI potential, DBI inflation action and general Lagrangian relative to phantom, K-essence and quintessence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Qing; Huang, Yong-Chang, E-mail: ychuang@bjut.edu.cn

    We derive a Dirac-Born-Infeld (DBI) potential and DBI inflation action by rescaling the metric. The determinant of the induced metric naturally includes the kinetic energy and the potential energy. In particular, the potential energy and kinetic energy can convert into each other in any order, in agreement with the classical-physics limit. This is quite different from the usual DBI action. We show that the Taylor expansion of the DBI action reduces to the corresponding form in non-linear classical physics. These investigations support the statement that the results of string theory are consistent with quantum mechanics and classical physics. We deduce the Phantom, K-essence, Quintessence and Generalized Klein-Gordon Equation from the DBI model.

  8. Experiments and simulation of the growth of droplets on a surface (breath figures)

    NASA Astrophysics Data System (ADS)

    Fritter, Daniela; Knobler, Charles M.; Beysens, Daniel A.

    1991-03-01

    Detailed experiments are reported on the growth of droplets when water vapor condenses from a saturated carrier gas onto a hydrophobic plane substrate. We have investigated the effects of the carrier-gas flow velocity, the nature of the gas, the experimental geometry, and heat transfer through the substrate. Individual drops grow according to a power law with exponent μ = 1/3. At high flow velocities, the temperature of the substrate can rise significantly, which lowers the condensation rate and leads to lower apparent growth-law exponents. A self-similar regime is reached when droplets interact by coalescences. The coalescences continuously rescale the pattern, produce spatial correlations between the droplets, and accelerate the growth, leading to a power law with an exponent μ₀ = 3μ. The experiments are compared to predictions of scaling laws and to simulations.
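
    The growth-law exponents quoted above are the slopes of log-log fits. A minimal sketch, with synthetic radius data standing in for measurements:

```python
import numpy as np

# Hypothetical droplet-radius time series (arbitrary units).
t = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
rng = np.random.default_rng(0)
R = 2.0 * t ** (1.0 / 3.0) * (1.0 + 0.02 * rng.standard_normal(t.size))

# Growth-law exponent mu from a log-log least-squares fit of R ~ t^mu.
mu, _ = np.polyfit(np.log(t), np.log(R), 1)
print(f"mu = {mu:.3f}  (isolated drops: 1/3; coalescence regime: mu0 = 3*mu)")
```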

  9. Quantum corrections to the gravitational potentials of a point source due to conformal fields in de Sitter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fröb, Markus B.; Verdaguer, Enric, E-mail: mfroeb@itp.uni-leipzig.de, E-mail: enric.verdaguer@ub.edu

    We derive the leading quantum corrections to the gravitational potentials in a de Sitter background, due to the vacuum polarization from loops of conformal fields. Our results are valid for arbitrary conformal theories, even strongly interacting ones, and are expressed using the coefficients b and b' appearing in the trace anomaly. Apart from the de Sitter generalization of the known flat-space results, we find two additional contributions: one which depends on the finite coefficients of terms quadratic in the curvature appearing in the renormalized effective action, and one which grows logarithmically with physical distance. While the first contribution corresponds to a rescaling of the effective mass, the second contribution leads to a faster fall-off of the Newton potential at large distances, and is potentially measurable.

  10. Disentangling Random Motion and Flow in a Complex Medium

    PubMed Central

    Koslover, Elena F.; Chan, Caleb K.; Theriot, Julie A.

    2016-01-01

    We describe a technique for deconvolving the stochastic motion of particles from large-scale fluid flow in a dynamic environment such as that found in living cells. The method leverages the separation of timescales to subtract out the persistent component of motion from single-particle trajectories. The mean-squared displacement of the resulting trajectories is rescaled so as to enable robust extraction of the diffusion coefficient and subdiffusive scaling exponent of the stochastic motion. We demonstrate the applicability of the method for characterizing both diffusive and fractional Brownian motion overlaid by flow and analytically calculate the accuracy of the method in different parameter regimes. This technique is employed to analyze the motion of lysosomes in motile neutrophil-like cells, showing that the cytoplasm of these cells behaves as a viscous fluid at the timescales examined. PMID:26840734
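
    A minimal sketch of the timescale-separation idea: estimate the persistent (flow) velocity with a running mean, subtract it, and fit the residual trajectory's mean-squared displacement as MSD(τ) = 4Dτ^α. The window length, synthetic trajectory, and fitting range are illustrative assumptions; the published method's MSD rescaling differs in detail.

```python
import numpy as np

def subtract_flow(traj, window):
    """Remove the persistent component of a 2D trajectory by subtracting
    a running-mean velocity (valid when the flow varies slowly compared
    with the stochastic motion)."""
    v = np.diff(traj, axis=0)
    kernel = np.ones(window) / window
    v_flow = np.column_stack([np.convolve(v[:, i], kernel, mode="same")
                              for i in range(traj.shape[1])])
    return np.vstack([traj[:1], traj[:1] + np.cumsum(v - v_flow, axis=0)])

def msd(traj, max_lag):
    """Time-averaged mean-squared displacement."""
    return np.array([np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1))
                     for lag in range(1, max_lag + 1)])

# Synthetic data: diffusive steps plus a constant drift (the "flow").
rng = np.random.default_rng(1)
steps = 0.1 * rng.standard_normal((5000, 2)) + np.array([0.05, 0.0])
traj = np.cumsum(steps, axis=0)

m = msd(subtract_flow(traj, window=200), max_lag=100)
alpha, log4D = np.polyfit(np.log(np.arange(1, 101)), np.log(m), 1)
print(f"alpha = {alpha:.2f}, D = {np.exp(log4D) / 4:.4f}")
```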

  11. Instantons in Lifshitz field theories

    NASA Astrophysics Data System (ADS)

    Fujimori, Toshiaki; Nitta, Muneto

    2015-10-01

    BPS instantons are discussed in Lifshitz-type anisotropic field theories. We consider generalizations of the sigma model/Yang-Mills instantons in renormalizable higher-dimensional models with the classical Lifshitz scaling invariance. In each model, the BPS instanton equation takes the form of the gradient flow equations for "the superpotential" defining "the detailed balance condition". The anisotropic Weyl rescaling and the coset space dimensional reduction are used to map rotationally symmetric instantons to vortices in two-dimensional anisotropic systems on the hyperbolic plane. As examples, we study an anisotropic BPS baby Skyrmion in 1+1 dimensions and a BPS Skyrmion in 2+1 dimensions, for which we take the Kähler 1-form and the Wess-Zumino-Witten term as the superpotentials, respectively, and an anisotropic generalized Yang-Mills instanton in 4+1 dimensions, for which we take the Chern-Simons term as the superpotential.

  12. Universality of citation distributions: toward an objective measure of scientific impact.

    PubMed

    Radicchi, Filippo; Fortunato, Santo; Castellano, Claudio

    2008-11-11

    We study the distributions of citations received by a single publication within several disciplines, spanning broad areas of science. We show that the probability that an article is cited c times has large variations between different disciplines, but all distributions collapse onto a universal curve when the relative indicator c_f = c/c_0 is considered, where c_0 is the average number of citations per article for the discipline. In addition we show that the same universal behavior occurs when citation distributions of articles published in the same field, but in different years, are compared. These findings provide a strong validation of c_f as an unbiased indicator for citation performance across disciplines and years. Based on this indicator, we introduce a generalization of the h index suitable for comparing scientists working in different fields.
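
    The rescaling itself is a one-liner: divide every article's citation count by the average for its discipline. A sketch with made-up counts:

```python
import numpy as np

def rescaled_citations(citations_by_field):
    """Relative indicator c_f = c / c_0, where c_0 is the field average."""
    return {field: np.asarray(c, dtype=float) / np.mean(c)
            for field, c in citations_by_field.items()}

# Hypothetical counts for two fields with different citation cultures.
data = {"math":    [0, 1, 1, 2, 3, 5, 8, 40],
        "biology": [2, 5, 10, 14, 22, 35, 60, 300]}
for field, cf in rescaled_citations(data).items():
    print(field, np.round(cf, 2))
```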

  13. Traces on orbifolds: anomalies and one-loop amplitudes

    NASA Astrophysics Data System (ADS)

    Groot Nibbelink, Stefan

    2003-07-01

    In the recent literature one can find calculations of various one-loop amplitudes, like anomalies, tadpoles and vacuum energies, on specific types of orbifolds, like S¹/Z₂. This work aims to give a general description of such one-loop computations for a large class of orbifold models. In order to achieve a high degree of generality, we formulate these calculations as evaluations of traces of operators over orbifold Hilbert spaces. We find that in general the result is expressed as a sum of traces over hypersurfaces with local projections, with the derivatives perpendicular to these hypersurfaces rescaled. These local projectors naturally take into account possible non-periodic boundary conditions. As the examples T⁶/Z₄ and T⁴/D₄ illustrate, the methods can be applied to non-prime as well as non-abelian orbifolds.

  14. Projective flatness in the quantisation of bosons and fermions

    NASA Astrophysics Data System (ADS)

    Wu, Siye

    2015-07-01

    We compare the quantisation of linear systems of bosons and fermions. We recall the appearance of projectively flat connection and results on parallel transport in the quantisation of bosons. We then discuss pre-quantisation and quantisation of fermions using the calculus of fermionic variables. We define a natural connection on the bundle of Hilbert spaces and show that it is projectively flat. This identifies, up to a phase, equivalent spinor representations constructed by various polarisations. We introduce the concept of metaplectic correction for fermions and show that the bundle of corrected Hilbert spaces is naturally flat. We then show that the parallel transport in the bundle of Hilbert spaces along a geodesic is a rescaled projection provided that the geodesic lies within the complement of a cut locus. Finally, we study the bundle of Hilbert spaces when there is a symmetry.

  15. Pointless strings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Periwal, V.

    1988-01-01

    The author proves that bosonic string perturbation theory diverges and is not Borel summable. This is an indication of a non-perturbative instability of the bosonic string vacuum. He formulates two-dimensional sigma models in terms of algebras of functions. He extends this formulation to general C* algebras. He illustrates the utility of these algebraic notions by calculating some determinants of interest in the study of string propagation in orbifold backgrounds. He studies the geometry of spaces of field theories and shows that the vanishing of the curvature of the natural Gel'fand-Naimark-Segal metric on such spaces is exactly the strong associativity condition of the operator product expansion. He shows that string scattering amplitudes arise as invariants of renormalization, when he formulates renormalization in terms of rescalings of the metric on the string world-sheet.

  16. First-Principles Propagation of Geoelectric Fields from Ionosphere to Ground using LANLGeoRad

    NASA Astrophysics Data System (ADS)

    Jeffery, C. A.; Woodroffe, J. R.; Henderson, M. G.

    2017-12-01

    A notable deficiency in the current space-weather (SW) forecasting chain is the propagation of geoelectric fields from ionosphere to ground using Biot-Savart integrals, which ignore the localized complexity of lithospheric electrical conductivity and the relatively high conductivity of ocean water compared to the lithosphere. Three-dimensional models of Earth conductivity with mesoscale spatial resolution are being developed, but a new approach is needed to incorporate this information into the SW forecast chain. We present initial results from a first-principles geoelectric propagation model called LANLGeoRad, which solves Maxwell's equations on an unstructured geodesic grid. Challenges associated with the disparate response times of millisecond electromagnetic propagation and 10-second geomagnetic fluctuations are highlighted, and a novel rescaling of the ionosphere/ground system is presented that renders this geoelectric system computationally tractable.

  17. Modelling the large-scale redshift-space 3-point correlation function of galaxies

    NASA Astrophysics Data System (ADS)

    Slepian, Zachary; Eisenstein, Daniel J.

    2017-08-01

    We present a configuration-space model of the large-scale galaxy 3-point correlation function (3PCF) based on leading-order perturbation theory and including redshift-space distortions (RSD). This model should be useful in extracting distance-scale information from the 3PCF via the baryon acoustic oscillation method. We include the first redshift-space treatment of biasing by the baryon-dark matter relative velocity. Overall, on large scales the effect of RSD is primarily a renormalization of the 3PCF that is roughly independent of both physical scale and triangle opening angle; for our adopted Ωm and bias values, the rescaling is a factor of ˜1.8. We also present an efficient scheme for computing 3PCF predictions from our model, important for allowing fast exploration of the space of cosmological parameters in future analyses.

  18. Polynomial interpretation of multipole vectors

    NASA Astrophysics Data System (ADS)

    Katz, Gabriel; Weeks, Jeff

    2004-09-01

    Copi, Huterer, Starkman, and Schwarz introduced multipole vectors in a tensor context and used them to demonstrate that the first-year Wilkinson microwave anisotropy probe (WMAP) quadrupole and octopole planes align at roughly the 99.9% confidence level. In the present article, the language of polynomials provides a new and independent derivation of the multipole vector concept. Bézout’s theorem supports an elementary proof that the multipole vectors exist and are unique (up to rescaling). The constructive nature of the proof leads to a fast, practical algorithm for computing multipole vectors. We illustrate the algorithm by finding exact solutions for some simple toy examples and numerical solutions for the first-year WMAP quadrupole and octopole. We then apply our algorithm to Monte Carlo skies to independently reconfirm the estimate that the WMAP quadrupole and octopole planes align at the 99.9% level.

  19. VizieR Online Data Catalog: Neutron-capture elements abundances in Cepheids (da Silva+ 2016)

    NASA Astrophysics Data System (ADS)

    da Silva, R.; Lemasle, B.; Bono, G.; Genovali, K.; McWilliam, A.; Cristallo, S.; Bergemann, M.; Buonanno, R.; Fabrizio, M.; Ferraro, I.; Francois, P.; Iannicola, G.; Inno, L.; Laney, C. D.; Kudritzki, R.-P.; Matsunaga, N.; Nonino, M.; Primas, F.; Przybilla, N.; Romaniello, M.; Thevenin, F.; Urbaneja, M. A.

    2015-11-01

    The abundances of Fe, Y, La, Ce, Nd, and Eu for our sample of 73 Cepheids, plus data available in the literature for another 362 Cepheids, are shown. We first show the abundances derived from individual spectra for the 73 stars, then the averaged values, and finally the data from the literature. The original abundances available in the literature were rescaled according to the zero-point differences listed in Table 5. Priority was given in the following order: we first adopt the abundances provided by our group, this study (TS) and Lemasle et al. (2013A&A...558A..31L, LEM), and then those provided by the other studies, Luck & Lambert (2011AJ....142..136L, LIII), and Luck et al. (2011AJ....142...51L, LII). (4 data files).

  20. Performance and scaling of a novel locomotor structure: adhesive capacity of climbing gobiid fishes.

    PubMed

    Maie, Takashi; Schoenfuss, Heiko L; Blob, Richard W

    2012-11-15

    Many species of gobiid fishes adhere to surfaces using a sucker formed from fusion of the pelvic fins. Juveniles of many amphidromous species use this pelvic sucker to scale waterfalls during migrations to upstream habitats after an oceanic larval phase. However, adults may still use suckers to re-scale waterfalls if displaced. If attachment force is proportional to sucker area and if growth of the sucker is isometric, then increases in the forces that climbing fish must resist might outpace adhesive capacity, causing climbing performance to decline through ontogeny. To test for such trends, we measured pressure differentials and adhesive suction forces generated by the pelvic sucker across wide size ranges in six goby species, including climbing and non-climbing taxa. Suction was achieved via two distinct growth strategies: (1) small suckers with isometric (or negatively allometric) scaling among climbing gobies and (2) large suckers with positively allometric growth in non-climbing gobies. Species using the first strategy show a high baseline of adhesive capacity that may aid climbing performance throughout ontogeny, with pressure differentials and suction forces much greater than expected if adhesion were a passive function of sucker area. In contrast, the large suckers possessed by non-climbing species may help compensate for reduced pressure differentials, thereby producing suction sufficient to support body weight. Climbing Sicyopterus species also use oral suckers while climbing waterfalls, and these exhibited scaling patterns similar to those for pelvic suckers. However, oral suction force was considerably lower than that for pelvic suckers, reducing the ability of these fish to attach to substrates by the oral sucker alone.
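
    Scaling statements like these come from log-log regressions of force against body size. A minimal sketch with synthetic data (the exponent and measurements below are invented for illustration):

```python
import numpy as np

# Hypothetical body lengths (mm) and pelvic-sucker suction forces (mN).
L = np.array([30.0, 45.0, 60.0, 80.0, 100.0, 130.0])
F = 0.05 * L ** 2.4

# Allometric exponent b from a log-log fit of F ~ L^b.
b, _ = np.polyfit(np.log(L), np.log(F), 1)
# Isometry predicts force ~ sucker area ~ L^2 while weight grows ~ L^3,
# so b > 2 indicates adhesion outpacing the passive-area expectation.
print(f"b = {b:.2f}")
```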

  1. Wigner phase space distribution via classical adiabatic switching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bose, Amartya; Makri, Nancy; Department of Physics, University of Illinois, 1110 W. Green Street, Urbana, Illinois 61801

    2015-09-21

    Evaluation of the Wigner phase space density for systems of many degrees of freedom presents an extremely demanding task because of the oscillatory nature of the Fourier-type integral. We propose a simple and efficient, approximate procedure for generating the Wigner distribution that avoids the computational difficulties associated with the Wigner transform. Starting from a suitable zeroth-order Hamiltonian, for which the Wigner density is available (either analytically or numerically), the phase space distribution is propagated in time via classical trajectories, while the perturbation is gradually switched on. According to the classical adiabatic theorem, each trajectory maintains a constant action if the perturbation is switched on infinitely slowly. We show that the adiabatic switching procedure produces the exact Wigner density for harmonic oscillator eigenstates and also for eigenstates of anharmonic Hamiltonians within the Wentzel-Kramers-Brillouin (WKB) approximation. We generalize the approach to finite temperature by introducing a density rescaling factor that depends on the energy of each trajectory. Time-dependent properties are obtained simply by continuing the integration of each trajectory under the full target Hamiltonian. Further, by construction, the generated approximate Wigner distribution is invariant under classical propagation, and thus, thermodynamic properties are strictly preserved. Numerical tests on one-dimensional and dissipative systems indicate that the method produces results in very good agreement with those obtained by full quantum mechanical methods over a wide temperature range. The method is simple and efficient, as it requires no input besides the force fields required for classical trajectory integration, and is ideal for use in quasiclassical trajectory calculations.

  2. Impinging laminar jets at moderate Reynolds numbers and separation distances.

    PubMed

    Bergthorson, Jeffrey M; Sone, Kazuo; Mattner, Trent W; Dimotakis, Paul E; Goodwin, David G; Meiron, Dan I

    2005-12-01

    An experimental and numerical study of impinging, incompressible, axisymmetric, laminar jets is described, where the jet axis of symmetry is aligned normal to the wall. Particle streak velocimetry (PSV) is used to measure axial velocities along the centerline of the flow field. The jet-nozzle pressure drop is measured simultaneously and determines the Bernoulli velocity. The flow field is simulated numerically by an axisymmetric Navier-Stokes spectral-element code, an axisymmetric potential-flow model, and an axisymmetric one-dimensional stream-function approximation. The axisymmetric viscous and potential-flow simulations include the nozzle in the solution domain, allowing nozzle-wall proximity effects to be investigated. Scaling the centerline axial velocity by the Bernoulli velocity collapses the experimental velocity profiles onto a single curve that is independent of the nozzle-to-plate separation distance. Axisymmetric direct numerical simulations yield good agreement with experiment and confirm the velocity profile scaling. Potential-flow simulations reproduce the collapse of the data; however, viscous effects result in disagreement with experiment. Axisymmetric one-dimensional stream-function simulations can predict the flow in the stagnation region if the boundary conditions are correctly specified. The scaled axial velocity profiles are well characterized by an error function with one Reynolds-number-dependent parameter. Rescaling the wall-normal distance by the boundary-layer displacement-thickness-corrected diameter yields a collapse of the data onto a single curve that is independent of the Reynolds number. These scalings allow the specification of an analytical expression for the velocity profile of an impinging laminar jet over the Reynolds number range investigated.
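
    The one-parameter error-function characterization can be sketched as a simple curve fit; the model form and synthetic data below are illustrative stand-ins for the scaled PSV profiles, not the authors' exact expression.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def profile(z, a):
    """One-parameter error-function model for the scaled axial velocity
    u/U_Bernoulli versus scaled wall-normal distance z."""
    return erf(a * z)

# Hypothetical scaled centerline data.
z = np.linspace(0.0, 2.0, 30)
rng = np.random.default_rng(2)
u = erf(1.3 * z) + 0.01 * rng.standard_normal(z.size)

(a_fit,), _ = curve_fit(profile, z, u, p0=[1.0])
print(f"Reynolds-number-dependent parameter a = {a_fit:.2f}")
```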

  3. Measurement and inference of profile soil-water dynamics at different hillslope positions in a semiarid agricultural watershed

    NASA Astrophysics Data System (ADS)

    Green, Timothy R.; Erskine, Robert H.

    2011-12-01

    Dynamics of profile soil water vary with terrain, soil, and plant characteristics. The objectives addressed here are to quantify dynamic soil water content over a range of slope positions, infer soil profile water fluxes, and identify locations most likely influenced by multidimensional flow. The instrumented 56 ha watershed lies mostly within a dryland (rainfed) wheat field in semiarid eastern Colorado. Dielectric capacitance sensors were used to infer hourly soil water content for approximately 8 years (minus missing data) at 18 hillslope positions and four or more depths. Based on previous research and a new algorithm, sensor measurements (resonant frequency) were rescaled to estimate soil permittivity, then corrected for temperature effects on bulk electrical conductivity before inferring soil water content. Using a mass-conservation method, we analyzed multitemporal changes in soil water content at each sensor to infer the dynamics of water flux at different depths and landscape positions. At summit positions vertical processes appear to control profile soil water dynamics. At downslope positions infrequent overland flow and unsaturated subsurface lateral flow appear to influence soil water dynamics. Crop water use accounts for much of the variability in soil water between transects that are either cropped or fallow in alternating years, while soil hydraulic properties and near-surface hydrology affect soil water variability across landscape positions within each management zone. The observed spatiotemporal patterns exhibit the joint effects of short-term hydrology and long-term soil development. Quantitative methods of analyzing soil water patterns in space and time improve our understanding of dominant soil hydrological processes and provide alternative measures of model performance.

  4. Flow over Canopies with Complex Morphologies

    NASA Astrophysics Data System (ADS)

    Rubol, S.; Ling, B.; Battiato, I.

    2017-12-01

    Quantifying and predicting how submerged vegetation affects the velocity profile of riverine systems is crucial in ecohydraulics to properly assess the water quality and ecological functions of rivers. The state of the art includes a plethora of models to study the flow and transport over submerged canopies. However, most of them are validated against data collected in flume experiments with rigid cylinders. With the objective of investigating the capability of a simple analytical solution for vegetated flow to reproduce and predict the velocity profile of complex-shaped flexible canopies, we use the flow model proposed by Battiato and Rubol [WRR 2013] as the analytical approximation of the mean velocity profile above and within the canopy layer. This model has the advantages of (i) treating the canopy layer as a porous medium, whose geometrical properties are associated with a macroscopic effective permeability, and (ii) using input parameters that can be estimated by remote sensing techniques, such as the heights of the water level and the canopy. The analytical expressions for the average velocity profile and the discharge are tested against data collected across a wide range of canopy morphologies commonly encountered in riverine systems, such as grasses, woody vegetation and bushes. Results indicate good agreement between the analytical expressions and the data for both simple and complex plant geometry shapes. The rescaled low-submergence velocities in the canopy layer followed the same scaling found in arrays of rigid cylinders. In addition, for the dataset analyzed, the Darcy friction factor scaled with the inverse of the bulk Reynolds number multiplied by the ratio of the fluid to turbulent viscosity.

  5. Multifractal Turbulence in the Heliosphere

    NASA Astrophysics Data System (ADS)

    Macek, Wieslaw M.; Wawrzaszek, Anna

    2010-05-01

    We consider a solar wind plasma with frozen-in interplanetary magnetic fields, which is a complex nonlinear system that may exhibit chaos and intermittency, resulting in a multifractal scaling of plasma characteristics. We analyze time series of plasma velocity and interplanetary magnetic field strengths measured during space missions onboard various spacecraft, such as Helios, Advanced Composition Explorer, Ulysses, and Voyager, exploring different regions of the heliosphere during solar minimum and maximum. To quantify the multifractality of solar wind turbulence, we use a generalized two-scale weighted Cantor set with two different rescaling parameters [1]. We investigate the resulting spectrum of generalized dimensions and the corresponding multifractal singularity spectrum depending on the parameters of this new cascade model [2]. We show that using the model with two different scaling parameters one can explain the multifractal singularity spectrum, which is often asymmetric. In particular, the multifractal scaling of magnetic fields is asymmetric in the outer heliosphere, in contrast to the symmetric spectrum observed in the heliosheath as described by the standard one-scale model [3]. We hope that the generalized multifractal model will be a useful tool for analysis of intermittent turbulence in the heliospheric plasma. We thus believe that multifractal analysis of various complex environments can shed light on the nature of turbulence. [1] W. M. Macek and A. Szczepaniak, Generalized two-scale weighted Cantor set model for solar wind turbulence, Geophys. Res. Lett., 35, L02108 (2008), doi:10.1029/2007GL032263. [2] W. M. Macek and A. Wawrzaszek, Evolution of asymmetric multifractal scaling of solar wind turbulence in the outer heliosphere, J. Geophys. Res., A013795 (2009), doi:10.1029/2008JA013795. [3] W. M. Macek and A. Wawrzaszek, Multifractal turbulence at the termination shock, in Solar Wind Twelve, edited by M. Maximovic et al., American Institute of Physics, 2010.
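
    For the generalized two-scale weighted Cantor set, the generalized dimensions D_q follow from the standard implicit partition-function relation; a minimal numerical sketch (the weights and scales below are arbitrary example values, not fitted solar-wind parameters):

```python
import numpy as np
from scipy.optimize import brentq

def Dq(q, p=0.6, l1=0.4, l2=0.6):
    """Generalized dimension D_q of a two-scale weighted Cantor measure
    (weights p, 1-p on segments of lengths l1, l2), solving
    p^q * l1^(-(q-1) Dq) + (1-p)^q * l2^(-(q-1) Dq) = 1."""
    if abs(q - 1.0) < 1e-9:   # information dimension, the q -> 1 limit
        return ((p * np.log(p) + (1 - p) * np.log(1 - p))
                / (p * np.log(l1) + (1 - p) * np.log(l2)))
    f = lambda D: p**q * l1**(-(q - 1) * D) + (1 - p)**q * l2**(-(q - 1) * D) - 1.0
    return brentq(f, -10.0, 10.0)

qs = np.linspace(-5, 5, 11)
print([round(Dq(q), 3) for q in qs])   # a decreasing D_q signals multifractality
```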

  6. Dynamics of Large-Scale Solar-Wind Streams Obtained by the Double Superposed Epoch Analysis: 2. Comparisons of CIRs vs. Sheaths and MCs vs. Ejecta

    NASA Astrophysics Data System (ADS)

    Yermolaev, Y. I.; Lodkina, I. G.; Nikolaeva, N. S.; Yermolaev, M. Y.

    2017-12-01

    This work is a continuation of our previous article (Yermolaev et al. in J. Geophys. Res. 120, 7094, 2015), which describes the average temporal profiles of interplanetary plasma and field parameters in large-scale solar-wind (SW) streams: corotating interaction regions (CIRs), interplanetary coronal mass ejections (ICMEs, including both magnetic clouds (MCs) and ejecta), and sheaths, as well as interplanetary shocks (ISs). As in the previous article, we use the data of the OMNI database, our catalog of large-scale solar-wind phenomena during 1976 - 2000 (Yermolaev et al. in Cosmic Res., 47, 2, 81, 2009), and the method of double superposed epoch analysis (Yermolaev et al. in Ann. Geophys., 28, 2177, 2010a). We rescale the durations of all types of structures in such a way that their beginnings and endings coincide. We present new detailed results comparing paired phenomena: 1) the two types of compression regions (CIRs vs. sheaths) and 2) the two types of ICMEs (MCs vs. ejecta). The obtained data allow us to suggest that the formation of the two types of compression regions follows the same physical mechanism, regardless of the type of piston (high-speed stream (HSS) or ICME); the differences are connected to the geometry (i.e. the angle between the speed gradient in front of the piston and the satellite trajectory) and the jumps in speed at the edges of the compression regions. In our opinion, one possible reason behind the observed differences in the parameters of MCs and ejecta is that when ejecta are observed, the satellite passes farther from the nose of the ICME than when MCs are observed.
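
    The rescaling step of the double superposed epoch analysis amounts to mapping each event interval onto a common normalized epoch axis before averaging. A minimal sketch (interval boundaries and signals are placeholders):

```python
import numpy as np

def rescale_epochs(t, x, t_start, t_end, n_points=100):
    """Interpolate one event interval [t_start, t_end] onto a normalized
    epoch axis [0, 1], so the boundaries of all events coincide."""
    phase = np.linspace(0.0, 1.0, n_points)
    return np.interp(t_start + phase * (t_end - t_start), t, x)

def superpose(events, n_points=100):
    """Point-wise average over events given as (t, x, t_start, t_end)."""
    return np.mean([rescale_epochs(t, x, t0, t1, n_points)
                    for t, x, t0, t1 in events], axis=0)

t = np.linspace(0.0, 10.0, 200)
events = [(t, np.sin(t), 2.0, 7.0), (t, np.sin(1.3 * t), 1.0, 9.0)]
mean_profile = superpose(events)
```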

  7. Magnetocaloric effect and critical field analysis in Eu substituted La0.7-xEuxSr0.3MnO3 (x = 0.0, 0.1, 0.2, 0.3) manganites

    NASA Astrophysics Data System (ADS)

    Vadnala, Sudharshan; Asthana, Saket

    2018-01-01

    In this study, we have investigated the magnetic behavior, magnetocaloric effect and critical exponent analysis of La0.7-xEuxSr0.3MnO3 (x = 0.0, 0.1, 0.2, 0.3) manganites synthesized through the solid state reaction route. The crystallographic data obtained from refinement of X-ray diffraction patterns reveal that the crystal structure changes from rhombohedral (for x = 0.0) to orthorhombic (for x ≥ 0.1). The average ionic radius of the A-site decreases from 1.384 Å (for x = 0.0) to 1.360 Å (for x = 0.3) with Eu3+ substitution, which in turn decreases the Mn-O-Mn bond angles. Magnetization measurements are performed in the vicinity of TC to determine the magnetocaloric effect (MCE) and critical field behavior. The maximum magnetic entropy change ΔS_M^max (for μ₀ΔH = 6 T) increases with Eu3+ substitution from 3.88 J/kg K (for x = 0.0) to 5.03 J/kg K (for x = 0.3) at the transition temperature. The critical field behaviour of the compounds was analysed using various methods, such as modified Arrott plots, the Kouvel-Fisher method and the critical isotherm, to determine the critical temperature and critical exponents (β, γ and δ). The obtained critical exponents are in good accordance with the scaling relation. The temperature dependence of the exponent n, for different magnetic fields, is studied using the relation ΔS_M ∝ H^n. The values of n are found to obey the Curie-Weiss law for temperatures above the transition temperature. The rescaled change-in-entropy data for all compounds collapse onto the same universal curve, revealing a second-order phase transition.
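
    The exponent n in ΔS_M ∝ H^n is again a log-log slope; a minimal sketch with invented entropy-change data (for reference, the mean-field value at T_C is n = 2/3):

```python
import numpy as np

# Hypothetical peak entropy changes (J/kg K) at applied fields mu0*H (T).
H = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
dS = 1.1 * H ** 0.65

# Exponent n from a log-log fit of DeltaS_M ~ H^n.
n, _ = np.polyfit(np.log(H), np.log(dS), 1)
print(f"n = {n:.2f}")
```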

  8. The Internet As a Large-Scale Complex System

    NASA Astrophysics Data System (ADS)

    Park, Kihong; Willinger, Walter

    2005-06-01

    The Internet may be viewed as a "complex system" with diverse features and many components that can give rise to unexpected emergent phenomena, revealing much about its own engineering. This book brings together chapter contributions from a workshop held at the Santa Fe Institute in March 2001. This volume captures a snapshot of some features of the Internet that may be fruitfully approached using a complex systems perspective, meaning using interdisciplinary tools and methods to tackle the subject area. The Internet penetrates the socioeconomic fabric of everyday life; a broader and deeper grasp of the Internet may be needed to meet the challenges facing the future. The resulting empirical data have already proven to be invaluable for gaining novel insights into the network's spatio-temporal dynamics, and can be expected to become even more important when trying to explain the Internet's complex and emergent behavior in terms of elementary networking-based mechanisms. The discoveries of fractal or self-similar network traffic traces, power-law behavior in network topology and World Wide Web connectivity are instances of unsuspected, emergent system traits. Another important factor at the heart of fair, efficient, and stable sharing of network resources is user behavior. Network systems, when inhabited by selfish or greedy users, take on the traits of a noncooperative multi-party game, and their stability and efficiency are integral to understanding the overall system and its dynamics. Lastly, fault-tolerance and robustness of large-scale network systems can exhibit spatial and temporal correlations whose effective analysis and management may benefit from rescaling techniques applied in certain physical and biological systems. The present book will bring together several of the leading workers involved in the analysis of complex systems with the future development of the Internet.

  9. A systematic review and meta-analysis to revise the Fenton growth chart for preterm infants.

    PubMed

    Fenton, Tanis R; Kim, Jae H

    2013-04-20

    The aim of this study was to revise the 2003 Fenton Preterm Growth Chart, specifically to: a) harmonize the preterm growth chart with the new World Health Organization (WHO) Growth Standard, b) smooth the data between the preterm and WHO estimates, informed by the Preterm Multicentre Growth (PreM Growth) study, while maintaining data integrity from 22 to 36 and at 50 weeks, and c) re-scale the chart x-axis to actual age (rather than completed weeks) to support growth monitoring. Systematic review, meta-analysis, and growth chart development. We systematically searched published and unpublished literature to find population-based preterm size-at-birth measurement (weight, length, and/or head circumference) references from developed countries with: corrected gestational ages through infant assessment and/or statistical correction; data percentiles as low as 24 weeks gestational age or lower; samples with greater than 500 infants less than 30 weeks. Growth curves for males and females were produced using cubic splines to 50 weeks post-menstrual age. LMS parameters (skew, median, and standard deviation) were calculated. Six large population-based surveys of size at preterm birth, representing 3,986,456 births (34,639 births < 30 weeks) from Germany, the United States, Italy, Australia, Scotland, and Canada, were combined in meta-analyses. Smooth growth chart curves were developed, while ensuring close agreement with the data between 24 and 36 weeks and at 50 weeks. The revised sex-specific actual-age growth charts are based on the recommended growth goal for preterm infants, the fetus, followed by the term infant. These preterm growth charts, with the disjunction between these datasets smoothed as informed by the international PreM Growth study, may support an improved transition of preterm infant growth monitoring to the WHO growth charts.
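
    Given the LMS parameters at an age, individual measurements are converted to z-scores with Cole's standard transformation; a minimal sketch (the LMS triplet below is a made-up example, not a value from the revised chart):

```python
import numpy as np

def lms_zscore(x, L, M, S):
    """Z-score from LMS parameters (L = skew, M = median, S = spread),
    using Cole's transformation; the log form is the L -> 0 limit."""
    if abs(L) > 1e-12:
        return ((x / M) ** L - 1.0) / (L * S)
    return np.log(x / M) / S

# Hypothetical LMS triplet for weight (kg) at one post-menstrual age.
print(f"z = {lms_zscore(x=1.9, L=0.3, M=1.8, S=0.12):.2f}")
```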

  10. On Land Ice Mass Change in Southernmost South America, Antarctic Peninsula and Coastal Antarctica consistent with GRACE, GPS and Reconstructed Ice History for Past 1000 years.

    NASA Astrophysics Data System (ADS)

    Ivins, Erik; Wiese, David; Watkins, Michael; Yuan, Dah-Ning; Landerer, Felix; Simms, Alex; Boening, Carmen

    2014-05-01

    The improved spatial coverage provided by high-quality Global Positioning System observing systems on exposed bedrock has allowed these space geodetic experiments to play an increasingly important role in constraining both glacial isostatic adjustment (GIA) processes and viscoelastic responses to present-day glacial mass changes (PGMC). Improved constraints on models of ice mass change in the Southern Hemisphere at present-day, during the Little Ice Age, and during the Late Holocene are invaluable for reconciling climate and sea-level variability on a global scale during the present solar radiation forcing and Milankovic orbital configuration. Studies by Jacobs et al. (1992), Whitehouse et al. (2012), King et al. (2012), Boening et al (2012), and others, support the contention that GRACE observations of both GIA and PGMC in the Southern Hemisphere are dominated by the geography and climate of coastal environments. This makes the proper masking of those environments for GRACE-determinations of secular mass balance especially sensitive, and downscaling, rescaling, and use of correlation mascon methods a non-trivial part of the analysis. Here we employ two analysis methods to determine the mass balances of the Antarctic Peninsula and Patagonia and incorporate GPS observations of ongoing uplift for GIA correction into both. Using data that roughly span 2002-2013, we determine -25 ± 5 Gt/yr for the uncorrected Antarctic Peninsula (AP) and -12 Gt/yr for southern Patagonia and the Cordillera Darwin (PCD). With corrections for GIA these are increased to -34 ± 8 Gt/yr for AP and -22 ± 6 Gt/yr for PCD.

  11. The power laws of violence against women: rescaling research and policies.

    PubMed

    Kappler, Karolin E; Kaltenbrunner, Andreas

    2012-01-01

    Violence against Women, despite its perpetuation over centuries and its omnipresence at all social levels, entered into social consciousness and the general agenda of Social Sciences only recently, mainly thanks to feminist research, campaigns, and general social awareness. The present article analyzes, in a secondary analysis of German prevalence data on Violence against Women, whether the frequency and severity of Violence against Women can be described with power laws. Although the investigated distributions all resemble power-law distributions, a rigorous statistical analysis accepts this hypothesis at a significance level of 0.1 only for 1 of 5 cases of the tested frequency distributions, and with some restrictions for the severity of physical violence. Lowering the significance level to 0.01 leads to the acceptance of the power-law hypothesis in 2 of the 5 tested frequency distributions, as well as for the severity of domestic violence. The rejections might be mainly due to noise in the data, with biases caused by self-reporting, errors through rounding, desirability response bias, and selection bias. Future victimological surveys should be designed explicitly to avoid these deficiencies in the data, to be able to clearly answer the question whether Violence against Women follows a power-law pattern. This finding would not only have statistical implications for the processing and presentation of the data, but also groundbreaking consequences for the general understanding of Violence against Women and policy modeling, as the skewed nature of the underlying distributions makes evident that Violence against Women is a highly disparate and unequal social problem. This opens new questions for interdisciplinary research regarding the interplay between environmental, experimental, and social factors on victimization.
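
    Testing whether a tail follows a power law is typically done in the style of Clauset et al. (2009): a maximum-likelihood exponent plus a Kolmogorov-Smirnov goodness-of-fit statistic. A minimal continuous-case sketch (the significance levels quoted above come from bootstrap resampling, which is omitted here):

```python
import numpy as np

def fit_power_law(x, xmin):
    """MLE exponent for a continuous power-law tail x >= xmin, plus the
    Kolmogorov-Smirnov distance between empirical and model CDFs."""
    tail = np.sort(np.asarray(x, dtype=float))
    tail = tail[tail >= xmin]
    alpha = 1.0 + tail.size / np.sum(np.log(tail / xmin))
    cdf_model = 1.0 - (tail / xmin) ** (1.0 - alpha)
    cdf_emp = np.arange(1, tail.size + 1) / tail.size
    return alpha, np.max(np.abs(cdf_emp - cdf_model))

rng = np.random.default_rng(3)
x = rng.pareto(1.5, 5000) + 1.0        # true density exponent: 2.5
alpha, ks = fit_power_law(x, xmin=1.0)
print(f"alpha = {alpha:.2f}, KS distance = {ks:.3f}")
```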

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oegetbil, O.

    After reviewing the existing results, we give an extensive analysis of the critical points of the potentials of the gauged N=2 Yang-Mills/Einstein supergravity theories coupled to tensor multiplets and hypermultiplets. Our analysis includes all the possible gaugings of all N=2 Maxwell-Einstein supergravity theories whose scalar manifolds are symmetric spaces. In general, the scalar potential gets contributions from R-symmetry gauging, tensor couplings, and hypercouplings. We show that the coupling of a hypermultiplet into a theory whose potential has a nonzero value at its critical point, and gauging a compact subgroup of the hyperscalar isometry group, will only rescale the value of the potential at the critical point by a positive factor, and therefore will not change the nature of an existing critical point. However, this is not the case for noncompact SO(1,1) gaugings. An SO(1,1) gauging of the hyperisometry will generally lead to de Sitter vacua, which is analogous to the ground states found by simultaneously gauging the SO(1,1) symmetry of the real scalar manifold with U(1)_R in the earlier literature. SO(m,1) gaugings with m>1, which give contributions to the scalar potential only in the magical Jordan family theories, on the other hand, do not lead to de Sitter vacua. Anti-de Sitter vacua are generically obtained when the U(1)_R symmetry is gauged. We also show that it is possible to embed certain generic Jordan family theories into the magical Jordan family preserving the nature of the ground states. However, the magical Jordan family theories have additional ground states which are not found in the generic Jordan family theories.

  13. Testing the Hydrological Coherence of High-Resolution Gridded Precipitation and Temperature Data Sets

    NASA Astrophysics Data System (ADS)

    Laiti, L.; Mallucci, S.; Piccolroaz, S.; Bellin, A.; Zardi, D.; Fiori, A.; Nikulin, G.; Majone, B.

    2018-03-01

    Assessing the accuracy of gridded climate data sets is highly relevant to climate change impact studies, since evaluation, bias correction, and statistical downscaling of climate models commonly use these products as reference. Among all impact studies, those addressing hydrological fluxes are the most affected by the errors and biases plaguing these data. This paper introduces a framework, coined the Hydrological Coherence Test (HyCoT), for assessing the coherence of gridded data sets with hydrological observations. HyCoT provides a framework for excluding meteorological forcing data sets not complying with observations, as a function of the particular goal at hand. The proposed methodology allows falsifying the hypothesis that a given data set is coherent with hydrological observations on the basis of the performance of hydrological modeling, measured by a metric selected by the modeler. HyCoT is demonstrated in the Adige catchment (southeastern Alps, Italy) for streamflow analysis, using a distributed hydrological model. The comparison covers the period 1989-2008 and includes five gridded daily meteorological data sets: E-OBS, MSWEP, MESAN, APGD, and ADIGE. The analysis highlights that APGD and ADIGE, the data sets with the highest effective resolution, display similar spatiotemporal precipitation patterns and produce the largest hydrological efficiency indices. Lower performances are observed for E-OBS, MESAN, and MSWEP, especially in small catchments. HyCoT reveals deficiencies in the representation of spatiotemporal patterns of gridded climate data sets, which cannot be corrected by simply rescaling the meteorological forcing fields, as often done in bias correction of climate model outputs. We recommend this framework for assessing the hydrological coherence of gridded data sets to be used in large-scale hydroclimatic studies.

  14. The Power Laws of Violence against Women: Rescaling Research and Policies

    PubMed Central

    Kappler, Karolin E.; Kaltenbrunner, Andreas

    2012-01-01

    Background: Violence against Women – despite its perpetuation over centuries and its omnipresence at all social levels – entered into social consciousness and the general agenda of Social Sciences only recently, mainly thanks to feminist research, campaigns, and general social awareness. The present article analyzes in a secondary analysis of German prevalence data on Violence against Women, whether the frequency and severity of Violence against Women can be described with power laws. Principal Findings: Although the investigated distributions all resemble power-law distributions, a rigorous statistical analysis accepts this hypothesis at a significance level of 0.1 only for 1 of 5 cases of the tested frequency distributions and with some restrictions for the severity of physical violence. Lowering the significance level to 0.01 leads to the acceptance of the power-law hypothesis in 2 of the 5 tested frequency distributions and as well for the severity of domestic violence. The rejections might be mainly due to the noise in the data, with biases caused by self-reporting, errors through rounding, desirability response bias, and selection bias. Conclusion: Future victimological surveys should be designed explicitly to avoid these deficiencies in the data to be able to clearly answer the question whether Violence against Women follows a power-law pattern. This finding would not only have statistical implications for the processing and presentation of the data, but also groundbreaking consequences on the general understanding of Violence against Women and policy modeling, as the skewed nature of the underlying distributions makes evident that Violence against Women is a highly disparate and unequal social problem. This opens new questions for interdisciplinary research, regarding the interplay between environmental, experimental, and social factors on victimization. PMID:22768348

  15. Unsightly urban menaces and the rescaling of residential segregation in the United States.

    PubMed

    Hanlon, James

    2011-01-01

    In this article, the author uses a slum clearance project in Lexington, Kentucky, as a lens through which to examine the spatial dynamics of racial residential segregation during the first half of the twentieth century. At the time, urban migration and upward socioeconomic mobility on the part of African Americans destabilized extant residential segregation patterns. Amid this instability, various spatial practices were employed in the interest of maintaining white social and economic supremacy. The author argues that such practices were indicative of a thoroughgoing reinvention of urban socio-spatial order that in turn precipitated the vastly expanded scale of residential segregation still found in U.S. cities today. Evidence of this reinvented ordering of urban space lies in the rendering of some long-standing African American neighborhoods as “out of place” within it and the use of slum clearance to remove the “menace” such neighborhoods posed to it.

  16. Strehl ratio: a tool for optimizing optical nulls and singularities.

    PubMed

    Hénault, François

    2015-07-01

    In this paper a set of radial and azimuthal phase functions that have a null Strehl ratio is reviewed, which is equivalent to generating a central extinction in the image plane of an optical system. The study is conducted in the framework of Fraunhofer scalar diffraction, and is oriented toward practical cases where optical nulls or singularities are produced by deformable mirrors or phase plates. The identified solutions reveal unexpected links with the zeros of type-J Bessel functions of integer order. They include linear azimuthal phase ramps giving birth to an optical vortex, azimuthally modulated phase functions, and circular phase gratings (CPGs). It is found in particular that the CPG radiometric efficiency could be significantly improved by the null Strehl ratio condition. Simple design rules for rescaling and combining the different phase functions are also defined. Finally, the described analytical solutions could also serve as starting points for an automated searching software tool.
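
    One case is easy to verify numerically: the on-axis Strehl ratio of a unit-amplitude pupil is S = |<exp(i*phi)>|² over the aperture, and a linear azimuthal phase ramp of integer charge m (an optical vortex) averages exp(i*m*theta) to zero. A minimal sketch under these assumptions (grid size and charges are arbitrary):

```python
import numpy as np

def strehl_ratio(phase, mask):
    """On-axis Strehl ratio of a unit-amplitude pupil with the given
    phase map: S = |<exp(i*phi)>|^2 over the clear aperture."""
    field = np.exp(1j * phase[mask])
    return np.abs(field.mean()) ** 2

# Circular pupil on a Cartesian grid.
n = 512
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
r, theta = np.hypot(x, y), np.arctan2(y, x)
mask = r <= 1.0

# Charge 0 gives S = 1; nonzero integer charges give a null Strehl ratio.
for m in (0, 1, 2):
    print(m, strehl_ratio(m * theta, mask))
```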

  17. Thermodynamic properties of non-conformal soft-sphere fluids with effective hard-sphere diameters.

    PubMed

    Rodríguez-López, Tonalli; del Río, Fernando

    2012-01-28

    In this work we study a set of soft-sphere systems characterised by a well-defined variation of their softness. These systems represent an extension of the repulsive Lennard-Jones potential widely used in statistical mechanics of fluids. This type of soft spheres is of interest because they represent quite accurately the effective intermolecular repulsion in fluid substances and also because they exhibit interesting properties. The thermodynamics of the soft-sphere fluids is obtained via an effective hard-sphere diameter approach that leads to a compact and accurate equation of state. The virial coefficients of soft spheres are shown to follow quite simple relationships that are incorporated into the equation of state. The approach followed exhibits the rescaling of the density that produces a unique equation for all systems and temperatures. The scaling is carried through to the level of the structure of the fluids.

  18. Memory effects in nanoparticle dynamics and transport

    NASA Astrophysics Data System (ADS)

    Sanghi, Tarun; Bhadauria, Ravi; Aluru, N. R.

    2016-10-01

    In this work, we use the generalized Langevin equation (GLE) to characterize and understand memory effects in nanoparticle dynamics and transport. Using the GLE formulation, we compute the memory function and investigate its scaling with the mass, shape, and size of the nanoparticle. It is observed that changing the mass of the nanoparticle leads to a rescaling of the memory function with the reduced mass of the system. Further, we show that for different mass nanoparticles it is the initial value of the memory function and not its relaxation time that determines the "memory" or "memoryless" dynamics. The size and the shape of the nanoparticle are found to influence both the functional-form and the initial value of the memory function. For a fixed mass nanoparticle, increasing its size enhances the memory effects. Using GLE simulations we also investigate and highlight the role of memory in nanoparticle dynamics and transport.

  19. Learning multiple variable-speed sequences in striatum via cortical tutoring.

    PubMed

    Murray, James M; Escola, G Sean

    2017-05-08

    Sparse, sequential patterns of neural activity have been observed in numerous brain areas during timekeeping and motor sequence tasks. Inspired by such observations, we construct a model of the striatum, an all-inhibitory circuit where sequential activity patterns are prominent, addressing the following key challenges: (i) obtaining control over temporal rescaling of the sequence speed, with the ability to generalize to new speeds; (ii) facilitating flexible expression of distinct sequences via selective activation, concatenation, and recycling of specific subsequences; and (iii) enabling the biologically plausible learning of sequences, consistent with the decoupling of learning and execution suggested by lesion studies showing that cortical circuits are necessary for learning, but that subcortical circuits are sufficient to drive learned behaviors. The same mechanisms that we describe can also be applied to circuits with both excitatory and inhibitory populations, and hence may underlie general features of sequential neural activity pattern generation in the brain.

  20. Science and Facebook: The same popularity law!

    PubMed

    Néda, Zoltán; Varga, Levente; Biró, Tamás S

    2017-01-01

    The distribution of scientific citations for publications selected with different rules (author, topic, institution, country, journal, etc…) collapses onto a single curve if one plots the citations relative to their mean value. We find that the distribution of "shares" for Facebook posts rescales in the same manner onto the very same curve as scientific citations. This finding suggests that citations are subject to the same growth mechanism as Facebook popularity measures, being influenced by a statistically similar social environment and selection mechanism. In a simple master-equation approach, the exponential growth of the number of publications and a preferential selection mechanism lead to a Tsallis-Pareto distribution, offering an excellent description of the observed statistics. Based on our model and on data derived from PubMed, we predict that, according to the present trend, the average number of citations per scientific publication relaxes exponentially to about 4.
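
    The collapse property of the Tsallis-Pareto (Lomax-type) form can be checked directly: rescaling by the mean <x> = b/(a-1) (finite for a > 1) removes the scale parameter b. A minimal sketch with arbitrary example parameters, not fitted citation or Facebook values:

```python
import numpy as np

def tsallis_pareto_pdf(x, a, b):
    """Tsallis-Pareto density f(x) = (a/b) * (1 + x/b)^(-a-1)."""
    return (a / b) * (1.0 + x / b) ** (-a - 1.0)

# Two parameter sets with the same shape a but different scales b give
# identical rescaled densities <x> * f(x * <x>): the curves collapse.
x = np.linspace(0.0, 20.0, 5)
for a, b in [(3.0, 2.0), (3.0, 8.0)]:
    mean = b / (a - 1.0)
    print(np.round(mean * tsallis_pareto_pdf(x * mean, a, b), 4))
```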

  1. A predictive universal fractional-order differential model of wall-turbulence

    NASA Astrophysics Data System (ADS)

    Song, Fangying; Karniadakis, George

    2017-11-01

    Fractional calculus has been around for centuries, but its use in computational science and engineering has emerged only recently. Here we develop a relatively simple one-dimensional model for fully-developed wall-turbulence that involves a fractional operator with variable fractional order. We use available DNS databases to ``learn'' the function that describes the fractional order, which has a high value at the wall and decays monotonically to an asymptotic value at the centerline. We show that this function is universal upon re-scaling and hence it can be used to predict the mean velocity profile at all Reynolds numbers. We demonstrate the accuracy of our universal fractional model for channel flow at high Reynolds number as well as for pipe flow, and we obtain good agreement with the Princeton super-pipe data up to Reynolds numbers of 35,000,000. This work was supported by an ARO MURI, Number: W911NF-15-1-0562.

  2. Long-time asymptotics of the Navier-Stokes and vorticity equations on R³.

    PubMed

    Gallay, Thierry; Wayne, C Eugene

    2002-10-15

    We use the vorticity formulation to study the long-time behaviour of solutions to the Navier-Stokes equation on R^3. We assume that the initial vorticity is small and decays algebraically at infinity. After introducing self-similar variables, we compute the long-time asymptotics of the rescaled vorticity equation up to second order. Each term in the asymptotics is a self-similar divergence-free vector field with Gaussian decay at infinity, and the coefficients in the expansion can be determined by solving a finite system of ordinary differential equations. As a consequence of our results, we are able to characterize the set of solutions for which the velocity field satisfies $\|u(\cdot,t)\|_{L^2} = o(t^{-5/4})$ as $t \to +\infty$. In particular, we show that these solutions lie on a smooth invariant submanifold of codimension 11 in our function space.

  3. Large-displacement statistics of the rightmost particle of the one-dimensional branching Brownian motion.

    PubMed

    Derrida, Bernard; Meerson, Baruch; Sasorov, Pavel V

    2016-04-01

    Consider a one-dimensional branching Brownian motion and rescale the coordinate and time so that the rates of branching and diffusion are both equal to 1. If X_{1}(t) is the position of the rightmost particle of the branching Brownian motion at time t, the empirical velocity c of this rightmost particle is defined as c=X_{1}(t)/t. Using the Fisher-Kolmogorov-Petrovsky-Piscounov equation, we evaluate the probability distribution P(c,t) of this empirical velocity c in the long-time t limit for c>2. It is already known that, for a single seed particle, P(c,t)∼exp[-(c^{2}/4-1)t] up to a prefactor that can depend on c and t. Here we show how to determine this prefactor. The result can be easily generalized to the case of multiple seed particles and to branching random walks associated with other traveling-wave equations.

  4. Implicit Large-Eddy Simulations of Zero-Pressure Gradient, Turbulent Boundary Layer

    NASA Technical Reports Server (NTRS)

    Sekhar, Susheel; Mansour, Nagi N.

    2015-01-01

    A set of direct simulations of zero-pressure-gradient, turbulent boundary layer flows is conducted using various span widths (62-630 wall units) to document their influence on the generated turbulence. The FDL3DI code, which solves the compressible Navier-Stokes equations using a high-order compact-difference scheme and filter, with the standard recycling/rescaling method of turbulence generation, is used. Results are analyzed at two different Re values (500 and 1,400) and compared with spectral DNS data. They show that a minimum span width is required for the mere initiation of numerical turbulence. Narrower domains (<100 wall units) result in relaminarization. Wider spans (>600 wall units) are required for the turbulent statistics to match reference DNS. The upper-wall boundary condition for this setup spawns marginal deviations in the mean velocity and Reynolds stress profiles, particularly in the buffer region.

  5. Effects of demographic and health variables on Rasch scaled cognitive scores.

    PubMed

    Zelinski, Elizabeth M; Gilewski, Michael J

    2003-08-01

    To determine whether demographic and health variables interact to predict cognitive scores in Asset and Health Dynamics of the Oldest-Old (AHEAD), a representative survey of older Americans, as a test of the developmental discontinuity hypothesis. Rasch modeling procedures were used to rescale cognitive measures into interval scores, equating scales across measures, making it possible to compare predictor effects directly. Rasch scaling also reduces the likelihood of obtaining spurious interactions. Tasks included combined immediate and delayed recall, the Telephone Interview for Cognitive Status (TICS), Series 7, and an overall cognitive score. Demographic variables most strongly predicted performance on all scores, with health variables having smaller effects. Age interacted with both demographic and health variables, but patterns of effects varied. Demographic variables have strong effects on cognition. The developmental discontinuity hypothesis that health variables have stronger effects than demographic ones on cognition in older adults was not supported.

  6. Noncommutative mapping from the symplectic formalism

    NASA Astrophysics Data System (ADS)

    De Andrade, M. A.; Neves, C.

    2018-01-01

    Bopp's shifts are generalized through a symplectic formalism. A special procedure, akin to "diagonalization," that drives the completely deformed symplectic matrix to the standard symplectic form is found, as suggested by Faddeev-Jackiw. Consequently, the corresponding transformation matrix guides the mapping from commutative to noncommutative (NC) phase-space coordinates. Bopp's shifts may be directly generalized from this mapping. In this context, all the NC and scale parameters introduced into the brackets are lifted to the Hamiltonian. Well-known results, obtained using the ⋆-product, are reproduced without assuming that the NC parameters are small (≪1). Besides, it is shown that different choices of NC algebra among the symplectic variables generate distinct dynamical systems, which may not even be connected with each other, and that some of them can preserve, break, or restore the symmetry of the system. Further, we also discuss charge and mass rescaling in a simple model.

  7. Universal rescaling of flow curves for yield-stress fluids close to jamming

    NASA Astrophysics Data System (ADS)

    Dinkgreve, M.; Paredes, J.; Michels, M. A. J.; Bonn, D.

    2015-07-01

    The experimental flow curves of four different yield-stress fluids with different interparticle interactions are studied near the jamming concentration. By appropriate scaling with the distance to jamming all rheology data can be collapsed onto master curves below and above jamming that meet in the shear-thinning regime and satisfy the Herschel-Bulkley and Cross equations, respectively. In spite of differing interactions in the different systems, master curves characterized by universal scaling exponents are found for the four systems. A two-state microscopic theory of heterogeneous dynamics is presented to rationalize the observed transition from Herschel-Bulkley to Cross behavior and to connect the rheological exponents to microscopic exponents for the divergence of the length and time scales of the heterogeneous dynamics. The experimental data and the microscopic theory are compared with much of the available literature data for yield-stress systems.
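
    As an illustration of this kind of collapse, the following Python sketch (an assumption for exposition, not the paper's analysis; the jamming fraction phi_c and the exponents Delta and Gamma are placeholder values) rescales synthetic Herschel-Bulkley flow curves by powers of the distance to jamming.

        # A minimal sketch (assumed, not the paper's analysis code): collapsing
        # Herschel-Bulkley flow curves onto a master curve by rescaling stress and
        # shear rate with powers of the distance to jamming, dphi = phi - phi_c.
        import numpy as np
        import matplotlib.pyplot as plt

        PHI_C = 0.64              # illustrative jamming volume fraction
        DELTA, GAMMA = 2.0, 4.0   # illustrative scaling exponents

        def herschel_bulkley(rate, dphi, n=0.5):
            """Synthetic flow curve above jamming: sigma = sigma_y + K*rate**n,
            with yield stress and consistency tied to the distance to jamming."""
            sigma_y = dphi ** DELTA
            K = dphi ** (DELTA - n * GAMMA)
            return sigma_y + K * rate ** n

        rate = np.logspace(-4, 2, 60)
        for phi in (0.66, 0.70, 0.74):
            dphi = phi - PHI_C
            sigma = herschel_bulkley(rate, dphi)
            # Rescale both axes by the distance to jamming; the curves collapse.
            plt.loglog(rate / dphi ** GAMMA, sigma / dphi ** DELTA, label=f"phi={phi}")
        plt.xlabel("rate / |dphi|^Gamma"); plt.ylabel("stress / |dphi|^Delta")
        plt.legend(); plt.show()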

  8. Catenaries in Drag

    NASA Astrophysics Data System (ADS)

    Chakrabarti, Brato; Hanna, James

    2014-11-01

    Dynamical equilibria of towed cables and sedimenting filaments have been the targets of much numerical work; here, we provide analytical expressions for the configurations of a translating and axially moving string subjected to a uniform body force and local, linear, anisotropic drag forces. Generically, these configurations comprise a five-parameter family of planar shapes determined by the ratio of tangential (axial) and normal drag coefficients, the angle between the translational velocity and the body force, the relative magnitudes of translational and axial drag forces with respect to the body force, and a scaling parameter. This five-parameter family of shapes is, in fact, a degenerate six-parameter family of equilibria in which inertial forces rescale the tension in the string without affecting its shape. Each configuration is represented by a first order dynamical system for the tangential angle of the body. Limiting cases include the dynamic catenaries with or without drag, and purely sedimenting or towed strings.

  9. Gravitational Instabilities in the Disks of Massive Protostars as an Explanation for Linear Distributions of Methanol Masers

    NASA Astrophysics Data System (ADS)

    Durisen, Richard H.; Mejia, Annie C.; Pickett, Brian K.; Hartquist, Thomas W.

    2001-12-01

    Evidence suggests that some masers associated with massive protostars may originate in the outer regions of large disks, at radii of hundreds to thousands of AU from the central mass. This is particularly true for methanol (CH3OH), for which linear distributions of masers are found with disklike kinematics. In three-dimensional hydrodynamics simulations we have made to study the effects of gravitational instabilities in the outer parts of disks around young low-mass stars, the nonlinear development of the instabilities leads to a complex of intersecting spiral shocks, clumps, and arclets within the disk and to significant time-dependent, nonaxisymmetric distortions of the disk surface. A rescaling of our disk simulations to the case of a massive protostar shows that conditions in the disturbed outer disk seem conducive to the appearance of masers if it is viewed edge-on.

  10. Nanoscale temperature mapping in operating microelectronic devices

    DOE PAGES

    Mecklenburg, Matthew; Hubbard, William A.; White, E. R.; ...

    2015-02-05

    We report that modern microelectronic devices have nanoscale features that dissipate power nonuniformly, but fundamental physical limits frustrate efforts to detect the resulting temperature gradients. Contact thermometers disturb the temperature of a small system, while radiation thermometers struggle to beat the diffraction limit. Exploiting the same physics as Fahrenheit’s glass-bulb thermometer, we mapped the thermal expansion of Joule-heated, 80-nanometer-thick aluminum wires by precisely measuring changes in density. With a scanning transmission electron microscope (STEM) and electron energy loss spectroscopy (EELS), we quantified the local density via the energy of aluminum’s bulk plasmon. Rescaling density to temperature yields maps with a statistical precision of 3 kelvin hertz^{-1/2}, an accuracy of 10%, and nanometer-scale resolution. Lastly, many common metals and semiconductors have sufficiently sharp plasmon resonances to serve as their own thermometers.
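
    A minimal sketch of the density-to-temperature rescaling step, under textbook assumptions (free-electron plasmon relation and the volumetric expansion coefficient of aluminum; the numbers and function names are illustrative, not the authors' pipeline):

        # A minimal sketch (assumptions, not the authors' pipeline): convert a local
        # shift in aluminum's bulk plasmon energy into a temperature change via the
        # free-electron relation (E_p proportional to sqrt(density)) and volumetric
        # thermal expansion.
        ALPHA_V = 6.9e-5   # assumed volumetric expansion coefficient of Al, 1/K
        E_P0 = 15.0        # assumed room-temperature bulk plasmon energy of Al, eV

        def temperature_rise(e_p):
            """E_p ~ sqrt(n) implies n/n0 = (e_p/E_P0)**2; expansion gives
            n/n0 = 1 - ALPHA_V*dT for small dT, so dT = (1 - (e_p/E_P0)**2)/ALPHA_V."""
            return (1.0 - (e_p / E_P0) ** 2) / ALPHA_V

        print(temperature_rise(14.99))   # ~19 K for a ~0.07% plasmon energy shift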

  11. MATHEMATICAL SCIENCE: A Homogenization Driven Multiscale Approach for Characterizing the Dynamics of Granular Media and its Implementation on Massively Parallel Heterogeneous Hardware Architectures

    DTIC Science & Technology

    2016-05-31

    In addition to being used for off-road mobility studies, Chrono is being used by UC-San Diego for the motion of molecules as well as by NASA ... gov. labs. This effort has continued as a series of twice-a-year meetings with a continually increasing number of participants. We are well... NASA, Japan Aerospace Exploration Agency, Caterpillar, P&H Mining, MSC.Software, Simertis Gmbh, BAE Systems, Eaton Corporation, Rescale

  12. Neonatal jaundice and human milk.

    PubMed

    Soldi, Antonella; Tonetto, Paola; Varalda, Alessia; Bertino, Enrico

    2011-10-01

    Breastfeeding is linked both to a greater jaundice frequency and intensity in the first postnatal days ("breastfeeding jaundice") and to visible jaundice persisting beyond the first two weeks of life ("breast milk jaundice"), but the appearance of skin jaundice is not a reason for interrupting breastfeeding, which can and should continue without any interruption in most cases. Numerous contributions to the literature have rescaled the direct role of breast milk, both in early jaundice and in the more severe cases of late jaundice. In fact, the reviewed guidelines for detection and management of hyperbilirubinemia underline how prevention of badly managed breastfeeding and early support for the mother-child pair are effective prevention measures against severe early-onset jaundice; furthermore, breastfeeding interruption is no longer recommended as a diagnostic procedure to identify breast milk jaundice, because of its low specificity and the risk of overlooking a potentially dangerous disease.

  13. Testing an H-mode Pedestal Model Using DIII-D Data

    NASA Astrophysics Data System (ADS)

    Kritz, A. H.; Onjun, T.; Bateman, G.; Guzdar, P. N.; Mahajan, S. M.; Osborne, T.

    2004-11-01

    Tests against experimental data are carried out for a model of the pedestal at the edge of H-mode plasmas based on double-Beltrami solutions of the two-fluid Hall-MHD equations for the interaction of the magnetic and velocity fields.(S.M. Mahajan and Z. Yoshida, PRL 81 (1998) 4863, Phys. Plasmas 7 (2000) 635.) The width and height of the pedestal predicted by the model are tested against experimental data from the DIII-D tokamak. The model for the pedestal width, which has a particularly simple form, namely, inversely proportional to the square root of the density, does not appear to capture the parameter dependence of the experimental data. When the model for the pedestal temperature is rescaled to optimize agreement with data, the RMS error is found to be comparable with the RMS error found using other pedestal models.(T. Onjun, G. Bateman, A.H. Kritz, G. Hammett, Phys. Plasmas 9 (2002) 5018.)

  14. Science and Facebook: The same popularity law!

    PubMed Central

    Varga, Levente; Biró, Tamás S.

    2017-01-01

    The distribution of scientific citations for publications selected with different rules (author, topic, institution, country, journal, etc.) collapses onto a single curve if one plots the citations relative to their mean value. We find that the distribution of “shares” for Facebook posts rescales in the same manner onto the very same curve as scientific citations. This finding suggests that citations are subject to the same growth mechanism as Facebook popularity measures, being influenced by a statistically similar social environment and selection mechanism. In a simple master-equation approach, the exponential growth of the number of publications combined with a preferential selection mechanism leads to a Tsallis-Pareto distribution, offering an excellent description of the observed statistics. Based on our model and on data derived from PubMed, we predict that, if the present trend continues, the average number of citations per scientific publication will relax exponentially to about 4. PMID:28678796

  15. Parallel Solver for H(div) Problems Using Hybridization and AMG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Chak S.; Vassilevski, Panayot S.

    2016-01-15

    In this paper, a scalable parallel solver is proposed for H(div) problems discretized by arbitrary order finite elements on general unstructured meshes. The solver is based on hybridization and algebraic multigrid (AMG). Unlike some previously studied H(div) solvers, the hybridization solver does not require discrete curl and gradient operators as additional input from the user. Instead, only some element information is needed in the construction of the solver. The hybridization results in an H1-equivalent symmetric positive definite system, which is then rescaled and solved by AMG solvers designed for H1 problems. Weak and strong scaling of the method are examined through several numerical tests. Our numerical results show that the proposed solver provides a promising alternative to ADS, a state-of-the-art solver [12], for H(div) problems. In fact, it outperforms ADS for higher order elements.
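
    The rescaling step can be illustrated generically. The following Python sketch (an assumption for exposition; the paper's actual rescaling may differ) applies symmetric diagonal scaling to a symmetric positive definite model problem before an iterative solve, the same pattern one would use in front of an AMG preconditioner:

        # A minimal sketch (an assumption for exposition; the paper's actual
        # rescaling may differ): symmetric diagonal scaling D^{-1/2} A D^{-1/2}
        # of an SPD model problem before an iterative solve.
        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import cg

        n = 100
        rng = np.random.default_rng(1)
        coeff = np.exp(4.0 * rng.random(n + 1))          # rough, varying coefficients
        main = coeff[:-1] + coeff[1:]                    # SPD tridiagonal Laplacian
        A = sp.diags([-coeff[1:-1], main, -coeff[1:-1]], [-1, 0, 1], format="csr")
        b = np.ones(n)

        d = A.diagonal() ** -0.5
        D = sp.diags(d)
        A_scaled = D @ A @ D                             # unit-diagonal SPD system
        y, info = cg(A_scaled, d * b)                    # stand-in for an AMG solve
        x = d * y                                        # undo the rescaling
        print(info, np.linalg.norm(A @ x - b))           # 0 and a small residual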

  16. Nature of self-diffusion in two-dimensional fluids

    NASA Astrophysics Data System (ADS)

    Choi, Bongsik; Han, Kyeong Hwan; Kim, Changho; Talkner, Peter; Kidera, Akinori; Lee, Eok Kyun

    2017-12-01

    Self-diffusion in a two-dimensional simple fluid is investigated by both analytical and numerical means. We investigate the anomalous aspects of self-diffusion in two-dimensional fluids with regard to the mean square displacement, the time-dependent diffusion coefficient, and the velocity autocorrelation function (VACF), using a consistency equation relating these quantities. We numerically confirm the consistency equation by extensive molecular dynamics simulations for finite systems, corroborate earlier results indicating that the kinematic viscosity approaches a finite, non-vanishing value in the thermodynamic limit, and establish the finite-size behavior of the diffusion coefficient. We obtain the exact solution of the consistency equation in the thermodynamic limit and use this solution to determine the large-time asymptotics of the mean square displacement, the diffusion coefficient, and the VACF. An asymptotic decay law of the VACF resembles the previously known self-consistent form, $1/(t\sqrt{\ln t})$, however with a rescaled time.
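
    For orientation, a short worked step (a sketch under the quoted asymptotic form; prefactors and dimension-dependent constants are suppressed): if the VACF decays as $C(t) \simeq A/(t\sqrt{\ln t})$ at large times then, substituting $u = \ln s$,

        $$ D(t) = \int^{t} C(s)\,ds \;\simeq\; 2A\sqrt{\ln t}, \qquad \frac{d}{dt}\,\mathrm{MSD}(t) \;\propto\; D(t) \;\Rightarrow\; \mathrm{MSD}(t) \;\sim\; t\sqrt{\ln t}, $$

    i.e., the time-dependent diffusion coefficient diverges logarithmically and the motion is marginally superdiffusive, up to constant prefactors.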

  17. Using Data Assimilation Diagnostics to Assess the SMAP Level-4 Soil Moisture Product

    NASA Technical Reports Server (NTRS)

    Reichle, Rolf; Liu, Qing; De Lannoy, Gabrielle; Crow, Wade; Kimball, John; Koster, Randy; Ardizzone, Joe

    2018-01-01

    The Soil Moisture Active Passive (SMAP) mission Level-4 Soil Moisture (L4_SM) product provides 3-hourly, 9-km resolution, global estimates of surface (0-5 cm) and root-zone (0-100 cm) soil moisture and related land surface variables from 31 March 2015 to present with approx. 2.5-day latency. The ensemble-based L4_SM algorithm assimilates SMAP brightness temperature (Tb) observations into the Catchment land surface model. This study describes the spatially distributed L4_SM analysis and assesses the observation-minus-forecast (O-F) Tb residuals and the soil moisture and temperature analysis increments. Owing to the climatological rescaling of the Tb observations prior to assimilation, the analysis is essentially unbiased, with global mean values of approx. 0.37 K for the O-F Tb residuals and practically zero for the soil moisture and temperature increments. There are, however, modest regional (absolute) biases in the O-F residuals (under approx. 3 K), the soil moisture increments (under approx. 0.01 cu m/cu m), and the surface soil temperature increments (under approx. 1 K). Typical instantaneous values are approx. 6 K for O-F residuals, approx. 0.01 (approx. 0.003) cu m/cu m for surface (root-zone) soil moisture increments, and approx. 0.6 K for surface soil temperature increments. The O-F diagnostics indicate that the actual errors in the system are overestimated in deserts and densely vegetated regions and underestimated in agricultural regions and transition zones between dry and wet climates. The O-F auto-correlations suggest that the SMAP observations are used efficiently in western North America, the Sahel, and Australia, but not in many forested regions and the high northern latitudes. A case study in Australia demonstrates that assimilating SMAP observations successfully corrects short-term errors in the L4_SM rainfall forcing.
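
    A minimal sketch of what "climatological rescaling" can look like (an assumption for illustration; the operational L4_SM procedure is more elaborate than this two-moment matching):

        # A minimal sketch (an assumption for illustration; the L4_SM system's
        # actual procedure is more elaborate): rescale observations so that their
        # long-term mean and variance match the model climatology, removing bias
        # before assimilation.
        import numpy as np

        def climatological_rescale(obs, obs_clim, model_clim):
            """Map observations onto the model's climatology (two-moment matching)."""
            return model_clim.mean() + (obs - obs_clim.mean()) * (
                model_clim.std() / obs_clim.std())

        rng = np.random.default_rng(2)
        model_tb = 250.0 + 5.0 * rng.standard_normal(5000)   # model Tb climatology, K
        obs_tb = 254.0 + 7.0 * rng.standard_normal(5000)     # biased observed Tb, K
        scaled = climatological_rescale(obs_tb, obs_tb, model_tb)
        print(scaled.mean() - model_tb.mean())   # ~0: the O-F residuals lose their bias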

  18. Global Assessment of the SMAP Level-4 Soil Moisture Product Using Assimilation Diagnostics

    NASA Technical Reports Server (NTRS)

    Reichle, Rolf; Liu, Qing; De Lannoy, Gabrielle; Crow, Wade; Kimball, John; Koster, Randy; Ardizzone, Joe

    2018-01-01

    The Soil Moisture Active Passive (SMAP) mission Level-4 Soil Moisture (L4_SM) product provides 3-hourly, 9-km resolution, global estimates of surface (0-5 cm) and root-zone (0-100 cm) soil moisture and related land surface variables from 31 March 2015 to present with approx. 2.5-day latency. The ensemble-based L4_SM algorithm assimilates SMAP brightness temperature (Tb) observations into the Catchment land surface model. This study describes the spatially distributed L4_SM analysis and assesses the observation-minus-forecast (O-F) Tb residuals and the soil moisture and temperature analysis increments. Owing to the climatological rescaling of the Tb observations prior to assimilation, the analysis is essentially unbiased, with global mean values of approx. 0.37 K for the O-F Tb residuals and practically zero for the soil moisture and temperature increments. There are, however, modest regional (absolute) biases in the O-F residuals (under approx. 3 K), the soil moisture increments (under approx. 0.01 cu m/cu m), and the surface soil temperature increments (under approx. 1 K). Typical instantaneous values are approx. 6 K for O-F residuals, approx. 0.01 (approx. 0.003) cu m/cu m for surface (root-zone) soil moisture increments, and approx. 0.6 K for surface soil temperature increments. The O-F diagnostics indicate that the actual errors in the system are overestimated in deserts and densely vegetated regions and underestimated in agricultural regions and transition zones between dry and wet climates. The O-F auto-correlations suggest that the SMAP observations are used efficiently in western North America, the Sahel, and Australia, but not in many forested regions and the high northern latitudes. A case study in Australia demonstrates that assimilating SMAP observations successfully corrects short-term errors in the L4_SM rainfall forcing.

  19. A New Species of Haplophyllum A. Juss. (Rutaceae) from the Iberian Peninsula: Evidence from Morphological, Karyological and Molecular Analyses

    PubMed Central

    NAVARRO, F. B.; SUÁREZ-SANTIAGO, V. N.; BLANCA, G.

    2004-01-01

    • Background and Aims The discovery of a new species, Haplophyllum bastetanum F.B. Navarro, V.N. Suárez-Santiago & Blanca sp. nov., in the south-east of Spain has prompted the comparative study of species of the Iberian Peninsula, and others related, through morphological, cytogenetic, molecular, distributional and ecological characterization. • Methods The morphological study involved a quantitative analysis of the species present in the Iberian Peninsula and a comparative analysis of the morphological characteristics between H. bastetanum and other related species. Mitotic analyses were made with root meristems taken from germinating seeds. Phylogenetic analyses of the internal transcribed spacer sequences of nuclear ribosomal DNA were performed using neighbour-joining (NJ) and maximum-parsimony methods. • Key Results Haplophyllum bastetanum is a diploid species (2n = 18) distinguished primarily for its non-trifoliate glabrous leaves, lanceolate sepals, dark-green petals with a dorsal band of hairs, and a highly hairy ovary with round-apex locules. The other two Iberian species (H. linifolium and H. rosmarinifolium) are tetraploid (2n = 36) and have yellow petals. Both phylogenetic methods generated a well-supported clade grouping H. linifolium with H. rosmarinifolium. In the NJ tree, the H. linifolium–H. rosmarinifolium clade is a sister group to H. bastetanum, while in the parsimony analysis this occurred only when the gaps were coded as a fifth base and the characters were reweighted according to the rescaled consistency index. This latter group is supported by the sequence divergence among taxa. • Conclusions The phylogenies established from DNA sequences together with morphological and cytogenetic analyses support the separation of H. bastetanum as a new species. The results suggest that the change in the number of chromosomes may be the key mechanism of speciation of the genus Haplophyllum in the Iberian Peninsula. An evolutionary scheme for them is propounded. PMID:15306560

  20. Little genetic variability in resilience among cattle exists for a range of performance traits across herds in Ireland differing in Fasciola hepatica prevalence.

    PubMed

    Twomey, Alan J; Graham, David A; Doherty, Michael L; Blom, Astrid; Berry, Donagh P

    2018-06-04

    It is anticipated that in the future, livestock will be exposed to a greater risk of infection from parasitic diseases. Therefore, future breeding strategies for livestock, which are generally long-term strategies for change, should target animals adaptable to environments with a high parasitic load. Covariance components were estimated in the present study for a selection of dairy and beef performance traits over herd-years differing in Fasciola hepatica load, using random regression sire models. Herd-year prevalence of F. hepatica was determined using F. hepatica-damaged liver phenotypes recorded in abattoirs nationally. The data analyzed consisted of up to 83,821 lactation records from dairy cows for a range of milk production and fertility traits, as well as 105,054 young animals with carcass-related information obtained at slaughter. Reaction norms for individual sires were derived from the random regression coefficients. The heritability and additive genetic standard deviations for all traits analyzed remained relatively constant as the herd-year F. hepatica prevalence gradient increased up to a prevalence level of 0.7; although there was a large increase in heritability and additive genetic standard deviation for milk and fertility traits at observed F. hepatica prevalence levels >0.7, only 5% of the data existed at herd-year prevalence levels >0.7. Very little rescaling, therefore, exists across differing herd-year F. hepatica prevalence levels. Within-trait genetic correlations among the performance traits across different herd-year F. hepatica prevalence levels were less than unity for all traits. Nevertheless, within-trait genetic correlations for milk production and carcass traits were all >0.8 for F. hepatica prevalence levels between 0.2 and 0.8. The lowest estimates of within-trait genetic correlations for the different fertility traits ranged from -0.03 (SE = 1.09) for age at first calving to 0.54 (SE = 0.22) for the calving-to-first-service interval. Therefore, there was reranking of sires for fertility traits across different F. hepatica prevalence levels. In conclusion, there was little or no genetic variability in sensitivity to F. hepatica prevalence levels among cattle for milk production and carcass traits, but some genetic variability in sensitivity among dairy cows did exist for fertility traits measured across herds differing in F. hepatica prevalence.

  1. Efficient Fourier-based algorithms for time-periodic unsteady problems

    NASA Astrophysics Data System (ADS)

    Gopinath, Arathi Kamath

    2007-12-01

    This dissertation proposes two algorithms for the simulation of time-periodic unsteady problems via the solution of the Unsteady Reynolds-Averaged Navier-Stokes (URANS) equations. These algorithms use a Fourier representation in time and hence solve for the periodic state directly, without resolving transients (which consume most of the resources in a time-accurate scheme). In contrast to conventional Fourier-based techniques, which solve the governing equations in frequency space, the new algorithms perform all the calculations in the time domain and hence require minimal modifications to an existing solver. The complete space-time solution is obtained by iterating in a fifth pseudo-time dimension. Various time-periodic problems, such as helicopter rotors, wind turbines, turbomachinery, and flapping wings, can be simulated using the Time Spectral method. The algorithm is first validated using pitching airfoil/wing test cases. The method is further extended to turbomachinery problems, and computational results are verified by comparison with a time-accurate calculation. The technique can be very memory intensive for large problems, since the solution is computed (and hence stored) simultaneously at all time levels. Often, the blade counts of a turbomachine are rescaled such that a periodic fraction of the annulus can be solved. This approximation enables the solution to be obtained at a fraction of the cost of a full-scale time-accurate solution. For a viscous computation over a three-dimensional single-stage rescaled compressor, an order-of-magnitude savings is achieved. The second algorithm, the reduced-order Harmonic Balance method, is applicable only to turbomachinery flows and offers even larger computational savings than the Time Spectral method. It simulates the true geometry of the turbomachine using only one blade passage per blade row as the computational domain. In each blade row of the turbomachine, only the dominant frequencies are resolved, namely combinations of the neighboring rows' blade-passing frequencies. An appropriate set of frequencies can be chosen by the analyst/designer based on a trade-off between accuracy and the computational resources available. A cost comparison with a time-accurate computation achieved an order-of-magnitude savings for an Euler calculation on a two-dimensional multi-stage compressor, and two orders of magnitude for a RANS calculation on a three-dimensional single-stage compressor, with comparable accuracy.
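
    The core ingredient of the Time Spectral method is a dense differentiation matrix coupling all stored time levels. A minimal Python sketch (an illustration, not the dissertation's solver; the formula is the standard periodic spectral-derivative operator for an odd number of time instances):

        # A minimal sketch (assumed illustration, not the dissertation's solver):
        # the time-spectral differentiation matrix for a periodic signal sampled
        # at an odd number N of equally spaced instants over one period T. It is
        # exact for band-limited periodic signals.
        import numpy as np

        def time_spectral_matrix(N, T):
            """D[j,k] = (pi/T)*(-1)**(j-k)/sin(pi*(j-k)/N) for j != k (N odd)."""
            assert N % 2 == 1, "use an odd number of time instances"
            D = np.zeros((N, N))
            for j in range(N):
                for k in range(N):
                    if j != k:
                        D[j, k] = (np.pi / T) * (-1.0) ** (j - k) / np.sin(
                            np.pi * (j - k) / N)
            return D

        T = 2.0 * np.pi
        N = 7
        t = np.arange(N) * T / N
        D = time_spectral_matrix(N, T)
        # Sanity check: differentiate sin(2t) exactly at the stored time levels.
        print(np.max(np.abs(D @ np.sin(2 * t) - 2 * np.cos(2 * t))))  # ~1e-14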

  2. Suppressed Far-UV Stellar Activity and Low Planetary Mass Loss in the WASP-18 System

    NASA Astrophysics Data System (ADS)

    Fossati, L.; Koskinen, T.; France, K.; Cubillos, P. E.; Haswell, C. A.; Lanza, A. F.; Pillitteri, I.

    2018-03-01

    WASP-18 hosts a massive, very close-in Jupiter-like planet. Despite its young age (<1 Gyr), the star presents an anomalously low stellar activity level: the measured $\log R'_{\rm HK}$ activity parameter lies slightly below the basal level; there is no significant time variability in the $\log R'_{\rm HK}$ value; and there is no detection of the star in the X-rays. We present results of far-UV observations of WASP-18, obtained with COS on board the Hubble Space Telescope, aimed at explaining this anomaly. From the star's spectral energy distribution, we infer the extinction (E(B-V) ≈ 0.01 mag) and then the interstellar medium (ISM) column density for a number of ions, concluding that ISM absorption is not the origin of the anomaly. We measure the flux of the four stellar emission features detected in the COS spectrum (C II, C III, C IV, Si IV). Comparing the C II/C IV flux ratio measured for WASP-18 with that derived from spectra of nearby stars with known age, we see that the far-UV spectrum of WASP-18 resembles that of old (>5 Gyr), inactive stars, in stark contrast with its young age. We conclude that WASP-18 has an intrinsically low activity level, possibly caused by star-planet tidal interaction, as suggested by previous studies. Rescaling the solar irradiance reference spectrum to match the flux of the Si IV line yields an XUV integrated flux at the planet orbit of 10.2 erg s^{-1} cm^{-2}. We employ the rescaled XUV solar fluxes in models of the planetary upper atmosphere, deriving an extremely low thermal mass-loss rate of 10^{-20} M_J Gyr^{-1}. For such high-mass planets, thermal escape is not energy limited, but driven by Jeans escape. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained from MAST at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with program #13859. Based on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID 092.D-0587.

  3. Analysis of the seismicity preceding large earthquakes

    NASA Astrophysics Data System (ADS)

    Stallone, Angela; Marzocchi, Warner

    2017-04-01

    The most common earthquake forecasting models assume that the magnitude of the next earthquake is independent of the past. This feature is probably one of the most severe limitations on the capability to forecast large earthquakes. In this work, we investigate this specific aspect empirically, exploring whether variations in seismicity in the space-time-magnitude domain encode some information on the size of future earthquakes. For this purpose, and to verify the stability of the findings, we consider seismic catalogs covering quite different space-time-magnitude windows, such as the Alto Tiberina Near Fault Observatory (TABOO) catalog and the California and Japanese seismic catalogs. Our method is inspired by the statistical methodology proposed by Baiesi & Paczuski (2004) and elaborated by Zaliapin et al. (2008) to distinguish between triggered and background earthquakes, based on a pairwise nearest-neighbor metric defined by properly rescaled temporal and spatial distances. We generalize the method to a metric based on the k-nearest neighbors, which allows us to consider the overall space-time-magnitude distribution of the k earthquakes that are the strongly correlated ancestors of a target event. Finally, we analyze the statistical properties of the clusters composed of the target event and its k-nearest neighbors. In essence, the main goal of this study is to verify whether different classes of target-event magnitudes are characterized by distinctive "k-foreshock" distributions. The final step is to show how the findings of this work may (or may not) improve the skill of existing earthquake forecasting models.
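
    A minimal sketch of the underlying pairwise metric (the b-value, fractal dimension, and toy catalog below are illustrative assumptions):

        # A minimal sketch of the pairwise nearest-neighbor metric of Baiesi &
        # Paczuski (2004) / Zaliapin et al. (2008); parameter values and the toy
        # catalog are illustrative assumptions.
        import numpy as np

        B_VALUE, FRACTAL_DIM = 1.0, 1.6   # assumed Gutenberg-Richter b and d_f

        def nearest_ancestors(times, xs, ys, mags, k=1):
            """For each event j, indices of the k earlier events minimizing
            eta_ij = t_ij * r_ij**FRACTAL_DIM * 10**(-B_VALUE * m_i)."""
            n = len(times)
            out = []
            for j in range(n):
                etas = np.full(n, np.inf)
                for i in range(n):
                    dt = times[j] - times[i]
                    if dt > 0:                       # only earlier events qualify
                        r = max(np.hypot(xs[j] - xs[i], ys[j] - ys[i]), 1e-6)
                        etas[i] = dt * r ** FRACTAL_DIM * 10.0 ** (-B_VALUE * mags[i])
                out.append(np.argsort(etas)[:k])     # first event has no finite ancestor
            return out

        # Toy catalog: time (days), x, y (km), magnitude.
        t = np.array([0.0, 1.0, 1.1, 30.0])
        x = np.array([0.0, 0.2, 0.1, 50.0])
        y = np.array([0.0, 0.1, 0.3, 40.0])
        m = np.array([5.0, 2.0, 2.5, 3.0])
        print(nearest_ancestors(t, x, y, m, k=2))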

  4. Fast flow-based algorithm for creating density-equalizing map projections

    PubMed Central

    Gastner, Michael T.; Seguy, Vivien; More, Pratyush

    2018-01-01

    Cartograms are maps that rescale geographic regions (e.g., countries, districts) such that their areas are proportional to quantitative demographic data (e.g., population size, gross domestic product). Unlike conventional bar or pie charts, cartograms can represent correctly which regions share common borders, resulting in insightful visualizations that can be the basis for further spatial statistical analysis. Computer programs can assist data scientists in preparing cartograms, but developing an algorithm that can quickly transform every coordinate on the map (including points that are not exactly on a border) while generating recognizable images has remained a challenge. Methods that translate the cartographic deformations into physics-inspired equations of motion have become popular, but solving these equations with sufficient accuracy can still take several minutes on current hardware. Here we introduce a flow-based algorithm whose equations of motion are numerically easier to solve compared with previous methods. The equations allow straightforward parallelization so that the calculation takes only a few seconds even for complex and detailed input. Despite the speedup, the proposed algorithm still keeps the advantages of previous techniques: With comparable quantitative measures of shape distortion, it accurately scales all areas, correctly fits the regions together, and generates a map projection for every point. We demonstrate the use of our algorithm with applications to the 2016 US election results, the gross domestic products of Indian states and Chinese provinces, and the spatial distribution of deaths in the London borough of Kensington and Chelsea between 2011 and 2014. PMID:29463721

  5. Acquisition and generalization of visuomotor transformations by nonhuman primates.

    PubMed

    Paz, Rony; Nathan, Chen; Boraud, Thomas; Bergman, Hagai; Vaadia, Eilon

    2005-02-01

    The kinematics of straight reaching movements can be specified vectorially by the direction of the movement and its extent. To explore the representation in the brain of these two properties, psychophysical studies have examined learning of visuomotor transformations of either rotation or gain and their generalization. However, the neuronal substrates of such complex learning are only beginning to be addressed. As an initial step in ensuring the validity of such investigations, it must be shown that monkeys indeed learn and generalize visuomotor transformations in the same manner as humans. Here, we analyze trajectories and velocities of movements as monkeys adapt to either rotational or gain transformations. We used rotations with different signs and magnitudes, and gains with different signs, and analyzed transfer of learning to untrained movements. The results show that monkeys can adapt to both types of transformation with a time course that resembles human learning. Analysis of the aftereffects reveals that rotation is learned locally and generalizes poorly to untrained directions, whereas gain is learned more globally and can be transferred to other amplitudes. The results lend additional support to the hypothesis that reaching movements are learned locally but can be easily rescaled to other magnitudes by scaling the peak velocity. The findings also indicate that reaching movements in monkeys are planned and executed very similarly to those in humans. This validates the underlying presumption that neuronal recordings in primates can help elucidate the mechanisms of motor learning in particular and motor planning in general.

  6. Modeling Photo-multiplier Gain and Regenerating Pulse Height Data for Application Development

    NASA Astrophysics Data System (ADS)

    Aspinall, Michael D.; Jones, Ashley R.

    2018-01-01

    Systems that adopt organic scintillation detector arrays often require a calibration process prior to the intended measurement campaign to correct for significant performance variances between detectors within the array. These differences exist because of low tolerances associated with photo-multiplier tube technology and environmental influences. Differences in detector response can be corrected for by adjusting the supplied photo-multiplier tube voltage to control its gain and the effect that this has on the pulse height spectra from a gamma-only calibration source with a defined photo-peak. Automated methods that analyze these spectra and adjust the photo-multiplier tube bias accordingly are emerging for hardware that integrate acquisition electronics and high voltage control. However, development of such algorithms require access to the hardware, multiple detectors and calibration source for prolonged periods, all with associated constraints and risks. In this work, we report on a software function and related models developed to rescale and regenerate pulse height data acquired from a single scintillation detector. Such a function could be used to generate significant and varied pulse height data that can be used to integration-test algorithms that are capable of automatically response matching multiple detectors using pulse height spectra analysis. Furthermore, a function of this sort removes the dependence on multiple detectors, digital analyzers and calibration source. Results show a good match between the real and regenerated pulse height data. The function has also been used successfully to develop auto-calibration algorithms.
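
    A minimal sketch of what such a rescale-and-regenerate function might look like (an assumption for illustration, not the reported implementation):

        # A minimal sketch (an assumption, not the reported implementation):
        # rescale recorded pulse heights by a gain ratio and regenerate the
        # pulse-height spectrum, mimicking a detector biased to a different
        # photo-multiplier gain.
        import numpy as np

        def rescale_pulse_heights(pulse_heights, gain_ratio, jitter=0.0, seed=0):
            """Scale pulse heights by gain_ratio; optional noise emulates
            detector-to-detector resolution differences."""
            rng = np.random.default_rng(seed)
            scaled = np.asarray(pulse_heights) * gain_ratio
            return scaled * (1.0 + jitter * rng.standard_normal(len(scaled)))

        rng = np.random.default_rng(1)
        measured = rng.exponential(0.3, 20000)       # stand-in acquired pulse data
        regen = rescale_pulse_heights(measured, gain_ratio=1.4, jitter=0.02)
        # Spectral features (e.g., a Compton edge) shift by the gain ratio.
        hist_a, edges = np.histogram(measured, bins=200, range=(0, 3))
        hist_b, _ = np.histogram(regen, bins=200, range=(0, 3))
        print(regen.mean() / measured.mean())        # ~1.4, the applied gain ratio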

  7. "Ideal" tearing and the transition to fast reconnection in the weakly collisional MHD and EMHD regimes

    NASA Astrophysics Data System (ADS)

    Del Sarto, Daniele; Pucci, Fulvia; Tenerani, Anna; Velli, Marco

    2016-03-01

    This paper discusses the transition to fast growth of the tearing instability in thin current sheets in the collisionless limit where electron inertia drives the reconnection process. It has been previously suggested that in resistive MHD there is a natural maximum aspect ratio (ratio of sheet length and breadth to thickness) which may be reached for current sheets with a macroscopic length L, the limit being provided by the fact that the tearing mode growth time becomes of the same order as the Alfvén time calculated on the macroscopic scale. For current sheets with a smaller aspect ratio than critical the normalized growth rate tends to zero with increasing Lundquist number S, while for current sheets with an aspect ratio greater than critical the growth rate diverges with S. Here we carry out a similar analysis but with electron inertia as the term violating magnetic flux conservation: previously found scalings of critical current sheet aspect ratios with the Lundquist number are generalized to include the dependence on the ratio $d_e^2/L^2$, where $d_e$ is the electron skin depth, and it is shown that there are limiting scalings which, as in the resistive case, result in reconnecting modes growing on ideal time scales. Finite Larmor radius effects are then included, and the rescaling argument at the basis of "ideal" reconnection is proposed to explain secondary fast reconnection regimes naturally appearing in numerical simulations of current sheet evolution.

  8. Quenched bond randomness: Superfluidity in porous media and the strong violation of universality

    NASA Astrophysics Data System (ADS)

    Falicov, Alexis; Berker, A. Nihat

    1997-04-01

    The effects of quenched bond randomness are most readily studied with superfluidity immersed in a porous medium. A lattice model for 3He-4He mixtures and incomplete 4He fillings in aerogel yields the signature effect of bond randomness, namely the conversion of symmetry-breaking first-order phase transitions into second-order phase transitions, the λ-line reaching zero temperature, and the elimination of non-symmetry-breaking first-order phase transitions. The model recognizes the importance of the connected nature of aerogel randomness and thereby yields superfluidity at very low 4He concentrations, a phase separation entirely within the superfluid phase, and the order-parameter contrast between mixtures and incomplete fillings, all in agreement with experiments. The special properties of the helium mixture/aerogel system are distinctly linked to the aerogel properties of connectivity, randomness, and tenuousness, via the additional study of a regularized “jungle-gym” aerogel. Renormalization-group calculations indicate that a strong violation of the empirical universality principle of critical phenomena occurs under quenched bond randomness. It is argued that helium/aerogel critical properties reflect this violation, and further experiments are suggested. Renormalization-group analysis also shows that, adjoining the strong universality violation (which hinges on the occurrence or non-occurrence of asymptotic strong-coupling, strong-randomness behavior under rescaling), there is a new “hyperuniversality” at phase transitions with asymptotic strong-coupling, strong-randomness behavior, for example assigning the same critical exponents to random-bond tricriticality and random-field criticality.

  9. Adaptive contact networks change effective disease infectiousness and dynamics.

    PubMed

    Van Segbroeck, Sven; Santos, Francisco C; Pacheco, Jorge M

    2010-08-19

    Human societies are organized in complex webs that are constantly reshaped by a social dynamic which is influenced by the information individuals have about others. Similarly, epidemic spreading may be affected by local information that makes individuals aware of the health status of their social contacts, allowing them to avoid contact with those infected and to remain in touch with the healthy. Here we study disease dynamics in finite populations in which infection occurs along the links of a dynamical contact network whose reshaping may be biased based on each individual's health status. We adopt some of the most widely used epidemiological models, investigating the impact of the reshaping of the contact network on the disease dynamics. We derive analytical results in the limit where network reshaping occurs much faster than disease spreading and demonstrate numerically that this limit extends to a much wider range of time scales than one might anticipate. Specifically, we show that from a population-level description, disease propagation in a quickly adapting network can be formulated equivalently as disease spreading on a well-mixed population but with a rescaled infectiousness. We find that for all models studied here--SI, SIS and SIR--the effective infectiousness of a disease depends on the population size, the number of infected in the population, and the capacity of healthy individuals to sever contacts with the infected. Importantly, we indicate how the use of available information hinders disease progression, either by reducing the average time required to eradicate a disease (in case recovery is possible), or by increasing the average time needed for a disease to spread to the entire population (in case recovery or immunity is impossible).
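
    A minimal sketch of the population-level picture (the linear form of the rescaling below is an invented placeholder; the paper derives the exact dependence on the rewiring rates): well-mixed SIS dynamics with a rescaled infectiousness.

        # A minimal sketch (the linear rescaling is an invented placeholder; the
        # paper derives the exact dependence on rewiring rates): well-mixed SIS
        # dynamics with an effective, rescaled infectiousness.
        def sis_endemic_fraction(beta, gamma, severing, dt=0.01, steps=50_000):
            """Integrate di/dt = beta_eff*i*(1-i) - gamma*i with
            beta_eff = beta*(1 - severing), severing in [0, 1)."""
            beta_eff = beta * (1.0 - severing)
            i = 0.01
            for _ in range(steps):
                i += dt * (beta_eff * i * (1.0 - i) - gamma * i)
            return i

        for severing in (0.0, 0.3, 0.6):
            # Stronger link-severing by healthy individuals lowers the endemic level.
            print(severing, round(sis_endemic_fraction(0.8, 0.3, severing), 3))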

  10. Hope, Interpreter Self-efficacy, and Social Impacts: Assessment of the NNOCCI Training

    NASA Astrophysics Data System (ADS)

    Fraser, J.; Swim, J.

    2012-12-01

    Conservation educators at informal science learning centers are well positioned to teach climate science and motivate action, but have resisted the topic. Our research demonstrates that their resistance is due to self-doubt about climate science facts and the belief that they will encounter negative audience feedback, and that this self-doubt and self-silencing is emotionally taxing. In response, the National Network for Ocean Climate Change Interpretation (NNOCCI) program was developed to address educators' needs for technical training and emotional scaffolding to help them fully engage with this work. The evaluation of this program sought to understand how to support educators interested in promoting public literacy on climate change through engagement with a structured training program aimed at increasing the efficacy of interpreters by teaching strategic framing techniques. The program engaged educator dyads from informal science learning sites in an online and in-person program that initiated a new community of practice focused on sharing techniques and tools for ocean climate change interpretation. The presentation will summarize a model for embedded assessment across all aspects of a program and how social vectors, based upon educators' interpersonal and professional relationships, impact the understanding of an educator's work across their life-world. This summary will be followed by results from qualitative front-end research that demonstrated the psychologically complex emotional conditions that describe the experience of being an environmental educator. The project evaluators will then present results from their focus groups and social network analysis to demonstrate how training impacted in-group relationships, skill development, and the layered social education strategies that help communities engage with the content. Results demonstrated that skill training increased educators' hope, in the form of increased perceived agency and plans for educational objectives. Subsequent to the program, educators experienced socially supportive feedback from colleagues and peers and increased their actions to engage the public in productive discussions about climate change at informal science learning venues. The front-end and formative assessment of this program suggests new strategies for measuring interpreter training and a way of thinking holistically about an educator's impact in their community. The results challenge the concept that interpretation is limited to the workplace and suggest that increased effectiveness in interpretation across all social vectors is more likely to result in changed public understanding of climate science in ways that will promote public action toward remediation strategies. [Table: Emotions before and after study circle. The personal hope scale was rescaled to range from 1 "strongly disagree" to 4 "strongly agree"; Distress/Anxiety vs. Hopeful and Energized vs. Overwhelmed range from 1 "not at all" to 4 "very much."]

  11. Development and Application of Sr/Ca-δ18O-Sea Surface Temperature calibrations for Last Glacial Maximum-Aged Isopora corals in the Great Barrier Reef

    NASA Astrophysics Data System (ADS)

    Brenner, L. D.; Linsley, B. K.; Potts, D. C.; Felis, T.; Mcgregor, H. V.; Gagan, M. K.; Inoue, M.; Tudhope, A. W.; Esat, T. M.; Thompson, W. G.; Tiwari, M.; Fallon, S.; Humblet, M.; Yokoyama, Y.; Webster, J.

    2016-12-01

    Isopora (Acroporidae) are sub-massive to massive corals found on most modern and fossil Indo-Pacific reefs. Despite their abundance, they are largely absent from the paleoceanographic literature but have the potential to provide proxy data where other commonly used corals, such as Porites, are sparse. The retrieval of Isopora fossils during International Ocean Discovery Program Leg 325 in the Great Barrier Reef (GBR) signaled the need to evaluate their possible paleoceanographic utility. We developed modern skeletal Sr/Ca- and δ18O-sea surface temperature (SST) calibrations for six modern Isopora colonies collected at Heron Island in the southern GBR. Pairing the coral Sr/Ca record with monthly SST data yielded reduced major axis Sr/Ca- and δ18O-SST sensitivities of -0.054 mmol/mol/°C and -0.152 ‰/°C, respectively, falling within the range of published Porites values. We applied our Isopora-based regressions, and previously published sensitivities from other species, to a suite (n=37) of fossil samples collected during IODP Leg 325. The calibrations produced a range of 3-7°C of warming in the GBR from 22 ka to modern, averaging 5°C. This SST change is similar to, or slightly larger than, that in other coral studies, and larger than in planktonic foraminifera Mg/Ca records. The planktonic Mg/Ca records from the Indonesian and Western Pacific Warm Pools indicate a warming of 3-3.5°C since 23 ka (Linsley et al., 2010), while a fossil coral record from Tahiti indicates a warming of 3.2°C from 9.5 ka to present (DeLong et al., 2010) and western Pacific coral records suggest a cooling of 5-6°C (Gagan et al., 2010; Guilderson et al., 1994; Beck et al., 1997), although these values might require rescaling (Gagan et al., 2012), resulting in slightly warmer temperature calculations. Our Isopora fossils from the GBR speak to the spatial heterogeneity of warming since the LGM and the continued need to develop more records for a more comprehensive understanding of the deglaciation.
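
    Applying such a calibration is a one-line rescaling. A minimal sketch using the Isopora Sr/Ca sensitivity quoted above (the fossil and modern Sr/Ca values are invented for illustration):

        # A minimal sketch (illustrative; the Sr/Ca values are invented): convert
        # a fossil coral's Sr/Ca offset from a modern reference into a relative
        # SST estimate using the calibration slope quoted above.
        SR_CA_SLOPE = -0.054   # mmol/mol per deg C, modern Isopora calibration

        def sst_anomaly(sr_ca_fossil, sr_ca_modern):
            """Delta SST implied by a Sr/Ca difference (higher Sr/Ca -> cooler)."""
            return (sr_ca_fossil - sr_ca_modern) / SR_CA_SLOPE

        # A fossil Sr/Ca 0.27 mmol/mol above modern implies ~5 deg C of cooling,
        # consistent with the ~5 deg C average LGM-to-modern warming quoted above.
        print(sst_anomaly(9.27, 9.00))   # -> -5.0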

  12. The Association of Arsenic Exposure and Metabolism With Type 1 and Type 2 Diabetes in Youth: The SEARCH Case-Control Study

    PubMed Central

    Kuo, Chin-Chi; Spratlen, Miranda; Thayer, Kristina A.; Mendez, Michelle A.; Hamman, Richard F.; Dabelea, Dana; Adgate, John L.; Knowler, William C.; Bell, Ronny A.; Miller, Frederick W.; Liese, Angela D.; Zhang, Chongben; Douillet, Christelle; Drobná, Zuzana; Mayer-Davis, Elizabeth J.; Styblo, Miroslav

    2017-01-01

    OBJECTIVE Little is known about arsenic and diabetes in youth. We examined the association of arsenic with type 1 and type 2 diabetes in the SEARCH for Diabetes in Youth Case-Control (SEARCH-CC) study. Because one-carbon metabolism can influence arsenic metabolism, we also evaluated the potential interaction of folate and vitamin B12 with arsenic metabolism on the odds of diabetes. RESEARCH DESIGN AND METHODS Six hundred eighty-eight participants <22 years of age (429 with type 1 diabetes, 85 with type 2 diabetes, and 174 control participants) were evaluated. Arsenic species (inorganic arsenic [iAs], monomethylated arsenic [MMA], dimethylated arsenic [DMA]), and one-carbon metabolism biomarkers (folate and vitamin B12) were measured in plasma. We used the sum of iAs, MMA, and DMA (∑As) and the individual species as biomarkers of arsenic concentrations and the relative proportions of the species over their sum (iAs%, MMA%, DMA%) as biomarkers of arsenic metabolism. RESULTS Median ∑As, iAs%, MMA%, and DMA% were 83.1 ng/L, 63.4%, 10.3%, and 25.2%, respectively. ∑As was not associated with either type of diabetes. The fully adjusted odds ratios (95% CI), rescaled to compare a difference in levels corresponding to the interquartile range of iAs%, MMA%, and DMA%, were 0.68 (0.50–0.91), 1.33 (1.02–1.74), and 1.28 (1.01–1.63), respectively, for type 1 diabetes and 0.82 (0.48–1.39), 1.09 (0.65–1.82), and 1.17 (0.77–1.77), respectively, for type 2 diabetes. In interaction analysis, the odds ratio of type 1 diabetes by MMA% was 1.80 (1.25–2.58) and 0.98 (0.70–1.38) for participants with plasma folate levels above and below the median (P for interaction = 0.02), respectively. CONCLUSIONS Low iAs% versus high MMA% and DMA% was associated with a higher odds of type 1 diabetes, with a potential interaction by folate levels. These data support further research on the role of arsenic metabolism in type 1 diabetes, including the interplay with one-carbon metabolism biomarkers. PMID:27810988

  13. Relating the dynamics of climatological and hydrological droughts in semiarid Botswana

    NASA Astrophysics Data System (ADS)

    Byakatonda, Jimmy; Parida, B. P.; Kenabatho, Piet K.

    2018-06-01

    The dynamics of droughts are an associated feature of climate variability, particularly in semiarid regions, and impact the response of hydrological systems. This study attempts to determine a drought timescale suitable for monitoring the effects of drought on hydrological systems, which can then be used to assess long-term persistence or reversion and to forecast the dynamics. To this end, climatological and hydrological drought indices, characterized by the Standardized Precipitation Evapotranspiration Index (SPEI) and the Standardized Flow Index (SFI) respectively, have been determined using monthly rainfall, temperature, and flow data from two major river systems. The association between climatological and hydrological droughts in Botswana has been investigated using these river systems, namely the Okavango, which is predominantly a storage type, and the Limpopo, which is non-storage, for the period 1975-2014. The dynamics of climatological and hydrological droughts show trends towards drying conditions in both river systems. It was also observed that hydrological droughts lag climatological droughts by 7 months in the Limpopo and 6 months in the Okavango river systems. Analyses of the association between climatic and flow indices indicate that the degree of association becomes stronger with increasing timescale in the Okavango river system. In the Limpopo river system, however, the long timescales of 18 and 24 months were not useful for drought monitoring. A 15-month timescale was identified as best for monitoring drought dynamics at both locations. Therefore, SPEIs and SFIs computed at the 15-month timescale have been used to assess the variability and long-term persistence of drought dynamics through rescaled range analysis (R/S). Hurst coefficients of 0.06 and 0.08 were obtained for the Limpopo and Okavango respectively. These coefficients, being significantly less than 0.5, indicate high variability and suggest a change in dynamics from the existing conditions in these river systems. To forecast possible changes, the nonlinear autoregressive with exogenous input (NARX) artificial neural network model has been used. Results from this model agree with those of the R/S analysis and project generally dry conditions for the next 40 months. Results from this study are helpful not only in choosing a proper timescale but also in evaluating future drought dynamics, which is necessary for water resources planning and management.
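
    A minimal Python sketch of the rescaled range (R/S) estimate of the Hurst coefficient (an illustration, not the study's code; the window sizes and synthetic input are arbitrary choices):

        # A minimal sketch of rescaled range (R/S) analysis: the Hurst coefficient
        # H is the slope of log(R/S) against log(window size); H < 0.5 indicates
        # anti-persistence, as reported above. Window sizes and the synthetic
        # input series are arbitrary illustrative choices.
        import numpy as np

        def rescaled_range(series):
            """R/S of one window: range of the cumulative mean-adjusted series
            divided by its standard deviation."""
            x = np.asarray(series, dtype=float)
            z = np.cumsum(x - x.mean())
            s = x.std()
            return (z.max() - z.min()) / s if s > 0 else np.nan

        def hurst_exponent(series, window_sizes):
            pts = []
            for n in window_sizes:
                rs = [rescaled_range(series[i:i + n])
                      for i in range(0, len(series) - n + 1, n)]
                pts.append((np.log(n), np.log(np.nanmean(rs))))
            logn, logrs = np.array(pts).T
            return np.polyfit(logn, logrs, 1)[0]   # slope = H

        rng = np.random.default_rng(3)
        white = rng.standard_normal(4096)          # uncorrelated stand-in series
        # ~0.5-0.6 (small-sample bias pushes R/S slightly above 0.5 here)
        print(hurst_exponent(white, [16, 32, 64, 128, 256]))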

  14. The Association of Arsenic Exposure and Metabolism With Type 1 and Type 2 Diabetes in Youth: The SEARCH Case-Control Study.

    PubMed

    Grau-Pérez, Maria; Kuo, Chin-Chi; Spratlen, Miranda; Thayer, Kristina A; Mendez, Michelle A; Hamman, Richard F; Dabelea, Dana; Adgate, John L; Knowler, William C; Bell, Ronny A; Miller, Frederick W; Liese, Angela D; Zhang, Chongben; Douillet, Christelle; Drobná, Zuzana; Mayer-Davis, Elizabeth J; Styblo, Miroslav; Navas-Acien, Ana

    2017-01-01

    Little is known about arsenic and diabetes in youth. We examined the association of arsenic with type 1 and type 2 diabetes in the SEARCH for Diabetes in Youth Case-Control (SEARCH-CC) study. Because one-carbon metabolism can influence arsenic metabolism, we also evaluated the potential interaction of folate and vitamin B12 with arsenic metabolism on the odds of diabetes. Six hundred eighty-eight participants <22 years of age (429 with type 1 diabetes, 85 with type 2 diabetes, and 174 control participants) were evaluated. Arsenic species (inorganic arsenic [iAs], monomethylated arsenic [MMA], dimethylated arsenic [DMA]), and one-carbon metabolism biomarkers (folate and vitamin B12) were measured in plasma. We used the sum of iAs, MMA, and DMA (∑As) and the individual species as biomarkers of arsenic concentrations and the relative proportions of the species over their sum (iAs%, MMA%, DMA%) as biomarkers of arsenic metabolism. Median ∑As, iAs%, MMA%, and DMA% were 83.1 ng/L, 63.4%, 10.3%, and 25.2%, respectively. ∑As was not associated with either type of diabetes. The fully adjusted odds ratios (95% CI), rescaled to compare a difference in levels corresponding to the interquartile range of iAs%, MMA%, and DMA%, were 0.68 (0.50-0.91), 1.33 (1.02-1.74), and 1.28 (1.01-1.63), respectively, for type 1 diabetes and 0.82 (0.48-1.39), 1.09 (0.65-1.82), and 1.17 (0.77-1.77), respectively, for type 2 diabetes. In interaction analysis, the odds ratio of type 1 diabetes by MMA% was 1.80 (1.25-2.58) and 0.98 (0.70-1.38) for participants with plasma folate levels above and below the median (P for interaction = 0.02), respectively. Low iAs% versus high MMA% and DMA% was associated with a higher odds of type 1 diabetes, with a potential interaction by folate levels. These data support further research on the role of arsenic metabolism in type 1 diabetes, including the interplay with one-carbon metabolism biomarkers. © 2017 by the American Diabetes Association.
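
    A minimal sketch of the interquartile-range rescaling of an odds ratio (the coefficient and synthetic exposure below are invented for illustration; the paper reports only the rescaled estimates):

        # A minimal sketch (the coefficient and synthetic exposure are invented):
        # rescale a logistic-regression coefficient so the odds ratio compares
        # the 75th to the 25th percentile of the exposure.
        import numpy as np

        def iqr_odds_ratio(beta_per_unit, exposure):
            """Odds ratio for an interquartile-range contrast: exp(beta * IQR)."""
            q25, q75 = np.percentile(exposure, [25, 75])
            return np.exp(beta_per_unit * (q75 - q25))

        rng = np.random.default_rng(4)
        mma_pct = rng.normal(10.3, 3.0, 688)    # stand-in MMA% distribution
        # With an illustrative beta of 0.07 per percentage point and an IQR of
        # ~4 percentage points, the rescaled OR is ~1.33.
        print(iqr_odds_ratio(0.07, mma_pct))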

  15. When what we need influences what we see: choice of energetic replenishment is linked with perceived steepness.

    PubMed

    Taylor-Covill, Guy A H; Eves, Frank F

    2014-06-01

    The apparent steepness of the locomotor challenge presented by hills and staircases is overestimated in explicit awareness. Experimental evidence suggests the visual system may rescale our conscious experience of steepness in line with available energy resources. Skeptics of this "embodied" view argue that such findings reflect experimental demand. This article tested whether perceived steepness was related to resource choices in the built environment. Travelers in a station estimated the slant angle of a 6.45 m staircase (23.4°) either before (N = 302) or after (N = 109) choosing from a selection of consumable items containing differing levels of energetic resources. Participants unknowingly allocated themselves to a quasi-experimental group based on the energetic resources provided by the item they chose. Consistent with a resource-based model, individuals who chose items with a greater energy density, or more rapidly available energy, estimated the staircase as steeper than those opting for items that provided fewer energetic resources.

  16. Enhancing the absorption and energy transfer process via quantum entanglement

    NASA Astrophysics Data System (ADS)

    Zong, Xiao-Lan; Song, Wei; Zhou, Jian; Yang, Ming; Yu, Long-Bao; Cao, Zhuo-Liang

    2018-07-01

    The quantum network model is widely used to describe the dynamics of excitation energy transfer in photosynthetic complexes. Unlike previous schemes, we explore a specific network model that includes both the light-harvesting and the energy transfer process. We define a rescaled measure to quantify the energy transfer efficiency from the external driving to the sink, where the external driving fields are used to simulate the energy absorption process. To study the role of the initial state in the light-harvesting and energy transfer process, we take the initial state of the donors to be two-qubit and three-qubit entangled states, respectively. In the two-qubit case, we find that initial entanglement between the donors can help to improve the absorption and energy transfer process in both the near-resonant and large-detuning regimes. In the three-qubit case, the transfer efficiency reaches a larger value more quickly for tripartite entanglement than for bipartite entanglement.
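
    A minimal sketch of the kind of single-excitation network dynamics described above: two donor sites coupled to an irreversible sink, evolved under a Lindblad master equation from an entangled donor state. The Hamiltonian, rates, and names here are illustrative assumptions (the driving fields are omitted for brevity), not the authors' model.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Site basis in the single-excitation sector: |0> = donor 1,
      # |1> = donor 2, |2> = sink. J couples the donors; gamma drains
      # donor 2 irreversibly into the sink.
      eps, J, gamma = 1.0, 0.2, 0.5
      H = np.array([[eps, J,   0.0],
                    [J,   eps, 0.0],
                    [0.0, 0.0, 0.0]], dtype=complex)
      L = np.zeros((3, 3), dtype=complex)
      L[2, 1] = np.sqrt(gamma)                   # |sink><donor 2|

      def lindblad(t, y):
          # drho/dt = -i[H, rho] + L rho L^+ - (1/2){L^+ L, rho}
          rho = y.reshape(3, 3)
          drho = -1j * (H @ rho - rho @ H)
          drho += L @ rho @ L.conj().T \
                  - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)
          return drho.ravel()

      # Entangled donor initial state, (|donor 1> + |donor 2>)/sqrt(2).
      psi0 = np.array([1, 1, 0], dtype=complex) / np.sqrt(2)
      rho0 = np.outer(psi0, psi0.conj())
      sol = solve_ivp(lindblad, (0, 50), rho0.ravel(),
                      t_eval=np.linspace(0, 50, 200))   # RK45 handles complex y
      sink_pop = sol.y.reshape(3, 3, -1)[2, 2].real     # transfer-efficiency proxy

    The sink population at late times serves as a simple stand-in for the transfer efficiency, so the effect of different initial donor states can be compared directly.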

  17. Invariant quantities in the scalar-tensor theories of gravitation

    NASA Astrophysics Data System (ADS)

    Järv, Laur; Kuusk, Piret; Saal, Margus; Vilson, Ott

    2015-01-01

    We consider general scalar-tensor gravity without derivative couplings. By rescaling the metric and reparametrizing the scalar field, the theory can be presented in different conformal frames and parametrizations. We argue that, owing to this freedom to transform the metric and the scalar field, the scalar field itself does not carry a physical meaning (in a generic parametrization); there are, however, functions of the scalar field and its derivatives that remain invariant under the transformations. We put forward a scheme to construct these invariants, discuss how to formulate the theory in terms of them, and show how observables such as the parametrized post-Newtonian parameters and characteristics of the cosmological solutions can be neatly expressed in terms of the invariants. In particular, we describe the scalar field solutions in Friedmann-Lemaître-Robertson-Walker cosmology in the Einstein and Jordan frames and explain their correspondence, even though the approximate equations turn out to be linear in one frame and nonlinear in the other.
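
    For orientation, in a commonly used parametrization of the derivative-coupling-free scalar-tensor action, two of the simplest such invariants can be written as follows; the conventions here are an assumption for illustration and may differ from the paper's exact definitions:

      \[
      S = \frac{1}{2\kappa^2}\int \mathrm{d}^4x \sqrt{-g}\,
          \Big[ A(\Phi)\,R - B(\Phi)\,g^{\mu\nu}\nabla_\mu\Phi\,\nabla_\nu\Phi
                - 2\ell^{-2}V(\Phi) \Big]
          + S_m\!\big[\, e^{2\alpha(\Phi)} g_{\mu\nu},\, \chi \,\big],
      \qquad
      \mathcal{I}_1 = \frac{e^{2\alpha(\Phi)}}{A(\Phi)}, \quad
      \mathcal{I}_2 = \frac{V(\Phi)}{A(\Phi)^2}.
      \]

    Under a conformal rescaling of the metric by e^{2γ(Φ)} together with a reparametrization of Φ, the functions transform as A → e^{2γ}A, V → e^{4γ}V, and α → α + γ, so both ratios are unchanged.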

  18. A priori and a posteriori analyses of the flamelet/progress variable approach for supersonic combustion

    NASA Astrophysics Data System (ADS)

    Saghafian, Amirreza; Pitsch, Heinz

    2012-11-01

    A compressible flamelet/progress variable approach (CFPV) has been devised for high-speed flows. Temperature is computed from the transported total energy and the tabulated species mass fractions, and the source term of the progress variable is rescaled with pressure and temperature. Combustion is thus modeled by three additional scalar equations and a chemistry table computed in a pre-processing step. Three-dimensional direct numerical simulation (DNS) databases of a reacting supersonic turbulent mixing layer with detailed chemistry are analyzed to assess the underlying assumptions of CFPV. Large eddy simulations (LES) of the same configuration using the CFPV method have been performed and compared with the DNS results. The LES computations are based on presumed subgrid PDFs of the mixture fraction and progress variable, a beta function and a delta function respectively, which are assessed using the DNS databases. The flamelet equation budget is also computed to verify the validity of the CFPV method for high-speed flows.
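
    To illustrate the presumed beta-PDF step mentioned above, here is a hedged sketch of averaging a tabulated flamelet quantity over a beta distribution of mixture fraction whose parameters are fixed by the filtered mean and subgrid variance; the function names and the stand-in table are hypothetical, not the authors' solver.

      import numpy as np
      from scipy.stats import beta
      from scipy.integrate import trapezoid

      def presumed_beta_average(phi_table, z_grid, z_mean, z_var):
          # Beta parameters from the first two moments of Z on [0, 1]
          # (requires 0 < z_var < z_mean * (1 - z_mean)).
          g = z_mean * (1.0 - z_mean) / z_var - 1.0
          a, b = z_mean * g, (1.0 - z_mean) * g
          pdf = beta.pdf(z_grid, a, b)
          # Numerical renormalization tames integrable endpoint singularities.
          return trapezoid(pdf * phi_table, z_grid) / trapezoid(pdf, z_grid)

      # Example: a stand-in tabulated profile phi(Z) on a mixture-fraction grid.
      z = np.linspace(1e-6, 1 - 1e-6, 401)
      phi = np.exp(-(z - 0.3)**2 / 0.01)
      print(presumed_beta_average(phi, z, z_mean=0.3, z_var=0.01))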

  19. One-loop gravitational wave spectrum in de Sitter spacetime

    NASA Astrophysics Data System (ADS)

    Fröb, Markus B.; Roura, Albert; Verdaguer, Enric

    2012-08-01

    The two-point function for tensor metric perturbations around de Sitter spacetime, including one-loop corrections from massless conformally coupled scalar fields, is calculated exactly. We work in the Poincaré patch (with spatially flat sections) and employ dimensional regularization for the renormalization process. Unlike previous studies, we obtain the result for arbitrary time separations rather than just equal times. Moreover, in contrast to existing results for tensor perturbations, ours is manifestly invariant under the subgroup of de Sitter isometries corresponding to a simultaneous time translation and rescaling of the spatial coordinates. Selecting the right initial state for the interacting theory via an appropriate iε prescription is crucial for this. Finally, we show that although the two-point function is a well-defined spacetime distribution, the equal-time limit of its spatial Fourier transform is divergent. Therefore, in contrast to the well-defined distribution for arbitrary time separations, the power spectrum is, strictly speaking, ill-defined when loop corrections are included.
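
    The isometry invoked above can be made concrete. In the Poincaré patch with conformal time η < 0 the metric reads (standard conventions, stated here as background rather than taken from the paper):

      \[
      ds^2 = \frac{1}{H^2\eta^2}\left(-d\eta^2 + \delta_{ij}\,dx^i dx^j\right),
      \]

    which is manifestly invariant under (η, x^i) → (λη, λx^i). In cosmological time t, with a(t) = e^{Ht} = −1/(Hη), this is the time translation t → t − H^{−1} ln λ combined with the spatial rescaling x^i → λx^i.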

  20. Nature of self-diffusion in two-dimensional fluids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Bongsik; Han, Kyeong Hwan; Kim, Changho

    Self-diffusion in a two-dimensional simple fluid is investigated by both analytical and numerical means. We investigate the anomalous aspects of self-diffusion in two-dimensional fluids with regard to the mean square displacement, the time-dependent diffusion coefficient, and the velocity autocorrelation function (VACF), using a consistency equation relating these quantities. We numerically confirm the consistency equation by extensive molecular dynamics simulations for finite systems, corroborate earlier results indicating that the kinematic viscosity approaches a finite, non-vanishing value in the thermodynamic limit, and establish the finite-size behavior of the diffusion coefficient. We obtain the exact solution of the consistency equation in the thermodynamic limit and use this solution to determine the large-time asymptotics of the mean square displacement, the diffusion coefficient, and the VACF. The asymptotic decay law of the VACF resembles the previously known self-consistent form, $1/(t\sqrt{\ln t})$, however with a rescaled time.
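
    For reference, relations of the kind that link these three quantities follow from stationarity alone; this is one common form, stated here as an assumption rather than the paper's exact normalization:

      \[
      \langle |\Delta \mathbf{r}(t)|^2 \rangle
        = 2\int_0^t (t-s)\,\langle \mathbf{v}(0)\!\cdot\!\mathbf{v}(s)\rangle\, ds,
      \qquad
      D(t) \equiv \frac{1}{2d}\frac{d}{dt}\langle |\Delta \mathbf{r}(t)|^2 \rangle
        = \frac{1}{d}\int_0^t \langle \mathbf{v}(0)\!\cdot\!\mathbf{v}(s)\rangle\, ds.
      \]

    With a VACF decaying like $1/(t\sqrt{\ln t})$ in d = 2, the time integral grows like $\sqrt{\ln t}$, so the time-dependent diffusion coefficient diverges logarithmically slowly instead of approaching a constant, which is the two-dimensional anomaly the abstract refers to.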
