Sample records for logarithmic time scale

  1. Mathematical model for logarithmic scaling of velocity fluctuations in wall turbulence.

    PubMed

    Mouri, Hideaki

    2015-12-01

    For wall turbulence, moments of velocity fluctuations are known to be logarithmic functions of the height from the wall. This logarithmic scaling is due to the existence of a characteristic velocity and to the nonexistence of any characteristic height in the range of the scaling. By using the mathematics of random variables, we obtain its necessary and sufficient conditions. They are compared with characteristics of a phenomenological model of eddies attached to the wall and also with those of the logarithmic scaling of the mean velocity.
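
    As a quick illustration of such logarithmic height dependence, the classical log law of the wall can be recovered by linear regression against ln(y+). This is a minimal sketch; the constants kappa = 0.41 and B = 5.0 are conventional textbook values, not taken from this record:

```python
import math

# Hypothetical log-law profile U+ = (1/kappa) * ln(y+) + B,
# with assumed constants kappa ~ 0.41 and B ~ 5.0.
kappa, B = 0.41, 5.0
ys = [30.0 * 1.5**i for i in range(12)]        # heights in wall units
us = [math.log(y) / kappa + B for y in ys]     # mean velocity profile

# Least-squares fit of U+ against ln(y+) recovers the constants.
xs = [math.log(y) for y in ys]
n = len(xs)
mx, mu = sum(xs) / n, sum(us) / n
slope = sum((x - mx) * (u - mu) for x, u in zip(xs, us)) / sum((x - mx) ** 2 for x in xs)
inter = mu - slope * mx
print(round(1.0 / slope, 2), round(inter, 2))  # -> 0.41 5.0
```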

  2. Logarithmic violation of scaling in anisotropic kinematic dynamo model

    NASA Astrophysics Data System (ADS)

    Antonov, N. V.; Gulitskiy, N. M.

    2016-01-01

    Inertial-range asymptotic behavior of a vector (e.g., magnetic) field, passively advected by a strongly anisotropic turbulent flow, is studied by means of the field-theoretic renormalization group and the operator product expansion. The advecting velocity field is Gaussian, not correlated in time, with a pair correlation function of the form ∝ δ(t − t′)/k⊥^(d−1+ξ), where k⊥ = |k⊥| and k⊥ is the component of the wave vector perpendicular to the distinguished direction. The stochastic advection-diffusion equation for the transverse (divergence-free) vector field includes, as special cases, the kinematic dynamo model for magnetohydrodynamic turbulence and the linearized Navier-Stokes equation. In contrast to the well-known isotropic Kraichnan model, where various correlation functions exhibit anomalous scaling behavior with infinite sets of anomalous exponents, here the dependence on the integral turbulence scale L is logarithmic: instead of power-like corrections to ordinary scaling, determined by naive (canonical) dimensions, the anomalies manifest themselves as polynomials of logarithms of L.

  3. Double Resonances and Spectral Scaling in the Weak Turbulence Theory of Rotating and Stratified Turbulence

    NASA Technical Reports Server (NTRS)

    Rubinstein, Robert

    1999-01-01

    In rotating turbulence, stably stratified turbulence, and in rotating stratified turbulence, heuristic arguments concerning the turbulent time scale suggest that the inertial range energy spectrum scales as k^(-2). From the viewpoint of weak turbulence theory, there are three possibilities which might invalidate these arguments: four-wave interactions could dominate three-wave interactions leading to a modified inertial range energy balance, double resonances could alter the time scale, and the energy flux integral might not converge. It is shown that although double resonances exist in all of these problems, they do not influence overall energy transfer. However, the resonance conditions cause the flux integral for rotating turbulence to diverge logarithmically when evaluated for a k^(-2) energy spectrum; therefore, this spectrum requires logarithmic corrections. Finally, the role of four-wave interactions is briefly discussed.


  4. Logarithmic scaling for fluctuations of a scalar concentration in wall turbulence.

    PubMed

    Mouri, Hideaki; Morinaga, Takeshi; Yagi, Toshimasa; Mori, Kazuyasu

    2017-12-01

    Within wall turbulence, there is a sublayer where the mean velocity and the variance of velocity fluctuations vary logarithmically with the height from the wall. This logarithmic scaling is also known for the mean concentration of a passive scalar. By using heat as such a scalar in a laboratory experiment of a turbulent boundary layer, the existence of the logarithmic scaling is shown here for the variance of fluctuations of the scalar concentration. It is reproduced by a model of energy-containing eddies that are attached to the wall.

  5. Tolerance of ciliated protozoan Paramecium bursaria (Protozoa, Ciliophora) to ammonia and nitrites

    NASA Astrophysics Data System (ADS)

    Xu, Henglong; Song, Weibo; Lu, Lu; Alan, Warren

    2005-09-01

    The tolerance to ammonia and nitrites of the freshwater ciliate Paramecium bursaria was measured in a conventional open system. The ciliate was exposed to different concentrations of ammonia and nitrites for 2 h and 12 h in order to determine the lethal concentrations. Linear regression analysis using the probit-scale method (with 95% confidence intervals) gave a 2 h LC50 of 95.94 mg/L for ammonia and 27.35 mg/L for nitrite. There was a linear correlation between the mortality probit and the logarithmic concentration of ammonia, fit by the regression equation y = 7.32x − 9.51 (R² = 0.98; y, mortality probit; x, logarithmic concentration of ammonia), from which the 2 h LC50 for ammonia was found to be 95.50 mg/L. The linear correlation between the mortality probit and the logarithmic concentration of nitrite followed the regression equation y = 2.86x + 0.89 (R² = 0.95; y, mortality probit; x, logarithmic concentration of nitrite). Regression analysis of the toxicity curves showed that the relation between exposure time at the ammonia-N LC50 and the ammonia-N LC50 value followed the regression equation y = 2862.85·e^(−0.08x) (R² = 0.95; y, duration of exposure at the LC50 value; x, LC50 value), and that for nitrite-N followed y = 127.15·e^(−0.13x) (R² = 0.91; y, exposure time at the LC50 value; x, LC50 value). The results demonstrate that the tolerance to ammonia in P. bursaria is considerably higher than that of the larvae or juveniles of some metazoa, e.g. cultured prawns and oysters. In addition, ciliates, as bacterial predators, are likely to play a positive role in maintaining and improving water quality in aquatic environments with high ammonium levels, such as sewage treatment systems.
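
    The probit-to-LC50 step described above is easy to reproduce. A minimal sketch, using the ammonia regression from the record (y = 7.32·x − 9.51, with the sign made explicit) and the convention that 50% mortality corresponds to probit y = 5:

```python
import math

# Probit regression from the record: y = 7.32*x - 9.51,
# where y is the mortality probit and x = log10(concentration in mg/L).
a, b = 7.32, -9.51
x_lc50 = (5.0 - b) / a      # log10 concentration at 50% mortality (probit 5)
lc50 = 10 ** x_lc50
print(round(lc50, 1))       # ~96 mg/L, close to the reported 95.50 mg/L
```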

  6. Noise-induced phase space transport in two-dimensional Hamiltonian systems.

    PubMed

    Pogorelov, I V; Kandrup, H E

    1999-08-01

    First passage time experiments were used to explore the effects of low amplitude noise as a source of accelerated phase space diffusion in two-dimensional Hamiltonian systems, and these effects were then compared with the effects of periodic driving. The objective was to quantify and understand the manner in which "sticky" chaotic orbits that, in the absence of perturbations, are confined near regular islands for very long times, can become "unstuck" much more quickly when subjected to even very weak perturbations. For both noise and periodic driving, the typical escape time scales logarithmically with the amplitude of the perturbation. For white noise, the details seem unimportant: Additive and multiplicative noise typically have very similar effects, and the presence or absence of a friction related to the noise by a fluctuation-dissipation theorem is also largely irrelevant. Allowing for colored noise can significantly decrease the efficacy of the perturbation, but only when the autocorrelation time, which vanishes for white noise, becomes so large that there is little power at frequencies comparable to the natural frequencies of the unperturbed orbit. Similarly, periodic driving is relatively inefficient when the driving frequency is not comparable to these natural frequencies. This suggests that noise-induced extrinsic diffusion, like modulational diffusion associated with periodic driving, is a resonance phenomenon. The logarithmic dependence of the escape time on amplitude reflects the fact that the time required for perturbed and unperturbed orbits to diverge a given distance scales logarithmically in the amplitude of the perturbation.

  7. Logarithmic violation of scaling in strongly anisotropic turbulent transfer of a passive vector field

    NASA Astrophysics Data System (ADS)

    Antonov, N. V.; Gulitskiy, N. M.

    2015-01-01

    Inertial-range asymptotic behavior of a vector (e.g., magnetic) field, passively advected by a strongly anisotropic turbulent flow, is studied by means of the field-theoretic renormalization group and the operator product expansion. The advecting velocity field is Gaussian, not correlated in time, with a pair correlation function of the form ∝ δ(t − t′)/k⊥^(d−1+ξ), where k⊥ = |k⊥| and k⊥ is the component of the wave vector perpendicular to the distinguished direction ("direction of the flow"); this is the d-dimensional generalization of the ensemble introduced by Avellaneda and Majda [Commun. Math. Phys. 131, 381 (1990), 10.1007/BF02161420]. The stochastic advection-diffusion equation for the transverse (divergence-free) vector field includes, as special cases, the kinematic dynamo model for magnetohydrodynamic turbulence and the linearized Navier-Stokes equation. In contrast to the well-known isotropic Kraichnan model, where various correlation functions exhibit anomalous scaling behavior with infinite sets of anomalous exponents, here the dependence on the integral turbulence scale L is logarithmic: instead of power-like corrections to ordinary scaling, determined by naive (canonical) dimensions, the anomalies manifest themselves as polynomials of logarithms of L. The key point is that the matrices of scaling dimensions of the relevant families of composite operators turn out to be nilpotent and cannot be diagonalized. A detailed proof of this fact is given for correlation functions of arbitrary order.

  8. Passive advection of a vector field: Anisotropy, finite correlation time, exact solution, and logarithmic corrections to ordinary scaling

    NASA Astrophysics Data System (ADS)

    Antonov, N. V.; Gulitskiy, N. M.

    2015-10-01

    In this work we study the generalization of the problem considered in [Phys. Rev. E 91, 013002 (2015), 10.1103/PhysRevE.91.013002] to the case of finite correlation time of the environment (velocity) field. The model describes a vector (e.g., magnetic) field, passively advected by a strongly anisotropic turbulent flow. Inertial-range asymptotic behavior is studied by means of the field-theoretic renormalization group and the operator product expansion. The advecting velocity field is Gaussian, with finite correlation time and a prescribed pair correlation function. Due to the presence of the distinguished direction n, all multiloop diagrams in this model vanish, so the results obtained are exact. The inertial-range behavior of the model is described by two regimes (the limits of vanishing or infinite correlation time) that correspond to the two nontrivial fixed points of the RG equations. Their stability depends on the relation between the exponents in the energy spectrum E ∝ k⊥^(1−ξ) and the dispersion law ω ∝ k⊥^(2−η). In contrast to the well-known isotropic Kraichnan model, where various correlation functions exhibit anomalous scaling behavior with infinite sets of anomalous exponents, here the corrections to ordinary scaling are polynomials of logarithms of the integral turbulence scale L.

  9. Uplink Downlink Rate Balancing and Throughput Scaling in FDD Massive MIMO Systems

    NASA Astrophysics Data System (ADS)

    Bergel, Itsik; Perets, Yona; Shamai, Shlomo

    2016-05-01

    In this work we extend the concept of uplink-downlink rate balancing to frequency division duplex (FDD) massive MIMO systems. We consider a base station with a large number of antennas serving many single-antenna users. We first show that any unused capacity in the uplink can be traded off for higher throughput in the downlink in a system that uses either dirty paper (DP) coding or linear zero-forcing (ZF) precoding. We then study the scaling of the system throughput with the number of antennas in the cases of linear beamforming (BF) precoding, ZF precoding, and DP coding. We show that the downlink throughput is proportional to the logarithm of the number of antennas. While this logarithmic scaling is lower than the linear scaling of the rate in the uplink, it can still bring significant throughput gains. For example, we demonstrate through analysis and simulation that increasing the number of antennas from 4 to 128 increases the throughput by more than a factor of 5. We also show that a logarithmic scaling of downlink throughput as a function of the number of receive antennas can be achieved even when the number of transmit antennas increases only logarithmically with the number of receive antennas.
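
    The claimed gain from 4 to 128 antennas can be illustrated with a back-of-the-envelope model. This sketch assumes a textbook zero-forcing approximation, per-user rate ≈ log2(1 + (M − K + 1)·SNR) with K = 4 users and unit SNR; these modelling choices are ours, not taken from the paper:

```python
import math

# Assumed ZF-style sum-rate model: K users, M base-station antennas,
# per-user rate ~ log2(1 + (M - K + 1) * SNR). Illustrative only.
def sum_rate(m_antennas, k_users=4, snr=1.0):
    return k_users * math.log2(1 + (m_antennas - k_users + 1) * snr)

gain = sum_rate(128) / sum_rate(4)
print(round(gain, 1))   # -> 7.0 under these assumptions: more than a factor of 5
```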

  10. Validation of SplitVectors Encoding for Quantitative Visualization of Large-Magnitude-Range Vector Fields

    PubMed Central

    Zhao, Henan; Bryant, Garnett W.; Griffin, Wesley; Terrill, Judith E.; Chen, Jian

    2017-01-01

    We designed and evaluated SplitVectors, a new vector field display approach to help scientists perform new discrimination tasks on large-magnitude-range scientific data shown in three-dimensional (3D) visualization environments. SplitVectors uses scientific notation to display vector magnitude, thus improving legibility. We present an empirical study comparing the SplitVectors approach with three other approaches - direct linear representation, logarithmic, and text display commonly used in scientific visualizations. Twenty participants performed three domain analysis tasks: reading numerical values (a discrimination task), finding the ratio between values (a discrimination task), and finding the larger of two vectors (a pattern detection task). Participants used both mono and stereo conditions. Our results suggest the following: (1) SplitVectors improve accuracy by about 10 times compared to linear mapping and by four times to logarithmic in discrimination tasks; (2) SplitVectors have no significant differences from the textual display approach, but reduce cluttering in the scene; (3) SplitVectors and textual display are less sensitive to data scale than linear and logarithmic approaches; (4) using logarithmic can be problematic as participants' confidence was as high as directly reading from the textual display, but their accuracy was poor; and (5) Stereoscopy improved performance, especially in more challenging discrimination tasks. PMID:28113469

  11. Validation of SplitVectors Encoding for Quantitative Visualization of Large-Magnitude-Range Vector Fields.

    PubMed

    Henan Zhao; Bryant, Garnett W; Griffin, Wesley; Terrill, Judith E; Jian Chen

    2017-06-01

    We designed and evaluated SplitVectors, a new vector field display approach to help scientists perform new discrimination tasks on large-magnitude-range scientific data shown in three-dimensional (3D) visualization environments. SplitVectors uses scientific notation to display vector magnitude, thus improving legibility. We present an empirical study comparing the SplitVectors approach with three other approaches - direct linear representation, logarithmic, and text display commonly used in scientific visualizations. Twenty participants performed three domain analysis tasks: reading numerical values (a discrimination task), finding the ratio between values (a discrimination task), and finding the larger of two vectors (a pattern detection task). Participants used both mono and stereo conditions. Our results suggest the following: (1) SplitVectors improve accuracy by about 10 times compared to linear mapping and by four times to logarithmic in discrimination tasks; (2) SplitVectors have no significant differences from the textual display approach, but reduce cluttering in the scene; (3) SplitVectors and textual display are less sensitive to data scale than linear and logarithmic approaches; (4) using logarithmic can be problematic as participants' confidence was as high as directly reading from the textual display, but their accuracy was poor; and (5) Stereoscopy improved performance, especially in more challenging discrimination tasks.

  12. Logarithmic compression methods for spectral data

    DOEpatents

    Dunham, Mark E.

    2003-01-01

    A method is provided for logarithmic compression, transmission, and expansion of spectral data. A log Gabor transformation is made of incoming time series data to output spectral phase and logarithmic magnitude values. The output phase and logarithmic magnitude values are compressed by selecting only magnitude values above a selected threshold and corresponding phase values to transmit compressed phase and logarithmic magnitude values. A reverse log Gabor transformation is then performed on the transmitted phase and logarithmic magnitude values to output transmitted time series data to a user.
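
    The compress/expand idea can be sketched in a few lines. Here a plain DFT stands in for the log Gabor transform of the patent (an assumption made for brevity): transform, keep only bins whose log-magnitude exceeds a threshold together with their phases, then inverse-transform:

```python
import cmath, math

# Naive DFT / inverse DFT, standing in for the log Gabor transform.
def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

signal = [math.sin(2 * math.pi * 3 * t / 32) for t in range(32)]
spectrum = dft(signal)

# "Compression": transmit only bins whose log-magnitude exceeds a threshold,
# along with their phases (threshold value is arbitrary for the demo).
threshold = 1.0
kept = {k: c for k, c in enumerate(spectrum)
        if abs(c) > 0 and math.log(abs(c) + 1e-12) > threshold}

# "Expansion": rebuild the spectrum from the kept bins and invert.
rebuilt = [kept.get(k, 0j) for k in range(len(spectrum))]
recovered = idft(rebuilt)
err = max(abs(a - b) for a, b in zip(signal, recovered))
print(len(kept), err < 1e-6)   # a pure tone survives with only 2 of 32 bins kept
```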

  13. Advantages of using a logarithmic scale in pressure-volume diagrams for Carnot and other heat engine cycles

    NASA Astrophysics Data System (ADS)

    Shieh, Lih-Yir; Kan, Hung-Chih

    2014-04-01

    We demonstrate that plotting the P-V diagram of an ideal gas Carnot cycle on a logarithmic scale results in a more intuitive approach for deriving the final form of the efficiency equation. The same approach also facilitates the derivation of the efficiency of other thermodynamic engines that employ adiabatic ideal gas processes, such as the Brayton cycle, the Otto cycle, and the Diesel engine. We finally demonstrate that logarithmic plots of isothermal and adiabatic processes help with visualization in approximating an arbitrary process in terms of an infinite number of Carnot cycles.
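
    On logarithmic axes each leg of the cycle becomes a straight line, which is what makes the efficiency derivation transparent. A short sketch from standard ideal-gas relations (not taken verbatim from the paper):

```latex
% Isotherm (PV = nRT):        \ln P = -\ln V + \ln(nRT)        (slope $-1$)
% Adiabat (PV^{\gamma} = C):  \ln P = -\gamma \ln V + \ln C    (slope $-\gamma$)
%
% A Carnot cycle is therefore a parallelogram in the (\ln V, \ln P) plane,
% so the two isotherms span equal log-volume widths:
%   \ln(V_2/V_1) = \ln(V_3/V_4).
% The efficiency then follows directly:
\eta \;=\; 1 - \frac{Q_c}{Q_h}
     \;=\; 1 - \frac{nRT_c \ln(V_3/V_4)}{nRT_h \ln(V_2/V_1)}
     \;=\; 1 - \frac{T_c}{T_h}.
```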

  14. Portable geiger counter with logarithmic scale (in Portuguese)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oliveira, L.A.C.; de Andrade Chagas, E.; de Bittencourt, F.A.

    1971-06-01

    From the 23rd annual meeting of the Brazilian Society for the Advancement of Science; Curitiba, Brazil (4 Jul 1971). A portable scaler with a logarithmic scale covering 3 decades (1 to 10, 10 to 10^2, and 10^2 to 10^3 cps) is presented. Electrical energy is supplied at 6 volts by 4 D-type batteries. (INIS)

  15. Fast Quantum State Transfer and Entanglement Renormalization Using Long-Range Interactions.

    PubMed

    Eldredge, Zachary; Gong, Zhe-Xuan; Young, Jeremy T; Moosavian, Ali Hamed; Foss-Feig, Michael; Gorshkov, Alexey V

    2017-10-27

    In short-range interacting systems, the speed at which entanglement can be established between two separated points is limited by a constant Lieb-Robinson velocity. Long-range interacting systems are capable of faster entanglement generation, but the degree of the speedup possible is an open question. In this Letter, we present a protocol capable of transferring a quantum state across a distance L in d dimensions using long-range interactions with a strength bounded by 1/r^{α}. If α

  16. Fast Quantum State Transfer and Entanglement Renormalization Using Long-Range Interactions

    NASA Astrophysics Data System (ADS)

    Eldredge, Zachary; Gong, Zhe-Xuan; Young, Jeremy T.; Moosavian, Ali Hamed; Foss-Feig, Michael; Gorshkov, Alexey V.

    2017-10-01

    In short-range interacting systems, the speed at which entanglement can be established between two separated points is limited by a constant Lieb-Robinson velocity. Long-range interacting systems are capable of faster entanglement generation, but the degree of the speedup possible is an open question. In this Letter, we present a protocol capable of transferring a quantum state across a distance L in d dimensions using long-range interactions with a strength bounded by 1/r^α. If α

  17. Development and Validation of a Risk Scale for Emergence Agitation After General Anesthesia in Children: A Prospective Observational Study.

    PubMed

    Hino, Maai; Mihara, Takahiro; Miyazaki, Saeko; Hijikata, Toshiyuki; Miwa, Takaaki; Goto, Takahisa; Ka, Koui

    2017-08-01

    Emergence agitation (EA) is a common complication in children after general anesthesia. The goal of this 2-phase study was (1) to develop a predictive model (EA risk scale) for the incidence of EA in children receiving sevoflurane anesthesia by performing a retrospective analysis of data from our previous study (phase 1) and (2) to determine the validity of the EA risk scale in a prospective observational cohort study (phase 2). Using data collected from 120 patients in our previous study, logistic regression analysis was used to predict the incidence of EA in phase 1. The optimal combination of predictors was determined by a stepwise selection procedure using the Akaike information criterion. The β-coefficients for the selected predictors were calculated, and scores for the predictors were determined. The predictive ability of the EA risk scale was assessed by a receiver operating characteristic (ROC) curve, and the area under the ROC curve (c-index) was calculated with a 95% confidence interval (CI). In phase 2, the validity of the EA risk scale was confirmed using another data set of 100 patients who underwent minor surgery under general anesthesia. The ROC curve, the c-index, the best cutoff point, and the sensitivity and specificity at that point were calculated. In addition, we calculated the gray zone, which ranges between the two points where sensitivity and specificity, respectively, become 90%. In phase 1, the final model of the multivariable logistic regression analysis included the following 4 predictors: age (log odds ratio [OR], -0.38; 95% CI, -0.81 to 0.00), Pediatric Anesthesia Behavior score (log OR, 0.65; 95% CI, -0.09 to 1.40), anesthesia time (log OR, 0.60; 95% CI, -0.18 to 1.19), and operative procedure (log OR, 2.53; 95% CI, 1.30-3.75 for strabismus surgery and log OR, 2.71; 95% CI, 0.99-4.45 for tonsillectomy). The EA risk scale included these 4 predictors and ranged from 1 to 23 points.
In phase 2, the incidence of EA was 39%. The c-index of phase 1 was 0.84 (95% CI, 0.74-0.94), and the c-index of phase 2 was 0.81 (95% CI, 0.72-0.89). The best cutoff point for the EA risk scale was 11 (sensitivity = 87% and specificity = 61%). The gray zone ranged from 10 to 13 points, and included 38% of patients. We developed and validated an EA risk scale for children receiving sevoflurane anesthesia. In our validation cohort, this scale has excellent predictive performance (c-index > 0.8). The EA risk scale could be used to predict EA in children and adopt a preventive strategy for those at high risk. This score-based preventive approach should be studied prospectively to assess the safety and efficacy of such a strategy.
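
    The way such a logistic score maps predictor contributions to a risk estimate can be sketched directly. The intercept and the way the four log-odds terms are combined below are illustrative placeholders, not the fitted model from the study:

```python
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical risk function: intercept plus log-odds contributions from the
# four predictors named above (age, PAB score, anesthesia time, procedure).
def ea_risk(age_term, pab_term, time_term, proc_term, intercept=-2.0):
    return logistic(intercept + age_term + pab_term + time_term + proc_term)

low = ea_risk(-0.38, 0.0, 0.0, 0.0)     # favorable profile
high = ea_risk(0.0, 0.65, 0.60, 2.53)   # unfavorable profile (e.g. strabismus)
print(low < high)                        # higher scores -> higher predicted risk
```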

  18. Time-scale effects on the gain-loss asymmetry in stock indices

    NASA Astrophysics Data System (ADS)

    Sándor, Bulcsú; Simonsen, Ingve; Nagy, Bálint Zsolt; Néda, Zoltán

    2016-08-01

    The gain-loss asymmetry observed in the inverse statistics of stock indices is present for logarithmic return levels over 2%, and it is the result of non-Pearson-type autocorrelations in the index. These non-Pearson-type correlations can also be viewed as functionally dependent daily volatilities extending over a finite time interval. A generalized time-window shuffling method is used to show the existence of such autocorrelations. Their characteristic time scale proves to be smaller (less than 25 trading days) than was previously believed. It is also found that this characteristic time scale has decreased with the appearance of program trading in stock market transactions. Connections with the leverage effect are also established.
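
    The "inverse statistics" underlying the gain-loss asymmetry are first-passage times of the cumulative log return. A minimal sketch on a synthetic i.i.d. return series (so, unlike real indices, no asymmetry is expected here); the 2% level matches the record, everything else is an illustrative choice:

```python
import random

random.seed(1)
returns = [random.gauss(0.0, 0.01) for _ in range(20000)]  # synthetic log returns
rho = 0.02                                                 # 2% return level

def first_passage(start, level):
    """Waiting time until the cumulative log return crosses `level`."""
    total = 0.0
    for t in range(start, len(returns)):
        total += returns[t]
        if (level > 0 and total >= level) or (level < 0 and total <= level):
            return t - start + 1
    return None  # never crossed within the series

gains = [fp for s in range(0, 5000, 50) if (fp := first_passage(s, rho))]
losses = [fp for s in range(0, 5000, 50) if (fp := first_passage(s, -rho))]
print(len(gains) > 0 and len(losses) > 0)  # waiting-time samples for both levels
```

On real index data one would compare the full distributions of `gains` and `losses`; their maxima sitting at different waiting times is the gain-loss asymmetry.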

  19. Neural spike-timing patterns vary with sound shape and periodicity in three auditory cortical fields

    PubMed Central

    Lee, Christopher M.; Osman, Ahmad F.; Volgushev, Maxim; Escabí, Monty A.

    2016-01-01

    Mammals perceive a wide range of temporal cues in natural sounds, and the auditory cortex is essential for their detection and discrimination. The rat primary (A1), ventral (VAF), and caudal suprarhinal (cSRAF) auditory cortical fields have separate thalamocortical pathways that may support unique temporal cue sensitivities. To explore this, we record responses of single neurons in the three fields to variations in envelope shape and modulation frequency of periodic noise sequences. Spike rate, relative synchrony, and first-spike latency metrics have previously been used to quantify neural sensitivities to temporal sound cues; however, such metrics do not measure absolute spike timing of sustained responses to sound shape. To address this, in this study we quantify two forms of spike-timing precision, jitter, and reliability. In all three fields, we find that jitter decreases logarithmically with increase in the basis spline (B-spline) cutoff frequency used to shape the sound envelope. In contrast, reliability decreases logarithmically with increase in sound envelope modulation frequency. In A1, jitter and reliability vary independently, whereas in ventral cortical fields, jitter and reliability covary. Jitter time scales increase (A1 < VAF < cSRAF) and modulation frequency upper cutoffs decrease (A1 > VAF > cSRAF) with ventral progression from A1. These results suggest a transition from independent encoding of shape and periodicity sound cues on short time scales in A1 to a joint encoding of these same cues on longer time scales in ventral nonprimary cortices. PMID:26843599

  20. Microscopic Spin Model for the STOCK Market with Attractor Bubbling on Regular and Small-World Lattices

    NASA Astrophysics Data System (ADS)

    Krawiecki, A.

    A multi-agent spin model for price changes in the stock market, based on an Ising-like cellular automaton with interactions between traders randomly varying in time, is investigated by means of Monte Carlo simulations. The structure of interactions has the topology of a small-world network obtained from regular two-dimensional square lattices with various coordination numbers by randomly cutting and rewiring edges. Simulations of the model on regular lattices do not yield time series of logarithmic price returns with statistical properties comparable to the empirical ones. In contrast, in the case of networks with a certain degree of randomness, for a wide range of parameters the time series of logarithmic price returns exhibit intermittent bursting typical of volatility clustering. The tails of the return distributions also obey a power scaling law, with exponents comparable to those obtained from empirical data.
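
    A toy version of such a spin market conveys the mechanics: each trader holds a spin (+1 buy, -1 sell), couplings vary randomly in time, and the log return is taken proportional to the magnetization. All parameter values below are illustrative, not those of the paper:

```python
import math, random

random.seed(7)
N = 100
spins = [random.choice((-1, 1)) for _ in range(N)]
returns = []

for step in range(500):
    for _ in range(N):                       # one Monte Carlo sweep
        i = random.randrange(N)
        j = random.randrange(N)              # random partner (rewired link)
        coupling = random.uniform(-1, 1)     # interaction varying in time
        field = coupling * spins[j]
        p_up = 1.0 / (1.0 + math.exp(-2.0 * field))  # heat-bath update
        spins[i] = 1 if random.random() < p_up else -1
    magnetization = sum(spins) / N
    returns.append(0.1 * magnetization)      # log-price return proxy
print(len(returns))                          # -> 500
```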

  1. Effect of Logarithmic and Linear Frequency Scales on Parametric Modelling of Tissue Dielectric Data.

    PubMed

    Salahuddin, Saqib; Porter, Emily; Meaney, Paul M; O'Halloran, Martin

    2017-02-01

    The dielectric properties of biological tissues have been studied widely over the past half-century. These properties are used in a vast array of applications, from determining the safety of wireless telecommunication devices to the design and optimisation of medical devices. The frequency-dependent dielectric properties are represented in closed-form parametric models, such as the Cole-Cole model, for use in numerical simulations which examine the interaction of electromagnetic (EM) fields with the human body. In general, the accuracy of EM simulations depends upon the accuracy of the tissue dielectric models. Typically, dielectric properties are measured using a linear frequency scale; however, use of the logarithmic scale has been suggested historically to be more biologically descriptive. Thus, the aim of this paper is to quantitatively compare the Cole-Cole fitting of broadband tissue dielectric measurements collected with both linear and logarithmic frequency scales. In this way, we can determine if appropriate choice of scale can minimise the fit error and thus reduce the overall error in simulations. Using a well-established fundamental statistical framework, the results of the fitting for both scales are quantified. It is found that commonly used performance metrics, such as the average fractional error, are unable to examine the effect of frequency scale on the fitting results due to the averaging effect that obscures large localised errors. This work demonstrates that the broadband fit for these tissues is quantitatively improved when the given data is measured with a logarithmic frequency scale rather than a linear scale, underscoring the importance of frequency scale selection in accurate wideband dielectric modelling of human tissues.
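
    The core sampling issue is easy to visualize numerically: over a wide band, a linear grid crowds its points at high frequencies while a log grid covers the low-frequency dispersion region densely. A sketch with a single-pole Cole-Cole model and illustrative (non-tissue) parameters:

```python
import math

# Illustrative single-pole Cole-Cole model (parameters are placeholders,
# not fitted tissue values).
eps_inf, d_eps, tau, alpha = 4.0, 50.0, 1e-11, 0.1

def cole_cole(f):
    w = 2 * math.pi * f
    return eps_inf + d_eps / (1 + (1j * w * tau) ** (1 - alpha))

f_lo, f_hi, n = 1e7, 2e10, 101           # 10 MHz - 20 GHz, 101 points
linear = [f_lo + i * (f_hi - f_lo) / (n - 1) for i in range(n)]
logar = [f_lo * (f_hi / f_lo) ** (i / (n - 1)) for i in range(n)]

# The log grid places far more points below 1 GHz, where the model varies most.
below_lin = sum(1 for f in linear if f < 1e9)
below_log = sum(1 for f in logar if f < 1e9)
print(below_lin, below_log)               # -> 5 61
```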

  2. Effect of Logarithmic and Linear Frequency Scales on Parametric Modelling of Tissue Dielectric Data

    PubMed Central

    Salahuddin, Saqib; Porter, Emily; Meaney, Paul M.; O’Halloran, Martin

    2016-01-01

    The dielectric properties of biological tissues have been studied widely over the past half-century. These properties are used in a vast array of applications, from determining the safety of wireless telecommunication devices to the design and optimisation of medical devices. The frequency-dependent dielectric properties are represented in closed-form parametric models, such as the Cole-Cole model, for use in numerical simulations which examine the interaction of electromagnetic (EM) fields with the human body. In general, the accuracy of EM simulations depends upon the accuracy of the tissue dielectric models. Typically, dielectric properties are measured using a linear frequency scale; however, use of the logarithmic scale has been suggested historically to be more biologically descriptive. Thus, the aim of this paper is to quantitatively compare the Cole-Cole fitting of broadband tissue dielectric measurements collected with both linear and logarithmic frequency scales. In this way, we can determine if appropriate choice of scale can minimise the fit error and thus reduce the overall error in simulations. Using a well-established fundamental statistical framework, the results of the fitting for both scales are quantified. It is found that commonly used performance metrics, such as the average fractional error, are unable to examine the effect of frequency scale on the fitting results due to the averaging effect that obscures large localised errors. This work demonstrates that the broadband fit for these tissues is quantitatively improved when the given data is measured with a logarithmic frequency scale rather than a linear scale, underscoring the importance of frequency scale selection in accurate wideband dielectric modelling of human tissues. PMID:28191324

  3. Analysis of DNA Sequences by an Optical Time-Integrating Correlator: Proposal

    DTIC Science & Technology

    1991-11-01

    [Abstract not recovered; the scanned record preserves only table-of-contents and figure-list fragments.] Recoverable section and figure titles: statement of the problem and current technology; the time-integrating correlator; representations of the DNA bases (each base encoded as a 7-bit pseudorandom sequence); the DNA analysis strategy; the flow of data in a DNA analysis system; and results shown on logarithmic and linear scales.

  4. Hierarchical random additive process and logarithmic scaling of generalized high order, two-point correlations in turbulent boundary layer flow

    NASA Astrophysics Data System (ADS)

    Yang, X. I. A.; Marusic, I.; Meneveau, C.

    2016-06-01

    Townsend [Townsend, The Structure of Turbulent Shear Flow (Cambridge University Press, Cambridge, UK, 1976)] hypothesized that the logarithmic region in high-Reynolds-number wall-bounded flows consists of space-filling, self-similar attached eddies. Invoking this hypothesis, we express streamwise velocity fluctuations in the inertial layer in high-Reynolds-number wall-bounded flows as a hierarchical random additive process (HRAP): u_z^+ = Σ_{i=1}^{N_z} a_i. Here u is the streamwise velocity fluctuation, + indicates normalization in wall units, z is the wall-normal distance, and the a_i are independently, identically distributed random additives, each of which is associated with an attached eddy in the wall-attached hierarchy. The number of random additives is N_z ~ ln(δ/z), where δ is the boundary layer thickness and ln is the natural logarithm. Due to its simplified structure, such a process leads to predictions of the scaling behaviors for various turbulence statistics in the logarithmic layer. Besides reproducing the known logarithmic scaling of moments, structure functions, and the two-point correlation function ⟨u_z(x) u_z(x+r)⟩, new logarithmic laws in two-point statistics such as ⟨u_z^q(x) u_z^q(x+r)⟩^(1/q) (q = 2, 3, etc.) can be derived using the HRAP formalism. Supporting empirical evidence for the logarithmic scaling in such statistics is found in measurements from the Melbourne High Reynolds Number Boundary Layer Wind Tunnel. We also show that, at high Reynolds numbers, the above-mentioned new logarithmic laws can be derived by assuming the arrival of an attached eddy at a generic point in the flow field to be a Poisson process [Woodcock and Marusic, Phys. Fluids 27, 015104 (2015), 10.1063/1.4905301].
Taken together, the results provide new evidence supporting the essential ingredients of the attached eddy hypothesis to describe streamwise velocity fluctuations of large, momentum transporting eddies in wall-bounded turbulence, while observed deviations suggest the need for further extensions of the model.
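
    The additive structure described above lends itself to a quick numerical check. The following sketch is an illustration only: it uses Gaussian unit-variance additives as a stand-in for the unspecified distribution of the a_i, sums N_z ∼ ln(δ/z) independent additives per sample, and shows the variance of u growing linearly in ln(δ/z), i.e., a logarithmic law in z:

```python
import numpy as np

rng = np.random.default_rng(0)
delta = 1.0                                    # boundary layer thickness (arbitrary units)
heights = np.array([0.01, 0.02, 0.05, 0.1])    # wall-normal distances z
n_samples = 200_000

variances = []
for z in heights:
    n_z = max(1, int(np.log(delta / z)))       # number of attached-eddy additives
    # each sample of u is a sum of n_z iid additives a_i (unit variance here)
    u = rng.normal(0.0, 1.0, size=(n_samples, n_z)).sum(axis=1)
    variances.append(u.var())

# Var(u) = N_z * Var(a), so it grows like ln(delta/z): a logarithmic law in z
for z, v in zip(heights, variances):
    print(f"z = {z:5.2f}  ln(delta/z) = {np.log(delta/z):4.1f}  var(u) = {v:5.2f}")
```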

  5. Quantifying Stock Return Distributions in Financial Markets

    PubMed Central

    Botta, Federico; Moat, Helen Susannah; Stanley, H. Eugene; Preis, Tobias

    2015-01-01

    Being able to quantify the probability of large price changes in stock markets is of crucial importance in understanding financial crises that affect the lives of people worldwide. Large changes in stock market prices can arise abruptly, within a matter of minutes, or develop across much longer time scales. Here, we analyze a dataset comprising the stocks forming the Dow Jones Industrial Average at second-by-second resolution in the period from January 2008 to July 2010 in order to quantify the distribution of changes in market prices at a range of time scales. We find that the tails of the distributions of logarithmic price changes, or returns, exhibit power-law decays for time scales ranging from 300 seconds to 3600 seconds. For larger time scales, we find that the distribution tails exhibit exponential decay. Our findings may inform the development of models of market behavior across varying time scales. PMID:26327593
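
    The quantity under study, the logarithmic price change over a time scale Δt, is straightforward to compute. A minimal sketch, using a synthetic geometric random walk in place of the DJIA tick data analyzed above:

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic second-by-second price series (a geometric random walk stand-in
# for the real tick data studied in the record above)
n = 50_000
prices = 100.0 * np.exp(np.cumsum(rng.normal(0.0, 1e-4, n)))

def log_returns(p, dt):
    """Logarithmic price changes ('returns') over a time scale of dt ticks."""
    return np.log(p[dt:]) - np.log(p[:-dt])

for dt in (300, 3600):
    r = log_returns(prices, dt)
    print(f"dt = {dt:4d}  mean = {r.mean():+.2e}  std = {r.std():.2e}")
```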

  7. Freezing transition of the directed polymer in a 1+d random medium: Location of the critical temperature and unusual critical properties

    NASA Astrophysics Data System (ADS)

    Monthus, Cécile; Garel, Thomas

    2006-07-01

    In dimension d⩾3 , the directed polymer in a random medium undergoes a phase transition between a free phase at high temperature and a low-temperature disorder-dominated phase. For the latter phase, Fisher and Huse have proposed a droplet theory based on the scaling of the free-energy fluctuations ΔF(l)˜lθ at scale l . On the other hand, in related growth models belonging to the Kardar-Parisi-Zhang universality class, Forrest and Tang have found that the height-height correlation function is logarithmic at the transition. For the directed polymer model at criticality, this translates into logarithmic free-energy fluctuations ΔFTc(l)˜(lnl)σ with σ=1/2 . In this paper, we propose a droplet scaling analysis exactly at criticality based on this logarithmic scaling. Our main conclusion is that the typical correlation length ξ(T) of the low-temperature phase diverges as lnξ(T)˜[-ln(Tc-T)]1/σ˜[-ln(Tc-T)]2 , instead of the usual power law ξ(T)˜(Tc-T)-ν . Furthermore, the logarithmic dependence of ΔFTc(l) leads to the conclusion that the critical temperature Tc actually coincides with the explicit upper bound T2 derived by Derrida and co-workers, where T2 corresponds to the temperature below which the ratio ZL2¯/(ZL¯)2 diverges exponentially in L . Finally, since the Fisher-Huse droplet theory was initially introduced for the spin-glass phase, we briefly mention the similarities with and differences from the directed polymer model. If one speculates that the free energy of droplet excitations for spin glasses is also logarithmic at Tc , one obtains a logarithmic decay for the mean square correlation function at criticality, C2(r)¯˜1/(lnr)σ , instead of the usual power law 1/rd-2+η .

  8. The effect of multiplicity of stellar encounters and the diffusion coefficients in a locally homogeneous three-dimensional stellar medium: Removing the classical divergence

    NASA Astrophysics Data System (ADS)

    Rastorguev, A. S.; Utkin, N. D.; Chumak, O. V.

    2017-08-01

    Agekyan's λ-factor, which allows for the effect of multiplicity of stellar encounters with large impact parameters, has been used for the first time to directly calculate the diffusion coefficients in the phase space of a stellar system. Simple estimates show that the cumulative effect, i.e., the total contribution of distant encounters to the change in the velocity of a test star, given the multiplicity of stellar encounters, is finite, and the logarithmic divergence inherent in the classical description of diffusion is removed, as was shown previously by Kandrup using a different, more complex approach. In this case, the expressions for the diffusion coefficients, as in the classical description, contain the logarithm of the ratio of two independent quantities: the mean interparticle distance and the impact parameter of a close encounter. However, the physical meaning of this logarithmic factor changes radically: it reflects not the divergence but the presence of two characteristic length scales inherent in the stellar medium.

  9. A new "Logicle" display method avoids deceptive effects of logarithmic scaling for low signals and compensated data.

    PubMed

    Parks, David R; Roederer, Mario; Moore, Wayne A

    2006-06-01

    In immunofluorescence measurements and most other flow cytometry applications, fluorescence signals of interest can range down to essentially zero. After fluorescence compensation, some cell populations will have low means and include events with negative data values. Logarithmic presentation has been very useful in providing informative displays of wide-ranging flow cytometry data, but it fails to adequately display cell populations with low means and high variances and, in particular, offers no way to include negative data values. This has led to a great deal of difficulty in interpreting and understanding flow cytometry data, has often resulted in incorrect delineation of cell populations, and has led many people to question the correctness of compensation computations that were, in fact, correct. We identified a set of criteria for creating data visualization methods that accommodate the scaling difficulties presented by flow cytometry data. On the basis of these, we developed a new data visualization method that provides important advantages over linear or logarithmic scaling for display of flow cytometry data, a scaling we refer to as "Logicle" scaling. Logicle functions represent a particular generalization of the hyperbolic sine function with one more adjustable parameter than linear or logarithmic functions. Finally, we developed methods for objectively and automatically selecting an appropriate value for this parameter. The Logicle display method provides more complete, appropriate, and readily interpretable representations of data that includes populations with low-to-zero means, including distributions resulting from fluorescence compensation procedures, than can be produced using either logarithmic or linear displays. The method includes a specific algorithm for evaluating actual data distributions and deriving parameters of the Logicle scaling function appropriate for optimal display of that data. 
It is critical to note that Logicle visualization does not change the data values or the descriptive statistics computed from them. Copyright 2006 International Society for Analytical Cytology.
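
    As noted above, Logicle functions generalize the hyperbolic sine. A plain inverse hyperbolic sine axis already illustrates the key property: it is linear near zero, so negative compensated values remain displayable, and logarithmic for large signals. The sketch below is a simplified stand-in, not the Logicle function itself, and the width parameter w is a hypothetical tuning knob playing the role of Logicle's extra adjustable parameter:

```python
import numpy as np

def arcsinh_display(x, w=150.0):
    """Display transform in the spirit of Logicle scaling: arcsinh is
    linear near zero (handles negative, compensated values) and behaves
    like a logarithm for large magnitudes."""
    return np.arcsinh(np.asarray(x, dtype=float) / w)

values = np.array([-500.0, -10.0, 0.0, 10.0, 500.0, 50_000.0])
coords = arcsinh_display(values)
print(coords)
# negative values map to negative display coordinates instead of being
# dropped, and for x >> w the transform approaches ln(2x/w)
```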

  10. Size-dependent standard deviation for growth rates: Empirical results and theoretical modeling

    NASA Astrophysics Data System (ADS)

    Podobnik, Boris; Horvatic, Davor; Pammolli, Fabio; Wang, Fengzhong; Stanley, H. Eugene; Grosse, I.

    2008-05-01

    We study annual logarithmic growth rates R of various economic variables such as exports, imports, and foreign debt. For each of these variables we find that the distributions of R can be approximated by double exponential (Laplace) distributions in the central parts and power-law distributions in the tails. For each of these variables we further find a power-law dependence of the standard deviation σ(R) on the average size of the economic variable with a scaling exponent surprisingly close to that found for the gross domestic product (GDP) [Phys. Rev. Lett. 81, 3275 (1998)]. By analyzing annual logarithmic growth rates R of wages of 161 different occupations, we find a power-law dependence of the standard deviation σ(R) on the average value of the wages with a scaling exponent β≈0.14 close to those found for the growth of exports, imports, debt, and the growth of the GDP. In contrast to these findings, we observe for payroll data collected from 50 states of the USA that the standard deviation σ(R) of the annual logarithmic growth rate R increases monotonically with the average value of payroll. However, also in this case we observe a power-law dependence of σ(R) on the average payroll with a scaling exponent β≈-0.08 . Based on these observations we propose a stochastic process for multiple cross-correlated variables where for each variable (i) the distribution of logarithmic growth rates decays exponentially in the central part, (ii) the distribution of the logarithmic growth rate decays algebraically in the far tails, and (iii) the standard deviation of the logarithmic growth rate depends algebraically on the average size of the stochastic variable.
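
    The reported power-law dependence σ(R) ∼ S^(−β) is estimated exactly as described: a linear regression on a log-log scale. A minimal sketch with synthetic data (the exponent β = 0.14 is built in, so the fit should recover it; all values are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
# synthetic sizes S and growth-rate dispersions obeying sigma(R) ~ S**(-beta)
beta_true = 0.14
S = np.logspace(2, 8, 60)
sigma = S**(-beta_true) * np.exp(rng.normal(0.0, 0.02, S.size))

# a power law appears as a straight line on log-log scale, so the scaling
# exponent is the slope of a linear fit of ln(sigma) against ln(S)
slope, intercept = np.polyfit(np.log(S), np.log(sigma), 1)
beta_est = -slope
print(f"estimated scaling exponent beta = {beta_est:.3f}")
```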

  13. Holographic Dark Energy in Brans-Dicke Theory with Logarithmic Form of Scalar Field

    NASA Astrophysics Data System (ADS)

    Singh, C. P.; Kumar, Pankaj

    2017-10-01

    In this paper, an interacting holographic dark energy model with the Hubble horizon as an infra-red cut-off is considered in the framework of Brans-Dicke theory. We assume a logarithmic form of the Brans-Dicke scalar field, ϕ = ϕ₀ ln(α + βa), where a is the scale factor and α and β are arbitrary constants, to interpret the physical phenomena of the Universe. The equation-of-state parameter w_h and the deceleration parameter q are obtained to discuss the dynamics of the evolution of the Universe. We present a unified model of holographic dark energy which explains the early-time acceleration (inflation), medieval-time deceleration and late-time acceleration. It is also observed that w_h may cross the phantom divide line in the late-time evolution. We also discuss the cosmic coincidence problem. We obtain a time-varying density ratio of holographic dark energy to dark matter which is a constant of order one (r ∼ O(1)) during early and late time evolution, and may evolve sufficiently slowly at the present time. Thus, the model successfully resolves the cosmic coincidence problem.

  14. Nonlinear interactions and their scaling in the logarithmic region of turbulent channels

    NASA Astrophysics Data System (ADS)

    Moarref, Rashad; Sharma, Ati S.; Tropp, Joel A.; McKeon, Beverley J.

    2014-11-01

    The nonlinear interactions in wall turbulence redistribute the turbulent kinetic energy across different scales and different wall-normal locations. To better understand these interactions in the logarithmic region of turbulent channels, we decompose the velocity into a weighted sum of resolvent modes (McKeon & Sharma, J. Fluid Mech., 2010). The resolvent modes represent the linear amplification mechanisms in the Navier-Stokes equations (NSE) and the weights represent the scaling influence of the nonlinearity. An explicit equation for the unknown weights is obtained by projecting the NSE onto the known resolvent modes (McKeon et al., Phys. Fluids, 2013). The weights of triad modes (the modes that directly interact via the quadratic nonlinearity in the NSE) are coupled via interaction coefficients that depend solely on the resolvent modes. We use the hierarchies of self-similar modes in the logarithmic region (Moarref et al., J. Fluid Mech., 2013) to extend the notion of triad modes to triad hierarchies. It is shown that the interaction coefficients for the triad modes that belong to a triad hierarchy follow an exponential function. These scalings can be used to better understand the interaction of flow structures in the logarithmic region and to develop analytical results therein. The support of the Air Force Office of Scientific Research under Grants FA 9550-09-1-0701 (P.M. Rengasamy Ponnappan) and FA 9550-12-1-0469 (P.M. Doug Smith) is gratefully acknowledged.

  15. Exactly solvable models of growing interfaces and lattice gases: the Arcetri models, ageing and logarithmic sub-ageing

    NASA Astrophysics Data System (ADS)

    Durang, Xavier; Henkel, Malte

    2017-12-01

    Motivated by an analogy with the spherical model of a ferromagnet, the three Arcetri models are defined. They present new universality classes, either for the growth of interfaces or for lattice gases, distinct from the common Edwards-Wilkinson and Kardar-Parisi-Zhang universality classes. Their non-equilibrium evolution can be studied by the exact computation of their two-time correlators and responses. In both interpretations, the first model has a critical point in any dimension and shows simple ageing at and below criticality. The exact universal exponents are found. The second and third models are solved at zero temperature, in one dimension, where both show logarithmic sub-ageing, of which several distinct types are identified. Physically, the second model describes a lattice gas and the third model describes interface growth. A clear physical picture of the subsequent time and length scales of the sub-ageing process emerges.

  16. Equilibrium Solutions of the Logarithmic Hamiltonian Leapfrog for the N-body Problem

    NASA Astrophysics Data System (ADS)

    Minesaki, Yukitaka

    2018-04-01

    We prove that a second-order logarithmic Hamiltonian leapfrog for the classical general N-body problem (CGNBP) designed by Mikkola and Tanikawa and some higher-order logarithmic Hamiltonian methods based on symmetric multicompositions of the logarithmic algorithm exactly reproduce the orbits of elliptic relative equilibrium solutions in the original CGNBP. These methods are explicit symplectic methods. Before this proof, only some implicit discrete-time CGNBPs proposed by Minesaki had been analytically shown to trace the orbits of elliptic relative equilibrium solutions. The proof is therefore the first existence proof for explicit symplectic methods. Such logarithmic Hamiltonian methods with a variable time step can also precisely retain periodic orbits in the classical general three-body problem, which generic numerical methods with a constant time step cannot do.

  17. Continuous time random walk model with asymptotical probability density of waiting times via inverse Mittag-Leffler function

    NASA Astrophysics Data System (ADS)

    Liang, Yingjie; Chen, Wen

    2018-04-01

    The mean squared displacement (MSD) of the traditional ultraslow diffusion is a logarithmic function of time. Recently, the continuous time random walk model is employed to characterize this ultraslow diffusion dynamics by connecting the heavy-tailed logarithmic function and its variation as the asymptotical waiting time density. In this study we investigate the limiting waiting time density of a general ultraslow diffusion model via the inverse Mittag-Leffler function, whose special case includes the traditional logarithmic ultraslow diffusion model. The MSD of the general ultraslow diffusion model is analytically derived as an inverse Mittag-Leffler function, and is observed to increase even more slowly than that of the logarithmic function model. The occurrence of very long waiting time in the case of the inverse Mittag-Leffler function has the largest probability compared with the power law model and the logarithmic function model. The Monte Carlo simulations of one dimensional sample path of a single particle are also performed. The results show that the inverse Mittag-Leffler waiting time density is effective in depicting the general ultraslow random motion.
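
    The special case mentioned above, the traditional logarithmic model, is easy to simulate as a continuous time random walk. A hedged sketch follows: the waiting-time density ψ(t) = 1/(t ln²t) for t > e is one standard choice that yields MSD ∼ ln t, and the parameter values are invented for illustration (the inverse Mittag-Leffler case of the paper is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_waiting_times(shape):
    # inverse-transform sampling of psi(t) = 1/(t * ln(t)**2) for t > e:
    # the CDF is F(t) = 1 - 1/ln(t), so T = exp(1/(1 - U)) with U uniform
    u = rng.random(shape)
    with np.errstate(over="ignore"):   # very long waits overflow harmlessly to inf
        return np.exp(1.0 / (1.0 - u))

n_walkers, max_steps = 10_000, 150
obs_times = np.logspace(2, 9, 8)       # observation times t

# renewal (jump) times and unit +/-1 steps for each walker
arrivals = np.cumsum(sample_waiting_times((n_walkers, max_steps)), axis=1)
steps = rng.choice([-1, 1], size=(n_walkers, max_steps))

msd = []
for t in obs_times:
    x = np.where(arrivals <= t, steps, 0).sum(axis=1)   # position at time t
    msd.append(np.mean(x.astype(float) ** 2))

for t, m in zip(obs_times, msd):
    print(f"t = {t:9.0e}   MSD = {m:5.2f}")
# the MSD grows roughly linearly in ln(t): ultraslow diffusion
```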

  18. Simulations of stretching a flexible polyelectrolyte with varying charge separation

    DOE PAGES

    Stevens, Mark J.; Saleh, Omar A.

    2016-07-22

    We calculated the force-extension curves for a flexible polyelectrolyte chain with varying charge separations by performing Monte Carlo simulations of a 5000-bead chain using a screened Coulomb interaction. At all charge separations, the force-extension curves exhibit a Pincus-like scaling regime at intermediate forces and a logarithmic regime at large forces. As the charge separation increases, the Pincus regime shifts to a larger range of forces and the logarithmic regime starts at larger forces. We also found that the force-extension curve for the corresponding neutral chain has a logarithmic regime. Decreasing the diameter of the bead in the neutral chain simulations removed the logarithmic regime, and the force-extension curve tends to the freely jointed chain limit. In conclusion, this result shows that only excluded volume is required for the high-force logarithmic regime to occur.

  19. Improved maximum average correlation height filter with adaptive log base selection for object recognition

    NASA Astrophysics Data System (ADS)

    Tehsin, Sara; Rehman, Saad; Awan, Ahmad B.; Chaudry, Qaiser; Abbas, Muhammad; Young, Rupert; Asif, Afia

    2016-04-01

    Sensitivity to the variations in the reference image is a major concern when recognizing target objects. A combinational framework of correlation filters and logarithmic transformation has been previously reported to resolve this issue alongside catering for scale and rotation changes of the object in the presence of distortion and noise. In this paper, we have extended the work to include the influence of different logarithmic bases on the resultant correlation plane. The meaningful changes in correlation parameters along with contraction/expansion in the correlation plane peak have been identified under different scenarios. Based on our research, we propose some specific log bases to be used in logarithmically transformed correlation filters for achieving suitable tolerance to different variations. The study is based upon testing a range of logarithmic bases for different situations and finding an optimal logarithmic base for each particular set of distortions. Our results show improved correlation and target detection accuracies.
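
    By the change-of-base identity, log_b(x) = ln(x)/ln(b), so varying the base rescales the compressed intensity range fed to the correlation filter. A minimal sketch of such a transform (the helper name and the use of log1p to keep zero-valued pixels finite are choices of this illustration, not of the paper):

```python
import numpy as np

def log_transform(image, base):
    """Logarithmically transform pixel intensities with a chosen base.
    Hypothetical helper: the base rescales the compressed dynamic range
    before the image is passed to a correlation filter."""
    img = np.asarray(image, dtype=float)
    return np.log1p(img) / np.log(base)    # log1p keeps zero pixels finite

patch = np.array([[0, 10], [100, 255]])
for base in (2.0, 10.0):
    print(f"base {base}:")
    print(log_transform(patch, base))
```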

  20. Logarithmic r-θ mapping for hybrid optical neural network filter for multiple objects recognition within cluttered scenes

    NASA Astrophysics Data System (ADS)

    Kypraios, Ioannis; Young, Rupert C. D.; Chatwin, Chris R.; Birch, Phil M.

    2009-04-01

    The window unit in the design of the complex logarithmic r-θ mapping for the hybrid optical neural network filter can allow multiple objects of the same class to be detected within the input image. Additionally, the architecture of the neural network unit of the complex logarithmic r-θ mapping for the hybrid optical neural network filter becomes attractive for accommodating the recognition of multiple objects of different classes within the input image by modifying the output layer of the unit. We test the overall filter for the recognition of multiple objects of the same and of different classes within cluttered input images and video sequences of cluttered scenes. The logarithmic r-θ mapping for the hybrid optical neural network filter is shown to exhibit, with a single pass over the input data, simultaneous in-plane rotation, out-of-plane rotation, scale, log r-θ map translation and shift invariance, and good clutter tolerance by recognizing correctly the different objects within the cluttered scenes. We record in our results additional extracted information from the cluttered scenes about the objects' relative position, scale and in-plane rotation.
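
    The underlying property of the complex logarithmic r-θ map is that, about the chosen center, scale changes become shifts along the ln r axis and in-plane rotations become shifts along the θ axis. A minimal coordinate-level sketch (operating on points rather than full images; the helper name is illustrative):

```python
import numpy as np

def log_polar(points, center=(0.0, 0.0)):
    """Map Cartesian points to (ln r, theta): the complex logarithmic
    r-theta mapping, which turns scaling about the center into a shift
    along ln r and in-plane rotation into a shift along theta."""
    p = np.asarray(points, dtype=float) - np.asarray(center)
    z = p[:, 0] + 1j * p[:, 1]
    return np.column_stack([np.log(np.abs(z)), np.angle(z)])

pts = np.array([[1.0, 0.0], [0.0, 2.0], [-3.0, 1.0]])
lp = log_polar(pts)

# scaling every point by s shifts ln r by ln(s) and leaves theta unchanged
lp_scaled = log_polar(2.5 * pts)
print(lp_scaled - lp)   # first column ~ ln(2.5), second column ~ 0
```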

  1. Logarithmic detrapping response for holes injected into SiO2 and the influence of thermal activation and electric fields

    NASA Astrophysics Data System (ADS)

    Lakshmanna, V.; Vengurlekar, A. S.

    1988-05-01

    Relaxation of trapped holes that are introduced into silicon dioxide from silicon by the avalanche injection method is studied under various conditions of thermal activation and external electric fields. It is found that the flat band voltage recovery in time follows a universal behavior in that the response at high temperatures is a time scaled extension of the response at low temperatures. Similar universality exists in the detrapping response at different external bias fields. The recovery characteristics show a logarithmic time dependence in the time regime studied (up to 6000 s). We find that the recovery is thermally activated with the activation energy varying from 0.5 eV for a field of 2 MV/cm to 1.0 eV for a field of -1 MV/cm. There is little discharge in 3000 s at room temperature for negative fields beyond -4 MV/cm. The results suggest that the recovery is due to tunneling of electrons in the silicon conduction band into the oxide either to compensate or to remove the charge of trapped holes.
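
    A logarithmic time dependence of the recovery, V(t) = V₀ − c ln t, appears as a straight line when plotted against ln t, which is how such data are typically fitted. A sketch with synthetic recovery data (the parameter values are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)
# synthetic flat-band-voltage recovery with logarithmic time dependence,
# V(t) = V0 - c*ln(t), over a 1 s .. 6000 s window as in the record above
t = np.logspace(0, np.log10(6000), 40)            # seconds
v = 5.0 - 0.4 * np.log(t) + rng.normal(0.0, 0.01, t.size)

# a logarithmic dependence is a straight line against ln(t)
slope, v0 = np.polyfit(np.log(t), v, 1)
print(f"recovery rate c = {-slope:.3f} per e-fold of time, V0 = {v0:.2f}")
```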

  2. Logarithmic field dependence of the Thermal Conductivity in La_2-xSr_xCuO_4

    NASA Astrophysics Data System (ADS)

    Krishana, K.; Ong, N. P.; Kimura, T.

    1997-03-01

    We have investigated the thermal conductivity κ of La_2-xSr_xCuO_4 in fields B up to 14 tesla. To minimize errors caused by the field sensitivity of the thermocouple sensors, we used a sensitive null-detection technique. We find that below Tc, κ varies as -log B in high fields, and in the low-field limit it approaches a constant. The κ vs. B data at these temperatures collapse to a universal curve, which fits very well to an expression involving the digamma function and reminiscent of 2-D weak localization. The field scale derived from this scaling is linear in T. The logarithmic dependence of κ strongly suggests an electronic origin for the anomaly in κ below T_c. Our experiment precludes conventional vortex scattering of phonons as the source of the anomaly. The data fit poorly to these models, and the derived mean free paths are non-monotonic and 5 to 8 times larger than those obtained from heat capacity. Also, comparison of the x=0.17 and x=0.08 samples gives field scales opposite to what is expected from vortex scattering.

  3. An insight on correlations between kinematic rupture parameters from dynamic ruptures on rough faults

    NASA Astrophysics Data System (ADS)

    Thingbijam, Kiran Kumar; Galis, Martin; Vyas, Jagdish; Mai, P. Martin

    2017-04-01

    We examine the spatial interdependence between kinematic parameters of earthquake rupture, which include slip, rise-time (total duration of slip), acceleration time (time-to-peak slip velocity), peak slip velocity, and rupture velocity. These parameters were inferred from dynamic rupture models obtained by simulating spontaneous rupture on faults with varying degrees of surface roughness. We observe that the correlations between these parameters are better described by non-linear correlations (that is, on a logarithm-logarithm scale) than by linear correlations. Slip and rise-time are positively correlated, while these two parameters do not correlate with acceleration time, peak slip velocity, or rupture velocity. On the other hand, peak slip velocity correlates positively with rupture velocity but negatively with acceleration time. Acceleration time correlates negatively with rupture velocity. However, the observed correlations could be due to the weak heterogeneity of the slip distributions given by the dynamic models. Therefore, the observed correlations may apply only to those parts of the rupture plane with weak slip heterogeneity if earthquake ruptures involve highly heterogeneous slip distributions. Our findings will help to improve pseudo-dynamic rupture generators for efficient broadband ground-motion simulations for seismic hazard studies.
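
    Assessing correlation on a logarithm-logarithm scale amounts to correlating the logarithms of the parameters, which captures power-law (non-linear) interdependence that a linear correlation coefficient understates. A minimal sketch with synthetic slip and rise-time values (the power-law exponent 0.5 and the noise level are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
# synthetic rupture parameters with a power-law (hence log-log linear)
# relation, e.g. rise_time ~ slip**0.5 with multiplicative scatter
slip = np.exp(rng.normal(0.0, 1.0, 2000))
rise_time = slip**0.5 * np.exp(rng.normal(0.0, 0.3, 2000))

r_linear = np.corrcoef(slip, rise_time)[0, 1]
r_loglog = np.corrcoef(np.log(slip), np.log(rise_time))[0, 1]
print(f"linear r = {r_linear:.2f}, log-log r = {r_loglog:.2f}")
# the log-log correlation captures the non-linear dependence better
```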

  4. Factorization for jet radius logarithms in jet mass spectra at the LHC

    DOE PAGES

    Kolodrubetz, Daniel W.; Pietrulewicz, Piotr; Stewart, Iain W.; ...

    2016-12-14

    To predict the jet mass spectrum at a hadron collider it is crucial to account for the resummation of logarithms between the transverse momentum of the jet and its invariant mass m_J. For small jet areas there are additional large logarithms of the jet radius R, which affect the convergence of the perturbative series. We present an analytic framework for exclusive jet production at the LHC which gives a complete description of the jet mass spectrum including realistic jet algorithms and jet vetoes. It factorizes the scales associated with m_J, R, and the jet veto, enabling in addition the systematic resummation of jet radius logarithms in the jet mass spectrum beyond leading logarithmic order. We discuss the factorization formulae for the peak and tail regions of the jet mass spectrum and for small and large R, the relations between the different regimes, and how to combine them. Regions of experimental interest which do not involve large nonglobal logarithms are classified. We also present universal results for nonperturbative effects and discuss various jet vetoes.

  5. The asymptotic form of non-global logarithms, black disc saturation, and gluonic deserts

    NASA Astrophysics Data System (ADS)

    Neill, Duff

    2017-01-01

    We develop an asymptotic perturbation theory for the large logarithmic behavior of the non-linear integro-differential equation describing the soft correlations of QCD jet measurements, the Banfi-Marchesini-Smye (BMS) equation. This equation captures the late-time evolution of radiating color dipoles after a hard collision. This allows us to prove that at large values of the control variable (the non-global logarithm, a function of the infra-red energy scales associated with distinct hard jets in an event), the distribution has a Gaussian tail. We compute the decay width analytically, giving a closed-form expression, and find it to be independent of jet geometry, up to the number of legs of the dipole in the active jet. Enabling the asymptotic expansion is the correct perturbative seed, where we perturb around an ansatz encoding formally no real emissions, an intuition motivated by the buffer region found in jet dynamics. This must be supplemented with the correct application of the BFKL approximation to the BMS equation in collinear limits. Comparing to the asymptotics of the conformally related evolution equation encountered in small-x physics, the Balitsky-Kovchegov (BK) equation, we find that the asymptotic form of the non-global logarithms directly maps to the black-disc unitarity limit of the BK equation, despite the contrasting physical pictures. Indeed, we recover the equations of saturation physics in the final-state dynamics of QCD.

  6. Evaluation of empirical process design relationships for ozone disinfection of water and wastewater

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finch, G.R.; Smith, D.W.

    A research program was undertaken to examine the dose-response of Escherichia coli ATCC 11775 in ozone demand-free phosphate buffer solution and in a high-quality secondary wastewater effluent with a total organic carbon content of 8 mg/L and a chemical oxygen demand of 26 mg/L. The studies were conducted in bench-scale batch reactors for both water types. In addition, studies using secondary effluent also were conducted in a pilot-scale, semi-batch reactor to evaluate scale-up effects. It was found that the ozone dose was the most important design parameter in both types of water. Contact time was of some importance in the ozone demand-free water and had no detectable effect in the secondary effluent. Pilot-scale data confirmed the results obtained at bench scale for the secondary effluent. Regression analysis of the logarithm of the E. coli response on the logarithm of the utilized ozone dose revealed that there was lack of fit using the model form which has been used frequently for the design of wastewater disinfection systems. This occurred as a result of a marked tailing effect of the log-log plot as the ozone dose and the kill increased. It was postulated that this was caused by some unknown physiological differences within the E. coli population due to age or another factor.

  7. Simple scale interpolator facilitates reading of graphs

    NASA Technical Reports Server (NTRS)

    Fazio, A.; Henry, B.; Hood, D.

    1966-01-01

    Set of cards with scale divisions and a scale finder permits accurate reading of the coordinates of points on linear or logarithmic graphs plotted on rectangular grids. The set contains 34 different scales for linear plotting and 28 single cycle scales for log plots.

  8. Scaling in the vicinity of the four-state Potts fixed point

    NASA Astrophysics Data System (ADS)

    Blöte, H. W. J.; Guo, Wenan; Nightingale, M. P.

    2017-08-01

    We study a self-dual generalization of the Baxter-Wu model, employing results obtained by transfer matrix calculations of the magnetic scaling dimension and the free energy. While the pure critical Baxter-Wu model displays the critical behavior of the four-state Potts fixed point in two dimensions, in the sense that logarithmic corrections are absent, the introduction of different couplings in the up- and down triangles moves the model away from this fixed point, so that logarithmic corrections appear. Real couplings move the model into the first-order range, away from the behavior displayed by the nearest-neighbor, four-state Potts model. We also use complex couplings, which bring the model in the opposite direction characterized by the same type of logarithmic corrections as present in the four-state Potts model. Our finite-size analysis confirms in detail the existing renormalization theory describing the immediate vicinity of the four-state Potts fixed point.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolodrubetz, Daniel W.; Pietrulewicz, Piotr; Stewart, Iain W.

    To predict the jet mass spectrum at a hadron collider it is crucial to account for the resummation of logarithms between the transverse momentum of the jet and its invariant mass m J . For small jet areas there are additional large logarithms of the jet radius R, which affect the convergence of the perturbative series. We present an analytic framework for exclusive jet production at the LHC which gives a complete description of the jet mass spectrum including realistic jet algorithms and jet vetoes. It factorizes the scales associated with m J , R, and the jet veto, enabling in addition the systematic resummation of jet radius logarithms in the jet mass spectrum beyond leading logarithmic order. We discuss the factorization formulae for the peak and tail region of the jet mass spectrum and for small and large R, and the relations between the different regimes and how to combine them. Regions of experimental interest are classified which do not involve large nonglobal logarithms. We also present universal results for nonperturbative effects and discuss various jet vetoes.

  10. The length and time scales of water's glass transitions

    NASA Astrophysics Data System (ADS)

    Limmer, David T.

    2014-06-01

    Using a general model for the equilibrium dynamics of supercooled liquids, I compute from molecular properties the emergent length and time scales that govern the nonequilibrium relaxation behavior of amorphous ice prepared by rapid cooling. Upon cooling, the liquid water falls out of equilibrium whereby the temperature dependence of its relaxation time is predicted to change from super-Arrhenius to Arrhenius. A consequence of this crossover is that the location of the apparent glass transition temperature depends logarithmically on cooling rate. Accompanying vitrification is the emergence of a dynamical length-scale, the size of which depends on the cooling rate and varies between angstroms and tens of nanometers. While this protocol dependence clarifies a number of previous experimental observations for amorphous ice, the arguments are general and can be extended to other glass forming liquids.
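    The logarithmic cooling-rate dependence of the apparent glass transition temperature follows from a simple falling-out-of-equilibrium argument. The sketch below is a hypothetical illustration, not the paper's model: the parameter values E, tau0, and c are invented, and Arrhenius relaxation tau(T) = tau0*exp(E/T) is assumed. The liquid vitrifies roughly when tau(Tg) equals the time available at the current temperature, c/rate, so 1/Tg is linear in the logarithm of the cooling rate.

```python
import math

def glass_transition_temp(cooling_rate, E=5000.0, tau0=1e-12, c=100.0):
    """Hypothetical illustration: solving tau0*exp(E/Tg) = c/cooling_rate
    gives Tg = E / ln(c / (cooling_rate * tau0)), i.e. the apparent glass
    transition temperature depends logarithmically on the cooling rate."""
    return E / math.log(c / (cooling_rate * tau0))
```

    Faster cooling freezes the liquid in at a higher temperature, and successive decade increases in the rate shift 1/Tg by the same amount, ln(10)/E.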

  11. The length and time scales of water's glass transitions.

    PubMed

    Limmer, David T

    2014-06-07

    Using a general model for the equilibrium dynamics of supercooled liquids, I compute from molecular properties the emergent length and time scales that govern the nonequilibrium relaxation behavior of amorphous ice prepared by rapid cooling. Upon cooling, the liquid water falls out of equilibrium whereby the temperature dependence of its relaxation time is predicted to change from super-Arrhenius to Arrhenius. A consequence of this crossover is that the location of the apparent glass transition temperature depends logarithmically on cooling rate. Accompanying vitrification is the emergence of a dynamical length-scale, the size of which depends on the cooling rate and varies between angstroms and tens of nanometers. While this protocol dependence clarifies a number of previous experimental observations for amorphous ice, the arguments are general and can be extended to other glass forming liquids.

  12. Entanglement entropy of a three-spin-interacting spin chain with a time-reversal-breaking impurity at one boundary.

    PubMed

    Nag, Tanay; Rajak, Atanu

    2018-04-01

    We investigate the effect of a time-reversal-breaking impurity term (of strength λ_{d}) on both the equilibrium and nonequilibrium critical properties of entanglement entropy (EE) in a three-spin-interacting transverse Ising model, which can be mapped to a p-wave superconducting chain with next-nearest-neighbor hopping and interaction. Importantly, we find that the logarithmic scaling of the EE with block size remains unaffected by the application of the impurity term, although the coefficient (i.e., central charge) varies logarithmically with the impurity strength for a lower range of λ_{d} and eventually saturates with an exponential damping factor [∼exp(-λ_{d})] for the phase boundaries shared with the phase containing two Majorana edge modes. On the other hand, it receives a linear correction in terms of λ_{d} for another phase boundary. Finally, we focus on the effect of the impurity on the time evolution of the EE for the critical quenching case, where the impurity term is applied only to the final Hamiltonian. Interestingly, we show that for all the phase boundaries, contrary to the equilibrium case, the saturation value of the EE increases logarithmically with the strength of the impurity in a certain regime of λ_{d}; for higher values of λ_{d}, it increases very slowly, dictated by an exponential damping factor. The impurity-induced behavior of the EE might bear some deep underlying connection to thermalization.

  13. Entanglement entropy of a three-spin-interacting spin chain with a time-reversal-breaking impurity at one boundary

    NASA Astrophysics Data System (ADS)

    Nag, Tanay; Rajak, Atanu

    2018-04-01

    We investigate the effect of a time-reversal-breaking impurity term (of strength λd) on both the equilibrium and nonequilibrium critical properties of entanglement entropy (EE) in a three-spin-interacting transverse Ising model, which can be mapped to a p-wave superconducting chain with next-nearest-neighbor hopping and interaction. Importantly, we find that the logarithmic scaling of the EE with block size remains unaffected by the application of the impurity term, although the coefficient (i.e., central charge) varies logarithmically with the impurity strength for a lower range of λd and eventually saturates with an exponential damping factor [˜exp(-λd)] for the phase boundaries shared with the phase containing two Majorana edge modes. On the other hand, it receives a linear correction in terms of λd for another phase boundary. Finally, we focus on the effect of the impurity on the time evolution of the EE for the critical quenching case, where the impurity term is applied only to the final Hamiltonian. Interestingly, we show that for all the phase boundaries, contrary to the equilibrium case, the saturation value of the EE increases logarithmically with the strength of the impurity in a certain regime of λd; for higher values of λd, it increases very slowly, dictated by an exponential damping factor. The impurity-induced behavior of the EE might bear some deep underlying connection to thermalization.

  14. Fragmentation functions beyond fixed order accuracy

    NASA Astrophysics Data System (ADS)

    Anderle, Daniele P.; Kaufmann, Tom; Stratmann, Marco; Ringer, Felix

    2017-03-01

    We give a detailed account of the phenomenology of all-order resummations of logarithmically enhanced contributions at small momentum fraction of the observed hadron in semi-inclusive electron-positron annihilation and the timelike scale evolution of parton-to-hadron fragmentation functions. The formalism to perform resummations in Mellin moment space is briefly reviewed, and all relevant expressions up to next-to-next-to-leading logarithmic order are derived, including their explicit dependence on the factorization and renormalization scales. We discuss the details pertinent to a proper numerical implementation of the resummed results comprising an iterative solution to the timelike evolution equations, the matching to known fixed-order expressions, and the choice of the contour in the Mellin inverse transformation. First extractions of parton-to-pion fragmentation functions from semi-inclusive annihilation data are performed at different logarithmic orders of the resummations in order to estimate their phenomenological relevance. To this end, we compare our results to corresponding fits up to fixed, next-to-next-to-leading order accuracy and study the residual dependence on the factorization scale in each case.

  15. Streamflow record extension using power transformations and application to sediment transport

    NASA Astrophysics Data System (ADS)

    Moog, Douglas B.; Whiting, Peter J.; Thomas, Robert B.

    1999-01-01

    To obtain a representative set of flow rates for a stream, it is often desirable to fill in missing data or extend measurements to a longer time period by correlation to a nearby gage with a longer record. Linear least squares regression of the logarithms of the flows is a traditional and still common technique. However, its purpose is to generate optimal estimates of each day's discharge, rather than the population of discharges, for which it tends to underestimate variance. Maintenance-of-variance-extension (MOVE) equations [Hirsch, 1982] were developed to correct this bias. This study replaces the logarithmic transformation by the more general Box-Cox scaled power transformation, generating a more linear, constant-variance relationship for the MOVE extension. Combining the Box-Cox transformation with the MOVE extension is shown to improve accuracy in estimating order statistics of flow rate, particularly for the nonextreme discharges which generally govern cumulative transport over time. This advantage is illustrated by prediction of cumulative fractions of total bed load transport.
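    The combination described above can be sketched as follows. This is an illustrative implementation, not the authors' code: MOVE.1 is the simplest maintenance-of-variance-extension form from Hirsch (1982), and the Box-Cox λ is a free parameter chosen to linearize the relation between the two gages.

```python
import numpy as np

def boxcox(q, lam):
    """Box-Cox scaled power transform; lam -> 0 recovers the logarithm."""
    q = np.asarray(q, dtype=float)
    return np.log(q) if lam == 0 else (q**lam - 1.0) / lam

def move1_extend(x_con, y_con, x_extra):
    """MOVE.1 record extension: estimate the short-record series from the
    long-record series so that the mean and variance of the estimated
    population -- not just each day's value -- are preserved, which
    ordinary least-squares regression fails to do."""
    mx, my = x_con.mean(), y_con.mean()
    sx, sy = x_con.std(ddof=1), y_con.std(ddof=1)
    sign = np.sign(np.corrcoef(x_con, y_con)[0, 1])
    return my + sign * (sy / sx) * (x_extra - mx)
```

    In the combined approach, both flow series are first transformed with boxcox (choosing λ for a linear, constant-variance relationship), MOVE.1 is applied in transformed space, and the result is back-transformed; λ = 0 reproduces the traditional logarithmic case.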

  16. Performance of the lysozyme for promoting the waste activated sludge biodegradability.

    PubMed

    He, Jun-Guo; Xin, Xiao-Dong; Qiu, Wei; Zhang, Jie; Wen, Zhi-Dan; Tang, Jian

    2014-10-01

    The fresh waste activated sludge (WAS) from a lab-scale sequencing batch reactor was used to determine the performance of the lysozyme for promoting its biodegradability. The results showed that a strict linear relationship held between the degree of disintegration (DDM) of WAS and the lysozyme incubation time from 0 to 240 min (R(2) was 0.992, 0.995 and 0.999 for the corresponding lysozyme/TS ratios, respectively). The ratio of net SCOD increase, used to evaluate changes in sludge biodegradability, was augmented significantly by lysozyme digestion. Moreover, protein dominated both the EPS and the SMP. In addition, the logarithm of the SMP content in the supernatant increased with the lysozyme incubation time from 0 to 240 min (R(2) was 0.960, 0.959 and 0.947, respectively). The SMP, especially the soluble protein, contributed substantially to the improvement of WAS biodegradability.

  17. Endogenous and exogenous dynamics in the fluctuations of capital fluxes. An empirical analysis of the Chinese stock market

    NASA Astrophysics Data System (ADS)

    Jiang, Z.-Q.; Guo, L.; Zhou, W.-X.

    2007-06-01

    A phenomenological investigation of the endogenous and exogenous dynamics in the fluctuations of capital fluxes is carried out on the Chinese stock market using mean-variance analysis, fluctuation analysis, and their generalizations to higher orders. Non-universal dynamics have been found not only in the scaling exponent α, which differs from the universal values 1/2 and 1, but also in the distributions of the ratio η = σ_exo/σ_endo of individual stocks. The scaling exponent α of the fluctuations and the Hurst exponent H_i increase logarithmically with the time scale Δt and with the mean traded value per minute, respectively. We find that the scaling exponent α_endo of the endogenous fluctuations is independent of the time scale. Multiscaling and multifractal features are observed in the data as well. However, the inhomogeneous impact model is not verified.

  18. Retrieving pace in vegetation growth using precipitation and soil moisture

    NASA Astrophysics Data System (ADS)

    Sohoulande Djebou, D. C.; Singh, V. P.

    2013-12-01

    The complexity of interactions between the biophysical components of a watershed makes understanding the water budget challenging, so a clear picture of the functioning of the soil-vegetation-atmosphere continuum remains crucial. This study targeted the Texas Gulf watershed and evaluated the behavior of vegetation covers by coupling precipitation and soil moisture patterns. Growing-season Normalized Difference Vegetation Index (NDVI) series for deciduous forest and grassland were used over a 23 year period, together with precipitation and soil moisture data. The role of time scales in vegetation dynamics analysis was appraised using both entropy rescaling and correlation analysis. The results indicate that soil moisture at 5 cm and 25 cm is potentially more efficient than precipitation for monitoring vegetation dynamics at finer time scales. Although the 5 cm and 25 cm soil moisture series are highly correlated (R2 > 0.64), the 5 cm series better explains the variability of vegetation growth. A logarithmic transformation of the soil moisture and precipitation data increased the correlation with NDVI at the different time scales considered. At a monthly time scale we obtained a relationship between the vegetation index and the soil moisture/precipitation pair [NDVI = a*Log(% soil moisture) + b*Log(Precipitation) + c], with R2 > 0.25 for each vegetation type. Further, we propose to assess vegetation green-up using a logistic regression model and transinformation entropy, with soil moisture and precipitation as independent variables and vegetation growth metrics (NDVI, NDVI ratio, NDVI slope) as the dependent variable. The study is ongoing, and the results will contribute to knowledge in large-scale vegetation monitoring.
Keywords: precipitation, soil moisture, vegetation growth, entropy, time scale, logarithmic transformation, correlation of soil moisture and precipitation with NDVI. [Figure captions: the analysis combines data from scenes 7 and 8; schematic illustration of the two-dimensional transinformation entropy approach, where T(P,SM;VI) stands for the transinformation contained in the soil moisture (SM)/precipitation (P) pair that explains vegetation growth (VI).]
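    The monthly-scale relation NDVI = a*Log(% soil moisture) + b*Log(Precipitation) + c can be fitted by ordinary least squares. The sketch below is illustrative only, using synthetic inputs in place of the Texas Gulf series; the function and variable names are invented for the example.

```python
import numpy as np

def fit_loglog_ndvi(soil_moisture, precipitation, ndvi):
    """Least-squares fit of NDVI = a*log(SM) + b*log(P) + c, the
    monthly-scale relation reported in the study; returns (a, b, c)."""
    A = np.column_stack([np.log(soil_moisture),
                         np.log(precipitation),
                         np.ones_like(np.asarray(ndvi, dtype=float))])
    coeffs, *_ = np.linalg.lstsq(A, ndvi, rcond=None)
    return coeffs
```

    With the logarithmic transformation applied to both predictors, the fit is linear in the coefficients, which is exactly why the transformation raised the correlations in the study.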

  19. Time-Dependent Damage Investigation of Rock Mass in an In Situ Experimental Tunnel

    PubMed Central

    Jiang, Quan; Cui, Jie; Chen, Jing

    2012-01-01

    In underground tunnels and caverns, time-dependent deformation or failure of the rock mass, such as extending cracks and gradual rock falls, is a costly irritant and a major safety concern when the time-dependent damage of the surrounding rock is serious. To understand the damage evolution of rock mass in underground engineering, in situ experimental testing was carried out in a large underground tunnel 28.5 m wide, 21 m high and 352 m long. The time-dependent damage of the rock mass was tracked by successive ultrasonic wave tests after excavation. The results showed that the time-dependent damage could last a long time, nearly 30 days. Regression analysis of damage factors defined from wave velocity yielded a time-dependent damage evolution equation of logarithmic form. A damage viscoelastic-plastic model was developed to describe the time-dependent deterioration exposed by the field test, including convergence of the time-dependent damage, deterioration of the elastic modulus, and the logarithmic form of the damage factor. Furthermore, remedial measures for the damaged surrounding rock are discussed based on the measured results and the concept of damage compensation, providing new clues for underground engineering design.

  20. Numerical Simulation of Atmospheric Boundary Layer Flow Over Battlefield-scale Complex Terrain: Surface Fluxes From Resolved and Subgrid Scales

    DTIC Science & Technology

    2015-07-06

    Surface stresses due to the unresolved topography h′(x, y) are represented by the equilibrium logarithmic law: τ_{w,Δ13}/ρ = u_τ² (ũ/U) = −[κU / log(z/z₀)]² (ũ/U), where z₀ is a momentum roughness length of the subgrid-scale topography. The equilibrium logarithmic-law expression for the passive scalar fluxes, q̇′′, assumes neutral stratification (stability correction terms not needed).

  1. [Ophthalmologic reading charts : Part 2: Current logarithmically scaled reading charts].

    PubMed

    Radner, W

    2016-12-01

    To analyze currently available reading charts regarding print size, logarithmic print size progression, and the background of test-item standardization. For the present study, the following logarithmically scaled reading charts were investigated using a measuring microscope (iNexis VMA 2520; Nikon, Tokyo): Eschenbach, Zeiss, OCULUS, MNREAD (Minnesota Near Reading Test), Colenbrander, and RADNER. Calculations were made according to EN-ISO 8596 and the International Research Council recommendations. Modern reading charts and cards exhibit a logarithmic progression of print sizes. The RADNER reading charts comprise four different cards with standardized test items (sentence optotypes), a well-defined stop criterion, accurate letter sizes, and a high print quality. Numbers and Landolt rings are also given in the booklet. The OCULUS cards have currently been reissued according to recent standards and also exhibit a high print quality. In addition to letters, numbers, Landolt rings, and examples taken from a timetable and the telephone book, sheet music is also offered. The Colenbrander cards use short sentences of 44 characters, including spaces, and exhibit inaccuracy at smaller letter sizes, as do the MNREAD cards. The MNREAD cards use sentences of 60 characters, including spaces, and have a high print quality. Modern reading charts show that international standards can be achieved with test items similar to optotypes, by using recent technology and developing new concepts of test-item standardization. Accurate print sizes, high print quality, and a logarithmic progression should become the minimum requirements for reading charts and reading cards in ophthalmology.
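    A logarithmic print-size progression means each step scales the print size by the constant factor 10^0.1 ≈ 1.2589, as in logMAR visual-acuity notation. The snippet below is a generic illustration of such a progression; the base size is an arbitrary placeholder, not a value taken from any of the charts above.

```python
def logarithmic_sizes(base_size, steps):
    """Print sizes on a logarithmic (logMAR-style) progression: each
    step multiplies the size by 10**0.1, so sizes are equally spaced
    on a logarithmic axis."""
    return [base_size * 10 ** (0.1 * n) for n in range(steps)]
```

    Equal ratios between neighboring sizes are what make reading-speed results comparable across print sizes, which is why the modern charts adopt this progression.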

  2. Next-to-leading order Balitsky-Kovchegov equation with resummation

    DOE PAGES

    Lappi, T.; Mantysaari, H.

    2016-05-03

    Here, we solve the Balitsky-Kovchegov evolution equation at next-to-leading order accuracy including a resummation of large single and double transverse momentum logarithms to all orders. We numerically determine an optimal value for the constant under the large transverse momentum logarithm that enables including a maximal amount of the full NLO result in the resummation. When this value is used, the contribution from the α_s² terms without large logarithms is found to be small at large saturation scales and at small dipoles. Close to initial conditions relevant for phenomenological applications, these fixed-order corrections are shown to be numerically important.

  3. Dead-time compensation for a logarithmic display rate meter

    DOEpatents

    Larson, John A.; Krueger, Frederick P.

    1988-09-20

    An improved circuit is provided for application to a radiation survey meter that uses a detector that is subject to dead time. The circuit compensates for dead time over a wide range of count rates by producing a dead-time pulse for each detected event, a live-time pulse that spans the interval between dead-time pulses, and circuits that average the value of these pulses over time. The logarithm of each of these values is obtained and the logarithms are subtracted to provide a signal that is proportional to a count rate that is corrected for the effects of dead time. The circuit produces a meter indication and is also capable of producing an audible indication of detected events.
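    The log-subtraction scheme can be mimicked numerically. The sketch below is an illustrative analogue of the circuit, not the patented design, and assumes a non-paralyzable detector: the averaged dead-time duty cycle divided by the averaged live-time duty cycle equals true_rate × dead_time, so the difference of the two logarithms is, up to the constant log(dead_time), the logarithm of the dead-time-corrected count rate.

```python
import math

def corrected_log_rate(counts, elapsed, dead_time_per_event):
    """Numerical analogue of the logarithmic dead-time compensation:
    average the dead-time pulses (duty cycle spent dead) and the
    live-time pulses (duty cycle spent live), take the logarithm of
    each average, and subtract.  For a non-paralyzable detector the
    result equals log(true_rate * dead_time_per_event)."""
    measured_rate = counts / elapsed
    dead_fraction = measured_rate * dead_time_per_event  # mean dead-time pulse value
    live_fraction = 1.0 - dead_fraction                  # mean live-time pulse value
    return math.log(dead_fraction) - math.log(live_fraction)
```

    Because the subtraction cancels the common normalization, the output stays proportional to the logarithm of the true event rate over a wide range of count rates, which is the property the survey-meter circuit exploits.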

  4. Dead-time compensation for a logarithmic display rate meter

    DOEpatents

    Larson, J.A.; Krueger, F.P.

    1987-10-05

    An improved circuit is provided for application to a radiation survey meter that uses a detector that is subject to dead time. The circuit compensates for dead time over a wide range of count rates by producing a dead-time pulse for each detected event, a live-time pulse that spans the interval between dead-time pulses, and circuits that average the value of these pulses over time. The logarithm of each of these values is obtained and the logarithms are subtracted to provide a signal that is proportional to a count rate that is corrected for the effects of dead time. The circuit produces a meter indication and is also capable of producing an audible indication of detected events. 5 figs.

  5. The asymptotic form of non-global logarithms, black disc saturation, and gluonic deserts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neill, Duff

    Here, we develop an asymptotic perturbation theory for the large logarithmic behavior of the non-linear integro-differential equation describing the soft correlations of QCD jet measurements, the Banfi-Marchesini-Smye (BMS) equation. This equation captures the late-time evolution of radiating color dipoles after a hard collision. This allows us to prove that at large values of the control variable (the non-global logarithm, a function of the infrared energy scales associated with distinct hard jets in an event), the distribution has a Gaussian tail. We also compute the decay width analytically, giving a closed-form expression, and find it to be jet-geometry independent, up to the number of legs of the dipole in the active jet. Enabling the asymptotic expansion is the correct perturbative seed: we perturb around an ansatz encoding formally no real emissions, an intuition motivated by the buffer region found in jet dynamics. This must be supplemented with the correct application of the BFKL approximation to the BMS equation in collinear limits. Comparing to the asymptotics of the conformally related evolution equation encountered in small-x physics, the Balitsky-Kovchegov (BK) equation, we find that the asymptotic form of the non-global logarithms directly maps to the black-disc unitarity limit of the BK equation, despite the contrasting physical pictures. Indeed, we recover the equations of saturation physics in the final state dynamics of QCD.

  6. The asymptotic form of non-global logarithms, black disc saturation, and gluonic deserts

    DOE PAGES

    Neill, Duff

    2017-01-25

    Here, we develop an asymptotic perturbation theory for the large logarithmic behavior of the non-linear integro-differential equation describing the soft correlations of QCD jet measurements, the Banfi-Marchesini-Smye (BMS) equation. This equation captures the late-time evolution of radiating color dipoles after a hard collision. This allows us to prove that at large values of the control variable (the non-global logarithm, a function of the infrared energy scales associated with distinct hard jets in an event), the distribution has a Gaussian tail. We also compute the decay width analytically, giving a closed-form expression, and find it to be jet-geometry independent, up to the number of legs of the dipole in the active jet. Enabling the asymptotic expansion is the correct perturbative seed: we perturb around an ansatz encoding formally no real emissions, an intuition motivated by the buffer region found in jet dynamics. This must be supplemented with the correct application of the BFKL approximation to the BMS equation in collinear limits. Comparing to the asymptotics of the conformally related evolution equation encountered in small-x physics, the Balitsky-Kovchegov (BK) equation, we find that the asymptotic form of the non-global logarithms directly maps to the black-disc unitarity limit of the BK equation, despite the contrasting physical pictures. Indeed, we recover the equations of saturation physics in the final state dynamics of QCD.

  7. Transition to the Ultimate Regime in Two-Dimensional Rayleigh-Bénard Convection

    NASA Astrophysics Data System (ADS)

    Zhu, Xiaojue; Mathai, Varghese; Stevens, Richard J. A. M.; Verzicco, Roberto; Lohse, Detlef

    2018-04-01

    The possible transition to the so-called ultimate regime, wherein both the bulk and the boundary layers are turbulent, has been an outstanding issue in thermal convection since the seminal work by Kraichnan [Phys. Fluids 5, 1374 (1962), 10.1063/1.1706533]. Yet, when this transition takes place and how the local flow induces it is not fully understood. Here, by performing two-dimensional simulations of Rayleigh-Bénard turbulence covering six decades in Rayleigh number Ra, up to 10^14, for Prandtl number Pr = 1, for the first time in numerical simulations we find the transition to the ultimate regime, namely at Ra* = 10^13. We reveal how the emission of thermal plumes enhances the global heat transport, leading to a steeper increase of the Nusselt number than the classical Malkus scaling Nu ~ Ra^(1/3) [Proc. R. Soc. A 225, 196 (1954), 10.1098/rspa.1954.0197]. Beyond the transition, the mean velocity profiles are logarithmic throughout, indicating turbulent boundary layers. In contrast, the temperature profiles are only locally logarithmic, namely within the regions where plumes are emitted, and where the local Nusselt number has an effective scaling Nu ~ Ra^(0.38), corresponding to the effective scaling in the ultimate regime.

  8. A first determination of the unpolarized quark TMDs from a global analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bacchetta, Alessandro; Delcarro, Filippo; Pisano, Cristian

    Transverse momentum dependent distribution and fragmentation functions of unpolarized quarks inside unpolarized protons are extracted, for the first time, through a simultaneous analysis of semi-inclusive deep-inelastic scattering, Drell-Yan and Z boson hadroproduction processes. This study is performed at leading order in perturbative QCD, with energy scale evolution at the next-to-leading logarithmic accuracy. Moreover, some specific choices are made to deal with low scale evolution around 1 GeV^2. Since only data in the low transverse momentum region are considered, no matching to fixed-order calculations at high transverse momentum is needed.

  9. Entanglement between random and clean quantum spin chains

    NASA Astrophysics Data System (ADS)

    Juhász, Róbert; Kovács, István A.; Roósz, Gergő; Iglói, Ferenc

    2017-08-01

    The entanglement entropy in clean as well as in random quantum spin chains has a logarithmic size dependence at the critical point. Here, we study the entanglement of composite systems that consist of a clean subsystem and a random subsystem, both being critical. In the composite antiferromagnetic XX-chain with a sharp interface, the entropy is found to grow in a double-logarithmic fashion, S ~ ln ln(L), where L is the length of the chain. We have also considered an extended defect at the interface, where the disorder penetrates into the homogeneous region in such a way that the strength of disorder decays with the distance l from the contact point as ~ l^(-κ). For κ < 1/2, the entropy scales as S(κ) ≃ [(1 − 2κ) ln 2 / 6] ln L, while for κ ≥ 1/2, when the extended interface defect is an irrelevant perturbation, we recover the double-logarithmic scaling. These results are explained through strong-disorder RG arguments.

  10. Blue spectra of Kalb-Ramond axions and fully anisotropic string cosmologies

    NASA Astrophysics Data System (ADS)

    Giovannini, Massimo

    1999-03-01

    The inhomogeneities associated with massless Kalb-Ramond axions can be amplified not only in isotropic (four-dimensional) string cosmological models but also in the fully anisotropic case. If the background geometry is isotropic, the axions (which are not part of the homogeneous background) develop outside the horizon, the growing modes leading, ultimately, to logarithmic energy spectra which are "red" in frequency and increase at large distance scales. We show that this conclusion can be avoided not only in the case of higher dimensional backgrounds with contracting internal dimensions but also in the case of string cosmological scenarios which are completely anisotropic in four dimensions. In this case the logarithmic energy spectra turn out to be "blue" in frequency and, consequently, decreasing at large distance scales. We elaborate on anisotropic dilaton-driven models and we argue that, incidentally, the background models leading to blue (or flat) logarithmic energy spectra for axionic fluctuations are likely to be isotropized by the effect of string tension corrections.

  11. NMR permeability estimators in 'chalk' carbonate rocks obtained under different relaxation times and MICP size scalings

    NASA Astrophysics Data System (ADS)

    Rios, Edmilson Helton; Figueiredo, Irineu; Moss, Adam Keith; Pritchard, Timothy Neil; Glassborow, Brent Anthony; Guedes Domingues, Ana Beatriz; Bagueira de Vasconcellos Azeredo, Rodrigo

    2016-07-01

    The effect of the selection of different nuclear magnetic resonance (NMR) relaxation times for permeability estimation is investigated for a set of fully brine-saturated rocks acquired from Cretaceous carbonate reservoirs in the North Sea and Middle East. Estimators that are obtained from the relaxation times based on the Pythagorean means are compared with estimators that are obtained from the relaxation times based on the concept of a cumulative saturation cut-off. Select portions of the longitudinal (T1) and transverse (T2) relaxation-time distributions are systematically evaluated by applying various cut-offs, analogous to the Winland-Pittman approach for mercury injection capillary pressure (MICP) curves. Finally, different approaches to matching the NMR and MICP distributions using different mean-based scaling factors are validated based on the performance of the related size-scaled estimators. The good results that were obtained demonstrate possible alternatives to the commonly adopted logarithmic mean estimator and reinforce the importance of NMR-MICP integration to improving carbonate permeability estimates.

  12. Entanglement dynamics following a sudden quench: An exact solution

    NASA Astrophysics Data System (ADS)

    Ghosh, Supriyo; Gupta, Kumar S.; Srivastava, Shashi C. L.

    2017-12-01

    We present an exact and fully analytical treatment of the entanglement dynamics for an isolated system of N coupled oscillators following a sudden quench of the system parameters. The system is analyzed using the solutions of the time-dependent Schrödinger equation, which are obtained by solving the corresponding nonlinear Ermakov equations. The entanglement entropies exhibit a multi-oscillatory behaviour, where the number of dynamically generated time scales increases with N. The harmonic chains exhibit entanglement revival and for larger values of N (> 10), we find near-critical logarithmic scaling for the entanglement entropy, which is modulated by a time-dependent factor. The N = 2 case is equivalent to the two-site Bose-Hubbard model in the tunneling regime, which is amenable to empirical realization in cold-atom systems.

  13. Can power-law scaling and neuronal avalanches arise from stochastic dynamics?

    PubMed

    Touboul, Jonathan; Destexhe, Alain

    2010-02-11

    The presence of self-organized criticality in biology is often evidenced by a power-law scaling of event size distributions, which can be measured by linear regression on logarithmic axes. We show here that such a procedure does not necessarily mean that the system exhibits self-organized criticality. We first provide an analysis of multisite local field potential (LFP) recordings of brain activity and show that event size distributions defined as negative LFP peaks can be close to power-law distributions. However, this result is not robust to change in detection threshold, or when tested using more rigorous statistical analyses such as the Kolmogorov-Smirnov test. Similar power-law scaling is observed for surrogate signals, suggesting that power-law scaling may be a generic property of thresholded stochastic processes. We next investigate this problem analytically, and show that, indeed, stochastic processes can produce spurious power-law scaling without the presence of underlying self-organized criticality. However, this power-law is only apparent in logarithmic representations, and does not survive more rigorous analysis such as the Kolmogorov-Smirnov test. The same analysis was also performed on an artificial network known to display self-organized criticality. In this case, both the graphical representations and the rigorous statistical analysis reveal with no ambiguity that the avalanche size is distributed as a power-law. We conclude that logarithmic representations can lead to spurious power-law scaling induced by the stochastic nature of the phenomenon. This apparent power-law scaling does not constitute a proof of self-organized criticality, which should be demonstrated by more stringent statistical tests.
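The pitfall described in this abstract is easy to reproduce with synthetic data. The sketch below (an illustration only, not the authors' LFP analysis) draws a log-normal sample, which has no underlying power law, fits a line to its histogram on logarithmic axes to obtain a high R², and then shows that a Kolmogorov-Smirnov test against the best-fit Pareto (power-law) distribution firmly rejects it:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# A log-normal sample: no criticality and no true power law, yet it traces
# a nearly straight line on log-log axes over a couple of decades.
x = rng.lognormal(mean=0.0, sigma=2.0, size=500_000)
xs = x[x >= 1.0]  # "events" above a detection threshold

# Step 1: linear regression on logarithmic axes looks like a power law.
counts, edges = np.histogram(xs, bins=np.logspace(0, 2.5, 25))
centers = np.sqrt(edges[:-1] * edges[1:])
keep = counts > 0
slope, intercept, r, *_ = stats.linregress(np.log10(centers[keep]),
                                           np.log10(counts[keep]))
print(f"log-log regression R^2 = {r**2:.3f}")  # high despite no power law

# Step 2: the more rigorous Kolmogorov-Smirnov test against the best-fit
# Pareto (power-law) distribution rejects it decisively.
b, loc, scale = stats.pareto.fit(xs, floc=0)
ks_stat, pvalue = stats.kstest(xs, 'pareto', args=(b, loc, scale))
print(f"KS p-value = {pvalue:.2e}")  # effectively zero
```

This mirrors the paper's point: the straight line on logarithmic axes is an artifact of the representation, and only the distributional test distinguishes apparent from genuine power-law scaling.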

  14. Multiscaling properties of coastal waters particle size distribution from LISST in situ measurements

    NASA Astrophysics Data System (ADS)

    Pannimpullath Remanan, R.; Schmitt, F. G.; Loisel, H.; Mériaux, X.

    2013-12-01

    A Eulerian high-frequency sampling of particle size distribution (PSD) is performed at 1 Hz during 5 tidal cycles (65 hours) in a coastal environment of the eastern English Channel. The particle data are recorded using a LISST-100x type C (Laser In Situ Scattering and Transmissometry, Sequoia Scientific), which records volume concentrations of particles with diameters ranging from 2.5 to 500 μm in 32 logarithmically spaced size classes. This enables the estimation at each time step (every second) of the probability density function of particle sizes. At every time step, the pdf of the PSD is hyperbolic, so a time series of PSD slopes can be estimated. Power spectral analysis shows that the mean diameter of the suspended particles displays scaling at high frequencies (time scales from 1 s to 1000 s). The scaling properties of particle sizes are studied by computing the moment function from the pdf of the size distribution. Moment functions at many different time scales (from 1 s to 1000 s) are computed and their scaling properties considered. The Shannon entropy at each time scale is also estimated and related to the other parameters. The multiscaling properties of the turbidity (the coefficient cp computed from the LISST) are also considered over the same time scales, using Empirical Mode Decomposition.

  15. Entropy production of doubly stochastic quantum channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Müller-Hermes, Alexander, E-mail: muellerh@posteo.net; Department of Mathematical Sciences, University of Copenhagen, 2100 Copenhagen; Stilck França, Daniel, E-mail: dsfranca@mytum.de

    2016-02-15

    We study the entropy increase of quantum systems evolving under primitive, doubly stochastic Markovian noise and thus converging to the maximally mixed state. This entropy increase can be quantified by a logarithmic-Sobolev constant of the Liouvillian generating the noise. We prove a universal lower bound on this constant that stays invariant under taking tensor-powers. Our methods involve a new comparison method to relate logarithmic-Sobolev constants of different Liouvillians and a technique to compute logarithmic-Sobolev inequalities of Liouvillians with eigenvectors forming a projective representation of a finite abelian group. Our bounds improve upon similar results established before and as an application we prove an upper bound on continuous-time quantum capacities. In the last part of this work we study entropy production estimates of discrete-time doubly stochastic quantum channels by extending the framework of discrete-time logarithmic-Sobolev inequalities to the quantum case.

  16. Resumming double non-global logarithms in the evolution of a jet

    NASA Astrophysics Data System (ADS)

    Hatta, Y.; Iancu, E.; Mueller, A. H.; Triantafyllopoulos, D. N.

    2018-02-01

    We consider the Banfi-Marchesini-Smye (BMS) equation which resums `non-global' energy logarithms in the QCD evolution of the energy lost by a pair of jets via soft radiation at large angles. We identify a new physical regime where, besides the energy logarithms, one has to also resum (anti)collinear logarithms. Such a regime occurs when the jets are highly collimated (boosted) and the relative angles between successive soft gluon emissions are strongly increasing. These anti-collinear emissions can violate the correct time-ordering for time-like cascades and result in large radiative corrections enhanced by double collinear logs, making the BMS evolution unstable beyond leading order. We isolate the first such correction in a recent calculation of the BMS equation to next-to-leading order by Caron-Huot. To overcome this difficulty, we construct a `collinearly-improved' version of the leading-order BMS equation which resums the double collinear logarithms to all orders. Our construction is inspired by a recent treatment of the Balitsky-Kovchegov (BK) equation for the high-energy evolution of a space-like wavefunction, where similar time-ordering issues occur. We show that the conformal mapping relating the leading-order BMS and BK equations correctly predicts the physical time-ordering, but it fails to predict the detailed structure of the collinear improvement.

  17. Monte Carlo Study of Four-Dimensional Self-avoiding Walks of up to One Billion Steps

    NASA Astrophysics Data System (ADS)

    Clisby, Nathan

    2018-04-01

    We study self-avoiding walks on the four-dimensional hypercubic lattice via Monte Carlo simulations of walks with up to one billion steps. We study the expected logarithmic corrections to scaling, and find convincing evidence in support of the scaling form predicted by the renormalization group, with an estimate for the power of the logarithmic factor of 0.2516(14), which is consistent with the predicted value of 1/4. We also characterize the behaviour of the pivot algorithm for sampling four-dimensional self-avoiding walks, and conjecture that the probability of a pivot move being successful for an N-step walk is O([ log N ]^{-1/4}).

  18. Power-law scaling of extreme dynamics near higher-order exceptional points

    NASA Astrophysics Data System (ADS)

    Zhong, Q.; Christodoulides, D. N.; Khajavikhan, M.; Makris, K. G.; El-Ganainy, R.

    2018-02-01

    We investigate the extreme dynamics of non-Hermitian systems near higher-order exceptional points in photonic networks constructed using the bosonic algebra method. We show that strong power oscillations for certain initial conditions can occur as a result of the peculiar eigenspace geometry and its dimensionality collapse near these singularities. By using complementary numerical and analytical approaches, we show that, in the parity-time (PT) phase near exceptional points, the logarithm of the maximum optical power amplification scales linearly with the order of the exceptional point. We focus in our discussion on photonic systems, but we note that our results apply to other physical systems as well.

  19. The energy distribution of subjets and the jet shape

    DOE PAGES

    Kang, Zhong-Bo; Ringer, Felix; Waalewijn, Wouter J.

    2017-07-13

    We present a framework that describes the energy distribution of subjets of radius r within a jet of radius R. We consider both an inclusive sample of subjets as well as subjets centered around a predetermined axis, from which the jet shape can be obtained. For r << R we factorize the physics at angular scales r and R to resum the logarithms of r/R. For central subjets, we consider both the standard jet axis and the winner-take-all axis, which involve double and single logarithms of r/R, respectively. All relevant one-loop matching coefficients are given, and an inconsistency in some previous results for cone jets is resolved. Our results for the standard jet shape differ from previous calculations at next-to-leading logarithmic order, because we account for the recoil of the standard jet axis due to soft radiation. Numerical results are presented for an inclusive subjet sample for pp → jet + X at next-to-leading order plus leading logarithmic order.

  1. Non-autonomous Hénon-Heiles systems

    NASA Astrophysics Data System (ADS)

    Hone, Andrew N. W.

    1998-07-01

    Scaling similarity solutions of three integrable PDEs, namely the Sawada-Kotera, fifth order KdV and Kaup-Kupershmidt equations, are considered. It is shown that the resulting ODEs may be written as non-autonomous Hamiltonian equations, which are time-dependent generalizations of the well-known integrable Hénon-Heiles systems. The (time-dependent) Hamiltonians are given by logarithmic derivatives of the tau-functions (inherited from the original PDEs). The ODEs for the similarity solutions also have inherited Bäcklund transformations, which may be used to generate sequences of rational solutions as well as other special solutions related to the first Painlevé transcendent.

  2. A non-local structural derivative model for characterization of ultraslow diffusion in dense colloids

    NASA Astrophysics Data System (ADS)

    Liang, Yingjie; Chen, Wen

    2018-03-01

    Ultraslow diffusion has been observed in numerous complicated systems. Its mean squared displacement (MSD) is not a power-law function of time, but instead a logarithmic function, and in some cases grows even more slowly than the logarithmic rate. The distributed-order fractional diffusion equation model simply does not work for general ultraslow diffusion. A recent study used the local structural derivative to describe ultraslow diffusion dynamics, with the inverse Mittag-Leffler function as the structural function, in which case the MSD is a function of the inverse Mittag-Leffler function. In this study, a new stretched logarithmic diffusion law and its underlying non-local structural derivative diffusion model are proposed to characterize the ultraslow diffusion in aging dense colloidal glass at both short and long waiting times. It is observed that the aging dynamics of dense colloids is a class of stretched logarithmic ultraslow diffusion processes. Compared with the power-law, logarithmic, and inverse Mittag-Leffler diffusion laws, the stretched logarithmic diffusion law fits the MSD of the colloidal particles at high densities more precisely. The corresponding non-local structural derivative diffusion equation manifests a clear physical mechanism, and its structural function is equivalent to the first-order derivative of the MSD.

  3. Signal-independent noise in intracortical brain-computer interfaces causes movement time properties inconsistent with Fitts' law.

    PubMed

    Willett, Francis R; Murphy, Brian A; Memberg, William D; Blabe, Christine H; Pandarinath, Chethan; Walter, Benjamin L; Sweet, Jennifer A; Miller, Jonathan P; Henderson, Jaimie M; Shenoy, Krishna V; Hochberg, Leigh R; Kirsch, Robert F; Ajiboye, A Bolu

    2017-04-01

    Do movements made with an intracortical BCI (iBCI) have the same movement time properties as able-bodied movements? Able-bodied movement times typically obey Fitts' law: [Formula: see text] (where MT is movement time, D is target distance, R is target radius, and [Formula: see text] are parameters). Fitts' law expresses two properties of natural movement that would be ideal for iBCIs to restore: (1) that movement times are insensitive to the absolute scale of the task (since movement time depends only on the ratio [Formula: see text]) and (2) that movements have a large dynamic range of accuracy (since movement time is logarithmically proportional to [Formula: see text]). Two participants in the BrainGate2 pilot clinical trial made cortically controlled cursor movements with a linear velocity decoder and acquired targets by dwelling on them. We investigated whether the movement times were well described by Fitts' law. We found that movement times were better described by the equation [Formula: see text], which captures how movement time increases sharply as the target radius becomes smaller, independently of distance. In contrast to able-bodied movements, the iBCI movements we studied had a low dynamic range of accuracy (absence of logarithmic proportionality) and were sensitive to the absolute scale of the task (small targets had long movement times regardless of the [Formula: see text] ratio). We argue that this relationship emerges due to noise in the decoder output whose magnitude is largely independent of the user's motor command (signal-independent noise). Signal-independent noise creates a baseline level of variability that cannot be decreased by trying to move slowly or hold still, making targets below a certain size very hard to acquire with a standard decoder. 
The results give new insight into how iBCI movements currently differ from able-bodied movements and suggest that restoring a Fitts' law-like relationship to iBCI movements may require non-linear decoding strategies.
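The scale invariance that standard Fitts' law implies, and that the iBCI movements described above lacked, can be seen in a short sketch. One common form of the law is MT = a + b·log2(D/R); the parameter values below are illustrative, not fitted to the study's data:

```python
import math

def fitts_mt(D, R, a=0.2, b=0.15):
    """One common form of Fitts' law: MT = a + b*log2(D/R).
    a, b are illustrative parameters, not taken from the study."""
    return a + b * math.log2(D / R)

# Scale invariance: doubling both distance and radius leaves MT unchanged,
# because MT depends on D and R only through the ratio D/R.
print(fitts_mt(10, 1), fitts_mt(20, 2))  # identical values

# Movement time grows only logarithmically as the task gets harder.
print(fitts_mt(20, 1) - fitts_mt(10, 1))
```

Under signal-independent decoder noise, by contrast, the abstract argues that movement time acquires a separate, sharp dependence on the absolute target radius, so rescaling D and R together no longer leaves MT unchanged.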

  4. Logarithmic singularities and quantum oscillations in magnetically doped topological insulators

    NASA Astrophysics Data System (ADS)

    Nandi, D.; Sodemann, Inti; Shain, K.; Lee, G. H.; Huang, K.-F.; Chang, Cui-Zu; Ou, Yunbo; Lee, S. P.; Ward, J.; Moodera, J. S.; Kim, P.; Yacoby, A.

    2018-02-01

    We report magnetotransport measurements on magnetically doped (Bi,Sb ) 2Te3 films grown by molecular beam epitaxy. In Hall bar devices, we observe logarithmic dependence of transport coefficients in temperature and bias voltage which can be understood to arise from electron-electron interaction corrections to the conductivity and self-heating. Submicron scale devices exhibit intriguing quantum oscillations at high magnetic fields with dependence on bias voltage. The observed quantum oscillations can be attributed to bulk and surface transport.

  5. Kinetics of drug release from ointments: Role of transient-boundary layer.

    PubMed

    Xu, Xiaoming; Al-Ghabeish, Manar; Krishnaiah, Yellela S R; Rahman, Ziyaur; Khan, Mansoor A

    2015-10-15

    In the current work, an in vitro release testing method suitable for ointment formulations was developed using acyclovir as a model drug. Release studies were carried out using enhancer cells on acyclovir ointments prepared with oleaginous, absorption, and water-soluble bases. The kinetics and mechanism of drug release were found to be highly dependent on the type of ointment base. In oleaginous bases, drug release followed a unique logarithmic-time dependent profile; in both absorption and water-soluble bases, drug release exhibited linearity with respect to the square root of time (Higuchi model), albeit with differences in the overall release profile. To help understand the underlying cause of the logarithmic-time dependency of drug release, a novel transient-boundary hypothesis was proposed, verified, and compared to Higuchi theory. Furthermore, the impact of drug solubility (under various pH conditions) and temperature on drug release was assessed. Additionally, conditions under which deviations from logarithmic-time drug release kinetics occur were determined using in situ UV fiber-optics. Overall, the results suggest that for oleaginous ointments containing dispersed drug particles, the kinetics and mechanism of drug release are controlled by expansion of the transient boundary layer, and drug release increases linearly with respect to logarithmic time. Published by Elsevier B.V.
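The two kinetic signatures mentioned above can be told apart by checking which time transform linearizes the release profile. The sketch below uses idealized profiles with made-up rate constants (not the acyclovir data) to show the diagnostic:

```python
import numpy as np

t = np.linspace(1.0, 360.0, 60)  # minutes

# Idealized cumulative-release profiles (illustrative rate constants only):
q_log = 12.0 * np.log(t)    # logarithmic-time kinetics (oleaginous base)
q_sqrt = 3.5 * np.sqrt(t)   # Higuchi square-root-of-time kinetics

def r2(x, y):
    """Squared Pearson correlation: 1.0 when y is exactly linear in x."""
    r = np.corrcoef(x, y)[0, 1]
    return r * r

# The log-time profile is perfectly linear on a log(t) axis but visibly
# curved on a sqrt(t) axis, and vice versa for the Higuchi profile.
print(r2(np.log(t), q_log), r2(np.sqrt(t), q_log))
```

In practice one would fit experimental release data against both transforms and compare the goodness of fit, as the study does when distinguishing the oleaginous base from the absorption and water-soluble bases.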

  6. Relationship between thermodynamic parameter and thermodynamic scaling parameter for orientational relaxation time for flip-flop motion of nematic liquid crystals.

    PubMed

    Satoh, Katsuhiko

    2013-03-07

    Thermodynamic parameter Γ and thermodynamic scaling parameter γ for low-frequency relaxation time, which characterize flip-flop motion in a nematic phase, were verified by molecular dynamics simulation with a simple potential based on the Maier-Saupe theory. The parameter Γ, which is the slope of the logarithm for temperature and volume, was evaluated under various conditions at a wide range of temperatures, pressures, and volumes. To simulate thermodynamic scaling so that experimental data at isobaric, isothermal, and isochoric conditions can be rescaled onto a master curve with the parameters for some liquid crystal (LC) compounds, the relaxation time was evaluated from the first-rank orientational correlation function in the simulations, and thermodynamic scaling was verified with the simple potential representing small clusters. A possibility of an equivalence relationship between Γ and γ determined from the relaxation time in the simulation was assessed with available data from the experiments and simulations. In addition, an argument was proposed for the discrepancy between Γ and γ for some LCs in experiments: the discrepancy arises from disagreement of the value of the order parameter P2 rather than the constancy of relaxation time τ1(*) on pressure.

  7. Robust Bioinformatics Recognition with VLSI Biochip Microsystem

    NASA Technical Reports Server (NTRS)

    Lue, Jaw-Chyng L.; Fang, Wai-Chi

    2006-01-01

    A microsystem architecture for real-time, on-site, robust bioinformatic pattern recognition and analysis has been proposed. This system is compatible with on-chip DNA analysis methods such as polymerase chain reaction (PCR) amplification. A corresponding novel artificial neural network (ANN) learning algorithm, using a new sigmoid-logarithmic transfer function based on the error backpropagation (EBP) algorithm, is introduced. Our results show that the trained new ANN can recognize low-fluorescence patterns better than a conventional sigmoidal ANN does. A differential logarithmic imaging chip is designed for calculating the logarithm of the relative intensities of fluorescence signals. The single-rail logarithmic circuit and a prototype ANN chip are designed, fabricated, and characterized.

  8. Fechner's law: where does the log transform come from?

    PubMed

    Laming, Donald

    2010-01-01

    This paper looks at Fechner's law in the light of 150 years of subsequent study. In combination with the normal, equal variance, signal-detection model, Fechner's law provides a numerically accurate account of discriminations between two separate stimuli, essentially because the logarithmic transform delivers a model for Weber's law. But it cannot be taken to be a measure of internal sensation because an equally accurate account is provided by a χ² model in which stimuli are scaled by their physical magnitude. The logarithmic transform of Fechner's law arises because, for the number of degrees of freedom typically required in the χ² model, the logarithm of a χ² variable is, to a good approximation, normal. This argument is set within a general theory of sensory discrimination.
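The key approximation invoked above, that the logarithm of a χ² variable with many degrees of freedom is close to normal, is easy to check numerically. The sketch below compares the skewness of a χ² sample with that of its logarithm (illustrative degrees of freedom, not a value from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
k = 50                              # degrees of freedom (illustrative)
x = rng.chisquare(k, size=200_000)

def skew(a):
    """Sample skewness: zero for a symmetric (e.g. normal) distribution."""
    z = (a - a.mean()) / a.std()
    return float((z ** 3).mean())

# chi^2_k has skewness sqrt(8/k) (about 0.4 at k = 50); taking the
# logarithm roughly halves it, pulling the sample much closer to normal.
print(skew(x), skew(np.log(x)))
```

The residual asymmetry of log χ² shrinks further as k grows, which is why the logarithmic transform can mimic a normal internal-representation model so closely in discrimination data.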

  9. Scaling of Loop-Erased Walks in 2 to 4 Dimensions

    NASA Astrophysics Data System (ADS)

    Grassberger, Peter

    2009-07-01

    We simulate loop-erased random walks on simple (hyper-)cubic lattices of dimensions 2, 3 and 4. These simulations were mainly motivated by recent two-loop renormalization group predictions for logarithmic corrections in d=4; simulations in lower dimensions were done for completeness and in order to test the algorithm. In d=2, we verify with high precision the prediction D=5/4, where the number of steps n after erasure scales with the number N of steps before erasure as n ~ N^(D/2). In d=3 we again find a power law, but with an exponent different from the one found in the most precise previous simulations: D=1.6236±0.0004. Finally, we see clear deviations from the naive scaling n ~ N in d=4. While they agree only qualitatively with the leading logarithmic corrections predicted by several authors, their agreement with the two-loop prediction is nearly perfect.

  10. A scale-entropy diffusion equation to describe the multi-scale features of turbulent flames near a wall

    NASA Astrophysics Data System (ADS)

    Queiros-Conde, D.; Foucher, F.; Mounaïm-Rousselle, C.; Kassem, H.; Feidt, M.

    2008-12-01

    Multi-scale features of turbulent flames near a wall display two kinds of scale-dependent fractal behaviour. In scale space, a unique fractal dimension cannot be defined: the fractal dimension of the front is scale-dependent. Moreover, when the front approaches the wall, this dependency changes: the fractal dimension also depends on the wall distance. Our aim here is to propose a general geometrical framework that makes it possible to integrate these two cases, in order to describe the multi-scale structure of turbulent flames interacting with a wall. Based on the scale-entropy quantity, which is simply linked to the roughness of the front, we introduce a general scale-entropy diffusion equation. We define the notion of “scale-evolutivity”, which characterises the deviation of a multi-scale system from pure fractal behaviour. The specific case of a constant scale-evolutivity over the scale range is studied. In this case, called “parabolic scaling”, the fractal dimension is a linear function of the logarithm of scale. A constant scale-evolutivity in wall-distance space implies that the fractal dimension depends linearly on the logarithm of the wall distance. We then verified experimentally that parabolic scaling represents a good approximation of the real multi-scale features of turbulent flames near a wall.

  11. Evaluation of the laboratory mouse model for screening topical mosquito repellents.

    PubMed

    Rutledge, L C; Gupta, R K; Wirtz, R A; Buescher, M D

    1994-12-01

    Eight commercial repellents were tested against Aedes aegypti 0 and 4 h after application in serial dilution to volunteers and laboratory mice. Results were analyzed by multiple regression of percentage of biting (probit scale) on dose (logarithmic scale) and time. Empirical correction terms for conversion of values obtained in tests on mice to values expected in tests on human volunteers were calculated from data obtained on 4 repellents and evaluated with data obtained on 4 others. Corrected values from tests on mice did not differ significantly from values obtained in tests on volunteers. Test materials used in the study were dimethyl phthalate, butopyronoxyl, butoxy polypropylene glycol, MGK Repellent 11, deet, ethyl hexanediol, Citronyl, and dibutyl phthalate.
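The regression described above, probit of percentage biting against log dose, can be sketched in a few lines. The dose-response numbers below are hypothetical (and the time covariate of the study's multiple regression is omitted for brevity):

```python
import numpy as np
from scipy.stats import norm

# Hypothetical dose-response data (illustrative, not the study's):
# fraction of mosquitoes biting at serially diluted repellent doses.
dose = np.array([0.01, 0.03, 0.1, 0.3, 1.0])        # arbitrary dose units
frac_biting = np.array([0.85, 0.65, 0.40, 0.20, 0.08])

# Probit transform (inverse normal CDF) regressed on log10(dose),
# the linearization used in classical probit dose-response analysis.
probit = norm.ppf(frac_biting)
slope, intercept = np.polyfit(np.log10(dose), probit, 1)

# ED50: the dose at which 50% biting is expected (probit = 0).
ed50 = 10 ** (-intercept / slope)
print(f"slope = {slope:.2f}, ED50 = {ed50:.3f}")
```

The study's mouse-to-human correction terms would then be applied to quantities estimated on this scale, shifting the fitted line rather than changing its form.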

  12. Resummation of jet veto logarithms at N3LLa + NNLO for W+W- production at the LHC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dawson, S.; Jaiswal, P.; Li, Ye

    We compute the resummed on-shell W+W- production cross section under a jet veto at the LHC to partial N3LL order matched to the fixed-order NNLO result. Differential NNLO cross sections are obtained from an implementation of qT subtraction in Sherpa. The two-loop virtual corrections to the qq¯→W+W- amplitude, used in both fixed-order and resummation predictions, are extracted from the public code qqvvamp. We perform resummation using soft collinear effective theory, with approximate beam functions where only the logarithmic terms are included at two-loop. In addition to scale uncertainties from the hard matching scale and the factorization scale, rapidity scale variations are obtained within the analytic regulator approach. Our resummation results show a decrease in the jet veto cross section compared to NNLO fixed-order predictions, with reduced scale uncertainties compared to NNLL+NLO resummed predictions. We include the loop-induced gg contribution with jet veto resummation to NLL+LO. The prediction shows good agreement with recent LHC measurements.

  13. Resummation of jet veto logarithms at N3LLa + NNLO for W+W- production at the LHC

    DOE PAGES

    Dawson, S.; Jaiswal, P.; Li, Ye; ...

    2016-12-01

    We compute the resummed on-shell W+W- production cross section under a jet veto at the LHC to partial N3LL order matched to the fixed-order NNLO result. Differential NNLO cross sections are obtained from an implementation of qT subtraction in Sherpa. The two-loop virtual corrections to the qq¯→W+W- amplitude, used in both fixed-order and resummation predictions, are extracted from the public code qqvvamp. We perform resummation using soft collinear effective theory, with approximate beam functions where only the logarithmic terms are included at two-loop. In addition to scale uncertainties from the hard matching scale and the factorization scale, rapidity scale variations are obtained within the analytic regulator approach. Our resummation results show a decrease in the jet veto cross section compared to NNLO fixed-order predictions, with reduced scale uncertainties compared to NNLL+NLO resummed predictions. We include the loop-induced gg contribution with jet veto resummation to NLL+LO. The prediction shows good agreement with recent LHC measurements.

  14. Multilayer material characterization using thermographic signal reconstruction

    NASA Astrophysics Data System (ADS)

    Shepard, Steven M.; Beemer, Maria Frendberg

    2016-02-01

    Active thermography has become a well-established Nondestructive Testing (NDT) method for detection of subsurface flaws. In its simplest form, flaw detection is based on visual identification of contrast between a flaw and local intact regions in an IR image sequence of the surface temperature as the sample responds to thermal stimulation. However, additional information and insight can be obtained from the sequence, even in the absence of a flaw, through analysis of the logarithmic derivatives of individual pixel time histories using the Thermographic Signal Reconstruction (TSR) method. For example, the response of a flaw-free multilayer sample to thermal stimulation can be viewed as a simple transition between the responses of infinitely thick samples of the individual constituent layers over the lifetime of the thermal diffusion process. The transition is represented compactly and uniquely by the logarithmic derivatives, based on the ratio of thermal effusivities of the layers. A spectrum of derivative responses relative to thermal effusivity ratios allows prediction of the time scale and detectability of the interface, and measurement of the thermophysical properties of one layer if the properties of the other are known. A similar transition between steady diffusion states occurs for flat bottom holes, based on the hole aspect ratio.
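The logarithmic-derivative signal at the heart of TSR can be illustrated with the ideal one-dimensional flash response, for which surface temperature decays as T(t) ∝ t^(-1/2); a subsurface interface or finite thickness shows up as a departure of the derivative from -1/2. A minimal sketch (idealized response, not measured data):

```python
import numpy as np

# Idealized flash response of a semi-infinite sample: T(t) ~ t^(-1/2),
# evaluated on a logarithmically spaced time grid as in TSR practice.
t = np.logspace(-2, 1, 200)          # seconds
T = 1.0 / np.sqrt(np.pi * t)

# First logarithmic derivative d(ln T)/d(ln t), the core TSR quantity.
# For the ideal semi-infinite case it is constant at -1/2; a second layer
# with different effusivity would bend it away from -1/2 at the transit time.
d1 = np.gradient(np.log(T), np.log(t))
```

In a real TSR analysis this derivative is computed from a low-order polynomial fit of ln T vs. ln t per pixel, which suppresses noise before differentiation.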

  15. Emulating Many-Body Localization with a Superconducting Quantum Processor

    NASA Astrophysics Data System (ADS)

    Xu, Kai; Chen, Jin-Jun; Zeng, Yu; Zhang, Yu-Ran; Song, Chao; Liu, Wuxin; Guo, Qiujiang; Zhang, Pengfei; Xu, Da; Deng, Hui; Huang, Keqiang; Wang, H.; Zhu, Xiaobo; Zheng, Dongning; Fan, Heng

    2018-02-01

    The laws of statistical physics dictate that generic closed quantum many-body systems initialized in nonequilibrium will thermalize under their own dynamics. However, the emergence of many-body localization (MBL), owing to the interplay between interaction and disorder (in stark contrast to Anderson localization, which addresses only noninteracting particles in the presence of disorder), greatly challenges this concept, because it prevents such systems from evolving to the ergodic thermalized state. One critical piece of evidence for MBL is the long-time logarithmic growth of entanglement entropy, a direct observation of which has remained elusive due to the experimental challenges of multiqubit single-shot measurement and quantum state tomography. Here we present an experiment fully emulating MBL dynamics with a 10-qubit superconducting quantum processor, which represents a spin-1/2 XY model featuring programmable disorder and long-range spin-spin interactions. We provide essential signatures of MBL, such as the imbalance due to the initial nonequilibrium state, the violation of the eigenstate thermalization hypothesis, and, more importantly, direct evidence of the long-time logarithmic growth of entanglement entropy. Our results lay solid foundations for precisely simulating the intriguing physics of quantum many-body systems on the platform of large-scale multiqubit superconducting quantum processors.

  16. Fractality and the law of the wall

    NASA Astrophysics Data System (ADS)

    Xu, Haosen H. A.; Yang, X. I. A.

    2018-05-01

    Fluid motions in the inertial range of isotropic turbulence are fractal, with their space-filling capacity slightly below that of regular three-dimensional objects, which is a consequence of the energy cascade. Besides the energy cascade, the other often encountered cascading process is the momentum cascade in wall-bounded flows. Despite the long-existing analogy between the two processes, many of the thoroughly investigated aspects of the energy cascade have so far received little attention in studies of the momentum counterpart, e.g., the possibility of the momentum-transferring scales in the logarithmic region being fractal has not been considered. In this work, this possibility is pursued, and we discuss one of its implications. Following the same dimensional arguments that lead to the D =2.33 fractal dimension of wrinkled surfaces in isotropic turbulence, we show that the large-scale momentum-carrying eddies may also be fractal and non-space-filling, which then leads to the power-law scaling of the mean velocity profile. The logarithmic law of the wall, on the other hand, corresponds to space-filling eddies, as suggested by Townsend [The Structure of Turbulent Shear Flow (Cambridge University Press, Cambridge, 1980)]. Because the space-filling capacity is an integral geometric quantity, the analysis presented in this work provides us with a low-order quantity with which one can distinguish between the logarithmic law and the power law.

  17. Efficient Queries of Stand-off Annotations for Natural Language Processing on Electronic Medical Records.

    PubMed

    Luo, Yuan; Szolovits, Peter

    2016-01-01

    In natural language processing, stand-off annotation uses the starting and ending positions of an annotation to anchor it to the text and stores the annotation content separately from the text. We address the fundamental problem of efficiently storing stand-off annotations when applying natural language processing to narrative clinical notes in electronic medical records (EMRs), and of efficiently retrieving such annotations subject to position constraints. Efficient storage and retrieval of stand-off annotations can facilitate tasks such as mapping unstructured text to electronic medical record ontologies. We first formulate this problem as the interval query problem, for which the optimal query/update time is in general logarithmic. We next perform a tight time complexity analysis of the basic interval tree query algorithm and show its nonoptimality when applied to a collection of 13 query types from Allen's interval algebra. We then study two closely related state-of-the-art interval query algorithms and propose query reformulations and augmentations to the second algorithm. Our proposed algorithm achieves logarithmic stabbing-max query time and solves the stabbing-interval query tasks on all of Allen's relations in logarithmic time, attaining the theoretical lower bound. Update time remains logarithmic and the space requirement remains linear. We also discuss interval management in external memory models and in higher dimensions.
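
    The stabbing query underlying such structures can be sketched with a classical centered interval tree. This is illustrative only; the paper's augmented structures go further, supporting stabbing-max queries and logarithmic-time updates.

```python
# Sketch of a centered interval tree answering stabbing queries over
# stand-off annotation spans (illustrative; not the paper's algorithm).

class IntervalTree:
    def __init__(self, intervals):
        xs = sorted(x for iv in intervals for x in iv)
        self.center = xs[len(xs) // 2]                    # median endpoint
        left = [iv for iv in intervals if iv[1] < self.center]
        right = [iv for iv in intervals if iv[0] > self.center]
        mid = [iv for iv in intervals if iv[0] <= self.center <= iv[1]]
        self.by_start = sorted(mid)                       # ascending start
        self.by_end = sorted(mid, key=lambda iv: -iv[1])  # descending end
        self.left = IntervalTree(left) if left else None
        self.right = IntervalTree(right) if right else None

    def stab(self, q):
        """Return every stored interval [s, e] with s <= q <= e."""
        out, node = [], self
        while node is not None:
            if q < node.center:
                for iv in node.by_start:      # any start <= q overlaps q here
                    if iv[0] > q:
                        break
                    out.append(iv)
                node = node.left
            elif q > node.center:
                for iv in node.by_end:        # any end >= q overlaps q here
                    if iv[1] < q:
                        break
                    out.append(iv)
                node = node.right
            else:                             # q hits the center exactly
                out.extend(node.by_start)
                node = None
        return out

annotations = [(1, 5), (3, 9), (7, 12), (10, 15)]   # stand-off [start, end] spans
tree = IntervalTree(annotations)
hits = sorted(tree.stab(4))
```

With the per-node sorted lists, each stabbing query inspects O(log n + k) intervals for k reported hits, which is the behavior the abstract's lower-bound discussion refers to.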

  18. Efficient Queries of Stand-off Annotations for Natural Language Processing on Electronic Medical Records

    PubMed Central

    Luo, Yuan; Szolovits, Peter

    2016-01-01

    In natural language processing, stand-off annotation uses the starting and ending positions of an annotation to anchor it to the text and stores the annotation content separately from the text. We address the fundamental problem of efficiently storing stand-off annotations when applying natural language processing to narrative clinical notes in electronic medical records (EMRs), and of efficiently retrieving such annotations subject to position constraints. Efficient storage and retrieval of stand-off annotations can facilitate tasks such as mapping unstructured text to electronic medical record ontologies. We first formulate this problem as the interval query problem, for which the optimal query/update time is in general logarithmic. We next perform a tight time complexity analysis of the basic interval tree query algorithm and show its nonoptimality when applied to a collection of 13 query types from Allen’s interval algebra. We then study two closely related state-of-the-art interval query algorithms and propose query reformulations and augmentations to the second algorithm. Our proposed algorithm achieves logarithmic stabbing-max query time and solves the stabbing-interval query tasks on all of Allen’s relations in logarithmic time, attaining the theoretical lower bound. Update time remains logarithmic and the space requirement remains linear. We also discuss interval management in external memory models and in higher dimensions. PMID:27478379

  19. Log-polar mapping-based scale space tracking with adaptive target response

    NASA Astrophysics Data System (ADS)

    Li, Dongdong; Wen, Gongjian; Kuai, Yangliu; Zhang, Ximing

    2017-05-01

    Correlation filter-based tracking has exhibited impressive robustness and accuracy in recent years. Standard correlation filter-based trackers are restricted to translation estimation and equipped with a fixed target response, so they perform poorly under significant scale variation or appearance change. We propose a log-polar mapping-based scale space tracker with an adaptive target response. This tracker transforms scale variation of the target in Cartesian space into a shift along the logarithmic axis in log-polar space. A one-dimensional scale correlation filter is learned online to estimate this shift. With the log-polar representation, scale estimation is achieved accurately without a multiresolution pyramid. To achieve an adaptive target response, the variance of the Gaussian target response is computed from the response map and updated online with a learning-rate parameter. Our log-polar mapping-based scale correlation filter and adaptive target response can be combined with any correlation filter-based tracker. In addition, the scale correlation filter can be extended to a two-dimensional correlation filter to achieve joint estimation of scale variation and in-plane rotation. Experiments performed on the OTB50 benchmark demonstrate that our tracker achieves superior performance against state-of-the-art trackers.
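
    The core trick can be shown in a few lines (an illustrative toy, not the tracker itself): sampling a radial pattern at logarithmically spaced radii turns a change of scale into a pure shift of the sampled profile, which a one-dimensional correlation filter can then estimate.

```python
import math

# Toy demonstration of the log-polar idea (all values invented): with radii
# r_k = exp(d*k), scaling the pattern by s shifts its sampled profile by
# log(s)/d bins along the logarithmic axis.
d, n = 0.05, 80
pattern = lambda r: math.exp(-(math.log(r) - 1.0) ** 2)  # some radial pattern

s = math.exp(8 * d)                    # scale factor chosen so the shift is 8 bins
a = [pattern(math.exp(d * k)) for k in range(n)]         # original profile
b = [pattern(s * math.exp(d * k)) for k in range(n)]     # scaled profile

shift = round(math.log(s) / d)         # recovered shift in bins
```

Here `b[k] = pattern(exp(d*(k + shift)))`, so the scaled profile is exactly the original one displaced by `shift` bins; estimating that displacement recovers the scale factor without a multiresolution pyramid.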

  20. Definition of (so MIScalled) ''Complexity'' as UTTER-SIMPLICITY!!! Versus Deviations From it as Complicatedness-Measure

    NASA Astrophysics Data System (ADS)

    Young, F.; Siegel, Edward Carl-Ludwig

    2011-03-01

    (so MIScalled) "complexity" with INHERENT BOTH SCALE-Invariance Symmetry-RESTORING, AND 1/ω (1.000...) "pink" Zipf-law Archimedes-HYPERBOLICITY INEVITABILITY power-spectrum power-law decay algebraicity. Their CONNECTION is via simple-calculus SCALE-Invariance Symmetry-RESTORING logarithm-function derivative: (d/dω) ln(ω) = 1/ω, i.e. (d/dω) [SCALE-Invariance Symmetry-RESTORING](ω) = 1/ω. Via Noether-theorem continuous-symmetries relation to conservation-laws: (d/dω) [inter-scale 4-current 4-divergence = 0](ω) = 1/ω. Hence (so MIScalled) "complexity" is information inter-scale conservation, in agreement with Anderson-Mandell [Fractals of Brain/Mind, G. Stamov ed. (1994)] experimental-psychology!!!, i.e. (so MIScalled) "complexity" is UTTER-SIMPLICITY!!! Versus COMPLICATEDNESS: either PLUS (Additive) vs. TIMES (Multiplicative) COMPLICATIONS of various system-specifics. COMPLICATEDNESS-MEASURE DEVIATIONS FROM complexity's UTTER-SIMPLICITY!!!: EITHER [SCALE-Invariance Symmetry-BREAKING] MINUS [SCALE-Invariance Symmetry-RESTORING] via power-spectrum power-law algebraicity decay DIFFERENCES: ["red"-Pareto] MINUS ["pink"-Zipf Archimedes-HYPERBOLICITY INEVITABILITY]!!!

  1. Variants of kinetically modified non-minimal Higgs inflation in supergravity

    NASA Astrophysics Data System (ADS)

    Pallis, C.

    2016-10-01

    We consider models of chaotic inflation driven by the real parts of a conjugate pair of Higgs superfields involved in the spontaneous breaking of a grand unification symmetry at a scale assuming its supersymmetric (SUSY) value. Employing Kähler potentials with a prominent shift-symmetric part proportional to c− and a tiny violation, proportional to c+, included in a logarithm, we show that the inflationary observables provide an excellent match to the recent Planck and BICEP2/Keck Array results, setting, e.g., 6.4 · 10⁻³ ≲ r± = c+/c− ≲ 1/N, where N = 2 or 3 is the prefactor of the logarithm. Deviations of these prefactors from the integer values above are also explored, and a region where hilltop inflation occurs is localized. Moreover, we analyze two distinct stabilization mechanisms for the non-inflaton accompanying superfield, one tied to higher-order terms and one with just quadratic terms within the argument of a logarithm with positive prefactor N_S < 6. In all cases, inflation can be attained for sub-Planckian inflaton values, with the corresponding effective theories retaining perturbative unitarity up to the Planck scale.

  2. Hard matching for boosted tops at two loops

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoang, Andre H.; Pathak, Aditya; Pietrulewicz, Piotr

    2015-12-10

    Here, cross sections for top quarks provide very interesting physics opportunities, being both sensitive to new physics and perturbatively tractable due to the large top quark mass. Rigorous factorization theorems for top cross sections can be derived in several kinematic scenarios, including the boosted regime in the peak region that we consider here. In the context of the corresponding factorization theorem for e⁺e⁻ collisions, we extract the last missing ingredient needed to evaluate the cross section differential in the jet mass at two-loop order, namely the matching coefficient at the scale μ ≃ m_t. Our extraction also yields the final ingredients needed to carry out logarithmic resummation at next-to-next-to-leading logarithmic order (or N³LL if we ignore the missing 4-loop cusp anomalous dimension). This coefficient exhibits an amplitude-level rapidity logarithm starting at O(α_s²) due to virtual top quark loops, which we treat using rapidity renormalization group (RG) evolution. Interestingly, this rapidity RG evolution appears in the matching coefficient between two effective theories around the heavy quark mass scale μ ≃ m_t.

  3. Electronic clinical predictive thermometer using logarithm for temperature prediction

    NASA Technical Reports Server (NTRS)

    Cambridge, Vivien J. (Inventor); Koger, Thomas L. (Inventor); Nail, William L. (Inventor); Diaz, Patrick (Inventor)

    1998-01-01

    A thermometer that rapidly predicts body temperature based on the temperature signals received from a temperature-sensing probe when it comes into contact with the body. The logarithms of the differences between the temperature signals in a selected time frame are determined. A line is fit through the logarithms, and the slope of the line is used as a system time constant in predicting the final temperature of the body. The time constant, in conjunction with predetermined additional constants, is used to compute the predicted temperature. Data quality in the time frame is monitored and, if unacceptable, a different time frame of temperature signals is selected for use in prediction. The processor switches to a monitor mode if data quality over a limited number of time frames is unacceptable. The start time on which the measurement time frame for prediction is based is determined by summing the second derivatives of the temperature signals over successive time frames; when the sum of second derivatives in a particular time frame exceeds a threshold, the start time is established.
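
    A hedged sketch of the prediction scheme described above (all constants and the sampling grid are invented for illustration): for a first-order probe response T(t) = Tf − (Tf − T0)·exp(−t/τ), the successive differences decay as exp(−t/τ), so the slope of a line fitted through their logarithms yields the time constant τ, from which the final temperature follows.

```python
import math

# Synthetic first-order warm-up curve (invented constants, for illustration).
tau_true, Tf_true, T0, dt = 4.0, 37.0, 25.0, 0.5
T = [Tf_true - (Tf_true - T0) * math.exp(-i * dt / tau_true) for i in range(12)]

diffs = [b - a for a, b in zip(T, T[1:])]        # successive differences
logs = [math.log(x) for x in diffs]              # exactly linear in time here

# Least-squares slope of log-differences versus time.
ts = [i * dt for i in range(len(logs))]
tbar, lbar = sum(ts) / len(ts), sum(logs) / len(logs)
slope = (sum((t - tbar) * (l - lbar) for t, l in zip(ts, logs))
         / sum((t - tbar) ** 2 for t in ts))

tau = -1.0 / slope                               # recovered time constant
Tf = T[-2] + diffs[-1] / (1 - math.exp(-dt / tau))   # predicted final temperature
```

On noiseless data the fit is exact; the patent's data-quality monitoring exists precisely because real probe signals are not.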

  4. Universal principles governing multiple random searchers on complex networks: The logarithmic growth pattern and the harmonic law

    NASA Astrophysics Data System (ADS)

    Weng, Tongfeng; Zhang, Jie; Small, Michael; Harandizadeh, Bahareh; Hui, Pan

    2018-03-01

    We propose a unified framework to evaluate and quantify the search time of multiple random searchers traversing a complex network independently and concurrently. We find that the intriguing behaviors of multiple random searchers are governed by two basic principles: the logarithmic growth pattern and the harmonic law. Specifically, the logarithmic growth pattern characterizes how the search time increases with the number of targets, while the harmonic law describes how the search time of multiple random searchers varies relative to that needed by individual searchers. Numerical and theoretical results demonstrate that these two universal principles hold across a broad range of random search processes, including generic random walks, maximal-entropy random walks, intermittent strategies, and persistent random walks. Our results reveal two fundamental principles governing the search time of multiple random searchers, which are expected to facilitate investigation of diverse dynamical processes such as synchronization and spreading.
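
    A minimal simulation of the basic effect the harmonic law quantifies (a cycle graph and unbiased walkers are assumptions here, not the paper's general framework): the search time of several independent walkers is the minimum of their individual first-passage times, so it shrinks as searchers are added.

```python
import random

# Toy model: k independent unbiased walkers on an n-node cycle, target at node 0.
def search_time(n, k, rng, max_steps=10_000):
    """Steps until any of the k walkers first hits the target node 0."""
    pos = [rng.randrange(1, n) for _ in range(k)]        # random non-target starts
    for step in range(1, max_steps):
        pos = [(p + rng.choice((-1, 1))) % n for p in pos]
        if any(p == 0 for p in pos):
            return step
    return max_steps

rng = random.Random(7)                                    # fixed seed for reproducibility
mean_time = lambda k: sum(search_time(20, k, rng) for _ in range(500)) / 500
t1, t4 = mean_time(1), mean_time(4)                       # 1 searcher vs 4 searchers
```

The quantitative relation between `t1` and `t4` is exactly what the paper's harmonic law characterizes across much more general search processes.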

  5. Fluctuations of healthy and unhealthy heartbeat intervals

    NASA Astrophysics Data System (ADS)

    Lan, Boon Leong; Toda, Mikito

    2013-04-01

    We show that the RR-interval fluctuations, defined as the differences between successive natural logarithms of the RR intervals, for healthy, congestive-heart-failure (CHF), and atrial-fibrillation (AF) subjects are well modeled by non-Gaussian stable distributions. Our results suggest that healthy or unhealthy RR-interval fluctuations can generally be modeled as a sum of a large number of independent physiological effects which are identically distributed with infinite variance. Furthermore, we show for the first time that one indicator, the scale parameter of the stable distribution, is sufficient to robustly distinguish the three groups of subjects. The scale parameters for healthy subjects are smaller than those for AF subjects but larger than those for CHF subjects; this ordering suggests that the scale parameter could be used to objectively quantify the severity of CHF and AF over time and also serve as an early warning signal for a healthy person when it approaches either boundary of the healthy range.
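
    The fluctuation series analysed here is simple to form (the RR values below are synthetic, for illustration only): each fluctuation is the difference of successive natural logarithms, i.e. the log of the ratio of consecutive RR intervals.

```python
import math

# Synthetic RR intervals in seconds (illustrative values, not patient data).
rr = [0.80, 0.82, 0.79, 0.85, 0.81]

# Fluctuations: differences between successive natural logarithms.
fluct = [math.log(b) - math.log(a) for a, b in zip(rr, rr[1:])]
```

It is the distribution of this `fluct` series that the paper fits with non-Gaussian stable laws, using the fitted scale parameter as the discriminating indicator.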

  6. Response to a small external force and fluctuations of a passive particle in a one-dimensional diffusive environment

    NASA Astrophysics Data System (ADS)

    Huveneers, François

    2018-04-01

    We investigate the long-time behavior of a passive particle evolving in a one-dimensional diffusive random environment with diffusion constant D. We consider two cases: (a) the particle is pulled forward by a small external constant force, and (b) there is no systematic bias. Theoretical arguments and numerical simulations provide evidence that the particle is eventually trapped by the environment. This is diagnosed in two ways: the asymptotic speed of the particle scales quadratically with the external force as it goes to zero, and the fluctuations scale diffusively in the unbiased environment, up to possible logarithmic corrections in both cases. Moreover, in the large D limit (homogenized regime), we find an important transient region giving rise to other, finite-size scalings, and we describe the crossover to the true asymptotic behavior.

  7. Estimating ice-affected streamflow by extended Kalman filtering

    USGS Publications Warehouse

    Holtschlag, D.J.; Grewal, M.S.

    1998-01-01

    An extended Kalman filter was developed to automate the real-time estimation of ice-affected streamflow on the basis of routine measurements of stream stage and air temperature and on the relation between stage and streamflow during open-water (ice-free) conditions. The filter accommodates three dynamic modes of ice effects: sudden formation/ablation, stable ice conditions, and eventual elimination. The utility of the filter was evaluated by applying it to historical data from two long-term streamflow-gauging stations, the St. John River at Dickey, Maine, and the Platte River at North Bend, Nebraska. Results indicate that the filter was stable and that parameters converged for both stations, producing streamflow estimates that are highly correlated with published values. For the Maine station, logarithms of estimated streamflows are within 8% of the logarithms of published values 87.2% of the time during periods of ice effects and within 15% 96.6% of the time. Similarly, for the Nebraska station, logarithms of estimated streamflows are within 8% of the logarithms of published values 90.7% of the time and within 15% 97.7% of the time. In addition, the correlation between temporal updates and published streamflows on days of direct measurements at the Maine station was 0.777 and 0.998 for ice-affected and open-water periods, respectively; for the Nebraska station, corresponding correlations were 0.864 and 0.997.
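
    One way to read the accuracy criterion above in code (an interpretation for illustration; the base of the logarithm is an assumption here): an estimate agrees with the published value when their logarithms differ by less than the stated fraction.

```python
import math

# Interpreted accuracy criterion (illustrative; log base assumed to be 10).
def log_within(est, pub, frac):
    """True if log(est) is within `frac` of log(pub), relatively."""
    return abs(math.log10(est) - math.log10(pub)) <= frac * abs(math.log10(pub))

ok = log_within(1050.0, 1000.0, 0.08)    # small discrepancy: inside the 8% band
bad = log_within(5000.0, 1000.0, 0.08)   # fivefold discrepancy: outside it
```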

  8. Decibels Made Easy.

    ERIC Educational Resources Information Center

    Tindle, C. T.

    1996-01-01

    Describes a method to teach acoustics to students with minimal mathematical backgrounds. Discusses the uses of charts in teaching topics of sound intensity level and the decibel scale. Avoids the difficulties of working with logarithm functions. (JRH)

  9. Signal-independent noise in intracortical brain-computer interfaces causes movement time properties inconsistent with Fitts’ law

    PubMed Central

    Willett, Francis R.; Murphy, Brian A.; Memberg, William D.; Blabe, Christine H.; Pandarinath, Chethan; Walter, Benjamin L.; Sweet, Jennifer A.; Miller, Jonathan P.; Henderson, Jaimie M.; Shenoy, Krishna V.; Hochberg, Leigh R.; Kirsch, Robert F.; Ajiboye, A. Bolu

    2017-01-01

    Objective. Do movements made with an intracortical BCI (iBCI) have the same movement time properties as able-bodied movements? Able-bodied movement times typically obey Fitts’ law: MT = a + b·log₂(D/R) (where MT is movement time, D is target distance, R is target radius, and a, b are parameters). Fitts’ law expresses two properties of natural movement that would be ideal for iBCIs to restore: (1) that movement times are insensitive to the absolute scale of the task (since movement time depends only on the ratio D/R) and (2) that movements have a large dynamic range of accuracy (since movement time is logarithmically proportional to D/R). Approach. Two participants in the BrainGate2 pilot clinical trial made cortically controlled cursor movements with a linear velocity decoder and acquired targets by dwelling on them. We investigated whether the movement times were well described by Fitts’ law. Main results. We found that movement times were better described by the equation MT = a + bD + cR⁻², which captures how movement time increases sharply as the target radius becomes smaller, independently of distance. In contrast to able-bodied movements, the iBCI movements we studied had a low dynamic range of accuracy (absence of logarithmic proportionality) and were sensitive to the absolute scale of the task (small targets had long movement times regardless of the D/R ratio). We argue that this relationship emerges due to noise in the decoder output whose magnitude is largely independent of the user’s motor command (signal-independent noise). Signal-independent noise creates a baseline level of variability that cannot be decreased by trying to move slowly or hold still, making targets below a certain size very hard to acquire with a standard decoder.
Significance. The results give new insight into how iBCI movements currently differ from able-bodied movements and suggest that restoring a Fitts’ law-like relationship to iBCI movements may require nonlinear decoding strategies. PMID:28177925
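
    The two competing movement-time models can be contrasted directly (parameter values below are arbitrary, for illustration only): Fitts' law is invariant under rescaling D and R together, while the fitted iBCI model is not.

```python
import math

# Fitts' law: MT depends only on the ratio D/R (illustrative parameters).
def fitts(D, R, a=0.2, b=0.1):
    return a + b * math.log2(D / R)

# The paper's better-fitting iBCI model: MT = a + b*D + c*R^-2.
def ibci(D, R, a=0.2, b=0.01, c=0.5):
    return a + b * D + c * R ** -2

# Doubling both D and R leaves the Fitts prediction unchanged...
mt_fitts_small, mt_fitts_big = fitts(10, 1), fitts(20, 2)
# ...but changes the iBCI prediction, i.e. it is scale-sensitive.
mt_ibci_small, mt_ibci_big = ibci(10, 1), ibci(20, 2)
```

The R⁻² term is what makes small targets expensive regardless of distance, matching the signal-independent-noise argument in the abstract.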

  10. Signal-independent noise in intracortical brain-computer interfaces causes movement time properties inconsistent with Fitts’ law

    NASA Astrophysics Data System (ADS)

    Willett, Francis R.; Murphy, Brian A.; Memberg, William D.; Blabe, Christine H.; Pandarinath, Chethan; Walter, Benjamin L.; Sweet, Jennifer A.; Miller, Jonathan P.; Henderson, Jaimie M.; Shenoy, Krishna V.; Hochberg, Leigh R.; Kirsch, Robert F.; Bolu Ajiboye, A.

    2017-04-01

    Objective. Do movements made with an intracortical BCI (iBCI) have the same movement time properties as able-bodied movements? Able-bodied movement times typically obey Fitts’ law: MT = a + b·log₂(D/R) (where MT is movement time, D is target distance, R is target radius, and a, b are parameters). Fitts’ law expresses two properties of natural movement that would be ideal for iBCIs to restore: (1) that movement times are insensitive to the absolute scale of the task (since movement time depends only on the ratio D/R) and (2) that movements have a large dynamic range of accuracy (since movement time is logarithmically proportional to D/R). Approach. Two participants in the BrainGate2 pilot clinical trial made cortically controlled cursor movements with a linear velocity decoder and acquired targets by dwelling on them. We investigated whether the movement times were well described by Fitts’ law. Main results. We found that movement times were better described by the equation MT = a + bD + cR⁻², which captures how movement time increases sharply as the target radius becomes smaller, independently of distance. In contrast to able-bodied movements, the iBCI movements we studied had a low dynamic range of accuracy (absence of logarithmic proportionality) and were sensitive to the absolute scale of the task (small targets had long movement times regardless of the D/R ratio). We argue that this relationship emerges due to noise in the decoder output whose magnitude is largely independent of the user’s motor command (signal-independent noise). Signal-independent noise creates a baseline level of variability that cannot be decreased by trying to move slowly or hold still, making targets below a certain size very hard to acquire with a standard decoder. Significance.
The results give new insight into how iBCI movements currently differ from able-bodied movements and suggest that restoring a Fitts’ law-like relationship to iBCI movements may require non-linear decoding strategies.

  11. Bio-Inspired Microsystem for Robust Genetic Assay Recognition

    PubMed Central

    Lue, Jaw-Chyng; Fang, Wai-Chi

    2008-01-01

    A compact integrated system-on-chip (SoC) architecture solution for robust, real-time, and on-site genetic analysis has been proposed. This microsystem solution is noise-tolerant and suitable for analyzing the weak fluorescence patterns from a PCR-prepared dual-labeled DNA microchip assay. In the architecture, a preceding VLSI differential logarithm microchip is designed for effectively computing the logarithm of the normalized input fluorescence signals. A subsequent VLSI artificial neural network (ANN) processor chip is used for analyzing the processed signals from the differential logarithm stage. A single-channel logarithmic circuit was fabricated and characterized. A prototype ANN chip with unsupervised winner-take-all (WTA) function was designed, fabricated, and tested. An ANN learning algorithm using a novel sigmoid-logarithmic transfer function based on the supervised backpropagation (BP) algorithm is proposed for robustly recognizing low-intensity patterns. Our results show that the trained new ANN can recognize low-fluorescence patterns better than an ANN using the conventional sigmoid function. PMID:18566679

  12. Logarithmic conformal field theory

    NASA Astrophysics Data System (ADS)

    Gainutdinov, Azat; Ridout, David; Runkel, Ingo

    2013-12-01

    Conformal field theory (CFT) has proven to be one of the richest and deepest subjects of modern theoretical and mathematical physics research, especially as regards statistical mechanics and string theory. It has also stimulated an enormous amount of activity in mathematics, shaping and building bridges between seemingly disparate fields through the study of vertex operator algebras, a (partial) axiomatisation of a chiral CFT. One can add to this that the successes of CFT, particularly when applied to statistical lattice models, have also served as an inspiration for mathematicians to develop entirely new fields: the Schramm-Loewner evolution and Smirnov's discrete complex analysis being notable examples. When the energy operator fails to be diagonalisable on the quantum state space, the CFT is said to be logarithmic. Consequently, a logarithmic CFT is one whose quantum space of states is constructed from a collection of representations which includes reducible but indecomposable ones. This qualifier arises because of the consequence that certain correlation functions will possess logarithmic singularities, something that contrasts with the familiar case of power law singularities. While such logarithmic singularities and reducible representations were noted by Rozansky and Saleur in their study of the U(1|1) Wess-Zumino-Witten model in 1992, the link between the non-diagonalisability of the energy operator and logarithmic singularities in correlators is usually ascribed to Gurarie's 1993 article (his paper also contains the first usage of the term 'logarithmic conformal field theory'). The class of CFTs that were under control at this time was quite small. In particular, an enormous amount of work from the statistical mechanics and string theory communities had produced a fairly detailed understanding of the (so-called) rational CFTs.
However, physicists from both camps were well aware that applications from many diverse fields required significantly more complicated non-rational theories. Examples include critical percolation, supersymmetric string backgrounds, disordered electronic systems, sandpile models describing avalanche processes, and so on. In each case, the non-rationality and non-unitarity of the CFT suggested that a more general theoretical framework was needed. Driven by the desire to better understand these applications, the mid-1990s saw significant theoretical advances aiming to generalise the constructs of rational CFT to a more general class. In 1994, Nahm introduced an algorithm for computing the fusion product of representations which was significantly generalised two years later by Gaberdiel and Kausch who applied it to explicitly construct (chiral) representations upon which the energy operator acts non-diagonalisably. Their work made it clear that underlying the physically relevant correlation functions are classes of reducible but indecomposable representations that can be investigated mathematically to the benefit of applications. In another direction, Flohr had meanwhile initiated the study of modular properties of the characters of logarithmic CFTs, a topic which had already evoked much mathematical interest in the rational case. Since these seminal theoretical papers appeared, the field has undergone rapid development, both theoretically and with regard to applications. Logarithmic CFTs are now known to describe non-local observables in the scaling limit of critical lattice models, for example percolation and polymers, and are an integral part of our understanding of quantum strings propagating on supermanifolds. They are also believed to arise as duals of three-dimensional chiral gravity models, fill out hidden sectors in non-rational theories with non-compact target spaces, and describe certain transitions in various incarnations of the quantum Hall effect. 
    Other physical applications range from two-dimensional turbulence and non-equilibrium systems to aspects of the AdS/CFT correspondence and describing supersymmetric sigma models beyond the topological sector. We refer the reader to the reviews in this collection for further applications and details. More recently, our understanding of logarithmic CFT has improved dramatically thanks largely to a better understanding of the underlying mathematical structures. This includes those associated to the vertex operator algebras themselves (representations, characters, modular transformations, fusion, braiding) as well as structures associated with applications to two-dimensional statistical models (diagram algebras, e.g. Temperley-Lieb algebras, and quantum groups). Not only are we getting to the point where we understand how these structures differ from standard (rational) theories, but we are starting to tackle applications both in the boundary and bulk settings. It is now clear that the logarithmic case is generic, so it is this case that one should expect to encounter in applications. We therefore feel that it is timely to review what has been accomplished in order to disseminate this improved understanding and motivate further applications. We now give a quick overview of the articles that constitute this special issue. Adamović and Milas provide a detailed summary of their rigorous results pertaining to logarithmic vertex operator (super)algebras constructed from lattices. This survey discusses the C2-cofiniteness of the (p, p') triplet models (this is the generalisation of rationality to the logarithmic setting), describes Zhu's algebra for (some of) these theories and outlines the difficulties involved in explicitly constructing the modules responsible for their logarithmic nature. Cardy gives an account of a popular approach to logarithmic theories that regards them, heuristically at least, as limits of ordinary (but non-rational) CFTs.
    More precisely, it seems that any given correlator may be computed as a limit of standard (non-logarithmic) correlators; any logarithmic singularities that arise do so because of a degeneration when taking the limit. He then illustrates this phenomenon in several theories describing statistical lattice models including the n → 0 limit of the O(n) model and the Q → 1 limit of the Q-state Potts model. Creutzig and Ridout review the continuum approach to logarithmic CFT, using the percolation (boundary) CFT to detail the connection between module structure and logarithmic singularities in correlators before describing their proposed solution to the thorny issue of generalising modular data and Verlinde formulae to the logarithmic setting. They illustrate this proposal using the three best-understood examples of logarithmic CFTs: the (1, 2) models, related to symplectic fermions; the fractional level WZW model on , related to the beta gamma ghosts; and the WZW model on GL(1|1). The analysis in each case requires that the spectrum be continuous; C2-cofinite models are only recovered as orbifolds. Flohr and Koehn consider the characters of the irreducible modules in the spectrum of a CFT and discuss why these only span a proper subspace of the space of torus vacuum amplitudes in the logarithmic case. This is illustrated explicitly for the (1, 2) triplet model and conclusions are drawn for the action of the modular group. They then note that the irreducible characters of this model also admit fermionic sum forms which seem to fit well into Nahm's well-known conjecture for rational theories. Quasi-particle interpretations are also introduced, leading to the conclusion that logarithmic C2-cofinite theories are not so terribly different to rational theories, at least in some respects. Fuchs, Schweigert and Stigner address the problem of constructing local logarithmic CFTs starting from the chiral theory.
They first review the construction of the local theory in the non-logarithmic setting from an angle that will then generalise to logarithmic theories. In particular, they observe that the bulk space can be understood as a certain coend. The authors then show how to carry out the construction of the bulk space in the category of modules over a factorisable ribbon Hopf algebra, which shares many properties with the braided categories arising from logarithmic chiral theories. The authors proceed to construct the analogue of all-genus correlators in their setting and establish invariance under the mapping class group, i.e. locality of the correlators. Gainutdinov, Jacobsen, Read, Saleur and Vasseur review their approach based on the assumption that certain classes of logarithmic CFTs admit lattice regularisations with local degrees of freedom, for example quantum spin chains (with local interactions). They therefore study the finite-dimensional algebras generated by the hamiltonian densities (typically the Temperley-Lieb algebras and their extensions) that describe the dynamics of these lattice models. The authors then argue that the lattice algebras exhibit, in finite size, mathematical properties that are in correspondence with those of their continuum limits, allowing one to predict continuum structures directly from the lattice. Moreover, the lattice models considered admit quantum group symmetries that play a central role in the algebraic analysis (representation structure and fusion). Grumiller, Riedler, Rosseel and Zojer review the role that logarithmic CFTs may play in certain versions of the AdS/CFT correspondence, particularly for what is known as topologically massive gravity (TMG). This has been a very active subject over the last five years and the article takes great care to disentangle the contributions from the many groups that have participated. 
They begin with some general remarks on logarithmic behaviour, much in the spirit of Cardy's review, before detailing the distinction between the chiral (no logs) and logarithmic proposals for critical TMG. The latter is then subjected to various consistency checks, before evidence for logarithmic behaviour is discussed in more general classes of gravity theories, including those with boundaries, supersymmetry and galilean relativity. Gurarie has written an historical overview of his seminal contributions to this field, putting his results (and those of his collaborators) in the context of understanding applications to condensed matter physics. This includes the link between the non-diagonalisability of L0 and logarithmic singularities, a study of the c → 0 catastrophe, and a proposed resolution involving supersymmetric partners for the stress-energy tensor and its logarithmic partner field. Henkel and Rouhani describe a direction in which logarithmic singularities are observed in correlators of non-relativistic field theories. Their review covers the modifications of conformal invariance that are appropriate to non-equilibrium statistical mechanics, strongly anisotropic critical points and certain variants of TMG. The main departure from the standard relativistic notion of conformal invariance is that time is explicitly distinguished from space when considering dilations, and this leads to a variety of algebraic structures to explore. In this review, the link between non-diagonalisable representations and logarithmic singularities in correlators is generalised to these algebras, before two applications of the theory are discussed. Huang and Lepowsky give a non-technical overview of their work on braided tensor structures on suitable categories of representations of vertex operator algebras. They also place their work in historical context and compare it to related approaches. 
The authors sketch their construction of the so-called P(z)-tensor product of modules of a vertex operator algebra, and the construction of the associativity isomorphisms for this tensor product. They proceed to give a guide to their works leading to the first author's proof of modularity for a class of vertex operator algebras, and to their works, joint with Zhang, on logarithmic intertwining operators and the resulting tensor product theory. Morin-Duchesne and Saint-Aubin have contributed a research article describing their recent characterisation of when the transfer matrix of a periodic loop model fails to be diagonalisable. This generalises their recent result for non-periodic loop models and provides rigorous methods to justify what has often been assumed in the lattice approach to logarithmic CFT. The philosophy here is one of analysing lattice models with finite size, aiming to demonstrate that non-diagonalisability survives the scaling limit. This is extremely difficult in general (see also the review by Gainutdinov et al ), so it is remarkable that it is even possible to demonstrate this at any level of generality. Quella and Schomerus have prepared an extensive review covering their longstanding collaboration on the logarithmic nature of conformal sigma models on Lie supergroups and their cosets with applications to string theory and AdS/CFT. Beginning with a very welcome overview of Lie superalgebras and their representations, harmonic analysis and cohomological reduction, they then apply these mathematical tools to WZW models on type I Lie supergroups and their homogeneous subspaces. Along the way, deformations are discussed and potential dualities in the corresponding string theories are described. Ruelle provides an exhaustive account of his substantial contributions to the study of the abelian sandpile model. 
This is a statistical model which has the surprising feature that many correlation functions can be computed exactly, in the bulk and on the boundary, even though the spectrum of conformal weights is largely unknown. Nevertheless, there is much evidence suggesting that its scaling limit is described by an, as yet unknown, c = -2 logarithmic CFT. Semikhatov and Tipunin present their very recent results regarding the construction of logarithmic chiral W-algebra extensions of a fractional level algebra. The idea is that these algebras are the centralisers of a rank-two Nichols algebra which possesses at least one fermionic generator. In turn, these Nichols algebra generators are represented by screening operators which naturally appear in CFT bosonisation. The major advantage of using these generators is that they give strong hints about the representation theory and fusion rules of the chiral algebra. Simmons has contributed an article describing the calculation of various correlation functions in the logarithmic CFT that describes critical percolation. These calculations are interpreted geometrically in a manner that should be familiar to mathematicians studying Schramm-Loewner evolutions and point towards a (largely unexplored) bridge connecting logarithmic CFT with this branch of mathematics. Of course, the field of logarithmic CFT has benefited greatly from the work of many researchers who are not represented in this special issue. The interested reader will find many links to their work in the bibliographies of the special issue articles and reviews. In summary, logarithmic CFT describes an extension of the incredibly successful methods of rational CFT to a more general setting. This extension is necessary to properly describe many different fundamental phenomena of physical interest. The formalism is moreover highly non-trivial from a mathematical point of view, and so logarithmic theories are of significant interest to both physicists and mathematicians. 
We hope that the collection of articles that follows will serve as an inspiration, and a valuable resource, for both of these communities.

  13. Late-time structure of the Bunch-Davies de Sitter wavefunction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anninos, Dionysios; Anous, Tarek; Freedman, Daniel Z.

    2015-11-30

We examine the late time behavior of the Bunch-Davies wavefunction for interacting light fields in a de Sitter background. We use perturbative techniques developed in the framework of AdS/CFT, and analytically continue to compute tree and loop level contributions to the Bunch-Davies wavefunction. We consider self-interacting scalars of general mass, but focus especially on the massless and conformally coupled cases. We show that certain contributions grow logarithmically in conformal time both at tree and loop level. We also consider gauge fields and gravitons. The four-dimensional Fefferman-Graham expansion of classical asymptotically de Sitter solutions is used to show that the wavefunction contains no logarithmic growth in the pure graviton sector at tree level. Finally, assuming a holographic relation between the wavefunction and the partition function of a conformal field theory, we interpret the logarithmic growths in the language of conformal field theory.

  14. Scaling rules for the final decline to extinction

    PubMed Central

    Griffen, Blaine D.; Drake, John M.

    2009-01-01

    Space–time scaling rules are ubiquitous in ecological phenomena. Current theory postulates three scaling rules that describe the duration of a population's final decline to extinction, although these predictions have not previously been empirically confirmed. We examine these scaling rules across a broader set of conditions, including a wide range of density-dependent patterns in the underlying population dynamics. We then report on tests of these predictions from experiments using the cladoceran Daphnia magna as a model. Our results support two predictions that: (i) the duration of population persistence is much greater than the duration of the final decline to extinction and (ii) the duration of the final decline to extinction increases with the logarithm of the population's estimated carrying capacity. However, our results do not support a third prediction that the duration of the final decline scales inversely with population growth rate. These findings not only support the current standard theory of population extinction but also introduce new empirical anomalies awaiting a theoretical explanation. PMID:19141422

  15. Clinical application of the pO(2)-pCO(2) diagram.

    PubMed

    Paulev, P-E; Siggaard-Andersen, O

    2004-10-01

    Based on the classic, linear blood gas diagram a logarithmic blood gas map was constructed. The scales were extended by the use of logarithmic axes in order to allow for high patient values. Patients with lung disorders often have high arterial carbon dioxide tensions, and patients on supplementary oxygen typically respond with high oxygen tensions off the scale of the classic diagram. Two case histories illustrate the clinical application of the logarithmic blood gas map. Variables from the two patients were measured by the use of blood gas analysis equipment. Measured and calculated values are tabulated. The calculations were performed using the oxygen status algorithm. When interpreting the graph for a given patient it is recommended first to observe the location of the marker for the partial pressure of oxygen in inspired, humidified air (I) to see whether the patient is breathing atmospheric air or air with supplementary oxygen. Then observe the location of the arterial point (a) to see whether hypoxemia or hypercapnia appears to be the primary disturbance. Finally observe the alveolo-arterial oxygen tension difference to estimate the degree of veno-arterial shunting. If the mixed venous point (v) is available, then observe the value of the mixed venous oxygen tension. This is the most important indicator of global tissue hypoxia.

  16. Spectral analysis of near-wall turbulence in channel flow at Reτ=4200 with emphasis on the attached-eddy hypothesis

    NASA Astrophysics Data System (ADS)

    Agostini, Lionel; Leschziner, Michael

    2017-01-01

    Direct numerical simulation data for channel flow at a friction Reynolds number of 4200, generated by Lozano-Durán and Jiménez [J. Fluid Mech. 759, 432 (2014), 10.1017/jfm.2014.575], are used to examine the properties of near-wall turbulence within subranges of eddy-length scale. Attention is primarily focused on the intermediate layer (mesolayer) covering the logarithmic velocity region within the range of wall-scaled wall-normal distance of 80-1500. The examination is based on a number of statistical properties, including premultiplied and compensated spectra, the premultiplied derivative of the second-order structure function, and three scalar parameters that characterize the anisotropic or isotropic state of the various length-scale subranges. This analysis leads to the delineation of three regions within the map of wall-normal-wise premultiplied spectra, each characterized by distinct turbulence properties. A question of particular interest is whether the Townsend-Perry attached-eddy hypothesis (AEH) can be shown to be valid across the entire mesolayer, in contrast to the usual focus on the outer portion of the logarithmic-velocity layer at high Reynolds numbers, which is populated with very-large-scale motions. This question is addressed by reference to properties in the premultiplied scalewise derivative of the second-order structure function (PMDS2) and joint probability density functions of streamwise-velocity fluctuations and their streamwise and spanwise derivatives. This examination provides evidence, based primarily on the existence of a plateau region in the PMDS2, for the qualified validity of the AEH right down to the lower limit of the logarithmic velocity range.

  17. Adjustments for the display of quantized ion channel dwell times in histograms with logarithmic bins.

    PubMed

    Stark, J A; Hladky, S B

    2000-02-01

    Dwell-time histograms are often plotted as part of patch-clamp investigations of ion channel currents. The advantages of plotting these histograms with a logarithmic time axis were demonstrated in J. Physiol. (Lond.) 378:141-174, Pflügers Arch. 410:530-553, and Biophys. J. 52:1047-1054. Sigworth and Sine argued that the interpretation of such histograms is simplified if the counts are presented in a manner similar to that of a probability density function. However, when ion channel records are recorded as a discrete time series, the dwell times are quantized. As a result, the mapping of dwell times to logarithmically spaced bins is highly irregular; bins may be empty, and significant irregularities may extend beyond the duration of 100 samples. Using simple approximations based on the nature of the binning process and the transformation rules for probability density functions, we develop adjustments for the display of the counts to compensate for this effect. Tests with simulated data suggest that this procedure provides a faithful representation of the data.
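The binning irregularity described above can be sketched numerically; the sampling interval, dwell-time distribution, and bin density below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Illustrative parameters (assumed): 0.1 ms sampling interval, exponentially
# distributed dwell times with a 5 ms mean, quantized by the sampling clock.
dt = 1e-4
rng = np.random.default_rng(0)
dwells = np.ceil(rng.exponential(scale=5e-3, size=10000) / dt) * dt

# Logarithmically spaced bin edges spanning the observed dwell times
edges = np.logspace(np.log10(dt), np.log10(dwells.max() + dt), num=50)
counts, _ = np.histogram(dwells, bins=edges)

# Near the short-time end, several log bins fall strictly between two
# quantization levels and therefore receive no counts at all.
print("empty bins:", int((counts == 0).sum()))
```

The empty and irregularly filled bins at short times are exactly the artifact that the display adjustments of the paper are designed to compensate for.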

  18. Tertiary creep test by ring shear apparatus in predicting initiation time of rainfall-induced-shallow landslide

    NASA Astrophysics Data System (ADS)

    Dok, A.; Fukuoka, H.

    2010-12-01

    Landslides are complex geo-disasters that develop from multiple causes but are typically set off by a single trigger, such as an earthquake or heavy rainfall. A slope failure seldom occurs without prior creep deformation. As found by Fukuzono (1985) and Saito (1965) through graphical analysis of extensometer monitoring data from large-scale flume tests, the logarithm of the acceleration of surface displacement is proportional to the logarithm of its velocity immediately before failure. This is expressed as d2x/dt2 = A(dx/dt)^α, where x is the surface displacement, t is time, and A and α are constants. Fukuzono (1985, 1989) further proposed a simple method of predicting the failure time by means of the inverse velocity (1/v): the inverse-velocity curve is concave for 1 < α < 2, linear for α = 2, and convex for α > 2. More recently, Minamitani (2007) investigated the mechanism of tertiary creep deformation for landslide failure-time prediction under increasing shear stress, in order to understand the empirical relationship found by Fukuzono, and reported a strong relationship between the constants A and α, expressed as α = 0.1781A + 1.814. To deepen this understanding, the present study examines the mechanism of landslides in tropical soils with a ring shear apparatus (developed at DPRI, the Disaster Prevention Research Institute), building on tertiary creep deformation theory, with the aim of supporting warnings of rainfall-induced landslides. Back (pore-water) pressure control tests were conducted under combined conditions of normal stress and shear stress with changing pore-water pressure, simulating the state of a potential sliding surface during heavy rainfall; to our knowledge no such test series, in particular one applying cyclic and actual groundwater-change patterns to the soils, has been conducted before. 
To this end, a series of back-pressure control tests was carried out with a stress-controlled ring shear apparatus capable of controlling pore pressure, including monotonic increase of pore pressure at a constant rate. A mixture of sand and clay was used to simulate an actual potential sliding surface. Shear tests repeated one to five times on a single specimen were also conducted to reproduce reactivated landslide motion. The tests succeeded in reproducing tertiary creep to failure, and the resulting logarithm-of-acceleration versus logarithm-of-velocity relation showed a concave 1/v trend (on the safe side); the α value (0.3-0.7) was much smaller than in the works of Fukuzono and Minamitani, for reasons that remain unclear. Moreover, the repeated shear trials yielded scattered α values with no significant trend of change.
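Fukuzono's inverse-velocity prediction for the linear (α = 2) case can be sketched as follows; the constants A and tc are illustrative assumptions, not values from the study:

```python
import numpy as np

# Sketch of Fukuzono's inverse-velocity method for alpha = 2, on synthetic
# data.  Integrating d2x/dt2 = A*(dx/dt)**2 gives v = 1 / (A * (tc - t)),
# so the inverse velocity 1/v = A*(tc - t) falls linearly and reaches zero
# exactly at the failure time tc.
A, tc = 0.5, 100.0                   # assumed constants (illustrative)
t = np.linspace(50.0, 95.0, 46)     # observation window before failure
v = 1.0 / (A * (tc - t))            # surface-displacement velocity

# A straight-line fit to 1/v, extrapolated to 1/v = 0, predicts tc.
slope, intercept = np.polyfit(t, 1.0 / v, 1)
t_failure = -intercept / slope
print(round(t_failure, 2))  # -> 100.0
```

For 1 < α < 2 the 1/v curve is concave (as observed in the tests above), so a linear extrapolation overestimates the remaining time, which is why a concave trend is described as being on the safe side.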

  19. The Relationship between Spatial and Temporal Magnitude Estimation of Scientific Concepts at Extreme Scales

    NASA Astrophysics Data System (ADS)

    Price, Aaron; Lee, H.

    2010-01-01

    Many astronomical objects, processes, and events exist and occur at extreme scales of spatial and temporal magnitudes. Our research draws upon the psychological literature, replete with evidence of linguistic and metaphorical links between the spatial and temporal domains, to compare how students estimate spatial and temporal magnitudes associated with objects and processes typically taught in science class. We administered spatial and temporal scale estimation tests, with many astronomical items, to 417 students enrolled in 12 undergraduate science courses. Results show that while the temporal test was more difficult, students’ overall performance patterns between the two tests were mostly similar. However, asymmetrical correlations between the two tests indicate that students think of the extreme ranges of spatial and temporal scales in different ways, which is likely influenced by their classroom experience. When making incorrect estimations, students tended to underestimate the difference between the everyday scale and the extreme scales on both tests. This suggests the use of a common logarithmic mental number line for both spatial and temporal magnitude estimation. However, there are differences between the two tests in the errors students make in the everyday range. Among the implications discussed is the use of spatio-temporal reference frames, instead of smooth bootstrapping, to help students maneuver between scales of magnitude and the use of logarithmic transformations between reference frames. Implications for astronomy range from learning about spectra to large scale galaxy structure.

  20. The complete two-loop integrated jet thrust distribution in soft-collinear effective theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    von Manteuffel, Andreas; Schabinger, Robert M.; Zhu, Hua Xing

    2014-03-01

    In this work, we complete the calculation of the soft part of the two-loop integrated jet thrust distribution in e+e- annihilation. This jet mass observable is based on the thrust cone jet algorithm, which involves a veto scale for out-of-jet radiation. The previously uncomputed part of our result depends in a complicated way on the jet cone size, r, and at intermediate stages of the calculation we actually encounter a new class of multiple polylogarithms. We employ an extension of the coproduct calculus to systematically exploit functional relations and represent our results concisely. In contrast to the individual contributions, the sum of all global terms can be expressed in terms of classical polylogarithms. Our explicit two-loop calculation enables us to clarify the small r picture discussed in earlier work. In particular, we show that the resummation of the logarithms of r that appear in the previously uncomputed part of the two-loop integrated jet thrust distribution is inextricably linked to the resummation of the non-global logarithms. Furthermore, we find that the logarithms of r which cannot be absorbed into the non-global logarithms in the way advocated in earlier work have coefficients fixed by the two-loop cusp anomalous dimension. We also show that in many cases one can straightforwardly predict potentially large logarithmic contributions to the integrated jet thrust distribution at L loops by making use of analogous contributions to the simpler integrated hemisphere soft function.

  1. Universal slow dynamics in granular solids

    PubMed

    TenCate; Smith; Guyer

    2000-07-31

    Experimental properties of a new form of creep dynamics are reported, as manifest in a variety of sandstones, limestone, and concrete. The creep is a recovery behavior, following the sharp drop in elastic modulus induced either by nonlinear acoustic straining or rapid temperature change. The extent of modulus recovery is universally proportional to the logarithm of the time after source discontinuation in all samples studied, over a scaling regime covering at least 10^3 s. Comparison of acoustically and thermally induced creep suggests a single origin based on internal strain, which breaks the symmetry of the inducing source.

  2. The spreading time in SIS epidemics on networks

    NASA Astrophysics Data System (ADS)

    He, Zhidong; Van Mieghem, Piet

    2018-03-01

    In a Susceptible-Infected-Susceptible (SIS) process, we investigate the spreading time Tm, which is the time when the number of infected nodes in the metastable state is first reached, starting from the outbreak of the epidemics. We observe that the spreading time Tm resembles a lognormal-like distribution, though with different deep tails, both for the Markovian and the non-Markovian infection process, which implies that the spreading time can be very long with a relatively high probability. In addition, we show that a stronger virus, with a higher effective infection rate τ or an earlier timing of the infection attempts, does not always lead to a shorter average spreading time E [Tm ] . We numerically demonstrate that the average spreading time E [Tm ] in the complete graph and the star graph scales logarithmically as a function of the network size N for a fixed fraction of infected nodes in the metastable state.
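A minimal mean-field sketch (not the paper's stochastic simulation) of why the average spreading time grows logarithmically with N on the complete graph; all parameter values are illustrative assumptions:

```python
import math

# Mean-field SIS on the complete graph with per-contact infection rate
# lam/N: the infected fraction y obeys the logistic equation
#   dy/dt = (lam - delta) * y * (1 - y / y_inf),  y_inf = 1 - delta/lam.
# Starting from a single infected node (y0 = 1/N), the time to first reach
# a fixed fraction of the metastable level y_inf grows like log N.
def spreading_time(N, lam=2.0, delta=1.0, target=0.9):
    r = lam - delta                    # initial exponential growth rate
    y_inf = 1.0 - delta / lam          # metastable infected fraction
    y0, y1 = 1.0 / N, target * y_inf
    # closed-form travel time of the logistic solution from y0 to y1
    return (1.0 / r) * math.log(y1 * (y_inf - y0) / (y0 * (y_inf - y1)))

times = [spreading_time(N) for N in (10**2, 10**3, 10**4)]
# Each tenfold increase in N adds roughly ln(10)/r to the spreading time.
print([round(b - a, 2) for a, b in zip(times, times[1:])])
```

The stochastic model of the paper has fluctuations this deterministic sketch ignores, but the logarithmic dependence on N has the same origin: the time spent growing out of the initial O(1/N) infected fraction.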

  3. Equal Temperament, Overtones, and the Ear.

    ERIC Educational Resources Information Center

    McGarry, Robert J.

    1984-01-01

    Discussed is how musicians can get used to playing in equal temperament--the system of tuning in which all 12 tones of the chromatic scale stand equidistant from each other in both a logarithmic and musical sense. (RM)
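The logarithmic equidistance of the twelve tones can be sketched as follows; the A4 = 440 Hz reference is a convention, not from the article:

```python
import math

# Equal temperament: each semitone multiplies frequency by 2**(1/12), so the
# 12 chromatic tones are equidistant on a logarithmic frequency scale.
def equal_tempered(freq_a4, semitones_from_a4):
    return freq_a4 * 2.0 ** (semitones_from_a4 / 12.0)

a4 = 440.0
octave = equal_tempered(a4, 12)   # one octave up doubles the frequency
fifth = equal_tempered(a4, 7)     # tempered fifth, slightly flat of 3:2
print(octave)                     # -> 880.0
print(round(fifth, 2))            # -> 659.26
```

The tempered fifth (≈ 659.26 Hz) versus the just fifth (660 Hz) illustrates the small compromises musicians must adjust to when playing in equal temperament.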

  4. Higgs-boson production at small transverse momentum

    NASA Astrophysics Data System (ADS)

    Becher, Thomas; Neubert, Matthias; Wilhelm, Daniel

    2013-05-01

    Using methods from effective field theory, we have recently developed a novel, systematic framework for the calculation of the cross sections for electroweak gauge-boson production at small and very small transverse momentum q T , in which large logarithms of the scale ratio m V / q T are resummed to all orders. This formalism is applied to the production of Higgs bosons in gluon fusion at the LHC. The production cross section receives logarithmically enhanced corrections from two sources: the running of the hard matching coefficient and the collinear factorization anomaly. The anomaly leads to the dynamical generation of a non-perturbative scale q* ~ m_H exp[-const/α_s(m_H)] ≈ 8 GeV, which protects the process from receiving large long-distance hadronic contributions. We present numerical predictions for the transverse-momentum spectrum of Higgs bosons produced at the LHC, finding that it is quite insensitive to hadronic effects.

  5. Zero-Field Ambient-Pressure Quantum Criticality in the Stoichiometric Non-Fermi Liquid System CeRhBi

    NASA Astrophysics Data System (ADS)

    Anand, Vivek K.; Adroja, Devashibhai T.; Hillier, Adrian D.; Shigetoh, Keisuke; Takabatake, Toshiro; Park, Je-Geun; McEwen, Keith A.; Pixley, Jedediah H.; Si, Qimiao

    2018-06-01

    We present the spin dynamics study of a stoichiometric non-Fermi liquid (NFL) system CeRhBi, using low-energy inelastic neutron scattering (INS) and muon spin relaxation (μSR) measurements. It shows evidence for an energy-temperature (E/T) scaling in the INS dynamic response and a time-field (t/Hη) scaling of the μSR asymmetry function indicating a quantum critical behavior in this compound. The E/T scaling reveals a local character of quantum criticality consistent with the power-law divergence of the magnetic susceptibility, logarithmic divergence of the magnetic heat capacity and T-linear resistivity at low temperature. The occurrence of NFL behavior and local criticality over a very wide dynamical range at zero field and ambient pressure without any tuning in this stoichiometric heavy fermion compound is striking, making CeRhBi a model system amenable to in-depth studies for quantum criticality.

  6. Reducing bias and analyzing variability in the time-left procedure.

    PubMed

    Trujano, R Emmanuel; Orduña, Vladimir

    2015-04-01

    The time-left procedure was designed to evaluate the psychophysical function for time. Although previous results indicated a linear relationship, it is not clear what role the observed bias toward the time-left option plays in this procedure and there are no reports of how variability changes with predicted indifference. The purposes of this experiment were to reduce bias experimentally, and to contrast the difference limen (a measure of variability around indifference) with predictions from scalar expectancy theory (linear timing) and behavioral economic model (logarithmic timing). A control group of 6 rats performed the original time-left procedure with C=60 s and S=5, 10,…, 50, 55 s, whereas a no-bias group of 6 rats performed the same conditions in a modified time-left procedure in which only a single response per choice trial was allowed. Results showed that bias was reduced for the no-bias group, observed indifference grew linearly with predicted indifference for both groups, and difference limen and Weber ratios decreased as expected indifference increased for the control group, which is consistent with linear timing, whereas for the no-bias group they remained constant, consistent with logarithmic timing. Therefore, the time-left procedure generates results consistent with logarithmic perceived time once bias is experimentally reduced. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Decay of Correlations, Quantitative Recurrence and Logarithm Law for Contracting Lorenz Attractors

    NASA Astrophysics Data System (ADS)

    Galatolo, Stefano; Nisoli, Isaia; Pacifico, Maria Jose

    2018-03-01

    In this paper we prove that a class of skew product maps with non-uniformly hyperbolic base has exponential decay of correlations. We apply this to obtain a logarithm law for the hitting time associated to a contracting Lorenz attractor at all the points having a well defined local dimension, and a quantitative recurrence estimation.

  8. Finite-time singularities in the dynamics of hyperinflation in an economy

    NASA Astrophysics Data System (ADS)

    Szybisz, Martín A.; Szybisz, Leszek

    2009-08-01

    The dynamics of hyperinflation episodes is studied by applying a theoretical approach based on collective “adaptive inflation expectations” with a positive nonlinear feedback proposed in the literature. In such a description it is assumed that the growth rate of the logarithmic price, r(t), changes with a velocity obeying a power law, which leads to a finite-time singularity at a critical time tc. By revising that model we found that, indeed, there are two types of singular solutions for the logarithmic price, p(t). One is given by the already reported form p(t) ≈ (tc - t)^(-α) (with α > 0) and the other exhibits a logarithmic divergence, p(t) ≈ ln[1/(tc - t)]. The singularity is a signature of an economic crash. In the present work we express p(t) explicitly in terms of the parameters introduced throughout the formulation, avoiding the use of any combination of them defined in the original paper. This procedure allows us to examine simultaneously the time series of r(t) and p(t), performing a linked error analysis of the determined parameters. For the first time this approach is applied to analysing the very extreme historical hyperinflations that occurred in Greece (1941-1944) and Yugoslavia (1991-1994). The case of Greece is compatible with a logarithmic singularity. The study is completed with an analysis of the hyperinflation spiral currently experienced in Zimbabwe; according to our results, an economic crash in this country is predicted for the immediate future. The robustness of the results to changes of the initial time of the series and the differences with a linear feedback are discussed.
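The borderline case of the feedback model integrates in closed form and exhibits exactly the logarithmic divergence mentioned above; the parameter values below are illustrative assumptions, not fits to any of the historical series:

```python
import math

# The nonlinear feedback dr/dt = A * r**alpha, with r = dp/dt the growth
# rate of the logarithmic price, integrates in closed form.  For alpha = 2
# the rate blows up as r(t) = r0 / (1 - A*r0*t), i.e. at tc = 1/(A*r0),
# while the logarithmic price itself diverges only logarithmically:
#   p(t) - p0 = (1/A) * ln(tc / (tc - t)).
A, r0 = 0.5, 0.1                 # assumed feedback strength and initial rate
tc = 1.0 / (A * r0)              # finite-time singularity, here tc = 20

def p(t, p0=0.0):
    return p0 + (1.0 / A) * math.log(tc / (tc - t))

print(round(p(19.8), 2))  # close to tc the log price is large but finite
```

The derivative of p recovers r: for instance dp/dt at t = 10 equals r0/(1 - A*r0*10) = 0.2, which the test below checks by central differencing.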

  9. One- and two-channel Kondo model with logarithmic Van Hove singularity: A numerical renormalization group solution

    NASA Astrophysics Data System (ADS)

    Zhuravlev, A. K.; Anokhin, A. O.; Irkhin, V. Yu.

    2018-02-01

    Simple scaling consideration and NRG solution of the one- and two-channel Kondo model in the presence of a logarithmic Van Hove singularity at the Fermi level is given. The temperature dependences of local and impurity magnetic susceptibility and impurity entropy are calculated. The low-temperature behavior of the impurity susceptibility and impurity entropy turns out to be non-universal in the Kondo sense and independent of the s-d coupling J. The resonant level model solution in the strong coupling regime confirms the NRG results. In the two-channel case the local susceptibility demonstrates a non-Fermi-liquid power-law behavior.

  10. Neyman-Pearson biometric score fusion as an extension of the sum rule

    NASA Astrophysics Data System (ADS)

    Hube, Jens Peter

    2007-04-01

    We define the biometric performance invariance under strictly monotonic functions on match scores as normalization symmetry. We use this symmetry to clarify the essential difference between the standard score-level fusion approaches of sum rule and Neyman-Pearson. We then express Neyman-Pearson fusion assuming match scores defined using false acceptance rates on a logarithmic scale. We show that by stating Neyman-Pearson in this form, it reduces to sum rule fusion for ROC curves with logarithmic slope. We also introduce a one parameter model of biometric performance and use it to express Neyman-Pearson fusion as a weighted sum rule.
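A hedged sketch of score normalization onto a logarithmic false-acceptance scale followed by sum-rule fusion; the exponential FAR model and its rate parameter are assumptions for illustration, not the paper's one-parameter performance model:

```python
import math

# Toy ROC model (assumed): the false acceptance rate falls exponentially
# with the raw match score.  Any strictly monotonic FAR(score) map would do,
# by the normalization symmetry discussed in the paper.
def far(score, rate=1.0):
    return math.exp(-rate * score)

# Re-express each matcher's score on a logarithmic FAR scale
def normalized(score, rate=1.0):
    return -math.log10(far(score, rate))

# Sum-rule fusion applied to the log-FAR-normalized scores
def fuse(scores, rates):
    return sum(normalized(s, r) for s, r in zip(scores, rates))

fused = fuse([1.0, 2.0], [1.0, 0.5])
```

On this scale, adding scores multiplies the corresponding false-acceptance rates, which is the sense in which sum-rule fusion approximates Neyman-Pearson fusion when ROC curves have logarithmic slope.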

  11. Integrated Circuit Stellar Magnitude Simulator

    ERIC Educational Resources Information Center

    Blackburn, James A.

    1978-01-01

    Describes an electronic circuit which can be used to demonstrate the stellar magnitude scale. Six rectangular light-emitting diodes with independently adjustable duty cycles represent stars of magnitudes 1 through 6. Experimentally verifies the logarithmic response of the eye. (Author/GA)
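The logarithmic magnitude scale underlying such a demonstration can be sketched as follows; the duty-cycle normalization is an illustrative assumption, not the circuit's actual calibration:

```python
# The stellar magnitude scale is logarithmic: a step of one magnitude
# corresponds to a brightness ratio of 100**(1/5), about 2.512.
def brightness_ratio(m1, m2):
    return 100.0 ** ((m2 - m1) / 5.0)   # flux of star m1 relative to star m2

# Assumed normalization: the magnitude-1 diode runs at 100% duty cycle,
# and fainter "stars" are dimmed in proportion to their flux.
def duty_cycle(m):
    return 1.0 / brightness_ratio(1.0, m)

print(round(brightness_ratio(1.0, 6.0), 1))  # -> 100.0
```

A magnitude-6 diode therefore needs only a 1% duty cycle, and the fact that the six diodes then appear roughly evenly spaced in brightness is the eye's logarithmic response that the circuit verifies.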

  12. Impact of long-range interactions on the disordered vortex lattice

    NASA Astrophysics Data System (ADS)

    Koopmann, J. A.; Geshkenbein, V. B.; Blatter, G.

    2003-07-01

    The interaction between the vortex lines in a type-II superconductor is mediated by currents. In the absence of transverse screening this interaction is long ranged, stiffening up the vortex lattice as expressed by the dispersive elastic moduli. The effect of disorder is strongly reduced, resulting in a mean-squared displacement correlator <[u(R,L) - u(0,0)]^2> characterized by a mere logarithmic growth with distance. Finite screening cuts the interaction on the scale of the London penetration depth λ and limits the above behavior to distances R<λ. Using a functional renormalization-group approach, we derive the flow equation for the disorder correlation function and calculate the disorder-averaged mean-squared relative displacement ∝ ln^(2σ)(R/a0). The logarithmic growth (2σ=1) in the perturbative regime at small distances [A. I. Larkin and Yu. N. Ovchinnikov, J. Low Temp. Phys. 34, 409 (1979)] crosses over to a sub-logarithmic growth with 2σ=0.348 at large distances.

  13. Solving the Schroedinger equation for helium atom and its isoelectronic ions with the free iterative complement interaction (ICI) method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakashima, Hiroyuki; Nakatsuji, Hiroshi

    2007-12-14

    The Schroedinger equation was solved very accurately for the helium atom and its isoelectronic ions (Z = 1-10) with the free iterative complement interaction (ICI) method followed by the variational principle. We obtained highly accurate wave functions and energies of the helium atom and its isoelectronic ions. For helium, the calculated energy was -2.903 724 377 034 119 598 311 159 245 194 404 446 696 905 37 a.u., correct to more than 40 digits, and for H^-, it was -0.527 751 016 544 377 196 590 814 566 747 511 383 045 02 a.u. These results prove numerically that with the free ICI method we can calculate the solutions of the Schroedinger equation as accurately as one desires. We examined several types of scaling function g and initial function ψ_0 of the free ICI method. The performance was good when logarithm functions were used in the initial function, because the logarithm function is physically essential for the three-particle collision region. The best performance was obtained when we introduced a new logarithm function containing not only r_1 and r_2 but also r_12 in the same logarithm function.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katti, Amogh; Di Fatta, Giuseppe; Naughton, Thomas

    Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a failure detection and consensus algorithm. This paper presents three novel failure detection and consensus algorithms using gossiping. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in all algorithms the number of gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and perfect synchronization in achieving global consensus. The third approach is a three-phase distributed failure detection and consensus algorithm that provides consistency guarantees even in very large and extreme-scale systems while remaining memory and bandwidth efficient.
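
    The logarithmic scaling reported above can be illustrated with a toy push-gossip simulation (a minimal sketch; the uniform-push protocol, seed, and cycle counting are illustrative assumptions, not the paper's algorithms):

```python
import math
import random

def push_gossip_cycles(n, seed=0):
    """Count gossip cycles until every process knows a rumor.

    Each cycle, every informed process pushes the rumor to one
    uniformly chosen peer (a toy model of gossip dissemination,
    not the paper's failure-detection protocol).
    """
    rng = random.Random(seed)
    informed = {0}
    cycles = 0
    while len(informed) < n:
        targets = {rng.randrange(n) for _ in informed}
        informed |= targets
        cycles += 1
    return cycles

for n in (64, 1024, 16384):
    # Cycle count grows like log2(n) + ln(n), far below linear in n.
    print(n, push_gossip_cycles(n), round(math.log2(n), 1))
```

Since the informed set can at most double per cycle, the cycle count is bounded below by log2(n); randomized push gossip completes in O(log n) cycles with high probability, which is the logarithmic scaling the abstract describes.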

  15. Monitoring and Modeling Performance of Communications in Computational Grids

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael A.; Le, Thuy T.

    2003-01-01

    Computational grids may include many machines located at a number of sites. For efficient use of the grid we need the ability to estimate the time it takes to communicate data between the machines. For dynamic distributed grids it is unrealistic to know the exact parameters of the communication hardware and the current communication traffic, so we must rely on a model of network performance to estimate the message delivery time. Our approach to constructing such a model is based on observing message delivery times for various message sizes and time scales. We record these observations in a database and use them to build a model of the message delivery time. Our experiments show the presence of multiple bands in the logarithm of the message delivery times. These bands represent the multiple paths messages travel between the grid machines and are incorporated in our multiband model.
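
    The banded structure in the logarithm of delivery times can be sketched as follows (the two-path data and the simple gap-based band detector are hypothetical illustrations, not the authors' model):

```python
import math
import random

random.seed(42)

# Toy delivery-time samples: two network paths produce two clusters
# ("bands") in log space (hypothetical values, not the paper's data).
fast = [random.lognormvariate(math.log(0.01), 0.1) for _ in range(200)]
slow = [random.lognormvariate(math.log(1.0), 0.1) for _ in range(200)]
log_times = sorted(math.log10(t) for t in fast + slow)

def bands(values, gap=0.5):
    """Split sorted log-times into bands wherever consecutive values
    jump by more than `gap` decades."""
    groups = [[values[0]]]
    for v in values[1:]:
        if v - groups[-1][-1] > gap:
            groups.append([])
        groups[-1].append(v)
    return groups

print(len(bands(log_times)))  # 2 bands, one per path
```

Working in log space is what makes the bands visible: each path contributes a multiplicative spread around its own typical delivery time, which becomes an additive cluster after the logarithm.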

  16. Generalization and capacity of extensively large two-layered perceptrons.

    PubMed

    Rosen-Zvi, Michal; Engel, Andreas; Kanter, Ido

    2002-09-01

    The generalization ability and storage capacity of a treelike two-layered neural network with a number of hidden units scaling as the input dimension is examined. The mapping from the input to the hidden layer is via Boolean functions; the mapping from the hidden layer to the output is done by a perceptron. The analysis is within the replica framework where an order parameter characterizing the overlap between two networks in the combined space of Boolean functions and hidden-to-output couplings is introduced. The maximal capacity of such networks is found to scale linearly with the logarithm of the number of Boolean functions per hidden unit. The generalization process exhibits a first-order phase transition from poor to perfect learning for the case of discrete hidden-to-output couplings. The critical number of examples per input dimension, alpha(c), at which the transition occurs, again scales linearly with the logarithm of the number of Boolean functions. In the case of continuous hidden-to-output couplings, the generalization error decreases according to the same power law as for the perceptron, with the prefactor being different.

  17. Entanglement dynamics in critical random quantum Ising chain with perturbations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Yichen, E-mail: ychuang@caltech.edu

    We simulate the entanglement dynamics in a critical random quantum Ising chain with generic perturbations using the time-evolving block decimation algorithm. Starting from a product state, we observe super-logarithmic growth of entanglement entropy with time. The numerical result is consistent with the analytical prediction of Vosk and Altman using a real-space renormalization group technique. Highlights: (i) we study the dynamical quantum phase transition between many-body localized phases; (ii) we simulate the dynamics of a very long random spin chain with matrix product states; (iii) we observe numerically super-logarithmic growth of entanglement entropy with time.

  18. Recovery of severely compacted soils in the Mojave Desert, California, USA

    USGS Publications Warehouse

    Webb, R.H.

    2002-01-01

    Many soils in the Mojave Desert are highly vulnerable to soil compaction, particularly when wet, and much of the existing compaction resulted from large-scale military maneuvers in the past. Previous studies indicate that natural recovery of severely compacted desert soils is extremely slow, and some researchers have suggested that subsurface compaction may not recover at all. Poorly sorted soils, particularly those with a loamy sand texture, are most vulnerable to compaction, and these soils are the most common in alluvial fans of the Mojave Desert. Recovery of compacted soil is expected to vary as a function of precipitation amounts, wetting-and-drying cycles, freeze-thaw cycles, and bioturbation, particularly root growth. Compaction recovery, as estimated using penetration depth and bulk density, was measured at 19 sites with 32 site-time combinations, including the former World War II Army sites of Camps Ibis, Granite, Iron Mountain, Clipper, and Essex. Although compaction at these sites was caused by a wide variety of forces, ranging from human trampling to tank traffic, the data do not allow segregation of recovery rates by compaction force. The recovery rate appears to be logarithmic, with the highest rate of change occurring in the first few decades following abandonment. Some higher-elevation sites have completely recovered from soil compaction after 70 years. Using a linear model of recovery, the full recovery time ranges from 92 to 100 years; using a logarithmic model, which asymptotically approaches full recovery, the time required for 85% recovery ranges from 105 to 124 years.
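
    The logarithmic recovery model can be sketched numerically: fit r(t) = a + b·ln(t) by least squares and solve for the 85% recovery time (the coefficients and sampling years below are illustrative assumptions, not the paper's fitted values):

```python
import math

# Hypothetical recovery fractions following the logarithmic form
# r(t) = a + b * ln(t) (coefficients chosen for illustration only).
a_true, b_true = 0.28, 0.12
years = [5, 10, 20, 40, 70]
recovery = [a_true + b_true * math.log(t) for t in years]

# Ordinary least squares of r against ln(t), in closed form.
x = [math.log(t) for t in years]
n = len(x)
xbar, ybar = sum(x) / n, sum(recovery) / n
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, recovery)) / \
    sum((xi - xbar) ** 2 for xi in x)
a = ybar - b * xbar

# Time to reach 85% recovery: solve 0.85 = a + b * ln(t) for t.
t85 = math.exp((0.85 - a) / b)
print(round(t85, 1))
```

Because the logarithmic curve only asymptotically approaches full recovery, a finite threshold such as 85% must be chosen, which is why the abstract quotes an 85% recovery time for this model but a full recovery time for the linear one.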

  19. Tests of nonuniversality of the stock return distributions in an emerging market

    NASA Astrophysics Data System (ADS)

    Mu, Guo-Hua; Zhou, Wei-Xing

    2010-12-01

    There is convincing evidence showing that the probability distributions of stock returns in mature markets exhibit power-law tails and that both the positive and negative tails conform to the inverse cubic law. This supports the possibility that the tail exponents are universal, at least for mature markets, in the sense that they do not depend on stock market, industry sector, or market capitalization. We investigate the distributions of intraday returns at different time scales (Δt = 1, 5, 15, and 30 min) of all the A-share stocks traded in the Chinese stock market, which is the largest emerging market in the world. We find that the returns can be well fitted by the q-Gaussian distribution and that the tails have power-law relaxations with the exponents increasing with Δt and lying well outside the Lévy stable regime for individual stocks. We provide statistically significant evidence showing that, at small time scales Δt < 15 min, the exponents logarithmically decrease with the turnover rate and increase with the market capitalization. When Δt > 15 min, no conclusive evidence is found for a possible dependence of the tail exponent on the turnover rate or the market capitalization. Our findings indicate that the intraday return distributions at small time scales are not universal in emerging stock markets but might be universal at large time scales.

  20. Finite-time singularities in the dynamics of hyperinflation in an economy.

    PubMed

    Szybisz, Martín A; Szybisz, Leszek

    2009-08-01

    The dynamics of hyperinflation episodes is studied by applying a theoretical approach based on collective "adaptive inflation expectations" with a positive nonlinear feedback proposed in the literature. In this description it is assumed that the growth rate r(t) of the logarithmic price changes with a velocity obeying a power law, which leads to a finite-time singularity at a critical time t_c. By revising that model we found that there are, in fact, two types of singular solutions for the logarithmic price p(t). One is the already reported form p(t) ~ (t_c - t)^(-α) (with α > 0), and the other exhibits a logarithmic divergence, p(t) ~ ln[1/(t_c - t)]. The singularity is a signature of an economic crash. In the present work we express p(t) explicitly in terms of the parameters introduced throughout the formulation, avoiding the use of any combination of them defined in the original paper. This procedure allows us to examine the time series of r(t) and p(t) simultaneously, performing a linked error analysis of the determined parameters. For the first time this approach is applied to the very extreme historical hyperinflations that occurred in Greece (1941-1944) and Yugoslavia (1991-1994). The case of Greece is compatible with a logarithmic singularity. The study is completed with an analysis of the hyperinflation spiral currently experienced in Zimbabwe; according to our results, an economic crash in that country is imminent. The robustness of the results to changes in the initial time of the series and the differences with respect to a linear feedback are discussed.
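
    The two singular forms can be told apart numerically: only the logarithmic form is linear in ln[1/(t_c - t)]. A minimal sketch with illustrative parameters (not fitted to any historical series):

```python
import math

t_c = 100.0
ts = [t_c - dt for dt in (20, 10, 5, 2, 1, 0.5)]
x = [math.log(1.0 / (t_c - t)) for t in ts]

# Synthetic log prices with a logarithmic finite-time singularity,
# p(t) = A + B*ln(1/(t_c - t)) (illustrative parameters A=2, B=0.8)...
p_log = [2.0 + 0.8 * xi for xi in x]
# ...and with a power-law singularity p(t) = A + B*(t_c - t)**(-0.5).
p_pow = [2.0 + 0.8 * (t_c - t) ** -0.5 for t in ts]

def pearson(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

# p plotted against ln(1/(t_c - t)) is a straight line only for the
# logarithmic singularity; the power-law form curves away from it.
print(pearson(x, p_log), pearson(x, p_pow))
```

In practice the critical time t_c is itself a fitted parameter, so this linearity check would be run inside the same error analysis that determines t_c and α.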

  1. Causal analysis of self-sustaining processes in the logarithmic layer of wall-bounded turbulence

    NASA Astrophysics Data System (ADS)

    Bae, H. J.; Encinar, M. P.; Lozano-Durán, A.

    2018-04-01

    Despite the large amount of information provided by direct numerical simulations of turbulent flows, their underlying dynamics remain elusive even in the simplest and most canonical configurations. Most common approaches to investigating turbulence phenomena do not provide clear causal inference between events, which is essential for determining the dynamics of self-sustaining processes. In the present work, we examine the causal interactions between streaks, rolls, and mean shear in the logarithmic layer of a minimal turbulent channel flow. Causality between structures is assessed in a non-intrusive manner by transfer entropy, i.e., how much the uncertainty of one structure is reduced by knowing the past states of the others. We choose to represent streaks by the first Fourier modes of the streamwise velocity, while rolls are defined by the wall-normal and spanwise velocity modes. The results show that the process is mainly unidirectional rather than cyclic, and that the log-layer motions are sustained by extracting energy from the mean shear, which controls the dynamics and time scales. The well-known lift-up effect is also identified, but shown to be of secondary importance in the causal network between shear, streaks, and rolls.

  2. Spinning solutions in general relativity with infinite central density

    NASA Astrophysics Data System (ADS)

    Flammer, P. D.

    2018-05-01

    This paper presents general relativistic numerical simulations of uniformly rotating polytropes. Equations are developed using MSQI coordinates, but taking the logarithm of the radial coordinate. The result is a set of relatively simple elliptic differential equations. Due to the logarithmic scale, we can resolve solutions with near-singular mass distributions near their center, while the solution domain extends many orders of magnitude beyond the radius of the distribution (to connect with flat space-time). Rotating solutions are found with very high central energy densities for a range of adiabatic exponents. Analytically, assuming the pressure is proportional to the energy density (which is true for polytropes in the limit of large energy density), we determine the small-radius behavior of the metric potentials and energy density. This small-radius behavior agrees well with that of the large-central-density numerical results, lending confidence to our numerical approach. We compare our results with rotating solutions available in the literature and find good agreement. We also study the stability of spherical solutions: instability sets in at the first maximum in mass versus central energy density; this too is consistent with results in the literature, further supporting the numerical approach.

  3. Is working memory stored along a logarithmic timeline? Converging evidence from neuroscience, behavior and models.

    PubMed

    Singh, Inder; Tiganj, Zoran; Howard, Marc W

    2018-04-23

    A growing body of evidence suggests that short-term memory stores not only the identity of recently experienced stimuli but also information about when they were presented. This representation of 'what' happened 'when' constitutes a neural timeline of the recent past. Behavioral results suggest that people can sequentially access memories of the recent past, as if they were stored along a timeline to which attention is sequentially directed. In the short-term judgment of recency (JOR) task, the time to choose between two probe items depends on the recency of the more recent probe but not on the recency of the more remote probe. This pattern of results suggests a backward self-terminating search model. We review recent neural evidence from the macaque lateral prefrontal cortex (lPFC) (Tiganj, Cromer, Roy, Miller, & Howard, in press) and behavioral evidence from the human JOR task (Singh & Howard, 2017) bearing on this question. Notably, both lines of evidence suggest that the timeline is logarithmically compressed, as predicted by Weber-Fechner scaling. Taken together, these findings provide an integrative perspective on the temporal organization and neural underpinnings of short-term memory. Copyright © 2018 Elsevier Inc. All rights reserved.
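
    A logarithmically compressed timeline can be sketched as a set of nodes whose preferred past times are geometrically spaced, so covering an interval of past time needs only logarithmically many nodes; the spacing ratio below is an illustrative assumption, not a value from the reviewed studies:

```python
import math

def log_timeline(t_min, t_max, ratio=1.5):
    """Node times of a logarithmically compressed timeline: each node's
    preferred past time is a constant multiple of the previous one
    (Weber-Fechner spacing; the ratio is illustrative)."""
    nodes = [t_min]
    while nodes[-1] < t_max:
        nodes.append(nodes[-1] * ratio)
    return nodes

nodes = log_timeline(0.1, 100.0)
# The node count grows with log(t_max / t_min), not with t_max itself,
# and the relative spacing (a constant Weber fraction) never changes.
print(len(nodes))
```

This geometric spacing is what makes recency judgments scale with relative, rather than absolute, elapsed time: doubling both probes' ages leaves their separation on the timeline unchanged.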

  4. Exact Asymptotics of the Freezing Transition of a Logarithmically Correlated Random Energy Model

    NASA Astrophysics Data System (ADS)

    Webb, Christian

    2011-12-01

    We consider a logarithmically correlated random energy model, namely a model for directed polymers on a Cayley tree, which was introduced by Derrida and Spohn. We prove asymptotic properties of a generating function of the partition function of the model by studying a discrete-time analog of the KPP equation, thus translating Bramson's work on the KPP equation to the discrete-time case. We also discuss connections to the extreme value statistics of a branching random walk and a rescaled multiplicative cascade measure beyond the critical point.

  5. Linear and Logarithmic Speed-Accuracy Trade-Offs in Reciprocal Aiming Result from Task-Specific Parameterization of an Invariant Underlying Dynamics

    ERIC Educational Resources Information Center

    Bongers, Raoul M.; Fernandez, Laure; Bootsma, Reinoud J.

    2009-01-01

    The authors examined the origins of linear and logarithmic speed-accuracy trade-offs from a dynamic systems perspective on motor control. In each experiment, participants performed 2 reciprocal aiming tasks: (a) a velocity-constrained task in which movement time was imposed and accuracy had to be maximized, and (b) a distance-constrained task in…

  6. Scaling laws for impact fragmentation of spherical solids.

    PubMed

    Timár, G; Kun, F; Carmona, H A; Herrmann, H J

    2012-07-01

    We investigate the impact fragmentation of spherical solid bodies made of heterogeneous brittle materials by means of a discrete element model. Computer simulations are carried out for four different system sizes varying the impact velocity in a broad range. We perform a finite size scaling analysis to determine the critical exponents of the damage-fragmentation phase transition and deduce scaling relations in terms of radius R and impact velocity v(0). The scaling analysis demonstrates that the exponent of the power law distributed fragment mass does not depend on the impact velocity; the apparent change of the exponent predicted by recent simulations can be attributed to the shifting cutoff and to the existence of unbreakable discrete units. Our calculations reveal that the characteristic time scale of the breakup process has a power law dependence on the impact speed and on the distance from the critical speed in the damaged and fragmented states, respectively. The total amount of damage is found to have a similar behavior, which is substantially different from the logarithmic dependence on the impact velocity observed in two dimensions.

  7. Migdal's theorem and electron-phonon vertex corrections in Dirac materials

    NASA Astrophysics Data System (ADS)

    Roy, Bitan; Sau, Jay D.; Das Sarma, S.

    2014-04-01

    Migdal's theorem plays a central role in the physics of electron-phonon interactions in metals and semiconductors, and has been extensively studied theoretically for parabolic-band electronic systems in three, two, and one dimensions over the last fifty years. In the current work, we theoretically study the relevance of Migdal's theorem in graphene and Weyl semimetals, which are examples of 2D and 3D Dirac materials, respectively, with linear and chiral band dispersion. Our work also applies to 2D and 3D topological insulator systems. In Fermi liquids, the renormalization of the electron-phonon vertex scales as the ratio η = vs/vF of the sound velocity to the Fermi velocity, which is typically a small quantity. In two- and three-dimensional quasirelativistic systems, such as undoped graphene and Weyl semimetals, the one-loop electron-phonon vertex renormalization, which also scales as η as η → 0, is, however, enhanced by an ultraviolet logarithmically divergent correction arising from the linear, chiral Dirac band dispersion. This enhancement of the electron-phonon vertex can be significantly softened by the logarithmic increase of the Fermi velocity arising from the long-range Coulomb interaction, and therefore the electron-phonon vertex correction does not have a logarithmic divergence at low energy. Otherwise, the Coulomb interaction does not lead to any additional renormalization of the electron-phonon vertex. Therefore, electron-phonon vertex corrections in two- and three-dimensional Dirac fermionic systems scale as vs/vF0, where vF0 is the bare Fermi velocity, and are small when vs ≪ vF0. These results, although explicitly derived for intrinsic undoped systems, should hold even when the chemical potential is tuned away from the Dirac points.

  8. An Integrated Knowledge Framework to Characterize and Scaffold Size and Scale Cognition (FS2C)

    NASA Astrophysics Data System (ADS)

    Magana, Alejandra J.; Brophy, Sean P.; Bryan, Lynn A.

    2012-09-01

    Size and scale cognition is a critical ability associated with reasoning about concepts across the disciplines of science, technology, engineering, and mathematics. As such, researchers and educators have identified the need for young learners and their educators to become scale-literate. Informed by the developmental psychology literature and recent findings in nanoscale science and engineering education, we propose an integrated knowledge framework for characterizing and scaffolding size and scale cognition, called the FS2C framework. Five ad hoc assessment tasks were designed, informed by the FS2C framework, with the goal of identifying participants' understandings of size and scale. Findings revealed participants' difficulty in discerning the sizes of different microscale and nanoscale objects and a low level of sophistication in identifying scale worlds. Results also showed that the larger the difference between the sizes of two objects, the more difficult it was for participants to identify how many times one object is bigger or smaller than the other. Participants likewise had difficulty estimating the approximate sizes of sub-macroscopic objects as well as the sizes of very large objects. Accurately locating objects on a logarithmic scale was also challenging for participants.

  9. Renormalization of dijet operators at order 1/Q² in soft-collinear effective theory

    NASA Astrophysics Data System (ADS)

    Goerke, Raymond; Inglis-Whalen, Matthew

    2018-05-01

    We make progress towards resummation of power-suppressed logarithms in dijet event shapes such as thrust, which have the potential to improve high-precision fits for the value of the strong coupling constant. Using a newly developed formalism for Soft-Collinear Effective Theory (SCET), we identify and compute the anomalous dimensions of all the operators that contribute to event shapes at order 1/Q². These anomalous dimensions are necessary to resum power-suppressed logarithms in dijet event shape distributions, although an additional matching step and running of observable-dependent soft functions will be necessary to complete the resummation. In contrast to standard SCET, the new formalism does not make reference to modes or λ-scaling. Since the formalism does not distinguish between collinear and ultrasoft degrees of freedom at the matching scale, fewer subleading operators are required when compared to recent similar work. We demonstrate how the overlap subtraction prescription extends to these subleading operators.

  10. Temperature fluctuation in Rayleigh-Bénard convection: Logarithmic vs power-law

    NASA Astrophysics Data System (ADS)

    He, Yu-Hao; Xia, Ke-Qing

    2016-11-01

    We present an experimental measurement of the rms temperature (σT) profile in two regions inside a large-aspect-ratio (Γ = 4.2) rectangular Rayleigh-Bénard convection cell. The Rayleigh number (Ra) ranges from 3.2×10⁷ to 1.9×10⁸ at fixed Prandtl number (Pr = 4.34). It is found that, in one region, where the boundary layer is sheared by a large-scale wind, σT versus the distance (z) above the bottom plate obeys a power law over one decade, whereas in another region, where plumes concentrate and move upward (the plume-ejection region), the profile of σT has a logarithmic dependence on z. When normalized by a typical temperature scale θ*, the profiles of σT at different Rayleigh numbers collapse onto a single curve, indicating universality of the σT profile with respect to Ra. This work is supported by the Hong Kong Research Grant Council under Grant Number N_CUHK437/15.

  11. Stochastic nature of series of waiting times.

    PubMed

    Anvari, Mehrnaz; Aghamohammadi, Cina; Dashti-Naserabadi, H; Salehi, E; Behjat, E; Qorbani, M; Nezhad, M Khazaei; Zirak, M; Hadjihosseini, Ali; Peinke, Joachim; Tabar, M Reza Rahimi

    2013-06-01

    Although fluctuations in the waiting time series have been studied for a long time, some important issues such as its long-range memory and its stochastic features in the presence of nonstationarity have so far remained unstudied. Here we find that the "waiting times" series for a given increment level have long-range correlations with Hurst exponents belonging to the interval 1/2

  12. Measuring aging rates of mice subjected to caloric restriction and genetic disruption of growth hormone signaling

    PubMed Central

    Koopman, Jacob J.E.; van Heemst, Diana; van Bodegom, David; Bonkowski, Michael S.; Sun, Liou Y.; Bartke, Andrzej

    2016-01-01

    Caloric restriction and genetic disruption of growth hormone signaling have been shown to counteract aging in mice. The effects of these interventions on aging are examined through age-dependent survival or through the increase in age-dependent mortality rates on a logarithmic scale fitted to the Gompertz model. However, these methods have limitations that impede a fully comprehensive disclosure of these effects. Here we examine the effects of these interventions on murine aging through the increase in age-dependent mortality rates on a linear scale without fitting them to a model like the Gompertz model. Whereas these interventions negligibly and inconsistently affected the aging rates when examined through the age-dependent mortality rates on a logarithmic scale, they caused the aging rates to increase at higher ages and to higher levels when examined through the age-dependent mortality rates on a linear scale. These results add to the debate over whether these interventions postpone or slow aging and to the understanding of the mechanisms by which they affect aging. Since different methods yield different results, it is worthwhile to compare their results in future research to obtain further insights into the effects of dietary, genetic, and other interventions on the aging of mice and other species. PMID:26959761
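
    The contrast between the two scales can be reproduced with a toy Gompertz hazard μ(t) = A·exp(B·t) (parameters illustrative, not fitted to the mouse data): the slope of ln μ is constant in age, while the slope of μ itself keeps accelerating.

```python
import math

# Gompertz hazard mu(t) = A * exp(B * t) with illustrative parameters.
A, B = 1e-4, 0.1
ages = list(range(0, 30))
mu = [A * math.exp(B * t) for t in ages]

# On a logarithmic scale the aging rate (slope of ln mu) is constant...
log_slopes = [math.log(mu[i + 1]) - math.log(mu[i]) for i in range(len(mu) - 1)]
# ...while on a linear scale the slope of mu itself grows with age.
lin_slopes = [mu[i + 1] - mu[i] for i in range(len(mu) - 1)]

print(max(log_slopes) - min(log_slopes))  # ~0: constant on log scale
print(lin_slopes[-1] / lin_slopes[0])     # large: accelerating on linear scale
```

This is why an intervention can look like it leaves the "aging rate" unchanged on a logarithmic scale (same B) while still producing steeply increasing mortality rates at high ages on a linear scale.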

  13. Measuring aging rates of mice subjected to caloric restriction and genetic disruption of growth hormone signaling.

    PubMed

    Koopman, Jacob J E; van Heemst, Diana; van Bodegom, David; Bonkowski, Michael S; Sun, Liou Y; Bartke, Andrzej

    2016-03-01

    Caloric restriction and genetic disruption of growth hormone signaling have been shown to counteract aging in mice. The effects of these interventions on aging are examined through age-dependent survival or through the increase in age-dependent mortality rates on a logarithmic scale fitted to the Gompertz model. However, these methods have limitations that impede a fully comprehensive disclosure of these effects. Here we examine the effects of these interventions on murine aging through the increase in age-dependent mortality rates on a linear scale without fitting them to a model like the Gompertz model. Whereas these interventions negligibly and inconsistently affected the aging rates when examined through the age-dependent mortality rates on a logarithmic scale, they caused the aging rates to increase at higher ages and to higher levels when examined through the age-dependent mortality rates on a linear scale. These results add to the debate over whether these interventions postpone or slow aging and to the understanding of the mechanisms by which they affect aging. Since different methods yield different results, it is worthwhile to compare their results in future research to obtain further insights into the effects of dietary, genetic, and other interventions on the aging of mice and other species.

  14. Logarithmic Sobolev Inequalities on Path Spaces Over Riemannian Manifolds

    NASA Astrophysics Data System (ADS)

    Hsu, Elton P.

    Let Wo(M) be the space of paths of unit time length on a connected, complete Riemannian manifold M such that γ(0) = o, a fixed point on M, and let ν be the Wiener measure on Wo(M) (the law of Brownian motion on M starting at o). If the Ricci curvature is bounded by c, then the following logarithmic Sobolev inequality holds:

  15. Resumming double logarithms in the QCD evolution of color dipoles

    DOE PAGES

    Iancu, E.; Madrigal, J. D.; Mueller, A. H.; ...

    2015-05-01

    The higher-order perturbative corrections, beyond leading logarithmic accuracy, to the BFKL evolution in QCD at high energy are well known to suffer from a severe lack-of-convergence problem, due to radiative corrections enhanced by double collinear logarithms. Via an explicit calculation of Feynman graphs in light-cone (time-ordered) perturbation theory, we show that the corrections enhanced by double logarithms (either energy-collinear or double collinear) are associated with soft gluon emissions which are strictly ordered in lifetime. These corrections can be resummed to all orders by solving an evolution equation which is non-local in rapidity. This equation can be equivalently rewritten in local form, but with modified kernel and initial conditions, which resum double collinear logs to all orders. We extend this resummation to the next-to-leading-order BFKL and BK equations. The first numerical studies of the collinearly improved BK equation demonstrate the essential role of the resummation in both stabilizing and slowing down the evolution.

  16. Dissipative quantum trajectories in complex space: Damped harmonic oscillator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chou, Chia-Chun, E-mail: ccchou@mx.nthu.edu.tw

    Dissipative quantum trajectories in complex space are investigated in the framework of the logarithmic nonlinear Schrödinger equation, which provides a phenomenological description for dissipative quantum systems. Substituting the wave function expressed in terms of the complex action into the complex-extended logarithmic nonlinear Schrödinger equation, we derive the complex quantum Hamilton-Jacobi equation including the dissipative potential. It is shown that dissipative quantum trajectories satisfy a quantum Newtonian equation of motion in complex space with a friction force. Exact dissipative complex quantum trajectories are analyzed for the wave and soliton-like solutions to the logarithmic nonlinear Schrödinger equation for the damped harmonic oscillator. These trajectories converge to the equilibrium position as time evolves. It is shown that dissipative complex quantum trajectories for the wave and soliton-like solutions are identical to dissipative complex classical trajectories for the damped harmonic oscillator. This study develops a theoretical framework for dissipative quantum trajectories in complex space.

  17. Volatilities, Traded Volumes, and Price Increments in Derivative Securities

    NASA Astrophysics Data System (ADS)

    Kim, Kyungsik; Lim, Gyuchang; Kim, Soo Yong; Scalas, Enrico

    2007-03-01

    We apply detrended fluctuation analysis (DFA) to the statistics of Korean treasury bond (KTB) futures, from which the logarithmic increments, volatilities, and traded volumes are estimated over a specific time lag. In our case, the logarithmic increment of futures prices has no long-memory property, while the volatility and the traded volume do exhibit long memory. To determine whether the volatility clustering is due to an inherent higher-order correlation not detected by applying the DFA directly to the logarithmic increments of the KTB futures, it is important to shuffle the original tick data of futures prices and to generate a geometric Brownian random walk with the same mean and standard deviation. Comparison of the three tick data sets shows that the higher-order correlation inherent in the logarithmic increments produces the volatility clustering. In particular, the result of the DFA on volatilities and traded volumes may support the hypothesis of price changes.
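
    The shuffling test described above can be sketched with synthetic data (a toy regime-switching return series stands in for the KTB tick data; all parameters are illustrative): volatility clustering shows up as autocorrelation in the absolute returns and disappears once the series is shuffled.

```python
import random

random.seed(7)

# Synthetic log-price increments with volatility clustering: long calm
# and volatile stretches (a stand-in for real tick data, not KTB data).
returns = []
for _ in range(200):
    sigma = random.choice([0.2, 2.0])
    returns += [random.gauss(0.0, sigma) for _ in range(50)]

def acf1(xs):
    """Lag-1 autocorrelation of a series."""
    n = len(xs)
    m = sum(xs) / n
    num = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(n - 1))
    den = sum((x - m) ** 2 for x in xs)
    return num / den

shuffled = returns[:]
random.shuffle(shuffled)

# |returns| are strongly autocorrelated (clustering); shuffling destroys
# this higher-order correlation while preserving mean and variance.
print(acf1([abs(r) for r in returns]), acf1([abs(r) for r in shuffled]))
```

Shuffling preserves the one-point distribution exactly, so any long-memory signal that survives in the original but not the shuffled series must come from higher-order temporal correlations, which is the logic of the comparison in the abstract.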

  18. Volatilities, traded volumes, and the hypothesis of price increments in derivative securities

    NASA Astrophysics Data System (ADS)

    Lim, Gyuchang; Kim, SooYong; Scalas, Enrico; Kim, Kyungsik

    2007-08-01

    A detrended fluctuation analysis (DFA) is applied to the statistics of Korean treasury bond (KTB) futures, from which the logarithmic increments, volatilities, and traded volumes are estimated over a specific time lag. In this study, the logarithmic increment of futures prices has no long-memory property, while the volatility and the traded volume exhibit long memory. To determine whether the volatility clustering is due to an inherent higher-order correlation not detected by the direct application of the DFA to the logarithmic increments of KTB futures, it is important to shuffle the original tick data of futures prices and to generate a geometric Brownian random walk with the same mean and standard deviation. It was found from a comparison of the three tick data sets that the higher-order correlation inherent in the logarithmic increments leads to volatility clustering. In particular, the result of the DFA on volatilities and traded volumes can be supported by the hypothesis of price changes.

  19. Stochastic nature of series of waiting times

    NASA Astrophysics Data System (ADS)

    Anvari, Mehrnaz; Aghamohammadi, Cina; Dashti-Naserabadi, H.; Salehi, E.; Behjat, E.; Qorbani, M.; Khazaei Nezhad, M.; Zirak, M.; Hadjihosseini, Ali; Peinke, Joachim; Tabar, M. Reza Rahimi

    2013-06-01

    Although fluctuations in the waiting time series have been studied for a long time, some important issues such as its long-range memory and its stochastic features in the presence of nonstationarity have so far remained unstudied. Here we find that the “waiting times” series for a given increment level have long-range correlations with Hurst exponents belonging to the interval 1/2

  20. VizieR Online Data Catalog: Nebular [OIII] collision strengths - SS3 (Storey+, 2015)

    NASA Astrophysics Data System (ADS)

    Storey, P. J.; Sochi, T.

    2015-03-01

    The data set consists of ten Upsilon files labeled 'up_mn.dat' and ten Downsilon files labeled 'do_mn.dat' where m=1,2,3,4 and n=2,3,4,5 with m

  1. Differential depuration of poliovirus, Escherichia coli, and a coliphage by the common mussel, Mytilus edulis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Power, U.F.; Collins, J.K.

    1989-06-01

    The elimination of sewage effluent-associated poliovirus, Escherichia coli, and a 22-nm icosahedral coliphage by the common mussel, Mytilus edulis, was studied. Both laboratory- and commercial-scale recirculating, UV depuration systems were used in this study. In the laboratory system, the logarithms of the poliovirus, E. coli, and coliphage levels were reduced by 1.86, 2.9, and 2.16, respectively, within 52 h of depuration. The relative patterns and rates of elimination of the three organisms suggest that they are eliminated from mussels by different mechanisms during depuration under suitable conditions. Poliovirus was not included in experiments undertaken in the commercial-scale depuration system. The differences in the relative rates and patterns of elimination were maintained for E. coli and coliphage in this system, with the logarithm of the E. coli levels being reduced by 3.18 and the logarithm of the coliphage levels being reduced by 0.87. The results from both depuration systems suggest that E. coli is an inappropriate indicator of the efficiency of virus elimination during depuration. The coliphage used appears to be a more representative indicator. Depuration under stressful conditions appeared to have a negligible effect on poliovirus and coliphage elimination rates from mussels. However, the rate and pattern of E. coli elimination were dramatically affected by these conditions. Therefore, monitoring E. coli counts might prove useful in ensuring that mussels are functioning well during depuration.

  2. The natural logarithm transforms the abbreviated injury scale and improves accuracy scoring.

    PubMed

    Wang, Xu; Gu, Xiaoming; Zhang, Zhiliang; Qiu, Fang; Zhang, Keming

    2012-11-01

    The Injury Severity Score (ISS) and the New Injury Severity Score (NISS) are widely used for anatomic severity assessments, but they do not display a linear relation to mortality. The mortality rates are significantly different between pairs of Abbreviated Injury Scale (AIS) triplets that generate the same ISS/NISS total. The Logarithm Injury Severity Score (LISS) is defined by transforming each AIS severity score (1-6): the natural logarithm of the score is raised to the power 5.53 and multiplied by 1.7987, and the three most severe injuries (i.e., highest AIS), regardless of body region, are then summed. LISS values were calculated for every patient in three large independent data sets: 3,784, 4,436, and 4,018 patients treated over a six-year period at Class A tertiary comprehensive hospitals in China. The power of LISS to predict mortality was then compared with previously calculated NISS values for the same patients in each of the three data sets. We found that LISS is more predictive of survival (Hangzhou: receiver operating characteristic (ROC): NISS=0.931, LISS=0.949, p=0.006; similarly for Zhejiang and Shenyang: ROC NISS vs. LISS, p<0.05). Moreover, LISS provides a better fit throughout its entire range of prediction (Hosmer-Lemeshow statistic for Hangzhou: NISS=15.76, p=0.027; LISS=13.79, p=0.055; similarly for Zhejiang and Shenyang). LISS should be used as the standard summary measure of human trauma.
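Reading the definition above as transformed score = 1.7987 · (ln AIS)^5.53 (one plausible parse of the abstract; the published formula may differ), a LISS calculation can be sketched as:

```python
import math

def liss(ais_scores, coef=1.7987, power=5.53):
    """Hypothetical LISS: transform each AIS score (1-6) as
    coef * ln(score)**power, then sum the three largest transformed
    values regardless of body region. The transform is one reading of
    the abstract, not necessarily the published formula."""
    transformed = sorted((coef * math.log(s) ** power for s in ais_scores),
                         reverse=True)
    return sum(transformed[:3])
```

Note that the transform is strongly convex in AIS, so a single severe injury dominates the score far more than under the squared-AIS weighting of ISS/NISS.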

  3. Criticality in a non-equilibrium, driven system: charged colloidal rods (fd-viruses) in electric fields.

    PubMed

    Kang, K; Dhont, J K G

    2009-11-01

    Experiments on suspensions of charged colloidal rods (fd-virus particles) in external electric fields are performed, which show that a non-equilibrium critical point can be identified. Several transition lines of field-induced phases and states meet at this point, and it is shown that there are a length scale and a time scale that diverge at the non-equilibrium critical point. The off-critical and critical behavior is characterized, with both power-law and logarithmic divergences. These experiments show that features analogous to the classical, critical divergence of correlation lengths and relaxation times in equilibrium systems are also exhibited by driven systems that are far out of equilibrium, related to phases/states that do not exist in the absence of the external field.

  4. The Time-Dependent Wavelet Spectrum of HH 1 and 2

    NASA Astrophysics Data System (ADS)

    Raga, A. C.; Reipurth, B.; Esquivel, A.; González-Gómez, D.; Riera, A.

    2018-04-01

    We have calculated the wavelet spectra of four epochs (spanning ≈20 yr) of Hα and [S II] HST images of HH 1 and 2. From these spectra we calculated the distribution functions of the (angular) radii of the emission structures. We found that the size distributions have maxima (corresponding to the characteristic sizes of the observed structures) with radii that are logarithmically spaced, with factors of ≈2-3 between successive peaks. The positions of these peaks generally showed small shifts towards larger sizes as a function of time. This result indicates that the structures of HH 1 and 2 have a general expansion (seen at all scales), and/or are the result of a sequence of merging events resulting in the formation of knots with larger characteristic sizes.

  5. Dominance, Information, and Hierarchical Scaling of Variance Space.

    ERIC Educational Resources Information Center

    Ceurvorst, Robert W.; Krus, David J.

    1979-01-01

    A method for computation of dominance relations and for construction of their corresponding hierarchical structures is presented. The link between dominance and variance allows integration of the mathematical theory of information with least squares statistical procedures without recourse to logarithmic transformations of the data. (Author/CTM)

  6. Review of cost versus scale: water and wastewater treatment and reuse processes.

    PubMed

    Guo, Tianjiao; Englehardt, James; Wu, Tingting

    2014-01-01

    The US National Research Council recently recommended direct potable water reuse (DPR), or potable water reuse without environmental buffer, for consideration to address US water demand. However, conveyance of wastewater and water to and from centralized treatment plants consumes on average four times the energy of treatment in the USA, and centralized DPR would further require upgradient distribution of treated water. Therefore, information on the cost of unit treatment processes potentially useful for DPR versus system capacity was reviewed, converted to constant 2012 US dollars, and synthesized in this work. A logarithmic variant of the Williams Law cost function was found applicable over orders of magnitude of system capacity, for the subject processes: activated sludge, membrane bioreactor, coagulation/flocculation, reverse osmosis, ultrafiltration, peroxone and granular activated carbon. Results are demonstrated versus 10 DPR case studies. Because economies of scale found for capital equipment are counterbalanced by distribution/collection network costs, further study of the optimal scale of distributed DPR systems is suggested.

  7. Taylor’s Law of Temporal Fluctuation Scaling in Stock Illiquidity

    NASA Astrophysics Data System (ADS)

    Cai, Qing; Xu, Hai-Chuan; Zhou, Wei-Xing

    2016-08-01

    Taylor’s law of temporal fluctuation scaling, variance ∼ a(mean)^b, is ubiquitous in natural and social sciences. We report for the first time convincing evidence of a solid temporal fluctuation scaling law in stock illiquidity by investigating the mean-variance relationship of the high-frequency illiquidity of almost all stocks traded on the Shanghai Stock Exchange (SHSE) and the Shenzhen Stock Exchange (SZSE) during the period from 1999 to 2011. Taylor’s law holds for A-share markets (SZSE Main Board, SZSE Small & Mediate Enterprise Board, SZSE Second Board, and SHSE Main Board) and B-share markets (SZSE B-share and SHSE B-share). We find that the scaling exponent b is greater than 2 for the A-share markets and less than 2 for the B-share markets. We further unveil that Taylor’s law holds for stocks in 17 industry categories, in 28 industrial sectors, and in 31 provinces and direct-controlled municipalities, with the majority of scaling exponents b ∈ (2, 3). We also investigate the Δt-min illiquidity and find that the scaling exponent b(Δt) increases logarithmically for small Δt values and decreases rapidly to a stable level.
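The exponent b in variance ∼ a(mean)^b is conventionally estimated by regressing log variance on log mean across groups of samples; a generic sketch (not the authors' code):

```python
import numpy as np

def taylor_exponent(groups):
    """Estimate b in Taylor's law, variance ~ a * mean**b, by ordinary
    least squares on log variance vs. log mean across groups."""
    means = np.array([np.mean(g) for g in groups])
    variances = np.array([np.var(g) for g in groups])
    b, _ = np.polyfit(np.log(means), np.log(variances), 1)
    return b
```

As a sanity check, Poisson-distributed counts (variance equal to mean) give b ≈ 1; the b > 2 reported for A-share illiquidity means fluctuations grow faster than the mean.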

  8. Are infant mortality rate declines exponential? The general pattern of 20th century infant mortality rate decline

    PubMed Central

    Bishai, David; Opuni, Marjorie

    2009-01-01

    Background Time trends in infant mortality for the 20th century show a curvilinear pattern that most demographers have assumed to be approximately exponential. Virtually all cross-country comparisons and time series analyses of infant mortality have studied the logarithm of infant mortality to account for the curvilinear time trend. However, there is no evidence that the log transform is the best fit for infant mortality time trends. Methods We use maximum likelihood methods to determine the best transformation to fit time trends in infant mortality reduction in the 20th century and to assess the importance of the proper transformation in identifying the relationship between infant mortality and gross domestic product (GDP) per capita. We apply the Box-Cox transform to infant mortality rate (IMR) time series from 18 countries to identify the best-fitting value of λ for each country and for the pooled sample. For each country, we test the value of λ against the null that λ = 0 (logarithmic model) and against the null that λ = 1 (linear model). We then demonstrate the importance of selecting the proper transformation by comparing regressions of ln(IMR) on same year GDP per capita against Box-Cox transformed models. Results Based on chi-squared test statistics, infant mortality decline is best described as an exponential decline only for the United States. For the remaining 17 countries we study, IMR decline is neither best modelled as logarithmic nor as a linear process. Imposing a logarithmic transform on IMR can lead to bias in fitting the relationship between IMR and GDP per capita. Conclusion The assumption that IMR declines are exponential is enshrined in the Preston curve and in nearly all cross-country as well as time series analyses of IMR data since Preston's 1975 paper, but this assumption is seldom correct. Statistical analyses of IMR trends should assess the robustness of findings to transformations other than the log transform. PMID:19698144
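The maximum-likelihood Box-Cox fit described above can be sketched generically (λ near 0 favors the logarithmic model, λ near 1 the linear model); this is an illustration, not the authors' code:

```python
import numpy as np

def boxcox(x, lam):
    """Box-Cox transform; lam = 0 is the logarithm, lam = 1 is x - 1."""
    return np.log(x) if abs(lam) < 1e-12 else (x ** lam - 1.0) / lam

def boxcox_loglik(x, lam):
    """Profile log-likelihood of a normal model for the transformed data."""
    y = boxcox(x, lam)
    return -0.5 * len(x) * np.log(y.var()) + (lam - 1.0) * np.log(x).sum()

def best_lambda(x, grid=None):
    """Grid-search maximum-likelihood estimate of the Box-Cox parameter."""
    x = np.asarray(x, dtype=float)
    if grid is None:
        grid = np.linspace(-2.0, 2.0, 401)
    return max(grid, key=lambda lam: boxcox_loglik(x, lam))
```

Applied to lognormally distributed data, the fitted λ comes out near 0, i.e., the log transform is recovered; testing λ against 0 and 1 then mirrors the paper's comparison of logarithmic and linear models.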

  9. The Logarithmic Tail of Néel Walls

    NASA Astrophysics Data System (ADS)

    Melcher, Christof

    We study the multiscale problem of a parametrized planar 180° rotation of magnetization states in a thin ferromagnetic film. In an appropriate scaling, and when the film thickness is comparable to the Bloch line width, the underlying variational principle takes a form in which the reduced stray-field operator approximates (-Δ)^{1/2} as the quality factor Q tends to zero. We show that the associated Néel wall profile u exhibits a very long logarithmic tail. The proof relies on limiting elliptic regularity methods on the basis of the associated Euler-Lagrange equation and on symmetrization arguments on the basis of the variational principle. Finally, we study the renormalized limit behavior as Q tends to zero.

  10. Bioconcentration of lipophilic compounds by some aquatic organisms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hawker, D.W.; Connell, D.W.

    1986-04-01

    With nondegradable, lipophilic compounds having log P values ranging from 2 to 6, direct linear relationships have been found between the logarithms of the equilibrium bioconcentration factors, and also reciprocal clearance rate constants, with log P for daphnids and molluscs. These relationships permit calculation of the times required for equilibrium and significant bioconcentration of lipophilic chemicals. Compared with fish, these time periods are successively shorter for molluscs, then daphnids. The equilibrium biotic concentration was found to decrease with increasing chemical hydrophobicity for both molluscs and daphnids. Also, new linear relationships between the logarithm of the bioconcentration factor and log P were found for compounds not attaining equilibrium within finite exposure times.

  11. Modal Testing of the NPSAT1 Engineering Development Unit

    DTIC Science & Technology

    2012-07-01

    …logarithmic scale. As Figure 2 shows, natural frequencies are indicated by large values of the first CMIF (peaks), and multiple modes can be detected by… structure's behavior. Ewins even states "that no large-scale modal test should be permitted to proceed until some preliminary SDOF analyses have…

  12. Epidemic failure detection and consensus for extreme parallelism

    DOE PAGES

    Katti, Amogh; Di Fatta, Giuseppe; Naughton, Thomas; ...

    2017-02-01

    Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a failure detection and consensus algorithm. This paper presents three novel failure detection and consensus algorithms using Gossiping. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in all algorithms the number of Gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus. The third approach is a three-phase distributed failure detection and consensus algorithm and provides consistency guarantees even in very large and extreme-scale systems while at the same time being memory and bandwidth efficient.
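The logarithmic scaling of gossip cycles with system size can be illustrated with a toy push-gossip simulation (a generic sketch, not the paper's algorithms): each round, every informed process forwards to one uniformly chosen peer, and full dissemination takes on the order of log2(n) + ln(n) rounds.

```python
import random

def gossip_rounds(n, seed=0):
    """Toy push-gossip: every informed process forwards the rumor to one
    uniformly chosen peer per round; return rounds until all n processes
    are informed."""
    rng = random.Random(seed)
    informed = {0}
    rounds = 0
    while len(informed) < n:
        # each informed process picks one random target this round
        targets = {rng.randrange(n) for _ in range(len(informed))}
        informed |= targets
        rounds += 1
    return rounds
```

Quadrupling n adds only a handful of rounds, consistent with the logarithmic scaling reported for the consensus algorithms.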

  13. Modified stochastic fragmentation of an interval as an ageing process

    NASA Astrophysics Data System (ADS)

    Fortin, Jean-Yves

    2018-02-01

    We study a stochastic model based on modified fragmentation of a finite interval. The mechanism consists of cutting the interval at a random location and substituting a unique fragment on the right of the cut to regenerate and preserve the interval length. This leads to a set of segments of random sizes, with the accumulation of small fragments near the origin. This model is an example of record dynamics, with the presence of ‘quakes’ and slow dynamics. The fragment size distribution is a universal inverse power law with logarithmic corrections. The exact distribution for the fragment number as a function of time is simply related to the unsigned Stirling numbers of the first kind. Two-time correlation functions are defined and computed exactly. They satisfy scaling relations and exhibit aging phenomena. In particular, the probability that the same number of fragments is found at two different times t > s is asymptotically equal to [4π log(s)]^{-1/2} when s ≫ 1 and the ratio t/s is fixed, in agreement with the numerical simulations. The same process with a reset suppresses the aging phenomenon beyond a typical time scale defined by the reset parameter.

  14. LOGARITHMIC AMPLIFIER

    DOEpatents

    De Shong, J.A. Jr.

    1957-12-31

    A logarithmic current amplifier circuit having high sensitivity and fast response is described. The inventor discovered that the time constant of the input circuit of a system utilizing a feedback amplifier, ionization chamber, and a diode is inversely proportional to the input current, and that the amplifier becomes unstable in amplifying signals in the upper frequency range when the amplifier's forward-gain time constant equals the input-circuit time constant. The described device incorporates impedance networks having low-frequency response characteristics at various points in the circuit to change the forward gain of the amplifier at a rate of 0.7 of the gain magnitude for each doubling of frequency. As a result of this improvement, the time constant of the input circuit is greatly reduced at high frequencies, and the amplifier response is increased.

  15. Anomalous scaling in an age-dependent branching model.

    PubMed

    Keller-Schmidt, Stephanie; Tuğrul, Murat; Eguíluz, Víctor M; Hernández-García, Emilio; Klemm, Konstantin

    2015-02-01

    We introduce a one-parametric family of tree growth models, in which branching probabilities decrease with branch age τ as τ^{-α}. Depending on the exponent α, the scaling of tree depth with tree size n displays a transition between the logarithmic scaling of random trees and an algebraic growth. At the transition (α=1) tree depth grows as (log n)^2. This anomalous scaling is in good agreement with the trend observed in evolution of biological species, thus providing a theoretical support for age-dependent speciation and associating it to the occurrence of a critical point.
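A minimal simulation of this kind of age-dependent branching (one reading of the model, with leaf-selection weight τ^{-α}; an illustration, not the paper's exact definition) shows the transition from logarithmic to algebraic depth growth:

```python
import random

def mean_leaf_depth(n_leaves, alpha, seed=0):
    """Grow a binary tree: at each step pick a leaf with weight
    age**(-alpha), where age = steps since the leaf appeared, and split
    it into two children. Return the mean leaf depth."""
    rng = random.Random(seed)
    leaves = [(0, 0)]  # (birth step, depth) of each leaf
    for step in range(1, n_leaves):
        weights = [(step - birth) ** (-alpha) for birth, _ in leaves]
        i = rng.choices(range(len(leaves)), weights=weights)[0]
        _, depth = leaves.pop(i)
        leaves += [(step, depth + 1), (step, depth + 1)]
    return sum(d for _, d in leaves) / len(leaves)
```

With alpha=0 every leaf is equally likely and the depth stays logarithmic in tree size; large alpha concentrates growth on the youngest branches and the depth grows much faster.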

  16. Renormalization Group Theory of Bolgiano Scaling in Boussinesq Turbulence

    NASA Technical Reports Server (NTRS)

    Rubinstein, Robert

    1994-01-01

    Bolgiano scaling in Boussinesq turbulence is analyzed using the Yakhot-Orszag renormalization group. For this purpose, an isotropic model is introduced. Scaling exponents are calculated by forcing the temperature equation so that the temperature variance flux is constant in the inertial range. Universal amplitudes associated with the scaling laws are computed by expanding about a logarithmic theory. Connections between this formalism and the direct interaction approximation are discussed. It is suggested that the Yakhot-Orszag theory yields a lowest order approximate solution of a regularized direct interaction approximation which can be corrected by a simple iterative procedure.

  17. The Impact of Frictional Healing on Stick-Slip Recurrence Interval and Stress Drop: Implications for Earthquake Scaling

    NASA Astrophysics Data System (ADS)

    Im, Kyungjae; Elsworth, Derek; Marone, Chris; Leeman, John

    2017-12-01

    Interseismic frictional healing is an essential process in the seismic cycle. Observations of both natural and laboratory earthquakes demonstrate that the magnitude of stress drop scales with the logarithm of recurrence time, which is a cornerstone of the rate and state friction (RSF) laws. However, the origin of this log linear behavior and short time "cutoff" for small recurrence intervals remains poorly understood. Here we use RSF laws to demonstrate that the back-projected time of null-healing intrinsically scales with the initial frictional state θi. We explore this behavior and its implications for (1) the short-term cutoff time of frictional healing and (2) the connection between healing rates derived from stick-slip sliding versus slide-hold-slide tests. We use a novel, continuous solution of RSF for a one-dimensional spring-slider system with inertia. The numerical solution continuously traces frictional state evolution (and healing) and shows that stick-slip cutoff time also scales with frictional state at the conclusion of the dynamic slip process θi (=Dc/Vpeak). This numerical investigation on the origins of stick-slip response is verified by comparing laboratory data for a range of peak slip velocities. Slower slip motions yield lesser magnitude of friction drop at a given time due to higher frictional state at the end of each slip event. Our results provide insight on the origin of log linear stick-slip evolution and suggest an approach to estimating the critical slip distance on faults that exhibit gradual accelerations, such as for slow earthquakes.

  18. Compressed Scaling of Abstract Numerosity Representations in Adult Humans and Monkeys

    ERIC Educational Resources Information Center

    Merten, Katharina; Nieder, Andreas

    2009-01-01

    There is general agreement that nonverbal animals and humans endowed with language possess an evolutionary precursor system for representing and comparing numerical values. However, whether nonverbal numerical representations in human and nonhuman primates are quantitatively similar and whether linear or logarithmic coding underlies such magnitude…

  19. Biased random walks on Kleinberg's spatial networks

    NASA Astrophysics Data System (ADS)

    Pan, Gui-Jun; Niu, Rui-Wu

    2016-12-01

    We investigate the problem of a particle or message that travels as a biased random walk toward a target node in Kleinberg's spatial network, which is built from a d-dimensional (d = 2) regular lattice improved by adding long-range shortcuts with probability P(r_ij) ∼ r_ij^{-α}, where r_ij is the lattice distance between sites i and j, and α is a variable exponent. Bias is represented as a probability p of the packet to travel at every hop toward the node which has the smallest Manhattan distance to the target node. We study the mean first passage time (MFPT) for different exponents α and the scaling of the MFPT with the size of the network L. We find that there exists a threshold probability pth ≈ 0.5; for p ≥ pth the optimal transportation condition is obtained with an optimal transport exponent αop = d, while for 0 < p < pth, and increases with L more slowly than a power law, approaching a logarithmic law, for 0 < p

  20. Correlation Length of Energy-Containing Structures in the Base of the Solar Corona

    NASA Astrophysics Data System (ADS)

    Abramenko, V.; Zank, G. P.; Dosch, A. M.; Yurchyshyn, V.

    2013-12-01

    An essential parameter for models of coronal heating and fast solar wind acceleration that rely on the dissipation of MHD turbulence is the characteristic energy-containing length of the squared velocity and magnetic field fluctuations transverse to the mean magnetic field inside a coronal hole (CH) at the base of the corona. This characteristic length scale directly defines the heating rate. Rather surprisingly, almost nothing is known observationally about this critical parameter. Currently, only a very rough estimate of the characteristic length is available, based on the fact that the network spacing is about 30000 km. We attempted to estimate this parameter from observations of photospheric random motions and magnetic fields measured in the photosphere inside coronal holes. We found that the characteristic length scale in the photosphere is about 600-2000 km, which is much smaller than that adopted in previous models. Our results provide a critical input parameter for current models of coronal heating and should yield an improved understanding of fast solar wind acceleration. Fig. 1: The natural logarithm of the correlation function of the transverse velocity fluctuations u^2 versus the spatial lag r for the two CHs. The color code refers to accumulation time intervals of 2 (blue), 5 (green), 10 (red), and 20 (black) minutes. The values of the Batchelor integral length λ, the correlation length ς, and the e-folding length L in km are shown. Fig. 2: The natural logarithm of the correlation function of magnetic fluctuations b^2 versus the spatial lag r. The insert shows this plot with linear axes.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumitru, Adrian; Skokov, Vladimir

    The conventional and linearly polarized Weizsäcker-Williams gluon distributions at small x are defined from the two-point function of the gluon field in light-cone gauge. They appear in the cross section for dijet production in deep inelastic scattering at high energy. We determine these functions in the small-x limit from solutions of the JIMWLK evolution equations and show that they exhibit approximate geometric scaling. Also, we discuss the functional distributions of these WW gluon distributions over the JIMWLK ensemble at rapidity Y ~ 1/αs. These are determined by a 2d Liouville action for the logarithm of the covariant gauge function g^2 tr A^+(q)A^+(-q). For transverse momenta on the order of the saturation scale we observe large variations across configurations (evolution trajectories) of the linearly polarized distribution, up to several times its average, and even to negative values.

  2. Brownian Dynamics simulations of model colloids in channel geometries and external fields

    NASA Astrophysics Data System (ADS)

    Siems, Ullrich; Nielaba, Peter

    2018-04-01

    We review the results of Brownian Dynamics simulations of colloidal particles in external fields confined in channels. Super-paramagnetic Brownian particles are well-suited two-dimensional model systems for a variety of problems on different length scales, ranging from pedestrians walking through a bottleneck to ions passing through ion channels in living cells. In such systems, confinement into channels can have a great influence on the diffusion and transport properties. In particular, we discuss the crossover from single-file diffusion in a narrow channel to diffusion in the extended two-dimensional system. For this purpose, a new algorithm for computing the mean square displacement (MSD) on logarithmic time scales is presented. In a different study, interacting colloidal particles were dragged over a washboard potential while additionally confined in a two-dimensional micro-channel. In this system, kink and anti-kink solitons determine the depinning process of the particles from the periodic potential.
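Evaluating the MSD at logarithmically spaced lag times, as mentioned above, can be sketched generically (this is not the authors' algorithm):

```python
import numpy as np

def msd_log_lags(traj, n_lags=20):
    """MSD of a trajectory with shape (n_steps, dim), evaluated at
    logarithmically spaced lag times so that every decade of lag is
    sampled with roughly equal density."""
    traj = np.asarray(traj, dtype=float)
    n = len(traj)
    lags = np.unique(np.logspace(0, np.log10(n // 4), n_lags).astype(int))
    msd = np.array([np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1))
                    for lag in lags])
    return lags, msd
```

For free Brownian motion the log-log slope of the MSD is close to 1, while single-file diffusion in a narrow channel is characterized by a slope near 0.5, which is the crossover the review discusses.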

  3. Temperature Scaling Law for Quantum Annealing Optimizers.

    PubMed

    Albash, Tameem; Martin-Mayor, Victor; Hen, Itay

    2017-09-15

    Physical implementations of quantum annealing unavoidably operate at finite temperatures. We point to a fundamental limitation of fixed finite temperature quantum annealers that prevents them from functioning as competitive scalable optimizers and show that to serve as optimizers annealer temperatures must be appropriately scaled down with problem size. We derive a temperature scaling law dictating that temperature must drop at the very least in a logarithmic manner but also possibly as a power law with problem size. We corroborate our results by experiment and simulations and discuss the implications of these to practical annealers.

  4. Discrete Self-Similarity in Interfacial Hydrodynamics and the Formation of Iterated Structures.

    PubMed

    Dallaston, Michael C; Fontelos, Marco A; Tseluiko, Dmitri; Kalliadasis, Serafim

    2018-01-19

    The formation of iterated structures, such as satellite and subsatellite drops, filaments, and bubbles, is a common feature in interfacial hydrodynamics. Here we undertake a computational and theoretical study of their origin in the case of thin films of viscous fluids that are destabilized by long-range molecular or other forces. We demonstrate that iterated structures appear as a consequence of discrete self-similarity, where certain patterns repeat themselves, subject to rescaling, periodically in a logarithmic time scale. The result is an infinite sequence of ridges and filaments with similarity properties. The character of these discretely self-similar solutions as the result of a Hopf bifurcation from ordinarily self-similar solutions is also described.

  5. Controls on the Stability of Atmospheric O2 over Geologic Time Scales (Invited)

    NASA Astrophysics Data System (ADS)

    Rothman, D.; Bosak, T.

    2013-12-01

    The concentration of free oxygen in Earth's surface environment represents a balance between the accumulation of O2, due to long-term burial of organic carbon in sediments, and the consumption of O2 by weathering processes and the oxidation of reduced gases. The stability of modern O2 levels is typically attributed to a negative feedback that emerges when the production and consumption fluxes are expressed as a function of O2 concentration. Empirical studies of modern burial of organic carbon suggest that the production of O2 is a logarithmically decreasing function of the duration of time, the "oxygen exposure time" (OET), over which sedimentary organic carbon is exposed to O2. The OET hypothesis implies that a fraction of organic matter is physically protected from anaerobic decay by its association with clay-sized mineral surface area, but susceptible to aerobic decay, either oxidatively or via free extracellular hydrolytic enzymes. By assuming that the long-term aerobic degradation is diffusion-limited, we predict the logarithmic decay of the OET curve. We note, however, that exposure to O2 may enhance not only degradation but also physical protection due to the precipitation of iron oxides and clay minerals. When the rate of transformation from the unprotected state to the protected state exceeds a small fraction of the average oxidative degradation rate, our theoretical OET curve develops a maximum at small O2 exposure times. In this case, the equilibrium O2 concentration can lose its stability. These observations may help explain major fluctuations in Earth's carbon cycle and the rise of O2 during the Proterozoic (2000-542 Ma).

  6. Elastic scattering of virtual photons via a quark loop in the double-logarithmic approximation

    NASA Astrophysics Data System (ADS)

    Ermolaev, B. I.; Ivanov, D. Yu.; Troyan, S. I.

    2018-04-01

    We calculate the amplitude of elastic photon-photon scattering via a single quark loop in the double-logarithmic approximation, presuming all external photons to be off-shell and unpolarized. At the same time we account for the running coupling effects. We consider this process in the forward kinematics at arbitrary relations between t and the external photon virtualities. We obtain explicit expressions for the photon-photon scattering amplitudes in all double-logarithmic kinematic regions. Then we calculate the small-x asymptotics of the obtained amplitudes and compare them with the parent amplitudes, thereby fixing the applicability regions of the asymptotics, i.e., fixing the applicability region for the nonvacuum Reggeons. We find that these Reggeons should be used at x < 10^{-8} only.

  7. Wave propagation model of heat conduction and group speed

    NASA Astrophysics Data System (ADS)

    Zhang, Long; Zhang, Xiaomin; Peng, Song

    2018-03-01

    To compare wave propagation modes, this work considers the finite-relaxation model of non-Fourier heat conduction, the Cattaneo-Vernotte (CV) model, and Fourier's law. Independent-variable translation is applied to solve the partial differential equation. Results show that the general form of the time-spatial distribution of temperature for the three media comprises two solutions, corresponding to positive and negative logarithmic heating rates. The former shows that a group of heat waves whose spatial distribution follows an exponential-function law propagates at a group speed; the speed of propagation is related to the logarithmic heating rate. The total speed of all the possible heat waves can be combined to form the group speed of the wave propagation. The latter indicates that the spatial distribution of temperature, which follows an exponential-function law, decays with time. These features show that propagation accelerates when heated and decelerates when cooled. For the model media that follow Fourier's law and correspond to a positive heating rate, the propagation mode is also considered the propagation of a group of heat waves because the group speed has no upper bound. For the finite-relaxation model with a non-Fourier medium, the interval of group speeds is bounded and the maximum speed is obtained when the logarithmic heating rate is exactly the reciprocal of the relaxation time. For the CV model with a non-Fourier medium, the interval of group speeds is also bounded, and the maximum value is obtained when the logarithmic heating rate is infinite.

  8. Complex network analysis of conventional and Islamic stock market in Indonesia

    NASA Astrophysics Data System (ADS)

    Rahmadhani, Andri; Purqon, Acep; Kim, Sehyun; Kim, Soo Yong

    2015-09-01

    The rising popularity of Islamic financial products in Indonesia has recently become an interesting topic for analysis. We introduce a complex network analysis to compare the conventional and Islamic stock markets in Indonesia. Additionally, Random Matrix Theory (RMT) is used as a complementary reference to expand the analysis of the results. Both approaches are based on the cross-correlation matrix of logarithmic price returns. Closing price data from June 2011 to July 2012 are used to construct the logarithmic price returns. We also introduce a threshold value, using a winner-take-all approach, to obtain the scale-free property of the network: nodes whose cross-correlation coefficient falls below the threshold are not connected by an edge. As a result, we obtain 0.5 as the threshold value for both stock markets. From the RMT analysis, we find only a market-wide effect in both stock markets; no clustering effect has been found yet. From the network analysis, both stock market networks are dominated by the mining sector. The time series of closing price data must be extended to obtain more valuable results, and possibly to reveal different behaviors of the system.
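
    The pipeline the abstract describes can be sketched numerically: compute logarithmic returns from closing prices, take the cross-correlation matrix, and keep only edges above the winner-take-all threshold. The six synthetic price series and the market-factor construction below are illustrative assumptions, not Indonesian market data; only the threshold value 0.5 comes from the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)

n_stocks, n_days = 6, 250
# Common market factor plus idiosyncratic noise, so some pairs correlate strongly.
market = rng.normal(0, 0.01, n_days)
returns = np.empty((n_stocks, n_days))
for i in range(n_stocks):
    weight = 1.0 if i < 3 else 0.1   # first three synthetic stocks track the market closely
    returns[i] = weight * market + rng.normal(0, 0.005, n_days)

# In practice: returns[i, t] = log(P[i, t+1] / P[i, t]) from closing prices.
corr = np.corrcoef(returns)

threshold = 0.5                      # winner-take-all value reported in the abstract
adjacency = (corr > threshold) & ~np.eye(n_stocks, dtype=bool)

degrees = adjacency.sum(axis=1)
print("node degrees:", degrees)
```

    With this construction the market-tracking stocks form a connected cluster while the weakly coupled ones remain isolated, which is the scale-free-style pruning the threshold is meant to produce.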

  9. Estimating leaf nitrogen accumulation in maize based on canopy hyperspectrum data

    NASA Astrophysics Data System (ADS)

    Gu, Xiaohe; Wang, Lizhi; Song, Xiaoyu; Xu, Xingang

    2016-10-01

    Leaf nitrogen accumulation (LNA) has an important influence on the formation of crop yield and grain protein. Monitoring the LNA of a crop canopy quantitatively and in real time helps in assessing crop nutrition status, diagnosing group growth and managing fertilization precisely. This study aimed to develop a universal method to monitor the LNA of maize from hyperspectral data, which could provide mechanistic support for mapping maize LNA at the county scale. The correlations between LNA and hyperspectral reflectance, and its mathematical transformations, were analyzed. Feature bands and their transformations were then screened to develop the optimal LNA estimation model based on multiple linear regression. In-situ samples were used to evaluate the accuracy of the model. Results showed that the model based on the first derivative of the logarithmic transformation (lgP') of reflectance reached the highest correlation coefficient (0.889) with the lowest RMSE (0.646 g·m⁻²), and was considered the optimal model for estimating LNA in maize. The determination coefficient (R²) for the testing samples was 0.831, with an RMSE of 1.901 g·m⁻². This indicates that the first derivative of the logarithmic transformation of the hyperspectral reflectance responds well to maize LNA, and that an estimation model based on this transformation achieves good accuracy with high stability.
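
    A minimal sketch of the estimation chain: take the first derivative of the log10-transformed spectrum (the lgP' transformation), select feature bands, and fit LNA by multiple linear regression. The synthetic red-edge spectra, the band positions (700/720/740 nm) and all coefficients below are illustrative assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(0)

wavelengths = np.arange(400.0, 901.0, 10.0)           # nm
n_samples = 40
lna_true = rng.uniform(2.0, 12.0, n_samples)          # g/m^2, synthetic ground truth

# Synthetic canopy spectra whose red-edge shape depends on LNA (assumption).
reflectance = np.empty((n_samples, wavelengths.size))
for k, lna in enumerate(lna_true):
    red_edge = 1.0 / (1.0 + np.exp(-(wavelengths - 720.0) / (5.0 + lna)))
    reflectance[k] = 0.05 + 0.4 * red_edge + rng.normal(0, 0.002, wavelengths.size)

# lgP': first derivative of the log10-transformed spectrum.
lg_p = np.log10(reflectance)
lg_p_prime = np.gradient(lg_p, wavelengths, axis=1)

# Hypothetical feature bands + intercept column for least squares.
bands = [np.argmin(np.abs(wavelengths - b)) for b in (700.0, 720.0, 740.0)]
X = np.column_stack([lg_p_prime[:, bands], np.ones(n_samples)])
coef, *_ = np.linalg.lstsq(X, lna_true, rcond=None)

pred = X @ coef
r = np.corrcoef(pred, lna_true)[0, 1]
print(f"correlation coefficient between predicted and true LNA: {r:.3f}")
```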

  10. Dynamical conductivity at the dirty superconductor-metal quantum phase transition.

    PubMed

    Del Maestro, Adrian; Rosenow, Bernd; Hoyos, José A; Vojta, Thomas

    2010-10-01

    We study the transport properties of ultrathin disordered nanowires in the neighborhood of the superconductor-metal quantum phase transition. To this end we combine numerical calculations with analytical strong-disorder renormalization group results. The quantum critical conductivity at zero temperature diverges logarithmically as a function of frequency. In the metallic phase, it obeys activated scaling associated with an infinite-randomness quantum critical point. We extend the scaling theory to higher dimensions and discuss implications for experiments.

  11. Event management for large scale event-driven digital hardware spiking neural networks.

    PubMed

    Caron, Louis-Charles; D'Haene, Michiel; Mailhot, Frédéric; Schrauwen, Benjamin; Rouat, Jean

    2013-09-01

    The interest in brain-like computation has led to the design of a plethora of innovative neuromorphic systems. Individually, spiking neural networks (SNNs), event-driven simulation and digital hardware neuromorphic systems get a lot of attention. Despite the popularity of event-driven SNNs in software, very few digital hardware architectures are found. This is because existing hardware solutions for event management scale badly with the number of events. This paper introduces the structured heap queue, a pipelined digital hardware data structure, and demonstrates its suitability for event management. The structured heap queue scales gracefully with the number of events, allowing the efficient implementation of large scale digital hardware event-driven SNNs. The scaling is linear for memory, logarithmic for logic resources and constant for processing time. The use of the structured heap queue is demonstrated on a field-programmable gate array (FPGA) with an image segmentation experiment and an SNN of 65,536 neurons and 513,184 synapses. Events can be processed at the rate of 1 every 7 clock cycles and a 406×158 pixel image is segmented in 200 ms. Copyright © 2013 Elsevier Ltd. All rights reserved.
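
    The event-management problem the structured heap queue solves in hardware has a familiar software analogue: a binary heap keeps the earliest spike event retrievable in O(log n) per operation. The tiny queue below is an illustrative sketch of that principle only, not the paper's pipelined FPGA design; the event times and neuron IDs are invented.

```python
import heapq

class EventQueue:
    """Priority queue of (time, neuron_id) spike events."""

    def __init__(self):
        self._heap = []

    def push(self, time, neuron_id):
        heapq.heappush(self._heap, (time, neuron_id))   # O(log n)

    def pop(self):
        return heapq.heappop(self._heap)                # O(log n), earliest event

    def __len__(self):
        return len(self._heap)

queue = EventQueue()
for time, neuron in [(5.0, 2), (1.5, 0), (3.2, 7), (0.9, 4)]:
    queue.push(time, neuron)

# Events come out in chronological order regardless of insertion order.
ordered = [queue.pop() for _ in range(len(queue))]
print(ordered)
```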

  12. Reynolds stress scaling in pipe flow turbulence—first results from CICLoPE

    PubMed Central

    Fiorini, T.; Bellani, G.; Talamelli, A.

    2017-01-01

    This paper reports the first turbulence measurements performed in the Long Pipe Facility at the Center for International Cooperation in Long Pipe Experiments (CICLoPE). In particular, the Reynolds stress components obtained from a number of straight and boundary-layer-type single-wire and X-wire probes up to a friction Reynolds number of 3.8×10⁴ are reported. In agreement with turbulent boundary-layer experiments as well as with results from the Superpipe, the present measurements show a clear logarithmic region in the streamwise variance profile, with a Townsend–Perry constant of A2≈1.26. The wall-normal variance profile exhibits a Reynolds-number-independent plateau, while the spanwise component was found to obey a logarithmic scaling over a much wider wall-normal distance than the other two components, with a slope that is nearly half of that of the Townsend–Perry constant, i.e. A2,w≈A2/2. The present results therefore provide strong support for the scaling of the Reynolds stress tensor based on the attached-eddy hypothesis. Intriguingly, the wall-normal and spanwise components exhibit higher amplitudes than in previous studies, and therefore call for follow-up studies in CICLoPE, as well as other large-scale facilities. This article is part of the themed issue ‘Toward the development of high-fidelity models of wall turbulence at large Reynolds number’. PMID:28167586

  13. Tracer particles in two-dimensional elastic networks diffuse logarithmically slow

    NASA Astrophysics Data System (ADS)

    Lizana, Ludvig; Ambjörnsson, Tobias; Lomholt, Michael A.

    2017-01-01

    Several experiments on tagged molecules or particles in living systems suggest that they move anomalously slowly: their mean squared displacement (MSD) increases more slowly than linearly with time. Leading models aimed at understanding these experiments predict that the MSD grows as a power law with a growth exponent smaller than unity. However, in some experiments the growth is so slow (fitted exponent ∼0.1-0.2) that it hints at other mechanisms at play. In this paper, we theoretically demonstrate how in-plane collective modes excited by thermal fluctuations in a two-dimensional membrane lead to a logarithmic time dependence of the tracer particle's MSD.
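
    In practice, distinguishing ultraslow motion from a small power law comes down to fitting MSD(t) = a + b·log(t) to data spanning several decades in time. The sketch below fits that two-parameter logarithmic law by least squares; the MSD values are synthetic stand-ins (a = 0.5, b = 0.2 are invented), not membrane data.

```python
import numpy as np

rng = np.random.default_rng(1)

t = np.logspace(0, 4, 50)                      # times spanning four decades
msd = 0.5 + 0.2 * np.log(t) + rng.normal(0, 0.01, t.size)   # synthetic log-law MSD

# Least-squares fit of MSD(t) = a + b*log(t).
A = np.column_stack([np.ones(t.size), np.log(t)])
(a, b), *_ = np.linalg.lstsq(A, msd, rcond=None)
print(f"intercept a = {a:.3f}, logarithmic slope b = {b:.3f}")
```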

  14. True logarithmic amplification of frequency clock in SS-OCT for calibration

    PubMed Central

    Liu, Bin; Azimi, Ehsan; Brezinski, Mark E.

    2011-01-01

    With swept-source optical coherence tomography (SS-OCT), imprecise signal calibration prevents optimal imaging of biological tissues such as the coronary artery. This work demonstrates an approach that uses a true logarithmic amplifier to precondition the clock signal, minimizing noise and phase errors for optimal calibration. The method was validated and tested with a high-speed SS-OCT system. The experimental results demonstrate its superior ability to optimize the calibration and improve the imaging performance. In particular, this hardware-based approach is suitable for real-time calibration in high-speed systems where computation time is constrained. PMID:21698036

  15. Thermodynamic scaling of dynamic properties of liquid crystals: Verifying the scaling parameters using a molecular model

    NASA Astrophysics Data System (ADS)

    Satoh, Katsuhiko

    2013-08-01

    The thermodynamic scaling of molecular dynamic properties of rotation and thermodynamic parameters in a nematic phase was investigated by a molecular dynamics simulation using the Gay-Berne potential. A master curve for the relaxation time of flip-flop motion was obtained using thermodynamic scaling, and the dynamic property could be expressed solely as a function of TV^{γ_τ}, where T and V are the temperature and volume, respectively. The scaling parameter γ_τ was in excellent agreement with the thermodynamic parameter Γ, the slope of a line plotted for the logarithms of temperature and volume at constant P2. This line was fairly linear, and as good as the line for p-azoxyanisole or that from the highly ordered small cluster model. The equivalence relation between Γ and γ_τ was compared with results obtained from the highly ordered small cluster model. The possibility of adapting the molecular model for the thermodynamic scaling of other dynamic rotational properties was also explored. The rotational diffusion constant and rotational viscosity coefficients, which were calculated using established theoretical and experimental expressions, were rescaled onto master curves with the same scaling parameters. The simulation illustrates the universal nature of the equivalence relation for liquid crystals.

  16. Zero energy resonance and the logarithmically slow decay of unstable multilevel systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miyamoto, Manabu

    2006-08-15

    The long time behavior of the reduced time evolution operator for unstable multilevel systems is studied based on the N-level Friedrichs model in the presence of a zero energy resonance. The latter means the divergence of the resolvent at zero energy. Resorting to the technique developed by Jensen and Kato [Duke Math. J. 46, 583 (1979)], the zero energy resonance of this model is characterized by the zero energy eigenstate that does not belong to the Hilbert space. It is then shown that for some kinds of rational form factors a logarithmically slow decay of the reduced time evolution operator, proportional to (log t)⁻¹, can be realized.

  17. Investigation of the relationship between CO2 reservoir rock property change and the surface roughness change originating from the supercritical CO2-sandstone-groundwater geochemical reaction at CO2 sequestration condition

    NASA Astrophysics Data System (ADS)

    Lee, Minhee; Wang, Sookyun; Kim, Seyoon; Park, Jinyoung

    2015-04-01

    Lab-scale experiments were performed to investigate the property changes of sandstone slabs and cores resulting from the scCO2-rock-groundwater reaction over 180 days under CO2 sequestration conditions (100 bar and 50 °C). The geochemical reactions, including the surface roughness change of minerals in the slabs caused by dissolution and secondary mineral precipitation, were reproduced in laboratory-scale experiments for the sandstone reservoir of the Gyeongsang basin, Korea, and the relationship between the geochemical reaction and the physical rock property change was derived with a view to successful subsurface CO2 sequestration. The use of the surface roughness (SRrms) change rate and the physical property change rate to quantify the scCO2-rock-groundwater reaction is a novel approach in this area of CO2 sequestration research. From Scanning Probe Microscope (SPM) analyses, the SRrms of each sandstone slab was calculated at different reaction times. The average SRrms increased more than 3.5-fold during the first 90 days of reaction and remained steady thereafter, suggesting that the surface weathering of sandstone occurs early after CO2 injection into a subsurface reservoir. The average porosity of the sandstone cores increased by 8.8% and the average density decreased by 0.5% during the first 90 days of reaction, and these values changed only slightly afterwards. The average P- and S-wave velocities of the sandstone cores also decreased by 10% during the first 90 days. The physical rock property changes during the geochemical reaction followed a logarithmic trend and correlated with the logarithmic increase in SRrms, suggesting that the physical property change of reservoir rocks caused by scCO2 injection derives directly from the geochemical reaction process.
    Results suggest that the long-term physical property changes of reservoir rocks at a CO2 injection site could be estimated by extrapolating the SRrms and rock property change rates acquired from laboratory-scale experiments. This will also be useful for selecting a favorable CO2 injection site from the viewpoint of safety.

  18. Robustness of Many-Body Localization in the Presence of Dissipation

    NASA Astrophysics Data System (ADS)

    Levi, Emanuele; Heyl, Markus; Lesanovsky, Igor; Garrahan, Juan P.

    2016-06-01

    Many-body localization (MBL) has emerged as a novel paradigm for robust ergodicity breaking in closed quantum many-body systems. However, it is not yet clear to which extent MBL survives in the presence of dissipative processes induced by the coupling to an environment. Here we study heating and ergodicity for a paradigmatic MBL system—an interacting fermionic chain subject to quenched disorder—in the presence of dephasing. We find that, even though the system is eventually driven into an infinite-temperature state, heating as monitored by the von Neumann entropy can progress logarithmically slowly, implying exponentially large time scales for relaxation. This slow loss of memory of initial conditions makes signatures of nonergodicity visible over a long, but transient, time regime. We point out a potential controlled realization of the considered setup with cold atomic gases held in optical lattices.

  19. Scaling of Rényi entanglement entropies of the free fermi-gas ground state: a rigorous proof.

    PubMed

    Leschke, Hajo; Sobolev, Alexander V; Spitzer, Wolfgang

    2014-04-25

    In a remarkable paper [Phys. Rev. Lett. 96, 100503 (2006)], Gioev and Klich conjectured an explicit formula for the leading asymptotic growth of the spatially bipartite von Neumann entanglement entropy of noninteracting fermions in multidimensional Euclidean space at zero temperature. Based on recent progress by one of us (A. V. S.) in semiclassical functional calculus for pseudodifferential operators with discontinuous symbols, we provide here a complete proof of that formula and of its generalization to Rényi entropies of all orders α>0. The special case α=1/2 is also known under the name logarithmic negativity and often considered to be a particularly useful quantification of entanglement. These formulas exhibiting a "logarithmically enhanced area law" have been used already in many publications.

  20. Nonlinear Dot Plots.

    PubMed

    Rodrigues, Nils; Weiskopf, Daniel

    2018-01-01

    Conventional dot plots use a constant dot size and are typically applied to show the frequency distribution of small data sets. Unfortunately, they are not designed for a high dynamic range of frequencies. We address this problem by introducing nonlinear dot plots. Adopting the idea of nonlinear scaling from logarithmic bar charts, our plots allow for dots of varying size so that columns with a large number of samples are reduced in height. For the construction of these diagrams, we introduce an efficient two-way sweep algorithm that leads to a dense and symmetrical layout. We compensate aliasing artifacts at high dot densities by a specifically designed low-pass filtering method. Examples of nonlinear dot plots are compared to conventional dot plots as well as linear and logarithmic histograms. Finally, we include feedback from an expert review.

  1. Re'class'ification of 'quant'ified classical simulated annealing

    NASA Astrophysics Data System (ADS)

    Tanaka, Toshiyuki

    2009-12-01

    We discuss a classical reinterpretation of quantum-mechanics-based analysis of classical Markov chains with detailed balance, that is based on the quantum-classical correspondence. The classical reinterpretation is then used to demonstrate that it successfully reproduces a sufficient condition for cooling schedule in classical simulated annealing, which has the inverse-logarithmic scaling.
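
    The inverse-logarithmic cooling schedule mentioned above, T(t) = c / log(t + t0), can be illustrated with a toy Metropolis annealer. The target function, search space and constants below are invented for illustration; only the form of the schedule reflects the abstract.

```python
import math
import random

random.seed(0)

def energy(x):
    return (x - 3) ** 2          # toy objective, minimum at x = 3

states = list(range(-10, 11))    # small discrete search space
x = -10
best = x
c, t0 = 2.0, 2.0                 # illustrative schedule constants

for t in range(1, 5001):
    temperature = c / math.log(t + t0)          # inverse-logarithmic cooling
    candidate = x + random.choice((-1, 1))
    if candidate in states:
        delta = energy(candidate) - energy(x)
        # Metropolis rule: always accept downhill, uphill with Boltzmann odds.
        if delta <= 0 or random.random() < math.exp(-delta / temperature):
            x = candidate
    if energy(x) < energy(best):
        best = x

print(f"best state found: {best}, energy {energy(best)}")
```

    Faster (e.g., geometric) schedules usually work in practice, but the logarithmic schedule is the one for which the classical sufficient condition for convergence is stated.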

  2. Entanglement properties of the antiferromagnetic-singlet transition in the Hubbard model on bilayer square lattices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Chia-Chen; Singh, Rajiv R. P.; Scalettar, Richard T.

    Here, we calculate the bipartite Rényi entanglement entropy of an L × L × 2 bilayer Hubbard model using a determinantal quantum Monte Carlo method recently proposed by Grover [Phys. Rev. Lett. 111, 130402 (2013)]. Two types of bipartition are studied: (i) one that divides the lattice into two L × L planes, and (ii) one that divides the lattice into two equal-size (L × L/2 × 2) bilayers. Furthermore, we compare our calculations with those for the tight-binding model studied by the correlation matrix method. As expected, the entropy for bipartition (i) scales as L², while the latter scales with L with possible logarithmic corrections. The onset of the antiferromagnet-to-singlet transition shows up as a saturation of the former to a maximal value and of the latter to a small value in the singlet phase. We also comment on the large uncertainties in the numerical results with increasing U, which would have to be overcome before the critical behavior and logarithmic corrections can be quantified.

  3. Entanglement properties of the antiferromagnetic-singlet transition in the Hubbard model on bilayer square lattices

    DOE PAGES

    Chang, Chia-Chen; Singh, Rajiv R. P.; Scalettar, Richard T.

    2014-10-10

    Here, we calculate the bipartite Rényi entanglement entropy of an L × L × 2 bilayer Hubbard model using a determinantal quantum Monte Carlo method recently proposed by Grover [Phys. Rev. Lett. 111, 130402 (2013)]. Two types of bipartition are studied: (i) one that divides the lattice into two L × L planes, and (ii) one that divides the lattice into two equal-size (L × L/2 × 2) bilayers. Furthermore, we compare our calculations with those for the tight-binding model studied by the correlation matrix method. As expected, the entropy for bipartition (i) scales as L², while the latter scales with L with possible logarithmic corrections. The onset of the antiferromagnet-to-singlet transition shows up as a saturation of the former to a maximal value and of the latter to a small value in the singlet phase. We also comment on the large uncertainties in the numerical results with increasing U, which would have to be overcome before the critical behavior and logarithmic corrections can be quantified.

  4. Modeling of cytometry data in logarithmic space: When is a bimodal distribution not bimodal?

    PubMed

    Erez, Amir; Vogel, Robert; Mugler, Andrew; Belmonte, Andrew; Altan-Bonnet, Grégoire

    2018-02-16

    Recent efforts in systems immunology have led researchers to build quantitative models of cell activation and differentiation. One goal is to account for the distributions of proteins from single-cell measurements by flow cytometry or mass cytometry as a readout of biological regulation. In that context, large cell-to-cell variability is often observed in biological quantities. We show here that these readouts, viewed on a logarithmic scale, may display two easily distinguishable modes while the underlying distribution (in linear scale) is unimodal. We introduce a simple mathematical test to highlight this mismatch. We then dissect the flow of influence of cell-to-cell variability, proposing a graphical model that motivates higher-dimensional analysis of the data. Finally, we show how acquiring additional biological information can be used to reduce the uncertainty introduced by cell-to-cell variability, helping to clarify whether the data are uni- or bimodal. This communication has cautionary implications for manual and automatic gating strategies, as well as for clustering and modeling of single-cell measurements. © 2018 International Society for Advancement of Cytometry.

  5. Optical Logarithmic Transformation of Speckle Images with Bacteriorhodopsin Films

    NASA Technical Reports Server (NTRS)

    Downie, John D.

    1995-01-01

    The application of logarithmic transformations to speckle images is sometimes desirable in converting the speckle noise distribution into an additive, constant-variance noise distribution. The optical transmission properties of some bacteriorhodopsin films are well suited to implement such a transformation optically in a parallel fashion. I present experimental results of the optical conversion of a speckle image into a transformed image with signal-independent noise statistics, using the real-time photochromic properties of bacteriorhodopsin. The original and transformed noise statistics are confirmed by histogram analysis.
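
    The statistical fact exploited by the optical transform can be checked numerically: fully developed speckle is multiplicative (intensity ≈ signal times a unit-mean exponential variate), so log(S·n) = log S + log n, and the noise term log n has the same variance (π²/6) whatever the signal level. The signal levels below are arbitrary test values, not from the experiment.

```python
import numpy as np

rng = np.random.default_rng(7)

n = 200_000
stds = []
for signal in (1.0, 10.0, 100.0):
    speckled = signal * rng.exponential(1.0, n)   # multiplicative speckle noise
    log_img = np.log(speckled)                    # the log transform, in software
    stds.append(log_img.std())
    # Var(log(S*n)) = Var(log n) = pi^2/6, independent of the signal S.
    print(f"signal {signal:6.1f}: std of log-intensity = {stds[-1]:.3f}")
```

    Before the transform the noise standard deviation scales with the signal; after it, the three values agree, which is the signal-independent statistics the histogram analysis confirms.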

  6. An Estimation of the Logarithmic Timescale in Ergodic Dynamics

    NASA Astrophysics Data System (ADS)

    Gomez, Ignacio S.

    An estimation of the logarithmic timescale in quantum systems having ergodic dynamics in the semiclassical limit is presented. The estimation is based on an extension of Krieger's finite generator theorem for discretized σ-algebras, using the time rescaling property of the Kolmogorov-Sinai entropy. The results are in agreement with those obtained in the literature, but with simpler mathematics and within the context of ergodic theory. Moreover, some consequences of the Poincaré recurrence theorem are also explored.

  7. Method for determining formation quality factor from seismic data

    DOEpatents

    Taner, M. Turhan; Treitel, Sven

    2005-08-16

    A method is disclosed for calculating the quality factor Q from a seismic data trace. The method includes calculating a first and a second minimum phase inverse wavelet at a first and a second time interval along the seismic data trace, synthetically dividing the first wavelet by the second wavelet, Fourier transforming the result of the synthetic division, calculating the logarithm of this quotient of Fourier transforms and determining the slope of a best fit line to the logarithm of the quotient.
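
    The core of the disclosed workflow is a spectral-ratio estimate: the logarithm of the ratio of two amplitude spectra separated by traveltime Δt decays linearly in frequency with slope πΔt/Q, so the slope of a best-fit line yields Q. The sketch below reproduces that idea on synthetic spectra; the wavelet shape, Q value and traveltimes are invented assumptions, not the patent's data or its minimum-phase inverse-wavelet construction.

```python
import numpy as np

freqs = np.linspace(5.0, 60.0, 56)          # Hz
q_true, dt = 50.0, 0.4                      # assumed quality factor and time separation (s)

source = np.exp(-((freqs - 30.0) / 25.0) ** 2)        # arbitrary source spectrum
early = source * np.exp(-np.pi * freqs * 0.1 / q_true)
late = early * np.exp(-np.pi * freqs * dt / q_true)   # extra attenuation over dt

log_ratio = np.log(early / late)            # = pi * f * dt / Q, linear in f
slope = np.polyfit(freqs, log_ratio, 1)[0]  # slope of the best-fit line
q_est = np.pi * dt / slope
print(f"estimated Q = {q_est:.2f}")
```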

  8. Stratified Flow Past a Hill: Dividing Streamline Concept Revisited

    NASA Astrophysics Data System (ADS)

    Leo, Laura S.; Thompson, Michael Y.; Di Sabatino, Silvana; Fernando, Harindra J. S.

    2016-06-01

    The Sheppard formula (Q J R Meteorol Soc 82:528-529, 1956) for the dividing streamline height H_s assumes a uniform velocity U_∞ and a constant buoyancy frequency N for the approach flow towards a mountain of height h, and takes the form H_s/h = (1 - F), where F = U_∞/(Nh). We extend this solution to a logarithmic approach-velocity profile with constant N. An analytical solution is obtained for H_s/h in terms of Lambert-W functions, which also suggests alternative scaling for H_s/h. A `modified' logarithmic velocity profile is proposed for stably stratified atmospheric boundary-layer flows. A field experiment designed to observe H_s is described, which utilized instrumentation from the spring field campaign of the Mountain Terrain Atmospheric Modeling and Observations (MATERHORN) Program. Multiple releases of smoke at F ≈ 0.3-0.4 support the new formulation, notwithstanding the limited success of the experiments due to logistical constraints. No dividing streamline is discerned for F ≈ 10, since, if present, it is too close to the foothill. Flow separation and vortex shedding are observed in this case. The proposed modified logarithmic profile is in reasonable agreement with experimental observations.
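
    Sheppard's criterion amounts to finding the height at which upstream kinetic energy just balances the work against stratification, U(H_s) = N·(h − H_s). A bisection solve of that balance handles any velocity profile, and for a uniform profile it must reproduce H_s = h(1 − F); the closed Lambert-W form of the paper is not reproduced here, and all parameter values are illustrative assumptions.

```python
import math

def dividing_streamline(u_profile, n_freq, h, z_lo=1e-3, iters=80):
    """Solve u(z) = N*(h - z) for z by bisection on [z_lo, h]."""
    lo, hi = z_lo, h
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if u_profile(mid) - n_freq * (h - mid) < 0.0:
            lo = mid          # kinetic energy still too small: move up
        else:
            hi = mid
    return 0.5 * (lo + hi)

h, n_freq = 1000.0, 0.02                      # hill height (m), buoyancy frequency (1/s)

# Uniform approach flow recovers the Sheppard formula H_s = h*(1 - F), F = U/(N*h).
hs_uniform = dividing_streamline(lambda z: 10.0, n_freq, h)

# Logarithmic profile u(z) = (u*/kappa) * ln(z/z0), as in the paper's extension.
u_star, kappa, z0 = 0.5, 0.4, 0.1
hs_log = dividing_streamline(lambda z: (u_star / kappa) * math.log(z / z0), n_freq, h)

print(f"uniform flow: H_s = {hs_uniform:.1f} m (Sheppard predicts {h * (1 - 10.0 / (n_freq * h)):.1f} m)")
print(f"log profile:  H_s = {hs_log:.1f} m")
```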

  9. Synthesis of instrumentally and historically recorded earthquakes and studying their spatial statistical relationship (A case study: Dasht-e-Biaz, Eastern Iran)

    NASA Astrophysics Data System (ADS)

    Jalali, Mohammad; Ramazi, Hamidreza

    2018-06-01

    Earthquake catalogues are the main source of statistical seismology for long-term studies of earthquake occurrence. Studying spatiotemporal problems is therefore important to reduce the related uncertainties in statistical seismology studies. A statistical tool, the time normalization method, was applied to revise the time-frequency relationship in one of the most active regions of Asia, Eastern Iran and western Afghanistan (a and b were calculated as around 8.84 and 1.99 on the exponential scale, not the logarithmic scale). A geostatistical simulation method was further utilized to reduce the uncertainties in the spatial domain; the simulation produces a representative synthetic catalogue with 5361 events. The synthetic database is classified using a Geographical Information System (GIS), based on the simulated magnitudes, to reveal the underlying seismicity patterns. Although some regions with high seismicity correspond to known faults, as far as seismic patterns are concerned the new method significantly highlights possible locations of interest that have not been previously identified. It also reveals some previously unrecognized lineations and clusters of likely future strain release.

  10. Velocity space scattering coefficients with applications in antihydrogen recombination studies

    NASA Astrophysics Data System (ADS)

    Chang, Yongbin; Ordonez, C. A.

    2000-12-01

    An approach for calculating velocity space friction and diffusion coefficients with Maxwellian field particles is developed based on a kernel function derived in a previous paper [Y. Chang and C. A. Ordonez, Phys. Plasmas 6, 2947 (1999)]. The original fivefold integral expressions for the coefficients are reduced to onefold integrals, which can be used for any value of the Coulomb logarithm. The onefold integrals can be further reduced to standard analytical expressions by using a weak coupling approximation. The integral expression for the friction coefficient is used to predict a time scale that describes the rate at which a reflecting antiproton beam slows down within a positron plasma, while both species are simultaneously confined by a nested Penning trap. The time scale is used to consider the possibility of achieving antihydrogen recombination within the trap. The friction and diffusion coefficients are then used to derive an expression for calculating the energy transfer rate between antiprotons and positrons. The expression is employed to illustrate achieving antihydrogen recombination while taking into account positron heating by the antiprotons. The effect of the presence of an electric field on recombination is discussed.

  11. Natural Scale for Employee's Payment Based on the Entropy Law

    NASA Astrophysics Data System (ADS)

    Cosma, Ioan; Cosma, Adrian

    2009-05-01

    An econophysical model intended to establish an equitable scale of employees' salaries in accordance with the importance and effectiveness of labor is considered. Our model, based on the concept and law of entropy, can designate all the parameters connected to the level of personal incomes and taxation, and also to the distribution of employees versus amount of salary in any remuneration system. Consistent with the laws of classical and statistical thermodynamics, this scale reveals that personal incomes increase progressively in a natural logarithmic way, in contrast with other scales arbitrarily established by the governments of each country or by employing companies.

  12. Program to analyze aquifer test data and check for validity with the jacob method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Field, M.S.

    1993-01-01

    The Jacob straight-line method of aquifer analysis deals with the late-time data and small radius of the Theis type curve, which plot as a straight line if the drawdown data are plotted on an arithmetic scale and the time data on a logarithmic (base 10) scale. Correct analysis with the Jacob method normally assumes that (1) the data lie on a straight line, (2) the value of the dimensionless time factor is less than 0.01, and (3) the site's hydrogeology conforms to the method's assumptions and limiting conditions. Items 1 and 2 are usually considered for the Jacob method, but item 3 is often ignored, which can lead to incorrect calculations of aquifer parameters. A BASIC computer program was developed to analyze aquifer test data with the Jacob method and to test the validity of its use. Aquifer test data are entered into the program and manipulated so that the slope and time intercept of the straight line drawn through the data (excluding early-time and late-time data) can be used to calculate transmissivity and storage coefficient. Late-time data are excluded to eliminate the effects of positive and negative boundaries. The time-drawdown data are then converted into dimensionless units to determine whether the Jacob method's assumptions are valid for the hydrogeologic conditions under which the test was conducted.
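
    The slope-and-intercept step can be made concrete with the standard Cooper-Jacob straight-line formulas: T = 2.3·Q/(4π·Δs) from the drawdown per log cycle Δs, S = 2.25·T·t₀/r² from the zero-drawdown time intercept t₀, followed by the u < 0.01 validity check the abstract emphasizes. The pumping-test numbers below are invented for illustration.

```python
import math

Q = 0.01        # pumping rate, m^3/s (illustrative)
ds = 0.5        # drawdown per log10 cycle from the fitted line, m
t0 = 100.0      # zero-drawdown time intercept of the line, s
r = 30.0        # distance to observation well, m

T = 2.3 * Q / (4 * math.pi * ds)          # transmissivity, m^2/s
S = 2.25 * T * t0 / r**2                  # storage coefficient (dimensionless)

# Validity check: the dimensionless time factor u must stay below 0.01
# at the times used for the straight-line fit.
t = 10_000.0                              # a late-time sample, s
u = r**2 * S / (4 * T * t)

print(f"T = {T:.2e} m^2/s, S = {S:.2e}, u = {u:.4f}")
```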

  13. Dynamical conductivity at the dirty superconductor-metal quantum phase transition

    NASA Astrophysics Data System (ADS)

    Hoyos, J. A.; Del Maestro, Adrian; Rosenow, Bernd; Vojta, Thomas

    2011-03-01

    We study the transport properties of ultrathin disordered nanowires in the neighborhood of the superconductor-metal quantum phase transition. To this end we combine numerical calculations with analytical strong-disorder renormalization group results. The quantum critical conductivity at zero temperature diverges logarithmically as a function of frequency. In the metallic phase, it obeys activated scaling associated with an infinite-randomness quantum critical point. We extend the scaling theory to higher dimensions and discuss implications for experiments. Financial support: Fapesp, CNPq, NSF, and Research Corporation.

  14. Quantification and scaling of multipartite entanglement in continuous variable systems.

    PubMed

    Adesso, Gerardo; Serafini, Alessio; Illuminati, Fabrizio

    2004-11-26

    We present a theoretical method to determine the multipartite entanglement between different partitions of multimode, fully or partially symmetric Gaussian states of continuous variable systems. For such states, we determine the exact expression of the logarithmic negativity and show that it coincides with that of equivalent two-mode Gaussian states. Exploiting this reduction, we demonstrate the scaling of the multipartite entanglement with the number of modes and its reliable experimental estimate by direct measurements of the global and local purities.
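
    For Gaussian states, the logarithmic negativity follows a standard recipe: build the covariance matrix, partially transpose it (flip the momentum of one mode), and read E_N off the smallest symplectic eigenvalue. The sketch below applies it to a two-mode squeezed vacuum, a common test case rather than one of the paper's multimode symmetric states; the squeezing value is arbitrary.

```python
import numpy as np

r = 0.5                                      # squeezing parameter (illustrative)
c, s = np.cosh(2 * r), np.sinh(2 * r)

# Covariance matrix in (x1, p1, x2, p2) ordering, vacuum variance 1.
sigma = np.array([[c, 0, s, 0],
                  [0, c, 0, -s],
                  [s, 0, c, 0],
                  [0, -s, 0, c]])

# Partial transposition of mode 2: p2 -> -p2.
P = np.diag([1.0, 1.0, 1.0, -1.0])
sigma_pt = P @ sigma @ P

# Symplectic eigenvalues are the moduli of the eigenvalues of i*Omega*sigma.
omega = np.kron(np.eye(2), np.array([[0.0, 1.0], [-1.0, 0.0]]))
nu = np.abs(np.linalg.eigvals(1j * omega @ sigma_pt))
nu_min = nu.min()

log_neg = max(0.0, -np.log2(nu_min))         # logarithmic negativity
print(f"smallest symplectic eigenvalue: {nu_min:.4f}")
print(f"logarithmic negativity: {log_neg:.4f}")
```

    For this state the smallest symplectic eigenvalue is e^(-2r), so E_N = 2r/ln 2, which the numerics reproduce.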

  15. CHEMICAL TIME-SERIES SAMPLING

    EPA Science Inventory

    The rationale for chemical time-series sampling has its roots in the same fundamental relationships as govern well hydraulics. Samples of ground water are collected as a function of increasing time of pumpage. The most efficient pattern of collection consists of logarithmically s...

  16. Nonlinear isochrones in murine left ventricular pressure-volume loops: how well does the time-varying elastance concept hold?

    PubMed

    Claessens, T E; Georgakopoulos, D; Afanasyeva, M; Vermeersch, S J; Millar, H D; Stergiopulos, N; Westerhof, N; Verdonck, P R; Segers, P

    2006-04-01

    The linear time-varying elastance theory is frequently used to describe the change in ventricular stiffness during the cardiac cycle. The concept assumes that all isochrones (i.e., curves that connect pressure-volume data occurring at the same time) are linear and have a common volume intercept. Of specific interest is the steepest isochrone, the end-systolic pressure-volume relationship (ESPVR), of which the slope serves as an index for cardiac contractile function. Pressure-volume measurements, achieved with a combined pressure-conductance catheter in the left ventricle of 13 open-chest anesthetized mice, showed a marked curvilinearity of the isochrones. We therefore analyzed the shape of the isochrones by using six regression algorithms (two linear, two quadratic, and two logarithmic, each with a fixed or time-varying intercept) and discussed the consequences for the elastance concept. Our main observations were 1) the volume intercept varies considerably with time; 2) isochrones are equally well described by using quadratic or logarithmic regression; 3) linear regression with a fixed intercept shows poor correlation (R² < 0.75) during isovolumic relaxation and early filling; and 4) logarithmic regression is superior in estimating the fixed volume intercept of the ESPVR. In conclusion, the linear time-varying elastance fails to provide a sufficiently robust model to account for changes in pressure and volume during the cardiac cycle in the mouse ventricle. A new framework accounting for the nonlinear shape of the isochrones needs to be developed.

  17. Scaling of the velocity fluctuations in turbulent channels up to Reτ=2003

    NASA Astrophysics Data System (ADS)

    Hoyas, Sergio; Jiménez, Javier

    2006-01-01

    A new numerical simulation of a turbulent channel in a large box at Reτ=2003 is described and briefly compared with simulations at lower Reynolds numbers and with experiments. Some of the fluctuation intensities, especially the streamwise velocity, do not scale well in wall units, both near and away from the wall. Spectral analysis traces the near-wall scaling failure to the interaction of the logarithmic layer with the wall. The present statistics can be downloaded from http://torroja.dmt.upm.es/ftp/channels. Further ones will be added to the site as they become available.

  18. Abelian non-global logarithms from soft gluon clustering

    NASA Astrophysics Data System (ADS)

    Kelley, Randall; Walsh, Jonathan R.; Zuberi, Saba

    2012-09-01

    Most recombination-style jet algorithms cluster soft gluons in a complex way. This leads to previously identified correlations in the soft gluon phase space and introduces logarithmic corrections to jet cross sections, which are known as clustering logarithms. The leading Abelian clustering logarithms occur at least at next-to-leading logarithm (NLL) in the exponent of the distribution. Using the framework of Soft Collinear Effective Theory (SCET), we show that new clustering effects contributing at NLL arise at each order. While numerical resummation of clustering logs is possible, it is unlikely that they can be analytically resummed to NLL. Clustering logarithms make the anti-kT algorithm theoretically preferred, for which they are power suppressed. They can arise in Abelian and non-Abelian terms, and we calculate the Abelian clustering logarithms at O(α_s²) for the jet mass distribution using the Cambridge/Aachen and kT algorithms, including jet radius dependence, which extends previous results. We find that clustering logarithms can be naturally thought of as a class of non-global logarithms, which have traditionally been tied to non-Abelian correlations in soft gluon emission.

  19. The Behavioral Economics of Choice and Interval Timing

    PubMed Central

    Jozefowiez, J.; Staddon, J. E. R.; Cerutti, D. T.

    2009-01-01

    We propose a simple behavioral economic model (BEM) describing how reinforcement and interval timing interact. The model assumes a Weber-law-compliant logarithmic representation of time. Associated with each represented time value are the payoffs that have been obtained for each possible response. At a given real time, the response with the highest payoff is emitted. The model accounts for a wide range of data from procedures such as simple bisection, metacognition in animals, economic effects in free-operant psychophysical procedures and paradoxical choice in double-bisection procedures. Although it assumes logarithmic time representation, it can also account for data from the time-left procedure usually cited in support of linear time representation. It encounters some difficulties in complex free-operant choice procedures, such as concurrent mixed fixed-interval schedules as well as some of the data on double bisection, that may involve additional processes. Overall, BEM provides a theoretical framework for understanding how reinforcement and interval timing work together to determine choice between temporally differentiated reinforcers. PMID:19618985
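    The model's core assumption, a Weber-law-compliant logarithmic representation of time driving the choice with the highest payoff, can be sketched for a simple bisection task. The noise level and nearest-anchor decision rule below are illustrative stand-ins, not the fitted BEM.

```python
import math
import random

def represented_time(t, weber_fraction=0.15, rng=random):
    """Subjective time: Gaussian noise of fixed width on a log scale
    (Weber's law: constant relative timing error)."""
    return rng.gauss(math.log(t), weber_fraction)

def bisect_choice(t, t_short, t_long, rng=random):
    """Classify a duration by proximity of its noisy log representation
    to the log anchors (a stand-in for the payoff comparison in BEM)."""
    x = represented_time(t, rng=rng)
    return "short" if abs(x - math.log(t_short)) < abs(x - math.log(t_long)) else "long"
```

    Because the representation is logarithmic, the indifference point of this sketch falls at the geometric mean of the anchors, a classic signature of log timing in bisection data.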

  20. Perfluoroalkyl substances and time to pregnancy in couples from Greenland, Poland and Ukraine.

    PubMed

    Jørgensen, Kristian T; Specht, Ina O; Lenters, Virissa; Bach, Cathrine C; Rylander, Lars; Jönsson, Bo A G; Lindh, Christian H; Giwercman, Aleksander; Heederik, Dick; Toft, Gunnar; Bonde, Jens Peter

    2014-12-22

    Perfluoroalkyl substances (PFAS) are suggested to affect human fecundity through longer time to pregnancy (TTP). We studied the relationship between four abundant PFAS and TTP in pregnant women from Greenland, Poland and Ukraine representing varying PFAS exposures and pregnancy planning behaviors. We measured serum levels of perfluorooctanoic acid (PFOA), perfluorooctane sulfonate (PFOS), perfluorohexane sulfonic acid (PFHxS) and perfluorononanoic acid (PFNA) in 938 women from Greenland (448 women), Poland (203 women) and Ukraine (287 women). PFAS exposure was assessed on a continuous logarithm transformed scale and in country-specific tertiles. We used Cox discrete-time models and logistic regression to estimate fecundability ratios (FRs) and infertility (TTP >13 months) odds ratios (ORs), respectively, and 95% confidence intervals (CI) according to PFAS levels. Adjusted analyses of the association between PFAS and TTP were done for each study population and in a pooled sample. Higher PFNA levels were associated with longer TTP in the pooled sample (log-scale FR = 0.80; 95% CI 0.69-0.94) and specifically in women from Greenland (log-scale FR = 0.72; 95% CI 0.58-0.89). ORs for infertility were also increased in the pooled sample (log-scale OR = 1.53; 95% CI 1.08-2.15) and in women from Greenland (log-scale OR = 1.97; 95% CI 1.22-3.19). However, in a sensitivity analysis of primiparous women these associations could not be replicated. Associations with PFNA were weaker for women from Poland and Ukraine. PFOS, PFOA and PFHxS were not consistently associated with TTP. Findings do not provide consistent evidence that environmental exposure to PFAS is impairing female fecundity by delaying time taken to conceive.

  1. Leading logarithmic corrections to the muonium hyperfine splitting and to the hydrogen Lamb shift

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karshenboim, S.G.

    1994-12-31

    The main leading corrections, with the recoil logarithm log(M/m) and the low-energy logarithm log(Zα), to the Muonium hyperfine splitting are discussed. The logarithmic corrections have magnitudes of 0.1 to 0.3 kHz. Non-leading higher-order corrections are expected to be no larger than 0.1 kHz. The leading logarithmic correction to the Hydrogen Lamb shift is also obtained.

  2. A Proposed Alternative Measure for Climate Change Potential

    NASA Astrophysics Data System (ADS)

    DeGroff, F. A.

    2015-12-01

    Background/Issue There currently exists no comprehensive metric to measure and value anthropogenic changes in carbon flux between geospheric carbon sinks. We propose that changes in carbon residence time within geospheres be used as a metric to assess anthropogenic changes in carbon flux, and the term 'carbon quality' (cq) be used to describe such changes. Carbon residence time represents the inverse of carbon flux; as carbon flux increases, the corresponding cq will decrease, and vice versa. Focusing on atmospheric carbon emissions as a measure of anthropogenic activity on the environment ignores the fungible characteristics of carbon that are crucial in both the biosphere and the worldwide economy. The ubiquitous carbon molecule enables the enormous diversity in the biosphere, as well as the widespread, strategic economic presence of carbon in the world economy. Focusing on a single form of inorganic carbon as a proxy metric for the plethora of anthropogenic activity and carbon compounds will prove inadequate, convoluted, and unmanageable. A broader, more basic metric is needed to capture the breadth and scope of carbon activity. Results/Conclusions We propose a logarithmic vector scale for cq to measure anthropogenic carbon flux. The distance between vector points, e.g. the starting and ending residence times, would represent the change in cq. A base-10 logarithmic scale would allow the addition and subtraction of exponents to calculate changes in cq. As carbon moves between carbon reservoirs, the change in cq is measured as: cq = b · log10(mean carbon residence time), where b represents the carbon price coefficient for a particular country. For any country, cq measures the climate change potential for any organic carbon when converted to inorganic CO2, or to any lower residence time carbon state.
The greater the carbon fees for a country, the larger the b coefficient would be, and the greater the import fees would be to achieve carbon parity on imports from countries with lower carbon fees. By assessing embodied carbon within imports for carbon parity with domestic production, cq would eliminate the incentives to use spatial shifts in carbon emissions to avoid carbon fees. Similarly, cq would temper the incentives to use temporal displacement of carbon emissions, such as with biomass or CCS, to reduce carbon fees.
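    Because the proposed scale is base-10 logarithmic, changes in cq reduce to differences of exponents. A minimal sketch, with an illustrative value for the carbon-price coefficient b:

```python
import math

def cq(mean_residence_time, b=1.0):
    """Carbon quality: cq = b * log10(mean carbon residence time).
    b is the country-specific carbon-price coefficient (illustrative here)."""
    return b * math.log10(mean_residence_time)

def delta_cq(t_start, t_end, b=1.0):
    """Change in cq as carbon moves between reservoirs: on a log scale this
    is simply a difference of exponents, as the abstract describes."""
    return cq(t_end, b) - cq(t_start, b)
```

    For example, oxidizing organic carbon with a 100-year residence time to CO2 with a 10-year residence time changes cq by -1·b.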

  3. Probing slow dynamics of consolidated granular multicomposite materials by diffuse acoustic wave spectroscopy.

    PubMed

    Tremblay, Nicolas; Larose, Eric; Rossetto, Vincent

    2010-03-01

    The stiffness of a consolidated granular medium experiences a drop immediately after a moderate mechanical solicitation. Then the stiffness rises back toward its initial value, following a logarithmic time evolution called slow dynamics. In the literature, slow dynamics has been probed by macroscopic quantities averaged over the sample volume, for instance, by the resonant frequency of vibrational eigenmodes. This article presents a different approach based on diffuse acoustic wave spectroscopy, a technique that is directly sensitive to the details of the sample structure. The parameters of the dynamics are found to depend on the damage of the medium. Results confirm that slow dynamics is, at least in part, due to tiny structural rearrangements at the microscopic scale, such as inter-grain contacts.

  4. Logic circuits based on molecular spider systems.

    PubMed

    Mo, Dandan; Lakin, Matthew R; Stefanovic, Darko

    2016-08-01

    Spatial locality brings the advantages of computation speed-up and sequence reuse to molecular computing. In particular, molecular walkers that undergo localized reactions are of interest for implementing logic computations at the nanoscale. We use molecular spider walkers to implement logic circuits. We develop an extended multi-spider model with a dynamic environment wherein signal transmission is triggered via localized reactions, and use this model to implement three basic gates (AND, OR, NOT) and a cascading mechanism. We develop an algorithm to automatically generate the layout of the circuit. We use a kinetic Monte Carlo algorithm to simulate circuit computations, and we analyze circuit complexity: our design scales linearly with formula size and has a logarithmic time complexity. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  5. An object recognition method based on fuzzy theory and BP networks

    NASA Astrophysics Data System (ADS)

    Wu, Chuan; Zhu, Ming; Yang, Dong

    2006-01-01

    It is difficult to choose feature eigenvectors when a neural network recognizes objects: if they are chosen inappropriately, different objects may yield similar eigenvectors, or the same object may yield different eigenvectors under scaling, shifting, or rotation. To solve this problem, the image is edge-detected, the membership function is reconstructed, and a new threshold segmentation method based on fuzzy theory is proposed to obtain a binary image. Moment invariants of the binary image are extracted and normalized. Because some moment invariants are too small to compute with effectively, the logarithm of each moment invariant is taken as the input eigenvector of the BP network. The experimental results demonstrate that the proposed approach recognizes objects effectively, correctly, and quickly.
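    The preprocessing step, taking the logarithm of moment invariants so that their very small magnitudes remain usable as network inputs, can be sketched for the first Hu invariant of a binary image. This is a stand-in for the paper's full feature set, not its implementation.

```python
import numpy as np

def hu1_log(img):
    """Log of the first Hu moment invariant of a 2D binary image.

    phi1 = eta20 + eta02 is invariant under scaling, shifting and
    rotation; taking its logarithm compresses the small magnitudes
    into a range a BP network can learn from."""
    ys, xs = np.nonzero(img)
    m00 = float(len(xs))                    # zeroth moment (area)
    xc, yc = xs.mean(), ys.mean()           # centroid (shift invariance)
    mu20 = ((xs - xc) ** 2).sum()           # central moments, order 2
    mu02 = ((ys - yc) ** 2).sum()
    eta20 = mu20 / m00 ** 2                 # normalization (scale invariance)
    eta02 = mu02 / m00 ** 2
    return float(np.log(eta20 + eta02))
```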

  6. Scaling Behavior of Firm Growth

    NASA Astrophysics Data System (ADS)

    Stanley, Michael H. R.; Nunes Amaral, Luis A.; Buldyrev, Sergey V.; Havlin, Shlomo; Leschhorn, Heiko; Maass, Philipp; Salinger, Michael A.; Stanley, H. Eugene

    1996-03-01

    The theory of the firm is of considerable interest in economics. The standard microeconomic theory of the firm is largely a static model and has thus proved unsatisfactory for addressing inherently dynamic issues such as the growth of economies. In recent years, many have attempted to develop richer models that provide a more accurate representation of firm dynamics due to learning, innovative effort, and the development of organizational infrastructure. The validity of these new, inherently dynamic theories depends on their consistency with the statistical properties of firm growth, e.g. the relationship between growth rates and firm size. Using the Compustat database over the time period 1975-1991, we find: (i) the distribution of annual growth rates for firms with approximately the same sales displays an exponential form with the logarithm of growth rate, and (ii) the fluctuations in the growth rates --- measured by the width of this distribution --- scale as a power law with the firm sales. We place these findings of scaling behavior in the context of conventional economics by considering firm growth dynamics with temporal correlations and also, by considering a hierarchical organization of the departments of a firm.

  7. Scaling in the distribution of intertrade durations of Chinese stocks

    NASA Astrophysics Data System (ADS)

    Jiang, Zhi-Qiang; Chen, Wei; Zhou, Wei-Xing

    2008-10-01

    The distribution of intertrade durations, defined as the waiting times between two consecutive transactions, is investigated based upon the limit order book data of 23 liquid Chinese stocks listed on the Shenzhen Stock Exchange in the whole year 2003. A scaling pattern is observed in the distributions of intertrade durations, where the empirical density functions of the normalized intertrade durations of all 23 stocks collapse onto a single curve. The scaling pattern is also observed in the intertrade duration distributions for filled and partially filled trades and in the conditional distributions. The ensemble distributions for all stocks are modeled by the Weibull and the Tsallis q-exponential distributions. Maximum likelihood estimation shows that the Weibull distribution outperforms the q-exponential for not-too-large intertrade durations which account for more than 98.5% of the data. Alternatively, nonlinear least-squares estimation selects the q-exponential as a better model, in which the optimization is conducted on the distance between empirical and theoretical values of the logarithmic probability densities. The distribution of intertrade durations is Weibull followed by a power-law tail with an asymptotic tail exponent close to 3.
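    A maximum-likelihood Weibull fit of the kind compared above can be written compactly by profiling out the scale parameter; the grid search over the shape parameter below is a simple stand-in for the paper's estimation procedure.

```python
import numpy as np

def weibull_mle(x, k_grid=None):
    """Fit a Weibull(shape k, scale lam) to positive data x by maximum likelihood.

    For a fixed shape k, the scale MLE is lam = mean(x**k)**(1/k); the
    shape is then chosen by maximizing the log-likelihood over a grid."""
    x = np.asarray(x, dtype=float)
    if k_grid is None:
        k_grid = np.linspace(0.2, 5.0, 481)
    best = None
    for k in k_grid:
        lam = np.mean(x ** k) ** (1.0 / k)
        ll = np.sum(np.log(k) - k * np.log(lam)
                    + (k - 1.0) * np.log(x) - (x / lam) ** k)
        if best is None or ll > best[0]:
            best = (ll, k, lam)
    return best[1], best[2]  # (shape, scale)
```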

  8. Board foot volumes of young growth mixed conifer timber

    Treesearch

    W. L. Jackson

    1961-01-01

    Board foot volumes have been determined for ponderosa pine, sugar pine, white fir, and Douglas-fir in 90-year-old mixed-conifer stands on the Challenge Experimental Forest, near Oroville, California. Productivity is high—site index 140 feet at 100 years. Following the technique described by Boe, the scaled volumes of felled trees were plotted on logarithmic...

  9. Quantity Representation in Children and Rhesus Monkeys: Linear Versus Logarithmic Scales

    ERIC Educational Resources Information Center

    Beran, Michael J.; Johnson-Pynn, Julie S.; Ready, Christopher

    2008-01-01

    The performances of 4- and 5-year-olds and rhesus monkeys were compared using a computerized task for quantity assessment. Participants first learned two quantity anchor values and then responded to intermediate values by classifying them as similar to either the large anchor or the small anchor. Of primary interest was an assessment of where the…

  10. Inferring Aquifer Transmissivity from River Flow Data

    NASA Astrophysics Data System (ADS)

    Trichakis, Ioannis; Pistocchi, Alberto

    2016-04-01

    Daily streamflow data is the measurable result of many different hydrological processes within a basin; therefore, it includes information about all these processes. In this work, recession analysis applied to a pan-European dataset of measured streamflow was used to estimate hydrogeological parameters of the aquifers that contribute to the stream flow. Under the assumption that base-flow in times of no precipitation is mainly due to groundwater, we estimated parameters of European shallow aquifers connected with the stream network, and identified on the basis of the 1:1,500,000 scale Hydrogeological map of Europe. To this end, Master recession curves (MRCs) were constructed based on the RECESS model of the USGS for 1601 stream gauge stations across Europe. The process consists of three stages. Firstly, the model analyses the stream flow time-series. Then, it uses regression to calculate the recession index. Finally, it infers characteristics of the aquifer from the recession index. During time-series analysis, the model identifies those segments, where the number of successive recession days is above a certain threshold. The reason for this pre-processing lies in the necessity for an adequate number of points when performing regression at a later stage. The recession index derives from the semi-logarithmic plot of stream flow over time, and the post processing involves the calculation of geometrical parameters of the watershed through a GIS platform. The program scans the full stream flow dataset of all the stations. For each station, it identifies the segments with continuous recession that exceed a predefined number of days. When the algorithm finds all the segments of a certain station, it analyses them and calculates the best linear fit between time and the logarithm of flow. The algorithm repeats this procedure for the full number of segments, thus it calculates many different values of recession index for each station. 
After the program has found all the recession segments, it performs calculations to determine the expression for the MRC. Further processing of the MRCs can yield estimates of transmissivity or response time representative of the aquifers upstream of the station. These estimates can be useful for large scale (e.g. continental) groundwater modelling. The above procedure allowed calculating values of transmissivity for a large share of European aquifers, ranging from Tmin = 4.13E-04 m²/d to Tmax = 8.12E+03 m²/d, with an average value Taverage = 9.65E+01 m²/d. These results are in line with the literature, indicating that the procedure may provide realistic results for large-scale groundwater modelling. In this contribution we present the results in the perspective of their application for the parameterization of a pan-European bi-dimensional shallow groundwater flow model.
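    The core regression, a linear fit of log10(flow) against time over a recession segment, is compact; the sketch below mimics the RECESS-style calculation of the recession index but is not the USGS code.

```python
import numpy as np

def recession_index(q, dt_days=1.0):
    """Recession index K: days per log10 cycle of streamflow decline,
    from the semi-logarithmic plot of a single recession segment."""
    t = np.arange(len(q)) * dt_days
    slope, _intercept = np.polyfit(t, np.log10(q), 1)  # slope < 0 in recession
    return -1.0 / slope
```

    Repeating this over every qualifying segment of a station yields the set of K values from which the MRC, and ultimately the aquifer parameters, are derived.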

  11. Cost drivers and resource allocation in military health care systems.

    PubMed

    Fulton, Larry; Lasdon, Leon S; McDaniel, Reuben R

    2007-03-01

    This study illustrates the feasibility of incorporating technical efficiency considerations in the funding of military hospitals and identifies the primary drivers for hospital costs. Secondary data collected for 24 U.S.-based Army hospitals and medical centers for the years 2001 to 2003 are the basis for this analysis. Technical efficiency was measured by using data envelopment analysis; subsequently, efficiency estimates were included in logarithmic-linear cost models that specified cost as a function of volume, complexity, efficiency, time, and facility type. These logarithmic-linear models were compared against stochastic frontier analysis models. A parsimonious, three-variable, logarithmic-linear model composed of volume, complexity, and efficiency variables exhibited a strong linear relationship with observed costs (R² = 0.98). This model also proved reliable in forecasting (R² = 0.96). Based on our analysis, as much as $120 million might be reallocated to improve the United States-based Army hospital performance evaluated in this study.

  12. Optical Processing of Speckle Images with Bacteriorhodopsin for Pattern Recognition

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Tucker, Deanne (Technical Monitor)

    1994-01-01

    Logarithmic processing of images with multiplicative noise characteristics can be utilized to transform the image into one with an additive noise distribution. This simplifies subsequent image processing steps for applications such as image restoration or correlation for pattern recognition. One particularly common form of multiplicative noise is speckle, for which the logarithmic operation not only produces additive noise, but also makes it of constant variance (signal-independent). We examine the optical transmission properties of some bacteriorhodopsin films here and find them well suited to implement such a pointwise logarithmic transformation optically in a parallel fashion. We present experimental results of the optical conversion of speckle images into transformed images with additive, signal-independent noise statistics using the real-time photochromic properties of bacteriorhodopsin. We provide an example of improved correlation performance in terms of correlation peak signal-to-noise for such a transformed speckle image.

  13. Autonomous facial recognition system inspired by human visual system based logarithmical image visualization technique

    NASA Astrophysics Data System (ADS)

    Wan, Qianwen; Panetta, Karen; Agaian, Sos

    2017-05-01

    Autonomous facial recognition systems are widely used in real-life applications such as homeland border security, law enforcement identification and authentication, and video-based surveillance analysis. Issues like low image quality, non-uniform illumination, and variations in pose and facial expression can impair the performance of recognition systems. To address the non-uniform illumination challenge, we present a novel, robust autonomous facial recognition system based on the so-called logarithmical image visualization technique, which is inspired by the human visual system. In this paper, the proposed method, for the first time, couples the logarithmical image visualization technique with the local binary pattern to perform discriminative feature extraction for facial recognition. The Yale database, the Yale-B database, and the ATT database are used for computer simulation of accuracy and efficiency. The extensive computer simulation demonstrates the method's efficiency, accuracy, and robustness to illumination variation.

  14. Chemical origins of frictional aging.

    PubMed

    Liu, Yun; Szlufarska, Izabela

    2012-11-02

    Although the basic laws of friction are simple enough to be taught in elementary physics classes and although friction has been widely studied for centuries, in the current state of knowledge it is still not possible to predict a friction force from fundamental principles. One of the highly debated topics in this field is the origin of static friction. For most macroscopic contacts between two solids, static friction will increase logarithmically with time, a phenomenon that is referred to as aging of the interface. One known reason for the logarithmic growth of static friction is the deformation creep in plastic contacts. However, this mechanism cannot explain frictional aging observed in the absence of roughness and plasticity. Here, we discover molecular mechanisms that can lead to a logarithmic increase of friction based purely on interfacial chemistry. Predictions of our model are consistent with published experimental data on the friction of silica.
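    A toy model shows how purely chemical kinetics can produce logarithmic aging: if interfacial bonds form with activation times spread uniformly in log(tau), each decade of waiting adds a roughly equal fraction of new bonds. The parameters below are illustrative, not the authors' simulation.

```python
import numpy as np

def bonded_fraction(t, tau_min=1e-2, tau_max=1e4, n=20000, seed=0):
    """Fraction of interfacial bonds formed after waiting time t, assuming
    first-order kinetics with activation times drawn log-uniformly from
    [tau_min, tau_max] (a toy model, not the paper's chemistry)."""
    rng = np.random.default_rng(seed)
    tau = np.exp(rng.uniform(np.log(tau_min), np.log(tau_max), n))
    return float(np.mean(1.0 - np.exp(-t / tau)))
```

    In the interior of the tau range the bonded fraction, and with it static friction in this picture, grows as log(t).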

  15. Reynolds stress scaling in pipe flow turbulence: first results from CICLoPE.

    PubMed

    Örlü, R; Fiorini, T; Segalini, A; Bellani, G; Talamelli, A; Alfredsson, P H

    2017-03-13

    This paper reports the first turbulence measurements performed in the Long Pipe Facility at the Center for International Cooperation in Long Pipe Experiments (CICLoPE). In particular, the Reynolds stress components obtained from a number of straight and boundary-layer-type single-wire and X-wire probes up to a friction Reynolds number of 3.8×10⁴ are reported. In agreement with turbulent boundary-layer experiments as well as with results from the Superpipe, the present measurements show a clear logarithmic region in the streamwise variance profile, with a Townsend-Perry constant of A₂ ≈ 1.26. The wall-normal variance profile exhibits a Reynolds-number-independent plateau, while the spanwise component was found to obey a logarithmic scaling over a much wider wall-normal distance than the other two components, with a slope that is nearly half of that of the Townsend-Perry constant, i.e. A₂,w ≈ A₂/2. The present results therefore provide strong support for the scaling of the Reynolds stress tensor based on the attached-eddy hypothesis. Intriguingly, the wall-normal and spanwise components exhibit higher amplitudes than in previous studies, and therefore call for follow-up studies in CICLoPE, as well as other large-scale facilities. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).

  16. Finite size effects in epidemic spreading: the problem of overpopulated systems

    NASA Astrophysics Data System (ADS)

    Ganczarek, Wojciech

    2013-12-01

    In this paper we analyze the impact of network size on the dynamics of epidemic spreading. In particular, we investigate the pace of infection in overpopulated systems. In order to do that, we design a model for epidemic spreading on a finite complex network with a restriction to at most one contamination per time step, which can serve as a model for sexually transmitted diseases spreading in some student communes. Because of the highly discrete character of the process, the analysis cannot use the continuous approximation widely exploited for most models. Using a discrete approach, we investigate the epidemic threshold and the quasi-stationary distribution. The main results are two theorems about the mixing time for the process: it scales like the logarithm of the network size and it is proportional to the inverse of the distance from the epidemic threshold.

  17. Oxygen detection using evanescent fields

    DOEpatents

    Duan, Yixiang [Los Alamos, NM]; Cao, Weenqing [Los Alamos, NM]

    2007-08-28

    An apparatus and method for the detection of oxygen using optical-fiber-based evanescent light absorption. Methylene blue was immobilized using a sol-gel process on a portion of the exterior surface of an optical fiber from which the cladding had been removed, thereby forming an optical oxygen sensor. When light is directed through the optical fiber, the transmitted light intensity varies as a result of changes in the absorption of evanescent light by the methylene blue in response to the oxygen concentration to which the sensor is exposed. The sensor was found to have a linear response to oxygen concentration on a semi-logarithmic scale within the oxygen concentration range between 0.6% and 20.9%, a response time and a recovery time of about 3 s, and to exhibit good reversibility and repeatability. An increase in temperature from 21 °C to 35 °C does not affect the net absorption of the sensor.
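    The reported linear response on a semi-logarithmic scale amounts to a one-line calibration fit of sensor response against log10(concentration); the coefficients below are illustrative, not the patent's data.

```python
import numpy as np

def calibrate(conc_percent, response):
    """Fit the semi-log sensor model: response = a * log10(C) + b."""
    a, b = np.polyfit(np.log10(conc_percent), response, 1)
    return a, b

def predict_conc(response, a, b):
    """Invert the calibration to recover oxygen concentration [%]."""
    return 10.0 ** ((response - b) / a)
```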

  18. Extinction phase transitions in a model of ecological and evolutionary dynamics

    NASA Astrophysics Data System (ADS)

    Barghathi, Hatem; Tackkett, Skye; Vojta, Thomas

    2017-07-01

    We study the non-equilibrium phase transition between survival and extinction of spatially extended biological populations using an agent-based model. We especially focus on the effects of global temporal fluctuations of the environmental conditions, i.e., temporal disorder. Using large-scale Monte-Carlo simulations of up to 3 × 10⁷ organisms and 10⁵ generations, we find the extinction transition in time-independent environments to be in the well-known directed percolation universality class. In contrast, temporal disorder leads to a highly unusual extinction transition characterized by logarithmically slow population decay and enormous fluctuations even for large populations. The simulations provide strong evidence for this transition to be of exotic infinite-noise type, as recently predicted by a renormalization group theory. The transition is accompanied by temporal Griffiths phases featuring a power-law dependence of the lifetime on the population size.

  19. Exchange-driven growth.

    PubMed

    Ben-Naim, E; Krapivsky, P L

    2003-09-01

    We study a class of growth processes in which clusters evolve via exchange of particles. We show that, depending on the rate of exchange, there are three possibilities: (I) growth, where clusters grow indefinitely; (II) gelation, where all mass is transformed into an infinite gel in a finite time; and (III) instant gelation. In regimes I and II, the cluster size distribution attains a self-similar form. The large-size tail of the scaling distribution is Φ(x) ~ exp(−x^(2−ν)), where ν is the homogeneity degree of the rate of exchange. At the borderline case ν = 2, the distribution exhibits a generic algebraic tail, Φ(x) ~ x⁻⁵. In regime III, the gel nucleates immediately and consumes the entire system. For finite systems, the gelation time vanishes logarithmically, T ~ (ln N)^(−(ν−2)), in the large-system-size limit N → ∞. The theory is applied to coarsening in the infinite-range Ising-Kawasaki model and in electrostatically driven granular layers.

  20. Optimization of the Monte Carlo code for modeling of photon migration in tissue.

    PubMed

    Zołek, Norbert S; Liebert, Adam; Maniewski, Roman

    2006-10-01

    The Monte Carlo method is frequently used to simulate light transport in turbid media because of its simplicity and flexibility, which allow complicated geometrical structures to be analyzed. Monte Carlo simulations are, however, time consuming because of the necessity to track the paths of individual photons. The time-consuming computation is mainly associated with the calculation of the logarithmic and trigonometric functions as well as the generation of pseudo-random numbers. In this paper, the Monte Carlo algorithm was developed and optimized by approximating the logarithmic and trigonometric functions. The approximations were based on polynomial and rational functions, and the errors of these approximations are less than 1% of the values of the original functions. The proposed algorithm was verified by simulations of the time-resolved reflectance at several source-detector separations. The results of the calculation using the approximated algorithm were compared with those of Monte Carlo simulations obtained with exact computation of the logarithmic and trigonometric functions, as well as with the solution of the diffusion equation. The errors of the moments of the simulated distributions of times of flight of photons (total number of photons, mean time of flight, and variance) are less than 2% for a range of optical properties typical of living tissues. The proposed approximated algorithm speeds up the Monte Carlo simulations by a factor of 4. The developed code can be used on parallel machines, allowing for further acceleration.
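    The optimization idea can be sketched in a few lines: replace the exact logarithm in the photon step-length sampling s = -ln(xi)/mu_t with a cheap polynomial plus exponent-based range reduction. The degree-5 coefficients below are fitted on the fly for illustration rather than taken from the paper, and comfortably stay within the 1% error budget quoted above.

```python
import math
import numpy as np

# Degree-5 least-squares fit of ln(x) on the mantissa range [0.5, 1];
# frexp-based range reduction then covers all positive floats.
_xs = np.linspace(0.5, 1.0, 256)
_COEF = np.polyfit(_xs, np.log(_xs), 5)
_LN2 = math.log(2.0)

def fast_log(x):
    """Approximate ln(x) via x = m * 2**e, so ln(x) = poly(m) + e*ln(2)."""
    m, e = math.frexp(x)  # m in [0.5, 1)
    return float(np.polyval(_COEF, m)) + e * _LN2

def step_length(xi, mu_t):
    """Photon free-path sampling s = -ln(xi)/mu_t, the inner-loop step
    whose logarithm cost motivates the approximation."""
    return -fast_log(xi) / mu_t
```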

  1. On the power law of passive scalars in turbulence

    NASA Astrophysics Data System (ADS)

    Gotoh, Toshiyuki; Watanabe, Takeshi

    2015-11-01

    It has long been considered that the moments of the scalar increment with separation distance r obey a power law with scaling exponents in the inertial-convective range, and that the exponents are insensitive to variation of the pumping of scalar fluctuations at large scales; thus the scaling exponents are universal. We examine the scaling behavior of the moments of increments of passive scalars 1 and 2 by using DNS with up to 4096^3 grid points. The two scalars are simultaneously convected by the same isotropic steady turbulence at Rλ = 805, but excited by two different methods. Scalar 1 is excited by random scalar injection that is isotropic, Gaussian, and white in time in a low-wavenumber band, while scalar 2 is excited by a uniform mean scalar gradient. It is found that the local scaling exponents of scalar 1 have a logarithmic correction, meaning that the moments of scalar 1 do not obey a simple power law. On the other hand, the moments of scalar 2 are found to obey a well-developed power law with exponents consistent with those in the literature. Physical reasons for the difference are explored. Supported by Grants-in-Aid for Scientific Research 15H02218 and 26420106, NIFS14KNSS050, HPCI projects hp150088 and hp140024, and JHPCN project jh150012.

  2. Rate Dependence in Force Networks of Sheared Granular Materials

    NASA Astrophysics Data System (ADS)

    Hartley, Robert; Behringer, Robert P.

    2003-03-01

    We describe experiments that explore rate dependence in force networks of dense granular materials undergoing slow deformation by shear and by compression. The experiments were carried out using 2D photoelastic particles so that it was possible to visualize forces at the grain scale. Shear experiments were carried out in a Couette geometry with a rate Ω. Compression experiments were carried out by repetitive compaction via a piston in a rigid chamber at rates comparable to those of the shear experiments. Under shearing, the mean stress/force grew logarithmically with Ω for at least four decades. For compression, no dependence of the mean stress on rate was observed. In related measurements, we observed relaxation of stress in static samples that had been sheared and where the shearing was abruptly stopped. Relaxation of the force network occurred over time scales of days. No relaxation of the force network was observable for uniformly compressed static samples. These results are of particular interest because they provide insight into creep and failure in granular materials.

  3. Modification of the large-scale features of high Reynolds number wall turbulence by passive surface obtrusions

    NASA Astrophysics Data System (ADS)

    Monty, J. P.; Allen, J. J.; Lien, K.; Chong, M. S.

    2011-12-01

    A high Reynolds number boundary-layer wind-tunnel facility at New Mexico State University was fitted with a regularly distributed braille surface. The surface was such that braille dots were closely packed in the streamwise direction and sparsely spaced in the spanwise direction. This novel surface had an unexpected influence on the flow: the energy of the very large-scale features of wall turbulence (approximately six times the boundary-layer thickness in length) became significantly attenuated, even into the logarithmic region. To the authors' knowledge, this is the first experimental study to report a modification of `superstructures' in a rough-wall turbulent boundary layer. The result gives rise to the possibility that flow control through very small, passive surface roughness may be possible at high Reynolds numbers, without the prohibitive drag penalty anticipated heretofore. Evidence was also found for the uninhibited existence of the near-wall cycle, well known to smooth-wall-turbulence researchers, in the spanwise space between roughness elements.

  4. Cavitation pitting and erosion of aluminum 6061-T6 in mineral oil and water

    NASA Technical Reports Server (NTRS)

    Rao, B. C. S.; Buckley, D. H.

    1983-01-01

    Cavitation erosion studies of aluminum 6061-T6 in mineral oil and in ordinary tap water are presented. The maximum erosion rate (MDPR, or mean depth of penetration rate) in mineral oil was about four times that in water. The MDPR in mineral oil decreased continuously with time, but the MDPR in water remained approximately constant. The cavitation pits in mineral oil were of smaller diameter and depth than the pits in water. Treating the pits as spherical segments, we computed the radius r of the sphere. The logarithm of h/a, where h is the pit depth and 2a is the top width of the pit, was linear when plotted against the logarithm of 2r/h - 1.
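The spherical-segment geometry behind that plot can be made explicit. For a cap of depth h and top radius a, the chord relation a^2 = h(2r - h) gives r = (a^2 + h^2)/(2h), and then 2r/h - 1 = (a/h)^2, so for ideal spherical segments log(h/a) is exactly linear in log(2r/h - 1) with slope -1/2. The sketch below verifies this identity; the pit dimensions are illustrative numbers, not measured data from the paper.

```python
import numpy as np

# Treating a cavitation pit of depth h and top width 2a as a spherical
# segment, the radius r of the sphere follows from the chord relation
# a**2 = h * (2*r - h), i.e. r = (a**2 + h**2) / (2*h).

def sphere_radius(h, a):
    """Radius of the sphere whose cap has depth h and top radius a."""
    return (a**2 + h**2) / (2.0 * h)

h = np.array([0.5, 1.0, 2.0, 3.0])   # pit depths (arbitrary units, illustrative)
a = np.array([2.0, 3.0, 5.0, 6.0])   # pit top radii (illustrative)

r = sphere_radius(h, a)
# Since 2r/h - 1 = (a/h)**2, log(h/a) vs log(2r/h - 1) has slope -1/2.
x = np.log(2.0 * r / h - 1.0)
y = np.log(h / a)
slope = np.polyfit(x, y, 1)[0]
print(slope)                          # -0.5 up to rounding, by the identity above
```

For real pits the measured points scatter around this line, which is why the paper reports the relation as empirically linear rather than exact.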

  5. Estimating loblolly pine size-density trajectories across a range of planting densities

    Treesearch

    Curtis L. VanderSchaaf; Harold E. Burkhart

    2013-01-01

    Size-density trajectories on the logarithmic (ln) scale are generally thought to consist of two major stages. The first is often referred to as the density-independent mortality stage where the probability of mortality is independent of stand density; in the second, often referred to as the density-dependent mortality or self-thinning stage, the probability of...

  6. Detrended fluctuation analysis of short datasets: An application to fetal cardiac data

    NASA Astrophysics Data System (ADS)

    Govindan, R. B.; Wilson, J. D.; Preißl, H.; Eswaran, H.; Campbell, J. Q.; Lowery, C. L.

    2007-02-01

    Using detrended fluctuation analysis (DFA), we perform scaling analysis of short datasets of 500-1500 data points. We quantify the long range correlation (exponent α) by computing the mean value of the local exponents αL (in the asymptotic regime). The local exponents are obtained as the (numerical) derivative of the logarithm of the fluctuation function F(s) with respect to the logarithm of the scale factor s: αL = d log10 F(s) / d log10 s. These local exponents display large variations and complicate the correct quantification of the underlying correlations. We propose the use of the phase randomized surrogate (PRS), which preserves the long range correlations of the original data, to minimize the variations in the local exponents. Using numerically generated uncorrelated and long range correlated data, we show that performing DFA on several realizations of PRS and estimating αL from the averaged fluctuation functions (of all realizations) can minimize the variations in αL. The application of this approach to fetal cardiac data (RR intervals) is discussed, and we show that there is a statistically significant correlation between α and the gestational age.
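The quantities involved can be sketched with a minimal first-order DFA implementation. This is an illustrative sketch under stated assumptions (linear detrending, dyadic scales, a short white-noise series for which α ≈ 0.5), not the authors' code, and it omits their phase-randomized-surrogate averaging step:

```python
import numpy as np

def dfa_fluctuation(x, scales):
    """Fluctuation function F(s) of first-order DFA (linear detrending)."""
    y = np.cumsum(x - np.mean(x))          # integrated profile
    F = []
    for s in scales:
        n = len(y) // s                    # number of non-overlapping windows
        f2 = []
        for i in range(n):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear trend
            f2.append(np.mean((seg - trend)**2))
        F.append(np.sqrt(np.mean(f2)))
    return np.array(F)

# Local exponents alpha_L = d log10 F(s) / d log10 s, as in the abstract.
rng = np.random.default_rng(1)
x = rng.standard_normal(1500)              # short uncorrelated dataset
scales = np.array([8, 16, 32, 64, 128])
F = dfa_fluctuation(x, scales)
alpha_L = np.diff(np.log10(F)) / np.diff(np.log10(scales))
print(alpha_L)                              # scattered around 0.5 for white noise
```

The scatter of `alpha_L` across adjacent scale pairs on such a short series is exactly the variability the PRS-averaging procedure of the paper is designed to suppress.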

  7. Solubility and crystallization of xylose isomerase from Streptomyces rubiginosus

    NASA Astrophysics Data System (ADS)

    Vuolanto, Antti; Uotila, Sinikka; Leisola, Matti; Visuri, Kalevi

    2003-10-01

    We have studied the crystallization and crystal solubility of xylose isomerase (XI) from Streptomyces rubiginosus. In this paper, we show a rational approach to developing a large-scale crystallization process for XI. First, we measured the crystal solubility in salt solutions with respect to salt concentration, temperature, and pH. In ammonium sulfate the solubility of XI decreased logarithmically with increasing salt concentration. Surprisingly, the XI crystals had a solubility minimum at low concentrations of magnesium sulfate. The solubility of XI in 0.17 M magnesium sulfate was less than 0.5 g l^-1. The solubility of XI increased logarithmically with increasing temperature. We also found a solubility minimum around pH 7. This is far from the isoelectric point of XI (pH 3.95). Second, based on the solubility study, we developed a large-scale crystallization process for XI. In a simple and economical cooling crystallization of XI from 0.17 M magnesium sulfate solution, the recovery of crystalline active enzyme was over 95%. Moreover, we developed a process for the production of uniform crystals and produced homogeneous crystals with average crystal sizes between 12 and 360 μm.

  8. Electroweak gauge-boson production at small q_T: Infrared safety from the collinear anomaly

    NASA Astrophysics Data System (ADS)

    Becher, Thomas; Neubert, Matthias; Wilhelm, Daniel

    2012-02-01

    Using methods from effective field theory, we develop a novel, systematic framework for the calculation of the cross sections for electroweak gauge-boson production at small and very small transverse momentum q_T, in which large logarithms of the scale ratio M_V/q_T are resummed to all orders. These cross sections receive logarithmically enhanced corrections from two sources: the running of the hard matching coefficient and the collinear factorization anomaly. The anomaly leads to the dynamical generation of a non-perturbative scale q* ~ M_V e^(-const/α_s(M_V)), which protects the processes from receiving large long-distance hadronic contributions. Expanding the cross sections in either α_s or q_T generates strongly divergent series, which must be resummed. As a by-product, we obtain an explicit non-perturbative expression for the intercept of the cross sections at q_T = 0, including the normalization and first-order α_s(q*) correction. We perform a detailed numerical comparison of our predictions with the available data on the transverse-momentum distribution in Z-boson production at the Tevatron and LHC.

  9. Logarithmic Compression of Sensory Signals within the Dendritic Tree of a Collision-Sensitive Neuron

    PubMed Central

    2012-01-01

    Neurons in a variety of species, both vertebrate and invertebrate, encode the kinematics of objects approaching on a collision course through a time-varying firing rate profile that initially increases, then peaks, and eventually decays as collision becomes imminent. In this temporal profile, the peak firing rate signals when the approaching object's subtended size reaches an angular threshold, an event which has been related to the timing of escape behaviors. In a locust neuron called the lobula giant motion detector (LGMD), the biophysical basis of this angular threshold computation relies on a multiplicative combination of the object's angular size and speed, achieved through a logarithmic-exponential transform. To understand how this transform is implemented, we modeled the encoding of angular velocity along the pathway leading to the LGMD based on the experimentally determined activation pattern of its presynaptic neurons. These simulations show that the logarithmic transform of angular speed occurs between the synaptic conductances activated by the approaching object onto the LGMD's dendritic tree and its membrane potential at the spike initiation zone. Thus, we demonstrate an example of how a single neuron's dendritic tree implements a mathematical step in a neural computation important for natural behavior. PMID:22492048
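The logarithmic-exponential transform referred to above is commonly written in the locust literature as a firing rate η(t) ∝ θ̇(t)·exp(−α·θ(t)), where θ(t) is the object's angular size during approach. The sketch below is an illustrative check of a known property of that formulation (not this paper's simulations, and all parameter values are assumptions): the peak of η tracks a fixed angular threshold 2·arctan(1/α), independent of the object's size-to-speed ratio l/|v|.

```python
import numpy as np

# For an object of half-size l approaching at speed v, with time to
# collision tau: theta(tau) = 2*arctan(k/tau), with k = l/|v|, and
# theta_dot = 2*k / (tau**2 + k**2).  The eta-function model peaks at
# tau = alpha*k, where theta = 2*arctan(1/alpha): a fixed angular threshold.

alpha = 3.0                                   # illustrative model parameter
tau = np.linspace(0.005, 3.0, 200000)         # time to collision, s

peaks = []
for k in (0.01, 0.05, 0.2):                   # size-to-speed ratios l/|v|, s
    theta = 2.0 * np.arctan(k / tau)
    theta_dot = 2.0 * k / (tau**2 + k**2)
    eta = theta_dot * np.exp(-alpha * theta)  # log-exp multiplicative model
    peaks.append(theta[np.argmax(eta)])       # angular size at the peak

print(peaks, 2.0 * np.arctan(1.0 / alpha))    # all peaks at the same angle
```

This invariance of the peak angle with respect to l/|v| is what links the peak firing rate to an angular-size threshold in the text above.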

  10. THE ANALYSIS OF BIOLUMINESCENCES OF SHORT DURATION, RECORDED WITH PHOTOELECTRIC CELL AND STRING GALVANOMETER

    PubMed Central

    Harvey, E. Newton; Snell, Peter A.

    1931-01-01

    1. The rapid decay of luminescence in extracts of the ostracod crustacean Cypridina hilgendorfii has been studied by means of a photoelectric-amplifier-string galvanometer recording system. 2. For rapid flashes of luminescence, the decay is logarithmic if the ratio of luciferin to luciferase is small; logarithmic plus an initial flash, if the ratio of luciferin to luciferase is greater than five. The logarithmic plot of luminescence intensity against time is concave to the time axis if the ratio of luciferin to luciferase is very large. 3. The velocity constant of rapid flashes of luminescence is approximately proportional to enzyme concentration, is independent of luciferin concentration, and varies approximately inversely as the square root of the total luciferin (luciferin + oxyluciferin) concentration. For large total luciferin concentrations, the velocity constant is almost independent of the total luciferin. 4. The variation of the velocity constant with total luciferin concentration (luciferin + oxyluciferin) and its independence of luciferin concentration are explained by assuming that light intensity is a measure of the luciferin molecules which become activated to oxidize (with accompanying luminescence) by adsorption on luciferase. The adsorption equilibrium is the same for luciferin and oxyluciferin and determines the velocity constant. PMID:19872603

  11. Aging Wiener-Khinchin theorem and critical exponents of 1/f^{β} noise.

    PubMed

    Leibovich, N; Dechant, A; Lutz, E; Barkai, E

    2016-11-01

    The power spectrum of a stationary process may be calculated in terms of the autocorrelation function using the Wiener-Khinchin theorem. We here generalize the Wiener-Khinchin theorem for nonstationary processes and introduce a time-dependent power spectrum 〈S_{t_{m}}(ω)〉 where t_{m} is the measurement time. For processes with an aging autocorrelation function of the form 〈I(t)I(t+τ)〉=t^{Υ}ϕ_{EA}(τ/t), where ϕ_{EA}(x) is a nonanalytic function when x is small, we find aging 1/f^{β} noise. Aging 1/f^{β} noise is characterized by five critical exponents. We derive the relations between the scaled autocorrelation function and these exponents. We show that our definition of the time-dependent spectrum retains its interpretation as a density of Fourier modes and discuss the relation to the apparent infrared divergence of 1/f^{β} noise. We illustrate our results for blinking-quantum-dot models, single-file diffusion, and Brownian motion in a logarithmic potential.

  12. Time and temperature dependent modulus of pyrrone and polyimide moldings

    NASA Technical Reports Server (NTRS)

    Lander, L. L.

    1972-01-01

    A method is presented by which the modulus obtained from a stress relaxation test can be used to estimate the modulus which would be obtained from a sonic vibration test. The method was applied to stress relaxation, sonic vibration, and high speed stress-strain data which was obtained on a flexible epoxy. The modulus as measured by the three test methods was identical for identical test times, and a change of test temperature was equivalent to a shift in the logarithmic time scale. An estimate was then made of the dynamic modulus of moldings of two Pyrrones and two polyimides, using stress relaxation data and the method of analysis which was developed for the epoxy. Over the common temperature range (350 to 500 K) in which data from both types of tests were available, the estimated dynamic modulus value differed by only a few percent from the measured value. As a result, it is concluded that, over the 500 to 700 K temperature range, the estimated dynamic modulus values are accurate.

  13. Universal behaviour of interoccurrence times between losses in financial markets: An analytical description

    NASA Astrophysics Data System (ADS)

    Ludescher, J.; Tsallis, C.; Bunde, A.

    2011-09-01

    We consider 16 representative financial records (stocks, indices, commodities, and exchange rates) and study the distribution P_Q(r) of the interoccurrence times r between daily losses below negative thresholds -Q, for fixed mean interoccurrence time R_Q. We find that in all cases P_Q(r) follows the form P_Q(r) ~ 1/[1+(q-1)βr]^(1/(q-1)), where β and q are universal constants that depend only on R_Q, but not on the specific asset. While β depends only slightly on R_Q, the q-value increases logarithmically with R_Q, q = 1 + q_0 ln(R_Q/2), such that for R_Q → 2, P_Q(r) approaches a simple exponential, P_Q(r) ≅ 2^(-r). The fact that P_Q does not scale with R_Q is due to the multifractality of the financial markets. The analytic form of P_Q also allows one to estimate both the risk function and the Value-at-Risk, and thus to improve the estimation of financial risk.
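The quoted form is a q-exponential. A minimal sketch of it, with illustrative parameter values rather than the fitted constants from the 16 records, shows the exponential limit and the fat tail:

```python
import numpy as np

# q-exponential interoccurrence-time distribution (unnormalized), as in
# the abstract: P_Q(r) ~ [1 + (q-1)*beta*r]**(-1/(q-1)).  For q -> 1 this
# reduces to exp(-beta*r); with beta = ln 2 that is 2**(-r), the R_Q -> 2
# limit quoted above.  q = 1.3 below is an illustrative assumption.

def q_exponential(r, q, beta):
    if np.isclose(q, 1.0):
        return np.exp(-beta * r)               # q -> 1 limit
    return (1.0 + (q - 1.0) * beta * r) ** (-1.0 / (q - 1.0))

r = np.arange(0, 50)
p_q = q_exponential(r, q=1.3, beta=np.log(2))  # fat-tailed (large R_Q regime)
p_1 = q_exponential(r, q=1.0, beta=np.log(2))  # = 2**(-r)
print(p_1[:3])                                  # ≈ [1, 0.5, 0.25]
```

The heavier tail of `p_q` relative to the simple exponential is what makes long loss-free stretches far more likely than an uncorrelated model would suggest.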

  14. Transition to exponential relaxation in weakly disordered electron glasses

    NASA Astrophysics Data System (ADS)

    Ovadyahu, Z.

    2018-06-01

    The out-of-equilibrium excess conductance of electron glasses ΔG(t) typically relaxes with a logarithmic time dependence. Here it is shown that the log(t) relaxation of a weakly disordered InxO film crosses over asymptotically to an exponential dependence ΔG(t) ∝ exp[-t/τ(∞)]. This allows for assigning a well-defined relaxation time τ(∞) for a given system disorder (characterized by the Ioffe-Regel parameter kFℓ). Near the metal-insulator transition, τ(∞) obeys the scaling relation τ(∞) ∝ [(kFℓ)C - kFℓ] with the same critical disorder (kFℓ)C where the zero-temperature conductivity of this system vanishes. The latter defines the position of the disorder-driven metal-insulator transition, which is a quantum phase transition. In this regard the electron glass differs from classical glasses, such as the structural glass and the spin glass. The ability to experimentally assign an unambiguous relaxation time allows us to demonstrate the steep dependence of the electron-glass dynamics on carrier concentration.

  15. The Behavioral Economics of Choice and Interval Timing

    ERIC Educational Resources Information Center

    Jozefowiez, J.; Staddon, J. E. R.; Cerutti, D. T.

    2009-01-01

    The authors propose a simple behavioral economic model (BEM) describing how reinforcement and interval timing interact. The model assumes a Weber-law-compliant logarithmic representation of time. Associated with each represented time value are the payoffs that have been obtained for each possible response. At a given real time, the response with…

  16. Z -boson decays to a vector quarkonium plus a photon

    NASA Astrophysics Data System (ADS)

    Bodwin, Geoffrey T.; Chung, Hee Sok; Ee, June-Haak; Lee, Jungil

    2018-01-01

    We compute the decay rates for the processes Z → V + γ, where Z is the Z-boson, γ is the photon, and V is one of the vector quarkonia J/ψ or ϒ(nS), with n = 1, 2, or 3. Our computations include corrections through relative orders α_s and v^2 and resummations of logarithms of m_Z^2/m_Q^2, to all orders in α_s, at next-to-leading-logarithmic accuracy. (v is the velocity of the heavy quark Q or the heavy antiquark Q̄ in the quarkonium rest frame, and m_Z and m_Q are the masses of Z and Q, respectively.) Our calculations are the first to include both the order-α_s correction to the light-cone distribution amplitude and the resummation of logarithms of m_Z^2/m_Q^2 and are the first calculations for the ϒ(2S) and ϒ(3S) final states. The resummations of logarithms of m_Z^2/m_Q^2 that are associated with the order-α_s and order-v^2 corrections are carried out by making use of the Abel-Padé method. We confirm the analytic result for the order-v^2 correction that was presented in a previous publication, and we correct the relative sign of the direct and indirect amplitudes and some choices of scales in that publication. Our branching fractions for Z → J/ψ + γ and Z → ϒ(1S) + γ differ by 2.0σ and -4.0σ, respectively, from the branching fractions that are given in the most recent publication on this topic (in units of the uncertainties that are given in that publication). However, we argue that the uncertainties in the rates are underestimated in that publication.

  17. A computer graphics display and data compression technique

    NASA Technical Reports Server (NTRS)

    Teague, M. J.; Meyer, H. G.; Levenson, L. (Editor)

    1974-01-01

    The computer program discussed is intended for the graphical presentation of a general dependent variable X that is a function of two independent variables, U and V. The required input to the program is the variation of the dependent variable with one of the independent variables for various fixed values of the other. The computer program is named CRP, and the output is provided by the SD 4060 plotter. Program CRP is an extremely flexible program that offers the user a wide variety of options. The dependent variable may be presented in either a linear or a logarithmic manner. Automatic centering of the plot is provided in the ordinate direction, and the abscissa is scaled automatically for a logarithmic plot. A description of the carpet plot technique is given along with the coordinates system used in the program. Various aspects of the program logic are discussed and detailed documentation of the data card format is presented.

  18. Relationships between Rainy Days, Mean Daily Intensity, and Seasonal Rainfall over the Koyna Catchment during 1961–2005

    PubMed Central

    Nandargi, S.; Mulye, S. S.

    2012-01-01

    There are limitations in using monthly rainfall totals in studies of rainfall climatology as well as in hydrological and agricultural investigations. Variations in rainfall may be considered to result from frequency changes in the daily rainfall of the respective regime. In the present study, daily rainfall data of the stations inside the Koyna catchment have been analysed for the period 1961-2005 to understand the relationships between rainy days, mean daily intensity (MDI), and seasonal rainfall over the catchment on a monthly as well as a seasonal scale. Considering the topographical location of the catchment, analysis of the seasonal rainfall data of 8 stations suggests that a linear relationship fits better than a logarithmic relationship in the case of seasonal rainfall versus mean daily intensity. So far as seasonal rainfall versus number of rainy days is concerned, the logarithmic relationship is found to be better. PMID:22654646
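The model comparison described above amounts to fitting a straight line in the predictor versus in its logarithm and comparing residuals. The sketch below illustrates this with synthetic numbers (the rainfall values and station counts are invented for illustration, not Koyna data):

```python
import numpy as np

# Competing fits as in the abstract: seasonal rainfall S vs. a predictor x,
# either linear (S = a + b*x) or logarithmic (S = a + b*ln(x)), compared by
# residual sum of squares.  All numbers below are synthetic.

def fit_rss(x, y):
    """Least-squares line y = a + b*x and its residual sum of squares."""
    b, a = np.polyfit(x, y, 1)
    resid = y - (a + b * x)
    return (a, b), float(np.sum(resid**2))

rainy_days = np.array([60.0, 75.0, 90.0, 105.0, 120.0, 135.0])
# Synthetic seasonal rainfall generated from a logarithmic relation + noise.
rainfall = 800.0 * np.log(rainy_days) - 2500.0
rainfall = rainfall + np.array([5.0, -8.0, 3.0, 6.0, -4.0, -2.0])

_, rss_linear = fit_rss(rainy_days, rainfall)
_, rss_log = fit_rss(np.log(rainy_days), rainfall)
print(rss_log < rss_linear)   # the log model fits log-generated data better
```

Comparing residual sums of squares (or R^2) between the two functional forms is the standard way to decide which relationship "fits better" in studies of this kind.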

  19. Top-pair production at hadron colliders with next-to-next-to-leading logarithmic soft-gluon resummation

    NASA Astrophysics Data System (ADS)

    Cacciari, Matteo; Czakon, Michał; Mangano, Michelangelo; Mitov, Alexander; Nason, Paolo

    2012-04-01

    Incorporating all recent theoretical advances, we resum soft-gluon corrections to the total ttbar cross-section at hadron colliders at next-to-next-to-leading logarithmic (NNLL) order. We perform the resummation in the well-established framework of Mellin N-space resummation. We exhaustively study the sources of systematic uncertainty, such as renormalization and factorization scale variation, power-suppressed effects, and missing two- and higher-loop corrections. The inclusion of soft-gluon resummation at NNLL brings only a minor decrease in the perturbative uncertainty with respect to the NLL approximation, and a small shift in the central value, consistent with the quoted uncertainties. These numerical predictions agree with the currently available measurements from the Tevatron and LHC and have uncertainties of similar size. We conclude that significant improvements in the ttbar cross-sections can potentially be expected only upon inclusion of the complete NNLO corrections.

  20. Resummation of high order corrections in Higgs boson plus jet production at the LHC

    DOE PAGES

    Sun, Peng; Isaacson, Joshua; Yuan, C. -P.; ...

    2017-02-22

    We study the effect of multiple parton radiation on Higgs boson plus jet production at the LHC. The large logarithms arising from the small imbalance in the transverse momentum of the Higgs boson plus jet final-state system are resummed to all orders in the expansion of the strong interaction coupling at next-to-leading logarithm (NLL) accuracy, by applying the transverse momentum dependent (TMD) factorization formalism. We show that the appropriate resummation scale is the jet transverse momentum, rather than the partonic center-of-mass energy that has normally been used in the TMD resummation formalism. Furthermore, the transverse momentum distribution of the Higgs boson, particularly near the lower cut-off applied on the jet transverse momentum, can only be reliably predicted by the resummation calculation, which is free of the so-called Sudakov-shoulder singularity problem present in fixed-order calculations.

  1. Two Universality Properties Associated with the Monkey Model of Zipf's Law

    NASA Astrophysics Data System (ADS)

    Perline, Richard; Perline, Ron

    2016-03-01

    The distribution of word probabilities in the monkey model of Zipf's law is associated with two universality properties: (1) the power law exponent converges strongly to -1 as the alphabet size increases and the letter probabilities are specified as the spacings from a random division of the unit interval, for any distribution with a bounded density function on [0,1]; and (2) on a logarithmic scale, the version of the model with a finite word length cutoff and unequal letter probabilities is approximately normally distributed in the part of the distribution away from the tails. The first property is proved using a remarkably general limit theorem of Shao and Hahn for the logarithm of sample spacings, and the second property follows from Anscombe's central limit theorem for a random number of i.i.d. random variables. The finite word length model leads to a hybrid Zipf-lognormal mixture distribution closely related to work in other areas.

  2. Resummation of high order corrections in Higgs boson plus jet production at the LHC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Peng; Isaacson, Joshua; Yuan, C. -P.

    We study the effect of multiple parton radiation on Higgs boson plus jet production at the LHC. The large logarithms arising from the small imbalance in the transverse momentum of the Higgs boson plus jet final-state system are resummed to all orders in the expansion of the strong interaction coupling at next-to-leading logarithm (NLL) accuracy, by applying the transverse momentum dependent (TMD) factorization formalism. We show that the appropriate resummation scale is the jet transverse momentum, rather than the partonic center-of-mass energy that has normally been used in the TMD resummation formalism. Furthermore, the transverse momentum distribution of the Higgs boson, particularly near the lower cut-off applied on the jet transverse momentum, can only be reliably predicted by the resummation calculation, which is free of the so-called Sudakov-shoulder singularity problem present in fixed-order calculations.

  3. Representational change and strategy use in children's number line estimation during the first years of primary school.

    PubMed

    White, Sonia L J; Szűcs, Dénes

    2012-01-04

    The objective of this study was to scrutinize number line estimation behaviors displayed by children in mathematics classrooms during the first three years of schooling. We extend existing research not only by mapping potential logarithmic-linear shifts but also by providing a new perspective through a detailed study of the estimation strategies for individual target digits within a number range familiar to children. Typically developing children (n = 67) from Years 1-3 completed a number-to-position numerical estimation task (0-20 number line). Estimation behaviors were first analyzed via logarithmic and linear regression modeling. Subsequently, using an analysis of variance we compared the estimation accuracy of each digit, thus identifying target digits that were estimated with the assistance of an arithmetic strategy. Our results further confirm a developmental logarithmic-linear shift when utilizing regression modeling; however, uniquely, we have identified that children employ variable strategies when completing numerical estimation, with levels of strategy advancing with development. In terms of the existing cognitive research, this strategy factor highlights the limitations of any regression modeling approach; alternatively, it could underpin the developmental time course of the logarithmic-linear shift. Future studies need to systematically investigate this relationship and also consider the implications for educational practice.

  4. Representational change and strategy use in children's number line estimation during the first years of primary school

    PubMed Central

    2012-01-01

    Background The objective of this study was to scrutinize number line estimation behaviors displayed by children in mathematics classrooms during the first three years of schooling. We extend existing research not only by mapping potential logarithmic-linear shifts but also by providing a new perspective through a detailed study of the estimation strategies for individual target digits within a number range familiar to children. Methods Typically developing children (n = 67) from Years 1-3 completed a number-to-position numerical estimation task (0-20 number line). Estimation behaviors were first analyzed via logarithmic and linear regression modeling. Subsequently, using an analysis of variance we compared the estimation accuracy of each digit, thus identifying target digits that were estimated with the assistance of an arithmetic strategy. Results Our results further confirm a developmental logarithmic-linear shift when utilizing regression modeling; however, uniquely, we have identified that children employ variable strategies when completing numerical estimation, with levels of strategy advancing with development. Conclusion In terms of the existing cognitive research, this strategy factor highlights the limitations of any regression modeling approach; alternatively, it could underpin the developmental time course of the logarithmic-linear shift. Future studies need to systematically investigate this relationship and also consider the implications for educational practice. PMID:22217191

  5. Arrhenius time-scaled least squares: a simple, robust approach to accelerated stability data analysis for bioproducts.

    PubMed

    Rauk, Adam P; Guo, Kevin; Hu, Yanling; Cahya, Suntara; Weiss, William F

    2014-08-01

    Defining a suitable product presentation with an acceptable stability profile over its intended shelf-life is one of the principal challenges in bioproduct development. Accelerated stability studies are routinely used as a tool to better understand long-term stability. Data analysis often employs an overall mass action kinetics description for the degradation and the Arrhenius relationship to capture the temperature dependence of the observed rate constant. To improve predictive accuracy and precision, the current work proposes a least-squares estimation approach with a single nonlinear covariate and uses a polynomial to describe the change in a product attribute with respect to time. The approach, which will be referred to as Arrhenius time-scaled (ATS) least squares, enables accurate, precise predictions to be achieved for degradation profiles commonly encountered during bioproduct development. A Monte Carlo study is conducted to compare the proposed approach with the common method of least-squares estimation on the logarithmic form of the Arrhenius equation and nonlinear estimation of a first-order model. The ATS least squares method accommodates a range of degradation profiles, provides a simple and intuitive approach for data presentation, and can be implemented with ease. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
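The ATS idea described above can be sketched as a fit with one nonlinear parameter: an activation energy that rescales time across temperatures, with a polynomial in the scaled time describing the attribute. The snippet below is a minimal illustration, not the authors' implementation; the activation energy, temperatures, polynomial degree, and grid search are all illustrative assumptions.

```python
import numpy as np

# Arrhenius time-scaled (ATS) least squares, sketched: a single nonlinear
# parameter Ea rescales time relative to a reference temperature, and a
# polynomial in scaled time fits the attribute across all temperatures.

R = 8.314          # gas constant, J/(mol K)
T_REF = 298.15     # reference temperature, K (illustrative)

def scaled_time(t, T, Ea):
    """Arrhenius-scaled time relative to T_REF."""
    return t * np.exp(-Ea / R * (1.0 / T - 1.0 / T_REF))

# Synthetic accelerated-stability data: a purity-like attribute declining
# quadratically in scaled time, with true Ea = 80 kJ/mol (noise-free).
t = np.tile(np.array([0.0, 1.0, 2.0, 3.0, 6.0]), 3)       # months
T = np.repeat(np.array([298.15, 308.15, 318.15]), 5)       # K
tau_true = scaled_time(t, T, 80e3)
y = 100.0 - 1.5 * tau_true - 0.05 * tau_true**2

def sse_for(Ea, deg=2):
    """Residual sum of squares of the polynomial fit for a candidate Ea."""
    tau = scaled_time(t, T, Ea)
    coef = np.polyfit(tau, y, deg)
    return float(np.sum((y - np.polyval(coef, tau))**2))

# Profile the single nonlinear parameter over a grid (1 kJ/mol steps).
grid = np.linspace(40e3, 120e3, 81)
best_Ea = grid[np.argmin([sse_for(Ea) for Ea in grid])]
print(best_Ea)   # recovers 80 kJ/mol on this noise-free example
```

Because only Ea enters nonlinearly, each candidate value reduces the problem to an ordinary linear least-squares fit, which is what makes the approach simple to implement and robust.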

  6. Geometric structure and information change in phase transitions

    NASA Astrophysics Data System (ADS)

    Kim, Eun-jin; Hollerbach, Rainer

    2017-06-01

    We propose a toy model for a cyclic order-disorder transition and introduce a geometric methodology to understand stochastic processes involved in transitions. Specifically, our model consists of a pair of forward and backward processes (FPs and BPs) for the emergence and disappearance of a structure in a stochastic environment. We calculate time-dependent probability density functions (PDFs) and the information length L, which is the total number of different states that a system undergoes during the transition. Time-dependent PDFs during transient relaxation exhibit strikingly different behavior in FPs and BPs. In particular, FPs driven by instability undergo the broadening of the PDF with a large increase in fluctuations before the transition to the ordered state accompanied by narrowing the PDF width. During this stage, we identify an interesting geodesic solution accompanied by the self-regulation between the growth and nonlinear damping where the time scale τ of information change is constant in time, independent of the strength of the stochastic noise. In comparison, BPs are mainly driven by the macroscopic motion due to the movement of the PDF peak. The total information length L between initial and final states is much larger in BPs than in FPs, increasing linearly with the deviation γ of a control parameter from the critical state in BPs while increasing logarithmically with γ in FPs. L scales as |lnD| and D^{-1/2} in FPs and BPs, respectively, where D measures the strength of the stochastic forcing. These differing scalings with γ and D suggest a great utility of L in capturing different underlying processes, specifically, diffusion vs advection in phase transition by geometry. We discuss physical origins of these scalings and comment on implications of our results for bistable systems undergoing repeated order-disorder transitions (e.g., fitness).

  7. Geometric structure and information change in phase transitions.

    PubMed

    Kim, Eun-Jin; Hollerbach, Rainer

    2017-06-01

    We propose a toy model for a cyclic order-disorder transition and introduce a geometric methodology to understand stochastic processes involved in transitions. Specifically, our model consists of a pair of forward and backward processes (FPs and BPs) for the emergence and disappearance of a structure in a stochastic environment. We calculate time-dependent probability density functions (PDFs) and the information length L, which is the total number of different states that a system undergoes during the transition. Time-dependent PDFs during transient relaxation exhibit strikingly different behavior in FPs and BPs. In particular, FPs driven by instability undergo the broadening of the PDF with a large increase in fluctuations before the transition to the ordered state accompanied by narrowing the PDF width. During this stage, we identify an interesting geodesic solution accompanied by the self-regulation between the growth and nonlinear damping where the time scale τ of information change is constant in time, independent of the strength of the stochastic noise. In comparison, BPs are mainly driven by the macroscopic motion due to the movement of the PDF peak. The total information length L between initial and final states is much larger in BPs than in FPs, increasing linearly with the deviation γ of a control parameter from the critical state in BPs while increasing logarithmically with γ in FPs. L scales as |lnD| and D^{-1/2} in FPs and BPs, respectively, where D measures the strength of the stochastic forcing. These differing scalings with γ and D suggest a great utility of L in capturing different underlying processes, specifically, diffusion vs advection in phase transition by geometry. We discuss physical origins of these scalings and comment on implications of our results for bistable systems undergoing repeated order-disorder transitions (e.g., fitness).
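
    The contrasting scalings of the information length with the noise amplitude D can be illustrated numerically. The sketch below assumes unit prefactors (the abstract reports only the functional forms |ln D| and D^{-1/2}, not these constants):

```python
import math

def info_length_fp(D, c=1.0):
    """Hypothetical FP scaling: L ~ |ln D| (prefactor c is an assumption)."""
    return c * abs(math.log(D))

def info_length_bp(D, c=1.0):
    """Hypothetical BP scaling: L ~ D**(-1/2) (prefactor c is an assumption)."""
    return c * D ** -0.5

# As the stochastic forcing D weakens, the BP information length grows much
# faster than the FP one, reflecting advection- vs diffusion-dominated dynamics.
for D in (1e-2, 1e-4, 1e-6):
    print(D, info_length_fp(D), info_length_bp(D))
```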

  8. Linking the fractional derivative and the Lomnitz creep law to non-Newtonian time-varying viscosity

    NASA Astrophysics Data System (ADS)

    Pandey, Vikash; Holm, Sverre

    2016-09-01

    Many of the most interesting complex media are non-Newtonian and exhibit time-dependent behavior of thixotropy and rheopecty. They may also have temporal responses described by power laws. The material behavior is represented by the relaxation modulus and the creep compliance. On the one hand, it is shown that in the special case of a Maxwell model characterized by a linearly time-varying viscosity, the medium's relaxation modulus is a power law which is similar to that of a fractional derivative element often called a springpot. On the other hand, the creep compliance of the time-varying Maxwell model is identified as Lomnitz's logarithmic creep law, making this possibly its first direct derivation. In this way both fractional derivatives and Lomnitz's creep law are linked to time-varying viscosity. A mechanism which yields fractional viscoelasticity and logarithmic creep behavior has therefore been found. Further, as a result of this linking, the curve-fitting parameters involved in the fractional viscoelastic modeling, and the Lomnitz law gain physical interpretation.

  9. Linking the fractional derivative and the Lomnitz creep law to non-Newtonian time-varying viscosity.

    PubMed

    Pandey, Vikash; Holm, Sverre

    2016-09-01

    Many of the most interesting complex media are non-Newtonian and exhibit time-dependent behavior of thixotropy and rheopecty. They may also have temporal responses described by power laws. The material behavior is represented by the relaxation modulus and the creep compliance. On the one hand, it is shown that in the special case of a Maxwell model characterized by a linearly time-varying viscosity, the medium's relaxation modulus is a power law which is similar to that of a fractional derivative element often called a springpot. On the other hand, the creep compliance of the time-varying Maxwell model is identified as Lomnitz's logarithmic creep law, making this possibly its first direct derivation. In this way both fractional derivatives and Lomnitz's creep law are linked to time-varying viscosity. A mechanism which yields fractional viscoelasticity and logarithmic creep behavior has therefore been found. Further, as a result of this linking, the curve-fitting parameters involved in the fractional viscoelastic modeling, and the Lomnitz law gain physical interpretation.
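
    Lomnitz's logarithmic creep law is commonly written as J(t) = (1/E)(1 + q ln(1 + at)). A minimal sketch with illustrative (not fitted) parameter values:

```python
import math

def lomnitz_creep(t, E=1.0, q=0.1, a=1.0):
    """Lomnitz creep compliance J(t) = (1/E) * (1 + q*ln(1 + a*t)).
    E (modulus), q, and a (rate, 1/time) are illustrative values, not data fits.
    At large t the compliance gains (q/E)*ln(10) per decade of time."""
    return (1.0 / E) * (1.0 + q * math.log(1.0 + a * t))
```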

  10. Quantum Algorithmic Readout in Multi-Ion Clocks.

    PubMed

    Schulte, M; Lörch, N; Leroux, I D; Schmidt, P O; Hammerer, K

    2016-01-08

    Optical clocks based on ensembles of trapped ions promise record frequency accuracy with good short-term stability. Most suitable ion species lack closed transitions, so the clock signal must be read out indirectly by transferring the quantum state of the clock ions to cotrapped logic ions of a different species. Existing methods of quantum logic readout require a linear overhead in either time or the number of logic ions. Here we describe a quantum algorithmic readout whose overhead scales logarithmically with the number of clock ions in both of these respects. The scheme allows a quantum nondemolition readout of the number of excited clock ions using a single multispecies gate operation which can also be used in other areas of ion trap technology such as quantum information processing, quantum simulations, metrology, and precision spectroscopy.

  11. Ergodic Transition in a Simple Model of the Continuous Double Auction

    PubMed Central

    Radivojević, Tijana; Anselmi, Jonatha; Scalas, Enrico

    2014-01-01

    We study a phenomenological model for the continuous double auction, whose aggregate order process is equivalent to two independent M/M/1 queues. The continuous double auction defines a continuous-time random walk for trade prices. The conditions for ergodicity of the auction are derived and, as a consequence, three possible regimes in the behavior of prices and logarithmic returns are observed. In the ergodic regime, prices are unstable and one can observe a heteroskedastic behavior in the logarithmic returns. On the contrary, non-ergodicity triggers stability of prices, even if two different regimes can be seen. PMID:24558377

  12. Ergodic transition in a simple model of the continuous double auction.

    PubMed

    Radivojević, Tijana; Anselmi, Jonatha; Scalas, Enrico

    2014-01-01

    We study a phenomenological model for the continuous double auction, whose aggregate order process is equivalent to two independent M/M/1 queues. The continuous double auction defines a continuous-time random walk for trade prices. The conditions for ergodicity of the auction are derived and, as a consequence, three possible regimes in the behavior of prices and logarithmic returns are observed. In the ergodic regime, prices are unstable and one can observe a heteroskedastic behavior in the logarithmic returns. On the contrary, non-ergodicity triggers stability of prices, even if two different regimes can be seen.

  13. A two-layer multiple-time-scale turbulence model and grid independence study

    NASA Technical Reports Server (NTRS)

    Kim, S.-W.; Chen, C.-P.

    1989-01-01

    A two-layer multiple-time-scale turbulence model is presented. The near-wall model is based on the classical Kolmogorov-Prandtl turbulence hypothesis and the semi-empirical logarithmic law of the wall. In the two-layer model presented, the computational domain of the conservation of mass and mean momentum equations extends to the wall, where the no-slip boundary condition is prescribed, while the near-wall boundary of the turbulence equations is located in the fully turbulent region, very close to the wall, where the standard wall function method is applied. Thus, the conservation of mass constraint can be satisfied more rigorously in the two-layer model than with the standard wall function method. In most two-layer turbulence models, the number of grid points required inside the near-wall layer raises the issue of computational efficiency. The present finite element computations showed that grid-independent solutions were obtained with as few as two grid points, i.e., one quadratic element, inside the near-wall layer. A comparison of the computational results obtained with the two-layer model and those obtained with the wall function method is also presented.

  14. Evidence of Temporal Postdischarge Decontamination of Bacteria by Gliding Electric Discharges: Application to Hafnia alvei

    PubMed Central

    Kamgang-Youbi, Georges; Herry, Jean-Marie; Bellon-Fontaine, Marie-Noëlle; Brisset, Jean-Louis; Doubla, Avaly; Naïtali, Murielle

    2007-01-01

    This study aimed to characterize the bacterium-destroying properties of a gliding arc plasma device during electric discharges and also under temporal postdischarge conditions (i.e., when the discharge was switched off). This phenomenon was reported for the first time in the literature in the case of the plasma destruction of microorganisms. When cells of a model bacterium, Hafnia alvei, were exposed to electric discharges, followed or not followed by temporal postdischarges, the survival curves exhibited a shoulder and then log-linear decay. These destruction kinetics were modeled using GinaFiT, a freeware tool to assess microbial survival curves, and adjustment parameters were determined. The efficiency of postdischarge treatments was clearly affected by the discharge time (t*); both the shoulder length and the inactivation rate kmax were linearly modified as a function of t*. Nevertheless, all conditions tested (t* ranging from 2 to 5 min) made it possible to achieve an abatement of at least 7 decimal logarithm units. Postdischarge treatment was also efficient against bacteria not subjected to direct discharge, and the disinfecting properties of “plasma-activated water” were dependent on the treatment time for the solution. Water treated with plasma for 2 min achieved a 3.7-decimal-logarithm-unit reduction in 20 min after application to cells, and abatement greater than 7 decimal logarithm units resulted from the same contact time with water activated with plasma for 10 min. These disinfecting properties were maintained during storage of activated water for 30 min. After that, they declined as the storage time increased. PMID:17557841
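
    The "shoulder followed by log-linear decay" survival shape can be sketched with a simplified piecewise model (a stand-in for the Geeraerd-type model fitted with GInaFiT; every parameter value below is invented for illustration):

```python
import math

def log10_survivors(t, log_n0=8.0, shoulder=2.0, k_max=3.0):
    """Simplified shoulder + log-linear survival curve.
    t: treatment time (min); log_n0: initial log10 population;
    shoulder: shoulder length (min); k_max: first-order inactivation rate (1/min).
    Returns log10 of the surviving population; the /ln(10) factor converts the
    first-order rate to decimal reductions."""
    if t <= shoulder:
        return log_n0
    return log_n0 - k_max * (t - shoulder) / math.log(10)
```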

  15. Spatiotemporal characterization of Ensemble Prediction Systems - the Mean-Variance of Logarithms (MVL) diagram

    NASA Astrophysics Data System (ADS)

    Gutiérrez, J. M.; Primo, C.; Rodríguez, M. A.; Fernández, J.

    2008-02-01

    We present a novel approach to characterize and graphically represent the spatiotemporal evolution of ensembles using a simple diagram. To this aim we analyze the fluctuations obtained as differences between each member of the ensemble and the control. The lognormal character of these fluctuations suggests a characterization in terms of the first two moments of the logarithmic transformed values. On one hand, the mean is associated with the exponential growth in time. On the other hand, the variance accounts for the spatial correlation and localization of fluctuations. In this paper we introduce the MVL (Mean-Variance of Logarithms) diagram to intuitively represent the interplay and evolution of these two quantities. We show that this diagram uncovers useful information about the spatiotemporal dynamics of the ensemble. Some universal features of the diagram are also described, associated either with the nonlinear system or with the ensemble method and illustrated using both toy models and numerical weather prediction systems.
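
    A minimal sketch of how one point of an MVL diagram could be computed from an ensemble, assuming fluctuations are the differences between each member and the control as described above (this is an illustration, not the authors' code):

```python
import math

def mvl_point(control, members, eps=1e-12):
    """One MVL-diagram point: mean and variance of ln|x_member - x_control|
    pooled over members and grid points. eps avoids log(0) for identical values."""
    logs = []
    for member in members:
        for xm, xc in zip(member, control):
            logs.append(math.log(abs(xm - xc) + eps))
    n = len(logs)
    mean = sum(logs) / n
    var = sum((v - mean) ** 2 for v in logs) / n
    return mean, var
```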

  16. Parallel, exhaustive processing underlies logarithmic search functions: Visual search with cortical magnification.

    PubMed

    Wang, Zhiyuan; Lleras, Alejandro; Buetti, Simona

    2018-04-17

    Our lab recently found evidence that efficient visual search (with a fixed target) is characterized by logarithmic Reaction Time (RT) × Set Size functions whose steepness is modulated by the similarity between target and distractors. To determine whether this pattern of results was based on low-level visual factors uncontrolled by previous experiments, we minimized the possibility of crowding effects in the display, compensated for the cortical magnification factor by magnifying search items based on their eccentricity, and compared search performance on such displays to performance on displays without magnification compensation. In both cases, the RT × Set Size functions were found to be logarithmic, and the modulation of the log slopes by target-distractor similarity was replicated. Consistent with previous results in the literature, cortical magnification compensation eliminated most target eccentricity effects. We conclude that the log functions and their modulation by target-distractor similarity relations reflect a parallel exhaustive processing architecture for early vision.
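
    The logarithmic RT × Set Size relation can be quantified by fitting RT = a + b·ln(N) with ordinary least squares; the log slope b is the quantity modulated by target-distractor similarity. A sketch on synthetic data (not the study's):

```python
import math

def fit_log_rt(set_sizes, rts):
    """Ordinary least squares fit of RT = a + b * ln(N).
    Returns (intercept a, log slope b)."""
    xs = [math.log(n) for n in set_sizes]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(rts) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, rts)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b
```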

  17. Genetic parameters for racing records in trotters using linear and generalized linear models.

    PubMed

    Suontama, M; van der Werf, J H J; Juga, J; Ojala, M

    2012-09-01

    Heritability and repeatability and genetic and phenotypic correlations were estimated for trotting race records with linear and generalized linear models using 510,519 records on 17,792 Finnhorses and 513,161 records on 25,536 Standardbred trotters. Heritability and repeatability were estimated for single racing time and earnings traits with linear models, and logarithmic scale was used for racing time and fourth-root scale for earnings to correct for nonnormality. Generalized linear models with a gamma distribution were applied for single racing time and with a multinomial distribution for single earnings traits. In addition, genetic parameters for annual earnings were estimated with linear models on the observed and fourth-root scales. Racing success traits of single placings, winnings, breaking stride, and disqualifications were analyzed using generalized linear models with a binomial distribution. Estimates of heritability were greatest for racing time, which ranged from 0.32 to 0.34. Estimates of heritability were low for single earnings with all distributions, ranging from 0.01 to 0.09. Annual earnings were closer to normal distribution than single earnings. Heritability estimates were moderate for annual earnings on the fourth-root scale, 0.19 for Finnhorses and 0.27 for Standardbred trotters. Heritability estimates for binomial racing success variables ranged from 0.04 to 0.12, being greatest for winnings and least for breaking stride. Genetic correlations among racing traits were high, whereas phenotypic correlations were mainly low to moderate, except correlations between racing time and earnings were high. On the basis of a moderate heritability and moderate to high repeatability for racing time and annual earnings, selection of horses for these traits is effective when based on a few repeated records. Because of high genetic correlations, direct selection for racing time and annual earnings would also result in good genetic response in racing success.

  18. Logarithmic contact time dependence of adhesion force and its dominant role among the effects of AFM experimental parameters under low humidity

    NASA Astrophysics Data System (ADS)

    Lai, Tianmao; Meng, Yonggang

    2017-10-01

    The influences of contact time, normal load, piezo velocity, and number of repeated measurements on the adhesion force between two silicon surfaces were studied with an atomic force microscope (AFM) at low humidity (15-17%). Results show that the adhesion force is time-dependent and increases logarithmically with contact time until saturation is reached, which is related to the growing size of a water bridge between the surfaces. Contact time plays the dominant role among these parameters. The adhesion forces under different normal loads and piezo velocities can be obtained quantitatively simply by determining the length of contact time, provided that the contact time dependence is known. The time-dependent adhesion force with repeated contacts at one location usually increases sharply at first and then slowly with the number of measurements until saturation is reached, consistent with the contact time dependence. The behavior of the adhesion force with repeated contacts can be adjusted through the lengths of contact time and non-contact time. These results may help facilitate the anti-adhesion design of silicon-based microscale systems working under low humidity.
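
    The reported trend, logarithmic growth of adhesion force with contact time up to saturation, can be sketched as a simple piecewise model. All parameter values below are illustrative, not measured:

```python
import math

def adhesion_force(t, f0=20.0, c=5.0, t0=1e-3, f_sat=60.0):
    """Hypothetical adhesion-force model: F(t) = f0 + c*ln(t/t0), capped at f_sat.
    t: contact time (s); f0: force at reference time t0 (nN); c: log slope (nN);
    f_sat: saturation force (nN). All values are made up for illustration."""
    return min(f0 + c * math.log(t / t0), f_sat)
```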

  19. The influence of current speed and vegetation density on flow structure in two macrotidal eelgrass canopies

    USGS Publications Warehouse

    Lacy, Jessica R.; Wyllie-Echeverria, Sandy

    2011-01-01

    The influence of eelgrass (Zostera marina) on near-bed currents, turbulence, and drag was investigated at three sites in two eelgrass canopies of differing density and at one unvegetated site in the San Juan archipelago of Puget Sound, Washington, USA. Eelgrass blade length exceeded 1 m. Velocity profiles up to 1.5 m above the sea floor were collected over a spring-neap tidal cycle with a downward-looking pulse-coherent acoustic Doppler profiler above the canopies and two acoustic Doppler velocimeters within the canopies. The eelgrass attenuated currents by a minimum of 40%, and by more than 70% at the most densely vegetated site. Attenuation decreased with increasing current speed. The data were compared to the shear-layer model of vegetated flows and the displaced logarithmic model. Velocity profiles outside the meadows were logarithmic. Within the canopies, most profiles were consistent with the shear-layer model, with a logarithmic layer above the canopy. However, at the less-dense sites, when currents were strong, shear at the sea floor and above the canopy was significant relative to shear at the top of the canopy, and the velocity profiles more closely resembled those in a rough-wall boundary layer. Turbulence was strong at the canopy top and decreased with height. Friction velocity at the canopy top was 1.5–2 times greater than at the unvegetated, sandy site. The coefficient of drag C_D on the overlying flow, derived from the logarithmic velocity profile above the canopy, was 3–8 times greater than at the unvegetated site (0.01–0.023 vs. 2.9 × 10⁻³).
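
    Both the displaced logarithmic model and the above-canopy fits build on the law of the wall, u(z) = (u*/κ) ln(z/z₀). A minimal sketch (κ = 0.41; the friction velocity and roughness length below are illustrative, not the measured values):

```python
import math

def log_law_velocity(z, u_star, z0, kappa=0.41):
    """Logarithmic law-of-the-wall profile u(z) = (u_star / kappa) * ln(z / z0).
    z: height above the bed (m); u_star: friction velocity (m/s);
    z0: roughness length (m). Valid for z > z0."""
    return (u_star / kappa) * math.log(z / z0)
```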

  20. How Long Does It Take to Describe What One Sees? A Logarithmic Speed-Difficulty Trade-off in Speech Production

    PubMed Central

    Latash, Mark L.; Mikaelian, Irina L.

    2010-01-01

    We explored the relations between task difficulty and speech time in picture description tasks. Six native speakers of Mandarin Chinese (CH group) and six native speakers of Indo-European languages (IE group) produced quick and accurate verbal descriptions of pictures in a self-paced manner. The pictures always involved two objects: a plate and one of three objects (a stick, a fork, or a knife) located and oriented differently with respect to the plate in different trials. An index of difficulty was assigned to each picture. The CH group showed lower reaction times and much lower speech times. Speech time scaled linearly with the log-transformed index of difficulty in all subjects. The results suggest generality of Fitts' law for movement and speech tasks, and possibly for other cognitive tasks as well. The differences between the CH and IE groups may be due to specific task features, differences in the grammatical rules of CH and IE languages, and possible use of tone for information transmission. PMID:21339514
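
    The Fitts-type speed-difficulty trade-off described above has the form T = a + b·log₂(ID). A sketch with a made-up intercept and slope (the abstract reports only linear scaling with the log-transformed index of difficulty, not these values):

```python
import math

def fitts_speech_time(index_of_difficulty, a=0.8, b=0.4):
    """Fitts-type law for speech time: T = a + b * log2(ID).
    a (s) and b (s/bit) are illustrative values, not the study's fitted ones."""
    return a + b * math.log2(index_of_difficulty)
```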

  1. QLog Solar-Cell Mode Photodiode Logarithmic CMOS Pixel Using Charge Compression and Readout †

    PubMed Central

    Ni, Yang

    2018-01-01

    In this paper, we present a new logarithmic pixel design currently under development at New Imaging Technologies SA (NIT). This new logarithmic pixel design uses charge domain logarithmic signal compression and charge-transfer-based signal readout. This structure gives a linear response in low light conditions and logarithmic response in high light conditions. The charge transfer readout efficiently suppresses the reset (KTC) noise by using true correlated double sampling (CDS) in low light conditions. In high light conditions, thanks to charge domain logarithmic compression, it has been demonstrated that 3000 electrons should be enough to cover a 120 dB dynamic range with a mobile phone camera-like signal-to-noise ratio (SNR) over the whole dynamic range. This low electron count permits the use of ultra-small floating diffusion capacitance (sub-fF) without charge overflow. The resulting large conversion gain permits a single photon detection capability with a wide dynamic range without a complex sensor/system design. A first prototype sensor with 320 × 240 pixels has been implemented to validate this charge domain logarithmic pixel concept and modeling. The first experimental results validate the logarithmic charge compression theory and the low readout noise due to the charge-transfer-based readout. PMID:29443903

  2. QLog Solar-Cell Mode Photodiode Logarithmic CMOS Pixel Using Charge Compression and Readout.

    PubMed

    Ni, Yang

    2018-02-14

    In this paper, we present a new logarithmic pixel design currently under development at New Imaging Technologies SA (NIT). This new logarithmic pixel design uses charge domain logarithmic signal compression and charge-transfer-based signal readout. This structure gives a linear response in low light conditions and logarithmic response in high light conditions. The charge transfer readout efficiently suppresses the reset (KTC) noise by using true correlated double sampling (CDS) in low light conditions. In high light conditions, thanks to charge domain logarithmic compression, it has been demonstrated that 3000 electrons should be enough to cover a 120 dB dynamic range with a mobile phone camera-like signal-to-noise ratio (SNR) over the whole dynamic range. This low electron count permits the use of ultra-small floating diffusion capacitance (sub-fF) without charge overflow. The resulting large conversion gain permits a single photon detection capability with a wide dynamic range without a complex sensor/system design. A first prototype sensor with 320 × 240 pixels has been implemented to validate this charge domain logarithmic pixel concept and modeling. The first experimental results validate the logarithmic charge compression theory and the low readout noise due to the charge-transfer-based readout.
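
    The linear-response-at-low-light, logarithmic-at-high-light behavior can be sketched as a continuous piecewise function. The knee charge and the exact functional form below are assumptions for illustration only, not NIT's actual pixel transfer curve:

```python
import math

def qlog_response(q, q_knee=100.0):
    """Illustrative pixel response: linear in collected charge q (electrons)
    below an assumed knee, logarithmic above it, continuous at the knee."""
    if q <= q_knee:
        return q
    return q_knee * (1.0 + math.log(q / q_knee))
```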

  3. Investigation of logarithmic spiral nanoantennas at optical frequencies

    NASA Astrophysics Data System (ADS)

    Verma, Anamika; Pandey, Awanish; Mishra, Vigyanshu; Singh, Ten; Alam, Aftab; Dinesh Kumar, V.

    2013-12-01

    We report the first study of a logarithmic spiral antenna in the optical frequency range. Using the finite integration technique, we investigated the spectral and radiation properties of a logarithmic spiral nanoantenna and a complementary structure made of thin gold film. A comparison is made with results for an Archimedean spiral nanoantenna. Such nanoantennas can exhibit broadband behavior that is independent of polarization. Two prominent features of logarithmic spiral nanoantennas are highly directional far-field emission and perfectly circularly polarized radiation when excited by a linearly polarized source. The logarithmic spiral nanoantenna thus promises advantages over Archimedean spirals and could be harnessed for several applications in nanophotonics and allied areas.

  4. New type of hill-top inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barvinsky, A.O.; Department of Physics, Tomsk State University, Lenin Ave. 36, Tomsk 634050; Department of Physics and Astronomy, Pacific Institute for Theoretical Physics, University of British Columbia, 6224 Agricultural Road, Vancouver, BC V6T 1Z1

    2016-01-20

    We suggest a new type of hill-top inflation originating from the initial conditions in the form of the microcanonical density matrix for the cosmological model with a large number of quantum fields conformally coupled to gravity. Initial conditions for inflation are set up by cosmological instantons describing underbarrier oscillations in the vicinity of the inflaton potential maximum. These periodic oscillations of the inflaton field and cosmological scale factor are obtained within the approximation of two coupled oscillators subject to the slow roll regime in the Euclidean time. This regime is characterized by rapid oscillations of the scale factor on the background of a slowly varying inflaton, which guarantees smallness of slow roll parameters ϵ and η of the following inflation stage. A hill-like shape of the inflaton potential is shown to be generated by logarithmic loop corrections to the tree-level asymptotically shift-invariant potential in the non-minimal Higgs inflation model and R^2-gravity. The solution to the problem of hierarchy between the Planckian scale and the inflation scale is discussed within the concept of conformal higher spin fields, which also suggests the mechanism bringing the model below the gravitational cutoff and, thus, protecting it from large graviton loop corrections.

  5. New type of hill-top inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barvinsky, A.O.; Nesterov, D.V.; Kamenshchik, A.Yu., E-mail: barvin@td.lpi.ru, E-mail: Alexander.Kamenshchik@bo.infn.it, E-mail: nesterov@td.lpi.ru

    2016-01-01

    We suggest a new type of hill-top inflation originating from the initial conditions in the form of the microcanonical density matrix for the cosmological model with a large number of quantum fields conformally coupled to gravity. Initial conditions for inflation are set up by cosmological instantons describing underbarrier oscillations in the vicinity of the inflaton potential maximum. These periodic oscillations of the inflaton field and cosmological scale factor are obtained within the approximation of two coupled oscillators subject to the slow roll regime in the Euclidean time. This regime is characterized by rapid oscillations of the scale factor on the background of a slowly varying inflaton, which guarantees smallness of slow roll parameters ε and η of the following inflation stage. A hill-like shape of the inflaton potential is shown to be generated by logarithmic loop corrections to the tree-level asymptotically shift-invariant potential in the non-minimal Higgs inflation model and R^2-gravity. The solution to the problem of hierarchy between the Planckian scale and the inflation scale is discussed within the concept of conformal higher spin fields, which also suggests the mechanism bringing the model below the gravitational cutoff and, thus, protecting it from large graviton loop corrections.

  6. Spatio-temporal error growth in the multi-scale Lorenz'96 model

    NASA Astrophysics Data System (ADS)

    Herrera, S.; Fernández, J.; Rodríguez, M. A.; Gutiérrez, J. M.

    2010-07-01

    The influence of multiple spatio-temporal scales on the error growth and predictability of atmospheric flows is analyzed throughout the paper. To this aim, we consider the two-scale Lorenz'96 model and study the interplay of the slow and fast variables on the error growth dynamics. It is shown that when the coupling between slow and fast variables is weak the slow variables dominate the evolution of fluctuations whereas in the case of strong coupling the fast variables impose a non-trivial complex error growth pattern on the slow variables with two different regimes, before and after saturation of fast variables. This complex behavior is analyzed using the recently introduced Mean-Variance Logarithmic (MVL) diagram.

  7. Combining states without scale hierarchies with ordered parton showers

    DOE PAGES

    Fischer, Nadine; Prestel, Stefan

    2017-09-12

    Here, we present a parameter-free scheme to combine fixed-order multi-jet results with parton-shower evolution. The scheme produces jet cross sections with leading-order accuracy in the complete phase space of multiple emissions, resumming large logarithms when appropriate, while not arbitrarily enforcing ordering on momentum configurations beyond the reach of the parton-shower evolution equation. This then requires the development of a matrix-element correction scheme for complex phase-spaces including ordering conditions as well as a systematic scale-setting procedure for unordered phase-space points. Our algorithm does not require a merging-scale parameter. We implement the new method in the Vincia framework and compare to LHC data.

  8. Next-to-leading-logarithmic power corrections for N -jettiness subtraction in color-singlet production

    NASA Astrophysics Data System (ADS)

    Boughezal, Radja; Isgrò, Andrea; Petriello, Frank

    2018-04-01

    We present a detailed derivation of the power corrections to the factorization theorem for the 0-jettiness event shape variable T. Our calculation is performed directly in QCD without using the formalism of effective field theory. We analytically calculate the next-to-leading logarithmic power corrections for small T at next-to-leading order in the strong coupling constant, extending previous computations which obtained only the leading-logarithmic power corrections. We address a discrepancy in the literature between results for the leading-logarithmic power corrections to a particular definition of 0-jettiness. We present a numerical study of the power corrections in the context of their application to the N-jettiness subtraction method for higher-order calculations, using gluon-fusion Higgs production as an example. The inclusion of the next-to-leading-logarithmic power corrections further improves the numerical efficiency of the approach beyond the improvement obtained from the leading-logarithmic power corrections.

  9. Fast Decentralized Averaging via Multi-scale Gossip

    NASA Astrophysics Data System (ADS)

    Tsianos, Konstantinos I.; Rabbat, Michael G.

    We are interested in the problem of computing the average consensus in a distributed fashion on random geometric graphs. We describe a new algorithm called Multi-scale Gossip which employs a hierarchical decomposition of the graph to partition the computation into tractable sub-problems. Using only pairwise messages of fixed size that travel at most O(n^{1/3}) hops, our algorithm is robust and has a communication cost of O(n log log n log ε^{-1}) transmissions, which is order-optimal up to the logarithmic factor in n. Simulated experiments verify the good expected performance on graphs of many thousands of nodes.
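
    For intuition, plain randomized pairwise gossip (the baseline that Multi-scale Gossip improves on) already drives every node to the global average; the hierarchical variant reduces how far messages must travel. A sketch of the baseline:

```python
import random

def pairwise_gossip(values, rounds=2000, seed=0):
    """Baseline randomized gossip (not the Multi-scale variant): two randomly
    chosen nodes repeatedly replace their values with the pair average, which
    preserves the global sum and shrinks disagreement toward zero."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    v = list(values)
    n = len(v)
    for _ in range(rounds):
        i, j = rng.randrange(n), rng.randrange(n)
        m = (v[i] + v[j]) / 2.0
        v[i] = v[j] = m
    return v
```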

  10. A Biologically Plausible Transform for Visual Recognition that is Invariant to Translation, Scale, and Rotation.

    PubMed

    Sountsov, Pavel; Santucci, David M; Lisman, John E

    2011-01-01

    Visual object recognition occurs easily despite differences in position, size, and rotation of the object, but the neural mechanisms responsible for this invariance are not known. We have found a set of transforms that achieve invariance in a neurally plausible way. We find that a transform based on local spatial frequency analysis of oriented segments and on logarithmic mapping, when applied twice in an iterative fashion, produces an output image that is unique to the object and that remains constant as the input image is shifted, scaled, or rotated.

  11. A Biologically Plausible Transform for Visual Recognition that is Invariant to Translation, Scale, and Rotation

    PubMed Central

    Sountsov, Pavel; Santucci, David M.; Lisman, John E.

    2011-01-01

    Visual object recognition occurs easily despite differences in position, size, and rotation of the object, but the neural mechanisms responsible for this invariance are not known. We have found a set of transforms that achieve invariance in a neurally plausible way. We find that a transform based on local spatial frequency analysis of oriented segments and on logarithmic mapping, when applied twice in an iterative fashion, produces an output image that is unique to the object and that remains constant as the input image is shifted, scaled, or rotated. PMID:22125522

  12. Weak mixing below the weak scale in dark-matter direct detection

    NASA Astrophysics Data System (ADS)

    Brod, Joachim; Grinstein, Benjamin; Stamou, Emmanuel; Zupan, Jure

    2018-02-01

    If dark matter couples predominantly to the axial-vector currents with heavy quarks, the leading contribution to dark-matter scattering on nuclei is either due to one-loop weak corrections or due to the heavy-quark axial charges of the nucleons. We calculate the effects of Higgs and weak gauge-boson exchanges for dark matter coupling to heavy-quark axial-vector currents in an effective theory below the weak scale. By explicit computation, we show that the leading-logarithmic QCD corrections are important, and thus resum them to all orders using the renormalization group.

  13. Soft collinear effective theory for heavy WIMP annihilation

    DOE PAGES

    Bauer, Martin; Cohen, Timothy; Hill, Richard J.; ...

    2015-01-19

    In a large class of models for Weakly Interacting Massive Particles (WIMPs), the WIMP mass M lies far above the weak scale m_W. This work identifies universal Sudakov-type logarithms ~ α log^2(2M/m_W) that spoil the naive convergence of perturbation theory for annihilation processes. An effective field theory (EFT) framework is presented, allowing the systematic resummation of these logarithms. Another impact of the large separation of scales is that a long-distance wavefunction distortion from electroweak boson exchange leads to observable modifications of the cross section. Careful accounting of momentum regions in the EFT allows the rigorous disentanglement of this so-called Sommerfeld enhancement from the short-distance hard annihilation process. In addition, the WIMP is described as a heavy-particle field, while the electroweak gauge bosons are treated as soft and collinear fields. Hard matching coefficients are computed at renormalization scale μ ~ 2M, then evolved down to μ ~ m_W, where electroweak symmetry breaking is incorporated and the matching onto the relevant quantum mechanical Hamiltonian is performed. The example of an SU(2)_W triplet scalar dark matter candidate annihilating to line photons is used for concreteness, allowing the numerical exploration of the impact of next-to-leading order corrections and log resummation. As a result, for M ≃ 3 TeV, the resummed Sommerfeld-enhanced cross section is reduced by a factor of ~ 3 with respect to the tree-level fixed-order result.

  14. Noncommuting observables in quantum detection and estimation theory

    NASA Technical Reports Server (NTRS)

    Helstrom, C. W.

    1972-01-01

    Basing decisions and estimates on simultaneous approximate measurements of noncommuting observables in a quantum receiver is shown to be equivalent to measuring commuting projection operators on a larger Hilbert space than that of the receiver itself. The quantum-mechanical Cramer-Rao inequalities derived from right logarithmic derivatives and symmetrized logarithmic derivatives of the density operator are compared, and it is shown that the latter give superior lower bounds on the error variances of individual unbiased estimates of arrival time and carrier frequency of a coherent signal. For a suitably weighted sum of the error variances of simultaneous estimates of these, the former yield the superior lower bound under some conditions.

  15. Pattern recognition neural-net by spatial mapping of biology visual field

    NASA Astrophysics Data System (ADS)

    Lin, Xin; Mori, Masahiko

    2000-05-01

    The method of spatial mapping in biology vision field is applied to artificial neural networks for pattern recognition. By the coordinate transform that is called the complex-logarithm mapping and Fourier transform, the input images are transformed into scale- rotation- and shift- invariant patterns, and then fed into a multilayer neural network for learning and recognition. The results of computer simulation and an optical experimental system are described.
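
    The invariance property of the complex-logarithm mapping described above can be checked on coordinates alone: scaling the input multiplies |z| and therefore shifts the log|z| axis, while rotation shifts the angular axis. A minimal sketch (the specific test points are illustrative):

```python
import math

def complex_log_map(x, y):
    """Complex-logarithm mapping: z = x + iy -> (log|z|, arg z).
    Scaling the input becomes a translation along the log|z| axis;
    rotating it becomes a translation along the angle axis."""
    r = math.hypot(x, y)
    theta = math.atan2(y, x)
    return math.log(r), theta

u0, v0 = complex_log_map(3.0, 4.0)
u1, v1 = complex_log_map(6.0, 8.0)   # the same point scaled by 2
print(u1 - u0, v1 - v0)              # shift of log 2 along u, none along v
```

    Because a Fourier transform's magnitude spectrum is translation-invariant, following this mapping with a Fourier transform (as in the entry above) converts scale and rotation changes into phase shifts that the magnitude discards.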

  16. Evaluating the assumption of power-law late time scaling of breakthrough curves in highly heterogeneous media

    NASA Astrophysics Data System (ADS)

    Pedretti, Daniele

    2017-04-01

    Power-law (PL) distributions are widely adopted to define the late-time scaling of solute breakthrough curves (BTCs) during transport experiments in highly heterogeneous media. However, from a statistical perspective, distinguishing between a PL distribution and another tailed distribution is difficult, particularly when a qualitative assessment based on visual analysis of double-logarithmic plotting is used. This presentation aims to discuss the results from a recent analysis in which a suite of statistical tools was applied to rigorously evaluate the scaling of BTCs from experiments that generate tailed distributions typically described as PL at late time. To this end, a set of BTCs from numerical simulations in highly heterogeneous media was generated using a transition probability approach (T-PROGS) coupled to a finite-difference numerical solver of the flow equation (MODFLOW) and a random walk particle tracking approach for Lagrangian transport (RW3D). The T-PROGS fields assumed randomly distributed hydraulic heterogeneities with long correlation scales, creating solute channeling and anomalous transport. For simplicity, transport was simulated as purely advective. This combination of tools generates strongly non-symmetric BTCs visually resembling PL distributions at late time when plotted in double-log scales. Unlike other combinations of modeling parameters and boundary conditions (e.g. matrix diffusion in fractures), at late time no direct link exists between the mathematical functions describing the scaling of these curves and the physical parameters controlling transport. The results suggest that the statistical tests fail to describe the majority of curves as PL distributed. Moreover, they suggest that PL and lognormal distributions have the same likelihood to represent parametrically the shape of the tails. It is noticeable that forcing a model to reproduce the tail as a PL function results in a distribution of PL slopes between 1.2 and 4, which are the typical values observed during field experiments. We conclude that care must be taken when defining a BTC late-time distribution as a power-law function. Even though the estimated scaling factors are found to fall in traditional ranges, the actual distribution controlling the scaling of concentration may differ from a power-law function, with direct consequences, for instance, for the selection of effective parameters in upscaling modeling solutions.

  17. Mapping soil total nitrogen of cultivated land at county scale by using hyperspectral image

    NASA Astrophysics Data System (ADS)

    Gu, Xiaohe; Zhang, Li Yan; Shu, Meiyan; Yang, Guijun

    2018-02-01

    Monitoring the total nitrogen content (TNC) in the soil of cultivated land quantitatively and mapping its spatial distribution are helpful for crop growth, soil fertility adjustment and the sustainable development of agriculture. The study aimed to develop a universal method to map the total nitrogen content in the soil of cultivated land from HSI imagery at county scale. Several mathematical transformations were used to improve the expressive ability of the HSI image. The correlations between soil TNC and the reflectivity and its mathematical transformations were analyzed. The susceptible bands and their transformations were then screened to develop an optimized model to map soil TNC in Anping County based on multiple linear regression. Results showed that the 14th, 16th, 19th, 37th and 60th bands with different mathematical transformations were screened as susceptible bands. Differential transformation was helpful for reducing noise interference with the diagnostic ability of the target spectrum. The determination coefficient of the first-order differential of the logarithmic transformation was the largest (0.505), while the RMSE was the lowest. The study confirmed the first-order differential of the logarithmic transformation as the optimal inversion model for soil TNC, which was used to map the soil TNC of cultivated land in the study area.

  18. Stochastic exponential synchronization of memristive neural networks with time-varying delays via quantized control.

    PubMed

    Zhang, Wanli; Yang, Shiju; Li, Chuandong; Zhang, Wei; Yang, Xinsong

    2018-08-01

    This paper focuses on stochastic exponential synchronization of delayed memristive neural networks (MNNs) with the aid of systems with interval parameters, which are established by using the concept of the Filippov solution. A new intermittent controller and an adaptive controller with logarithmic quantization are constructed to deal with the difficulties induced by time-varying delays, interval parameters and stochastic perturbations simultaneously. Moreover, not only can the control cost be reduced, but communication channels and bandwidth are also saved by using these controllers. Based on novel Lyapunov functions and new analytical methods, several synchronization criteria are established to realize the exponential synchronization of MNNs with stochastic perturbations via intermittent control and adaptive control with or without logarithmic quantization. Finally, numerical simulations are offered to substantiate our theoretical results. Copyright © 2018 Elsevier Ltd. All rights reserved.
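
    The bandwidth saving mentioned above comes from logarithmic quantization of the control signal. A generic textbook sketch of such a quantizer (an assumption for illustration, not this paper's exact controller; the density parameter rho is hypothetical):

```python
import math

def log_quantize(x, rho=0.8):
    """Logarithmic quantizer: quantization levels are {±rho**k, k integer}.
    Choosing the nearest level keeps the relative error below
    (1 - rho) / (1 + rho) at any signal magnitude, which is why a fixed,
    small set of bits suffices over a wide dynamic range."""
    if x == 0:
        return 0.0
    sign, a = (1.0, x) if x > 0 else (-1.0, -x)
    k = math.floor(math.log(a) / math.log(rho))  # rho**(k+1) < a <= rho**k
    lo, hi = rho ** (k + 1), rho ** k
    q = hi if (hi - a) <= (a - lo) else lo       # nearest level wins
    return sign * q

print([round(log_quantize(v), 4) for v in (0.037, 1.0, -2.5, 640.0)])
```

    The sector bound |q(x) - x| <= ((1 - rho)/(1 + rho))|x| is what synchronization proofs of this kind typically exploit when absorbing the quantization error into the Lyapunov analysis.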

  19. Fine tuning of optical signals in nanoporous anodic alumina photonic crystals by apodized sinusoidal pulse anodisation.

    PubMed

    Santos, Abel; Law, Cheryl Suwen; Chin Lei, Dominique Wong; Pereira, Taj; Losic, Dusan

    2016-11-03

    In this study, we present an advanced nanofabrication approach to produce gradient-index photonic crystal structures based on nanoporous anodic alumina. An apodisation strategy is applied for the first time to a sinusoidal pulse anodisation process in order to engineer the photonic stop band of nanoporous anodic alumina (NAA) in depth. Four apodisation functions are explored, including linear positive, linear negative, logarithmic positive and logarithmic negative, with the aim of finely tuning the characteristic photonic stop band of these photonic crystal structures. We systematically analyse the effect of the amplitude difference (from 0.105 to 0.840 mA cm -2 ), the pore widening time (from 0 to 6 min), the anodisation period (from 650 to 950 s) and the anodisation time (from 15 to 30 h) on the quality and the position of the characteristic photonic stop band and the interferometric colour of these photonic crystal structures using the aforementioned apodisation functions. Our results reveal that a logarithmic negative apodisation function is the optimal approach to obtain unprecedented well-resolved and narrow photonic stop bands across the UV-visible-NIR spectrum of NAA-based gradient-index photonic crystals. Our study establishes a fully comprehensive rationale towards the development of unique NAA-based photonic crystal structures with finely engineered optical properties for advanced photonic devices such as ultra-sensitive optical sensors, selective optical filters and all-optical platforms for quantum computing.

  20. Comparative diagnostics of allergy using quantitative immuno-PCR and ELISA.

    PubMed

    Simonova, Maria A; Pivovarov, Victor D; Ryazantsev, Dmitry Y; Dolgova, Anna S; Berzhets, Valentina M; Zavriev, Sergei K; Svirshchevskaya, Elena V

    2018-05-01

    Estimation of specific IgE is essential for the prevention of allergy progression. Quantitative immuno-PCR (qiPCR) can increase the sensitivity of IgE detection. We aimed to develop qiPCR and compare it to the conventional ELISA in identification of IgE to Alt a 1 and Fel d 1 allergens. Single stranded 60-mer DNA conjugated to streptavidin was used to detect antigen-IgE-biotin complex by qiPCR. In semi-logarithmic scale qiPCR data were linear in a full range of serum dilutions resulting in three- to ten-times higher sensitivity of qiPCR in comparison with ELISA in IgE estimation in low titer sera. Higher sensitivity of qiPCR in identification of low titer IgE is a result of a higher linearity of qiPCR data.

  1. Deposition and persistence of beachcast seabird carcasses

    USGS Publications Warehouse

    van Pelt, Thomas I.; Piatt, John F.

    1995-01-01

    Following a massive wreck of guillemots (Uria aalge) in late winter and spring of 1993, we monitored the deposition and subsequent disappearance of 398 beachcast guillemot carcasses on two beaches in Resurrection Bay, Alaska, during a 100 day period. Deposition of carcasses declined logarithmically with time after the original event. Since fresh carcasses were more likely to be removed between counts than older carcasses, persistence rates increased logarithmically over time. Scavenging appeared to be the primary cause of carcass removal, followed by burial in beach debris and sand. Along-shore transport was negligible. We present an equation which estimates the number of carcasses deposited at time zero from beach surveys conducted some time later, using non-linear persistence rates that are a function of time. We use deposition rates to model the accumulation of beached carcasses, accounting for further deposition subsequent to the original event. Finally, we present a general method for extrapolating from a single count the number of carcasses cumulatively deposited on surveyed beaches, and discuss how our results can be used to assess the magnitude of mass seabird mortality events from beach surveys.

  2. The time dependence of rock healing as a universal relaxation process, a tutorial

    NASA Astrophysics Data System (ADS)

    Snieder, Roel; Sens-Schönfelder, Christoph; Wu, Renjie

    2017-01-01

    The material properties of earth materials often change after the material has been perturbed (slow dynamics). For example, the seismic velocity of subsurface materials changes after earthquakes, and granular materials compact after being shaken. Such relaxation processes are associated with observables that change logarithmically with time. Since the logarithm diverges for short and long times, the relaxation cannot, strictly speaking, have a log-time dependence. We present a self-contained description of a relaxation function that consists of a superposition of decaying exponentials that has log-time behaviour for intermediate times, but converges to zero for long times, and is finite for t = 0. The relaxation function depends on two parameters, the minimum and maximum relaxation time. These parameters can, in principle, be extracted from the observed relaxation. As an example, we present a crude model of a fracture that is closing under an external stress. Although the fracture model violates some of the assumptions on which the relaxation function is based, it follows the relaxation function well. We provide qualitative arguments that the relaxation process, just like the Gutenberg-Richter law, is applicable to a wide range of systems and has universal properties.
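
    The relaxation function described above can be checked numerically. A minimal sketch, assuming relaxation times distributed uniformly on a logarithmic scale between tau_min and tau_max (the specific values are illustrative):

```python
import math

def relaxation(t, tau_min=1e-4, tau_max=1e4, n=20000):
    """Superposition of decaying exponentials with a flat distribution of
    relaxation times on a log scale:
        R(t) = integral of exp(-t/tau) dtau/tau over [tau_min, tau_max].
    R(0) = ln(tau_max/tau_min) is finite, R(t) -> 0 for t >> tau_max, and
    for tau_min << t << tau_max the decay is logarithmic in t."""
    total = 0.0
    log_lo, log_hi = math.log(tau_min), math.log(tau_max)
    du = (log_hi - log_lo) / n
    for k in range(n):
        tau = math.exp(log_lo + (k + 0.5) * du)  # midpoint rule in ln(tau)
        total += math.exp(-t / tau) * du         # dtau/tau = d(ln tau)
    return total

# In the intermediate regime, a tenfold increase of t lowers R by ln 10,
# i.e. the decay is linear in log t.
drop = relaxation(0.01) - relaxation(0.1)
print(drop, math.log(10))
```

    The unit slope in ln t for intermediate times, together with the finite values at t = 0 and t -> infinity, is exactly the resolution of the short- and long-time divergence of the naive logarithm that the abstract describes.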

  3. Detection of right-to-left shunts: comparison between the International Consensus and Spencer Logarithmic Scale criteria.

    PubMed

    Lao, Annabelle Y; Sharma, Vijay K; Tsivgoulis, Georgios; Frey, James L; Malkoff, Marc D; Navarro, Jose C; Alexandrov, Andrei V

    2008-10-01

    International Consensus Criteria (ICC) consider right-to-left shunt (RLS) present when Transcranial Doppler (TCD) detects even one microbubble (microB). Spencer Logarithmic Scale (SLS) offers more grades of RLS, with detection of >30 microB corresponding to a large shunt. We compared the yield of ICC and SLS in detection and quantification of a large RLS. We prospectively evaluated paradoxical embolism in consecutive patients with ischemic strokes or transient ischemic attack (TIA) using injections of 9 cc saline agitated with 1 cc of air. Results were classified according to ICC [negative (no microB), grade I (1-20 microB), grade II (>20 microB or "shower" appearance of microB), and grade III ("curtain" appearance of microB)] and SLS criteria [negative (no microB), grade I (1-10 microB), grade II (11-30 microB), grade III (31-100 microB), grade IV (101-300 microB), grade V (>300 microB)]. The RLS size was defined as large (>4 mm) using diameter measurement of the septal defects on transesophageal echocardiography (TEE). TCD comparison to TEE showed 24 true positive, 48 true negative, 4 false positive, and 2 false negative cases (sensitivity 92.3%, specificity 92.3%, positive predictive value (PPV) 85.7%, negative predictive value (NPV) 96%, and accuracy 92.3%) for any RLS presence. Both ICC and SLS were 100% sensitive for detection of large RLS. ICC and SLS criteria yielded a false positive rate of 24.4% and 7.7%, respectively, when compared to TEE. Although both grading scales provide agreement as to any shunt presence, using the Spencer Scale grade III or higher can decrease by one-half the number of false positive TCD diagnoses to predict large RLS on TEE.
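
    The SLS thresholds quoted in the abstract map directly onto a count-based grading function. A minimal sketch (the "curtain" appearance of the top grade is approximated here by a pure count threshold, which is an assumption; in practice it is a visual assessment):

```python
def spencer_grade(microbubbles):
    """Spencer Logarithmic Scale grade from a microbubble count, using
    the thresholds in the abstract: negative (0), I (1-10), II (11-30),
    III (31-100), IV (101-300), V (>300)."""
    if microbubbles == 0:
        return 0            # negative
    for grade, upper in ((1, 10), (2, 30), (3, 100), (4, 300)):
        if microbubbles <= upper:
            return grade
    return 5

print([spencer_grade(n) for n in (0, 5, 25, 31, 150, 500)])  # → [0, 1, 2, 3, 4, 5]
```

    Under this mapping, the study's "large shunt" criterion of >30 microB corresponds to grade III or higher.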

  4. Stress Energy Tensor in LCFT and LOGARITHMIC Sugawara Construction

    NASA Astrophysics Data System (ADS)

    Kogan, Ian I.; Nichols, Alexander

    We discuss the partners of the stress energy tensor and their structure in logarithmic conformal field theories. In particular we draw attention to the fundamental differences between theories with zero and non-zero central charge. However, they are both characterised by at least two independent parameters. We show how, by using a generalised Sugawara construction, one can calculate the logarithmic partner of T. We show that such a construction works in the c = -2 theory using the conformal dimension one primary currents which generate a logarithmic extension of the Kac-Moody algebra. This is an expanded version of a talk presented by A. Nichols at the conference on Logarithmic Conformal Field Theory and its Applications in Tehran, Iran, 2001.

  5. An alternative approach to calculating Area-Under-the-Curve (AUC) in delay discounting research.

    PubMed

    Borges, Allison M; Kuang, Jinyi; Milhorn, Hannah; Yi, Richard

    2016-09-01

    Applied to delay discounting data, Area-Under-the-Curve (AUC) provides an atheoretical index of the rate of delay discounting. The conventional method of calculating AUC, by summing the areas of the trapezoids formed by successive delay-indifference point pairings, does not account for the fact that most delay discounting tasks scale delay pseudoexponentially, that is, time intervals between delays typically get larger as delays get longer. This results in a disproportionate contribution of indifference points at long delays to the total AUC, with minimal contribution from indifference points at short delays. We propose two modifications that correct for this imbalance via a base-10 logarithmic transformation and an ordinal scaling transformation of delays. These newly proposed indices of discounting, AUClogd and AUCord, address the limitation of AUC while preserving a primary strength (remaining atheoretical). Re-examination of previously published data provides empirical support for both AUClogd and AUCord. Thus, we believe theoretical and empirical arguments favor these methods as the preferred atheoretical indices of delay discounting. © 2016 Society for the Experimental Analysis of Behavior.
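
    The trapezoid AUC and its log-scaled variant can be sketched in a few lines. This is a minimal illustration of the idea, not the authors' exact implementation; the log10(d + 1) offset (so that a zero delay is representable) and the example data are assumptions.

```python
import math

def auc_trapezoid(delays, indifference, transform=None):
    """Area under the delay-discounting curve via trapezoids, with delays
    and indifference points first normalised to [0, 1]. An optional
    transform rescales delays before normalising, which is the idea
    behind the log-scaled index described in the abstract."""
    if transform is not None:
        delays = [transform(d) for d in delays]
    d_max, v_max = max(delays), max(indifference)
    xs = [d / d_max for d in delays]
    ys = [v / v_max for v in indifference]
    return sum((xs[i + 1] - xs[i]) * (ys[i] + ys[i + 1]) / 2.0
               for i in range(len(xs) - 1))

delays = [0, 1, 7, 30, 180, 365]       # days (illustrative task values)
points = [100, 95, 80, 60, 35, 20]     # indifference values
plain = auc_trapezoid(delays, points)
logd = auc_trapezoid(delays, points, transform=lambda d: math.log10(d + 1))
print(plain, logd)
```

    With a typical decreasing indifference curve, the log transform allocates more of the x-axis to short delays, so the short-delay indifference points contribute meaningfully to the index instead of being dwarfed by the long-delay trapezoids.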

  6. Aging underdamped scaled Brownian motion: Ensemble- and time-averaged particle displacements, nonergodicity, and the failure of the overdamping approximation.

    PubMed

    Safdari, Hadiseh; Cherstvy, Andrey G; Chechkin, Aleksei V; Bodrova, Anna; Metzler, Ralf

    2017-01-01

    We investigate both analytically and by computer simulations the ensemble- and time-averaged, nonergodic, and aging properties of massive particles diffusing in a medium with a time-dependent diffusivity. We call this stochastic diffusion process the (aging) underdamped scaled Brownian motion (UDSBM). We demonstrate how the mean squared displacement (MSD) and the time-averaged MSD of UDSBM are affected by the inertial term in the Langevin equation at short, intermediate, and even long diffusion times. In particular, we quantify the ballistic regime for the MSD and the time-averaged MSD as well as the spread of individual time-averaged MSD trajectories. One of the main effects we observe is that, both for the MSD and the time-averaged MSD, for superdiffusive UDSBM the ballistic regime is much shorter than for ordinary Brownian motion. In contrast, for subdiffusive UDSBM, the ballistic region extends to much longer diffusion times. Therefore, particular care needs to be taken under what conditions the overdamped limit indeed provides a correct description, even in the long time limit. We also analyze to what extent ergodicity in the Boltzmann-Khinchin sense in this nonstationary system is broken, both for subdiffusive and superdiffusive UDSBM. Finally, the limiting case of ultraslow UDSBM is considered, with a mixed logarithmic and power-law dependence of the ensemble- and time-averaged MSDs of the particles. In the limit of strong aging, remarkably, the ordinary UDSBM and the ultraslow UDSBM behave similarly in the short time ballistic limit. The approaches developed here open ways for considering other stochastic processes under physically important conditions when a finite particle mass and aging in the system cannot be neglected.

  7. Aging underdamped scaled Brownian motion: Ensemble- and time-averaged particle displacements, nonergodicity, and the failure of the overdamping approximation

    NASA Astrophysics Data System (ADS)

    Safdari, Hadiseh; Cherstvy, Andrey G.; Chechkin, Aleksei V.; Bodrova, Anna; Metzler, Ralf

    2017-01-01

    We investigate both analytically and by computer simulations the ensemble- and time-averaged, nonergodic, and aging properties of massive particles diffusing in a medium with a time-dependent diffusivity. We call this stochastic diffusion process the (aging) underdamped scaled Brownian motion (UDSBM). We demonstrate how the mean squared displacement (MSD) and the time-averaged MSD of UDSBM are affected by the inertial term in the Langevin equation at short, intermediate, and even long diffusion times. In particular, we quantify the ballistic regime for the MSD and the time-averaged MSD as well as the spread of individual time-averaged MSD trajectories. One of the main effects we observe is that, both for the MSD and the time-averaged MSD, for superdiffusive UDSBM the ballistic regime is much shorter than for ordinary Brownian motion. In contrast, for subdiffusive UDSBM, the ballistic region extends to much longer diffusion times. Therefore, particular care needs to be taken under what conditions the overdamped limit indeed provides a correct description, even in the long time limit. We also analyze to what extent ergodicity in the Boltzmann-Khinchin sense in this nonstationary system is broken, both for subdiffusive and superdiffusive UDSBM. Finally, the limiting case of ultraslow UDSBM is considered, with a mixed logarithmic and power-law dependence of the ensemble- and time-averaged MSDs of the particles. In the limit of strong aging, remarkably, the ordinary UDSBM and the ultraslow UDSBM behave similarly in the short time ballistic limit. The approaches developed here open ways for considering other stochastic processes under physically important conditions when a finite particle mass and aging in the system cannot be neglected.

  8. Radiative corrections to masses and couplings in universal extra dimensions

    NASA Astrophysics Data System (ADS)

    Freitas, Ayres; Kong, Kyoungchul; Wiegand, Daniel

    2018-03-01

    Models with an orbifolded universal extra dimension receive important loop-induced corrections to the masses and couplings of Kaluza-Klein (KK) particles. The dominant contributions stem from so-called boundary terms which violate KK number. Previously, only the parts of these boundary terms proportional to ln(ΛR) have been computed, where R is the radius of the extra dimension and Λ is the cut-off scale. However, for typical values of ΛR ~ 10-50, the logarithms are not particularly large and non-logarithmic contributions may be numerically important. In this paper, these remaining finite terms are computed and their phenomenological impact is discussed. It is shown that the finite terms have a significant impact on the KK mass spectrum. Furthermore, one finds new KK-number violating interactions that do not depend on ln(ΛR) but are nevertheless non-zero. These lead to new production and decay channels for level-2 KK particles at colliders.

  9. High precision predictions for exclusive VH production at the LHC

    DOE PAGES

    Li, Ye; Liu, Xiaohui

    2014-06-04

    We present a resummation-improved prediction for pp → VH + 0 jets at the Large Hadron Collider. We focus on highly-boosted final states in the presence of a jet veto to suppress the tt̄ background. In this case, conventional fixed-order calculations are plagued by the existence of large Sudakov logarithms α_s^n log^m(p_T^veto/Q) for Q ~ m_V + m_H which lead to unreliable predictions as well as large theoretical uncertainties, and thus limit the accuracy when comparing experimental measurements to the Standard Model. In this work, we show that the resummation of Sudakov logarithms beyond the next-to-next-to-leading-log accuracy, combined with the next-to-next-to-leading order calculation, reduces the scale uncertainty and stabilizes the perturbative expansion in the region where the vector bosons carry large transverse momentum. Thus, our result improves the precision with which Higgs properties can be determined from LHC measurements using boosted Higgs techniques.

  10. Ordering dynamics of self-propelled particles in an inhomogeneous medium

    NASA Astrophysics Data System (ADS)

    Das, Rakesh; Mishra, Shradha; Puri, Sanjay

    2018-02-01

    Ordering dynamics of self-propelled particles in an inhomogeneous medium in two dimensions is studied. We write coarse-grained hydrodynamic equations of motion for density and polarisation fields in the presence of an external random disorder field, which is quenched in time. The strength of inhomogeneity is tuned from zero disorder (clean system) to large disorder. In the clean system, the polarisation field grows algebraically as L_P ~ t^{0.5}. The density field does not show clean power-law growth; however, it approximately follows L_ρ ~ t^{0.8}. In the inhomogeneous system, we find a disorder-dependent growth. For both the density and the polarisation, growth slows down with increasing strength of disorder. The polarisation shows a disorder-dependent power-law growth L_P(t, Δ) ~ t^{1/z̄_P(Δ)} for intermediate times. At late times, there is a crossover to logarithmic growth L_P(t, Δ) ~ (ln t)^{1/φ}, where φ is a disorder-independent exponent. Two-point correlation functions for the polarisation show dynamical scaling, but the density does not.

  11. Computing Logarithms by Hand

    ERIC Educational Resources Information Center

    Reed, Cameron

    2016-01-01

    How can old-fashioned tables of logarithms be computed without technology? Today, of course, no practicing mathematician, scientist, or engineer would actually use logarithms to carry out a calculation, let alone worry about deriving them from scratch. But high school students may be curious about the process. This article develops a…

  12. Brownian motion in time-dependent logarithmic potential: Exact results for dynamics and first-passage properties.

    PubMed

    Ryabov, Artem; Berestneva, Ekaterina; Holubec, Viktor

    2015-09-21

    The paper addresses Brownian motion in the logarithmic potential with time-dependent strength, U(x, t) = g(t)log(x), subject to the absorbing boundary at the origin of coordinates. Such a model can represent kinetics of diffusion-controlled reactions of charged molecules or escape of Brownian particles over a time-dependent entropic barrier at the end of a biological pore. We present a simple asymptotic theory which yields the long-time behavior of both the survival probability (first-passage properties) and the moments of the particle position (dynamics). The asymptotic survival probability, i.e., the probability that the particle will not hit the origin before a given time, is a functional of the potential strength. As such, it exhibits a rather varied behavior for different functions g(t). The latter can be grouped into three classes according to the regime of the asymptotic decay of the survival probability. We distinguish (i) the regular regime (power-law decay), (ii) the marginal regime (power law times a slow function of time), and (iii) the regime of enhanced absorption (decay faster than the power law, e.g., exponential). Results of the asymptotic theory show good agreement with numerical simulations.

  13. Simulated Stochastic Approximation Annealing for Global Optimization with a Square-Root Cooling Schedule

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, Faming; Cheng, Yichen; Lin, Guang

    2014-06-13

    Simulated annealing has been widely used in the solution of optimization problems. As is well known, the global optima cannot be guaranteed to be located by simulated annealing unless a logarithmic cooling schedule is used. However, the logarithmic cooling schedule is so slow that the required CPU time is prohibitive. This paper proposes a new stochastic optimization algorithm, the so-called simulated stochastic approximation annealing algorithm, which is a combination of simulated annealing and the stochastic approximation Monte Carlo algorithm. Under the framework of stochastic approximation Markov chain Monte Carlo, it is shown that the new algorithm can work with a cooling schedule in which the temperature can decrease much faster than in the logarithmic cooling schedule, e.g., a square-root cooling schedule, while guaranteeing the global optima to be reached when the temperature tends to zero. The new algorithm has been tested on a few benchmark optimization problems, including feed-forward neural network training and protein folding. The numerical results indicate that the new algorithm can significantly outperform simulated annealing and other competitors.
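
    The practical gap between the two cooling schedules mentioned above is easy to quantify. A minimal sketch (the starting temperature T0 and the specific functional forms are illustrative assumptions):

```python
import math

def log_schedule(t, T0=10.0):
    """Classical logarithmic cooling: T_t = T0 / ln(t + e)."""
    return T0 / math.log(t + math.e)

def sqrt_schedule(t, T0=10.0):
    """Square-root cooling: T_t = T0 / sqrt(t + 1); it reaches low
    temperatures vastly sooner than the logarithmic schedule."""
    return T0 / math.sqrt(t + 1)

# Steps needed to cool from T0 = 10 down to T = 0.1:
#   logarithmic: ln(t) = 100  ->  t ≈ e^100 iterations (hopeless);
#   square-root: sqrt(t) = 100 -> t = 10^4 iterations.
print(sqrt_schedule(10**4 - 1), log_schedule(10**4 - 1))
```

    This is exactly the trade-off the paper addresses: the stochastic approximation annealing framework retains a convergence guarantee while permitting the much faster square-root schedule.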

  14. Species-abundance distribution patterns of soil fungi: contribution to the ecological understanding of their response to experimental fire in Mediterranean maquis (southern Italy).

    PubMed

    Persiani, Anna Maria; Maggi, Oriana

    2013-01-01

    Experimental fires, of both low and high intensity, were lit during summer 2000 and the following 2 y in the Castel Volturno Nature Reserve, southern Italy. Soil samples were collected Jul 2000-Jul 2002 to analyze the soil fungal community dynamics. Species abundance distribution patterns (geometric, logarithmic, log normal, broken-stick) were compared. We plotted datasets with information both on species richness and abundance for total, xerotolerant and heat-stimulated soil microfungi. The xerotolerant fungi conformed to a broken-stick model for both the low- and high intensity fires at 7 and 84 d after the fire; their distribution subsequently followed logarithmic models in the 2 y following the fire. The distribution of the heat-stimulated fungi changed from broken-stick to logarithmic models and eventually to a log-normal model during the post-fire recovery. Xerotolerant and, to a far greater extent, heat-stimulated soil fungi acquire an important functional role following soil water stress and/or fire disturbance; these disturbances let them occupy unsaturated habitats and become increasingly abundant over time.

  15. Scaling laws in the dynamics of crime growth rate

    NASA Astrophysics Data System (ADS)

    Alves, Luiz G. A.; Ribeiro, Haroldo V.; Mendes, Renio S.

    2013-06-01

    The increasing number of crimes in areas with large concentrations of people has made cities one of the main sources of violence. Understanding how crime rates grow and how they relate to city size goes beyond an academic question, being a central issue for contemporary society. Here, we characterize and analyze quantitative aspects of murders in Brazilian cities in the period from 1980 to 2009. We find that the distributions of the annual, biannual and triannual logarithmic homicide growth rates exhibit the same functional form at distinct scales, that is, scale-invariant behavior. We also identify asymptotic power-law decay relations between the standard deviations of these three growth rates and the initial size. Further, we discuss similarities with complex organizations.
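    The analysis above rests on logarithmic growth rates over different lags; a toy sketch with fabricated counts (the synthetic data stand in for the Brazilian homicide records, which are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
# Fabricated annual homicide counts for 500 cities over 30 years.
counts = rng.lognormal(mean=3.0, sigma=1.0, size=(500, 30)) + 1

def log_growth_rates(counts, k):
    # Logarithmic growth rate over a lag of k years:
    # r_k(t) = log10 N(t + k) - log10 N(t), pooled over cities and years.
    logs = np.log10(counts)
    return (logs[:, k:] - logs[:, :-k]).ravel()

annual, biannual, triannual = (log_growth_rates(counts, k) for k in (1, 2, 3))
# Scale invariance would show up as a common shape for the three
# distributions once each is centered and rescaled by its standard deviation.
print(annual.std(), biannual.std(), triannual.std())
```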

  16. Large-scale structure from cosmic-string loops in a baryon-dominated universe

    NASA Technical Reports Server (NTRS)

    Melott, Adrian L.; Scherrer, Robert J.

    1988-01-01

    The results are presented of a numerical simulation of the formation of large-scale structure in a universe with Omega(0) = 0.2 and h = 0.5 dominated by baryons in which cosmic strings provide the initial density perturbations. The numerical model yields a power spectrum. Nonlinear evolution confirms that the model can account for 700 km/s bulk flows and a strong cluster-cluster correlation, but does rather poorly on smaller scales. There is no visual 'filamentary' structure, and the two-point correlation has too steep a logarithmic slope. The value of G mu = 4 x 10^-6 is significantly lower than previous estimates for the value of G mu in baryon-dominated cosmic string models.

  17. The advantages of logarithmically scaled data for electromagnetic inversion

    NASA Astrophysics Data System (ADS)

    Wheelock, Brent; Constable, Steven; Key, Kerry

    2015-06-01

    Non-linear inversion algorithms traverse a data misfit space over multiple iterations of trial models in search of either a global minimum or some target misfit contour. The success of the algorithm in reaching that objective depends upon the smoothness and predictability of the misfit space. For any given observation, there is no absolute form a datum must take, and therefore no absolute definition for the misfit space; in fact, there are many alternatives. However, not all misfit spaces are equal in terms of promoting the success of inversion. In this work, we appraise three common forms that complex data take in electromagnetic geophysical methods: real and imaginary components, a power of amplitude and phase, and logarithmic amplitude and phase. We find that the optimal form is logarithmic amplitude and phase. Single-parameter misfit curves of log-amplitude and phase data for both magnetotelluric and controlled-source electromagnetic methods are the smoothest of the three data forms and do not exhibit flattening at low model resistivities. Synthetic, multiparameter, 2-D inversions illustrate that log-amplitude and phase is the most robust data form, converging to the target misfit contour in the fewest steps regardless of starting model and the amount of noise added to the data; inversions using the other two data forms run slower or fail under various starting models and proportions of noise. Inversion with log-amplitude and phase data is nearly two times faster in converging to a solution than with the other data types. We also assess the statistical consequences of transforming data in the ways discussed in this paper. With the exception of real and imaginary components, which are assumed to be Gaussian, the other data types do not produce an expected mean-squared misfit value of 1.00 at the true model (a common assumption) as the errors in the complex data become large. We recommend that real and imaginary data with errors larger than 10 per cent of the complex amplitude be withheld from a log-amplitude and phase inversion rather than retained with large error bars.
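    A sketch of the log-amplitude/phase transform and the first-order error propagation behind the 10 per cent rule above (illustrative only, not the authors' code; it assumes equal Gaussian errors on the real and imaginary parts):

```python
import numpy as np

def to_log_amp_phase(d, sigma):
    # Convert complex data to natural-log amplitude and phase (radians).
    # First-order error propagation gives the same relative error,
    # sigma / |d|, for both the log-amplitude and the phase.
    amp = np.abs(d)
    log_amp = np.log(amp)
    phase = np.angle(d)
    err = sigma / amp
    return log_amp, phase, err

d = np.array([1 + 1j, 0.1 - 0.05j])
log_amp, phase, err = to_log_amp_phase(d, sigma=0.01)

# Following the recommendation above, withhold data whose error
# exceeds 10 per cent of the complex amplitude.
keep = err < 0.1
print(log_amp, phase, keep)
```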

  18. Extraction of partonic transverse momentum distributions from semi-inclusive deep inelastic scattering and Drell-Yan data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pisano, Cristian; Bacchetta, Alessandro; Delcarro, Filippo

    We present a first attempt at a global fit of unpolarized quark transverse momentum dependent distribution and fragmentation functions from available data on semi-inclusive deep-inelastic scattering, Drell-Yan and Z boson production processes. This analysis is performed in the low transverse momentum region, at leading order in perturbative QCD and with the inclusion of energy scale evolution effects at next-to-leading logarithmic accuracy.

  19. IraL Is an RssB Anti-adaptor That Stabilizes RpoS during Logarithmic Phase Growth in Escherichia coli and Shigella

    PubMed Central

    Hryckowian, Andrew J.; Battesti, Aurelia; Lemke, Justin J.; Meyer, Zachary C.

    2014-01-01

    RpoS (σS), the general stress response sigma factor, directs the expression of genes under a variety of stressful conditions. Control of the cellular σS concentration is critical for appropriately scaled σS-dependent gene expression. One way to maintain appropriate levels of σS is to regulate its stability. Indeed, σS degradation is catalyzed by the ClpXP protease, and the recognition of σS by ClpXP depends on the adaptor protein RssB. Three anti-adaptors (IraD, IraM, and IraP) exist in Escherichia coli K-12; each interacts with RssB and inhibits RssB activity under different stress conditions, thereby stabilizing σS. Unlike K-12, some E. coli isolates, including uropathogenic E. coli strain CFT073, show comparable cellular levels of σS during the logarithmic and stationary growth phases, suggesting that there are differences in the regulation of σS levels among E. coli strains. Here, we describe IraL, an RssB anti-adaptor that stabilizes σS during logarithmic phase growth in CFT073 and other E. coli and Shigella strains. By immunoblot analyses, we show that IraL affects the levels and stability of σS during logarithmic phase growth. By computational and PCR-based analyses, we reveal that iraL is found in many E. coli pathotypes but not in laboratory-adapted strains. Finally, by bacterial two-hybrid and copurification analyses, we demonstrate that IraL interacts with RssB by a mechanism distinct from that used by other characterized anti-adaptors. We introduce a fourth RssB anti-adaptor found in E. coli species and suggest that differences in the regulation of σS levels may contribute to host and niche specificity in pathogenic and nonpathogenic E. coli strains. PMID:24865554

  20. Z-Boson Decays To A Vector Quarkonium Plus A Photon

    DOE PAGES

    Bodwin, Geoffrey T.; Chung, Hee Sok; Ee, June-Haak; ...

    2018-01-18

    We compute the decay rates for the processes Z → V + γ, where Z is the Z boson, γ is the photon, and V is one of the vector quarkonia J/ψ or Υ(nS), with n = 1, 2, or 3. Our computations include corrections through relative orders α_s and v^2 and resummations of logarithms of m_Z^2/m_Q^2, to all orders in α_s, at next-to-leading-logarithmic accuracy. (v is the velocity of the heavy quark Q or the heavy antiquark Q̄ in the quarkonium rest frame, and m_Z and m_Q are the masses of Z and Q, respectively.) Our calculations are the first to include both the order-α_s correction to the light-cone distribution amplitude and the resummation of logarithms of m_Z^2/m_Q^2, and are the first calculations for the Υ(2S) and Υ(3S) final states. The resummations of logarithms of m_Z^2/m_Q^2 that are associated with the order-α_s and order-v^2 corrections are carried out by making use of the Abel-Padé method. We confirm the analytic result for the order-v^2 correction that was presented in a previous publication, and we correct the relative sign of the direct and indirect amplitudes and some choices of scales in that publication. Our branching fractions for Z → J/ψ + γ and Z → Υ(1S) + γ differ by 2.0σ and -4.0σ, respectively, from the branching fractions given in the most recent publication on this topic (in units of the uncertainties given in that publication). However, we argue that the uncertainties in the rates are underestimated in that publication.

  2. Time lag between deformation and seismicity along monogenetic volcanic unrest periods: The case of El Hierro Island (Canary Islands)

    NASA Astrophysics Data System (ADS)

    Lamolda, Héctor; Felpeto, Alicia; Bethencourt, Abelardo

    2017-07-01

    Between 2011 and 2014 there were at least seven episodes of magmatic intrusion on El Hierro Island, but only the first led to a submarine eruption in 2011-2012. In order to study the relationship between GPS deformation and seismicity during these episodes, we compare the temporal evolution of the deformation with the cumulative seismic energy released. In some of the episodes, deformation and seismicity evolve in very similar ways, but in others a time lag appears between them, in which the deformation precedes the seismicity. Furthermore, a linear correlation between the decimal logarithm of the intruded magma volume and the decimal logarithm of the total seismic energy released has been observed across the different episodes. Therefore, if a future magmatic intrusion on El Hierro follows this behavior with a proper time lag, we could obtain an a priori, order-of-magnitude estimate of the seismic energy that would be released.
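    Such an order-of-magnitude forecast amounts to a linear fit in log-log space; a sketch with invented volume-energy pairs (the numbers are placeholders, not the El Hierro measurements):

```python
import numpy as np

# Invented (intruded volume [m^3], seismic energy [J]) pairs for episodes.
volume = np.array([1e6, 5e6, 2e7, 1e8, 3e8])
energy = np.array([2e10, 1.5e11, 8e11, 6e12, 2.5e13])

# Linear correlation between decimal logarithms: log10(E) = a*log10(V) + b.
a, b = np.polyfit(np.log10(volume), np.log10(energy), 1)

def predict_energy(v):
    # A priori, order-of-magnitude estimate for a future intrusion of volume v.
    return 10 ** (a * np.log10(v) + b)

print(a, b, predict_energy(1e7))
```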

  3. Alongshore wind forcing of coastal sea level as a function of frequency

    USGS Publications Warehouse

    Ryan, H.F.; Noble, M.A.

    2006-01-01

    The amplitude of the frequency response function between coastal alongshore wind stress and adjusted sea level anomalies along the west coast of the United States increases linearly as a function of the logarithm (log10) of the period for time scales up to at least 60, and possibly 100, days. The amplitude of the frequency response function increases even more rapidly at longer periods out to at least 5 yr. At the shortest periods, the amplitude of the frequency response function is small because sea level is forced only by the local component of the wind field. The regional wind field, which controls the wind-forced response in sea level for periods between 20 and 100 days, not only has much broader spatial scales than the local wind, but also propagates along the coast in the same direction as continental shelf waves. Hence, it has a stronger coupling to and an increased frequency response for sea level. At periods of a year or more, observed coastal sea level fluctuations are not only forced by the regional winds, but also by joint correlations among the larger-scale climatic patterns associated with El Niño. Therefore, the amplitude of the frequency response function is large, despite the fact that the energy in the coastal wind field is relatively small. These data show that the coastal sea level response to wind stress forcing along the west coast of the United States changes in a consistent and predictable pattern over a very broad range of frequencies with time scales from a few days to several years.

  4. Logarithmic amplifiers.

    PubMed

    Gandler, W; Shapiro, H

    1990-01-01

    Logarithmic amplifiers (log amps), which produce an output signal proportional to the logarithm of the input signal, are widely used in cytometry for measurements of parameters that vary over a wide dynamic range, e.g., cell surface immunofluorescence. Existing log amp circuits all deviate to some extent from ideal performance with respect to dynamic range and fidelity to the logarithmic curve; accuracy in quantitative analysis using log amps therefore requires that log amps be individually calibrated. However, accuracy and precision may be limited by photon statistics and system noise when very low level input signals are encountered.

  5. Stress Energy tensor in LCFT and the Logarithmic Sugawara construction

    NASA Astrophysics Data System (ADS)

    Kogan, Ian I.; Nichols, Alexander

    2002-01-01

    We discuss the partners of the stress energy tensor and their structure in logarithmic conformal field theories. In particular, we draw attention to the fundamental differences between theories with zero and non-zero central charge; both, however, are characterised by at least two independent parameters. We show how, by using a generalised Sugawara construction, one can calculate the logarithmic partner of T. We show that such a construction works in the c = -2 theory using the conformal dimension one primary currents which generate a logarithmic extension of the Kac-Moody algebra.

  6. Logarithmic M(2,p) minimal models, their logarithmic couplings, and duality

    NASA Astrophysics Data System (ADS)

    Mathieu, Pierre; Ridout, David

    2008-10-01

    A natural construction of the logarithmic extension of the M(2,p) (chiral) minimal models is presented, which generalises our previous model of percolation ( p=3). Its key aspect is the replacement of the minimal model irreducible modules by reducible ones obtained by requiring that only one of the two principal singular vectors of each module vanish. The resulting theory is then constructed systematically by repeatedly fusing these building block representations. This generates indecomposable representations of the type which signify the presence of logarithmic partner fields in the theory. The basic data characterising these indecomposable modules, the logarithmic couplings, are computed for many special cases and given a new structural interpretation. Quite remarkably, a number of them are presented in closed analytic form (for general p). These are the prime examples of "gauge-invariant" data—quantities independent of the ambiguities present in defining the logarithmic partner fields. Finally, mere global conformal invariance is shown to enforce strong constraints on the allowed spectrum: It is not possible to include modules other than those generated by the fusion of the model's building blocks. This generalises the statement that there cannot exist two effective central charges in a c=0 model. It also suggests the existence of a second "dual" logarithmic theory for each p. Such dual models are briefly discussed.

  7. Large-Eddy Simulations of Fully Developed Turbulent Channel and Pipe Flows with Smooth and Rough Walls

    NASA Astrophysics Data System (ADS)

    Saito, Namiko

    Studies in turbulence often focus on two flow conditions, both of which occur frequently in real-world flows and are sought after for their value in advancing turbulence theory. These are the high Reynolds number regime and the effect of wall surface roughness. In this dissertation, a Large-Eddy Simulation (LES) recreates both conditions over a wide range of Reynolds numbers, Re_tau = O(10^2) - O(10^8), and accounts for roughness by locally modeling the statistical effects of near-wall anisotropic fine scales in a thin layer immediately above the rough surface. A subgrid, roughness-corrected wall model is introduced to dynamically transmit this modeled information from the wall to the outer LES, which uses a stretched-vortex subgrid-scale model operating in the bulk of the flow. Of primary interest is the Reynolds number and roughness dependence of these flows in terms of first and second order statistics. The LES is first applied to a fully turbulent uniformly-smooth/rough channel flow to capture the flow dynamics over smooth, transitionally rough and fully rough regimes. Results include a Moody-like diagram for the wall-averaged friction factor, believed to be the first of its kind obtained from LES. Confirmation is found for experimentally observed logarithmic behavior in the normalized stream-wise turbulent intensities. Tight logarithmic collapse, scaled on the wall friction velocity, is found for smooth-wall flows when Re_tau ≥ O(10^6) and in fully rough cases. Since the wall model operates locally and dynamically, the framework is used to investigate non-uniform roughness distribution cases in a channel, where the flow adjustments to sudden surface changes are investigated. Recovery of mean quantities and turbulent statistics after transitions is discussed qualitatively and quantitatively at various roughness and Reynolds number levels. The internal boundary layer, defined as the border between the flow affected by the new surface condition and the unaffected part, is computed, and a collapse of the profiles on a length scale containing the logarithm of the friction Reynolds number is presented. Finally, we turn to the possibility of expanding the present framework to accommodate more general geometries. As a first step, the whole LES framework is modified for use in the curvilinear geometry of a fully developed turbulent pipe flow, with implementation carried out in a spectral element solver capable of handling complex wall profiles. The friction factors show favorable agreement with the superpipe data, and the LES estimates of the Karman constant and additive constant of the log-law closely match values obtained from experiment.

  8. Ultrasonic cleaning of conveyor belt materials using Listeria monocytogenes as a model organism.

    PubMed

    Tolvanén, Riina; Lunden, Janne; Korkeala, Hannu; Wirtanen, Gun

    2007-03-01

    Persistent Listeria monocytogenes contamination of food industry equipment is a difficult problem to solve. Ultrasonic cleaning offers new possibilities for cleaning conveyors and other equipment that are not easy to clean. Ultrasonic cleaning was tested on three conveyor belt materials: polypropylene, acetal, and stainless steel (cold-rolled, AISI 304). Cleaning efficiency was tested at two temperatures (30 and 45 degrees C) and two cleaning times (30 and 60 s) with two cleaning detergents (KOH, and NaOH combined with KOH). Conveyor belt materials were soiled with milk-based soil and L. monocytogenes strains V1, V3, and B9, and then incubated for 72 h to attach bacteria to surfaces. Ultrasonic cleaning treatments reduced L. monocytogenes counts on stainless steel 4.61 to 5.90 log units; on acetal, 3.37 to 5.55 log units; and on polypropylene, 2.31 to 4.40 log units. The logarithmic reduction differences were statistically analyzed by analysis of variance using Statistical Package for the Social Sciences software. The logarithmic reduction was significantly greater in stainless steel than in plastic materials (P < 0.001 for polypropylene, P = 0.023 for acetal). Higher temperatures enhanced the cleaning efficiency in tested materials. No significant difference occurred between cleaning times. The logarithmic reduction was significantly higher (P = 0.013) in cleaning treatments with potassium hydroxide detergent. In this study, ultrasonic cleaning was efficient for cleaning conveyor belt materials.
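    The log-unit reductions quoted above are decimal logarithms of the ratio of viable counts before and after cleaning; a one-line illustration (the counts are invented, not the study's data):

```python
import math

def log_reduction(n_before, n_after):
    # Decimal-log reduction in viable counts (e.g. CFU per coupon).
    return math.log10(n_before / n_after)

# A 5-log reduction corresponds to a 100,000-fold drop in counts.
print(log_reduction(1e7, 1e2))  # 5.0
```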

  9. Evaporation Loss of Light Elements as a Function of Cooling Rate: Logarithmic Law

    NASA Technical Reports Server (NTRS)

    Xiong, Yong-Liang; Hewins, Roger H.

    2003-01-01

    Knowledge about the evaporation loss of light elements is important to our understanding of chondrule formation processes. The evaporative loss of light elements (such as B and Li) as a function of cooling rate is of special interest because recent investigations of the distribution of Li, Be and B in meteoritic chondrules have revealed that Li varies by a factor of 25, and B and Be vary by a factor of about 10. Therefore, if we could extrapolate and interpolate with confidence the evaporation loss of B and Li (and other light elements such as K and Na) over the wide range of cooling rates of interest based upon limited experimental data, we would be able to assess the full range of scenarios relating to chondrule formation processes. Here, we propose that the evaporation loss of light elements as a function of cooling rate should obey a logarithmic law.

  10. Method of detecting system function by measuring frequency response

    DOEpatents

    Morrison, John L.; Morrison, William H.; Christophersen, Jon P.; Motloch, Chester G.

    2013-01-08

    Methods of rapidly measuring an impedance spectrum of an energy storage device in-situ over a limited number of logarithmically distributed frequencies are described. An energy storage device is excited with a known input signal, and a response is measured to ascertain the impedance spectrum. An excitation signal is a limited time duration sum-of-sines consisting of a select number of frequencies. In one embodiment, magnitude and phase of each frequency of interest within the sum-of-sines is identified when the selected frequencies and sample rate are logarithmic integer steps greater than two. This technique requires a measurement with a duration of one period of the lowest frequency. In another embodiment, where selected frequencies are distributed in octave steps, the impedance spectrum can be determined using a captured time record that is reduced to a half-period of the lowest frequency.
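    A sketch of a sum-of-sines excitation over octave-spaced (logarithmically distributed) frequencies, with the measurement duration set to one period of the lowest frequency as described above; all values are illustrative, not from the patent:

```python
import numpy as np

f_min, n_freqs, fs = 0.1, 6, 100.0           # lowest freq (Hz), count, sample rate
freqs = f_min * 2.0 ** np.arange(n_freqs)    # octave steps: 0.1, 0.2, ..., 3.2 Hz

# Duration: one period of the lowest frequency.
t = np.arange(0.0, 1.0 / f_min, 1.0 / fs)
signal = sum(np.sin(2 * np.pi * f * t) for f in freqs)
print(len(t), freqs)
```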

  11. Q estimation of seismic data using the generalized S-transform

    NASA Astrophysics Data System (ADS)

    Hao, Yaju; Wen, Xiaotao; Zhang, Bo; He, Zhenhua; Zhang, Rui; Zhang, Jinming

    2016-12-01

    Quality factor, Q, is a parameter that characterizes the energy dissipation during seismic wave propagation. Reservoir pores are among the main factors that affect the value of Q; in particular, when the pore space is filled with oil or gas, the rock usually exhibits a relatively low Q value. Such a low Q value has been used as a direct hydrocarbon indicator by many researchers. The conventional Q estimation method based on spectral ratios suffers from the problem of waveform tuning; hence, many researchers have introduced time-frequency analysis techniques to tackle this problem. Unfortunately, the window functions adopted in time-frequency analysis algorithms such as the continuous wavelet transform (CWT) and the S-transform (ST) contaminate the amplitude spectra because the seismic signal is multiplied by the window functions during time-frequency decomposition. The basic assumption of the spectral ratio method is that there is a linear relationship between the natural logarithmic spectral ratio and frequency. However, this assumption does not hold if we take the influence of the window functions into consideration. In this paper, we first employ a recently developed two-parameter generalized S-transform (GST) to obtain the time-frequency spectra of seismic traces. We then deduce the non-linear relationship between the natural logarithmic spectral ratio and frequency. Finally, we obtain a linear relationship between the natural logarithmic spectral ratio and a newly defined parameter γ by ignoring the negligible second order term. The gradient of this linear relationship is 1/Q. Here, the parameter γ is a function of frequency and source wavelet. Numerical examples for VSP and post-stack reflection data confirm that our algorithm is capable of yielding accurate results. The Q-value results estimated from field data acquired in western China show reasonable agreement with the locations of oil-producing wells.
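    For reference, the classical linear spectral-ratio relation that the paper refines can be sketched as follows (synthetic spectra; the ln(A2/A1) = -π f Δt / Q form is the textbook assumption, not the GST-based method of the paper):

```python
import numpy as np

def estimate_q(freqs, amp1, amp2, dt):
    # Classical spectral-ratio method: ln(A2/A1) = -pi*f*dt/Q + const,
    # so Q follows from the slope of the natural-log spectral ratio vs frequency.
    slope, _ = np.polyfit(freqs, np.log(amp2 / amp1), 1)
    return -np.pi * dt / slope

# Synthetic check: attenuate a flat spectrum with Q_true = 50 over dt = 0.5 s.
freqs = np.linspace(10.0, 60.0, 26)
q_true, dt = 50.0, 0.5
amp1 = np.ones_like(freqs)
amp2 = amp1 * np.exp(-np.pi * freqs * dt / q_true)
print(estimate_q(freqs, amp1, amp2, dt))  # ~50.0
```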

  12. Universality from disorder in the random-bond Blume-Capel model

    NASA Astrophysics Data System (ADS)

    Fytas, N. G.; Zierenberg, J.; Theodorakis, P. E.; Weigel, M.; Janke, W.; Malakis, A.

    2018-04-01

    Using high-precision Monte Carlo simulations and finite-size scaling we study the effect of quenched disorder in the exchange couplings on the Blume-Capel model on the square lattice. The first-order transition for large crystal-field coupling is softened to become continuous, with a divergent correlation length. An analysis of the scaling of the correlation length as well as the susceptibility and specific heat reveals that it belongs to the universality class of the Ising model with additional logarithmic corrections which is also observed for the Ising model itself if coupled to weak disorder. While the leading scaling behavior of the disordered system is therefore identical between the second-order and first-order segments of the phase diagram of the pure model, the finite-size scaling in the ex-first-order regime is affected by strong transient effects with a crossover length scale L*≈32 for the chosen parameters.

  13. Multilayer neural networks with extensively many hidden units.

    PubMed

    Rosen-Zvi, M; Engel, A; Kanter, I

    2001-08-13

    The information processing abilities of a multilayer neural network with a number of hidden units scaling as the input dimension are studied using statistical mechanics methods. The mapping from the input layer to the hidden units is performed by general symmetric Boolean functions, whereas the hidden layer is connected to the output by either discrete or continuous couplings. Introducing an overlap in the space of Boolean functions as order parameter, the storage capacity is found to scale with the logarithm of the number of implementable Boolean functions. The generalization behavior is smooth for continuous couplings and shows a discontinuous transition to perfect generalization for discrete ones.

  14. Strength and life criteria for corrugated fiberboard by three methods

    Treesearch

    Thomas J. Urbanik

    1997-01-01

    The conventional test method for determining the stacking life of corrugated containers at a fixed load level does not adequately predict a safe load when storage time is fixed. This study introduced multiple load levels and related the probability of time at failure to load. A statistical analysis of logarithm-of-time failure data varying with load level predicts the...

  15. DATASPACE - A PROGRAM FOR THE LOGARITHMIC INTERPOLATION OF TEST DATA

    NASA Technical Reports Server (NTRS)

    Ledbetter, F. E.

    1994-01-01

    Scientists and engineers work with the reduction, analysis, and manipulation of data. In many instances, the recorded data must meet certain requirements before standard numerical techniques may be used to interpret it. For example, the analysis of a linear viscoelastic material requires knowledge of one of two time-dependent properties, the stress relaxation modulus E(t) or the creep compliance D(t), one of which may be derived from the other by a numerical method if the recorded data points are evenly spaced or increasingly spaced with respect to the time coordinate. The problem is that most laboratory data are variably spaced, making the use of numerical techniques difficult. To ease this difficulty in the case of stress relaxation data analysis, NASA scientists developed DATASPACE (A Program for the Logarithmic Interpolation of Test Data), to establish a logarithmically increasing time interval in the relaxation data. The program is generally applicable to any situation in which a data set needs increasingly spaced abscissa values. DATASPACE first takes the logarithm of the abscissa values, then uses a cubic spline interpolation routine (which minimizes interpolation error) to create an evenly spaced array from the log values. This array is returned from the log abscissa domain to the abscissa domain and written to an output file for further manipulation. As a result of the interpolation in the log abscissa domain, the data is increasingly spaced. In the case of stress relaxation data, the array is closely spaced at short times and widely spaced at long times, thus avoiding the distortion inherent in evenly spaced time coordinates. The interpolation routine gives results which compare favorably with the recorded data. The experimental data curve is retained and the interpolated points reflect the desired spacing. DATASPACE is written in FORTRAN 77 for IBM PC compatibles with a math co-processor running MS-DOS and Apple Macintosh computers running MacOS.
With minor modifications the source code is portable to any platform that supports an ANSI FORTRAN 77 compiler. MicroSoft FORTRAN v2.1 is required for the Macintosh version. An executable is included with the PC version. DATASPACE is available on a 5.25 inch 360K MS-DOS format diskette (standard distribution) or on a 3.5 inch 800K Macintosh format diskette. This program was developed in 1991. IBM PC is a trademark of International Business Machines Corporation. MS-DOS is a registered trademark of Microsoft Corporation. Macintosh and MacOS are trademarks of Apple Computer, Inc.
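    The interpolation procedure described above can be sketched in a few lines (a loose Python analogue assuming SciPy's CubicSpline; DATASPACE itself is FORTRAN 77 and its numerical details may differ):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def log_respace(x, y, n):
    # Interpolate (x, y) onto n logarithmically spaced abscissa values:
    # spline in the log-abscissa domain, even spacing there, then back-transform.
    log_x = np.log10(x)
    spline = CubicSpline(log_x, y)
    log_x_even = np.linspace(log_x[0], log_x[-1], n)
    return 10.0 ** log_x_even, spline(log_x_even)

# Variably spaced relaxation times and a made-up decaying modulus curve.
t = np.array([0.1, 0.13, 0.5, 2.0, 7.0, 30.0, 100.0])
e = 5.0 - np.log(t)
t_new, e_new = log_respace(t, e, 20)
print(t_new[:3], t_new[-3:])
```

    The resampled abscissa is closely spaced at short times and widely spaced at long times, matching the behavior the abstract describes.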

  16. Quantum critical scaling and fluctuations in Kondo lattice materials

    PubMed Central

    Yang, Yi-feng; Pines, David; Lonzarich, Gilbert

    2017-01-01

    We propose a phenomenological framework for three classes of Kondo lattice materials that incorporates the interplay between the fluctuations associated with the antiferromagnetic quantum critical point and those produced by the hybridization quantum critical point that marks the end of local moment behavior. We show that these fluctuations give rise to two distinct regions of quantum critical scaling: Hybridization fluctuations are responsible for the logarithmic scaling in the density of states of the heavy electron Kondo liquid that emerges below the coherence temperature T∗, whereas the unconventional power law scaling in the resistivity that emerges at lower temperatures below TQC may reflect the combined effects of hybridization and antiferromagnetic quantum critical fluctuations. Our framework is supported by experimental measurements on CeCoIn5, CeRhIn5, and other heavy electron materials. PMID:28559308

  17. Assessing Technical Performance and Determining the Learning Curve in Cleft Palate Surgery Using a High-Fidelity Cleft Palate Simulator.

    PubMed

    Podolsky, Dale J; Fisher, David M; Wong Riff, Karen W; Szasz, Peter; Looi, Thomas; Drake, James M; Forrest, Christopher R

    2018-06-01

    This study assessed technical performance in cleft palate repair using a newly developed assessment tool and a high-fidelity cleft palate simulator through a longitudinal simulation training exercise. Three residents performed five, and one resident performed nine, consecutive endoscopically recorded cleft palate repairs using a cleft palate simulator. Two fellows in pediatric plastic surgery and two expert cleft surgeons also performed recorded simulated repairs. The Cleft Palate Objective Structured Assessment of Technical Skill (CLOSATS) and end-product scales were developed to assess performance. Two blinded cleft surgeons assessed the recordings and the final repairs using the CLOSATS, the end-product scale, and a previously developed global rating scale. The average procedure-specific (CLOSATS), global rating, and end-product scores increased logarithmically over successive simulation sessions for the residents. Reliability of the CLOSATS (average item intraclass correlation coefficient (ICC), 0.85 ± 0.093) and global ratings (average item ICC, 0.91 ± 0.02) among the raters was high. Reliability of the end-product assessments was lower (average item ICC, 0.66 ± 0.15). Standard-setting linear regression using an overall cutoff score of 7 of 10 corresponded to pass scores of 44 (maximum, 60) for the CLOSATS and 23 (maximum, 30) for the global rating scale. Using logarithmic best-fit curves, 6.3 simulation sessions are required to reach the minimum standard. A high-fidelity cleft palate simulator has been developed that improves technical performance in cleft palate repair. The simulator and the technical assessment scores can be used to determine performance before operating on patients.
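    The logarithmic best-fit and pass-threshold calculation can be sketched as follows (the scores and threshold are invented; only the functional form score = a + b ln(session) follows the abstract):

```python
import numpy as np

# Invented per-session scores for one trainee.
sessions = np.arange(1, 10)
scores = np.array([30, 36, 39, 41, 43, 44, 45, 46, 47], dtype=float)

# Logarithmic learning curve: score = a + b * ln(session).
b, a = np.polyfit(np.log(sessions), scores, 1)

# Sessions needed to first reach a pass threshold (here 44 of 60).
threshold = 44.0
n_required = np.exp((threshold - a) / b)
print(round(float(n_required), 1))
```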

  18. How Do Students Acquire an Understanding of Logarithmic Concepts?

    ERIC Educational Resources Information Center

    Mulqueeny, Ellen

    2012-01-01

    The use of logarithms, an important tool for calculus and beyond, has been reduced to symbol manipulation without understanding in most entry-level college algebra courses. The primary aim of this research, therefore, was to investigate college students' understanding of logarithmic concepts through the use of a series of instructional tasks…

  19. Design of a Programmable Gain, Temperature Compensated Current-Input Current-Output CMOS Logarithmic Amplifier.

    PubMed

    Ming Gu; Chakrabartty, Shantanu

    2014-06-01

    This paper presents the design of a programmable gain, temperature compensated, current-mode CMOS logarithmic amplifier that can be used for biomedical signal processing. Unlike conventional logarithmic amplifiers that use a transimpedance technique to generate a voltage signal as a logarithmic function of the input current, the proposed approach directly produces a current output as a logarithmic function of the input current. Also, unlike a conventional transimpedance amplifier, the gain of the proposed logarithmic amplifier can be programmed using floating-gate trimming circuits. The synthesis of the proposed circuit is based on Hart's extended translinear principle, which involves embedding a floating-voltage source and a linear resistive element within a translinear loop. Temperature compensation is then achieved using a translinear-based resistive cancellation technique. Measured results from prototypes fabricated in a 0.5 μm CMOS process show that the amplifier has an input dynamic range of 120 dB and a temperature sensitivity of 230 ppm/°C (27 °C to 57 °C), while consuming less than 100 nW of power.

  20. Reversible and Irreversible Behavior of Glass-forming Materials from the Standpoint of Hierarchical Dynamical Facilitation

    NASA Astrophysics Data System (ADS)

    Keys, Aaron

    2013-03-01

    Using molecular simulation and coarse-grained lattice models, we study the dynamics of glass-forming liquids above and below the glass transition temperature. In the supercooled regime, we study the structure, statistics, and dynamics of excitations responsible for structural relaxation for several atomistic models of glass-formers. Excitations (or soft spots) are detected in terms of persistent particle displacements. At supercooled conditions, we find that excitations are associated with correlated particle motions that are sparse and localized, and the statistics and dynamics of these excitations are facilitated and hierarchical. Excitations at one point in space facilitate the birth and death of excitations at neighboring locations, and space-time excitation structures are microcosms of heterogeneous dynamics at larger scales. Excitation-energy scales grow logarithmically with the characteristic size of the excitation, giving structural-relaxation times that can be predicted quantitatively from dynamics at short time scales. We demonstrate that these same physical principles govern the dynamics of glass-forming systems driven out-of-equilibrium by time-dependent protocols. For a system cooled and re-heated through the glass transition, non-equilibrium response functions, such as heat capacities, are notably asymmetric in time, and the response to melting a glass depends markedly on the cooling protocol by which the glass was formed. We introduce a quantitative description of this behavior based on the East model, with parameters determined from reversible transport data, that agrees well with irreversible differential scanning calorimetry. We find that the observed hysteresis and asymmetric response is a signature of an underlying dynamical transition between equilibrium melts with no trivial spatial correlations and non-equilibrium glasses with correlation lengths that are both large and dependent upon the rate at which the glass is prepared. 
The correlation length corresponds to the size of amorphous domains bounded by excitations that remain frozen on the observation time scale, thus forming stripes when viewed in space and time. We elucidate properties of the striped phase and show that glasses of this type, traditionally prepared through cooling, can be considered a finite-size realization of the inactive phase formed by the s-ensemble in the space-time thermodynamic limit.

  1. Kinetics of the B1-B2 phase transition in KCl under rapid compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Chuanlong; Smith, Jesse S.; Sinogeikin, Stanislav V.

    2016-01-28

    Kinetics of the B1-B2 phase transition in KCl has been investigated under various compression rates (0.03–13.5 GPa/s) in a dynamic diamond anvil cell using time-resolved x-ray diffraction and fast imaging. Our experimental data show that the volume fraction across the transition generally gives sigmoidal curves as a function of pressure during rapid compression. Based upon classical nucleation and growth theories (Johnson-Mehl-Avrami-Kolmogorov theories), we propose a model that is applicable for studying kinetics for the compression rates studied. The fit of the experimental volume fraction as a function of pressure provides information on the effective activation energy and average activation volume at a given compression rate. The resulting parameters are successfully used for interpreting several experimental observables that are compression-rate dependent, such as the transition time, grain size, and over-pressurization. The effective activation energy (Q_eff) is found to decrease linearly with the logarithm of compression rate. When Q_eff is applied to the Arrhenius equation, this relationship can be used to interpret the experimentally observed linear relationship between the logarithm of the transition time and the logarithm of the compression rate. The decrease of Q_eff with increasing compression rate results in a decrease of the nucleation rate, which is qualitatively in agreement with the observed change of grain size with compression rate. The observed over-pressurization is also well explained by the model when an exponential relationship between the average activation volume and the compression rate is assumed.
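    The link between the two linear relations can be made explicit: if Q_eff decreases linearly with ln(rate) and the transition time follows an Arrhenius form, then ln(t) is automatically linear in ln(rate) with slope -c/kT. A schematic sketch with invented parameter values (Q0, c, kT, t0 are placeholders, not the fitted values from the paper):

```python
import math

# Illustrative (not fitted) parameters: Q_eff decreases linearly with ln(rate),
# and the transition time follows an Arrhenius form t ~ t0 * exp(Q_eff / kT).
Q0 = 1.0      # eV, hypothetical zero-rate activation energy
c = 0.02      # eV per e-fold of compression rate, hypothetical slope
kT = 0.025    # eV, thermal energy near room temperature
t0 = 1e-9     # s, hypothetical attempt-time prefactor

def q_eff(rate):
    """Effective activation energy, linear in the logarithm of the rate."""
    return Q0 - c * math.log(rate)

def transition_time(rate):
    """Arrhenius transition time for a given compression rate."""
    return t0 * math.exp(q_eff(rate) / kT)

# ln(t) is then linear in ln(rate) with slope -c/kT, matching the observed
# linear relation between log transition time and log compression rate.
slope = (math.log(transition_time(10.0)) -
         math.log(transition_time(1.0))) / math.log(10.0)  # -> -c/kT = -0.8
```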

  2. Renormalization group scale-setting from the action—a road to modified gravity theories

    NASA Astrophysics Data System (ADS)

    Domazet, Silvije; Štefančić, Hrvoje

    2012-12-01

    The renormalization group (RG) corrected gravitational action in the Einstein-Hilbert and other truncations is considered. The running scale of the RG is treated as a scalar field at the level of the action and determined in a scale-setting procedure recently introduced by Koch and Ramirez for the Einstein-Hilbert truncation. The scale-setting procedure is elaborated for other truncations of the gravitational action and applied to several phenomenologically interesting cases. It is shown how the logarithmic dependence of Newton's coupling on the RG scale leads to an exponentially suppressed effective cosmological constant, and how scale-setting in particular RG-corrected gravitational theories yields effective f(R) modified gravity theories with negative powers of the Ricci scalar R. Scale-setting at the level of the action at the non-Gaussian fixed point, in the Einstein-Hilbert and more general truncations, is shown to lead to a universal effective action quadratic in the Ricci tensor.

  3. Zipf's law from scale-free geometry.

    PubMed

    Lin, Henry W; Loeb, Abraham

    2016-03-01

    The spatial distribution of people exhibits clustering across a wide range of scales, from household (~10^-2 km) to continental (~10^4 km) scales. Empirical data indicate simple power-law scalings for the size distribution of cities (known as Zipf's law) and for the population density fluctuations as a function of scale. Using techniques from random field theory and statistical physics, we show that these power laws are fundamentally a consequence of the scale-free spatial clustering of human populations and of the fact that humans inhabit a two-dimensional surface. In this sense, the symmetries of scale invariance in two spatial dimensions are intimately connected to urban sociology. We test our theory by empirically measuring the power spectrum of population density fluctuations and find a logarithmic slope α = 2.04 ± 0.09, in excellent agreement with our theoretical prediction α = 2. The model enables the analytic computation of many new predictions by importing the mathematical formalism of random fields.
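    The logarithmic slope quoted above is simply the least-squares slope of log P(k) against log k. A minimal sketch on a synthetic spectrum built with the theoretical exponent α = 2 (the data here are generated for illustration, not the paper's measurements):

```python
import math

def log_slope(k, p):
    """Least-squares slope of log P versus log k (the logarithmic slope)."""
    xs = [math.log(v) for v in k]
    ys = [math.log(v) for v in p]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)

# Synthetic spectrum with the theoretical slope alpha = 2: P(k) = k**-2.
k = [2.0 ** i for i in range(1, 11)]
p = [kk ** -2.0 for kk in k]
alpha = -log_slope(k, p)  # -> 2.0
```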

  4. Investigation of effective impact parameters in electron-ion temperature relaxation via Particle-Particle Coulombic molecular dynamics

    NASA Astrophysics Data System (ADS)

    Zhao, Yinjian

    2017-09-01

    Aiming at high simulation accuracy, a Particle-Particle (PP) Coulombic molecular dynamics model is implemented to study electron-ion temperature relaxation. In this model, Coulomb's law is applied directly in a bounded system with two cutoffs, at short and long length scales. By increasing the range between the two cutoffs, it is found that the relaxation rate deviates from the BPS theory and approaches the LS theory and the GMS theory. Also, the effective minimum and maximum impact parameters (bmin* and bmax*) are obtained. For the simulated plasma condition, bmin* is about 6.352 times smaller than the Landau length (bC), and bmax* is about 2 times larger than the Debye length (λD), where bC and λD are used in the LS theory. Surprisingly, the effective relaxation time obtained from the PP model is very close to the LS theory and the GMS theory, even though the effective Coulomb logarithm is two times greater than the one used in the LS theory. Finally, this work shows that the PP model (commonly considered computationally expensive) is becoming practicable via GPU parallel computing techniques.
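    For reference, the Coulomb logarithm used in LS-type theories is ln Λ = ln(λD/bC), with the Debye length as the maximum and the Landau length (classical distance of closest approach) as the minimum impact parameter. A sketch in SI units; the plasma condition chosen below is hypothetical, not the one simulated in this work:

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m
KB = 1.381e-23     # Boltzmann constant, J/K
E = 1.602e-19      # elementary charge, C

def debye_length(n_e, T_e):
    """Electron Debye length [m] for density n_e [m^-3] and temperature T_e [K]."""
    return math.sqrt(EPS0 * KB * T_e / (n_e * E ** 2))

def landau_length(T_e):
    """Classical distance of closest approach b_C [m]."""
    return E ** 2 / (4 * math.pi * EPS0 * KB * T_e)

def coulomb_log(n_e, T_e):
    """LS-style Coulomb logarithm ln(lambda_D / b_C)."""
    return math.log(debye_length(n_e, T_e) / landau_length(T_e))

# Hypothetical dense, moderately hot plasma.
lnL = coulomb_log(1e26, 1e5)
```

    Replacing bC and λD with the effective parameters bmin* and bmax* reported above changes the argument of the logarithm accordingly.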

  5. The Correlation Dimension of Young Stars in Dwarf Galaxies

    NASA Astrophysics Data System (ADS)

    Odekon, Mary Crone

    2006-11-01

    We present the correlation dimension of resolved young stars in four actively star-forming dwarf galaxies that are sufficiently resolved and transparent to be modeled as projections of three-dimensional point distributions. We use data from the Hubble Space Telescope archive; photometry for one of the galaxies, UGCA 292, is presented here for the first time. We find that there are statistically distinguishable differences in the nature of stellar clustering among the sample galaxies. The young stars of VII Zw 403, the brightest galaxy in the sample, have the highest value for the correlation dimension and the most dramatic decrease with logarithmic scale, falling from 1.68 ± 0.14 to 0.10 ± 0.05 over less than a factor of 10 in r. This decrease is consistent with the edge effect produced by a projected Poisson distribution within a 2:2:1 ellipsoid. The young stars in UGC 4483, the faintest galaxy in the sample, exhibit very different behavior, with a constant value of about 0.5 over this same range in r, extending nearly to the edge of the distribution. This behavior may indicate either a scale-free distribution with an unusually low correlation dimension or a two-component (not scale-free) combination of cluster and field stars.
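    The correlation dimension used here is the logarithmic slope of the pair-correlation integral C(r), the fraction of point pairs separated by less than r. A brute-force sketch on a synthetic uniform 2D cloud, which should give a dimension near 2 (the stellar catalogs themselves are not reproduced here; edge effects pull the estimate slightly below 2, as the abstract discusses):

```python
import math
import random

def correlation_integral(pts, r):
    """C(r): fraction of point pairs closer than r."""
    n = len(pts)
    count = 0
    for i in range(n):
        xi, yi = pts[i]
        for j in range(i + 1, n):
            if math.hypot(xi - pts[j][0], yi - pts[j][1]) < r:
                count += 1
    return 2.0 * count / (n * (n - 1))

def correlation_dimension(pts, r1, r2):
    """Local logarithmic slope of C(r) between scales r1 and r2."""
    c1 = correlation_integral(pts, r1)
    c2 = correlation_integral(pts, r2)
    return math.log(c2 / c1) / math.log(r2 / r1)

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(500)]
d2 = correlation_dimension(pts, 0.1, 0.2)  # close to 2 for a uniform 2D cloud
```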

  6. Definition of (so MIScalled) ``Complexity" as UTTER-SIMPLICITY!!!(sMciUS!!!) Versus Deviations From( sMciUS!!!): ``COMPLICATEDNESS" Definition(s) and MEASURE(S)!!!

    NASA Astrophysics Data System (ADS)

    Young, F.; Siegel, E.

    2010-03-01

    (so MIScalled) ``complexity''(sMc) associated BOTH SCALE- INVARIANCE Symmetry-RESTORING(S-I S-R) [vs. S-I S-B!!!], AND X (w) P(w ) 1/w^(1.000...) ``pink''/Zipf/Archimedes-HYPERBOLICITY INEVITABILITY CONNECTION is by simple-calculus SISR's logarithm- function derivative: (d/dw)ln(w)=1/w=1/w^(1.000...), hence: (d/dw) [SISR](w)=1/w=1/w^(1.000...)=(via Noether-theorem relating continuous-(SISR)-symmetries to conservation-laws)=(d/dw)[4-DIV (J(INTER-SCALE)=0](w)=1/w =1/w^(1.000...). Hence sMc is information inter-scale conservation [as Anderson-Mandell, Fractals of Brain; Fractals of Mind(1994)-experimental- psychology!!!], i.e. sMciUS!!!, VERSUS ``COMPLICATEDNESS", is sMcciUS!!!: EITHER: PLUS (Additive: Murphy's-law absence) OR TIMES (Multiplicative: Murphy's-law dominance) various disparate system-specificity ``COMPLICATIONS". ``COMPLICATEDNESS" MEASURES: DEVIATIONS FROM sMciUS!!!: EITHER [S-I S-B] MINUS [S- I S-R] AND/OR [``red"/Pareto X(w) P(w) 1/w^(#=/=1.000...)] MINUS [X(w) P(w) 1/w^(1.000...) ``pink"/Zipf/Archimedes-HYPERBOLICITY INEVITABILITY] = [1/w^(#=/=1.000...)] MINUS [1/w^(1.000...)]; almost but not exactly a fractals Hurst-exponent-like [# - 1.000...]!!!

  7. Programming of the complex logarithm function in the solution of the cracked anisotropic plate loaded by a point force

    NASA Astrophysics Data System (ADS)

    Zaal, K. J. J. M.

    1991-06-01

    When solutions from complex function theory are programmed, the multivalued complex logarithm is replaced by the single-valued (principal) logarithmic function, introducing a discontinuity along the branch cut into the programmed solution that was not present in the mathematical solution. Recently, Liaw and Kamel presented their solution of the infinite anisotropic centrally cracked plate loaded by an arbitrary point force, which they used as a Green's function in a boundary element method intended to evaluate the stress intensity factor at the tip of a crack originating from an elliptical hole. Their solution may be used as a Green's function in many other numerical methods involving anisotropic elasticity. In programming applications of Liaw and Kamel's solution, the standard definition of the logarithmic function, with the branch cut at the nonpositive real axis, cannot provide a reliable computation of the displacement field. Either the branch cut should be redefined outside the domain of the logarithmic function, after proving that the domain is limited to a part of the plane, or the logarithmic function should be defined on its Riemann surface. A two-dimensional line fractal can provide the link between all mesh points on the plane essential to evaluate the logarithm function on its Riemann surface. As an example, a two-dimensional line fractal is defined for a mesh once used by Erdogan and Arin.
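    Evaluating the logarithm on its Riemann surface amounts to continuing the argument continuously from point to point rather than resetting it at the principal branch cut. A minimal sketch using stepwise analytic continuation along a path, which is a simpler device than the line-fractal mesh linkage proposed in the paper:

```python
import cmath
import math

def log_along_path(path):
    """Evaluate log(z) continuously along a path on the Riemann surface:
    start from the principal branch, then accumulate the principal log of
    each small step ratio, so the argument never jumps at the branch cut."""
    values = [cmath.log(path[0])]
    for z_prev, z in zip(path, path[1:]):
        values.append(values[-1] + cmath.log(z / z_prev))
    return values

# A full counterclockwise loop around the origin picks up 2*pi*i, which the
# principal branch alone would miss at the cut on the negative real axis.
n = 200
loop = [cmath.exp(2j * math.pi * k / n) for k in range(n + 1)]
w = log_along_path(loop)
jump = (w[-1] - w[0]).imag  # -> close to 2*pi
```

    The step ratios must stay away from -1 (steps smaller than a half turn) for the principal log of each ratio to be the correct increment.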

  8. Full Waveform Modeling of Transient Electromagnetic Response Based on Temporal Interpolation and Convolution Method

    NASA Astrophysics Data System (ADS)

    Qi, Youzheng; Huang, Ling; Wu, Xin; Zhu, Wanhua; Fang, Guangyou; Yu, Gang

    2017-07-01

    Quantitative modeling of the transient electromagnetic (TEM) response requires consideration of the full transmitter waveform, i.e., not only the specific current waveform in a half cycle but also the bipolar repetition. In this paper, we present a novel temporal interpolation and convolution (TIC) method to facilitate the accurate TEM modeling. We first calculate the temporal basis response on a logarithmic scale using the fast digital-filter-based methods. Then, we introduce a function named hamlogsinc in the framework of discrete signal processing theory to reconstruct the basis function and to make the convolution with the positive half of the waveform. Finally, a superposition procedure is used to take account of the effect of previous bipolar waveforms. Comparisons with the established fast Fourier transform method demonstrate that our TIC method can get the same accuracy with a shorter computing time.
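    The interpolate-convolve-superpose pipeline can be sketched with a toy basis response: convolve it with one half-cycle of the transmitter waveform, then superpose sign-alternating, time-shifted copies for the bipolar repetition. All numbers below (decay constant, ramp length, period) are invented stand-ins, not the paper's values:

```python
import math

def convolve(signal, kernel, dt):
    """Discrete approximation of the continuous convolution (signal * kernel)(t)."""
    out = [0.0] * (len(signal) + len(kernel) - 1)
    for i, s in enumerate(signal):
        for j, k in enumerate(kernel):
            out[i + j] += s * k * dt
    return out

# Stand-in basis response: a decaying exponential (the paper instead computes the
# true TEM basis response with fast digital-filter methods on a logarithmic grid
# and reconstructs it with its "hamlogsinc" interpolant).
dt = 0.01
basis = [math.exp(-5.0 * i * dt) for i in range(200)]

# One positive half-cycle of a hypothetical trapezoidal transmitter current.
ramp = 20
half = [min(i / ramp, 1.0) for i in range(100)]

# Response to the positive half-cycle.
response = convolve(half, basis, dt)

# Bipolar repetition: superpose sign-alternating, time-shifted copies of the
# half-cycle response, mirroring the paper's superposition step.
period = 150
total = [0.0] * (len(response) + 3 * period)
for rep in range(4):
    sign = 1.0 if rep % 2 == 0 else -1.0
    for i, v in enumerate(response):
        total[rep * period + i] += sign * v
```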

  9. Eigentime identities for weighted polymer networks

    NASA Astrophysics Data System (ADS)

    Dai, Meifeng; Tang, Hualong; Zou, Jiahui; He, Di; Sun, Yu; Su, Weiyi

    2018-01-01

    In this paper, we first analytically calculate the eigenvalues of the transition matrix of a structure with very complex architecture, and their multiplicities. We call this structure a polymer network. Based on the eigenvalues obtained in this iterative manner, we then calculate the eigentime identity. We highlight two scaling behaviors (logarithmic and linear) for this quantity, depending strongly on the value of the weight factor. Finally, by making use of the obtained eigenvalues, we determine the weighted counting of spanning trees.
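    For a finite weighted network, the eigentime identity can be computed directly as the sum of 1/(1 - λ) over the non-unit eigenvalues of the random-walk transition matrix. A sketch on a toy weighted 3-node path, not the iterated polymer construction of the paper (whose eigenvalues are obtained recursively):

```python
import numpy as np

def eigentime_identity(W):
    """Eigentime identity of the random walk on a weighted graph: the sum of
    1/(1 - lambda_i) over all non-unit eigenvalues of the transition matrix."""
    W = np.asarray(W, dtype=float)
    P = W / W.sum(axis=1, keepdims=True)   # row-normalized transition matrix
    lam = np.linalg.eigvals(P).real        # reversible chain: real spectrum
    return sum(1.0 / (1.0 - l) for l in lam if abs(1.0 - l) > 1e-9)

# Weighted 3-node path a-b-c with edge weights w and 1 (w is a free weight factor).
w = 2.0
W = [[0.0, w, 0.0],
     [w, 0.0, 1.0],
     [0.0, 1.0, 0.0]]
H = eigentime_identity(W)  # eigenvalues {1, 0, -1} for this chain, so H = 1.5
```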

  10. Slepton Pair Production at Hadron Colliders

    NASA Astrophysics Data System (ADS)

    Fuks, B.

    2007-04-01

    In R-parity conserving supersymmetric models, sleptons are produced in pairs at hadron colliders. We show that measurements of the longitudinal single-spin asymmetry at possible polarization upgrades of existing colliders allow for a direct extraction of the slepton mixing angle. A calculation of the transverse-momentum spectrum shows the importance of resummed contributions at next-to-leading logarithmic accuracy in the small and intermediate transverse-momentum regions and little dependence on unphysical scales and non-perturbative contributions.

  11. A Three-Dimensional Receiver Operator Characteristic Surface Diagnostic Metric

    DTIC Science & Technology

    2010-10-01

    steps applied for generating the 3D ROC surface diagnostic metrics: 1. Obtain system data: Gain access to a suitable database of system data under...surface, VUSTPR and VUSCCR, can be calculated. This can be accomplished by partitioning the VUSTPR and VUSCCR volumes into polyhedrons as illustrated... polyhedron volumes to produce VUSTPR and VUSCCR. In the example given in Figures 7 and 8 a logarithmic scaling has been applied to the TL axis. This places

  12. Massive Boson Production at Small qT in Soft-Collinear Effective Theory

    NASA Astrophysics Data System (ADS)

    Becher, Thomas; Neubert, Matthias; Wilhelm, Daniel

    2013-01-01

    We study the differential cross sections for electroweak gauge-boson and Higgs production at small and very small transverse-momentum qT. Large logarithms are resummed using soft-collinear effective theory. The collinear anomaly generates a non-perturbative scale q*, which protects the processes from receiving large long-distance hadronic contributions. A numerical comparison of our predictions with data on the transverse-momentum distribution in Z-boson production at the Tevatron and LHC is given.

  13. Resummed photon spectra for WIMP annihilation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baumgart, Matthew; Cohen, Timothy; Moult, Ian

    We construct an effective field theory (EFT) description of the hard photon spectrum for heavy WIMP annihilation. This facilitates precision predictions relevant for line searches, and allows the incorporation of non-trivial energy resolution effects. Our framework combines techniques from non-relativistic EFTs and soft-collinear effective theory (SCET), as well as its multi-scale extensions that have been recently introduced for studying jet substructure. We find a number of interesting features, including the simultaneous presence of SCET I and SCET II modes, as well as collinear-soft modes at the electroweak scale. We derive a factorization formula that enables both the resummation of the leading large Sudakov double logarithms that appear in the perturbative spectrum, and the inclusion of Sommerfeld enhancement effects. Consistency of this factorization is demonstrated to leading logarithmic order through explicit calculation. Our final result contains both the exclusive and the inclusive limits, thereby providing a unifying description of these two previously considered approximations. We estimate the impact on experimental sensitivity, focusing for concreteness on SU(2)_W triplet fermion dark matter (the pure wino), where the strongest constraints are due to a search for gamma-ray lines from the Galactic Center. Here, we find numerically significant corrections compared to previous results, thereby highlighting the importance of accounting for the photon spectrum when interpreting data from current and future indirect detection experiments.

  14. Resummed photon spectra for WIMP annihilation

    DOE PAGES

    Baumgart, Matthew; Cohen, Timothy; Moult, Ian; ...

    2018-03-20

    We construct an effective field theory (EFT) description of the hard photon spectrum for heavy WIMP annihilation. This facilitates precision predictions relevant for line searches, and allows the incorporation of non-trivial energy resolution effects. Our framework combines techniques from non-relativistic EFTs and soft-collinear effective theory (SCET), as well as its multi-scale extensions that have been recently introduced for studying jet substructure. We find a number of interesting features, including the simultaneous presence of SCET I and SCET II modes, as well as collinear-soft modes at the electroweak scale. We derive a factorization formula that enables both the resummation of the leading large Sudakov double logarithms that appear in the perturbative spectrum, and the inclusion of Sommerfeld enhancement effects. Consistency of this factorization is demonstrated to leading logarithmic order through explicit calculation. Our final result contains both the exclusive and the inclusive limits, thereby providing a unifying description of these two previously considered approximations. We estimate the impact on experimental sensitivity, focusing for concreteness on SU(2)_W triplet fermion dark matter (the pure wino), where the strongest constraints are due to a search for gamma-ray lines from the Galactic Center. Here, we find numerically significant corrections compared to previous results, thereby highlighting the importance of accounting for the photon spectrum when interpreting data from current and future indirect detection experiments.

  15. On the Use of the Log-Normal Particle Size Distribution to Characterize Global Rain

    NASA Technical Reports Server (NTRS)

    Meneghini, Robert; Rincon, Rafael; Liao, Liang

    2003-01-01

    Although most parameterizations of the drop size distributions (DSD) use the gamma function, there are several advantages to the log-normal form, particularly if we want to characterize the large scale space-time variability of the DSD and rain rate. The advantages of the distribution are twofold: the logarithm of any moment can be expressed as a linear combination of the individual parameters of the distribution; the parameters of the distribution are approximately normally distributed. Since all radar and rainfall-related parameters can be written approximately as a moment of the DSD, the first property allows us to express the logarithm of any radar/rainfall variable as a linear combination of the individual DSD parameters. Another consequence is that any power law relationship between rain rate, reflectivity factor, specific attenuation or water content can be expressed in terms of the covariance matrix of the DSD parameters. The joint-normal property of the DSD parameters has applications to the description of the space-time variation of rainfall in the sense that any radar-rainfall quantity can be specified by the covariance matrix associated with the DSD parameters at two arbitrary space-time points. As such, the parameterization provides a means by which we can use the spaceborne radar-derived DSD parameters to specify in part the covariance matrices globally. However, since satellite observations have coarse temporal sampling, the specification of the temporal covariance must be derived from ancillary measurements and models. Work is presently underway to determine whether the use of instantaneous rain rate data from the TRMM Precipitation Radar can provide good estimates of the spatial correlation in rain rate from data collected in 5° x 5° x 1-month space-time boxes. 
To characterize the temporal characteristics of the DSD parameters, disdrometer data are being used from the Wallops Flight Facility site where as many as 4 disdrometers have been used to acquire data over a 2 km path. These data should help quantify the temporal form of the covariance matrix at this site.
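    The key log-normal property cited above can be written down directly: the n-th moment of a log-normal DSD is M_n = N_T exp(n·mu + n²·sigma²/2), so ln M_n is linear in the parameters (ln N_T, mu, sigma²) with fixed coefficients (1, n, n²/2). A sketch with hypothetical parameter values (the moment orders 6 for reflectivity and 3.67 for rain rate are the customary approximations, not values from this paper):

```python
import math

def log_moment(log_NT, mu, sigma2, n):
    """ln of the n-th moment of a log-normal DSD:
    ln M_n = ln N_T + n*mu + (n**2 / 2)*sigma^2,
    i.e. linear in the parameters (ln N_T, mu, sigma^2)."""
    return log_NT + n * mu + 0.5 * n ** 2 * sigma2

# Hypothetical DSD parameters (N_T in m^-3, median diameter in m).
log_NT = math.log(8000.0)
mu = math.log(1.2e-3)
sigma2 = 0.25

# Reflectivity factor Z is close to the 6th moment, rain rate R to the ~3.67th,
# so both logarithms are linear functions of the same three parameters; any
# Z-R power law follows by eliminating the parameters between the two.
ln_Z = log_moment(log_NT, mu, sigma2, 6)
ln_R_like = log_moment(log_NT, mu, sigma2, 3.67)
```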

  16. Self-induced conversion in dense neutrino gases: Pendulum in flavor space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hannestad, Steen; Max-Planck-Institut fuer Physik; Raffelt, Georg G.

    Neutrino-neutrino interactions can lead to collective flavor conversion effects in supernovae and in the early universe. We demonstrate that the case of bipolar oscillations, where a dense gas of neutrinos and antineutrinos in equal numbers completely converts from one flavor to another even if the mixing angle is small, is equivalent to a pendulum in flavor space. Bipolar flavor conversion corresponds to the swinging of the pendulum, which begins in an unstable upright position (the initial flavor) and momentarily passes through the vertically downward position (the other flavor) in the course of its motion. The time scale to complete one cycle of oscillation depends logarithmically on the vacuum mixing angle. Likewise, the presence of an ordinary medium can be shown analytically to contribute a logarithmic increase to the bipolar conversion period. We further find that a more complex (and realistic) system of unequal numbers of neutrinos and antineutrinos is analogous to a spinning top subject to a torque. This analogy easily explains how such a system can oscillate in both the bipolar and the synchronized mode, depending on the neutrino density and the size of the neutrino-antineutrino asymmetry. Our simple model applies strictly only to isotropic neutrino gases. In more general cases, and especially for neutrinos streaming from a supernova core, different modes couple to each other with unequal strength, an effect that can lead to kinematical decoherence in flavor space rather than collective oscillations. The exact circumstances under which collective oscillations occur in nonisotropic media remain to be understood.
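    The logarithmic dependence of the period on the mixing angle is exactly the behavior of a mechanical pendulum released close to its inverted position: the time to fall grows like ln(1/ε) as the offset ε shrinks. A numerical sketch with a plain pendulum (theta'' = -sin theta, frequency scaled to 1), not the full flavor-pendulum equations of motion:

```python
import math

def quarter_period(eps, dt=1e-3):
    """Time for a pendulum released at rest at theta = pi - eps (just off the
    inverted position) to first reach the bottom, via RK4 integration."""
    def deriv(th, om):
        return om, -math.sin(th)
    th, om, t = math.pi - eps, 0.0, 0.0
    while th > 0.0:
        k1 = deriv(th, om)
        k2 = deriv(th + 0.5 * dt * k1[0], om + 0.5 * dt * k1[1])
        k3 = deriv(th + 0.5 * dt * k2[0], om + 0.5 * dt * k2[1])
        k4 = deriv(th + dt * k3[0], om + dt * k3[1])
        th += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        om += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        t += dt
    return t

# Shrinking the offset from the inverted position by 10x lengthens the fall
# time by about ln(10): the logarithmic dependence noted in the abstract.
t1 = quarter_period(1e-2)
t2 = quarter_period(1e-3)
```

    Analytically the quarter period is the complete elliptic integral K(k), whose divergence K ≈ ln(8/ε) as the release point approaches the top gives the same ln(10) increment per decade.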

  17. Scale dependence of the 200-mb divergence inferred from EOLE data.

    NASA Technical Reports Server (NTRS)

    Morel, P.; Necco, G.

    1973-01-01

    The EOLE experiment, with 480 constant-volume balloons distributed over the Southern Hemisphere at approximately the 200-mb level, has provided a unique, highly accurate set of tracer trajectories in the general westerly circulation. The trajectories of neighboring balloons are analyzed to estimate the horizontal divergence from the Lagrangian derivative of the area of one cluster. The variance of the divergence estimates results from two almost comparable effects: the true divergence of the horizontal flow and eddy diffusion due to small-scale, two-dimensional turbulence. Taking this into account, the rms divergence is found to be of the order of 10^-5 per second and decreases logarithmically with cluster size. This scale dependence is shown to be consistent with the quasi-geostrophic turbulence model of the general circulation in midlatitudes.
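    The divergence estimate from the Lagrangian derivative of a cluster's area can be sketched directly: track the polygon spanned by the balloons and form (1/A)·dA/dt. A toy example with an invented triangular cluster (positions in km, time in seconds), sized so the estimate lands near the 10^-5 s^-1 scale quoted above:

```python
def polygon_area(pts):
    """Shoelace area of a balloon cluster treated as a polygon (km^2)."""
    a = 0.0
    n = len(pts)
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        a += x1 * y2 - x2 * y1
    return abs(a) / 2.0

def divergence_estimate(pts_t0, pts_t1, dt):
    """Horizontal divergence from the Lagrangian derivative of cluster area:
    div ~ (1/A) dA/dt, with A taken as the mean of the two areas."""
    a0 = polygon_area(pts_t0)
    a1 = polygon_area(pts_t1)
    return (a1 - a0) / (0.5 * (a0 + a1) * dt)

# Hypothetical triangular cluster expanding uniformly by 0.5% over 1000 s.
cluster0 = [(0.0, 0.0), (100.0, 0.0), (50.0, 80.0)]
s = 1.005
cluster1 = [(x * s, y * s) for x, y in cluster0]
div = divergence_estimate(cluster0, cluster1, 1000.0)  # ~1e-5 s^-1
```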

  18. The effects of intermittency on statistical characteristics of turbulence and scale similarity of breakdown coefficients

    NASA Astrophysics Data System (ADS)

    Novikov, E. A.

    1990-05-01

    The influence of intermittency on turbulent diffusion is expressed in terms of the statistics of the dissipation field. The high-order moments of relative diffusion are obtained by using the concept of scale similarity of the breakdown coefficients (bdc). The method of bdc is useful for obtaining new models and general results, which then can be expressed in terms of multifractals. In particular, the concavity and other properties of spectral codimension are proved. Special attention is paid to the logarithmically periodic modulations. The parametrization of small-scale intermittent turbulence, which can be used for large-eddy simulation, is presented. The effect of molecular viscosity is taken into account in the spirit of the renorm group, but without spectral series, ɛ expansion, and fictitious random forces.

  19. Nonlinear Real-Time Optical Signal Processing.

    DTIC Science & Technology

    1981-06-30

    bandwidth and space-bandwidth products. Real-time homomorphic and logarithmic filtering by halftone nonlinear processing has been achieved. A detailed analysis of degradation due to the finite gamma

  20. Four things we don't know about scalar transfer from plant canopies

    NASA Astrophysics Data System (ADS)

    Finnigan, J. J.

    2009-04-01

    In terrestrial plant canopies, turbulent exchange of water through evapotranspiration is intimately bound up with exchange of other scalars, heat and carbon dioxide in particular. Turbulent transport is rarely the process limiting exchange of these scalars between the biosphere and the atmosphere. However, in measurement programs like FLUXNET, or when we parameterise surface exchange at the canopy scale in climate or weather models, we must understand the mechanism of turbulent exchange in detail. In this talk we survey four current obstacles to extending our understanding of canopy turbulence from the idealised case of homogeneous flow in neutral stratification to complex flows in stable and unstable conditions.

    1. Canopy eddy structure and the hydrodynamic instability: Recent analysis of canopy LES and wind tunnel simulations has revealed the ‘two hairpin' structure of a characteristic canopy eddy. This structure explains a large body of results from a wide range of canopies and redefines the Roughness Sub Layer (RSL) as an asymptotic layer similar to the logarithmic and outer layers of the Planetary Boundary Layer. However, the nature of the non-linear ‘mixing-layer' instability process that gives canopy/RSL eddies their coherence and enhanced transport efficiency (as compared to eddies in the logarithmic layer above) is poorly understood, so we do not know how resilient this instability, and the eddies that depend upon it, are to large scale flow perturbations or to changes in stability.

    2. Turbulent Schmidt and Prandtl numbers: The scalar RSL can be defined as the layer across which the turbulent Schmidt (Sc) and Prandtl (Pr) numbers in neutral stratification change from their canopy-top values of ~0.5, typical of mixing layers, to their logarithmic-layer values of ~1.0, typical of boundary layers. The value of Sc or Pr is a critical parameter when adjusting Monin-Obukhov similarity theory (MOST) for the proximity of the canopy. The need for such adjustments has been recognized for several decades but they are still often ignored, with serious consequences for prognostic models. However, at the present time we have only weak experimental evidence for the values of Sc and Pr in neutral conditions. More importantly, our poor understanding of the processes that set Sc and Pr and control their variation with diabatic stability is a barrier to generalizing MOST for use above tall canopies.

    3. Diabatic stability and canopy flows: As radiative cooling proceeds after sundown, turbulence within dense canopies can collapse suddenly, leading to decoupling of the canopy layer from the boundary layer above. Theory suggests that this process should occur because of the different transport mechanisms of scalars and momentum at leaf level. So far no definitive experimental results are available to confirm or refute this theory or to set bounds on its applicability. This has important implications for transport and canopy microclimate. In particular we need to know how the controlling time scales of this process depend upon canopy density and radiative transfer.

    4. Gravity currents: Deep coherent gravity currents are often observed on long hill slopes covered with tall canopies. The process of turbulent collapse after sundown mentioned in (3) above produces a deep stable layer which is decoupled from the boundary layer above and must come into a new dynamic balance involving the hydrostatic and hydrodynamic pressure gradients and canopy drag. Scale analysis suggests that the strength of such currents depends upon hill length rather than hill slope, while wind tunnel experiments reveal that they can penetrate onto flat ground far upwind of the hills on which they originate. Many field sites where flow is well behaved during the day can, therefore, be affected by such gravity flows at night. The parameters controlling the unsteady dynamics of this situation are not known but are of critical importance to measurements of water and other trace gas exchange over the diurnal cycle.

    The four topics chosen move from the fundamentals of canopy eddy structure to the impact at large scale of microscale processes. Each requires us to consider simultaneously processes from the leaf to the whole-canopy scale, and each will require effort from the whole community if serious progress is to be made.

  1. Leading chiral logarithms for the nucleon mass

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vladimirov, Alexey A.; Bijnens, Johan

    2016-01-22

    We give a short introduction to the calculation of the leading chiral logarithms and present the results of a recent evaluation of the LLog series for the nucleon mass within heavy baryon theory. The presented results are the first example of an LLog calculation in nucleon ChPT. We also discuss some regularities observed in the leading-logarithm series for the nucleon mass.

  2. The square lattice Ising model on the rectangle II: finite-size scaling limit

    NASA Astrophysics Data System (ADS)

    Hucht, Alfred

    2017-06-01

    Based on the results published recently (Hucht 2017 J. Phys. A: Math. Theor. 50 065201), the universal finite-size contributions to the free energy of the square lattice Ising model on the L × M rectangle, with open boundary conditions in both directions, are calculated exactly in the finite-size scaling limit L, M → ∞, T → Tc, with fixed temperature scaling variable x ∝ (T/Tc − 1)M and fixed aspect ratio ρ ∝ L/M. We derive exponentially fast converging series for the related Casimir potential and Casimir force scaling functions. At the critical point T = Tc we confirm predictions from conformal field theory (Cardy and Peschel 1988 Nucl. Phys. B 300 377, Kleban and Vassileva 1991 J. Phys. A: Math. Gen. 24 3407). The presence of corners and the related corner free energy has a dramatic impact on the Casimir scaling functions and leads to a logarithmic divergence of the Casimir potential scaling function at criticality.

  3. Local magnitude scale for earthquakes in Turkey

    NASA Astrophysics Data System (ADS)

    Kılıç, T.; Ottemöller, L.; Havskov, J.; Yanık, K.; Kılıçarslan, Ö.; Alver, F.; Özyazıcıoğlu, M.

    2017-01-01

    Based on the earthquake event data accumulated by the Turkish National Seismic Network between 2007 and 2013, the local magnitude (Richter, Ml) scale is calibrated for Turkey and its close neighborhood. A total of 137 earthquakes (Mw > 3.5) are used for the Ml inversion for the whole country. Three Ml scales, for the whole country, East Turkey, and West Turkey, are developed, and the scales also include station correction terms. Since the scales for the two parts of the country are very similar, it is concluded that a single Ml scale is suitable for the whole country. Available data indicate that the new scale saturates beyond magnitude 6.5. For this data set, the horizontal amplitudes are on average larger than the vertical amplitudes by a factor of 1.8. The recommendation made is to measure Ml amplitudes on the vertical channels and then add the logarithm of this factor to obtain a measure of the maximum amplitude on the horizontal. The new Ml is compared to Mw from EMSC, and there is almost a 1:1 relationship, indicating that the new scale gives reliable magnitudes for Turkey.
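    As a rough illustration of how such a scale is applied, the sketch below uses a Hutton-Boore style distance correction (illustrative coefficients, not the Turkish coefficients derived in this paper) together with the horizontal-to-vertical amplitude factor of 1.8 reported above, which enters as log10(1.8) when amplitudes are read on the vertical channel:

```python
import math

def local_magnitude(amp_mm, dist_km, station_corr=0.0):
    """Hutton-Boore style Ml relation (illustrative coefficients,
    not the Turkish scale derived in the paper)."""
    return (math.log10(amp_mm)
            + 1.11 * math.log10(dist_km / 100.0)
            + 0.00189 * (dist_km - 100.0)
            + 3.0
            + station_corr)

def ml_from_vertical(amp_v_mm, dist_km, station_corr=0.0):
    """Amplitude read on the vertical channel, scaled up by log10(1.8)
    to approximate the maximum horizontal amplitude, as recommended."""
    return local_magnitude(amp_v_mm, dist_km, station_corr) + math.log10(1.8)
```

With these coefficients, a 1 mm amplitude at the 100 km reference distance gives Ml = 3.0, and the vertical-channel variant differs only by the constant log10(1.8) ≈ 0.26.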

  4. Cloud Inhomogeneity from MODIS

    NASA Technical Reports Server (NTRS)

    Oreopoulos, Lazaros; Cahalan, Robert F.

    2004-01-01

    Two full months (July 2003 and January 2004) of MODIS Atmosphere Level-3 data from the Terra and Aqua satellites are analyzed in order to characterize the horizontal variability of cloud optical thickness and water path at global scales. Various options to derive cloud variability parameters are discussed. The climatology of cloud inhomogeneity is built by first calculating daily parameter values at spatial scales of 1 degree x 1 degree, and then at zonal and global scales, followed by averaging over monthly time scales. Geographical, diurnal, and seasonal changes of inhomogeneity parameters are examined separately for the two cloud phases, and separately over land and ocean. We find that cloud inhomogeneity is weaker in summer than in winter, weaker over land than ocean for liquid clouds, weaker for local morning than local afternoon, about the same for liquid and ice clouds on a global scale, but with wider probability distribution functions (PDFs) and larger latitudinal variations for ice, and relatively insensitive to whether water path or optical thickness products are used. Typical mean values at hemispheric and global scales of the inhomogeneity parameter nu (roughly the mean over the standard deviation of water path or optical thickness) range from approximately 2.5 to 3, while for the inhomogeneity parameter chi (the ratio of the logarithmic to linear mean) they range from approximately 0.7 to 0.8. Values of chi for zonal averages can occasionally fall below 0.6, and for individual gridpoints below 0.5. Our results demonstrate that MODIS is capable of revealing significant fluctuations in cloud horizontal inhomogeneity and stress the need to model their global radiative effect in future studies.
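    A minimal sketch of the two inhomogeneity parameters as described in the abstract (the exact MODIS Level-3 definitions may differ, and the sample optical-thickness values below are invented for illustration):

```python
import numpy as np

def inhomogeneity_parameters(tau):
    """Rough versions of the parameters described above.

    chi: ratio of the logarithmic (geometric) mean to the linear mean;
         equals 1 for a homogeneous field and falls below 1 as
         inhomogeneity increases.
    nu:  roughly the mean over the standard deviation, as in the text.
    """
    tau = np.asarray(tau, dtype=float)
    chi = np.exp(np.mean(np.log(tau))) / np.mean(tau)
    nu = np.mean(tau) / np.std(tau)
    return nu, chi

# A moderately variable (invented) optical-thickness sample:
nu, chi = inhomogeneity_parameters([5.0, 8.0, 12.0, 20.0, 35.0])
```

By the arithmetic-geometric mean inequality, chi is always at most 1, so lower chi directly signals stronger horizontal inhomogeneity.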

  5. Extrusion rate of the Mount St. Helens lava dome estimated from terrestrial imagery, November 2004-December 2005: Chapter 12 in A volcano rekindled: the renewed eruption of Mount St. Helens, 2004-2006

    USGS Publications Warehouse

    Major, Jon J.; Kingsbury, Cole G.; Poland, Michael P.; LaHusen, Richard G.; Sherrod, David R.; Scott, William E.; Stauffer, Peter H.

    2008-01-01

    Oblique, terrestrial imagery from a single, fixed-position camera was used to estimate linear extrusion rates during sustained exogenous growth of the Mount St. Helens lava dome from November 2004 through December 2005. During that 14-month period, extrusion rates declined logarithmically from about 8-10 m/d to about 2 m/d. The overall ebbing of effusive output was punctuated, however, by episodes of fluctuating extrusion rates that varied on scales of days to weeks. The overall decline of effusive output and finer scale rate fluctuations correlated approximately with trends in seismicity and deformation. Those correlations portray an extrusion that underwent episodic, broad-scale stick-slip behavior superposed on the finer scale, smaller magnitude stick-slip behavior that has been hypothesized by other researchers to correlate with repetitive, nearly periodic shallow earthquakes.
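    The reported logarithmic decline in extrusion rate can be illustrated by a least-squares fit of r(t) = a + b ln t. The data points below are invented to mimic the reported decline from about 8-10 m/d to about 2 m/d, not the actual measurements:

```python
import numpy as np

# Hypothetical extrusion-rate observations (m/day) at days since onset,
# chosen only to mimic the reported decline, not real data.
t = np.array([10.0, 30.0, 60.0, 120.0, 240.0, 420.0])
r = np.array([9.0, 7.2, 5.8, 4.5, 3.1, 2.1])

# r(t) = a + b*ln(t) is linear in ln(t), so an ordinary polynomial
# fit on ln(t) recovers the logarithmic-decline coefficients.
b, a = np.polyfit(np.log(t), r, 1)
```

A negative fitted slope b quantifies the ebbing of effusive output; departures of the residuals from zero would correspond to the episodic rate fluctuations described above.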

  6. Wetting in a phase separating polymer blend film: quench depth dependence

    PubMed

    Geoghegan; Ermer; Jungst; Krausch; Brenn

    2000-07-01

    We have used 3He nuclear reaction analysis to measure the growth of the wetting layer as a function of immiscibility (quench depth) in blends of deuterated polystyrene and poly(alpha-methylstyrene) undergoing surface-directed spinodal decomposition. We are able to identify three different laws for the surface layer growth with time t. For the deepest quenches, the forces driving phase separation dominate (high thermal noise) and the surface layer grows with a t(1/3) coarsening behavior. For shallower quenches, a logarithmic behavior is observed, indicative of a low noise system. The crossover from logarithmic growth to t(1/3) behavior is close to where a wetting transition should occur. We also discuss the possibility of a "plating transition" extending complete wetting to deeper quenches by comparing the surface field with thermal noise. For the shallowest quench, a critical blend exhibits a t(1/2) behavior. We believe this surface layer growth is driven by the curvature of domains at the surface and shows how the wetting layer forms in the absence of thermal noise. This suggestion is reinforced by a slower growth at later times, indicating that the surface domains have coalesced. Atomic force microscopy measurements in each of the different regimes further support the above. The surface in the region of t(1/3) growth is initially somewhat rougher than that in the regime of logarithmic growth, indicating the existence of droplets at the surface.

  7. A comparison of wake characteristics of model and prototype buildings in transverse winds

    NASA Technical Reports Server (NTRS)

    Logan, E., Jr.; Phataraphruk, P.; Chang, J.

    1978-01-01

    Previously measured mean velocity and turbulence intensity profiles in the wake of a 26.8-m long building 3.2 m high and transverse to the wind direction in an atmospheric boundary layer several hundred meters thick were compared with profiles at corresponding stations downstream of a 1/50-scale model on the floor of a large meteorological wind tunnel in a boundary layer 0.61 m in thickness. The validity of using model wake data to predict full scale data was determined. Preliminary results are presented which indicate that disparities result from differences in relative depth of logarithmic layers, surface roughness, and the proximity of upstream obstacles.

  8. Magnitude and intensity: Measures of earthquake size and severity

    USGS Publications Warehouse

    Spall, Henry

    1982-01-01

    Earthquakes can be measured in terms of either the amount of energy they release (magnitude) or the degree of ground shaking they cause at a particular locality (intensity). Although magnitude and intensity are basically different measures of an earthquake, they are frequently confused by the public and in news reports of earthquakes. Part of the confusion probably arises from the general similarity of the scales used to express these quantities. The various magnitude scales represent logarithmic expressions of the energy released by an earthquake. Magnitude is calculated from the record made by an earthquake on a calibrated seismograph. There are no upper or lower limits to magnitude, although no measured earthquakes have exceeded magnitude 8.9.
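    A worked example of the logarithmic energy-magnitude relationship, using the standard Gutenberg-Richter form log10 E = 1.5 M + 4.8 (E in joules), which is a textbook relation rather than one given in this article:

```python
def energy_joules(magnitude):
    """Gutenberg-Richter energy relation: log10(E) = 1.5*M + 4.8, E in joules."""
    return 10.0 ** (1.5 * magnitude + 4.8)

# Because the scale is logarithmic, each whole magnitude unit corresponds
# to the same multiplicative jump in radiated energy, 10**1.5 ~ 31.6x:
ratio = energy_joules(6.0) / energy_joules(5.0)
```

This multiplicative structure is why a magnitude 8 event is not "twice" a magnitude 4 event but roughly a million times more energetic.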

  9. Extraction of partonic transverse momentum distributions from semi-inclusive deep-inelastic scattering, Drell-Yan and Z-boson production

    DOE PAGES

    Bacchetta, Alessandro; Delcarro, Filippo; Pisano, Cristian; ...

    2017-06-15

    We present an extraction of unpolarized partonic transverse momentum distributions (TMDs) from a simultaneous fit of available data measured in semi-inclusive deep-inelastic scattering, Drell-Yan and Z boson production. To connect data at different scales, we use TMD evolution at next-to-leading logarithmic accuracy. The analysis is restricted to the low-transverse-momentum region, with no matching to fixed-order calculations at high transverse momentum. We introduce specific choices to deal with TMD evolution at low scales, of the order of 1 GeV^2. This analysis can be considered a first attempt at a global fit of TMDs.

  10. Nonperturbative contributions to a resummed leptonic angular distribution in inclusive neutral vector boson production

    NASA Astrophysics Data System (ADS)

    Guzzi, Marco; Nadolsky, Pavel M.; Wang, Bowen

    2014-07-01

    We present an analysis of nonperturbative contributions to the transverse momentum distribution of Z/γ* bosons produced at hadron colliders. The new data on the angular distribution ϕη* of Drell-Yan pairs measured at the Tevatron are shown to be in excellent agreement with a perturbative QCD prediction based on the Collins-Soper-Sterman (CSS) resummation formalism at next-to-next-to-leading logarithmic (NNLL) accuracy. Using these data, we determine the nonperturbative component of the CSS resummed cross section and estimate its dependence on arbitrary resummation scales and other factors. With the scale dependence included at the NNLL level, a significant nonperturbative component is needed to describe the angular data.

  11. THE LITTLEST HIGGS MODEL AND ONE-LOOP ELECTROWEAK PRECISION CONSTRAINTS.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    CHEN, M.C.; DAWSON,S.

    2004-06-16

    We present in this talk the one-loop electroweak precision constraints in the Littlest Higgs model, including the logarithmically enhanced contributions from both fermion and scalar loops. We find the one-loop contributions are comparable to the tree level corrections in some regions of parameter space. A low cutoff scale is allowed for a non-zero triplet VEV. Constraints on various other parameters in the model are also discussed. The role of triplet scalars in constructing a consistent renormalization scheme is emphasized.

  12. Transverse momentum dependent parton distributions at small-x

    DOE PAGES

    Xiao, Bo-Wen; Yuan, Feng; Zhou, Jian

    2017-05-23

    We study the transverse momentum dependent (TMD) parton distributions at small-x in a consistent framework that takes into account the TMD evolution and small-x evolution simultaneously. The small-x evolution effects are included by computing the TMDs at appropriate scales in terms of the dipole scattering amplitudes, which obey the relevant Balitsky–Kovchegov equation. Meanwhile, the TMD evolution is obtained by resumming the Collins–Soper type large logarithms that emerge from the small-x calculations into Sudakov factors.

  13. Interplay between Shear Loading and Structural Aging in a Physical Gelatin Gel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronsin, O.; Caroli, C.; Baumberger, T.

    2009-09-25

    We show that the aging of the mechanical relaxation of a gelatin gel exhibits the same scaling phenomenology as polymer and colloidal glasses. In addition, gelatin is known to exhibit logarithmic structural aging (stiffening). We find that stress accelerates this process. However, this effect is definitely irreducible to a mere age shift with respect to natural aging. We suggest that it is interpretable in terms of elastically aided elementary (coil->helix) local events whose dynamics gradually slows down as aging increases geometric frustration.

  14. Analysis of interacting entropy-corrected holographic and new agegraphic dark energies

    NASA Astrophysics Data System (ADS)

    Ranjit, Chayan; Debnath, Ujjal

    In the present work, we assume that the flat FRW universe is filled with interacting dark matter and dark energy. For the dark energy model, we consider the entropy-corrected HDE (ECHDE) model and the entropy-corrected NADE (ECNADE) model. For the entropy-corrected models, we assume a logarithmic correction and a power-law correction. For the ECHDE model, the length scale L is taken to be the Hubble horizon or the future event horizon. The ωde-ωde′ analysis for the different horizons is discussed.

  17. Uniform versus Gaussian Beams: A Comparison of the Effects of Diffraction, Obscuration, and Aberrations.

    DTIC Science & Technology

    1985-12-16

    balancing is discussed for the two types of beams. Zernike polynomials representing balanced primary aberration for uniform and Gaussian annular beams...plotted on a logarithmic scale (Figs. 3c and 3d). The positions of maxima and minima and the corresponding irradiance and encircled-power values are...aberration (representing a term in the expansion of the aberration in terms of a set of "Zernike" polynomials which are orthonormal over the amplitude

  18. Breaking of scale invariance in the time dependence of correlation functions in isotropic and homogeneous turbulence

    NASA Astrophysics Data System (ADS)

    Tarpin, Malo; Canet, Léonie; Wschebor, Nicolás

    2018-05-01

    In this paper, we present theoretical results on the statistical properties of stationary, homogeneous, and isotropic turbulence in incompressible flows in three dimensions. Within the framework of the non-perturbative renormalization group, we derive a closed renormalization flow equation for a generic n-point correlation (and response) function for large wave-numbers with respect to the inverse integral scale. The closure is obtained from a controlled expansion and relies on extended symmetries of the Navier-Stokes field theory. It yields the exact leading behavior of the flow equation at large wave-numbers |p_i| and for arbitrary time differences t_i in the stationary state. Furthermore, we obtain the form of the general solution of the corresponding fixed point equation, which yields the analytical form of the leading wave-number and time dependence of n-point correlation functions, for large wave-numbers and both for small t_i and in the limit t_i → ∞. At small t_i, the leading contribution at large wave-numbers is logarithmically equivalent to −α(εL)^{2/3} |∑_i t_i p_i|², where α is a non-universal constant, L is the integral scale, and ε is the mean energy injection rate. For the 2-point function, the (tp)² dependence is known to originate from the sweeping effect. The derived formula embodies the generalization of the effect of sweeping to n-point correlation functions. At large wave-numbers and large t_i, we show that the t_i² dependence in the leading-order contribution crosses over to a |t_i| dependence. The expression of the correlation functions in this regime was not derived before, even for the 2-point function. Both predictions can be tested in direct numerical simulations and in experiments.

  19. Design and Analysis of Compact DNA Strand Displacement Circuits for Analog Computation Using Autocatalytic Amplifiers.

    PubMed

    Song, Tianqi; Garg, Sudhanshu; Mokhtar, Reem; Bui, Hieu; Reif, John

    2018-01-19

    A main goal in DNA computing is to build DNA circuits to compute designated functions using a minimal number of DNA strands. Here, we propose a novel architecture to build compact DNA strand displacement circuits to compute a broad scope of functions in an analog fashion. A circuit by this architecture is composed of three autocatalytic amplifiers, and the amplifiers interact to perform computation. We show DNA circuits to compute functions sqrt(x), ln(x) and exp(x) for x in tunable ranges with simulation results. A key innovation in our architecture, inspired by Napier's use of logarithm transforms to compute square roots on a slide rule, is to make use of autocatalytic amplifiers to do logarithmic and exponential transforms in concentration and time. In particular, we convert from the input that is encoded by the initial concentration of the input DNA strand, to time, and then back again to the output encoded by the concentration of the output DNA strand at equilibrium. This combined use of strand-concentration and time encoding of computational values may have impact on other forms of molecular computation.
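    The slide-rule identity underlying the architecture can be stated in ordinary code. The DNA circuits realize the logarithmic and exponential transforms chemically, via concentration-to-time-to-concentration encoding, rather than as function calls; this is only the mathematical skeleton:

```python
import math

def sqrt_via_logs(x):
    """Slide-rule identity behind the architecture: sqrt(x) = exp(ln(x)/2).
    In the DNA implementation, ln and exp are performed by autocatalytic
    amplifiers converting between concentration and time; here they are
    plain math functions."""
    return math.exp(0.5 * math.log(x))
```

The same compose-in-log-space trick extends to other functions the paper targets, since multiplication and exponentiation become addition and scaling after the logarithmic transform.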

  20. Optimizing stroke clinical trial design: estimating the proportion of eligible patients.

    PubMed

    Taylor, Alexis; Castle, Amanda; Merino, José G; Hsia, Amie; Kidwell, Chelsea S; Warach, Steven

    2010-10-01

    Clinical trial planning and site selection require an accurate estimate of the number of eligible patients at each site. In this study, we developed a tool to calculate the proportion of patients who would meet a specific trial's age, baseline severity, and time-to-treatment inclusion criteria. From a sample of 1322 consecutive patients with acute ischemic cerebrovascular syndromes, we developed regression curves relating the proportion of patients within each range of the 3 variables. We used half the patients to develop the model and the other half to validate it by comparing predicted vs actual proportions who met the criteria for 4 current stroke trials. The predicted proportion of patients meeting inclusion criteria ranged from 6% to 28% among the different trials. The proportion of trial-eligible patients predicted from the first half of the data was within 0.4% to 1.4% of the actual proportion of eligible patients. This proportion increased logarithmically with National Institutes of Health Stroke Scale score and time from onset; lowering the baseline limits of the National Institutes of Health Stroke Scale score and extending the treatment window would have the greatest impact on the proportion of patients eligible for a stroke trial. This model helps estimate the proportion of stroke patients eligible for a study based on different upper and lower limits for age, stroke severity, and time to treatment, and it may be a useful tool in clinical trial planning.

  1. Extraction of quark transversity distribution and Collins fragmentation functions with QCD evolution

    NASA Astrophysics Data System (ADS)

    Kang, Zhong-Bo; Prokudin, Alexei; Sun, Peng; Yuan, Feng

    2016-01-01

    We study the transverse-momentum-dependent (TMD) evolution of the Collins azimuthal asymmetries in e+e- annihilations and semi-inclusive hadron production in deep inelastic scattering processes. All the relevant coefficients are calculated up to the next-to-leading-logarithmic-order accuracy. By applying the TMD evolution at the approximate next-to-leading-logarithmic order in the Collins-Soper-Sterman formalism, we extract transversity distributions for u and d quarks and Collins fragmentation functions from current experimental data by a global analysis of the Collins asymmetries in back-to-back dihadron productions in e+e- annihilations measured by BELLE and BABAR collaborations and semi-inclusive hadron production in deep inelastic scattering data from HERMES, COMPASS, and JLab HALL A experiments. The impact of the evolution effects and the relevant theoretical uncertainties are discussed. We further discuss the TMD interpretation for our results and illustrate the unpolarized quark distribution, transversity distribution, unpolarized quark fragmentation, and Collins fragmentation functions depending on the transverse momentum and the hard momentum scale. We make detailed predictions for future experiments and discuss their impact.

  2. The time resolution of the St Petersburg paradox

    PubMed Central

    Peters, Ole

    2011-01-01

    A resolution of the St Petersburg paradox is presented. In contrast to the standard resolution, utility is not required. Instead, the time-average performance of the lottery is computed. The final result can be phrased mathematically identically to Daniel Bernoulli's resolution, which uses logarithmic utility, but is derived using a conceptually different argument. The advantage of the time resolution is the elimination of arbitrary utility functions. PMID:22042904
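    A small sketch of the two quantities being contrasted: the ensemble-average payout, which diverges as more rounds are included, versus the expected change in log-wealth per lottery, which is finite and reproduces Bernoulli's logarithmic-utility answer. The wealth and ticket price in the usage are illustrative values, not from the paper:

```python
import math

def ensemble_mean_payout(n_rounds):
    """Expected payout truncated at n rounds: round k pays 2**(k-1)
    with probability 2**-k, so each round contributes exactly 0.5 and
    the full (untruncated) sum diverges."""
    return sum(2.0 ** -k * 2.0 ** (k - 1) for k in range(1, n_rounds + 1))

def time_average_growth(wealth, ticket_price, n_rounds=60):
    """Expected change in log-wealth from one play of the lottery.
    Unlike the ensemble mean, this converges, and it is numerically the
    same criterion as Bernoulli's logarithmic utility."""
    return sum(
        2.0 ** -k
        * (math.log(wealth - ticket_price + 2.0 ** (k - 1)) - math.log(wealth))
        for k in range(1, n_rounds + 1)
    )
```

Truncating the ensemble mean at n rounds gives exactly n/2, making the divergence explicit, while the log-growth sum is already converged to high precision by a few dozen rounds.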

  3. VISCEL: A general-purpose computer program for analysis of linear viscoelastic structures (user's manual), volume 1

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.; Akyuz, F. A.; Heer, E.

    1972-01-01

    This program, an extension of the linear equilibrium problem solver ELAS, is an updated and extended version of its earlier form (written in FORTRAN II for the IBM 7094 computer). A synchronized material property concept utilizing incremental time steps and the finite element matrix displacement approach has been adopted for the current analysis. A special option enables employment of constant time steps on the logarithmic scale, thereby reducing computational efforts resulting from accumulative material memory effects. A wide variety of structures with elastic or viscoelastic material properties can be analyzed by VISCEL. The program is written in the FORTRAN V language for the Univac 1108 computer operating under the EXEC 8 system. Dynamic storage allocation is automatically effected by the program, and the user may request up to 195K core memory in a 260K Univac 1108/EXEC 8 machine. The physical program VISCEL, consisting of about 7200 instructions, has four distinct links (segments), and the compiled program occupies a maximum of about 11700 words decimal of core storage.

  4. Galactic cannibalism. III. The morphological evolution of galaxies and clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hausman, M.A.; Ostriker, J.P.

    1978-09-01

    We present a numerical simulation for the evolution of massive cluster galaxies due to the accretion of other galaxies, finding that after several accretions a bright "normal" galaxy begins to resemble a cD giant, with a bright core and large core radius. Observable quantities such as color, scale size, and logarithmic intensity gradient α are calculated and are consistent with observations. The multiple nuclei sometimes found in cD galaxies may be understood as the undigested remnants of cannibalized companions. A cluster's bright galaxies are selectively depleted, an effect which can transform the cluster's luminosity function from a power law to the observed form with a steep high-luminosity falloff and which pushes the turnover point to lower luminosities with time. We suggest that these effects may account for apparent nonstatistical features observed in the luminosity distribution of bright cluster galaxies, and that the sequence of cluster types discovered by Bautz and Morgan and Oemler is essentially one of increasing dynamical evolution, the rate of evolution depending inversely on the cluster's central relaxation time.

  5. A study of the eigenvectors of low frequency vibrational modes in crystalline cytidine via high pressure Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Lee, Scott A.

    2014-03-01

    High-pressure Raman spectroscopy has been used to study the eigenvectors and eigenvalues of the low-frequency vibrational modes of crystalline cytidine at 295 K by evaluating the logarithmic derivative of the vibrational frequency with respect to pressure: (1/ω) dω/dP. Crystalline samples of molecular materials such as cytidine have vibrational modes that are localized within a molecular unit ("internal" modes) as well as modes in which the molecular units vibrate against each other ("external" modes). The value of the logarithmic derivative is a diagnostic probe of the nature of the eigenvector of a vibrational mode, making high-pressure experiments a very useful probe for such studies. Internal stretching modes have low logarithmic derivatives, while external modes as well as internal torsional and bending modes have higher logarithmic derivatives. All of the Raman modes below 200 cm-1 in cytidine are found to have high logarithmic derivatives, consistent with their being either external modes or internal torsional or bending modes.
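    The diagnostic quantity itself is straightforward to evaluate from sampled (P, ω) data; a minimal finite-difference sketch (not the authors' analysis code):

```python
import numpy as np

def logarithmic_derivative(pressure, omega):
    """(1/omega) * d(omega)/dP from sampled (P, omega) data, using the
    central (interior) / one-sided (edge) differences of numpy.gradient."""
    P = np.asarray(pressure, dtype=float)
    w = np.asarray(omega, dtype=float)
    return np.gradient(w, P) / w
```

For a mode whose frequency rises linearly with pressure, ω(P) = ω0(1 + aP), the logarithmic derivative at P = 0 is simply a, so stiff internal stretching modes (small a) and soft external or torsional modes (large a) separate cleanly on this scale.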

  6. Electronic filters, signal conversion apparatus, hearing aids and methods

    NASA Technical Reports Server (NTRS)

    Morley, Jr., Robert E. (Inventor); Engebretson, A. Maynard (Inventor); Engel, George L. (Inventor); Sullivan, Thomas J. (Inventor)

    1994-01-01

    An electronic filter for filtering an electrical signal. Signal processing circuitry therein includes a logarithmic filter having a series of filter stages with inputs and outputs in cascade and respective circuits associated with the filter stages for storing electrical representations of filter parameters. The filter stages include circuits for respectively adding the electrical representations of the filter parameters to the electrical signal to be filtered thereby producing a set of filter sum signals. At least one of the filter stages includes circuitry for producing a filter signal in substantially logarithmic form at its output by combining a filter sum signal for that filter stage with a signal from an output of another filter stage. The signal processing circuitry produces an intermediate output signal, and a multiplexer connected to the signal processing circuit multiplexes the intermediate output signal with the electrical signal to be filtered so that the logarithmic filter operates as both a logarithmic prefilter and a logarithmic postfilter. Other electronic filters, signal conversion apparatus, electroacoustic systems, hearing aids and methods are also disclosed.

  7. Self-guaranteed measurement-based quantum computation

    NASA Astrophysics Data System (ADS)

    Hayashi, Masahito; Hajdušek, Michal

    2018-05-01

    In order to guarantee the output of a quantum computation, we usually assume that the component devices are trusted. However, when the total computation process is large, it is not easy to guarantee the whole system when we have scaling effects, unexpected noise, or unaccounted for correlations between several subsystems. If we do not trust the measurement basis or the prepared entangled state, we need to worry about such uncertainties. To this end, we propose a self-guaranteed protocol for verification of quantum computation under the scheme of measurement-based quantum computation where no prior-trusted devices (measurement basis or entangled state) are needed. The approach we present enables the implementation of verifiable quantum computation using the measurement-based model in the context of a particular instance of delegated quantum computation where the server prepares the initial computational resource and sends it to the client, who drives the computation by single-qubit measurements. Applying self-testing procedures, we are able to verify the initial resource as well as the operation of the quantum devices and hence the computation itself. The overhead of our protocol scales with the size of the initial resource state to the power of 4 times the natural logarithm of the initial state's size.

  8. Parameter identification of JONSWAP spectrum acquired by airborne LIDAR

    NASA Astrophysics Data System (ADS)

    Yu, Yang; Pei, Hailong; Xu, Chengzhong

    2017-12-01

    In this study, we developed the first linearized Joint North Sea Wave Project (JONSWAP) spectrum (JS) formulation, which involves transforming the JS solution to the natural logarithmic scale. This transformation is convenient for defining the least-squares function in terms of the scale and shape parameters. We identified these two wind-dependent parameters to better understand the wind effect on surface waves. Owing to its efficiency and high resolution, we employed an airborne Light Detection and Ranging (LIDAR) system for our measurements. Due to the lack of actual data, we simulated ocean waves in the MATLAB environment, which can be easily translated into an industrial programming language. We utilized the Longuet-Higgins (LH) random-phase method to generate the time series of wave records and used the fast Fourier transform (FFT) technique to compute the power spectral density. After validating these procedures, we identified the JS parameters by minimizing the mean-square error of the target spectrum relative to the estimated spectrum obtained by FFT. We determined that the estimation error is related to the amount of available wave record data. Finally, we found the inverse computation of wind factors (wind speed and wind fetch length) to be robust and sufficiently precise for wave forecasting.
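    The log-transform idea can be sketched on a simplified spectrum without the peak-enhancement factor γ, so this is a sketch of the linearization principle rather than the paper's full JONSWAP fit: with S(f) = A f^-5 exp(-1.25 (fp/f)^4), taking logs gives ln S + 5 ln f = ln A − 1.25 fp^4 f^-4, which is linear in the unknowns (ln A, fp^4) and solvable by ordinary least squares:

```python
import numpy as np

g = 9.81
alpha_true, fp_true = 0.0081, 0.1          # Phillips constant, peak frequency (Hz)
f = np.linspace(0.06, 0.5, 200)            # frequency grid (Hz)

# Simplified spectrum (no gamma peak-enhancement factor):
A_true = alpha_true * g ** 2 / (2.0 * np.pi) ** 4
S = A_true * f ** -5 * np.exp(-1.25 * (fp_true / f) ** 4)

# Linearize in log space: y = ln(S) + 5 ln(f) = ln(A) - 1.25 * fp**4 * f**-4
y = np.log(S) + 5.0 * np.log(f)
X = np.column_stack([np.ones_like(f), -1.25 * f ** -4])
(lnA_hat, fp4_hat), *_ = np.linalg.lstsq(X, y, rcond=None)
fp_hat = fp4_hat ** 0.25
```

With noiseless synthetic data the least-squares step recovers the scale and peak-frequency parameters essentially exactly; with measured spectra the same linear system would be solved in a weighted or robust sense.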

  9. Reynolds number scaling of pocket events in the viscous sublayer

    NASA Astrophysics Data System (ADS)

    Metzger, M.; Fershtut, A.; Kunkel, C.; Klewicki, J.

    2017-12-01

    Recent findings [X. Wu et al., Proc. Natl. Acad. Sci. USA 114, E5292 (2017), 10.1073/pnas.1704671114] reinforce earlier assertions [e.g., R. Falco, Philos. Trans. R. Soc. London A 336, 103 (1991), 10.1098/rsta.1991.0069] that the sublayer pocket motions play a distinctly important role in near-wall dynamics. In the present study, smoke visualization and axial velocity measurements are combined in order to establish the scaling behavior of pocket events in the viscous sublayer of the turbulent boundary layer. In doing so, an identical analysis methodology is employed over an extensive range of friction Reynolds numbers 388 ≤ δ+ ≤ 2.2 × 10^5. Both the pocket width W and the time interval between pocket events T increase logarithmically with Reynolds number when normalized by viscous units. Normalization of W and T by the Taylor microscales evaluated at a wall-normal location of about 100 viscous units, however, appears to successfully remove this Reynolds-number dependence. The present results are discussed in the context of motion formation owing to the three-dimensionalization of the near-wall vorticity field and, concomitantly, the recurring perturbation of the viscous sublayer.

  10. New scaling model for variables and increments with heavy-tailed distributions

    NASA Astrophysics Data System (ADS)

    Riva, Monica; Neuman, Shlomo P.; Guadagnini, Alberto

    2015-06-01

    Many hydrological (as well as diverse earth, environmental, ecological, biological, physical, social, financial and other) variables, Y, exhibit frequency distributions that are difficult to reconcile with those of their spatial or temporal increments, ΔY. Whereas distributions of Y (or its logarithm) are at times slightly asymmetric with relatively mild peaks and tails, those of ΔY tend to be symmetric with peaks that grow sharper, and tails that become heavier, as the separation distance (lag) between pairs of Y values decreases. No statistical model known to us captures these behaviors of Y and ΔY in a unified and consistent manner. We propose a new, generalized sub-Gaussian model that does so. We derive analytical expressions for probability distribution functions (pdfs) of Y and ΔY as well as the corresponding leading statistical moments. In our model the peak and tails of the ΔY pdf scale with lag in line with observed behavior. The model allows one to estimate, accurately and efficiently, all relevant parameters by analyzing jointly sample moments of Y and ΔY. We illustrate key features of our new model and method of inference on synthetically generated samples and on neutron porosity data from a deep borehole.

  11. Segmented strings in AdS 3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Callebaut, Nele; Gubser, Steven S.; Samberg, Andreas

    We study segmented strings in flat space and in AdS 3. In flat space, these well-known classical motions describe strings which at any instant of time are piecewise linear. In AdS 3, the worldsheet is composed of faces each of which is a region bounded by null geodesics in an AdS 2 subspace of AdS 3. The time evolution can be described by specifying the null geodesic motion of kinks in the string at which two segments are joined. The outcome of collisions of kinks on the worldsheet can be worked out essentially using considerations of causality. We study several examples of closed segmented strings in AdS 3 and find an unexpected quasi-periodic behavior. Here, we also work out a WKB analysis of quantum states of yo-yo strings in AdS 5 and find a logarithmic term reminiscent of the logarithmic twist of string states on the leading Regge trajectory.

  12. Segmented strings in AdS 3

    DOE PAGES

    Callebaut, Nele; Gubser, Steven S.; Samberg, Andreas; ...

    2015-11-17

    We study segmented strings in flat space and in AdS 3. In flat space, these well-known classical motions describe strings which at any instant of time are piecewise linear. In AdS 3, the worldsheet is composed of faces each of which is a region bounded by null geodesics in an AdS 2 subspace of AdS 3. The time evolution can be described by specifying the null geodesic motion of kinks in the string at which two segments are joined. The outcome of collisions of kinks on the worldsheet can be worked out essentially using considerations of causality. We study several examples of closed segmented strings in AdS 3 and find an unexpected quasi-periodic behavior. Here, we also work out a WKB analysis of quantum states of yo-yo strings in AdS 5 and find a logarithmic term reminiscent of the logarithmic twist of string states on the leading Regge trajectory.

  13. Compression technique for large statistical data bases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eggers, S.J.; Olken, F.; Shoshani, A.

    1981-03-01

    The compression of large statistical databases is explored, and techniques are proposed for organizing the compressed data such that the time required to access the data is logarithmic. The techniques exploit special characteristics of statistical databases, namely, variation in the space required for the natural encoding of integer attributes, a prevalence of a few repeating values or constants, and the clustering of both data of the same length and constants in long, separate series. The techniques are variations of run-length encoding, in which modified run-lengths for the series are extracted from the data stream and stored in a header, which is used to form the base level of a B-tree index into the database. The run-lengths are cumulative, and therefore the access time of the data is logarithmic in the size of the header. The details of the compression scheme and its implementation are discussed, several special cases are presented, and an analysis is given of the relative performance of the various versions.
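    The cumulative run-length idea behind the logarithmic access time can be sketched in a few lines. This is a minimal illustration with hypothetical runs, using a flat binary search over the header in place of the paper's B-tree index:

```python
import bisect

# Runs of constants stored as (cumulative_end, value) pairs: positions
# 0..999 hold 0, positions 1000..1004 hold 7, and so on. Looking up the
# i-th logical record is then a binary search over the cumulative
# run-lengths -- logarithmic in the size of the header.
runs = [(1000, 0), (1005, 7), (5005, 0), (5010, 42)]
cum_ends = [end for end, _ in runs]

def lookup(i):
    """Return the value stored at logical position i (0-based)."""
    j = bisect.bisect_right(cum_ends, i)
    return runs[j][1]

print(lookup(0), lookup(1002), lookup(5007))  # 0 7 42
```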

  14. A quick response four decade logarithmic high-voltage stepping supply

    NASA Technical Reports Server (NTRS)

    Doong, H.

    1978-01-01

    An improved high-voltage stepping supply for space instrumentation is described, for applications where low power consumption and fast settling time between steps are required. The high-voltage stepping supply, utilizing an average power of 750 milliwatts, delivers a pair of mirror-image outputs with 64 logarithmic levels. It covers a four-decade range of ±2500 to ±0.29 volts with an output stability of ±0.5 percent or ±20 millivolts for all line, load, and temperature variations. The supply provides a typical step settling time of 1 millisecond, with 100 microseconds for the lower two decades. The versatile design of the high-voltage stepping supply provides a quick-response staircase generator as described, or a fixed voltage with the option to change levels as required over large dynamic ranges without circuit modifications. The concept can be implemented up to ±5000 volts. With these design features, the high-voltage stepping supply should find numerous applications in charged particle detection, electro-optical systems, and high-voltage scientific instruments.

  15. Stability Analysis of Roughness Array Wake in a High-Speed Boundary Layer

    NASA Technical Reports Server (NTRS)

    Choudhari, Meelan; Li, Fei; Edwards, Jack

    2009-01-01

    Computations are performed to examine the effects of both an isolated and a spanwise-periodic array of trip elements on a high-speed laminar boundary layer, so as to identify the potential physical mechanisms underlying an earlier transition to turbulence as a result of the trip(s). In the context of a 0.333-scale model of the Hyper-X forebody configuration, the time-accurate solution for an array of ramp-shaped trips asymptotes to a stationary field at large times, indicating the likely absence of a strong absolute instability in the mildly separated flow due to the trips. A prominent feature of the wake flow behind the trip array corresponds to streamwise streaks that are further amplified in passing through the compression corner. Stability analysis of the streaks using a spatial, 2D eigenvalue approach reveals the potential for a strong convective instability that might explain the earlier onset of turbulence within the array wake. The dominant modes of streak instability are primarily sustained by the spanwise gradients associated with the streaks and lead to integrated logarithmic amplification factors (N factors) approaching 7 over the first ramp of the scaled Hyper-X forebody, and substantially higher over the second ramp. Additional computations are presented to shed further light on the effects of both trip geometry and the presence of a compression corner on the evolution of the streaks.

  16. Image sensor system with bio-inspired efficient coding and adaptation.

    PubMed

    Okuno, Hirotsugu; Yagi, Tetsuya

    2012-08-01

    We designed and implemented an image sensor system equipped with three bio-inspired coding and adaptation strategies: logarithmic transform, local average subtraction, and feedback gain control. The system comprises a field-programmable gate array (FPGA), a resistive network, and active pixel sensors (APS), whose light intensity-voltage characteristics are controllable. The system employs multiple time-varying reset voltage signals for APS in order to realize multiple logarithmic intensity-voltage characteristics, which are controlled so that the entropy of the output image is maximized. The system also employs local average subtraction and gain control in order to obtain images with an appropriate contrast. The local average is calculated by the resistive network instantaneously. The designed system was successfully used to obtain appropriate images of objects that were subjected to large changes in illumination.

  17. Comparing barrier algorithms

    NASA Technical Reports Server (NTRS)

    Arenstorf, Norbert S.; Jordan, Harry F.

    1987-01-01

    A barrier is a method for synchronizing a large number of concurrent computer processes. After considering some basic synchronization mechanisms, a collection of barrier algorithms with either linear or logarithmic depth is presented. A graphical model is described that profiles the execution of the barriers and other parallel programming constructs. This model shows how the interaction between the barrier algorithms and the work that they synchronize can impact their performance. One result is that logarithmic tree-structured barriers show good performance when synchronizing fixed-length work, while linear self-scheduled barriers show better performance when synchronizing fixed-length work with an embedded critical section. The linear barriers are better able to exploit the process skew associated with critical sections. Timing experiments performed on an eighteen-processor Flex/32 shared-memory multiprocessor that support these conclusions are detailed.
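    The depth difference between the two barrier families can be sketched with an idealized cost model; this is an illustration only, not the paper's Flex/32 code:

```python
import math

# Idealized depth counts: a linear barrier inspects the P-1 arrival flags
# sequentially, while a tree-structured barrier combines arrivals pairwise,
# giving a depth that grows logarithmically in the number of processes P.
def linear_depth(P):
    return P - 1                      # sequential flag inspections

def tree_depth(P):
    return math.ceil(math.log2(P))    # pairwise combining rounds

for P in (2, 18, 1024):
    print(P, linear_depth(P), tree_depth(P))
```

At P = 18 (the Flex/32 configuration in the paper) the tree barrier needs only 5 combining rounds versus 17 sequential inspections, though, as the abstract notes, depth alone does not decide which barrier performs better once critical sections and process skew enter.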

  18. High-energy evolution to three loops

    NASA Astrophysics Data System (ADS)

    Caron-Huot, Simon; Herranen, Matti

    2018-02-01

    The Balitsky-Kovchegov equation describes the high-energy growth of gauge theory scattering amplitudes as well as the nonlinear saturation effects which stop it. We obtain the three-loop corrections to the equation in planar N = 4 super Yang-Mills theory. Our method exploits a recently established equivalence with the physics of soft wide-angle radiation, so-called non-global logarithms, and thus yields at the same time the three-loop evolution equation for non-global logarithms. As a by-product of our analysis, we develop a Lorentz-covariant method to subtract infrared and collinear divergences in cross-section calculations in the planar limit. We compare our result in the linear regime with a recent prediction for the so-called Pomeron trajectory, and compare its collinear limit with predictions from the spectrum of twist-two operators.

  19. The Evolution of Soft Collinear Effective Theory

    DOE PAGES

    Lee, Christopher

    2015-02-25

    Soft Collinear Effective Theory (SCET) is an effective field theory of Quantum Chromodynamics (QCD) for processes where there are energetic, nearly lightlike degrees of freedom interacting with one another via soft radiation. SCET has found many applications in high-energy and nuclear physics, especially, in recent years, in the physics of hadronic jets in e+e-, lepton-hadron, hadron-hadron, and heavy-ion collisions. SCET can be used to factorize multi-scale cross sections in these processes into single-scale hard, collinear, and soft functions, and to evolve these through the renormalization group to resum large logarithms of ratios of the scales that appear in the QCD perturbative expansion, as well as to study properties of nonperturbative effects. We overview the elementary concepts of SCET and describe how they can be applied in high-energy and nuclear physics.

  20. Turbulence in simulated H II regions

    NASA Astrophysics Data System (ADS)

    Medina, S.-N. X.; Arthur, S. J.; Henney, W. J.; Mellema, G.; Gazol, A.

    2014-12-01

    We investigate the scale dependence of fluctuations inside a realistic model of an evolving turbulent H II region and to what extent these may be studied observationally. We find that the multiple scales of energy injection from champagne flows and from the photoionization of clumps and filaments lead to a flatter spectrum of fluctuations than would be expected from top-down turbulence driven at the largest scales. The traditional structure function approach to the observational study of velocity fluctuations is shown to be incapable of reliably determining the velocity power spectrum of our simulation. We find that a more promising approach is the Velocity Channel Analysis technique of Lazarian & Pogosyan (2000), which, despite being intrinsically limited by thermal broadening, can successfully recover the logarithmic slope of the velocity power spectrum to a precision of ±0.1 from high-resolution optical emission-line spectroscopy.

  1. A Poisson regression approach to model monthly hail occurrence in Northern Switzerland using large-scale environmental variables

    NASA Astrophysics Data System (ADS)

    Madonna, Erica; Ginsbourger, David; Martius, Olivia

    2018-05-01

    In Switzerland, hail regularly causes substantial damage to agriculture, cars, and infrastructure; however, little is known about its long-term variability. To study this variability, the monthly number of days with hail in northern Switzerland is modeled in a regression framework using large-scale predictors derived from the ERA-Interim reanalysis. The model is developed and verified using radar-based hail observations for the extended summer season (April-September) in the period 2002-2014. The seasonality of hail is explicitly modeled with a categorical predictor (month), and monthly anomalies of several large-scale predictors are used to capture the year-to-year variability. Several regression models are applied and their performance tested with respect to standard scores and cross-validation. The chosen model includes four predictors: the monthly anomaly of the two-meter temperature, the monthly anomaly of the logarithm of the convective available potential energy (CAPE), the monthly anomaly of the wind shear, and the month. This model captures the intra-annual variability well and slightly underestimates the inter-annual variability. The regression model is applied to the reanalysis data back to 1980. The resulting hail-day time series shows an increase in the number of hail days per month, which is (in the model) related to an increase in temperature and CAPE. The trend corresponds to approximately 0.5 days per month per decade. The results of the regression model have been compared to two independent data sets. All data sets agree on the sign of the trend, but the trend is weaker in the other data sets.
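    The modeling idea can be sketched on synthetic data. Everything below (predictors, coefficients, sample size, and the plain IRLS fitting loop) is an illustrative assumption, not the study's calibrated model:

```python
import numpy as np

# Toy Poisson regression with a log link: monthly hail-day counts modeled
# from large-scale predictor anomalies, fitted by iteratively reweighted
# least squares (Fisher scoring).
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n),
                     rng.normal(size=n),    # e.g. 2 m temperature anomaly
                     rng.normal(size=n)])   # e.g. log-CAPE anomaly
beta_true = np.array([0.5, 0.3, 0.4])
y = rng.poisson(np.exp(X @ beta_true))      # synthetic monthly counts

beta = np.zeros(3)
for _ in range(25):                          # IRLS iterations
    mu = np.exp(X @ beta)                    # current mean under log link
    z = X @ beta + (y - mu) / mu             # working response
    W = mu                                   # Poisson working weights
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

print(np.round(beta, 2))  # estimates close to beta_true
```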

  2. Quantum loop corrections of a charged de Sitter black hole

    NASA Astrophysics Data System (ADS)

    Naji, J.

    2018-03-01

    A charged black hole in de Sitter (dS) space is considered, and a logarithmic-corrected entropy is used to study its thermodynamics. Logarithmic corrections to the entropy come from thermal fluctuations, which play the role of quantum loop corrections. In that case we are able to study the effect of quantum loops on black hole thermodynamics and statistics. As a black hole is a gravitational object, this helps to obtain some information about quantum gravity. The first and second laws of thermodynamics are investigated for the logarithmic-corrected case, and we find that they are only valid for the charged dS black hole. We show that the black hole phase transition disappears in the presence of the logarithmic correction.

  3. Modeling the Bergeron-Findeisen Process Using PDF Methods With an Explicit Representation of Mixing

    NASA Astrophysics Data System (ADS)

    Jeffery, C.; Reisner, J.

    2005-12-01

    Currently, the accurate prediction of cloud droplet and ice crystal number concentration in cloud resolving, numerical weather prediction and climate models is a formidable challenge. The Bergeron-Findeisen process in which ice crystals grow by vapor deposition at the expense of super-cooled droplets is expected to be inhomogeneous in nature--some droplets will evaporate completely in centimeter-scale filaments of sub-saturated air during turbulent mixing while others remain unchanged [Baker et al., QJRMS, 1980]--and is unresolved at even cloud-resolving scales. Despite the large body of observational evidence in support of the inhomogeneous mixing process affecting cloud droplet number [most recently, Brenguier et al., JAS, 2000], it is poorly understood and has yet to be parameterized and incorporated into a numerical model. In this talk, we investigate the Bergeron-Findeisen process using a new approach based on simulations of the probability density function (PDF) of relative humidity during turbulent mixing. PDF methods offer a key advantage over Eulerian (spatial) models of cloud mixing and evaporation: the low probability (cm-scale) filaments of entrained air are explicitly resolved (in probability space) during the mixing event even though their spatial shape, size and location remain unknown. Our PDF approach reveals the following features of the inhomogeneous mixing process during the isobaric turbulent mixing of two parcels containing super-cooled water and ice, respectively: (1) The scavenging of super-cooled droplets is inhomogeneous in nature; some droplets evaporate completely at early times while others remain unchanged. (2) The degree of total droplet evaporation during the initial mixing period depends linearly on the mixing fractions of the two parcels and logarithmically on Damköhler number (Da)---the ratio of turbulent to evaporative time-scales. 
(3) Our simulations predict that the PDF of Lagrangian (time-integrated) subsaturation S goes as S^(-1) at high Da. This behavior results from a Gaussian mixing closure and requires observational validation.

  4. Water quality trend analysis for the Karoon River in Iran.

    PubMed

    Naddafi, K; Honari, H; Ahmadi, M

    2007-11-01

    The Karoon River basin, with a basin area of 67,000 km(2), is located in the southern part of Iran. Discharge and water quality variables have been monitored monthly at the Gatvand (1967-2005) and Khorramshahr (1969-2005) stations of the Karoon River. In this paper, the time series of monthly values of the water quality parameters and the discharge were analyzed using statistical methods to detect trends and to select the best-fitting models. The Kolmogorov-Smirnov test was used to select the theoretical distribution that best fitted the data. Simple regression was used to examine the concentration-time relationships, which showed better correlation at Khorramshahr station than at Gatvand station. The exponential model better describes the concentration-time relationship at Khorramshahr station, whereas the logarithmic model fits better at Gatvand station. The correlation coefficients are positive for all of the variables at Khorramshahr station; at Gatvand station they are also positive for all variables except magnesium (Mg2+), bicarbonate (HCO3-), and temporary hardness, which show a decreasing relationship. Overall, the logarithmic and exponential models best describe the concentration-time relationships for the two stations.
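    Both candidate models reduce to simple linear regression after a transform, which can be sketched on synthetic data (the coefficients below are made up for illustration, not the Karoon River values):

```python
import numpy as np

# The exponential model C = a * exp(b*t) is linear in log(C) vs t, and the
# logarithmic model C = a + b*ln(t) is linear in C vs ln(t), so both fit
# with ordinary least squares after the transform.
t = np.arange(1.0, 40.0)                               # time index
C = 2.0 + 0.8 * np.log(t) \
    + np.random.default_rng(1).normal(0.0, 0.05, t.size)  # synthetic data

b_log, a_log = np.polyfit(np.log(t), C, 1)     # logarithmic model fit
b_exp, ln_a_exp = np.polyfit(t, np.log(C), 1)  # exponential model fit

print(round(a_log, 2), round(b_log, 2))  # near (2.0, 0.8)
```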

  5. DIVERSITY: A new method for evaluating sensitivity of groundwater to contamination

    NASA Astrophysics Data System (ADS)

    Ray, J. A.; O'Dell, P. W.

    1993-12-01

    This study outlines an improved method, DIVERSITY, for delineating and rating groundwater sensitivity. It is an acronym for DIspersion/VElocity-Rated SensitivITY, which is based on an assessment of three aquifer characteristics: recharge potential, flow velocity, and flow directions. The primary objective of this method is to produce sensitivity maps at the county or state scale that illustrate intrinsic potential for contamination of the uppermost aquifer. Such maps can be used for recognition of aquifer sensitivity and for protection of groundwater quality. We suggest that overriding factors that strongly affect one or more of the three basic aquifer characteristics may systematically elevate or lower the sensitivity rating. The basic method employs a three-step procedure: (1) Hydrogeologic settings are delineated on the basis of geology and groundwater recharge/discharge position within a terrane. (2) A sensitivity envelope or model for each setting is outlined on a three-component rating graph. (3) Sensitivity ratings derived from the envelope are extrapolated to hydrogeologic setting polygons utilizing overriding and key factors, when appropriate. The three-component sensitivity rating graph employs two logarithmic scales and a relative area scale on which measured and estimated values may be plotted. The flow velocity scale ranging from 0.01 to more than 10,000 m/d is the keystone of the rating graph. Whenever possible, actual time-of-travel values are plotted on the velocity scale to bracket the position of a sensitivity envelope. The DIVERSITY method was developed and tested for statewide use in Kentucky, but we believe it is also practical and applicable for use in almost any other area.

  6. Viscoelastic subdiffusion: from anomalous to normal.

    PubMed

    Goychuk, Igor

    2009-10-01

    We study viscoelastic subdiffusion in bistable and periodic potentials within the generalized Langevin equation approach. Our results justify the (ultra)slow fluctuating rate view of the corresponding bistable non-Markovian dynamics, which displays bursting and anticorrelation of the residence times in two potential wells. The transition kinetics is asymptotically stretched exponential when the potential barrier V_0 exceeds the thermal energy k_BT several times [V_0 ≈ (2-10) k_BT], and it cannot be described by the non-Markovian rate theory (NMRT). The well-known NMRT result approximates, however, ever better with increasing barrier height the most probable logarithm of the residence times. Moreover, the rate description is gradually restored when the barrier height exceeds a fuzzy borderline which depends on the power-law exponent α of free subdiffusion. Such potential-free subdiffusion is ergodic. Surprisingly, in periodic potentials it is not sensitive to the barrier height in the long-time asymptotic limit. However, the transient to this asymptotic regime is extremely slow and does profoundly depend on the barrier height. The time scale of such subdiffusion can exceed the mean residence time in a potential well or in a finite spatial domain by many orders of magnitude. All these features are in sharp contrast with an alternative subdiffusion mechanism involving jumps among traps with a divergent mean residence time in these traps.

  7. Impact of the bottom drag coefficient on saltwater intrusion in the extremely shallow estuary

    NASA Astrophysics Data System (ADS)

    Lyu, Hanghang; Zhu, Jianrong

    2018-02-01

    The interactions between the extremely shallow, funnel-shaped topography and dynamic processes in the North Branch (NB) of the Changjiang Estuary produce a particular type of saltwater intrusion, saltwater spillover (SSO), from the NB into the South Branch (SB). This dominant type of saltwater intrusion threatens the winter water supplies of reservoirs located in the estuary. Simulated SSO was weaker than actual SSO in previous studies, and this problem has not been solved until now. The improved ECOM-si model with the advection scheme HSIMT-TVD was applied in this study. Logarithmic and Chézy-Manning formulas of the bottom drag coefficient (BDC) were established in the model to investigate the associated effect on saltwater intrusion in the NB. Modeled data and data collected at eight measurement stations located in the NB from February 19 to March 1, 2017, were compared, and three skill assessment indicators, the correlation coefficient (CC), root-mean-square error (RMSE), and skill score (SS), of water velocity and salinity were used to quantitatively validate the model. The results indicated that the water velocities modeled using the Chézy-Manning formula of BDC were slightly more accurate than those based on the logarithmic BDC formula, but the salinities produced by the latter formula were more accurate than those of the former. The results showed that the BDC increases when water depth decreases during ebb tide, and the results based on the Chézy-Manning formula were smaller than those based on the logarithmic formula. Additionally, the landward net water flux in the upper reaches of the NB during spring tide increases based on the Chézy-Manning formula, and saltwater intrusion in the NB was enhanced, especially in the upper reaches of the NB. 
At a transect in the upper reaches of the NB, the net transect water flux (NTWF) is upstream in spring tide and downstream in neap tide, and the values produced by the Chézy-Manning formula are much larger than those based on the logarithmic formula. Notably, SSO during spring tide was 1.8 times larger based on the Chézy-Manning formula than that based on the logarithmic formula. The model underestimated SSO and salinity at the hydrological stations in the SB based on the logarithmic BDC formula but successfully simulated SSO and the temporal variations in salinity in the SB using the Chézy-Manning formula of BDC.
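The two bottom-drag-coefficient closures compared in the study can be sketched as follows; the roughness length z0, Manning coefficient n, and reference height are illustrative assumptions, not the calibrated model parameters:

```python
import numpy as np

KAPPA, G = 0.4, 9.81  # von Karman constant, gravity (m/s^2)

# Logarithmic (law-of-the-wall) closure: Cd = (kappa / ln(zr/z0))^2,
# evaluated at a reference height zr above the bed.
def bdc_logarithmic(h, z0=0.001, zr=None):
    zr = h / 2 if zr is None else zr
    return (KAPPA / np.log(zr / z0)) ** 2

# Chezy-Manning closure: Cd = g * n^2 / h^(1/3), which grows as the
# water depth h decreases -- the depth sensitivity noted in the abstract.
def bdc_chezy_manning(h, n=0.02):
    return G * n**2 / h ** (1.0 / 3.0)

for h in (1.0, 5.0, 20.0):
    print(h, bdc_logarithmic(h), bdc_chezy_manning(h))
```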

  8. Alternating current (AC) iontophoretic transport across human epidermal membrane: effects of AC frequency and amplitude.

    PubMed

    Yan, Guang; Xu, Qingfang; Anissimov, Yuri G; Hao, Jinsong; Higuchi, William I; Li, S Kevin

    2008-03-01

    As a continuing effort to understand the mechanisms of alternating current (AC) transdermal iontophoresis and the iontophoretic transport pathways in the stratum corneum (SC), the objectives of the present study were to determine the interplay of AC frequency, AC voltage, and iontophoretic transport of ionic and neutral permeants across human epidermal membrane (HEM) and use AC as a means to characterize the transport pathways. Constant AC voltage iontophoresis experiments were conducted with HEM in 0.10 M tetraethyl ammonium pivalate (TEAP). AC frequencies ranging from 0.0001 to 25 Hz and AC applied voltages of 0.5 and 2.5 V were investigated. Tetraethyl ammonium (TEA) and arabinose (ARA) were the ionic and neutral model permeants, respectively. In data analysis, the logarithm of the permeability coefficients of HEM for the model permeants was plotted against the logarithm of the HEM electrical resistance for each AC condition. As expected, linear correlations between the logarithms of permeability coefficients and the logarithms of resistances of HEM were observed, and the permeability data were first normalized and then compared at the same HEM electrical resistance using these correlations. Transport enhancement of the ionic permeant was significantly larger than that of the neutral permeant during AC iontophoresis. The fluxes of the ionic permeant during AC iontophoresis of 2.5 V in the frequency range from 5 to 1,000 Hz were relatively constant and were approximately 4 times over those of passive transport. When the AC frequency decreased from 5 to 0.001 Hz at 2.5 V, flux enhancement increased to around 50 times over passive transport. While the AC frequency for achieving the full effect of iontophoretic enhancement at low AC frequency was lower than anticipated, the frequency for approaching passive diffusion transport at high frequency was higher than expected from the HEM morphology. 
These observations are consistent with a transport model of multiple barriers in series and the previous hypothesis that the iontophoresis pathways across HEM under AC behave like a series of reservoirs interconnected by short pore pathways.
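The log-log normalization step in the data analysis can be sketched with hypothetical numbers (the resistances and permeabilities below are made up for illustration, not HEM measurements):

```python
import numpy as np

# The analysis regresses log(permeability) on log(electrical resistance),
# then uses the fitted line to shift each permeability to a common
# reference resistance before comparing AC conditions.
log_R = np.log10([5.0e3, 1.2e4, 3.1e4, 8.0e4, 2.0e5])       # ohm cm^2
log_P = np.log10([4.0e-6, 1.9e-6, 8.1e-7, 3.5e-7, 1.5e-7])  # cm/s

slope, intercept = np.polyfit(log_R, log_P, 1)  # linear log-log correlation

def normalize(logP, logR, ref_logR=np.log10(5.0e4)):
    """Shift a permeability along the fitted line to a reference resistance."""
    return logP + slope * (ref_logR - logR)

print(round(slope, 3), round(normalize(log_P[0], log_R[0]), 3))
```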

  9. Small range logarithm calculation on Intel Quartus II Verilog

    NASA Astrophysics Data System (ADS)

    Mustapha, Muhazam; Mokhtar, Anis Shahida; Ahmad, Azfar Asyrafie

    2018-02-01

    The logarithm function is the inverse of the exponential function. This paper implements a power series expansion of the natural logarithm function using Verilog HDL in Quartus II. The design is written at the RTL level in order to decrease the number of megafunctions. Simulations were done to determine the precision and the number of LEs used, so that the output is calculated accurately. It is found that the accuracy of the system is only valid for the range of 1 to e.
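    A commonly used power series for the natural logarithm is the Mercator series, which a software sketch can check directly. Whether this matches the paper's exact hardware formulation is an assumption (the FPGA design itself is in Verilog, and its fixed-point details may differ); the series itself converges only for 0 < x ≤ 2:

```python
import math

# Mercator series: ln(x) = sum_{n>=1} (-1)^(n+1) * (x-1)^n / n.
def ln_series(x, terms=60):
    s, p = 0.0, 1.0
    for n in range(1, terms + 1):
        p *= (x - 1.0)                      # running power (x-1)^n
        s += ((-1) ** (n + 1)) * p / n
    return s

print(abs(ln_series(1.5) - math.log(1.5)) < 1e-9)  # True: inside convergence
```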

  10. Accelerating consensus on coevolving networks: The effect of committed individuals

    NASA Astrophysics Data System (ADS)

    Singh, P.; Sreenivasan, S.; Szymanski, B. K.; Korniss, G.

    2012-04-01

    Social networks are not static but, rather, constantly evolve in time. One of the elements thought to drive the evolution of social network structure is homophily—the need for individuals to connect with others who are similar to them. In this paper, we study how the spread of a new opinion, idea, or behavior on such a homophily-driven social network is affected by the changing network structure. In particular, using simulations, we study a variant of the Axelrod model on a network with a homophily-driven rewiring rule imposed. First, we find that the presence of rewiring within the network, in general, impedes the reaching of consensus in opinion, as the time to reach consensus diverges exponentially with network size N. We then investigate whether the introduction of committed individuals who are rigid in their opinion on a particular issue can speed up the convergence to consensus on that issue. We demonstrate that as committed agents are added, beyond a critical value of the committed fraction, the consensus time growth becomes logarithmic in network size N. Furthermore, we show that slight changes in the interaction rule can produce strikingly different results in the scaling behavior of consensus time, Tc. However, the benefit gained by introducing committed agents is qualitatively preserved across all the interaction rules we consider.

  11. Logarithmic spiral trajectories generated by Solar sails

    NASA Astrophysics Data System (ADS)

    Bassetto, Marco; Niccolai, Lorenzo; Quarta, Alessandro A.; Mengali, Giovanni

    2018-02-01

    Analytic solutions to continuous thrust-propelled trajectories are available in a few cases only. An interesting case is offered by the logarithmic spiral, that is, a trajectory characterized by a constant flight path angle and a fixed thrust vector direction in an orbital reference frame. The logarithmic spiral is important from a practical point of view, because it may be passively maintained by a Solar sail-based spacecraft. The aim of this paper is to provide a systematic study concerning the possibility of inserting a Solar sail-based spacecraft into a heliocentric logarithmic spiral trajectory without using any impulsive maneuver. The required conditions to be met by the sail in terms of attitude angle, propulsive performance, parking orbit characteristics, and initial position are thoroughly investigated. The closed-form variations of the osculating orbital parameters are analyzed, and the obtained analytical results are used for investigating the phasing maneuver of a Solar sail along an elliptic heliocentric orbit. In this mission scenario, the phasing orbit is composed of two symmetric logarithmic spiral trajectories connected with a coasting arc.
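    The defining geometry can be sketched in a few lines; the initial radius and flight path angle below are arbitrary illustrative values, not mission parameters from the paper:

```python
import numpy as np

# A logarithmic spiral keeps the flight path angle gamma constant, so the
# heliocentric radius grows as r(theta) = r0 * exp(theta * tan(gamma)).
def spiral_radius(theta, r0=1.0, gamma_deg=5.0):
    return r0 * np.exp(theta * np.tan(np.radians(gamma_deg)))

theta = np.linspace(0.0, 4 * np.pi, 5)
r = spiral_radius(theta)
# The ratio r(theta + 2*pi) / r(theta) is constant along the spiral:
print(np.allclose(r[2] / r[0], r[4] / r[2]))  # True
```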

  12. Optimization of non-linear gradient in hydrophobic interaction chromatography for the analytical characterization of antibody-drug conjugates.

    PubMed

    Bobály, Balázs; Randazzo, Giuseppe Marco; Rudaz, Serge; Guillarme, Davy; Fekete, Szabolcs

    2017-01-20

    The goal of this work was to evaluate the potential of non-linear gradients in hydrophobic interaction chromatography (HIC) to improve the separation between the different homologous species (drug-to-antibody ratio, DAR) of commercial antibody-drug conjugates (ADC). The selectivities between Brentuximab Vedotin species were measured using three different gradient profiles, namely linear, power-function-based, and logarithmic ones. The logarithmic gradient provides the most equidistant retention distribution for the DAR species and offers the best overall separation of cysteine-linked ADC in HIC. Another important advantage of the logarithmic gradient is its peak-focusing effect for the DAR0 species, which is particularly useful to improve the quantitation limit of DAR0. Finally, the logarithmic behavior of DAR species of ADC in HIC was modelled using two different approaches, based on i) the linear solvent strength (LSS) theory and two scouting linear gradients and ii) a newly derived equation and two logarithmic scouting gradients. In both cases, the retention predictions were excellent, with errors systematically below 3% relative to the experimental values. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Electronic filters, repeated signal charge conversion apparatus, hearing aids and methods

    NASA Technical Reports Server (NTRS)

    Morley, Jr., Robert E. (Inventor); Engebretson, A. Maynard (Inventor); Engel, George L. (Inventor); Sullivan, Thomas J. (Inventor)

    1993-01-01

    An electronic filter for filtering an electrical signal. Signal processing circuitry therein includes a logarithmic filter having a series of filter stages with inputs and outputs in cascade and respective circuits associated with the filter stages for storing electrical representations of filter parameters. The filter stages include circuits for respectively adding the electrical representations of the filter parameters to the electrical signal to be filtered thereby producing a set of filter sum signals. At least one of the filter stages includes circuitry for producing a filter signal in substantially logarithmic form at its output by combining a filter sum signal for that filter stage with a signal from an output of another filter stage. The signal processing circuitry produces an intermediate output signal, and a multiplexer connected to the signal processing circuit multiplexes the intermediate output signal with the electrical signal to be filtered so that the logarithmic filter operates as both a logarithmic prefilter and a logarithmic postfilter. Other electronic filters, signal conversion apparatus, electroacoustic systems, hearing aids and methods are also disclosed.

  14. Coarse graining Escherichia coli chemotaxis: from multi-flagella propulsion to logarithmic sensing.

    PubMed

    Curk, Tine; Matthäus, Franziska; Brill-Karniely, Yifat; Dobnikar, Jure

    2012-01-01

    Various sensing mechanisms in nature can be described by the Weber-Fechner law, which states that the response to varying stimuli is proportional to their relative rather than absolute changes. The chemotaxis of the bacterium Escherichia coli is an example where such logarithmic sensing enables sensitivity over a large range of concentrations. It has recently been experimentally demonstrated that under certain conditions E. coli indeed respond to relative gradients of ligands. We use numerical simulations of bacteria in food gradients to investigate the limits of validity of the logarithmic behavior. We model the chemotactic signaling pathway reactions, couple them to a multi-flagella model for propulsion, and take the effects of rotational diffusion into account to accurately reproduce the experimental observations of single-cell swimming. Using this simulation scheme, we analyze the type of response of bacteria subject to exponential ligand profiles and identify the regimes of absolute gradient sensing, relative gradient sensing, and a rotational-diffusion-dominated regime. We explore the dependence of the swimming speed, average run time, and clockwise (CW) bias on ligand variation and derive a small set of relations that define a coarse-grained model for bacterial chemotaxis. Simulations based on this coarse-grained model compare well with microfluidic experiments on E. coli diffusion in linear and exponential gradients of aspartate.
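
The distinction between absolute and relative (logarithmic) gradient sensing can be sketched numerically: a log-sensing cell responds to d(ln c)/dx, which is constant along an exponential ligand profile but position-dependent along a linear one. A minimal sketch (function names and profile parameters are illustrative):

```python
import math

def relative_gradient(c, x, dx=1e-6):
    """d(ln c)/dx at x: the relative concentration gradient a log-sensing cell responds to."""
    return (math.log(c(x + dx)) - math.log(c(x - dx))) / (2 * dx)

# Exponential ligand profile: the relative gradient is constant (= 1/lam),
# so a Weber-Fechner sensor responds identically everywhere along the profile.
lam = 2.0
c_exp = lambda x: 5.0 * math.exp(x / lam)
g0, g5 = relative_gradient(c_exp, 0.0), relative_gradient(c_exp, 5.0)

# Linear profile: the relative gradient decays with position, so the response
# of a log-sensing cell weakens even though the absolute gradient is constant.
c_lin = lambda x: 1.0 + x
h1, h9 = relative_gradient(c_lin, 1.0), relative_gradient(c_lin, 9.0)
```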

  15. A Memory-Based Model of Hick's Law

    ERIC Educational Resources Information Center

    Schneider, Darryl W.; Anderson, John R.

    2011-01-01

    We propose and evaluate a memory-based model of Hick's law, the approximately linear increase in choice reaction time with the logarithm of set size (the number of stimulus-response alternatives). According to the model, Hick's law reflects a combination of associative interference during retrieval from declarative memory and occasional savings…
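
Hick's law is commonly written RT = a + b·log2(n + 1), the +1 accounting for response uncertainty; the constants a and b below are illustrative, not values fitted by the authors:

```python
import math

def hick_rt(n, a=0.2, b=0.15):
    """Mean choice reaction time (seconds) under Hick's law: RT = a + b * log2(n + 1),
    where n is the number of stimulus-response alternatives. a and b are
    illustrative constants."""
    return a + b * math.log2(n + 1)

# Each time n + 1 doubles, reaction time grows by the same increment b:
rts = [hick_rt(n) for n in (1, 3, 7, 15)]
```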

  16. Disaccommodation in LaMnO3.075

    NASA Astrophysics Data System (ADS)

    Muroi, M.; Street, R.; Cochrane, J. W.; Russell, G. J.

    2000-10-01

    The time dependence of low-field ac susceptibility has been studied on the cation-deficient perovskite manganite LaMnO3.075. It is found that the ac susceptibility |χ| decreases with time over a wide temperature range below Tc (122 K) and the decay of |χ| is roughly proportional to the logarithm of time after demagnetization. It is argued that the time dependence of |χ|, or disaccommodation, arises from progressive domain-wall stabilization through induced exchange interaction, as well as induced magnetocrystalline anisotropy.

  17. Transistor circuit increases range of logarithmic current amplifier

    NASA Technical Reports Server (NTRS)

    Gilmour, G.

    1966-01-01

    Circuit increases the range of a logarithmic current amplifier by combining a commercially available amplifier with a silicon epitaxial transistor. A temperature compensating network is provided for the transistor.

  18. Entropy as a measure of diffusion

    NASA Astrophysics Data System (ADS)

    Aghamohammadi, Amir; Fatollahi, Amir H.; Khorrami, Mohammad; Shariati, Ahmad

    2013-10-01

    The time variation of entropy, as an alternative to the variance, is proposed as a measure of the diffusion rate. It is shown that for linear and time-translationally invariant systems having a large-time limit for the density, at large times the entropy tends exponentially to a constant. For systems with no stationary density, at large times the entropy is logarithmic with a coefficient specifying the speed of the diffusion. As an example, the large-time behaviors of the entropy and the variance are compared for various types of fractional-derivative diffusions.
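
For free diffusion from a point source the density is Gaussian with variance 2Dt, so the differential entropy is S(t) = ½·ln(4πeDt): it grows logarithmically in time, and the coefficient of ln t (½ per dimension here) characterizes the diffusion. A sketch of this closed-form behavior (variable names are illustrative):

```python
import math

def gaussian_entropy(var):
    """Differential entropy of a 1-D Gaussian with variance var: 0.5 * ln(2*pi*e*var)."""
    return 0.5 * math.log(2 * math.pi * math.e * var)

def diffusion_entropy(D, t):
    # Free diffusion from a point source has density N(0, 2*D*t).
    return gaussian_entropy(2 * D * t)

D = 0.3
# Entropy grows logarithmically: S(t2) - S(t1) = 0.5 * ln(t2/t1), independent of D.
dS = diffusion_entropy(D, 100.0) - diffusion_entropy(D, 1.0)
```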

  19. Logarithmic corrections to black hole entropy from Kerr/CFT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pathak, Abhishek; Porfyriadis, Achilleas P.; Strominger, Andrew

    It has been shown by A. Sen that logarithmic corrections to the black hole area-entropy law are entirely determined macroscopically from the massless particle spectrum. They therefore serve as powerful consistency checks on any proposed enumeration of quantum black hole microstates. Furthermore, Sen’s results include a macroscopic computation of the logarithmic corrections for a five-dimensional near extremal Kerr-Newman black hole. We compute these corrections microscopically using a stringy embedding of the Kerr/CFT correspondence and find perfect agreement.

  20. Logarithmic corrections to black hole entropy from Kerr/CFT

    DOE PAGES

    Pathak, Abhishek; Porfyriadis, Achilleas P.; Strominger, Andrew; ...

    2017-04-14

    It has been shown by A. Sen that logarithmic corrections to the black hole area-entropy law are entirely determined macroscopically from the massless particle spectrum. They therefore serve as powerful consistency checks on any proposed enumeration of quantum black hole microstates. Furthermore, Sen’s results include a macroscopic computation of the logarithmic corrections for a five-dimensional near extremal Kerr-Newman black hole. We compute these corrections microscopically using a stringy embedding of the Kerr/CFT correspondence and find perfect agreement.

  1. Integral definition of the logarithmic function and the derivative of the exponential function in calculus

    NASA Astrophysics Data System (ADS)

    Vaninsky, Alexander

    2015-04-01

    Defining the logarithmic function as a definite integral with a variable upper limit, an approach used by some popular calculus textbooks, is problematic. We discuss the disadvantages of such a definition and provide a way to fix the problem. We also consider a definition-based, rigorous derivation of the derivative of the exponential function that is easier, more intuitive, and complies with the standard definitions of the number e, the logarithmic, and the exponential functions.

  2. Threshold Resummation for Squark-Antisquark and Gluino-Pair Production at the LHC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kulesza, A.; Motyka, L. (II Institute for Theoretical Physics, University of Hamburg, Luruper Chaussee 149, D-22761, Germany, and Institute of Physics, Jagellonian University, Reymonta 4, 30-059 Krakow)

    2009-03-20

    We study the effect of soft gluon emission in the hadroproduction of squark-antisquark and gluino-gluino pairs at the next-to-leading logarithmic (NLL) accuracy within the framework of the minimal supersymmetric model. The one-loop soft anomalous dimension matrices controlling the color evolution of the underlying hard-scattering processes are calculated. We present the resummed total cross sections and show numerical results for proton-proton collisions at 14 TeV. For the gluino-pair production, the theoretical uncertainty due to scale variation is reduced to the few-percent level.

  3. Disordered two-dimensional electron systems with chiral symmetry

    NASA Astrophysics Data System (ADS)

    Markoš, P.; Schweitzer, L.

    2012-10-01

    We review the results of our recent numerical investigations on the electronic properties of disordered two-dimensional systems with chiral unitary, chiral orthogonal, and chiral symplectic symmetry. Of particular interest is the behavior of the density of states and the logarithmic scaling of the smallest Lyapunov exponents in the vicinity of the chiral quantum critical point in the band center at E=0. The observed peaks or depressions in the density of states, the distribution of the critical conductances, and the possible non-universality of the critical exponents for certain chiral unitary models are discussed.

  4. Electron-atom spin asymmetry and two-electron photodetachment - Addenda to the Coulomb-dipole threshold law

    NASA Technical Reports Server (NTRS)

    Temkin, A.

    1984-01-01

    Temkin (1982) has derived the ionization threshold law based on a Coulomb-dipole theory of the ionization process. The present investigation is concerned with a reexamination of several aspects of the Coulomb-dipole threshold law. Attention is given to the energy scale of the logarithmic denominator, the spin-asymmetry parameter, and an estimate of alpha and the energy range of validity of the threshold law, taking into account the result of the two-electron photodetachment experiment conducted by Donahue et al. (1984).

  5. Safety analytics for integrating crash frequency and real-time risk modeling for expressways.

    PubMed

    Wang, Ling; Abdel-Aty, Mohamed; Lee, Jaeyoung

    2017-07-01

    To find crash contributing factors, there have been numerous crash frequency and real-time safety studies, but such studies have been conducted independently. Until this point, no researcher has simultaneously analyzed crash frequency and real-time crash risk to test whether integrating them could better explain crash occurrence. Therefore, this study aims at integrating crash frequency and real-time safety analyses using expressway data. A Bayesian integrated model and a non-integrated model were built: the integrated model linked the crash frequency and the real-time models by adding the logarithm of the estimated expected crash frequency to the real-time model; the non-integrated model independently estimated the crash frequency and the real-time crash risk. The results showed that the integrated model outperformed the non-integrated model, as it provided much better model results for both the crash frequency and the real-time models. This result indicated that the added component, the logarithm of the expected crash frequency, successfully linked the two models and provided useful information to both. This study uncovered a few variables that are not typically included in crash frequency analysis. For example, the average daily standard deviation of speed, aggregated from speeds at 1-min intervals, had a positive effect on crash frequency. In conclusion, this study suggested a methodology to improve the crash frequency and real-time models by integrating them, and it might inspire future researchers to understand crash mechanisms better. Copyright © 2017 Elsevier Ltd. All rights reserved.
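
The linking term behaves like an offset in a log-link (Poisson-style) model: adding log(expected crash frequency) with a coefficient fixed at one makes the real-time crash rate directly proportional to the segment's expected frequency. A toy sketch under that assumption (all names and numbers are illustrative, not estimates from the study):

```python
import math

def realtime_crash_rate(expected_freq, covariates, betas, intercept=-5.0):
    """Toy log-link model: log(rate) = intercept + log(expected_freq) + sum(b*x).
    Including log(expected_freq) as an offset (coefficient fixed at 1) means the
    real-time rate scales proportionally with the segment's expected crash frequency.
    All coefficients here are illustrative."""
    eta = intercept + math.log(expected_freq) + sum(b * x for b, x in zip(betas, covariates))
    return math.exp(eta)

betas = [0.4]  # e.g. an illustrative effect of the speed standard deviation
x = [1.5]
r_low = realtime_crash_rate(2.0, x, betas)   # segment expecting 2 crashes per period
r_high = realtime_crash_rate(6.0, x, betas)  # segment expecting 6 crashes per period
```

With identical real-time covariates, the two rates differ exactly by the ratio of expected frequencies, which is how the offset transfers crash-frequency information into the real-time model.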

  6. Time and Space Efficient Algorithms for Two-Party Authenticated Data Structures

    NASA Astrophysics Data System (ADS)

    Papamanthou, Charalampos; Tamassia, Roberto

    Authentication is increasingly relevant to data management. Data is being outsourced to untrusted servers and clients want to securely update and query their data. For example, in database outsourcing, a client's database is stored and maintained by an untrusted server. Also, in simple storage systems, clients can store very large amounts of data but at the same time, they want to assure their integrity when they retrieve them. In this paper, we present a model and protocol for two-party authentication of data structures. Namely, a client outsources its data structure and verifies that the answers to the queries have not been tampered with. We provide efficient algorithms to securely outsource a skip list with logarithmic time overhead at the server and client and logarithmic communication cost, thus providing an efficient authentication primitive for outsourced data, both structured (e.g., relational databases) and semi-structured (e.g., XML documents). In our technique, the client stores only a constant amount of space, which is optimal. Our two-party authentication framework can be deployed on top of existing storage applications, thus providing an efficient authentication service. Finally, we present experimental results that demonstrate the practical efficiency and scalability of our scheme.
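
The paper's authenticated skip list is not reproduced here, but the flavor of logarithmic-cost authentication can be sketched with a simpler structure, a Merkle hash tree: the server returns an answer plus O(log n) sibling hashes, and the client verifies it against a constant-size root digest. A minimal sketch (assumes a power-of-two number of items; an illustration, not the protocol from the paper):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Return the list of tree levels; level 0 = hashed leaves, last level = root."""
    level = [h(x) for x in leaves]  # assumes len(leaves) is a power of two
    levels = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels, idx):
    """Sibling hashes along the path to the root: O(log n) of them."""
    proof = []
    for level in levels[:-1]:
        proof.append((idx & 1, level[idx ^ 1]))  # (am-I-the-right-child, sibling hash)
        idx //= 2
    return proof

def verify(root, leaf, proof):
    """Client-side check: recompute the root from the leaf and the sibling path."""
    acc = h(leaf)
    for is_right, sib in proof:
        acc = h(sib + acc) if is_right else h(acc + sib)
    return acc == root

leaves = [b"record-%d" % i for i in range(16)]
levels = build_tree(leaves)
root = levels[-1][0]      # constant-size digest kept by the client
proof = prove(levels, 5)  # log2(16) = 4 sibling hashes
```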

  7. A new real-time guidance strategy for aerodynamic ascent flight

    NASA Astrophysics Data System (ADS)

    Yamamoto, Takayuki; Kawaguchi, Jun'ichiro

    2007-12-01

    Reusable launch vehicles are conceived to constitute the future space transportation system. If these vehicles use air-breathing propulsion and lift, taking off horizontally, their optimal steering exhibits completely different behavior from that of conventional rockets. In this paper, a new guidance strategy is proposed. The method derives from the optimality condition for steering, and an analysis concludes that the steering function takes a form comprising linear and logarithmic terms, which include only four parameters. Parameter optimization of this method shows that the acquired terminal horizontal velocity is almost the same as that obtained by direct numerical optimization, which supports the parameterized linear-logarithmic steering law. It is also shown that there exists a simple linear relation between the terminal states and the parameters to be corrected. This relation makes it easy to determine, in real time, the parameters that satisfy the terminal boundary conditions. The paper presents guidance results for practical application cases. The results show that the guidance performs well and satisfies the specified terminal boundary conditions. The strategy built and presented here guarantees a robust solution in real time without any optimization process, and is found to be quite practical.

  8. Very low scale Coleman-Weinberg inflation with nonminimal coupling

    NASA Astrophysics Data System (ADS)

    Kaneta, Kunio; Seto, Osamu; Takahashi, Ryo

    2018-03-01

    We study viable small-field Coleman-Weinberg (CW) inflation models with the help of nonminimal coupling to gravity. The simplest small-field CW inflation model (with a low-scale potential minimum) is incompatible with the cosmological constraint on the scalar spectral index. However, there are possibilities to make the model realistic. First, we revisit the CW inflation model supplemented with a linear potential term. We next consider the CW inflation model with a logarithmic nonminimal coupling and illustrate that the model can open a new viable parameter space that includes the model with a linear potential term. We also show parameter spaces where the Hubble scale during the inflation can be as small as 10^-4 GeV, 1 GeV, 10^4 GeV, and 10^8 GeV for numbers of e-folds of 40, 45, 50, and 55, respectively, with other cosmological constraints being satisfied.

  9. Ising Criticality of the Clock Model from Density of States Obtained by the Replica Exchange-Wang-Landau Method

    NASA Astrophysics Data System (ADS)

    Cadilhe, Antonio

    2018-04-01

    We performed extensive simulations, using the Replica Exchange-Wang-Landau method, of the clock model for orders 3 and 4 on a square lattice, where critical behaviors are expected to belong to the Ising universality class. Though order 2 is the Ising model itself, and thus exactly solvable in two dimensions, we still provide its results for comparison with the other two orders. Results for various energy-related quantities, such as the mean energy per spin, the specific heat, and the logarithmic scaling of the peak of the specific heat, are presented and shown to follow Ising behavior. Additionally, we also present results for magnetic quantities, such as the magnetization, the magnetic susceptibility, and the corresponding scaling behavior of the peak of the magnetic susceptibility. Again, our results show scaling in conformity with Ising critical behavior.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katti, Amogh; Di Fatta, Giuseppe; Naughton III, Thomas J

    Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a fault-tolerant failure detection and consensus algorithm. This paper presents and compares two novel failure detection and consensus algorithms. The proposed algorithms are based on Gossip protocols and are inherently fault-tolerant and scalable. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in both algorithms the number of Gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus.
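
The logarithmic scaling of Gossip cycles with system size can be seen in a toy synchronous push-gossip simulation (this sketch is not the authors' algorithm; it only illustrates why rumor spreading takes O(log n) cycles):

```python
import random

def gossip_rounds(n, seed=0):
    """Synchronous push gossip: each informed node contacts one uniformly random
    node per cycle. Returns the number of cycles until every node is informed."""
    rng = random.Random(seed)
    informed = {0}
    rounds = 0
    while len(informed) < n:
        informed |= {rng.randrange(n) for _ in range(len(informed))}
        rounds += 1
    return rounds

# Doubling the system size adds only a few cycles, consistent with O(log n):
r1k, r2k = gossip_rounds(1024), gossip_rounds(2048)
```

Since the informed set can at most double per cycle, at least log2(n) cycles are needed; the simulation shows the actual count stays within a small constant factor of that bound.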

  11. Speed-difficulty trade-off in speech: Chinese versus English

    PubMed Central

    Sun, Yao; Latash, Elizaveta M.; Mikaelian, Irina L.

    2011-01-01

    This study continues the investigation of the previously described speed-difficulty trade-off in picture description tasks. In particular, we tested the hypothesis that Mandarin Chinese and American English are similar in showing logarithmic dependences between speech time and index of difficulty (ID), while they differ significantly in the amount of time needed to describe simple pictures; that this difference increases for more complex pictures; and that it is associated with a proportional difference in the number of syllables used. Subjects (eight Chinese speakers and eight English speakers) were tested in pairs. One subject (the Speaker) described simple pictures, while the other subject (the Performer) tried to reproduce the pictures based on the verbal description as quickly as possible with a set of objects. The Chinese speakers initiated speech production significantly faster than the English speakers. Speech time scaled linearly with ln(ID) in all subjects, but the regression coefficient was significantly higher in the English speakers than in the Chinese speakers. The number of errors was somewhat lower in the Chinese participants (not significantly). The Chinese pairs also showed a shorter delay between the initiation of speech and the initiation of action by the Performer, shorter movement time by the Performer, and shorter overall performance time. The number of syllables scaled with ID, and the Chinese speakers used significantly fewer syllables. Speech rate was comparable between the two groups, about 3 syllables/s; it dropped for more complex pictures (higher ID). When asked to reproduce the same pictures without speaking, movement time scaled linearly with ln(ID); the Chinese performers were slower than the English performers. We conclude that natural languages show a speed-difficulty trade-off similar to Fitts' law; the trade-offs in movement and speech production are likely to originate at a cognitive level. The time advantage of the Chinese participants originates neither from any similarity between the simple pictures and Chinese written characters nor from sloppier performance. It is linked to using fewer syllables to transmit the same information. We suggest that natural languages may differ in informational density, defined as the amount of information transmitted by a given number of syllables. PMID:21479658

  12. Hydrological drivers of wetland vegetational biodiversity patterns within Everglades National Park, Florida

    NASA Astrophysics Data System (ADS)

    Todd, J.; Pumo, D.; Azaele, S.; Muneepeerakul, R.; Miralles-Wilhelm, F. R.; Rinaldo, A.; Rodriguez-Iturbe, I.

    2009-12-01

    The influence of hydrological dynamics on vegetational biodiversity and the structuring of wetland environments is of growing interest as wetlands are modified by human alteration and the increasing threat from climate change. Hydrology has long been considered a driving force in shaping wetland communities, as the frequency of inundation along with the duration and depth of flooding are key determinants of wetland structure. We attempt to link hydrological dynamics with vegetational distribution and species richness across Everglades National Park (ENP) using two publicly available datasets. The first, the Everglades Depth Estimation Network (EDEN), is a water-surface model which determines the median daily measure of water level across a 400 m × 400 m grid over seven years of measurement. The second is a vegetation map and classification system at the 1:15,000 scale which categorizes vegetation within the Everglades into 79 community types. From these data, we have studied the probabilistic structure of the frequency, duration, and depth of hydroperiods. Preliminary results indicate that the percentage of time a location is inundated is a principal structuring variable, with individual communities responding differently. For example, sawgrass appears to be more of a generalist community, as it is found across a wide range of time-inundated percentages, while spike rush has a more restricted distribution and favors wetter environments disproportionately more than predicted at random. Further, the diversity of vegetation communities (e.g., a measure of biodiversity) found across a hydrologic variable does not necessarily match the distribution function for that variable on the landscape. For instance, the number of communities does not differ across the percentage of time inundated. Different measures of vegetation biodiversity, such as the local number of community types, are also studied at different spatial scales, with some characteristics, like the slope of the semi-logarithmic relation between rank and occupancy, found to be robust to scale changes. The ENP offers an expansive natural environment in which to study how vegetational dynamics and community composition are affected by hydrologic variables from the small scale (at the individual community level) to the large (biodiversity measures at differing spatial scales).

  13. The Origins of Order: Self-Organization and Selection in Evolution

    NASA Astrophysics Data System (ADS)

    Kauffman, Stuart A.

    The following sections are included: * Introduction * Fitness Landscapes in Sequence Space * The NK Model of Rugged Fitness Landscapes * The NK Model of Random Epistatic Interactions * The Rank Order Statistics on K = N - 1 Random Landscapes * The number of local optima is very large * The expected fraction of fitter 1-mutant neighbors dwindles by 1/2 on each improvement step * Walks to local optima are short and vary as a logarithmic function of N * The expected time to reach an optimum is proportional to the dimensionality of the space * The ratio of accepted to tried mutations scales as lnN/N * Any genotype can only climb to a small fraction of the local optima * A small fraction of the genotypes can climb to any one optimum * Conflicting constraints cause a "complexity catastrophe": as complexity increases, accessible adaptive peaks fall toward the mean fitness * The "Tunable" NK Family of Correlated Landscapes * Other Combinatorial Optimization Problems and Their Landscapes * Summary * References
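
The claim that adaptive walks on a K = N - 1 (fully random) landscape are short, varying logarithmically with N, is easy to check by simulation: on an uncorrelated landscape each one-mutant neighbor's fitness is an independent draw, and each uphill step roughly halves the fraction of fitter neighbors. A hedged sketch (parameter choices are illustrative):

```python
import random

def adaptive_walk_length(n, rng):
    """Adaptive walk on an uncorrelated (K = N - 1) landscape: every genotype's
    fitness is an independent uniform draw, so each of the n one-mutant neighbors
    gets a fresh random fitness. Step to a random fitter neighbor until none exists."""
    f = rng.random()
    steps = 0
    while True:
        fitter = [x for x in (rng.random() for _ in range(n)) if x > f]
        if not fitter:
            return steps  # local optimum reached
        f = rng.choice(fitter)
        steps += 1

rng = random.Random(42)
# Mean walk length for N = 100 is on the order of ln(100) ~ 4.6, far below N.
mean_steps = sum(adaptive_walk_length(100, rng) for _ in range(2000)) / 2000
```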

  14. Gaussian solitary waves and compactons in Fermi–Pasta–Ulam lattices with Hertzian potentials

    PubMed Central

    James, Guillaume; Pelinovsky, Dmitry

    2014-01-01

    We consider a class of fully nonlinear Fermi–Pasta–Ulam (FPU) lattices, consisting of a chain of particles coupled by fractional power nonlinearities of order α > 1. This class of systems incorporates a classical Hertzian model describing acoustic wave propagation in chains of touching beads in the absence of precompression. We analyse the propagation of localized waves when α is close to unity. Solutions varying slowly in space and time are searched with an appropriate scaling, and two asymptotic models of the chain of particles are derived consistently. The first one is a logarithmic Korteweg–de Vries (KdV) equation and possesses linearly orbitally stable Gaussian solitary wave solutions. The second model consists of a generalized KdV equation with Hölder-continuous fractional power nonlinearity and admits compacton solutions, i.e. solitary waves with compact support. In the limit where α approaches unity, we numerically establish the asymptotically Gaussian shape of exact FPU solitary waves with near-sonic speed and analytically check the pointwise convergence of compactons towards the limiting Gaussian profile. PMID:24808748

  15. New variational bounds on convective transport. I. Formulation and analysis

    NASA Astrophysics Data System (ADS)

    Tobasco, Ian; Souza, Andre N.; Doering, Charles R.

    2016-11-01

    We study the maximal rate of scalar transport between parallel walls separated by distance h, by an incompressible fluid with scalar diffusion coefficient κ. Given a velocity vector field u with intensity measured by the Péclet number Pe = h^2 ⟨|∇u|^2⟩^(1/2) / κ (where ⟨·⟩ denotes the space-time average), the challenge is to determine the largest enhancement of wall-to-wall scalar flux over purely diffusive transport, i.e., the Nusselt number Nu. Variational formulations of the problem are presented, and it is determined that Nu ≤ c Pe^(2/3) as Pe → ∞, where c is an absolute constant. Moreover, this scaling for optimal transport (possibly modulo logarithmic corrections) is asymptotically sharp: admissible steady flows with Nu ≥ c' Pe^(2/3) / [log Pe]^2 are constructed. The structure of (nearly) maximally transporting flow fields is discussed. Supported in part by National Science Foundation Graduate Research Fellowship DGE-0813964, awards OISE-0967140, PHY-1205219, DMS-1311833, and DMS-1515161, and the John Simon Guggenheim Memorial Foundation.

  16. Simulating the component counts of combinatorial structures.

    PubMed

    Arratia, Richard; Barbour, A D; Ewens, W J; Tavaré, Simon

    2018-02-09

    This article describes and compares methods for simulating the component counts of random logarithmic combinatorial structures such as permutations and mappings. We exploit the Feller coupling for simulating permutations to provide a very fast method for simulating logarithmic assemblies more generally. For logarithmic multisets and selections, this approach is replaced by an acceptance/rejection method based on a particular conditioning relationship that represents the distribution of the combinatorial structure as that of independent random variables conditioned on a weighted sum. We show how to improve its acceptance rate. We illustrate the method by estimating the probability that a random mapping has no repeated component sizes, and establish the asymptotic distribution of the difference between the number of components and the number of distinct component sizes for a very general class of logarithmic structures. Copyright © 2018. Published by Elsevier Inc.
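
A key fact behind such simulations is that the number of cycles of a uniform random permutation of n elements has the same distribution as a sum of independent Bernoulli(1/i) variables, i = 1..n, which is what makes Feller-style sampling fast. A sketch comparing the two (sample sizes and seeds are arbitrary):

```python
import random

def cycle_count(perm):
    """Count the cycles of a permutation given as a list (perm[i] = image of i)."""
    seen, cycles = [False] * len(perm), 0
    for i in range(len(perm)):
        if not seen[i]:
            cycles += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = perm[j]
    return cycles

def feller_cycle_count(n, rng):
    """Feller-style shortcut: total cycles = sum of independent Bernoulli(1/i) draws."""
    return sum(rng.random() < 1.0 / i for i in range(1, n + 1))

rng = random.Random(7)
n, trials = 50, 20000

direct = []
for _ in range(trials // 10):  # the direct method is O(n) per sample, so fewer samples
    p = list(range(n))
    rng.shuffle(p)
    direct.append(cycle_count(p))

coupled = [feller_cycle_count(n, rng) for _ in range(trials)]
harmonic = sum(1.0 / i for i in range(1, n + 1))  # expected cycle count, about 4.5 for n = 50
```

Both empirical means should agree with the harmonic number H_n, the common expected cycle count.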

  17. Double Resummation for Higgs Production

    NASA Astrophysics Data System (ADS)

    Bonvini, Marco; Marzani, Simone

    2018-05-01

    We present the first double-resummed prediction of the inclusive cross section for the main Higgs production channel in proton-proton collisions, namely, gluon fusion. Our calculation incorporates to all orders in perturbation theory two distinct towers of logarithmic corrections which are enhanced, respectively, at threshold, i.e., large x , and in the high-energy limit, i.e., small x . Large-x logarithms are resummed to next-to-next-to-next-to-leading logarithmic accuracy, while small-x ones to leading logarithmic accuracy. The double-resummed cross section is furthermore matched to the state-of-the-art fixed-order prediction at next-to-next-to-next-to-leading accuracy. We find that double resummation corrects the Higgs production rate by 2% at the currently explored center-of-mass energy of 13 TeV and its impact reaches 10% at future circular colliders at 100 TeV.

  18. Many-body localization transition: Schmidt gap, entanglement length, and scaling

    NASA Astrophysics Data System (ADS)

    Gray, Johnnie; Bose, Sougato; Bayat, Abolfazl

    2018-05-01

    Many-body localization has become an important phenomenon for illuminating a potential rift between nonequilibrium quantum systems and statistical mechanics. However, the nature of the transition between ergodic and localized phases in models displaying many-body localization is not yet well understood. Assuming that this is a continuous transition, analytic results show that the length scale should diverge with a critical exponent ν ≥ 2 in one-dimensional systems. Interestingly, this is in stark contrast with all exact numerical studies, which find ν ~ 1. We introduce the Schmidt gap, new in this context, which scales near the transition with an exponent ν > 2, compatible with the analytical bound. We attribute this to an insensitivity to certain finite-size fluctuations, which remain significant in other quantities at the sizes accessible to exact numerical methods. Additionally, we find that a physical manifestation of the diverging length scale is apparent in the entanglement length computed using the logarithmic negativity between disjoint blocks.

  19. Synthetic analog computation in living cells.

    PubMed

    Daniel, Ramiz; Rubens, Jacob R; Sarpeshkar, Rahul; Lu, Timothy K

    2013-05-30

    A central goal of synthetic biology is to achieve multi-signal integration and processing in living cells for diagnostic, therapeutic and biotechnology applications. Digital logic has been used to build small-scale circuits, but other frameworks may be needed for efficient computation in the resource-limited environments of cells. Here we demonstrate that synthetic analog gene circuits can be engineered to execute sophisticated computational functions in living cells using just three transcription factors. Such synthetic analog gene circuits exploit feedback to implement logarithmically linear sensing, addition, ratiometric and power-law computations. The circuits exhibit Weber's law behaviour as in natural biological systems, operate over a wide dynamic range of up to four orders of magnitude and can be designed to have tunable transfer functions. Our circuits can be composed to implement higher-order functions that are well described by both intricate biochemical models and simple mathematical functions. By exploiting analog building-block functions that are already naturally present in cells, this approach efficiently implements arithmetic operations and complex functions in the logarithmic domain. Such circuits may lead to new applications for synthetic biology and biotechnology that require complex computations with limited parts, need wide-dynamic-range biosensing or would benefit from the fine control of gene expression.
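
    The log-domain arithmetic these circuits exploit can be illustrated with a short numerical sketch (a pure-math illustration, not a model of the gene circuits themselves): once a signal is mapped into the logarithmic domain, ratiometric and power-law computations reduce to subtraction and scaling.

```python
import math

def to_log(x):
    """Map a positive, concentration-like signal into the log domain."""
    return math.log(x)

def from_log(l):
    """Map a log-domain value back to the linear domain."""
    return math.exp(l)

def ratiometric(x, y):
    """x / y computed as a subtraction in the log domain."""
    return from_log(to_log(x) - to_log(y))

def power_law(x, k):
    """x**k computed as a scaling in the log domain."""
    return from_log(k * to_log(x))

print(ratiometric(8.0, 2.0))  # ratio 8/2
print(power_law(3.0, 2.0))    # 3 squared
```

    In the analog-circuit setting the same trick lets a small number of components implement multiplication, division, and powers over a wide dynamic range, since the log map compresses several orders of magnitude into a modest signal span.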

  20. Turbulent channel flow under moderate polymer drag reduction

    NASA Astrophysics Data System (ADS)

    Elsnab, John; Monty, Jason; White, Christopher; Koochesfahani, Manoochehr; Klewicki, Joseph

    2017-11-01

    Streamwise velocity profiles and their wall-normal derivatives are used to investigate the properties of turbulent channel flow under moderate polymer drag reduction (DR) conditions of 6-27%. Velocity data were obtained over a friction Reynolds number range of 650-1800 using the single-velocity-component version of molecular tagging velocimetry (MTV). This adaptation of the MTV technique captures instantaneous profiles at high spatial resolution (>800 data points per profile), thus generating well-resolved derivative information. The mean velocity profiles indicate that the extent of the logarithmic region diminishes with increasing polymer concentration, while the logarithmic profile slope increases for drag reductions greater than about 20%. The measurements allow reconstruction of the mean momentum balance for channel flow, which provides additional insights regarding the physics described by previous numerical simulation analyses that examined the mean dynamical structure of polymer-laden channel flow at low Re. The present findings indicate that the polymer modifies the onset of the inertial domain, and that the extent of this domain shrinks with increasing DR. Once on the inertial domain, self-similar behaviors occur, but modified (sometimes subtly) by the altered distribution of characteristic y-scaling behavior of the Reynolds stress motions.

  1. Electronic transport in two-dimensional high dielectric constant nanosystems

    DOE PAGES

    Ortuño, M.; Somoza, A. M.; Vinokur, V. M.; ...

    2015-04-10

    There has been remarkable recent progress in engineering high-dielectric constant two dimensional (2D) materials, which are being actively pursued for applications in nanoelectronics in capacitor and memory devices, energy storage, and high-frequency modulation in communication devices. Yet many of the unique properties of these systems are poorly understood and remain unexplored. Here we report a numerical study of hopping conductivity of the lateral network of capacitors, which models two-dimensional insulators, and demonstrate that 2D long-range Coulomb interactions lead to peculiar size effects. We find that the characteristic energy governing electronic transport scales logarithmically with either system size or electrostatic screening length depending on which one is shorter. Our results are relevant well beyond their immediate context, explaining, for example, recent experimental observations of logarithmic size dependence of electric conductivity of thin superconducting films in the critical vicinity of superconductor-insulator transition where a giant dielectric constant develops. Our findings mark a radical departure from the orthodox view of conductivity in 2D systems as a local characteristic of materials and establish its macroscopic global character as a generic property of high-dielectric constant 2D nanomaterials.

  2. Nonlinear geometric scaling of coercivity in a three-dimensional nanoscale analog of spin ice

    NASA Astrophysics Data System (ADS)

    Shishkin, I. S.; Mistonov, A. A.; Dubitskiy, I. S.; Grigoryeva, N. A.; Menzel, D.; Grigoriev, S. V.

    2016-08-01

    Magnetization hysteresis loops of a three-dimensional nanoscale analog of spin ice based on the nickel inverse opal-like structure (IOLS) have been studied at room temperature. The samples are produced by filling nickel into the voids of artificial opal-like films. The spin ice behavior is induced by tetrahedral elements within the IOLS, which have the same arrangement of magnetic moments as a spin ice. The thickness of the films varies from a two-dimensional, i.e., single-layered, antidot array to a three-dimensional, i.e., multilayered, structure. The coercive force, the saturation field, and the irreversibility field have been measured as functions of the thickness of the IOLS for in-plane and out-of-plane applied fields. The irreversibility and saturation fields change abruptly from the antidot array to the three-dimensional IOLS and remain constant upon further increase of the number of layers n. The coercive force Hc appears to increase logarithmically with increasing n as Hc = Hc0 + α ln(n + 1). The logarithmic law implies an avalanche-like remagnetization of the anisotropic structural elements connecting tetrahedral and cubic nodes in the IOLS. We conclude that the "ice rule" is the basis of the mechanism regulating this process.
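
    As an illustration of the reported law Hc = Hc0 + α ln(n + 1), the sketch below fits synthetic coercivity data by ordinary least squares against ln(n + 1). The numerical values of Hc0, α, and the layer counts are hypothetical, chosen only to demonstrate the fit; they are not taken from the paper.

```python
import math

# Synthetic coercive-force data generated from the reported law
# Hc(n) = Hc0 + alpha * ln(n + 1); values are illustrative only.
Hc0_true, alpha_true = 50.0, 12.0           # hypothetical units
layers = [1, 2, 4, 8, 16, 32]               # number of layers n
Hc = [Hc0_true + alpha_true * math.log(n + 1) for n in layers]

# Ordinary least squares of Hc against the regressor t = ln(n + 1)
t = [math.log(n + 1) for n in layers]
N = len(t)
t_mean = sum(t) / N
h_mean = sum(Hc) / N
alpha_fit = sum((ti - t_mean) * (hi - h_mean) for ti, hi in zip(t, Hc)) \
            / sum((ti - t_mean) ** 2 for ti in t)
Hc0_fit = h_mean - alpha_fit * t_mean

print(Hc0_fit, alpha_fit)
```

    With noiseless input the regression recovers the generating parameters exactly (up to floating-point error); with real hysteresis data the same linearization in ln(n + 1) would give the best-fit Hc0 and α.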

  3. Electronic transport in two-dimensional high dielectric constant nanosystems.

    PubMed

    Ortuño, M; Somoza, A M; Vinokur, V M; Baturina, T I

    2015-04-10

    There has been remarkable recent progress in engineering high-dielectric constant two dimensional (2D) materials, which are being actively pursued for applications in nanoelectronics in capacitor and memory devices, energy storage, and high-frequency modulation in communication devices. Yet many of the unique properties of these systems are poorly understood and remain unexplored. Here we report a numerical study of hopping conductivity of the lateral network of capacitors, which models two-dimensional insulators, and demonstrate that 2D long-range Coulomb interactions lead to peculiar size effects. We find that the characteristic energy governing electronic transport scales logarithmically with either system size or electrostatic screening length depending on which one is shorter. Our results are relevant well beyond their immediate context, explaining, for example, recent experimental observations of logarithmic size dependence of electric conductivity of thin superconducting films in the critical vicinity of superconductor-insulator transition where a giant dielectric constant develops. Our findings mark a radical departure from the orthodox view of conductivity in 2D systems as a local characteristic of materials and establish its macroscopic global character as a generic property of high-dielectric constant 2D nanomaterials.

  4. A factorization approach to next-to-leading-power threshold logarithms

    NASA Astrophysics Data System (ADS)

    Bonocore, D.; Laenen, E.; Magnea, L.; Melville, S.; Vernazza, L.; White, C. D.

    2015-06-01

    Threshold logarithms become dominant in partonic cross sections when the selected final state forces gluon radiation to be soft or collinear. Such radiation factorizes at the level of scattering amplitudes, and this leads to the resummation of threshold logarithms which appear at leading power in the threshold variable. In this paper, we consider the extension of this factorization to include effects suppressed by a single power of the threshold variable. Building upon the Low-Burnett-Kroll-Del Duca (LBKD) theorem, we propose a decomposition of radiative amplitudes into universal building blocks, which contain all effects ultimately responsible for next-to-leading-power (NLP) threshold logarithms in hadronic cross sections for electroweak annihilation processes. In particular, we provide an NLO evaluation of the radiative jet function, responsible for the interference of next-to-soft and collinear effects in these cross sections. As a test, using our expression for the amplitude, we reproduce all abelian-like NLP threshold logarithms in the NNLO Drell-Yan cross section, including the interplay of real and virtual emissions. Our results are a significant step towards developing a generally applicable resummation formalism for NLP threshold effects, and illustrate the breakdown of next-to-soft theorems for gauge theory amplitudes at loop level.

  5. Static and sliding contact of rough surfaces: Effect of asperity-scale properties and long-range elastic interactions

    NASA Astrophysics Data System (ADS)

    Hulikal, Srivatsan; Lapusta, Nadia; Bhattacharya, Kaushik

    2018-07-01

    Friction in static and sliding contact of rough surfaces is important in numerous physical phenomena. We seek to understand macroscopically observed static and sliding contact behavior as the collective response of a large number of microscopic asperities. To that end, we build on Hulikal et al. (2015) and develop an efficient numerical framework that can be used to investigate how the macroscopic response of multiple frictional contacts depends on long-range elastic interactions, different constitutive assumptions about the deforming contacts and their local shear resistance, and surface roughness. We approximate the contact between two rough surfaces as that between a regular array of discrete deformable elements attached to an elastic block and a rigid rough surface. The deformable elements are viscoelastic or elasto/viscoplastic with a range of relaxation times, and the elastic interaction between contacts is long-range. We find that the model reproduces the main macroscopic features of the evolution of contact and friction for a range of constitutive models of the elements, suggesting that the macroscopic frictional response is robust with respect to the microscopic behavior. Viscoelasticity/viscoplasticity contributes to the increase of friction with contact time and leads to a subtle history dependence. Interestingly, long-range elastic interactions only change the results quantitatively compared to the mean-field response. The developed numerical framework can be used to study how specific observed macroscopic behavior depends on the microscale assumptions. For example, we find that a sustained increase in the static friction coefficient during long hold times suggests viscoelastic response of the underlying material with multiple relaxation time scales. We also find that the experimentally observed proportionality of the direct effect in velocity-jump experiments to the logarithm of the velocity jump points to a complex material-dependent shear resistance at the microscale.

  6. Seismic dynamics in advance and after the recent strong earthquakes in Italy and New Zealand

    NASA Astrophysics Data System (ADS)

    Nekrasova, A.; Kossobokov, V. G.

    2017-12-01

    We consider seismic events as a sequence of avalanches in the self-organized system of blocks and faults of the Earth's lithosphere and characterize earthquake series with the distribution of the control parameter η = τ × 10^{B(5−M)} × L^C of the Unified Scaling Law for Earthquakes, USLE (where τ is the inter-event time, M the magnitude, B is analogous to the Gutenberg-Richter b-value, L the linear size of the seismic locus, and C its fractal dimension). A systematic analysis of earthquake series in Central Italy and New Zealand, 1993-2017, suggests the existence, in the long term, of different rather steady levels of seismic activity characterized by near-constant values of η, which, in the mid term, intermittently switch at times of transitions associated with strong catastrophic events. At such a transition, seismic activity may, in the short term, follow different scenarios with inter-event time scaling of different kinds, including constant, logarithmic, power-law, exponential rise/decay, or a mixture of those. The results do not support the presence of universality in seismic energy release. The observed variability of seismic activity in advance of and after strong (M6.0+) earthquakes in Italy and significant (M7.0+) earthquakes in New Zealand provides important constraints for geophysicists modelling realistic earthquake sequences and can be used to improve local seismic hazard assessments, including earthquake forecast/prediction methodologies. The transitions of the seismic regime in Central Italy and New Zealand that started in 2016 are still in progress and require special attention and geotechnical monitoring. It would be premature to draw definitive conclusions on the level of seismic hazard, which is evidently high at this particular moment in both regions. The study was supported by the Russian Science Foundation, Grant No. 16-17-00093.
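
    The USLE control parameter can be evaluated directly from its definition. The sketch below implements η = τ × 10^{B(5−M)} × L^C; the input values are illustrative (hypothetical), not taken from the study's catalogues.

```python
def usle_eta(tau, B, M, L, C):
    """Control parameter of the Unified Scaling Law for Earthquakes:
        eta = tau * 10**(B * (5 - M)) * L**C
    where tau is the inter-event time, B is analogous to the
    Gutenberg-Richter b-value, M is the event magnitude, L is the
    linear size of the seismic locus, and C its fractal dimension."""
    return tau * 10.0 ** (B * (5.0 - M)) * L ** C

# Illustrative values for demonstration only:
print(usle_eta(tau=10.0, B=1.0, M=6.0, L=50.0, C=1.2))
```

    Note the structure of the formula: at fixed τ, L, and C, larger magnitudes reduce η (the 10^{B(5−M)} factor), so switches between steady levels of η reflect genuine changes of regime rather than single large events alone.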

  7. Drift and pseudomomentum in bounded turbulent shear flows

    NASA Astrophysics Data System (ADS)

    Phillips, W. R. C.

    2015-10-01

    This paper is concerned with the evaluation of two Lagrangian measures which arise in oscillatory or fluctuating shear flows when the fluctuating field is rotational and the spectrum of wave numbers which comprise it is continuous. The measures are the drift and pseudomomentum. Phillips [J. Fluid Mech. 430, 209 (2001), 10.1017/S0022112000002858] has shown that the measures are, in such instances, succinctly expressed in terms of Lagrangian integrals of Eulerian space-time correlations. But they are difficult to interpret, and the present work begins by expressing them in a more insightful form. This is achieved by assuming the space-time correlations are separable as magnitude, determined by one-point velocity correlations, and spatial diminution. The measures then parse into terms comprised of the mean Eulerian velocity, one-point velocity correlations, and a family of integrals of spatial diminution, which in turn define a series of Lagrangian time and velocity scales. The pseudomomentum is seen to be strictly negative and related to the turbulence kinetic energy, while the drift is mixed and strongly influenced by the Reynolds stress. Both are calculated for turbulent channel flow for a range of Reynolds numbers and appear, as the Reynolds number increases, to approach a terminal form. At all Reynolds numbers studied, the pseudomomentum has a sole peak located in wall units in the low teens, while at the highest Reynolds number studied, Reτ=5200 , the drift is negative in the vicinity of that peak, positive elsewhere, and largest near the rigid boundary. In contrast, the time and velocity scales grow almost logarithmically over much of the layer. Finally, the drift and pseudomomentum are discussed in the context of coherent wall layer structures with which they are intricately linked.

  8. An algorithm for the numerical evaluation of the associated Legendre functions that runs in time independent of degree and order

    NASA Astrophysics Data System (ADS)

    Bremer, James

    2018-05-01

    We describe a method for the numerical evaluation of normalized versions of the associated Legendre functions P_ν^{-μ} and Q_ν^{-μ} of degrees 0 ≤ ν ≤ 1,000,000 and orders -ν ≤ μ ≤ ν for arguments in the interval (-1, 1). Our algorithm, which runs in time independent of ν and μ, is based on the fact that while the associated Legendre functions themselves are extremely expensive to represent via polynomial expansions, the logarithms of certain solutions of the differential equation defining them are not. We exploit this by numerically precomputing the logarithms of carefully chosen solutions of the associated Legendre differential equation and representing them via piecewise trivariate Chebyshev expansions. These precomputed expansions, which allow for the rapid evaluation of the associated Legendre functions over a large swath of the parameter domain mentioned above, are supplemented with asymptotic and series expansions in order to cover it entirely. The results of numerical experiments demonstrating the efficacy of our approach are presented, and our code for evaluating the associated Legendre functions is publicly available.

  9. Sediment erosion by Görtler vortices: the scour-hole problem

    NASA Astrophysics Data System (ADS)

    Hopfinger, E. J.; Kurniawan, A.; Graf, W. H.; Lemmin, U.

    2004-12-01

    Experimental results on sediment erosion (scour) by a plane turbulent wall jet issuing from a sluice gate are presented which show clearly, it seems for the first time, that the turbulent wall layer is destabilized by the concave curvature of the water/sediment interface. The streamwise Görtler vortices which emerge create sediment streaks or longitudinal sediment ridges. The analysis of the results in terms of the Görtler instability of the wall layer indicates that the strength of these curvature-excited streamwise vortices is such that the sediment transport is primarily due to turbulence created by these vortices. Their contribution to the wall shear stress is taken to be of the same form as the normal turbulent wall shear stress. For this reason, the model developed by Hogg et al. (J. Fluid Mech. Vol. 338, 1997, p. 317) remains valid; only the numerical coefficients are affected. The logarithmic dependence of the time evolution of the scour-hole depth predicted by this model is shown to be in good agreement with experiments. New scaling laws for the quasi-steady-state depth and the associated time, inspired by the Hogg et al. (1997) model, are proposed. Furthermore, it is emphasized that at least two scouring regimes must be distinguished: a short-time regime after which a quasi-steady state is reached, followed by a long-time regime leading to an asymptotic state of virtually no sediment transport.

  10. The ABC (in any D) of logarithmic CFT

    NASA Astrophysics Data System (ADS)

    Hogervorst, Matthijs; Paulos, Miguel; Vichi, Alessandro

    2017-10-01

    Logarithmic conformal field theories have a vast range of applications, from critical percolation to systems with quenched disorder. In this paper we thoroughly examine the structure of these theories based on their symmetry properties. Our analysis is model-independent and holds for any spacetime dimension. Our results include a determination of the general form of correlation functions and conformal block decompositions, clearing the path for future bootstrap applications. Several examples are discussed in detail, including logarithmic generalized free fields, holographic models, self-avoiding random walks and critical percolation.

  11. Evolution of opinions on social networks in the presence of competing committed groups.

    PubMed

    Xie, Jierui; Emenheiser, Jeffrey; Kirby, Matthew; Sreenivasan, Sameet; Szymanski, Boleslaw K; Korniss, Gyorgy

    2012-01-01

    Public opinion is often affected by the presence of committed groups of individuals dedicated to competing points of view. Using a model of pairwise social influence, we study how the presence of such groups within social networks affects the outcome and the speed of evolution of the overall opinion on the network. Earlier work indicated that a single committed group within a dense social network can cause the entire network to quickly adopt the group's opinion (in times scaling logarithmically with the network size), so long as the committed group constitutes more than about 10% of the population (with the findings being qualitatively similar for sparse networks as well). Here we study the more general case of opinion evolution when two groups committed to distinct, competing opinions A and B, and constituting fractions pA and pB of the total population respectively, are present in the network. We show for stylized social networks (including Erdös-Rényi random graphs and Barabási-Albert scale-free networks) that the phase diagram of this system in parameter space (pA,pB) consists of two regions, one where two stable steady-states coexist, and the remaining where only a single stable steady-state exists. These two regions are separated by two fold-bifurcation (spinodal) lines which meet tangentially and terminate at a cusp (critical point). We provide further insights into the phase diagram and into the nature of the underlying phase transitions by investigating the model on infinite (mean-field limit), finite complete graphs and finite sparse networks. For the latter case, we also derive the scaling exponent associated with the exponential growth of switching times as a function of the distance from the critical point.
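
    The underlying dynamics is the two-word binary agreement (naming-game) model. Below is a minimal sketch with a single committed group on a complete graph; the network choice, system size, committed fraction, and step count are illustrative assumptions, not the paper's full setup (which also includes a second committed group and sparse networks).

```python
import random

def simulate(n=200, p_committed=0.15, steps=100_000, seed=1):
    """Binary agreement model on a complete graph.

    Each agent holds opinion 'A', 'B', or both ('AB'). At every step a
    random speaker voices one of its opinions to a random listener:
    if the listener already holds that opinion, both collapse to it;
    otherwise the listener adds it (state 'AB'). A fraction p_committed
    of agents is committed to 'A' and never changes state."""
    random.seed(seed)
    n_c = int(p_committed * n)
    committed = set(range(n_c))
    state = ['A'] * n_c + ['B'] * (n - n_c)

    for _ in range(steps):
        s, l = random.sample(range(n), 2)        # distinct speaker, listener
        word = random.choice(state[s])           # 'A' or 'B'
        if word in state[l]:
            # successful communication: both collapse to the spoken word
            if s not in committed:
                state[s] = word
            if l not in committed:
                state[l] = word
        elif l not in committed:
            state[l] = 'AB'                      # listener adds the new word

    return state.count('A'), state.count('AB'), state.count('B')

a, ab, b = simulate()
print(a, ab, b)
```

    With a committed fraction above the roughly 10% tipping point, the uncommitted majority is driven toward consensus on A; below it, the B majority persists for times growing exponentially with system size.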

  12. Coalescence of repelling colloidal droplets: a route to monodisperse populations.

    PubMed

    Roger, Kevin; Botet, Robert; Cabane, Bernard

    2013-05-14

    Populations of droplets or particles dispersed in a liquid may evolve through Brownian collisions, aggregation, and coalescence. We have found a set of conditions under which these populations evolve spontaneously toward a narrow size distribution. The experimental system consists of poly(methyl methacrylate) (PMMA) nanodroplets dispersed in a solvent (acetone) + nonsolvent (water) mixture. These droplets carry electrical charges, located on the ionic end groups of the macromolecules. We used time-resolved small angle X-ray scattering to determine their size distribution. We find that the droplets grow through coalescence events: the average radius ⟨R⟩ increases logarithmically with elapsed time while the relative width σR/⟨R⟩ of the distribution decreases as the inverse square root of ⟨R⟩. We interpret this evolution as resulting from coalescence events that are hindered by ionic repulsions between droplets. We generalize this evolution through a simulation of the Smoluchowski kinetic equation, with a kernel that takes into account the interactions between droplets. In the case of vanishing or attractive interactions, all droplet encounters lead to coalescence. The corresponding kernel leads to the well-known "self-preserving" particle distribution of the coalescence process, where σR/⟨R⟩ increases to a plateau value. However, for droplets that interact through long-range ionic repulsions, "large + small" droplet encounters are more successful at coalescence than "large + large" encounters. We show that the corresponding kernel leads to a particular scaling of the droplet-size distribution, known as the "second-scaling law" in the theory of critical phenomena, where σR/⟨R⟩ decreases as 1/√⟨R⟩ and becomes independent of the initial distribution. We argue that this scaling explains the narrow size distributions of colloidal dispersions that have been synthesized through aggregation processes.

  13. Evolution of Opinions on Social Networks in the Presence of Competing Committed Groups

    PubMed Central

    Xie, Jierui; Emenheiser, Jeffrey; Kirby, Matthew; Sreenivasan, Sameet; Szymanski, Boleslaw K.; Korniss, Gyorgy

    2012-01-01

    Public opinion is often affected by the presence of committed groups of individuals dedicated to competing points of view. Using a model of pairwise social influence, we study how the presence of such groups within social networks affects the outcome and the speed of evolution of the overall opinion on the network. Earlier work indicated that a single committed group within a dense social network can cause the entire network to quickly adopt the group's opinion (in times scaling logarithmically with the network size), so long as the committed group constitutes more than about 10% of the population (with the findings being qualitatively similar for sparse networks as well). Here we study the more general case of opinion evolution when two groups committed to distinct, competing opinions A and B, and constituting fractions pA and pB of the total population respectively, are present in the network. We show for stylized social networks (including Erdös-Rényi random graphs and Barabási-Albert scale-free networks) that the phase diagram of this system in parameter space (pA,pB) consists of two regions, one where two stable steady-states coexist, and the remaining where only a single stable steady-state exists. These two regions are separated by two fold-bifurcation (spinodal) lines which meet tangentially and terminate at a cusp (critical point). We provide further insights into the phase diagram and into the nature of the underlying phase transitions by investigating the model on infinite (mean-field limit), finite complete graphs and finite sparse networks. For the latter case, we also derive the scaling exponent associated with the exponential growth of switching times as a function of the distance from the critical point. PMID:22448238

  14. Rigorous asymptotics of traveling-wave solutions to the thin-film equation and Tanner’s law

    NASA Astrophysics Data System (ADS)

    Giacomelli, Lorenzo; Gnann, Manuel V.; Otto, Felix

    2016-09-01

    We are interested in traveling-wave solutions to the thin-film equation with zero microscopic contact angle (in the sense of complete wetting without precursor) and inhomogeneous mobility h^3 + λ^{3-n} h^n, where h, λ, and n ∈ (3/2, 7/3) denote film height, slip parameter, and mobility exponent, respectively. Existence and uniqueness of these solutions have been established by Maria Chiricotto and the first of the authors in previous work under the assumption of sub-quadratic growth as h → ∞. In the present work we investigate the asymptotics of solutions as h ↘ 0 (the contact-line region) and as h → ∞. As h ↘ 0 we observe, to leading order, the same asymptotics as for traveling waves or source-type self-similar solutions to the thin-film equation with homogeneous mobility h^n, and we additionally characterize corrections to this law. Moreover, as h → ∞ we identify, to leading order, the logarithmic Tanner profile, i.e. the solution to the corresponding unperturbed problem with λ = 0 that determines the apparent macroscopic contact angle. Besides higher-order terms, corrections turn out to affect the asymptotic law as h → ∞ only by setting the length scale in the logarithmic Tanner profile. Moreover, we prove that both the correction and the length scale depend smoothly on n. Hence, in line with the common philosophy, the precise modeling of liquid-solid interactions (within our model, the mobility exponent) does not affect the qualitative macroscopic properties of the film.

  15. Role of large-scale motions to turbulent inertia in turbulent pipe and channel flows

    NASA Astrophysics Data System (ADS)

    Hwang, Jinyul; Lee, Jin; Sung, Hyung Jin

    2015-11-01

    The role of large-scale motions (LSMs) in the turbulent inertia (TI) term (the wall-normal gradient of the Reynolds shear stress) is examined in turbulent pipe and channel flows at Reτ ≈ 930. The TI term in the mean momentum equation represents the net force of inertia exerted by the Reynolds shear stress. Although the turbulence statistics characterizing internal turbulent flows are similar close to the wall, the TI term differs in the logarithmic region due to the different characteristics of the LSMs (λx > 3δ). The contribution of the LSMs to the TI term and the Reynolds shear stress in the channel flow is larger than that in the pipe flow. The LSMs in the logarithmic region act like a mean momentum source (where TI > 0) even though the TI profile is negative above the peak of the Reynolds shear stress. The momentum sources carried by the LSMs are related to low-speed regions elongated in the downstream direction, revealing that momentum source-like motions occur upstream of the low-speed structure. The streamwise extent of this structure is relatively long in the channel flow, whereas the high-speed regions on both sides of the low-speed region in the channel flow are shorter and weaker than those in the pipe flow. This work was supported by the Creative Research Initiatives (No. 2015-001828) program of the National Research Foundation of Korea (MSIP) and partially supported by KISTI under the Strategic Supercomputing Support Program.

  16. Evaluation of Vertical Lacunarity Profiles in Forested Areas Using Airborne Laser Scanning Point Clouds

    NASA Astrophysics Data System (ADS)

    Székely, B.; Kania, A.; Standovár, T.; Heilmeier, H.

    2016-06-01

    The horizontal variation and vertical layering of the vegetation are important properties of the canopy structure determining the habitat; the three-dimensional (3D) distribution of objects (shrub layers, understory vegetation, etc.) is related to environmental factors (e.g., illumination, visibility). It has been shown that gaps in forests and mosaic-like structures are essential to biodiversity; various methods have been introduced to quantify this property. As the distribution of gaps in the vegetation is a multi-scale phenomenon, scale-independent methods are preferred in order to capture it in its entirety; one of these is the calculation of lacunarity. We used Airborne Laser Scanning point clouds measured over a forest plantation situated in a former floodplain. The flat topographic relief ensured that tree growth is independent of topographic effects. The tree pattern in the plantation crops provided various quasi-regular and irregular patterns, as well as various ages of the stands. The point clouds were voxelized, and layers of voxels were considered as images for two-dimensional input. The images computed for a certain vicinity of each reference point were used for the calculation of lacunarity curves, yielding a stack of lacunarity curves for each reference point. These sets of curves have been compared to reveal spatial changes of this property. As the dynamic range of the lacunarity values is very large, the natural logarithms of the values were considered. The logarithms of the lacunarity functions show canopy-related variations; we analysed these variations along transects. The spatial variation can be related to forest properties and ecology-specific aspects.

  17. Generation of Magnetohydrodynamic Waves in Low Solar Atmospheric Flux Tubes by Photospheric Motions

    NASA Astrophysics Data System (ADS)

    Mumford, S. J.; Fedun, V.; Erdélyi, R.

    2015-01-01

    Recent ground- and space-based observations reveal the presence of small-scale motions between convection cells in the solar photosphere. In these regions, small-scale magnetic flux tubes are generated via the interaction of granulation motion and the background magnetic field. This paper studies the effects of these motions on magnetohydrodynamic (MHD) wave excitation from broadband photospheric drivers. Numerical experiments of linear MHD wave propagation in a magnetic flux tube embedded in a realistic gravitationally stratified solar atmosphere between the photosphere and the low chromosphere (above β = 1) are performed. Horizontal and vertical velocity field drivers mimic granular buffeting and solar global oscillations. A uniform torsional driver as well as Archimedean and logarithmic spiral drivers mimic observed torsional motions in the solar photosphere. The results are analyzed using a novel method for extracting the parallel, perpendicular, and azimuthal components of the perturbations, which caters to both the linear and non-linear cases. Employing this method yields the identification of the wave modes excited in the numerical simulations and enables a comparison of excited modes via velocity perturbations and wave energy flux. The wave energy flux distribution is calculated to enable the quantification of the relative strengths of excited modes. The torsional drivers primarily excite Alfvén modes (≈60% of the total flux) with small contributions from the slow kink mode, and, for the logarithmic spiral driver, small amounts of slow sausage mode. The horizontal and vertical drivers primarily excite slow kink or fast sausage modes, respectively, with small variations dependent upon flux surface radius.

  18. Operator algebra as an application of logarithmic representation of infinitesimal generators

    NASA Astrophysics Data System (ADS)

    Iwata, Yoritaka

    2018-02-01

    An operator algebra is introduced, based on the framework of the logarithmic representation of infinitesimal generators. In conclusion, a set of generally unbounded infinitesimal generators is characterized as a module over a Banach algebra.

  19. Rate and State Friction Relation for Nanoscale Contacts: Thermally Activated Prandtl-Tomlinson Model with Chemical Aging

    NASA Astrophysics Data System (ADS)

    Tian, Kaiwen; Goldsby, David L.; Carpick, Robert W.

    2018-05-01

    Rate and state friction (RSF) laws are widely used empirical relationships that describe macroscale to microscale frictional behavior. They entail a linear combination of the direct effect (the increase of friction with sliding velocity due to the reduced influence of thermal excitations) and the evolution effect (the change in friction with changes in contact "state," such as the real contact area or the degree of interfacial chemical bonds). Recent atomic force microscope (AFM) experiments and simulations found that nanoscale single-asperity amorphous silica-silica contacts exhibit logarithmic aging (increasing friction with time) over several decades of contact time, due to the formation of interfacial chemical bonds. Here we establish a physically based RSF relation for such contacts by combining the thermally activated Prandtl-Tomlinson (PTT) model with an evolution effect based on the physics of chemical aging. This thermally activated Prandtl-Tomlinson model with chemical aging (PTTCA), like the PTT model, uses the loading point velocity for describing the direct effect, not the tip velocity (as in conventional RSF laws). Also, in the PTTCA model, the combination of the evolution and direct effects may be nonlinear. We present AFM data consistent with the PTTCA model whereby in aging tests, for a given hold time, static friction increases with the logarithm of the loading point velocity. Kinetic friction also increases with the logarithm of the loading point velocity at sufficiently high velocities, but at a different rate of increase. The discrepancy between the rates of increase of static and kinetic friction with velocity arises from the fact that appreciable aging during static contact changes the energy landscape. Our approach extends the PTT model, originally used for crystalline substrates, to amorphous materials. 
It also establishes how conventional RSF laws can be modified for nanoscale single-asperity contacts to provide a physically based friction relation for nanoscale contacts that exhibit chemical bond-induced aging, as well as other aging mechanisms with similar physical characteristics.

  20. Perceptual scale expansion: an efficient angular coding strategy for locomotor space.

    PubMed

    Durgin, Frank H; Li, Zhi

    2011-08-01

    Whereas most sensory information is coded on a logarithmic scale, linear expansion of a limited range may provide a more efficient coding for the angular variables important to precise motor control. In four experiments, we show that the perceived declination of gaze, like the perceived orientation of surfaces, is coded on a distorted scale. The distortion seems to arise from a nearly linear expansion of the angular range close to horizontal/straight ahead and is evident in explicit verbal and nonverbal measures (Experiments 1 and 2), as well as in implicit measures of perceived gaze direction (Experiment 4). The theory is advanced that this scale expansion (by a factor of about 1.5) may serve a functional goal of coding efficiency for angular perceptual variables. The scale expansion of perceived gaze declination is accompanied by a corresponding expansion of perceived optical slants in the same range (Experiments 3 and 4). These dual distortions can account for the explicit misperception of distance typically obtained by direct report and exocentric matching, while allowing for accurate spatial action to be understood as the result of calibration.

  1. Perceptual Scale Expansion: An Efficient Angular Coding Strategy for Locomotor Space

    PubMed Central

    Durgin, Frank H.; Li, Zhi

    2011-01-01

    Whereas most sensory information is coded in a logarithmic scale, linear expansion of a limited range may provide a more efficient coding for angular variables important to precise motor control. In four experiments it is shown that the perceived declination of gaze, like the perceived orientation of surfaces, is coded on a distorted scale. The distortion seems to arise from a nearly linear expansion of the angular range close to horizontal/straight ahead and is evident in explicit verbal and non-verbal measures (Experiments 1 and 2) and in implicit measures of perceived gaze direction (Experiment 4). The theory is advanced that this scale expansion (by a factor of about 1.5) may serve a functional goal of coding efficiency for angular perceptual variables. The scale expansion of perceived gaze declination is accompanied by a corresponding expansion of perceived optical slants in the same range (Experiments 3 and 4). These dual distortions can account for the explicit misperception of distance typically obtained by direct report and exocentric matching while allowing accurate spatial action to be understood as the result of calibration. PMID:21594732

  2. Large-scale behaviour of local and entanglement entropy of the free Fermi gas at any temperature

    NASA Astrophysics Data System (ADS)

    Leschke, Hajo; Sobolev, Alexander V.; Spitzer, Wolfgang

    2016-07-01

    The leading asymptotic large-scale behaviour of the spatially bipartite entanglement entropy (EE) of the free Fermi gas infinitely extended in multidimensional Euclidean space at zero absolute temperature, T = 0, is by now well understood. Here, we present and discuss the first rigorous results for the corresponding EE of thermal equilibrium states at T > 0. The leading large-scale term of this thermal EE turns out to be twice the first-order finite-size correction to the infinite-volume thermal entropy (density). Not surprisingly, this correction is just the thermal entropy on the interface of the bipartition. However, it is given by a rather complicated integral derived from a semiclassical trace formula for a certain operator on the underlying one-particle Hilbert space. But in the zero-temperature limit T ↓ 0, the leading large-scale term of the thermal EE simplifies considerably and displays a ln(1/T) singularity, which one may identify with the known logarithmic enhancement at T = 0 of the so-called area-law scaling.

  3. Single-scale renormalisation group improvement of multi-scale effective potentials

    NASA Astrophysics Data System (ADS)

    Chataignier, Leonardo; Prokopec, Tomislav; Schmidt, Michael G.; Świeżewska, Bogumiła

    2018-03-01

    We present a new method for renormalisation group improvement of the effective potential of a quantum field theory with an arbitrary number of scalar fields. The method amounts to solving the renormalisation group equation for the effective potential with the boundary conditions chosen on the hypersurface where quantum corrections vanish. This hypersurface is defined through a suitable choice of a field-dependent value for the renormalisation scale. The method can be applied to any order in perturbation theory and it is a generalisation of the standard procedure valid for the one-field case. In our method, however, the choice of the renormalisation scale does not eliminate individual logarithmic terms but rather the entire loop corrections to the effective potential. It allows us to evaluate the improved effective potential for arbitrary values of the scalar fields using the tree-level potential with running coupling constants as long as they remain perturbative. This opens the possibility of studying various applications which require an analysis of multi-field effective potentials across different energy scales. In particular, the issue of stability of the scalar potential can be easily studied beyond tree level.

  4. Size, shape, and diffusivity of a single Debye-Hückel polyelectrolyte chain in solution.

    PubMed

    Soysa, W Chamath; Dünweg, B; Prakash, J Ravi

    2015-08-14

    Brownian dynamics simulations of a coarse-grained bead-spring chain model, with Debye-Hückel electrostatic interactions between the beads, are used to determine the root-mean-square end-to-end vector, the radius of gyration, and various shape functions (defined in terms of eigenvalues of the radius of gyration tensor) of a weakly charged polyelectrolyte chain in solution, in the limit of low polymer concentration. The long-time diffusivity is calculated from the mean square displacement of the centre of mass of the chain, with hydrodynamic interactions taken into account through the incorporation of the Rotne-Prager-Yamakawa tensor. Simulation results are interpreted in the light of the Odijk, Skolnick, Fixman, Khokhlov, and Khachaturian blob scaling theory (Everaers et al., Eur. Phys. J. E 8, 3 (2002)), which predicts that all solution properties are determined by just two scaling variables: the number of electrostatic blobs, X, and the reduced Debye screening length, Y. We identify three broad regimes, the ideal chain regime at small values of Y, the blob-pole regime at large values of Y, and the crossover regime at intermediate values of Y, within which the mean size, shape, and diffusivity exhibit characteristic behaviours. In particular, when simulation results are recast in terms of blob scaling variables, universal behaviour independent of the choice of bead-spring chain parameters, and the number of blobs X, is observed in the ideal chain regime and in much of the crossover regime, while the existence of logarithmic corrections to scaling in the blob-pole regime leads to non-universal behaviour.

  5. Entanglement entropy in a boundary impurity model.

    PubMed

    Levine, G C

    2004-12-31

    Boundary impurities are known to dramatically alter certain bulk properties of (1+1)-dimensional strongly correlated systems. The entanglement entropy of a zero temperature Luttinger liquid bisected by a single impurity is computed using a novel finite size scaling or bosonization scheme. For a Luttinger liquid of length 2L and UV cutoff ε, the boundary impurity correction δS_imp to the logarithmic entanglement entropy (S_ent ∝ ln(L/ε)) scales as δS_imp ≈ y_r ln(L/ε), where y_r is the renormalized backscattering coupling constant. In this way, the entanglement entropy within a region is related to scattering through the region's boundary. In the repulsive case (g < 1), δS_imp diverges (negatively), suggesting that the entropy vanishes. Our results are consistent with the recent conjecture that entanglement entropy decreases irreversibly along renormalization group flow.

  6. Theoretical scaling law of coronal magnetic field and electron power-law index in solar microwave burst sources

    NASA Astrophysics Data System (ADS)

    Huang, Y.; Song, Q. W.; Tan, B. L.

    2018-04-01

    A theoretical scaling law is proposed, for the first time, for the coronal magnetic field strength B and the electron power-law index δ as functions of frequency and coronal height in solar microwave burst sources. Based on the non-thermal gyrosynchrotron radiation model (Ramaty in Astrophys. J. 158:753, 1969), B and δ are uniquely solved from the observable optically-thin spectral index and turnover (peak) frequency; the calculations are relatively insensitive to the other parameters (plasma density, temperature, view angle, low and high energy cutoffs, etc.), which are therefore taken at typical values. Both B and δ increase with increasing radio frequency and with decreasing coronal height above the photosphere, and are well fitted by square or cubic logarithmic functions.

  7. A Numerical Fit of Analytical to Simulated Density Profiles in Dark Matter Haloes

    NASA Astrophysics Data System (ADS)

    Caimmi, R.; Marmo, C.; Valentinuzzi, T.

    2005-06-01

    Analytical and geometrical properties of generalized power-law (GPL) density profiles are investigated in detail. In particular, a one-to-one correspondence is found between mathematical parameters (a scaling radius, r_0, a scaling density, rho_0, and three exponents, alpha, beta, gamma), and geometrical parameters (the coordinates of the intersection of the asymptotes, x_C, y_C, and three vertical intercepts, b, b_beta, b_gamma, related to the curve and the asymptotes, respectively): (r_0,rho_0,alpha,beta,gamma) <--> (x_C,y_C,b,b_beta,b_gamma). Then GPL density profiles are compared with simulated dark halo (SDH) density profiles, and nonlinear least-absolute-values and least-squares fits involving the above-mentioned five parameters (RFSM5 method) are prescribed. More specifically, the sum of absolute values or squares of absolute logarithmic residuals, R_i = log rho_SDH(r_i) - log rho_GPL(r_i), is evaluated on 10^5 points making a 5-dimensional hypergrid, through a few iterations. The size is progressively reduced around a fiducial minimum, and superpositions on nodes of earlier hypergrids are avoided. An application is made to a sample of 17 SDHs on the scale of clusters of galaxies, within a flat LambdaCDM cosmological model (Rasia et al. 2004). In dealing with the mean SDH density profile, a virial radius, r_vir, averaged over the whole sample, is assigned, which allows the calculation of the remaining parameters. Using a RFSM5 method provides a better fit with respect to other methods. The geometrical parameters, averaged over the whole sample of best fitting GPL density profiles, yield (alpha,beta,gamma) approx (0.6,3.1,1.0), to be compared with (alpha,beta,gamma)=(1,3,1), i.e. the NFW density profile (Navarro et al. 1995, 1996, 1997), (alpha,beta,gamma)=(1.5,3,1.5) (Moore et al. 1998, 1999), (alpha,beta,gamma)=(1,2.5,1) (Rasia et al. 
2004); and, in addition, gamma approx 1.5 (Hiotelis 2003), deduced from the application of a RFSM5 method, but using a different definition of scaled radius, or concentration; and gamma approx 1.2-1.3, deduced from more recent high-resolution simulations (Diemand et al. 2004, Reed et al. 2005). No evident correlation is found between SDH dynamical state (relaxed or merging) and the asymptotic inner slope of the fitting logarithmic density profile or (for SDHs of comparable virial mass) scaled radius. Mean values and standard deviations of some parameters are calculated; in particular, the decimal logarithm of the scaled radius, xi_vir, reads < log xi_vir > = 0.74 and sigma_log xi_vir = 0.15-0.17, consistent with previous results related to NFW density profiles. This provides additional support to the idea that NFW density profiles may be considered a convenient way to parametrize SDH density profiles, without implying that they necessarily produce the best possible fit (Bullock et al. 2001). A certain degree of degeneracy is found in fitting GPL to SDH density profiles. Whether it is intrinsic to the RFSM5 method or could be reduced by the next generation of high-resolution simulations remains an open question.
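The GPL profile discussed above is commonly written in the (alpha, beta, gamma) form below, with (1,3,1) recovering the NFW profile. A minimal sketch of the profile and the logarithmic residuals R_i (parameter order and helper names are ours, not the paper's):

```python
import math

def gpl_density(r, r0, rho0, alpha, beta, gamma):
    """Generalized power-law profile: rho ~ x^-gamma at small radii and
    rho ~ x^-beta at large radii, with alpha setting the turnover sharpness
    (x = r/r0); (alpha, beta, gamma) = (1, 3, 1) recovers NFW."""
    x = r / r0
    return rho0 / (x**gamma * (1.0 + x**alpha) ** ((beta - gamma) / alpha))

def log_residuals(radii, rho_sim, params):
    """R_i = log rho_SDH(r_i) - log rho_GPL(r_i), the quantity whose
    absolute values (or squares) the RFSM5 method sums and minimizes."""
    return [math.log10(rs) - math.log10(gpl_density(r, *params))
            for r, rs in zip(radii, rho_sim)]
```

The RFSM5 fit then amounts to minimizing sum(|R_i|) or sum(R_i**2) over the five parameters (r0, rho0, alpha, beta, gamma) on a shrinking hypergrid.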

  8. Flow characteristics and scaling past highly porous wall-mounted fences

    NASA Astrophysics Data System (ADS)

    Rodríguez-López, Eduardo; Bruce, Paul J. K.; Buxton, Oliver R. H.

    2017-07-01

    An extensive characterization of the flow past wall-mounted highly porous fences based on single- and multi-scale geometries has been performed using hot-wire anemometry in a low-speed wind tunnel. Whilst drag properties (estimated from the time-averaged momentum equation) seem to depend mostly on the grids' blockage ratio, bars of different size and orientation generate wakes with distinct turbulence properties. Far from the near-grid region, the flow is dominated by the presence of two well-differentiated layers: one close to the wall, dominated by the near-wall behaviour, and another corresponding to the grid's wake and the shear layer originating between it and the freestream. It is proposed that the effective thickness of the wall layer can be inferred from the wall-normal profile of root-mean-square streamwise velocity or, alternatively, from the wall-normal profile of streamwise velocity correlation. Using these definitions of wall-layer thickness enables us to collapse different trends of the turbulence behaviour inside this layer. In particular, the root-mean-square level of the wall shear stress fluctuations, the longitudinal integral length scale, and the spanwise turbulent structure are shown to display a satisfactory scaling with this thickness rather than with the whole thickness of the grid's wake. Moreover, it is shown that certain grids destroy the spanwise arrangement of large turbulence structures in the logarithmic region, which are then re-formed after a particular streamwise extent. It is finally shown that for fences subject to a boundary layer of thickness comparable to their height, the effective thickness of the wall layer scales with the incoming boundary layer thickness. Analogously, it is hypothesized that the growth rate of the internal layer is also partly dependent on the incoming boundary layer thickness.

  9. Surface roughness effects on turbulent Couette flow

    NASA Astrophysics Data System (ADS)

    Lee, Young Mo; Lee, Jae Hwa

    2017-11-01

    Direct numerical simulation of a turbulent Couette flow with two-dimensional (2-D) rod roughness is performed to examine the effects of the surface roughness. The Reynolds number based on the channel centerline laminar velocity (Uco) and channel half height (h) is Re = 7200. The 2-D rods are periodically arranged with a streamwise pitch of λ = 8k on the bottom wall, and the roughness height is k = 0.12h. It is shown that the wall-normal extent of the logarithmic layer is significantly shortened in the rough-wall turbulent Couette flow, compared to a turbulent Couette flow with a smooth wall. Although surface roughness increases the Reynolds stresses in the outer layer of a turbulent channel flow, owing to large-scale ejection motions produced by the 2-D rods, those of the rough-wall Couette flow are decreased. Isosurfaces of the u-structures averaged in time suggest that the decrease of the turbulent activity near the centerline is associated with large-scale counter-rotating roll modes weakened by the surface roughness. This research was supported by the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2017R1D1A1A09000537) and the Ministry of Science, ICT & Future Planning (NRF-2017R1A5A1015311).

  10. Synthetic Molecular Machines for Active Self-Assembly: Prototype Algorithms, Designs, and Experimental Study

    NASA Astrophysics Data System (ADS)

    Dabby, Nadine L.

    Computer science and electrical engineering have been the great success story of the twentieth century. The neat modularity and mapping of a language onto circuits has led to robots on Mars, desktop computers and smartphones. But these devices are not yet able to do some of the things that life takes for granted: repair a scratch, reproduce, regenerate, or grow exponentially fast--all while remaining functional. This thesis explores and develops algorithms, molecular implementations, and theoretical proofs in the context of "active self-assembly" of molecular systems. The long-term vision of active self-assembly is the theoretical and physical implementation of materials that are composed of reconfigurable units with the programmability and adaptability of biology's numerous molecular machines. En route to this goal, we must first find a way to overcome the memory limitations of molecular systems, and to discover the limits of complexity that can be achieved with individual molecules. One of the main thrusts in molecular programming is to use computer science as a tool for figuring out what can be achieved. While molecular systems that are Turing-complete have been demonstrated [Winfree, 1996], these systems still cannot achieve some of the feats biology has achieved. One might think that because a system is Turing-complete, capable of computing "anything," that it can do any arbitrary task. But while it can simulate any digital computational problem, there are many behaviors that are not "computations" in a classical sense, and cannot be directly implemented. Examples include exponential growth and molecular motion relative to a surface. Passive self-assembly systems cannot implement these behaviors because (a) molecular motion relative to a surface requires a source of fuel that is external to the system, and (b) passive systems are too slow to assemble exponentially-fast-growing structures. 
We call these behaviors "energetically incomplete" programmable behaviors. This class of behaviors includes any behavior where a passive physical system simply does not have enough physical energy to perform the specified tasks in the requisite amount of time. As we will demonstrate and prove, a sufficiently expressive implementation of an "active" molecular self-assembly approach can achieve these behaviors. Using an external source of fuel solves part of the problem, so the system is not "energetically incomplete." But the programmable system also needs to have sufficient expressive power to achieve the specified behaviors. Perhaps surprisingly, some of these systems do not even require Turing completeness to be sufficiently expressive. Building on a large variety of work by other scientists in the fields of DNA nanotechnology, chemistry and reconfigurable robotics, this thesis introduces several research contributions in the context of active self-assembly. We show that simple primitives such as insertion and deletion are able to generate complex and interesting results such as the growth of a linear polymer in logarithmic time and the ability of a linear polymer to treadmill. To this end we developed a formal model for active-self assembly that is directly implementable with DNA molecules. We show that this model is computationally equivalent to a machine capable of producing strings that are stronger than regular languages and, at most, as strong as context-free grammars. This is a great advance in the theory of active self-assembly as prior models were either entirely theoretical or only implementable in the context of macro-scale robotics. We developed a chain reaction method for the autonomous exponential growth of a linear DNA polymer. Our method is based on the insertion of molecules into the assembly, which generates two new insertion sites for every initial one employed. 
The building of a line in logarithmic time is a first step toward building a shape in logarithmic time. We demonstrate the first construction of a synthetic linear polymer that grows exponentially fast via insertion. We show that monomer molecules are converted into the polymer in logarithmic time via spectrofluorimetry and gel electrophoresis experiments. We also demonstrate the division of these polymers via the addition of a single DNA complex that competes with the insertion mechanism. This shows the growth of a population of polymers in logarithmic time. We characterize the DNA insertion mechanism that we utilize in Chapter 4. We experimentally demonstrate that we can control the kinetics of this reaction over at least seven orders of magnitude, by programming the sequences of DNA that initiate the reaction. In addition, we review co-authored work on programming molecular robots using prescriptive landscapes of DNA origami; this was the first microscopic demonstration of programming a molecular robot to walk on a 2-dimensional surface. We developed a snapshot method for imaging these random walking molecular robots and a CAPTCHA-like analysis method for difficult-to-interpret imaging data.
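    The logarithmic-time growth argument can be illustrated with a toy model. Assuming one insertion per available monomer-monomer junction per synchronous round (an idealization of the chain-reaction mechanism, not the actual reaction kinetics):

    ```python
    def grow(target_length):
        """Idealized insertion growth: every junction between adjacent
        monomers is an insertion site, and each synchronous round inserts
        one monomer at every site, roughly doubling the length.
        Returns the number of rounds needed to reach target_length."""
        length, rounds = 2, 0          # start as a dimer with one junction
        while length < target_length:
            sites = length - 1         # each insertion creates two new sites,
            length += sites            # so the site count roughly doubles too
            rounds += 1
        return rounds
    ```

    The length after k rounds is 2^k + 1, so reaching n monomers takes about log2(n) rounds: exponential growth in time, equivalently logarithmic time to a given length, in contrast to passive end-addition, which needs n steps.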

  11. Rectification of depth measurement using pulsed thermography with logarithmic peak second derivative method

    NASA Astrophysics Data System (ADS)

    Li, Xiaoli; Zeng, Zhi; Shen, Jingling; Zhang, Cunlin; Zhao, Yuejin

    2018-03-01

    The logarithmic peak second derivative (LPSD) method is the most popular method for depth prediction in pulsed thermography, and it is widely accepted to be independent of defect size. The theoretical model for the LPSD method is based on the one-dimensional solution of heat conduction, which neglects the effect of defect size. When a decay term accounting for the defect aspect ratio is introduced into the solution to correct for the three-dimensional thermal diffusion effect, the analytical model shows that the LPSD method is in fact affected by defect size. Furthermore, we constructed the relation between the characteristic time of the LPSD method and the defect aspect ratio, which was verified against experimental results for stainless steel and glass fiber reinforced plastic (GFRP) samples. We also propose an improved LPSD method for depth prediction that accounts for the effect of defect size; the rectification results for the stainless steel and GFRP samples are presented and discussed.
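    The LPSD idea itself can be sketched on a synthetic cooling curve: take the surface temperature after a pulse, plot ln T against ln t, and locate the peak of the second derivative, whose time scales with depth squared. The image-source series below is the textbook 1-D plate solution (the uncorrected model the abstract starts from); the grid and parameters are illustrative:

    ```python
    import math

    def surface_temp(t, L=1.0, alpha=1.0, terms=50):
        """1-D surface cooling of a plate of thickness L after an
        instantaneous pulse (image-source series), in arbitrary units."""
        s = sum(math.exp(-(n * L) ** 2 / (alpha * t)) for n in range(1, terms + 1))
        return (1.0 + 2.0 * s) / math.sqrt(t)

    def lpsd_peak_time(times, temps):
        """Time at which d^2(ln T)/d(ln t)^2 peaks (central differences,
        allowing a non-uniform grid)."""
        lt = [math.log(t) for t in times]
        lT = [math.log(T) for T in temps]
        best_t, best_d2 = None, -1e30
        for i in range(1, len(lt) - 1):
            h1, h2 = lt[i] - lt[i - 1], lt[i + 1] - lt[i]
            d2 = 2 * (h1 * lT[i + 1] - (h1 + h2) * lT[i] + h2 * lT[i - 1]) \
                 / (h1 * h2 * (h1 + h2))
            if d2 > best_d2:
                best_d2, best_t = d2, times[i]
        return best_t

    # Log-spaced time grid; the ln-ln slope turns from -1/2 (semi-infinite
    # behaviour) toward 0 (thermal saturation) around t ~ L^2/alpha.
    times = [10 ** (k / 200) for k in range(-400, 201)]
    t_peak = lpsd_peak_time(times, [surface_temp(t) for t in times])
    ```

    By dimensional analysis t_peak scales as L²/α, which is what makes the peak time a depth probe; the paper's correction modifies this relation when the defect aspect ratio is finite.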

  12. Logarithmic Superdiffusion in Two Dimensional Driven Lattice Gases

    NASA Astrophysics Data System (ADS)

    Krug, J.; Neiss, R. A.; Schadschneider, A.; Schmidt, J.

    2018-03-01

    The spreading of density fluctuations in two-dimensional driven diffusive systems is marginally anomalous. Mode coupling theory predicts that the diffusivity in the direction of the drive diverges with time as (ln t)^{2/3} with a prefactor depending on the macroscopic current-density relation and the diffusion tensor of the fluctuating hydrodynamic field equation. Here we present the first numerical verification of this behavior for a particular version of the two-dimensional asymmetric exclusion process. Particles jump strictly asymmetrically along one of the lattice directions and symmetrically along the other, and an anisotropy parameter p governs the ratio between the two rates. Using a novel massively parallel coupling algorithm that strongly reduces the fluctuations in the numerical estimate of the two-point correlation function, we are able to accurately determine the exponent of the logarithmic correction. In addition, the variation of the prefactor with p provides a stringent test of mode coupling theory.

  13. Mean-variance portfolio optimization by using time series approaches based on logarithmic utility function

    NASA Astrophysics Data System (ADS)

    Soeryana, E.; Fadhlina, N.; Sukono; Rusyaman, E.; Supian, S.

    2017-01-01

    Investors in stocks also face risk, because daily stock prices fluctuate. To minimize this risk, investors usually form an investment portfolio; a portfolio composed of several stocks is intended to achieve an optimal investment composition. This paper discusses Mean-Variance optimization of stock portfolios with non-constant mean and volatility, based on a logarithmic utility function. The non-constant mean is analysed using Autoregressive Moving Average (ARMA) models, while the non-constant volatility is analysed using Generalized Autoregressive Conditional Heteroscedasticity (GARCH) models. The optimization is performed using the Lagrangian multiplier technique. As a numerical illustration, the method is applied to some Islamic stocks in Indonesia. The expected result is the proportion of investment in each Islamic stock analysed.
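    As a much-reduced illustration of the Lagrangian step (the global minimum-variance special case with a diagonal covariance matrix, not the paper's full ARMA-GARCH mean-variance problem):

    ```python
    def min_variance_weights(variances):
        """Global minimum-variance weights under the budget constraint
        sum(w) = 1, obtained from the Lagrangian L = w'Σw - λ(w'1 - 1).
        For a diagonal covariance Σ the first-order conditions give
        w_i ∝ 1/σ_i² — an illustrative special case only."""
        inv = [1.0 / v for v in variances]
        s = sum(inv)
        return [x / s for x in inv]

    # Stocks with estimated return variances 0.1 and 0.2 (made-up numbers):
    # the less volatile stock receives the larger weight.
    w = min_variance_weights([0.1, 0.2])
    ```

    In the paper's setting the variances themselves are time-varying (from GARCH) and a mean-return constraint is added, so the Lagrangian system has one more multiplier, but the structure of the solution is the same.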

  14. Distributed Optimal Consensus Over Resource Allocation Network and Its Application to Dynamical Economic Dispatch.

    PubMed

    Li, Chaojie; Yu, Xinghuo; Huang, Tingwen; He, Xing

    2018-06-01

    The resource allocation problem is studied and reformulated by a distributed interior point method via a -logarithmic barrier. By the facilitation of the graph Laplacian, a fully distributed continuous-time multiagent system is developed for solving the problem. Specifically, to avoid high singularity of the -logarithmic barrier at boundary, an adaptive parameter switching strategy is introduced into this dynamical multiagent system. The convergence rate of the distributed algorithm is obtained. Moreover, a novel distributed primal-dual dynamical multiagent system is designed in a smart grid scenario to seek the saddle point of dynamical economic dispatch, which coincides with the optimal solution. The dual decomposition technique is applied to transform the optimization problem into easily solvable resource allocation subproblems with local inequality constraints. The good performance of the new dynamical systems is, respectively, verified by a numerical example and the IEEE six-bus test system-based simulations.
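    The logarithmic-barrier idea can be sketched in a centralized toy form: quadratic allocation costs, with the barrier -μ·Σ ln(x_i) keeping the allocation strictly positive, and gradient descent projected onto the budget hyperplane. The paper's algorithm is distributed over a graph and uses an adaptive barrier parameter; everything below is an illustrative simplification:

    ```python
    def barrier_allocate(a, total, mu=1e-3, lr=0.01, iters=20000):
        """Allocate `total` among agents minimizing sum(a_i * x_i^2)
        with x_i > 0, via the logarithmic barrier -mu * sum(log x_i)
        and projected gradient descent on the hyperplane sum(x) = total."""
        n = len(a)
        x = [total / n] * n
        for _ in range(iters):
            g = [2 * a[i] * x[i] - mu / x[i] for i in range(n)]
            gm = sum(g) / n
            # subtracting the mean gradient keeps sum(x) = total exactly
            x = [xi - lr * (gi - gm) for xi, gi in zip(x, g)]
            x = [max(xi, 1e-9) for xi in x]   # barrier keeps x interior anyway
        return x

    # Two agents with costs x^2 and 2x^2 sharing a budget of 3:
    # at the optimum the marginal costs equalize, so x ≈ [2, 1].
    alloc = barrier_allocate([1.0, 2.0], 3.0)
    ```

    The barrier's gradient term μ/x_i blows up as x_i → 0, which is the high-singularity-at-the-boundary issue the paper's adaptive parameter switching strategy addresses.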

  15. Extraction of phase information in daily stock prices

    NASA Astrophysics Data System (ADS)

    Fujiwara, Yoshi; Maekawa, Satoshi

    2000-06-01

    It is known that, on an intermediate time-scale such as days, stock market fluctuations possess several statistical properties that are common to different markets. Namely, logarithmic returns of an asset price have (i) a truncated Pareto-Lévy distribution, (ii) vanishing linear correlation, and (iii) volatility clustering with power-law autocorrelation. Fact (ii) is a consequence of the nonexistence of arbitragers with simple strategies, but this does not mean statistical independence of market fluctuations. Little attention has been paid to the temporal structure of higher-order statistics, although it contains important information on market dynamics. We applied a signal separation technique, called Independent Component Analysis (ICA), to data of daily stock prices on the Tokyo and New York Stock Exchanges (TSE/NYSE). ICA applies a linear transformation to lag vectors from the time-series to find independent components by a nonlinear algorithm. We obtained a similar impulse response for these datasets. If the process were a martingale, it can be shown that the impulse response should be a delta function, under a few conditions that can be checked numerically, as was verified by surrogate data. This result provides information on the market dynamics, including speculative bubbles and arbitrage processes.
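    The logarithmic returns in (i)-(iii) are straightforward to compute; a minimal sketch (the prices are made-up):

    ```python
    import math

    def log_returns(prices):
        """Logarithmic returns r_t = ln(p_t / p_{t-1}), the quantity whose
        distribution and correlations the statistical facts (i)-(iii) concern."""
        return [math.log(b / a) for a, b in zip(prices, prices[1:])]

    # Log returns add across time: their sum over a window telescopes to
    # the log of the total price ratio, which is why they are preferred to
    # simple percentage returns for multi-day aggregation.
    prices = [100.0, 101.0, 99.5, 102.0]
    r = log_returns(prices)
    ```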

  16. Acid-rain induced changes in streamwater quality during storms on Catoctin Mountain, Maryland

    USGS Publications Warehouse

    Rice, Karen C.; Bricker, O.P.

    1992-01-01

    Catoctin Mountain receives some of the most acidic (lowest pH) rain in the United States. In 1990, the U.S. Geological Survey (USGS), in cooperation with the Maryland Department of the Environment (MDE) and the Maryland Department of Natural Resources (DNR), began a study of the effects of acid rain on the quality of streamwater on the part of Catoctin Mountain within Cunningham Falls State Park, Maryland (fig. 1). Samples of precipitation collected on the mountain by the USGS since 1982 have been analyzed for acidity and concentration of chemical constituents. During 1982-91, the volume-weighted average pH of precipitation was 4.2. (Volume weighting corrects for the effect of acids being washed out of the atmosphere at the beginning of rainfall). The pH value is measured on a logarithmic scale, which means that for each whole-number change, the acidity changes by a factor of 10. Thus rain with a pH of 4.2 is more than 10 times as acidic as uncontaminated rain, which has a pH of about 5.6. Rain during several rainstorms on Catoctin Mountain was more than 100 times as acidic as uncontaminated rain.

  17. The theory of maximally and minimally even sets, the one-dimensional antiferromagnetic Ising model, and the continued fraction compromise of musical scales

    NASA Astrophysics Data System (ADS)

    Douthett, Elwood (Jack) Moser, Jr.

    1999-10-01

    Cyclic configurations of white and black sites, together with convex (concave) functions used to weight path length, are investigated. The weights of the white set and black set are the sums of the weights of the paths connecting the white sites and black sites, respectively, and the weight between sets is the sum of the weights of the paths that connect sites opposite in color. It is shown that when the weights of all configurations of a fixed number of white and a fixed number of black sites are compared, the minimum (maximum) weight of a white set, the minimum (maximum) weight of a black set, and the maximum (minimum) weight between sets occur simultaneously. Such configurations are called maximally even configurations. Similarly, the configurations whose weights are the opposite extremes occur simultaneously and are called minimally even configurations. Algorithms that generate these configurations are constructed and applied to the one-dimensional antiferromagnetic spin-1/2 Ising model. Next, the goodness of continued fractions as applied to musical intervals (frequency ratios and their base-2 logarithms) is explored. It is shown that, for the intermediate convergents between two consecutive principal convergents of an irrational number, the first half of the intermediate convergents are poorer approximations than the preceding principal convergent while the second half are better approximations; the goodness of a middle intermediate convergent can only be determined by calculation. These convergents are used to determine which equal-tempered systems have intervals that most closely approximate the musical fifth (pn/qn ≈ log2(3/2)). The goodness of exponentiated convergents (2^(pn/qn) ≈ 3/2) is also investigated.
    It is shown that, with the exception of a middle convergent, the goodness of the exponential form agrees with that of its logarithmic counterpart. As in the case of the logarithmic form, the goodness of a middle intermediate convergent in the exponential form can only be determined by calculation. A desirability function is constructed that simultaneously measures how well multiple intervals fit in a given equal-tempered system. These measurements are made for octave (base 2) and tritave (base 3) systems. Combinatorial properties important to music modulation are considered. These considerations lead to the construction of maximally even scales as partitions of an equal-tempered system.
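
    The principal convergents discussed above can be generated with the standard continued-fraction recursion. A minimal sketch (this is the textbook algorithm, not the author's construction) for the fifth, log2(3/2):

```python
from fractions import Fraction
from math import log2

def convergents(x: float, n: int) -> list:
    """First n principal convergents p/q of the continued fraction of x."""
    partials, result = [], []
    for _ in range(n):
        partials.append(int(x))
        # fold the partial quotients back up into a single fraction
        frac = Fraction(partials[-1])
        for a in reversed(partials[:-1]):
            frac = a + 1 / frac
        result.append(frac)
        x -= partials[-1]
        if x == 0:
            break
        x = 1 / x
    return result

cs = convergents(log2(3 / 2), 6)
# 7/12 appears among the convergents: the equal-tempered fifth of the
# 12-tone system, with 2**(7/12) ~ 1.4983 approximating the just fifth 3/2.
```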

  18. Logarithms in the Year 10 A.C.

    ERIC Educational Resources Information Center

    Kalman, Dan; Mitchell, Charles E.

    1981-01-01

    An alternative application of logarithms in the high school algebra curriculum that is not undermined by the existence and widespread availability of calculators is presented. The importance and use of linear relationships are underscored in the proposed lessons. (MP)

  19. Entanglement spreading after a geometric quench in quantum spin chains

    NASA Astrophysics Data System (ADS)

    Alba, Vincenzo; Heidrich-Meisner, Fabian

    2014-08-01

    We investigate the entanglement spreading in the anisotropic spin-1/2 Heisenberg (XXZ) chain after a geometric quench. This corresponds to a sudden change of the geometry of the chain or, in the equivalent language of interacting fermions confined in a box trap, to a sudden increase of the trap size. The entanglement dynamics after the quench is associated with the ballistic propagation of a magnetization wave front. At the free fermion point (XX chain), the von Neumann entropy SA exhibits several intriguing dynamical regimes. Specifically, at short times a logarithmic increase is observed, similar to local quenches. This is accurately described by an analytic formula that we derive from heuristic arguments. At intermediate times partial revivals of the short-time dynamics are superposed with a power-law increase SA˜tα, with α <1. Finally, at very long times a steady state develops with constant entanglement entropy, apart from oscillations. As expected, since the model is integrable, we find that the steady state is nonthermal, although it exhibits extensive entanglement entropy. We also investigate the entanglement dynamics after the quench from a finite to the infinite chain (sudden expansion). While at long times the entanglement vanishes, we demonstrate that its relaxation dynamics exhibits a number of scaling properties. Finally, we discuss the short-time entanglement dynamics in the XXZ chain in the gapless phase. The same formula that describes the time dependence for the XX chain remains valid in the whole gapless phase.

  20. Prediction of Soil pH Hyperspectral Spectrum in Guanzhong Area of Shaanxi Province Based on PLS

    NASA Astrophysics Data System (ADS)

    Liu, Jinbao; Zhang, Yang; Wang, Huanyuan; Cheng, Jie; Tong, Wei; Wei, Jing

    2017-12-01

    The soil pH of Fufeng, Yangling and Wugong Counties in Shaanxi Province was studied. The spectral reflectance was measured with an ASD FieldSpec HR portable spectrometer, and its spectral characteristics were analyzed. The first derivative of the original soil spectral reflectance, the second derivative, the logarithm of the reciprocal, and the first- and second-order differentials of the logarithm of the reciprocal were used to establish soil pH spectral prediction models. The results showed that the correlation between the reflectance spectra after SNV pre-treatment and soil pH was significantly improved. The optimal soil pH prediction model, established by the partial least squares method, was based on the first-order differential of the logarithm of the reciprocal of spectral reflectance. With 10 principal component factors, the calibration coefficient of determination was Rc2 = 0.9959, the root mean square error of calibration RMSEC = 0.0076, and the calibration deviation SEC = 0.0077; the validation coefficient of determination was Rv2 = 0.9893, the root mean square error of prediction RMSEP = 0.0157, and the prediction deviation SEP = 0.0160. The model was stable, with high fitting and prediction ability, so soil pH can be measured quickly.
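
    The preprocessing transforms named above are standard and easy to state. A minimal sketch on a synthetic reflectance spectrum (illustrative values, not the study's data):

```python
import numpy as np

def snv(spectrum: np.ndarray) -> np.ndarray:
    """Standard normal variate: center and scale one spectrum."""
    return (spectrum - spectrum.mean()) / spectrum.std()

def first_diff_log_reciprocal(reflectance: np.ndarray) -> np.ndarray:
    """First-order differential of log10(1/R) (apparent absorbance)."""
    return np.diff(np.log10(1.0 / reflectance))

r = np.linspace(0.2, 0.6, 100)    # toy reflectance spectrum
z = snv(r)                         # zero mean, unit variance per spectrum
d = first_diff_log_reciprocal(r)   # 99 band-to-band differences
```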

  1. Using History to Teach Mathematics: The Case of Logarithms

    NASA Astrophysics Data System (ADS)

    Panagiotou, Evangelos N.

    2011-01-01

    Many authors have discussed why we should use the history of mathematics in mathematics education. For example, Fauvel (For Learn Math, 11(2): 3-6, 1991) mentions at least fifteen arguments for applying the history of mathematics in teaching and learning mathematics. Knowing how to introduce history into mathematics lessons is a more difficult step. We found, however, that only a limited number of articles contain instructions on how to use the material, as opposed to the numerous general articles suggesting the history of mathematics as a didactical tool. The present article focuses on converting the history of logarithms into material appropriate for teaching 11th-grade students without any knowledge of calculus. History reveals that logarithms were invented prior to the exponential function, and shows that logarithms are not an arbitrary product, as they can appear when we leap straight to the definition given in modern textbooks, but a response to a problem. We describe the historical evolution of the concept step by step, in a way appropriate for use in class, up to the definition of the logarithm as the area under the hyperbola. Next, we present the formal development of the theory and define the exponential function. The teaching sequence has been successfully undertaken in two high school classrooms.

  2. A Renormalisation Group Method. V. A Single Renormalisation Group Step

    NASA Astrophysics Data System (ADS)

    Brydges, David C.; Slade, Gordon

    2015-05-01

    This paper is the fifth in a series devoted to the development of a rigorous renormalisation group method applicable to lattice field theories containing boson and/or fermion fields, and comprises the core of the method. In the renormalisation group method, increasingly large scales are studied in a progressive manner, with an interaction parametrised by a field polynomial which evolves with the scale under the renormalisation group map. In our context, the progressive analysis is performed via a finite-range covariance decomposition. Perturbative calculations are used to track the flow of the coupling constants of the evolving polynomial, but on their own perturbative calculations are insufficient to control error terms and to obtain mathematically rigorous results. In this paper, we define an additional non-perturbative coordinate, which together with the flow of coupling constants defines the complete evolution of the renormalisation group map. We specify conditions under which the non-perturbative coordinate is contractive under a single renormalisation group step. Our framework is essentially combinatorial, but its implementation relies on analytic results developed earlier in the series of papers. The results of this paper are applied elsewhere to analyse the critical behaviour of the 4-dimensional continuous-time weakly self-avoiding walk and of the 4-dimensional n-component |φ|⁴ model. In particular, the existence of a logarithmic correction to mean-field scaling for the susceptibility can be proved for both models, together with other facts about critical exponents and critical behaviour.

  3. On the explicit construction of Parisi landscapes in finite dimensional Euclidean spaces

    NASA Astrophysics Data System (ADS)

    Fyodorov, Y. V.; Bouchaud, J.-P.

    2007-12-01

    An N-dimensional Gaussian landscape with multiscale translation-invariant logarithmic correlations has been constructed, and the statistical mechanics of a single particle in this environment has been investigated. In the limit of high dimension, N → ∞, the free energy of the system in the thermodynamic limit coincides with the most general version of Derrida's generalized random energy model. The low-temperature behavior depends essentially on the spectrum of length scales involved in the construction of the landscape. The construction is argued to be valid in any finite spatial dimension N ≥ 1.

  4. Applying the log-normal distribution to target detection

    NASA Astrophysics Data System (ADS)

    Holst, Gerald C.

    1992-09-01

    Holst and Pickard experimentally determined that MRT responses tend to follow a log-normal distribution. The log-normal distribution appeared reasonable because nearly all visual psychophysical data are plotted on a logarithmic scale. It has the additional advantage of being bounded to positive values, an important consideration since probability of detection is often plotted in linear coordinates. A review of published data suggests that the log-normal distribution may have universal applicability. Specifically, the log-normal distribution obtained from MRT tests appears to fit the target transfer function and the probability of detection of rectangular targets.
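
    The log-normal form described here amounts to modeling probability of detection as the CDF of a log-normal in the stimulus variable. A minimal sketch (parameter values are hypothetical, for illustration only):

```python
from math import erf, log, sqrt

def p_detect(x: float, mu: float = 0.0, sigma: float = 0.5) -> float:
    """Log-normal CDF: probability of detection at stimulus level x > 0."""
    return 0.5 * (1.0 + erf((log(x) - mu) / (sigma * sqrt(2.0))))

# Monotone in x, confined to [0, 1], and defined only for positive x --
# the boundedness property the abstract highlights for probabilities
# plotted in linear coordinates.
probs = [p_detect(x) for x in (0.25, 0.5, 1.0, 2.0, 4.0)]
```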

  5. Renormalization group, normal form theory and the Ising model

    NASA Astrophysics Data System (ADS)

    Raju, Archishman; Hayden, Lorien; Clement, Colin; Liarte, Danilo; Sethna, James

    The results of the renormalization group are commonly advertised as the existence of power law singularities at critical points. Logarithmic and exponential corrections are seen as special cases and dealt with on a case-by-case basis. We propose to systematize computing the singularities in the renormalization group using perturbative normal form theory. This gives us a way to classify all such singularities in a unified framework and to generate a systematic machinery to do scaling collapses. We show that this procedure leads to some new results even in classic cases like the Ising model and has general applicability.

  6. On the regularized fermionic projector of the vacuum

    NASA Astrophysics Data System (ADS)

    Finster, Felix

    2008-03-01

    We construct families of fermionic projectors with spherically symmetric regularization, which satisfy the condition of a distributional MP-product. The method is to analyze regularization tails with a power law or logarithmic scaling in composite expressions in the fermionic projector. The resulting regularizations break the Lorentz symmetry and give rise to a multilayer structure of the fermionic projector near the light cone. Furthermore, we construct regularizations which go beyond the distributional MP-product in that they yield additional distributional contributions supported at the origin. The remaining freedom for the regularization parameters and the consequences for the normalization of the fermionic states are discussed.

  7. The evolution of the small x gluon TMD

    NASA Astrophysics Data System (ADS)

    Zhou, Jian

    2016-06-01

    We study the evolution of the small x gluon transverse momentum dependent (TMD) distribution in the dilute limit. The calculation has been carried out in the Ji-Ma-Yuan scheme using a simple quark target model. As expected, we find that the resulting small x gluon TMD simultaneously satisfies both the Collins-Soper (CS) evolution equation and the Balitsky-Fadin-Kuraev-Lipatov (BFKL) evolution equation. We thus confirm the earlier finding that high energy factorization (HEF) and TMD factorization should be jointly employed to resum the different types of large logarithms in a process where the three relevant scales are well separated.

  8. What is the optimal architecture for visual information routing?

    PubMed

    Wolfrum, Philipp; von der Malsburg, Christoph

    2007-12-01

    Analyzing the design of networks for visual information routing is an underconstrained problem due to insufficient anatomical and physiological data. We propose here optimality criteria for the design of routing networks. For a very general architecture, we derive the number of routing layers and the fanout that minimize the required neural circuitry. The optimal fanout l is independent of network size, while the number k of layers scales logarithmically (with a prefactor below 1) with the number n of visual resolution units to be routed independently. The results are found to agree with data of the primate visual system.
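
    A toy cost model illustrates why an optimal fanout can be independent of network size: if circuitry grows like layers × units × fanout, with the number of layers treated as the continuous quantity log(n)/log(fanout), the minimizing fanout does not depend on n. This sketch is our illustration under those assumptions, not the paper's derivation:

```python
from math import log

def routing_cost(n: int, fanout: int) -> float:
    """Toy circuitry cost: layers * n * fanout, layers = log(n)/log(fanout)."""
    return (log(n) / log(fanout)) * n * fanout

# The cost factors as n*log(n) * (fanout / log(fanout)), so the minimizing
# integer fanout (near e) is the same for every network size n.
best = {n: min(range(2, 20), key=lambda f: routing_cost(n, f))
        for n in (10**3, 10**4, 10**5)}
```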

  9. Semi-dynamic leaching tests of nickel containing wastes stabilized/solidified with magnesium potassium phosphate cements.

    PubMed

    Torras, Josep; Buj, Irene; Rovira, Miquel; de Pablo, Joan

    2011-02-28

    Herein is presented a study on the long-term leaching behaviour of nickel containing wastes stabilized/solidified with magnesium potassium phosphate cements. Two different semi-dynamic leaching tests were carried out on monolithic materials: ANS 16.1 test with liquid-to-solid ratio (L/S) of 10 dm(3) kg(-1) and increasing renewal times, and ASTM C1308 test with liquid-to-solid ratio (L/S) of 100 dm(3) kg(-1) and constant renewal time of 1 day. ASTM C1308 provides a lower degree of saturation of the leachant with respect to the leached material. The effectiveness of magnesium potassium phosphate cements for the inertization of nickel was proved. XRD analyses showed the presence of bobierrite on the monolith's surface after the leaching test, which had not been detected prior to the leaching test. In addition, the calculated cumulative release of the main components of the stabilization matrix (Mg(2+), total P and K(+)) was represented versus time in logarithmic scale and it was determined if the leaching mechanism corresponds to diffusion. Potassium is released by diffusion, while total phosphorous and magnesium show dissolution. Magnesium release in ANS 16.1 is slowed down because of saturation of the leachant. Experimental results demonstrate the importance of L/S ratio and renewal times in semi-dynamic leaching tests. Copyright © 2010 Elsevier B.V. All rights reserved.
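
    The diffusion check described (cumulative release versus time on logarithmic scales) is commonly read off the log-log slope: a slope near 0.5 suggests diffusion-controlled release, and a slope near 1.0 suggests dissolution. A minimal sketch on synthetic data (not the study's measurements; coefficients are arbitrary):

```python
import numpy as np

def loglog_slope(t: np.ndarray, cumulative_release: np.ndarray) -> float:
    """Least-squares slope of log(cumulative release) vs. log(time)."""
    return float(np.polyfit(np.log(t), np.log(cumulative_release), 1)[0])

t = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # renewal times, days
diffusion_like = 3.0 * np.sqrt(t)           # release ~ t**0.5 -> diffusion
dissolution_like = 0.8 * t                  # release ~ t**1.0 -> dissolution
```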

  10. Impacts of climate change on current methodologies for flood risk analysis: Watershed-scale analyses using the Soil and Water Assessment Tool (SWAT)

    NASA Astrophysics Data System (ADS)

    Spellman, P.; Griffis, V. W.; LaFond, K.

    2013-12-01

    A changing climate brings about new challenges for flood risk analysis and water resources planning and management. Current methods for estimating flood risk in the US involve fitting the Pearson Type III (P3) probability distribution to the logarithms of the annual maximum flood (AMF) series using the method of moments. These methods are employed under the premise of stationarity, which assumes that the fitted distribution is time invariant and variables affecting stream flow such as climate do not fluctuate. However, climate change would bring about shifts in meteorological forcings, which can alter the summary statistics (mean, variance, skew) of flood series used for P3 parameter estimation, resulting in erroneous flood risk projections. To ascertain the degree to which future risk may be misrepresented by current techniques, we use climate scenarios generated from global climate models (GCMs) as input to a hydrological model to explore how relative changes to current climate affect flood response for watersheds in the northeastern United States. The watersheds were calibrated and run on a daily time step using the continuous, semi-distributed, process-based Soil and Water Assessment Tool (SWAT). Nash-Sutcliffe Efficiency (NSE), RMSE to Standard Deviation ratio (RSR) and Percent Bias (PBIAS) were all used to assess model performance. Eight climate scenarios were chosen from GCM output based on relative precipitation and temperature changes from the current climate of the watershed and then further bias-corrected. Four of the scenarios were selected to represent warm-wet, warm-dry, cool-wet and cool-dry future climates, and the other four were chosen to represent more extreme, albeit possible, changes in precipitation and temperature. We quantify changes in response by comparing the differences in total mass balance and summary statistics of the logarithms of the AMF series from historical baseline values.
We then compare forecasts of flood quantiles from fitting a P3 distribution to the logs of historical AMF data to that of generated AMF series.

  11. Statistical scaling of pore-scale Lagrangian velocities in natural porous media.

    PubMed

    Siena, M; Guadagnini, A; Riva, M; Bijeljic, B; Pereira Nunes, J P; Blunt, M J

    2014-08-01

    We investigate the scaling behavior of sample statistics of pore-scale Lagrangian velocities in two different rock samples, Bentheimer sandstone and Estaillades limestone. The samples are imaged using x-ray computer tomography with micron-scale resolution. The scaling analysis relies on the study of the way qth-order sample structure functions (statistical moments of order q of absolute increments) of Lagrangian velocities depend on separation distances, or lags, traveled along the mean flow direction. In the sandstone block, sample structure functions of all orders exhibit a power-law scaling within a clearly identifiable intermediate range of lags. Sample structure functions associated with the limestone block display two diverse power-law regimes, which we infer to be related to two overlapping spatially correlated structures. In both rocks and for all orders q, we observe linear relationships between logarithmic structure functions of successive orders at all lags (a phenomenon that is typically known as extended power scaling, or extended self-similarity). The scaling behavior of Lagrangian velocities is compared with the one exhibited by porosity and specific surface area, which constitute two key pore-scale geometric observables. The statistical scaling of the local velocity field reflects the behavior of these geometric observables, with the occurrence of power-law-scaling regimes within the same range of lags for sample structure functions of Lagrangian velocity, porosity, and specific surface area.
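
    The quantities involved are simple to compute. A sketch of qth-order sample structure functions and the extended self-similarity (ESS) check on a synthetic correlated signal (a Gaussian random walk, for which S4 ∝ S2² so the ESS slope is near 2; this stands in for, and is not, the rock data):

```python
import numpy as np

def structure_function(v: np.ndarray, lag: int, q: float) -> float:
    """qth-order sample structure function: mean |increment|**q at a lag."""
    return float(np.mean(np.abs(v[lag:] - v[:-lag]) ** q))

rng = np.random.default_rng(0)
v = np.cumsum(rng.standard_normal(10_000))  # toy correlated 1-D signal
lags = [1, 2, 4, 8, 16, 32]
s2 = [structure_function(v, lag, 2.0) for lag in lags]
s4 = [structure_function(v, lag, 4.0) for lag in lags]
# ESS: log S4 vs. log S2 is close to linear across the lag range, even
# when neither is a clean power law in the lag itself.
ess_slope = float(np.polyfit(np.log(s2), np.log(s4), 1)[0])
```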

  12. Scaling effects on spring phenology detections from MODIS data at multiple spatial resolutions over the contiguous United States

    NASA Astrophysics Data System (ADS)

    Peng, Dailiang; Zhang, Xiaoyang; Zhang, Bing; Liu, Liangyun; Liu, Xinjie; Huete, Alfredo R.; Huang, Wenjiang; Wang, Siyuan; Luo, Shezhou; Zhang, Xiao; Zhang, Helin

    2017-10-01

    Land surface phenology (LSP) has been widely retrieved from satellite data at multiple spatial resolutions, but the spatial scaling effects on LSP detection are poorly understood. In this study, we collected the enhanced vegetation index (EVI, 250 m) from the collection 6 MOD13Q1 product over the contiguous United States (CONUS) in 2007 and 2008, and generated a set of multiple-spatial-resolution EVI data by resampling 250 m to 2 × 250 m, 3 × 250 m, 4 × 250 m, …, 35 × 250 m. These EVI time series were then used to detect the start of spring season (SOS) at various spatial resolutions. Further, the SOS variation across scales was examined at each coarse-resolution grid (35 × 250 m ≈ 8 km, referred to as the reference grid) and ecoregion. Finally, the SOS scaling effects were associated with landscape fragmentation, the proportion of the primary land cover type, and the spatial variability of seasonal greenness variation within each reference grid. The results revealed the influence of satellite spatial resolution on SOS retrievals and the related impact factors. Specifically, SOS varied significantly, linearly or logarithmically, across scales, although the relationship could be either positive or negative. The overall SOS values averaged from spatial resolutions between 250 m and 35 × 250 m at large ecosystem regions were generally similar, with a difference of less than 5 days, while the SOS values within the reference grid could differ greatly in some local areas. Moreover, the standard deviation of SOS across scales in the reference grid was less than 5 days in more than 70% of the area over the CONUS, and was smaller in northeastern than in southern and western regions. The SOS scaling effect was significantly associated with the heterogeneity of vegetation properties characterized using landscape fragmentation, the proportion of the primary land cover type, and the spatial variability of seasonal greenness variation, the last of these being the most important impact factor.

  13. Ideal evolution of magnetohydrodynamic turbulence when imposing Taylor-Green symmetries.

    PubMed

    Brachet, M E; Bustamante, M D; Krstulovic, G; Mininni, P D; Pouquet, A; Rosenberg, D

    2013-01-01

    We investigate the ideal and incompressible magnetohydrodynamic (MHD) equations in three space dimensions for the development of potentially singular structures. The methodology consists in implementing the fourfold symmetries of the Taylor-Green vortex generalized to MHD, leading to substantial computer time and memory savings at a given resolution; we also use a regridding method that allows for lower-resolution runs at early times, with no loss of spectral accuracy. One magnetic configuration is examined at an equivalent resolution of 6144³ points and three different configurations on grids of 4096³ points. At the highest resolution, two different current and vorticity sheet systems are found to collide, producing two successive accelerations in the development of small scales. At the latest time, a convergence of magnetic field lines to the location of maximum current is probably leading locally to a strong bending and directional variability of such lines. A novel analytical method, based on sharp analysis inequalities, is used to assess the validity of the finite-time singularity scenario. This method allows one to rule out spurious singularities by evaluating the rate at which the logarithmic decrement of the analyticity-strip method goes to zero. The result is that the finite-time singularity scenario cannot be ruled out, and the singularity time could be somewhere between t=2.33 and t=2.70. More robust conclusions will require higher resolution runs and grid-point interpolation measurements of maximum current and vorticity.

  14. On the nature of seizure dynamics

    PubMed Central

    Stacey, William C.; Quilichini, Pascale P.; Ivanov, Anton I.

    2014-01-01

    Seizures can occur spontaneously and in a recurrent manner, which defines epilepsy; or they can be induced in a normal brain under a variety of conditions in most neuronal networks and species from flies to humans. Such universality raises the possibility that invariant properties exist that characterize seizures under different physiological and pathological conditions. Here, we analysed seizure dynamics mathematically and established a taxonomy of seizures based on first principles. For the predominant seizure class we developed a generic model called Epileptor. As an experimental model system, we used ictal-like discharges induced in vitro in mouse hippocampi. We show that only five state variables linked by integral-differential equations are sufficient to describe the onset, time course and offset of ictal-like discharges as well as their recurrence. Two state variables are responsible for generating rapid discharges (fast time scale), two for spike and wave events (intermediate time scale) and one for the control of time course, including the alternation between ‘normal’ and ictal periods (slow time scale). We propose that normal and ictal activities coexist: a separatrix acts as a barrier (or seizure threshold) between these states. Seizure onset is reached upon the collision of normal brain trajectories with the separatrix. We show theoretically and experimentally how a system can be pushed toward seizure under a wide variety of conditions. Within our experimental model, the onset and offset of ictal-like discharges are well-defined mathematical events: a saddle-node and homoclinic bifurcation, respectively. These bifurcations necessitate a baseline shift at onset and a logarithmic scaling of interspike intervals at offset. These predictions were not only confirmed in our in vitro experiments, but also for focal seizures recorded in different syndromes, brain regions and species (humans and zebrafish). 
Finally, we identified several possible biophysical parameters contributing to the five state variables in our model system. We show that these parameters apply to specific experimental conditions and propose that there exists a wide array of possible biophysical mechanisms for seizure genesis, while preserving central invariant properties. Epileptor and the seizure taxonomy will guide future modeling and translational research by identifying universal rules governing the initiation and termination of seizures and predicting the conditions necessary for those transitions. PMID:24919973

  15. Coulomb Logarithm in Nonideal and Degenerate Plasmas

    NASA Astrophysics Data System (ADS)

    Filippov, A. V.; Starostin, A. N.; Gryaznov, V. K.

    2018-03-01

    Various methods for determining the Coulomb logarithm in the kinetic theory of transport and various variants of the choice of the plasma screening constant, taking into account and disregarding the contribution of the ion component and the boundary value of the electron wavevector are considered. The correlation of ions is taken into account using the Ornstein-Zernike integral equation in the hypernetted-chain approximation. It is found that the effect of ion correlation in a nondegenerate plasma is weak, while in a degenerate plasma, this effect must be taken into account when screening is determined by the electron component alone. The calculated values of the electrical conductivity of a hydrogen plasma are compared with the values determined experimentally in the megabar pressure range. It is shown that the values of the Coulomb logarithm can indeed be smaller than unity. Special experiments are proposed for a more exact determination of the Coulomb logarithm in a magnetic field for extremely high pressures, for which electron scattering by ions prevails.

  16. Logarithmic Laplacian Prior Based Bayesian Inverse Synthetic Aperture Radar Imaging.

    PubMed

    Zhang, Shuanghui; Liu, Yongxiang; Li, Xiang; Bi, Guoan

    2016-04-28

    This paper presents a novel inverse synthetic aperture radar (ISAR) imaging algorithm based on a new sparse prior, known as the logarithmic Laplacian prior. The newly proposed logarithmic Laplacian prior has a narrower main lobe with higher tail values than the Laplacian prior, which helps to improve sparse representation. The logarithmic Laplacian prior is used for ISAR imaging within the Bayesian framework to achieve a better-focused radar image. In the proposed method, the phase errors are jointly estimated based on the minimum entropy criterion to accomplish autofocusing. Maximum a posteriori (MAP) estimation and maximum likelihood estimation (MLE) are utilized to estimate the model parameters, avoiding a manual tuning process. Additionally, the fast Fourier transform (FFT) and the Hadamard product are used to reduce the required computation. Experimental results based on both simulated and measured data validate that the proposed algorithm outperforms traditional sparse ISAR imaging algorithms in terms of resolution improvement and noise suppression.

  17. Habitable zones around main sequence stars.

    PubMed

    Kasting, J F; Whitmire, D P; Reynolds, R T

    1993-01-01

    A one-dimensional climate model is used to estimate the width of the habitable zone (HZ) around our Sun and around other main sequence stars. Our basic premise is that we are dealing with Earth-like planets with CO2/H2O/N2 atmospheres and that habitability requires the presence of liquid water on the planet's surface. The inner edge of the HZ is determined in our model by loss of water via photolysis and hydrogen escape. The outer edge of the HZ is determined by the formation of CO2 clouds, which cool a planet's surface by increasing its albedo and by lowering the convective lapse rate. Conservative estimates for these distances in our own Solar System are 0.95 and 1.37 AU, respectively; the actual width of the present HZ could be much greater. Between these two limits, climate stability is ensured by a feedback mechanism in which atmospheric CO2 concentrations vary inversely with planetary surface temperature. The width of the HZ is slightly greater for planets that are larger than Earth and for planets which have higher N2 partial pressures. The HZ evolves outward in time because the Sun increases in luminosity as it ages. A conservative estimate for the width of the 4.6-Gyr continuously habitable zone (CHZ) is 0.95 to 1.15 AU. Stars later than F0 have main sequence lifetimes exceeding 2 Gyr and, so, are also potential candidates for harboring habitable planets. The HZ around an F star is larger and occurs farther out than for our Sun; the HZ around K and M stars is smaller and occurs farther in. Nevertheless, the widths of all of these HZs are approximately the same if distance is expressed on a logarithmic scale. A log distance scale is probably the appropriate scale for this problem because the planets in our own Solar System are spaced logarithmically and because the distance at which another star would be expected to form planets should be related to the star's mass. 
    The width of the CHZ around other stars depends on the time that a planet is required to remain habitable and on whether a planet that is initially frozen can be thawed by modest increases in stellar luminosity. For a specified period of habitability, CHZs around K and M stars are wider (in log distance) than for our Sun because these stars evolve more slowly. Planets orbiting late K stars and M stars may not be habitable, however, because they can become trapped in synchronous rotation as a consequence of tidal damping. F stars have narrower (log distance) CHZs than our Sun because they evolve more rapidly. Our results suggest that mid-to-early K stars should be considered along with G stars as optimal candidates in the search for extraterrestrial life.
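
    The equal-log-width observation follows from a simple flux argument: if both HZ edges scale as the square root of stellar luminosity (a common simplification; the abstract's climate model is more detailed), the logarithmic width is the same for every star. A sketch using the solar values 0.95 and 1.37 AU quoted above:

```python
from math import log10, sqrt

def hz_bounds_au(luminosity_solar: float) -> tuple:
    """HZ edges in AU, scaling the solar 0.95-1.37 AU values with sqrt(L)."""
    return 0.95 * sqrt(luminosity_solar), 1.37 * sqrt(luminosity_solar)

widths = []
for lum in (0.1, 1.0, 10.0):            # roughly M dwarf, Sun, early F star
    inner, outer = hz_bounds_au(lum)
    widths.append(log10(outer / inner))  # same ~0.16 dex width for every L
```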

  18. Durability Testing of Tank Track Rubber Compounds under Cyclic Loading

    DTIC Science & Technology

    1987-10-15

    Semilogarithmic depiction of time-to-failure vs. applied (engineering) stress for 15TP-14AX rubber compounds in creep experiments at 23 °C (after McKenna). Time-to-failure behavior of the 15TP-14AX rubber was characterized at 23, 75, 125, and 175 °C; the logarithm of the time to failure was plotted against the applied (engineering) stress (Figure 3-7).

  19. Meso-Scale Modeling of Spall in a Heterogeneous Two-Phase Material

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Springer, Harry Keo

    2008-07-11

    The influence of the heterogeneous second-phase particle structure and applied loading conditions on the ductile spall response of a model two-phase material was investigated. Quantitative metallography, three-dimensional (3D) meso-scale simulations (MSS), and small-scale spall experiments provided the foundation for this study. Nodular ductile iron (NDI) was selected as the model two-phase material for this study because it contains a large and readily identifiable second-phase particle population. Second-phase particles serve as the primary void nucleation sites in NDI and are, therefore, central to its ductile spall response. A mathematical model was developed for the NDI second-phase volume fraction that accounted for the non-uniform particle size and spacing distributions within the framework of a length-scale dependent Gaussian probability distribution function (PDF). This model was based on novel multiscale sampling measurements. A methodology was also developed for the computer generation of representative particle structures based on their mathematical description, enabling 3D MSS. MSS were used to investigate the effects of second-phase particle volume fraction and particle size, loading conditions, and physical domain size of the simulation on the ductile spall response of a model two-phase material. MSS results reinforce existing model predictions, where the spall strength metric (SSM) logarithmically decreases with increasing particle volume fraction. While SSM predictions are nearly independent of applied load conditions at lower loading rates, which is consistent with previous studies, loading dependencies are observed at higher loading rates. There is also a logarithmic decrease in SSM with increasing (initial) void size. A model was developed to account for the effects of loading rate, particle size, matrix sound-speed, and, in the NDI-specific case, the probabilistic particle volume fraction model.
Small-scale spall experiments were designed and executed for the purpose of validating closely-coupled 3D MSS. While the spall strength is nearly independent of specimen thickness, the fragment morphology varies widely. Detailed MSS demonstrate that the interactions between the tensile release waves are altered by specimen thickness and that these interactions are primarily responsible for fragment formation. MSS also provided insights on the regional amplification of damage, which enables the development of predictive void evolution models.
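The reported logarithmic decrease of the spall strength metric (SSM) with particle volume fraction suggests a trend of the form SSM ≈ a + b·ln(f) with b < 0. The sketch below fits that functional form to entirely hypothetical data points (the volume fractions and SSM values are illustrative placeholders, not taken from the report):

```python
import numpy as np

# Hypothetical data: second-phase particle volume fractions f and
# spall strength metric (SSM) values in arbitrary units.
f = np.array([0.05, 0.08, 0.12, 0.18, 0.25])
ssm = np.array([3.1, 2.9, 2.7, 2.5, 2.3])

# Fit SSM = a + b*ln(f); a logarithmic decrease corresponds to b < 0.
b, a = np.polyfit(np.log(f), ssm, 1)
print(f"SSM ~= {a:.2f} + {b:.2f} * ln(f)")
```

A negative fitted slope b reproduces the qualitative behavior described in the abstract: higher second-phase particle volume fractions lower the spall strength, with diminishing sensitivity at larger fractions.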

  20. Nonlinear coherent optical image processing using logarithmic transmittance of bacteriorhodopsin films

    NASA Astrophysics Data System (ADS)

    Downie, John D.

    1995-08-01

    The transmission properties of some bacteriorhodopsin-film spatial light modulators are uniquely suited to allow nonlinear optical image-processing operations to be applied to images with multiplicative noise characteristics. A logarithmic amplitude-transmission characteristic of the film permits the conversion of multiplicative noise to additive noise, which may then be linearly filtered out in the Fourier plane of the transformed image. I present experimental results demonstrating the principle and the capability for several different image and noise situations, including deterministic noise and speckle. The bacteriorhodopsin film studied here displays the logarithmic transmission response for write intensities spanning a dynamic range greater than 2 orders of magnitude.
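The homomorphic principle the film implements optically can be sketched numerically: a logarithm converts multiplicative noise into additive noise, a linear Fourier-plane filter suppresses it, and exponentiation restores the image. The signal, noise level, and filter cutoff below are hypothetical choices for illustration, not parameters from the experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 512)
image = 1.5 + np.sin(2 * np.pi * 3 * x)          # smooth, strictly positive "image"
noise = np.exp(0.2 * rng.standard_normal(512))   # multiplicative (lognormal) noise
observed = image * noise

# Logarithm converts the product into a sum:
# log(observed) = log(image) + log(noise).
log_obs = np.log(observed)

# Linear low-pass filtering in the Fourier plane removes the broadband
# additive term while keeping the slowly varying image content.
spectrum = np.fft.rfft(log_obs)
spectrum[10:] = 0.0                              # hypothetical cutoff
restored = np.exp(np.fft.irfft(spectrum, n=512))

# The restored image should match the clean one better than the raw data.
err_before = np.mean((observed - image) ** 2)
err_after = np.mean((restored - image) ** 2)
print(err_after < err_before)
```

This is the same sequence of operations the bacteriorhodopsin film performs in hardware: its logarithmic amplitude-transmission characteristic plays the role of the `np.log` step, and the subsequent linear filtering is done optically in the Fourier plane.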
