ERIC Educational Resources Information Center
Science Teacher, 2005
2005-01-01
This article features questions regarding logarithmic functions and hair growth. The first question is, "What is the underlying natural phenomenon that causes the natural log function to show up so frequently in scientific equations?" There are two reasons for this. The first is simply that the logarithm of a number is often used as a replacement…
Small range logarithm calculation on Intel Quartus II Verilog
NASA Astrophysics Data System (ADS)
Mustapha, Muhazam; Mokhtar, Anis Shahida; Ahmad, Azfar Asyrafie
2018-02-01
The logarithm function is the inverse of the exponential function. This paper implements a power series for the natural logarithm function using Verilog HDL in Quartus II. The design is written at the RTL level in order to decrease the number of megafunctions. Simulations were done to determine the precision and the number of LEs used so that the output is calculated accurately. It is found that the accuracy of the system is only valid for the range of 1 to e.
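A minimal software sketch of the same idea (my own Python illustration, not the paper's Verilog implementation; the particular series and the truncation order n_terms are assumptions): evaluate ln(x) from a power series that converges for all x > 0.

```python
import math

def ln_series(x: float, n_terms: int = 30) -> float:
    """Approximate ln(x) for x > 0 with the atanh-type power series
    ln(x) = 2 * sum_{m>=0} y**(2m+1) / (2m+1),  where y = (x - 1) / (x + 1)."""
    if x <= 0:
        raise ValueError("ln is undefined for x <= 0")
    y = (x - 1.0) / (x + 1.0)
    total, term = 0.0, y
    for m in range(n_terms):
        total += term / (2 * m + 1)
        term *= y * y          # advance y**(2m+1) -> y**(2m+3)
    return 2.0 * total

# Quick check over the 1-to-e range discussed in the abstract
for x in (1.0, 1.5, 2.0, math.e):
    print(x, ln_series(x), math.log(x))
```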
Alber, S A; Schaffner, D W
1992-01-01
A comparison was made between mathematical variations of the square root and Schoolfield models for predicting growth rate as a function of temperature. The statistical consequences of square root and natural logarithm transformations of growth rate used in several variations of the Schoolfield and square root models were examined. Growth rate variances of Yersinia enterocolitica in brain heart infusion broth increased as a function of temperature. The ability of the two data transformations to correct for the heterogeneity of variance was evaluated. A natural logarithm transformation of growth rate was more effective than a square root transformation at correcting for the heterogeneity of variance. The square root model was more accurate than the Schoolfield model when both models used natural logarithm transformation. PMID:1444367
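A rough, simulated illustration of the transformation comparison (made-up rates, not the Yersinia data; the mean/variance model is an assumption): compare how well the natural logarithm and square-root transforms equalize group variances across temperatures.

```python
import numpy as np

rng = np.random.default_rng(0)
temps = np.arange(5, 31, 5)                        # hypothetical temperatures (deg C)
# Simulate growth rates whose variance increases with temperature
rates = [rng.normal(loc=0.05 * t, scale=0.01 * t, size=50).clip(min=1e-6) for t in temps]

def group_variances(transform):
    return np.array([np.var(transform(r), ddof=1) for r in rates])

for name, f in [("raw", lambda r: r), ("sqrt", np.sqrt), ("ln", np.log)]:
    v = group_variances(f)
    # Ratio of largest to smallest group variance: closer to 1 = more homogeneous
    print(f"{name:>4}: max/min variance ratio = {v.max() / v.min():.2f}")
```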
Children's Early Mental Number Line: Logarithmic or Decomposed Linear?
ERIC Educational Resources Information Center
Moeller, Korbinean; Pixner, Silvia; Kaufmann, Liane; Nuerk, Hans-Christoph
2009-01-01
Recently, the nature of children's mental number line has received much investigation. In the number line task, children are required to mark a presented number on a physical number line with fixed endpoints. Typically, it was observed that the estimations of younger/inexperienced children were accounted for best by a logarithmic function, whereas…
Q estimation of seismic data using the generalized S-transform
NASA Astrophysics Data System (ADS)
Hao, Yaju; Wen, Xiaotao; Zhang, Bo; He, Zhenhua; Zhang, Rui; Zhang, Jinming
2016-12-01
Quality factor, Q, is a parameter that characterizes the energy dissipation during seismic wave propagation. The reservoir pore is one of the main factors that affect the value of Q. In particular, when pore space is filled with oil or gas, the rock usually exhibits a relatively low Q value. Such a low Q value has been used as a direct hydrocarbon indicator by many researchers. The conventional Q estimation method based on spectral ratio suffers from the problem of waveform tuning; hence, many researchers have introduced time-frequency analysis techniques to tackle this problem. Unfortunately, the window functions adopted in time-frequency analysis algorithms such as the continuous wavelet transform (CWT) and the S-transform (ST) contaminate the amplitude spectra because the seismic signal is multiplied by the window functions during time-frequency decomposition. The basic assumption of the spectral ratio method is that there is a linear relationship between the natural logarithmic spectral ratio and frequency. However, this assumption does not hold if we take the influence of window functions into consideration. In this paper, we first employ a recently developed two-parameter generalized S-transform (GST) to obtain the time-frequency spectra of seismic traces. We then deduce the non-linear relationship between the natural logarithmic spectral ratio and frequency. Finally, we obtain a linear relationship between the natural logarithmic spectral ratio and a newly defined parameter γ by ignoring the negligible second-order term. The gradient of this linear relationship is 1/Q. Here, the parameter γ is a function of frequency and source wavelet. Numerical examples for VSP and post-stack reflection data confirm that our algorithm is capable of yielding accurate results. The Q-value results estimated from field data acquired in western China show reasonable agreement with oil-producing well locations.
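A minimal sketch of the conventional spectral-ratio step that the paper builds on (my own synthetic example, not the GST workflow; the frequencies, travel times, and source spectrum are assumptions): under a constant-Q model, ln(A2/A1) is linear in frequency with slope -π(t2 - t1)/Q, so Q follows from a line fit.

```python
import numpy as np

def estimate_q(freqs, spec1, spec2, t1, t2):
    """Estimate Q from the classic spectral-ratio relation
    ln(A2/A1) = -pi * f * (t2 - t1) / Q + const  (slope gives Q)."""
    log_ratio = np.log(spec2 / spec1)
    slope, _intercept = np.polyfit(freqs, log_ratio, 1)   # linear fit in frequency
    return -np.pi * (t2 - t1) / slope

# Synthetic check: build spectra that obey the constant-Q attenuation model
freqs = np.linspace(10.0, 60.0, 51)             # Hz
t1, t2, q_true = 1.0, 1.4, 80.0                 # s, s, dimensionless
source = np.exp(-((freqs - 30.0) / 20.0) ** 2)  # arbitrary smooth source spectrum
spec1 = source * np.exp(-np.pi * freqs * t1 / q_true)
spec2 = source * np.exp(-np.pi * freqs * t2 / q_true)
print("estimated Q:", estimate_q(freqs, spec1, spec2, t1, t2))   # ~80
```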
Synthetic analog computation in living cells.
Daniel, Ramiz; Rubens, Jacob R; Sarpeshkar, Rahul; Lu, Timothy K
2013-05-30
A central goal of synthetic biology is to achieve multi-signal integration and processing in living cells for diagnostic, therapeutic and biotechnology applications. Digital logic has been used to build small-scale circuits, but other frameworks may be needed for efficient computation in the resource-limited environments of cells. Here we demonstrate that synthetic analog gene circuits can be engineered to execute sophisticated computational functions in living cells using just three transcription factors. Such synthetic analog gene circuits exploit feedback to implement logarithmically linear sensing, addition, ratiometric and power-law computations. The circuits exhibit Weber's law behaviour as in natural biological systems, operate over a wide dynamic range of up to four orders of magnitude and can be designed to have tunable transfer functions. Our circuits can be composed to implement higher-order functions that are well described by both intricate biochemical models and simple mathematical functions. By exploiting analog building-block functions that are already naturally present in cells, this approach efficiently implements arithmetic operations and complex functions in the logarithmic domain. Such circuits may lead to new applications for synthetic biology and biotechnology that require complex computations with limited parts, need wide-dynamic-range biosensing or would benefit from the fine control of gene expression.
On the entropy function in sociotechnical systems
Montroll, Elliott W.
1981-01-01
The entropy function H = -Σ_j p_j log p_j (p_j being the probability of a system being in state j) and its continuum analogue H = -∫ p(x) log p(x) dx are fundamental in Shannon's theory of information transfer in communication systems. It is here shown that the discrete form of H also appears naturally in single-lane traffic flow theory. In merchandising, goods flow from a wholesaler through a retailer to a customer. Certain features of the process may be deduced from price distribution functions derived from Sears Roebuck and Company catalogues. It is found that the dispersion in the logarithm of catalogue prices of a given year has remained about constant, independently of the year, for over 75 years. From this it may be inferred that the continuum entropy function for the variable logarithm of price had inadvertently, through Sears Roebuck policies, been maximized for that firm subject to the observed dispersion. PMID:16593136
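For reference, the discrete form is a one-liner (standard Shannon entropy in nats; the example distribution is arbitrary):

```python
import numpy as np

def entropy(p):
    """Discrete entropy H = -sum_j p_j log p_j (natural log), ignoring zero entries."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

print(entropy([0.5, 0.25, 0.25]))   # ~1.0397 nats
```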
On the entropy function in sociotechnical systems.
Montroll, E W
1981-12-01
The entropy function H = -Σ_j p_j log p_j (p_j being the probability of a system being in state j) and its continuum analogue H = -∫ p(x) log p(x) dx are fundamental in Shannon's theory of information transfer in communication systems. It is here shown that the discrete form of H also appears naturally in single-lane traffic flow theory. In merchandising, goods flow from a wholesaler through a retailer to a customer. Certain features of the process may be deduced from price distribution functions derived from Sears Roebuck and Company catalogues. It is found that the dispersion in the logarithm of catalogue prices of a given year has remained about constant, independently of the year, for over 75 years. From this it may be inferred that the continuum entropy function for the variable logarithm of price had inadvertently, through Sears Roebuck policies, been maximized for that firm subject to the observed dispersion.
NASA Astrophysics Data System (ADS)
Jue, Brian J.; Bice, Michael D.
2013-07-01
As students explore the technological tools available to them for learning mathematics, some will eventually discover what happens when a function button is repeatedly pressed on a calculator. We explore several examples of this, presenting tabular and graphical results for the square root, natural logarithm and sine and cosine functions. Observed behaviour is proven and then discussed in the context of fixed points.
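A tiny sketch of the "repeatedly pressing a function button" experiment (my own illustration, not the article's tables): iterating cos or sqrt from an arbitrary start converges to the function's fixed point.

```python
import math

def iterate(f, x0, n=50):
    """Apply f to x0 repeatedly, n times, mimicking repeated key presses."""
    x = x0
    for _ in range(n):
        x = f(x)
    return x

print(iterate(math.cos, 2.0))    # -> ~0.7391 (fixed point of cos)
print(iterate(math.sqrt, 2.0))   # -> ~1.0    (fixed point of sqrt)
```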
Generalizing a Limit Description of the Natural Logarithm
ERIC Educational Resources Information Center
Dobbs, David E.
2010-01-01
If $f$ is a continuous positive-valued function defined on the closed interval from $a$ to $x$ and if $k_0 > 0$, then $\lim_{k \to 0^{+}} \int_{a}^{x} f(t)^{k-k_0}$…
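For orientation only (the article's generalization is truncated above), the familiar limit description being generalized is ln x = lim_{k→0+} (x^k − 1)/k, which is easy to check numerically; the step size k below is an arbitrary choice.

```python
import math

def ln_via_limit(x: float, k: float = 1e-6) -> float:
    # ln(x) = lim_{k -> 0+} (x**k - 1) / k
    return (x ** k - 1.0) / k

for x in (0.5, 2.0, 10.0):
    print(x, ln_via_limit(x), math.log(x))
```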
NASA Astrophysics Data System (ADS)
Zaal, K. J. J. M.
1991-06-01
In programmed solutions of problems from complex function theory, the multi-valued complex logarithm is replaced by a single-valued logarithmic function, introducing a discontinuity along the branch cut into the programmed solution which was not present in the mathematical solution. Recently, Liaw and Kamel presented their solution of the infinite anisotropic centrally cracked plate loaded by an arbitrary point force, which they used as Green's function in a boundary element method intended to evaluate the stress intensity factor at the tip of a crack originating from an elliptical hole. Their solution may be used as the Green's function of many more numerical methods involving anisotropic elasticity. In programming applications of Liaw and Kamel's solution, the standard definition of the logarithmic function with the branch cut at the nonpositive real axis cannot provide a reliable computation of the displacement field. Either the branch cut should be redefined outside the domain of the logarithmic function, after proving that the domain is limited to a part of the plane, or the logarithmic function should be defined on its Riemann surface. A two-dimensional line fractal can provide the link between all mesh points on the plane essential to evaluate the logarithm function on its Riemann surface. As an example, a two-dimensional line fractal is defined for a mesh once used by Erdogan and Arin.
Stark, J A; Hladky, S B
2000-02-01
Dwell-time histograms are often plotted as part of patch-clamp investigations of ion channel currents. The advantages of plotting these histograms with a logarithmic time axis have been demonstrated previously (J. Physiol. (Lond.) 378:141-174; Pflügers Arch. 410:530-553; Biophys. J. 52:1047-1054). Sigworth and Sine argued that the interpretation of such histograms is simplified if the counts are presented in a manner similar to that of a probability density function. However, when ion channel records are recorded as a discrete time series, the dwell times are quantized. As a result, the mapping of dwell times to logarithmically spaced bins is highly irregular; bins may be empty, and significant irregularities may extend beyond the duration of 100 samples. Using simple approximations based on the nature of the binning process and the transformation rules for probability density functions, we develop adjustments for the display of the counts to compensate for this effect. Tests with simulated data suggest that this procedure provides a faithful representation of the data.
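A rough sketch of the quantization problem and one density-style compensation in the spirit of the paper (my own construction, with an assumed sample interval and dwell-time distribution, not the authors' exact adjustment): divide each logarithmic bin's event count by the number of representable quantized durations it contains, and note that some bins contain none.

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.1e-3                                   # sample interval (s), an assumed value
tau = 2e-3                                    # assumed mean dwell time (s)
dwells = np.ceil(rng.exponential(tau, 20000) / dt) * dt   # dwell times quantized to k*dt

edges = np.logspace(np.log10(dt), np.log10(0.1), 40)      # log-spaced bin edges (s)
counts, _ = np.histogram(dwells, bins=edges)

# Approximate number of representable durations k*dt falling inside each bin (may be zero!)
k_lo = np.floor(edges[:-1] / dt)
k_hi = np.floor(edges[1:] / dt)
n_quantized = (k_hi - k_lo).astype(int)

density = np.where(n_quantized > 0, counts / np.maximum(n_quantized, 1), 0.0)
print("bins with no representable duration:", int(np.sum(n_quantized == 0)))
```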
Function algorithms for MPP scientific subroutines, volume 1
NASA Technical Reports Server (NTRS)
Gouch, J. G.
1984-01-01
Design documentation and user documentation for function algorithms for the Massively Parallel Processor (MPP) are presented. The contract specifies development of MPP assembler instructions to perform the following functions: natural logarithm; exponential (e to the x power); square root; sine; cosine; and arctangent. To fulfill the requirements of the contract, parallel array and scalar implementations for these functions were developed on the PDP11/34 Program Development and Management Unit (PDMU) that is resident at the MPP testbed installation located at the NASA Goddard facility.
NASA Astrophysics Data System (ADS)
Liang, Yingjie; Chen, Wen
2018-04-01
The mean squared displacement (MSD) of traditional ultraslow diffusion is a logarithmic function of time. Recently, the continuous time random walk model has been employed to characterize this ultraslow diffusion dynamics by connecting the heavy-tailed logarithmic function and its variation as the asymptotical waiting time density. In this study we investigate the limiting waiting time density of a general ultraslow diffusion model via the inverse Mittag-Leffler function, which includes the traditional logarithmic ultraslow diffusion model as a special case. The MSD of the general ultraslow diffusion model is analytically derived as an inverse Mittag-Leffler function, and is observed to increase even more slowly than that of the logarithmic function model. Very long waiting times have the largest probability of occurrence in the case of the inverse Mittag-Leffler function, compared with the power law model and the logarithmic function model. Monte Carlo simulations of the one-dimensional sample path of a single particle are also performed. The results show that the inverse Mittag-Leffler waiting time density is effective in depicting the general ultraslow random motion.
NASA Astrophysics Data System (ADS)
Liang, Yingjie; Chen, Wen
2018-03-01
Ultraslow diffusion has been observed in numerous complicated systems. Its mean squared displacement (MSD) is not a power law function of time, but instead a logarithmic function, and in some cases grows even more slowly than the logarithmic rate. The distributed-order fractional diffusion equation model simply does not work for general ultraslow diffusion. A recent study used the local structural derivative to describe ultraslow diffusion dynamics by using the inverse Mittag-Leffler function as the structural function, in which the MSD is a function of the inverse Mittag-Leffler function. In this study, a new stretched logarithmic diffusion law and its underlying non-local structural derivative diffusion model are proposed to characterize the ultraslow diffusion in aging dense colloidal glass at both short and long waiting times. It is observed that the aging dynamics of dense colloids is a class of stretched logarithmic ultraslow diffusion processes. Compared with the power, the logarithmic, and the inverse Mittag-Leffler diffusion laws, the stretched logarithmic diffusion law has better precision in fitting the MSD of the colloidal particles at high densities. The corresponding non-local structural derivative diffusion equation manifests a clear physical mechanism, and its structural function is equivalent to the first-order derivative of the MSD.
NASA Astrophysics Data System (ADS)
Yang, X. I. A.; Marusic, I.; Meneveau, C.
2016-06-01
Townsend [Townsend, The Structure of Turbulent Shear Flow (Cambridge University Press, Cambridge, UK, 1976)] hypothesized that the logarithmic region in high-Reynolds-number wall-bounded flows consists of space-filling, self-similar attached eddies. Invoking this hypothesis, we express streamwise velocity fluctuations in the inertial layer in high-Reynolds-number wall-bounded flows as a hierarchical random additive process (HRAP): $u_z^+ = \sum_{i=1}^{N_z} a_i$. Here $u$ is the streamwise velocity fluctuation, $+$ indicates normalization in wall units, $z$ is the wall-normal distance, and the $a_i$ are independently, identically distributed random additives, each of which is associated with an attached eddy in the wall-attached hierarchy. The number of random additives is $N_z \sim \ln(\delta/z)$, where $\delta$ is the boundary layer thickness and ln is the natural log. Due to its simplified structure, such a process leads to predictions of the scaling behaviors for various turbulence statistics in the logarithmic layer. Besides reproducing known logarithmic scaling of moments, structure functions, and correlation functions…
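A toy Monte Carlo of the HRAP picture (my own sketch; unit-variance Gaussian additives and the specific constants are assumptions): with N_z ≈ ln(δ/z) i.i.d. additives, the variance of u⁺ grows logarithmically as z decreases.

```python
import numpy as np

rng = np.random.default_rng(0)
delta = 1.0                                   # boundary layer thickness (arbitrary units)
heights = np.array([0.5, 0.2, 0.1, 0.05, 0.02, 0.01]) * delta

for z in heights:
    n_z = max(int(np.log(delta / z)), 1)      # number of attached-eddy additives
    # u+ realizations: sum of n_z i.i.d. additives (unit-variance Gaussians assumed)
    u_plus = rng.normal(size=(100000, n_z)).sum(axis=1)
    print(f"z/delta = {z/delta:5.2f}  N_z = {n_z}  var(u+) = {u_plus.var():.2f}")
```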
NASA Astrophysics Data System (ADS)
Vaninsky, Alexander
2015-04-01
Defining the logarithmic function as a definite integral with a variable upper limit, an approach used by some popular calculus textbooks, is problematic. We discuss the disadvantages of such a definition and provide a way to fix the problem. We also consider a definition-based, rigorous derivation of the derivative of the exponential function that is easier, more intuitive, and complies with the standard definitions of the number e, the logarithmic, and the exponential functions.
The Role of Hellinger Processes in Mathematical Finance
NASA Astrophysics Data System (ADS)
Choulli, T.; Hurd, T. R.
2001-09-01
This paper illustrates the natural role that Hellinger processes can play in solving problems from finance. We propose an extension of the concept of Hellinger process applicable to entropy distance and f-divergence distances, where f is a convex logarithmic function or a convex power function with general order q, 0 ≠ q < 1. These concepts lead to a new approach to Merton's optimal portfolio problem and its dual in general Lévy markets.
Numerical solution of the quantum Lenard-Balescu equation for a non-degenerate one-component plasma
Scullard, Christian R.; Belt, Andrew P.; Fennell, Susan C.; ...
2016-09-01
We present a numerical solution of the quantum Lenard-Balescu equation using a spectral method, namely an expansion in Laguerre polynomials. This method exactly conserves both particles and kinetic energy and facilitates the integration over the dielectric function. To demonstrate the method, we solve the equilibration problem for a spatially homogeneous one-component plasma with various initial conditions. Unlike the more usual Landau/Fokker-Planck system, this method requires no input Coulomb logarithm; the logarithmic terms in the collision integral arise naturally from the equation along with the non-logarithmic order-unity terms. The spectral method can also be used to solve the Landau equation and a quantum version of the Landau equation in which the integration over the wavenumber requires only a lower cutoff. We solve these problems as well and compare them with the full Lenard-Balescu solution in the weak-coupling limit. Finally, we discuss the possible generalization of this method to include spatial inhomogeneity and velocity anisotropy.
Logarithmic black hole entropy corrections and holographic Rényi entropy
NASA Astrophysics Data System (ADS)
Mahapatra, Subhash
2018-01-01
The entanglement and Rényi entropies for spherical entangling surfaces in CFTs with gravity duals can be explicitly calculated by mapping these entropies first to the thermal entropy on hyperbolic space and then, using the AdS/CFT correspondence, to the Wald entropy of topological black holes. Here we extend this idea by taking into account corrections to the Wald entropy. Using the method based on horizon symmetries and the asymptotic Cardy formula, we calculate corrections to the Wald entropy and find that these corrections are proportional to the logarithm of the area of the horizon. With the corrected expression for the entropy of the black hole, we then find corrections to the Rényi entropies. We calculate these corrections for both Einstein and Gauss-Bonnet gravity duals. Corrections with logarithmic dependence on the area of the entangling surface naturally occur at order $G_D^0$. The entropic c-function and the inequalities of the Rényi entropy are also satisfied even with the correction terms.
Abelian non-global logarithms from soft gluon clustering
NASA Astrophysics Data System (ADS)
Kelley, Randall; Walsh, Jonathan R.; Zuberi, Saba
2012-09-01
Most recombination-style jet algorithms cluster soft gluons in a complex way. This leads to previously identified correlations in the soft gluon phase space and introduces logarithmic corrections to jet cross sections, which are known as clustering logarithms. The leading Abelian clustering logarithms occur at least at next-to-leading logarithm (NLL) in the exponent of the distribution. Using the framework of Soft Collinear Effective Theory (SCET), we show that new clustering effects contributing at NLL arise at each order. While numerical resummation of clustering logs is possible, it is unlikely that they can be analytically resummed to NLL. Clustering logarithms make the anti-$k_T$ algorithm theoretically preferred, for which they are power suppressed. They can arise in Abelian and non-Abelian terms, and we calculate the Abelian clustering logarithms at $\mathcal{O}(\alpha_s^2)$ for the jet mass distribution using the Cambridge/Aachen and $k_T$ algorithms, including jet radius dependence, which extends previous results. We find that clustering logarithms can be naturally thought of as a class of non-global logarithms, which have traditionally been tied to non-Abelian correlations in soft gluon emission.
A Bid Price Equation For Timber Sales on the Ouachita and Ozark National Forests
Michael M. Huebschmann; Thomas B. Lynch; David K. Lewis; Daniel S. Tilley; James M. Guldin
2004-01-01
Data from 150 timber sales on the Ouachita and Ozark National Forests in Arkansas and southeastern Oklahoma were used to develop an equation that relates bid prices to timber sale variables. Variables used to predict the natural logarithm of the real, winning total bid price are the natural logarithms of total sawtimber volume per sale, total pulpwood volume per sale...
NASA Technical Reports Server (NTRS)
Pierson, Willard J., Jr.
1989-01-01
The values of the Normalized Radar Backscattering Cross Section (NRCS), sigma (o), obtained by a scatterometer are random variables whose variance is a known function of the expected value. The probability density function can be obtained from the normal distribution. Models for the expected value obtain it as a function of the properties of the waves on the ocean and the winds that generated the waves. Point estimates of the expected value were found from various statistics given the parameters that define the probability density function for each value. Random intervals were derived with a preassigned probability of containing that value. A statistical test to determine whether or not successive values of sigma (o) are truly independent was derived. The maximum likelihood estimates for wind speed and direction were found, given a model for backscatter as a function of the properties of the waves on the ocean. These estimates are biased as a result of the terms in the equation that involve natural logarithms, and calculations of the point estimates of the maximum likelihood values are used to show that the contributions of the logarithmic terms are negligible and that the terms can be omitted.
Laser induced phosphorescence uranium analysis
Bushaw, B.A.
1983-06-10
A method is described for measuring the uranium content of aqueous solutions wherein a uranyl phosphate complex is irradiated with a 5 nanosecond pulse of 425 nanometer laser light and resultant 520 nanometer emissions are observed for a period of 50 to 400 microseconds after the pulse. Plotting the natural logarithm of emission intensity as a function of time yields an intercept value which is proportional to uranium concentration.
1993-10-29
natural logarithm of the ratio of two maxima a period apart. Both methods are based on the results from the numerical integration. The details of this... check and okay member functions are for software handshaking between the client and server process. Finally, the Forward function is used to initiate a
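The first fragment refers to the logarithmic-decrement method; a minimal sketch of that standard calculation (my own, not the report's code) is:

```python
import math

def damping_from_peaks(x1: float, x2: float) -> tuple[float, float]:
    """Logarithmic decrement from two successive displacement maxima one period
    apart, and the damping ratio it implies for a lightly damped oscillator."""
    delta = math.log(x1 / x2)                                  # logarithmic decrement
    zeta = delta / math.sqrt(4 * math.pi ** 2 + delta ** 2)    # damping ratio
    return delta, zeta

print(damping_from_peaks(1.00, 0.80))   # delta ~ 0.223, zeta ~ 0.0355
```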
Laser induced phosphorescence uranium analysis
Bushaw, Bruce A.
1986-01-01
A method is described for measuring the uranium content of aqueous solutions wherein a uranyl phosphate complex is irradiated with a 5 nanosecond pulse of 425 nanometer laser light and resultant 520 nanometer emissions are observed for a period of 50 to 400 microseconds after the pulse. Plotting the natural logarithm of emission intensity as a function of time yields an intercept value which is proportional to uranium concentration.
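A small synthetic sketch of the described analysis (the lifetime, intensity scale, and noise level are assumptions, not measured values): fit a straight line to the natural logarithm of emission intensity versus time and read off the intercept, which the method takes as proportional to uranium concentration.

```python
import numpy as np

# Synthetic decay: I(t) = I0 * exp(-t / tau), observed 50-400 microseconds after the pulse
tau = 150e-6                       # assumed emission lifetime (s)
i0 = 3.2                           # assumed initial intensity, proportional to concentration
t = np.linspace(50e-6, 400e-6, 36)
rng = np.random.default_rng(2)
intensity = i0 * np.exp(-t / tau) * rng.normal(1.0, 0.01, t.size)

slope, intercept = np.polyfit(t, np.log(intensity), 1)
print("recovered lifetime (us):", -1.0 / slope * 1e6)   # ~150
print("ln-intensity intercept :", intercept)            # ~ln(3.2) ~ 1.16
```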
Persiani, Anna Maria; Maggi, Oriana
2013-01-01
Experimental fires, of both low and high intensity, were lit during summer 2000 and the following 2 y in the Castel Volturno Nature Reserve, southern Italy. Soil samples were collected Jul 2000-Jul 2002 to analyze the soil fungal community dynamics. Species abundance distribution patterns (geometric, logarithmic, log normal, broken-stick) were compared. We plotted datasets with information both on species richness and abundance for total, xerotolerant and heat-stimulated soil microfungi. The xerotolerant fungi conformed to a broken-stick model for both the low- and high intensity fires at 7 and 84 d after the fire; their distribution subsequently followed logarithmic models in the 2 y following the fire. The distribution of the heat-stimulated fungi changed from broken-stick to logarithmic models and eventually to a log-normal model during the post-fire recovery. Xerotolerant and, to a far greater extent, heat-stimulated soil fungi acquire an important functional role following soil water stress and/or fire disturbance; these disturbances let them occupy unsaturated habitats and become increasingly abundant over time.
Bio-Inspired Microsystem for Robust Genetic Assay Recognition
Lue, Jaw-Chyng; Fang, Wai-Chi
2008-01-01
A compact integrated system-on-chip (SoC) architecture solution for robust, real-time, and on-site genetic analysis has been proposed. This microsystem solution is noise-tolerant and suitable for analyzing the weak fluorescence patterns from a PCR-prepared, dual-labeled DNA microchip assay. In the architecture, a preceding VLSI differential logarithm microchip is designed for effectively computing the logarithm of the normalized input fluorescence signals. A posterior VLSI artificial neural network (ANN) processor chip is used for analyzing the processed signals from the differential logarithm stage. A single-channel logarithmic circuit was fabricated and characterized. A prototype ANN chip with unsupervised winner-take-all (WTA) function was designed, fabricated, and tested. An ANN learning algorithm using a novel sigmoid-logarithmic transfer function based on the supervised backpropagation (BP) algorithm is proposed for robustly recognizing low-intensity patterns. Our results show that the trained new ANN can recognize low-fluorescence patterns better than an ANN using the conventional sigmoid function. PMID:18566679
Approximating Exponential and Logarithmic Functions Using Polynomial Interpolation
ERIC Educational Resources Information Center
Gordon, Sheldon P.; Yang, Yajun
2017-01-01
This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is…
Monotonicity and Logarithmic Concavity of Two Functions Involving Exponential Function
ERIC Educational Resources Information Center
Liu, Ai-Qi; Li, Guo-Fu; Guo, Bai-Ni; Qi, Feng
2008-01-01
The function $\frac{1}{x^{2}} - \frac{e^{-x}}{(1-e^{-x})^{2}}$ for $x$ greater than 0 is proved to be strictly decreasing. As an application of this monotonicity, the logarithmic concavity of the function $\frac{t}{e^{at} - e^{(a-1)t}}$ for $a$…
Ming Gu; Chakrabartty, Shantanu
2014-06-01
This paper presents the design of a programmable gain, temperature compensated, current-mode CMOS logarithmic amplifier that can be used for biomedical signal processing. Unlike conventional logarithmic amplifiers that use a transimpedance technique to generate a voltage signal as a logarithmic function of the input current, the proposed approach directly produces a current output as a logarithmic function of the input current. Also, unlike a conventional transimpedance amplifier the gain of the proposed logarithmic amplifier can be programmed using floating-gate trimming circuits. The synthesis of the proposed circuit is based on the Hart's extended translinear principle which involves embedding a floating-voltage source and a linear resistive element within a translinear loop. Temperature compensation is then achieved using a translinear-based resistive cancelation technique. Measured results from prototypes fabricated in a 0.5 μm CMOS process show that the amplifier has an input dynamic range of 120 dB and a temperature sensitivity of 230 ppm/°C (27 °C- 57°C), while consuming less than 100 nW of power.
NASA Technical Reports Server (NTRS)
Freed, Alan D.
1997-01-01
Logarithmic strain is the preferred measure of strain used by materials scientists, who typically refer to it as the "true strain." It was Nadai who gave it the name "natural strain," which seems more appropriate. This strain measure was proposed by Ludwik for the one-dimensional extension of a rod with length l. It was defined via the integral of dl/l to which Ludwik gave the name "effective specific strain." Today, it is after Hencky, who extended Ludwik's measure to three-dimensional analysis by defining logarithmic strains for the three principal directions.
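A one-function illustration of the measure (standard definitions, not taken from the article): integrating dl/l from l0 to l gives the logarithmic ("true") strain ln(l/l0), which departs from the engineering strain (l − l0)/l0 at large stretches.

```python
import math

def strains(l0: float, l: float) -> tuple[float, float]:
    """Return (logarithmic 'true' strain, engineering strain) for a rod
    stretched from length l0 to length l."""
    return math.log(l / l0), (l - l0) / l0

for stretch in (1.01, 1.1, 1.5, 2.0):
    log_e, eng_e = strains(1.0, stretch)
    print(f"stretch {stretch:4.2f}: true strain {log_e:.4f}, engineering strain {eng_e:.4f}")
```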
ERIC Educational Resources Information Center
Matta, Cherif F.; Massa, Lou; Gubskaya, Anna V.; Knoll, Eva
2011-01-01
The fate of dimensions of dimensioned quantities that are inserted into the argument of transcendental functions such as logarithms, exponentiation, trigonometric, and hyperbolic functions is discussed. Emphasis is placed on common misconceptions that are not often systematically examined in undergraduate courses of physical sciences. The argument…
Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan
2015-10-16
An important class of complementary metal-oxide-semiconductor (CMOS) image sensors are those where pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging, at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected due to mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient.
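A rough sketch of the low-degree-polynomial idea (my own construction with simulated pixels, not the authors' pipeline or sensor data): fit each pixel's monotonic response to a reference response with a degree-1 polynomial (offset and gain) and invert the fit to reduce fixed pattern noise.

```python
import numpy as np

rng = np.random.default_rng(3)
stimuli = np.logspace(0, 4, 25)                      # light stimuli (arbitrary units)
reference = np.log(1.0 + stimuli)                    # assumed monotonic (logarithmic) response

# Simulate per-pixel offset/gain mismatch plus noise (the source of fixed pattern noise)
n_pixels = 1000
offset = rng.normal(0.0, 0.2, n_pixels)
gain = rng.normal(1.0, 0.05, n_pixels)
responses = offset[:, None] + gain[:, None] * reference[None, :] \
            + rng.normal(0.0, 0.01, (n_pixels, stimuli.size))

# Degree-1 calibration of each pixel against the reference response
coeffs = np.polyfit(reference, responses.T, 1)       # shape (2, n_pixels): gains, offsets
corrected = (responses - coeffs[1][:, None]) / coeffs[0][:, None]

print("FPN before:", responses.std(axis=0).mean())
print("FPN after :", corrected.std(axis=0).mean())
```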
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakashima, Hiroyuki; Nakatsuji, Hiroshi
2007-12-14
The Schroedinger equation was solved very accurately for the helium atom and its isoelectronic ions (Z=1-10) with the free iterative complement interaction (ICI) method followed by the variational principle. We obtained highly accurate wave functions and energies of the helium atom and its isoelectronic ions. For helium, the calculated energy was -2.903 724 377 034 119 598 311 159 245 194 404 446 696 905 37 a.u., correct to over 40-digit accuracy, and for H^-, it was -0.527 751 016 544 377 196 590 814 566 747 511 383 045 02 a.u. These results prove numerically that with the free ICI method, we can calculate the solutions of the Schroedinger equation as accurately as one desires. We examined several types of scaling function g and initial function ψ_0 of the free ICI method. The performance was good when logarithm functions were used in the initial function because the logarithm function is physically essential for the three-particle collision area. The best performance was obtained when we introduced a new logarithm function containing not only r_1 and r_2 but also r_12 in the same logarithm function.
Mathematical model for logarithmic scaling of velocity fluctuations in wall turbulence.
Mouri, Hideaki
2015-12-01
For wall turbulence, moments of velocity fluctuations are known to be logarithmic functions of the height from the wall. This logarithmic scaling is due to the existence of a characteristic velocity and to the nonexistence of any characteristic height in the range of the scaling. By using the mathematics of random variables, we obtain its necessary and sufficient conditions. They are compared with characteristics of a phenomenological model of eddies attached to the wall and also with those of the logarithmic scaling of the mean velocity.
Approximating exponential and logarithmic functions using polynomial interpolation
NASA Astrophysics Data System (ADS)
Gordon, Sheldon P.; Yang, Yajun
2017-04-01
This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is analysed. The results of interpolating polynomials are compared with those of Taylor polynomials.
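A brief numerical illustration of the comparison (my own choice of nodes and interval, not the article's): interpolate ln x at four points on [1, 2] and compare the worst-case error with the degree-3 Taylor polynomial about x = 1.

```python
import numpy as np

nodes = np.array([1.0, 1.3, 1.7, 2.0])               # assumed interpolation nodes
interp_coeffs = np.polyfit(nodes, np.log(nodes), 3)  # cubic interpolating polynomial

def taylor_ln(x):
    # Degree-3 Taylor polynomial of ln(x) about x = 1
    u = x - 1.0
    return u - u**2 / 2 + u**3 / 3

x = np.linspace(1.0, 2.0, 201)
err_interp = np.abs(np.polyval(interp_coeffs, x) - np.log(x)).max()
err_taylor = np.abs(taylor_ln(x) - np.log(x)).max()
print(f"max error on [1,2]: interpolation {err_interp:.2e}, Taylor {err_taylor:.2e}")
```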
Luo, Yuan; Szolovits, Peter
2016-01-01
In natural language processing, stand-off annotation uses the starting and ending positions of an annotation to anchor it to the text and stores the annotation content separately from the text. We address the fundamental problem of efficiently storing stand-off annotations when applying natural language processing to narrative clinical notes in electronic medical records (EMRs) and efficiently retrieving such annotations that satisfy position constraints. Efficient storage and retrieval of stand-off annotations can facilitate tasks such as mapping unstructured text to electronic medical record ontologies. We first formulate this problem as the interval query problem, for which the optimal query/update time is in general logarithmic. We next perform a tight time complexity analysis on the basic interval tree query algorithm and show its nonoptimality when applied to a collection of 13 query types from Allen's interval algebra. We then study two closely related state-of-the-art interval query algorithms, proposed query reformulations, and augmentations to the second algorithm. Our proposed algorithm achieves logarithmic stabbing-max query time complexity and solves the stabbing-interval query tasks on all of Allen's relations in logarithmic time, attaining the theoretical lower bound. Updating time is kept logarithmic and the space requirement is kept linear at the same time. We also discuss interval management in external memory models and higher dimensions.
Luo, Yuan; Szolovits, Peter
2016-01-01
In natural language processing, stand-off annotation uses the starting and ending positions of an annotation to anchor it to the text and stores the annotation content separately from the text. We address the fundamental problem of efficiently storing stand-off annotations when applying natural language processing to narrative clinical notes in electronic medical records (EMRs) and efficiently retrieving such annotations that satisfy position constraints. Efficient storage and retrieval of stand-off annotations can facilitate tasks such as mapping unstructured text to electronic medical record ontologies. We first formulate this problem as the interval query problem, for which the optimal query/update time is in general logarithmic. We next perform a tight time complexity analysis on the basic interval tree query algorithm and show its nonoptimality when applied to a collection of 13 query types from Allen’s interval algebra. We then study two closely related state-of-the-art interval query algorithms, proposed query reformulations, and augmentations to the second algorithm. Our proposed algorithm achieves logarithmic stabbing-max query time complexity and solves the stabbing-interval query tasks on all of Allen’s relations in logarithmic time, attaining the theoretical lower bound. Updating time is kept logarithmic and the space requirement is kept linear at the same time. We also discuss interval management in external memory models and higher dimensions. PMID:27478379
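As a generic illustration of the kind of structure behind such logarithmic-time guarantees (a textbook centered interval tree for stabbing queries, not the authors' augmented algorithm; the annotation tuples are hypothetical):

```python
class IntervalNode:
    """Centered interval tree supporting stabbing queries ('which annotations cover
    position q?') in roughly O(log n + k) time for n intervals and k hits."""
    def __init__(self, intervals):
        xs = sorted({x for iv in intervals for x in iv[:2]})
        self.center = xs[len(xs) // 2]
        here = [iv for iv in intervals if iv[0] <= self.center <= iv[1]]
        left = [iv for iv in intervals if iv[1] < self.center]
        right = [iv for iv in intervals if iv[0] > self.center]
        self.by_start = sorted(here, key=lambda iv: iv[0])               # ascending starts
        self.by_end = sorted(here, key=lambda iv: iv[1], reverse=True)   # descending ends
        self.left = IntervalNode(left) if left else None
        self.right = IntervalNode(right) if right else None

    def stab(self, q):
        hits = []
        if q <= self.center:
            for iv in self.by_start:           # only intervals starting at or before q can hit
                if iv[0] > q:
                    break
                hits.append(iv)
            if q < self.center and self.left:
                hits += self.left.stab(q)
        else:
            for iv in self.by_end:             # only intervals ending at or after q can hit
                if iv[1] < q:
                    break
                hits.append(iv)
            if self.right:
                hits += self.right.stab(q)
        return hits

# Toy annotations as (start, end, label) character spans
annotations = [(0, 10, "note"), (3, 5, "term"), (8, 20, "section"), (15, 18, "term")]
print(IntervalNode(annotations).stab(9))   # -> [(0, 10, 'note'), (8, 20, 'section')]
```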
Some properties of the Catalan-Qi function related to the Catalan numbers.
Qi, Feng; Mahmoud, Mansour; Shi, Xiao-Ting; Liu, Fang-Fang
2016-01-01
In the paper, the authors find some properties of the Catalan numbers, the Catalan function, and the Catalan-Qi function which is a generalization of the Catalan numbers. Concretely speaking, the authors present a new expression, asymptotic expansions, integral representations, logarithmic convexity, complete monotonicity, minimality, logarithmically complete monotonicity, a generating function, and inequalities of the Catalan numbers, the Catalan function, and the Catalan-Qi function. As by-products, an exponential expansion and a double inequality for the ratio of two gamma functions are derived.
Logarithmic M(2,p) minimal models, their logarithmic couplings, and duality
NASA Astrophysics Data System (ADS)
Mathieu, Pierre; Ridout, David
2008-10-01
A natural construction of the logarithmic extension of the M(2,p) (chiral) minimal models is presented, which generalises our previous model of percolation ( p=3). Its key aspect is the replacement of the minimal model irreducible modules by reducible ones obtained by requiring that only one of the two principal singular vectors of each module vanish. The resulting theory is then constructed systematically by repeatedly fusing these building block representations. This generates indecomposable representations of the type which signify the presence of logarithmic partner fields in the theory. The basic data characterising these indecomposable modules, the logarithmic couplings, are computed for many special cases and given a new structural interpretation. Quite remarkably, a number of them are presented in closed analytic form (for general p). These are the prime examples of "gauge-invariant" data—quantities independent of the ambiguities present in defining the logarithmic partner fields. Finally, mere global conformal invariance is shown to enforce strong constraints on the allowed spectrum: It is not possible to include modules other than those generated by the fusion of the model's building blocks. This generalises the statement that there cannot exist two effective central charges in a c=0 model. It also suggests the existence of a second "dual" logarithmic theory for each p. Such dual models are briefly discussed.
ERIC Educational Resources Information Center
Caglayan, Günhan
2014-01-01
This study investigates prospective secondary mathematics teachers' visual representations of polynomial and rational inequalities, and graphs of exponential and logarithmic functions with GeoGebra Dynamic Software. Five prospective teachers in a university in the United States participated in this research study, which was situated within a…
NASA Astrophysics Data System (ADS)
Lee, Scott A.
2014-03-01
High-pressure Raman spectroscopy has been used to study the eigenvectors and eigenvalues of the low-frequency vibrational modes of crystalline cytidine at 295 K by evaluating the logarithmic derivative of the vibrational frequency with respect to pressure: (1/ω) dω/dP. Crystalline samples of molecular materials such as cytidine have vibrational modes that are localized within a molecular unit ("internal" modes) as well as modes in which the molecular units vibrate against each other ("external" modes). The value of the logarithmic derivative is a diagnostic probe of the nature of the eigenvector of the vibrational modes, making high-pressure experiments a very useful probe for such studies. Internal stretching modes have low logarithmic derivatives, while external as well as internal torsional and bending modes have higher logarithmic derivatives. All of the Raman modes below 200 cm-1 in cytidine are found to have high logarithmic derivatives, consistent with being either external modes or internal torsional or bending modes.
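A small numerical sketch of the diagnostic (synthetic ω(P) series, not the cytidine measurements; the pressure grid and slopes are assumptions): compute (1/ω) dω/dP from frequency-versus-pressure data and compare a stiff internal mode with a soft external mode.

```python
import numpy as np

pressure = np.linspace(0.0, 5.0, 11)                  # assumed pressure grid (GPa)
# Two synthetic modes: a stiff internal stretch and a softer external/torsional mode
omega_internal = 1200.0 + 4.0 * pressure              # cm^-1
omega_external = 80.0 + 6.0 * pressure                # cm^-1

for name, omega in [("internal", omega_internal), ("external", omega_external)]:
    log_deriv = np.gradient(omega, pressure) / omega  # (1/omega) d(omega)/dP
    print(f"{name:>8} mode: mean logarithmic derivative = {log_deriv.mean():.4f} GPa^-1")
```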
Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan
2015-01-01
An important class of complementary metal-oxide-semiconductor (CMOS) image sensors are those where pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging, at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected due to mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient. PMID:26501287
Graviton 1-loop partition function for 3-dimensional massive gravity
NASA Astrophysics Data System (ADS)
Gaberdiel, Matthias R.; Grumiller, Daniel; Vassilevich, Dmitri
2010-11-01
The graviton 1-loop partition function in Euclidean topologically massive gravity (TMG) is calculated using heat kernel techniques. The partition function does not factorize holomorphically, and at the chiral point it has the structure expected from a logarithmic conformal field theory. This gives strong evidence for the proposal that the dual conformal field theory to TMG at the chiral point is indeed logarithmic. We also generalize our results to new massive gravity.
Robust Bioinformatics Recognition with VLSI Biochip Microsystem
NASA Technical Reports Server (NTRS)
Lue, Jaw-Chyng L.; Fang, Wai-Chi
2006-01-01
A microsystem architecture for real-time, on-site, robust bioinformatic pattern recognition and analysis has been proposed. This system is compatible with on-chip DNA analysis means such as polymerase chain reaction (PCR) amplification. A corresponding novel artificial neural network (ANN) learning algorithm using a new sigmoid-logarithmic transfer function based on the error backpropagation (EBP) algorithm is invented. Our results show that the trained new ANN can recognize low fluorescence patterns better than the conventional sigmoidal ANN does. A differential logarithmic imaging chip is designed for calculating the logarithm of relative intensities of fluorescence signals. The single-rail logarithmic circuit and a prototype ANN chip are designed, fabricated and characterized.
NASA Technical Reports Server (NTRS)
Lan, C. E.; Lamar, J. E.
1977-01-01
A logarithmic-singularity correction factor is derived for use in kernel function methods associated with Multhopp's subsonic lifting-surface theory. Because of the form of the factor, a relation was formulated between the numbers of chordwise and spanwise control points needed for good accuracy. This formulation is developed and discussed. Numerical results are given to show the improvement of the computation with the new correction factor.
Logarithmic conformal field theory: beyond an introduction
NASA Astrophysics Data System (ADS)
Creutzig, Thomas; Ridout, David
2013-12-01
This article aims to review a selection of central topics and examples in logarithmic conformal field theory. It begins with the remarkable observation of Cardy that the horizontal crossing probability of critical percolation may be computed analytically within the formalism of boundary conformal field theory. Cardy’s derivation relies on certain implicit assumptions which are shown to lead inexorably to indecomposable modules and logarithmic singularities in correlators. For this, a short introduction to the fusion algorithm of Nahm, Gaberdiel and Kausch is provided. While the percolation logarithmic conformal field theory is still not completely understood, there are several examples for which the formalism familiar from rational conformal field theory, including bulk partition functions, correlation functions, modular transformations, fusion rules and the Verlinde formula, has been successfully generalized. This is illustrated for three examples: the singlet model $\mathfrak{M}(1,2)$, related to the triplet model $\mathfrak{W}(1,2)$, symplectic fermions and the fermionic bc ghost system; the fractional level Wess-Zumino-Witten model based on $\widehat{\mathfrak{sl}}(2)$ at $k=-\frac{1}{2}$, related to the bosonic βγ ghost system; and the Wess-Zumino-Witten model for the Lie supergroup $\mathsf{GL}(1|1)$, related to $\mathsf{SL}(2|1)$ at $k=-\frac{1}{2}$ and $1$, the Bershadsky-Polyakov algebra $W_3^{(2)}$ and the Feigin-Semikhatov algebras $W_n^{(2)}$. These examples have been chosen because they represent the most accessible, and most useful, members of the three best-understood families of logarithmic conformal field theories: the logarithmic minimal models $\mathfrak{W}(q,p)$, the fractional level Wess-Zumino-Witten models, and the Wess-Zumino-Witten models on Lie supergroups (excluding $\mathsf{OSP}(1|2n)$). In this review, the emphasis lies on the representation theory of the underlying chiral algebra and the modular data pertaining to the characters of the representations. Each of the archetypal logarithmic conformal field theories is studied here by first determining its irreducible spectrum, which turns out to be continuous, as well as a selection of natural reducible, but indecomposable, modules. This is followed by a detailed description of how to obtain character formulae for each irreducible, a derivation of the action of the modular group on the characters, and an application of the Verlinde formula to compute the Grothendieck fusion rules. In each case, the (genuine) fusion rules are known, so comparisons can be made and favourable conclusions drawn. In addition, each example admits an infinite set of simple currents, hence extended symmetry algebras may be constructed and a series of bulk modular invariants computed. The spectrum of such an extended theory is typically discrete and this is how the triplet model $\mathfrak{W}(1,2)$ arises, for example. Moreover, simple current technology admits a derivation of the extended algebra fusion rules from those of its continuous parent theory. Finally, each example is concluded by a brief description of the computation of some bulk correlators, a discussion of the structure of the bulk state space, and remarks concerning more advanced developments and generalizations.
The final part gives a very short account of the theory of staggered modules, the (simplest class of) representations that are responsible for the logarithmic singularities that distinguish logarithmic theories from their rational cousins. These modules are discussed in a generality suitable to encompass all the examples met in this review and some of the very basic structure theory is proven. Then, the important quantities known as logarithmic couplings are reviewed for Virasoro staggered modules and their role as fundamentally important parameters, akin to the three-point constants of rational conformal field theory, is discussed. An appendix is also provided in order to introduce some of the necessary, but perhaps unfamiliar, language of homological algebra.
Wang, Zhiyuan; Lleras, Alejandro; Buetti, Simona
2018-04-17
Our lab recently found evidence that efficient visual search (with a fixed target) is characterized by logarithmic Reaction Time (RT) × Set Size functions whose steepness is modulated by the similarity between target and distractors. To determine whether this pattern of results was based on low-level visual factors uncontrolled by previous experiments, we minimized the possibility of crowding effects in the display, compensated for the cortical magnification factor by magnifying search items based on their eccentricity, and compared search performance on such displays to performance on displays without magnification compensation. In both cases, the RT × Set Size functions were found to be logarithmic, and the modulation of the log slopes by target-distractor similarity was replicated. Consistent with previous results in the literature, cortical magnification compensation eliminated most target eccentricity effects. We conclude that the log functions and their modulation by target-distractor similarity relations reflect a parallel exhaustive processing architecture for early vision.
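A minimal fit of the logarithmic RT × Set Size relation on made-up numbers (not the study's data): regress RT on the natural logarithm of set size and read off the log slope.

```python
import numpy as np

set_sizes = np.array([1, 2, 4, 8, 16, 32])
rt_ms = np.array([520.0, 540.0, 565.0, 585.0, 610.0, 630.0])   # hypothetical mean RTs (ms)

slope, intercept = np.polyfit(np.log(set_sizes), rt_ms, 1)
print(f"RT ~ {intercept:.0f} + {slope:.1f} * ln(set size)   (log slope in ms)")
```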
Using History to Teach Mathematics: The Case of Logarithms
NASA Astrophysics Data System (ADS)
Panagiotou, Evangelos N.
2011-01-01
Many authors have discussed the question why we should use the history of mathematics in mathematics education. For example, Fauvel (For Learn Math, 11(2): 3-6, 1991) mentions at least fifteen arguments for applying the history of mathematics in teaching and learning mathematics. Knowing how to introduce history into mathematics lessons is a more difficult step. We found, however, that only a limited number of articles contain instructions on how to use the material, as opposed to numerous general articles suggesting the use of the history of mathematics as a didactical tool. The present article focuses on converting the history of logarithms into material appropriate for teaching students of 11th grade, without any knowledge of calculus. History uncovers that logarithms were invented prior to the exponential function and shows that logarithms are not an arbitrary product, as they may seem when we leap straight into the definition given in all modern textbooks, but a response to a problem. We describe step by step the historical evolution of the concept, in a way appropriate for use in class, until the definition of the logarithm as the area under the hyperbola. Next, we present the formal development of the theory and define the exponential function. The teaching sequence has been successfully undertaken in two high school classrooms.
The ABC (in any D) of logarithmic CFT
NASA Astrophysics Data System (ADS)
Hogervorst, Matthijs; Paulos, Miguel; Vichi, Alessandro
2017-10-01
Logarithmic conformal field theories have a vast range of applications, from critical percolation to systems with quenched disorder. In this paper we thoroughly examine the structure of these theories based on their symmetry properties. Our analysis is model-independent and holds for any spacetime dimension. Our results include a determination of the general form of correlation functions and conformal block decompositions, clearing the path for future bootstrap applications. Several examples are discussed in detail, including logarithmic generalized free fields, holographic models, self-avoiding random walks and critical percolation.
Correlation Length of Energy-Containing Structures in the Base of the Solar Corona
NASA Astrophysics Data System (ADS)
Abramenko, V.; Zank, G. P.; Dosch, A. M.; Yurchyshyn, V.
2013-12-01
An essential parameter for models of coronal heating and fast solar wind acceleration that rely on the dissipation of MHD turbulence is the characteristic energy-containing length of the squared velocity and magnetic field fluctuations transverse to the mean magnetic field inside a coronal hole (CH) at the base of the corona. The characteristic length scale directly defines the heating rate. Rather surprisingly, almost nothing is known observationally about this critical parameter. Currently, only a very rough estimate of the characteristic length has been obtained, based on the fact that the network spacing is about 30000 km. We attempted to estimate this parameter from observations of photospheric random motions and magnetic fields measured in the photosphere inside coronal holes. We found that the characteristic length scale in the photosphere is about 600-2000 km, which is much smaller than that adopted in previous models. Our results provide a critical input parameter for current models of coronal heating and should yield an improved understanding of fast solar wind acceleration. Fig. 1: Natural logarithm of the correlation function of the transverse velocity fluctuations u^2 versus the spatial lag r for the two CHs; the color code refers to accumulation time intervals of 2 (blue), 5 (green), 10 (red), and 20 (black) minutes, and the values of the Batchelor integral length λ, the correlation length ς, and the e-folding length L are given in km. Fig. 2: Natural logarithm of the correlation function of magnetic fluctuations b^2 versus the spatial lag r; the insert shows the same plot with linear axes.
Aly, Sharif S; Zhao, Jianyang; Li, Ben; Jiang, Jiming
2014-01-01
The Intraclass Correlation Coefficient (ICC) is commonly used to estimate the similarity between quantitative measures obtained from different sources. Overdispersed data are traditionally transformed so that a linear mixed model (LMM)-based ICC can be estimated. A common transformation used is the natural logarithm. The reliability of environmental sampling of fecal slurry on freestall pens has been estimated for Mycobacterium avium subsp. paratuberculosis using natural-logarithm-transformed culture results. Recently, the negative binomial ICC was defined based on a generalized linear mixed model for negative binomial distributed data. The current study reports on the negative binomial ICC estimate, which includes fixed effects, using culture results of environmental samples. Simulations using a wide variety of inputs and negative binomial distribution parameters (r; p) showed better performance of the new negative binomial ICC compared to the ICC based on LMM, even when negative binomial data were logarithm- and square-root-transformed. A second comparison that targeted a wider range of ICC values showed that the mean of the estimated ICC closely approximated the true ICC.
Logarithmic conformal field theory
NASA Astrophysics Data System (ADS)
Gainutdinov, Azat; Ridout, David; Runkel, Ingo
2013-12-01
Conformal field theory (CFT) has proven to be one of the richest and deepest subjects of modern theoretical and mathematical physics research, especially as regards statistical mechanics and string theory. It has also stimulated an enormous amount of activity in mathematics, shaping and building bridges between seemingly disparate fields through the study of vertex operator algebras, a (partial) axiomatisation of a chiral CFT. One can add to this that the successes of CFT, particularly when applied to statistical lattice models, have also served as an inspiration for mathematicians to develop entirely new fields: the Schramm-Loewner evolution and Smirnov's discrete complex analysis being notable examples. When the energy operator fails to be diagonalisable on the quantum state space, the CFT is said to be logarithmic. Consequently, a logarithmic CFT is one whose quantum space of states is constructed from a collection of representations which includes reducible but indecomposable ones. This qualifier arises because of the consequence that certain correlation functions will possess logarithmic singularities, something that contrasts with the familiar case of power law singularities. While such logarithmic singularities and reducible representations were noted by Rozansky and Saleur in their study of the U (1|1) Wess-Zumino-Witten model in 1992, the link between the non-diagonalisability of the energy operator and logarithmic singularities in correlators is usually ascribed to Gurarie's 1993 article (his paper also contains the first usage of the term 'logarithmic conformal field theory'). The class of CFTs that were under control at this time was quite small. In particular, an enormous amount of work from the statistical mechanics and string theory communities had produced a fairly detailed understanding of the (so-called) rational CFTs. However, physicists from both camps were well aware that applications from many diverse fields required significantly more complicated non-rational theories. Examples include critical percolation, supersymmetric string backgrounds, disordered electronic systems, sandpile models describing avalanche processes, and so on. In each case, the non-rationality and non-unitarity of the CFT suggested that a more general theoretical framework was needed. Driven by the desire to better understand these applications, the mid-1990s saw significant theoretical advances aiming to generalise the constructs of rational CFT to a more general class. In 1994, Nahm introduced an algorithm for computing the fusion product of representations which was significantly generalised two years later by Gaberdiel and Kausch who applied it to explicitly construct (chiral) representations upon which the energy operator acts non-diagonalisably. Their work made it clear that underlying the physically relevant correlation functions are classes of reducible but indecomposable representations that can be investigated mathematically to the benefit of applications. In another direction, Flohr had meanwhile initiated the study of modular properties of the characters of logarithmic CFTs, a topic which had already evoked much mathematical interest in the rational case. Since these seminal theoretical papers appeared, the field has undergone rapid development, both theoretically and with regard to applications. 
Logarithmic CFTs are now known to describe non-local observables in the scaling limit of critical lattice models, for example percolation and polymers, and are an integral part of our understanding of quantum strings propagating on supermanifolds. They are also believed to arise as duals of three-dimensional chiral gravity models, fill out hidden sectors in non-rational theories with non-compact target spaces, and describe certain transitions in various incarnations of the quantum Hall effect. Other physical applications range from two-dimensional turbulence and non-equilibrium systems to aspects of the AdS/CFT correspondence and describing supersymmetric sigma models beyond the topological sector. We refer the reader to the reviews in this collection for further applications and details. More recently, our understanding of logarithmic CFT has improved dramatically thanks largely to a better understanding of the underlying mathematical structures. This includes those associated to the vertex operator algebras themselves (representations, characters, modular transformations, fusion, braiding) as well as structures associated with applications to two-dimensional statistical models (diagram algebras, e.g. Temperley-Lieb, and quantum groups). Not only are we getting to the point where we understand how these structures differ from standard (rational) theories, but we are starting to tackle applications in both the boundary and bulk settings. It is now clear that the logarithmic case is generic, so it is this case that one should expect to encounter in applications. We therefore feel that it is timely to review what has been accomplished in order to disseminate this improved understanding and motivate further applications. We now give a quick overview of the articles that constitute this special issue. Adamović and Milas provide a detailed summary of their rigorous results pertaining to logarithmic vertex operator (super)algebras constructed from lattices. This survey discusses the C2-cofiniteness of the (p, p') triplet models (this is the generalisation of rationality to the logarithmic setting), describes Zhu's algebra for (some of) these theories and outlines the difficulties involved in explicitly constructing the modules responsible for their logarithmic nature. Cardy gives an account of a popular approach to logarithmic theories that regards them, heuristically at least, as limits of ordinary (but non-rational) CFTs. More precisely, it seems that any given correlator may be computed as a limit of standard (non-logarithmic) correlators; any logarithmic singularities that arise do so because of a degeneration when taking the limit. He then illustrates this phenomenon in several theories describing statistical lattice models including the n → 0 limit of the O(n) model and the Q → 1 limit of the Q-state Potts model. Creutzig and Ridout review the continuum approach to logarithmic CFT, using the percolation (boundary) CFT to detail the connection between module structure and logarithmic singularities in correlators before describing their proposed solution to the thorny issue of generalising modular data and Verlinde formulae to the logarithmic setting. They illustrate this proposal using the three best-understood examples of logarithmic CFTs: the (1, 2) models, related to symplectic fermions; the fractional-level WZW model related to the beta gamma ghosts; and the WZW model on GL(1|1).
The analysis in each case requires that the spectrum be continuous; C2-cofinite models are only recovered as orbifolds. Flohr and Koehn consider the characters of the irreducible modules in the spectrum of a CFT and discuss why these only span a proper subspace of the space of torus vacuum amplitudes in the logarithmic case. This is illustrated explicitly for the (1, 2) triplet model and conclusions are drawn for the action of the modular group. They then note that the irreducible characters of this model also admit fermionic sum forms which seem to fit well into Nahm's well-known conjecture for rational theories. Quasi-particle interpretations are also introduced, leading to the conclusion that logarithmic C2-cofinite theories are not so terribly different to rational theories, at least in some respects. Fuchs, Schweigert and Stigner address the problem of constructing local logarithmic CFTs starting from the chiral theory. They first review the construction of the local theory in the non-logarithmic setting from an angle that will then generalise to logarithmic theories. In particular, they observe that the bulk space can be understood as a certain coend. The authors then show how to carry out the construction of the bulk space in the category of modules over a factorisable ribbon Hopf algebra, which shares many properties with the braided categories arising from logarithmic chiral theories. The authors proceed to construct the analogue of all-genus correlators in their setting and establish invariance under the mapping class group, i.e. locality of the correlators. Gainutdinov, Jacobsen, Read, Saleur and Vasseur review their approach based on the assumption that certain classes of logarithmic CFTs admit lattice regularisations with local degrees of freedom, for example quantum spin chains (with local interactions). They therefore study the finite-dimensional algebras generated by the hamiltonian densities (typically the Temperley-Lieb algebras and their extensions) that describe the dynamics of these lattice models. The authors then argue that the lattice algebras exhibit, in finite size, mathematical properties that are in correspondence with those of their continuum limits, allowing one to predict continuum structures directly from the lattice. Moreover, the lattice models considered admit quantum group symmetries that play a central role in the algebraic analysis (representation structure and fusion). Grumiller, Riedler, Rosseel and Zojer review the role that logarithmic CFTs may play in certain versions of the AdS/CFT correspondence, particularly for what is known as topologically massive gravity (TMG). This has been a very active subject over the last five years and the article takes great care to disentangle the contributions from the many groups that have participated. They begin with some general remarks on logarithmic behaviour, much in the spirit of Cardy's review, before detailing the distinction between the chiral (no logs) and logarithmic proposals for critical TMG. The latter is then subjected to various consistency checks before discussing evidence for logarithmic behaviour in more general classes of gravity theories including those with boundaries, supersymmetry and Galilean relativity. Gurarie has written an historical overview of his seminal contributions to this field, putting his results (and those of his collaborators) in the context of understanding applications to condensed matter physics.
This includes the link between the non-diagonalisability of L0 and logarithmic singularities, a study of the c → 0 catastrophe, and a proposed resolution involving supersymmetric partners for the stress-energy tensor and its logarithmic partner field. Henkel and Rouhani describe a direction in which logarithmic singularities are observed in correlators of non-relativistic field theories. Their review covers the modifications of conformal invariance that are appropriate to non-equilibrium statistical mechanics, strongly anisotropic critical points and certain variants of TMG. The main variation away from the standard relativistic idea of conformal invariance is that time is explicitly distinguished from space when considering dilations, and this leads to a variety of algebraic structures to explore. In this review, the link between non-diagonalisable representations and logarithmic singularities in correlators is generalised to these algebras, before two applications of the theory are discussed. Huang and Lepowsky give a non-technical overview of their work on braided tensor structures on suitable categories of representations of vertex operator algebras. They also place their work in historic context and compare it to related approaches. The authors sketch their construction of the so-called P(z)-tensor product of modules of a vertex operator algebra, and the construction of the associativity isomorphisms for this tensor product. They proceed to give a guide to their works leading to the first author's proof of modularity for a class of vertex operator algebras, and to their works, joint with Zhang, on logarithmic intertwining operators and the resulting tensor product theory. Morin-Duchesne and Saint-Aubin have contributed a research article describing their recent characterisation of when the transfer matrix of a periodic loop model fails to be diagonalisable. This generalises their recent result for non-periodic loop models and provides rigorous methods to justify what has often been assumed in the lattice approach to logarithmic CFT. The philosophy here is one of analysing lattice models with finite size, aiming to demonstrate that non-diagonalisability survives the scaling limit. This is extremely difficult in general (see also the review by Gainutdinov et al.), so it is remarkable that it is even possible to demonstrate this at any level of generality. Quella and Schomerus have prepared an extensive review covering their longstanding collaboration on the logarithmic nature of conformal sigma models on Lie supergroups and their cosets with applications to string theory and AdS/CFT. Beginning with a very welcome overview of Lie superalgebras and their representations, harmonic analysis and cohomological reduction, they then apply these mathematical tools to WZW models on type I Lie supergroups and their homogeneous subspaces. Along the way, deformations are discussed and potential dualities in the corresponding string theories are described. Ruelle provides an exhaustive account of his substantial contributions to the study of the abelian sandpile model. This is a statistical model which has the surprising feature that many correlation functions can be computed exactly, in the bulk and on the boundary, even though the spectrum of conformal weights is largely unknown. Nevertheless, there is much evidence suggesting that its scaling limit is described by an, as yet unknown, c = -2 logarithmic CFT.
Semikhatov and Tipunin present their very recent results regarding the construction of logarithmic chiral W-algebra extensions of a fractional level algebra. The idea is that these algebras are the centralisers of a rank-two Nichols algebra which possesses at least one fermionic generator. In turn, these Nichols algebra generators are represented by screening operators which naturally appear in CFT bosonisation. The major advantage of using these generators is that they give strong hints about the representation theory and fusion rules of the chiral algebra. Simmons has contributed an article describing the calculation of various correlation functions in the logarithmic CFT that describes critical percolation. These calculations are interpreted geometrically in a manner that should be familiar to mathematicians studying Schramm-Loewner evolutions and point towards a (largely unexplored) bridge connecting logarithmic CFT with this branch of mathematics. Of course, the field of logarithmic CFT has benefited greatly from the work of many researchers who are not represented in this special issue. The interested reader will find many links to their work in the bibliographies of the special issue articles and reviews. In summary, logarithmic CFT describes an extension of the incredibly successful methods of rational CFT to a more general setting. This extension is necessary to properly describe many different fundamental phenomena of physical interest. The formalism is moreover highly non-trivial from a mathematical point of view and so logarithmic theories are of significant interest to both physicists and mathematicians. We hope that the collection of articles that follows will serve as an inspiration, and a valuable resource, for both of these communities.
Alternative Proofs for Inequalities of Some Trigonometric Functions
ERIC Educational Resources Information Center
Guo, Bai-Ni; Qi, Feng
2008-01-01
By using an identity relating to the Bernoulli numbers and power-series expansions of the cotangent function and of the logarithms of functions involving the sine, cosine and tangent functions, four inequalities involving the cotangent, sine, secant and tangent functions are established.
Lambert W function for applications in physics
NASA Astrophysics Data System (ADS)
Veberič, Darko
2012-12-01
The Lambert W(x) function and its possible applications in physics are presented. The actual numerical implementation in C++ consists of Halley's and Fritsch's iterations with initial approximations based on branch-point expansion, asymptotic series, rational fits, and continued-logarithm recursion.
Program summary
Program title: LambertW
Catalogue identifier: AENC_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENC_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public License version 3
No. of lines in distributed program, including test data, etc.: 1335
No. of bytes in distributed program, including test data, etc.: 25 283
Distribution format: tar.gz
Programming language: C++ (with suitable wrappers it can be called from C, Fortran etc.); the supplied command-line utility is suitable for other scripting languages like sh, csh, awk, perl etc.
Computer: All systems with a C++ compiler.
Operating system: All Unix flavors, Windows. It might work with others.
RAM: Small memory footprint, less than 1 MB
Classification: 1.1, 4.7, 11.3, 11.9.
Nature of problem: Find a fast and accurate numerical implementation for the Lambert W function.
Solution method: Halley's and Fritsch's iterations with initial approximations based on branch-point expansion, asymptotic series, rational fits, and continued-logarithm recursion.
Additional comments: Distribution file contains the command-line utility lambert-w, Doxygen comments (included in the source files) and a Makefile.
Running time: The tests provided take only a few seconds to run.
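As a rough illustration of the iteration named above (not the distributed C++ code), the sketch below applies Halley's method to w e^w = x on the principal branch. The crude starting value is an assumption standing in for the paper's branch-point, asymptotic-series and rational-fit initial approximations.

```python
# Sketch of Halley's iteration for the principal branch W0(x), x >= -1/e.
# The simple starting value below replaces the paper's more careful
# branch-point / asymptotic-series / rational-fit initial approximations.
import math

def lambert_w0(x, tol=1e-12, max_iter=50):
    if x < -1.0 / math.e:
        raise ValueError("W0 is real only for x >= -1/e")
    # crude starting value: log-based for positive x, x itself otherwise
    w = math.log(1.0 + x) if x > 0 else x
    for _ in range(max_iter):
        ew = math.exp(w)
        f = w * ew - x
        # Halley update for f(w) = w*exp(w) - x (cubically convergent)
        w_next = w - f / (ew * (w + 1.0) - (w + 2.0) * f / (2.0 * w + 2.0))
        if abs(w_next - w) < tol * (1.0 + abs(w_next)):
            return w_next
        w = w_next
    return w

# quick check: W(1) is the omega constant, about 0.5671432904097838
print(lambert_w0(1.0))
```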
Parks, David R; Roederer, Mario; Moore, Wayne A
2006-06-01
In immunofluorescence measurements and most other flow cytometry applications, fluorescence signals of interest can range down to essentially zero. After fluorescence compensation, some cell populations will have low means and include events with negative data values. Logarithmic presentation has been very useful in providing informative displays of wide-ranging flow cytometry data, but it fails to adequately display cell populations with low means and high variances and, in particular, offers no way to include negative data values. This has led to a great deal of difficulty in interpreting and understanding flow cytometry data, has often resulted in incorrect delineation of cell populations, and has led many people to question the correctness of compensation computations that were, in fact, correct. We identified a set of criteria for creating data visualization methods that accommodate the scaling difficulties presented by flow cytometry data. On the basis of these, we developed a new data visualization method that provides important advantages over linear or logarithmic scaling for display of flow cytometry data, a scaling we refer to as "Logicle" scaling. Logicle functions represent a particular generalization of the hyperbolic sine function with one more adjustable parameter than linear or logarithmic functions. Finally, we developed methods for objectively and automatically selecting an appropriate value for this parameter. The Logicle display method provides more complete, appropriate, and readily interpretable representations of data that includes populations with low-to-zero means, including distributions resulting from fluorescence compensation procedures, than can be produced using either logarithmic or linear displays. The method includes a specific algorithm for evaluating actual data distributions and deriving parameters of the Logicle scaling function appropriate for optimal display of that data. It is critical to note that Logicle visualization does not change the data values or the descriptive statistics computed from them. Copyright 2006 International Society for Analytical Cytology.
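The published Logicle transform is a specific generalisation of the hyperbolic sine with its own parameter-selection algorithm; the minimal sketch below is only an inverse-hyperbolic-sine stand-in that shows the qualitative property being generalised, namely a scale that is logarithmic for large values yet remains linear and well defined through zero and negative values. The cofactor value is an arbitrary assumption.

```python
# Minimal illustration (not the published Logicle algorithm): an inverse
# hyperbolic sine scale is ~linear near zero and ~logarithmic for large
# values, so it can display negative and zero events that a log axis cannot.
import numpy as np

def arcsinh_scale(x, cofactor=150.0):
    """Asinh display transform; cofactor sets where linear turns logarithmic."""
    return np.arcsinh(np.asarray(x, dtype=float) / cofactor)

values = np.array([-500.0, -50.0, 0.0, 50.0, 500.0, 5_000.0, 50_000.0])
for v, s in zip(values, arcsinh_scale(values)):
    print(f"{v:>10.1f} -> {s:+.3f}")

# A plain log10 axis would fail for the first three entries:
# np.log10 of zero or negative data yields -inf / NaN.
```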
Exact Asymptotics of the Freezing Transition of a Logarithmically Correlated Random Energy Model
NASA Astrophysics Data System (ADS)
Webb, Christian
2011-12-01
We consider a logarithmically correlated random energy model, namely a model for directed polymers on a Cayley tree, which was introduced by Derrida and Spohn. We prove asymptotic properties of a generating function of the partition function of the model by studying a discrete-time analogue of the KPP equation, thus translating Bramson's work on the KPP equation to the discrete-time case. We also discuss connections to extreme value statistics of a branching random walk and a rescaled multiplicative cascade measure beyond the critical point.
NASA Astrophysics Data System (ADS)
Basak, Anup; Levitas, Valery I.
2018-04-01
A thermodynamically consistent, novel multiphase phase field approach for stress- and temperature-induced martensitic phase transformations at finite strains and with interfacial stresses has been developed. The model considers a single order parameter to describe the austenite↔martensite transformations, and a further N order parameters describing the N variants, constrained to a plane in an N-dimensional order parameter space. In the free energy model, coexistence of three or more phases at a single material point (multiphase junction) and deviation of each variant-variant transformation path from a straight line are penalized. Some shortcomings of the existing models are resolved. Three different kinematic models (KMs) for the transformation deformation gradient tensor are assumed: (i) in KM-I the transformation deformation gradient tensor is a linear function of the Bain tensors for the variants; (ii) in KM-II the natural logarithm of the transformation deformation gradient is taken as a linear combination of the natural logarithms of the Bain tensors weighted by the interpolation functions; (iii) in KM-III it is derived using the twinning equation from the crystallographic theory. The instability criteria for all the phase transformations have been derived for all the kinematic models, and a comparative study is presented. A large-strain finite element procedure has been developed and used for studying the evolution of some complex microstructures in nanoscale samples under various loading conditions. Also, the stresses within variant-variant boundaries, the sample size effect, the effect of penalizing the triple junctions, and twinned microstructures have been studied. The present approach can be extended to the study of grain growth, solidification, para↔ferroelectric transformations, and diffusive phase transformations.
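To make the KM-II construction concrete, here is a hedged sketch of combining Bain tensors through their matrix logarithms and exponentiating back, contrasted with the linear mixing of KM-I. The two tetragonal Bain stretches and the interpolation weights are illustrative assumptions, not values or interpolation functions from the paper (which also involves the austenite phase).

```python
# Sketch of the KM-II idea: ln(U_t) is an interpolation-weighted combination of
# the natural logarithms of the variant Bain tensors, then exponentiated back.
# The Bain tensors U1, U2 and the weights phi are illustrative only.
import numpy as np
from scipy.linalg import logm, expm

U1 = np.diag([1.10, 0.95, 0.95])   # hypothetical Bain stretch, variant 1
U2 = np.diag([0.95, 1.10, 0.95])   # hypothetical Bain stretch, variant 2
phi = np.array([0.3, 0.7])         # hypothetical interpolation weights

log_Ut = phi[0] * logm(U1) + phi[1] * logm(U2)
Ut_km2 = expm(log_Ut)              # KM-II transformation stretch
Ut_km1 = phi[0] * U1 + phi[1] * U2 # KM-I: linear combination of the tensors themselves

print(np.round(Ut_km2, 4))
print(np.round(Ut_km1, 4))
```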
Autonomic Recovery Is Delayed in Chinese Compared with Caucasian following Treadmill Exercise.
Sun, Peng; Yan, Huimin; Ranadive, Sushant M; Lane, Abbi D; Kappus, Rebecca M; Bunsawat, Kanokwan; Baynard, Tracy; Hu, Min; Li, Shichang; Fernhall, Bo
2016-01-01
Caucasian populations have a higher prevalence of cardiovascular disease (CVD) when compared with their Chinese counterparts and CVD is associated with autonomic function. It is unknown whether autonomic function during exercise recovery differs between Caucasians and Chinese. The present study investigated autonomic recovery following an acute bout of treadmill exercise in healthy Caucasians and Chinese. Sixty-two participants (30 Caucasian and 32 Chinese, 50% male) performed an acute bout of treadmill exercise at 70% of heart rate reserve. Heart rate variability (HRV) and baroreflex sensitivity (BRS) were obtained during 5-min epochs at pre-exercise, 30-min, and 60-min post-exercise. HRV was assessed using frequency [natural logarithm of high (LnHF) and low frequency (LnLF) powers, normalized high (nHF) and low frequency (nLF) powers, and LF/HF ratio] and time domains [Root mean square of successive differences (RMSSD), natural logarithm of RMSSD (LnRMSSD) and R-R interval (RRI)]. Spontaneous BRS included both up-up and down-down sequences. At pre-exercise, no group differences were observed for any HR, HRV and BRS parameters. During exercise recovery, significant race-by-time interactions were observed for LnHF, nHF, nLF, LF/HF, LnRMSSD, RRI, HR, and BRS (up-up). The declines in LnHF, nHF, RMSSD, RRI and BRS (up-up) and the increases in LF/HF, nLF and HR were blunted in Chinese when compared to Caucasians from pre-exercise to 30-min to 60-min post-exercise. Chinese exhibited delayed autonomic recovery following an acute bout of treadmill exercise. This delayed autonomic recovery may result from greater sympathetic dominance and extended vagal withdrawal in Chinese. Chinese Clinical Trial Register ChiCTR-IPR-15006684.
Magnetotelluric inversion via reverse time migration algorithm of seismic data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ha, Taeyoung; Shin, Changsoo
2007-07-01
We propose a new algorithm for two-dimensional magnetotelluric (MT) inversion. Our algorithm is an MT inversion based on the steepest descent method, borrowed from the backpropagation technique of seismic inversion or reverse time migration, introduced in the middle 1980s by Lailly and Tarantola. The steepest descent direction can be calculated efficiently by using the symmetry of the numerical Green's function derived from a mixed finite element method proposed by Nedelec for Maxwell's equations, without calculating the Jacobian matrix explicitly. We construct three different objective functions by taking the logarithm of the complex apparent resistivity, as introduced in the recent waveform inversion algorithm by Shin and Min. These objective functions can be naturally separated into amplitude inversion, phase inversion and simultaneous inversion. We demonstrate our algorithm by showing three inversion results for synthetic data.
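The natural separation mentioned above follows from a basic identity: the natural logarithm of a complex apparent resistivity splits into a real amplitude part and an imaginary phase part. The small sketch below only demonstrates that identity with an illustrative value; it is not the inversion algorithm.

```python
# ln(rho_app) = ln|rho_app| + i*phase, which is why objective functions built
# on the logarithm of the complex apparent resistivity separate naturally into
# amplitude, phase and simultaneous inversions. The sample value is illustrative.
import numpy as np

rho_app = 120.0 * np.exp(1j * np.deg2rad(38.0))   # |rho| = 120 ohm-m, phase = 38 deg

log_rho = np.log(rho_app)
amplitude_part = log_rho.real      # equals ln|rho_app|
phase_part = log_rho.imag          # equals the phase in radians

print(amplitude_part, np.log(120.0))     # identical
print(phase_part, np.deg2rad(38.0))      # identical
```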
Bobály, Balázs; Randazzo, Giuseppe Marco; Rudaz, Serge; Guillarme, Davy; Fekete, Szabolcs
2017-01-20
The goal of this work was to evaluate the potential of non-linear gradients in hydrophobic interaction chromatography (HIC) to improve the separation between the different homologous species (drug-to-antibody ratio, DAR) of commercial antibody-drug conjugates (ADCs). The selectivities between Brentuximab Vedotin species were measured using three different gradient profiles, namely linear, power-function-based and logarithmic ones. The logarithmic gradient provides the most equidistant retention distribution for the DAR species and offers the best overall separation of cysteine-linked ADCs in HIC. Another important advantage of the logarithmic gradient is its peak-focusing effect for the DAR0 species, which is particularly useful to improve the quantitation limit of DAR0. Finally, the logarithmic behavior of DAR species of ADCs in HIC was modelled using two different approaches, based on (i) the linear solvent strength (LSS) theory and two linear scouting gradients and (ii) a newly derived equation and two logarithmic scouting gradients. In both cases, the retention predictions were excellent, with errors systematically below 3% compared with the experimental values. Copyright © 2016 Elsevier B.V. All rights reserved.
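To visualise what is being compared, the sketch below generates three gradient shapes (linear, power-law and logarithmic) between the same start and end compositions. The parametrisations are generic illustrations I have assumed, not the equations or gradient programs used in the paper.

```python
# Three gradient shapes over the same start/end mobile-phase composition.
# The power-law and logarithmic parametrisations are generic assumptions.
import numpy as np

t_grad, B0, B1 = 20.0, 0.0, 100.0           # gradient time (min), %B start and end
t = np.linspace(0.0, t_grad, 6)
x = t / t_grad                               # normalised time, 0..1

linear = B0 + (B1 - B0) * x
power = B0 + (B1 - B0) * x**2                            # rises slowly at first
logarithmic = B0 + (B1 - B0) * np.log1p(9.0 * x) / np.log(10.0)  # rises steeply early

for row in zip(t, linear, power, logarithmic):
    print("t={:4.1f}  linear={:5.1f}  power={:5.1f}  log={:5.1f}".format(*row))
```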
The complete two-loop integrated jet thrust distribution in soft-collinear effective theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
von Manteuffel, Andreas; Schabinger, Robert M.; Zhu, Hua Xing
2014-03-01
In this work, we complete the calculation of the soft part of the two-loop integrated jet thrust distribution in e+e- annihilation. This jet mass observable is based on the thrust cone jet algorithm, which involves a veto scale for out-of-jet radiation. The previously uncomputed part of our result depends in a complicated way on the jet cone size, r, and at intermediate stages of the calculation we actually encounter a new class of multiple polylogarithms. We employ an extension of the coproduct calculus to systematically exploit functional relations and represent our results concisely. In contrast to the individual contributions, the sum of all global terms can be expressed in terms of classical polylogarithms. Our explicit two-loop calculation enables us to clarify the small-r picture discussed in earlier work. In particular, we show that the resummation of the logarithms of r that appear in the previously uncomputed part of the two-loop integrated jet thrust distribution is inextricably linked to the resummation of the non-global logarithms. Furthermore, we find that the logarithms of r which cannot be absorbed into the non-global logarithms in the way advocated in earlier work have coefficients fixed by the two-loop cusp anomalous dimension. We also show that in many cases one can straightforwardly predict potentially large logarithmic contributions to the integrated jet thrust distribution at L loops by making use of analogous contributions to the simpler integrated hemisphere soft function.
Fragmentation functions beyond fixed order accuracy
NASA Astrophysics Data System (ADS)
Anderle, Daniele P.; Kaufmann, Tom; Stratmann, Marco; Ringer, Felix
2017-03-01
We give a detailed account of the phenomenology of all-order resummations of logarithmically enhanced contributions at small momentum fraction of the observed hadron in semi-inclusive electron-positron annihilation and the timelike scale evolution of parton-to-hadron fragmentation functions. The formalism to perform resummations in Mellin moment space is briefly reviewed, and all relevant expressions up to next-to-next-to-leading logarithmic order are derived, including their explicit dependence on the factorization and renormalization scales. We discuss the details pertinent to a proper numerical implementation of the resummed results comprising an iterative solution to the timelike evolution equations, the matching to known fixed-order expressions, and the choice of the contour in the Mellin inverse transformation. First extractions of parton-to-pion fragmentation functions from semi-inclusive annihilation data are performed at different logarithmic orders of the resummations in order to estimate their phenomenological relevance. To this end, we compare our results to corresponding fits up to fixed, next-to-next-to-leading order accuracy and study the residual dependence on the factorization scale in each case.
NASA Astrophysics Data System (ADS)
Satpathi, Urbashi; Sinha, Supurna; Sorkin, Rafael D.
2017-12-01
We analyse diffusion at low temperature by bringing the fluctuation-dissipation theorem (FDT) to bear on a physically natural, viscous response function R(t). The resulting diffusion law exhibits several distinct regimes of time and temperature, each with its own characteristic rate of spreading. As with earlier analyses, we find logarithmic spreading in the quantum regime, indicating that this behavior is robust. A consistent R(t) must satisfy the key physical requirements of Wightman positivity and passivity, and we prove that ours does so. We also prove in general that these two conditions are equivalent when the FDT holds. Given current technology, our diffusion law can be tested in a laboratory with ultracold atoms.
A factorization approach to next-to-leading-power threshold logarithms
NASA Astrophysics Data System (ADS)
Bonocore, D.; Laenen, E.; Magnea, L.; Melville, S.; Vernazza, L.; White, C. D.
2015-06-01
Threshold logarithms become dominant in partonic cross sections when the selected final state forces gluon radiation to be soft or collinear. Such radiation factorizes at the level of scattering amplitudes, and this leads to the resummation of threshold logarithms which appear at leading power in the threshold variable. In this paper, we consider the extension of this factorization to include effects suppressed by a single power of the threshold variable. Building upon the Low-Burnett-Kroll-Del Duca (LBKD) theorem, we propose a decomposition of radiative amplitudes into universal building blocks, which contain all effects ultimately responsible for next-to-leading-power (NLP) threshold logarithms in hadronic cross sections for electroweak annihilation processes. In particular, we provide a NLO evaluation of the radiative jet function, responsible for the interference of next-to-soft and collinear effects in these cross sections. As a test, using our expression for the amplitude, we reproduce all abelian-like NLP threshold logarithms in the NNLO Drell-Yan cross section, including the interplay of real and virtual emissions. Our results are a significant step towards developing a generally applicable resummation formalism for NLP threshold effects, and illustrate the breakdown of next-to-soft theorems for gauge theory amplitudes at loop level.
NASA Astrophysics Data System (ADS)
Monthus, Cécile; Garel, Thomas
2006-07-01
In dimension $d \geq 3$, the directed polymer in a random medium undergoes a phase transition between a free phase at high temperature and a low-temperature disorder-dominated phase. For the latter phase, Fisher and Huse have proposed a droplet theory based on the scaling of the free-energy fluctuations $\Delta F(l) \sim l^{\theta}$ at scale $l$. On the other hand, in related growth models belonging to the Kardar-Parisi-Zhang universality class, Forrest and Tang have found that the height-height correlation function is logarithmic at the transition. For the directed polymer model at criticality, this translates into logarithmic free-energy fluctuations $\Delta F_{T_c}(l) \sim (\ln l)^{\sigma}$ with $\sigma = 1/2$. In this paper, we propose a droplet scaling analysis exactly at criticality based on this logarithmic scaling. Our main conclusion is that the typical correlation length $\xi(T)$ of the low-temperature phase diverges as $\ln \xi(T) \sim [-\ln(T_c-T)]^{1/\sigma} \sim [-\ln(T_c-T)]^{2}$, instead of the usual power law $\xi(T) \sim (T_c-T)^{-\nu}$. Furthermore, the logarithmic dependence of $\Delta F_{T_c}(l)$ leads to the conclusion that the critical temperature $T_c$ actually coincides with the explicit upper bound $T_2$ derived by Derrida and co-workers, where $T_2$ corresponds to the temperature below which the ratio $\overline{Z_L^2}/(\overline{Z_L})^2$ diverges exponentially in $L$. Finally, since the Fisher-Huse droplet theory was initially introduced for the spin-glass phase, we briefly mention the similarities with and differences from the directed polymer model. If one speculates that the free energy of droplet excitations for spin glasses is also logarithmic at $T_c$, one obtains a logarithmic decay for the mean square correlation function at criticality, $\overline{C^2(r)} \sim 1/(\ln r)^{\sigma}$, instead of the usual power law $1/r^{d-2+\eta}$.
Impact of long-range interactions on the disordered vortex lattice
NASA Astrophysics Data System (ADS)
Koopmann, J. A.; Geshkenbein, V. B.; Blatter, G.
2003-07-01
The interaction between the vortex lines in a type-II superconductor is mediated by currents. In the absence of transverse screening this interaction is long ranged, stiffening up the vortex lattice as expressed by the dispersive elastic moduli. The effect of disorder is strongly reduced, resulting in a mean-squared displacement correlator
Logarithmic violation of scaling in anisotropic kinematic dynamo model
NASA Astrophysics Data System (ADS)
Antonov, N. V.; Gulitskiy, N. M.
2016-01-01
Inertial-range asymptotic behavior of a vector (e.g., magnetic) field, passively advected by a strongly anisotropic turbulent flow, is studied by means of the field theoretic renormalization group and the operator product expansion. The advecting velocity field is Gaussian, not correlated in time, with the pair correlation function of the form $\propto \delta(t-t')/k_{\perp}^{d-1+\xi}$, where $k_{\perp} = |\mathbf{k}_{\perp}|$ and $\mathbf{k}_{\perp}$ is the component of the wave vector perpendicular to the distinguished direction. The stochastic advection-diffusion equation for the transverse (divergence-free) vector field includes, as special cases, the kinematic dynamo model for magnetohydrodynamic turbulence and the linearized Navier-Stokes equation. In contrast to the well known isotropic Kraichnan's model, where various correlation functions exhibit anomalous scaling behavior with infinite sets of anomalous exponents, here the dependence on the integral turbulence scale L has a logarithmic behavior: instead of power-like corrections to ordinary scaling, determined by naive (canonical) dimensions, the anomalies manifest themselves as polynomials of logarithms of L.
Aptel, Florent; Sayous, Romain; Fortoul, Vincent; Beccat, Sylvain; Denis, Philippe
2010-12-01
To evaluate and compare the regional relationships between visual field sensitivity and retinal nerve fiber layer (RNFL) thickness as measured by spectral-domain optical coherence tomography (OCT) and scanning laser polarimetry. Prospective cross-sectional study. One hundred and twenty eyes of 120 patients (40 with healthy eyes, 40 with suspected glaucoma, and 40 with glaucoma) were tested on Cirrus OCT, GDx VCC, and standard automated perimetry. Raw data on RNFL thickness were extracted for 256 peripapillary sectors of 1.40625 degrees each for the OCT measurement ellipse and 64 peripapillary sectors of 5.625 degrees each for the GDx VCC measurement ellipse. Correlations between peripapillary RNFL thickness in 6 sectors and visual field sensitivity in the 6 corresponding areas were evaluated using linear and logarithmic regression analysis. Receiver operating characteristic (ROC) curve areas were calculated for each instrument. With spectral-domain OCT, the correlations (r²) between RNFL thickness and visual field sensitivity ranged from 0.082 (nasal RNFL and corresponding visual field area, linear regression) to 0.726 (supratemporal RNFL and corresponding visual field area, logarithmic regression). By comparison, with GDx VCC, the correlations ranged from 0.062 (temporal RNFL and corresponding visual field area, linear regression) to 0.362 (supratemporal RNFL and corresponding visual field area, logarithmic regression). In pairwise comparisons, these structure-function correlations were generally stronger with spectral-domain OCT than with GDx VCC and with logarithmic regression than with linear regression. The largest areas under the ROC curve were seen for OCT superior thickness (0.963 ± 0.022; P < .001) in eyes with glaucoma and for OCT average thickness (0.888 ± 0.072; P < .001) in eyes with suspected glaucoma. The structure-function relationship was significantly stronger with spectral-domain OCT than with scanning laser polarimetry, and was better expressed logarithmically than linearly. Measurements with these 2 instruments should not be considered interchangeable. Copyright © 2010 Elsevier Inc. All rights reserved.
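For readers who want to see the linear-versus-logarithmic comparison in miniature, the sketch below fits sensitivity against thickness and against the natural logarithm of thickness and compares the resulting r². The data are synthetic stand-ins shaped to saturate, not the clinical measurements.

```python
# Generic illustration of comparing linear and logarithmic structure-function
# fits by r^2. The synthetic data merely mimic a saturating relationship.
import numpy as np

rng = np.random.default_rng(1)
rnfl = rng.uniform(40.0, 140.0, 200)                       # thickness, micrometres
sensitivity = 30.0 * np.log10(rnfl / 40.0) + 20.0          # dB-like, saturating
sensitivity += rng.normal(0.0, 1.0, rnfl.size)             # measurement noise

def r_squared(x, y):
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1.0 - resid.var() / y.var()

r2_linear = r_squared(rnfl, sensitivity)          # sensitivity ~ thickness
r2_log = r_squared(np.log(rnfl), sensitivity)     # sensitivity ~ ln(thickness)
print(f"linear r^2 = {r2_linear:.3f}, logarithmic r^2 = {r2_log:.3f}")
```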
Horn, F.L.; Binns, J.E.
1961-05-01
Apparatus for continuously and automatically measuring and computing the specific heat of a flowing solution is described. The invention provides for the continuous measurement of all the parameters required for the mathematical solution of this characteristic. The parameters are converted to logarithmic functions, which are added and subtracted in accordance with the solution, and a null-seeking servo reduces errors due to changing voltage drops to a minimum. Logarithmic potentiometers are utilized in a unique manner to accomplish these results.
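The arithmetic the analog apparatus performs can be written out digitally: for a flowing stream, the specific heat is a quotient of measured quantities, so its logarithm is a sum and difference of the parameters' logarithms. The numbers below are illustrative, and the relation c = Q/(m_dot ΔT) is the standard calorimetric form assumed here, not a formula quoted from the patent text.

```python
# Digital version of the log-add/subtract computation: assume c = Q / (m_dot * dT),
# so ln(c) = ln(Q) - ln(m_dot) - ln(dT). Values are illustrative.
import math

heat_rate = 2_500.0    # W, heat added to the stream
mass_flow = 0.12       # kg/s
delta_T = 5.0          # K, temperature rise

ln_c = math.log(heat_rate) - math.log(mass_flow) - math.log(delta_T)
c = math.exp(ln_c)
print(c, heat_rate / (mass_flow * delta_T))   # both ~4166.7 J/(kg K)
```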
Dissipative quantum trajectories in complex space: Damped harmonic oscillator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chou, Chia-Chun, E-mail: ccchou@mx.nthu.edu.tw
Dissipative quantum trajectories in complex space are investigated in the framework of the logarithmic nonlinear Schrödinger equation. The logarithmic nonlinear Schrödinger equation provides a phenomenological description for dissipative quantum systems. Substituting the wave function expressed in terms of the complex action into the complex-extended logarithmic nonlinear Schrödinger equation, we derive the complex quantum Hamilton–Jacobi equation including the dissipative potential. It is shown that dissipative quantum trajectories satisfy a quantum Newtonian equation of motion in complex space with a friction force. Exact dissipative complex quantum trajectories are analyzed for the wave and solitonlike solutions to the logarithmic nonlinear Schrödinger equation for the damped harmonic oscillator. These trajectories converge to the equilibrium position as time evolves. It is indicated that dissipative complex quantum trajectories for the wave and solitonlike solutions are identical to dissipative complex classical trajectories for the damped harmonic oscillator. This study develops a theoretical framework for dissipative quantum trajectories in complex space.
A class of reduced-order models in the theory of waves and stability.
Chapman, C J; Sorokin, S V
2016-02-01
This paper presents a class of approximations to a type of wave field for which the dispersion relation is transcendental. The approximations have two defining characteristics: (i) they give the field shape exactly when the frequency and wavenumber lie on a grid of points in the (frequency, wavenumber) plane and (ii) the approximate dispersion relations are polynomials that pass exactly through points on this grid. Thus, the method is interpolatory in nature, but the interpolation takes place in (frequency, wavenumber) space, rather than in physical space. Full details are presented for a non-trivial example, that of antisymmetric elastic waves in a layer. The method is related to partial fraction expansions and barycentric representations of functions. An asymptotic analysis is presented, involving Stirling's approximation to the psi function, and a logarithmic correction to the polynomial dispersion relation.
NASA Astrophysics Data System (ADS)
Hagino, K.; Balantekin, A. B.; Lwin, N. W.; Thein, Ei Shwe Zin
2018-03-01
The hindrance phenomenon of heavy-ion fusion cross sections at deep subbarrier energies is often accompanied by a maximum of the astrophysical S factor at a threshold energy for fusion hindrance. We argue that this phenomenon can naturally be explained when the fusion excitation function is fitted with two potentials, with a larger (smaller) logarithmic slope at energies lower (higher) than the threshold energy. This analysis clearly suggests that the astrophysical S factor provides a convenient tool to analyze the deep subbarrier hindrance phenomenon, even though the S factor may have a strong energy dependence for heavy-ion systems, unlike that for astrophysical reactions.
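The quantities involved can be illustrated with the standard definitions S(E) = E σ(E) exp(2πη), with Sommerfeld parameter η, and the logarithmic slope L(E) = d ln(Eσ)/dE. The sketch below uses a toy, steeply falling excitation function (not data or fits from the paper) to show where the S-factor maximum appears; the numerical constant in 2πη and the chosen system are assumptions on my part.

```python
# Standard sub-barrier quantities on a toy excitation function (illustrative only):
#   S(E) = E * sigma(E) * exp(2*pi*eta)
#   2*pi*eta ~ 0.9895 * Z1 * Z2 * sqrt(mu_amu / E_MeV)   (centre-of-mass energy)
#   logarithmic slope L(E) = d ln(E*sigma) / dE
import numpy as np

Z1, Z2, mu = 28, 28, 29.0                       # a Ni+Ni-like system (assumed)
E = np.linspace(88.0, 100.0, 25)                # MeV, centre of mass
sigma = 1e-3 * np.exp(-((100.0 - E) ** 2) / 6.0)  # toy, steeply falling cross section

two_pi_eta = 0.9895 * Z1 * Z2 * np.sqrt(mu / E)
S = E * sigma * np.exp(two_pi_eta)              # astrophysical S factor
L = np.gradient(np.log(E * sigma), E)           # logarithmic slope, 1/MeV

# S(E) peaks where L(E) matches the magnitude of d(2*pi*eta)/dE.
print("E at S-factor maximum (toy):", E[np.argmax(S)])
```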
Logarithmic corrections to entropy of magnetically charged AdS4 black holes
NASA Astrophysics Data System (ADS)
Jeon, Imtak; Lal, Shailesh
2017-11-01
Logarithmic terms are quantum corrections to black hole entropy determined completely from classical data, thus providing a strong check for candidate theories of quantum gravity purely from physics in the infrared. We compute these terms in the entropy associated to the horizon of a magnetically charged extremal black hole in AdS4×S7 using the quantum entropy function and discuss the possibility of matching against recently derived microscopic expressions.
Huang, Chih-Fang; Chen, Chao-Tung; Wang, Pei-Ming; Koo, Malcolm
2015-05-01
In this study, cardiometabolic risk associated with betel-quid, alcohol and cigarette use, based on a simple index, the lipid accumulation product (LAP), was investigated in Taiwanese male factory workers. Male factory workers were recruited during their annual routine health examination at a hospital in south Taiwan. The risk of cardiometabolic disorders was estimated by the use of LAP, calculated as (waist circumference [cm] - 65) × (triglyceride concentration [mmol/l]). Multiple linear regression analyses were conducted to assess the risk factors of natural-logarithm-transformed LAP. Of the 815 participants, 40% (325/815) were current alcohol users, 30% (248/815) were current smokers and 7% (53/815) were current betel-quid users. Current betel-quid use, alcohol use, older age, lack of exercise and higher body mass index were found to be significant and independent factors associated with natural-logarithm-transformed LAP. Betel-quid and alcohol use, but not cigarette use, were independent risk factors of logarithm-transformed LAP after adjusting for age, exercise and body mass index in male Taiwanese factory workers. LAP can be considered a simple and useful method for screening of cardiometabolic risk. © The Author 2014. Published by Oxford University Press on behalf of Royal Society of Tropical Medicine and Hygiene. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
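The index and its log transform are simple enough to compute directly; the snippet below only applies the formula for men quoted in the abstract, with illustrative input values.

```python
# Lipid accumulation product for men, as defined above, and its natural-log
# transform used in the regression models. Input values are illustrative.
import math

def lap_male(waist_cm: float, triglycerides_mmol_l: float) -> float:
    """LAP for men: (waist circumference [cm] - 65) x (triglycerides [mmol/L])."""
    return (waist_cm - 65.0) * triglycerides_mmol_l

lap = lap_male(waist_cm=92.0, triglycerides_mmol_l=1.8)
print(f"LAP = {lap:.1f}, ln(LAP) = {math.log(lap):.3f}")
```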
Borges, Nattai R; Reaburn, Peter R; Doering, Thomas M; Argus, Christos K; Driller, Matthew W
2017-04-01
This study aimed to examine autonomic cardiovascular modulation in well-trained masters and young cyclists following high-intensity interval training (HIT). Nine masters (age 55.6 ± 5.0 years) and eight young cyclists (age 25.9 ± 3.0 years) completed a HIT protocol of 6 × 30-s efforts at 175% of peak power output, with 4.5 min of rest between efforts. Immediately following HIT, heart rate and R-R intervals were monitored for 30 min during passive supine recovery. Autonomic modulation was examined by (i) heart rate recovery in the first 60 s of recovery (HRR60); (ii) the time constant of the 30-min heart rate recovery curve (HRRτ); (iii) the time course of the root mean square of successive differences calculated over 30-s R-R interval segments (RMSSD30); and (iv) time and frequency domain analyses of subsequent 5-min R-R interval segments. No significant between-group differences were observed for HRR60 (P = 0.096) or HRRτ (P = 0.617). However, a significant interaction effect was found for RMSSD30 (P = 0.021), with the master cyclists showing higher RMSSD30 values following HIT. Similar results were observed in the time and frequency domain analyses, with significant interaction effects found for the natural logarithm of the RMSSD (P = 0.008), normalised low-frequency power (P = 0.016) and the natural logarithm of high-frequency power (P = 0.012). Following high-intensity interval training, master cyclists demonstrated greater post-exercise parasympathetic reactivation compared with young cyclists, indicating that physical training at older ages has significant effects on autonomic function.
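RMSSD and its natural logarithm, the time-domain quantities tracked above, are straightforward to compute from an R-R interval series; the snippet below shows the calculation on a short, invented series of intervals in milliseconds.

```python
# RMSSD and lnRMSSD from a short series of R-R intervals (illustrative values, ms).
import numpy as np

rr_ms = np.array([812, 798, 805, 790, 783, 801, 815, 808, 796, 789], dtype=float)

diffs = np.diff(rr_ms)                 # successive R-R differences
rmssd = np.sqrt(np.mean(diffs**2))     # root mean square of successive differences
ln_rmssd = np.log(rmssd)               # natural logarithm, reported as LnRMSSD

print(f"RMSSD = {rmssd:.1f} ms, lnRMSSD = {ln_rmssd:.2f}")
```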
Cochlea and other spiral forms in nature and art.
Marinković, Slobodan; Stanković, Predrag; Štrbac, Mile; Tomić, Irina; Ćetković, Mila
2012-01-01
The original appearance of the cochlea and the specific shape of a spiral are interesting for both the scientists and artists. Yet, a correlation between the cochlea and the spiral forms in nature and art has been very rarely mentioned. The aim of this study was to investigate the possible correlation between the cochlea and the other spiral objects in nature, as well as the artistic presentation of the spiral forms. We explored data related to many natural objects and examined 13,625 artworks created by 2049 artists. We also dissected 2 human cochleas and prepared histologic slices of a rat cochlea. The cochlea is a spiral, cone-shaped osseous structure that resembles certain other spiral forms in nature. It was noticed that parts of some plants are arranged in a spiral manner, often according to Fibonacci numbers. Certain animals, their parts, or their products also represent various types of spirals. Many of them, including the cochlea, belong to the logarithmic type. Nature created spiral forms in the living world to pack a larger number of structures in a limited space and also to improve their function. Because the cochlea and other spiral forms have a certain aesthetic value, many artists presented them in their works of art. There is a mathematical and geometric correlation between the cochlea and natural spiral objects, and the same functional reason for their formation. The artists' imagery added a new aspect to those domains. Obviously, the creativity of nature and Homo sapiens has no limits--like the infinite distal part of the spiral. Copyright © 2012 Elsevier Inc. All rights reserved.
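The logarithmic (equiangular) spiral to which the cochlea and many of the natural forms above are compared has the simple form r = a·exp(b·θ), where b fixes a constant pitch angle. The short sketch below generates such a curve; the parameter values are illustrative, not measurements of a cochlea.

```python
# A logarithmic (equiangular) spiral, r = a * exp(b * theta).
# Parameters are illustrative; b sets the constant pitch angle via tan(pitch) = b.
import numpy as np

a, b = 1.0, 0.18                               # b ~ 0.18 gives a pitch of about 10 degrees
theta = np.linspace(0.0, 6.0 * np.pi, 400)     # three turns
r = a * np.exp(b * theta)

x, y = r * np.cos(theta), r * np.sin(theta)    # Cartesian points, e.g. for plotting
pitch_deg = np.degrees(np.arctan(b))
print(f"pitch angle = {pitch_deg:.1f} deg; radius grows {np.exp(2*np.pi*b):.2f}x per turn")
```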
NASA Astrophysics Data System (ADS)
Joyce, G. S.
2012-07-01
The mathematical properties of the face-centred cubic lattice Green function $G(w) \equiv \frac{1}{\pi^3}\int_{0}^{\pi}\int_{0}^{\pi}\int_{0}^{\pi} \frac{d\theta_1\, d\theta_2\, d\theta_3}{w - c(\theta_1)c(\theta_2) - c(\theta_2)c(\theta_3) - c(\theta_3)c(\theta_1)}$ and the associated logarithmic integral $S(w) \equiv \frac{1}{\pi^3}\int_{0}^{\pi}\int_{0}^{\pi}\int_{0}^{\pi} \ln\!\left[\,w - c(\theta_1)c(\theta_2) - c(\theta_2)c(\theta_3) - c(\theta_3)c(\theta_1)\,\right] d\theta_1\, d\theta_2\, d\theta_3$, where $c(\theta_j) \equiv \cos\theta_j$, are investigated.
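As a purely numerical illustration of the definition (not of the analytic results studied in the paper), the function below estimates G(w) by a brute-force midpoint rule for w above the band edge (w > 3), where the integrand is smooth. The grid size is an arbitrary choice.

```python
# Midpoint-rule estimate of the fcc lattice Green function G(w) for w > 3,
# where the integrand is non-singular. Illustrative only.
import numpy as np

def fcc_green(w: float, n: int = 120) -> float:
    th = (np.arange(n) + 0.5) * np.pi / n          # midpoint grid on (0, pi)
    c = np.cos(th)
    c1, c2, c3 = np.meshgrid(c, c, c, indexing="ij")
    integrand = 1.0 / (w - c1 * c2 - c2 * c3 - c3 * c1)
    return integrand.mean()                        # equals (1/pi^3) * triple integral

print(fcc_green(4.0))   # converges towards the exact value as n grows
```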
Optimization of the Monte Carlo code for modeling of photon migration in tissue.
Zołek, Norbert S; Liebert, Adam; Maniewski, Roman
2006-10-01
The Monte Carlo method is frequently used to simulate light transport in turbid media because of its simplicity and flexibility, allowing analysis of complicated geometrical structures. Monte Carlo simulations are, however, time consuming because of the necessity to track the paths of individual photons. The time-consuming computation is mainly associated with the calculation of the logarithmic and trigonometric functions as well as the generation of pseudo-random numbers. In this paper, the Monte Carlo algorithm was developed and optimized by approximation of the logarithmic and trigonometric functions. The approximations were based on polynomial and rational functions, and the errors of these approximations are less than 1% of the values of the original functions. The proposed algorithm was verified by simulations of the time-resolved reflectance at several source-detector separations. The results of the calculation using the approximated algorithm were compared with those of the Monte Carlo simulations obtained with an exact computation of the logarithmic and trigonometric functions, as well as with the solution of the diffusion equation. The errors of the moments of the simulated distributions of times of flight of photons (total number of photons, mean time of flight and variance) are less than 2% for a range of optical properties typical of living tissues. The proposed approximated algorithm speeds up the Monte Carlo simulations by a factor of 4. The developed code can be used on parallel machines, allowing for further acceleration.
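The sketch below illustrates the general idea of replacing an exact logarithm with a cheap approximation: split the argument into mantissa and exponent, approximate the logarithm of the mantissa with a low-order polynomial, and reassemble. This construction and its coefficients are my own generic example, not the specific polynomial and rational fits used in the paper, and the speed gain only materialises when the polynomial is evaluated in compiled code rather than in pure Python.

```python
# Generic log approximation: x = mant * 2**exp (math.frexp), approximate
# ln(mant) on [0.5, 1) by a least-squares polynomial, then add exp*ln(2).
import math
import numpy as np

m = np.linspace(0.5, 1.0, 200)
coeffs = np.polyfit(m, np.log(m), deg=4)          # 4th-order fit of ln on [0.5, 1]

def fast_log(x: float) -> float:
    mant, exp = math.frexp(x)                     # mant in [0.5, 1)
    return float(np.polyval(coeffs, mant)) + exp * math.log(2.0)

for x in (0.03, 1.7, 42.0, 1.3e5):
    err = abs(fast_log(x) - math.log(x)) / abs(math.log(x))
    print(f"x={x:>10g}  relative error = {err:.2e}")
```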
The nature of arms in spiral galaxies. IV. Symmetries and asymmetries
NASA Astrophysics Data System (ADS)
del Río, M. S.; Cepa, J.
1999-01-01
A Fourier analysis of the intensity distribution in the planes of nine spiral galaxies is performed. In terms of the arm classification scheme of Elmegreen & Elmegreen (1987), seven of the galaxies have well-defined arms (classes 12 and 9) and two have intermediate-type arms (class 5). The galaxies studied are NGC 157, 753, 895, 4321, 6764, 6814, 6951, 7479 and 7723. For each object Johnson B-band images are available which are decomposed into angular components, for different angular periodicities. No a priori assumption is made concerning the form of the arms. The base function used in the analysis is a logarithmic spiral. The main result obtained with this method is that the dominant component (or mode) usually changes at corotation. In some cases, this change to a different mode persists only for a short range about corotation, but in other cases the change is permanent. The agreement between pitch angles found with this method and by fitting logarithmic spirals to mean arm positions (del Río & Cepa 1998b, hereafter Paper III) is good, except for those cases where bars are strong and dominant. Finally, a comparison is made with the "symmetrization" method introduced by Elmegreen, Elmegreen & Montenegro (1992, hereafter EEM), which also shows the different symmetric components.
Homotopy method for optimization of variable-specific-impulse low-thrust trajectories
NASA Astrophysics Data System (ADS)
Chi, Zhemin; Yang, Hongwei; Chen, Shiyu; Li, Junfeng
2017-11-01
The homotopy method has been used as a useful tool in solving fuel-optimal trajectories with constant-specific-impulse low thrust. However, the specific impulse is often variable for many practical solar electric power-limited thrusters. This paper investigates the application of the homotopy method for optimization of variable-specific-impulse low-thrust trajectories. Difficulties arise when the two commonly-used homotopy functions are employed for trajectory optimization. The optimal power throttle level and the optimal specific impulse are coupled with the commonly-used quadratic and logarithmic homotopy functions. To overcome these difficulties, a modified logarithmic homotopy function is proposed to serve as a gateway for trajectory optimization, leading to decoupled expressions of both the optimal power throttle level and the optimal specific impulse. The homotopy method based on this homotopy function is proposed. Numerical simulations validate the feasibility and high efficiency of the proposed method.
Concreteness and Psychological Distance in Natural Language Use
Snefjella, Bryor; Kuperman, Victor
2015-01-01
Existing evidence shows that more abstract mental representations are formed, and more abstract language is used, to characterize phenomena which are more distant from self. Yet the precise form of the functional relationship between distance and linguistic abstractness has been unknown. In four studies, we test whether more abstract language is used in textual references to more geographically distant cities (Study 1), times further into the past or future (Study 2), references to more socially distant people (Study 3), and references to a specific topic (Study 4). Using millions of linguistic productions from thousands of social media users, we determine that linguistic concreteness is a curvilinear function of the logarithm of distance and discuss psychological underpinnings of the mathematical properties of the relationship. We also demonstrate that gradient curvilinear effects of geographic and temporal distance on concreteness are near-identical, suggesting uniformity in representation of abstractness along multiple dimensions. PMID:26239108
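The reported functional form, concreteness as a curvilinear function of the logarithm of distance, can be illustrated with a simple quadratic-in-log fit. The data below are synthetic stand-ins generated purely to show the shape of the fit; only the modelling step reflects the abstract.

```python
# Quadratic fit of "concreteness" against ln(distance) on synthetic data.
import numpy as np

rng = np.random.default_rng(7)
distance_km = rng.uniform(1.0, 15_000.0, 1_000)
log_d = np.log(distance_km)
# synthetic concreteness scores with a curvilinear dependence on log distance
concreteness = 3.2 - 0.20 * log_d + 0.012 * log_d**2 + rng.normal(0, 0.05, log_d.size)

c2, c1, c0 = np.polyfit(log_d, concreteness, deg=2)   # coefficients, highest power first
print(f"concreteness ~ {c0:.3f} + {c1:.3f}*ln(d) + {c2:.4f}*ln(d)^2")
```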
One-loop renormalization of Lee-Wick gauge theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grinstein, Benjamin; O'Connell, Donal
2008-11-15
We examine the renormalization of Lee-Wick gauge theory to one-loop order. We show that only knowledge of the wave function renormalization is necessary to determine the running couplings, anomalous dimensions, and vector boson masses. In particular, the logarithmic running of the Lee-Wick vector boson mass is exactly related to the running of the coupling. In the case of an asymptotically free theory, the vector boson mass runs to infinity in the ultraviolet. Thus, the UV fixed point of the pure gauge theory is an ordinary quantum field theory. We find that the coupling runs more quickly in Lee-Wick gauge theory than in ordinary gauge theory, so the Lee-Wick standard model does not naturally unify at any scale. Finally, we present results on the beta function of more general theories containing dimension six operators which differ from previous results in the literature.
Are there common mathematical structures in economics and physics?
NASA Astrophysics Data System (ADS)
Mimkes, Jürgen
2016-12-01
Economics is a field that looks into the future. We may know a few things ahead (ex ante), but most things we only know afterwards (ex post). How can we work in a field where much of the important information is missing? Mathematics gives two answers: 1. Probability theory leads to microeconomics: the Lagrange function optimizes utility under constraints of economic terms (like costs). The utility function is the entropy, the logarithm of probability. The optimal result is given by a probability distribution and an integrating factor. 2. Calculus leads to macroeconomics: in economics we have two production factors, capital and labour. This requires two-dimensional calculus with exact and not-exact differentials, which represent the "ex ante" and "ex post" terms of economics. An integrating factor turns a not-exact term (like income) into an exact term (entropy, the natural production function). The integrating factor is the same as in microeconomics and turns the not-exact field of economics into an exact physical science.
Rayleigh approximation to ground state of the Bose and Coulomb glasses
Ryan, S. D.; Mityushev, V.; Vinokur, V. M.; Berlyand, L.
2015-01-01
Glasses are rigid systems in which competing interactions prevent simultaneous minimization of local energies. This leads to frustration and highly degenerate ground states the nature and properties of which are still far from being thoroughly understood. We report an analytical approach based on the method of functional equations that allows us to construct the Rayleigh approximation to the ground state of a two-dimensional (2D) random Coulomb system with logarithmic interactions. We realize a model for 2D Coulomb glass as a cylindrical type II superconductor containing randomly located columnar defects (CD) which trap superconducting vortices induced by applied magnetic field. Our findings break ground for analytical studies of glassy systems, marking an important step towards understanding their properties. PMID:25592417
Lee, Scott A; Pinnick, David A; Anderson, A
2015-01-01
Raman spectroscopy has been used to study the eigenvectors and eigenvalues of the vibrational modes of crystalline cytidine at 295 K and high pressures by evaluating the logarithmic derivative of the vibrational frequency ω with respect to pressure P, d(ln ω)/dP = (1/ω)(dω/dP). Crystalline samples of molecular materials have strong intramolecular bonds and weak intermolecular bonds. This hierarchy of bonding strengths causes the vibrational optical modes localized within a molecular unit ("internal" modes) to be relatively high in frequency while the modes in which the molecular units vibrate against each other ("external" modes) have relatively low frequencies. The value of the logarithmic derivative is a useful diagnostic probe of the nature of the eigenvector of the vibrational modes because stretching modes (which are predominantly internal to the molecule) have low logarithmic derivatives while external modes have higher logarithmic derivatives. In crystalline cytidine, the modes at 85.8, 101.4, and 110.6 cm^-1 are external, in which the molecules of the unit cell vibrate against each other in either translational or librational motions (or some linear combination thereof). All of the modes above 320 cm^-1 are predominantly internal stretching modes. The remaining modes below 320 cm^-1 include external modes and internal modes, mostly involving either torsional or bending motions of groups of atoms within a molecule.
NASA Astrophysics Data System (ADS)
Goradia, Shantilal
2012-10-01
When Rutherford discovered the nuclear force in 1919, he felt the force he discovered reflected some deviation of Newtonian gravity. Einstein too, in his 1919 paper, published the failure of general relativity and Newtonian gravity to explain the nuclear force and, in his concluding remarks, retracted his earlier introduction of the cosmological constant. Consistent with his genius, we modify Newtonian gravity as probabilistic gravity using natural Planck units for a realistic study of nature. The result is capable of expressing both (1) nuclear force [strong coupling], and (2) Newtonian gravity in one equation, implying in general, in layman's words, that gravity is the cumulative effect of all quantum mechanical forces, which are impossible to measure at long distances. Non-discovery of the graviton and of quantum gravity silently supports our findings. Continuing to climb on the shoulders of the giants enables us to see horizons otherwise unseen, as reflected in our book, "Quantum Consciousness - The Road to Reality," and in physics/0210040, where we derive the fine structure constant as a function of the age of the universe in Planck times, consistent with Gamow's hint, using the natural logarithm, consistent with Feynman's hint.
Noise-induced phase space transport in two-dimensional Hamiltonian systems.
Pogorelov, I V; Kandrup, H E
1999-08-01
First passage time experiments were used to explore the effects of low amplitude noise as a source of accelerated phase space diffusion in two-dimensional Hamiltonian systems, and these effects were then compared with the effects of periodic driving. The objective was to quantify and understand the manner in which "sticky" chaotic orbits that, in the absence of perturbations, are confined near regular islands for very long times, can become "unstuck" much more quickly when subjected to even very weak perturbations. For both noise and periodic driving, the typical escape time scales logarithmically with the amplitude of the perturbation. For white noise, the details seem unimportant: Additive and multiplicative noise typically have very similar effects, and the presence or absence of a friction related to the noise by a fluctuation-dissipation theorem is also largely irrelevant. Allowing for colored noise can significantly decrease the efficacy of the perturbation, but only when the autocorrelation time, which vanishes for white noise, becomes so large that there is little power at frequencies comparable to the natural frequencies of the unperturbed orbit. Similarly, periodic driving is relatively inefficient when the driving frequency is not comparable to these natural frequencies. This suggests that noise-induced extrinsic diffusion, like modulational diffusion associated with periodic driving, is a resonance phenomenon. The logarithmic dependence of the escape time on amplitude reflects the fact that the time required for perturbed and unperturbed orbits to diverge a given distance scales logarithmically in the amplitude of the perturbation.
The natural logarithm transforms the abbreviated injury scale and improves accuracy scoring.
Wang, Xu; Gu, Xiaoming; Zhang, Zhiliang; Qiu, Fang; Zhang, Keming
2012-11-01
The Injury Severity Score (ISS) and the New Injury Severity Score (NISS) are widely used for anatomic severity assessments, but they do not display a linear relation to mortality. The mortality rates differ significantly between pairs of Abbreviated Injury Scale (AIS) triplets that generate the same ISS/NISS total. The Logarithm Injury Severity Score (LISS) is defined by replacing each AIS severity score (1-6) with its natural logarithm raised to the power of 5.53 and multiplied by 1.7987, and then adding the three most severe injuries (i.e., highest AIS), regardless of body region. LISS values were calculated for every patient in three large independent data sets: 3,784, 4,436, and 4,018 patients treated over a six-year period at Class A tertiary comprehensive hospitals in China. The power of LISS to predict mortality was then compared with previously calculated NISS values for the same patients in each of the three data sets. We found that LISS is more predictive of survival (Hangzhou: receiver operating characteristic (ROC): NISS=0.931, LISS=0.949, p=0.006; similarly, Zhejiang and Shenyang: ROC NISS vs. LISS, p<0.05). Moreover, LISS provides a better fit throughout its entire range of prediction (Hosmer-Lemeshow statistic for Hangzhou: NISS=15.76, p=0.027; LISS=13.79, p=0.055; similarly for Zhejiang and Shenyang). LISS should be used as the standard summary measure of human trauma.
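As a rough illustration of the transformation described above (interpreting it as 1.7987 × (ln AIS)^5.53 per injury, which should be checked against the original paper), a minimal Python sketch of a LISS calculation might look like the following; the patient's AIS codes are hypothetical:
    import math

    def liss(ais_codes):
        """Logarithm Injury Severity Score, per the description above (sketch).

        Each AIS severity (1-6) is transformed as 1.7987 * (ln AIS)**5.53,
        and the three most severe injuries (highest transformed values) are
        summed, regardless of body region.
        """
        transformed = sorted((1.7987 * math.log(a) ** 5.53 for a in ais_codes),
                             reverse=True)
        return sum(transformed[:3])

    # Example: a hypothetical patient with AIS codes 5, 4, 4, 2
    print(round(liss([5, 4, 4, 2]), 1))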
Respiratory health in Turkish asbestos cement workers: the role of environmental exposure.
Akkurt, Ibrahim; Onal, Buhara; Demir, Ahmet Uğur; Tüzün, Dilek; Sabir, Handan; Ulusoy, Lütfi; Karadağ, Kaan O; Ersoy, Nihat; Cöplü, Lütfi
2006-08-01
Benign and malignant pleural and lung diseases due to environmental asbestos exposure constitute an important health problem in Turkey. The country has widespread natural deposits of asbestos in rural parts of the central and eastern regions. Few data exist about the respiratory health effects of occupational asbestos exposure in Turkey. A cross-sectional study was conducted to investigate respiratory health effects of occupational asbestos exposure and the contribution of environmental asbestos exposure. Investigations included asbestos dust measurements in the workplace and application of an interviewer-administered questionnaire, a standard posteroanterior chest X-ray and spirometry. Information on birthplace was obtained for 406 workers and used to identify environmental exposure to asbestos, through a map of geographic locations with known asbestos exposure. Asbestos dust concentration in the ambient air of the work sites (fiber/ml) ranged between 0.2 and 0.76 (mean: 0.25, median: 0.22). Environmental exposure to asbestos was determined in 24.4% of the workers. After adjustment for age, smoking, occupational asbestos exposure, and potential risk factors, environmental asbestos exposure was associated with small irregular opacities grade > or = 1/0 (44.2% vs. 26.6%, P < 0.01), FVC% (97.8 vs. 104.5, P < 0.0001), and FEV1% (92.4 vs. 99.9, P < 0.0001). Occupational exposure to asbestos was associated with small irregular opacities grade > or = 1/0 (OR: 2.0, 95% CI: 1.3-3.1, per 1 unit increase in the natural logarithm of fiber/ml) and FEV1/FVC% (beta: 1.1, SEM: 0.54; P < 0.05, per 1 unit increase in the natural logarithm of fiber/ml). Environmental exposure to asbestos could increase the risk of asbestosis and lung function impairment in workers occupationally exposed to asbestos, independent of occupational exposure and smoking. Copyright 2006 Wiley-Liss, Inc.
Incorporating Scale-Dependent Fracture Stiffness for Improved Reservoir Performance Prediction
NASA Astrophysics Data System (ADS)
Crawford, B. R.; Tsenn, M. C.; Homburg, J. M.; Stehle, R. C.; Freysteinson, J. A.; Reese, W. C.
2017-12-01
We present a novel technique for predicting dynamic fracture network response to production-driven changes in effective stress, with the potential for optimizing depletion planning and improving recovery prediction in stress-sensitive naturally fractured reservoirs. A key component of the method involves laboratory geomechanics testing of single fractures in order to develop a unique scaling relationship between fracture normal stiffness and initial mechanical aperture. Details of the workflow are as follows: tensile, opening mode fractures are created in a variety of low matrix permeability rocks with initial, unstressed apertures in the micrometer to millimeter range, as determined from image analyses of X-ray CT scans; subsequent hydrostatic compression of these fractured samples with synchronous radial strain and flow measurement indicates that both mechanical and hydraulic aperture reduction varies linearly with the natural logarithm of effective normal stress; these stress-sensitive single-fracture laboratory observations are then upscaled to networks with fracture populations displaying frequency-length and length-aperture scaling laws commonly exhibited by natural fracture arrays; functional relationships between reservoir pressure reduction and fracture network porosity, compressibility and directional permeabilities as generated by such discrete fracture network modeling are then exported to the reservoir simulator for improved naturally fractured reservoir performance prediction.
Cost drivers and resource allocation in military health care systems.
Fulton, Larry; Lasdon, Leon S; McDaniel, Reuben R
2007-03-01
This study illustrates the feasibility of incorporating technical efficiency considerations in the funding of military hospitals and identifies the primary drivers for hospital costs. Secondary data collected for 24 U.S.-based Army hospitals and medical centers for the years 2001 to 2003 are the basis for this analysis. Technical efficiency was measured by using data envelopment analysis; subsequently, efficiency estimates were included in logarithmic-linear cost models that specified cost as a function of volume, complexity, efficiency, time, and facility type. These logarithmic-linear models were compared against stochastic frontier analysis models. A parsimonious, three-variable, logarithmic-linear model composed of volume, complexity, and efficiency variables exhibited a strong linear relationship with observed costs (R(2) = 0.98). This model also proved reliable in forecasting (R(2) = 0.96). Based on our analysis, as much as $120 million might be reallocated to improve the United States-based Army hospital performance evaluated in this study.
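To illustrate the kind of logarithmic-linear specification described above, here is a hedged sketch that regresses the natural logarithm of cost on the logarithms of volume, complexity, and efficiency; the data, variable names, and resulting coefficients are hypothetical placeholders, not the study's:
    import numpy as np

    # Hypothetical hospital data: cost, workload volume, case-mix complexity, DEA efficiency
    cost       = np.array([12.1, 25.3, 8.7, 40.2, 18.9])    # $ millions
    volume     = np.array([3100, 7200, 2300, 11800, 5400])  # workload units
    complexity = np.array([0.9, 1.1, 0.8, 1.3, 1.0])        # case-mix index
    efficiency = np.array([0.82, 0.95, 0.78, 0.99, 0.88])   # DEA score in (0, 1]

    # Logarithmic-linear model:
    # ln(cost) = b0 + b1*ln(volume) + b2*ln(complexity) + b3*ln(efficiency)
    X = np.column_stack([np.ones_like(cost), np.log(volume),
                         np.log(complexity), np.log(efficiency)])
    y = np.log(cost)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("coefficients:", beta)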
Late-time structure of the Bunch-Davies de Sitter wavefunction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anninos, Dionysios; Anous, Tarek; Freedman, Daniel Z.
2015-11-30
We examine the late time behavior of the Bunch-Davies wavefunction for interacting light fields in a de Sitter background. We use perturbative techniques developed in the framework of AdS/CFT, and analytically continue to compute tree and loop level contributions to the Bunch-Davies wavefunction. We consider self-interacting scalars of general mass, but focus especially on the massless and conformally coupled cases. We show that certain contributions grow logarithmically in conformal time both at tree and loop level. We also consider gauge fields and gravitons. The four-dimensional Fefferman-Graham expansion of classical asymptotically de Sitter solutions is used to show that the wavefunction contains no logarithmic growth in the pure graviton sector at tree level. Finally, assuming a holographic relation between the wavefunction and the partition function of a conformal field theory, we interpret the logarithmic growths in the language of conformal field theory.
Confirming the Lanchestrian linear-logarithmic model of attrition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartley, D.S. III.
1990-12-01
This paper is the fourth in a series of reports on the breakthrough research in historical validation of attrition in conflict. Significant defense policy decisions, including weapons acquisition and arms reduction, are based in part on models of conflict. Most of these models are driven by their attrition algorithms, usually forms of the Lanchester square and linear laws. None of these algorithms have been validated. The results of this paper confirm the results of earlier papers, using a large database of historical results. The homogeneous linear-logarithmic Lanchestrian attrition model is validated to the extent possible with current initial and final force size data and is consistent with the Iwo Jima data. A particular differential linear-logarithmic model is described that fits the data very well. A version of Helmbold's victory predicting parameter is also confirmed, with an associated probability function. 37 refs., 73 figs., 68 tabs.
Uplink Downlink Rate Balancing and Throughput Scaling in FDD Massive MIMO Systems
NASA Astrophysics Data System (ADS)
Bergel, Itsik; Perets, Yona; Shamai, Shlomo
2016-05-01
In this work we extend the concept of uplink-downlink rate balancing to frequency division duplex (FDD) massive MIMO systems. We consider a base station with a large number of antennas serving many single-antenna users. We first show that any unused capacity in the uplink can be traded off for higher throughput in the downlink in a system that uses either dirty paper (DP) coding or linear zero-forcing (ZF) precoding. We then also study the scaling of the system throughput with the number of antennas in the cases of linear beamforming (BF) precoding, ZF precoding, and DP coding. We show that the downlink throughput is proportional to the logarithm of the number of antennas. While this logarithmic scaling is lower than the linear scaling of the rate in the uplink, it can still bring significant throughput gains. For example, we demonstrate through analysis and simulation that increasing the number of antennas from 4 to 128 will increase the throughput by more than a factor of 5. We also show that a logarithmic scaling of downlink throughput as a function of the number of receive antennas can be achieved even when the number of transmit antennas only increases logarithmically with the number of receive antennas.
SAR-based change detection using hypothesis testing and Markov random field modelling
NASA Astrophysics Data System (ADS)
Cao, W.; Martinis, S.
2015-04-01
The objective of this study is to automatically detect changed areas caused by natural disasters from bi-temporal co-registered and calibrated TerraSAR-X data. The technique in this paper consists of two steps: Firstly, an automatic coarse detection step is applied based on a statistical hypothesis test for initializing the classification. The original analytical formula as proposed in the constant false alarm rate (CFAR) edge detector is reviewed and rewritten in a compact form of the incomplete beta function, which is a builtin routine in commercial scientific software such as MATLAB and IDL. Secondly, a post-classification step is introduced to optimize the noisy classification result in the previous step. Generally, an optimization problem can be formulated as a Markov random field (MRF) on which the quality of a classification is measured by an energy function. The optimal classification based on the MRF is related to the lowest energy value. Previous studies provide methods for the optimization problem using MRFs, such as the iterated conditional modes (ICM) algorithm. Recently, a novel algorithm was presented based on graph-cut theory. This method transforms a MRF to an equivalent graph and solves the optimization problem by a max-flow/min-cut algorithm on the graph. In this study this graph-cut algorithm is applied iteratively to improve the coarse classification. At each iteration the parameters of the energy function for the current classification are set by the logarithmic probability density function (PDF). The relevant parameters are estimated by the method of logarithmic cumulants (MoLC). Experiments are performed using two flood events in Germany and Australia in 2011 and a forest fire on La Palma in 2009 using pre- and post-event TerraSAR-X data. The results show convincing coarse classifications and considerable improvement by the graph-cut post-classification step.
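The method of logarithmic cumulants (MoLC) mentioned above matches empirical log-cumulants of the data to the analytical log-cumulants of the chosen PDF family; the empirical side can be sketched as below (the gamma-distributed test samples are illustrative, and the subsequent parameter inversion, which depends on the PDF family, is omitted):
    import numpy as np

    def log_cumulants(intensity):
        """First three empirical log-cumulants of positive-valued samples."""
        z = np.log(intensity)
        k1 = z.mean()                  # kappa_1 = E[ln x]
        k2 = z.var()                   # kappa_2 = Var[ln x]
        k3 = np.mean((z - k1) ** 3)    # kappa_3 = third central moment of ln x
        return k1, k2, k3

    # Example with synthetic gamma-distributed intensities (speckle-like)
    rng = np.random.default_rng(0)
    samples = rng.gamma(shape=4.0, scale=0.25, size=10000)
    print(log_cumulants(samples))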
Evaporation Loss of Light Elements as a Function of Cooling Rate: Logarithmic Law
NASA Technical Reports Server (NTRS)
Xiong, Yong-Liang; Hewins, Roger H.
2003-01-01
Knowledge about the evaporation loss of light elements is important to our understanding of chondrule formation processes. The evaporative loss of light elements (such as B and Li) as a function of cooling rate is of special interest because recent investigations of the distribution of Li, Be and B in meteoritic chondrules have revealed that Li varies by 25 times, and B and Be varies by about 10 times. Therefore, if we can extrapolate and interpolate with confidence the evaporation loss of B and Li (and other light elements such as K, Na) at a wide range of cooling rates of interest based upon limited experimental data, we would be able to assess the full range of scenarios relating to chondrule formation processes. Here, we propose that evaporation loss of light elements as a function of cooling rate should obey the logarithmic law.
Conformal amplitude hierarchy and the Poincaré disk
NASA Astrophysics Data System (ADS)
Shimada, Hirohiko
2018-02-01
The amplitude for the singlet channels in the 4-point function of the fundamental field in the conformal field theory of the 2d O(n) model is studied as a function of n. For a generic value of n, the 4-point function has infinitely many amplitudes, whose landscape can be very spiky as the higher amplitude changes its sign many times at the simple poles, which generalize the unique pole of the energy operator amplitude at n = 0. In the standard parameterization of n by an angle in units of π, we find that the zeros and poles occur at the rational angles, forming a hierarchical tree structure inherent in the Poincaré disk. Some relation between the amplitude and the Farey path, a piecewise geodesic that visits these zeros and poles, is suggested. In this hierarchy, the symmetry of the congruence subgroup Γ(2) of SL(2, ℤ) naturally arises from the two clearly distinct even/odd classes of the rational angles, in which one respectively gets the truncated operator algebras and the logarithmic 4-point functions.
NASA Astrophysics Data System (ADS)
Antonov, N. V.; Gulitskiy, N. M.
2015-01-01
Inertial-range asymptotic behavior of a vector (e.g., magnetic) field, passively advected by a strongly anisotropic turbulent flow, is studied by means of the field-theoretic renormalization group and the operator product expansion. The advecting velocity field is Gaussian, not correlated in time, with the pair correlation function of the form ∝ δ(t − t′)/k_⊥^(d−1+ξ), where k_⊥ = |k_⊥| and k_⊥ is the component of the wave vector perpendicular to the distinguished direction ("direction of the flow"), the d-dimensional generalization of the ensemble introduced by Avellaneda and Majda [Commun. Math. Phys. 131, 381 (1990), 10.1007/BF02161420]. The stochastic advection-diffusion equation for the transverse (divergence-free) vector field includes, as special cases, the kinematic dynamo model for magnetohydrodynamic turbulence and the linearized Navier-Stokes equation. In contrast to the well-known isotropic Kraichnan's model, where various correlation functions exhibit anomalous scaling behavior with infinite sets of anomalous exponents, here the dependence on the integral turbulence scale L has a logarithmic behavior: instead of power-like corrections to ordinary scaling, determined by naive (canonical) dimensions, the anomalies manifest themselves as polynomials of logarithms of L. The key point is that the matrices of scaling dimensions of the relevant families of composite operators appear nilpotent and cannot be diagonalized. The detailed proof of this fact is given for the correlation functions of arbitrary order.
The critical role of logarithmic transformation in Nernstian equilibrium potential calculations.
Sawyer, Jemima E R; Hennebry, James E; Revill, Alexander; Brown, Angus M
2017-06-01
The membrane potential, arising from uneven distribution of ions across cell membranes containing selectively permeable ion channels, is of fundamental importance to cell signaling. The necessity of maintaining the membrane potential may be appreciated by expressing Ohm's law as current = voltage/resistance and recognizing that no current flows when voltage = 0, i.e., transmembrane voltage gradients, created by uneven transmembrane ion concentrations, are an absolute requirement for the generation of currents that precipitate the action and synaptic potentials that consume >80% of the brain's energy budget and underlie the electrical activity that defines brain function. The concept of the equilibrium potential is vital to understanding the origins of the membrane potential. The equilibrium potential defines a potential at which there is no net transmembrane ion flux, where the work created by the concentration gradient is balanced by the transmembrane voltage difference, and derives from a relationship describing the work done by the diffusion of ions down a concentration gradient. The Nernst equation predicts the equilibrium potential and, as such, is fundamental to understanding the interplay between transmembrane ion concentrations and equilibrium potentials. Logarithmic transformation of the ratio of internal and external ion concentrations lies at the heart of the Nernst equation, but most undergraduate neuroscience students have little understanding of the logarithmic function. To compound this, no current undergraduate neuroscience textbooks describe the effect of logarithmic transformation in appreciable detail, leaving the majority of students with little insight into how ion concentrations determine, or how ion perturbations alter, the membrane potential. Copyright © 2017 the American Physiological Society.
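A minimal worked example of the logarithmic transformation at the heart of the Nernst equation, E = (RT/zF)·ln([ion]_out/[ion]_in), using typical mammalian potassium concentrations (the concentration values are illustrative):
    import math

    R = 8.314       # J mol^-1 K^-1
    T = 310.0       # K (37 degrees C)
    F = 96485.0     # C mol^-1

    def nernst(conc_out, conc_in, z):
        """Equilibrium potential in millivolts."""
        return 1000.0 * (R * T) / (z * F) * math.log(conc_out / conc_in)

    # Potassium: ~5 mM outside, ~140 mM inside, z = +1
    print(f"E_K ~ {nernst(5.0, 140.0, 1):.1f} mV")   # about -89 mV at 37 degrees C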
Wave propagation model of heat conduction and group speed
NASA Astrophysics Data System (ADS)
Zhang, Long; Zhang, Xiaomin; Peng, Song
2018-03-01
In view of the finite relaxation model of non-Fourier's law, the Cattaneo and Vernotte (CV) model and Fourier's law are presented in this work for comparing wave propagation modes. Independent variable translation is applied to solve the partial differential equation. Results show that the general form of the time spatial distribution of temperature for the three media comprises two solutions: those corresponding to the positive and negative logarithmic heating rates. The former shows that a group of heat waves whose spatial distribution follows the exponential function law propagates at a group speed; the speed of propagation is related to the logarithmic heating rate. The total speed of all the possible heat waves can be combined to form the group speed of the wave propagation. The latter indicates that the spatial distribution of temperature, which follows the exponential function law, decays with time. These features show that propagation accelerates when heated and decelerates when cooled. For the model media that follow Fourier's law and correspond to the positive heat rate of heat conduction, the propagation mode is also considered the propagation of a group of heat waves because the group speed has no upper bound. For the finite relaxation model with non-Fourier media, the interval of group speed is bounded and the maximum speed can be obtained when the logarithmic heating rate is exactly the reciprocal of relaxation time. And for the CV model with a non-Fourier medium, the interval of group speed is also bounded and the maximum value can be obtained when the logarithmic heating rate is infinite.
Refinement of Scoring Procedures for the Basic Attributes Test (BAT) Battery
1993-03-01
see Carretta, 1991). Research on the BAT summary scores has shown that some of them (a) are significantly positively skewed and platykurtic, (b) contain...for positively skewed and platykurtic data distributions, and those that were applied here to the BAT data, are the square-root and natural logarithm
7 CFR 400.303 - Initial selection criteria.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 6 2010-01-01 2010-01-01 false Initial selection criteria. 400.303 Section 400.303... Regulations for the 1991 and Succeeding Crop Years § 400.303 Initial selection criteria. (a) Nonstandard... .30 or greater; and (4) Either of the following apply: (i) The natural logarithm of the cumulative...
Moreno-Artero, E; Redondo, P
2015-10-01
A large number of flaps, particularly rotation and transposition flaps, have been described for the closure of skin defects left by oncologic surgery of the nose. The logarithmic spiral flap is a variant of the rotation flap. We present a series of 15 patients with different types of skin tumor on the nose. The skin defect resulting from excision of the tumor by micrographic surgery was reconstructed using various forms of the logarithmic spiral flap. There are 3 essential aspects to flap design: commencement of the pedicle at the upper or lower border of the wound, a width of the distal end of the flap equal to the vertical diameter of the defect, and a progressive increase in the radius of the spiral from the distal end of the flap to its base. The cosmetic and functional results of surgical reconstruction were satisfactory, and no patient required additional treatment to improve scar appearance. The logarithmic spiral flap is useful for the closure of circular or oval defects situated on the lateral surface of the nose and nasal ala. The flap initiates at one of the borders of the wound as a pedicle with a radius that increases progressively to create a spiral. We propose the logarithmic spiral flap as an excellent option for the closure of circular or oval defects of the nose. Copyright © 2015 Elsevier España, S.L.U. and AEDV. All rights reserved.
Effective field theory approach to heavy quark fragmentation
Fickinger, Michael; Fleming, Sean; Kim, Chul; ...
2016-11-17
Using an approach based on Soft Collinear Effective Theory (SCET) and Heavy Quark Effective Theory (HQET) we determine the b-quark fragmentation function from electron-positron annihilation data at the Z-boson peak at next-to-next-to leading order with next-to-next-to leading log resummation of DGLAP logarithms, and next-to-next-to-next-to leading log resummation of endpoint logarithms. This analysis improves, by one order, the previous extraction of the b-quark fragmentation function. We find that while the addition of the next order in the calculation does not much shift the extracted form of the fragmentation function, it does reduce theoretical errors indicating that the expansion is converging. Using an approach based on effective field theory allows us to systematically control theoretical errors. Furthermore, while the fits of theory to data are generally good, the fits seem to be hinting that higher order correction from HQET may be needed to explain the b-quark fragmentation function at smaller values of momentum fraction.
Detection of a Divot in the Scattering Population's Size Distribution
NASA Astrophysics Data System (ADS)
Shankman, Cory; Gladman, B.; Kaib, N.; Kavelaars, J.; Petit, J.
2012-10-01
Via joint analysis of the calibrated Canada-France Ecliptic Plane Survey (CFEPS; Petit et al. 2011, AJ 142, 131), which found scattering Kuiper Belt objects, and models of their orbital distribution, we show that there should be enough kilometer-scale scattering objects to supply the Jupiter Family Comets (JFCs). Surprisingly, our analysis favours a divot (an abrupt drop and then recovery) in the size distribution at a diameter of 100 km, which results in a temporary flattening of the cumulative size distribution until it returns to a collisional equilibrium slope. Using the absolutely calibrated CFEPS survey we estimate that there are 2 × 10^9 scattering objects with H_g < 18, which is sufficient to provide the currently estimated JFC resupply rate. We also find that the primordial disk from which the scattering objects came must have had a "hot" initial inclination distribution before the giant planets scattered it out. We find that a divot, in the absolute magnitude number distribution, with a bright-end logarithmic slope of 0.8, a drop at a g-band H magnitude of 9, and a faint-side logarithmic slope of 0.5 satisfies our data and simultaneously explains several existing nagging puzzles about Kuiper Belt luminosity functions (see Gladman et al., this meeting). Multiple explanations of how such a feature could have arisen will be discussed. This research was supported by the Natural Sciences and Engineering Research Council of Canada.
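A hedged sketch of a divot-shaped differential absolute-magnitude distribution with the slopes quoted above (bright-side logarithmic slope 0.8, break at H_g = 9, faint-side slope 0.5); the depth of the drop ("contrast") and the normalization are not given in the abstract and are set to arbitrary illustrative values:
    import numpy as np

    def divot_differential(H, alpha_bright=0.8, alpha_faint=0.5,
                           H_break=9.0, contrast=6.0):
        """Differential number of objects per unit H (arbitrary normalization).

        dN/dH ~ 10**(alpha_bright * H)                      for H <  H_break
        dN/dH ~ (value at break / contrast)
                * 10**(alpha_faint * (H - H_break))         for H >= H_break
        """
        H = np.asarray(H, dtype=float)
        bright = 10.0 ** (alpha_bright * H)
        at_break = 10.0 ** (alpha_bright * H_break)
        faint = (at_break / contrast) * 10.0 ** (alpha_faint * (H - H_break))
        return np.where(H < H_break, bright, faint)

    H = np.linspace(5, 12, 8)
    print(divot_differential(H))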
Significant Figure Rules for General Arithmetic Functions.
ERIC Educational Resources Information Center
Graham, D. M.
1989-01-01
Provides some significant figure rules used in chemistry including the general theoretical basis; logarithms and antilogarithms; exponentiation (with exactly known exponents); sines and cosines; and the extreme value rule. (YP)
Synthesis and Characterization of High-Dielectric-Constant Nanographite-Polyurethane Composite
NASA Astrophysics Data System (ADS)
Mishra, Praveen; Bhat, Badekai Ramachandra; Bhattacharya, B.; Mehra, R. M.
2018-05-01
In the face of ever-growing demand for capacitors and energy storage devices, development of high-dielectric-constant materials is of paramount importance. Among various dielectric materials available, polymer dielectrics are preferred for their good processability. We report herein synthesis and characterization of nanographite-polyurethane composite with high dielectric constant. Nanographite showed good dispersibility in the polyurethane matrix. The thermosetting nature of polyurethane gives the composite the ability to withstand higher temperature without melting. The resultant composite was studied for its dielectric constant (ɛ) as a function of frequency. The composite exhibited logarithmic variation of ɛ from 3000 at 100 Hz to 225 at 60 kHz. The material also exhibited stable dissipation factor (tan δ) across the applied frequencies, suggesting its ability to resist current leakage.
NASA Astrophysics Data System (ADS)
Bremer, James
2018-05-01
We describe a method for the numerical evaluation of normalized versions of the associated Legendre functions P_ν^(-μ) and Q_ν^(-μ) of degrees 0 ≤ ν ≤ 1,000,000 and orders -ν ≤ μ ≤ ν for arguments in the interval (-1, 1). Our algorithm, which runs in time independent of ν and μ, is based on the fact that while the associated Legendre functions themselves are extremely expensive to represent via polynomial expansions, the logarithms of certain solutions of the differential equation defining them are not. We exploit this by numerically precomputing the logarithms of carefully chosen solutions of the associated Legendre differential equation and representing them via piecewise trivariate Chebyshev expansions. These precomputed expansions, which allow for the rapid evaluation of the associated Legendre functions over a large swath of the parameter domain mentioned above, are supplemented with asymptotic and series expansions in order to cover it entirely. The results of numerical experiments demonstrating the efficacy of our approach are presented, and our code for evaluating the associated Legendre functions is publicly available.
Evaluating the Use of Problem-Based Video Podcasts to Teach Mathematics in Higher Education
ERIC Educational Resources Information Center
Kay, Robin; Kletskin, Ilona
2012-01-01
Problem-based video podcasts provide short, web-based, audio-visual explanations of how to solve specific procedural problems in subject areas such as mathematics or science. A series of 59 problem-based video podcasts covering five key areas (operations with functions, solving equations, linear functions, exponential and logarithmic functions,…
Design, construction and calibration of a portable boundary layer wind tunnel for field use
USDA-ARS's Scientific Manuscript database
Wind tunnels have been used for several decades to study wind erosion processes. Portable wind tunnels offer the advantage of testing natural surfaces in the field, but they must be carefully designed to insure that a logarithmic boundary layer is formed and that wind erosion processes may develop ...
Radionuclides in Soils Along a Mountain-Basin Transect in the Koratepa Mountains of Uzbekistan
USDA-ARS's Scientific Manuscript database
USING SCIR TO PREDICT THE RATE OF BIOREMEDIATION OF MTBE
The 13C of MTBE was determined in ground water from four wells at a gasoline spill site in Orange County California. The natural logarithm of the fraction of MTBE remaining after biodegradation was estimated by subtracting the 13C of MTBE in gasoline from the 13C of MTBE in th...
Howard, Robert W
2014-09-01
The power law of practice holds that a power function best interrelates skill performance and amount of practice. However, the law's validity and generality are moot. Some researchers argue that it is an artifact of averaging individual exponential curves, while others question whether the law generalizes to complex skills and to performance measures other than response time. The present study tested the power law's generality for the development over many years of a very complex cognitive skill, chess playing, with 387 skilled participants, most of whom were grandmasters. A power or logarithmic function best fit grouped data, but individuals showed much variability. An exponential function usually was the worst fit to individual data. Groups differing in chess talent were compared, and a power function best fit the group curve for the more talented players while a quadratic function best fit that for the less talented. After extreme amounts of practice, a logarithmic function best fit grouped data but a quadratic function best fit most individual curves. Individual variability is great, and neither the power law nor an exponential law is the best description of individual chess skill development. Copyright © 2014 Elsevier B.V. All rights reserved.
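A minimal sketch of the kind of model comparison described above, fitting power, logarithmic, exponential, and quadratic curves to a practice-versus-performance series with scipy; the synthetic data, noise level, and starting parameters are illustrative, not the study's:
    import numpy as np
    from scipy.optimize import curve_fit

    def power_law(t, a, b, c):   return a * t ** (-b) + c
    def log_law(t, a, b):        return a + b * np.log(t)
    def exp_law(t, a, b, c):     return a * np.exp(-b * t) + c
    def quad_law(t, a, b, c):    return a + b * t + c * t ** 2

    # Synthetic "years of practice" vs. performance data (log-shaped plus noise)
    rng = np.random.default_rng(1)
    t = np.arange(1, 21, dtype=float)
    y = 2300 + 120 * np.log(t) + rng.normal(0, 15, t.size)

    for name, f, p0 in [("power", power_law, (-300, 0.5, 2600)),
                        ("log",   log_law,   (2300, 100)),
                        ("exp",   exp_law,   (-300, 0.3, 2600)),
                        ("quad",  quad_law,  (2300, 30, -1))]:
        params, _ = curve_fit(f, t, y, p0=p0, maxfev=10000)
        rss = np.sum((y - f(t, *params)) ** 2)   # residual sum of squares
        print(f"{name:5s}  RSS = {rss:.0f}")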
EOS Interpolation and Thermodynamic Consistency
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gammel, J. Tinka
2015-11-16
As discussed in LA-UR-08-05451, the current interpolator used by Grizzly, OpenSesame, EOSPAC, and similar routines is the rational function interpolator from Kerley. While the rational function interpolator is well suited for interpolation on sparse grids with logarithmic spacing and preserves monotonicity in 1-d, it has some known problems.
Logarithmic Compression of Sensory Signals within the Dendritic Tree of a Collision-Sensitive Neuron
2012-01-01
Neurons in a variety of species, both vertebrate and invertebrate, encode the kinematics of objects approaching on a collision course through a time-varying firing rate profile that initially increases, then peaks, and eventually decays as collision becomes imminent. In this temporal profile, the peak firing rate signals when the approaching object's subtended size reaches an angular threshold, an event which has been related to the timing of escape behaviors. In a locust neuron called the lobula giant motion detector (LGMD), the biophysical basis of this angular threshold computation relies on a multiplicative combination of the object's angular size and speed, achieved through a logarithmic-exponential transform. To understand how this transform is implemented, we modeled the encoding of angular velocity along the pathway leading to the LGMD based on the experimentally determined activation pattern of its presynaptic neurons. These simulations show that the logarithmic transform of angular speed occurs between the synaptic conductances activated by the approaching object onto the LGMD's dendritic tree and its membrane potential at the spike initiation zone. Thus, we demonstrate an example of how a single neuron's dendritic tree implements a mathematical step in a neural computation important for natural behavior. PMID:22492048
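A hedged sketch of the multiplicative combination described above, for a disc of half-size R approaching at constant speed v: the angular size is θ(t) = 2·arctan(R/(v·(t_coll − t))), and a firing-rate proxy proportional to θ'(t)·exp(−α·θ(t)) is evaluated in its logarithmic-exponential form exp(ln θ'(t) − α·θ(t)); all parameter values here are illustrative, not the LGMD's measured ones:
    import numpy as np

    R, v, alpha = 0.1, 2.0, 5.0          # object half-size (m), speed (m/s), gain
    t = np.linspace(-3.0, -0.05, 500)    # time relative to collision (s), t < 0

    theta = 2.0 * np.arctan(R / (v * (-t)))           # angular size (rad)
    theta_dot = np.gradient(theta, t)                 # angular expansion rate
    rate = np.exp(np.log(theta_dot) - alpha * theta)  # log-exp form of theta_dot * exp(-alpha*theta)

    i_peak = np.argmax(rate)
    print(f"peak occurs {abs(t[i_peak]):.3f} s before collision, "
          f"at angular size {np.degrees(theta[i_peak]):.1f} deg")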
Logarithmic compression methods for spectral data
Dunham, Mark E.
2003-01-01
A method is provided for logarithmic compression, transmission, and expansion of spectral data. A log Gabor transformation is made of incoming time series data to output spectral phase and logarithmic magnitude values. The output phase and logarithmic magnitude values are compressed by selecting only magnitude values above a selected threshold and corresponding phase values to transmit compressed phase and logarithmic magnitude values. A reverse log Gabor transformation is then performed on the transmitted phase and logarithmic magnitude values to output transmitted time series data to a user.
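A simplified stand-in for the compress/expand cycle described above, using a plain FFT in place of the log Gabor transform (which is considerably more involved): the logarithmic magnitude and phase are kept only where the log-magnitude exceeds a threshold, and the signal is rebuilt from the retained coefficients. The signal and threshold are hypothetical:
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 1024, endpoint=False)
    x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t) \
        + 0.1 * rng.standard_normal(t.size)

    # Forward transform: keep logarithmic magnitude and phase
    X = np.fft.rfft(x)
    log_mag, phase = np.log(np.abs(X) + 1e-12), np.angle(X)

    # "Compression": retain only coefficients whose log-magnitude exceeds a threshold
    keep = log_mag > 3.0
    print(f"retained {keep.sum()} of {keep.size} coefficients")

    # "Expansion": rebuild the spectrum from the retained values and invert
    X_rec = np.where(keep, np.exp(log_mag) * np.exp(1j * phase), 0.0)
    x_rec = np.fft.irfft(X_rec, n=t.size)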
ERIC Educational Resources Information Center
Tindle, C. T.
1996-01-01
Describes a method to teach acoustics to students with minimal mathematical backgrounds. Discusses the uses of charts in teaching topics of sound intensity level and the decibel scale. Avoids the difficulties of working with logarithm functions. (JRH)
Santos, Abel; Law, Cheryl Suwen; Chin Lei, Dominique Wong; Pereira, Taj; Losic, Dusan
2016-11-03
In this study, we present an advanced nanofabrication approach to produce gradient-index photonic crystal structures based on nanoporous anodic alumina. An apodization strategy is for the first time applied to a sinusoidal pulse anodisation process in order to engineer the photonic stop band of nanoporous anodic alumina (NAA) in depth. Four apodization functions are explored, including linear positive, linear negative, logarithmic positive and logarithmic negative, with the aim of finely tuning the characteristic photonic stop band of these photonic crystal structures. We systematically analyse the effect of the amplitude difference (from 0.105 to 0.840 mA cm -2 ), the pore widening time (from 0 to 6 min), the anodisation period (from 650 to 950 s) and the anodisation time (from 15 to 30 h) on the quality and the position of the characteristic photonic stop band and the interferometric colour of these photonic crystal structures using the aforementioned apodization functions. Our results reveal that a logarithmic negative apodisation function is the most optimal approach to obtain unprecedented well-resolved and narrow photonic stop bands across the UV-visible-NIR spectrum of NAA-based gradient-index photonic crystals. Our study establishes a fully comprehensive rationale towards the development of unique NAA-based photonic crystal structures with finely engineered optical properties for advanced photonic devices such as ultra-sensitive optical sensors, selective optical filters and all-optical platforms for quantum computing.
Infrared Standards to Improve Chamber 7V Beam Irradiance Calibrations
1981-01-01
In addition to high thermometric resolution, thermistors have another useful operational feature in that the natural log of the voltage drop is very...conjunction with a third provided by AEDC. These equations are known as: Eq. i. T R - ÷ AIT 1 where T 1 is the thermometric temperature indicated by...3.58078 x 10^-5. Equation 3 is the thermometric calibration provided by AEDC relating to the natural logarithm of V I. It is repeated here for
Chen, Chen; Xie, Yuanchang
2016-06-01
Annual Average Daily Traffic (AADT) is often considered a main covariate for predicting crash frequencies at urban and suburban intersections. A linear functional form is typically assumed for the Safety Performance Function (SPF) to describe the relationship between the natural logarithm of expected crash frequency and covariates derived from AADTs. Such a linearity assumption has been questioned by many researchers. This study applies Generalized Additive Models (GAMs) and Piecewise Linear Negative Binomial (PLNB) regression models to fit intersection crash data. Various covariates derived from minor- and major-approach AADTs are considered. Three different dependent variables are modeled, which are total multiple-vehicle crashes, rear-end crashes, and angle crashes. The modeling results suggest that a nonlinear functional form may be more appropriate. Also, the results show that it is important to take into consideration the joint safety effects of multiple covariates. Additionally, it is found that the ratio of minor- to major-approach AADT has a varying impact on intersection safety and deserves further investigation. Copyright © 2016 Elsevier Ltd. All rights reserved.
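For context, the conventional linear-in-logs SPF questioned above, ln(μ) = β0 + β1·ln(AADT_major) + β2·ln(AADT_minor), can be fit as a negative binomial GLM; this hedged sketch uses statsmodels on synthetic data (all values hypothetical) and represents the baseline against which GAM or piecewise-linear alternatives would be compared:
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200
    aadt_major = rng.uniform(5000, 40000, n)
    aadt_minor = rng.uniform(500, 8000, n)

    # Synthetic crash counts generated from a log-linear mean function
    mu = np.exp(-8.0 + 0.7 * np.log(aadt_major) + 0.4 * np.log(aadt_minor))
    crashes = rng.poisson(mu)   # Poisson stand-in for simplicity

    X = sm.add_constant(np.column_stack([np.log(aadt_major), np.log(aadt_minor)]))
    model = sm.GLM(crashes, X, family=sm.families.NegativeBinomial(alpha=0.5))
    print(model.fit().params)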
On the effectiveness of noise masks: naturalistic vs. un-naturalistic image statistics.
Hansen, Bruce C; Hess, Robert F
2012-05-01
It has been argued that the human visual system is optimized for identification of broadband objects embedded in stimuli possessing orientation-averaged power spectra fall-offs that obey the 1/f^β relationship typically observed in natural scene imagery (i.e., β=2.0 on logarithmic axes). Here, we were interested in whether individual spatial channels leading to recognition are functionally optimized for narrowband targets when masked by noise possessing naturalistic image statistics (β=2.0). The current study therefore explores the impact of variable β noise masks on the identification of narrowband target stimuli ranging in spatial complexity, while simultaneously controlling for physical or perceived differences between the masks. The results show that β=2.0 noise masks produce the largest identification thresholds regardless of target complexity, and thus do not seem to yield functionally optimized channel processing. The differential masking effects are discussed in the context of contrast gain control. Copyright © 2012 Elsevier Ltd. All rights reserved.
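A minimal sketch of generating a noise mask whose rotationally averaged power spectrum falls off approximately as 1/f^β (here β = 2.0, the naturalistic case); the image size, random seed, and normalization are arbitrary choices:
    import numpy as np

    def noise_mask(size=256, beta=2.0, seed=0):
        """Grayscale noise image with ~1/f**beta rotationally averaged power spectrum."""
        rng = np.random.default_rng(seed)
        fy = np.fft.fftfreq(size)[:, None]
        fx = np.fft.fftfreq(size)[None, :]
        f = np.sqrt(fx ** 2 + fy ** 2)
        f[0, 0] = 1.0                                   # avoid division by zero at DC
        amplitude = f ** (-beta / 2.0)                  # power ~ amplitude**2 ~ 1/f**beta
        phase = np.exp(1j * rng.uniform(0, 2 * np.pi, (size, size)))
        img = np.fft.ifft2(amplitude * phase).real      # random-phase synthesis
        return (img - img.mean()) / img.std()           # normalize contrast

    mask = noise_mask(beta=2.0)
    print(mask.shape, round(float(mask.std()), 2))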
NASA Astrophysics Data System (ADS)
Stepina, I. A.; Popov, V. E.
2011-06-01
The exchangeable portion of the selectively sorbed 137Cs extractable by a 1 M ammonium acetate solution (α_Ex) for soils, illite, bentonite, and tripolite was found to increase with the increasing concentration of the competitive cation M+ (K+ or NH4+) and can be approximated by a logarithmic relationship. For clinoptilolite, the values of α_Ex did not depend on the concentration of M+. The expression 1 − α_Ex(C_M = n)/α_Ex(C_M = 16) as a function of the M+ concentration (where α_Ex(C_M = 16) is the α_Ex value at a competitive cation concentration equal to 16 mmol/dm3) was proposed to compare the dependence of α_Ex on the concentration of K+ or NH4+ in different sorbents. For soils and illite, these dependences almost coincided, which indicated that the selective sorption of 137Cs in soils is determined by the presence of illite-group minerals.
Neyman-Pearson biometric score fusion as an extension of the sum rule
NASA Astrophysics Data System (ADS)
Hube, Jens Peter
2007-04-01
We define the biometric performance invariance under strictly monotonic functions on match scores as normalization symmetry. We use this symmetry to clarify the essential difference between the standard score-level fusion approaches of sum rule and Neyman-Pearson. We then express Neyman-Pearson fusion assuming match scores defined using false acceptance rates on a logarithmic scale. We show that by stating Neyman-Pearson in this form, it reduces to sum rule fusion for ROC curves with logarithmic slope. We also introduce a one parameter model of biometric performance and use it to express Neyman-Pearson fusion as a weighted sum rule.
Precise Determination of the Absorption Maximum in Wide Bands
ERIC Educational Resources Information Center
Eriksson, Karl-Hugo; And Others
1977-01-01
A precise method of determining absorption maxima where Gaussian functions occur is described. The method is based on a logarithmic transformation of the Gaussian equation and is suited for a mini-computer. (MR)
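The idea can be sketched as follows: taking the natural logarithm of a Gaussian band y = A·exp(−(x − x0)²/(2σ²)) gives ln y = ln A − (x − x0)²/(2σ²), a parabola whose vertex locates the absorption maximum, so a quadratic fit ln y ≈ c0 + c1·x + c2·x² yields x0 = −c1/(2·c2). The data below are synthetic:
    import numpy as np

    # Synthetic wide absorption band with a little noise
    rng = np.random.default_rng(0)
    x = np.linspace(400, 600, 81)                    # wavelength, nm
    y = 0.85 * np.exp(-(x - 512.3) ** 2 / (2 * 40 ** 2)) + rng.normal(0, 0.002, x.size)

    # Quadratic fit to ln(y) near the band centre (keep points well above the noise)
    m = y > 0.3 * y.max()
    c2, c1, c0 = np.polyfit(x[m], np.log(y[m]), 2)   # ln y ~ c2*x^2 + c1*x + c0
    x_max = -c1 / (2 * c2)
    print(f"estimated absorption maximum: {x_max:.1f} nm")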
Generalization and capacity of extensively large two-layered perceptrons.
Rosen-Zvi, Michal; Engel, Andreas; Kanter, Ido
2002-09-01
The generalization ability and storage capacity of a treelike two-layered neural network with a number of hidden units scaling as the input dimension are examined. The mapping from the input to the hidden layer is via Boolean functions; the mapping from the hidden layer to the output is done by a perceptron. The analysis is within the replica framework where an order parameter characterizing the overlap between two networks in the combined space of Boolean functions and hidden-to-output couplings is introduced. The maximal capacity of such networks is found to scale linearly with the logarithm of the number of Boolean functions per hidden unit. The generalization process exhibits a first-order phase transition from poor to perfect learning for the case of discrete hidden-to-output couplings. The critical number of examples per input dimension, α_c, at which the transition occurs, again scales linearly with the logarithm of the number of Boolean functions. In the case of continuous hidden-to-output couplings, the generalization error decreases according to the same power law as for the perceptron, with the prefactor being different.
Opportunities to Learn Reasoning and Proof in High School Mathematics Textbooks
ERIC Educational Resources Information Center
Thompson, Denisse R.; Senk, Sharon L.; Johnson, Gwendolyn J.
2012-01-01
The nature and extent of reasoning and proof in the written (i.e., intended) curriculum of 20 contemporary high school mathematics textbooks were explored. Both the narrative and exercise sets in lessons dealing with the topics of exponents, logarithms, and polynomials were examined. The extent of proof-related reasoning varied by topic and…
Donald R. Satterlund; Harold F. Haupt
1967-01-01
Study of interception storage of snow by two species of sapling conifers in northern Idaho revealed that cumulative snow catch follows the classical law of autocatakinetic growth, or [equation - see PDF] where I, is interception storage, e is the interception storage capacity of the tree, e is the base of the natural logarithm, k is a constant expressing the rate of...
Non-additive non-interacting kinetic energy of rare gas dimers
NASA Astrophysics Data System (ADS)
Jiang, Kaili; Nafziger, Jonathan; Wasserman, Adam
2018-03-01
Approximations of the non-additive non-interacting kinetic energy (NAKE) as an explicit functional of the density are the basis of several electronic structure methods that provide improved computational efficiency over standard Kohn-Sham calculations. However, within most fragment-based formalisms, there is no unique exact NAKE, making it difficult to develop general, robust approximations for it. When adjustments are made to the embedding formalisms to guarantee uniqueness, approximate functionals may be more meaningfully compared to the exact unique NAKE. We use numerically accurate inversions to study the exact NAKE of several rare-gas dimers within partition density functional theory, a method that provides the uniqueness for the exact NAKE. We find that the NAKE decreases nearly exponentially with atomic separation for the rare-gas dimers. We compute the logarithmic derivative of the NAKE with respect to the bond length for our numerically accurate inversions as well as for several approximate NAKE functionals. We show that standard approximate NAKE functionals do not reproduce the correct behavior for this logarithmic derivative and propose two new NAKE functionals that do. The first of these is based on a re-parametrization of a conjoint Perdew-Burke-Ernzerhof (PBE) functional. The second is a simple, physically motivated non-decomposable NAKE functional that matches the asymptotic decay constant without fitting.
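The logarithmic derivative used as a diagnostic above, d ln T^nad/dR, can be estimated by a central finite difference on energies computed at neighbouring bond lengths; in this minimal sketch a hypothetical exponential decay stands in for an actual NAKE calculation:
    import math

    def nake(R, T0=0.05, kappa=1.8):
        """Placeholder NAKE model: T0 * exp(-kappa * R) (hypothetical units)."""
        return T0 * math.exp(-kappa * R)

    def log_derivative(f, R, h=1e-4):
        """Central-difference estimate of d ln f / dR."""
        return (math.log(f(R + h)) - math.log(f(R - h))) / (2 * h)

    for R in (5.0, 6.0, 7.0):
        print(R, round(log_derivative(nake, R), 4))   # ~ -kappa for a pure exponential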
The four-loop six-gluon NMHV ratio function
Dixon, Lance J.; von Hippel, Matt; McLeod, Andrew J.
2016-01-11
We use the hexagon function bootstrap to compute the ratio function which characterizes the next-to-maximally-helicity-violating (NMHV) six-point amplitude in planar N=4 super-Yang-Mills theory at four loops. A powerful constraint comes from dual superconformal invariance, in the form of a Q¯ differential equation, which heavily constrains the first derivatives of the transcendental functions entering the ratio function. At four loops, it leaves only a 34-parameter space of functions. Constraints from the collinear limits, and from the multi-Regge limit at the leading-logarithmic (LL) and next-to-leading-logarithmic (NLL) order, suffice to fix these parameters and obtain a unique result. We test the result against multi-Regge predictions at NNLL and N^3LL, and against predictions from the operator product expansion involving one and two flux-tube excitations; all cross-checks are satisfied. We study the analytical and numerical behavior of the parity-even and parity-odd parts on various lines and surfaces traversing the three-dimensional space of cross ratios. As part of this program, we characterize all irreducible hexagon functions through weight eight in terms of their coproduct. As a result, we also provide representations of the ratio function in particular kinematic regions in terms of multiple polylogarithms.
Leading logarithmic corrections to the muonium hyperfine splitting and to the hydrogen Lamb shift
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karshenboim, S.G.
1994-12-31
Main leading corrections with the recoil logarithm log(M/m) and the low-energy logarithm log(Zα) to the muonium hyperfine splitting are discussed. Logarithmic corrections have magnitudes of 0.1–0.3 kHz. Non-leading higher-order corrections are expected to be no larger than 0.1 kHz. The leading logarithmic correction to the hydrogen Lamb shift is also obtained.
Modeling of Metal-Ferroelectric-Semiconductor Field Effect Transistors
NASA Technical Reports Server (NTRS)
Duen Ho, Fat; Macleod, Todd C.
1998-01-01
The characteristics of an MFSFET (metal-ferroelectric-semiconductor field effect transistor) are very different from those of a conventional MOSFET and must be modeled differently. The drain current has a hysteresis shape with respect to the gate voltage. The position along the hysteresis curve is dependent on the last positive or negative polling of the ferroelectric material. The drain current also has a logarithmic decay after the last polling. A model has been developed to describe the MFSFET drain current for both gate-voltage-on and gate-voltage-off conditions. This model takes into account the hysteresis nature of the MFSFET and the time-dependent decay. The model is based on the shape of the Fermi-Dirac function, which has been modified to describe the MFSFET's drain current. This is different from the model proposed by Chen et al. and that by Wu.
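A purely illustrative sketch of a drain-current model with the qualitative features described above: a Fermi-Dirac-shaped turn-on in gate voltage whose effective threshold shifts with the last polling polarity (hysteresis), multiplied by a logarithmic decay in the time elapsed since that polling. The functional form and every parameter value here are hypothetical placeholders, not the model developed by the authors:
    import numpy as np

    def drain_current(v_gate, t_since_poll, polarity=+1,
                      i_max=1e-4, v_th=0.8, v_width=0.15, decay=0.05, t0=1.0):
        """Illustrative MFSFET-like drain current (A), not a fitted device model.

        Fermi-Dirac-shaped turn-on in gate voltage, with the effective threshold
        shifted by the last polling polarity (hysteresis), and a logarithmic
        decay with time elapsed since that polling.
        """
        v_eff = v_th - polarity * 0.3                      # hypothetical hysteresis shift (V)
        fermi = 1.0 / (1.0 + np.exp(-(v_gate - v_eff) / v_width))
        retention = np.clip(1.0 - decay * np.log(1.0 + t_since_poll / t0), 0.0, None)
        return i_max * fermi * retention

    v = np.linspace(0.0, 2.0, 5)
    print(drain_current(v, t_since_poll=3600.0, polarity=+1))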
Jet shapes in dijet events at the LHC in SCET
NASA Astrophysics Data System (ADS)
Hornig, Andrew; Makris, Yiannis; Mehen, Thomas
2016-04-01
We consider the class of jet shapes known as angularities in dijet production at hadron colliders. These angularities are modified from the original definitions in e+e− collisions to be boost invariant along the beam axis. These shapes apply to the constituents of jets defined with respect to either k_T-type (anti-k_T, C/A, and k_T) algorithms or cone-type algorithms. We present an SCET factorization formula and calculate the ingredients needed to achieve next-to-leading-log (NLL) accuracy in kinematic regions where non-global logarithms are not large. The factorization formula involves previously unstudied "unmeasured beam functions," which are present for finite rapidity cuts around the beams. We derive relations between the jet functions and the shape-dependent part of the soft function that appear in the factorized cross section and those previously calculated for e+e− collisions, and present the calculation of the non-trivial, color-connected part of the soft function to O(α_s). This latter part of the soft function is universal in the sense that it applies to any experimental setup with an out-of-jet p_T veto and rapidity cuts together with two identified jets, and it is independent of the choice of jet (sub-)structure measurement. In addition, we implement the recently introduced soft-collinear refactorization to resum logarithms of the jet size, valid in the region of non-enhanced non-global logarithm effects. While our results are valid for all 2 → 2 channels, we compute explicitly for the qq' → qq' channel the color-flow matrices and plot the NLL resummed differential dijet cross section as an explicit example, which shows that the normalization and scale uncertainty is reduced when the soft function is refactorized. For this channel, we also plot the jet size R dependence, the p_T cut dependence, and the dependence on the angularity parameter a.
Assessing the role of pavement macrotexture in preventing crashes on highways.
Pulugurtha, Srinivas S; Kusam, Prasanna R; Patel, Kuvleshay J
2010-02-01
The objective of this article is to assess the role of pavement macrotexture in preventing crashes on highways in the State of North Carolina. Laser profilometer data obtained from the North Carolina Department of Transportation (NCDOT) for highways comprising four corridors are processed to calculate pavement macrotexture at 100-m (approximately 330-ft) sections according to the American Society for Testing and Materials (ASTM) standards. Crash data collected over the same lengths of the corridors were integrated with the calculated pavement macrotexture for each section. Scatterplots were generated to assess the effect of pavement macrotexture on crashes and on the logarithm of crashes. Regression analyses were conducted by considering predictor variables such as million vehicle miles of travel (as a function of traffic volume and length), the number of interchanges, the number of at-grade intersections, the number of grade-separated interchanges, and the number of bridges, culverts, and overhead signs along with pavement macrotexture to study the statistical significance of the relationship between pavement macrotexture and crashes (both linear and log-linear) when compared to other predictor variables. The scatterplots and regression analyses indicate a more statistically significant relationship between pavement macrotexture and the logarithm of crashes than between pavement macrotexture and crashes. The coefficient for pavement macrotexture, in general, is negative, indicating that the number of crashes or the logarithm of crashes decreases as macrotexture increases. The relationship between pavement macrotexture and the logarithm of crashes is generally stronger than that between most other predictor variables and crashes or the logarithm of crashes. Based on the results obtained, it can be concluded that maintaining pavement macrotexture greater than or equal to 1.524 mm (0.06 in.) as a threshold limit would possibly reduce crashes and provide safe transportation to road users on highways.
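As an illustration of the log-linear regression described above, the hedged sketch below fits log(crashes) against macrotexture and one other predictor on synthetic data; the variable names, coefficients, and data are invented for the example and are not those of the study.

```python
# Illustrative sketch (not the study's actual model): ordinary least-squares fit of
# log(crashes) against pavement macrotexture plus one other predictor, on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 200
macrotexture = rng.uniform(0.4, 2.0, n)          # mm, per 100-m section (hypothetical)
mvmt = rng.uniform(0.1, 5.0, n)                  # million vehicle miles of travel (hypothetical)
crashes = rng.poisson(lam=np.exp(1.0 + 0.4 * np.log(mvmt) - 0.8 * macrotexture)) + 1

# Design matrix: intercept, macrotexture, log(MVMT)
X = np.column_stack([np.ones(n), macrotexture, np.log(mvmt)])
y = np.log(crashes)                               # log-linear model

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("coefficients [intercept, macrotexture, log(MVMT)]:", beta)
# A negative macrotexture coefficient would mirror the article's finding that
# log(crashes) decreases as macrotexture increases.
```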
NASA Astrophysics Data System (ADS)
Soeryana, E.; Fadhlina, N.; Sukono; Rusyaman, E.; Supian, S.
2017-01-01
Stock investors also face risk, because daily stock prices fluctuate. To minimize this risk, investors usually form an investment portfolio; a portfolio consisting of several stocks is constructed to obtain the optimal composition of the investment. This paper discusses mean-variance optimization of a stock portfolio with non-constant mean and volatility, based on a logarithmic utility function. The non-constant mean is modelled with an Autoregressive Moving Average (ARMA) model, while the non-constant volatility is modelled with a Generalized Autoregressive Conditional Heteroscedasticity (GARCH) model. The optimization is performed using the Lagrangian multiplier technique. As a numerical illustration, the method is used to analyse several Islamic stocks in Indonesia. The expected result is to obtain the proportion of investment in each Islamic stock analysed.
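A minimal sketch of the Lagrangian mean-variance step only, assuming hypothetical forecasts for the mean vector and covariance matrix (the ARMA/GARCH estimation of the non-constant mean and volatility is not reproduced here).

```python
# Mean-variance weights via Lagrange multipliers, solving the KKT system for
#   minimize w' Sigma w   subject to   w' mu = target,  w' 1 = 1.
# mu, Sigma, and target below are made-up forecasts for four stocks.
import numpy as np

mu = np.array([0.010, 0.008, 0.012, 0.007])          # forecast mean returns
Sigma = np.array([[0.040, 0.006, 0.004, 0.002],
                  [0.006, 0.030, 0.005, 0.003],
                  [0.004, 0.005, 0.050, 0.004],
                  [0.002, 0.003, 0.004, 0.020]])      # forecast covariance matrix
target = 0.009                                         # required portfolio return

n = len(mu)
A = np.zeros((n + 2, n + 2))
A[:n, :n] = 2.0 * Sigma
A[:n, n], A[n, :n] = mu, mu
A[:n, n + 1], A[n + 1, :n] = 1.0, 1.0
b = np.concatenate([np.zeros(n), [target, 1.0]])

solution = np.linalg.solve(A, b)
weights = solution[:n]
print("portfolio weights:", np.round(weights, 4))
print("expected return:", weights @ mu, " variance:", weights @ Sigma @ weights)
```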
Zheng, Xiaoming
2017-12-01
The purpose of this work was to examine the effects of relationship functions between diagnostic image quality and radiation dose on the governing equations for image acquisition parameter variations in X-ray imaging. Various equations were derived for the optimal selection of peak kilovoltage (kVp) and exposure parameter (milliAmpere second, mAs) in computed tomography (CT), computed radiography (CR), and direct digital radiography. Logistic, logarithmic, and linear functions were employed to establish the relationship between radiation dose and diagnostic image quality. The radiation dose to the patient, as a function of image acquisition parameters (kVp, mAs) and patient size (d), was used in radiation dose and image quality optimization. Both logistic and logarithmic functions resulted in the same governing equation for optimal selection of image acquisition parameters using a dose efficiency index. For image quality as a linear function of radiation dose, the same governing equation was derived from the linear relationship. The general equations should be used in guiding clinical X-ray imaging through optimal selection of image acquisition parameters. The radiation dose to the patient could be reduced from current levels in medical X-ray imaging.
Nonlinear interactions and their scaling in the logarithmic region of turbulent channels
NASA Astrophysics Data System (ADS)
Moarref, Rashad; Sharma, Ati S.; Tropp, Joel A.; McKeon, Beverley J.
2014-11-01
The nonlinear interactions in wall turbulence redistribute the turbulent kinetic energy across different scales and different wall-normal locations. To better understand these interactions in the logarithmic region of turbulent channels, we decompose the velocity into a weighted sum of resolvent modes (McKeon & Sharma, J. Fluid Mech., 2010). The resolvent modes represent the linear amplification mechanisms in the Navier-Stokes equations (NSE) and the weights represent the scaling influence of the nonlinearity. An explicit equation for the unknown weights is obtained by projecting the NSE onto the known resolvent modes (McKeon et al., Phys. Fluids, 2013). The weights of triad modes (the modes that directly interact via the quadratic nonlinearity in the NSE) are coupled via interaction coefficients that depend solely on the resolvent modes. We use the hierarchies of self-similar modes in the logarithmic region (Moarref et al., J. Fluid Mech., 2013) to extend the notion of triad modes to triad hierarchies. It is shown that the interaction coefficients for the triad modes that belong to a triad hierarchy follow an exponential function. These scalings can be used to better understand the interaction of flow structures in the logarithmic region and develop analytical results therein. The support of Air Force Office of Scientific Research under Grants FA 9550-09-1-0701 (P.M. Rengasamy Ponnappan) and FA 9550-12-1-0469 (P.M. Doug Smith) is gratefully acknowledged.
The asymptotic form of non-global logarithms, black disc saturation, and gluonic deserts
NASA Astrophysics Data System (ADS)
Neill, Duff
2017-01-01
We develop an asymptotic perturbation theory for the large logarithmic behavior of the non-linear integro-differential equation describing the soft correlations of QCD jet measurements, the Banfi-Marchesini-Smye (BMS) equation. This equation captures the late-time evolution of radiating color dipoles after a hard collision. This allows us to prove that at large values of the control variable (the non-global logarithm, a function of the infra-red energy scales associated with distinct hard jets in an event), the distribution has a Gaussian tail. We compute the decay width analytically, giving a closed form expression, and find it to be jet geometry independent, up to the number of legs of the dipole in the active jet. Enabling the asymptotic expansion is the correct perturbative seed, where we perturb around an ansatz encoding formally no real emissions, an intuition motivated by the buffer region found in jet dynamics. This must be supplemented with the correct application of the BFKL approximation to the BMS equation in collinear limits. Comparing to the asymptotics of the conformally related evolution equation encountered in small-x physics, the Balitsky-Kovchegov (BK) equation, we find that the asymptotic form of the non-global logarithms directly maps to the black-disc unitarity limit of the BK equation, despite the contrasting physical pictures. Indeed, we recover the equations of saturation physics in the final state dynamics of QCD.
Biological basis for space-variant sensor design I: parameters of monkey and human spatial vision
NASA Astrophysics Data System (ADS)
Rojer, Alan S.; Schwartz, Eric L.
1991-02-01
Biological sensor design has long provided inspiration for sensor design in machine vision. However, relatively little attention has been paid to the actual design parameters provided by biological systems, as opposed to the general nature of biological vision architectures. In the present paper we provide a review of current knowledge of primate spatial vision design parameters and present recent experimental and modeling work from our lab which demonstrates that a numerical conformal mapping, which is a refinement of our previous complex logarithmic model, provides the best current summary of this feature of the primate visual system. We review recent work from our laboratory which has characterized some of the spatial architectures of the primate visual system; in particular, experimental and modeling studies which indicate that (1) the global spatial architecture of primate visual cortex is well summarized by a numerical conformal mapping whose simplest analytic approximation is the complex logarithm function, and (2) the columnar sub-structure of primate visual cortex can be well summarized by a model based on band-pass filtered white noise. We also refer to ongoing work in our lab which demonstrates that the joint columnar/map structure of primate visual cortex can be modeled and summarized in terms of a new algorithm, the "proto-column" algorithm. This work provides a reference point for current engineering approaches to novel architectures for
Recovery of severely compacted soils in the Mojave Desert, California, USA
Webb, R.H.
2002-01-01
Often as a result of large-scale military maneuvers in the past, many soils in the Mojave Desert are highly vulnerable to soil compaction, particularly when wet. Previous studies indicate that natural recovery of severely compacted desert soils is extremely slow, and some researchers have suggested that subsurface compaction may not recover. Poorly sorted soils, particularly those with a loamy sand texture, are most vulnerable to soil compaction, and these soils are the most common in alluvial fans of the Mojave Desert. Recovery of compacted soil is expected to vary as a function of precipitation amounts, wetting-and-drying cycles, freeze-thaw cycles, and bioturbation, particularly root growth. Compaction recovery, as estimated using penetration depth and bulk density, was measured at 19 sites with 32 site-time combinations, including the former World War II Army sites of Camps Ibis, Granite, Iron Mountain, Clipper, and Essex. Although compaction at these sites was caused by a wide variety of forces, ranging from human trampling to tank traffic, the data do not allow segregation of differences in recovery rates for different compaction forces. The recovery rate appears to be logarithmic, with the highest rate of change occurring in the first few decades following abandonment. Some higher-elevation sites have completely recovered from soil compaction after 70 years. Using a linear model of recovery, the full recovery time ranges from 92 to 100 years; using a logarithmic model, which asymptotically approaches full recovery, the time required for 85% recovery ranges from 105-124 years.
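A hedged numerical illustration of the two recovery descriptions quoted above; the coefficients below are invented so that the logarithmic model reaches 85% recovery near the reported 105-124 year window, and are not the paper's fitted values.

```python
# Two toy recovery models: a linear model that reaches full recovery at a finite time,
# and a logarithmic model that approaches full recovery asymptotically.
import numpy as np

def linear_recovery(t, full_time=96.0):
    """Fraction recovered under a linear model that hits 1.0 at `full_time` years."""
    return np.clip(t / full_time, 0.0, 1.0)

def logarithmic_recovery(t, a=0.10, b=0.158):
    """Fraction recovered under R(t) = a + b*ln(t), capped at 1.0 (t in years)."""
    return np.clip(a + b * np.log(t), 0.0, 1.0)

# Time at which the logarithmic model reaches 85% recovery: solve a + b*ln(t) = 0.85
a, b = 0.10, 0.158
t_85 = np.exp((0.85 - a) / b)
print(f"logarithmic model reaches 85% recovery at ~{t_85:.0f} years")
print("linear model at that time:", linear_recovery(t_85))
```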
40 CFR 53.62 - Test procedure: Full wind tunnel test.
Code of Federal Regulations, 2014 CFR
2014-07-01
... accuracy of 5 percent or better (e.g., hot-wire anemometry). For the wind speeds specified in table F-2 of... candidate sampler as a function of aerodynamic particle diameter (Dae) on semi-logarithmic graph paper where...
40 CFR 53.62 - Test procedure: Full wind tunnel test.
Code of Federal Regulations, 2012 CFR
2012-07-01
... accuracy of 5 percent or better (e.g., hot-wire anemometry). For the wind speeds specified in table F-2 of... candidate sampler as a function of aerodynamic particle diameter (Dae) on semi-logarithmic graph paper where...
40 CFR 53.62 - Test procedure: Full wind tunnel test.
Code of Federal Regulations, 2011 CFR
2011-07-01
... accuracy of 5 percent or better (e.g., hot-wire anemometry). For the wind speeds specified in table F-2 of... candidate sampler as a function of aerodynamic particle diameter (Dae) on semi-logarithmic graph paper where...
40 CFR 53.62 - Test procedure: Full wind tunnel test.
Code of Federal Regulations, 2013 CFR
2013-07-01
... accuracy of 5 percent or better (e.g., hot-wire anemometry). For the wind speeds specified in table F-2 of... candidate sampler as a function of aerodynamic particle diameter (Dae) on semi-logarithmic graph paper where...
The rationale for chemical time-series sampling has its roots in the same fundamental relationships as govern well hydraulics. Samples of ground water are collected as a function of increasing time of pumpage. The most efficient pattern of collection consists of logarithmically s...
Coarse graining Escherichia coli chemotaxis: from multi-flagella propulsion to logarithmic sensing.
Curk, Tine; Matthäus, Franziska; Brill-Karniely, Yifat; Dobnikar, Jure
2012-01-01
Various sensing mechanisms in nature can be described by the Weber-Fechner law, which states that the response to varying stimuli is proportional to their relative rather than absolute changes. The chemotaxis of the bacterium Escherichia coli is an example where such logarithmic sensing enables sensitivity over a large range of concentrations. It has recently been experimentally demonstrated that under certain conditions E. coli indeed respond to relative gradients of ligands. We use numerical simulations of bacteria in food gradients to investigate the limits of validity of the logarithmic behavior. We model the chemotactic signaling pathway reactions, couple them to a multi-flagella model for propulsion, and take the effects of rotational diffusion into account to accurately reproduce the experimental observations of single cell swimming. Using this simulation scheme we analyze the type of response of bacteria subject to exponential ligand profiles and identify the regimes of absolute gradient sensing, relative gradient sensing, and a rotational diffusion dominated regime. We explore the dependence of the swimming speed, average run time and the clockwise (CW) bias on ligand variation and derive a small set of relations that define a coarse grained model for bacterial chemotaxis. Simulations based on this coarse grained model compare well with microfluidic experiments on E. coli diffusion in linear and exponential gradients of aspartate.
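A toy calculation, not the paper's signaling-pathway model, showing why an exponential ligand profile is the natural test bed for logarithmic (Weber-Fechner) sensing: the relative gradient d(ln c)/dx is constant while the absolute gradient grows with position.

```python
# Absolute vs. relative (logarithmic) gradient in an exponential ligand profile
# c(x) = c0 * exp(x / L). Parameter values are arbitrary.
import numpy as np

c0, L = 1.0, 2.0                       # hypothetical concentration scale and length scale
x = np.linspace(0.0, 10.0, 6)          # positions along the gradient
c = c0 * np.exp(x / L)

absolute_gradient = np.gradient(c, x)          # dc/dx grows with x
relative_gradient = np.gradient(np.log(c), x)  # d(ln c)/dx = 1/L, constant

for xi, ag, rg in zip(x, absolute_gradient, relative_gradient):
    print(f"x={xi:4.1f}  dc/dx={ag:8.3f}  d(ln c)/dx={rg:6.3f}")
# A logarithmically sensing cell responds to d(ln c)/dx and therefore behaves the same
# everywhere in an exponential profile; an absolute-gradient sensor does not.
```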
78 FR 25515 - Order Making Fiscal Year 2013 Annual Adjustments to Transaction Fee Rates
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-01
... appropriation for fiscal year 2013. To make the adjustment, the Commission must project the aggregate dollar....0102 and σ = 0.122, respectively. 4. Assume that the natural logarithm of ADS follows a random... given by exp(μ + σ²/2), or on average ADS_t = 1.0178 × ADS_{t-1}. 6. For March 2013, this...
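A quick arithmetic check of the excerpt, assuming (as the fragment suggests) that the quoted 0.0102 and 0.122 are the drift and volatility of the random walk in the natural logarithm of ADS.

```python
# If ln(ADS) follows a random walk with drift mu and volatility sigma, then
# E[ADS_t / ADS_{t-1}] = exp(mu + sigma**2 / 2), which reproduces the 1.0178 factor.
import math

mu, sigma = 0.0102, 0.122
growth_factor = math.exp(mu + sigma**2 / 2)
print(f"exp(mu + sigma^2/2) = {growth_factor:.4f}")   # ~1.0178, as in the excerpt
```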
Pan, Huanyu; Devasahayam, Sheila; Bandyopadhyay, Sri
2017-07-21
This paper examines the effect of a broad range of crosshead speeds (0.05 to 100 mm/min) and a small range of temperatures (25 °C and 45 °C) on the failure behaviour of high density polyethylene (HDPE) specimens containing (a) a standard-size blunt notch and (b) a standard-size blunt notch plus a small sharp crack, all tested in air. It was observed that the yield stress showed a linear increase with the natural logarithm of the strain rate. The stress intensity factors under blunt-notch and sharp-crack conditions also increased linearly with the natural logarithm of the crosshead speed. The results indicate that in the practical temperature range of 25 °C to 45 °C under normal atmosphere and increasing strain rates, HDPE specimens with both blunt notches and sharp cracks possess superior fracture properties. SEM microstructure studies of fracture surfaces showed craze initiation mechanisms at lower strain rates, whilst at higher strain rates there is evidence of dimple patterns absorbing the strain energy and creating plastic deformation. The stress intensity factor and the yield strength were higher at 25 °C than at 45 °C.
The Fractal Nature of Wood Revealed by Drying
2000-01-01
structure of pore space. Take the natural logarithm of both sides of Eq. (2) and we have... [Table 1 gives the fractal dimensions of Ginkgo and Chinese chestnut obtained at variable drying temperatures (20 °C, 40 °C, 60 °C, 80 °C, 100 °C): Ginkgo 2.106, 2.547, 2.851, 2.863, 2.876; Chinese chestnut 2.008, 2.566, 2.814, 2.896, 2.972.] The materials for this investigation came from two species: one was a 37-year-old plantation-grown Ginkgo (Ginkgo biloba) and the other was a 48-year
NASA Astrophysics Data System (ADS)
Wang, Chunbai; Mitra, Ambar K.
2016-01-01
Any boundary surface evolving in a viscous fluid is driven by surface capillary currents. Using a step function defined for the fluid-structure interface, the surface currents near a flat wall are found to take a logarithmic form. The general flat-plate boundary layer is demonstrated through the interface kinematics. The dynamics analysis elucidates the relationship of the surface currents with the adhering region as well as the no-slip boundary condition. The wall skin-friction coefficient, the displacement thickness, and the logarithmic velocity-defect law of the smooth flat-plate boundary-layer flow are derived with the forced evolving boundary method. This fundamental theory has wide applications in applied science and engineering.
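For context, the textbook logarithmic law of the wall for a smooth flat plate is sketched below; the constants are standard literature values, and this is not the paper's evolving-boundary derivation.

```python
# Classical log law in wall units: u+ = (1/kappa) * ln(y+) + B, valid in the log region.
import numpy as np

kappa, B = 0.41, 5.0                      # commonly quoted values of the constants

def u_plus_log_law(y_plus):
    """Mean streamwise velocity in wall units for a given wall-normal distance y+."""
    return np.log(y_plus) / kappa + B

for y_plus in (30, 100, 300, 1000):
    print(f"y+ = {y_plus:5d}   u+ = {u_plus_log_law(y_plus):.2f}")
```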
Collective modes in two-dimensional one-component-plasma with logarithmic interaction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khrapak, Sergey A.; Forschungsgruppe Komplexe Plasmen, Deutsches Zentrum für Luft- und Raumfahrt, Oberpfaffenhofen; Joint Institute for High Temperatures, Russian Academy of Sciences, Moscow
The collective modes of a familiar two-dimensional one-component-plasma with the repulsive logarithmic interaction between the particles are analysed using the quasi-crystalline approximation (QCA) combined with the molecular dynamics simulation of the equilibrium structural properties. It is found that the dispersion curves in the strongly coupled regime are virtually independent of the coupling strength. Arguments based on the excluded volume consideration for the radial distribution function allow us to derive very simple expressions for the dispersion relations, which show excellent agreement with the exact QCA dispersion over the entire domain of wavelengths. Comparison with the results of the conventional fluid analysis is performed, and the difference is explained.
The analytic structure of non-global logarithms: Convergence of the dressed gluon expansion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larkoski, Andrew J.; Moult, Ian; Neill, Duff Austin
Non-global logarithms (NGLs) are the leading manifestation of correlations between distinct phase space regions in QCD and gauge theories and have proven a challenge to understand using traditional resummation techniques. Recently, the dressed gluon expansion was introduced, which enables an expansion of the NGL series in terms of a “dressed gluon” building block, defined by an all-orders factorization theorem. Here, we clarify the nature of the dressed gluon expansion, and prove that it has an infinite radius of convergence as a solution to the leading logarithmic and large-N_c master equation for NGLs, the Banfi-Marchesini-Smye (BMS) equation. The dressed gluon expansion therefore provides an expansion of the NGL series that can be truncated at any order, with reliable uncertainty estimates. In contrast, manifest in the results of the fixed-order expansion of the BMS equation up to 12 loops is a breakdown of convergence at a finite value of α_s log. We explain this finite radius of convergence using the dressed gluon expansion, showing how the dynamics of the buffer region, a region of phase space near the boundary of the jet that was identified in early studies of NGLs, leads to large contributions to the fixed order expansion. We also use the dressed gluon expansion to discuss the convergence of the next-to-leading NGL series, and the role of collinear logarithms that appear at this order. Finally, we show how an understanding of the analytic behavior obtained from the dressed gluon expansion allows us to improve the fixed order NGL series using conformal transformations to extend the domain of analyticity. Furthermore, this allows us to calculate the NGL distribution for all values of α_s log from the coefficients of the fixed order expansion.
The analytic structure of non-global logarithms: Convergence of the dressed gluon expansion
Larkoski, Andrew J.; Moult, Ian; Neill, Duff Austin
2016-11-15
Non-global logarithms (NGLs) are the leading manifestation of correlations between distinct phase space regions in QCD and gauge theories and have proven a challenge to understand using traditional resummation techniques. Recently, the dressed gluon expansion was introduced, which enables an expansion of the NGL series in terms of a “dressed gluon” building block, defined by an all-orders factorization theorem. Here, we clarify the nature of the dressed gluon expansion, and prove that it has an infinite radius of convergence as a solution to the leading logarithmic and large-N_c master equation for NGLs, the Banfi-Marchesini-Smye (BMS) equation. The dressed gluon expansion therefore provides an expansion of the NGL series that can be truncated at any order, with reliable uncertainty estimates. In contrast, manifest in the results of the fixed-order expansion of the BMS equation up to 12 loops is a breakdown of convergence at a finite value of α_s log. We explain this finite radius of convergence using the dressed gluon expansion, showing how the dynamics of the buffer region, a region of phase space near the boundary of the jet that was identified in early studies of NGLs, leads to large contributions to the fixed order expansion. We also use the dressed gluon expansion to discuss the convergence of the next-to-leading NGL series, and the role of collinear logarithms that appear at this order. Finally, we show how an understanding of the analytic behavior obtained from the dressed gluon expansion allows us to improve the fixed order NGL series using conformal transformations to extend the domain of analyticity. Furthermore, this allows us to calculate the NGL distribution for all values of α_s log from the coefficients of the fixed order expansion.
Optical image encryption system using nonlinear approach based on biometric authentication
NASA Astrophysics Data System (ADS)
Verma, Gaurav; Sinha, Aloka
2017-07-01
A nonlinear image encryption scheme using phase-truncated Fourier transform (PTFT) and natural logarithms is proposed in this paper. With the help of the PTFT, the input image is truncated into phase and amplitude parts at the Fourier plane. The phase-only information is kept as the secret key for the decryption, and the amplitude distribution is modulated by adding an undercover amplitude random mask in the encryption process. Furthermore, the encrypted data is kept hidden inside the face biometric-based phase mask key using the base changing rule of logarithms for secure transmission. This phase mask is generated through principal component analysis. Numerical experiments show the feasibility and the validity of the proposed nonlinear scheme. The performance of the proposed scheme has been studied against the brute force attacks and the amplitude-phase retrieval attack. Simulation results are presented to illustrate the enhanced system performance with desired advantages in comparison to the linear cryptosystem.
Zalvidea; Colautti; Sicre
2000-05-01
An analysis of the Strehl ratio and the optical transfer function as imaging quality parameters of optical elements with enhanced focal length is carried out by employing the Wigner distribution function. To this end, we use four different pupil functions: a full circular aperture, a hyper-Gaussian aperture, a quartic phase plate, and a logarithmic phase mask. A comparison is performed between the quality parameters and test images formed by these pupil functions at different defocus distances.
Song, Tianqi; Garg, Sudhanshu; Mokhtar, Reem; Bui, Hieu; Reif, John
2018-01-19
A main goal in DNA computing is to build DNA circuits to compute designated functions using a minimal number of DNA strands. Here, we propose a novel architecture to build compact DNA strand displacement circuits to compute a broad scope of functions in an analog fashion. A circuit in this architecture is composed of three autocatalytic amplifiers, and the amplifiers interact to perform computation. We present DNA circuits that compute the functions sqrt(x), ln(x), and exp(x) for x in tunable ranges, with simulation results. A key innovation in our architecture, inspired by Napier's use of logarithm transforms to compute square roots on a slide rule, is to make use of autocatalytic amplifiers to do logarithmic and exponential transforms in concentration and time. In particular, we convert from the input that is encoded by the initial concentration of the input DNA strand, to time, and then back again to the output encoded by the concentration of the output DNA strand at equilibrium. This combined use of strand-concentration and time encoding of computational values may have impact on other forms of molecular computation.
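The slide-rule idea behind the architecture, stated in ordinary arithmetic (this does not simulate the strand-displacement amplifiers themselves): route the input through a logarithmic transform, scale it, then exponentiate.

```python
# Mathematical identity behind the square-root circuit: sqrt(x) = exp(0.5 * ln(x)).
import math

def sqrt_via_log_exp(x: float) -> float:
    """Compute sqrt(x) by composing a log transform, a gain of 1/2, and an exp transform."""
    return math.exp(0.5 * math.log(x))

for x in (2.0, 9.0, 50.0):
    print(x, sqrt_via_log_exp(x), math.sqrt(x))
```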
Extraction of quark transversity distribution and Collins fragmentation functions with QCD evolution
NASA Astrophysics Data System (ADS)
Kang, Zhong-Bo; Prokudin, Alexei; Sun, Peng; Yuan, Feng
2016-01-01
We study the transverse-momentum-dependent (TMD) evolution of the Collins azimuthal asymmetries in e+e- annihilations and semi-inclusive hadron production in deep inelastic scattering processes. All the relevant coefficients are calculated up to the next-to-leading-logarithmic-order accuracy. By applying the TMD evolution at the approximate next-to-leading-logarithmic order in the Collins-Soper-Sterman formalism, we extract transversity distributions for u and d quarks and Collins fragmentation functions from current experimental data by a global analysis of the Collins asymmetries in back-to-back dihadron productions in e+e- annihilations measured by BELLE and BABAR collaborations and semi-inclusive hadron production in deep inelastic scattering data from HERMES, COMPASS, and JLab HALL A experiments. The impact of the evolution effects and the relevant theoretical uncertainties are discussed. We further discuss the TMD interpretation for our results and illustrate the unpolarized quark distribution, transversity distribution, unpolarized quark fragmentation, and Collins fragmentation functions depending on the transverse momentum and the hard momentum scale. We make detailed predictions for future experiments and discuss their impact.
Real-Time Implementation of Nonlinear Processing Functions.
1981-08-01
crystal devices and then to use them in a coherent optical data-processing apparatus using halftone masks custom designed at the University of Southern...California. With the halftone mask technique, we have demonstrated logarithmic nonlinear transformation, allowing us to separate multiplicative images...improved. This device allowed nonlinear functions to be implemented directly without the need for specially made halftone masks. Besides
The time dependence of rock healing as a universal relaxation process, a tutorial
NASA Astrophysics Data System (ADS)
Snieder, Roel; Sens-Schönfelder, Christoph; Wu, Renjie
2017-01-01
The properties of earth materials often change after the material has been perturbed (slow dynamics). For example, the seismic velocity of subsurface materials changes after earthquakes, and granular materials compact after being shaken. Such relaxation processes are associated with observables that change logarithmically with time. Since the logarithm diverges for short and long times, the relaxation cannot, strictly speaking, have a log-time dependence. We present a self-contained description of a relaxation function that consists of a superposition of decaying exponentials, has log-time behaviour for intermediate times, converges to zero for long times, and is finite for t = 0. The relaxation function depends on two parameters, the minimum and maximum relaxation time. These parameters can, in principle, be extracted from the observed relaxation. As an example, we present a crude model of a fracture that is closing under an external stress. Although the fracture model violates some of the assumptions on which the relaxation function is based, it follows the relaxation function well. We provide qualitative arguments that the relaxation process, just like the Gutenberg-Richter law, is applicable to a wide range of systems and has universal properties.
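A minimal numerical sketch of the kind of relaxation function described above: a normalized superposition of decaying exponentials with relaxation times between assumed bounds tau_min and tau_max, finite at t = 0, roughly logarithmic in time at intermediate times, and vanishing at long times. The parameter values are arbitrary.

```python
# R(t) = average of exp(-t/tau) over relaxation times tau sampled log-uniformly between
# tau_min and tau_max, i.e. a discretized version of the integral with weight dtau/tau.
import numpy as np

tau_min, tau_max = 1e-2, 1e4
tau = np.logspace(np.log10(tau_min), np.log10(tau_max), 2000)   # log-uniform sampling

def relaxation(t):
    """Normalized superposition of decaying exponentials; R(0) = 1, R(t -> inf) -> 0."""
    return np.mean(np.exp(-t / tau))

for t in (0.0, 1.0, 10.0, 100.0, 1e6):
    print(f"t = {t:>9.1f}   R(t) = {relaxation(t):.4f}")
# In the window tau_min << t << tau_max, each decade of t lowers R by a roughly constant
# amount, i.e. R decreases linearly in log(t), as described in the abstract.
```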
Anomalous dynamics of intruders in a crowded environment of mobile obstacles
Sentjabrskaja, Tatjana; Zaccarelli, Emanuela; De Michele, Cristiano; Sciortino, Francesco; Tartaglia, Piero; Voigtmann, Thomas; Egelhaaf, Stefan U.; Laurati, Marco
2016-01-01
Many natural and industrial processes rely on constrained transport, such as proteins moving through cells, particles confined in nanocomposite materials or gels, individuals in highly dense collectives and vehicular traffic conditions. These are examples of motion through crowded environments, in which the host matrix may retain some glass-like dynamics. Here we investigate constrained transport in a colloidal model system, in which dilute small spheres move in a slowly rearranging, glassy matrix of large spheres. Using confocal differential dynamic microscopy and simulations, here we discover a critical size asymmetry, at which anomalous collective transport of the small particles appears, manifested as a logarithmic decay of the density autocorrelation functions. We demonstrate that the matrix mobility is central for the observed anomalous behaviour. These results, crucially depending on size-induced dynamic asymmetry, are of relevance for a wide range of phenomena ranging from glassy systems to cell biology. PMID:27041068
Herbicide Orange Site Characterization Study Naval Construction Battalion Center
1987-01-01
U.S. Testing Laboratories for analysis. Over 200 additional analyses were performed for a variety of quality assurance criteria. The resultant data...[Table 9, NCBC performance audit sample analysis summary (Series 1), lists by sample number the reported TCDD concentration (ppb), the detection limit, and the relative...]...limit rather than estimating the variance of the results. The sample results were transformed using the natural logarithm. The Shapiro-Wilk W test
A diameter increment model for Red Fir in California and Southern Oregon
K. Leroy Dolph
1992-01-01
Periodic (10-year) diameter increment of individual red fir trees in California and southern Oregon can be predicted from initial diameter and crown ratio of each tree, site index, percent slope, and aspect of the site. The model actually predicts the natural logarithm of the change in squared diameter inside bark between the start and the end of a 10-year growth period....
The asymptotic form of non-global logarithms, black disc saturation, and gluonic deserts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neill, Duff
Here, we develop an asymptotic perturbation theory for the large logarithmic behavior of the non-linear integro-differential equation describing the soft correlations of QCD jet measurements, the Banfi-Marchesini-Smye (BMS) equation. Furthermore, this equation captures the late-time evolution of radiating color dipoles after a hard collision. This allows us to prove that at large values of the control variable (the non-global logarithm, a function of the infra-red energy scales associated with distinct hard jets in an event), the distribution has a Gaussian tail. We also compute the decay width analytically, giving a closed form expression, and find it to be jet geometry independent, up to the number of legs of the dipole in the active jet. By enabling the asymptotic expansion we find that the perturbative seed is correct; we perturb around an ansatz encoding formally no real emissions, an intuition motivated by the buffer region found in jet dynamics. This must be supplemented with the correct application of the BFKL approximation to the BMS equation in collinear limits. Comparing to the asymptotics of the conformally related evolution equation encountered in small-x physics, the Balitsky-Kovchegov (BK) equation, we find that the asymptotic form of the non-global logarithms directly maps to the black-disc unitarity limit of the BK equation, despite the contrasting physical pictures. Indeed, we recover the equations of saturation physics in the final state dynamics of QCD.
The asymptotic form of non-global logarithms, black disc saturation, and gluonic deserts
Neill, Duff
2017-01-25
Here, we develop an asymptotic perturbation theory for the large logarithmic behavior of the non-linear integro-differential equation describing the soft correlations of QCD jet measurements, the Banfi-Marchesini-Smye (BMS) equation. Furthermore, this equation captures the late-time evolution of radiating color dipoles after a hard collision. This allows us to prove that at large values of the control variable (the non-global logarithm, a function of the infra-red energy scales associated with distinct hard jets in an event), the distribution has a Gaussian tail. We also compute the decay width analytically, giving a closed form expression, and find it to be jet geometry independent, up to the number of legs of the dipole in the active jet. By enabling the asymptotic expansion we find that the perturbative seed is correct; we perturb around an ansatz encoding formally no real emissions, an intuition motivated by the buffer region found in jet dynamics. This must be supplemented with the correct application of the BFKL approximation to the BMS equation in collinear limits. Comparing to the asymptotics of the conformally related evolution equation encountered in small-x physics, the Balitsky-Kovchegov (BK) equation, we find that the asymptotic form of the non-global logarithms directly maps to the black-disc unitarity limit of the BK equation, despite the contrasting physical pictures. Indeed, we recover the equations of saturation physics in the final state dynamics of QCD.
Global stability and quadratic Hamiltonian structure in Lotka-Volterra and quasi-polynomial systems
NASA Astrophysics Data System (ADS)
Szederkényi, Gábor; Hangos, Katalin M.
2004-04-01
We show that the global stability of quasi-polynomial (QP) and Lotka-Volterra (LV) systems with the well-known logarithmic Lyapunov function is equivalent to the existence of a local generalized dissipative Hamiltonian description of the LV system with a diagonal quadratic form as a Hamiltonian function. The Hamiltonian function can be calculated and the quadratic dissipativity neighborhood of the origin can be estimated by solving linear matrix inequalities.
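For reference, the logarithmic Lyapunov function commonly used for Lotka-Volterra systems (presumably the one referred to above) has the form below; x* denotes a positive equilibrium and c_i > 0 are weights. This is the standard textbook construction, not necessarily the exact form used in the paper.

```latex
% Logarithmic Lyapunov function for a Lotka-Volterra system with equilibrium x^*:
V(x) \;=\; \sum_i c_i \left( x_i - x_i^{*} - x_i^{*} \ln\frac{x_i}{x_i^{*}} \right),
\qquad V(x^{*}) = 0, \qquad V(x) > 0 \ \ \text{for } x \neq x^{*},\; x_i > 0 .
```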
NASA Astrophysics Data System (ADS)
Agostini, Lionel; Leschziner, Michael
2017-01-01
Direct numerical simulation data for channel flow at a friction Reynolds number of 4200, generated by Lozano-Durán and Jiménez [J. Fluid Mech. 759, 432 (2014), 10.1017/jfm.2014.575], are used to examine the properties of near-wall turbulence within subranges of eddy-length scale. Attention is primarily focused on the intermediate layer (mesolayer) covering the logarithmic velocity region within the range of wall-scaled wall-normal distance of 80-1500. The examination is based on a number of statistical properties, including premultiplied and compensated spectra, the premultiplied derivative of the second-order structure function, and three scalar parameters that characterize the anisotropic or isotropic state of the various length-scale subranges. This analysis leads to the delineation of three regions within the map of wall-normal-wise premultiplied spectra, each characterized by distinct turbulence properties. A question of particular interest is whether the Townsend-Perry attached-eddy hypothesis (AEH) can be shown to be valid across the entire mesolayer, in contrast to the usual focus on the outer portion of the logarithmic-velocity layer at high Reynolds numbers, which is populated with very-large-scale motions. This question is addressed by reference to properties in the premultiplied scalewise derivative of the second-order structure function (PMDS2) and joint probability density functions of streamwise-velocity fluctuations and their streamwise and spanwise derivatives. This examination provides evidence, based primarily on the existence of a plateau region in the PMDS2, for the qualified validity of the AEH right down to the lower limit of the logarithmic velocity range.
Frømyr, Tomas-Roll; Bourgeaux-Goget, Marie; Hansen, Finn Knut
2015-05-01
A method has been developed to characterize the dispersion of multi-wall carbon nanotubes in water using a disc centrifuge for the detection of individual carbon nanotubes, residual aggregates, and contaminants. Carbon nanotubes produced by arc-discharge have been measured and compared with carbon nanotubes produced by chemical vapour deposition. Studies performed on both pristine (see text) arc-discharge nanotubes is rather strong and that high ultrasound intensity is required to achieve complete dispersion of carbon nanotube bundles. The logarithm of the mode of the particle size distribution of the arc-discharge carbon nanotubes was found to be a linear function of the logarithm of the total ultrasonic energy input in the dispersion process.
Logarithmic entanglement lightcone in many-body localized systems
NASA Astrophysics Data System (ADS)
Deng, Dong-Ling; Li, Xiaopeng; Pixley, J. H.; Wu, Yang-Le; Das Sarma, S.
2017-01-01
We theoretically study the response of a many-body localized system to a local quench from a quantum information perspective. We find that the local quench triggers entanglement growth throughout the whole system, giving rise to a logarithmic lightcone. This saturates the modified Lieb-Robinson bound for quantum information propagation in many-body localized systems previously conjectured based on the existence of local integrals of motion. In addition, near the localization-delocalization transition, we find that the final states after the local quench exhibit volume-law entanglement. We also show that the local quench induces a deterministic orthogonality catastrophe for highly excited eigenstates, where the typical wave-function overlap between the pre- and postquench eigenstates decays exponentially with the system size.
Spectroscopy of the Schwarzschild black hole at arbitrary frequencies.
Casals, Marc; Ottewill, Adrian
2012-09-14
Linear field perturbations of a black hole are described by the Green function of the wave equation that they obey. After Fourier decomposing the Green function, its two natural contributions are given by poles (quasinormal modes) and a largely unexplored branch cut in the complex frequency plane. We present new analytic methods for calculating the branch cut on a Schwarzschild black hole for arbitrary values of the frequency. The branch cut yields a power-law tail decay for late times in the response of a black hole to an initial perturbation. We determine explicitly the first three orders in the power-law and show that the branch cut also yields a new logarithmic behavior T^(-2ℓ-5) ln T for late times. Before the tail sets in, the quasinormal modes dominate the black hole response. For electromagnetic perturbations, the quasinormal mode frequencies approach the branch cut at large overtone index n. We determine these frequencies up to order n^(-5/2) and, formally, to arbitrary order. Highly damped quasinormal modes are of particular interest in that they have been linked to quantum properties of black holes.
Binomial test statistics using Psi functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowman, Kimiko O.
2007-01-01
For the negative binomial model (probability generating function (p + 1 - pt)^(-k)), a logarithmic derivative is the Psi function difference ψ(k + x) - ψ(k); this and its derivatives lead to a test statistic to decide on the validity of a specified model. The test statistic uses a data base so there exists a comparison available between theory and application. Note that the test function is not dominated by outliers. Applications to (i) Fisher's tick data, (ii) accidents data, (iii) Weldon's dice data are included.
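A small numerical check, using SciPy's digamma, of the identity behind the statistic: the derivative with respect to k of ln[Γ(k+x)/Γ(k)] (the k-dependent part of the negative binomial log-likelihood) equals ψ(k+x) − ψ(k). The sample values of k and x are arbitrary.

```python
# Finite-difference check that d/dk [ln Gamma(k + x) - ln Gamma(k)] = psi(k + x) - psi(k).
from scipy.special import gammaln, psi

k, x, h = 3.7, 5, 1e-6

def log_gamma_ratio(k_val, x_val):
    """ln Gamma(k + x) - ln Gamma(k)."""
    return gammaln(k_val + x_val) - gammaln(k_val)

numerical = (log_gamma_ratio(k + h, x) - log_gamma_ratio(k - h, x)) / (2 * h)
analytic = psi(k + x) - psi(k)
print(f"finite difference: {numerical:.8f}")
print(f"psi(k+x)-psi(k):   {analytic:.8f}")
```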
QLog Solar-Cell Mode Photodiode Logarithmic CMOS Pixel Using Charge Compression and Readout †
Ni, Yang
2018-01-01
In this paper, we present a new logarithmic pixel design currently under development at New Imaging Technologies SA (NIT). This new logarithmic pixel design uses charge domain logarithmic signal compression and charge-transfer-based signal readout. This structure gives a linear response in low light conditions and logarithmic response in high light conditions. The charge transfer readout efficiently suppresses the reset (KTC) noise by using true correlated double sampling (CDS) in low light conditions. In high light conditions, thanks to charge domain logarithmic compression, it has been demonstrated that 3000 electrons should be enough to cover a 120 dB dynamic range with a mobile phone camera-like signal-to-noise ratio (SNR) over the whole dynamic range. This low electron count permits the use of ultra-small floating diffusion capacitance (sub-fF) without charge overflow. The resulting large conversion gain permits a single photon detection capability with a wide dynamic range without a complex sensor/system design. A first prototype sensor with 320 × 240 pixels has been implemented to validate this charge domain logarithmic pixel concept and modeling. The first experimental results validate the logarithmic charge compression theory and the low readout noise due to the charge-transfer-based readout. PMID:29443903
QLog Solar-Cell Mode Photodiode Logarithmic CMOS Pixel Using Charge Compression and Readout.
Ni, Yang
2018-02-14
In this paper, we present a new logarithmic pixel design currently under development at New Imaging Technologies SA (NIT). This new logarithmic pixel design uses charge domain logarithmic signal compression and charge-transfer-based signal readout. This structure gives a linear response in low light conditions and logarithmic response in high light conditions. The charge transfer readout efficiently suppresses the reset (KTC) noise by using true correlated double sampling (CDS) in low light conditions. In high light conditions, thanks to charge domain logarithmic compression, it has been demonstrated that 3000 electrons should be enough to cover a 120 dB dynamic range with a mobile phone camera-like signal-to-noise ratio (SNR) over the whole dynamic range. This low electron count permits the use of ultra-small floating diffusion capacitance (sub-fF) without charge overflow. The resulting large conversion gain permits a single photon detection capability with a wide dynamic range without a complex sensor/system design. A first prototype sensor with 320 × 240 pixels has been implemented to validate this charge domain logarithmic pixel concept and modeling. The first experimental results validate the logarithmic charge compression theory and the low readout noise due to the charge-transfer-based readout.
Investigation of logarithmic spiral nanoantennas at optical frequencies
NASA Astrophysics Data System (ADS)
Verma, Anamika; Pandey, Awanish; Mishra, Vigyanshu; Singh, Ten; Alam, Aftab; Dinesh Kumar, V.
2013-12-01
We report the first study of a logarithmic spiral antenna in the optical frequency range. Using the finite integration technique, we investigated the spectral and radiation properties of a logarithmic spiral nanoantenna and a complementary structure made of thin gold film. A comparison is made with results for an Archimedean spiral nanoantenna. Such nanoantennas can exhibit broadband behavior that is independent of polarization. Two prominent features of logarithmic spiral nanoantennas are highly directional far field emission and perfectly circularly polarized radiation when excited by a linearly polarized source. The logarithmic spiral nanoantenna promises potential advantages over Archimedean spirals and could be harnessed for several applications in nanophotonics and allied areas.
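The geometry underlying such antennas is the logarithmic (equiangular) spiral r = a·exp(bθ); the short sketch below, with arbitrary parameter values, illustrates the scale invariance that motivates their broadband behavior. It is only a geometric illustration, not an electromagnetic simulation.

```python
# Logarithmic spiral r = a * exp(b * theta): scaling the radius by a constant factor is
# equivalent to a rotation, so the curve has no characteristic length scale.
import numpy as np

a, b = 1.0, 0.22                       # arbitrary scale and growth-rate parameters
theta = np.linspace(0.0, 4 * np.pi, 1000)
r = a * np.exp(b * theta)
x, y = r * np.cos(theta), r * np.sin(theta)   # Cartesian coordinates of the spiral arm

theta0 = 1.3
ratio = np.exp(b * (theta0 + np.pi / 2)) / np.exp(b * theta0)
print("radius ratio per quarter turn:", ratio, " expected exp(b*pi/2):", np.exp(b * np.pi / 2))
```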
NASA Astrophysics Data System (ADS)
Székely, B.; Kania, A.; Standovár, T.; Heilmeier, H.
2016-06-01
The horizontal variation and vertical layering of the vegetation are important properties of the canopy structure determining the habitat; the three-dimensional (3D) distribution of objects (shrub layers, understory vegetation, etc.) is related to environmental factors (e.g., illumination, visibility). It has been shown that gaps in forests and mosaic-like structures are essential to biodiversity, and various methods have been introduced to quantify this property. As the distribution of gaps in the vegetation is a multi-scale phenomenon, scale-independent methods are preferred in order to capture it in its entirety; one of these is the calculation of lacunarity. We used Airborne Laser Scanning point clouds measured over a forest plantation situated in a former floodplain. The flat topographic relief ensured that tree growth is independent of topographic effects. The tree pattern in the plantation crops provided various quasi-regular and irregular patterns, as well as various ages of the stands. The point clouds were voxelized, and layers of voxels were treated as images providing two-dimensional input. The images calculated for a certain vicinity of the reference points were used to compute lacunarity curves, providing a stack of lacunarity curves for each reference point. These sets of curves have been compared to reveal spatial changes of this property. As the dynamic range of the lacunarity values is very large, the natural logarithms of the values were considered. Logarithms of the lacunarity functions show canopy-related variations, and we analysed these variations along transects. The spatial variation can be related to forest properties and ecology-specific aspects.
NASA Astrophysics Data System (ADS)
Boughezal, Radja; Isgrò, Andrea; Petriello, Frank
2018-04-01
We present a detailed derivation of the power corrections to the factorization theorem for the 0-jettiness event shape variable T . Our calculation is performed directly in QCD without using the formalism of effective field theory. We analytically calculate the next-to-leading logarithmic power corrections for small T at next-to-leading order in the strong coupling constant, extending previous computations which obtained only the leading-logarithmic power corrections. We address a discrepancy in the literature between results for the leading-logarithmic power corrections to a particular definition of 0-jettiness. We present a numerical study of the power corrections in the context of their application to the N -jettiness subtraction method for higher-order calculations, using gluon-fusion Higgs production as an example. The inclusion of the next-to-leading-logarithmic power corrections further improves the numerical efficiency of the approach beyond the improvement obtained from the leading-logarithmic power corrections.
Vacuum energy from noncommutative models
NASA Astrophysics Data System (ADS)
Mignemi, S.; Samsarov, A.
2018-04-01
The vacuum energy is computed for a scalar field in a noncommutative background in several models of noncommutative geometry. One may expect that the noncommutativity introduces a natural cutoff on the ultraviolet divergences of field theory. Our calculations show however that this depends on the particular model considered: in some cases the divergences are suppressed and the vacuum energy is only logarithmically divergent, in other cases they are stronger than in the commutative theory.
A Collection of Numbers Whose Proof of Irrationality Is Like that of the Number "e"
ERIC Educational Resources Information Center
Osler, Thomas J.; Stugard, Nicholas
2006-01-01
In some elementary courses, it is shown that square root of 2 is irrational. It is also shown that the roots like square root of 3, cube root of 2, etc., are irrational. Much less often, it is shown that the number "e," the base of the natural logarithm, is irrational, even though a proof is available that uses only elementary calculus. In this…
USDA-ARS?s Scientific Manuscript database
Growth-phase dependent gene regulation has recently been demonstrated to occur in B. pertussis, with many transcripts, including known virulence factors, significantly decreasing during the transition from logarithmic to stationary-phase growth. Given that B. pertussis is thought to have derived fro...
NASA Astrophysics Data System (ADS)
Zaryankin, A. E.
2017-11-01
The compatibility of L. Prandtl's semiempirical turbulence theory with the actual flow pattern in a turbulent boundary layer is considered in this article, and the final boundary-layer calculation results based on that theory are analyzed. It is shown that the additional conditions and relationships adopted to integrate Prandtl's differential equation, which relates the turbulent stresses in the boundary layer to the transverse velocity gradient, are fulfilled only in the near-wall region, where the equation itself loses meaning, and are inconsistent with the physics over the main part of the integration domain. It is noted that the concept of a laminar sublayer between the wall and the turbulent boundary layer was introduced as a way of giving physical meaning to the logarithmic velocity profile, and can be regarded as an adjustment of the actual flow to a formula that is inconsistent with the actual boundary conditions. It is shown that the agreement of the experimental data with the logarithmic profile is obtained by using as the argument not a particular physical quantity but a function of that quantity.
Stratified Flow Past a Hill: Dividing Streamline Concept Revisited
NASA Astrophysics Data System (ADS)
Leo, Laura S.; Thompson, Michael Y.; Di Sabatino, Silvana; Fernando, Harindra J. S.
2016-06-01
The Sheppard formula (Q J R Meteorol Soc 82:528-529, 1956) for the dividing streamline height H_s assumes a uniform velocity U_∞ and a constant buoyancy frequency N for the approach flow towards a mountain of height h, and takes the form H_s/h = 1 - F, where F = U_∞/(Nh). We extend this solution to a logarithmic approach-velocity profile with constant N. An analytical solution is obtained for H_s/h in terms of Lambert-W functions, which also suggests alternative scaling for H_s/h. A 'modified' logarithmic velocity profile is proposed for stably stratified atmospheric boundary-layer flows. A field experiment designed to observe H_s is described, which utilized instrumentation from the spring field campaign of the Mountain Terrain Atmospheric Modeling and Observations (MATERHORN) Program. Multiple releases of smoke at F ≈ 0.3-0.4 support the new formulation, notwithstanding the limited success of experiments due to logistical constraints. No dividing streamline is discerned for F ≈ 10, since, if present, it is too close to the foothill. Flow separation and vortex shedding are observed in this case. The proposed modified logarithmic profile is in reasonable agreement with experimental observations.
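For orientation, the standard energy-balance criterion behind the Sheppard formula is reproduced below; the paper's extension replaces the uniform U_∞ on the left-hand side with a logarithmic profile U(z), which leads to the Lambert-W solution mentioned in the abstract.

```latex
% A parcel on the dividing streamline at height H_s has just enough kinetic energy to
% reach the hill top h:
\frac{1}{2}\,U^{2}(H_s) \;=\; \int_{H_s}^{h} N^{2}(z)\,(h - z)\,\mathrm{d}z .
% For constant U_\infty and N the right-hand side is N^{2}(h - H_s)^{2}/2, so that
\frac{1}{2}\,U_\infty^{2} \;=\; \frac{N^{2}(h - H_s)^{2}}{2}
\quad\Longrightarrow\quad
\frac{H_s}{h} \;=\; 1 - \frac{U_\infty}{N h} \;=\; 1 - F .
```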
Phase pupil functions for focal-depth enhancement derived from a Wigner distribution function.
Zalvidea, D; Sicre, E E
1998-06-10
A method for obtaining phase-retardation functions, which give rise to an increase of the image focal depth, is proposed. To this end, the Wigner distribution function corresponding to a specific aperture that has an associated small depth of focus in image space is conveniently sheared in the phase-space domain to generate a new Wigner distribution function. From this new function a more uniform on-axis image irradiance can be accomplished. This approach is illustrated by comparison of the imaging performance of both the derived phase function and a previously reported logarithmic phase distribution.
Cota, Wesley; Ferreira, Silvio C; Ódor, Géza
2016-03-01
We provide numerical evidence for slow dynamics of the susceptible-infected-susceptible model evolving on finite-size random networks with power-law degree distributions. Extensive simulations were done by averaging the activity density over many realizations of networks. We investigated the effects of outliers in both highly fluctuating (natural cutoff) and nonfluctuating (hard cutoff) most connected vertices. Logarithmic and power-law decays in time were found for natural and hard cutoffs, respectively. This happens in extended regions of the control parameter space λ_1 < λ < λ_2, suggesting Griffiths effects induced by the topological inhomogeneities. Optimal fluctuation theory considering sample-to-sample fluctuations of the pseudothresholds is presented to explain the observed slow dynamics. A quasistationary analysis shows that response functions remain bounded at λ_2. We argue these to be signals of a smeared transition. However, in the thermodynamic limit the Griffiths effects lose their relevance and the model has a conventional critical point at λ_c = 0. Since many real networks are composed of heterogeneous and weakly connected modules, the slow dynamics found in our analysis of independent and finite networks can play an important role for the deeper understanding of such systems.
NASA Astrophysics Data System (ADS)
Fyodorov, Yan V.; Bouchaud, Jean-Philippe
2008-09-01
We investigate some implications of the freezing scenario proposed by Carpentier and Le Doussal (CLD) for a random energy model (REM) with logarithmically correlated random potential. We introduce a particular (circular) variant of the model, and show that the integer moments of the partition function in the high-temperature phase are given by the well-known Dyson Coulomb gas integrals. The CLD freezing scenario allows one to use those moments for extracting the distribution of the free energy in both high- and low-temperature phases. In particular, it yields the full distribution of the minimal value in the potential sequence. This provides an explicit new class of extreme-value statistics for strongly correlated variables, manifestly different from the standard Gumbel class.
Helicity evolution at small x : Flavor singlet and nonsinglet observables
Kovchegov, Yuri V.; Pitonyak, Daniel; Sievert, Matthew D.
2017-01-30
We extend our earlier results for the quark helicity evolution at small x to derive the small-x asymptotics of the flavor singlet and flavor nonsinglet quark helicity TMDs and PDFs and of the g_1 structure function. In the flavor singlet case we rederive the evolution equations obtained in our previous paper on the subject, performing additional cross-checks of our results. In the flavor nonsinglet case we construct new small-x evolution equations by employing the large-N_c limit. Here, all evolution equations resum double-logarithmic powers of α_s ln²(1/x) in the polarization-dependent evolution along with the single-logarithmic powers of α_s ln(1/x) in the unpolarized evolution, which includes saturation effects.
Helicity evolution at small x : Flavor singlet and nonsinglet observables
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kovchegov, Yuri V.; Pitonyak, Daniel; Sievert, Matthew D.
We extend our earlier results for the quark helicity evolution at small x to derive the small-x asymptotics of the flavor singlet and flavor nonsinglet quark helicity TMDs and PDFs and of the g_1 structure function. In the flavor singlet case we rederive the evolution equations obtained in our previous paper on the subject, performing additional cross-checks of our results. In the flavor nonsinglet case we construct new small-x evolution equations by employing the large-N_c limit. Here, all evolution equations resum double-logarithmic powers of α_s ln²(1/x) in the polarization-dependent evolution along with the single-logarithmic powers of α_s ln(1/x) in the unpolarized evolution, which includes saturation effects.
Scaling of Rényi entanglement entropies of the free fermi-gas ground state: a rigorous proof.
Leschke, Hajo; Sobolev, Alexander V; Spitzer, Wolfgang
2014-04-25
In a remarkable paper [Phys. Rev. Lett. 96, 100503 (2006)], Gioev and Klich conjectured an explicit formula for the leading asymptotic growth of the spatially bipartite von Neumann entanglement entropy of noninteracting fermions in multidimensional Euclidean space at zero temperature. Based on recent progress by one of us (A. V. S.) in semiclassical functional calculus for pseudodifferential operators with discontinuous symbols, we provide here a complete proof of that formula and of its generalization to Rényi entropies of all orders α>0. The special case α=1/2 is also known under the name logarithmic negativity and often considered to be a particularly useful quantification of entanglement. These formulas exhibiting a "logarithmically enhanced area law" have been used already in many publications.
Gauge boson exchange in AdS d+1
NASA Astrophysics Data System (ADS)
D'Hoker, Eric; Freedman, Daniel Z.
1999-04-01
We study the amplitude for exchange of massless gauge bosons between pairs of massive scalar fields in anti-de Sitter space. In the AdS/CFT correspondence this amplitude describes the contribution of conserved flavor symmetry currents to 4-point functions of scalar operators in the boundary conformal theory. A concise, covariant, Y2K compatible derivation of the gauge boson propagator in AdS d + 1 is given. Techniques are developed to calculate the two bulk integrals over AdS space leading to explicit expressions or convenient, simple integral representations for the amplitude. The amplitude contains leading power and sub-leading logarithmic singularities in the gauge boson channel and leading logarithms in the crossed channel. The new methods of this paper are expected to have other applications in the study of the Maldacena conjecture.
Reform in Mathematics Education: "What Do We Teach for and Against?"
ERIC Educational Resources Information Center
Petric, Marius
2011-01-01
This study examines the implementation of a problem-based math curriculum that uses problem situations related to global warming and pollution to involve students in modeling polynomial, exponential, and logarithmic functions. Each instructional module includes activities that engage students in investigating current social justice and…
Demonstrating the Light-Emitting Diode.
ERIC Educational Resources Information Center
Johnson, David A.
1995-01-01
Describes a simple inexpensive circuit which can be used to quickly demonstrate the basic function and versatility of the solid state diode. Can be used to demonstrate the light-emitting diode (LED) as a light emitter, temperature sensor, light detector with both a linear and logarithmic response, and charge storage device. (JRH)
Ryabov, Artem; Berestneva, Ekaterina; Holubec, Viktor
2015-09-21
The paper addresses Brownian motion in the logarithmic potential with time-dependent strength, U(x, t) = g(t)log(x), subject to the absorbing boundary at the origin of coordinates. Such a model can represent the kinetics of diffusion-controlled reactions of charged molecules or the escape of Brownian particles over a time-dependent entropic barrier at the end of a biological pore. We present a simple asymptotic theory which yields the long-time behavior of both the survival probability (first-passage properties) and the moments of the particle position (dynamics). The asymptotic survival probability, i.e., the probability that the particle will not hit the origin before a given time, is a functional of the potential strength. As such, it exhibits a rather varied behavior for different functions g(t). The latter can be grouped into three classes according to the regime of the asymptotic decay of the survival probability. We distinguish (1) the regular regime (power-law decay), (2) the marginal regime (power law times a slow function of time), and (3) the regime of enhanced absorption (decay faster than the power law, e.g., exponential). Results of the asymptotic theory show good agreement with numerical simulations.
Statistical scaling of pore-scale Lagrangian velocities in natural porous media.
Siena, M; Guadagnini, A; Riva, M; Bijeljic, B; Pereira Nunes, J P; Blunt, M J
2014-08-01
We investigate the scaling behavior of sample statistics of pore-scale Lagrangian velocities in two different rock samples, Bentheimer sandstone and Estaillades limestone. The samples are imaged using x-ray computer tomography with micron-scale resolution. The scaling analysis relies on the study of the way qth-order sample structure functions (statistical moments of order q of absolute increments) of Lagrangian velocities depend on separation distances, or lags, traveled along the mean flow direction. In the sandstone block, sample structure functions of all orders exhibit a power-law scaling within a clearly identifiable intermediate range of lags. Sample structure functions associated with the limestone block display two diverse power-law regimes, which we infer to be related to two overlapping spatially correlated structures. In both rocks and for all orders q, we observe linear relationships between logarithmic structure functions of successive orders at all lags (a phenomenon that is typically known as extended power scaling, or extended self-similarity). The scaling behavior of Lagrangian velocities is compared with the one exhibited by porosity and specific surface area, which constitute two key pore-scale geometric observables. The statistical scaling of the local velocity field reflects the behavior of these geometric observables, with the occurrence of power-law-scaling regimes within the same range of lags for sample structure functions of Lagrangian velocity, porosity, and specific surface area.
Transverse parton distribution functions at next-to-next-to-leading order: the quark-to-quark case.
Gehrmann, Thomas; Lübbert, Thomas; Yang, Li Lin
2012-12-14
We present a calculation of the perturbative quark-to-quark transverse parton distribution function at next-to-next-to-leading order based on a gauge invariant operator definition. We demonstrate for the first time that such a definition works beyond the first nontrivial order. We extract from our calculation the coefficient functions relevant for a next-to-next-to-next-to-leading logarithmic Q_T resummation in a large class of processes at hadron colliders.
Stress Energy Tensor in LCFT and Logarithmic Sugawara Construction
NASA Astrophysics Data System (ADS)
Kogan, Ian I.; Nichols, Alexander
We discuss the partners of the stress energy tensor and their structure in Logarithmic conformal field theories. In particular we draw attention to the fundamental differences between theories with zero and non-zero central charge. However they are both characterised by at least two independent parameters. We show how, by using a generalised Sugawara construction, one can calculate the logarithmic partner of T. We show that such a construction works in the c=-2 theory using the conformal dimension one primary currents which generate a logarithmic extension of the Kac-Moody algebra. This is an expanded version of a talk presented by A. Nichols at the conference on Logarithmic Conformal Field Theory and its Applications in Tehran Iran, 2001.
Compression technique for large statistical data bases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eggers, S.J.; Olken, F.; Shoshani, A.
1981-03-01
The compression of large statistical databases is explored, and techniques are proposed for organizing the compressed data such that the time required to access the data is logarithmic. The techniques exploit special characteristics of statistical databases, namely, variation in the space required for the natural encoding of integer attributes, a prevalence of a few repeating values or constants, and the clustering of both data of the same length and constants in long, separate series. The techniques are variations of run-length encoding, in which modified run-lengths for the series are extracted from the data stream and stored in a header, which is used to form the base level of a B-tree index into the database. The run-lengths are cumulative, and therefore the access time of the data is logarithmic in the size of the header. The details of the compression scheme and its implementation are discussed, several special cases are presented, and an analysis is given of the relative performance of the various versions.
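As an illustrative aside, the cumulative run-length header described above can be sketched in a few lines; the class, field names, and sample data below are invented, and a sorted-array binary search stands in for the B-tree base level, but the sketch shows why a record lookup costs time logarithmic in the header size.

import bisect

class RunLengthColumn:
    def __init__(self, runs):
        # runs: list of (value, run_length) pairs in storage order
        self.values = [value for value, _ in runs]
        self.cum = []                     # cumulative run lengths: the "header"
        total = 0
        for _, length in runs:
            total += length
            self.cum.append(total)

    def lookup(self, i):
        # Binary search over the cumulative lengths finds the run holding record i.
        pos = bisect.bisect_right(self.cum, i)
        return self.values[pos]

col = RunLengthColumn([("A", 1000), ("B", 3), ("A", 250)])
print(col.lookup(0), col.lookup(1001), col.lookup(1200))   # -> A B A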
Kinetics of the B1-B2 phase transition in KCl under rapid compression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Chuanlong; Smith, Jesse S.; Sinogeikin, Stanislav V.
2016-01-28
Kinetics of the B1-B2 phase transition in KCl has been investigated under various compression rates (0.03–13.5 GPa/s) in a dynamic diamond anvil cell using time-resolved x-ray diffraction and fast imaging. Our experimental data show that the volume fraction across the transition generally gives sigmoidal curves as a function of pressure during rapid compression. Based upon classical nucleation and growth theories (Johnson-Mehl-Avrami-Kolmogorov theories), we propose a model that is applicable for studying kinetics for the compression rates studied. The fit of the experimental volume fraction as a function of pressure provides information on effective activation energy and average activation volume at a given compression rate. The resulting parameters are successfully used for interpreting several experimental observables that are compression-rate dependent, such as the transition time, grain size, and over-pressurization. The effective activation energy (Q_eff) is found to decrease linearly with the logarithm of compression rate. When Q_eff is applied to the Arrhenius equation, this relationship can be used to interpret the experimentally observed linear relationship between the logarithm of the transition time and logarithm of the compression rates. The decrease of Q_eff with increasing compression rate results in the decrease of the nucleation rate, which is qualitatively in agreement with the observed change of the grain size with compression rate. The observed over-pressurization is also well explained by the model when an exponential relationship between the average activation volume and the compression rate is assumed.
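As an illustrative aside (not drawn from the paper itself; Q_0, a, and the reference compression rate are placeholder symbols), combining the reported linear decrease of the effective activation energy with the logarithm of the compression rate and an Arrhenius-type expression for the transition time gives

\[
Q_{\mathrm{eff}} \approx Q_0 - a\,\ln\!\frac{\dot P}{\dot P_0},\qquad
t_{\mathrm{tr}} \propto \exp\!\left(\frac{Q_{\mathrm{eff}}}{RT}\right)
\;\Longrightarrow\;
\ln t_{\mathrm{tr}} \approx \text{const} - \frac{a}{RT}\,\ln\!\frac{\dot P}{\dot P_0},
\]

i.e. a straight-line relation between the logarithm of the transition time and the logarithm of the compression rate, which is the experimentally observed relationship the abstract refers to.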
Modal Testing of the NPSAT1 Engineering Development Unit
2012-07-01
...I declare that the present Master's thesis was prepared by me independently and only with the use of the cited sources and aids...logarithmic scale. As Figure 2 shows, natural frequencies are indicated by large values of the first CMIF (peaks), and multiple modes can be detected by...structure's behavior. Ewins even states, "that no large-scale modal test should be permitted to proceed until some preliminary SDOF analyses have
Value function in economic growth model
NASA Astrophysics Data System (ADS)
Bagno, Alexander; Tarasyev, Alexandr A.; Tarasyev, Alexander M.
2017-11-01
Properties of the value function are examined in an infinite-horizon optimal control problem with an unlimited integrand index appearing in the quality functional with a discount factor. Optimal control problems of this type describe solutions in models of economic growth. Necessary and sufficient conditions are derived to ensure that the value function satisfies the infinitesimal stability properties. It is proved that the value function coincides with the minimax solution of the Hamilton-Jacobi equation. A description of the asymptotic growth behavior of the value function is provided for the logarithmic, power, and exponential quality functionals, and an example is given to illustrate the construction of the value function in economic growth models.
ERIC Educational Resources Information Center
Reed, Cameron
2016-01-01
How can old-fashioned tables of logarithms be computed without technology? Today, of course, no practicing mathematician, scientist, or engineer would actually use logarithms to carry out a calculation, let alone worry about deriving them from scratch. But high school students may be curious about the process. This article develops a…
Natural Scale for Employee's Payment Based on the Entropy Law
NASA Astrophysics Data System (ADS)
Cosma, Ioan; Cosma, Adrian
2009-05-01
An econophysical model intended to establish an equitable scale of employees' salaries in accordance with the importance and effectiveness of labor is considered. Our model, based on the concept and law of entropy, can designate all the parameters connected to the level of personal incomes and taxation, and also to the distribution of employees versus amount of salary in any remuneration system. Consistent with the laws of classical and statistical thermodynamics, this scale reveals that personal incomes increase progressively in a natural logarithmic way, in contrast with other scales arbitrarily established by the governments of each country or by employing companies.
Logarithmic scaling for fluctuations of a scalar concentration in wall turbulence.
Mouri, Hideaki; Morinaga, Takeshi; Yagi, Toshimasa; Mori, Kazuyasu
2017-12-01
Within wall turbulence, there is a sublayer where the mean velocity and the variance of velocity fluctuations vary logarithmically with the height from the wall. This logarithmic scaling is also known for the mean concentration of a passive scalar. By using heat as such a scalar in a laboratory experiment of a turbulent boundary layer, the existence of the logarithmic scaling is shown here for the variance of fluctuations of the scalar concentration. It is reproduced by a model of energy-containing eddies that are attached to the wall.
Gandler, W; Shapiro, H
1990-01-01
Logarithmic amplifiers (log amps), which produce an output signal proportional to the logarithm of the input signal, are widely used in cytometry for measurements of parameters that vary over a wide dynamic range, e.g., cell surface immunofluorescence. Existing log amp circuits all deviate to some extent from ideal performance with respect to dynamic range and fidelity to the logarithmic curve; accuracy in quantitative analysis using log amps therefore requires that log amps be individually calibrated. However, accuracy and precision may be limited by photon statistics and system noise when very low level input signals are encountered.
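As an illustrative aside (all numbers are invented, and real log amps deviate from this ideal curve, which is why the abstract stresses individual calibration), an ideal log amp maps equal ratios of input onto equal increments of output, compressing several decades of signal into a modest output range:

import math

def ideal_log_amp(signal, volts_per_decade=0.5, ref=1e-3):
    # Ideal response: the output rises by a fixed step for every tenfold increase in input.
    return volts_per_decade * math.log10(signal / ref)

for s in (1e-3, 1e-2, 1e-1, 1.0, 10.0):   # inputs spanning four decades
    print(f"input {s:8.4g} -> output {ideal_log_amp(s):5.2f} V")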
Stress Energy tensor in LCFT and the Logarithmic Sugawara construction
NASA Astrophysics Data System (ADS)
Kogan, Ian I.; Nichols, Alexander
2002-01-01
We discuss the partners of the stress energy tensor and their structure in Logarithmic conformal field theories. In particular we draw attention to the fundamental differences between theories with zero and non-zero central charge. However they are both characterised by at least two independent parameters. We show how, by using a generalised Sugawara construction, one can calculate the logarithmic partner of T. We show that such a construction works in the c = -2 theory using the conformal dimension one primary currents which generate a logarithmic extension of the Kac-Moody algebra.
Equilibrium Solutions of the Logarithmic Hamiltonian Leapfrog for the N-body Problem
NASA Astrophysics Data System (ADS)
Minesaki, Yukitaka
2018-04-01
We prove that a second-order logarithmic Hamiltonian leapfrog for the classical general N-body problem (CGNBP) designed by Mikkola and Tanikawa and some higher-order logarithmic Hamiltonian methods based on symmetric multicompositions of the logarithmic algorithm exactly reproduce the orbits of elliptic relative equilibrium solutions in the original CGNBP. These methods are explicit symplectic methods. Before this proof, only some implicit discrete-time CGNBPs proposed by Minesaki had been analytically shown to trace the orbits of elliptic relative equilibrium solutions. The proof is therefore the first existence proof for explicit symplectic methods. Such logarithmic Hamiltonian methods with a variable time step can also precisely retain periodic orbits in the classical general three-body problem, which generic numerical methods with a constant time step cannot do.
A meta-cognitive learning algorithm for a Fully Complex-valued Relaxation Network.
Savitha, R; Suresh, S; Sundararajan, N
2012-08-01
This paper presents a meta-cognitive learning algorithm for a single hidden layer complex-valued neural network called "Meta-cognitive Fully Complex-valued Relaxation Network (McFCRN)". McFCRN has two components: a cognitive component and a meta-cognitive component. A Fully Complex-valued Relaxation Network (FCRN) with a fully complex-valued Gaussian like activation function (sech) in the hidden layer and an exponential activation function in the output layer forms the cognitive component. The meta-cognitive component contains a self-regulatory learning mechanism which controls the learning ability of FCRN by deciding what-to-learn, when-to-learn and how-to-learn from a sequence of training data. The input parameters of cognitive components are chosen randomly and the output parameters are estimated by minimizing a logarithmic error function. The problem of explicit minimization of magnitude and phase errors in the logarithmic error function is converted to system of linear equations and output parameters of FCRN are computed analytically. McFCRN starts with zero hidden neuron and builds the number of neurons required to approximate the target function. The meta-cognitive component selects the best learning strategy for FCRN to acquire the knowledge from training data and also adapts the learning strategies to implement best human learning components. Performance studies on a function approximation and real-valued classification problems show that proposed McFCRN performs better than the existing results reported in the literature. Copyright © 2012 Elsevier Ltd. All rights reserved.
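As an illustrative aside (a generic sketch of a logarithmic error for complex-valued outputs, not the McFCRN training code), the appeal of such an error function is that its real and imaginary parts separate cleanly into magnitude and phase mismatches:

import cmath

def log_error(y_target, y_pred):
    # ln(y_t) - ln(y_p): the real part is the log-magnitude error, the imaginary part the phase error.
    e = cmath.log(y_target) - cmath.log(y_pred)
    return e.real, e.imag

mag_err, phase_err = log_error(2.0 * cmath.exp(1j * 0.8), 1.5 * cmath.exp(1j * 0.5))
print(mag_err, phase_err)   # about 0.288 (= ln(2/1.5)) and 0.3 rad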
NASA Astrophysics Data System (ADS)
Mantry, Sonny; Petriello, Frank
2010-05-01
We derive a factorization theorem for the Higgs boson transverse momentum (p_T) and rapidity (Y) distributions at hadron colliders, using the soft-collinear effective theory (SCET), for m_h ≫ p_T ≫ Λ_QCD, where m_h denotes the Higgs mass. In addition to the factorization of the various scales involved, the perturbative physics at the p_T scale is further factorized into two collinear impact-parameter beam functions (IBFs) and an inverse soft function (ISF). These newly defined functions are of a universal nature for the study of differential distributions at hadron colliders. The additional factorization of the p_T-scale physics simplifies the implementation of higher-order radiative corrections in α_s(p_T). We derive formulas for factorization in both momentum and impact-parameter space and discuss the relationship between them. Large logarithms of the relevant scales in the problem are summed using the renormalization group equations of the effective theories. Power corrections to the factorization theorem in p_T/m_h and Λ_QCD/p_T can be systematically derived. We perform multiple consistency checks on our factorization theorem including a comparison with known fixed-order QCD results. We compare the SCET factorization theorem with the Collins-Soper-Sterman approach to low-p_T resummation.
One Concept and Two Narrations: The Case of the Logarithm
ERIC Educational Resources Information Center
Hamdan, May
2008-01-01
Through an account of the history of exponential functions as presented in traditional calculus textbooks, I present my observations and remarks on the spiral development of the concept, and my concerns about the general presentations of the subject. In this article I emphasize how the different arrangements and sequencing of the subjects required…
Using Spreadsheets to Discover Meaning for Parameters in Nonlinear Models
ERIC Educational Resources Information Center
Green, Kris H.
2008-01-01
This paper explores the use of spreadsheets to develop an exploratory environment where mathematics students can develop their own understanding of the parameters of commonly encountered families of functions: linear, logarithmic, exponential and power. The key to this understanding involves opening up the definition of rate of change from the…
Inclusive production of small radius jets in heavy-ion collisions
Kang, Zhong-Bo; Ringer, Felix; Vitev, Ivan
2017-03-31
Here, we develop a new formalism to describe the inclusive production of small radius jets in heavy-ion collisions, which is consistent with jet calculations in the simpler proton–proton system. Only at next-to-leading order (NLO) and beyond, the jet radius parameter R and the jet algorithm dependence of the jet cross section can be studied and a meaningful comparison to experimental measurements is possible. We are able to consistently achieve NLO accuracy by making use of the recently developed semi-inclusive jet functions within Soft Collinear Effective Theory (SCET). Additionally, single logarithms of the jet size parameter α_s^n ln^n R are resummed to next-to-leading logarithmic (NLL_R) accuracy in proton–proton collisions. The medium-modified semi-inclusive jet functions are obtained within the framework of SCET with Glauber gluons that describe the interaction of jets with the medium. We also present numerical results for the suppression of inclusive jet cross sections in heavy-ion collisions at the LHC, and the formalism developed here can be extended directly to corresponding jet substructure observables.
Zhang, Wanli; Yang, Shiju; Li, Chuandong; Zhang, Wei; Yang, Xinsong
2018-08-01
This paper focuses on stochastic exponential synchronization of delayed memristive neural networks (MNNs) with the aid of systems with interval parameters, which are established by using the concept of the Filippov solution. A new intermittent controller and an adaptive controller with logarithmic quantization are constructed to deal simultaneously with the difficulties induced by time-varying delays, interval parameters, and stochastic perturbations. Moreover, not only can the control cost be reduced, but communication channels and bandwidth are also saved by using these controllers. Based on novel Lyapunov functions and new analytical methods, several synchronization criteria are established to realize the exponential synchronization of MNNs with stochastic perturbations via intermittent control and adaptive control with or without logarithmic quantization. Finally, numerical simulations are offered to substantiate our theoretical results. Copyright © 2018 Elsevier Ltd. All rights reserved.
Entropy and complexity analysis of hydrogenic Rydberg atoms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopez-Rosa, S.; Toranzo, I. V.
The internal disorder of hydrogenic Rydberg atoms as contained in their position and momentum probability densities is examined by means of the following information-theoretic spreading quantities: the radial and logarithmic expectation values, the Shannon entropy, and the Fisher information. As well, the complexity measures of Cramer-Rao, Fisher-Shannon, and Lopez Ruiz-Mancini-Calvet types are investigated in both reciprocal spaces. The leading term of these quantities is rigorously calculated by use of the asymptotic properties of the concomitant entropic functionals of the Laguerre and Gegenbauer orthogonal polynomials which control the wavefunctions of the Rydberg states in both position and momentum spaces. The associated generalized Heisenberg-like, logarithmic and entropic uncertainty relations are also given. Finally, application to linear (l = 0), circular (l = n - 1), and quasicircular (l = n - 2) states is explicitly done.
Quantum corrections to conductivity in graphene with vacancies
NASA Astrophysics Data System (ADS)
Araujo, E. N. D.; Brant, J. C.; Archanjo, B. S.; Medeiros-Ribeiro, G.; Alves, E. S.
2018-06-01
In this work, different regions of a graphene device were exposed to a 30 keV helium ion beam, creating a series of alternating strips of vacancy-type defects and pristine graphene. From magnetoconductance measurements as a function of temperature, density of carriers, and density of strips, we show that the electron-electron interaction is important to explain the logarithmic quantum corrections to the Drude conductivity in graphene with vacancies. It is known that vacancies in graphene behave as local magnetic moments that interact with the conduction electrons and lead to a logarithmic correction to the conductance through the Kondo effect. However, our work shows that it is necessary to account for the non-homogeneity of the sample to avoid misinterpretations about the Kondo physics due to the difficulties in separating the electron-electron interaction from the Kondo effect.
Non-abelian factorisation for next-to-leading-power threshold logarithms
NASA Astrophysics Data System (ADS)
Bonocore, D.; Laenen, E.; Magnea, L.; Vernazza, L.; White, C. D.
2016-12-01
Soft and collinear radiation is responsible for large corrections to many hadronic cross sections, near thresholds for the production of heavy final states. There is much interest in extending our understanding of this radiation to next-to-leading power (NLP) in the threshold expansion. In this paper, we generalise a previously proposed all-order NLP factorisation formula to include non-abelian corrections. We define a nonabelian radiative jet function, organising collinear enhancements at NLP, and compute it for quark jets at one loop. We discuss in detail the issue of double counting between soft and collinear regions. Finally, we verify our prescription by reproducing all NLP logarithms in Drell-Yan production up to NNLO, including those associated with double real emission. Our results constitute an important step in the development of a fully general resummation formalism for NLP threshold effects.
Maximum entropy perception-action space: a Bayesian model of eye movement selection
NASA Astrophysics Data System (ADS)
Colas, Francis; Bessière, Pierre; Girard, Benoît
2011-03-01
In this article, we investigate the issue of the selection of eye movements in a free-eye Multiple Object Tracking task. We propose a Bayesian model of retinotopic maps with a complex logarithmic mapping. This model is structured in two parts: a representation of the visual scene, and a decision model based on the representation. We compare different decision models based on different features of the representation and we show that taking into account uncertainty helps predict the eye movements of subjects recorded in a psychophysics experiment. Finally, based on experimental data, we postulate that the complex logarithmic mapping has a functional relevance, as the density of objects in this space is more uniform than expected. This may indicate that the representation space and control strategies are such that the object density is of maximum entropy.
How Do Students Acquire an Understanding of Logarithmic Concepts?
ERIC Educational Resources Information Center
Mulqueeny, Ellen
2012-01-01
The use of logarithms, an important tool for calculus and beyond, has been reduced to symbol manipulation without understanding in most entry-level college algebra courses. The primary aim of this research, therefore, was to investigate college students' understanding of logarithmic concepts through the use of a series of instructional tasks…
Hurst, Michelle; Monahan, K Leigh; Heller, Elizabeth; Cordes, Sara
2014-11-01
When placing numbers along a number line with endpoints 0 and 1000, children generally space numbers logarithmically until around the age of 7, when they shift to a predominantly linear pattern of responding. This developmental shift of responding on the number placement task has been argued to be indicative of a shift in the format of the underlying representation of number (Siegler & Opfer, ). In the current study, we provide evidence from both child and adult participants to suggest that performance on the number placement task may not reflect the structure of the mental number line, but instead is a function of the fluency (i.e. ease) with which the individual can work with the values in the sequence. In Experiment 1, adult participants respond logarithmically when placing numbers on a line with less familiar anchors (1639 to 2897), despite linear responding on control tasks with standard anchors involving a similar range (0 to 1287) and a similar numerical magnitude (2000 to 3000). In Experiment 2, we show a similar developmental shift in childhood from logarithmic to linear responding for a non-numerical sequence with no inherent magnitude (the alphabet). In conclusion, we argue that the developmental trend towards linear behavior on the number line task is a product of successful strategy use and mental fluency with the values of the sequence, resulting from familiarity with endpoints and increased knowledge about general ordering principles of the sequence.A video abstract of this article can be viewed at:http://www.youtube.com/watch?v=zg5Q2LIFk3M. © 2014 John Wiley & Sons Ltd.
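As an illustrative aside (the placements below are invented to mimic the compressed-at-the-top pattern described above, not data from the study), the usual analysis amounts to comparing a linear and a logarithmic fit to the same placements:

import numpy as np

numbers    = np.array([2, 5, 18, 34, 56, 78, 122, 230, 390, 720, 980], dtype=float)
placements = np.array([90, 195, 365, 435, 510, 540, 605, 675, 750, 820, 865], dtype=float)

def r_squared(y, y_hat):
    return 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)

lin_fit = np.polyfit(numbers, placements, 1)
log_fit = np.polyfit(np.log(numbers), placements, 1)
print("linear model R^2:", r_squared(placements, np.polyval(lin_fit, numbers)))
print("log model    R^2:", r_squared(placements, np.polyval(log_fit, np.log(numbers))))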
Factorization for jet radius logarithms in jet mass spectra at the LHC
Kolodrubetz, Daniel W.; Pietrulewicz, Piotr; Stewart, Iain W.; ...
2016-12-14
To predict the jet mass spectrum at a hadron collider it is crucial to account for the resummation of logarithms between the transverse momentum of the jet and its invariant mass m_J. For small jet areas there are additional large logarithms of the jet radius R, which affect the convergence of the perturbative series. We present an analytic framework for exclusive jet production at the LHC which gives a complete description of the jet mass spectrum including realistic jet algorithms and jet vetoes. It factorizes the scales associated with m_J, R, and the jet veto, enabling in addition the systematic resummation of jet radius logarithms in the jet mass spectrum beyond leading logarithmic order. We discuss the factorization formulae for the peak and tail region of the jet mass spectrum and for small and large R, and the relations between the different regimes and how to combine them. Regions of experimental interest are classified which do not involve large nonglobal logarithms. We also present universal results for nonperturbative effects and discuss various jet vetoes.
Collinearly-improved BK evolution meets the HERA data
Iancu, E.; Madrigal, J. D.; Mueller, A. H.; ...
2015-10-03
In a previous publication, we have established a collinearly-improved version of the Balitsky–Kovchegov (BK) equation, which resums to all orders the radiative corrections enhanced by large double transverse logarithms. Here, we study the relevance of this equation as a tool for phenomenology, by confronting it to the HERA data. To that aim, we first improve the perturbative accuracy of our resummation, by including two classes of single-logarithmic corrections: those generated by the first non-singular terms in the DGLAP splitting functions and those expressing the one-loop running of the QCD coupling. The equation thus obtained includes all the next-to-leading order corrections to the BK equation which are enhanced by (single or double) collinear logarithms. Furthermore, we then use numerical solutions to this equation to fit the HERA data for the electron–proton reduced cross-section at small Bjorken x. We obtain good quality fits for physically acceptable initial conditions. Our best fit, which shows a good stability up to virtualities as large as Q^2 = 400 GeV^2 for the exchanged photon, uses as an initial condition the running-coupling version of the McLerran–Venugopalan model, with the QCD coupling running according to the smallest dipole prescription.
Reducing bias and analyzing variability in the time-left procedure.
Trujano, R Emmanuel; Orduña, Vladimir
2015-04-01
The time-left procedure was designed to evaluate the psychophysical function for time. Although previous results indicated a linear relationship, it is not clear what role the observed bias toward the time-left option plays in this procedure and there are no reports of how variability changes with predicted indifference. The purposes of this experiment were to reduce bias experimentally, and to contrast the difference limen (a measure of variability around indifference) with predictions from scalar expectancy theory (linear timing) and behavioral economic model (logarithmic timing). A control group of 6 rats performed the original time-left procedure with C=60 s and S=5, 10,…, 50, 55 s, whereas a no-bias group of 6 rats performed the same conditions in a modified time-left procedure in which only a single response per choice trial was allowed. Results showed that bias was reduced for the no-bias group, observed indifference grew linearly with predicted indifference for both groups, and difference limen and Weber ratios decreased as expected indifference increased for the control group, which is consistent with linear timing, whereas for the no-bias group they remained constant, consistent with logarithmic timing. Therefore, the time-left procedure generates results consistent with logarithmic perceived time once bias is experimentally reduced. Copyright © 2015 Elsevier B.V. All rights reserved.
Aircraft Airframe Cost Estimation Using a Random Coefficients Model
1979-12-01
approach will also be used here. 2 Model Formulation Several different types of equations could be used for the basic form of the CER, such as linear ...5) Marcotte developed several CER's for fighter aircraft airframes using the log-linear model. A plot of the residuals from the CER for recurring...of the natural logarithm. Ordinary Least Squares The ordinary least squares procedure starts with the equation for the general linear model.
Yang, Hang; Wang, Mengyue; Yu, Junping; Wei, Hongping
2015-01-01
The global emergence of multidrug-resistant (MDR) bacteria is a growing threat to public health worldwide. Natural bacteriophage lysins are promising alternatives in the treatment of infections caused by Gram-positive pathogens, but not Gram-negative ones, like Acinetobacter baumannii and Pseudomonas aeruginosa, due to the barriers posed by their outer membranes. Recently, modifying a natural lysin with an antimicrobial peptide was found able to break the barriers, and to kill Gram-negative pathogens. Herein, a new peptide-modified lysin (PlyA) was constructed by fusing the cecropin A peptide residues 1–8 (KWKLFKKI) with the OBPgp279 lysin and its antibacterial activity was studied. PlyA showed good and broad antibacterial activities against logarithmic phase A. baumannii and P. aeruginosa, but much reduced activities against the cells in stationary phase. Addition of outer membrane permeabilizers (EDTA and citric acid) could enhance the antibacterial activity of PlyA against stationary phase cells. Finally, no antibacterial activity of PlyA could be observed in some bio-matrices, such as culture media, milk, and sera. In conclusion, we reported here a novel peptide-modified lysin with significant antibacterial activity against both logarithmic (without OMPs) and stationary phase (with OMPs) A. baumannii and P. aeruginosa cells in buffer, but further optimization is needed to achieve broad activity in diverse bio-matrices. PMID:26733995
Hollenbeak, Christopher S
2005-10-15
While risk-adjusted outcomes are often used to compare the performance of hospitals and physicians, the most appropriate functional form for the risk adjustment process is not always obvious for continuous outcomes such as costs. Semi-log models are used most often to correct skewness in cost data, but there has been limited research to determine whether the log transformation is sufficient or whether another transformation is more appropriate. This study explores the most appropriate functional form for risk-adjusting the cost of coronary artery bypass graft (CABG) surgery. Data included patients undergoing CABG surgery at four hospitals in the midwest and were fit to a Box-Cox model with random coefficients (BCRC) using Markov chain Monte Carlo methods. Marginal likelihoods and Bayes factors were computed to perform model comparison of alternative model specifications. Rankings of hospital performance were created from the simulation output and the rankings produced by Bayesian estimates were compared to rankings produced by standard models fit using classical methods. Results suggest that, for these data, the most appropriate functional form is not logarithmic, but corresponds to a Box-Cox transformation of -1. Furthermore, Bayes factors overwhelmingly rejected the natural log transformation. However, the hospital ranking induced by the BCRC model was not different from the ranking produced by maximum likelihood estimates of either the linear or semi-log model. Copyright (c) 2005 John Wiley & Sons, Ltd.
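As an illustrative aside (the cost value is hypothetical), the Box-Cox family nests both transformations compared in the abstract: the natural log is the limit λ → 0, while the preferred λ = -1 amounts to modeling an affine function of the reciprocal of cost:

import math

def box_cox(y, lam):
    # Box-Cox transform: (y**lam - 1)/lam for lam != 0, and ln(y) in the limit lam -> 0.
    if abs(lam) < 1e-12:
        return math.log(y)
    return (y ** lam - 1.0) / lam

cost = 25000.0                    # a hypothetical CABG cost
print(box_cox(cost, 0.0))         # semi-log model:    ln(cost)
print(box_cox(cost, -1.0))        # lambda = -1 model: 1 - 1/cost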
Renormalizability of quasiparton distribution functions
Ishikawa, Tomomi; Ma, Yan-Qing; Qiu, Jian-Wei; ...
2017-11-21
Quasi-parton distribution functions have received a lot of attention in both the perturbative QCD and lattice QCD communities in recent years because they not only carry good information on the parton distribution functions, but also can be evaluated by lattice QCD simulations. However, unlike the parton distribution functions, the quasi-parton distribution functions have perturbative ultraviolet power divergences because they are not defined by twist-2 operators. Here, in this article, we identify all sources of ultraviolet divergences for the quasi-parton distribution functions in coordinate space, and demonstrate that power divergences, as well as all logarithmic divergences, can be renormalized multiplicatively to all orders in QCD perturbation theory.
Intensity-distance attenuation law in the continental Portugal using intensity data points
NASA Astrophysics Data System (ADS)
Le Goff, Boris; Bezzeghoud, Mourad; Borges, José Fernando
2013-04-01
Several attempts have been made to evaluate intensity attenuation with epicentral distance in the Iberian Peninsula [1, 2]. So far, the results are either unsatisfactory or do not use the intensity data points of the available events. We developed a new intensity attenuation law for continental Portugal using macroseismic reports that provide intensity data points, instrumental magnitudes, and instrumental locations. We collected 31 events from the Instituto Portugues do Mar e da Atmosfera (IPMA, Portugal; ex-IM), covering the period between 1909 and 1997, with a largest magnitude of 8.2, close to the African-Eurasian plate boundary. For each event, the intensity data points are plotted versus distance and different trend lines are fitted (linear, exponential, and logarithmic). The best fits are obtained with the logarithmic trend lines. We evaluate a form of the attenuation equation as follows: I = c0(M) + c1(M)·ln(R) (1), where I, M, and R are, respectively, the intensity, the magnitude, and the epicentral distance. To solve this equation, we investigate two methods. The first consists in plotting the slope of the different logarithmic trends versus magnitude, to estimate the parameter c1(M) and to evaluate how the intensity behaves as a function of magnitude; another plot, of the intercepts versus magnitude, allows us to determine the second parameter, c0(M). The second method consists in using inverse theory: from the data, we recover the parameters of the model using a linear inverse matrix. Both parameters, c0(M) and c1(M), are provided with their associated errors. A sensitivity test will be performed, using the macroseismic data, to estimate the resolving power of both methods. This new attenuation law will be used with the Bakun and Wentworth method [3] in order to re-estimate the epicentral region and the magnitude of the 1909 Benavente event. This attenuation law may also be adapted for use in Probabilistic Seismic Hazard Analysis. [1] Lopez Casado, C., Molina Palacios, S., Delgado, J., and Pelaez, J. A., 2000, BSSA, 90(1), pp. 34-47. [2] Sousa, M. L., and Oliveira, C. S., 1997, Natural Hazards, 14, pp. 207-225. [3] Bakun, W. H., and Wentworth, C. M., 1997, BSSA, 87(6), pp. 1502-1521.
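As an illustrative aside (the distances and intensities below are invented, and in the study c0 and c1 additionally depend on magnitude), recovering the two parameters of equation (1) for a single event by the linear-inverse route is a small least-squares problem:

import numpy as np

R = np.array([10.0, 20.0, 40.0, 80.0, 160.0, 320.0])   # epicentral distance, km
I = np.array([7.0, 6.2, 5.5, 4.6, 3.9, 3.1])           # macroseismic intensity

# I = c0 + c1*ln(R), written as the linear system G m = I and solved by least squares.
G = np.column_stack([np.ones_like(R), np.log(R)])
(c0, c1), *_ = np.linalg.lstsq(G, I, rcond=None)
print(c0, c1)        # c1 < 0: intensity attenuates with distance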
Numerical study on the influences of Nanliu River runoff and tides on water age in Lianzhou Bay
NASA Astrophysics Data System (ADS)
Yu, Jing; Zhang, Xueqing; Liu, Jinliang; Liu, Rui; Wang, Xing
2016-09-01
The concept of water age is applied to calculate the timescales of the transport processes of freshwater in Lianzhou Bay, using a model based on ECOMSED. In this study, water age is defined as the time that has elapsed since the water parcel enters the Nanliu River. The results show that the mean age at a specified position and the runoff of the Nanliu River are well correlated and can be approximately expressed by a natural logarithmic function. During the neap tide, it takes 70, 60 and 40 days in the dry, normal and rainy seasons for water to travel from the mouth of the Nanliu River to the northeast of Lianzhou Bay, respectively, which is not beneficial to water exchange in the bay. Tides significantly influence the model results; it takes five less days for the tracer to be transported from the mouth of the Nanliu River to the north of Guantouling during the spring tide than during the neap tide.
A quantitative description of normal AV nodal conduction curve in man.
Teague, S; Collins, S; Wu, D; Denes, P; Rosen, K; Arzbaecher, R
1976-01-01
The AV nodal conduction curve generated by the atrial extrastimulus technique has been described only qualitatively in man, making clinical comparison of known normal curves with those of suspected AV nodal dysfunction difficult. Also, the effects of physiological and pharmacological interventions have not been quantifiable. In 50 patients with normal AV conduction as defined by normal AH (less than 130 ms), normal AV nodal effective and functional refractory periods (less than 380 and less than 500 ms), and absence of demonstrable dual AV nodal pathways, we found that conduction curves (at sinus rhythm or longest paced cycle length) can be described by an exponential equation of the form delta = A·e^(-Bx). In this equation, delta is the increase in AV nodal conduction time of an extrastimulus compared to that of a regular beat, and x is the extrastimulus interval. The natural logarithm of this equation is linear in the semilogarithmic plane, thus permitting the constants A and B to be easily determined by a least-squares regression analysis with a hand calculator.
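As an illustrative aside (the interval/delay pairs are invented), taking the natural logarithm turns the fitted curve into a straight line, so A and B follow from the closed-form least-squares formulas that can indeed be worked through on a hand calculator:

import math

x     = [300, 330, 360, 390, 420, 450]    # extrastimulus interval, ms
delta = [180, 120, 80, 54, 36, 24]        # increase in AV nodal conduction time, ms

# ln(delta) = ln(A) - B*x is linear in x, so fit it with ordinary least squares.
n  = len(x)
y  = [math.log(d) for d in delta]
sx, sy = sum(x), sum(y)
sxx = sum(xi * xi for xi in x)
sxy = sum(xi * yi for xi, yi in zip(x, y))
slope     = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n
A, B = math.exp(intercept), -slope
print(A, B)          # so that delta ≈ A * exp(-B * x)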
Analysis of Alaskan burn severity patterns using remotely sensed data
Duffy, P.A.; Epting, J.; Graham, J.M.; Rupp, T.S.; McGuire, A.D.
2007-01-01
Wildland fire is the dominant large-scale disturbance mechanism in the Alaskan boreal forest, and it strongly influences forest structure and function. In this research, patterns of burn severity in the Alaskan boreal forest are characterised using 24 fires. First, the relationship between burn severity and area burned is quantified using a linear regression. Second, the spatial correlation of burn severity as a function of topography is modelled using a variogram analysis. Finally, the relationship between vegetation type and spatial patterns of burn severity is quantified using linear models where variograms account for spatial correlation. These results show that: 1) average burn severity increases with the natural logarithm of the area of the wildfire, 2) burn severity is more variable in topographically complex landscapes than in flat landscapes, and 3) there is a significant relationship between burn severity and vegetation type in flat landscapes but not in topographically complex landscapes. These results strengthen the argument that differential flammability of vegetation exists in some boreal landscapes of Alaska. Additionally, these results suggest that through feedbacks between vegetation and burn severity, the distribution of forest vegetation through time is likely more stable in flat terrain than it is in areas with more complex topography. © IAWF 2007.
[Quantitative relationships of intra- and interspecific competition in Cryptocarya concinna].
Zhang, Chi; Huang, Zhongliang; Li, Jiong; Shi, Junhui; Li, Lin
2006-01-01
The monsoon evergreen broad-leaved forest (MEBF) in Dinghushan Nature Reserve (DNR) has been considered as a zonal vegetation in lower subtropical China, with a history of more than 400 years. In this paper, the intra- and interspecific competition intensity in Cryptocarya concinna, one of the constructive species in MEBF in DNR, was quantitatively analyzed with the Hegyi single-tree competition index model. The results showed that the intraspecific competition intensity in C. concinna decreased gradually with increasing tree diameter. For C. concinna, its intraspecific competition was weaker than its interspecific competition with Aporosa yunnanensis. The intensity of interspecific competition with C. concinna followed the order of A. yunnanensis > Schima superba > Gironniera subaequalis > Acmena acuminatissima > Castanopsis chinensis > Syzygium rehderianum > Pygeum topengii > Blastus cochinchinensis > Sarcosperma laurinum > Pterospermum lanceaefolium > Cryptocarya chinensis. The relationship between the DBH of the objective tree and the competition intensity between competitive trees and the objective tree, in the whole forest and in the C. concinna population, nearly conformed to a power function, while that between other competitive trees and the objective C. concinna tree conformed to a logarithmic function.
The functional dependence of canopy conductance on water vapor pressure deficit revisited
NASA Astrophysics Data System (ADS)
Fuchs, Marcel; Stanghellini, Cecilia
2018-03-01
Current research seeking to relate ambient water vapor deficit (D) and foliage conductance (g_F) derives a canopy conductance (g_W) from measured transpiration by inverting the coupled transpiration model to yield g_W = m - n ln(D), where m and n are fitting parameters. In contrast, this paper demonstrates that the relation between coupled g_W and D is g_W = AP/D + B, where P is the barometric pressure, A is the radiative term, and B is the convective term coefficient of the Penman-Monteith equation. A and B are functions of g_F and of meteorological parameters but are mathematically independent of D. Keeping A and B constant implies constancy of g_F. With these premises, the derived g_W is a hyperbolic function of D resembling the logarithmic expression, in contradiction with the pre-set constancy of g_F. Calculations with random inputs that ensure independence between g_F and D reproduce published experimental scatter plots that display a dependence between g_W and D in contradiction with the premises. For this reason, the dependence of g_W on D is a computational artifact unrelated to any real effect of ambient humidity on stomatal aperture and closure. Data collected in a maize field confirm the inadequacy of the logarithmic function to quantify the relation between canopy conductance and vapor pressure deficit.
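As an illustrative aside (all values are arbitrary and g_F is constant by construction), generating g_W from the hyperbolic relation above and then fitting the logarithmic form shows how an apparent m - n·ln(D) dependence can emerge with no humidity response at all:

import numpy as np

A, B, P = 0.004, 0.002, 101.3      # fixed radiative and convective terms; pressure in kPa
D  = np.linspace(0.5, 3.5, 50)     # vapour pressure deficit, kPa
gW = A * P / D + B                 # the coupled relation with g_F held constant

slope, intercept = np.polyfit(np.log(D), gW, 1)   # fit gW ≈ m - n*ln(D)
fit = intercept + slope * np.log(D)
r2  = 1.0 - np.sum((gW - fit) ** 2) / np.sum((gW - gW.mean()) ** 2)
print("m =", intercept, " n =", -slope, " R^2 =", r2)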
Calculation of the transverse parton distribution functions at next-to-next-to-leading order
NASA Astrophysics Data System (ADS)
Gehrmann, Thomas; Lübbert, Thomas; Yang, Li Lin
2014-06-01
We describe the perturbative calculation of the transverse parton distribution functions in all partonic channels up to next-to-next-to-leading order based on a gauge invariant operator definition. We demonstrate the cancellation of light-cone divergences and show that universal process-independent transverse parton distribution functions can be obtained through a refactorization. Our results serve as the first explicit higher-order calculation of these functions starting from first principles, and can be used to perform next-to-next-to-next-to-leading logarithmic q_T resummation for a large class of processes at hadron colliders.
Simulations of stretching a flexible polyelectrolyte with varying charge separation
Stevens, Mark J.; Saleh, Omar A.
2016-07-22
We calculated the force-extension curves for a flexible polyelectrolyte chain with varying charge separations by performing Monte Carlo simulations of a 5000 bead chain using a screened Coulomb interaction. At all charge separations, the force-extension curves exhibit a Pincus-like scaling regime at intermediate forces and a logarithmic regime at large forces. As the charge separation increases, the Pincus regime shifts to a larger range of forces and the logarithmic regime starts at larger forces. We also found that the force-extension curve for the corresponding neutral chain has a logarithmic regime. Decreasing the diameter of the beads in the neutral chain simulations removed the logarithmic regime, and the force-extension curve tends to the freely jointed chain limit. In conclusion, this result shows that only excluded volume is required for the high-force logarithmic regime to occur.
NASA Astrophysics Data System (ADS)
Tehsin, Sara; Rehman, Saad; Awan, Ahmad B.; Chaudry, Qaiser; Abbas, Muhammad; Young, Rupert; Asif, Afia
2016-04-01
Sensitivity to the variations in the reference image is a major concern when recognizing target objects. A combinational framework of correlation filters and logarithmic transformation has been previously reported to resolve this issue alongside catering for scale and rotation changes of the object in the presence of distortion and noise. In this paper, we have extended the work to include the influence of different logarithmic bases on the resultant correlation plane. The meaningful changes in correlation parameters along with contraction/expansion in the correlation plane peak have been identified under different scenarios. Based on our research, we propose some specific log bases to be used in logarithmically transformed correlation filters for achieving suitable tolerance to different variations. The study is based upon testing a range of logarithmic bases for different situations and finding an optimal logarithmic base for each particular set of distortions. Our results show improved correlation and target detection accuracies.
Jou, Jerwen
2014-10-01
Subjects performed Sternberg-type memory recognition tasks (Sternberg paradigm) in four experiments. Category-instance names were used as learning and testing materials. Sternberg's original experiments demonstrated a linear relation between reaction time (RT) and memory-set size (MSS). A few later studies found no relation, and other studies found a nonlinear relation (logarithmic) between the two variables. These deviations were used as evidence undermining Sternberg's serial scan theory. This study identified two confounding variables in the fixed-set procedure of the paradigm (where multiple probes are presented at test for a learned memory set) that could generate a MSS RT function that was either flat or logarithmic rather than linearly increasing. These two confounding variables were task-switching cost and repetition priming. The former factor worked against smaller memory sets and in favour of larger sets whereas the latter factor worked in the opposite way. Results demonstrated that a null or a logarithmic RT-to-MSS relation could be the artefact of the combined effects of these two variables. The Sternberg paradigm has been used widely in memory research, and a thorough understanding of the subtle methodological pitfalls is crucial. It is suggested that a varied-set procedure (where only one probe is presented at test for a learned memory set) is a more contamination-free procedure for measuring the MSS effects, and that if a fixed-set procedure is used, it is worthwhile examining the RT function of the very first trials across the MSSs, which are presumably relatively free of contamination by the subsequent trials.
The vibrational properties of Chinese fir wood during moisture sorption process
Jiali Jiang; Jianxiong Lu; Zhiyong Cai
2012-01-01
The vibrational properties of Chinese fir (Cunninghamia lanceolata) wood were investigated in this study as a function of changes in moisture content (MC) and grain direction. The dynamic modulus of elasticity (DMOE) and logarithmic decrement σ were examined using a cantilever beam vibration testing apparatus. It was observed that DMOE and σ of wood varied...
Deriving a Utility Function For the U.S. Economy
1988-04-01
Jorgenson, D.W., L.J. Lau, and T.M. Stoker, "The Transcendental Logarithmic Model of Aggregate Consumer Behavior," in R.L. Baseman and G. Rhodes (eds...Jorgenson, D.W., L.J. Lau, and T.M. Stoker, "Aggregate Consumer Behavior and Individual Welfare," Macro Economic Analysis, eds. D. Currie, R. Nabay, D. Peel
Estimating leaf area and leaf biomass of open-grown deciduous urban trees
David J. Nowak
1996-01-01
Logarithmic regression equations were developed to predict leaf area and leaf biomass for open-grown deciduous urban trees based on stem diameter and crown parameters. Equations based on crown parameters produced more reliable estimates. The equations can be used to help quantify forest structure and functions, particularly in urbanizing and urban/suburban areas.
Dry Weight of Several Piedmont Hardwoods
Bobby G. Blackmon; Charles W. Ralston
1968-01-01
Forty-four sample hardwood trees felled on 24 plots were separated into three above-ground components--stem, branches, and leaves--and weighed for dry matter content. Tree, stand, and site variables were tested for significant relationships with dry weight of tree parts. Weight increase of stems was a logarithmic function of both stem diameter and height, whereas for...
Predicting Body Fat Using Data on the BMI
ERIC Educational Resources Information Center
Mills, Terence C.
2005-01-01
A data set contained in the "Journal of Statistical Education's" data archive provides a way of exploring regression analysis at a variety of teaching levels. An appropriate functional form for the relationship between percentage body fat and the BMI is shown to be the semi-logarithmic, with variation in the BMI accounting for a little over half…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Yunshan; DeVore, Peter T. S.; Jalali, Bahram
Optical computing accelerators help alleviate bandwidth and power consumption bottlenecks in electronics. In this paper, we show an approach to implementing logarithmic-type analog co-processors in silicon photonics and use it to perform the exponentiation operation and the recovery of a signal in the presence of multiplicative distortion. Finally, the function is realized by exploiting nonlinear-absorption-enhanced Raman amplification saturation in a silicon waveguide.
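As a schematic identity consistent with the description above (not the paper's specific photonic circuit), both operations rest on elementary logarithm algebra:

\[
x^{p} = \exp\bigl(p\,\ln x\bigr), \qquad \ln\bigl(s(t)\,d(t)\bigr) = \ln s(t) + \ln d(t),
\]

so a logarithmic front end followed by a gain stage and an exponentiating back end realizes arbitrary powers, while a multiplicative distortion d(t) becomes an additive term in the log domain, where it can be estimated and removed before converting back.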
Leading chiral logarithms for the nucleon mass
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vladimirov, Alexey A.; Bijnens, Johan
2016-01-22
We give a short introduction to the calculation of the leading chiral logarithms, and present the results of a recent evaluation of the LLog series for the nucleon mass within heavy baryon theory. The presented results are the first example of an LLog calculation in nucleon ChPT. We also discuss some regularities observed in the leading logarithmic series for the nucleon mass.
Maximally Informative Hierarchical Representations of High-Dimensional Data
2015-05-11
will be considered discrete but the domain of the X_i's is not restricted. Entropy is defined in the usual way as H(X) ≡ E_X[log 1/p(x)]. We use... natural logarithms so that the unit of information is nats. Higher-order entropies can be constructed in various ways from this standard definition. For... sense, not truly high-dimensional and can be characterized separately. On the other hand, the entropy of X, H(X), can naively be considered the
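As an aside on the definition quoted above, the following minimal Python sketch (ours, not from the report) computes H(X) = E[ln 1/p(x)] for a discrete distribution in nats; the function name and example probabilities are illustrative.

    import numpy as np

    def entropy_nats(p):
        """Shannon entropy H(X) = E[ln 1/p(x)] of a discrete distribution, in nats."""
        p = np.asarray(p, dtype=float)
        p = p[p > 0]                      # outcomes with p(x) = 0 contribute nothing
        return float(np.sum(p * np.log(1.0 / p)))

    print(entropy_nats([0.5, 0.5]))       # fair coin: ln 2 ≈ 0.693 nats
    print(entropy_nats([0.9, 0.1]))       # biased coin: ≈ 0.325 nats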
Interplay between Shear Loading and Structural Aging in a Physical Gelatin Gel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronsin, O.; Caroli, C.; Baumberger, T.
2009-09-25
We show that the aging of the mechanical relaxation of a gelatin gel exhibits the same scaling phenomenology as polymer and colloidal glasses. In addition, gelatin is known to exhibit logarithmic structural aging (stiffening). We find that stress accelerates this process. However, this effect is definitely irreducible to a mere age shift with respect to natural aging. We suggest that it is interpretable in terms of elastically aided elementary (coil->helix) local events whose dynamics gradually slows down as aging increases geometric frustration.
A computer graphics display and data compression technique
NASA Technical Reports Server (NTRS)
Teague, M. J.; Meyer, H. G.; Levenson, L. (Editor)
1974-01-01
The computer program discussed is intended for the graphical presentation of a general dependent variable X that is a function of two independent variables, U and V. The required input to the program is the variation of the dependent variable with one of the independent variables for various fixed values of the other. The computer program is named CRP, and the output is provided by the SD 4060 plotter. Program CRP is an extremely flexible program that offers the user a wide variety of options. The dependent variable may be presented in either a linear or a logarithmic manner. Automatic centering of the plot is provided in the ordinate direction, and the abscissa is scaled automatically for a logarithmic plot. A description of the carpet plot technique is given along with the coordinates system used in the program. Various aspects of the program logic are discussed and detailed documentation of the data card format is presented.
Entanglement entropy of 2D conformal quantum critical points: hearing the shape of a quantum drum.
Fradkin, Eduardo; Moore, Joel E
2006-08-04
The entanglement entropy of a pure quantum state of a bipartite system A ∪ B is defined as the von Neumann entropy of the reduced density matrix obtained by tracing over one of the two parts. In one dimension, the entanglement of critical ground states diverges logarithmically in the subsystem size, with a universal coefficient that for conformally invariant critical points is related to the central charge of the conformal field theory. We find that the entanglement entropy of a standard class of z=2 conformal quantum critical points in two spatial dimensions, in addition to a nonuniversal "area law" contribution linear in the size of the AB boundary, generically has a universal logarithmically divergent correction, which is completely determined by the geometry of the partition and by the central charge of the field theory that describes the critical wave function.
Logarithmic Superdiffusion in Two Dimensional Driven Lattice Gases
NASA Astrophysics Data System (ADS)
Krug, J.; Neiss, R. A.; Schadschneider, A.; Schmidt, J.
2018-03-01
The spreading of density fluctuations in two-dimensional driven diffusive systems is marginally anomalous. Mode coupling theory predicts that the diffusivity in the direction of the drive diverges with time as (ln t)^{2/3} with a prefactor depending on the macroscopic current-density relation and the diffusion tensor of the fluctuating hydrodynamic field equation. Here we present the first numerical verification of this behavior for a particular version of the two-dimensional asymmetric exclusion process. Particles jump strictly asymmetrically along one of the lattice directions and symmetrically along the other, and an anisotropy parameter p governs the ratio between the two rates. Using a novel massively parallel coupling algorithm that strongly reduces the fluctuations in the numerical estimate of the two-point correlation function, we are able to accurately determine the exponent of the logarithmic correction. In addition, the variation of the prefactor with p provides a stringent test of mode coupling theory.
Entanglement entropy of ABJM theory and entropy of topological black hole
NASA Astrophysics Data System (ADS)
Nian, Jun; Zhang, Xinyu
2017-07-01
In this paper we discuss the supersymmetric localization of the 4D N = 2 off-shell gauged supergravity on the background of the AdS4 neutral topological black hole, which is the gravity dual of the ABJM theory defined on the boundary S^1 × H^2. We compute the large-N expansion of the supergravity partition function. The result gives the black hole entropy with the logarithmic correction, which matches the previous result of the entanglement entropy of the ABJM theory up to some stringy effects. Our result is consistent with the previous on-shell one-loop computation of the logarithmic correction to black hole entropy. It provides an explicit example of the identification of the entanglement entropy of the boundary conformal field theory with the bulk black hole entropy beyond the leading order given by the classical Bekenstein-Hawking formula, which consequently tests the AdS/CFT correspondence at the subleading order.
Two Universality Properties Associated with the Monkey Model of Zipf's Law
NASA Astrophysics Data System (ADS)
Perline, Richard; Perline, Ron
2016-03-01
The distribution of word probabilities in the monkey model of Zipf's law is associated with two universality properties: (1) the power law exponent converges strongly to -1 as the alphabet size increases and the letter probabilities are specified as the spacings from a random division of the unit interval for any distribution with a bounded density function on [0,1]; and (2) on a logarithmic scale, the version of the model with a finite word length cutoff and unequal letter probabilities is approximately normally distributed in the part of the distribution away from the tails. The first property is proved using a remarkably general limit theorem for the logarithm of sample spacings from Shao and Hahn, and the second property follows from Anscombe's central limit theorem for a random number of i.i.d. random variables. The finite word length model leads to a hybrid Zipf-lognormal mixture distribution closely related to work in other areas.
Method of detecting system function by measuring frequency response
Morrison, John L.; Morrison, William H.; Christophersen, Jon P.; Motloch, Chester G.
2013-01-08
Methods of rapidly measuring an impedance spectrum of an energy storage device in-situ over a limited number of logarithmically distributed frequencies are described. An energy storage device is excited with a known input signal, and a response is measured to ascertain the impedance spectrum. An excitation signal is a limited time duration sum-of-sines consisting of a select number of frequencies. In one embodiment, magnitude and phase of each frequency of interest within the sum-of-sines is identified when the selected frequencies and sample rate are logarithmic integer steps greater than two. This technique requires a measurement with a duration of one period of the lowest frequency. In another embodiment, where selected frequencies are distributed in octave steps, the impedance spectrum can be determined using a captured time record that is reduced to a half-period of the lowest frequency.
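A sketch of the kind of excitation signal described, in Python, with illustrative numbers (the frequency range, tone count and sample rate are ours, not values from the patent): octave-spaced frequencies are summed over one period of the lowest frequency.

    import numpy as np

    f_min, n_freqs, fs = 0.1, 8, 100.0            # Hz, number of tones, sample rate (assumed)
    freqs = f_min * 2.0 ** np.arange(n_freqs)     # octave steps: 0.1, 0.2, 0.4, ... Hz

    t = np.arange(0.0, 1.0 / f_min, 1.0 / fs)     # one period of the lowest frequency
    excitation = sum(np.sin(2 * np.pi * f * t) for f in freqs) / n_freqs

    print(freqs)
    print(excitation[:5])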
Prediction of Intrinsic Cesium Desorption from Na-Smectite in Mixed Cation Solutions.
Fukushi, Keisuke; Fukiage, Tomo
2015-09-01
Quantitative understanding of the stability of sorbed radionuclides in smectite is necessary to assess the performance of engineering barriers used for nuclear waste disposal. Our previous study demonstrated that the spatial organization of the smectite platelets triggered by the divalent cations led to the apparent fixation of intrinsic Cs in smectite, because some Cs is retained inside the formed tactoids. Natural water is usually a mixture of Na(+) and divalent cations (Ca(2+) and Mg(2+)). This study therefore investigated the desorption behavior of intrinsic Cs in Na-smectite in mixed Na(+)-divalent cation solutions over a wide range of cation concentrations using batch experiments, grain size measurements, and cation exchange modeling (CEM). Results show that increased Na(+) concentrations facilitate Cs desorption because Na(+) serves as the dispersion agent. A linear relation was obtained between the logarithm of the Na(+) fraction and the accessible Cs fraction in smectite. That relation enables the prediction of the accessible Cs fraction as a function of solution cationic compositions. The corrected CEM considering the effects of the spatial organization suggests that the stability of intrinsic Cs in the smectite is governed by the Na(+) concentration, and suggests that it is almost independent of the concentrations of divalent cations in natural water.
Velocity distribution in a turbulent flow near a rough wall
NASA Astrophysics Data System (ADS)
Korsun, A. S.; Pisarevsky, M. I.; Fedoseev, V. N.; Kreps, M. V.
2017-11-01
Velocity distribution in the zone of developed wall turbulence, regardless of the conditions on the wall, is described by the well-known Prandtl logarithmic profile. In this distribution, the constant that determines the value of the velocity is set by the nature of the interaction of the flow with the wall and depends on the viscosity of the fluid, the dynamic velocity, and the parameters of the wall roughness. In extreme cases, depending on the ratio between the thickness of the viscous sublayer and the size of the roughness, the constant takes on a value that does not depend on viscosity, or leads to the relation for a smooth wall. It is essential that this logarithmic profile is the result not only of the Prandtl theory, but can also be derived from general considerations of dimensional analysis, and likewise follows from the condition of local equilibrium of generation and dissipation of turbulent energy in the wall region. This allows us to consider the profile as a universal law of velocity distribution in the wall region of a turbulent flow. Approximating the profile up to the line of maximum velocity, with subsequent integration, makes it possible to obtain the resistance law for channels of simple shape. For channels of complex shape with rough walls, the universal profile can be used to formulate the boundary condition in calculations with turbulence models. This paper presents an empirical model for determining the constant of the universal logarithmic profile. The roughness zone is described by a set of parameters and is treated as a porous structure with variable porosity.
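As a concrete reminder of the profile being discussed, the short Python sketch below evaluates the smooth-wall form of the logarithmic law, u+ = (1/κ) ln y+ + B; the values κ ≈ 0.41 and B ≈ 5.0 are the conventional smooth-wall constants, not numbers taken from this paper, and for rough walls B is replaced by a roughness-dependent constant as the abstract describes.

    import numpy as np

    def u_plus(y_plus, kappa=0.41, B=5.0):
        """Smooth-wall log law: u+ = (1/kappa) * ln(y+) + B."""
        return np.log(y_plus) / kappa + B

    for yp in (30, 100, 300, 1000):
        print(yp, round(u_plus(yp), 2))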
Can power-law scaling and neuronal avalanches arise from stochastic dynamics?
Touboul, Jonathan; Destexhe, Alain
2010-02-11
The presence of self-organized criticality in biology is often evidenced by a power-law scaling of event size distributions, which can be measured by linear regression on logarithmic axes. We show here that such a procedure does not necessarily mean that the system exhibits self-organized criticality. We first provide an analysis of multisite local field potential (LFP) recordings of brain activity and show that event size distributions defined as negative LFP peaks can be close to power-law distributions. However, this result is not robust to change in detection threshold, or when tested using more rigorous statistical analyses such as the Kolmogorov-Smirnov test. Similar power-law scaling is observed for surrogate signals, suggesting that power-law scaling may be a generic property of thresholded stochastic processes. We next investigate this problem analytically, and show that, indeed, stochastic processes can produce spurious power-law scaling without the presence of underlying self-organized criticality. However, this power-law is only apparent in logarithmic representations, and does not survive more rigorous analysis such as the Kolmogorov-Smirnov test. The same analysis was also performed on an artificial network known to display self-organized criticality. In this case, both the graphical representations and the rigorous statistical analysis reveal with no ambiguity that the avalanche size is distributed as a power-law. We conclude that logarithmic representations can lead to spurious power-law scaling induced by the stochastic nature of the phenomenon. This apparent power-law scaling does not constitute a proof of self-organized criticality, which should be demonstrated by more stringent statistical tests.
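To illustrate the methodological point, here is a small Python sketch (ours; the data are synthetic, not the LFP recordings) showing that a naive straight-line fit on logarithmic axes can look reasonable even for data that are not power-law distributed, whereas a Kolmogorov-Smirnov comparison is more informative.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    data = rng.lognormal(mean=0.0, sigma=1.0, size=20000)   # NOT a power law

    # Naive check: linear regression of the histogram on log-log axes (tail only).
    counts, edges = np.histogram(data, bins=np.logspace(0.0, 1.5, 20))
    centers = np.sqrt(edges[:-1] * edges[1:])
    mask = counts > 0
    fit = stats.linregress(np.log10(centers[mask]), np.log10(counts[mask]))
    print(f"log-log slope = {fit.slope:.2f}, r = {fit.rvalue:.3f}")  # can look deceptively straight

    # A Kolmogorov-Smirnov test against the fitted lognormal (illustrative only)
    # makes the true distribution much harder to mistake for a power law.
    print(stats.kstest(data, "lognorm", args=stats.lognorm.fit(data)))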
NASA Astrophysics Data System (ADS)
Neff, Patrizio; Lankeit, Johannes; Ghiba, Ionel-Dumitrel; Martin, Robert; Steigmann, David
2015-08-01
We consider a family of isotropic volumetric-isochoric decoupled strain energies based on the Hencky-logarithmic (true, natural) strain tensor log U, where μ > 0 is the infinitesimal shear modulus, κ is the infinitesimal bulk modulus, λ is the first Lamé constant, two further dimensionless parameters enter the family, F is the gradient of deformation, U is the right stretch tensor and dev log U is the deviatoric part (the projection onto the traceless tensors) of the strain tensor log U. For small elastic strains, the energies reduce to first order to the classical quadratic Hencky energy, which is known to be not rank-one convex. The main result in this paper is that in plane elastostatics the energies of the family are polyconvex for a suitable range of the dimensionless parameters, extending a previous finding on its rank-one convexity. Our method uses a judicious application of Steigmann's polyconvexity criteria based on the representation of the energy in terms of the principal invariants of the stretch tensor U. These energies also satisfy suitable growth and coercivity conditions. We formulate the equilibrium equations, and we prove the existence of minimizers by the direct methods of the calculus of variations.
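For readers who want to see the Hencky strain in action, here is a minimal Python sketch (ours, not from the paper) that computes log U and its deviatoric part from a sample deformation gradient F; the matrix chosen is arbitrary and purely illustrative.

    import numpy as np

    # Sample deformation gradient F (arbitrary invertible matrix, for illustration only).
    F = np.array([[1.2, 0.3],
                  [0.0, 0.9]])

    C = F.T @ F                                # right Cauchy-Green tensor C = F^T F
    w, V = np.linalg.eigh(C)                   # eigendecomposition of the symmetric C
    U = V @ np.diag(np.sqrt(w)) @ V.T          # right stretch tensor U = sqrt(C)
    logU = V @ np.diag(0.5 * np.log(w)) @ V.T  # Hencky (logarithmic) strain tensor log U
    dev_logU = logU - (np.trace(logU) / 2) * np.eye(2)   # deviatoric part (2D)

    print(np.round(logU, 4))
    print(np.round(dev_logU, 4))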
Electronic filters, signal conversion apparatus, hearing aids and methods
NASA Technical Reports Server (NTRS)
Morley, Jr., Robert E. (Inventor); Engebretson, A. Maynard (Inventor); Engel, George L. (Inventor); Sullivan, Thomas J. (Inventor)
1994-01-01
An electronic filter for filtering an electrical signal. Signal processing circuitry therein includes a logarithmic filter having a series of filter stages with inputs and outputs in cascade and respective circuits associated with the filter stages for storing electrical representations of filter parameters. The filter stages include circuits for respectively adding the electrical representations of the filter parameters to the electrical signal to be filtered thereby producing a set of filter sum signals. At least one of the filter stages includes circuitry for producing a filter signal in substantially logarithmic form at its output by combining a filter sum signal for that filter stage with a signal from an output of another filter stage. The signal processing circuitry produces an intermediate output signal, and a multiplexer connected to the signal processing circuit multiplexes the intermediate output signal with the electrical signal to be filtered so that the logarithmic filter operates as both a logarithmic prefilter and a logarithmic postfilter. Other electronic filters, signal conversion apparatus, electroacoustic systems, hearing aids and methods are also disclosed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolodrubetz, Daniel W.; Pietrulewicz, Piotr; Stewart, Iain W.
To predict the jet mass spectrum at a hadron collider it is crucial to account for the resummation of logarithms between the transverse momentum of the jet and its invariant mass m_J. For small jet areas there are additional large logarithms of the jet radius R, which affect the convergence of the perturbative series. We present an analytic framework for exclusive jet production at the LHC which gives a complete description of the jet mass spectrum including realistic jet algorithms and jet vetoes. It factorizes the scales associated with m_J, R, and the jet veto, enabling in addition the systematic resummation of jet radius logarithms in the jet mass spectrum beyond leading logarithmic order. We discuss the factorization formulae for the peak and tail region of the jet mass spectrum and for small and large R, and the relations between the different regimes and how to combine them. Regions of experimental interest are classified which do not involve large nonglobal logarithms. We also present universal results for nonperturbative effects and discuss various jet vetoes.
Regional and directional compliance of the healthy aorta: an ex vivo study in a porcine model.
Krüger, Tobias; Veseli, Kujtim; Lausberg, Henning; Vöhringer, Luise; Schneider, Wilke; Schlensak, Christian
2016-07-01
To gain differential knowledge about the physiological compliance and wall strength of the different regions of the aorta, including the ascending aorta, arch and descending aorta in both the circumferential and longitudinal directions, and to generate a hypothesis on the pathophysiological mechanisms that lead to Type A aortic dissection. Fresh tissue specimens from 22 ex vivo porcine aortas were analysed on a tensile tester. Regional and directional compliance, failure stress and failure strain were recorded. Aortic compliance appeared as a linear function of the natural logarithm (ln) of wall stress. Compliance significantly decreased along the length of the aorta. In the ascending aorta, longitudinal compliance significantly (P = 0.003) exceeded circumferential compliance, and the outer curvature was more compliant than the inner curvature (P = 0.03). In the descending aorta, this relationship is reversed: the circumferential compliance exceeded the longitudinal compliance, and the outer aspect was more compliant (P = 0.003). The median circumferential failure stress of all aortic segments was in the range of 2000-2750 kPa, whereas the longitudinal failure stress in the ascending aorta and the arch had values of 750-1000 kPa, which were significantly lower (P < 0.05). Surprisingly, the longitudinal failure stress of the inner aspect of the descending aorta was extraordinarily high (2000 kPa). Failure strain, similar to compliance, was highest in the ascending aorta and decreased along the aorta. The aorta appears to be a complex organ with distinct regional and directional differences in compliance and wall strength that is designed to effectively absorb the kinetic energy of cardiac systole and to cushion the momentum of systolic impact. Under normotensive conditions and a preconditioned physiological morphology, the aortic wall works in the steep part of the logarithmic strain-stress function; under hypertensive conditions and pathological morphology, the wall reacts in a non-compliant manner. The high longitudinal compliance and low failure stress of the ascending aorta and subsequent pathological changes may be the main determinants of the recurrent patho-anatomy of Type A aortic dissection. © The Author 2016. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
Regional and directional compliance of the healthy aorta: an ex vivo study in a porcine model†
Krüger, Tobias; Veseli, Kujtim; Lausberg, Henning; Vöhringer, Luise; Schneider, Wilke; Schlensak, Christian
2016-01-01
OBJECTIVES To gain differential knowledge about the physiological compliance and wall strength of the different regions of the aorta, including the ascending aorta, arch and descending aorta in both the circumferential and longitudinal directions, and to generate a hypothesis on the pathophysiological mechanisms that lead to Type A aortic dissection. METHODS Fresh tissue specimens from 22 ex vivo porcine aortas were analysed on a tensile tester. Regional and directional compliance, failure stress and failure strain were recorded. RESULTS Aortic compliance appeared as a linear function of the natural logarithm (ln) of wall stress. Compliance significantly decreased along the length of the aorta. In the ascending aorta, longitudinal compliance significantly (P = 0.003) exceeded circumferential compliance, and the outer curvature was more compliant than the inner curvature (P = 0.03). In the descending aorta, this relationship is reversed: the circumferential compliance exceeded the longitudinal compliance, and the outer aspect was more compliant (P = 0.003). The median circumferential failure stress of all aortic segments was in the range of 2000–2750 kPa, whereas the longitudinal failure stress in the ascending aorta and the arch had values of 750–1000 kPa, which were significantly lower (P < 0.05). Surprisingly, the longitudinal failure stress of the inner aspect of the descending aorta was extraordinarily high (2000 kPa). Failure strain, similar to compliance, was highest in the ascending aorta and decreased along the aorta. CONCLUSION The aorta appears to be a complex organ with distinct regional and directional differences in compliance and wall strength that is designed to effectively absorb the kinetic energy of cardiac systole and to cushion the momentum of systolic impact. Under normotensive conditions and a preconditioned physiological morphology, the aortic wall works in the steep part of the logarithmic strain–stress function; under hypertensive conditions and pathological morphology, the wall reacts in a non-compliant manner. The high longitudinal compliance and low failure stress of the ascending aorta and subsequent pathological changes may be the main determinants of the recurrent patho-anatomy of Type A aortic dissection. PMID:26993474
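The reported linearity of compliance in the natural logarithm of wall stress is straightforward to exploit in practice; the Python sketch below fits such a relation to hypothetical digitized points (the numbers are invented for illustration and are not data from this study).

    import numpy as np

    # Hypothetical (wall stress in kPa, compliance in arbitrary units) pairs.
    stress = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
    compliance = np.array([0.80, 0.62, 0.45, 0.27, 0.10])

    # Compliance modelled as a linear function of ln(stress): c = a + b * ln(stress).
    b, a = np.polyfit(np.log(stress), compliance, deg=1)
    print(f"compliance ≈ {a:.3f} + {b:.3f} * ln(stress)")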
Pigmentary Maculopathy Associated with Chronic Exposure to Pentosan Polysulfate Sodium.
Pearce, William A; Chen, Rui; Jain, Nieraj
2018-05-22
To describe the clinical features of a unique pigmentary maculopathy noted in the setting of chronic exposure to pentosan polysulfate sodium (PPS), a therapy for interstitial cystitis (IC). Retrospective case series. Six adult patients evaluated by a single clinician between May 1, 2015, and October 1, 2017. Patients were identified by query of the electronic medical record system. Local records were reviewed, including results of the clinical examination, retinal imaging, and visual function assessment with static perimetry and electroretinography. Molecular testing assessed for known macular dystrophy and mitochondrial cytopathy genotypes. Mean best-corrected visual acuity (BCVA; in logarithm of the minimum angle of resolution units), median cumulative PPS exposure, subjective nature of the associated visual disturbance, qualitative examination and imaging features, and molecular testing results. The median age at presentation was 60 years (range, 37-62 years). All patients received PPS for a diagnosis of IC, with a median cumulative exposure of 2263 g (range, 1314-2774 g), over a median duration of exposure of 186 months (range, 144-240 months). Most patients (4 of 6) reported difficulty reading as the most bothersome symptom. Mean BCVA was 0.1±0.18 logarithm of the minimum angle of resolution. On fundus examination, nearly all eyes showed subtle paracentral hyperpigmentation at the level of the retinal pigment epithelium (RPE) with a surrounding array of vitelliform-like deposits. Four eyes of 2 patients showed paracentral RPE atrophy, and no eyes demonstrated choroidal neovascularization. Multimodal retinal imaging demonstrated abnormality of the RPE generally contained in a well-delineated area in the posterior pole. None of the 4 patients who underwent molecular testing of nuclear DNA returned a pathogenic mutation. Additionally, all 6 patients showed negative results for pathogenic variants in the mitochondrial gene MTTL1. We describe a novel and possibly avoidable maculopathy associated with chronic exposure to PPS. Patients reported symptoms of difficulty reading and prolonged dark adaptation despite generally intact visual acuity and subtle funduscopic findings. Multimodal imaging and functional studies are suggestive of a primary RPE injury. Additional investigation is warranted to explore causality further. Copyright © 2018 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
Soft-Collinear Mode for Jet Rates in Soft-Collinear Effective Theory
Chien, Yang-Ting; Lee, Christopher; Hornig, Andrew
2016-01-29
We propose the addition of a new "soft-collinear" mode to soft collinear effective theory (SCET) below the usual soft scale to factorize and resum logarithms of jet radii R in jet cross sections. We consider exclusive 2-jet cross sections in e+e- collisions with an energy veto Λ on additional jets. The key observation is that there are actually two pairs of energy scales whose ratio is R: the transverse momentum QR of the energetic particles inside jets and their total energy Q, and the transverse momentum ΛR of soft particles that are cut out of the jet cones and their energy Λ. The soft-collinear mode is necessary to factorize and resum logarithms of the latter hierarchy. We show how this factorization occurs in the jet thrust cross section for cone and k_T-type algorithms at O(α_s) and using the thrust cone algorithm at O(α_s^2). We identify the presence of hard-collinear, in-jet soft, global (veto) soft, and soft-collinear modes in the jet thrust cross section. We also observe here that the in-jet soft modes measured with thrust are actually the "csoft" modes of the theory SCET+. We dub the new theory with both csoft and soft-collinear modes "SCET++". We go on to explain the relation between the "unmeasured" jet function appearing in total exclusive jet cross sections and the hard-collinear and csoft functions in measured jet thrust cross sections. We do not resum logs that are non-global in origin, arising from the ratio of the scales of soft radiation whose thrust is measured at Qτ/R and of the soft-collinear radiation at 2ΛR. Their resummation would require the introduction of additional operators beyond those we consider here. The steps we outline here are a necessary part of summing logs of R that are global in nature and have not been factorized and resummed beyond leading-log level previously.
High speed high dynamic range high accuracy measurement system
Deibele, Craig E.; Curry, Douglas E.; Dickson, Richard W.; Xie, Zaipeng
2016-11-29
A measuring system includes an input that emulates a bandpass filter with no signal reflections. A directional coupler connected to the input passes the filtered input to electrically isolated measuring circuits. Each of the measuring circuits includes an amplifier that amplifies the signal through logarithmic functions. The output of the measuring system is an accurate high dynamic range measurement.
Detrended fluctuation analysis of short datasets: An application to fetal cardiac data
NASA Astrophysics Data System (ADS)
Govindan, R. B.; Wilson, J. D.; Preißl, H.; Eswaran, H.; Campbell, J. Q.; Lowery, C. L.
2007-02-01
Using detrended fluctuation analysis (DFA) we perform scaling analysis of short datasets of length 500-1500 data points. We quantify the long range correlation (exponent α) by computing the mean value of the local exponents αL (in the asymptotic regime). The local exponents are obtained as the (numerical) derivative of the logarithm of the fluctuation function F(s) with respect to the logarithm of the scale factor s: αL = d log10 F(s) / d log10 s. These local exponents display huge variations and complicate the correct quantification of the underlying correlations. We propose the use of the phase randomized surrogate (PRS), which preserves the long range correlations of the original data, to minimize the variations in the local exponents. Using the numerically generated uncorrelated and long range correlated data, we show that performing DFA on several realizations of PRS and estimating αL from the averaged fluctuation functions (of all realizations) can minimize the variations in αL. The application of this approach to the fetal cardiac data (RR intervals) is discussed and we show that there is a statistically significant correlation between α and the gestation age.
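The local-exponent definition quoted in the abstract translates directly into code; the following Python sketch (ours) estimates αL = d log10 F(s) / d log10 s by a numerical derivative on a synthetic fluctuation function with a known exponent of 0.8.

    import numpy as np

    def local_exponents(scales, fluct):
        """Local scaling exponents alpha_L = d log10 F(s) / d log10 s (centered differences)."""
        return np.gradient(np.log10(fluct), np.log10(scales))

    s = np.logspace(1, 3, 30)
    rng = np.random.default_rng(1)
    F = s ** 0.8 * np.exp(0.02 * rng.standard_normal(s.size))   # F(s) ~ s^0.8 plus noise
    alpha_L = local_exponents(s, F)
    print(alpha_L.mean())   # scatters around 0.8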
On a new coordinate system with astrophysical application: Spiral coordinates
NASA Astrophysics Data System (ADS)
Campos, L. M. B. C.; Gil, P. J. S.
In this presentation, spiral coordinates are introduced; they are a particular case of conformal coordinates, i.e. orthogonal curvilinear coordinates with equal scale factors along all coordinate axes. The spiral coordinates in the plane have as coordinate curves two families of logarithmic spirals, making constant angles, respectively phi and pi/2 - phi, with all radial lines, where phi is a parameter. They can be obtained from a complex function representing a spiral potential flow, due to the superposition of a source/sink with a vortex; the parameter phi in this case specifies the ratio of the mass flux of the source/sink to the circulation of the vortex. Regardless of hydrodynamical or other interpretations, spiral coordinates are particularly convenient in situations where physical quantities vary only along a logarithmic spiral. The example chosen is the propagation of Alfven waves along a logarithmic spiral, as an approximation to Parker's spiral. The equations of dissipative MHD are written in spiral coordinates and eliminated to obtain the Alfven wave equation in spiral coordinates; the latter is solved exactly in terms of Bessel functions, and the results are analyzed for values of the parameters corresponding to the solar wind.
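For orientation, the coordinate curves described above can be generated with a few lines of Python; the parametrization below (r = r0 · exp(θ/tan φ), our choice of normalization) crosses every radial line at the constant angle φ.

    import numpy as np

    def log_spiral(phi, r0=1.0, n=400):
        """Logarithmic spiral r = r0 * exp(theta / tan(phi)); it meets each radial
        line at the constant angle phi."""
        theta = np.linspace(0.0, 4 * np.pi, n)
        r = r0 * np.exp(theta / np.tan(phi))
        return r * np.cos(theta), r * np.sin(theta)

    x, y = log_spiral(phi=np.radians(80.0))   # phi near 90 deg gives a tightly wound spiral
    print(x[:3], y[:3])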
Analytic Evolution of Singular Distribution Amplitudes in QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tandogan Kunkel, Asli
2014-08-01
Distribution amplitudes (DAs) are the basic functions that contain information about the quark momentum. DAs are necessary to describe hard exclusive processes in quantum chromodynamics. We describe a method of analytic evolution of DAs that have singularities such as nonzero values at the end points of the support region, jumps at some points inside the support region and cusps. We illustrate the method by applying it to the evolution of a flat (constant) DA, antisymmetric flat DA, and then use the method for evolution of the two-photon generalized distribution amplitude. Our approach to DA evolution has advantages over the standard method of expansion in Gegenbauer polynomials [1, 2] and over a straightforward iteration of an initial distribution with evolution kernel. Expansion in Gegenbauer polynomials requires an infinite number of terms in order to accurately reproduce functions in the vicinity of singular points. Straightforward iteration of an initial distribution produces logarithmically divergent terms at each iteration. In our method the logarithmic singularities are summed from the start, which immediately produces a continuous curve. Afterwards, in order to get precise results, only one or two iterations are needed.
Four Theorems on the Psychometric Function
May, Keith A.; Solomon, Joshua A.
2013-01-01
In a 2-alternative forced-choice (2AFC) discrimination task, observers choose which of two stimuli has the higher value. The psychometric function for this task gives the probability of a correct response for a given stimulus difference. This paper proves four theorems about the psychometric function. Assuming the observer applies a transducer and adds noise, Theorem 1 derives a convenient general expression for the psychometric function. Discrimination data are often fitted with a Weibull function. Theorem 2 proves that the Weibull "slope" parameter, β, can be approximated by the product of two factors: the β of the Weibull function that fits best to the cumulative noise distribution, and a factor that depends on the transducer. We derive general expressions for both factors, from which we derive expressions for specific cases. One case that follows naturally from our general analysis is a finding previously reported by Pelli. We also consider two limiting cases. Theorem 3 proves that, as sensitivity improves, 2AFC performance will usually approach that for a linear transducer, whatever the actual transducer; we show that this does not apply at signal levels where the transducer gradient is zero, which explains why it does not apply to contrast detection. Theorem 4 proves that, when the exponent of a power-function transducer approaches zero, 2AFC performance approaches that of a logarithmic transducer. We show that the power-function exponents of 0.4–0.5 fitted to suprathreshold contrast discrimination data are close enough to zero for the fitted psychometric function to be practically indistinguishable from that of a log transducer. Finally, the Weibull β reflects the shape of the noise distribution, and we used our results to assess the recent claim that internal noise has higher kurtosis than a Gaussian. Our analysis of β for contrast discrimination suggests that, if internal noise is stimulus-independent, it has lower kurtosis than a Gaussian. PMID:24124456
Virtual photon structure functions and the parton content of the electron
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drees, M.; Godbole, R.M.
1994-09-01
We point out that in processes involving the parton content of the photon the usual effective photon approximation should be modified. The reason is that the parton content of virtual photons is logarithmically suppressed compared to real photons. We describe this suppression using several simple, physically motivated Ansätze. Although the parton content of the electron in general no longer factorizes into an electron flux function and a photon structure function, it can still be expressed as a single integral. Numerical examples are given for the e+e-…
A CMOS current-mode log(x) and log(1/x) functions generator
NASA Astrophysics Data System (ADS)
Al-Absi, Munir A.; Al-Tamimi, Karama M.
2014-08-01
A novel Complementary Metal Oxide Semiconductor (CMOS) current-mode low-voltage and low-power controllable logarithmic function circuit is presented. The proposed design utilises one Operational Transconductance Amplifier (OTA) and two PMOS transistors biased in weak inversion region. The proposed design provides high dynamic range, controllable amplitude, high accuracy and is insensitive to temperature variations. The circuit operates on a ±0.6 V power supply and consumes 0.3 μW. The functionality of the proposed circuit was verified using HSPICE with 0.35 μm 2P4M CMOS process technology.
[Obtaining a fermented chickpea extract (Cicer arietinum L.) and its use as a milk extensor].
Morales de León, J; Cassís Nosthas, M L; Cecin Salomón, P
2000-06-01
Chickpea (Cicer arietinum L.) is cultivated in the northern part of México and is considered a good source of vegetable protein at low cost (20% average); nevertheless, 80% is used for export and only the remaining 20% is used for animal feeding. The main objective of this study was to obtain a fermented chickpea extract for use as a dairy extensor. Chickpea water absorption kinetics were determined under different temperature conditions; once the conditions were established, the chickpea was ground and fermented in different amounts with its natural flora, L. casei, L. plantarum, and a mixed culture of both microorganisms in logarithmic phase. The results showed that the presence of microorganisms from the chickpea natural flora interfered during the fermentation, so before inoculation it was necessary to treat the chickpea extract (CE) thermally, at a 1:4 dilution, for 20 min at 7.7 kg/cm2 of pressure. A mixed culture of 5% L. casei and 5% L. plantarum inoculated in MRS broth was used to decrease the fermentation time. Its addition in logarithmic phase to the sterile chickpea extract increased lactic acid production and decreased the pH value in 6 h, which was less time than that obtained with either lactobacillus alone. The fermented extract finally obtained presented sensory characteristics similar to those of dairy products. Therefore, chickpea is a good alternative as an extensor for this kind of product.
Maximum-entropy probability distributions under Lp-norm constraints
NASA Technical Reports Server (NTRS)
Dolinar, S.
1991-01-01
Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given L sub p norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the L sub p norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the L sub p norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
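As one familiar special case of the straight-line relationship mentioned above, for the p = 2 (zero-mean) case the maximizer is the Gaussian and the maximum differential entropy is 0.5·ln(2πe·σ²), i.e. linear in ln σ; the short Python check below (ours) shows the entropy rising by exactly ln 2 for each doubling of the L2 norm σ.

    import numpy as np

    def max_diff_entropy_L2(sigma):
        """Maximum differential entropy for a given L2 norm sigma (zero-mean case):
        attained by the Gaussian, h = 0.5 * ln(2*pi*e*sigma**2) = ln(sigma) + const."""
        return 0.5 * np.log(2 * np.pi * np.e * sigma ** 2)

    for s in (0.5, 1.0, 2.0, 4.0):
        print(s, round(max_diff_entropy_L2(s), 4))   # successive values differ by ln 2 ≈ 0.6931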
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mantry, Sonny; Petriello, Frank
We derive a factorization theorem for the Higgs boson transverse momentum (p_T) and rapidity (Y) distributions at hadron colliders, using the soft-collinear effective theory (SCET), for m_h >> p_T >> Λ_QCD, where m_h denotes the Higgs mass. In addition to the factorization of the various scales involved, the perturbative physics at the p_T scale is further factorized into two collinear impact-parameter beam functions (IBFs) and an inverse soft function (ISF). These newly defined functions are of a universal nature for the study of differential distributions at hadron colliders. The additional factorization of the p_T-scale physics simplifies the implementation of higher order radiative corrections in α_s(p_T). We derive formulas for factorization in both momentum and impact parameter space and discuss the relationship between them. Large logarithms of the relevant scales in the problem are summed using the renormalization group equations of the effective theories. Power corrections to the factorization theorem in p_T/m_h and Λ_QCD/p_T can be systematically derived. We perform multiple consistency checks on our factorization theorem including a comparison with known fixed-order QCD results. We compare the SCET factorization theorem with the Collins-Soper-Sterman approach to low-p_T resummation.
Quantum loop corrections of a charged de Sitter black hole
NASA Astrophysics Data System (ADS)
Naji, J.
2018-03-01
A charged black hole in de Sitter (dS) space is considered, and the logarithmically corrected entropy is used to study its thermodynamics. Logarithmic corrections to the entropy come from thermal fluctuations, which play the role of a quantum loop correction. In that case we are able to study the effect of the quantum loop on black hole thermodynamics and statistics. As a black hole is a gravitational object, this helps us to obtain some information about quantum gravity. The first and second laws of thermodynamics are investigated for the logarithmically corrected case, and we find that they are only valid for the charged dS black hole. We show that the black hole phase transition disappears in the presence of the logarithmic correction.
The relationship between perceived discomfort of static posture holding and posture holding time.
Ogutu, Jack; Park, Woojin
2015-01-01
Few studies have investigated the mathematical characteristics of the discomfort-time relationship during prolonged static posture holding (SPH) on an individual basis. Consequently, the discomfort-time relationship is not clearly understood at the individual trial level. The objective of this study was to examine discomfort-time sequence data obtained from a large number of maximum-duration SPH trials to understand the perceived discomfort-posture holding time relationship at the individual SPH trial level. Thirty subjects (15 male, 15 female) participated in this study as paid volunteers. The subjects performed maximum-duration SPH trials employing 12 different whole-body static postures. The hand-held load for all the task trials was a "generic" box weighing 2 kg. Three mathematical functions, that is, linear, logarithmic and power functions, were examined as possible mathematical models for representing individual discomfort-time profiles of SPH trials. Three different time increase patterns (negatively accelerated, linear and positively accelerated) were observed in the discomfort-time sequence data. The power function model with an additive constant term was found to adequately fit most (96.4%) of the observed discomfort-time sequences, and thus was recommended as a general mathematical representation of the perceived discomfort-posture holding time relationship in SPH. The new knowledge on the nature of the discomfort-time relationship in SPH and the power function representation found in this study will facilitate analyzing discomfort-time data of SPH and developing future posture analysis tools for work-related discomfort control.
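The recommended model, a power function with an additive constant, D(t) = a + b·t^c, can be fitted with standard nonlinear least squares; the Python sketch below uses invented discomfort ratings (not data from this study) purely to show the fitting step.

    import numpy as np
    from scipy.optimize import curve_fit

    def discomfort(t, a, b, c):
        """Power function with an additive constant: D(t) = a + b * t**c."""
        return a + b * np.power(t, c)

    t = np.array([10.0, 30.0, 60.0, 120.0, 240.0, 480.0])   # holding time, s (hypothetical)
    d = np.array([1.0, 2.1, 3.0, 4.4, 6.3, 9.0])            # discomfort ratings (hypothetical)

    params, _ = curve_fit(discomfort, t, d, p0=(0.5, 0.1, 0.7), maxfev=10000)
    print("a, b, c =", np.round(params, 3))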
NASA Technical Reports Server (NTRS)
Van De Griend, A. A.; Owe, M.
1993-01-01
The spatial variation of both the thermal emissivity (8-14 microns) and Normalized Difference Vegetation Index (NDVI) was measured for a series of natural surfaces within a savanna environment in Botswana. The measurements were performed with an emissivity-box and with a combined red and near-IR radiometer, with spectral bands corresponding to NOAA/AVHRR. It was found that thermal emissivity was highly correlated with NDVI after logarithmic transformation, with a correlation coefficient of R = 0.94. This empirical relationship is of potential use for energy balance studies using thermal IR remote sensing. The relationship was used in combination with AVHRR (GAC), AVHRR (LAC), and Landsat (TM) data to demonstrate and compare the spatial variability of various spatial scales.
Fructose: Pure, White, and Deadly? Fructose, by Any Other Name, Is a Health Hazard
Bray, George A.
2010-01-01
The worldwide consumption of sucrose, and thus fructose, has risen logarithmically since 1800. Many concerns about the health hazards of calorie-sweetened beverages, including soft drinks and fruit drinks and the fructose they provide, have been voiced over the past 10 years. These concerns are related to higher energy intake, risk of obesity, risk of diabetes, risk of cardiovascular disease, risk of gout in men, and risk of metabolic syndrome. Fructose appears to be responsible for most of the metabolic risks, including high production of lipids, increased thermogenesis, and higher blood pressure associated with sugar or high fructose corn syrup. Some claim that sugar is natural, but natural does not assure safety. PMID:20663467
On the geometry of mixed states and the Fisher information tensor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Contreras, I., E-mail: icontrer@illinois.edu; Ercolessi, E., E-mail: ercolessi@bo.infn.it; Schiavina, M., E-mail: michele.schiavina@math.uzh.ch
2016-06-15
In this paper, we will review the co-adjoint orbit formulation of finite dimensional quantum mechanics, and in this framework, we will interpret the notion of quantum Fisher information index (and metric). Following previous work by some of the authors, who introduced the definition of the Fisher information tensor, we will show how its antisymmetric part is the pullback of the natural Kostant–Kirillov–Souriau symplectic form along some natural diffeomorphism. In order to do this, we will need to understand the symmetric logarithmic derivative as a proper 1-form, settling the issues about its very definition and explicit computation. Moreover, the fibration of co-adjoint orbits, seen as spaces of mixed states, is also discussed.
Fructose: pure, white, and deadly? Fructose, by any other name, is a health hazard.
Bray, George A
2010-07-01
The worldwide consumption of sucrose, and thus fructose, has risen logarithmically since 1800. Many concerns about the health hazards of calorie-sweetened beverages, including soft drinks and fruit drinks and the fructose they provide, have been voiced over the past 10 years. These concerns are related to higher energy intake, risk of obesity, risk of diabetes, risk of cardiovascular disease, risk of gout in men, and risk of metabolic syndrome. Fructose appears to be responsible for most of the metabolic risks, including high production of lipids, increased thermogenesis, and higher blood pressure associated with sugar or high fructose corn syrup. Some claim that sugar is natural, but natural does not assure safety. 2010 Diabetes Technology Society.
[Spectral reflectance characteristics and modeling of typical Takyr Solonetzs water content].
Zhang, Jun-hua; Jia, Ke-li
2015-03-01
Based on the analysis of the spectral reflectance of the typical Takyr Solonetzs soil in Ningxia, the relationship between soil water content and spectral reflectance was determined, and a quantitative model for the prediction of soil water content was constructed. The results showed that soil spectral reflectance decreased with increasing soil water content when it was below the water holding capacity, but increased with increasing soil water content when it was higher than the water holding capacity. Soil water content presented a significantly negative correlation with the original reflectance (r), smoothed reflectance (R) and logarithm of reflectance (lgR), and a positive correlation with the reciprocal of R (1/R) and the logarithm of the reciprocal [lg(1/R)]. The correlation coefficient of soil water content with R over the whole wavelength range was 0.0013 and 0.0397 higher than those with r and lgR, respectively. The average correlation coefficient of soil water content with 1/R and lg(1/R) at wavelengths of 950-1000 nm was 0.2350 higher than that at 400-950 nm. The relationships of soil water content with the first derivative (R'), the first derivative of the logarithm [(lgR)'] and the first derivative of the logarithm of the reciprocal {[lg(1/R)]'} were unstable. Based on the coefficients of r, lg(1/R), R' and (lgR)', different regression models were established to predict soil water content, with coefficients of determination of 0.7610, 0.8184, 0.8524 and 0.8255, respectively. The coefficient of determination for the power function model of R' reached 0.9447, while the fitting degree between the predicted values based on this model and the on-site measured values was 0.8279. The model of R' had the highest fitting accuracy, while that of r had the lowest. The results could provide a scientific basis for soil water content prediction and field irrigation in the Takyr Solonetzs region.
Logarithmic spiral trajectories generated by Solar sails
NASA Astrophysics Data System (ADS)
Bassetto, Marco; Niccolai, Lorenzo; Quarta, Alessandro A.; Mengali, Giovanni
2018-02-01
Analytic solutions to continuous thrust-propelled trajectories are available in a few cases only. An interesting case is offered by the logarithmic spiral, that is, a trajectory characterized by a constant flight path angle and a fixed thrust vector direction in an orbital reference frame. The logarithmic spiral is important from a practical point of view, because it may be passively maintained by a Solar sail-based spacecraft. The aim of this paper is to provide a systematic study concerning the possibility of inserting a Solar sail-based spacecraft into a heliocentric logarithmic spiral trajectory without using any impulsive maneuver. The required conditions to be met by the sail in terms of attitude angle, propulsive performance, parking orbit characteristics, and initial position are thoroughly investigated. The closed-form variations of the osculating orbital parameters are analyzed, and the obtained analytical results are used for investigating the phasing maneuver of a Solar sail along an elliptic heliocentric orbit. In this mission scenario, the phasing orbit is composed of two symmetric logarithmic spiral trajectories connected with a coasting arc.
Electronic filters, repeated signal charge conversion apparatus, hearing aids and methods
NASA Technical Reports Server (NTRS)
Morley, Jr., Robert E. (Inventor); Engebretson, A. Maynard (Inventor); Engel, George L. (Inventor); Sullivan, Thomas J. (Inventor)
1993-01-01
An electronic filter for filtering an electrical signal. Signal processing circuitry therein includes a logarithmic filter having a series of filter stages with inputs and outputs in cascade and respective circuits associated with the filter stages for storing electrical representations of filter parameters. The filter stages include circuits for respectively adding the electrical representations of the filter parameters to the electrical signal to be filtered thereby producing a set of filter sum signals. At least one of the filter stages includes circuitry for producing a filter signal in substantially logarithmic form at its output by combining a filter sum signal for that filter stage with a signal from an output of another filter stage. The signal processing circuitry produces an intermediate output signal, and a multiplexer connected to the signal processing circuit multiplexes the intermediate output signal with the electrical signal to be filtered so that the logarithmic filter operates as both a logarithmic prefilter and a logarithmic postfilter. Other electronic filters, signal conversion apparatus, electroacoustic systems, hearing aids and methods are also disclosed.
Space Mathematics: A Resource for Secondary School Teachers
NASA Technical Reports Server (NTRS)
Kastner, Bernice
1985-01-01
A collection of mathematical problems related to NASA space science projects is presented. In developing the examples and problems, attention was given to preserving the authenticity and significance of the original setting while keeping the level of mathematics within the secondary school curriculum. Computation and measurement, algebra, geometry, probability and statistics, exponential and logarithmic functions, trigonometry, matrix algebra, conic sections, and calculus are among the areas addressed.
Analog optical computing primitives in silicon photonics
Jiang, Yunshan; DeVore, Peter T. S.; Jalali, Bahram
2016-03-15
Optical computing accelerators help alleviate bandwidth and power consumption bottlenecks in electronics. In this paper, we show an approach to implementing logarithmic-type analog co-processors in silicon photonics and use it to perform the exponentiation operation and the recovery of a signal in the presence of multiplicative distortion. Finally, the function is realized by exploiting nonlinear-absorption-enhanced Raman amplification saturation in a silicon waveguide.
The critical wave speed for the Fisher Kolmogorov Petrowskii Piscounov equation with cut-off
NASA Astrophysics Data System (ADS)
Dumortier, Freddy; Popovic, Nikola; Kaper, Tasso J.
2007-04-01
The Fisher-Kolmogorov-Petrowskii-Piscounov (FKPP) equation with cut-off was introduced in (Brunet and Derrida 1997 Shift in the velocity of a front due to a cut-off Phys. Rev. E 56 2597-604) to model N-particle systems in which concentrations less than ε = 1/N are not attainable. It was conjectured that the cut-off function, which sets the reaction terms to zero if the concentration is below the small threshold ε, introduces a substantial shift in the propagation speed of the corresponding travelling waves. In this paper, we prove the conjecture of Brunet and Derrida, showing that the speed of propagation is given by c_crit(ε) = 2 − π²/(ln ε)² + O((ln ε)^(−3)), as ε → 0, for a large class of cut-off functions. Moreover, we extend this result to a more general family of scalar reaction-diffusion equations with cut-off. The main mathematical techniques used in our proof are geometric singular perturbation theory and the blow-up method, which lead naturally to the identification of the reasons for the logarithmic dependence of c_crit on ε as well as for the universality of the corresponding leading-order coefficient (π²).
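The leading-order behaviour of the critical speed quoted above is easy to tabulate; the Python sketch below (ours) evaluates the Brunet-Derrida correction c_crit(ε) ≈ 2 − π²/(ln ε)² for a few cut-off values.

    import numpy as np

    def c_crit(eps):
        """Leading-order critical speed with cut-off: c ≈ 2 - pi**2 / (ln eps)**2."""
        return 2.0 - np.pi ** 2 / np.log(eps) ** 2

    for eps in (1e-3, 1e-6, 1e-9, 1e-12):
        print(f"eps = {eps:g}:  c_crit ≈ {c_crit(eps):.4f}")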
Kurozumi, Akira; Okada, Yosuke; Arao, Tadashi; Tanaka, Yoshiya
Objective Visceral fat obesity and metabolic syndrome correlate with atherosclerosis in part due to insulin resistance and various other factors. The aim of this study was to determine the relationship between vascular endothelial dysfunction and excess visceral adipose tissue (VAT) in Japanese patients with type 2 diabetes mellitus (T2DM). Methods In 71 T2DM patients, the reactive hyperemia index (RHI) was measured using an Endo-PAT 2000, and VAT and subcutaneous adipose tissue (SAT) were measured via CT. We also measured various metabolic markers, including high-molecular-weight adiponectin (HMW-AN). Results VAT correlated negatively with the natural logarithm of RHI (L_RHI), the primary endpoint (p=0.042, r=-0.242). L_RHI did not correlate with SAT, VAT/SAT, abdominal circumference, homeostasis model assessment for insulin resistance, urinary C-peptide reactivity, HMW-AN, or alanine amino transferase, the secondary endpoints. A linear multivariate analysis via the forced entry method using age, sex, VAT, and smoking history as independent variables and L_RHI as the dependent variable revealed a lack of any determinants of L_RHI. Conclusion Excess VAT worsens the vascular endothelial function, represented by RHI which was analyzed using Endo-PAT, in Japanese patients with T2DM.
NASA Astrophysics Data System (ADS)
Tripathy, Mukta; Schweizer, Kenneth S.
2011-04-01
In paper II of this series we apply the center-of-mass version of Nonlinear Langevin Equation theory to study how short-range attractive interactions influence the elastic shear modulus, transient localization length, activated dynamics, and kinetic arrest of a variety of nonspherical particle dense fluids (and the spherical analog) as a function of volume fraction and attraction strength. The activation barrier (roughly the natural logarithm of the dimensionless relaxation time) is predicted to be a rich function of particle shape, volume fraction, and attraction strength, and the dynamic fragility varies significantly with particle shape. At fixed volume fraction, the barrier grows in a parabolic manner with inverse temperature nondimensionalized by an onset value, analogous to what has been established for thermal glass-forming liquids. Kinetic arrest boundaries lie at significantly higher volume fractions and attraction strengths relative to their dynamic crossover analogs, but their particle shape dependence remains the same. A limited universality of barrier heights is found based on the concept of an effective mean-square confining force. The mean hopping time and self-diffusion constant in the attractive glass region of the nonequilibrium phase diagram are predicted to vary nonmonotonically with attraction strength or inverse temperature, qualitatively consistent with recent computer simulations and colloid experiments.
He, Yue; Wu, Yu-Mei; Zhao, Qun; Wang, Tong; Song, Fang; Zhu, Li
2014-02-01
To investigate the relationship between cervical intraepithelial neoplasia (CIN) and high-risk human papilloma virus (HR-HPV) during pregnancy and postpartum in China. In this prospective case-control study, 168 pregnant women with CIN and cervicitis were diagnosed by colposcopic cervical biopsy. All the cases underwent hybrid capture assay version II (HCII) to detect HR-HPV DNA load amounts, and the tests were completed 3-6 months after childbirth. During pregnancy: as the CIN grade increased, the HR-HPV infection rates increased (P = 0.002), but HR-HPV DNA load amounts (in logarithms) did not change obviously (P = 0.719). At 3-6 months postpartum: as the CIN grade increased, the natural negative-conversion rate of HR-HPV decreased (P = 0.000), while the amount of HR-HPV DNA (in logarithms) increased (P = 0.036); in particular, the amount of HR-HPV DNA in pregnant women with CIN III was significantly higher than in the other grades. Comparing pregnancy with 3-6 months postpartum: the amount of HR-HPV DNA (in logarithms) during pregnancy was higher than that at 3-6 months postpartum for the same grade of CIN. The findings emphasize the importance of undergoing the HCII test 3-6 months postpartum. It should be noted that HR-HPV may turn negative 3-6 months after childbirth in pregnancies with CIN III. Further treatment of pregnancies with CIN should be considered according to the CIN grade diagnosed by colposcopic cervical biopsy 3-6 months after birth, and not according to the persistence of HR-HPV during pregnancy. © 2013 The Authors. Journal of Obstetrics and Gynaecology Research © 2013 Japan Society of Obstetrics and Gynecology.
NASA Astrophysics Data System (ADS)
Shintani, Masaru; Umeno, Ken
2018-04-01
The power law is present ubiquitously in nature and in our societies. Therefore, it is important to investigate the characteristics of power laws in the current era of big data. In this paper we prove that the superposition of non-identical stochastic processes with power laws converges in density to a unique stable distribution. This property can be used to explain the universality of stable laws: the sums of the logarithmic returns of non-identical stock price fluctuations follow stable distributions.
Strongly localized image states of spherical graphitic particles.
Gumbs, Godfrey; Balassis, Antonios; Iurov, Andrii; Fekete, Paula
2014-01-01
We investigate the localization of charged particles by the image potential of spherical shells, such as fullerene buckyballs. These spherical image states exist within surface potentials formed by the competition between the attractive image potential and the repulsive centripetal force arising from the angular motion. The image potential has a power-law rather than a logarithmic behavior. This leads to fundamental differences in the nature of the effective potential for the two geometries. Our calculations show that the captured charge is more strongly localized close to the surface for fullerenes than for cylindrical nanotubes.
Compact exponential product formulas and operator functional derivative
NASA Astrophysics Data System (ADS)
Suzuki, Masuo
1997-02-01
A new scheme for deriving compact expressions of the logarithm of the exponential product is proposed and it is applied to several exponential product formulas. A generalization of the Dynkin-Specht-Wever (DSW) theorem on free Lie elements is given, and it is used to study the relation between the traditional method (based on the DSW theorem) and the present new scheme. The concept of the operator functional derivative is also proposed, and it is applied to ordered exponentials, such as time-evolution operators for time-dependent Hamiltonians.
A class of nonideal solutions. 1: Definition and properties
NASA Technical Reports Server (NTRS)
Zeleznik, F. J.
1983-01-01
A class of nonideal solutions is defined by constructing a function to represent the composition dependence of thermodynamic properties for members of the class, and some properties of these solutions are studied. The constructed function has several useful features: (1) its parameters occur linearly; (2) it contains a logarithmic singularity in the dilute solution region and contains ideal solutions and regular solutions as special cases; and (3) it is applicable to N-ary systems and reduces to M-ary systems (M ≤ N) in a form-invariant manner.
Transistor circuit increases range of logarithmic current amplifier
NASA Technical Reports Server (NTRS)
Gilmour, G.
1966-01-01
Circuit increases the range of a logarithmic current amplifier by combining a commercially available amplifier with a silicon epitaxial transistor. A temperature compensating network is provided for the transistor.
Goode, D.J.; Appel, C.A.
1992-01-01
More accurate alternatives to the widely used harmonic mean interblock transmissivity are proposed for block-centered finite-difference models of ground-water flow in unconfined aquifers and in aquifers having smoothly varying transmissivity. The harmonic mean is the exact interblock transmissivity for steady-state one-dimensional flow with no recharge if the transmissivity is assumed to be spatially uniform over each finite-difference block, changing abruptly at the block interface. However, the harmonic mean may be inferior to other means if transmissivity varies in a continuous or smooth manner between nodes. Alternative interblock transmissivity functions are analytically derived for the case of steady-state one-dimensional flow with no recharge. The second author has previously derived the exact interblock transmissivity, the logarithmic mean, for one-dimensional flow when transmissivity is a linear function of distance in the direction of flow. We show that the logarithmic mean transmissivity is also exact for uniform flow parallel to the direction of changing transmissivity in a two- or three-dimensional model, regardless of grid orientation relative to the flow vector. For the case of horizontal flow in a homogeneous unconfined or water-table aquifer with a horizontal bottom and with areally distributed recharge, the exact interblock transmissivity is the unweighted arithmetic mean of transmissivity at the nodes. This mean also exhibits no grid-orientation effect for unidirectional flow in a two-dimensional model. For horizontal flow in an unconfined aquifer with no recharge, where hydraulic conductivity is a linear function of distance in the direction of flow, the exact interblock transmissivity is the product of the arithmetic mean saturated thickness and the logarithmic mean hydraulic conductivity. For several hypothetical two- and three-dimensional cases with smoothly varying transmissivity or hydraulic conductivity, the harmonic mean is shown to yield the least accurate solution to the flow equation among the alternatives considered. Application of the alternative interblock transmissivities to a regional aquifer system model indicates that the changes in computed heads and fluxes are typically small, relative to model calibration error. For this example, the use of alternative interblock transmissivities resulted in an increase in computational effort of less than 3 percent. Numerical algorithms to compute alternative interblock transmissivity functions in a modular three-dimensional flow model are presented and documented.
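For readers who want to experiment with the interblock means discussed above, the sketch below contrasts the harmonic, logarithmic and arithmetic means of two nodal transmissivities; the function names and numbers are illustrative and not taken from any particular model code.

```python
import numpy as np

def harmonic_mean(t1, t2):
    # Exact for block-wise uniform transmissivity changing abruptly at the interface
    return 2.0 * t1 * t2 / (t1 + t2)

def logarithmic_mean(t1, t2):
    # Exact when transmissivity varies linearly between nodes
    t1, t2 = float(t1), float(t2)
    return t1 if np.isclose(t1, t2) else (t2 - t1) / np.log(t2 / t1)

def arithmetic_mean(t1, t2):
    # Exact for a homogeneous water-table aquifer with areally distributed recharge
    return 0.5 * (t1 + t2)

t1, t2 = 10.0, 1000.0   # hypothetical nodal transmissivities (m^2/day)
print("harmonic   :", harmonic_mean(t1, t2))
print("logarithmic:", logarithmic_mean(t1, t2))
print("arithmetic :", arithmetic_mean(t1, t2))
```

For strongly contrasting nodal values the three means differ by an order of magnitude, which is why the choice matters when transmissivity varies smoothly.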
Logarithmic corrections to black hole entropy from Kerr/CFT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pathak, Abhishek; Porfyriadis, Achilleas P.; Strominger, Andrew
2017-04-14
It has been shown by A. Sen that logarithmic corrections to the black hole area-entropy law are entirely determined macroscopically from the massless particle spectrum. They therefore serve as powerful consistency checks on any proposed enumeration of quantum black hole microstates. Furthermore, Sen's results include a macroscopic computation of the logarithmic corrections for a five-dimensional near extremal Kerr-Newman black hole. We compute these corrections microscopically using a stringy embedding of the Kerr/CFT correspondence and find perfect agreement.
Wen, Cheng; Dallimer, Martin; Carver, Steve; Ziv, Guy
2018-05-06
Despite their great potential for mitigating carbon emissions, wind farm developments are often opposed by local communities due to their visual impact on the landscape. A growing number of studies have applied nonmarket valuation methods such as Choice Experiments (CE) to value this visual impact by eliciting respondents' willingness to pay (WTP) or willingness to accept (WTA) for hypothetical wind farms through survey questions. Several meta-analyses in the literature synthesize results from different valuation studies, but they have various limitations related to the use of the prevailing multivariate meta-regression analysis. In this paper, we propose a new meta-analysis method to establish general functions for the relationships between the estimated WTP or WTA and three wind farm attributes, namely the distance to residential/coastal areas, the number of turbines and turbine height. This method involves establishing WTA or WTP functions for individual studies, fitting the average derivative functions and deriving the general integral functions of WTP or WTA against wind farm attributes. Results indicate that respondents in different studies consistently showed increasing WTP for moving wind farms to greater distances, which can be fitted by non-linear (natural logarithm) functions. However, divergent preferences for the number of turbines and turbine height were found in different studies. We argue that the new analysis method proposed in this paper is an alternative to the mainstream multivariate meta-regression analysis for synthesizing CE studies, and that the general integral functions of WTP or WTA against wind farm attributes are useful for future spatial modelling and benefit transfer studies. We also suggest that future multivariate meta-analyses should include non-linear components in the regression functions. Copyright © 2018. Published by Elsevier B.V.
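As a hedged illustration of the kind of natural-logarithm WTP-distance function reported above (the data points below are invented, not from the paper), one could fit WTP = a + b·ln(distance) to study-level estimates:

```python
import numpy as np

# Hypothetical study-level estimates: distance to residential areas (km)
# and mean WTP for moving the wind farm to that distance (currency units).
distance = np.array([1.0, 2.0, 4.0, 8.0, 12.0, 16.0])
wtp      = np.array([5.0, 9.2, 13.5, 18.1, 20.4, 22.0])

# Least-squares fit of WTP = a + b * ln(distance)
b, a = np.polyfit(np.log(distance), wtp, 1)
print(f"WTP ~ {a:.2f} + {b:.2f} * ln(distance)")
```

The logarithmic form captures the typical pattern of a WTP that keeps rising with distance but at a diminishing rate.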
Stability of Local Quantum Dissipative Systems
NASA Astrophysics Data System (ADS)
Cubitt, Toby S.; Lucia, Angelo; Michalakis, Spyridon; Perez-Garcia, David
2015-08-01
Open quantum systems weakly coupled to the environment are modeled by completely positive, trace preserving semigroups of linear maps. The generators of such evolutions are called Lindbladians. In the setting of quantum many-body systems on a lattice it is natural to consider Lindbladians that decompose into a sum of local interactions with decreasing strength with respect to the size of their support. For both practical and theoretical reasons, it is crucial to estimate the impact that perturbations in the generating Lindbladian, arising as noise or errors, can have on the evolution. These local perturbations are potentially unbounded, but constrained to respect the underlying lattice structure. We show that even for polynomially decaying errors in the Lindbladian, local observables and correlation functions are stable if the unperturbed Lindbladian has a unique fixed point and a mixing time that scales logarithmically with the system size. The proof relies on Lieb-Robinson bounds, which describe a finite group velocity for propagation of information in local systems. As a main example, we prove that classical Glauber dynamics is stable under local perturbations, including perturbations in the transition rates, which may not preserve detailed balance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Power, U.F.; Collins, J.K.
1989-06-01
The elimination of sewage effluent-associated poliovirus, Escherichia coli, and a 22-nm icosahedral coliphage by the common mussel, Mytilus edulis, was studied. Both laboratory- and commercial-scale recirculating UV depuration systems were used in this study. In the laboratory system, the logarithms of the poliovirus, E. coli, and coliphage levels were reduced by 1.86, 2.9, and 2.16, respectively, within 52 h of depuration. The relative patterns and rates of elimination of the three organisms suggest that they are eliminated from mussels by different mechanisms during depuration under suitable conditions. Poliovirus was not included in experiments undertaken in the commercial-scale depuration system. The differences in the relative rates and patterns of elimination were maintained for E. coli and coliphage in this system, with the logarithm of the E. coli levels being reduced by 3.18 and the logarithm of the coliphage levels being reduced by 0.87. The results from both depuration systems suggest that E. coli is an inappropriate indicator of the efficiency of virus elimination during depuration. The coliphage used appears to be a more representative indicator. Depuration under stressful conditions appeared to have a negligible effect on poliovirus and coliphage elimination rates from mussels. However, the rate and pattern of E. coli elimination were dramatically affected by these conditions. Therefore, monitoring E. coli counts might prove useful in ensuring that mussels are functioning well during depuration.
Claessens, T E; Georgakopoulos, D; Afanasyeva, M; Vermeersch, S J; Millar, H D; Stergiopulos, N; Westerhof, N; Verdonck, P R; Segers, P
2006-04-01
The linear time-varying elastance theory is frequently used to describe the change in ventricular stiffness during the cardiac cycle. The concept assumes that all isochrones (i.e., curves that connect pressure-volume data occurring at the same time) are linear and have a common volume intercept. Of specific interest is the steepest isochrone, the end-systolic pressure-volume relationship (ESPVR), of which the slope serves as an index for cardiac contractile function. Pressure-volume measurements, achieved with a combined pressure-conductance catheter in the left ventricle of 13 open-chest anesthetized mice, showed a marked curvilinearity of the isochrones. We therefore analyzed the shape of the isochrones by using six regression algorithms (two linear, two quadratic, and two logarithmic, each with a fixed or time-varying intercept) and discussed the consequences for the elastance concept. Our main observations were 1) the volume intercept varies considerably with time; 2) isochrones are equally well described by using quadratic or logarithmic regression; 3) linear regression with a fixed intercept shows poor correlation (R^2 < 0.75) during isovolumic relaxation and early filling; and 4) logarithmic regression is superior in estimating the fixed volume intercept of the ESPVR. In conclusion, the linear time-varying elastance fails to provide a sufficiently robust model to account for changes in pressure and volume during the cardiac cycle in the mouse ventricle. A new framework accounting for the nonlinear shape of the isochrones needs to be developed.
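To make the comparison of isochrone shapes concrete, here is a minimal sketch (synthetic points, not the mouse measurements) contrasting a logarithmic fit P = a + b·ln(V) with a linear fit with free intercept:

```python
import numpy as np

# Synthetic pressure-volume points along one isochrone (arbitrary units)
V = np.array([20.0, 25.0, 30.0, 35.0, 40.0, 45.0])
P = 12.0 + 30.0 * np.log(V) + np.random.default_rng(0).normal(0, 1.0, V.size)

# Logarithmic regression: P = a + b * ln(V)
b_log, a_log = np.polyfit(np.log(V), P, 1)
# Linear regression with free intercept: P = a + b * V
b_lin, a_lin = np.polyfit(V, P, 1)

for name, a, b, x in [("log", a_log, b_log, np.log(V)), ("linear", a_lin, b_lin, V)]:
    resid = P - (a + b * x)
    r2 = 1 - resid.var() / P.var()
    print(f"{name:6s}: a = {a:7.2f}, b = {b:6.2f}, R^2 = {r2:.4f}")
```

The same comparison can be run per time point to reproduce the kind of isochrone-by-isochrone goodness-of-fit analysis described above.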
Wetting in a phase separating polymer blend film: quench depth dependence
Geoghegan; Ermer; Jungst; Krausch; Brenn
2000-07-01
We have used 3He nuclear reaction analysis to measure the growth of the wetting layer as a function of immiscibility (quench depth) in blends of deuterated polystyrene and poly(alpha-methylstyrene) undergoing surface-directed spinodal decomposition. We are able to identify three different laws for the surface layer growth with time t. For the deepest quenches, the forces driving phase separation dominate (high thermal noise) and the surface layer grows with a t^(1/3) coarsening behavior. For shallower quenches, a logarithmic behavior is observed, indicative of a low noise system. The crossover from logarithmic growth to t^(1/3) behavior is close to where a wetting transition should occur. We also discuss the possibility of a "plating transition" extending complete wetting to deeper quenches by comparing the surface field with thermal noise. For the shallowest quench, a critical blend exhibits a t^(1/2) behavior. We believe this surface layer growth is driven by the curvature of domains at the surface and shows how the wetting layer forms in the absence of thermal noise. This suggestion is reinforced by a slower growth at later times, indicating that the surface domains have coalesced. Atomic force microscopy measurements in each of the different regimes further support the above. The surface in the region of t^(1/3) growth is initially somewhat rougher than that in the regime of logarithmic growth, indicating the existence of droplets at the surface.
Hydrodynamics of confined colloidal fluids in two dimensions
NASA Astrophysics Data System (ADS)
Sané, Jimaan; Padding, Johan T.; Louis, Ard A.
2009-05-01
We apply a hybrid molecular dynamics and mesoscopic simulation technique to study the dynamics of two-dimensional colloidal disks in confined geometries. We calculate the velocity autocorrelation functions and observe the predicted t^(-1) long-time hydrodynamic tail that characterizes unconfined fluids, as well as more complex oscillating behavior and negative tails for strongly confined geometries. Because the t^(-1) tail of the velocity autocorrelation function is cut off for longer times in finite systems, the related diffusion coefficient does not diverge but instead depends logarithmically on the overall size of the system. The Langevin equation gives a poor approximation to the velocity autocorrelation function at both short and long times.
Multilayer neural networks with extensively many hidden units.
Rosen-Zvi, M; Engel, A; Kanter, I
2001-08-13
The information processing abilities of a multilayer neural network with a number of hidden units scaling as the input dimension are studied using statistical mechanics methods. The mapping from the input layer to the hidden units is performed by general symmetric Boolean functions, whereas the hidden layer is connected to the output by either discrete or continuous couplings. Introducing an overlap in the space of Boolean functions as order parameter, the storage capacity is found to scale with the logarithm of the number of implementable Boolean functions. The generalization behavior is smooth for continuous couplings and shows a discontinuous transition to perfect generalization for discrete ones.
NASA Astrophysics Data System (ADS)
Haldar, Amritendu; Biswas, Ritabrata
2018-06-01
We investigate the effect of thermal fluctuations on the thermodynamics of a Lovelock-AdS black hole. Taking the first-order logarithmic correction term in the entropy, we analyze the thermodynamic potentials like the Helmholtz free energy, enthalpy and Gibbs free energy. We find that all the thermodynamic potentials are decreasing functions of the correction coefficient α. We also show, by analysing the P-V diagram, that this correction coefficient must be positive. Further, we study the P-V criticality and stability and find that the presence of the logarithmic correction is necessary to have critical points and stable phases. When P-V criticality appears, we calculate the critical volume V_c, critical pressure P_c and critical temperature T_c using different equations and show that there is no critical point for this black hole without thermal fluctuations. We also study the geometrothermodynamics of this kind of black hole. The Ricci scalar of the Ruppeiner metric is graphically analysed.
Renormalization of dijet operators at order 1/Q^2 in soft-collinear effective theory
NASA Astrophysics Data System (ADS)
Goerke, Raymond; Inglis-Whalen, Matthew
2018-05-01
We make progress towards resummation of power-suppressed logarithms in dijet event shapes such as thrust, which have the potential to improve high-precision fits for the value of the strong coupling constant. Using a newly developed formalism for Soft-Collinear Effective Theory (SCET), we identify and compute the anomalous dimensions of all the operators that contribute to event shapes at order 1/Q^2. These anomalous dimensions are necessary to resum power-suppressed logarithms in dijet event shape distributions, although an additional matching step and running of observable-dependent soft functions will be necessary to complete the resummation. In contrast to standard SCET, the new formalism does not make reference to modes or λ-scaling. Since the formalism does not distinguish between collinear and ultrasoft degrees of freedom at the matching scale, fewer subleading operators are required when compared to recent similar work. We demonstrate how the overlap subtraction prescription extends to these subleading operators.
Exact density-potential pairs from complex-shifted axisymmetric systems
NASA Astrophysics Data System (ADS)
Ciotti, Luca; Marinacci, Federico
2008-07-01
In a previous paper, the complex-shift method has been applied to self-gravitating spherical systems, producing new analytical axisymmetric density-potential pairs. We now extend the treatment to the Miyamoto-Nagai disc and the Binney logarithmic halo, and we study the resulting axisymmetric and triaxial analytical density-potential pairs; we also show how to obtain the surface density of shifted systems from the complex shift of the surface density of the parent model. In particular, the systems obtained from Miyamoto-Nagai discs can be used to describe disc galaxies with a peanut-shaped bulge or with a central triaxial bar, depending on the direction of the shift vector. By using a constructive method that can be applied to generic axisymmetric systems, we finally show that the Miyamoto-Nagai and the Satoh discs, and the Binney logarithmic halo cannot be obtained from the complex shift of any spherical parent distribution. As a by-product of this study, we also found two new generating functions in closed form for even and odd Legendre polynomials, respectively.
Entanglement entropy in (3 + 1)-d free U(1) gauge theory
NASA Astrophysics Data System (ADS)
Soni, Ronak M.; Trivedi, Sandip P.
2017-02-01
We consider the entanglement entropy for a free U(1) theory in 3+1 dimensions in the extended Hilbert space definition. By taking the continuum limit carefully we obtain a replica trick path integral which calculates this entanglement entropy. The path integral is gauge invariant, with a gauge fixing delta function accompanied by a Faddeev-Popov determinant. For a spherical region it follows that the result for the logarithmic term in the entanglement, which is universal, is given by the a-anomaly coefficient. We also consider the extractable part of the entanglement, which corresponds to the number of Bell pairs which can be obtained from entanglement distillation or dilution. For a spherical region we show that the coefficient of the logarithmic term for the extractable part is different from the extended Hilbert space result. We argue that the two results will differ in general, and this difference is accounted for by a massless scalar living on the boundary of the region of interest.
Detailed kinetics of titanium nitride synthesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rode, H.; Hlavacek, V.
1995-02-01
A thermogravimetric analyzer is used to study the synthesis of TiN from Ti powder over a wide range of temperature, conversion and heating rate, and for two Ti precursor powders with different morphologies. Conversions to TiN up to 99% are obtained with negligible oxygen contamination. Nonisothermal initial rate and isothermal data are used in a nonlinear least-squares minimization to determine the most appropriate rate law. The logarithmic rate law offers an excellent agreement between the experimental and calculated conversions to TiN and can predict afterburning, which is an important experimentally observed phenomenon. Due to the form of the logarithmic rate law, the observed activation energy is a function of effective particle size, extent of conversion, and temperature even when the intrinsic activation energy remains constant. This aspect explains discrepancies among activation energies obtained in previous studies. The frequently used sedimentation particle size is a poor measure of the powder reactivity. The BET surface area indicates the powder reactivity much better.
Zhang, Ling
2017-01-01
The main purpose of this paper is to investigate the strong convergence and exponential stability in mean square of the exponential Euler method for semi-linear stochastic delay differential equations (SLSDDEs). It is proved that the exponential Euler approximation converges to the analytic solution of the SLSDDE with strong order [Formula: see text]. On the one hand, the classical stability theorem for SLSDDEs is given via Lyapunov functions; in this paper, however, we study the exponential stability in mean square of the exact solution to SLSDDEs by using the definition of the logarithmic norm. On the other hand, the implicit Euler scheme for SLSDDEs is known to be exponentially stable in mean square for any step size; in this article we propose an explicit method and prove, by the property of the logarithmic norm, that the exponential Euler method for SLSDDEs shares the same stability for any step size.
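For readers unfamiliar with the logarithmic norm (matrix measure) that underlies the stability argument, the sketch below computes it for the Euclidean norm, μ₂(A) = λ_max((A + Aᵀ)/2); a negative value indicates contractive linear dynamics. This is textbook material, not code from the paper.

```python
import numpy as np

def log_norm_2(A):
    """Logarithmic norm (matrix measure) w.r.t. the 2-norm:
    mu_2(A) = largest eigenvalue of the symmetric part (A + A^T)/2."""
    A = np.asarray(A, dtype=float)
    return np.linalg.eigvalsh(0.5 * (A + A.T)).max()

# A stable drift matrix: mu_2(A) < 0 guarantees exponential contraction
A = np.array([[-3.0, 1.0],
              [0.5, -2.0]])
print("mu_2(A) =", log_norm_2(A))
```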
Simulating the component counts of combinatorial structures.
Arratia, Richard; Barbour, A D; Ewens, W J; Tavaré, Simon
2018-02-09
This article describes and compares methods for simulating the component counts of random logarithmic combinatorial structures such as permutations and mappings. We exploit the Feller coupling for simulating permutations to provide a very fast method for simulating logarithmic assemblies more generally. For logarithmic multisets and selections, this approach is replaced by an acceptance/rejection method based on a particular conditioning relationship that represents the distribution of the combinatorial structure as that of independent random variables conditioned on a weighted sum. We show how to improve its acceptance rate. We illustrate the method by estimating the probability that a random mapping has no repeated component sizes, and establish the asymptotic distribution of the difference between the number of components and the number of distinct component sizes for a very general class of logarithmic structures. Copyright © 2018. Published by Elsevier Inc.
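A minimal sketch of the Feller coupling for uniform random permutations in its standard form (independent Bernoulli(1/i) variables whose spacings between successive ones give the cycle counts); this illustrates the construction only and is not the authors' code.

```python
import numpy as np
from collections import Counter

def cycle_counts_feller(n, rng):
    """Sample the cycle type (C_1, ..., C_n) of a uniform random permutation of n
    via the Feller coupling: xi_i ~ Bernoulli(1/i) independently, append a final 1,
    and count the spacings between successive ones."""
    xi = (rng.random(n) < 1.0 / np.arange(1, n + 1)).astype(int)
    positions = np.flatnonzero(np.append(xi, 1))   # 0-based positions of the ones
    spacings = np.diff(positions)                  # a spacing of j corresponds to a j-cycle
    return Counter(spacings.tolist())

rng = np.random.default_rng(1)
counts = cycle_counts_feller(100, rng)
print("cycle counts (length: how many):", dict(sorted(counts.items())))
print("total size check:", sum(j * c for j, c in counts.items()))  # should equal 100
```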
Double Resummation for Higgs Production
NASA Astrophysics Data System (ADS)
Bonvini, Marco; Marzani, Simone
2018-05-01
We present the first double-resummed prediction of the inclusive cross section for the main Higgs production channel in proton-proton collisions, namely, gluon fusion. Our calculation incorporates to all orders in perturbation theory two distinct towers of logarithmic corrections which are enhanced, respectively, at threshold, i.e., large x , and in the high-energy limit, i.e., small x . Large-x logarithms are resummed to next-to-next-to-next-to-leading logarithmic accuracy, while small-x ones to leading logarithmic accuracy. The double-resummed cross section is furthermore matched to the state-of-the-art fixed-order prediction at next-to-next-to-next-to-leading accuracy. We find that double resummation corrects the Higgs production rate by 2% at the currently explored center-of-mass energy of 13 TeV and its impact reaches 10% at future circular colliders at 100 TeV.
Unifying Quantum Physics with Biology
NASA Astrophysics Data System (ADS)
Goradia, Shantilal
2014-09-01
We find that the natural logarithm of the age of the universe in quantum mechanical units is close to 137. Since science is not religion, it is our moral duty to recognize the importance of this finding on the following ground. The experimentally obtained number 137 is a mystical number in science, as if written by the hand of God. It is found in cosmology; unlike other theories, it works in biology too. A formula by Boltzmann also works in both: biology and physics, as if it is in the heart of God. His formula simply leads to finding the logarithm of microstates. One of the two conflicting theories of physics (1) Einstein's theory of General Relativity and (2) Quantum Physics, the first applies only in cosmology, but the second applies in biology too. Since we have to convert the age of the universe, 13 billion years, into 1,300,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 Planck times to get close to 137, quantum physics clearly shows the characteristics of unifying with biology. The proof of its validity also lies in its ability to extend information system observed in biology.
[Effects of algae and kaolinite particles on the survival of bacteriophage MS2].
He, Qiang; Wu, Qing-Qing; Ma, Hong-Fang; Zhou, Zhen-Ming; Yuan, Bao-Ling
2014-08-01
In this study, bacteriophage MS2, kaolinite and Microcystis aeruginosa were selected as model materials for human enteric viruses, inorganic particles and organic particles, respectively. The influence of the inorganic (kaolinite) or organic (Microcystis aeruginosa) particles on the survival of MS2 under different conditions, such as particle concentration, pH, ion concentration and natural organic matter (NOM), was studied. The results showed that kaolinite had no effect on the survival of phage MS2, except that the apparent survival of MS2 increased by 1 logarithm in higher-hardness water. Addition of Microcystis aeruginosa reduced MS2 survival by 1 logarithm. However, when the pH value was greater than 4.0 or the concentration of Microcystis aeruginosa was less than 1.0 x 10^6 cells.L^-1, Microcystis aeruginosa addition had no influence on the survival of MS2. In higher-hardness water, Microcystis aeruginosa protected MS2 viruses and thus increased the survival of MS2. In drinking water sources containing higher concentrations of particles, the survival of viruses would therefore be enhanced as hardness increases, elevating the risks to drinking water safety.
NASA Astrophysics Data System (ADS)
Yanaga, Ryuichiro; Kawahara, Hideki
2003-10-01
A new parameter extraction procedure based on logarithmic transformation of the temporal axis was applied to investigate auditory effects on voice F0 control, to overcome artifacts due to natural fluctuations and nonlinearities in speech production mechanisms. The proposed method may add complementary information, in terms of the dynamic aspects of F0 control, to recent findings reported using the frequency shift feedback method [Burnett and Larson, J. Acoust. Soc. Am. 112 (2002)]. In a series of experiments, the dependencies of system parameters in F0 control on subject, F0 and style (musical expression and speaking) were tested using six participants: three male and three female students specialized in music education. They were asked to sustain the Japanese vowel /a/ for about 10 s repeatedly, up to 2 min in total, while hearing F0-modulated feedback speech, modulated using an M-sequence. The results qualitatively replicated a previous finding [Kawahara and Williams, Vocal Fold Physiology (1995)] and provided more accurate estimates. Relations to the design of an artificial singer will also be discussed. [Work partly supported by grant-in-aid for scientific research (B) 14380165 and Wakayama University.]
van Turnhout, J.
2016-01-01
The dielectric spectra of colloidal systems often contain a typical low-frequency dispersion, which usually remains unnoticed because of the presence of strong conduction losses. The Kramers-Kronig (KK) relations offer a means for converting ε′ into ε″ data. This allows us to calculate conduction-free ε″ spectra in which the l.f. dispersion will show up undisturbed. This interconversion can be done on-line with a moving frame of logarithmically spaced ε′ data. The coefficients of the conversion frames were obtained by kernel matching and by using symbolic differential operators. Logarithmic derivatives and differences of ε′ and ε″ provide another option for conduction-free data analysis. These difference-based functions, actually derived from approximations to the distribution function, have the additional advantage of improving the resolution power of dielectric studies. A high resolution is important because of the rich relaxation structure of colloidal suspensions. The development of all-in-1 modeling facilitates the conduction-free and high-resolution data analysis. This mathematical tool allows the apart-together fitting of multiple data sets and multiple model functions. It also proved useful for bypassing the KK conversion altogether. This was achieved by approximating the ε′ and ε″ data jointly with a complex rational fractional power function. The all-in-1 minimization also turned out to be highly useful for the dielectric modeling of a suspension with the complex dipolar coefficient. It guarantees a secure correction for the electrode polarization, so that the modeling with the help of the differences of ε′ and ε″ can zoom in on the genuine colloidal relaxations. PMID:27242997
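As a small illustration of the logarithmic-derivative route to conduction-free loss data, the sketch below applies the common approximation ε″_der ≈ -(π/2)·∂ε′/∂ln ω to a synthetic Debye relaxation plus dc conduction; the data and the simple finite-difference derivative are assumptions for illustration, not the kernel-matched frames derived in the paper.

```python
import numpy as np

# Synthetic spectrum: one Debye relaxation plus a dc-conduction loss term K/omega
omega = np.logspace(-2, 4, 400)                    # angular frequency (rad/s)
d_eps, tau, K = 5.0, 1.0, 1.0                      # illustrative parameters
eps_r = 3.0 + d_eps / (1 + (omega * tau) ** 2)                      # eps'
eps_i = d_eps * omega * tau / (1 + (omega * tau) ** 2) + K / omega  # eps'' + conduction

# Logarithmic-derivative estimate of the conduction-free loss:
#   eps''_der ~ -(pi/2) * d(eps') / d(ln omega).
# The dc conduction does not contribute to eps', so it drops out of the estimate.
# (For a sharp Debye peak the estimate is somewhat narrower and higher than the
# true relaxation loss; it becomes accurate for broad relaxations.)
eps_i_der = -(np.pi / 2) * np.gradient(eps_r, np.log(omega))

i = 0                                              # lowest frequency, conduction-dominated
print("eps'' with conduction at omega =", omega[i], "rad/s:", round(eps_i[i], 2))
print("conduction-free estimate there                   :", round(eps_i_der[i], 4))
```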
The time resolution of the St Petersburg paradox
Peters, Ole
2011-01-01
A resolution of the St Petersburg paradox is presented. In contrast to the standard resolution, utility is not required. Instead, the time-average performance of the lottery is computed. The final result can be phrased mathematically identically to Daniel Bernoulli's resolution, which uses logarithmic utility, but is derived using a conceptually different argument. The advantage of the time resolution is the elimination of arbitrary utility functions. PMID:22042904
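A small simulation sketch (an illustration with a hypothetical bankroll and ticket price, not the paper's derivation) contrasting the sample mean of the St Petersburg payout with the time-average growth rate of a player who repeatedly buys the lottery:

```python
import numpy as np

rng = np.random.default_rng(42)

def st_petersburg_payout(rng):
    """Payout 2**k, where k is the number of coin flips until the first heads."""
    k = 1
    while rng.random() < 0.5:
        k += 1
    return 2.0 ** k

wealth, price, rounds = 100.0, 8.0, 100_000     # hypothetical bankroll and ticket price
payouts = np.array([st_petersburg_payout(rng) for _ in range(rounds)])

ensemble_avg = payouts.mean()                         # grows without bound as rounds increase
growth_factors = (wealth - price + payouts) / wealth  # per-round multiplicative factor
time_avg_growth = np.mean(np.log(growth_factors))     # time-average (logarithmic) growth rate

print("sample mean payout      :", round(ensemble_avg, 2))
print("time-average growth rate:", round(time_avg_growth, 5))
```

Whether the time-average growth rate is positive or negative depends on the bankroll and ticket price, which is the behaviour the time resolution uses in place of a utility function.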
Pathogen Inactivated Plasma Concentrated: Preparation and Uses
2004-09-01
[Report documentation page residue omitted; report RTO-MP-HFM-109, dated 01 Sep 2004.] Results: Both UVC and ozone yielded a PPV logarithmic reduction factor (LRF) of 6, for a ... technology to be marketed; the industry name is Plas+SD [2]. This process functions by attacking the lipid sheaths that surround enveloped viruses.
Generating series for GUE correlators
NASA Astrophysics Data System (ADS)
Dubrovin, Boris; Yang, Di
2017-11-01
We extend to the Toda lattice hierarchy the approach of Bertola et al. (Phys D Nonlinear Phenom 327:30-57, 2016; IMRN, 2016) to computation of logarithmic derivatives of tau-functions in terms of the so-called matrix resolvents of the corresponding difference Lax operator. As a particular application we obtain explicit generating series for connected GUE correlators. On this basis an efficient recursive procedure for computing the correlators in full genera is developed.
A critical assessment of viscous models of trench topography and corner flow
NASA Technical Reports Server (NTRS)
Zhang, J.; Hager, B. H.; Raefsky, A.
1984-01-01
Stresses for Newtonian viscous flow in a simple geometry (e.g., corner flow, bending flow) are obtained in order to study the effect of imposed velocity boundary conditions. Stress for a delta function velocity boundary condition decays as 1/R^2; for a step function velocity, stress goes as 1/R; for a discontinuity in curvature, the stress singularity is logarithmic. For corner flow, which has a discontinuity of velocity at a certain point, the corresponding stress has a 1/R singularity. However, for a more realistic circular-slab model, the stress singularity becomes logarithmic. Thus the stress distribution is very sensitive to the boundary conditions, and in evaluating the applicability of viscous models of trench topography it is essential to use realistic geometries. Topography and seismicity data from northern Honshu, Japan, were used to construct a finite element model, with flow assumed tangent to the top of the grid, for both Newtonian and non-Newtonian flow (power-law rheology with exponent 3). Normal stresses at the top of the grid are compared to the observed trench topography and gravity anomalies. There is poor agreement. Purely viscous models of subducting slabs with specified velocity boundary conditions do not predict normal stress patterns compatible with observed topography and gravity. Elasticity and plasticity appear to be important for the subduction process.
Topologically massive gravity and the AdS/CFT correspondence
NASA Astrophysics Data System (ADS)
Skenderis, Kostas; Taylor, Marika; van Rees, Balt C.
2009-09-01
We set up the AdS/CFT correspondence for topologically massive gravity (TMG) in three dimensions. The first step in this procedure is to determine the appropriate fall off conditions at infinity. These cannot be fixed a priori as they depend on the bulk theory under consideration and are derived by solving asymptotically the non-linear field equations. We discuss in detail the asymptotic structure of the field equations for TMG, showing that it contains leading and subleading logarithms, determine the map between bulk fields and CFT operators, obtain the appropriate counterterms needed for holographic renormalization and compute holographically one- and two-point functions at and away from the ``chiral point'' (μ = 1). The two-point functions at the chiral point are those of a logarithmic CFT (LCFT) with c_L = 0, c_R = 3l/G_N and b = -3l/G_N, where b is a parameter characterizing different c = 0 LCFTs. The bulk correlators away from the chiral point (μ ≠ 1) smoothly limit to the LCFT ones as μ → 1. Away from the chiral point, the CFT contains a state of negative norm and the expectation value of the energy momentum tensor in that state is also negative, reflecting a corresponding bulk instability due to negative energy modes.
NASA Astrophysics Data System (ADS)
Antonov, N. V.; Gulitskiy, N. M.
2015-10-01
In this work we study the generalization of the problem considered in [Phys. Rev. E 91, 013002 (2015), 10.1103/PhysRevE.91.013002] to the case of finite correlation time of the environment (velocity) field. The model describes a vector (e.g., magnetic) field, passively advected by a strongly anisotropic turbulent flow. Inertial-range asymptotic behavior is studied by means of the field theoretic renormalization group and the operator product expansion. The advecting velocity field is Gaussian, with finite correlation time and preassigned pair correlation function. Due to the presence of the distinguished direction n, all the multiloop diagrams in this model vanish, so that the results obtained are exact. The inertial-range behavior of the model is described by two regimes (the limits of vanishing or infinite correlation time) that correspond to the two nontrivial fixed points of the RG equations. Their stability depends on the relation between the exponents in the energy spectrum E ∝ k⊥^(1-ξ) and the dispersion law ω ∝ k⊥^(2-η). In contrast to the well-known isotropic Kraichnan model, where various correlation functions exhibit anomalous scaling behavior with infinite sets of anomalous exponents, here the corrections to ordinary scaling are polynomials of logarithms of the integral turbulence scale L.
Granger Test to Determine Causes of Harmful Algal Blooms in Tai Lake during the Last Decade
NASA Astrophysics Data System (ADS)
Guo, W.; Wu, F.
2016-12-01
Eutrophication-driven harmful cyanobacteria blooms can threaten the stability of lake ecosystems. A key to solving this problem is identifying the main cause of algal blooms so that appropriate remediation can be employed. A test of causality was used to analyze data for Meiliang Bay in Tai Lake (Ch: Taihu) from 2000 to 2012. After filtration of the data by the stationarity test and the co-integration test, the Granger causality test and impulse response analysis were used to analyze potential bloom causes, from physicochemical parameters to chlorophyll-a concentration. Results of stationarity tests showed that the logarithms of Secchi disk depth (lnSD), suspended solids (lnSS), lnNH4-N/NOx-N and pH were stationary as a function of time and could not be considered causal for the changes in phytoplankton biomass observed during that period. Results of co-integration tests indicated the existence of long-run co-integrating relationships among the natural logarithms of chlorophyll-a (lnChl-a), water temperature (lnWT), total organic carbon (lnTOC) and the ratio of nitrogen to phosphorus (lnN/P). The Granger causality test suggested that once thresholds for nutrients such as nitrogen and phosphorus had been reached, WT could increase the likelihood or severity of cyanobacteria blooms. A unidirectional Granger relationship from N/P to Chl-a was established; this result indicated that because concentrations of TN in Meiliang Bay had reached their thresholds, TN no longer limited the proliferation of cyanobacteria, and TP should be controlled to reduce the likelihood of algal blooms. The impulse response analysis implied that lagging effects of water temperature and the N/P ratio could influence the variation of Chl-a concentration at certain lag periods. The results can advance understanding of the mechanisms of formation of harmful cyanobacteria blooms.
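For readers who want to run this kind of analysis on their own monitoring data, a minimal sketch using statsmodels is given below; the series are synthetic and the variable names only mimic the study.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller, grangercausalitytests

rng = np.random.default_rng(0)
n = 156  # e.g. monthly observations over 13 years

# Synthetic log-transformed series in which lnWT "drives" lnChla with a one-step lag
lnWT = np.log(15 + 10 * np.sin(np.linspace(0, 26 * np.pi, n)) + rng.normal(0, 1, n))
lnChla = 0.8 * np.roll(lnWT, 1) + rng.normal(0, 0.1, n)
lnChla[0] = lnChla[1]                      # discard the wrap-around artifact

df = pd.DataFrame({"lnChla": lnChla, "lnWT": lnWT})

# Stationarity check (ADF test) before the causality test
for col in df.columns:
    print(f"ADF p-value for {col}: {adfuller(df[col])[1]:.3f}")

# Does lnWT Granger-cause lnChla?  (the second column is tested as the cause of the first)
res = grangercausalitytests(df[["lnChla", "lnWT"]], maxlag=2)
p_lag1 = res[1][0]["ssr_ftest"][1]
print(f"Granger F-test p-value at lag 1: {p_lag1:.4f}")
```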
Gordon, R; Burford, I R
1984-01-01
Romanomermis culicivorax juveniles, dissected out of Aedes aegypti larvae 7 days after infection, were incubated under controlled conditions in isotonic saline containing 14C-U-palmitic acid to investigate the nature of the transport mechanism(s) used by the nematode for transcuticular uptake of palmitic acid. Net uptake of the isotope by the nematode was of a logarithmic nature with respect to time. Uptake of palmitic acid was accomplished by a combination of diffusion and a mediated process which was substrate saturable and competitively inhibited by myristic and stearic acids. Both 2,4-dinitrophenol and ouabain inhibited uptake of palmitic acid and thus supported the hypothesis that the carrier system is of the active transport variety and is coupled to an Na,K-ATPase pump.
Quasi Sturmian basis for the two-electron continuum
NASA Astrophysics Data System (ADS)
Zaytsev, A. S.; Ancarani, L. U.; Zaytsev, S. A.
2016-02-01
A new type of basis functions is proposed to describe a two-electron continuum which arises as a final state in electron-impact ionization and double photoionization of atomic systems. We name these functions, which are calculated in terms of the recently introduced quasi Sturmian functions, Convoluted Quasi Sturmian functions (CQS); by construction, they look asymptotically like a six-dimensional spherical wave. The driven equation describing an (e,3e) process on helium in the framework of the Temkin-Poet model is solved numerically in the entire space (rather than in a finite region of space) using expansions on CQS basis functions. We show that quite rapid convergence of the solution expansion can be achieved by multiplying the basis functions by the logarithmic phase factor corresponding to the Coulomb electron-electron interaction.
Operator algebra as an application of logarithmic representation of infinitesimal generators
NASA Astrophysics Data System (ADS)
Iwata, Yoritaka
2018-02-01
The operator algebra is introduced based on the framework of the logarithmic representation of infinitesimal generators. In conclusion, a set of generally unbounded infinitesimal generators is characterized as a module over a Banach algebra.
Entropy production of doubly stochastic quantum channels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Müller-Hermes, Alexander, E-mail: muellerh@posteo.net; Department of Mathematical Sciences, University of Copenhagen, 2100 Copenhagen; Stilck França, Daniel, E-mail: dsfranca@mytum.de
2016-02-15
We study the entropy increase of quantum systems evolving under primitive, doubly stochastic Markovian noise and thus converging to the maximally mixed state. This entropy increase can be quantified by a logarithmic-Sobolev constant of the Liouvillian generating the noise. We prove a universal lower bound on this constant that stays invariant under taking tensor-powers. Our methods involve a new comparison method to relate logarithmic-Sobolev constants of different Liouvillians and a technique to compute logarithmic-Sobolev inequalities of Liouvillians with eigenvectors forming a projective representation of a finite abelian group. Our bounds improve upon similar results established before and as an application we prove an upper bound on continuous-time quantum capacities. In the last part of this work we study entropy production estimates of discrete-time doubly stochastic quantum channels by extending the framework of discrete-time logarithmic-Sobolev inequalities to the quantum case.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGoldrick, P.R.; Allison, T.G.
The BASIC2 INTERPRETER was developed to provide a high-level, easy-to-use language for performing both control and computational functions in the MCS-80. The package is supplied as two alternative implementations, hardware and software. The ''software'' implementation provides the following capabilities: entry and editing of BASIC programs, device-independent I/O, special functions to allow access from BASIC to any I/O port, formatted printing, special INPUT/OUTPUT-and-proceed statements to allow I/O without interrupting BASIC program execution, full arithmetic expressions, limited string manipulation (10 or fewer characters), shorthand forms for common BASIC keywords, immediate mode BASIC statement execution, and the capability of running a BASIC program that is stored in PROM. The allowed arithmetic operations are addition, subtraction, multiplication, division, and raising a number to a positive integral power. The second, or ''hardware'', implementation of BASIC2 requires an Am9511 Arithmetic Processing Unit (APU) interfaced to the 8080 microprocessor; arithmetic operations are then performed by the APU. The following additional built-in functions are available in this implementation: square root, sine, cosine, tangent, arcsine, arccosine, arctangent, exponential, logarithm base e, and logarithm base 10. MCS-80, 8080-based microcomputers; 8080 Assembly language; approximately 8K bytes of RAM to store the assembled interpreter, additional user program space, and necessary peripheral devices. The hardware implementation requires an Am9511 Arithmetic Processing Unit and an interface board (reference 2).
Logarithms in the Year 10 A.C.
ERIC Educational Resources Information Center
Kalman, Dan; Mitchell, Charles E.
1981-01-01
An alternative application of logarithms in the high school algebra curriculum that is not undermined by the existence and widespread availability of calculators is presented. The importance and use of linear relationships are underscored in the proposed lessons. (MP)
Lattice QCD Thermodynamics and RHIC-BES Particle Production within Generic Nonextensive Statistics
NASA Astrophysics Data System (ADS)
Tawfik, Abdel Nasser
2018-05-01
The current status of implementing Tsallis (nonextensive) statistics in high-energy physics is briefly reviewed. The remarkably low freezeout temperature, which apparently fails to reproduce the first-principle lattice QCD thermodynamics and the measured particle ratios, etc., is discussed. The present work suggests a novel interpretation for the so-called "Tsallis temperature". It is proposed that the low Tsallis temperature is due to an incomplete implementation of Tsallis algebra, through exponential and logarithmic functions, in high-energy particle production. Substituting Tsallis algebra into the grand-canonical partition function of the hadron resonance gas model does not seem to assure a full incorporation of nonextensivity or correlations in that model. The statistics describing the phase-space volume, the number of states and the possible changes in the elementary cells should rather be modified due to the interacting correlated subsystems of which the phase space consists. Alternatively, two asymptotic properties, each associated with a scaling function, are utilized to classify a generalized entropy for such a system with a large ensemble (produced particles) and strong correlations. Both scaling exponents define equivalence classes for all interacting and noninteracting systems and unambiguously characterize any statistical system in its thermodynamic limit. We conclude that the nature of lattice QCD simulations is apparently extensive and accordingly the Boltzmann-Gibbs statistics is fully fulfilled. Furthermore, we find that the ratios of various particle yields at the extreme high and extreme low energies of RHIC-BES are likely nonextensive but not necessarily of Tsallis type.
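For context, the Tsallis algebra referred to above replaces the ordinary exponential and logarithm by their q-deformed counterparts; a minimal sketch of the standard definitions (not code from the paper) is:

```python
import numpy as np

def q_log(x, q):
    """Tsallis q-logarithm: ln_q(x) = (x**(1-q) - 1) / (1 - q); reduces to ln(x) as q -> 1."""
    x = np.asarray(x, dtype=float)
    return np.log(x) if np.isclose(q, 1.0) else (x ** (1.0 - q) - 1.0) / (1.0 - q)

def q_exp(x, q):
    """Tsallis q-exponential: exp_q(x) = [1 + (1-q)*x]_+ ** (1/(1-q)); reduces to exp(x) as q -> 1."""
    x = np.asarray(x, dtype=float)
    if np.isclose(q, 1.0):
        return np.exp(x)
    base = np.maximum(1.0 + (1.0 - q) * x, 0.0)   # cut off where the bracket goes negative
    return base ** (1.0 / (1.0 - q))

x = np.array([0.5, 1.0, 2.0, 5.0])
print("ln_q(x), q=1.1 :", q_log(x, 1.1))
print("exp_q(ln_q(x)) :", q_exp(q_log(x, 1.1), 1.1))   # recovers x, since the two are inverses
```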
Power Laws are Disguised Boltzmann Laws
NASA Astrophysics Data System (ADS)
Richmond, Peter; Solomon, Sorin
Using a previously introduced model on generalized Lotka-Volterra dynamics together with some recent results for the solution of generalized Langevin equations, we derive analytically the equilibrium mean field solution for the probability distribution of wealth and show that it has two characteristic regimes. For large values of wealth, it takes the form of a Pareto style power law. For small values of wealth, w ≤ w_m, the distribution function tends sharply to zero. The origin of this law lies in the random multiplicative process built into the model. Whilst such results have been known since the time of Gibrat, the present framework allows for a stable power law in an arbitrary and irregular global dynamics, so long as the market is ``fair'', i.e., there is no net advantage to any particular group or individual. We further show that the dynamics of relative wealth is independent of the specific nature of the agent interactions and exhibits a universal character even though the total wealth may follow an arbitrary and complicated dynamics. In developing the theory, we draw parallels with conventional thermodynamics and derive for the system some new relations for the ``thermodynamics'' associated with the Generalized Lotka-Volterra type of stochastic dynamics. The power law that arises in the distribution function is identified with new additional logarithmic terms in the familiar Boltzmann distribution function for the system. These are a direct consequence of the multiplicative stochastic dynamics and are absent for the usual additive stochastic processes.
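As a hedged toy illustration of how a multiplicative stochastic process with a lower bound produces a Pareto-like tail (a simplified stand-in for the generalized Lotka-Volterra mechanism, with made-up parameters), consider:

```python
import numpy as np

rng = np.random.default_rng(7)
n_agents, n_steps = 5000, 4000
w = np.ones(n_agents)                 # initial wealth

for _ in range(n_steps):
    # Random multiplicative shock (Gibrat-style growth)
    w *= rng.lognormal(mean=0.0, sigma=0.1, size=n_agents)
    # A floor at a fraction of the mean stands in for the coupling that sets w_m
    w = np.maximum(w, 0.3 * w.mean())

w_rel = np.sort(w / w.mean())[::-1]   # relative wealth, descending
ranks = np.arange(1, n_agents + 1)

# For a Pareto tail P(W > w) ~ w**(-alpha), log(rank) versus log(wealth) is linear;
# estimate alpha from the top 10% of agents.
top = slice(0, n_agents // 10)
alpha = -np.polyfit(np.log(w_rel[top]), np.log(ranks[top]), 1)[0]
print("estimated Pareto exponent alpha ~", round(alpha, 2))
```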
Nystrom, Elizabeth A.; Burns, Douglas A.
2011-01-01
TOPMODEL uses a topographic wetness index computed from surface-elevation data to simulate streamflow and subsurface-saturation state, represented by the saturation deficit. Depth to water table was computed from simulated saturation-deficit values using computed soil properties. In the Fishing Brook Watershed, TOPMODEL was calibrated to the natural logarithm of streamflow at the study area outlet and depth to water table at Sixmile Wetland using a combined multiple-objective function. Runoff and depth to water table responded differently to some of the model parameters, and the combined multiple-objective function balanced the goodness-of-fit of the model realizations with respect to these parameters. Results show that TOPMODEL reasonably simulated runoff and depth to water table during the study period. The simulated runoff had a Nash-Sutcliffe efficiency of 0.738, but the model underpredicted total runoff by 14 percent. Depth to water table computed from simulated saturation-deficit values matched observed water-table depth moderately well; the root mean squared error of absolute depth to water table was 91 millimeters (mm), compared to the mean observed depth to water table of 205 mm. The correlation coefficient for temporal depth-to-water-table fluctuations was 0.624. The variability of the TOPMODEL simulations was assessed using prediction intervals grouped using the combined multiple-objective function. The calibrated TOPMODEL results for the entire study area were applied to several subwatersheds within the study area using computed hydrogeomorphic properties of the subwatersheds.
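For reference, the Nash-Sutcliffe efficiency and root mean squared error used to judge the simulations above can be computed as follows (generic sketch with invented series, not the study data):

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - sum((obs-sim)^2) / sum((obs-mean(obs))^2)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs, sim):
    return float(np.sqrt(np.mean((np.asarray(obs, float) - np.asarray(sim, float)) ** 2)))

# Hypothetical runoff series (mm/day); the model above was calibrated on ln(streamflow)
obs = np.array([1.2, 3.4, 2.8, 0.9, 5.6, 4.1, 2.2])
sim = np.array([1.0, 3.1, 3.0, 1.1, 5.0, 4.5, 2.0])

print("NSE on runoff    :", round(nash_sutcliffe(obs, sim), 3))
print("NSE on ln(runoff):", round(nash_sutcliffe(np.log(obs), np.log(sim)), 3))
print("RMSE (mm/day)    :", round(rmse(obs, sim), 3))
```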
Hilbert and Blaschke phases in the temporal coherence function of stationary broadband light.
Fernández-Pousa, Carlos R; Maestre, Haroldo; Torregrosa, Adrián J; Capmany, Juan
2008-10-27
We show that the minimal phase of the temporal coherence function γ(τ) of stationary light having a partially-coherent symmetric spectral peak can be computed as a relative logarithmic Hilbert transform of its amplitude with respect to its asymptotic behavior. The procedure is applied to experimental data from amplified spontaneous emission broadband sources in the 1.55 μm band with subpicosecond coherence times, providing examples of degrees of coherence with both minimal and non-minimal phase. In the latter case, the Blaschke phase is retrieved and the position of the Blaschke zeros determined.
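A minimal numerical sketch of the plain log-Hilbert minimal-phase relation, ignoring the asymptotic-reference subtraction described in the paper and using a synthetic Gaussian degree of coherence instead of measured ASE data; sign conventions differ between references.

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic, symmetric degree of coherence |gamma(tau)| (e.g. a Gaussian spectral peak)
tau = np.linspace(-5e-12, 5e-12, 4096)          # delay axis (s), subpicosecond coherence
coh_time = 0.8e-12
amp = np.exp(-(tau / coh_time) ** 2)

# Minimal phase from the Hilbert transform of the log-amplitude.
# scipy's hilbert() returns the analytic signal x + i*H[x], so H[x] = imag(hilbert(x)).
# A small floor avoids log(0) in the wings.
log_amp = np.log(np.maximum(amp, 1e-12))
phi_min = -np.imag(hilbert(log_amp))

print("minimal phase at tau = 0        :", round(phi_min[len(tau) // 2], 4), "rad")
print("minimal phase at tau = coh. time:", round(phi_min[np.argmin(np.abs(tau - coh_time))], 4), "rad")
```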
Cointegration of output, capital, labor, and energy
NASA Astrophysics Data System (ADS)
Stresing, R.; Lindenberger, D.; Kümmel, R.
2008-11-01
Cointegration analysis is applied to the linear combinations of the time series of (the logarithms of) output, capital, labor, and energy for Germany, Japan, and the USA since 1960. The computed cointegration vectors represent the output elasticities of the aggregate energy-dependent Cobb-Douglas function. The output elasticities give the economic weights of the production factors capital, labor, and energy. We find that, for labor, they are much smaller than the cost shares of these factors, while for energy they are much larger. In standard economic theory, output elasticities equal cost shares. Our heterodox findings support results obtained with LINEX production functions.
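To make the link between cointegration vectors and output elasticities concrete, a stripped-down sketch (synthetic data and plain OLS in logarithms rather than a full Johansen cointegration analysis) of estimating an energy-dependent Cobb-Douglas function ln Y = α ln K + β ln L + γ ln E + const is:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 50                                    # annual observations

# Synthetic factor inputs (index form) and output generated with known elasticities
lnK = np.cumsum(rng.normal(0.03, 0.01, T))
lnL = np.cumsum(rng.normal(0.01, 0.01, T))
lnE = np.cumsum(rng.normal(0.02, 0.01, T))
alpha, beta, gamma = 0.4, 0.1, 0.5        # "true" output elasticities, purely illustrative
lnY = alpha * lnK + beta * lnL + gamma * lnE + rng.normal(0, 0.01, T)

# OLS in logarithms; the coefficients estimate the output elasticities
X = np.column_stack([lnK, lnL, lnE, np.ones(T)])
coef, *_ = np.linalg.lstsq(X, lnY, rcond=None)
print("estimated elasticities (K, L, E):", np.round(coef[:3], 3))
```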
Deducing growth mechanisms for minerals from the shapes of crystal size distributions
Eberl, D.D.; Drits, V.A.; Srodon, J.
1998-01-01
Crystal size distributions (CSDs) of natural and synthetic samples are observed to have several distinct and different shapes. We have simulated these CSDs using three simple equations: the Law of Proportionate Effect (LPE), a mass balance equation, and equations for Ostwald ripening. The following crystal growth mechanisms are simulated using these equations and their modifications: (1) continuous nucleation and growth in an open system, during which crystals nucleate at either a constant, decaying, or accelerating nucleation rate, and then grow according to the LPE; (2) surface-controlled growth in an open system, during which crystals grow with an essentially unlimited supply of nutrients according to the LPE; (3) supply-controlled growth in an open system, during which crystals grow with a specified, limited supply of nutrients according to the LPE; (4) supply- or surface-controlled Ostwald ripening in a closed system, during which the relative rate of crystal dissolution and growth is controlled by differences in specific surface area and by diffusion rate; and (5) supply-controlled random ripening in a closed system, during which the rate of crystal dissolution and growth is random with respect to specific surface area. Each of these mechanisms affects the shapes of CSDs. For example, mechanism (1) above with a constant nucleation rate yields asymptotically-shaped CSDs for which the variance of the natural logarithms of the crystal sizes (β²) increases exponentially with the mean of the natural logarithms of the sizes (α). Mechanism (2) yields lognormally-shaped CSDs, for which β² increases linearly with α, whereas mechanisms (3) and (5) do not change the shapes of CSDs, with β² remaining constant with increasing α. During supply-controlled Ostwald ripening (4), initial lognormally-shaped CSDs become more symmetric, with β² decreasing with increasing α. Thus, crystal growth mechanisms often can be deduced by noting trends in α versus β² of CSDs for a series of related samples.
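A small simulation sketch of mechanism (2), surface-controlled growth by the Law of Proportionate Effect, showing that β² (variance of ln size) grows together with α (mean of ln size); the parameter values are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(5)
n_crystals, n_cycles = 10_000, 200
sizes = np.full(n_crystals, 1.0)           # initial crystal sizes (arbitrary units)

alpha_track, beta2_track = [], []
for _ in range(n_cycles):
    # Law of Proportionate Effect: each crystal grows by a random proportion of its size
    sizes *= 1.0 + rng.uniform(0.0, 0.02, n_crystals)
    ln_sizes = np.log(sizes)
    alpha_track.append(ln_sizes.mean())    # alpha: mean of ln(size)
    beta2_track.append(ln_sizes.var())     # beta^2: variance of ln(size)

print("alpha grew from %.3f to %.3f" % (alpha_track[0], alpha_track[-1]))
print("beta^2 grew from %.5f to %.5f" % (beta2_track[0], beta2_track[-1]))
```

Plotting β² against α for the tracked cycles would trace the roughly linear trend that the abstract associates with lognormally-shaped CSDs.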
Application of a minicomputer-based system in measuring intraocular fluid dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bronzino, J.D.; D'Amato, D.P.; O'Rourke, J.
A complete, computerized system has been developed to automate and display radionuclide clearance studies in an ophthalmology clinical laboratory. The system is based on a PDP-8E computer with a 16-k core memory and includes a dual-drive Decassette system and an interactive display terminal. The software controls the acquisition of data from an NIM scaler, times the procedures, and analyzes and simultaneously displays logarithmically converted data on a fully annotated graph. Animal studies and clinical experiments are presented to illustrate the nature of these displays and the results obtained using this automated eye physiometer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pisano, Cristian; Bacchetta, Alessandro; Delcarro, Filippo
We present a first attempt at a global fit of unpolarized quark transverse momentum dependent distribution and fragmentation functions from available data on semi-inclusive deep-inelastic scattering, Drell-Yan and $Z$ boson production processes. This analysis is performed in the low transverse momentum region, at leading order in perturbative QCD and with the inclusion of energy scale evolution effects at the next-to-leading logarithmic accuracy.
The Coast Artillery Journal. Volume 57, Number 6, December 1922
1922-12-01
theorems; Chapter III, to application; Chapters IV, V and VI, to infinitesimals and differentials, trigonometric functions, and logarithms and...taneously." There are chapters on complex numbers with simple and direct discussion of the roots of unity; on elementary theorems on the roots of an...through the centuries from the time of Pythagoras, an interest shared on the one extreme by nearly every noted mathematician and on the other extreme by
NASA Astrophysics Data System (ADS)
Shieh, Lih-Yir; Kan, Hung-Chih
2014-04-01
We demonstrate that plotting the P-V diagram of an ideal gas Carnot cycle on a logarithmic scale results in a more intuitive approach for deriving the final form of the efficiency equation. The same approach also facilitates the derivation of the efficiency of other thermodynamic engines that employ adiabatic ideal gas processes, such as the Brayton cycle, the Otto cycle, and the Diesel engine. We finally demonstrate that logarithmic plots of isothermal and adiabatic processes help with visualization in approximating an arbitrary process in terms of an infinite number of Carnot cycles.
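A short plotting sketch of the idea (not taken from the paper): on log-log axes an ideal-gas isotherm PV = const and an adiabat PV^γ = const appear as straight lines with slopes -1 and -γ, which is what makes the cycle geometry, and hence the efficiency bookkeeping, easy to read off; γ for a monatomic gas is assumed here.

```python
# Isotherm and adiabat of an ideal gas become straight lines on a log-log P-V diagram.
import numpy as np
import matplotlib.pyplot as plt

gamma = 5.0 / 3.0                       # monatomic ideal gas (illustrative choice)
V = np.logspace(-1, 1, 200)             # arbitrary volume units

plt.loglog(V, 1.0 / V, label="isotherm  P V = 1")            # slope -1
plt.loglog(V, 1.0 / V**gamma, label=r"adiabat  P V$^\gamma$ = 1")  # slope -gamma
plt.xlabel("V (arb. units)")
plt.ylabel("P (arb. units)")
plt.legend()
plt.show()
```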
Prediction of Soil pH Hyperspectral Spectrum in Guanzhong Area of Shaanxi Province Based on PLS
NASA Astrophysics Data System (ADS)
Liu, Jinbao; Zhang, Yang; Wang, Huanyuan; Cheng, Jie; Tong, Wei; Wei, Jing
2017-12-01
The soil pH of Fufeng County, Yangling County and Wugong County in Shaanxi Province was studied. Spectral reflectance was measured with an ASD FieldSpec HR portable spectrometer, and its spectral characteristics were analyzed. The first derivative of the original spectral reflectance, the second derivative, the logarithm of the reciprocal (log(1/R)), and the first- and second-order derivatives of log(1/R) were used to establish soil pH spectral prediction models. The results showed that the correlation between the reflectance spectra after SNV pre-treatment and soil pH was significantly improved. The optimal soil pH prediction model established by the partial least squares method was based on the first-order derivative of log(1/R); with 10 principal component factors it gave a calibration coefficient of determination Rc2 = 0.9959, root mean square error of calibration RMSEC = 0.0076, and calibration deviation SEC = 0.0077, and a validation coefficient of determination Rv2 = 0.9893, root mean square error of prediction RMSEP = 0.0157, and validation deviation SEP = 0.0160. The model was stable, its fitting and prediction abilities were high, and soil pH could be estimated rapidly from the spectra.
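A minimal sketch of that preprocessing-plus-regression pipeline, assuming the reported choices (first derivative of log(1/R), 10 PLS components); the reflectance, wavelength, and pH arrays below are synthetic placeholders rather than the Guanzhong data, so only the mechanics are meaningful.

```python
# Regress soil pH on the first derivative of log(1/R) with a 10-component PLS model.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
wavelengths = np.linspace(350, 2500, 500)                    # nm, illustrative band grid
reflectance = 0.3 + 0.1 * rng.random((120, wavelengths.size))  # placeholder spectra (n_samples x n_bands)
ph = 7.0 + rng.normal(0, 0.5, 120)                           # placeholder pH targets

log_inv_r = np.log(1.0 / reflectance)                        # logarithm of the reciprocal, log(1/R)
x = np.gradient(log_inv_r, wavelengths, axis=1)              # first derivative of log(1/R)

pls = PLSRegression(n_components=10)
pls.fit(x, ph)
pred = pls.predict(x).ravel()
print("RMSEC:", mean_squared_error(ph, pred) ** 0.5)
```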
Resumming double non-global logarithms in the evolution of a jet
NASA Astrophysics Data System (ADS)
Hatta, Y.; Iancu, E.; Mueller, A. H.; Triantafyllopoulos, D. N.
2018-02-01
We consider the Banfi-Marchesini-Smye (BMS) equation which resums `non-global' energy logarithms in the QCD evolution of the energy lost by a pair of jets via soft radiation at large angles. We identify a new physical regime where, besides the energy logarithms, one has to also resum (anti)collinear logarithms. Such a regime occurs when the jets are highly collimated (boosted) and the relative angles between successive soft gluon emissions are strongly increasing. These anti-collinear emissions can violate the correct time-ordering for time-like cascades and result in large radiative corrections enhanced by double collinear logs, making the BMS evolution unstable beyond leading order. We isolate the first such correction in a recent calculation of the BMS equation to next-to-leading order by Caron-Huot. To overcome this difficulty, we construct a `collinearly-improved' version of the leading-order BMS equation which resums the double collinear logarithms to all orders. Our construction is inspired by a recent treatment of the Balitsky-Kovchegov (BK) equation for the high-energy evolution of a space-like wavefunction, where similar time-ordering issues occur. We show that the conformal mapping relating the leading-order BMS and BK equations correctly predicts the physical time-ordering, but it fails to predict the detailed structure of the collinear improvement.
NASA Astrophysics Data System (ADS)
Li, Xuxu; Li, Xinyang; Wang, Caixia
2018-03-01
This paper proposes an efficient approach to decrease the computational costs of correlation-based centroiding methods used for point source Shack-Hartmann wavefront sensors. Four typical similarity functions have been compared, i.e. the absolute difference function (ADF), ADF square (ADF2), square difference function (SDF), and cross-correlation function (CCF) using the Gaussian spot model. By combining them with fast search algorithms, such as three-step search (TSS), two-dimensional logarithmic search (TDL), cross search (CS), and orthogonal search (OS), computational costs can be reduced drastically without affecting the accuracy of centroid detection. Specifically, OS reduces calculation consumption by 90%. A comprehensive simulation indicates that CCF exhibits a better performance than other functions under various light-level conditions. Besides, the effectiveness of fast search algorithms has been verified.
Kamgang-Youbi, Georges; Herry, Jean-Marie; Bellon-Fontaine, Marie-Noëlle; Brisset, Jean-Louis; Doubla, Avaly; Naïtali, Murielle
2007-01-01
This study aimed to characterize the bacterium-destroying properties of a gliding arc plasma device during electric discharges and also under temporal postdischarge conditions (i.e., when the discharge was switched off). This phenomenon was reported for the first time in the literature in the case of the plasma destruction of microorganisms. When cells of a model bacterium, Hafnia alvei, were exposed to electric discharges, followed or not followed by temporal postdischarges, the survival curves exhibited a shoulder and then log-linear decay. These destruction kinetics were modeled using GinaFiT, a freeware tool to assess microbial survival curves, and adjustment parameters were determined. The efficiency of postdischarge treatments was clearly affected by the discharge time (t*); both the shoulder length and the inactivation rate kmax were linearly modified as a function of t*. Nevertheless, all conditions tested (t* ranging from 2 to 5 min) made it possible to achieve an abatement of at least 7 decimal logarithm units. Postdischarge treatment was also efficient against bacteria not subjected to direct discharge, and the disinfecting properties of “plasma-activated water” were dependent on the treatment time for the solution. Water treated with plasma for 2 min achieved a 3.7-decimal-logarithm-unit reduction in 20 min after application to cells, and abatement greater than 7 decimal logarithm units resulted from the same contact time with water activated with plasma for 10 min. These disinfecting properties were maintained during storage of activated water for 30 min. After that, they declined as the storage time increased. PMID:17557841
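For illustration, a hedged sketch of fitting a shoulder-plus-log-linear survival model of the kind GInaFiT provides; the Geeraerd model without tail is assumed here, and the time points and counts below are invented rather than the Hafnia alvei measurements.

```python
# Geeraerd-type shoulder + log-linear model:
# N(t) = N0 * exp(-kmax*t) * exp(kmax*Sl) / (1 + (exp(kmax*Sl) - 1) * exp(-kmax*t))
import numpy as np
from scipy.optimize import curve_fit

def log10_survivors(t, log_n0, kmax, sl):
    n0 = 10.0 ** log_n0
    shoulder = np.exp(kmax * sl) / (1.0 + (np.exp(kmax * sl) - 1.0) * np.exp(-kmax * t))
    return np.log10(n0 * np.exp(-kmax * t) * shoulder)

t_min = np.array([0, 1, 2, 3, 4, 6, 8, 10, 12], float)           # treatment time, min (illustrative)
log_n = np.array([8.0, 7.9, 7.8, 7.2, 6.3, 4.5, 2.7, 1.0, 0.2])  # log10 CFU/mL (illustrative)

popt, _ = curve_fit(log10_survivors, t_min, log_n, p0=[8.0, 1.0, 2.0])
log_n0, kmax, sl = popt
print(f"shoulder length = {sl:.2f} min, kmax = {kmax:.2f} 1/min")
```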
Size-dependent standard deviation for growth rates: Empirical results and theoretical modeling
NASA Astrophysics Data System (ADS)
Podobnik, Boris; Horvatic, Davor; Pammolli, Fabio; Wang, Fengzhong; Stanley, H. Eugene; Grosse, I.
2008-05-01
We study annual logarithmic growth rates R of various economic variables such as exports, imports, and foreign debt. For each of these variables we find that the distributions of R can be approximated by double exponential (Laplace) distributions in the central parts and power-law distributions in the tails. For each of these variables we further find a power-law dependence of the standard deviation σ(R) on the average size of the economic variable with a scaling exponent surprisingly close to that found for the gross domestic product (GDP) [Phys. Rev. Lett. 81, 3275 (1998)]. By analyzing annual logarithmic growth rates R of wages of 161 different occupations, we find a power-law dependence of the standard deviation σ(R) on the average value of the wages with a scaling exponent β≈0.14 close to those found for the growth of exports, imports, debt, and the growth of the GDP. In contrast to these findings, we observe for payroll data collected from 50 states of the USA that the standard deviation σ(R) of the annual logarithmic growth rate R increases monotonically with the average value of payroll. However, also in this case we observe a power-law dependence of σ(R) on the average payroll with a scaling exponent β≈-0.08 . Based on these observations we propose a stochastic process for multiple cross-correlated variables where for each variable (i) the distribution of logarithmic growth rates decays exponentially in the central part, (ii) the distribution of the logarithmic growth rate decays algebraically in the far tails, and (iii) the standard deviation of the logarithmic growth rate depends algebraically on the average size of the stochastic variable.
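A sketch of the scaling analysis described here, under the assumption of a simple multiplicative-growth toy panel: logarithmic growth rates are computed per unit, and the exponent β in σ(R) ∼ ⟨S⟩^(-β) is read off a log-log fit. The synthetic data (built with β ≈ 0.15) only demonstrate the procedure.

```python
# Estimate the size-dependence exponent of the growth-rate standard deviation.
import numpy as np

rng = np.random.default_rng(2)
n_units, n_years = 400, 30
mean_size = 10 ** rng.uniform(1, 6, n_units)                    # widely spread unit sizes
noise = rng.standard_normal((n_units, n_years)) * (0.3 * mean_size[:, None] ** -0.15)
series = mean_size[:, None] * np.exp(np.cumsum(noise, axis=1))  # multiplicative growth paths

growth = np.diff(np.log(series), axis=1)                        # logarithmic growth rates R
sigma = growth.std(axis=1)                                      # sigma(R) per unit
avg = series.mean(axis=1)                                       # average size per unit

beta = -np.polyfit(np.log10(avg), np.log10(sigma), 1)[0]
print(f"fitted scaling exponent beta ~ {beta:.2f}")             # ~0.15 for this toy panel
```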
A new family of distribution functions for spherical galaxies
NASA Astrophysics Data System (ADS)
Gerhard, Ortwin E.
1991-06-01
The present study describes a new family of anisotropic distribution functions for stellar systems designed to keep control of the orbit distribution at fixed energy. These are quasi-separable functions of energy and angular momentum, and they are specified in terms of a circularity function h(x) which fixes the distribution of orbits on the potential's energy surfaces outside some anisotropy radius. Detailed results are presented for a particular set of radially anisotropic circularity functions h_α(x). In the scale-free logarithmic potential, exact analytic solutions are shown to exist for all scale-free circularity functions. Intrinsic and projected velocity dispersions are calculated and the expected properties are presented in extensive tables and graphs. Several applications of the quasi-separable distribution functions are discussed. They include the effects of anisotropy or a dark halo on line-broadening functions, the radial orbit instability in anisotropic spherical systems, and violent relaxation in spherical collapse.
Transverse vetoes with rapidity cutoff in SCET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hornig, Andrew; Kang, Daekyoung; Makris, Yiannis
We consider di-jet production in hadron collisions where a transverse veto is imposed on radiation for (pseudo-)rapidities in the central region only, where this central region is defined with rapidity cutoff. For the case where the transverse measurement (e.g., transverse energy or min pT for jet veto) is parametrically larger relative to the typical transverse momentum beyond the cutoff, the cross section is insensitive to the cutoff parameter and is factorized in terms of collinear and soft degrees of freedom. The virtuality for these degrees of freedom is set by the transverse measurement, as in typical transverse-momentum dependent observables such as Drell-Yan, Higgs production, and the event shape broadening. This paper focuses on the other region, where the typical transverse momentum below and beyond the cutoff is of similar size. In this region the rapidity cutoff further resolves soft radiation into (u)soft and soft-collinear radiation with different rapidities but identical virtuality. This gives rise to rapidity logarithms of the rapidity cutoff parameter which we resum using renormalization group methods. We factorize the cross section in this region in terms of soft and collinear functions in the framework of soft-collinear effective theory, then further refactorize the soft function as a convolution of the (u)soft and soft-collinear functions. All these functions are calculated at one-loop order. As an example, we calculate a differential cross section for a specific partonic channel, qq′ → qq′, for the jet shape angularities and show that the refactorization allows us to resum the rapidity logarithms and significantly reduce theoretical uncertainties in the jet shape spectrum.
Resumming double logarithms in the QCD evolution of color dipoles
Iancu, E.; Madrigal, J. D.; Mueller, A. H.; ...
2015-05-01
The higher-order perturbative corrections, beyond leading logarithmic accuracy, to the BFKL evolution in QCD at high energy are well known to suffer from a severe lack-of-convergence problem, due to radiative corrections enhanced by double collinear logarithms. Via an explicit calculation of Feynman graphs in light cone (time-ordered) perturbation theory, we show that the corrections enhanced by double logarithms (either energy-collinear, or double collinear) are associated with soft gluon emissions which are strictly ordered in lifetime. These corrections can be resummed to all orders by solving an evolution equation which is non-local in rapidity. This equation can be equivalently rewritten in local form, but with modified kernel and initial conditions, which resum double collinear logs to all orders. We extend this resummation to the next-to-leading order BFKL and BK equations. The first numerical studies of the collinearly-improved BK equation demonstrate the essential role of the resummation in both stabilizing and slowing down the evolution.
Coulomb Logarithm in Nonideal and Degenerate Plasmas
NASA Astrophysics Data System (ADS)
Filippov, A. V.; Starostin, A. N.; Gryaznov, V. K.
2018-03-01
Various methods for determining the Coulomb logarithm in the kinetic theory of transport and various variants of the choice of the plasma screening constant, taking into account and disregarding the contribution of the ion component and the boundary value of the electron wavevector are considered. The correlation of ions is taken into account using the Ornstein-Zernike integral equation in the hypernetted-chain approximation. It is found that the effect of ion correlation in a nondegenerate plasma is weak, while in a degenerate plasma, this effect must be taken into account when screening is determined by the electron component alone. The calculated values of the electrical conductivity of a hydrogen plasma are compared with the values determined experimentally in the megabar pressure range. It is shown that the values of the Coulomb logarithm can indeed be smaller than unity. Special experiments are proposed for a more exact determination of the Coulomb logarithm in a magnetic field for extremely high pressures, for which electron scattering by ions prevails.
The energy distribution of subjets and the jet shape
Kang, Zhong-Bo; Ringer, Felix; Waalewijn, Wouter J.
2017-07-13
We present a framework that describes the energy distribution of subjets of radius r within a jet of radius R. We consider both an inclusive sample of subjets as well as subjets centered around a predetermined axis, from which the jet shape can be obtained. For r << R we factorize the physics at angular scales r and R to resum the logarithms of r/R. For central subjets, we consider both the standard jet axis and the winner-take-all axis, which involve double and single logarithms of r/R, respectively. All relevant one-loop matching coefficients are given, and an inconsistency in some previous results for cone jets is resolved. Our results for the standard jet shape differ from previous calculations at next-to-leading logarithmic order, because we account for the recoil of the standard jet axis due to soft radiation. Numerical results are presented for an inclusive subjet sample for pp → jet + X at next-to-leading order plus leading logarithmic order.
Logarithmic Laplacian Prior Based Bayesian Inverse Synthetic Aperture Radar Imaging.
Zhang, Shuanghui; Liu, Yongxiang; Li, Xiang; Bi, Guoan
2016-04-28
This paper presents a novel Inverse Synthetic Aperture Radar Imaging (ISAR) algorithm based on a new sparse prior, known as the logarithmic Laplacian prior. The newly proposed logarithmic Laplacian prior has a narrower main lobe with higher tail values than the Laplacian prior, which helps to achieve performance improvement on sparse representation. The logarithmic Laplacian prior is used for ISAR imaging within the Bayesian framework to achieve better focused radar image. In the proposed method of ISAR imaging, the phase errors are jointly estimated based on the minimum entropy criterion to accomplish autofocusing. The maximum a posterior (MAP) estimation and the maximum likelihood estimation (MLE) are utilized to estimate the model parameters to avoid manually tuning process. Additionally, the fast Fourier Transform (FFT) and Hadamard product are used to minimize the required computational efficiency. Experimental results based on both simulated and measured data validate that the proposed algorithm outperforms the traditional sparse ISAR imaging algorithms in terms of resolution improvement and noise suppression.
Volatilities, Traded Volumes, and Price Increments in Derivative Securities
NASA Astrophysics Data System (ADS)
Kim, Kyungsik; Lim, Gyuchang; Kim, Soo Yong; Scalas, Enrico
2007-03-01
We apply the detrended fluctuation analysis (DFA) to the statistics of the Korean treasury bond (KTB) futures, from which the logarithmic increments, volatilities, and traded volumes are estimated over a specific time lag. In our case, the logarithmic increment of futures prices has no long-memory property, while the volatility and the traded volume do exhibit long memory. To test whether the volatility clustering is due to an inherent higher-order correlation not detected by applying the DFA directly to the logarithmic increments of the KTB futures, we shuffle the original tick data of futures prices and generate a geometric Brownian random walk with the same mean and standard deviation. Comparison of the three tick-data sets shows that the higher-order correlation inherent in the logarithmic increments produces the volatility clustering. In particular, the DFA results for volatilities and traded volumes support the hypothesis of price changes.
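A minimal DFA sketch in the spirit of this analysis (linear detrending in equal boxes, scaling exponent from a log-log fit); the input series is synthetic white noise standing in for the logarithmic increments, so the exponent should come out near 0.5.

```python
# Detrended fluctuation analysis: exponent ~0.5 means no long memory, >0.5 persistence.
import numpy as np

def dfa_exponent(x, box_sizes):
    y = np.cumsum(x - np.mean(x))                  # integrated profile
    fluct = []
    for n in box_sizes:
        n_boxes = len(y) // n
        f2 = 0.0
        for i in range(n_boxes):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrending
            f2 += np.mean((seg - trend) ** 2)
        fluct.append(np.sqrt(f2 / n_boxes))
    return np.polyfit(np.log(box_sizes), np.log(fluct), 1)[0]

rng = np.random.default_rng(3)
increments = rng.standard_normal(20000)            # stand-in for logarithmic price increments
boxes = np.unique(np.logspace(1, 3, 15).astype(int))
print("DFA exponent:", round(dfa_exponent(increments, boxes), 2))
```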
Volatilities, traded volumes, and the hypothesis of price increments in derivative securities
NASA Astrophysics Data System (ADS)
Lim, Gyuchang; Kim, SooYong; Scalas, Enrico; Kim, Kyungsik
2007-08-01
A detrended fluctuation analysis (DFA) is applied to the statistics of Korean treasury bond (KTB) futures, from which the logarithmic increments, volatilities, and traded volumes are estimated over a specific time lag. In this study, the logarithmic increment of futures prices has no long-memory property, while the volatility and the traded volume do exhibit long memory. To test whether the volatility clustering is due to an inherent higher-order correlation not detected by the direct application of the DFA to the logarithmic increments of KTB futures, we shuffle the original tick data of futures prices and generate a geometric Brownian random walk with the same mean and standard deviation. A comparison of the three tick-data sets shows that the higher-order correlation inherent in the logarithmic increments leads to volatility clustering. In particular, the DFA results for volatilities and traded volumes support the hypothesis of price changes.
NASA Astrophysics Data System (ADS)
Ji, Sungchul
A new mathematical formula referred to as the Planckian distribution equation (PDE) has been found to fit long-tailed histograms generated in various fields of studies, ranging from atomic physics to single-molecule enzymology, cell biology, brain neurobiology, glottometrics, econophysics, and to cosmology. PDE can be derived from a Gaussian-like equation (GLE) by non-linearly transforming its variable, x, while keeping the y coordinate constant. Assuming that GLE represents a random distribution (due to its symmetry), it is possible to define a binary logarithm of the ratio between the areas under the curves of PDE and GLE as a measure of the non-randomness (or order) underlying the biophysicochemical processes generating long-tailed histograms that fit PDE. This new function has been named the Planckian information, IP, which (i) may be a new measure of order that can be applied widely to both natural and human sciences and (ii) can serve as the opposite of the Boltzmann-Gibbs entropy, S, which is a measure of disorder. The possible rationales for the universality of PDE may include (i) the universality of the wave-particle duality embedded in PDE, (ii) the selection of subsets of random processes (thereby breaking the symmetry of GLE) as the basic mechanism of generating order, organization, and function, and (iii) the quantity-quality complementarity as the connection between PDE and Peircean semiotics.
Statistics of Advective Stretching in Three-dimensional Incompressible Flows
NASA Astrophysics Data System (ADS)
Subramanian, Natarajan; Kellogg, Louise H.; Turcotte, Donald L.
2009-09-01
We present a method to quantify kinematic stretching in incompressible, unsteady, isoviscous, three-dimensional flows. We extend the method of Kellogg and Turcotte (J. Geophys. Res. 95:421-432, 1990) to compute the axial stretching/thinning experienced by infinitesimal ellipsoidal strain markers in arbitrary three-dimensional incompressible flows and discuss the differences between our method and the computation of the Finite Time Lyapunov Exponent (FTLE). We use the cellular flow model developed in Solomon and Mezic (Nature 425:376-380, 2003) to study the statistics of stretching in a three-dimensional unsteady cellular flow. We find that the probability density function of the logarithm of normalised cumulative stretching (log S) for a globally chaotic flow, with spatially heterogeneous stretching behavior, is not Gaussian and that the coefficient of variation of the Gaussian distribution does not decrease with time as t^{-1/2}. However, it is observed that stretching becomes exponential, log S ∼ t, and the probability density function of log S becomes Gaussian when the time dependence of the flow and its three-dimensionality are increased to make the stretching behaviour of the flow more spatially uniform. We term these behaviors weak and strong chaotic mixing respectively. We find that for strongly chaotic mixing, the coefficient of variation of the Gaussian distribution decreases with time as t^{-1/2}. This behavior is consistent with a random multiplicative stretching process.
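A toy version of the random multiplicative stretching process invoked at the end: log S accumulates i.i.d. increments, so its distribution tends to Gaussian and its coefficient of variation falls off as t^{-1/2}; the step statistics below are illustrative only.

```python
# Random multiplicative stretching: log S is a sum of i.i.d. increments.
import numpy as np

rng = np.random.default_rng(4)
n_markers, n_steps = 20000, 400
log_stretch = rng.normal(loc=0.05, scale=0.2, size=(n_markers, n_steps))
log_s = np.cumsum(log_stretch, axis=1)             # log of cumulative stretching per marker

for t in (25, 100, 400):
    sample = log_s[:, t - 1]
    cv = sample.std() / sample.mean()              # coefficient of variation of log S
    print(f"t = {t:4d}:  mean log S = {sample.mean():6.2f},  CV = {cv:.3f}")
# CV drops by ~2x for each 4-fold increase in t, i.e. it scales as t^(-1/2)
```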
Olmez, Hülya Kaptan; Aran, Necla
2005-02-01
Mathematical models describing the growth kinetic parameters (lag phase duration and growth rate) of Bacillus cereus as a function of temperature, pH, sodium lactate and sodium chloride concentrations were obtained in this study. In order to obtain a residual distribution closer to a normal distribution, the natural logarithms of the growth kinetic parameters were used in modeling. For reasons of parsimony, the polynomial models were reduced to contain only the coefficients significant at a level of p
Four theorems on the psychometric function.
May, Keith A; Solomon, Joshua A
2013-01-01
In a 2-alternative forced-choice (2AFC) discrimination task, observers choose which of two stimuli has the higher value. The psychometric function for this task gives the probability of a correct response for a given stimulus difference, Δx. This paper proves four theorems about the psychometric function. Assuming the observer applies a transducer and adds noise, Theorem 1 derives a convenient general expression for the psychometric function. Discrimination data are often fitted with a Weibull function. Theorem 2 proves that the Weibull "slope" parameter, β, can be approximated by β(Noise) × β(Transducer), where β(Noise) is the β of the Weibull function that fits best to the cumulative noise distribution, and β(Transducer) depends on the transducer. We derive general expressions for β(Noise) and β(Transducer), from which we derive expressions for specific cases. One case that follows naturally from our general analysis is Pelli's finding that, when d′ ∝ (Δx)^b, β ≈ β(Noise) × b. We also consider two limiting cases. Theorem 3 proves that, as sensitivity improves, 2AFC performance will usually approach that for a linear transducer, whatever the actual transducer; we show that this does not apply at signal levels where the transducer gradient is zero, which explains why it does not apply to contrast detection. Theorem 4 proves that, when the exponent of a power-function transducer approaches zero, 2AFC performance approaches that of a logarithmic transducer. We show that the power-function exponents of 0.4-0.5 fitted to suprathreshold contrast discrimination data are close enough to zero for the fitted psychometric function to be practically indistinguishable from that of a log transducer. Finally, Weibull β reflects the shape of the noise distribution, and we used our results to assess the recent claim that internal noise has higher kurtosis than a Gaussian. Our analysis of β for contrast discrimination suggests that, if internal noise is stimulus-independent, it has lower kurtosis than a Gaussian.
NASA Astrophysics Data System (ADS)
Downie, John D.
1995-08-01
The transmission properties of some bacteriorhodopsin-film spatial light modulators are uniquely suited to allow nonlinear optical image-processing operations to be applied to images with multiplicative noise characteristics. A logarithmic amplitude-transmission characteristic of the film permits the conversion of multiplicative noise to additive noise, which may then be linearly filtered out in the Fourier plane of the transformed image. I present experimental results demonstrating the principle and the capability for several different image and noise situations, including deterministic noise and speckle. The bacteriorhodopsin film studied here displays the logarithmic transmission response for write intensities spanning a dynamic range greater than 2 orders of magnitude.
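A purely numerical sketch of the principle (no film model involved): a logarithm turns multiplicative speckle into additive noise, a crude Fourier-plane low-pass then removes much of it, and exponentiation restores the image; the test scene, noise statistics, and filter radius are arbitrary choices.

```python
# Log transform -> Fourier-plane low-pass -> exponentiate, for multiplicative noise.
import numpy as np

rng = np.random.default_rng(5)
n = 256
yy, xx = np.mgrid[:n, :n]
image = 1.0 + (np.hypot(xx - n / 2, yy - n / 2) < n / 4).astype(float)   # bright disk on a background
noise = rng.gamma(shape=8.0, scale=1.0 / 8.0, size=(n, n))               # unit-mean multiplicative speckle
noisy = image * noise

log_img = np.log(noisy)                                # multiplicative noise becomes additive
spec = np.fft.fftshift(np.fft.fft2(log_img))
radius = np.hypot(xx - n / 2, yy - n / 2)
spec[radius > 20] = 0                                  # crude low-pass filter in the Fourier plane
filtered = np.exp(np.real(np.fft.ifft2(np.fft.ifftshift(spec))))

rms_before = np.sqrt(np.mean((noisy - image) ** 2))
rms_after = np.sqrt(np.mean((filtered - image) ** 2))
print(f"RMS error vs. clean image: {rms_before:.3f} -> {rms_after:.3f}")  # typically drops substantially
```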
Next-to-leading order Balitsky-Kovchegov equation with resummation
Lappi, T.; Mantysaari, H.
2016-05-03
Here, we solve the Balitsky-Kovchegov evolution equation at next-to-leading order accuracy including a resummation of large single and double transverse momentum logarithms to all orders. We numerically determine an optimal value for the constant under the large transverse momentum logarithm that enables including a maximal amount of the full NLO result in the resummation. When this value is used, the contribution from the αs² terms without large logarithms is found to be small at large saturation scales and at small dipoles. Close to initial conditions relevant for phenomenological applications, these fixed-order corrections are shown to be numerically important.
Wade, E.J.; Stone, R.S.
1959-03-10
Electronic amplifier circuits, especially a logarithmic amplifier characterized by its greatly improved stability, are discussed. According to the invention, means are provided to feed back the output voltage to a diode in the amplifier input circuit, the diode being utilized to produce the logarithmic characteristic. A compensating diode is connected in opposition therewith and has its filament operated from the same source as the filament of the logarithmic diode. A bias current of relatively large value compared with the signal current is continuously passed through the compensating diode to render it insensitive to variations in the signal current, so that the stability of the amplifier is unimpaired.
Itoh, Taihei; Kimura, Masaomi; Sasaki, Shingo; Owada, Shingen; Horiuchi, Daisuke; Sasaki, Kenichi; Ishida, Yuji; Takahiko, Kinjo; Okumura, Ken
2014-04-01
Low conduction velocity (CV) in the area showing low electrogram amplitude (EA) is characteristic of the reentry circuit of atypical atrial flutter (AFL). The quantitative relationship between CV and EA remains unclear. We characterized the AFL reentry circuit in the right atrium (RA), focusing on the relationship between local CV and bipolar EA on the circuit. We investigated 26 RA AFL (10 with typical AFL; 10 atypical incisional AFL; 6 atypical nonincisional AFL) using the CARTO system. By referring to isochronal and propagation maps delineated during AFL, points activated faster on the circuit were selected (median, 7 per circuit). At the 196 selected points obtained from all patients, local CV measured between the adjacent points and bipolar EA were analyzed. There was a highly significant correlation between local CV and the natural logarithm of EA (lnEA) (R² = 0.809, P < 0.001). Among the 26 AFL, linear regression analysis of mean CV, calculated by dividing circuit length (152.3 ± 41.7 mm) by tachycardia cycle length (TCL) (median 246 msec), on mean lnEA, calculated by dividing the area under the curve of lnEA during one tachycardia cycle by TCL, showed y = 0.695 + 0.191x (where y = mean CV and x = mean lnEA; R² = 0.993, P < 0.001). Local CV estimated from EA with the use of this formula showed a highly significant linear correlation with that measured by the map (R² = 0.809, P < 0.001). lnEA and estimated local CV show a highly positive linear correlation. CV can possibly be estimated from EA measured by CARTO mapping. © 2013 Wiley Periodicals, Inc.
Sample allocation balancing overall representativeness and stratum precision.
Diaz-Quijano, Fredi Alexander
2018-05-07
In large-scale surveys, it is often necessary to distribute a preset sample size among a number of strata. Researchers must decide between prioritizing overall representativeness and the precision of stratum estimates. Hence, I evaluated different sample allocation strategies based on stratum size. The strategies evaluated herein included allocation proportional to stratum population; equal sample sizes for all strata; and allocation proportional to the natural logarithm, cubic root, and square root of the stratum population. This study used the fact that, for a preset sample size, the dispersion index of the stratum sampling fractions is correlated with the error of the population estimator, while the dispersion index of the stratum-specific sampling errors measures the inequality in the distribution of precision. Identification of a balanced and efficient strategy was based on comparing these two dispersion indices. The balance and efficiency of the strategies changed depending on the overall sample size. As the sample to be distributed increased, the most efficient allocation strategies were, in order, equal sample sizes for each stratum; allocation proportional to the logarithm, to the cubic root, and to the square root; and allocation proportional to the stratum population. Depending on sample size, each of the strategies evaluated could be considered in optimizing the sample to preserve both overall representativeness and stratum-specific precision. Copyright © 2018 Elsevier Inc. All rights reserved.
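A small sketch comparing the allocation rules named above for a preset total sample, using made-up stratum populations and a simple variance-to-mean dispersion index as a stand-in for the paper's criterion.

```python
# Split a preset sample among strata by several size-based rules and compare
# the spread of the resulting sampling fractions.
import numpy as np

populations = np.array([2_000_000, 500_000, 120_000, 60_000, 15_000])  # stratum sizes (made up)
total_sample = 5000

weights = {
    "proportional": populations.astype(float),
    "equal":        np.ones(populations.size),
    "log":          np.log(populations),
    "cubic_root":   populations ** (1 / 3),
    "square_root":  np.sqrt(populations),
}

for name, w in weights.items():
    n_h = total_sample * w / w.sum()                 # stratum sample sizes
    fractions = n_h / populations                    # stratum sampling fractions
    dispersion = fractions.var() / fractions.mean()  # variance-to-mean dispersion index
    print(f"{name:12s} fractions: {np.round(fractions, 4)}  dispersion: {dispersion:.2e}")
```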
Zhu, Xiaoli; Sun, Liya; Chen, Yangyang; Ye, Zonghuang; Shen, Zhongming; Li, Genxi
2013-09-15
Graphene, a single-atom-thick, two-dimensional carbon nanomaterial, has been proven to possess many unique properties, one of which is the recent discovery that it can interact with single-stranded DNA through noncovalent π-π stacking. In this work, we demonstrate that a new strategy to fabricate many kinds of biosensors can be developed by combining this property with cascade chemical reactions. Taking the fabrication of a glucose sensor as an example, while the detection target, glucose, may regulate the graphene-DNA interaction through three cascade chemical reactions, electrochemical techniques are employed to detect the target-regulated graphene-DNA interaction. Experimental results show that, in the range from 5 μM to 20 mM, the natural logarithm of the glucose concentration varies linearly with the logarithm of the amperometric response, giving a favorable detection limit and detection range. The proposed biosensor also shows favorable selectivity and has the advantage of requiring no labeling. Moreover, by controlling the cascade chemical reactions, detection of a variety of other targets may be achieved; thus the strategy proposed in this work may have wide application potential in the future. Copyright © 2013 Elsevier B.V. All rights reserved.
Resummation of jet veto logarithms at N3LLa + NNLO for W+W- production at the LHC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dawson, S.; Jaiswal, P.; Li, Ye
We compute the resummed on-shell W+W- production cross section under a jet veto at the LHC to partial N3LL order matched to the fixed-order NNLO result. Differential NNLO cross sections are obtained from an implementation of qT subtraction in Sherpa. The two-loop virtual corrections to the qq¯→W+W- amplitude, used in both fixed-order and resummation predictions, are extracted from the public code qqvvamp. We perform resummation using soft collinear effective theory, with approximate beam functions where only the logarithmic terms are included at two-loop. In addition to scale uncertainties from the hard matching scale and the factorization scale, rapidity scale variations are obtained within the analytic regulator approach. Our resummation results show a decrease in the jet veto cross section compared to NNLO fixed-order predictions, with reduced scale uncertainties compared to NNLL+NLO resummed predictions. We include the loop-induced gg contribution with jet veto resummation to NLL+LO. The prediction shows good agreement with recent LHC measurements.
Climatology of contribution-weighted tropical rain rates based on TRMM 3B42
NASA Astrophysics Data System (ADS)
Venugopal, V.; Wallace, J. M.
2016-10-01
The climatology of annual mean tropical rain rate is investigated based on merged Tropical Rainfall Measuring Mission (TRMM) 3B42 data. At 0.25° × 0.25° spatial resolution and 3-hourly temporal resolution, half the rain is concentrated within only ~1% of the area of the tropics at any given instant. When plotted as a function of the logarithm of rain rate, the cumulative contribution of rate-ranked rain occurrences to the annual mean rainfall in each grid box is S-shaped and its derivative, the contribution-weighted rain rate spectrum, is Gaussian shaped. The 50% intercept of the cumulative contribution, R50, is almost equivalent to the contribution-weighted mean logarithmic rain rate, R̄L, based on all significant rain occurrences. The spatial patterns of R50 and R̄L are similar to those obtained by mapping the fraction of the annual accumulation explained by rain occurrences with rates above various specified thresholds. The geographical distribution of R50 confirms the existence of patterns noted in prior analyses based on TRMM precipitation radar data and reveals several previously unnoticed features.
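A sketch of the R50 diagnostic on a synthetic lognormal sample of rain-rate occurrences: rank the occurrences, accumulate their share of the total, and take the rate at the 50% crossing. The contribution-weighted mean log rate shown alongside is computed in the simplest way and may differ in detail from the paper's definition.

```python
# R50: the rain rate at which rate-ranked occurrences have contributed half the total.
import numpy as np

rng = np.random.default_rng(6)
rain = rng.lognormal(mean=0.0, sigma=1.2, size=20000)       # mm/h, synthetic occurrences

order = np.argsort(rain)                                    # rank from lightest to heaviest
cum_contribution = np.cumsum(rain[order]) / rain.sum()
r50 = rain[order][np.searchsorted(cum_contribution, 0.5)]   # rate at the 50% intercept

weighted_mean_log = np.exp(np.average(np.log(rain), weights=rain))  # simple contribution-weighted mean log rate
print(f"R50 = {r50:.2f} mm/h, contribution-weighted mean = {weighted_mean_log:.2f} mm/h")
```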
NASA Technical Reports Server (NTRS)
Betts, J. N.; Holland, H. D.
1991-01-01
Data for the burial efficiency of organic carbon with marine sediments have been compiled for 69 locations. The burial efficiency as here defined is the ratio of the quantity of organic carbon which is ultimately buried to that which reaches the sediment-water interface. As noted previously, the sedimentation rate exerts a dominant influence on the burial efficiency. The logarithm of the burial efficiency is linearly related to the logarithm of the sedimentation rate at low sedimentation rates. At high sedimentation rates the burial efficiency can exceed 50% and becomes nearly independent of the sedimentation rate. The residual of the burial efficiency after the effect of the sedimentation rate has been subtracted is a weak function of the O2 concentration in bottom waters. The scatter is sufficiently large, so that the effect of the O2 concentration in bottom waters on the burial efficiency of organic matter could be either negligible or a minor but significant part of the mechanism that controls the level of O2 in the atmosphere.
The logarithmic Cardy case: Boundary states and annuli
NASA Astrophysics Data System (ADS)
Fuchs, Jürgen; Gannon, Terry; Schaumann, Gregor; Schweigert, Christoph
2018-05-01
We present a model-independent study of boundary states in the Cardy case that covers all conformal field theories for which the representation category of the chiral algebra is a - not necessarily semisimple - modular tensor category. This class, which we call finite CFTs, includes all rational theories, but goes much beyond these, and in particular comprises many logarithmic conformal field theories. We show that the following two postulates for a Cardy case are compatible beyond rational CFT and lead to a universal description of boundary states that realizes a standard mathematical setup: First, for bulk fields, the pairing of left and right movers is given by (a coend involving) charge conjugation; and second, the boundary conditions are given by the objects of the category of chiral data. For rational theories our proposal reproduces the familiar result for the boundary states of the Cardy case. Further, with the help of sewing we compute annulus amplitudes. Our results show in particular that these possess an interpretation as partition functions, a constraint that for generic finite CFTs is much more restrictive than for rational ones.
Deposition and persistence of beachcast seabird carcasses
van Pelt, Thomas I.; Piatt, John F.
1995-01-01
Following a massive wreck of guillemots (Uria aalge) in late winter and spring of 1993, we monitored the deposition and subsequent disappearance of 398 beachcast guillemot carcasses on two beaches in Resurrection Bay, Alaska, during a 100 day period. Deposition of carcasses declined logarithmically with time after the original event. Since fresh carcasses were more likely to be removed between counts than older carcasses, persistence rates increased logarithmically over time. Scavenging appeared to be the primary cause of carcass removal, followed by burial in beach debris and sand. Along-shore transport was negligible. We present an equation which estimates the number of carcasses deposited at time zero from beach surveys conducted some time later, using non-linear persistence rates that are a function of time. We use deposition rates to model the accumulation of beached carcasses, accounting for further deposition subsequent to the original event. Finally, we present a general method for extrapolating from a single count the number of carcasses cumulatively deposited on surveyed beaches, and discuss how our results can be used to assess the magnitude of mass seabird mortality events from beach surveys.
Dynamical conductivity at the dirty superconductor-metal quantum phase transition.
Del Maestro, Adrian; Rosenow, Bernd; Hoyos, José A; Vojta, Thomas
2010-10-01
We study the transport properties of ultrathin disordered nanowires in the neighborhood of the superconductor-metal quantum phase transition. To this end we combine numerical calculations with analytical strong-disorder renormalization group results. The quantum critical conductivity at zero temperature diverges logarithmically as a function of frequency. In the metallic phase, it obeys activated scaling associated with an infinite-randomness quantum critical point. We extend the scaling theory to higher dimensions and discuss implications for experiments.
Longitudinal structure function from logarithmic slopes of F2 at low x
NASA Astrophysics Data System (ADS)
Boroun, G. R.
2018-01-01
Using Laplace transform techniques, I calculate the longitudinal structure function FL(x, Q²) from the scaling violations of the proton structure function F2(x, Q²) and make a critical study of this relationship between the structure functions at leading order (LO) up to next-to-next-to-leading order (NNLO) analysis at small x. Furthermore, I consider heavy quark contributions to the relation between the structure functions, which leads to a compact formula for Nf = 3 + heavy. The nonlinear corrections to the longitudinal structure function at LO up to NNLO analysis are shown for Nf = 4 (light quark flavors), based on the nonlinear corrections at R = 2 and R = 4 GeV⁻¹. The results are compared with experimental data on the longitudinal proton structure function FL in the range 6.5 ≤ Q² ≤ 800 GeV².
NASA Astrophysics Data System (ADS)
Weiss, J. R.; Saunders, A.; Qiu, Q.; Foster, J. H.; Gomez, D.; Bevis, M. G.; Smalley, R., Jr.; Cimbaro, S.; Lenzano, L. E.; Barón, J.; Baez, J. C.; Echalar, A.; Avery, J.; Wright, T. J.
2017-12-01
We use a large regional network of continuous GPS sites to investigate postseismic deformation following the Mw 8.8 Maule and Mw 8.1 Pisagua earthquakes in Chile. Geodetic observations of surface displacements associated with megathrust earthquakes aid our understanding of the subduction zone earthquake cycle, including postseismic processes such as afterslip and viscoelastic relaxation. The observations also help place constraints on the rheology and structure of the crust and upper mantle. We first empirically model the data and find that, while single-term logarithmic functions adequately fit the postseismic time series, they do a poor job of characterizing the rapid displacements in the days to weeks following the earthquakes. Combined exponential-logarithmic functions better capture the inferred near-field transition between afterslip and viscous relaxation; however, displacements are best fit by three-term exponential functions with characteristic decay times of 15, 250, and 1500 days. Viscoelastic modeling of the velocity field and time series following the Maule earthquake suggests that the rheology is complex but is consistent with a 100-km-thick asthenospheric channel of viscosity 10^18 Pa s sandwiched between a 40-km-thick elastic lid and a strong viscoelastic upper mantle. Variations in lid thickness of up to 40 km may be present, and in some locations rapid deformation within the first months to years following the Maule event requires an even lower effective viscosity or a significant contribution from afterslip. We investigate this further by jointly inverting the GPS data for the time evolution of afterslip and viscous flow in the mantle wedge surrounding the Maule event.
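A hedged sketch of the empirical-model comparison step: fit a single-term logarithmic function and a three-term exponential with the quoted decay times (15, 250, 1500 days) to one displacement time series. The observations below are synthetic, so only the fitting recipe, not the numbers, reflects the study.

```python
# Compare a logarithmic and a fixed-decay-time three-term exponential postseismic model.
import numpy as np
from scipy.optimize import curve_fit

def log_model(t, a, tau, c):
    return c + a * np.log(1.0 + t / tau)

def exp3_model(t, a1, a2, a3, c):
    return (c + a1 * (1 - np.exp(-t / 15.0))
              + a2 * (1 - np.exp(-t / 250.0))
              + a3 * (1 - np.exp(-t / 1500.0)))

t = np.arange(1.0, 2000.0, 5.0)                       # days after the earthquake
obs = exp3_model(t, 40.0, 120.0, 80.0, 0.0) + np.random.default_rng(7).normal(0, 2, t.size)  # synthetic mm

p_log, _ = curve_fit(log_model, t, obs, p0=[50.0, 100.0, 0.0])
p_exp, _ = curve_fit(exp3_model, t, obs, p0=[30.0, 100.0, 100.0, 0.0])

for name, model, p in [("log", log_model, p_log), ("3-exp", exp3_model, p_exp)]:
    rms = np.sqrt(np.mean((obs - model(t, *p)) ** 2))
    print(f"{name:5s} fit RMS misfit: {rms:.2f} mm")
```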
NASA Astrophysics Data System (ADS)
Yu, Fei; Zuo, Jian; Mu, Kai-jun; Zhang, Zhen-wei; Zhang, Liang-liang; Zhang, Lei-wei; Zhang, Cun-lin
2013-08-01
Terahertz spectroscopy is a powerful tool for materials investigation. Low-frequency vibrations are usually investigated by means of the absorption coefficient alone, regardless of the refractive index, which means that some inherent low-frequency vibrational information of chemical compounds is disregarded. Moreover, due to scattering inside the sample, absorption features can be distorted, so absorption-based material identification is not always reliable. Here, a statistical parameter named the reduced absorption cross section (RACS) is introduced. It can help us investigate molecular dynamics and also distinguish chemical compounds with similar functional groups. Experiments were carried out on L-Tyrosine and L-Phenylalanine and on mixtures with different mass ratios as examples of the application of RACS. The results show that the RACS spectra of L-Tyrosine and L-Phenylalanine retain the spectral fingerprint information of the absorption spectrum. The log-log plots of the RACS of the two amino acids show power-law behavior, σR(ν̃) ∼ ν̃^α, i.e., a linear relation between wavenumber and RACS in the double logarithmic plot; the exponents α are the slopes of the RACS curves in that plot. The large differences in the exponents α between the two amino acids and their mixtures can be seen directly from the slopes of the RACS curves. Thus the RACS analytical method can be used to distinguish complex compounds with similar functional groups, and mixtures, from others that have similar absorption peaks in the THz region.
Product and Quotient Rules from Logarithmic Differentiation
ERIC Educational Resources Information Center
Chen, Zhibo
2012-01-01
A new application of logarithmic differentiation is presented, which provides an alternative elegant proof of two basic rules of differentiation: the product rule and the quotient rule. The proof can intrigue students, help promote their critical thinking and rigorous reasoning and deepen their understanding of previously encountered concepts. The…
Regularized Laplacian determinants of self-similar fractals
NASA Astrophysics Data System (ADS)
Chen, Joe P.; Teplyaev, Alexander; Tsougkas, Konstantinos
2018-06-01
We study the spectral zeta functions of the Laplacian on fractal sets which are locally self-similar fractafolds, in the sense of Strichartz. These functions are known to meromorphically extend to the entire complex plane, and the locations of their poles, sometimes referred to as complex dimensions, are of special interest. We give examples of locally self-similar sets such that their complex dimensions are not on the imaginary axis, which allows us to interpret their Laplacian determinant as the regularized product of their eigenvalues. We then investigate a connection between the logarithm of the determinant of the discrete graph Laplacian and the regularized one.
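A toy numerical counterpart on the discrete side, assuming nothing about fractafolds: for a finite graph the regularized determinant is simply the product of the nonzero Laplacian eigenvalues, and its logarithm is tied to spanning-tree counts by the matrix-tree theorem. A cycle graph keeps the check exact.

```python
# log det'(L) for a cycle graph C_n; matrix-tree theorem: #spanning trees = det'(L) / n.
import numpy as np

n = 12
adj = np.zeros((n, n))
for i in range(n):                          # build the cycle graph C_n
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1
laplacian = np.diag(adj.sum(axis=1)) - adj

eigvals = np.linalg.eigvalsh(laplacian)
nonzero = eigvals[eigvals > 1e-10]
log_det = np.sum(np.log(nonzero))           # logarithm of the product of nonzero eigenvalues

print("log det'(L) =", round(log_det, 4))
print("spanning trees =", round(np.exp(log_det) / n))   # C_n has exactly n spanning trees
```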
Sulfate passivation in the lead-acid system as a capacity limiting process
NASA Astrophysics Data System (ADS)
Kappus, W.; Winsel, A.
1982-10-01
Calculations of the discharge capacity of Pb and PbO₂ electrodes as a function of various parameters are presented. They are based on the solution-precipitation mechanism for the discharge reaction and its formulation by Winsel et al. A logarithmic pore size distribution is used to fit experimental porosigrams of Pb and PbO₂ electrodes. Based on this pore size distribution the capacity is calculated as a function of current, BET surface, and porosity of the PbSO₄ diaphragm. The PbSO₄ supersaturation as the driving force of the diffusive transport is chosen as a free parameter.
NASA Astrophysics Data System (ADS)
Sienkiewicz, J.; Holyst, J. A.
2005-05-01
We have examined the topology of 21 public transport networks in Poland. Our data exhibit several universal features in the considered systems when they are analyzed from the point of view of evolving networks. Depending on the assumed definition of the network topology, the degree distribution can follow a power law p(k) ∼ k^(−γ) or can be described by an exponential function p(k) ∼ exp(−αk). In the first case one observes that the mean distance between two nodes is a linear function of the logarithm of the product of their degrees.
NASA Astrophysics Data System (ADS)
Grosberg, Alexander Y.; Nechaev, Sergei K.
2015-08-01
We consider a flexible branched polymer with quenched branch structure, and show that its conformational entropy as a function of its gyration radius R, at large R, obeys, in the scaling sense, ΔS ∼ R²/(a²L), where a is the bond length (or Kuhn segment) and L is an average spanning distance. We show that this estimate is valid up to at most a logarithmic correction for any tree. We do so by explicitly computing the largest eigenvalues of Kramers matrices for both regular and 'sparse' three-branched trees, uncovering on the way their peculiar mathematical properties.
Definition and Evolution of Transverse Momentum Distributions
NASA Astrophysics Data System (ADS)
Echevarría, Miguel G.; Idilbi, Ahmad; Scimemi, Ignazio
We consider the definition of unpolarized transverse-momentum-dependent parton distribution functions while staying on-the-light-cone. By imposing a requirement of identical treatment of two collinear sectors, our approach, compatible with a generic factorization theorem with the soft function included, is valid for all non-ultra-violet regulators (as it should), an issue which causes much confusion in the whole field. We explain how large logarithms can be resummed in a way which can be considered as an alternative to the use of Collins-Soper evolution equation. The evolution properties are also discussed and the gauge-invariance, in both classes of gauges, regular and singular, is emphasized.
Characterization of mixing in an electroosmotically stirred continuous micro mixer
NASA Astrophysics Data System (ADS)
Beskok, Ali
2005-11-01
We present theoretical and numerical studies of mixing in a straight micro channel with zeta potential patterned surfaces. A steady pressure driven flow is maintained in the channel in addition to a time dependent electroosmotic flow, generated by a stream-wise AC electric field. The zeta potential patterns are placed critically in the channel to achieve spatially asymmetric time-dependent flow patterns that lead to chaotic stirring. Fixing the geometry, we performed parametric studies of passive particle motion that led to generation of Poincare sections and characterization of chaotic strength by finite time Lyapunov exponents. The parametric studies were performed as a function of the Womersley number (normalized AC frequency) and the ratio of Poiseuille flow and electroosmotic velocities. After determining the non-dimensional parameters that led to high chaotic strength, we performed spectral element simulations of species transport and mixing at high Peclet numbers, and characterized mixing efficiency using the Mixing Index inverse. Mixing lengths proportional to the natural logarithm of the Peclet number are reported. Using the optimum non-dimensional parameters and the typical magnitudes involved in electroosmotic flows, we were able to determine the physical dimensions and operation conditions for a prototype micro-mixer.
Parameter identification of JONSWAP spectrum acquired by airborne LIDAR
NASA Astrophysics Data System (ADS)
Yu, Yang; Pei, Hailong; Xu, Chengzhong
2017-12-01
In this study, we developed the first linearized Joint North Sea Wave Project (JONSWAP) spectrum (JS) formulation, obtained by transforming the JS expression to the natural logarithmic scale. This transformation is convenient for defining the least squares function in terms of the scale and shape parameters. We identified these two wind-dependent parameters to better understand the wind effect on surface waves. Because of its efficiency and high resolution, we employed the airborne Light Detection and Ranging (LIDAR) system for our measurements. Due to the lack of actual data, we simulated ocean waves in the MATLAB environment, which can be easily translated into an industrial programming language. We utilized the Longuet-Higgins (LH) random-phase method to generate the time series of wave records and used the fast Fourier transform (FFT) technique to compute the power spectral density. After validating these procedures, we identified the JS parameters by minimizing the mean-square error between the target spectrum and the estimated spectrum obtained by FFT. We determined that the estimation error is related to the amount of available wave record data. Finally, we found the inverse computation of wind factors (wind speed and wind fetch length) to be robust and sufficiently precise for wave forecasting.
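A sketch of the log-linearization idea in Python (the paper works in MATLAB). It assumes the peak frequency is taken from the spectral maximum and uses the standard JONSWAP parameterization; the exact least squares setup of the paper may differ.

```python
# Sketch (assumptions noted in the lead-in). Taking the natural log of the JONSWAP form
#   S(f) = alpha * g^2 * (2*pi)^-4 * f^-5 * exp(-1.25*(fp/f)^4) * gamma^r(f),
#   r(f) = exp(-(f - fp)^2 / (2*sigma^2*fp^2)),  sigma = 0.07 (f<=fp) or 0.09 (f>fp),
# gives  ln S(f) = ln(alpha) + K(f) + r(f)*ln(gamma),  which is linear in
# [ln(alpha), ln(gamma)] once fp is fixed (here: taken at the spectral peak).
import numpy as np

g = 9.81

def jonswap(f, alpha, gamma, fp):
    sigma = np.where(f <= fp, 0.07, 0.09)
    r = np.exp(-((f - fp) ** 2) / (2 * sigma**2 * fp**2))
    return alpha * g**2 * (2*np.pi)**-4 * f**-5 * np.exp(-1.25*(fp/f)**4) * gamma**r

def fit_alpha_gamma(f, S_est):
    fp = f[np.argmax(S_est)]                      # peak frequency from the estimate
    sigma = np.where(f <= fp, 0.07, 0.09)
    r = np.exp(-((f - fp) ** 2) / (2 * sigma**2 * fp**2))
    K = np.log(g**2 * (2*np.pi)**-4 * f**-5) - 1.25*(fp/f)**4
    A = np.column_stack([np.ones_like(f), r])     # unknowns: [ln alpha, ln gamma]
    coef, *_ = np.linalg.lstsq(A, np.log(S_est) - K, rcond=None)
    return np.exp(coef[0]), np.exp(coef[1]), fp

# Synthetic check: a noisy spectrum with alpha = 0.012, gamma = 3.3, fp = 0.1 Hz
f = np.linspace(0.05, 0.5, 200)
rng = np.random.default_rng(0)
S_noisy = jonswap(f, 0.012, 3.3, 0.1) * rng.lognormal(0.0, 0.1, f.size)
print(fit_alpha_gamma(f, S_noisy))
```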
Density and energy relaxation in an open one-dimensional system
NASA Astrophysics Data System (ADS)
Jose, Prasanth P.; Bagchi, Biman
2004-05-01
A new master equation to mimic the dynamics of a collection of interacting random walkers in an open system is proposed and solved numerically. In this model, the random walkers interact through excluded volume interaction (single-file system), and the total number of walkers in the lattice can fluctuate because of exchange with a bath. In addition, the movement of the random walkers is biased by an external perturbation. Two models for the latter are considered: (1) an inverse potential (V ∝ 1/r), where r is the distance between the center of the perturbation and the random walker, and (2) an inverse sixth-power potential (V ∝ 1/r⁶). The calculated density of the walkers and the total energy show interesting dynamics. When the size of the system is comparable to the range of the perturbing field, the energy relaxation is found to be highly nonexponential. In this range, the system can show stretched exponential (exp(-(t/τ_s)^β)) and even logarithmic time dependence of energy relaxation over a limited range of time. Introduction of density exchange in the lattice markedly weakens this nonexponentiality of the relaxation function, irrespective of the nature of the perturbation.
Synchronization in scale-free networks: The role of finite-size effects
NASA Astrophysics Data System (ADS)
Torres, D.; Di Muro, M. A.; La Rocca, C. E.; Braunstein, L. A.
2015-06-01
Synchronization problems in complex networks are very often studied by researchers due to their many applications to various fields such as neurobiology, e-commerce and completion of tasks. In particular, scale-free networks with degree distribution P(k) ∼ k^(-λ) are widely used in research since they are ubiquitous in Nature and other real systems. In this paper we focus on the surface relaxation growth model in scale-free networks with 2.5 < λ < 3, and study the scaling behavior of the fluctuations, in the steady state, with the system size N. We find a novel behavior of the fluctuations characterized by a crossover between two regimes at a value of N = N* that depends on λ: a logarithmic regime, found in previous research, and a constant regime. We propose a function that describes this crossover, which is in very good agreement with the simulations. We also find that, for a system size above N*, the fluctuations decrease with λ, which means that the synchronization of the system improves as λ increases. We explain this crossover analyzing the role of the network's heterogeneity produced by the system size N and the exponent of the degree distribution.
Rizzo, Stanislao; Tartaro, Ruggero; Barca, Francesco; Caporossi, Tomaso; Bacherini, Daniela; Giansanti, Fabrizio
2017-12-08
The inverted flap (IF) technique has recently been introduced in macular hole (MH) surgery. The IF technique has shown an increase in the success rate in the case of large MHs and in MHs associated with high myopia. This study reports the anatomical and functional results in a large series of patients affected by MH treated using pars plana vitrectomy and gas tamponade combined with internal limiting membrane (ILM) peeling or IF. This is a retrospective, consecutive, nonrandomized comparative study of patients affected by idiopathic or myopic MH treated using small-gauge pars plana vitrectomy (25- or 23-gauge) between January 2011 and May 2016. The patients were divided into two groups according to the ILM removal technique (complete removal vs. IF). A subgroup analysis was performed according to the MH diameter (MH < 400 µm and MH ≥ 400 µm), axial length (AL < 26 mm and AL ≥ 26 mm), and the presence of chorioretinal atrophy in the macular area (present or absent). We included 620 eyes of 570 patients affected by an MH: 300 patients underwent pars plana vitrectomy and ILM peeling, and 320 patients underwent pars plana vitrectomy and IF. Overall, 84.94% of the patients had complete anatomical success, characterized by MH closure after the operation. In particular, among the patients who underwent only ILM peeling the closure rate was 78.75%, whereas among the patients who underwent the IF technique it was 91.93% (P = 0.001). Among the patients affected by full-thickness MH ≥ 400 µm, success was achieved in 95.6% of the cases in the IF group and in 78.6% in the ILM peeling group (P = 0.001); among the patients with an axial length ≥ 26 mm, success was achieved in 88.4% of the cases in the IF group and in 38.9% in the ILM peeling group (P = 0.001). Average preoperative best-corrected visual acuity was 0.77 (SD = 0.32) logarithm of the minimum angle of resolution (20/118 Snellen) in the peeling group and 0.74 (SD = 0.33) logarithm of the minimum angle of resolution (20/110 Snellen) in the IF group (P = 0.31). Mean postoperative best-corrected visual acuity was 0.52 (SD = 0.42) logarithm of the minimum angle of resolution (20/66 Snellen) in the peeling group and 0.43 (SD = 0.31) logarithm of the minimum angle of resolution (20/53 Snellen) in the IF group (P = 0.003). Vitrectomy combined with the inverted ILM flap technique appears to be an effective surgery for idiopathic and myopic large MHs, improving both functional and anatomical outcomes.
Comments on "The multisynapse neural network and its application to fuzzy clustering".
Yu, Jian; Hao, Pengwei
2005-05-01
In the above-mentioned paper, Wei and Fahn proposed a neural architecture, the multisynapse neural network, to solve constrained optimization problems including high-order, logarithmic, and sinusoidal forms. As one of its main applications, a fuzzy bidirectional associative clustering network (FBACN) was proposed for fuzzy-partition clustering according to the objective-functional method. The connection between the objective-functional-based fuzzy c-partition algorithms and FBACN is the Lagrange multiplier approach. Unfortunately, the Lagrange multiplier approach was applied incorrectly, so FBACN does not equivalently minimize its corresponding constrained objective function. Additionally, Wei and Fahn adopted the traditional definition of a fuzzy c-partition, which is not satisfied by FBACN. Therefore, FBACN cannot solve constrained optimization problems either.
Critical N = (1, 1) general massive supergravity
NASA Astrophysics Data System (ADS)
Deger, Nihat Sadik; Moutsopoulos, George; Rosseel, Jan
2018-04-01
In this paper we study the supermultiplet structure of N = (1, 1) General Massive Supergravity at non-critical and critical points of its parameter space. To do this, we first linearize the theory around its maximally supersymmetric AdS3 vacuum and obtain the full linearized Lagrangian, including fermionic terms. At generic values of the parameters, the linearized modes can be organized as two massless and two massive multiplets, related by supersymmetry in the standard way. At critical points logarithmic modes appear, and we find that at three of these points some of the supersymmetry transformations are non-invertible in the logarithmic multiplets. At the fourth critical point, however, there is a massive logarithmic multiplet with invertible supersymmetry transformations.
Ericson, M. Nance; Rochelle, James M.
1994-01-01
A logarithmic current measurement circuit for operating upon an input electric signal utilizes a quad, dielectrically isolated, well-matched, monolithic bipolar transistor array. One group of circuit components within the circuit cooperates with two transistors of the array to convert the input signal logarithmically and provide a first output signal which is temperature-dependent, and another group of circuit components cooperates with the other two transistors of the array to provide a second output signal which is temperature-dependent. A divider ratios the first and second output signals to provide a resultant output signal which is independent of temperature. The method of the invention includes the operating steps performed by the measurement circuit.
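A numerical sketch of the temperature-cancellation principle only, not the patented circuit: each matched transistor pair produces a logarithmic output proportional to kT/q, so the ratio of the two outputs is independent of temperature. The current values are illustrative assumptions.

```python
# Sketch of the temperature-cancellation principle (not the patented circuit):
# two idealized logarithmic conversions, each proportional to kT/q, are ratioed,
# so the temperature-dependent factor cancels.
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
Q_E = 1.602176634e-19   # elementary charge, C

def log_output(i_signal, i_reference, temperature_k):
    """Idealized log conversion: V = (kT/q) * ln(i_signal / i_reference)."""
    return (K_B * temperature_k / Q_E) * math.log(i_signal / i_reference)

def ratioed_output(i_in, i_ref, i_scale, temperature_k):
    """Divider output: the (kT/q) factor cancels in the ratio."""
    v1 = log_output(i_in, i_ref, temperature_k)     # first temperature-dependent signal
    v2 = log_output(i_scale, i_ref, temperature_k)  # second temperature-dependent signal
    return v1 / v2

for T in (250.0, 300.0, 350.0):  # kelvin
    print(T, ratioed_output(i_in=1e-6, i_ref=1e-9, i_scale=1e-3, temperature_k=T))
# Prints the same value at every temperature: ln(1e-6/1e-9)/ln(1e-3/1e-9) = 0.5
```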
A Study of Seismic Wave Propagation at Regional Distances in Five Areas of the World
1982-02-08
[Extraction residue from the report's tables and figures. Recoverable content: an earthquake catalog listing (date, origin time, latitude, longitude, depth and magnitude for South American events at stations CA18, CA20, CA27); figure captions noting that natural earthquakes are denoted as circles and rock bursts as triangles, logarithmic plots of A/T (amplitude divided by period) with propagation paths denoted by symbols shown in the legend, and trace amplitudes of S in South America plotted against epicentral distance.]
NASA Astrophysics Data System (ADS)
Asatrian, H. M.; Greub, C.
2014-05-01
We calculate the O(α_s) corrections to the double differential decay width dΓ_77/(ds_1 ds_2) for the process B̄ → X_s γγ, originating from diagrams involving the electromagnetic dipole operator O_7. The kinematical variables s_1 and s_2 are defined as s_i = (p_b − q_i)²/m_b², where p_b, q_1, q_2 are the momenta of the b quark and the two photons. We introduce a nonzero mass m_s for the strange quark to regulate configurations where the gluon or one of the photons becomes collinear with the strange quark, and retain terms which are logarithmic in m_s, while discarding terms which go to zero in the limit m_s → 0. When combining virtual and bremsstrahlung corrections, the infrared and collinear singularities induced by soft and/or collinear gluons drop out. By our cuts the photons do not become soft, but one of them can become collinear with the strange quark. This implies that in the final result a single logarithm of m_s survives. In principle, the configurations with collinear photon emission could be treated using fragmentation functions. In a related work we find that similar results can be obtained when simply interpreting m_s appearing in the final result as a constituent mass. We do so in the present paper and vary m_s between 400 and 600 MeV in the numerics. This work extends a previous paper by us, where only the leading power terms with respect to the (normalized) hadronic mass s_3 = (p_b − q_1 − q_2)²/m_b² were taken into account in the underlying triple differential decay width dΓ_77/(ds_1 ds_2 ds_3).
NASA Astrophysics Data System (ADS)
Ibrahim, Ichsan; Malasan, Hakim L.; Kunjaya, Chatief; Timur Jaelani, Anton; Puannandra Putri, Gerhana; Djamal, Mitra
2018-04-01
In astronomy, the brightness of a source is typically expressed in terms of magnitude. Conventionally, the magnitude is defined by the logarithm of the received flux. This relationship is known as the Pogson formula. For received flux with a small signal-to-noise ratio (S/N), however, the formula gives a large magnitude error. We investigate whether the use of the inverse hyperbolic sine function (hereafter referred to as the Asinh magnitude) in the modified formulae could allow for an alternative calculation of magnitudes for small S/N flux, and whether the new approach is better for representing the brightness in that regime. We study the possibility of increasing the detection level of gravitational microlensing using 40 selected microlensing light curves from the 2013 and 2014 seasons and the Asinh magnitude. Photometric data for the selected events are obtained from the Optical Gravitational Lensing Experiment (OGLE). We found that utilization of the Asinh magnitude makes the events brighter compared to using the logarithmic magnitude, with an average difference of about 3.42 × 10⁻² magnitude, and an average difference in error between the logarithmic and the Asinh magnitude of about 2.21 × 10⁻² magnitude. The microlensing events OB140847 and OB140885 are found to have the largest difference values among the selected events. Using a Gaussian fit to find the peak for OB140847 and OB140885, we conclude statistically that the Asinh magnitude gives better mean squared values of the regression and narrower residual histograms than the Pogson magnitude. Based on these results, we also attempt to propose a limit in magnitude value for which use of the Asinh magnitude is optimal with small S/N data.
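A short sketch contrasting the Pogson (logarithmic) magnitude with an inverse-hyperbolic-sine magnitude. The Lupton, Gunn & Szalay (1999) parameterization and the softening value b used below are illustrative assumptions, not necessarily the modified formula of the paper.

```python
# Sketch: Pogson magnitude vs. an asinh magnitude in the Lupton, Gunn & Szalay (1999)
# form. The softening parameter b and this exact parameterization are illustrative.
import numpy as np

POGSON = 2.5 / np.log(10.0)

def pogson_mag(flux):
    """m = -2.5 log10(f), with f in units of the zero-point flux."""
    return -2.5 * np.log10(flux)

def asinh_mag(flux, b=1e-10):
    """m = -(2.5/ln10) * [asinh(f/(2b)) + ln(b)]; stays finite for f <= 0."""
    return -POGSON * (np.arcsinh(flux / (2.0 * b)) + np.log(b))

flux = np.array([1e-7, 1e-9, 1e-10, 0.0, -5e-11])   # relative fluxes, incl. noisy f <= 0
with np.errstate(divide="ignore", invalid="ignore"):
    print("Pogson:", pogson_mag(flux))   # -inf / nan for non-positive flux
print("asinh :", asinh_mag(flux))        # finite everywhere; matches Pogson at high S/N
```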
Guo, Hailin; Ding, Wanwen; Chen, Jingbo; Chen, Xuan; Zheng, Yiqi; Wang, Zhiyong; Liu, Jianxiu
2014-01-01
Zoysiagrass (Zoysia Willd.) is an important warm season turfgrass that is grown in many parts of the world. Salt tolerance is an important trait in zoysiagrass breeding programs. In this study, a genetic linkage map was constructed using sequence-related amplified polymorphism markers and random amplified polymorphic DNA markers based on an F1 population comprising 120 progeny derived from a cross between Zoysia japonica Z105 (salt-tolerant accession) and Z061 (salt-sensitive accession). The linkage map covered 1211 cM with an average marker distance of 5.0 cM and contained 24 linkage groups with 242 marker loci (217 sequence-related amplified polymorphism markers and 25 random amplified polymorphic DNA markers). Quantitative trait loci affecting the salt tolerance of zoysiagrass were identified using the constructed genetic linkage map. Two significant quantitative trait loci (qLF-1 and qLF-2) for leaf firing percentage were detected; qLF-1 at 36.3 cM on linkage group LG4 with a logarithm of odds value of 3.27, which explained 13.1% of the total variation of leaf firing and qLF-2 at 42.3 cM on LG5 with a logarithm of odds value of 2.88, which explained 29.7% of the total variation of leaf firing. A significant quantitative trait locus (qSCW-1) for reduced percentage of dry shoot clipping weight was detected at 44.1 cM on LG5 with a logarithm of odds value of 4.0, which explained 65.6% of the total variation. This study provides important information for further functional analysis of salt-tolerance genes in zoysiagrass. Molecular markers linked with quantitative trait loci for salt tolerance will be useful in zoysiagrass breeding programs using marker-assisted selection.
Dynamical conductivity at the dirty superconductor-metal quantum phase transition
NASA Astrophysics Data System (ADS)
Hoyos, J. A.; Del Maestro, Adrian; Rosenow, Bernd; Vojta, Thomas
2011-03-01
We study the transport properties of ultrathin disordered nanowires in the neighborhood of the superconductor-metal quantum phase transition. To this end we combine numerical calculations with analytical strong-disorder renormalization group results. The quantum critical conductivity at zero temperature diverges logarithmically as a function of frequency. In the metallic phase, it obeys activated scaling associated with an infinite-randomness quantum critical point. We extend the scaling theory to higher dimensions and discuss implications for experiments. Financial support: Fapesp, CNPq, NSF, and Research Corporation.
Enke, Christie
2013-02-19
Methods and instruments for high dynamic range analysis of sample components are described. A sample is subjected to time-dependent separation, ionized, and the ions dispersed with a constant integration time across an array of detectors according to the ions' m/z values. Each of the detectors in the array has a dynamically adjustable gain or a logarithmic response function, producing an instrument capable of detecting a ratio of responses of 4 or more orders of magnitude.
How Many Is a Zillion? Sources of Number Distortion
ERIC Educational Resources Information Center
Rips, Lance J.
2013-01-01
When young children attempt to locate the positions of numerals on a number line, the positions are often logarithmically rather than linearly distributed. This finding has been taken as evidence that the children represent numbers on a mental number line that is logarithmically calibrated. This article reports a statistical simulation showing…
Logarithmic Transformations in Regression: Do You Transform Back Correctly?
ERIC Educational Resources Information Center
Dambolena, Ismael G.; Eriksen, Steven E.; Kopcso, David P.
2009-01-01
The logarithmic transformation is often used in regression analysis for a variety of purposes such as the linearization of a nonlinear relationship between two or more variables. We have noticed that when this transformation is applied to the response variable, the computation of the point estimate of the conditional mean of the original response…
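The abstract is truncated, but the issue the title points to is the standard back-transformation problem: exp(fitted value) estimates the conditional median, not the mean, of the original response. The sketch below assumes approximately normal errors on the log scale and also shows Duan's smearing estimator as a distribution-free alternative; the simulated data are illustrative, not the article's.

```python
# Sketch of the back-transformation issue after regressing ln(y) on x:
# exp(fitted) estimates the conditional median, not the mean. Under normal errors
# the mean needs the exp(sigma^2/2) factor; Duan's smearing estimator is a
# distribution-free alternative. Illustrative simulation only.
import numpy as np

rng = np.random.default_rng(42)
n = 5000
x = rng.uniform(0, 10, n)
sigma = 0.8
y = np.exp(1.0 + 0.3 * x + rng.normal(0, sigma, n))   # true E[y|x] = exp(1 + 0.3x + sigma^2/2)

# OLS on the log scale
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
resid = np.log(y) - X @ beta

x0 = 5.0
naive = np.exp(beta[0] + beta[1] * x0)                # biased low as an estimate of the mean
normal_fix = naive * np.exp(resid.var(ddof=2) / 2.0)  # lognormal correction
smearing = naive * np.mean(np.exp(resid))             # Duan's smearing estimator
truth = np.exp(1.0 + 0.3 * x0 + sigma**2 / 2.0)
print(naive, normal_fix, smearing, truth)
```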
Spatially averaged flow over a wavy boundary revisited
McLean, S.R.; Wolfe, S.R.; Nelson, J.M.
1999-01-01
Vertical profiles of streamwise velocity measured over bed forms are commonly used to deduce boundary shear stress for the purpose of estimating sediment transport. These profiles may be derived locally or from some sort of spatial average. Arguments for using the latter procedure are based on the assumption that spatial averaging of the momentum equation effectively removes local accelerations from the problem. Using analogies based on steady, uniform flows, it has been argued that the spatially averaged velocity profiles are approximately logarithmic and can be used to infer values of boundary shear stress. This technique of using logarithmic profiles is investigated using detailed laboratory measurements of flow structure and boundary shear stress over fixed two-dimensional bed forms. Spatial averages over the length of the bed form of mean velocity measurements at constant distances from the mean bed elevation yield vertical profiles that are highly logarithmic even though the effect of the bottom topography is observed throughout the water column. However, logarithmic fits of these averaged profiles do not yield accurate estimates of the measured total boundary shear stress. Copyright 1999 by the American Geophysical Union.
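A sketch of the standard procedure the abstract examines: fit a spatially averaged velocity profile to the law of the wall and convert the fitted friction velocity to a boundary shear stress. The profile here is synthetic, not the laboratory data.

```python
# Sketch: fit a spatially averaged profile to the log law u(z) = (u*/kappa) * ln(z/z0)
# and infer boundary shear stress as tau_b = rho * u*^2. Synthetic profile only.
import numpy as np

KAPPA = 0.41     # von Karman constant
RHO = 1000.0     # water density, kg/m^3

def fit_log_law(z, u):
    """Regress u on ln(z): slope = u*/kappa, intercept fixes z0."""
    slope, intercept = np.polyfit(np.log(z), u, 1)
    u_star = KAPPA * slope
    z0 = np.exp(-intercept / slope)
    return u_star, z0

# Synthetic spatially averaged profile with u* = 0.05 m/s, z0 = 1 mm, plus noise
z = np.linspace(0.01, 0.3, 15)                 # heights above the mean bed (m)
rng = np.random.default_rng(3)
u = (0.05 / KAPPA) * np.log(z / 0.001) + rng.normal(0, 0.01, z.size)

u_star, z0 = fit_log_law(z, u)
tau_b = RHO * u_star**2                         # inferred boundary shear stress (Pa)
print(f"u* = {u_star:.3f} m/s, z0 = {z0*1000:.2f} mm, tau_b = {tau_b:.2f} Pa")
```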
NASA Astrophysics Data System (ADS)
Kypraios, Ioannis; Young, Rupert C. D.; Chatwin, Chris R.; Birch, Phil M.
2009-04-01
The window unit in the design of the complex logarithmic r-θ mapping for the hybrid optical neural network filter allows multiple objects of the same class to be detected within the input image. Additionally, the architecture of the neural network unit of the complex logarithmic r-θ mapping for the hybrid optical neural network filter becomes attractive for accommodating the recognition of multiple objects of different classes within the input image by modifying the output layer of the unit. We test the overall filter for the recognition of multiple objects of the same and of different classes within cluttered input images and video sequences of cluttered scenes. The logarithmic r-θ mapping for the hybrid optical neural network filter is shown to exhibit, with a single pass over the input data, simultaneous in-plane rotation, out-of-plane rotation, scale, log r-θ map translation and shift invariance, and good clutter tolerance by correctly recognizing the different objects within the cluttered scenes. We record in our results additional extracted information from the cluttered scenes about the objects' relative position, scale and in-plane rotation.
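A minimal sketch of the geometric log r-θ (log-polar) resampling itself, under which scaling and in-plane rotation of the input become translations of the map; this is the invariance property the filter exploits, not the authors' optical or neural implementation. Grid sizes and the nearest-neighbour sampling are illustrative choices.

```python
# Minimal sketch of a complex logarithmic r-theta (log-polar) resampling of an image
# about its centre. Scaling and in-plane rotation of the input become translations
# of the output map. Not the authors' optical/neural implementation.
import numpy as np

def log_polar_map(img, n_r=64, n_theta=128):
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    log_r = np.linspace(0.0, np.log(r_max), n_r)            # log-spaced radii
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rr = np.exp(log_r)[:, None]                              # shape (n_r, 1)
    yy = np.clip(np.round(cy + rr * np.sin(theta)), 0, h - 1).astype(int)
    xx = np.clip(np.round(cx + rr * np.cos(theta)), 0, w - 1).astype(int)
    return img[yy, xx]                                       # nearest-neighbour sampling

# Toy check: scaling the input shifts the map along the log-r axis
img = np.zeros((129, 129))
img[32:97, 32:97] = 1.0                                      # a centred square
print(log_polar_map(img).shape)                              # (64, 128)
```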
NASA Astrophysics Data System (ADS)
Junqueira, T. C.; Lépine, J. R. D.; Braga, C. A. S.; Barros, D. A.
2013-02-01
Aims: We propose a new, more realistic description of the perturbed gravitational potential of spiral galaxies, with spiral arms having Gaussian-shaped groove profiles. The aim is to reach a self-consistent description of the spiral structure, that is, one in which an initial potential perturbation generates, by means of the stellar orbits, spiral arms with a profile similar to that of the imposed perturbation. Self-consistency is a condition for having long-lived structures. Methods: Using the new perturbed potential, we investigate the stable stellar orbits in galactic disks for galaxies with no bar or with only a weak bar. The model is applied to our Galaxy by making use of the axisymmetric component of the potential computed from the Galactic rotation curve, in addition to other input parameters similar to those of our Galaxy. The influence of the bulge mass on the stellar orbits in the inner regions of a disk is also investigated. Results: The new description offers the advantage of easy control of the parameters of the Gaussian profile of its potential. We compute the density contrast between arm and inter-arm regions. We find a range of values for the perturbation amplitude from 400 to 800 km² s⁻² kpc⁻¹, which implies an approximate maximum ratio of the tangential force to the axisymmetric force between 3% and 6%. Good self-consistency of arm shapes is obtained between the Inner Lindblad resonance (ILR) and the 4:1 resonance. Near the 4:1 resonance the response density starts to deviate from the imposed logarithmic spiral form. This creates bifurcations that appear as short arms. Therefore the deviation from a perfect logarithmic spiral in galaxies can be understood as a natural effect of the 4:1 resonance. Beyond the 4:1 resonance we find closed orbits that have similarities with the arms observed in our Galaxy. In regions near the center, elongated stellar orbits appear naturally, in the presence of a massive bulge, without imposing any bar-shaped potential, but only extending the spiral perturbation a little inward of the ILR. This suggests that a bar is formed with a half-size ~3 kpc by a mechanism similar to that of the spiral arms. Conclusions: The potential energy perturbation that we adopted represents an important step in the direction of self-consistency, compared to previous sine function descriptions of the potential. In addition, our model produces a realistic description of the spiral structure, which is able to explain several details that were not yet understood.
[Ophthalmologic reading charts : Part 2: Current logarithmically scaled reading charts].
Radner, W
2016-12-01
To analyze currently available reading charts regarding print size, logarithmic print size progression, and the background of test-item standardization. For the present study, the following logarithmically scaled reading charts were investigated using a measuring microscope (iNexis VMA 2520; Nikon, Tokyo): Eschenbach, Zeiss, OCULUS, MNREAD (Minnesota Near Reading Test), Colenbrander, and RADNER. Calculations were made according to EN-ISO 8596 and the International Research Council recommendations. Modern reading charts and cards exhibit a logarithmic progression of print sizes. The RADNER reading charts comprise four different cards with standardized test items (sentence optotypes), a well-defined stop criterion, accurate letter sizes, and a high print quality. Numbers and Landolt rings are also given in the booklet. The OCULUS cards have currently been reissued according to recent standards and also exhibit a high print quality. In addition to letters, numbers, Landolt rings, and examples taken from a timetable and the telephone book, sheet music is also offered. The Colenbrander cards use short sentences of 44 characters, including spaces, and exhibit inaccuracy at smaller letter sizes, as do the MNREAD cards. The MNREAD cards use sentences of 60 characters, including spaces, and have a high print quality. Modern reading charts show that international standards can be achieved with test items similar to optotypes, by using recent technology and developing new concepts of test-item standardization. Accurate print sizes, high print quality, and a logarithmic progression should become the minimum requirements for reading charts and reading cards in ophthalmology.
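A tiny sketch of the logarithmic print-size progression these charts share: steps of 0.1 log units, so each size is 10^0.1 ≈ 1.2589 times the previous one. The base size and number of steps below are illustrative; actual chart sizes are fixed by the cited standards.

```python
# Sketch of a logarithmically scaled print-size progression: 0.1 log10-unit steps,
# constant ratio 10**0.1 ~ 1.2589 between adjacent sizes. Base size and step count
# are assumptions for illustration only.
base_size_mm = 4.0          # assumed largest print height in millimetres
steps = 10

sizes = [base_size_mm * 10 ** (-0.1 * n) for n in range(steps)]
for n, s in enumerate(sizes):
    print(f"step {n}: {s:.2f} mm")   # constant ~1.26 ratio between adjacent steps
```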
Theoretical analysis of the correlation observed in fatigue crack growth rate parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chay, S.C.; Liaw, P.K.
Fatigue crack growth rates have been found to follow the Paris-Erdogan rule, da/dN = C₀(ΔK)ⁿ, for many steels, aluminum, nickel and copper alloys. The fatigue crack growth rate behavior in the Paris regime, thus, can be characterized by the parameters C₀ and n, which have been obtained for various materials. When n was plotted against the logarithm of C₀ for various experimental results, a very definite linear relationship was observed by many investigators, and questions have been raised as to the nature of this correlation. This paper presents a theoretical analysis that explains precisely why such a linear correlation should exist between the two parameters, how strong the relationship should be, and how it can be predicted by analysis. This analysis proves that the source of such a correlation is of a mathematical rather than physical nature.
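A simulation sketch of the mathematical origin discussed above, not the paper's derivation: when data sets share roughly the same ΔK window, the fitted lines log(da/dN) = log C₀ + n log ΔK pivot about a common point, which forces log C₀ to be approximately linear in n. All data below are synthetic.

```python
# Simulation sketch of the n vs log(C0) correlation. Data sets sharing roughly the
# same Delta-K window make the fitted lines pivot about a common point (dK*, rate*),
# so log(C0) ~ log(rate*) - n*log(dK*): a straight line in the (n, log C0) plane.
import numpy as np

rng = np.random.default_rng(7)
log_dK = np.log10(np.linspace(10.0, 40.0, 20))     # shared Delta-K window (MPa sqrt(m))

ns, logCs = [], []
for _ in range(50):
    n_true = rng.uniform(2.0, 5.0)                 # material-to-material spread in n
    rate_at_20 = 10 ** rng.normal(-8.0, 0.2)       # similar growth rate near dK* = 20
    logC_true = np.log10(rate_at_20) - n_true * np.log10(20.0)
    log_rate = logC_true + n_true * log_dK + rng.normal(0, 0.05, log_dK.size)
    n_fit, logC_fit = np.polyfit(log_dK, log_rate, 1)
    ns.append(n_fit)
    logCs.append(logC_fit)

slope, intercept = np.polyfit(ns, logCs, 1)
print(f"log10(C0) ~ {intercept:.2f} + ({slope:.2f}) * n   (compare -log10(dK*) = {-np.log10(20):.2f})")
```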
An Investigation of Students' Errors in Logarithms
ERIC Educational Resources Information Center
Ganesan, Raman; Dindyal, Jaguthsing
2014-01-01
In this study we set out to investigate the errors made by students in logarithms. A test with 16 items was administered to 89 Secondary three students (Year 9). The errors made by the students were categorized using four categories from a framework by Movshovitz-Hadar, Zaslavsky, and Inbar (1987). It was found that students in the top third were…
Decay of Correlations, Quantitative Recurrence and Logarithm Law for Contracting Lorenz Attractors
NASA Astrophysics Data System (ADS)
Galatolo, Stefano; Nisoli, Isaia; Pacifico, Maria Jose
2018-03-01
In this paper we prove that a class of skew product maps with non-uniformly hyperbolic base has exponential decay of correlations. We apply this to obtain a logarithm law for the hitting time associated to a contracting Lorenz attractor at all the points having a well defined local dimension, and a quantitative recurrence estimation.
Dead-time compensation for a logarithmic display rate meter
Larson, John A.; Krueger, Frederick P.
1988-09-20
An improved circuit is provided for application to a radiation survey meter that uses a detector that is subject to dead time. The circuit compensates for dead time over a wide range of count rates by producing a dead-time pulse for each detected event, a live-time pulse that spans the interval between dead-time pulses, and circuits that average the value of these pulses over time. The logarithm of each of these values is obtained and the logarithms are subtracted to provide a signal that is proportional to a count rate that is corrected for the effects of dead time. The circuit produces a meter indication and is also capable of producing an audible indication of detected events.
Dead-time compensation for a logarithmic display rate meter
Larson, J.A.; Krueger, F.P.
1987-10-05
An improved circuit is provided for application to a radiation survey meter that uses a detector that is subject to dead time. The circuit compensates for dead time over a wide range of count rates by producing a dead-time pulse for each detected event, a live-time pulse that spans the interval between dead-time pulses, and circuits that average the value of these pulses over time. The logarithm of each of these values is obtained and the logarithms are subtracted to provide a signal that is proportional to a count rate that is corrected for the effects of dead time. The circuit produces a meter indication and is also capable of producing an audible indication of detected events. 5 figs.
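A numerical sketch of the compensation principle only, not the patented circuit: for a non-paralyzable detector the fraction of time spent dead is mτ and the live fraction is 1 − mτ, so the difference of their logarithms equals ln(nτ), the logarithm of the dead-time-corrected rate n = m/(1 − mτ). The dead-time value is an assumption.

```python
# Sketch of the dead-time compensation principle (not the patented circuit).
# Measured rate m relates to the true rate n by n = m / (1 - m*tau), so
#   ln(m*tau) - ln(1 - m*tau) = ln(n*tau),
# i.e. subtracting the averaged live-time logarithm from the averaged dead-time
# logarithm yields a signal proportional to the log of the corrected count rate.
import math

TAU = 50e-6   # assumed dead time per event, seconds

def log_difference(measured_rate):
    dead_fraction = measured_rate * TAU          # averaged dead-time pulse value
    live_fraction = 1.0 - dead_fraction          # averaged live-time pulse value
    return math.log(dead_fraction) - math.log(live_fraction)

for true_rate in (1e2, 1e3, 1e4):                # counts per second
    measured = true_rate / (1.0 + true_rate * TAU)
    print(true_rate, log_difference(measured), math.log(true_rate * TAU))
# The two printed logarithms agree, confirming the correction.
```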
Precision studies of the NNLO DGLAP evolution at the LHC with Candia
NASA Astrophysics Data System (ADS)
Cafarella, Alessandro; Corianò, Claudio; Guzzi, Marco
2008-11-01
We summarize the theoretical approach to the solution of the NNLO DGLAP equations using methods based on the logarithmic expansions in x-space and their implementation into the C program CANDIA 1.0. We present the various options implemented in the program and discuss the different solutions. The user can choose the order of the evolution, the type of the solution, which can be either exact or truncated, and the evolution either with a fixed or a varying flavor number, implemented in the varying-flavor-number scheme (VFNS). The renormalization and factorization scale dependencies are treated separately. In the non-singlet sector the program implements an exact NNLO solution.
Program summary
Program title: CANDIA
Catalogue identifier: AEBK_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBK_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 101 376
No. of bytes in distributed program, including test data, etc.: 5 865 234
Distribution format: tar.gz
Programming language: C and Fortran
Computer: All
Operating system: Linux
RAM: In the given examples, it ranges from 4 to 490 MB
Classification: 11.1, 11.5
Nature of problem: The program provided here solves the DGLAP evolution equations for the parton distribution functions up to NNLO.
Solution method: The algorithm implemented is based on the theory of the logarithmic expansions in Bjorken x-space.
Additional comments: To be sure of getting the latest version of the program, the authors suggest downloading the code from their official CANDIA website (http://www.le.infn.it/candia).
Running time: In the given examples, it ranges from 1 to 40 minutes. The jobs have been executed on an Intel Core 2 Duo T7250 CPU at 2 GHz with a 64 bit Linux kernel. The test run script included in the package contains 5 sample runs and may take a number of hours to process, depending on the speed of the processor used and the size of the available RAM.
Exact infinite-time statistics of the Loschmidt echo for a quantum quench.
Campos Venuti, Lorenzo; Jacobson, N Tobias; Santra, Siddhartha; Zanardi, Paolo
2011-07-01
The equilibration dynamics of a closed quantum system is encoded in the long-time distribution function of generic observables. In this Letter we consider the Loschmidt echo generalized to finite temperature, and show that we can obtain an exact expression for its long-time distribution for a closed system described by a quantum XY chain following a sudden quench. In the thermodynamic limit the logarithm of the Loschmidt echo becomes normally distributed, whereas for small quenches in the opposite, quasicritical regime, the distribution function acquires a universal double-peaked form indicating poor equilibration. These findings, obtained by a central limit theorem-type result, extend to completely general models in the small-quench regime.
NASA Astrophysics Data System (ADS)
Rishi, Rahul; Choudhary, Amit; Singh, Ravinder; Dhaka, Vijaypal Singh; Ahlawat, Savita; Rao, Mukta
2010-02-01
In this paper we propose a system for the classification problem of handwritten text. At a broad level, the system is composed of a preprocessing module, a supervised learning module and a recognition module. The preprocessing module digitizes the documents and extracts features (tangent values) for each character. A radial basis function (RBF) network is used in the learning and recognition modules. The objective is to analyze and improve the performance of the Multi Layer Perceptron (MLP) using RBF transfer functions instead of the logarithmic sigmoid function. The results of 35 experiments indicate that the feed-forward MLP performs accurately and exhaustively with RBF. With the change in the weight update mechanism and the feature-driven preprocessing module, the proposed system is competent, with good recognition performance.
A new model integrating short- and long-term aging of copper added to soils
Zeng, Saiqi; Li, Jumei; Wei, Dongpu
2017-01-01
Aging refers to the processes by which the bioavailability/toxicity, isotopic exchangeability, and extractability of metals added to soils decline over time. We studied the characteristics of the aging process of copper (Cu) added to soils and the factors that affect this process. We then developed a semi-mechanistic model to predict the lability of Cu during the aging process, with the diffusion process described using the complementary error function. In previous studies, two semi-mechanistic models to separately predict short-term and long-term aging of Cu added to soils were developed, with individual descriptions of the diffusion process. In the short-term model, the diffusion process was linearly related to the square root of incubation time (t^1/2), and in the long-term model, the diffusion process was linearly related to the natural logarithm of incubation time (ln t). Either model could predict the short-term or the long-term aging process separately, but neither could describe both with a single model. By analyzing and combining the two models, we found that the short- and long-term behaviors of the diffusion process could be described adequately using the complementary error function. The effect of temperature on the diffusion process was also included in this model. The model can predict the aging process continuously based on four factors: soil pH, incubation time, soil organic matter content and temperature. PMID:28820888
Macronuclear Cytology of Synchronized Tetrahymena pyriformis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cameron, I. L.; Padilla, G. M.; Miller, Jr., O. L.
1966-05-01
Elliott, Kennedy and Bak ('62) and Elliott ('63) followed fine structural changes in macronuclei of Tetrahymena pyriformis which were synchronized by the heat shock method of Scherbaum and Zeuthen ('54). Using Elliott's morphological descriptions as a basis, we designed our investigations with two main objectives: first, to study again the morphological changes which occur in the macronucleus of Tetrahymena synchronized by the heat shock method; second, to compare these observations with Tetrahymena synchronized by an alternate method recently reported by Padilla and Cameron ('64). We were therefore able to compare the results from the two different synchronization methods and to contrast these findings with the macronuclear cytology of Tetrahymena taken from a logarithmically growing culture. Comparison of cells treated in these three different ways enables us to evaluate the two different synchronization methods and to gain more information on the structural changes taking place in the macronucleus of Tetrahymena as a function of the cell cycle. Our observations were confined primarily to nucleolar morphology. The results indicate that cells synchronized by the Padilla and Cameron method more closely resemble logarithmically growing Tetrahymena in macronuclear structure than do cells obtained by the Scherbaum and Zeuthen synchronization method.
A new real-time guidance strategy for aerodynamic ascent flight
NASA Astrophysics Data System (ADS)
Yamamoto, Takayuki; Kawaguchi, Jun'ichiro
2007-12-01
Reusable launch vehicles are conceived to constitute the future space transportation system. If these vehicles use air-breathing propulsion and lift, taking off horizontally, the optimal steering for these vehicles exhibits completely different behavior from that in conventional rocket flight. In this paper, a new guidance strategy is proposed. The method derives from the optimality condition for steering, and the analysis concludes that the steering function takes a form composed of linear and logarithmic terms, which involve only four parameters. Parameter optimization of this method shows that the acquired terminal horizontal velocity is almost the same as that obtained by direct numerical optimization. This supports the parameterized linear-logarithmic steering law. It is also shown that there exists a simple linear relation between the terminal states and the parameters to be corrected. This relation allows the parameters to be determined in real time so as to satisfy the terminal boundary conditions. The paper presents guidance results for practical application cases. The results show that the guidance performs well and satisfies the specified terminal boundary conditions. The strategy built and presented here guarantees a robust solution in real time, excluding any optimization process, and is found to be quite practical.
Logarithmic field dependence of the Thermal Conductivity in La_2-xSr_xCuO_4
NASA Astrophysics Data System (ADS)
Krishana, K.; Ong, N. P.; Kimura, T.
1997-03-01
We have investigated the thermal conductivity κ of La_2-xSr_xCuO4 in fields B up to 14 tesla. To minimize errors caused by the field sensitivity of the thermocouple sensors, we used a sensitive null-detection technique. We find that below Tc, κ varies as -log B in high fields, and in the low-field limit it approaches a constant. The κ vs. B data at these temperatures collapse onto a universal curve, which fits very well to an expression involving the digamma function and reminiscent of 2-D weak localization. The field scale derived from this scaling is linear in T. The logarithmic dependence of κ strongly suggests an electronic origin for the anomaly in κ below Tc. Our experiment precludes conventional vortex scattering of phonons as the source of the anomaly. The data fit poorly to these models, and the derived mean free paths are non-monotonic and 5 to 8 times larger than obtained from heat capacity. Also, comparison of the x=0.17 and x=0.08 samples gives field scales opposite to what is expected from vortex scattering.
Portable geiger counter with logarithmic scale (in Portuguese)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oliveira, L.A.C.; de Andrade Chagas, E.; de Bittencourt, F.A.
1971-06-01
From the 23rd annual meeting of the Brazilian Society for the Advancement of Science; Curitiba, Brazil (4 Jul 1971). A portable scaler with a logarithmic scale covering 3 decades, 1 to 10, 10 to 10², and 10² to 10³ cps, is presented. Electrical energy is supplied at 6 volts by four D-type batteries. (INIS)
Regional Frequency Computation Users Manual.
1972-07-01
[Extraction residue from the manual's program listing. Recoverable content: variable definitions stating that an increment of flow is used to prevent infinite logarithms for events with zero flow, X = mean logarithm of flow events, N = total years of record, S = unbiased (estimate); the remainder is OCR-garbled FORTRAN comment cards listing the library and program subroutines used and a tape reference.]
A new type of density-management diagram for slash pine plantations
Curtis L. VanderSchaaf
2006-01-01
Many Density-Management Diagrams (DMD) have been developed for conifer species throughout the world based on stand density index (SDI). The diagrams often plot the logarithm of average tree size (volume, weight, or quadratic mean diameter) over the logarithm of trees per unit area. A new type of DMD is presented for slash pine (Pinus elliottii var elliottii)...
We measured the mutational and transcriptional response of stationary-phase and logarithmic-phase S. typhimurium TA100 to 3 concentrations of the drinking water mutagen 3-chloro-4-(dichloromethyl)-5-hydroxy-2(5H)-furanone (MX). The mutagenicity of MX in strain TA100 was evaluated...
NASA Technical Reports Server (NTRS)
Gabrielsen, R. E.; Uenal, A.
1981-01-01
Two-dimensional Fredholm integral equations with logarithmic potential kernels are numerically solved. The explicit convergence of these numerical solutions to the true solutions is demonstrated. The results are based on a previous work in which numerical solutions were obtained for Fredholm integral equations of the second kind with continuous kernels.
Kinetics of drug release from ointments: Role of transient-boundary layer.
Xu, Xiaoming; Al-Ghabeish, Manar; Krishnaiah, Yellela S R; Rahman, Ziyaur; Khan, Mansoor A
2015-10-15
In the current work, an in vitro release testing method suitable for ointment formulations was developed using acyclovir as a model drug. Release studies were carried out using enhancer cells on acyclovir ointments prepared with oleaginous, absorption, and water-soluble bases. The kinetics and mechanism of drug release were found to be highly dependent on the type of ointment base. In oleaginous bases, drug release followed a unique logarithmic-time-dependent profile; in both absorption and water-soluble bases, drug release exhibited linearity with respect to the square root of time (Higuchi model), albeit with differences in the overall release profile. To help understand the underlying cause of the logarithmic-time dependency of drug release, a novel transient-boundary hypothesis was proposed, verified, and compared to Higuchi theory. Furthermore, the impact of drug solubility (under various pH conditions) and temperature on drug release was assessed. Additionally, the conditions under which deviations from logarithmic-time drug release kinetics occur were determined using in situ UV fiber optics. Overall, the results suggest that for oleaginous ointments containing dispersed drug particles, the kinetics and mechanism of drug release are controlled by expansion of a transient boundary layer, and drug release increases linearly with respect to logarithmic time. Published by Elsevier B.V.
Structural modal parameter identification using local mean decomposition
NASA Astrophysics Data System (ADS)
Keyhani, Ali; Mohammadi, Saeed
2018-02-01
Modal parameter identification is the first step in structural health monitoring of existing structures. Many powerful methods have already been proposed for this purpose, each with its own benefits and shortcomings. In this study, a new method based on local mean decomposition is proposed for modal identification of civil structures from free or ambient vibration measurements. The ability of the proposed method was investigated using several numerical studies, and the results were compared with those obtained from the Hilbert-Huang transform (HHT). As a major advantage, the proposed method can extract the natural frequencies and damping ratios of all active modes from only one measurement. The accuracy of the identified modes depends on their participation in the measured responses. Nevertheless, the identified natural frequencies have reasonable accuracy in both cases of free and ambient vibration measurements, even in the presence of noise. The instantaneous phase angle and the natural logarithm of the instantaneous amplitude curves obtained from the proposed method are more nearly linear than those from the HHT algorithm. Also, the end effect is more restricted for the proposed method.
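Local mean decomposition itself is not implemented below; the sketch shows only the shared final step both LMD- and HHT-based identification rely on: for a mono-component free decay, the slope of the natural log of the instantaneous amplitude gives −ζω_n and the phase slope gives the damped frequency. The signal is synthetic, and the edge trimming is an illustrative way of limiting end effects.

```python
# Sketch of the identification step: for x(t) = X*exp(-zeta*wn*t)*cos(wd*t), the log
# of the analytic-signal amplitude is a line of slope -zeta*wn and the unwrapped
# phase a line of slope wd. Synthetic signal, not the paper's measurements.
import numpy as np
from scipy.signal import hilbert

fs = 200.0
t = np.arange(0, 10, 1 / fs)
fn, zeta = 2.0, 0.02                      # true natural frequency (Hz) and damping
wn = 2 * np.pi * fn
wd = wn * np.sqrt(1 - zeta**2)
x = np.exp(-zeta * wn * t) * np.cos(wd * t)

a = hilbert(x)                            # analytic signal
sl = slice(50, -50)                       # trim edges to limit end effects
amp_slope = np.polyfit(t[sl], np.log(np.abs(a[sl])), 1)[0]        # = -zeta*wn
phase_slope = np.polyfit(t[sl], np.unwrap(np.angle(a[sl])), 1)[0]  # = wd

wn_id = np.sqrt(amp_slope**2 + phase_slope**2)
print("fn =", wn_id / (2 * np.pi), "zeta =", -amp_slope / wn_id)
```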
Brown, A M
2001-06-01
The objective of this present study was to introduce a simple, easily understood method for carrying out non-linear regression analysis based on user input functions. While it is relatively straightforward to fit data with simple functions such as linear or logarithmic functions, fitting data with more complicated non-linear functions is more difficult. Commercial specialist programmes are available that will carry out this analysis, but these programmes are expensive and are not intuitive to learn. An alternative method described here is to use the SOLVER function of the ubiquitous spreadsheet programme Microsoft Excel, which employs an iterative least squares fitting routine to produce the optimal goodness of fit between data and function. The intent of this paper is to lead the reader through an easily understood step-by-step guide to implementing this method, which can be applied to any function in the form y=f(x), and is well suited to fast, reliable analysis of data in all fields of biology.
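The paper implements this with Excel's SOLVER; the following is an analogous sketch in Python using an iterative least-squares routine for any user-supplied function y = f(x). The example function (Michaelis-Menten) and the data points are purely illustrative.

```python
# Analogous sketch in Python: iterative least-squares fitting of an arbitrary
# user-supplied function y = f(x, params). Example function and data are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def user_function(x, vmax, km):
    """Example user-input function (Michaelis-Menten); replace with any y = f(x)."""
    return vmax * x / (km + x)

x = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
y = np.array([0.42, 0.71, 1.08, 1.45, 1.68, 1.84])   # illustrative measurements

params, _ = curve_fit(user_function, x, y, p0=[2.0, 2.0])  # initial guesses, as in SOLVER
residuals = y - user_function(x, *params)
print("fitted parameters:", params)
print("sum of squared residuals:", float(np.sum(residuals**2)))
```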
A law of the iterated logarithm for Grenander’s estimator
Dümbgen, Lutz; Wellner, Jon A.; Wolff, Malcolm
2016-01-01
In this note we prove the following law of the iterated logarithm for the Grenander estimator of a monotone decreasing density: If f(t₀) > 0, f′(t₀) < 0, and f′ is continuous in a neighborhood of t₀, then lim sup_{n→∞} (n/(2 log log n))^{1/3} (f̂_n(t₀) − f(t₀)) = |f(t₀)f′(t₀)/2|^{1/3} · 2M almost surely, where M ≡ sup_{g∈G} T_g = (3/4)^{1/3} and T_g ≡ argmax_u {g(u) − u²}; here G is the two-sided Strassen limit set on R. The proof relies on laws of the iterated logarithm for local empirical processes, Groeneboom’s switching relation, and properties of Strassen’s limit set analogous to distributional properties of Brownian motion. PMID:28042197
Fechner's law: where does the log transform come from?
Laming, Donald
2010-01-01
This paper looks at Fechner's law in the light of 150 years of subsequent study. In combination with the normal, equal variance, signal-detection model, Fechner's law provides a numerically accurate account of discriminations between two separate stimuli, essentially because the logarithmic transform delivers a model for Weber's law. But it cannot be taken to be a measure of internal sensation because an equally accurate account is provided by a χ² model in which stimuli are scaled by their physical magnitude. The logarithmic transform of Fechner's law arises because, for the number of degrees of freedom typically required in the χ² model, the logarithm of a χ² variable is, to a good approximation, normal. This argument is set within a general theory of sensory discrimination.
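A small simulation sketch of the closing argument: for moderately many degrees of freedom, the logarithm of a χ² variable is close to normal. The degrees of freedom and sample size below are chosen for illustration only.

```python
# Sketch: skewness of log(chi-squared) shrinks toward 0 (the normal value) as the
# degrees of freedom grow. Degrees of freedom chosen for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
for df in (2, 10, 50):
    samples = np.log(rng.chisquare(df, size=200_000))
    print(f"df={df:3d}  skewness of log(chi2) = {stats.skew(samples):+.3f}")
```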
Elastic scattering of virtual photons via a quark loop in the double-logarithmic approximation
NASA Astrophysics Data System (ADS)
Ermolaev, B. I.; Ivanov, D. Yu.; Troyan, S. I.
2018-04-01
We calculate the amplitude of elastic photon-photon scattering via a single quark loop in the double-logarithmic approximation, presuming all external photons to be off-shell and unpolarized. At the same time we account for the running coupling effects. We consider this process in the forward kinematics at arbitrary relations between t and the external photon virtualities. We obtain explicit expressions for the photon-photon scattering amplitudes in all double-logarithmic kinematic regions. Then we calculate the small-x asymptotics of the obtained amplitudes and compare them with the parent amplitudes, thereby fixing the applicability regions of the asymptotics, i.e., fixing the applicability region for the nonvacuum Reggeons. We find that these Reggeons should be used at x < 10⁻⁸ only.
Robson, Barry
2003-01-01
New scientific problems, arising from the human genome project, are challenging the classical means of using statistics. Yet quantified knowledge in the form of rules and rule strengths based on real relationships in data, as opposed to expert opinion, is urgently required for researcher and physician decision support. The problem is that with many parameters, the space to be analyzed is high-dimensional. That is, the combinations of data to examine are subject to a combinatorial explosion as the number of possible events (entries, items, sub-records) (a),(b),(c),... per record (a,b,c,..) increases, and hence much of the space is sparsely populated. These combinatorial considerations are particularly problematic for identifying those associations called "Unicorn Events," which occur significantly less often than expected, to the extent that they are never seen to be counted. To cope with the combinatorial explosion, a novel numerical "bookkeeping" approach is taken to generate information terms relating to the combinatorial subsets of events (a,b,c,..), and, most importantly, the zeta (Zeta) function is employed. The incomplete Zeta function zeta(s,n) with s = 1, in which frequencies of occurrence such as n = n(a,b,c,...) determine the range of summation n, is argued to be the natural choice of information function. It emerges from Bayesian integration, taken over the distribution of possible values of information measures for sparse and ample data alike. Expected mutual information I(a;b;c) in nats (i.e., natural units analogous to bits but based on the natural logarithm), such as is available to the observer, is measured as, e.g., the difference zeta(s,o(a,b,c,..)) - zeta(s,e(a,b,c,..)), where o(a,b,c,..) and e(a,b,c,..) are, or relate to, the observed and expected frequencies of occurrence, respectively. For real values of s > 1 the qualitative impact of strongly (positively or negatively) ranked data is preserved despite several numerical approximations. As real s increases, the output of the information functions converges to three values, +1, 0, and -1 nats, representing a trinary logic system. For quantitative data, a useful ad hoc method to report sigma-normalized covariations, in a manner analogous to mutual information for significance comparison purposes, is demonstrated. Finally, the potential ability to make use of mutual information in a complex biomedical study, and to include Bayesian prior information derived from statistical, tabular, anecdotal, and expert opinion, is briefly illustrated.
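A sketch of the central device described above: with s = 1 the incomplete zeta function is the harmonic number ζ(1, n) = Σ_{k=1..n} 1/k, and an information term in nats is scored as ζ(1, observed) − ζ(1, expected). The 2×2 contingency example and the rounding of the expected frequency are illustrative choices, not the paper's exact conventions.

```python
# Sketch: harmonic-number ("incomplete zeta, s=1") information scoring.
# I(a;b) ~ zeta(1, o(a,b)) - zeta(1, e(a,b)), positive when the pair is over-represented.
import math

def incomplete_zeta_s1(n: int) -> float:
    """Harmonic number H_n = sum_{k=1}^{n} 1/k (incomplete zeta with s = 1)."""
    return sum(1.0 / k for k in range(1, n + 1))

def association_nats(observed: int, expected: float) -> float:
    """Rounding of the expected frequency is an illustrative simplification."""
    return incomplete_zeta_s1(observed) - incomplete_zeta_s1(round(expected))

# Toy 2x2 contingency: events a and b co-occur 30 times; marginals imply ~18 expected.
n_total, n_a, n_b, n_ab = 200, 60, 60, 30
expected_ab = n_a * n_b / n_total
print(f"I(a;b) ~ {association_nats(n_ab, expected_ab):+.3f} nats")
# For large counts this approaches ln(observed/expected), since H_n ~ ln(n) + gamma.
```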
Toward microstate counting beyond large N in localization and the dual one-loop quantum supergravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, James T.; Pando Zayas, Leopoldo A.; Rathee, Vimal
The topologically twisted index for ABJM theory with gauge group U(N)_k × U(N)_{-k} has recently been shown, in the large-N limit, to reproduce the Bekenstein-Hawking entropy of certain magnetically charged asymptotically AdS4 black holes. We numerically study the index beyond the large-N limit and provide evidence that it contains a subleading logarithmic term of the form −1/2 log N. On the holographic side, this term naturally arises from a one-loop computation. However, we find that the contribution coming from the near horizon states does not reproduce the field theory answer. We give some possible reasons for this apparent discrepancy.
Asynchronous vibration problem of centrifugal compressor
NASA Technical Reports Server (NTRS)
Fujikawa, T.; Ishiguro, N.; Ito, M.
1980-01-01
An unstable asynchronous vibration problem in a high-pressure centrifugal compressor and the remedial actions taken against it are described. Asynchronous vibration of the compressor took place when the discharge pressure (Pd) was increased after the rotor was already at full speed. The typical spectral data of the shaft vibration indicate that as the pressure Pd increases, pre-unstable vibration appears and grows, and large unstable asynchronous vibration occurs suddenly (Pd = 5.49 MPa). A computer program was used which calculated the logarithmic decrement and the damped natural frequency of the rotor-bearing system. The log-decrement analysis is concluded to be effective for preventing unstable vibration both at the design stage and in remedial actions.
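A sketch of the quantities such a program computes, not the original code: the logarithmic decrement from successive vibration peaks, the damping ratio derived from it, and the damped natural frequency from the peak spacing. A negative decrement corresponds to growing oscillations, i.e. instability.

```python
# Sketch (not the original program): log decrement, damping ratio and damped natural
# frequency from successive peak amplitudes of a vibration record.
import numpy as np

def log_decrement(peaks, dt_between_peaks):
    """peaks: amplitudes of successive cycles; dt: time between adjacent peaks (s)."""
    peaks = np.asarray(peaks, dtype=float)
    m = len(peaks) - 1
    delta = np.log(peaks[0] / peaks[-1]) / m          # average decrement per cycle
    zeta = delta / np.sqrt(4 * np.pi**2 + delta**2)   # damping ratio
    fd = 1.0 / dt_between_peaks                       # damped natural frequency (Hz)
    return delta, zeta, fd

# Decaying response: stable (positive decrement)
print(log_decrement([1.00, 0.80, 0.64, 0.51], dt_between_peaks=0.02))
# Growing response: unstable (negative decrement), as seen when Pd is raised
print(log_decrement([0.51, 0.64, 0.80, 1.00], dt_between_peaks=0.02))
```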
NASA Astrophysics Data System (ADS)
Parke, L.; Hooper, I. R.; Hicken, R. J.; Dancer, C. E. J.; Grant, P. S.; Youngs, I. J.; Sambles, J. R.; Hibbins, A. P.
2013-10-01
A cold-pressing technique has been developed for fabricating composites composed of a polytetrafluoroethylene-polymer matrix and a wide range of volume fractions of MnZn-ferrite filler (0%-80%). The electromagnetic properties at centimetre wavelengths of all prepared composites exhibited good reproducibility, with the most heavily loaded composites possessing simultaneously high permittivity (180 ± 10) and permeability (23 ± 2). The natural logarithm of both the relative complex permittivity and permeability shows an approximately linear dependence on the volume fraction of ferrite. Thus, this simple method allows for the manufacture of the bespoke materials required in the design and construction of devices based on the principles of transformation optics.
Dominant takeover regimes for genetic algorithms
NASA Technical Reports Server (NTRS)
Noever, David; Baskaran, Subbiah
1995-01-01
The genetic algorithm (GA) is a machine-based optimization routine which connects evolutionary learning to natural genetic laws. The present work addresses the problem of obtaining the dominant takeover regimes in the GA dynamics. Estimated GA run times are computed for slow and fast convergence in the limits of high and low fitness ratios. Using Euler's device for obtaining partial sums in closed form, the result relaxes the previously held requirements for long time limits. Analytical solutions reveal that appropriately accelerated regimes can mark the ascendancy of the most fit solution. In virtually all cases, the weak (logarithmic) dependence of convergence time on problem size demonstrates the potential for the GA to solve large NP-complete problems.
ERIC Educational Resources Information Center
Bongers, Raoul M.; Fernandez, Laure; Bootsma, Reinoud J.
2009-01-01
The authors examined the origins of linear and logarithmic speed-accuracy trade-offs from a dynamic systems perspective on motor control. In each experiment, participants performed 2 reciprocal aiming tasks: (a) a velocity-constrained task in which movement time was imposed and accuracy had to be maximized, and (b) a distance-constrained task in…
Polaron in the dilute critical Bose condensate
NASA Astrophysics Data System (ADS)
Pastukhov, Volodymyr
2018-05-01
The properties of an impurity immersed in a dilute D-dimensional Bose gas at temperatures close to its second-order phase transition point are considered. Particularly by means of the 1/N-expansion, we calculate the leading-order polaron energy and the damping rate in the limit of vanishing boson–boson interaction. It is shown that the perturbative effective mass and the quasiparticle residue diverge logarithmically in the long-length limit, signalling the non-analytic behavior of the impurity spectrum and pole-free structure of the polaron Green’s function in the infrared region, respectively.
Temperature Scaling Law for Quantum Annealing Optimizers.
Albash, Tameem; Martin-Mayor, Victor; Hen, Itay
2017-09-15
Physical implementations of quantum annealing unavoidably operate at finite temperatures. We point to a fundamental limitation of fixed finite temperature quantum annealers that prevents them from functioning as competitive scalable optimizers and show that to serve as optimizers annealer temperatures must be appropriately scaled down with problem size. We derive a temperature scaling law dictating that temperature must drop at the very least in a logarithmic manner but also possibly as a power law with problem size. We corroborate our results by experiment and simulations and discuss the implications of these to practical annealers.
Anomalous symmetry breaking in classical two-dimensional diffusion of coherent atoms
NASA Astrophysics Data System (ADS)
Pugatch, Rami; Bhattacharyya, Dipankar; Amir, Ariel; Sagi, Yoav; Davidson, Nir
2014-03-01
The electromagnetically induced transparency (EIT) spectrum of atoms diffusing in and out of a narrow beam is measured and shown to manifest the two-dimensional δ-function anomaly in a classical setting. In the limit of small-area beams, the EIT line shape is independent of power and equal to the renormalized local density of states of a free-particle Hamiltonian. The measured spectra for different powers and beam sizes collapse onto a single universal curve with a characteristic logarithmic Van Hove singularity close to resonance.
Singularity Preserving Numerical Methods for Boundary Integral Equations
NASA Technical Reports Server (NTRS)
Kaneko, Hideaki (Principal Investigator)
1996-01-01
In the past twelve months (May 8, 1995 - May 8, 1996), under the cooperative agreement with the Division of Multidisciplinary Optimization at NASA Langley, we have accomplished the following five projects: a note on the finite element method with singular basis functions; numerical quadrature for weakly singular integrals; superconvergence of the degenerate kernel method; superconvergence of the iterated collocation method for Hammerstein equations; and a singularity preserving Galerkin method for Hammerstein equations with logarithmic kernel. This final report consists of five papers describing these projects. Each project is preceded by a brief abstract.
The effects of views of nature on autonomic control.
Gladwell, V F; Brown, D K; Barton, J L; Tarvainen, M P; Kuoppa, P; Pretty, J; Suddaby, J M; Sandercock, G R H
2012-09-01
Previous studies have shown that nature improves mood and self-esteem and reduces blood pressure. Walking within a natural environment has been suggested to alter autonomic nervous system control, but the mechanisms are not fully understood. Heart rate variability (HRV) is a non-invasive method of assessing autonomic control and can give an insight into vagal modulation. Our hypothesis was that viewing nature alone within a controlled laboratory environment would induce higher levels of HRV compared with built scenes. Heart rate (HR) and blood pressure (BP) were measured while participants viewed different scenes in a controlled environment. HRV was used to investigate alterations in autonomic activity, specifically parasympathetic activity. Each participant lay in the semi-supine position in a laboratory while we recorded 5 min (n = 29) of ECG, BP and respiration as they viewed two collections of slides (one containing nature views and the other built scenes). During viewing of nature, markers of parasympathetic activity were increased in both studies. Root mean square of successive differences increased by 4.2 ± 7.7 ms (t = 2.9, p = 0.008) and the natural logarithm of high-frequency power increased by 0.19 ± 0.36 ms² Hz⁻¹ (t = 2.9, p = 0.007) compared with built scenes. Mean HR and BP were not significantly altered. This study provides evidence that autonomic control of the heart is altered by the simple act of viewing natural scenes, with an increase in vagal activity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dumitru, Adrian; Skokov, Vladimir
The conventional and linearly polarized Weizsäcker-Williams gluon distributions at small x are defined from the two-point function of the gluon field in light-cone gauge. They appear in the cross section for dijet production in deep inelastic scattering at high energy. We determine these functions in the small-x limit from solutions of the JIMWLK evolution equations and show that they exhibit approximate geometric scaling. Also, we discuss the functional distributions of these WW gluon distributions over the JIMWLK ensemble at rapidity Y ~ 1/αs. These are determined by a 2d Liouville action for the logarithm of the covariant gauge function g² tr A⁺(q)A⁺(−q). For transverse momenta on the order of the saturation scale we observe large variations across configurations (evolution trajectories) of the linearly polarized distribution up to several times its average, and even to negative values.
Magnetic anisotropy behaviour of pyrrhotite as determined by low- and high-field experiments
NASA Astrophysics Data System (ADS)
Martín-Hernández, F.; Dekkers, M. J.; Bominaar-Silkens, I. M. A.; Maan, J. C.
2008-07-01
Here we report on the sources of magnetic anisotropy in pyrrhotite, an iron sulphide present in many rocks as an important carrier of the Natural Remanent Magnetization. While the magnetic hysteresis parameters of pyrrhotite are well known, the existing database concerning its anisotropy behaviour is patchy and ambiguous. Therefore, a collection of 11 seemingly single crystals of natural pyrrhotite was scrutinized. Before embarking on the anisotropy determinations, the set of single crystals was extensively characterized rock-magnetically by measuring Curie temperatures, hysteresis loops, IRM acquisition curves, and FORC diagrams (the latter three all at room temperature). First, the variation of the low-field susceptibility as a function of applied field and grain size was evaluated for fields ranging from 1 to 450 A m⁻¹. Existing grain-size-dependent data and the present larger crystals show a logarithmic grain size dependence. This enables estimating the grain size for unimodal pyrrhotite distributions in rocks. Measured trends are better fitted with an exponential function than with a Rayleigh Law style function. Based on the rock magnetic characterization and the behaviour of the anisotropy of magnetic susceptibility, six samples (of the original 11) were selected for the high-field anisotropy determinations within the basal plane. Those data were acquired with a torque cantilever-type magnetometer. As expected, most single crystals showed a pure 6-θ curve within their basal plane because of the easy axis configuration. In some crystals, however, lower harmonic terms overlapped the 6-θ term. This may be the dominant source of the observed variation in magnetic anisotropy properties. Torque data of three of the six samples were of sufficient quality to allow evaluation of K1. Re-evaluation of existing torque data, including the present newly derived determinations, yields for the anisotropy constant of pyrrhotite within the basal plane K1 = (2.7 ± 0.2) × 10⁴ J m⁻³. This is over an order of magnitude more precise than the sparse existing K1 data; only the value reported by Mikami and co-authors in 1959 agrees with the new determination. With this firmly established K1 value, meaningful anisotropy models are now possible for pyrrhotite-bearing rocks.
An analog gamma correction scheme for high dynamic range CMOS logarithmic image sensors.
Cao, Yuan; Pan, Xiaofang; Zhao, Xiaojin; Wu, Huisi
2014-12-15
In this paper, a novel analog gamma correction scheme for a logarithmic image sensor, dedicated to minimizing the quantization noise in high-dynamic-range applications, is presented. The proposed implementation exploits a non-linear voltage-controlled-oscillator (VCO) based analog-to-digital converter (ADC) to perform the gamma correction during the analog-to-digital conversion. As a result, the quantization noise does not increase while the high dynamic range of the logarithmic image sensor is preserved. Moreover, by combining the gamma correction with the analog-to-digital conversion, the silicon area and overall power consumption can be greatly reduced. The proposed gamma correction scheme is validated by the reported simulation results and by experimental results measured for our designed test structure, which is fabricated with a 0.35 μm standard complementary-metal-oxide-semiconductor (CMOS) process.
Path Loss Prediction Formula in Urban Area for the Fourth-Generation Mobile Communication Systems
NASA Astrophysics Data System (ADS)
Kitao, Koshiro; Ichitsubo, Shinichi
A site-general type prediction formula is created based on the measurement results in an urban area in Japan assuming that the prediction frequency range required for Fourth-Generation (4G) Mobile Communication Systems is from 3 to 6 GHz, the distance range is 0.1 to 3 km, and the base station (BS) height range is from 10 to 100 m. Based on the measurement results, the path loss (dB) is found to be proportional to the logarithm of the distance (m), the logarithm of the BS height (m), and the logarithm of the frequency (GHz). Furthermore, we examine the extension of existing formulae such as the Okumura-Hata, Walfisch-Ikegami, and Sakagami formulae for 4G systems and propose a prediction formula based on the Extended Sakagami formula.
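The abstract above reports that path loss in dB grows with the logarithms of distance, base-station height and frequency. The following minimal Python sketch illustrates a generic formula of that form; the coefficients are illustrative placeholders, not the fitted values of the proposed Extended Sakagami-based formula.

```python
import math

def path_loss_db(distance_m, bs_height_m, freq_ghz,
                 a=30.0, b=-20.0, c=20.0, d=40.0):
    """Generic site-general path loss of the form described above:
    L = a*log10(d) + b*log10(h_bs) + c*log10(f) + d.
    The coefficients are illustrative placeholders, not the paper's fit."""
    return (a * math.log10(distance_m)
            + b * math.log10(bs_height_m)
            + c * math.log10(freq_ghz)
            + d)

# Example: 1 km link, 50 m base station, 5 GHz carrier
print(f"{path_loss_db(1000, 50, 5):.1f} dB")
```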
An Analog Gamma Correction Scheme for High Dynamic Range CMOS Logarithmic Image Sensors
Cao, Yuan; Pan, Xiaofang; Zhao, Xiaojin; Wu, Huisi
2014-01-01
In this paper, a novel analog gamma correction scheme for a logarithmic image sensor, dedicated to minimizing the quantization noise in high-dynamic-range applications, is presented. The proposed implementation exploits a non-linear voltage-controlled-oscillator (VCO) based analog-to-digital converter (ADC) to perform the gamma correction during the analog-to-digital conversion. As a result, the quantization noise does not increase while the high dynamic range of the logarithmic image sensor is preserved. Moreover, by combining the gamma correction with the analog-to-digital conversion, the silicon area and overall power consumption can be greatly reduced. The proposed gamma correction scheme is validated by the reported simulation results and by experimental results measured for our designed test structure, which is fabricated with a 0.35 μm standard complementary-metal-oxide-semiconductor (CMOS) process. PMID:25517692
Book review: A new view on the species abundance distribution
DeAngelis, Donald L.
2018-01-01
The sampled relative abundances of species of a taxonomic group, whether birds, trees, or moths, in a natural community at a particular place vary in a way that suggests a consistent underlying pattern, referred to as the species abundance distribution (SAD). Preston [1] conjectured that the numbers of species, plotted as a histogram of logarithmic abundance classes called octaves, seemed to fit a lognormal distribution; that is, the histograms look like normal distributions, although truncated on the left-hand, or low-species-abundance, end. Although other specific curves for the SAD have been proposed in the literature, Preston’s lognormal distribution is widely cited in textbooks and has stimulated attempts at explanation. An important aspect of Preston’s lognormal distribution is the ‘veil line’, a vertical line drawn exactly at the point of the left-hand truncation in the distribution, to the left of which would be species missing from the sample. Dewdney rejects the lognormal conjecture. Instead, starting with the long-recognized fact that the number of species sampled from a community, when plotted as histograms against population abundance, resembles an inverted J, he presents a mathematical description of an alternative that he calls the ‘J distribution’, a hyperbolic density function truncated at both ends. When multiplied by species richness, R, it becomes the SAD of the sample.
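As an illustration of Preston's octave bookkeeping described above, the short Python sketch below bins a hypothetical sampled community into logarithmic (base-2) abundance classes; the lognormal community and the Poisson sampling step are assumptions chosen for demonstration only, not data or models from the book under review.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical community: species abundances drawn from a lognormal,
# then "sampled" so that rare species may be missed (Preston's veil).
true_abundances = rng.lognormal(mean=3.0, sigma=1.5, size=500)
sampled = rng.poisson(0.05 * true_abundances)      # small sampling fraction
sampled = sampled[sampled > 0]                     # unobserved species stay behind the veil

# Preston-style octaves: class k holds abundances in [2**k, 2**(k+1))
octaves = np.floor(np.log2(sampled)).astype(int)
classes, counts = np.unique(octaves, return_counts=True)
for k, n in zip(classes, counts):
    print(f"octave {k:2d} (abundance {2**k:>5d}-{2**(k + 1) - 1:>5d}): {n} species")
```

Plotting the counts per octave gives the histogram whose shape (lognormal-like versus inverted-J) is at issue in the review.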
NASA Astrophysics Data System (ADS)
Satoh, Katsuhiko
2013-08-01
The thermodynamic scaling of dynamic rotational properties and thermodynamic parameters in a nematic phase was investigated by molecular dynamics simulation using the Gay-Berne potential. A master curve for the relaxation time of flip-flop motion was obtained using thermodynamic scaling, and the dynamic property could be expressed solely as a function of TV^γτ, where T and V are the temperature and volume, respectively. The scaling parameter γτ was in excellent agreement with the thermodynamic parameter Γ, which is the slope of a line plotted for the logarithms of the temperature and volume at constant P2. This line was fairly linear, and as good as the line for p-azoxyanisole or the one obtained using the highly ordered small cluster model. The equivalence relation between Γ and γτ was compared with results obtained from the highly ordered small cluster model. The possibility of adapting the molecular model for the thermodynamic scaling of other dynamic rotational properties was also explored. The rotational diffusion constant and rotational viscosity coefficients, which were calculated using established theoretical and experimental expressions, were rescaled onto master curves with the same scaling parameters. The simulation illustrates the universal nature of the equivalence relation for liquid crystals.
Empirical scaling of the length of the longest increasing subsequences of random walks
NASA Astrophysics Data System (ADS)
Mendonça, J. Ricardo G.
2017-02-01
We provide Monte Carlo estimates of the scaling of the length L_n of the longest increasing subsequences of n-step random walks for several different distributions of step lengths, short and heavy-tailed. Our simulations indicate that, barring possible logarithmic corrections, L_n ∼ n^θ with the leading scaling exponent 0.60 ≲ θ ≲ 0.69 for the heavy-tailed distributions of step lengths examined, with values increasing as the distribution becomes more heavy-tailed, and θ ≃ 0.57 for distributions of finite variance, irrespective of the particular distribution. The results are consistent with existing rigorous bounds for θ, although in a somewhat surprising manner. For random walks with step lengths of finite variance, we conjecture that the correct asymptotic behavior of L_n is given by √n ln n, and also propose the form for the subleading asymptotics. The distribution of L_n was found to follow a simple scaling form with scaling functions that vary with θ. Accordingly, when the step lengths are of finite variance they seem to be universal. The nature of this scaling remains unclear, since we lack a working model, microscopic or hydrodynamic, for the behavior of the length of the longest increasing subsequences of random walks.
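For readers who want to reproduce the kind of Monte Carlo estimate described above, here is a minimal Python sketch that measures the longest increasing subsequence of Gaussian-step random walks (finite variance) and compares it with the conjectured √n ln n growth; the walk lengths and sample counts are illustrative and far smaller than in the study.

```python
import bisect, random, math

def lis_length(seq):
    """Length of the longest (strictly) increasing subsequence, O(n log n)
    via patience sorting; ties are irrelevant for continuous step sizes."""
    tails = []
    for x in seq:
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

def random_walk(n, rng):
    s, path = 0.0, []
    for _ in range(n):
        s += rng.gauss(0, 1)   # finite-variance steps
        path.append(s)
    return path

rng = random.Random(1)
for n in (1000, 4000, 16000):
    samples = [lis_length(random_walk(n, rng)) for _ in range(20)]
    mean_L = sum(samples) / len(samples)
    # last column: ratio to sqrt(n)*ln(n), roughly constant if the conjecture holds
    print(n, round(mean_L, 1), round(mean_L / (math.sqrt(n) * math.log(n)), 3))
```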
NASA Astrophysics Data System (ADS)
Douthett, Elwood (Jack) Moser, Jr.
1999-10-01
Cyclic configurations of white and black sites, together with convex (concave) functions used to weight path length, are investigated. The weights of the white set and black set are the sums of the weights of the paths connecting the white sites and black sites, respectively, and the weight between sets is the sum of the weights of the paths that connect sites opposite in color. It is shown that when the weights of all configurations of a fixed number of white and a fixed number of black sites are compared, minimum (maximum) weight of a white set, minimum (maximum) weight of a black set, and maximum (minimum) weight between sets occur simultaneously. Such configurations are called maximally even configurations. Similarly, the configurations whose weights are the opposite extremes occur simultaneously and are called minimally even configurations. Algorithms that generate these configurations are constructed and applied to the one-dimensional antiferromagnetic spin-1/2 Ising model. Next, the goodness of continued fractions as applied to musical intervals (frequency ratios and their base 2 logarithms) is explored. It is shown that, for the intermediate convergents between two consecutive principal convergents of an irrational number, the first half of the intermediate convergents are poorer approximations than the preceding principal convergent while the second half are better approximations; the goodness of a middle intermediate convergent can only be determined by calculation. These convergents are used to determine what equal-tempered systems have intervals that most closely approximate the musical fifth (p_n/q_n ≈ log_2(3/2)). The goodness of exponentiated convergents (2^(p_n/q_n) ≈ 3/2) is also investigated. It is shown that, with the exception of a middle convergent, the goodness of the exponential form agrees with that of its logarithmic counterpart. As in the case of the logarithmic form, the goodness of a middle intermediate convergent in the exponential form can only be determined by calculation. A Desirability Function is constructed that simultaneously measures how well multiple intervals fit in a given equal-tempered system. These measurements are made for octave (base 2) and tritave (base 3) systems. Combinatorial properties important to music modulation are considered. These considerations lead to the construction of maximally even scales as partitions of an equal-tempered system.
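The continued-fraction machinery described above can be illustrated with a short Python sketch that lists the principal convergents p_n/q_n of log_2(3/2); the familiar 7/12 convergent corresponds to the fifth of 12-tone equal temperament and 31/53 to the 53-tone system. Intermediate convergents and the Desirability Function of the dissertation are not reproduced here.

```python
from fractions import Fraction
import math

def convergents(x, n_terms=8):
    """Principal convergents p/q of the continued fraction of x."""
    a, t = [], x
    for _ in range(n_terms):
        ai = math.floor(t)
        a.append(ai)
        frac = t - ai
        if frac == 0:
            break
        t = 1.0 / frac
    convs = []
    h_prev, h = 1, a[0]
    k_prev, k = 0, 1
    convs.append(Fraction(h, k))
    for ai in a[1:]:
        h, h_prev = ai * h + h_prev, h
        k, k_prev = ai * k + k_prev, k
        convs.append(Fraction(h, k))
    return convs

fifth = math.log2(3 / 2)                 # the interval being approximated
print(f"log2(3/2) = {fifth:.6f}")
for c in convergents(fifth):
    print(c, "->", f"2^({c}) = {2 ** float(c):.5f}")
```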
Anharmonic effects in the quantum cluster equilibrium method
NASA Astrophysics Data System (ADS)
von Domaros, Michael; Perlt, Eva
2017-03-01
The well-established quantum cluster equilibrium (QCE) model provides a statistical thermodynamic framework to apply high-level ab initio calculations of finite cluster structures to macroscopic liquid phases using the partition function. So far, the harmonic approximation has been applied throughout the calculations. In this article, we apply an important correction in the evaluation of the one-particle partition function and account for anharmonicity. Therefore, we implemented an analytical approximation to the Morse partition function and the derivatives of its logarithm with respect to temperature, which are required for the evaluation of thermodynamic quantities. This anharmonic QCE approach has been applied to liquid hydrogen chloride, and cluster distributions, the molar volume, the volumetric thermal expansion coefficient, and the isobaric heat capacity have been calculated. An improved description of all properties is observed if anharmonic effects are considered.
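As a rough numerical counterpart to the anharmonic correction described above, the sketch below sums Morse-oscillator term values directly to obtain a vibrational partition function and the temperature derivative of its logarithm. This brute-force summation is a stand-in for, not a reproduction of, the analytical approximation implemented by the authors, and the HCl-like spectroscopic constants are assumed illustrative values.

```python
import math

def morse_levels(we, wexe):
    """Bound-state term values G(n) = we*(n+1/2) - wexe*(n+1/2)^2 (cm^-1),
    kept only while they keep increasing (i.e. below dissociation)."""
    levels, n = [], 0
    while True:
        g = we * (n + 0.5) - wexe * (n + 0.5) ** 2
        if levels and g <= levels[-1]:
            break
        levels.append(g)
        n += 1
    return levels

def ln_q_vib(T, levels, kB_cm=0.6950348):   # Boltzmann constant in cm^-1 / K
    e0 = levels[0]                          # measure energies from the zero-point level
    return math.log(sum(math.exp(-(g - e0) / (kB_cm * T)) for g in levels))

# Spectroscopic constants in the ballpark of HCl (assumed, for illustration)
we, wexe = 2990.0, 52.8
T, dT = 300.0, 0.01
lvl = morse_levels(we, wexe)
dlnq_dT = (ln_q_vib(T + dT, lvl) - ln_q_vib(T - dT, lvl)) / (2 * dT)
print(len(lvl), "bound levels; d ln q / dT =", f"{dlnq_dT:.3e}", "K^-1")
```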
Logarithmic Sobolev Inequalities on Path Spaces Over Riemannian Manifolds
NASA Astrophysics Data System (ADS)
Hsu, Elton P.
Let Wo(M) be the space of paths of unit time length on a connected, complete Riemannian manifold M such that γ(0) =o, a fixed point on M, and ν the Wiener measure on Wo(M) (the law of Brownian motion on M starting at o).If the Ricci curvature is bounded by c, then the following logarithmic Sobolev inequality holds:
ERIC Educational Resources Information Center
Marston, Doug; Deno, Stanley L.
The accuracy of predictions of future student performance on the basis of graphing data on semi-logarithmic charts and equal interval graphs was examined. All 83 low-achieving students in grades 3 to 6 read randomly-selected lists of words from the Harris-Jacobson Word List for 1 minute. The number of words read correctly and words read…
White, Sonia L J; Szűcs, Dénes
2012-01-04
The objective of this study was to scrutinize number line estimation behaviors displayed by children in mathematics classrooms during the first three years of schooling. We extend existing research by not only mapping potential logarithmic-linear shifts but also providing a new perspective by studying in detail the estimation strategies of individual target digits within a number range familiar to children. Typically developing children (n = 67) from Years 1-3 completed a number-to-position numerical estimation task (0-20 number line). Estimation behaviors were first analyzed via logarithmic and linear regression modeling. Subsequently, using an analysis of variance we compared the estimation accuracy of each digit, thus identifying target digits that were estimated with the assistance of an arithmetic strategy. Our results further confirm a developmental logarithmic-linear shift when utilizing regression modeling; however, uniquely, we have identified that children employ variable strategies when completing numerical estimation, with levels of strategy advancing with development. In terms of the existing cognitive research, this strategy factor highlights the limitations of any regression modeling approach, or alternatively, it could underpin the developmental time course of the logarithmic-linear shift. Future studies need to systematically investigate this relationship and also consider the implications for educational practice.
2012-01-01
Background The objective of this study was to scrutinize number line estimation behaviors displayed by children in mathematics classrooms during the first three years of schooling. We extend existing research by not only mapping potential logarithmic-linear shifts but also providing a new perspective by studying in detail the estimation strategies of individual target digits within a number range familiar to children. Methods Typically developing children (n = 67) from Years 1-3 completed a number-to-position numerical estimation task (0-20 number line). Estimation behaviors were first analyzed via logarithmic and linear regression modeling. Subsequently, using an analysis of variance we compared the estimation accuracy of each digit, thus identifying target digits that were estimated with the assistance of an arithmetic strategy. Results Our results further confirm a developmental logarithmic-linear shift when utilizing regression modeling; however, uniquely, we have identified that children employ variable strategies when completing numerical estimation, with levels of strategy advancing with development. Conclusion In terms of the existing cognitive research, this strategy factor highlights the limitations of any regression modeling approach, or alternatively, it could underpin the developmental time course of the logarithmic-linear shift. Future studies need to systematically investigate this relationship and also consider the implications for educational practice. PMID:22217191
Wallsh, Josh O; Gallemore, Ron P; Taban, Mehran; Hu, Charles; Sharareh, Behnam
2013-01-01
To assess the safety and efficacy of a modified technique for pars plana placement of the Ahmed valve in combination with pars plana vitrectomy in the treatment of glaucoma associated with posterior segment disease. Thirty-nine eyes with glaucoma associated with posterior segment disease underwent pars plana vitrectomy combined with Ahmed valve placement. All valves were placed in the pars plana using a modified technique, without the pars plana clip, and using a scleral patch graft. The 24 eyes diagnosed with neovascular glaucoma had an improvement in intraocular pressure from 37.6 mmHg to 13.8 mmHg and best-corrected visual acuity from 2.13 logarithm of minimum angle of resolution to 1.40 logarithm of minimum angle of resolution. Fifteen eyes diagnosed with steroid-induced glaucoma had an improvement in intraocular pressure from 27.9 mmHg to 14.1 mmHg and best-corrected visual acuity from 1.38 logarithm of minimum angle of resolution to 1.13 logarithm of minimum angle of resolution. Complications included four cases of cystic bleb formation and one case of choroidal detachment and explantation for hypotony. Ahmed valve placement through the pars plana during vitrectomy is an effective option for managing complex cases of glaucoma without the use of the pars plana clip.
A Planar Microfluidic Mixer Based on Logarithmic Spirals
Scherr, Thomas; Quitadamo, Christian; Tesvich, Preston; Park, Daniel Sang-Won; Tiersch, Terrence; Hayes, Daniel; Choi, Jin-Woo; Nandakumar, Krishnaswamy
2013-01-01
A passive, planar micromixer design based on logarithmic spirals is presented. The device was fabricated using polydimethylsiloxane soft photolithography techniques, and mixing performance was characterized via numerical simulation and fluorescent microscopy. Mixing efficiency initially declined as Reynolds number increased, and this trend continued until a Reynolds number of 15 where a minimum was reached at 53%. Mixing efficiency then began to increase reaching a maximum mixing efficiency of 86% at Re = 67. Three-dimensional simulations of fluid mixing in this design were compared to other planar geometries such as the Archimedes spiral and Meandering-S mixers. The implementation of logarithmic curvature offers several unique advantages that enhance mixing, namely a variable cross-sectional area and a logarithmically varying radius of curvature that creates 3-D Dean vortices. These flow phenomena were observed in simulations with multilayered fluid folding and validated with confocal microscopy. This design provides improved mixing performance over a broader range of Reynolds numbers than other reported planar mixers, all while avoiding external force fields, more complicated fabrication processes, and the introduction of flow obstructions or cavities that may unintentionally affect sensitive or particulate-containing samples. Due to the planar design requiring only single-step lithographic features, this compact geometry could be easily implemented into existing micro-total analysis systems requiring effective rapid mixing. PMID:23956497
A planar microfluidic mixer based on logarithmic spirals
NASA Astrophysics Data System (ADS)
Scherr, Thomas; Quitadamo, Christian; Tesvich, Preston; Sang-Won Park, Daniel; Tiersch, Terrence; Hayes, Daniel; Choi, Jin-Woo; Nandakumar, Krishnaswamy; Monroe, W. Todd
2012-05-01
A passive, planar micromixer design based on logarithmic spirals is presented. The device was fabricated using polydimethylsiloxane soft photolithography techniques, and mixing performance was characterized via numerical simulation and fluorescent microscopy. Mixing efficiency initially declined as the Reynolds number increased, and this trend continued until a Reynolds number of 15 where a minimum was reached at 53%. Mixing efficiency then began to increase reaching a maximum mixing efficiency of 86% at Re = 67. Three-dimensional (3D) simulations of fluid mixing in this design were compared to other planar geometries such as the Archimedes spiral and Meandering-S mixers. The implementation of logarithmic curvature offers several unique advantages that enhance mixing, namely a variable cross-sectional area and a logarithmically varying radius of curvature that creates 3D Dean vortices. These flow phenomena were observed in simulations with multilayered fluid folding and validated with confocal microscopy. This design provides improved mixing performance over a broader range of Reynolds numbers than other reported planar mixers, all while avoiding external force fields, more complicated fabrication processes and the introduction of flow obstructions or cavities that may unintentionally affect sensitive or particulate-containing samples. Due to the planar design requiring only single-step lithographic features, this compact geometry could be easily implemented into existing micro-total analysis systems requiring effective rapid mixing.
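To make the geometry concrete, the following Python sketch generates the centerline of a logarithmic spiral channel, whose radius of curvature grows in proportion to the local radius; the spiral parameters are illustrative and are not the fabricated device dimensions reported in the paper.

```python
import math

def log_spiral(r0=0.2e-3, b=0.25, turns=2.0, n_pts=400):
    """Centerline (x, y) of a logarithmic spiral r = r0 * exp(b * theta).
    r0, b and the number of turns are illustrative values only."""
    pts = []
    for i in range(n_pts):
        theta = 2 * math.pi * turns * i / (n_pts - 1)
        r = r0 * math.exp(b * theta)
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

pts = log_spiral()
# For a logarithmic spiral the radius of curvature is R_c = r * sqrt(1 + b^2),
# so curvature varies continuously along the channel.
r_start = math.hypot(*pts[0])
r_end = math.hypot(*pts[-1])
print(f"channel radius grows from {r_start*1e3:.2f} mm to {r_end*1e3:.2f} mm")
```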
Coherence and entanglement measures based on Rényi relative entropies
NASA Astrophysics Data System (ADS)
Zhu, Huangjun; Hayashi, Masahito; Chen, Lin
2017-11-01
We study systematically resource measures of coherence and entanglement based on Rényi relative entropies, which include the logarithmic robustness of coherence, geometric coherence, and conventional relative entropy of coherence together with their entanglement analogues. First, we show that each Rényi relative entropy of coherence is equal to the corresponding Rényi relative entropy of entanglement for any maximally correlated state. By virtue of this observation, we establish a simple operational connection between entanglement measures and coherence measures based on Rényi relative entropies. We then prove that all these coherence measures, including the logarithmic robustness of coherence, are additive. Accordingly, all these entanglement measures are additive for maximally correlated states. In addition, we derive analytical formulas for Rényi relative entropies of entanglement of maximally correlated states and bipartite pure states, which reproduce a number of classic results on the relative entropy of entanglement and logarithmic robustness of entanglement in a unified framework. Several nontrivial bounds for Rényi relative entropies of coherence (entanglement) are further derived, which improve over results known previously. Moreover, we determine all states whose relative entropy of coherence is equal to the logarithmic robustness of coherence. As an application, we provide an upper bound for the exact coherence distillation rate, which is saturated for pure states.
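As a numerical companion to the abstract above, the sketch below evaluates a Petz-type Rényi relative entropy between a noisy coherent qubit state and its dephased (incoherent) counterpart; this is only one member of the Rényi families the paper treats, the base-2 logarithm is a convention chosen here, and the example state is an assumption for illustration.

```python
import numpy as np

def mat_power(rho, p):
    """Hermitian matrix power via eigendecomposition (assumes full rank)."""
    vals, vecs = np.linalg.eigh(rho)
    return (vecs * vals**p) @ vecs.conj().T

def petz_renyi_relative_entropy(rho, sigma, alpha):
    """Petz-type Renyi relative entropy
    D_alpha(rho || sigma) = log2( Tr[rho^alpha sigma^(1-alpha)] ) / (alpha - 1)."""
    val = np.trace(mat_power(rho, alpha) @ mat_power(sigma, 1 - alpha)).real
    return np.log2(val) / (alpha - 1)

# Example: a coherent qubit state mixed with a little white noise, compared
# with its diagonal (incoherent) part.
psi = np.array([[np.sqrt(0.7)], [np.sqrt(0.3)]])
rho = 0.95 * (psi @ psi.conj().T) + 0.05 * np.eye(2) / 2   # full rank
sigma = np.diag(np.diag(rho))                              # dephased state
for alpha in (0.5, 0.9, 1.5, 2.0):
    print(alpha, round(petz_renyi_relative_entropy(rho, sigma, alpha), 4))
```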
Zhao, Henan; Bryant, Garnett W.; Griffin, Wesley; Terrill, Judith E.; Chen, Jian
2017-01-01
We designed and evaluated SplitVectors, a new vector field display approach to help scientists perform new discrimination tasks on large-magnitude-range scientific data shown in three-dimensional (3D) visualization environments. SplitVectors uses scientific notation to display vector magnitude, thus improving legibility. We present an empirical study comparing the SplitVectors approach with three other approaches - direct linear representation, logarithmic, and text display commonly used in scientific visualizations. Twenty participants performed three domain analysis tasks: reading numerical values (a discrimination task), finding the ratio between values (a discrimination task), and finding the larger of two vectors (a pattern detection task). Participants used both mono and stereo conditions. Our results suggest the following: (1) SplitVectors improve accuracy by about 10 times compared to linear mapping and by four times to logarithmic in discrimination tasks; (2) SplitVectors have no significant differences from the textual display approach, but reduce cluttering in the scene; (3) SplitVectors and textual display are less sensitive to data scale than linear and logarithmic approaches; (4) using logarithmic can be problematic as participants' confidence was as high as directly reading from the textual display, but their accuracy was poor; and (5) Stereoscopy improved performance, especially in more challenging discrimination tasks. PMID:28113469
Henan Zhao; Bryant, Garnett W; Griffin, Wesley; Terrill, Judith E; Jian Chen
2017-06-01
We designed and evaluated SplitVectors, a new vector field display approach to help scientists perform new discrimination tasks on large-magnitude-range scientific data shown in three-dimensional (3D) visualization environments. SplitVectors uses scientific notation to display vector magnitude, thus improving legibility. We present an empirical study comparing the SplitVectors approach with three other approaches - direct linear representation, logarithmic, and text display commonly used in scientific visualizations. Twenty participants performed three domain analysis tasks: reading numerical values (a discrimination task), finding the ratio between values (a discrimination task), and finding the larger of two vectors (a pattern detection task). Participants used both mono and stereo conditions. Our results suggest the following: (1) SplitVectors improve accuracy by about 10 times compared to linear mapping and by four times to logarithmic in discrimination tasks; (2) SplitVectors have no significant differences from the textual display approach, but reduce cluttering in the scene; (3) SplitVectors and textual display are less sensitive to data scale than linear and logarithmic approaches; (4) using logarithmic can be problematic as participants' confidence was as high as directly reading from the textual display, but their accuracy was poor; and (5) Stereoscopy improved performance, especially in more challenging discrimination tasks.
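The core encoding idea, splitting a vector magnitude into a scientific-notation mantissa and exponent, can be sketched in a few lines of Python; the glyph design and 3D rendering of SplitVectors are not reproduced here, and the example vectors are arbitrary.

```python
import math

def split_vector_magnitude(v):
    """Split |v| into (mantissa, exponent) with 1 <= mantissa < 10,
    mimicking the scientific-notation encoding described above."""
    mag = math.sqrt(sum(c * c for c in v))
    if mag == 0.0:
        return 0.0, 0
    exponent = math.floor(math.log10(mag))
    mantissa = mag / 10 ** exponent
    return mantissa, exponent

for vec in [(3.0, 4.0, 0.0), (0.002, 0.001, 0.0), (1.2e5, -3.4e5, 2.2e5)]:
    m, e = split_vector_magnitude(vec)
    print(vec, "->", f"{m:.2f} x 10^{e}")
```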
NASA Astrophysics Data System (ADS)
Dok, A.; Fukuoka, H.
2010-12-01
Landslides are complex geo-disasters that frequently occur due to several contributing causes but a single trigger, such as an earthquake, heavy rainfall, or another natural phenomenon. A slope failure seldom occurs without prior creep deformation. As found by Fukuzono (1985) and Saito (1965) from graphical analysis of extensometer monitoring data in large-scale flume tests for landslide studies, the logarithm of the acceleration of surface displacement is proportional to the logarithm of its velocity immediately before failure. This is expressed as d²x/dt² = A(dx/dt)^α, where x is surface displacement, t is time, and A and α are constants. Fukuzono (1985, 1989) also proposed a simple method of predicting the time of failure by means of the inverse velocity (1/v). The inverse-velocity curve is concave for 1 < α < 2, linear for α = 2, and convex for α > 2. Recently, Minamitani (2007) investigated the mechanism of tertiary creep deformation for landslide failure-time prediction under increasing shear stress in order to understand the basis of the empirical relationship found by Fukuzono, and reported a strong relationship between the constants A and α, expressed as α = 0.1781A + 1.814. To deepen this understanding, this study examines the mechanism of landslides in tropical soils with a ring shear apparatus (developed by the Disaster Prevention Research Institute, DPRI), in the framework of tertiary creep theory, to help issue warnings for rainfall-induced landslides. Back (pore-water) pressure control tests were performed under combinations of normal stress and shear stress with changing pore-water pressure to simulate the condition of a potential sliding surface during heavy rainfall; such a test series, in particular the application of cyclic and actual groundwater-change patterns to the soils, has not been reported before. A series of back-pressure control tests was carried out with a stress-controlled ring shear apparatus capable of controlling pore pressure, including monotonic increases of pore pressure at a constant rate. A mixture of sand and clay was used to simulate an actual potential sliding surface. Repeated shear tests (1-5 repetitions per specimen) were also conducted to reproduce reactivated landslide motion. The tests succeeded in reproducing tertiary creep to failure; the logarithm of acceleration versus logarithm of velocity relation showed a concave 1/v trend (on the safer side), and the α values obtained (0.3-0.7) were much smaller than in the works of Fukuzono and Minamitani, for reasons that remain unclear. Moreover, the repeated shear trials showed scatter in the α values, with no significant trend of change.
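The inverse-velocity method mentioned above (linear 1/v decay for α = 2) lends itself to a compact numerical sketch; the synthetic creep record and noise level below are assumptions for illustration, not data from the ring shear tests.

```python
import numpy as np

def predicted_failure_time(t, v):
    """Fukuzono-style inverse-velocity prediction (assumes alpha = 2, where
    1/v decays linearly to zero at failure): fit 1/v = a*t + b and return
    the zero crossing t_f = -b/a."""
    inv_v = 1.0 / np.asarray(v)
    a, b = np.polyfit(np.asarray(t), inv_v, 1)
    return -b / a

# Synthetic creep record: v = C / (t_f - t) with t_f = 100 h (illustrative)
t_f_true, C = 100.0, 5.0
t = np.linspace(60.0, 95.0, 30)
v = C / (t_f_true - t)
v *= 1.0 + 0.02 * np.random.default_rng(0).standard_normal(v.size)  # measurement noise
print(f"predicted failure time: {predicted_failure_time(t, v):.1f} h")
```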
Phenomenology of single-inclusive jet production with jet radius and threshold resummation
NASA Astrophysics Data System (ADS)
Liu, Xiaohui; Moch, Sven-Olaf; Ringer, Felix
2018-03-01
We perform a detailed study of inclusive jet production cross sections at the LHC and compare the QCD theory predictions based on the recently developed formalism for threshold and jet radius joint resummation at next-to-leading logarithmic accuracy to inclusive jet data collected by the CMS Collaboration at √S = 7 and 13 TeV. We compute the cross sections at next-to-leading order in QCD with and without the joint resummation for different choices of jet radii R and observe that the joint resummation leads to crucial improvements in the description of the data. Comprehensive studies with different parton distribution functions demonstrate the necessity of considering the joint resummation in fits of those functions based on the LHC jet data.
Height growth of solutions and a discrete Painlevé equation
NASA Astrophysics Data System (ADS)
Al-Ghassani, A.; Halburd, R. G.
2015-07-01
Consider the discrete equation where the right side is of degree two in y_n and where the coefficients a_n, b_n and c_n are rational functions of n with rational coefficients. Suppose that there is a solution such that for all sufficiently large n, y_n ∈ ℚ and the height of y_n dominates the height of the coefficient functions a_n, b_n and c_n. We show that if the logarithmic height of y_n grows no faster than a power of n then either the equation is a well-known discrete Painlevé equation dPII or its autonomous version, or y_n is also an admissible solution of a discrete Riccati equation. This provides further evidence that slow height growth is a good detector of integrability.
Local patches of turbulent boundary layer behaviour in classical-state vertical natural convection
NASA Astrophysics Data System (ADS)
Ng, Chong Shen; Ooi, Andrew; Lohse, Detlef; Chung, Daniel
2016-11-01
We present evidence of local patches in vertical natural convection that are reminiscent of Prandtl-von Kármán turbulent boundary layers, for Rayleigh numbers 10^5-10^9 and Prandtl number 0.709. These local patches exist in the classical state, where boundary layers exhibit a laminar-like Prandtl-Blasius-Pohlhausen scaling at the global level, and are distinguished by regions dominated by high shear and low buoyancy flux. Within these patches, the locally averaged mean temperature profiles appear to obey a log-law with the universal constants of Yaglom (1979). We find that the local Nusselt number versus Rayleigh number scaling relation agrees with the logarithmically corrected power-law scaling predicted in the ultimate state of thermal convection, with an exponent consistent with Rayleigh-Bénard convection and Taylor-Couette flows. The local patches grow in size with increasing Rayleigh number, suggesting that the transition from the classical state to the ultimate state is characterised by increasingly larger patches of the turbulent boundary layers.
Stochastic nature of series of waiting times.
Anvari, Mehrnaz; Aghamohammadi, Cina; Dashti-Naserabadi, H; Salehi, E; Behjat, E; Qorbani, M; Nezhad, M Khazaei; Zirak, M; Hadjihosseini, Ali; Peinke, Joachim; Tabar, M Reza Rahimi
2013-06-01
Although fluctuations in the waiting time series have been studied for a long time, some important issues such as its long-range memory and its stochastic features in the presence of nonstationarity have so far remained unstudied. Here we find that the "waiting times" series for a given increment level have long-range correlations with Hurst exponents belonging to the interval 1/2
Stochastic nature of series of waiting times
NASA Astrophysics Data System (ADS)
Anvari, Mehrnaz; Aghamohammadi, Cina; Dashti-Naserabadi, H.; Salehi, E.; Behjat, E.; Qorbani, M.; Khazaei Nezhad, M.; Zirak, M.; Hadjihosseini, Ali; Peinke, Joachim; Tabar, M. Reza Rahimi
2013-06-01
Although fluctuations in the waiting time series have been studied for a long time, some important issues such as its long-range memory and its stochastic features in the presence of nonstationarity have so far remained unstudied. Here we find that the “waiting times” series for a given increment level have long-range correlations with Hurst exponents belonging to the interval 1/2
Elimination of endrin by mallard ducks
Heinz, G.H.; Johnson, R.W.
1979-01-01
Endrin is very toxic to birds and has been implicated in the deaths of birds in nature. However, it is not known how rapidly birds eliminate endrin, a factor important in determining how much is accumulated in tissues. In this study, the loss rate of endrin was followed for 64 days in mallard (Anas platyrhynchos) drakes that had been fed 20 ppm endrin for 13 days. The loss from carcass and blood was described by the equation Y = a·e^(b√x), where Y = the concentration of endrin in ppm, a = the concentration at day 0, e = the base of natural logarithms, b = the first-order rate constant for the elimination process, and x = the number of days after cessation of endrin treatment. Endrin was lost rapidly at first; concentrations in carcasses on a wet-weight basis decreased by 50% in the first 3 days. Thereafter, endrin was eliminated more slowly; elimination of 50% of the remainder required 8.9 days, and it took 32.9 days to lose 90% of the original amount.
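The elimination model Y = a·e^(b√x) can be explored with a short Python sketch; the rate constant below is not the fitted value from the study but is chosen, as an assumption, so that 50% is lost in the first 3 days, which then roughly reproduces the reported ~33 days for 90% loss.

```python
import math

def fraction_remaining(days, b):
    """Fraction of the day-0 endrin concentration remaining after `days`,
    from Y = a * exp(b * sqrt(x)); b < 0 for elimination."""
    return math.exp(b * math.sqrt(days))

def days_to_fraction(frac, b):
    """Invert the model: time (days) until Y/a drops to `frac`."""
    return (math.log(frac) / b) ** 2

# Assumed rate constant: 50% lost in the first 3 days => b = ln(0.5)/sqrt(3)
b = math.log(0.5) / math.sqrt(3)
print(f"b = {b:.3f} per sqrt(day)")
print(f"50% remaining after {days_to_fraction(0.50, b):.1f} days")
print(f"90% eliminated after {days_to_fraction(0.10, b):.1f} days")
```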
NASA Astrophysics Data System (ADS)
Davidson, Eric A.; Verchot, Louis V.
2000-12-01
Because several soil properties and processes affect emissions of nitric oxide (NO) and nitrous oxide (N2O) from soils, it has been difficult to develop effective and robust algorithms to predict emissions of these gases in biogeochemical models. The conceptual "hole-in-the-pipe" (HIP) model has been used effectively to interpret results of numerous studies, but the ranges of climatic conditions and soil properties are often relatively narrow for each individual study. The Trace Gas Network (TRAGNET) database offers a unique opportunity to test the validity of one manifestation of the HIP model across a broad range of sites, including temperate and tropical climates, grasslands and forests, and native vegetation and agricultural crops. The logarithm of the sum of NO + N2O emissions was positively and significantly correlated with the logarithm of the sum of extractable soil NH4+ + NO3-. The logarithm of the ratio of NO:N2O emissions was negatively and significantly correlated with water-filled pore space (WFPS). These analyses confirm the applicability of the HIP model concept, that indices of soil N availability correlate with the sum of NO+N2O emissions, while soil water content is a strong and robust controller of the ratio of NO:N2O emissions. However, these parameterizations have only broad-brush accuracy because of unaccounted variation among studies in the soil depths where gas production occurs, where soil N and water are measured, and other factors. Although accurate predictions at individual sites may still require site-specific parameterization of these empirical functions, the parameterizations presented here, particularly the one for WFPS, may be appropriate for global biogeochemical modeling. Moreover, this integration of data sets demonstrates the broad ranging applicability of the HIP conceptual approach for understanding soil emissions of NO and N2O.
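The two parameterizations described above are ordinary least-squares fits on logarithmic scales; the Python sketch below shows the form of such fits on synthetic site data (the numbers are invented for illustration and are not TRAGNET values).

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic site data (illustrative only)
n_sites = 40
soil_n = rng.lognormal(1.0, 1.0, n_sites)                         # extractable NH4+ + NO3-
wfps = rng.uniform(0.2, 0.9, n_sites)                             # water-filled pore space
total_flux = 0.5 * soil_n**0.8 * rng.lognormal(0, 0.3, n_sites)   # NO + N2O emissions
ratio = 10 ** (1.5 - 2.5 * wfps) * rng.lognormal(0, 0.3, n_sites) # NO : N2O ratio

# HIP-style parameterizations: log(total flux) vs log(soil N),
# and log(NO:N2O ratio) vs WFPS
slope1, intercept1 = np.polyfit(np.log10(soil_n), np.log10(total_flux), 1)
slope2, intercept2 = np.polyfit(wfps, np.log10(ratio), 1)
print(f"log10(NO+N2O) ~ {slope1:.2f}*log10(N) + {intercept1:.2f}")
print(f"log10(NO:N2O) ~ {slope2:.2f}*WFPS + {intercept2:.2f}")
```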
Rock Failure Analysis Based on a Coupled Elastoplastic-Logarithmic Damage Model
NASA Astrophysics Data System (ADS)
Abdia, M.; Molladavoodi, H.; Salarirad, H.
2017-12-01
The rock materials surrounding underground excavations typically demonstrate a nonlinear mechanical response and irreversible behavior, in particular under high in-situ stress states. The dominant causes of irreversible behavior are plastic flow and the damage process. Plastic flow is controlled by the presence of local shear stresses which cause frictional sliding. During this process, the net number of bonds remains practically unchanged. The overall macroscopic consequence of plastic flow is that the elastic properties (e.g. the stiffness of the material) are insensitive to this type of irreversible change. The main cause of irreversible changes in quasi-brittle materials such as rock is the damage process occurring within the material. From a microscopic viewpoint, damage initiates with the nucleation and growth of microcracks. When the microcrack length reaches a critical value, the microcracks coalesce and, finally, localized meso-cracks appear. The macroscopic and phenomenological consequences of the damage process are stiffness degradation, dilatation and a softening response. In this paper, a coupled elastoplastic-logarithmic damage model was used to simulate the irreversible deformations and stiffness degradation of rock materials under loading. In this model, the damage evolution and plastic flow rules were formulated in the framework of irreversible thermodynamics principles. To take into account stiffness degradation and softening in the post-peak region, a logarithmic damage variable was implemented. Also, a plastic model with a Drucker-Prager yield function was used to model plastic strains. Then, an algorithm was proposed to calculate the numerical steps based on the proposed coupled plastic and damage constitutive model. The developed model was programmed in a VC++ environment and used as a separate, new constitutive model in the DEM code UDEC. Finally, the experimental behavior of Oolitic limestone was simulated based on the developed model. The irreversible strains, softening and stiffness degradation were reproduced in the numerical results. Furthermore, the confinement pressure dependency of the rock behavior was simulated in accordance with experimental observations.
NASA Astrophysics Data System (ADS)
Wang, Xiaoyu; Lu, Zhun
2018-03-01
We investigate the Sivers asymmetry in the pion-induced singly polarized Drell-Yan process in the theoretical framework of transverse momentum dependent (TMD) factorization up to next-to-leading logarithmic order in QCD. Within the TMD evolution formalism of parton distribution functions, the recently extracted nonperturbative Sudakov form factors for the pion distribution functions and for the Sivers function of the proton are applied to numerically estimate the Sivers asymmetry in π-p Drell-Yan at the kinematics of COMPASS at CERN. In the low b region, the Sivers function in b-space can be expressed as the convolution of the perturbatively calculable hard coefficients and the corresponding collinear correlation function, of which the Qiu-Sterman function is the most relevant one. The effect of the energy-scale dependence of the Qiu-Sterman function on the asymmetry is also studied. We find that our prediction for the Sivers asymmetries as functions of x_p, x_π, x_F and q_⊥ is consistent with the recent COMPASS measurement.
A viable logarithmic f(R) model for inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amin, M.; Khalil, S.; Salah, M.
2016-08-18
Inflation in the framework of f(R) modified gravity is revisited. We study the conditions that f(R) should satisfy in order to lead to a viable inflationary model in the original form and in the Einstein frame. Based on these criteria we propose a new logarithmic model as a potential candidate for f(R) theories aiming to describe inflation consistent with observations from Planck satellite (2015). The model predicts scalar spectral index 0.9615
NASA Astrophysics Data System (ADS)
Wu, Jun; Gygi, François
2012-06-01
We present a simplified implementation of the non-local van der Waals correlation functional introduced by Dion et al. [Phys. Rev. Lett. 92, 246401 (2004)] and reformulated by Román-Pérez et al. [Phys. Rev. Lett. 103, 096102 (2009)]. The proposed numerical approach removes the logarithmic singularity of the kernel function. Complete expressions of the self-consistent correlation potential and of the stress tensor are given. Combined with various choices of exchange functionals, five versions of van der Waals density functionals are implemented. Applications to the computation of the interaction energy of the benzene-water complex and to the computation of the equilibrium cell parameters of the benzene crystal are presented. As an example of crystal structure calculation involving a mixture of hydrogen bonding and dispersion interactions, we compute the equilibrium structure of two polymorphs of aspirin (2-acetoxybenzoic acid, C9H8O4) in the P21/c monoclinic structure.
Scaling in the vicinity of the four-state Potts fixed point
NASA Astrophysics Data System (ADS)
Blöte, H. W. J.; Guo, Wenan; Nightingale, M. P.
2017-08-01
We study a self-dual generalization of the Baxter-Wu model, employing results obtained by transfer matrix calculations of the magnetic scaling dimension and the free energy. While the pure critical Baxter-Wu model displays the critical behavior of the four-state Potts fixed point in two dimensions, in the sense that logarithmic corrections are absent, the introduction of different couplings in the up- and down triangles moves the model away from this fixed point, so that logarithmic corrections appear. Real couplings move the model into the first-order range, away from the behavior displayed by the nearest-neighbor, four-state Potts model. We also use complex couplings, which bring the model in the opposite direction characterized by the same type of logarithmic corrections as present in the four-state Potts model. Our finite-size analysis confirms in detail the existing renormalization theory describing the immediate vicinity of the four-state Potts fixed point.
Optical Processing of Speckle Images with Bacteriorhodopsin for Pattern Recognition
NASA Technical Reports Server (NTRS)
Downie, John D.; Tucker, Deanne (Technical Monitor)
1994-01-01
Logarithmic processing of images with multiplicative noise characteristics can be utilized to transform the image into one with an additive noise distribution. This simplifies subsequent image processing steps for applications such as image restoration or correlation for pattern recognition. One particularly common form of multiplicative noise is speckle, for which the logarithmic operation not only produces additive noise, but also makes it of constant variance (signal-independent). We examine the optical transmission properties of some bacteriorhodopsin films here and find them well suited to implement such a pointwise logarithmic transformation optically in a parallel fashion. We present experimental results of the optical conversion of speckle images into transformed images with additive, signal-independent noise statistics using the real-time photochromic properties of bacteriorhodopsin. We provide an example of improved correlation performance in terms of correlation peak signal-to-noise for such a transformed speckle image.
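A minimal numerical illustration of the pointwise logarithmic transformation described above: fully developed speckle is modeled as unit-mean multiplicative exponential noise, and after taking the logarithm the noise standard deviation is the same for dark and bright regions, i.e. additive and signal-independent. This is a statistical sketch only, not a model of the bacteriorhodopsin optics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two image regions with different mean intensity, each corrupted by fully
# developed speckle (multiplicative, unit-mean exponential noise).
dark = 10.0 * rng.exponential(1.0, 100_000)
bright = 100.0 * rng.exponential(1.0, 100_000)

for name, region in [("dark", dark), ("bright", bright)]:
    print(f"{name:>6}: intensity std = {region.std():8.2f}, "
          f"log-intensity std = {np.log(region).std():.3f}")
```

The intensity-domain standard deviation scales with the mean, while the log-domain standard deviation is essentially constant for both regions.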
NASA Astrophysics Data System (ADS)
Weng, Tongfeng; Zhang, Jie; Small, Michael; Harandizadeh, Bahareh; Hui, Pan
2018-03-01
We propose a unified framework to evaluate and quantify the search time of multiple random searchers traversing independently and concurrently on complex networks. We find that the intriguing behaviors of multiple random searchers are governed by two basic principles—the logarithmic growth pattern and the harmonic law. Specifically, the logarithmic growth pattern characterizes how the search time increases with the number of targets, while the harmonic law explores how the search time of multiple random searchers varies relative to that needed by individual searchers. Numerical and theoretical results demonstrate these two universal principles established across a broad range of random search processes, including generic random walks, maximal entropy random walks, intermittent strategies, and persistent random walks. Our results reveal two fundamental principles governing the search time of multiple random searchers, which are expected to facilitate investigation of diverse dynamical processes like synchronization and spreading.
NASA Astrophysics Data System (ADS)
Wan, Qianwen; Panetta, Karen; Agaian, Sos
2017-05-01
Autonomous facial recognition systems are widely used in real-life applications, such as homeland border security, law enforcement identification and authentication, and video-based surveillance analysis. Issues like low image quality and non-uniform illumination, as well as variations in pose and facial expression, can impair the performance of recognition systems. To address the non-uniform illumination challenge, we present a novel, robust autonomous facial recognition system based on the so-called logarithmic image visualization technique, which is inspired by the human visual system. In this paper, the proposed method, for the first time, utilizes the logarithmic image visualization technique coupled with the local binary pattern to perform discriminative feature extraction for a facial recognition system. The Yale database, the Yale-B database and the ATT database are used for accuracy and efficiency testing in computer simulation. The extensive computer simulations demonstrate the method's efficiency, accuracy, and robustness to illumination variation for facial recognition.
Chemical origins of frictional aging.
Liu, Yun; Szlufarska, Izabela
2012-11-02
Although the basic laws of friction are simple enough to be taught in elementary physics classes and although friction has been widely studied for centuries, in the current state of knowledge it is still not possible to predict a friction force from fundamental principles. One of the highly debated topics in this field is the origin of static friction. For most macroscopic contacts between two solids, static friction will increase logarithmically with time, a phenomenon that is referred to as aging of the interface. One known reason for the logarithmic growth of static friction is the deformation creep in plastic contacts. However, this mechanism cannot explain frictional aging observed in the absence of roughness and plasticity. Here, we discover molecular mechanisms that can lead to a logarithmic increase of friction based purely on interfacial chemistry. Predictions of our model are consistent with published experimental data on the friction of silica.
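For orientation, logarithmic frictional aging is often summarized by a phenomenological law of the form μ_s(t) = μ_0 + β ln(1 + t/t_c); the sketch below evaluates this standard form with assumed parameters and is not the molecular, chemistry-based model of the paper.

```python
import math

def static_friction(t_hold, mu0=0.6, beta=0.01, t_c=1.0):
    """Phenomenological logarithmic aging of static friction:
    mu_s grows with the logarithm of the hold (contact) time.
    mu0, beta and t_c are assumed illustrative values."""
    return mu0 + beta * math.log(1.0 + t_hold / t_c)

for t in (1, 10, 100, 1000, 10000):   # hold times in seconds
    print(f"t = {t:>6d} s -> mu_s = {static_friction(t):.4f}")
```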
Resistance of bacterial biofilms formed on stainless steel surface to disinfecting agent.
Królasik, Joanna; Zakowska, Zofia; Krepska, Milena; Klimek, Leszek
2010-01-01
The natural ability of microorganisms to adhere to and form biofilms on various surfaces is one of the factors causing the inefficiency of a disinfection agent despite its proven activity in vitro. The aim of the study was to determine the effectiveness of disinfecting substances on bacterial biofilms formed on a stainless steel surface. A widely used disinfecting agent was tested. Bacterial strains of Listeria innocua, Pseudomonas putida, Micrococcus luteus and Staphylococcus hominis were isolated from food contact surfaces after a cleaning and disinfection process. The disinfecting agent was a commercially available acid preparation based on hydrogen peroxide and peroxyacetic acid, designed for food industry usage. Model tests were carried out on biofilms formed on stainless steel (type 304, No. 4 finish). Biofilms were imaged by scanning electron microscopy. The disinfecting agent at its usable concentration of 0.5% and a contact time of 10 minutes was ineffective against the biofilms: the reduction of cells in biofilms was only 1-2 logarithmic cycles. Use of the agent at a higher concentration, 1% for 30 minutes, reduced cell numbers by around 5 logarithmic cycles only in the case of one microorganism, M. luteus. For the other strains, L. innocua, P. putida and S. hominis, the requirements placed on disinfecting agents were not fulfilled. The results of the experiments showed that bacterial biofilms are resistant to the disinfectant applied at its operational parameters; disinfecting effectiveness was achieved only after a twofold increase of the agent's concentration.
Petersen, Per H; Lund, Flemming; Fraser, Callum G; Sölétormos, György
2016-11-01
Background The distributions of within-subject biological variation are usually described as coefficients of variation, as are analytical performance specifications for bias, imprecision and other characteristics. Estimation of specifications required for reference change values is traditionally done using the relationship between the batch-related changes during routine performance, described as Δbias, and the coefficients of variation for analytical imprecision (CV_A): the original theory is based on standard deviations or coefficients of variation calculated as if distributions were Gaussian. Methods The distribution of between-subject biological variation can generally be described as log-Gaussian. Moreover, recent analyses of within-subject biological variation suggest that many measurands have log-Gaussian distributions. In consequence, we generated a model for the estimation of analytical performance specifications for the reference change value, combining Δbias and CV_A with log-Gaussian distributions of CV_I expressed as natural logarithms. The model was tested using plasma prolactin and glucose as examples. Results Analytical performance specifications for the reference change value generated using the new model based on log-Gaussian distributions were practically identical to those from the traditional model based on Gaussian distributions. Conclusion The traditional, simple-to-apply model used to generate analytical performance specifications for the reference change value, based on the use of coefficients of variation and assuming Gaussian distributions for both CV_I and CV_A, is generally useful.
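To make the comparison concrete, the sketch below computes a traditional Gaussian reference change value alongside one commonly used log-Gaussian formulation, in which the CVs are converted to standard deviations on the natural-log scale before combination. The exact model of the paper may differ, and the z-value and example CVs are assumptions for illustration.

```python
import math

def rcv_gaussian(cv_a, cv_i, z=1.96):
    """Traditional symmetric RCV (%) under the Gaussian assumption."""
    return z * math.sqrt(2.0) * math.sqrt(cv_a**2 + cv_i**2)

def rcv_lognormal(cv_a, cv_i, z=1.96):
    """Asymmetric RCV (%) under a log-Gaussian model: CVs are converted to
    SDs on the natural-log scale before combining (one common formulation,
    given here as an assumption rather than the paper's exact model)."""
    sigma = math.sqrt(math.log(1 + (cv_a / 100) ** 2)
                      + math.log(1 + (cv_i / 100) ** 2))
    up = (math.exp(z * math.sqrt(2.0) * sigma) - 1) * 100
    down = (1 - math.exp(-z * math.sqrt(2.0) * sigma)) * 100
    return up, down

cv_a, cv_i = 5.0, 20.0          # illustrative analytical and within-subject CVs (%)
print(f"Gaussian RCV: +/-{rcv_gaussian(cv_a, cv_i):.1f}%")
up, down = rcv_lognormal(cv_a, cv_i)
print(f"log-Gaussian RCV: +{up:.1f}% / -{down:.1f}%")
```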
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jia, Yu; Kavli Institute for Theoretical Physics China, Chinese Academy of Sciences, Beijing 100190
We study the first-order relativistic correction to the associated production of J/ψ with light hadrons at B factory experiments at √s = 10.58 GeV, in the context of nonrelativistic QCD (NRQCD) factorization. We employ a strategy for NRQCD expansion that slightly deviates from the orthodox doctrine, in that the matching coefficients are not truly of a "short-distance" nature, but explicitly depend upon physical kinematic variables rather than partonic ones. Our matching method, with validity guaranteed by the Gremm-Kapustin relation, is particularly suited for the inclusive quarkonium production and decay processes with involved kinematics, exemplified by the process e⁺e⁻ → J/ψ + gg considered in this work. Despite some intrinsic ambiguity affiliated with the order-v² NRQCD matrix element, if we choose its value as what has been extracted from a recent Cornell-potential-model-based analysis, including the relative order-v² effect is found to increase the lowest-order prediction for the integrated J/ψ cross section by about 30%, and exert a modest impact on J/ψ energy, angular and polarization distributions except near the very upper end of the J/ψ energy. The order-v² contribution to the energy spectrum becomes logarithmically divergent at the maximum of J/ψ energy. A consistent analysis may require that these large end-point logarithms be resummed to all orders in α_s.
Neural spike-timing patterns vary with sound shape and periodicity in three auditory cortical fields
Lee, Christopher M.; Osman, Ahmad F.; Volgushev, Maxim; Escabí, Monty A.
2016-01-01
Mammals perceive a wide range of temporal cues in natural sounds, and the auditory cortex is essential for their detection and discrimination. The rat primary (A1), ventral (VAF), and caudal suprarhinal (cSRAF) auditory cortical fields have separate thalamocortical pathways that may support unique temporal cue sensitivities. To explore this, we record responses of single neurons in the three fields to variations in envelope shape and modulation frequency of periodic noise sequences. Spike rate, relative synchrony, and first-spike latency metrics have previously been used to quantify neural sensitivities to temporal sound cues; however, such metrics do not measure absolute spike timing of sustained responses to sound shape. To address this, in this study we quantify two forms of spike-timing precision, jitter, and reliability. In all three fields, we find that jitter decreases logarithmically with increase in the basis spline (B-spline) cutoff frequency used to shape the sound envelope. In contrast, reliability decreases logarithmically with increase in sound envelope modulation frequency. In A1, jitter and reliability vary independently, whereas in ventral cortical fields, jitter and reliability covary. Jitter time scales increase (A1 < VAF < cSRAF) and modulation frequency upper cutoffs decrease (A1 > VAF > cSRAF) with ventral progression from A1. These results suggest a transition from independent encoding of shape and periodicity sound cues on short time scales in A1 to a joint encoding of these same cues on longer time scales in ventral nonprimary cortices. PMID:26843599
On the Crossover from Classical to Fermi Liquid Behavior in Dense Plasmas
NASA Astrophysics Data System (ADS)
Daligault, Jerome
2017-10-01
We explore the crossover from classical plasma to quantum Fermi liquid behavior of electrons in dense plasmas. To this end, we analyze the evolution with density and temperature of the momentum lifetime of a test electron introduced in a dense electron gas. This allows us 1) to determine the boundaries of the crossover region in the temperature-density plane and to shed light on the evolution of scattering properties across it, 2) to quantify the role of the fermionic nature of electrons on electronic collisions across the crossover region, and 3) to explain how the concept of Coulomb logarithm emerges at high enough temperature but disappears at low enough temperature. Work supported by LDRD Grant No. 20170490ER.
Nonlinear dynamics and quantum entanglement in optomechanical systems.
Wang, Guanglei; Huang, Liang; Lai, Ying-Cheng; Grebogi, Celso
2014-03-21
To search for and exploit quantum manifestations of classical nonlinear dynamics is one of the most fundamental problems in physics. Using optomechanical systems as a paradigm, we address this problem from the perspective of quantum entanglement. We uncover strong fingerprints in the quantum entanglement of two common types of classical nonlinear dynamical behaviors: periodic oscillations and quasiperiodic motion. There is a transition from the former to the latter as an experimentally adjustable parameter is changed through a critical value. Accompanying this process, except for a small region about the critical value, the degree of quantum entanglement shows a trend of continuous increase. The time evolution of the entanglement measure, e.g., logarithmic negativity, exhibits a strong dependence on the nature of classical nonlinear dynamics, constituting its signature.
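For readers unfamiliar with the entanglement measure mentioned above: for two-mode Gaussian states, the logarithmic negativity can be computed directly from the covariance matrix via the smallest symplectic eigenvalue of the partially transposed state. The sketch below uses the standard textbook formula in the convention where the vacuum variance is 1/2 and the natural logarithm is used, checked on a two-mode squeezed vacuum; it is a generic illustration, not the optomechanical model of the paper.

```python
import numpy as np

def log_negativity(sigma):
    """Logarithmic negativity E_N of a two-mode Gaussian state from its
    4x4 covariance matrix sigma (ordering x1, p1, x2, p2; vacuum = I/2)."""
    A, B, C = sigma[:2, :2], sigma[2:, 2:], sigma[:2, 2:]
    delta_tilde = np.linalg.det(A) + np.linalg.det(B) - 2 * np.linalg.det(C)
    nu_minus = np.sqrt((delta_tilde
                        - np.sqrt(delta_tilde**2 - 4 * np.linalg.det(sigma))) / 2)
    return max(0.0, -np.log(2 * nu_minus))

# Two-mode squeezed vacuum with squeezing parameter r: E_N should equal 2r
r = 0.8
c, s = np.cosh(2 * r) / 2, np.sinh(2 * r) / 2
sigma = np.block([[c * np.eye(2), s * np.diag([1, -1])],
                  [s * np.diag([1, -1]), c * np.eye(2)]])
print(f"E_N = {log_negativity(sigma):.3f}  (expected {2 * r:.3f})")
```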
Modularity of logarithmic parafermion vertex algebras
NASA Astrophysics Data System (ADS)
Auger, Jean; Creutzig, Thomas; Ridout, David
2018-05-01
The parafermionic cosets Ck = Com(H, Lk(sl2)) are studied for negative admissible levels k, as are certain infinite-order simple current extensions Bk of Ck. Under the assumption that the tensor theory considerations of Huang, Lepowsky and Zhang apply to Ck, irreducible Ck- and Bk-modules are obtained from those of Lk(sl2). Assuming the validity of a certain Verlinde-type formula likewise gives the Grothendieck fusion rules of these irreducible modules. Notably, there are only finitely many irreducible Bk-modules. The irreducible Ck- and Bk-characters are computed and the latter are shown, when supplemented by pseudotraces, to carry a finite-dimensional representation of the modular group. The natural conjecture then is that the Bk are C2-cofinite vertex operator algebras.
NASA Astrophysics Data System (ADS)
Ausloos, M.; Dorbolo, S.
A logarithmic behavior is hidden in the linear temperature regime of the electrical resistivity R(T) of some YBCO sample below 2Tc where "pairs" break apart, fluctuations occur and "a gap is opening". An anomalous effect also occurs near 200 K in the normal state Hall coefficient. In a simulation of oxygen diffusion in planar 123 YBCO, an anomalous behavior is found in the oxygen-vacancy motion near such a temperature. We claim that the behavior of the specific heat above and near the critical temperature should be reexamined in order to show the influence and implications of fluctuations and dimensionality on the nature of the phase transition and on the true onset temperature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larche, Michael R.; Prowant, Matthew S.; Bruillard, Paul J.
This study compares different approaches for imaging the internal architecture of graphite/epoxy composites using backscattered ultrasound. Two cases are studied. In the first, near-surface defects in thin graphite/epoxy plates are imaged. The same backscattered waveforms were used to produce peak-to-peak, logarithm of signal energy, and entropy images of different types. All of the entropy images exhibit better border delineation and defect contrast than either the peak-to-peak or logarithm of signal energy images. The best results are obtained using the joint entropy of the backscattered waveforms with a reference function. Two different references are examined. The first is a reflection of the insonifying pulse from a stainless steel reflector. The second is an approximate optimum obtained from an iterative parametric search. The joint entropy images produced using this reference exhibit three times the contrast obtained in previous studies. These plates were later destructively analyzed to determine the size and location of near-surface defects, and the results were found to agree with the defect location and shape indicated by the entropy images. In the second study, images of long carbon graphite fibers (50% by weight) in polypropylene thermoplastic are obtained as a first step toward ultrasonic determination of the distributions of fiber position and orientation.
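As a rough illustration of the kind of quantity involved, the sketch below estimates the joint entropy of a backscattered waveform and a reference waveform from a two-dimensional amplitude histogram. This is a generic histogram-based estimator with an arbitrary bin count, not the specific estimator or reference used in the study, and the toy signals are invented.

```python
import numpy as np

def joint_entropy(x, y, bins=64):
    """Joint Shannon entropy (in bits) of two equal-length waveforms,
    estimated from a 2-D histogram of their sample amplitudes."""
    hist, _, _ = np.histogram2d(x, y, bins=bins)
    p = hist / hist.sum()          # joint probability mass
    p = p[p > 0]                   # drop empty bins (0 log 0 := 0)
    return -np.sum(p * np.log2(p))

# Toy example: a noisy echo versus a clean reference pulse
t = np.linspace(0.0, 1.0, 2048)
reference = np.sin(2 * np.pi * 50 * t) * np.exp(-((t - 0.5) / 0.05) ** 2)
echo = 0.6 * reference + 0.1 * np.random.default_rng(0).normal(size=t.size)
print(f"H(echo, reference) = {joint_entropy(echo, reference):.2f} bits")
```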
Log-polar mapping-based scale space tracking with adaptive target response
NASA Astrophysics Data System (ADS)
Li, Dongdong; Wen, Gongjian; Kuai, Yangliu; Zhang, Ximing
2017-05-01
Correlation filter-based tracking has exhibited impressive robustness and accuracy in recent years. Standard correlation filter-based trackers are restricted to translation estimation and equipped with a fixed target response. These trackers produce inferior performance when confronted with significant scale variation or appearance change. We propose a log-polar mapping-based scale space tracker with an adaptive target response. This tracker transforms the scale variation of the target in the Cartesian space into a shift along the logarithmic axis in the log-polar space. A one-dimensional scale correlation filter is learned online to estimate the shift along the logarithmic axis. With the log-polar representation, scale estimation is achieved accurately without a multiresolution pyramid. To achieve an adaptive target response, the variance of the Gaussian target response is computed from the response map and updated online with a learning rate parameter. Our log-polar mapping-based scale correlation filter and adaptive target response can be combined with any correlation filter-based tracker. In addition, the scale correlation filter can be extended to a two-dimensional correlation filter to achieve joint estimation of scale variation and in-plane rotation. Experiments performed on the OTB50 benchmark demonstrate that our tracker achieves superior performance against state-of-the-art trackers.
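To see why a log-polar representation turns scaling into a translation, note that if r' = s·r then log r' = log r + log s, so a uniform scale change shifts the log-radius axis by a constant. The sketch below is a minimal numpy resampling of an image onto a (log r, θ) grid; it illustrates the mapping itself and is not the authors' tracker or its correlation filters.

```python
import numpy as np

def log_polar(image, n_r=64, n_theta=64):
    """Resample a square grayscale image onto a (log-radius, angle) grid
    centred on the image centre, using nearest-neighbour lookup."""
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cx, cy)
    # Logarithmically spaced radii and uniformly spaced angles
    log_r = np.linspace(0.0, np.log(r_max), n_r)
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    rr = np.exp(log_r)[:, None]                      # shape (n_r, 1)
    ys = np.clip(np.round(cy + rr * np.sin(theta)), 0, h - 1).astype(int)
    xs = np.clip(np.round(cx + rr * np.cos(theta)), 0, w - 1).astype(int)
    return image[ys, xs]                             # shape (n_r, n_theta)

# A target rescaled by a factor s appears (approximately) shifted by
# log(s) divided by the log-radius grid spacing, so a 1-D correlation
# filter along the log-radius axis can recover the scale change.
```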
Casimir meets Poisson: improved quark/gluon discrimination with counting observables
Frye, Christopher; Larkoski, Andrew J.; Thaler, Jesse; ...
2017-09-19
Charged track multiplicity is among the most powerful observables for discriminating quark- from gluon-initiated jets. Despite its utility, it is not infrared and collinear (IRC) safe, so perturbative calculations are limited to studying the energy evolution of multiplicity moments. While IRC-safe observables, like jet mass, are perturbatively calculable, their distributions often exhibit Casimir scaling, such that their quark/gluon discrimination power is limited by the ratio of quark to gluon color factors. In this paper, we introduce new IRC-safe counting observables whose discrimination performance exceeds that of jet mass and approaches that of track multiplicity. The key observation is that track multiplicity is approximately Poisson distributed, with more suppressed tails than the Sudakov peak structure from jet mass. By using an iterated version of the soft drop jet grooming algorithm, we can define a “soft drop multiplicity” which is Poisson distributed at leading-logarithmic accuracy. In addition, we calculate the next-to-leading-logarithmic corrections to this Poisson structure. If we allow the soft drop groomer to proceed to the end of the jet branching history, we can define a collinear-unsafe (but still infrared-safe) counting observable. Exploiting the universality of the collinear limit, we define generalized fragmentation functions to study the perturbative energy evolution of collinear-unsafe multiplicity.
NASA Astrophysics Data System (ADS)
Morisawa, Yusuke; Suga, Arisa
2018-05-01
Visible (Vis), near-infrared (NIR) and IR spectra in the 15,600-2500 cm-1 region were measured for methanol, methanol-d3, and t-butanol-d9 in n-hexane to investigate the effects of intermolecular interaction on the absorption intensities of the fundamental and the first, second, and third overtones of their OH stretching vibrations. The relative area intensities of the OH stretching bands of free and hydrogen-bonded species were plotted versus the vibrational quantum number using logarithm plots (V = 1-4) for 0.5 M methanol, 0.5 M methanol-d3, and 0.5 M t-butanol-d9 in n-hexane. In the logarithm plots the relative intensities of the free species show a linear dependence irrespective of the solute, while those of the hydrogen-bonded species deviate significantly from linearity. The observed results suggest that modifications of the dipole moment function of the OH bond induced by the formation of hydrogen bonds change the transition dipole moment, leading to the deviations from linearity in the dependence of the relative absorption intensities on the vibrational quantum number.
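The linearity test described here amounts to plotting the natural logarithm of the relative band intensity against the vibrational quantum number V = 1-4 and checking whether the points fall on a straight line. The sketch below does exactly that for invented intensity values; the numbers are placeholders, not the measured intensities from this work.

```python
import numpy as np

V = np.array([1, 2, 3, 4])                       # fundamental + three overtones
# Invented relative area intensities (placeholders, not measured data)
free_oh = np.array([1.0, 1.2e-2, 2.0e-4, 4.0e-6])
bonded_oh = np.array([1.0, 3.0e-3, 9.0e-5, 9.0e-6])

for label, intens in (("free OH", free_oh), ("H-bonded OH", bonded_oh)):
    logs = np.log(intens)
    slope, intercept = np.polyfit(V, logs, 1)
    residual = logs - (slope * V + intercept)
    print(f"{label:12s} slope = {slope:6.2f}, max |residual| = {np.abs(residual).max():.2f}")
# A small maximum residual indicates the log-intensities are linear in V;
# larger residuals correspond to the deviations reported for bonded species.
```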
Quantifying fluctuations in market liquidity: analysis of the bid-ask spread.
Plerou, Vasiliki; Gopikrishnan, Parameswaran; Stanley, H Eugene
2005-04-01
Quantifying the statistical features of the bid-ask spread offers the possibility of understanding some aspects of market liquidity. Using quote data for the 116 most frequently traded stocks on the New York Stock Exchange over the two-year period 1994-1995, we analyze the fluctuations of the average bid-ask spread S over a time interval Δt. We find that S is characterized by a distribution that decays as a power law P(S > x) ~ x^(-ζ_S), with an exponent ζ_S ≈ 3 for all 116 stocks analyzed. Our analysis of the autocorrelation function of S shows long-range power-law correlations, ⟨S(t)S(t + τ)⟩ ~ τ^(-μ_S), similar to those previously found for the volatility. We next examine the relationship between the bid-ask spread and the volume Q, and find that S ~ ln Q; we find that a similar logarithmic relationship holds between the transaction-level bid-ask spread and the trade size. We then study the relationship between S and other indicators of market liquidity, such as the frequency of trades N and the frequency of quote updates U, and find S ~ ln N and S ~ ln U. Lastly, we show that the bid-ask spread and the volatility are also related logarithmically.
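A relationship of the form S ≈ a + b·ln Q can be checked with an ordinary least-squares fit of the spread against log volume. The sketch below uses synthetic data purely to illustrate the fitting step; the coefficients and noise level are invented and are not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic (invented) data: spread grows with the logarithm of volume
volume = rng.lognormal(mean=10.0, sigma=1.5, size=5000)
spread = 0.02 + 0.005 * np.log(volume) + rng.normal(0.0, 0.01, size=volume.size)

# Fit S = a + b * ln Q by ordinary least squares
b, a = np.polyfit(np.log(volume), spread, deg=1)
print(f"S ≈ {a:.4f} + {b:.4f} * ln Q")
```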
Using sky radiances measured by ground based AERONET Sun-Radiometers for cirrus cloud detection
NASA Astrophysics Data System (ADS)
Sinyuk, A.; Holben, B. N.; Eck, T. F.; Slutsker, I.; Lewis, J. R.
2013-12-01
Screening of cirrus clouds using observations of optical depth (OD) only has proven to be a difficult task, due mostly to some clouds having temporally and spatially stable OD. On the other hand, the sky radiance measurements, which in the AERONET protocol are taken throughout the day, may contain additional cloud information. In this work the potential of using sky radiances for cirrus cloud detection is investigated. The detection is based on differences in the angular shape of sky radiances due to cirrus clouds and aerosol. The range of scattering angles from 3 to 6 degrees was selected for two primary reasons: high sensitivity to the presence of cirrus clouds, and close proximity to the Sun. The angular shape of the sky radiances was parametrized by its curvature, defined as a combination of the first and second derivatives of radiance with respect to scattering angle. We demonstrate that the slope of the logarithm of curvature versus the logarithm of scattering angle in this selected range of scattering angles is sensitive to cirrus cloud presence. We also demonstrate that restricting the values of the slope below some threshold value can be used for cirrus cloud screening. The threshold value of the slope was estimated using collocated measurements of AERONET data and MPLNET lidars.
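The screening quantity can be reproduced schematically: compute a curvature of the radiance curve and fit the slope of ln(curvature) versus ln(scattering angle) over 3-6 degrees. The abstract only states that curvature combines the first and second derivatives, so the specific plane-curve form κ = |I''| / (1 + I'²)^(3/2) used below is an assumption, and the synthetic radiance profile is invented.

```python
import numpy as np

def log_curvature_slope(theta_deg, radiance, lo=3.0, hi=6.0):
    """Slope of ln(curvature) vs ln(scattering angle) over [lo, hi] degrees.
    Curvature here is the standard plane-curve form |I''| / (1 + I'^2)^1.5,
    one possible realization of 'a combination of first and second derivatives'."""
    d1 = np.gradient(radiance, theta_deg)
    d2 = np.gradient(d1, theta_deg)
    curvature = np.abs(d2) / (1.0 + d1**2) ** 1.5
    mask = (theta_deg >= lo) & (theta_deg <= hi) & (curvature > 0)
    slope, _ = np.polyfit(np.log(theta_deg[mask]), np.log(curvature[mask]), 1)
    return slope

# Synthetic aureole-like radiance that falls off steeply near the Sun
theta = np.linspace(2.0, 8.0, 200)                 # degrees
radiance = 100.0 * theta ** -2.5                   # arbitrary power-law decay
print(f"log-log curvature slope (3-6 deg): {log_curvature_slope(theta, radiance):.2f}")
```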
Ulbrich, Philipp; Gail, Alexander
2017-01-01
When deciding between alternative options, a rational agent chooses on the basis of the desirability of each outcome, including associated costs. As different options typically result in different actions, the effort associated with each action is an essential cost parameter. How do humans discount physical effort when deciding between movements? We used an action-selection task to characterize how subjective effort depends on the parameters of arm transport movements and controlled for potential confounding factors such as delay discounting and performance. First, by repeatedly asking subjects to choose between 2 arm movements of different amplitudes or durations, performed against different levels of force, we identified parameter combinations that subjects experienced as identical in effort (isoeffort curves). Movements with a long duration were judged more effortful than short-duration movements against the same force, while movement amplitudes did not influence effort. Biomechanics of the movements also affected effort, as movements towards the body midline were preferred to movements away from it. Second, by introducing movement repetitions, we further determined that the cost function for choosing between effortful movements had a quadratic relationship with force, while choices were made on the basis of the logarithm of these costs. Our results show that effort-based action selection during reaching cannot easily be explained by metabolic costs. Instead, force-loaded reaches, a widely occurring natural behavior, imposed an effort cost for decision making similar to cost functions in motor control. Our results thereby support the idea that motor control and economic choice are governed by partly overlapping optimization principles. PMID:28586347
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leonard, T.L.; Gustin, M.S.; Fernandez, G.C.J.
The objective of this study was to evaluate the role of physiological and environmental factors in governing the flux of elemental mercury from plants to the atmosphere. Five species (Lepidium latifolium, Artemisia douglasiana, Caulanthus sp., Fragaria vesca, and Eucalyptus globulus) with different ecological and physiological attributes and growing in soils with high levels of mercury contamination were examined. Studies were conducted in a whole-plant, gas-exchange chamber providing precise control of environmental conditions, and mercury flux was estimated using the mass balance approach. Mercury flux increased linearly as a function of temperature within the range of 20 to 40 °C, and the mean temperature coefficient (Q10) was 2.04. The temperature dependence of mercury flux was attributed to changes in the contaminant's vapor pressure in the leaf interior. Mercury flux from foliage increased linearly as a function of irradiance within the range of 500 to 1,500 µmol m-2 s-1, and the light enhancement of mercury flux was within a factor of 2.0 to 2.5 for all species. Even though the leaf-to-atmosphere diffusive path for mercury vapor from foliage is similar to that of water vapor, stomatal conductance played a secondary role in governing mercury flux. In a quantitative comparison with other studies in both laboratory and field settings, a strong linear relationship is evident between mercury vapor flux and the natural logarithm of soil mercury concentration, and this relationship may have predictive value in developing regional- and continental-scale mercury budgets. The most critical factors governing mercury flux from plants are mercury concentration in the soil, leaf area index, temperature, and irradiance.
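Two of the quantitative relationships mentioned above are easy to express directly: the temperature coefficient follows the standard definition Q10 = (F2/F1)^(10/(T2-T1)), and the flux-concentration relationship is a linear regression of flux on ln(soil Hg). The numbers below are invented placeholders, not data from the study.

```python
import numpy as np

def q10(flux1, flux2, t1, t2):
    """Standard temperature coefficient: factor of change per 10 °C."""
    return (flux2 / flux1) ** (10.0 / (t2 - t1))

# Hypothetical fluxes at 20 °C and 40 °C (units arbitrary)
print(f"Q10 = {q10(1.0, 4.2, 20.0, 40.0):.2f}")

# Hypothetical calibration: flux vs natural log of soil Hg concentration
soil_hg = np.array([1.0, 5.0, 20.0, 100.0, 500.0])     # ug/g (invented)
flux = np.array([2.0, 9.0, 14.0, 21.0, 28.0])          # ng/m2/h (invented)
slope, intercept = np.polyfit(np.log(soil_hg), flux, 1)
print(f"flux ≈ {intercept:.1f} + {slope:.1f} * ln(soil Hg)")
```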
NASA Astrophysics Data System (ADS)
Kirillov, A. A.
2006-01-01
The observed strong dark-to-luminous matter coupling [F. Donato, et al., astro-ph/0403206, Mon. Not. R. Astron. Soc., submitted for publication; G. Gentile, et al., Mon. Not. R. Astron. Soc. 351 (2004) 903; D.T.F. Weldrake, et al., Mon. Not. R. Astron. Soc. 340 (2003) 12; W.J.G. de Blok, A. Bosma, Astron. Astrophys. 385 (2002) 816; O. Gerhard, et al., Astrophys. J. 121 (2001) 1936; A. Borriello, et al., Mon. Not. R. Astron. Soc. 341 (2003) 1109] suggests the existence of some functional relation between visible and DM sources which leads to biased Einstein equations. We show that such a bias appears in the case when the topological structure of the actual Universe at very large distances does not match properly that of the Friedman space. We introduce a bias operator ρ=Bˆρ and show that the simple bias function b=1/(4πrr)θ(r-r) (the kernel of Bˆ) allows one to account for all the variety of observed DM halos in astrophysical systems. In galaxies such a bias forms the cored DM distribution with the radius R˜R (which explains the recently observed strong correlation between R and R [F. Donato, et al., astro-ph/0403206, Mon. Not. R. Astron. Soc., submitted for publication]), while for a point source it produces a logarithmic correction to Newton's potential (which explains the observed flat rotation curves in spirals). Finally, we show that in the suggested theory the galaxy formation process leads to a specific variation with time of all interaction constants and, in particular, of the fine structure constant.
Star Cluster Formation in Cosmological Simulations. I. Properties of Young Clusters
NASA Astrophysics Data System (ADS)
Li, Hui; Gnedin, Oleg Y.; Gnedin, Nickolay Y.; Meng, Xi; Semenov, Vadim A.; Kravtsov, Andrey V.
2017-01-01
We present a new implementation of star formation in cosmological simulations by considering star clusters as a unit of star formation. Cluster particles grow in mass over several million years at the rate determined by local gas properties, with high time resolution. The particle growth is terminated by its own energy and momentum feedback on the interstellar medium. We test this implementation for Milky Way-sized galaxies at high redshift by comparing the properties of model clusters with observations of young star clusters. We find that the cluster initial mass function is best described by a Schechter function rather than a single power law. In agreement with observations, at low masses the logarithmic slope is α ≈ 1.8-2, while the cutoff at high mass scales with the star formation rate (SFR). A related trend is a positive correlation between the surface density of the SFR and fraction of stars contained in massive clusters. Both trends indicate that the formation of massive star clusters is preferred during bursts of star formation. These bursts are often associated with major-merger events. We also find that the median timescale for cluster formation ranges from 0.5 to 4 Myr and decreases systematically with increasing star formation efficiency. Local variations in the gas density and cluster accretion rate naturally lead to the scatter of the overall formation efficiency by an order of magnitude, even when the instantaneous efficiency is kept constant. Comparison of the formation timescale with the observed age spread of young star clusters provides an additional important constraint on the modeling of star formation and feedback schemes.
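For reference, a Schechter-type cluster initial mass function has the form dN/dM ∝ M^(-α) exp(-M/M_c), i.e. a power law with an exponential cutoff at a characteristic mass M_c. The sketch below simply evaluates this form and compares it with a pure power law; the slope and cutoff values are illustrative, not fitted values from the paper.

```python
import numpy as np

def schechter(m, alpha=2.0, m_cut=1e5, norm=1.0):
    """Schechter mass function dN/dM = norm * m^(-alpha) * exp(-m / m_cut)."""
    return norm * m ** (-alpha) * np.exp(-m / m_cut)

def power_law(m, alpha=2.0, norm=1.0):
    """Pure power-law mass function dN/dM = norm * m^(-alpha)."""
    return norm * m ** (-alpha)

masses = np.logspace(3, 7, 5)          # 10^3 to 10^7 solar masses
for m in masses:
    ratio = schechter(m) / power_law(m)
    print(f"M = {m:10.0f} Msun: Schechter / power-law = {ratio:.3f}")
# Below the cutoff the two forms agree; above ~M_cut the Schechter form is
# exponentially suppressed, which is what distinguishes the two fits.
```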
NASA Astrophysics Data System (ADS)
Naghibolhosseini, Maryam; Long, Glenis
2011-11-01
The distortion product otoacoustic emission (DPOAE) input/output (I/O) function may provide a tool for evaluating cochlear compression. Hearing loss raises the level of sound that is just audible to the listener, which affects cochlear compression and thus the dynamic range of hearing. Although the slope of the I/O function is highly variable when the total DPOAE is used, separating the nonlinear-generator component from the reflection component reduces this variability. We separated the two components using least squares fit (LSF) analysis of logarithmically sweeping tones, and confirmed that the separated generator component provides more consistent I/O functions than the total DPOAE. In this paper we estimated the slope of the I/O functions of the generator components at different sound levels using LSF analysis. An artificial neural network (ANN) was used to estimate psychophysical thresholds from the estimated slopes of the I/O functions. DPOAE I/O functions determined in this way may help to estimate hearing thresholds and cochlear health.
NASA Astrophysics Data System (ADS)
Libera, Arianna; de Barros, Felipe P. J.; Riva, Monica; Guadagnini, Alberto
2017-10-01
Our study is keyed to the analysis of how the interplay between engineering factors (i.e., transient pumping rates versus the less realistic but commonly analyzed uniform extraction rates) and the heterogeneous structure of the aquifer (as expressed by the probability distribution characterizing transmissivity) affects contaminant transport. We explore the joint influence of diverse (a) groundwater pumping schedules (constant and variable in time) and (b) representations of the stochastic heterogeneous transmissivity (T) field on temporal histories of solute concentrations observed at an extraction well. The stochastic nature of T is rendered by modeling its natural logarithm, Y = ln T, through a typical Gaussian representation and the recently introduced Generalized sub-Gaussian (GSG) model. The latter has the unique property of embedding scale-dependent non-Gaussian features of the main statistics of Y and its (spatial) increments, which have been documented in a variety of studies. We rely on numerical Monte Carlo simulations and compute the temporal evolution at the well of low-order moments of the solute concentration (C), as well as statistics of the peak concentration (Cp), identified as the environmental performance metric of interest in this study. We show that the pumping schedule strongly affects the pattern of the temporal evolution of the first two statistical moments of C, regardless of the nature (Gaussian or non-Gaussian) of the underlying Y field, whereas the latter quantitatively influences their magnitude. Our results show that uncertainty associated with C and Cp estimates is larger when operating under a transient extraction scheme than under the action of a uniform withdrawal schedule. The probability density function (PDF) of Cp displays a long positive tail in the presence of a time-varying pumping schedule. All these aspects are magnified in the presence of non-Gaussian Y fields. Additionally, the PDF of Cp displays a bimodal shape for all types of pumping schemes analyzed, independent of the type of heterogeneity considered.
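The Monte Carlo workflow described here can be sketched generically: sample realizations of Y = ln T from the assumed distribution, push each realization through a forward flow-and-transport model, and accumulate statistics of the output. The sketch below is only a skeleton of that loop; the forward model is an invented placeholder, the Gaussian parameters are arbitrary, and the GSG alternative and spatial correlation structure are omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(42)

def forward_model(T):
    """Placeholder for the flow-and-transport solver: an invented monotone
    map from transmissivity to a 'peak concentration'. In the actual
    workflow this would be a numerical groundwater model."""
    return 1.0 / (1.0 + T)

# Monte Carlo over realizations of Y = ln T (Gaussian case, no spatial field)
n_mc = 10_000
Y = rng.normal(loc=0.0, scale=1.0, size=n_mc)    # illustrative mean and variance
T = np.exp(Y)                                     # log-Gaussian transmissivity
Cp = forward_model(T)

print(f"mean(Cp) = {Cp.mean():.3f}")
print(f"var(Cp)  = {Cp.var():.4f}")
print(f"95th percentile of Cp = {np.percentile(Cp, 95):.3f}")
```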
Thermochemical Data for Propellant Ingredients and their Products of Explosion
1949-12-01
gases except perhaps at temperatures below 2000°K. The logarithms of all the equilibrium constants except Ko have been tabulated since these logarithms...have almost constant first differences. Linear interpolation may lead to an error of a unit or two in the third decimal place for Ko but the...dissociation products OH, H and NO will be formed and at still higher temperatures the other dissociation products O2, O, N and C will begin to appear