Sample records for logarithmic utility functions

  1. SPECIFIC HEAT INDICATOR

    DOEpatents

    Horn, F.L.; Binns, J.E.

    1961-05-01

    Apparatus for continuously and automatically measuring and computing the specific heat of a flowing solution is described. The invention provides for the continuous measurement of all the parameters required for the mathematical solution of this characteristic. The parameters are converted to logarithmic functions which are added and subtracted in accordance with the solution and a null-seeking servo reduces errors due to changing voltage drops to a minimum. Logarithmic potentiometers are utilized in a unique manner to accomplish these results.

  2. Mean-variance portfolio optimization by using time series approaches based on logarithmic utility function

    NASA Astrophysics Data System (ADS)

    Soeryana, E.; Fadhlina, N.; Sukono; Rusyaman, E.; Supian, S.

    2017-01-01

    Investors in stocks also face the issue of risk, because daily stock prices fluctuate. To minimize this risk, investors usually form an investment portfolio; a portfolio consisting of several stocks is intended to obtain an optimal composition of the investment. This paper discusses Mean-Variance optimization of a stock portfolio with non-constant mean and volatility, based on a logarithmic utility function. The non-constant mean is analysed using an Autoregressive Moving Average (ARMA) model, while the non-constant volatility is analysed using a Generalized Autoregressive Conditional Heteroscedasticity (GARCH) model. The optimization process is performed using the Lagrangian multiplier technique. As a numerical illustration, the method is used to analyse some Islamic stocks in Indonesia; the expected result is the proportion of investment in each Islamic stock analysed.
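
    As a rough illustration of the Lagrangian optimization step described above, the sketch below computes mean-variance weights in closed form once the per-stock means and covariance matrix have been estimated (for example from ARMA means and GARCH volatilities, as in the abstract). The function name, the risk-aversion parameter, and the numerical inputs are illustrative assumptions, not taken from the paper.

    ```python
    # Minimal sketch: mean-variance portfolio weights via a Lagrange multiplier.
    import numpy as np

    def mean_variance_weights(mu, sigma, risk_aversion=3.0):
        """Maximize w'mu - (risk_aversion/2) * w'Sigma w  subject to  sum(w) = 1."""
        ones = np.ones(len(mu))
        sigma_inv = np.linalg.inv(sigma)
        # Stationarity: mu - risk_aversion*Sigma*w - lam*1 = 0; lam is fixed by the budget constraint.
        lam = (ones @ sigma_inv @ mu - risk_aversion) / (ones @ sigma_inv @ ones)
        return sigma_inv @ (mu - lam * ones) / risk_aversion

    # Illustrative (made-up) inputs for three stocks.
    mu = np.array([0.08, 0.05, 0.06])
    sigma = np.array([[0.04, 0.01, 0.00],
                      [0.01, 0.03, 0.01],
                      [0.00, 0.01, 0.02]])
    print(mean_variance_weights(mu, sigma))      # weights summing to 1
    ```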

  3. The time resolution of the St Petersburg paradox

    PubMed Central

    Peters, Ole

    2011-01-01

    A resolution of the St Petersburg paradox is presented. In contrast to the standard resolution, utility is not required. Instead, the time-average performance of the lottery is computed. The final result can be phrased mathematically identically to Daniel Bernoulli's resolution, which uses logarithmic utility, but is derived using a conceptually different argument. The advantage of the time resolution is the elimination of arbitrary utility functions. PMID:22042904
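
    A minimal numerical sketch of the time resolution described above: rather than the divergent expected payout, one estimates the time-average exponential growth rate of wealth for a player who repeatedly pays an entry fee. The payout convention, wealth, fee, and function names are illustrative assumptions, not taken from the paper.

    ```python
    # Minimal sketch: time-average growth rate of wealth in the St Petersburg lottery.
    import math
    import random

    def st_petersburg_payout():
        """One payout convention: start at 1 and double for every tail before the first head."""
        payout = 1.0
        while random.random() < 0.5:
            payout *= 2.0
        return payout

    def per_round_growth_rate(wealth=100.0, fee=10.0, rounds=100_000):
        """Monte Carlo estimate of the expected change in log-wealth per round."""
        total = 0.0
        for _ in range(rounds):
            total += math.log((wealth - fee + st_petersburg_payout()) / wealth)
        return total / rounds        # positive => repeated play grows wealth over time

    print(per_round_growth_rate())
    ```

    A positive (negative) value indicates that repeated play increases (decreases) wealth in the long run, which reproduces the decision criterion Bernoulli obtained with logarithmic utility.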

  4. Evaluating gambles using dynamics

    NASA Astrophysics Data System (ADS)

    Peters, O.; Gell-Mann, M.

    2016-02-01

    Gambles are random variables that model possible changes in wealth. Classic decision theory transforms money into utility through a utility function and defines the value of a gamble as the expectation value of utility changes. Utility functions aim to capture individual psychological characteristics, but their generality limits predictive power. Expectation value maximizers are defined as rational in economics, but expectation values are only meaningful in the presence of ensembles or in systems with ergodic properties, whereas decision-makers have no access to ensembles, and the variables representing wealth in the usual growth models do not have the relevant ergodic properties. Simultaneously addressing the shortcomings of utility and those of expectations, we propose to evaluate gambles by averaging wealth growth over time. No utility function is needed, but a dynamic must be specified to compute time averages. Linear and logarithmic "utility functions" appear as transformations that generate ergodic observables for purely additive and purely multiplicative dynamics, respectively. We highlight inconsistencies throughout the development of decision theory, whose correction clarifies that our perspective is legitimate. These invalidate a commonly cited argument for bounded utility functions.
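
    The correspondence stated in the abstract can be summarized compactly (a sketch in generic notation, not equations quoted from the paper): the ergodic growth rate is the time average of the change in x itself under purely additive dynamics, and of the change in ln x under purely multiplicative dynamics.

    ```latex
    g_{\mathrm{add}} = \frac{\big\langle x(t+\Delta t) - x(t) \big\rangle}{\Delta t}
    \quad\text{(additive dynamics, ``linear utility''),}
    \qquad
    g_{\mathrm{mult}} = \frac{\big\langle \ln x(t+\Delta t) - \ln x(t) \big\rangle}{\Delta t}
    \quad\text{(multiplicative dynamics, ``logarithmic utility'').}
    ```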

  5. Deriving a Utility Function For the U.S. Economy

    DTIC Science & Technology

    1988-04-01

    Jorgenson, D.W., L.J. Lau, and T.M. Stoker, "The Transcendental Logarithmic Model of Aggregate Consumer Behavior," in R.L. Basmann and G. Rhodes (eds...Jorgenson, D.W., L.J. Lau, and T.M. Stoker, "Aggregate Consumer Behavior and Individual Welfare," Macroeconomic Analysis, eds. D. Currie, R. Nobay, D. Peel

  6. Programming of the complex logarithm function in the solution of the cracked anisotropic plate loaded by a point force

    NASA Astrophysics Data System (ADS)

    Zaal, K. J. J. M.

    1991-06-01

    In programming solutions from complex function theory, the multivalued complex logarithm is replaced by a single-valued complex logarithmic function, introducing a discontinuity along the branch cut into the programmed solution which was not present in the mathematical solution. Recently, Liaw and Kamel presented their solution of the infinite anisotropic centrally cracked plate loaded by an arbitrary point force, which they used as Green's function in a boundary element method intended to evaluate the stress intensity factor at the tip of a crack originating from an elliptical hole. Their solution may be used as Green's function of many more numerical methods involving anisotropic elasticity. In programming applications of Liaw and Kamel's solution, the standard definition of the logarithmic function with the branch cut at the nonpositive real axis cannot provide a reliable computation of the displacement field. Either the branch cut should be redefined outside the domain of the logarithmic function, after proving that the domain is limited to a part of the plane, or the logarithmic function should be defined on its Riemann surface. A two-dimensional line fractal can provide the link between all mesh points on the plane essential to evaluate the logarithm function on its Riemann surface. As an example, a two-dimensional line fractal is defined for a mesh once used by Erdogan and Arin.

  7. Continuous time random walk model with asymptotical probability density of waiting times via inverse Mittag-Leffler function

    NASA Astrophysics Data System (ADS)

    Liang, Yingjie; Chen, Wen

    2018-04-01

    The mean squared displacement (MSD) of the traditional ultraslow diffusion is a logarithmic function of time. Recently, the continuous time random walk model is employed to characterize this ultraslow diffusion dynamics by connecting the heavy-tailed logarithmic function and its variation as the asymptotical waiting time density. In this study we investigate the limiting waiting time density of a general ultraslow diffusion model via the inverse Mittag-Leffler function, whose special case includes the traditional logarithmic ultraslow diffusion model. The MSD of the general ultraslow diffusion model is analytically derived as an inverse Mittag-Leffler function, and is observed to increase even more slowly than that of the logarithmic function model. The occurrence of very long waiting time in the case of the inverse Mittag-Leffler function has the largest probability compared with the power law model and the logarithmic function model. The Monte Carlo simulations of one dimensional sample path of a single particle are also performed. The results show that the inverse Mittag-Leffler waiting time density is effective in depicting the general ultraslow random motion.

  8. A non-local structural derivative model for characterization of ultraslow diffusion in dense colloids

    NASA Astrophysics Data System (ADS)

    Liang, Yingjie; Chen, Wen

    2018-03-01

    Ultraslow diffusion has been observed in numerous complicated systems. Its mean squared displacement (MSD) is not a power law function of time, but instead a logarithmic function, and in some cases grows even more slowly than the logarithmic rate. The distributed-order fractional diffusion equation model simply does not work for the general ultraslow diffusion. Recent study has used the local structural derivative to describe ultraslow diffusion dynamics by using the inverse Mittag-Leffler function as the structural function, in which the MSD is a function of inverse Mittag-Leffler function. In this study, a new stretched logarithmic diffusion law and its underlying non-local structural derivative diffusion model are proposed to characterize the ultraslow diffusion in aging dense colloidal glass at both the short and long waiting times. It is observed that the aging dynamics of dense colloids is a class of the stretched logarithmic ultraslow diffusion processes. Compared with the power, the logarithmic, and the inverse Mittag-Leffler diffusion laws, the stretched logarithmic diffusion law has better precision in fitting the MSD of the colloidal particles at high densities. The corresponding non-local structural derivative diffusion equation manifests clear physical mechanism, and its structural function is equivalent to the first-order derivative of the MSD.

  9. Integral definition of the logarithmic function and the derivative of the exponential function in calculus

    NASA Astrophysics Data System (ADS)

    Vaninsky, Alexander

    2015-04-01

    Defining the logarithmic function as a definite integral with a variable upper limit, an approach used by some popular calculus textbooks, is problematic. We discuss the disadvantages of such a definition and provide a way to fix the problem. We also consider a definition-based, rigorous derivation of the derivative of the exponential function that is easier, more intuitive, and complies with the standard definitions of the number e, the logarithmic, and the exponential functions.

  10. Small range logarithm calculation on Intel Quartus II Verilog

    NASA Astrophysics Data System (ADS)

    Mustapha, Muhazam; Mokhtar, Anis Shahida; Ahmad, Azfar Asyrafie

    2018-02-01

    The logarithm function is the inverse of the exponential function. This paper implements a power series for the natural logarithm function using Verilog HDL in Quartus II. The design is written at RTL level in order to decrease the number of megafunctions. Simulations were done to determine the precision and the number of LEs used, so that the output is calculated accurately. It is found that the accuracy of the system is only valid for the range of 1 to e.
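
    For readers without an HDL toolchain, the sketch below shows the same small-range idea in Python (the paper implements it in Verilog): a truncated power series for ln(x) expanded about x = 1, which converges for 0 < x <= 2 and loses accuracy as x moves away from 1. Function names and the number of terms are illustrative assumptions.

    ```python
    # Minimal sketch: truncated power series ln(x) = sum_{k>=1} (-1)^(k+1) (x-1)^k / k.
    import math

    def ln_series(x, terms=16):
        y = x - 1.0
        total = 0.0
        for k in range(1, terms + 1):
            total += (-1) ** (k + 1) * y ** k / k
        return total

    for x in (1.1, 1.5, 2.0):
        print(x, ln_series(x), math.log(x))   # accuracy degrades as x approaches 2
    ```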

  11. Lambert W function for applications in physics

    NASA Astrophysics Data System (ADS)

    Veberič, Darko

    2012-12-01

    The Lambert W(x) function and its possible applications in physics are presented. The actual numerical implementation in C++ consists of Halley's and Fritsch's iterations with initial approximations based on branch-point expansion, asymptotic series, rational fits, and continued-logarithm recursion. Program summary: Program title: LambertW. Catalogue identifier: AENC_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENC_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU General Public License version 3. No. of lines in distributed program, including test data, etc.: 1335. No. of bytes in distributed program, including test data, etc.: 25 283. Distribution format: tar.gz. Programming language: C++ (with suitable wrappers it can be called from C, Fortran etc.); the supplied command-line utility is suitable for other scripting languages like sh, csh, awk, perl etc. Computer: All systems with a C++ compiler. Operating system: All Unix flavors, Windows. It might work with others. RAM: Small memory footprint, less than 1 MB. Classification: 1.1, 4.7, 11.3, 11.9. Nature of problem: Find a fast and accurate numerical implementation for the Lambert W function. Solution method: Halley's and Fritsch's iterations with initial approximations based on branch-point expansion, asymptotic series, rational fits, and continued-logarithm recursion. Additional comments: The distribution file contains the command-line utility lambert-w, Doxygen comments included in the source files, and a Makefile. Running time: The tests provided take only a few seconds to run.
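
    A minimal sketch of the core Halley iteration for the principal branch W_0(x) with x >= 0 is given below; the distributed C++ code additionally uses Fritsch's iteration, branch-point expansions, asymptotic series, and rational-fit starting values, none of which are reproduced here. The crude starting guess is an assumption chosen only for illustration.

    ```python
    # Minimal sketch: Halley's iteration for the root of f(w) = w*exp(w) - x (principal branch, x >= 0).
    import math

    def lambert_w0(x, tol=1e-12, max_iter=50):
        w = math.log1p(x)                      # crude starting guess, adequate for x >= 0
        for _ in range(max_iter):
            e = math.exp(w)
            f = w * e - x
            # Halley step for f(w) = w*e^w - x, with f' = e^w (w+1), f'' = e^w (w+2).
            step = f / (e * (w + 1.0) - (w + 2.0) * f / (2.0 * w + 2.0))
            w -= step
            if abs(step) < tol * (1.0 + abs(w)):
                break
        return w

    print(lambert_w0(1.0))    # ~0.5671432904, the omega constant
    ```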

  12. Stratified Flow Past a Hill: Dividing Streamline Concept Revisited

    NASA Astrophysics Data System (ADS)

    Leo, Laura S.; Thompson, Michael Y.; Di Sabatino, Silvana; Fernando, Harindra J. S.

    2016-06-01

    The Sheppard formula (Q J R Meteorol Soc 82:528-529, 1956) for the dividing streamline height H_s assumes a uniform velocity U_∞ and a constant buoyancy frequency N for the approach flow towards a mountain of height h, and takes the form H_s/h = 1 - F, where F = U_∞/(Nh). We extend this solution to a logarithmic approach-velocity profile with constant N. An analytical solution is obtained for H_s/h in terms of Lambert-W functions, which also suggests alternative scaling for H_s/h. A 'modified' logarithmic velocity profile is proposed for stably stratified atmospheric boundary-layer flows. A field experiment designed to observe H_s is described, which utilized instrumentation from the spring field campaign of the Mountain Terrain Atmospheric Modeling and Observations (MATERHORN) Program. Multiple releases of smoke at F ≈ 0.3-0.4 support the new formulation, notwithstanding the limited success of experiments due to logistical constraints. No dividing streamline is discerned for F ≈ 10, since, if present, it is too close to the foothill. Flow separation and vortex shedding is observed in this case. The proposed modified logarithmic profile is in reasonable agreement with experimental observations.
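
    For reference, the energy balance usually invoked to obtain Sheppard's result for uniform U_∞ and constant N is sketched below (not quoted from the paper): the kinetic energy of the approach flow at the dividing-streamline height is equated to the potential energy needed to lift fluid from H_s to the crest.

    ```latex
    \tfrac{1}{2} U_\infty^2 \;=\; \int_{H_s}^{h} N^2 (h - z)\,\mathrm{d}z
    \;=\; \tfrac{1}{2} N^2 (h - H_s)^2
    \quad\Longrightarrow\quad
    \frac{H_s}{h} \;=\; 1 - \frac{U_\infty}{N h} \;=\; 1 - F .
    ```

    Replacing U_∞ by a logarithmic profile evaluated at H_s on the left-hand side is what leads to the Lambert-W expression for H_s mentioned in the abstract.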

  13. Bio-Inspired Microsystem for Robust Genetic Assay Recognition

    PubMed Central

    Lue, Jaw-Chyng; Fang, Wai-Chi

    2008-01-01

    A compact integrated system-on-chip (SoC) architecture solution for robust, real-time, and on-site genetic analysis has been proposed. This microsystem solution is noise-tolerable and suitable for analyzing the weak fluorescence patterns from a PCR prepared dual-labeled DNA microchip assay. In the architecture, a preceding VLSI differential logarithm microchip is designed for effectively computing the logarithm of the normalized input fluorescence signals. A posterior VLSI artificial neural network (ANN) processor chip is used for analyzing the processed signals from the differential logarithm stage. A single-channel logarithmic circuit was fabricated and characterized. A prototype ANN chip with unsupervised winner-take-all (WTA) function was designed, fabricated, and tested. An ANN learning algorithm using a novel sigmoid-logarithmic transfer function based on the supervised backpropagation (BP) algorithm is proposed for robustly recognizing low-intensity patterns. Our results show that the trained new ANN can recognize low-fluorescence patterns better than an ANN using the conventional sigmoid function. PMID:18566679

  14. Approximating Exponential and Logarithmic Functions Using Polynomial Interpolation

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.; Yang, Yajun

    2017-01-01

    This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is…

  15. Monotonicity and Logarithmic Concavity of Two Functions Involving Exponential Function

    ERIC Educational Resources Information Center

    Liu, Ai-Qi; Li, Guo-Fu; Guo, Bai-Ni; Qi, Feng

    2008-01-01

    The function 1/x^2 - e^(-x)/(1 - e^(-x))^2 for x > 0 is proved to be strictly decreasing. As an application of this monotonicity, the logarithmic concavity of the function t/(e^(at) - e^((a-1)t)) for a…

  16. LOGARITHMIC AMPLIFIER

    DOEpatents

    Wade, E.J.; Stone, R.S.

    1959-03-10

    Electronic amplifier circuits, especially a logarithmic amplifier characterized by its greatly improved stability, are discussed. According to the invention, means are provided to feed back the output voltage to a diode in the amplifier input circuit, the diode being utilized to produce the logarithmic characteristic. A compensating diode is connected in opposition therewith, having its filament operated from the same source as the filament of the logarithmic diode. A bias current of relatively large value compared with the signal current is continuously passed through the compensating diode to render the diode insensitive to variations in the signal current. By this means, errors due to filament variations are cancelled, so that the stability of the amplifier will be unimpaired.

  17. Design of a Programmable Gain, Temperature Compensated Current-Input Current-Output CMOS Logarithmic Amplifier.

    PubMed

    Ming Gu; Chakrabartty, Shantanu

    2014-06-01

    This paper presents the design of a programmable gain, temperature compensated, current-mode CMOS logarithmic amplifier that can be used for biomedical signal processing. Unlike conventional logarithmic amplifiers that use a transimpedance technique to generate a voltage signal as a logarithmic function of the input current, the proposed approach directly produces a current output as a logarithmic function of the input current. Also, unlike a conventional transimpedance amplifier the gain of the proposed logarithmic amplifier can be programmed using floating-gate trimming circuits. The synthesis of the proposed circuit is based on the Hart's extended translinear principle which involves embedding a floating-voltage source and a linear resistive element within a translinear loop. Temperature compensation is then achieved using a translinear-based resistive cancelation technique. Measured results from prototypes fabricated in a 0.5 μm CMOS process show that the amplifier has an input dynamic range of 120 dB and a temperature sensitivity of 230 ppm/°C (27 °C- 57°C), while consuming less than 100 nW of power.

  18. Can One Take the Logarithm or the Sine of a Dimensioned Quantity or a Unit? Dimensional Analysis Involving Transcendental Functions

    ERIC Educational Resources Information Center

    Matta, Cherif F.; Massa, Lou; Gubskaya, Anna V.; Knoll, Eva

    2011-01-01

    The fate of dimensions of dimensioned quantities that are inserted into the argument of transcendental functions such as logarithms, exponentiation, trigonometric, and hyperbolic functions is discussed. Emphasis is placed on common misconceptions that are not often systematically examined in undergraduate courses of physical sciences. The argument…

  19. Ask the Experts

    ERIC Educational Resources Information Center

    Science Teacher, 2005

    2005-01-01

    This article features questions regarding logarithmic functions and hair growth. The first question is, "What is the underlying natural phenomenon that causes the natural log function to show up so frequently in scientific equations?" There are two reasons for this. The first is simply that the logarithm of a number is often used as a replacement…

  20. Solving the Schroedinger equation for helium atom and its isoelectronic ions with the free iterative complement interaction (ICI) method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakashima, Hiroyuki; Nakatsuji, Hiroshi

    2007-12-14

    The Schroedinger equation was solved very accurately for the helium atom and its isoelectronic ions (Z=1-10) with the free iterative complement interaction (ICI) method followed by the variational principle. We obtained highly accurate wave functions and energies of the helium atom and its isoelectronic ions. For helium, the calculated energy was -2.903 724 377 034 119 598 311 159 245 194 404 446 696 905 37 a.u., correct to over 40-digit accuracy, and for H^-, it was -0.527 751 016 544 377 196 590 814 566 747 511 383 045 02 a.u. These results prove numerically that with the free ICI method, we can calculate the solutions of the Schroedinger equation as accurately as one desires. We examined several types of scaling function g and initial function ψ_0 of the free ICI method. The performance was good when logarithm functions were used in the initial function because the logarithm function is physically essential for the three-particle collision area. The best performance was obtained when we introduced a new logarithm function containing not only r_1 and r_2 but also r_12 in the same logarithm function.

  1. Logarithmic current measurement circuit with improved accuracy and temperature stability and associated method

    DOEpatents

    Ericson, M. Nance; Rochelle, James M.

    1994-01-01

    A logarithmic current measurement circuit for operating upon an input electric signal utilizes a quad, dielectrically isolated, well-matched, monolithic bipolar transistor array. One group of circuit components within the circuit cooperate with two transistors of the array to convert the input signal logarithmically to provide a first output signal which is temperature-dependent, and another group of circuit components cooperate with the other two transistors of the array to provide a second output signal which is temperature-dependent. A divider ratios the first and second output signals to provide a resultant output signal which is independent of temperature. The method of the invention includes the operating steps performed by the measurement circuit.

  2. Mathematical model for logarithmic scaling of velocity fluctuations in wall turbulence.

    PubMed

    Mouri, Hideaki

    2015-12-01

    For wall turbulence, moments of velocity fluctuations are known to be logarithmic functions of the height from the wall. This logarithmic scaling is due to the existence of a characteristic velocity and to the nonexistence of any characteristic height in the range of the scaling. By using the mathematics of random variables, we obtain its necessary and sufficient conditions. They are compared with characteristics of a phenomenological model of eddies attached to the wall and also with those of the logarithmic scaling of the mean velocity.
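
    The scaling referred to above is commonly written in the following form (a sketch in standard notation for even-order moments of the streamwise fluctuation u, normalized by the friction velocity; not quoted from the paper), with constants A_p and B_p:

    ```latex
    \big\langle (u^{+})^{2p} \big\rangle^{1/p} \;=\; B_p - A_p \,\ln\!\left(\frac{z}{\delta}\right),
    \qquad p = 1, 2, 3, \dots
    ```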

  3. Approximating exponential and logarithmic functions using polynomial interpolation

    NASA Astrophysics Data System (ADS)

    Gordon, Sheldon P.; Yang, Yajun

    2017-04-01

    This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is analysed. The results of interpolating polynomials are compared with those of Taylor polynomials.
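
    A minimal numerical sketch of the comparison described above: a degree-n interpolating polynomial through equally spaced nodes versus the degree-n Taylor polynomial at x = 0, both approximating exp(x) on [-1, 1]. The degree, interval, and node placement are illustrative assumptions.

    ```python
    # Minimal sketch: interpolating polynomial vs. Taylor polynomial for exp(x) on [-1, 1].
    import numpy as np
    from math import factorial

    n = 4
    nodes = np.linspace(-1.0, 1.0, n + 1)
    interp = np.polyfit(nodes, np.exp(nodes), n)                    # passes exactly through the nodes
    taylor = [1.0 / factorial(k) for k in range(n, -1, -1)]         # coefficients, highest power first

    grid = np.linspace(-1.0, 1.0, 1001)
    err_interp = np.max(np.abs(np.exp(grid) - np.polyval(interp, grid)))
    err_taylor = np.max(np.abs(np.exp(grid) - np.polyval(taylor, grid)))
    print(f"max error, interpolation: {err_interp:.2e};  Taylor: {err_taylor:.2e}")
    ```

    Spreading the interpolation nodes across the interval typically gives a smaller maximum error than concentrating all the accuracy at a single expansion point, which is the behaviour the article analyses.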

  4. Are there common mathematical structures in economics and physics?

    NASA Astrophysics Data System (ADS)

    Mimkes, Jürgen

    2016-12-01

    Economics is a field that looks into the future. We may know a few things ahead (ex ante), but most things we only know afterwards (ex post). How can we work in a field where much of the important information is missing? Mathematics gives two answers: 1. Probability theory leads to microeconomics: the Lagrange function optimizes utility under constraints of economic terms (like costs). The utility function is the entropy, the logarithm of probability. The optimal result is given by a probability distribution and an integrating factor. 2. Calculus leads to macroeconomics: in economics we have two production factors, capital and labour. This requires two-dimensional calculus with exact and not-exact differentials, which represent the "ex ante" and "ex post" terms of economics. An integrating factor turns a not-exact term (like income) into an exact term (entropy, the natural production function). The integrating factor is the same as in microeconomics and turns the not-exact field of economics into an exact physical science.

  5. Some properties of the Catalan-Qi function related to the Catalan numbers.

    PubMed

    Qi, Feng; Mahmoud, Mansour; Shi, Xiao-Ting; Liu, Fang-Fang

    2016-01-01

    In the paper, the authors find some properties of the Catalan numbers, the Catalan function, and the Catalan-Qi function which is a generalization of the Catalan numbers. Concretely speaking, the authors present a new expression, asymptotic expansions, integral representations, logarithmic convexity, complete monotonicity, minimality, logarithmically complete monotonicity, a generating function, and inequalities of the Catalan numbers, the Catalan function, and the Catalan-Qi function. As by-products, an exponential expansion and a double inequality for the ratio of two gamma functions are derived.

  6. Static versus Dynamic Disposition: The Role of GeoGebra in Representing Polynomial-Rational Inequalities and Exponential-Logarithmic Functions

    ERIC Educational Resources Information Center

    Caglayan, Günhan

    2014-01-01

    This study investigates prospective secondary mathematics teachers' visual representations of polynomial and rational inequalities, and graphs of exponential and logarithmic functions with GeoGebra Dynamic Software. Five prospective teachers in a university in the United States participated in this research study, which was situated within a…

  7. Graviton 1-loop partition function for 3-dimensional massive gravity

    NASA Astrophysics Data System (ADS)

    Gaberdiel, Matthias R.; Grumiller, Daniel; Vassilevich, Dmitri

    2010-11-01

    The graviton 1-loop partition function in Euclidean topologically massive gravity (TMG) is calculated using heat kernel techniques. The partition function does not factorize holomorphically, and at the chiral point it has the structure expected from a logarithmic conformal field theory. This gives strong evidence for the proposal that the dual conformal field theory to TMG at the chiral point is indeed logarithmic. We also generalize our results to new massive gravity.

  8. Logarithmic Laplacian Prior Based Bayesian Inverse Synthetic Aperture Radar Imaging.

    PubMed

    Zhang, Shuanghui; Liu, Yongxiang; Li, Xiang; Bi, Guoan

    2016-04-28

    This paper presents a novel Inverse Synthetic Aperture Radar (ISAR) imaging algorithm based on a new sparse prior, known as the logarithmic Laplacian prior. The newly proposed logarithmic Laplacian prior has a narrower main lobe with higher tail values than the Laplacian prior, which helps to achieve performance improvement on sparse representation. The logarithmic Laplacian prior is used for ISAR imaging within the Bayesian framework to achieve a better focused radar image. In the proposed method of ISAR imaging, the phase errors are jointly estimated based on the minimum entropy criterion to accomplish autofocusing. The maximum a posteriori (MAP) estimation and the maximum likelihood estimation (MLE) are utilized to estimate the model parameters and avoid a manual tuning process. Additionally, the fast Fourier transform (FFT) and Hadamard product are used to reduce the required computational cost. Experimental results based on both simulated and measured data validate that the proposed algorithm outperforms the traditional sparse ISAR imaging algorithms in terms of resolution improvement and noise suppression.

  9. Robust Bioinformatics Recognition with VLSI Biochip Microsystem

    NASA Technical Reports Server (NTRS)

    Lue, Jaw-Chyng L.; Fang, Wai-Chi

    2006-01-01

    A microsystem architecture for real-time, on-site, robust bioinformatic pattern recognition and analysis has been proposed. This system is compatible with on-chip DNA analysis means such as polymerase chain reaction (PCR) amplification. A corresponding novel artificial neural network (ANN) learning algorithm using a new sigmoid-logarithmic transfer function based on the error backpropagation (EBP) algorithm is invented. Our results show that the trained new ANN can recognize low-fluorescence patterns better than the conventional sigmoidal ANN does. A differential logarithmic imaging chip is designed for calculating the logarithm of relative intensities of fluorescence signals. The single-rail logarithmic circuit and a prototype ANN chip are designed, fabricated and characterized.

  10. On the logarithmic-singularity correction in the kernel function method of subsonic lifting-surface theory

    NASA Technical Reports Server (NTRS)

    Lan, C. E.; Lamar, J. E.

    1977-01-01

    A logarithmic-singularity correction factor is derived for use in kernel function methods associated with Multhopp's subsonic lifting-surface theory. Because of the form of the factor, a relation was formulated between the numbers of chordwise and spanwise control points needed for good accuracy. This formulation is developed and discussed. Numerical results are given to show the improvement of the computation with the new correction factor.

  11. Parallel, exhaustive processing underlies logarithmic search functions: Visual search with cortical magnification.

    PubMed

    Wang, Zhiyuan; Lleras, Alejandro; Buetti, Simona

    2018-04-17

    Our lab recently found evidence that efficient visual search (with a fixed target) is characterized by logarithmic Reaction Time (RT) × Set Size functions whose steepness is modulated by the similarity between target and distractors. To determine whether this pattern of results was based on low-level visual factors uncontrolled by previous experiments, we minimized the possibility of crowding effects in the display, compensated for the cortical magnification factor by magnifying search items based on their eccentricity, and compared search performance on such displays to performance on displays without magnification compensation. In both cases, the RT × Set Size functions were found to be logarithmic, and the modulation of the log slopes by target-distractor similarity was replicated. Consistent with previous results in the literature, cortical magnification compensation eliminated most target eccentricity effects. We conclude that the log functions and their modulation by target-distractor similarity relations reflect a parallel exhaustive processing architecture for early vision.

  12. Using History to Teach Mathematics: The Case of Logarithms

    NASA Astrophysics Data System (ADS)

    Panagiotou, Evangelos N.

    2011-01-01

    Many authors have discussed the question of why we should use the history of mathematics in mathematics education. For example, Fauvel (For Learn Math, 11(2): 3-6, 1991) mentions at least fifteen arguments for applying the history of mathematics in teaching and learning mathematics. Knowing how to introduce history into mathematics lessons is a more difficult step. We found, however, that only a limited number of articles contain instructions on how to use the material, as opposed to numerous general articles suggesting the use of the history of mathematics as a didactical tool. The present article focuses on converting the history of logarithms into material appropriate for teaching 11th-grade students without any knowledge of calculus. History reveals that logarithms were invented prior to the exponential function and shows that logarithms are not an arbitrary product, as is the case when we leap straight into the definition given in all modern textbooks, but are a response to a problem. We describe step by step the historical evolution of the concept, in a way appropriate for use in class, until the definition of the logarithm as the area under the hyperbola. Next, we present the formal development of the theory and define the exponential function. The teaching sequence has been successfully undertaken in two high school classrooms.

  13. The ABC (in any D) of logarithmic CFT

    NASA Astrophysics Data System (ADS)

    Hogervorst, Matthijs; Paulos, Miguel; Vichi, Alessandro

    2017-10-01

    Logarithmic conformal field theories have a vast range of applications, from critical percolation to systems with quenched disorder. In this paper we thoroughly examine the structure of these theories based on their symmetry properties. Our analysis is model-independent and holds for any spacetime dimension. Our results include a determination of the general form of correlation functions and conformal block decompositions, clearing the path for future bootstrap applications. Several examples are discussed in detail, including logarithmic generalized free fields, holographic models, self-avoiding random walks and critical percolation.

  14. Evaluation of data transformations used with the square root and Schoolfield models for predicting bacterial growth rate.

    PubMed Central

    Alber, S A; Schaffner, D W

    1992-01-01

    A comparison was made between mathematical variations of the square root and Schoolfield models for predicting growth rate as a function of temperature. The statistical consequences of square root and natural logarithm transformations of growth rate used in several variations of the Schoolfield and square root models were examined. Growth rate variances of Yersinia enterocolitica in brain heart infusion broth increased as a function of temperature. The ability of the two data transformations to correct for the heterogeneity of variance was evaluated. A natural logarithm transformation of growth rate was more effective than a square root transformation at correcting for the heterogeneity of variance. The square root model was more accurate than the Schoolfield model when both models used the natural logarithm transformation. PMID:1444367

  15. Optical Processing of Speckle Images with Bacteriorhodopsin for Pattern Recognition

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Tucker, Deanne (Technical Monitor)

    1994-01-01

    Logarithmic processing of images with multiplicative noise characteristics can be utilized to transform the image into one with an additive noise distribution. This simplifies subsequent image processing steps for applications such as image restoration or correlation for pattern recognition. One particularly common form of multiplicative noise is speckle, for which the logarithmic operation not only produces additive noise, but also makes it of constant variance (signal-independent). We examine the optical transmission properties of some bacteriorhodopsin films here and find them well suited to implement such a pointwise logarithmic transformation optically in a parallel fashion. We present experimental results of the optical conversion of speckle images into transformed images with additive, signal-independent noise statistics using the real-time photochromic properties of bacteriorhodopsin. We provide an example of improved correlation performance in terms of correlation peak signal-to-noise for such a transformed speckle image.
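
    A simple numerical sketch of the pointwise transformation described above (the paper performs it optically with a bacteriorhodopsin film; the noise model and numbers here are illustrative assumptions): for a multiplicative speckle model I = S·n, the logarithm gives log I = log S + log n, so the noise becomes additive with a variance that no longer depends on the signal level.

    ```python
    # Minimal sketch: logarithm turns multiplicative speckle into additive, signal-independent noise.
    import numpy as np

    rng = np.random.default_rng(0)
    for s in (5.0, 50.0):                                   # two signal levels
        image = s * rng.exponential(1.0, size=100_000)      # fully developed speckle, unit mean
        print(f"S = {s:5.1f}:  var(I) = {image.var():9.2f}   var(log I) = {np.log(image).var():.3f}")
    ```

    The variance of the raw intensity scales with the square of the signal, while the variance of log I stays near the constant value pi^2/6 expected for fully developed speckle.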

  16. Autonomous facial recognition system inspired by human visual system based logarithmical image visualization technique

    NASA Astrophysics Data System (ADS)

    Wan, Qianwen; Panetta, Karen; Agaian, Sos

    2017-05-01

    Autonomous facial recognition systems are widely used in real-life applications, such as homeland border security, law enforcement identification and authentication, and video-based surveillance analysis. Issues like low image quality, non-uniform illumination as well as variations in poses and facial expressions can impair the performance of recognition systems. To address the non-uniform illumination challenge, we present a novel robust autonomous facial recognition system based on the so-called logarithmical image visualization technique, which is inspired by the human visual system. In this paper, the proposed method, for the first time, utilizes the logarithmical image visualization technique coupled with the local binary pattern to perform discriminative feature extraction for a facial recognition system. The Yale database, the Yale-B database and the ATT database are used for computer simulation accuracy and efficiency testing. The extensive computer simulation demonstrates the method's efficiency, accuracy, and robustness of illumination invariance for facial recognition.

  17. Children's Early Mental Number Line: Logarithmic or Decomposed Linear?

    ERIC Educational Resources Information Center

    Moeller, Korbinian; Pixner, Silvia; Kaufmann, Liane; Nuerk, Hans-Christoph

    2009-01-01

    Recently, the nature of children's mental number line has received much investigation. In the number line task, children are required to mark a presented number on a physical number line with fixed endpoints. Typically, it was observed that the estimations of younger/inexperienced children were accounted for best by a logarithmic function, whereas…

  18. Evaluating source separation of plastic waste using conjoint analysis.

    PubMed

    Nakatani, Jun; Aramaki, Toshiya; Hanaki, Keisuke

    2008-11-01

    Using conjoint analysis, we estimated the willingness to pay (WTP) of households for source separation of plastic waste and the improvement of related environmental impacts, the residents' loss of life expectancy (LLE), the landfill capacity, and the CO2 emissions. Unreliable respondents were identified and removed from the sample based on their answers to follow-up questions. It was found that the utility associated with reducing LLE and with the landfill capacity were both well expressed by logarithmic functions, but that residents were indifferent to the level of CO2 emissions even though they approved of CO2 reduction. In addition, residents derived utility from the act of separating plastic waste, irrespective of its environmental impacts; that is, they were willing to practice the separation of plastic waste at home in anticipation of its "invisible effects", such as the improvement of citizens' attitudes toward solid waste issues.

  19. A case study in electricity regulation: Theory, evidence, and policy

    NASA Astrophysics Data System (ADS)

    Luk, Stephen Kai Ming

    This research provides a thorough empirical analysis of the problem of excess capacity found in the electricity supply industry in Hong Kong. I utilize a cost-function based temporary equilibrium framework to investigate empirically whether the current regulatory scheme encourages the two utilities to overinvest in capital, and how much consumers would have saved if the underutilized capacity is eliminated. The research is divided into two main parts. The first section attempts to find any evidence of over-investment in capital. As a point of departure from traditional analysis, I treat physical capital as quasi-fixed, which implies a restricted cost function to represent the firm's short-run cost structure. Under such specification, the firm minimizes the cost of employing variable factor inputs subject to predetermined levels of quasi-fixed factors. Using a transcendental logarithmic restricted cost function, I estimate the cost-side equivalent of marginal product of capital, or commonly referred to as "shadow values" of capital. The estimation results suggest that the two electric utilities consistently over-invest in generation capacity. The second part of this research focuses on the economies of capital utilization, and the estimation of distortion cost in capital investment. Again, I utilize a translog specification of the cost function to estimate the actual cost of the excess capacity, and to find out how much consumers could have saved if the underutilized generation capacity were brought closer to the international standard. Estimation results indicate that an increase in the utilization rate can significantly reduce the costs of both utilities. And if the current excess capacity were reduced to the international standard, the combined savings in costs for both firms will reach 4.4 billion. This amount of savings, if redistributed to all consumers evenly, will translate into a 650 rebate per capita. Finally, two policy recommendations: a more stringent policy towards capacity expansion and the creation of a reimbursement program, are discussed.

  20. Alternative Proofs for Inequalities of Some Trigonometric Functions

    ERIC Educational Resources Information Center

    Guo, Bai-Ni; Qi, Feng

    2008-01-01

    By using an identity relating to Bernoulli's numbers and power series expansions of cotangent function and logarithms of functions involving sine function, cosine function and tangent function, four inequalities involving cotangent function, sine function, secant function and tangent function are established.

  1. A new "Logicle" display method avoids deceptive effects of logarithmic scaling for low signals and compensated data.

    PubMed

    Parks, David R; Roederer, Mario; Moore, Wayne A

    2006-06-01

    In immunofluorescence measurements and most other flow cytometry applications, fluorescence signals of interest can range down to essentially zero. After fluorescence compensation, some cell populations will have low means and include events with negative data values. Logarithmic presentation has been very useful in providing informative displays of wide-ranging flow cytometry data, but it fails to adequately display cell populations with low means and high variances and, in particular, offers no way to include negative data values. This has led to a great deal of difficulty in interpreting and understanding flow cytometry data, has often resulted in incorrect delineation of cell populations, and has led many people to question the correctness of compensation computations that were, in fact, correct. We identified a set of criteria for creating data visualization methods that accommodate the scaling difficulties presented by flow cytometry data. On the basis of these, we developed a new data visualization method that provides important advantages over linear or logarithmic scaling for display of flow cytometry data, a scaling we refer to as "Logicle" scaling. Logicle functions represent a particular generalization of the hyperbolic sine function with one more adjustable parameter than linear or logarithmic functions. Finally, we developed methods for objectively and automatically selecting an appropriate value for this parameter. The Logicle display method provides more complete, appropriate, and readily interpretable representations of data that includes populations with low-to-zero means, including distributions resulting from fluorescence compensation procedures, than can be produced using either logarithmic or linear displays. The method includes a specific algorithm for evaluating actual data distributions and deriving parameters of the Logicle scaling function appropriate for optimal display of that data. It is critical to note that Logicle visualization does not change the data values or the descriptive statistics computed from them. Copyright 2006 International Society for Analytical Cytology.
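
    The sketch below is not the actual Logicle function (which is defined through a biexponential with additional width and range parameters); it is a simplified arcsinh-style stand-in, included only to show the qualitative behaviour the abstract describes: roughly linear through zero, so negative compensated values remain displayable, and logarithmic for large values. The function name and the linear-width parameter are illustrative assumptions.

    ```python
    # Simplified arcsinh-style display scale (stand-in, not the Logicle function itself).
    import numpy as np

    def arcsinh_display(x, linear_width=100.0):
        """Map fluorescence values (possibly negative) to display coordinates."""
        return np.arcsinh(x / linear_width)

    values = np.array([-500.0, -10.0, 0.0, 10.0, 500.0, 50_000.0])
    print(np.round(arcsinh_display(values), 3))
    ```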

  2. Exact Asymptotics of the Freezing Transition of a Logarithmically Correlated Random Energy Model

    NASA Astrophysics Data System (ADS)

    Webb, Christian

    2011-12-01

    We consider a logarithmically correlated random energy model, namely a model for directed polymers on a Cayley tree, which was introduced by Derrida and Spohn. We prove asymptotic properties of a generating function of the partition function of the model by studying a discrete time analogy of the KPP-equation—thus translating Bramson's work on the KPP-equation into a discrete time case. We also discuss connections to extreme value statistics of a branching random walk and a rescaled multiplicative cascade measure beyond the critical point.

  3. Optimization of non-linear gradient in hydrophobic interaction chromatography for the analytical characterization of antibody-drug conjugates.

    PubMed

    Bobály, Balázs; Randazzo, Giuseppe Marco; Rudaz, Serge; Guillarme, Davy; Fekete, Szabolcs

    2017-01-20

    The goal of this work was to evaluate the potential of non-linear gradients in hydrophobic interaction chromatography (HIC), to improve the separation between the different homologous species (drug-to-antibody, DAR) of commercial antibody-drug conjugates (ADC). The selectivities between Brentuximab Vedotin species were measured using three different gradient profiles, namely linear, power function based and logarithmic ones. The logarithmic gradient provides the most equidistant retention distribution for the DAR species and offers the best overall separation of cysteine linked ADC in HIC. Another important advantage of the logarithmic gradient, is its peak focusing effect for the DAR0 species, which is particularly useful to improve the quantitation limit of DAR0. Finally, the logarithmic behavior of DAR species of ADC in HIC was modelled using two different approaches, based on i) the linear solvent strength theory (LSS) and two scouting linear gradients and ii) a new derived equation and two logarithmic scouting gradients. In both cases, the retention predictions were excellent and systematically below 3% compared to the experimental values. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. The complete two-loop integrated jet thrust distribution in soft-collinear effective theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    von Manteuffel, Andreas; Schabinger, Robert M.; Zhu, Hua Xing

    2014-03-01

    In this work, we complete the calculation of the soft part of the two-loop integrated jet thrust distribution in e+e- annihilation. This jet mass observable is based on the thrust cone jet algorithm, which involves a veto scale for out-of-jet radiation. The previously uncomputed part of our result depends in a complicated way on the jet cone size, r, and at intermediate stages of the calculation we actually encounter a new class of multiple polylogarithms. We employ an extension of the coproduct calculus to systematically exploit functional relations and represent our results concisely. In contrast to the individual contributions, the sum of all global terms can be expressed in terms of classical polylogarithms. Our explicit two-loop calculation enables us to clarify the small r picture discussed in earlier work. In particular, we show that the resummation of the logarithms of r that appear in the previously uncomputed part of the two-loop integrated jet thrust distribution is inextricably linked to the resummation of the non-global logarithms. Furthermore, we find that the logarithms of r which cannot be absorbed into the non-global logarithms in the way advocated in earlier work have coefficients fixed by the two-loop cusp anomalous dimension. We also show that in many cases one can straightforwardly predict potentially large logarithmic contributions to the integrated jet thrust distribution at L loops by making use of analogous contributions to the simpler integrated hemisphere soft function.

  5. Representational change and strategy use in children's number line estimation during the first years of primary school.

    PubMed

    White, Sonia L J; Szűcs, Dénes

    2012-01-04

    The objective of this study was to scrutinize number line estimation behaviors displayed by children in mathematics classrooms during the first three years of schooling. We extend existing research by not only mapping potential logarithmic-linear shifts but also provide a new perspective by studying in detail the estimation strategies of individual target digits within a number range familiar to children. Typically developing children (n = 67) from Years 1-3 completed a number-to-position numerical estimation task (0-20 number line). Estimation behaviors were first analyzed via logarithmic and linear regression modeling. Subsequently, using an analysis of variance we compared the estimation accuracy of each digit, thus identifying target digits that were estimated with the assistance of arithmetic strategy. Our results further confirm a developmental logarithmic-linear shift when utilizing regression modeling; however, uniquely we have identified that children employ variable strategies when completing numerical estimation, with levels of strategy advancing with development. In terms of the existing cognitive research, this strategy factor highlights the limitations of any regression modeling approach, or alternatively, it could underpin the developmental time course of the logarithmic-linear shift. Future studies need to systematically investigate this relationship and also consider the implications for educational practice.
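
    A minimal sketch of the first analysis step described above: fitting both a linear and a logarithmic model to one child's number-to-position estimates on a 0-20 line and comparing R^2. The response data below are invented purely for illustration.

    ```python
    # Minimal sketch: linear vs. logarithmic regression for number line estimates.
    import numpy as np

    targets   = np.array([1, 2, 3, 5, 8, 10, 13, 16, 18, 20], dtype=float)
    estimates = np.array([2, 4, 6, 8, 11, 12, 14, 16, 17, 19], dtype=float)   # made-up responses

    def r_squared(y, yhat):
        ss_res = np.sum((y - yhat) ** 2)
        ss_tot = np.sum((y - y.mean()) ** 2)
        return 1.0 - ss_res / ss_tot

    lin = np.polyfit(targets, estimates, 1)
    log = np.polyfit(np.log(targets), estimates, 1)
    print("linear      R^2:", round(r_squared(estimates, np.polyval(lin, targets)), 3))
    print("logarithmic R^2:", round(r_squared(estimates, np.polyval(log, np.log(targets))), 3))
    ```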

  6. Representational change and strategy use in children's number line estimation during the first years of primary school

    PubMed Central

    2012-01-01

    Background The objective of this study was to scrutinize number line estimation behaviors displayed by children in mathematics classrooms during the first three years of schooling. We extend existing research by not only mapping potential logarithmic-linear shifts but also provide a new perspective by studying in detail the estimation strategies of individual target digits within a number range familiar to children. Methods Typically developing children (n = 67) from Years 1-3 completed a number-to-position numerical estimation task (0-20 number line). Estimation behaviors were first analyzed via logarithmic and linear regression modeling. Subsequently, using an analysis of variance we compared the estimation accuracy of each digit, thus identifying target digits that were estimated with the assistance of arithmetic strategy. Results Our results further confirm a developmental logarithmic-linear shift when utilizing regression modeling; however, uniquely we have identified that children employ variable strategies when completing numerical estimation, with levels of strategy advancing with development. Conclusion In terms of the existing cognitive research, this strategy factor highlights the limitations of any regression modeling approach, or alternatively, it could underpin the developmental time course of the logarithmic-linear shift. Future studies need to systematically investigate this relationship and also consider the implications for educational practice. PMID:22217191

  7. Statistical improvement in detection level of gravitational microlensing events from their light curves

    NASA Astrophysics Data System (ADS)

    Ibrahim, Ichsan; Malasan, Hakim L.; Kunjaya, Chatief; Timur Jaelani, Anton; Puannandra Putri, Gerhana; Djamal, Mitra

    2018-04-01

    In astronomy, the brightness of a source is typically expressed in terms of magnitude. Conventionally, the magnitude is defined by the logarithm of received flux. This relationship is known as the Pogson formula. For received flux with a small signal to noise ratio (S/N), however, the formula gives a large magnitude error. We investigate whether the use of Inverse Hyperbolic Sine function (hereafter referred to as the Asinh magnitude) in the modified formulae could allow for an alternative calculation of magnitudes for small S/N flux, and whether the new approach is better for representing the brightness of that region. We study the possibility of increasing the detection level of gravitational microlensing using 40 selected microlensing light curves from the 2013 and 2014 seasons and by using the Asinh magnitude. Photometric data of the selected events are obtained from the Optical Gravitational Lensing Experiment (OGLE). We found that utilization of the Asinh magnitude makes the events brighter compared to using the logarithmic magnitude, with an average of about 3.42 × 10^-2 magnitude and an average in the difference of error between the logarithmic and the Asinh magnitude of about 2.21 × 10^-2 magnitude. The microlensing events OB140847 and OB140885 are found to have the largest difference values among the selected events. Using a Gaussian fit to find the peak for OB140847 and OB140885, we conclude statistically that the Asinh magnitude gives better mean squared values of the regression and narrower residual histograms than the Pogson magnitude. Based on these results, we also attempt to propose a limit in magnitude value for which use of the Asinh magnitude is optimal with small S/N data.
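
    For concreteness, the sketch below contrasts the classical Pogson (logarithmic) magnitude with an inverse-hyperbolic-sine magnitude written in the form of Lupton, Gunn and Szalay (1999); the paper's exact parametrisation may differ, and the softening parameter b and the flux values here are illustrative assumptions. The asinh form stays finite for zero or slightly negative fluxes, which is the low-S/N advantage discussed above.

    ```python
    # Sketch: Pogson vs. asinh magnitudes for low and non-positive fluxes.
    import numpy as np

    def pogson_mag(flux_ratio):
        return -2.5 * np.log10(flux_ratio)

    def asinh_mag(flux_ratio, b=1e-2):
        return -2.5 / np.log(10.0) * (np.arcsinh(flux_ratio / (2.0 * b)) + np.log(b))

    for x in (1.0, 1e-2, 1e-3, 0.0, -1e-3):
        p = pogson_mag(x) if x > 0 else float("nan")     # Pogson is undefined for non-positive flux
        print(f"flux ratio {x:9.1e}:  Pogson = {p:7.3f}   asinh = {asinh_mag(x):7.3f}")
    ```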

  8. Fragmentation functions beyond fixed order accuracy

    NASA Astrophysics Data System (ADS)

    Anderle, Daniele P.; Kaufmann, Tom; Stratmann, Marco; Ringer, Felix

    2017-03-01

    We give a detailed account of the phenomenology of all-order resummations of logarithmically enhanced contributions at small momentum fraction of the observed hadron in semi-inclusive electron-positron annihilation and the timelike scale evolution of parton-to-hadron fragmentation functions. The formalism to perform resummations in Mellin moment space is briefly reviewed, and all relevant expressions up to next-to-next-to-leading logarithmic order are derived, including their explicit dependence on the factorization and renormalization scales. We discuss the details pertinent to a proper numerical implementation of the resummed results comprising an iterative solution to the timelike evolution equations, the matching to known fixed-order expressions, and the choice of the contour in the Mellin inverse transformation. First extractions of parton-to-pion fragmentation functions from semi-inclusive annihilation data are performed at different logarithmic orders of the resummations in order to estimate their phenomenological relevance. To this end, we compare our results to corresponding fits up to fixed, next-to-next-to-leading order accuracy and study the residual dependence on the factorization scale in each case.

  9. Hierarchical random additive process and logarithmic scaling of generalized high order, two-point correlations in turbulent boundary layer flow

    NASA Astrophysics Data System (ADS)

    Yang, X. I. A.; Marusic, I.; Meneveau, C.

    2016-06-01

    Townsend [Townsend, The Structure of Turbulent Shear Flow (Cambridge University Press, Cambridge, UK, 1976)] hypothesized that the logarithmic region in high-Reynolds-number wall-bounded flows consists of space-filling, self-similar attached eddies. Invoking this hypothesis, we express streamwise velocity fluctuations in the inertial layer in high-Reynolds-number wall-bounded flows as a hierarchical random additive process (HRAP): u_z^+ = ∑_{i=1}^{N_z} a_i. Here u is the streamwise velocity fluctuation, + indicates normalization in wall units, z is the wall normal distance, and the a_i's are independently, identically distributed random additives, each of which is associated with an attached eddy in the wall-attached hierarchy. The number of random additives is N_z ~ ln(δ/z), where δ is the boundary layer thickness and ln is the natural logarithm. Due to its simplified structure, such a process leads to predictions of the scaling behaviors for various turbulence statistics in the logarithmic layer. Besides reproducing the known logarithmic scaling of moments, structure functions, and the two-point correlation function <u_z(x) u_z(x+r)>, new logarithmic laws in generalized high-order two-point statistics such as <u_z^2(x) u_z^2(x+r)>^{1/2} and <u_z^3(x) u_z^3(x+r)>^{1/3} can be derived using the HRAP formalism. Supporting empirical evidence for the logarithmic scaling in such statistics is found from the Melbourne High Reynolds Number Boundary Layer Wind Tunnel measurements. We also show that, at high Reynolds numbers, the above mentioned new logarithmic laws can be derived by assuming the arrival of an attached eddy at a generic point in the flow field to be a Poisson process [Woodcock and Marusic, Phys. Fluids 27, 015104 (2015), 10.1063/1.4905301]. Taken together, the results provide new evidence supporting the essential ingredients of the attached eddy hypothesis to describe streamwise velocity fluctuations of large, momentum transporting eddies in wall-bounded turbulence, while observed deviations suggest the need for further extensions of the model.
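
    A Monte Carlo sketch of the HRAP picture summarized above: at height z the fluctuation is a sum of roughly ln(δ/z) independent attached-eddy additives (here arriving as a Poisson process, following the Woodcock and Marusic argument cited in the abstract), so its variance grows logarithmically as the wall is approached. The Gaussian additives and the specific numbers are illustrative assumptions.

    ```python
    # Monte Carlo sketch of a hierarchical random additive process (HRAP).
    import numpy as np

    rng = np.random.default_rng(1)
    delta, samples = 1.0, 100_000

    for z in (0.02, 0.05, 0.1, 0.2):
        n_z = rng.poisson(np.log(delta / z), size=samples)   # Poisson number of attached eddies
        # The sum of n unit-normal additives is N(0, n), so draw it directly as sqrt(n) * N(0, 1).
        u = np.sqrt(n_z) * rng.standard_normal(samples)
        print(f"z/delta = {z:4.2f}:  ln(delta/z) = {np.log(delta / z):5.2f}   var(u+) = {u.var():5.2f}")
    ```

    The printed variances track ln(δ/z), i.e. the logarithmic scaling of the second moment; higher moments and two-point statistics follow from the same construction.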

  10. Casimir meets Poisson: improved quark/gluon discrimination with counting observables

    DOE PAGES

    Frye, Christopher; Larkoski, Andrew J.; Thaler, Jesse; ...

    2017-09-19

    Charged track multiplicity is among the most powerful observables for discriminating quark- from gluon-initiated jets. Despite its utility, it is not infrared and collinear (IRC) safe, so perturbative calculations are limited to studying the energy evolution of multiplicity moments. While IRC-safe observables, like jet mass, are perturbatively calculable, their distributions often exhibit Casimir scaling, such that their quark/gluon discrimination power is limited by the ratio of quark to gluon color factors. In this paper, we introduce new IRC-safe counting observables whose discrimination performance exceeds that of jet mass and approaches that of track multiplicity. The key observation is that track multiplicity is approximately Poisson distributed, with more suppressed tails than the Sudakov peak structure from jet mass. By using an iterated version of the soft drop jet grooming algorithm, we can define a "soft drop multiplicity" which is Poisson distributed at leading-logarithmic accuracy. In addition, we calculate the next-to-leading-logarithmic corrections to this Poisson structure. If we allow the soft drop groomer to proceed to the end of the jet branching history, we can define a collinear-unsafe (but still infrared-safe) counting observable. Exploiting the universality of the collinear limit, we define generalized fragmentation functions to study the perturbative energy evolution of collinear-unsafe multiplicity.

  11. Casimir meets Poisson: improved quark/gluon discrimination with counting observables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frye, Christopher; Larkoski, Andrew J.; Thaler, Jesse

    Charged track multiplicity is among the most powerful observables for discriminating quark- from gluon-initiated jets. Despite its utility, it is not infrared and collinear (IRC) safe, so perturbative calculations are limited to studying the energy evolution of multiplicity moments. While IRC-safe observables, like jet mass, are perturbatively calculable, their distributions often exhibit Casimir scaling, such that their quark/gluon discrimination power is limited by the ratio of quark to gluon color factors. In this paper, we introduce new IRC-safe counting observables whose discrimination performance exceeds that of jet mass and approaches that of track multiplicity. The key observation is that track multiplicity is approximately Poisson distributed, with more suppressed tails than the Sudakov peak structure from jet mass. By using an iterated version of the soft drop jet grooming algorithm, we can define a “soft drop multiplicity” which is Poisson distributed at leading-logarithmic accuracy. In addition, we calculate the next-to-leading-logarithmic corrections to this Poisson structure. If we allow the soft drop groomer to proceed to the end of the jet branching history, we can define a collinear-unsafe (but still infrared-safe) counting observable. Exploiting the universality of the collinear limit, we define generalized fragmentation functions to study the perturbative energy evolution of collinear-unsafe multiplicity.

  12. A neuroeconomic theory of rational addiction and nonlinear time-perception.

    PubMed

    Takahashi, Taiki

    2011-01-01

    Neuroeconomic conditions for "rational addiction" (Becker & Murphy 1988) have been unknown. This paper derived the conditions for "rational addiction" by utilizing a nonlinear time-perception theory of "hyperbolic" discounting, which is mathematically equivalent to the q-exponential intertemporal choice model based on Tsallis' statistics. It is shown that (i) Arrow-Pratt measure for temporal cognition corresponds to the degree of irrationality (i.e., Prelec's "decreasing impatience" parameter of temporal discounting) and (ii) rationality in addicts is controlled by a nondimensionalization parameter of the logarithmic time-perception function. Furthermore, the present theory illustrates the possibility that addictive drugs increase impulsivity via dopaminergic neuroadaptation without increasing irrationality. Future directions in the application of the model to studies in neuroeconomics are discussed.
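
    For readers unfamiliar with the q-exponential intertemporal choice model mentioned above, a minimal sketch is given below. It uses the standard q-exponential discount form D(t) = 1/[1 + (1 − q)kt]^(1/(1−q)), which reduces to exponential discounting as q → 1 and to simple hyperbolic discounting at q = 0; the parameter values are illustrative, not estimates from the paper.

```python
import numpy as np

# Sketch of the q-exponential discount function used in Tsallis-statistics-based
# intertemporal choice models: D(t) = 1 / [1 + (1 - q) k t]^(1/(1 - q)).
# Parameter values below are illustrative, not taken from the paper.
def q_discount(t, k=0.1, q=0.5):
    if np.isclose(q, 1.0):
        return np.exp(-k * t)            # q -> 1 recovers exponential discounting
    return (1.0 + (1.0 - q) * k * t) ** (-1.0 / (1.0 - q))

t = np.linspace(0, 50, 6)
print("exponential (q=1):", np.round(q_discount(t, q=1.0), 3))
print("hyperbolic  (q=0):", np.round(q_discount(t, q=0.0), 3))
print("q-exponential    :", np.round(q_discount(t, q=0.5), 3))
```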

  13. A factorization approach to next-to-leading-power threshold logarithms

    NASA Astrophysics Data System (ADS)

    Bonocore, D.; Laenen, E.; Magnea, L.; Melville, S.; Vernazza, L.; White, C. D.

    2015-06-01

    Threshold logarithms become dominant in partonic cross sections when the selected final state forces gluon radiation to be soft or collinear. Such radiation factorizes at the level of scattering amplitudes, and this leads to the resummation of threshold logarithms which appear at leading power in the threshold variable. In this paper, we consider the extension of this factorization to include effects suppressed by a single power of the threshold variable. Building upon the Low-Burnett-Kroll-Del Duca (LBKD) theorem, we propose a decomposition of radiative amplitudes into universal building blocks, which contain all effects ultimately responsible for next-to-leading-power (NLP) threshold logarithms in hadronic cross sections for electroweak annihilation processes. In particular, we provide a NLO evaluation of the radiative jet function, responsible for the interference of next-to-soft and collinear effects in these cross sections. As a test, using our expression for the amplitude, we reproduce all abelian-like NLP threshold logarithms in the NNLO Drell-Yan cross section, including the interplay of real and virtual emissions. Our results are a significant step towards developing a generally applicable resummation formalism for NLP threshold effects, and illustrate the breakdown of next-to-soft theorems for gauge theory amplitudes at loop level.

  14. Freezing transition of the directed polymer in a 1+d random medium: Location of the critical temperature and unusual critical properties

    NASA Astrophysics Data System (ADS)

    Monthus, Cécile; Garel, Thomas

    2006-07-01

    In dimension d ⩾ 3, the directed polymer in a random medium undergoes a phase transition between a free phase at high temperature and a low-temperature disorder-dominated phase. For the latter phase, Fisher and Huse have proposed a droplet theory based on the scaling of the free-energy fluctuations ΔF(l) ~ l^θ at scale l. On the other hand, in related growth models belonging to the Kardar-Parisi-Zhang universality class, Forrest and Tang have found that the height-height correlation function is logarithmic at the transition. For the directed polymer model at criticality, this translates into logarithmic free-energy fluctuations ΔF_Tc(l) ~ (ln l)^σ with σ = 1/2. In this paper, we propose a droplet scaling analysis exactly at criticality based on this logarithmic scaling. Our main conclusion is that the typical correlation length ξ(T) of the low-temperature phase diverges as ln ξ(T) ~ [-ln(Tc-T)]^(1/σ) ~ [-ln(Tc-T)]², instead of the usual power law ξ(T) ~ (Tc-T)^(-ν). Furthermore, the logarithmic dependence of ΔF_Tc(l) leads to the conclusion that the critical temperature Tc actually coincides with the explicit upper bound T2 derived by Derrida and co-workers, where T2 corresponds to the temperature below which the ratio of the disorder average of Z_L² to the square of the disorder average of Z_L diverges exponentially in L. Finally, since the Fisher-Huse droplet theory was initially introduced for the spin-glass phase, we briefly mention the similarities with and differences from the directed polymer model. If one speculates that the free energy of droplet excitations for spin glasses is also logarithmic at Tc, one obtains a logarithmic decay of the disorder-averaged mean square correlation function at criticality, C²(r) ~ 1/(ln r)^σ, instead of the usual power law 1/r^(d-2+η).

  15. Impact of long-range interactions on the disordered vortex lattice

    NASA Astrophysics Data System (ADS)

    Koopmann, J. A.; Geshkenbein, V. B.; Blatter, G.

    2003-07-01

    The interaction between the vortex lines in a type-II superconductor is mediated by currents. In the absence of transverse screening this interaction is long ranged, stiffening up the vortex lattice as expressed by the dispersive elastic moduli. The effect of disorder is strongly reduced, resulting in a mean-squared displacement correlator ⟨[u(R,L) - u(0,0)]²⟩ characterized by a mere logarithmic growth with distance. Finite screening cuts the interaction on the scale of the London penetration depth λ and limits the above behavior to distances R < λ. Using a functional renormalization-group approach, we derive the flow equation for the disorder correlation function and calculate the disorder-averaged mean-squared relative displacement ∝ ln^(2σ)(R/a₀). The logarithmic growth (2σ = 1) in the perturbative regime at small distances [A. I. Larkin and Yu. N. Ovchinnikov, J. Low Temp. Phys. 34, 409 (1979)] crosses over to a sub-logarithmic growth with 2σ = 0.348 at large distances.

  16. Logarithmic violation of scaling in anisotropic kinematic dynamo model

    NASA Astrophysics Data System (ADS)

    Antonov, N. V.; Gulitskiy, N. M.

    2016-01-01

    Inertial-range asymptotic behavior of a vector (e.g., magnetic) field, passively advected by a strongly anisotropic turbulent flow, is studied by means of the field theoretic renormalization group and the operator product expansion. The advecting velocity field is Gaussian, not correlated in time, with the pair correlation function of the form ∝ δ(t - t')/k⊥^(d-1+ξ), where k⊥ = |k⊥| and k⊥ is the component of the wave vector perpendicular to the distinguished direction. The stochastic advection-diffusion equation for the transverse (divergence-free) vector field includes, as special cases, the kinematic dynamo model for magnetohydrodynamic turbulence and the linearized Navier-Stokes equation. In contrast to the well-known isotropic Kraichnan's model, where various correlation functions exhibit anomalous scaling behavior with infinite sets of anomalous exponents, here the dependence on the integral turbulence scale L has a logarithmic behavior: instead of power-like corrections to ordinary scaling, determined by naive (canonical) dimensions, the anomalies manifest themselves as polynomials of logarithms of L.

  17. Structure-function relationships using spectral-domain optical coherence tomography: comparison with scanning laser polarimetry.

    PubMed

    Aptel, Florent; Sayous, Romain; Fortoul, Vincent; Beccat, Sylvain; Denis, Philippe

    2010-12-01

    To evaluate and compare the regional relationships between visual field sensitivity and retinal nerve fiber layer (RNFL) thickness as measured by spectral-domain optical coherence tomography (OCT) and scanning laser polarimetry. Prospective cross-sectional study. One hundred and twenty eyes of 120 patients (40 with healthy eyes, 40 with suspected glaucoma, and 40 with glaucoma) were tested on Cirrus-OCT, GDx VCC, and standard automated perimetry. Raw data on RNFL thickness were extracted for 256 peripapillary sectors of 1.40625 degrees each for the OCT measurement ellipse and 64 peripapillary sectors of 5.625 degrees each for the GDx VCC measurement ellipse. Correlations between peripapillary RNFL thickness in 6 sectors and visual field sensitivity in the 6 corresponding areas were evaluated using linear and logarithmic regression analysis. Receiver operating curve areas were calculated for each instrument. With spectral-domain OCT, the correlations (r²) between RNFL thickness and visual field sensitivity ranged from 0.082 (nasal RNFL and corresponding visual field area, linear regression) to 0.726 (supratemporal RNFL and corresponding visual field area, logarithmic regression). By comparison, with GDx-VCC, the correlations ranged from 0.062 (temporal RNFL and corresponding visual field area, linear regression) to 0.362 (supratemporal RNFL and corresponding visual field area, logarithmic regression). In pairwise comparisons, these structure-function correlations were generally stronger with spectral-domain OCT than with GDx VCC and with logarithmic regression than with linear regression. The largest areas under the receiver operating curve were seen for OCT superior thickness (0.963 ± 0.022; P < .001) in eyes with glaucoma and for OCT average thickness (0.888 ± 0.072; P < .001) in eyes with suspected glaucoma. The structure-function relationship was significantly stronger with spectral-domain OCT than with scanning laser polarimetry, and was better expressed logarithmically than linearly. Measurements with these 2 instruments should not be considered to be interchangeable. Copyright © 2010 Elsevier Inc. All rights reserved.
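
    A generic sketch of the linear-versus-logarithmic regression comparison reported above is shown below, using synthetic data; the variable names, units, and noise level are placeholders and do not reproduce the clinical analysis.

```python
import numpy as np

# Generic sketch of comparing linear vs logarithmic structure-function regressions.
# Synthetic data only; the clinical variables and units are placeholders.
rng = np.random.default_rng(2)
thickness = rng.uniform(40, 120, 200)                                    # "RNFL thickness"
sensitivity = 5.0 + 8.0 * np.log(thickness) + rng.normal(0, 1.5, 200)    # synthetic "sensitivity"

def r_squared(x, y):
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1.0 - residuals.var() / y.var()

print("linear      r^2:", round(r_squared(thickness, sensitivity), 3))
print("logarithmic r^2:", round(r_squared(np.log(thickness), sensitivity), 3))
```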

  18. Dissipative quantum trajectories in complex space: Damped harmonic oscillator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chou, Chia-Chun, E-mail: ccchou@mx.nthu.edu.tw

    Dissipative quantum trajectories in complex space are investigated in the framework of the logarithmic nonlinear Schrödinger equation. The logarithmic nonlinear Schrödinger equation provides a phenomenological description for dissipative quantum systems. Substituting the wave function expressed in terms of the complex action into the complex-extended logarithmic nonlinear Schrödinger equation, we derive the complex quantum Hamilton–Jacobi equation including the dissipative potential. It is shown that dissipative quantum trajectories satisfy a quantum Newtonian equation of motion in complex space with a friction force. Exact dissipative complex quantum trajectories are analyzed for the wave and solitonlike solutions to the logarithmic nonlinear Schrödinger equation for the damped harmonic oscillator. These trajectories converge to the equilibrium position as time evolves. It is indicated that dissipative complex quantum trajectories for the wave and solitonlike solutions are identical to dissipative complex classical trajectories for the damped harmonic oscillator. This study develops a theoretical framework for dissipative quantum trajectories in complex space.

  19. Estimating ice-affected streamflow by extended Kalman filtering

    USGS Publications Warehouse

    Holtschlag, D.J.; Grewal, M.S.

    1998-01-01

    An extended Kalman filter was developed to automate the real-time estimation of ice-affected streamflow on the basis of routine measurements of stream stage and air temperature and on the relation between stage and streamflow during open-water (ice-free) conditions. The filter accommodates three dynamic modes of ice effects: sudden formation/ablation, stable ice conditions, and eventual elimination. The utility of the filter was evaluated by applying it to historical data from two long-term streamflow-gauging stations, St. John River at Dickey, Maine and Platte River at North Bend, Nebr. Results indicate that the filter was stable and that parameters converged for both stations, producing streamflow estimates that are highly correlated with published values. For the Maine station, logarithms of estimated streamflows are within 8% of the logarithms of published values 87.2% of the time during periods of ice effects and within 15% 96.6% of the time. Similarly, for the Nebraska station, logarithms of estimated streamflows are within 8% of the logarithms of published values 90.7% of the time and within 15% 97.7% of the time. In addition, the correlation between temporal updates and published streamflows on days of direct measurements at the Maine station was 0.777 and 0.998 for ice-affected and open-water periods, respectively; for the Nebraska station, corresponding correlations were 0.864 and 0.997.
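
    The sketch below is not the published ice-effects filter, but it illustrates the basic Kalman recursion on which such an estimator is built: a scalar random-walk model for log-streamflow updated with noisy observations. All noise variances and the synthetic record are illustrative assumptions.

```python
import numpy as np

# Minimal scalar Kalman filter sketch (not the published ice-effects filter):
# track the logarithm of streamflow as a random walk, updating it with noisy
# observations. All noise variances are illustrative assumptions.
rng = np.random.default_rng(3)
true_logq = np.cumsum(rng.normal(0, 0.02, 100)) + np.log(50.0)
obs = true_logq + rng.normal(0, 0.1, 100)        # noisy log-streamflow observations

x, p = obs[0], 1.0                               # state estimate and its variance
q_var, r_var = 0.02**2, 0.1**2                   # process and observation noise variances
estimates = []
for z in obs:
    p += q_var                   # predict: random-walk process noise inflates variance
    k = p / (p + r_var)          # Kalman gain
    x += k * (z - x)             # update with the observation
    p *= (1.0 - k)
    estimates.append(x)

err = np.abs(np.array(estimates) - true_logq)
print("mean absolute error in log-streamflow:", round(err.mean(), 4))
```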

  20. Logarithmic corrections to entropy of magnetically charged AdS4 black holes

    NASA Astrophysics Data System (ADS)

    Jeon, Imtak; Lal, Shailesh

    2017-11-01

    Logarithmic terms are quantum corrections to black hole entropy determined completely from classical data, thus providing a strong check for candidate theories of quantum gravity purely from physics in the infrared. We compute these terms in the entropy associated with the horizon of a magnetically charged extremal black hole in AdS₄ × S⁷ using the quantum entropy function and discuss the possibility of matching against recently derived microscopic expressions.

  1. Evaporation rate and vapor pressure of selected polymeric lubricating oils.

    NASA Technical Reports Server (NTRS)

    Gardos, M. N.

    1973-01-01

    A recently developed ultrahigh-vacuum quartz spring mass sorption microbalance has been utilized to measure the evaporation rates of several low-volatility polymeric lubricating oils at various temperatures. The evaporation rates are used to calculate the vapor pressures by the Langmuir equation. A method is presented to accurately estimate extended temperature range evaporation rate and vapor pressure data for polymeric oils, incorporating appropriate corrections for the increases in molecular weight and the change in volatility of the progressively evaporating polymer fractions. The logarithms of the calculated data appear to follow linear relationships within the test temperature ranges, when plotted versus 1000/T. These functions and the observed effusion characteristics of the fluids on progressive volatilization are useful in estimating evaporation rate and vapor pressure changes on evaporative depletion.

  2. Adjustments for the display of quantized ion channel dwell times in histograms with logarithmic bins.

    PubMed

    Stark, J A; Hladky, S B

    2000-02-01

    Dwell-time histograms are often plotted as part of patch-clamp investigations of ion channel currents. The advantages of plotting these histograms with a logarithmic time axis have been demonstrated in earlier work (J. Physiol. (Lond.) 378:141-174; Pflügers Arch. 410:530-553; Biophys. J. 52:1047-1054). Sigworth and Sine argued that the interpretation of such histograms is simplified if the counts are presented in a manner similar to that of a probability density function. However, when ion channel records are recorded as a discrete time series, the dwell times are quantized. As a result, the mapping of dwell times to logarithmically spaced bins is highly irregular; bins may be empty, and significant irregularities may extend beyond the duration of 100 samples. Using simple approximations based on the nature of the binning process and the transformation rules for probability density functions, we develop adjustments for the display of the counts to compensate for this effect. Tests with simulated data suggest that this procedure provides a faithful representation of the data.
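
    A small simulation makes the quantization problem concrete: exponentially distributed dwell times recorded on a discrete time base map very unevenly onto logarithmically spaced bins, leaving the short-time bins irregular or empty. The sketch below is illustrative only and does not implement the authors' adjustment.

```python
import numpy as np

# Sketch of the quantization problem described above: exponentially distributed
# dwell times are recorded on a discrete time base and collected into
# logarithmically spaced bins. Parameter values are illustrative.
rng = np.random.default_rng(4)
dt = 0.1                                             # sampling interval (ms)
tau = 2.0                                            # true mean dwell time (ms)
dwell = rng.exponential(tau, 50_000)
dwell_q = np.maximum(np.round(dwell / dt), 1) * dt   # dwell times quantized to the time base

edges = np.logspace(np.log10(dt), np.log10(50 * tau), 40)
counts, _ = np.histogram(dwell_q, bins=edges)

# Short-time bins map to very few (or zero) discrete dwell values, so the raw
# counts are irregular there even though the underlying process is smooth.
for lo, hi, c in zip(edges[:5], edges[1:6], counts[:5]):
    print(f"bin [{lo:.2f}, {hi:.2f}) ms: {c} counts")
```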

  3. Application of Mahler measure theory to the face-centred cubic lattice Green function at the origin and its associated logarithmic integral

    NASA Astrophysics Data System (ADS)

    Joyce, G. S.

    2012-07-01

    The mathematical properties of the face-centred cubic lattice Green function G(w) ≡ (1/π³) ∫₀^π ∫₀^π ∫₀^π dθ₁ dθ₂ dθ₃ / [w - c(θ₁)c(θ₂) - c(θ₂)c(θ₃) - c(θ₃)c(θ₁)] and the associated logarithmic integral S(w) ≡ (1/π³) ∫₀^π ∫₀^π ∫₀^π ln[w - c(θ₁)c(θ₂) - c(θ₂)c(θ₃) - c(θ₃)c(θ₁)] dθ₁ dθ₂ dθ₃ …

  4. Evaluation of Low-Voltage Distribution Network Index Based on Improved Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Fan, Hanlu; Gao, Suzhou; Fan, Wenjie; Zhong, Yinfeng; Zhu, Lei

    2018-01-01

    In order to evaluate the development level of the low-voltage distribution network objectively and scientifically, chromatography analysis method is utilized to construct evaluation index model of low-voltage distribution network. Based on the analysis of principal component and the characteristic of logarithmic distribution of the index data, a logarithmic centralization method is adopted to improve the principal component analysis algorithm. The algorithm can decorrelate and reduce the dimensions of the evaluation model and the comprehensive score has a better dispersion degree. The clustering method is adopted to analyse the comprehensive score because the comprehensive score of the courts is concentrated. Then the stratification evaluation of the courts is realized. An example is given to verify the objectivity and scientificity of the evaluation method.

  5. Measurement of Galactic Logarithmic Spiral Arm Pitch Angle Using Two-dimensional Fast Fourier Transform Decomposition

    NASA Astrophysics Data System (ADS)

    Davis, Benjamin L.; Berrier, Joel C.; Shields, Douglas W.; Kennefick, Julia; Kennefick, Daniel; Seigar, Marc S.; Lacy, Claud H. S.; Puerari, Ivânio

    2012-04-01

    A logarithmic spiral is a prominent feature appearing in a majority of observed galaxies. This feature has long been associated with the traditional Hubble classification scheme, but historical quotes of pitch angle of spiral galaxies have been almost exclusively qualitative. We have developed a methodology, utilizing two-dimensional fast Fourier transformations of images of spiral galaxies, in order to isolate and measure the pitch angles of their spiral arms. Our technique provides a quantitative way to measure this morphological feature. This will allow comparison of spiral galaxy pitch angle to other galactic parameters and test spiral arm genesis theories. In this work, we detail our image processing and analysis of spiral galaxy images and discuss the robustness of our analysis techniques.

  6. LOGARITHMIC AMPLIFIER

    DOEpatents

    De Shong, J.A. Jr.

    1957-12-31

    A logarithmic current amplifier circuit having high sensitivity and fast response is described. The inventor discovered that the time constant of the input circuit of a system utilizing a feedback amplifier, an ionization chamber, and a diode is inversely proportional to the input current, and that the amplifier becomes unstable in amplifying signals in the upper frequency range when the amplifier's forward gain time constant equals the input circuit time constant. The described device incorporates impedance networks with low-frequency response characteristics at various points in the circuit to change the forward gain of the amplifier at a rate of 0.7 of the gain magnitude for each doubling of frequency. As a result of this improvement, the time constant of the input circuit is greatly reduced at high frequencies, and the amplifier response is increased.

  7. Optimization of the Monte Carlo code for modeling of photon migration in tissue.

    PubMed

    Zołek, Norbert S; Liebert, Adam; Maniewski, Roman

    2006-10-01

    The Monte Carlo method is frequently used to simulate light transport in turbid media because of its simplicity and flexibility, which allow complicated geometrical structures to be analyzed. Monte Carlo simulations are, however, time consuming because of the necessity to track the paths of individual photons. The time-consuming computation is mainly associated with the calculation of logarithmic and trigonometric functions as well as the generation of pseudo-random numbers. In this paper, the Monte Carlo algorithm was developed and optimized by approximating the logarithmic and trigonometric functions. The approximations were based on polynomial and rational functions, and the errors of these approximations are less than 1% of the values of the original functions. The proposed algorithm was verified by simulations of the time-resolved reflectance at several source-detector separations. The results of the calculation using the approximated algorithm were compared with those of Monte Carlo simulations obtained with exact computation of the logarithm and trigonometric functions, as well as with the solution of the diffusion equation. The errors of the moments of the simulated distributions of times of flight of photons (total number of photons, mean time of flight, and variance) are less than 2% for a range of optical properties typical of living tissues. The proposed approximated algorithm speeds up the Monte Carlo simulations by a factor of 4. The developed code can be used on parallel machines, allowing for further acceleration.
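
    The sketch below illustrates the general idea of replacing a transcendental call with a cheap fitted polynomial over the range actually used in the simulation; it does not reproduce the specific polynomial and rational approximations (or the <1% error bounds) developed in the paper.

```python
import numpy as np
from numpy.polynomial import Chebyshev

# Illustration of the general idea only: replace calls to log(x) on a bounded
# interval by a fitted polynomial. The paper's specific polynomial/rational
# approximations and their error bounds are not reproduced here.
x = np.linspace(0.1, 0.9, 2000)            # range of uniform random numbers used in MC steps
cheb = Chebyshev.fit(x, np.log(x), deg=15)

grid = np.linspace(0.1, 0.9, 10_001)
rel_err = np.abs((cheb(grid) - np.log(grid)) / np.log(grid))
print("max relative error of the polynomial log approximation:", rel_err.max())

# In a photon-migration loop, the step length s = -log(xi)/mu_t could then be
# computed with cheb(xi) instead of np.log(xi), for xi inside the fitted range.
```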

  8. On the entropy function in sociotechnical systems

    PubMed Central

    Montroll, Elliott W.

    1981-01-01

    The entropy function H = -Σ p_j log p_j (p_j being the probability of a system being in state j) and its continuum analogue H = -∫ p(x) log p(x) dx are fundamental in Shannon's theory of information transfer in communication systems. It is here shown that the discrete form of H also appears naturally in single-lane traffic flow theory. In merchandising, goods flow from a wholesaler through a retailer to a customer. Certain features of the process may be deduced from price distribution functions derived from Sears Roebuck and Company catalogues. It is found that the dispersion in the logarithm of catalogue prices of a given year has remained about constant, independently of the year, for over 75 years. From this it may be inferred that the continuum entropy function for the variable logarithm of price had inadvertently, through Sears Roebuck policies, been maximized for that firm subject to the observed dispersion. PMID:16593136
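
    For concreteness, the snippet below evaluates the discrete entropy H = -Σ p_j log p_j directly and checks the elementary fact that, for a fixed number of states, the uniform distribution maximizes it; the example distributions are arbitrary.

```python
import numpy as np

# Direct evaluation of the discrete entropy H = -sum_j p_j log p_j, and a check
# of the elementary fact that H is maximized by the uniform distribution.
def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # the 0 log 0 term is taken as 0
    return -np.sum(p * np.log(p))

uniform = np.full(8, 1 / 8)
skewed = np.array([0.5, 0.2, 0.1, 0.1, 0.05, 0.03, 0.01, 0.01])

print("H(uniform) =", round(entropy(uniform), 4), " log(8) =", round(np.log(8), 4))
print("H(skewed)  =", round(entropy(skewed), 4))
```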

  9. On the entropy function in sociotechnical systems.

    PubMed

    Montroll, E W

    1981-12-01

    The entropy function H = -Σ p_j log p_j (p_j being the probability of a system being in state j) and its continuum analogue H = -∫ p(x) log p(x) dx are fundamental in Shannon's theory of information transfer in communication systems. It is here shown that the discrete form of H also appears naturally in single-lane traffic flow theory. In merchandising, goods flow from a wholesaler through a retailer to a customer. Certain features of the process may be deduced from price distribution functions derived from Sears Roebuck and Company catalogues. It is found that the dispersion in the logarithm of catalogue prices of a given year has remained about constant, independently of the year, for over 75 years. From this it may be inferred that the continuum entropy function for the variable logarithm of price had inadvertently, through Sears Roebuck policies, been maximized for that firm subject to the observed dispersion.

  10. Homotopy method for optimization of variable-specific-impulse low-thrust trajectories

    NASA Astrophysics Data System (ADS)

    Chi, Zhemin; Yang, Hongwei; Chen, Shiyu; Li, Junfeng

    2017-11-01

    The homotopy method has been used as a useful tool in solving fuel-optimal trajectories with constant-specific-impulse low thrust. However, the specific impulse is often variable for many practical solar electric power-limited thrusters. This paper investigates the application of the homotopy method for optimization of variable-specific-impulse low-thrust trajectories. Difficulties arise when the two commonly-used homotopy functions are employed for trajectory optimization. The optimal power throttle level and the optimal specific impulse are coupled with the commonly-used quadratic and logarithmic homotopy functions. To overcome these difficulties, a modified logarithmic homotopy function is proposed to serve as a gateway for trajectory optimization, leading to decoupled expressions of both the optimal power throttle level and the optimal specific impulse. The homotopy method based on this homotopy function is proposed. Numerical simulations validate the feasibility and high efficiency of the proposed method.

  11. Analytic properties for the honeycomb lattice Green function at the origin

    NASA Astrophysics Data System (ADS)

    Joyce, G. S.

    2018-05-01

    The analytic properties of the honeycomb lattice Green function are investigated, where is a complex variable which lies in a plane. This double integral defines a single-valued analytic function provided that a cut is made along the real axis from w  =  ‑3 to . In order to analyse the behaviour of along the edges of the cut it is convenient to define the limit function where . It is shown that and can be evaluated exactly for all in terms of various hypergeometric functions, where the argument function is always real-valued and rational. The second-order linear Fuchsian differential equation satisfied by is also used to derive series expansions for and which are valid in the neighbourhood of the regular singular points and . Integral representations are established for and , where with . In particular, it is proved that where J 0(z) and Y 0(z) denote Bessel functions of the first and second kind, respectively. The results derived in the paper are utilized to evaluate the associated logarithmic integral where w lies in the cut plane. A new set of orthogonal polynomials which are connected with the honeycomb lattice Green function are also briefly discussed. Finally, a link between and the theory of Pearson random walks in a plane is established.

  12. Impact of Many-Body Effects on Landau Levels in Graphene

    NASA Astrophysics Data System (ADS)

    Sonntag, J.; Reichardt, S.; Wirtz, L.; Beschoten, B.; Katsnelson, M. I.; Libisch, F.; Stampfer, C.

    2018-05-01

    We present magneto-Raman spectroscopy measurements on suspended graphene to investigate the charge carrier density-dependent electron-electron interaction in the presence of Landau levels. Utilizing gate-tunable magnetophonon resonances, we extract the charge carrier density dependence of the Landau level transition energies and the associated effective Fermi velocity vF. In contrast to the logarithmic divergence of vF at zero magnetic field, we find a piecewise linear scaling of vF as a function of the charge carrier density, due to a magnetic-field-induced suppression of the long-range Coulomb interaction. We quantitatively confirm our experimental findings by performing tight-binding calculations on the level of the Hartree-Fock approximation, which also allow us to estimate an excitonic binding energy of ≈6 meV contained in the experimentally extracted Landau level transitions energies.

  13. Cost drivers and resource allocation in military health care systems.

    PubMed

    Fulton, Larry; Lasdon, Leon S; McDaniel, Reuben R

    2007-03-01

    This study illustrates the feasibility of incorporating technical efficiency considerations in the funding of military hospitals and identifies the primary drivers for hospital costs. Secondary data collected for 24 U.S.-based Army hospitals and medical centers for the years 2001 to 2003 are the basis for this analysis. Technical efficiency was measured by using data envelopment analysis; subsequently, efficiency estimates were included in logarithmic-linear cost models that specified cost as a function of volume, complexity, efficiency, time, and facility type. These logarithmic-linear models were compared against stochastic frontier analysis models. A parsimonious, three-variable, logarithmic-linear model composed of volume, complexity, and efficiency variables exhibited a strong linear relationship with observed costs (R² = 0.98). This model also proved reliable in forecasting (R² = 0.96). Based on our analysis, as much as $120 million might be reallocated to improve the United States-based Army hospital performance evaluated in this study.

  14. Late-time structure of the Bunch-Davies de Sitter wavefunction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anninos, Dionysios; Anous, Tarek; Freedman, Daniel Z.

    2015-11-30

    We examine the late time behavior of the Bunch-Davies wavefunction for interacting light fields in a de Sitter background. We use perturbative techniques developed in the framework of AdS/CFT, and analytically continue to compute tree and loop level contributions to the Bunch-Davies wavefunction. We consider self-interacting scalars of general mass, but focus especially on the massless and conformally coupled cases. We show that certain contributions grow logarithmically in conformal time both at tree and loop level. We also consider gauge fields and gravitons. The four-dimensional Fefferman-Graham expansion of classical asymptotically de Sitter solutions is used to show that the wavefunction contains no logarithmic growth in the pure graviton sector at tree level. Finally, assuming a holographic relation between the wavefunction and the partition function of a conformal field theory, we interpret the logarithmic growths in the language of conformal field theory.

  15. Confirming the Lanchestrian linear-logarithmic model of attrition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartley, D.S. III.

    1990-12-01

    This paper is the fourth in a series of reports on the breakthrough research in historical validation of attrition in conflict. Significant defense policy decisions, including weapons acquisition and arms reduction, are based in part on models of conflict. Most of these models are driven by their attrition algorithms, usually forms of the Lanchester square and linear laws. None of these algorithms have been validated. The results of this paper confirm the results of earlier papers, using a large database of historical results. The homogeneous linear-logarithmic Lanchestrian attrition model is validated to the extent possible with current initial and final force size data and is consistent with the Iwo Jima data. A particular differential linear-logarithmic model is described that fits the data very well. A version of Helmbold's victory predicting parameter is also confirmed, with an associated probability function. 37 refs., 73 figs., 68 tabs.

  16. Uplink Downlink Rate Balancing and Throughput Scaling in FDD Massive MIMO Systems

    NASA Astrophysics Data System (ADS)

    Bergel, Itsik; Perets, Yona; Shamai, Shlomo

    2016-05-01

    In this work we extend the concept of uplink-downlink rate balancing to frequency division duplex (FDD) massive MIMO systems. We consider a base station with a large number of antennas serving many single-antenna users. We first show that any unused capacity in the uplink can be traded off for higher throughput in the downlink in a system that uses either dirty paper (DP) coding or linear zero-forcing (ZF) precoding. We then also study the scaling of the system throughput with the number of antennas in the cases of linear beamforming (BF) precoding, ZF precoding, and DP coding. We show that the downlink throughput is proportional to the logarithm of the number of antennas. While this logarithmic scaling is lower than the linear scaling of the rate in the uplink, it can still bring significant throughput gains. For example, we demonstrate through analysis and simulation that increasing the number of antennas from 4 to 128 will increase the throughput by more than a factor of 5. We also show that a logarithmic scaling of downlink throughput as a function of the number of receive antennas can be achieved even when the number of transmit antennas only increases logarithmically with the number of receive antennas.

  17. Evaporation Loss of Light Elements as a Function of Cooling Rate: Logarithmic Law

    NASA Technical Reports Server (NTRS)

    Xiong, Yong-Liang; Hewins, Roger H.

    2003-01-01

    Knowledge about the evaporation loss of light elements is important to our understanding of chondrule formation processes. The evaporative loss of light elements (such as B and Li) as a function of cooling rate is of special interest because recent investigations of the distribution of Li, Be and B in meteoritic chondrules have revealed that Li varies by a factor of 25, and B and Be vary by about a factor of 10. Therefore, if we can extrapolate and interpolate with confidence the evaporation loss of B and Li (and other light elements such as K, Na) over a wide range of cooling rates of interest based upon limited experimental data, we would be able to assess the full range of scenarios relating to chondrule formation processes. Here, we propose that evaporation loss of light elements as a function of cooling rate should obey the logarithmic law.

  18. Logarithmic violation of scaling in strongly anisotropic turbulent transfer of a passive vector field

    NASA Astrophysics Data System (ADS)

    Antonov, N. V.; Gulitskiy, N. M.

    2015-01-01

    Inertial-range asymptotic behavior of a vector (e.g., magnetic) field, passively advected by a strongly anisotropic turbulent flow, is studied by means of the field-theoretic renormalization group and the operator product expansion. The advecting velocity field is Gaussian, not correlated in time, with the pair correlation function of the form ∝ δ(t - t')/k⊥^(d-1+ξ), where k⊥ = |k⊥| and k⊥ is the component of the wave vector perpendicular to the distinguished direction ("direction of the flow"); this is the d-dimensional generalization of the ensemble introduced by Avellaneda and Majda [Commun. Math. Phys. 131, 381 (1990), 10.1007/BF02161420]. The stochastic advection-diffusion equation for the transverse (divergence-free) vector field includes, as special cases, the kinematic dynamo model for magnetohydrodynamic turbulence and the linearized Navier-Stokes equation. In contrast to the well-known isotropic Kraichnan's model, where various correlation functions exhibit anomalous scaling behavior with infinite sets of anomalous exponents, here the dependence on the integral turbulence scale L has a logarithmic behavior: Instead of powerlike corrections to ordinary scaling, determined by naive (canonical) dimensions, the anomalies manifest themselves as polynomials of logarithms of L. The key point is that the matrices of scaling dimensions of the relevant families of composite operators appear nilpotent and cannot be diagonalized. The detailed proof of this fact is given for the correlation functions of arbitrary order.

  19. The critical role of logarithmic transformation in Nernstian equilibrium potential calculations.

    PubMed

    Sawyer, Jemima E R; Hennebry, James E; Revill, Alexander; Brown, Angus M

    2017-06-01

    The membrane potential, arising from uneven distribution of ions across cell membranes containing selectively permeable ion channels, is of fundamental importance to cell signaling. The necessity of maintaining the membrane potential may be appreciated by expressing Ohm's law as current = voltage/resistance and recognizing that no current flows when voltage = 0, i.e., transmembrane voltage gradients, created by uneven transmembrane ion concentrations, are an absolute requirement for the generation of currents that precipitate the action and synaptic potentials that consume >80% of the brain's energy budget and underlie the electrical activity that defines brain function. The concept of the equilibrium potential is vital to understanding the origins of the membrane potential. The equilibrium potential defines a potential at which there is no net transmembrane ion flux, where the work created by the concentration gradient is balanced by the transmembrane voltage difference, and derives from a relationship describing the work done by the diffusion of ions down a concentration gradient. The Nernst equation predicts the equilibrium potential and, as such, is fundamental to understanding the interplay between transmembrane ion concentrations and equilibrium potentials. Logarithmic transformation of the ratio of internal and external ion concentrations lies at the heart of the Nernst equation, but most undergraduate neuroscience students have little understanding of the logarithmic function. To compound this, no current undergraduate neuroscience textbooks describe the effect of logarithmic transformation in appreciable detail, leaving the majority of students with little insight into how ion concentrations determine, or how ion perturbations alter, the membrane potential. Copyright © 2017 the American Physiological Society.
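
    A worked numerical example of the Nernst equation often helps here. The sketch below computes E = (RT/zF) ln([ion]_out/[ion]_in) for potassium and sodium using typical textbook concentrations (an illustrative choice, not values taken from the article).

```python
import numpy as np

# Nernst equation for the equilibrium potential: E = (RT/zF) * ln([ion]_out/[ion]_in).
# Concentrations below are typical textbook values (mM) and are illustrative.
R, F = 8.314, 96485.0          # J mol^-1 K^-1, C mol^-1
T = 310.0                      # body temperature, K

def nernst(c_out, c_in, z=1):
    return (R * T) / (z * F) * np.log(c_out / c_in)   # volts

print("E_K  = %5.1f mV" % (1e3 * nernst(5.0, 140.0, z=1)))    # roughly -90 mV
print("E_Na = %5.1f mV" % (1e3 * nernst(145.0, 12.0, z=1)))   # roughly +67 mV
```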

  20. Wave propagation model of heat conduction and group speed

    NASA Astrophysics Data System (ADS)

    Zhang, Long; Zhang, Xiaomin; Peng, Song

    2018-03-01

    In view of the finite relaxation model of non-Fourier's law, the Cattaneo and Vernotte (CV) model and Fourier's law are presented in this work for comparing wave propagation modes. Independent variable translation is applied to solve the partial differential equation. Results show that the general form of the time spatial distribution of temperature for the three media comprises two solutions: those corresponding to the positive and negative logarithmic heating rates. The former shows that a group of heat waves whose spatial distribution follows the exponential function law propagates at a group speed; the speed of propagation is related to the logarithmic heating rate. The total speed of all the possible heat waves can be combined to form the group speed of the wave propagation. The latter indicates that the spatial distribution of temperature, which follows the exponential function law, decays with time. These features show that propagation accelerates when heated and decelerates when cooled. For the model media that follow Fourier's law and correspond to the positive heat rate of heat conduction, the propagation mode is also considered the propagation of a group of heat waves because the group speed has no upper bound. For the finite relaxation model with non-Fourier media, the interval of group speed is bounded and the maximum speed can be obtained when the logarithmic heating rate is exactly the reciprocal of relaxation time. And for the CV model with a non-Fourier medium, the interval of group speed is also bounded and the maximum value can be obtained when the logarithmic heating rate is infinite.

  1. Logarithmic spiral flap for circular or oval defects on the lateral surface of the nose and nasal ala: a series of 15 cases.

    PubMed

    Moreno-Artero, E; Redondo, P

    2015-10-01

    A large number of flaps, particularly rotation and transposition flaps, have been described for the closure of skin defects left by oncologic surgery of the nose. The logarithmic spiral flap is a variant of the rotation flap. We present a series of 15 patients with different types of skin tumor on the nose. The skin defect resulting from excision of the tumor by micrographic surgery was reconstructed using various forms of the logarithmic spiral flap. There are 3 essential aspects to flap design: commencement of the pedicle at the upper or lower border of the wound, a width of the distal end of the flap equal to the vertical diameter of the defect, and a progressive increase in the radius of the spiral from the distal end of the flap to its base. The cosmetic and functional results of surgical reconstruction were satisfactory, and no patient required additional treatment to improve scar appearance. The logarithmic spiral flap is useful for the closure of circular or oval defects situated on the lateral surface of the nose and nasal ala. The flap initiates at one of the borders of the wound as a pedicle with a radius that increases progressively to create a spiral. We propose the logarithmic spiral flap as an excellent option for the closure of circular or oval defects of the nose. Copyright © 2015 Elsevier España, S.L.U. and AEDV. All rights reserved.

  2. Effective field theory approach to heavy quark fragmentation

    DOE PAGES

    Fickinger, Michael; Fleming, Sean; Kim, Chul; ...

    2016-11-17

    Using an approach based on Soft Collinear Effective Theory (SCET) and Heavy Quark Effective Theory (HQET) we determine the b-quark fragmentation function from electron-positron annihilation data at the Z-boson peak at next-to-next-to leading order with next-to-next-to leading log resummation of DGLAP logarithms, and next-to-next-to-next-to leading log resummation of endpoint logarithms. This analysis improves, by one order, the previous extraction of the b-quark fragmentation function. We find that while the addition of the next order in the calculation does not much shift the extracted form of the fragmentation function, it does reduce theoretical errors indicating that the expansion is converging. Using an approach based on effective field theory allows us to systematically control theoretical errors. Furthermore, while the fits of theory to data are generally good, the fits seem to be hinting that higher order correction from HQET may be needed to explain the b-quark fragmentation function at smaller values of momentum fraction.

  3. Climatological Aspects of the Optical Properties of Fine/Coarse Mode Aerosol Mixtures

    NASA Technical Reports Server (NTRS)

    Eck, T. F.; Holben, B. N.; Sinyuk, A.; Pinker, R. T.; Goloub, P.; Chen, H.; Chatenet, B.; Li, Z.; Singh, R. P.; Tripathi, S.N.; ...

    2010-01-01

    Aerosol mixtures composed of coarse mode desert dust combined with fine mode combustion generated aerosols (from fossil fuel and biomass burning sources) were investigated at three locations that are in and/or downwind of major global aerosol emission source regions. Multiyear monitoring data at Aerosol Robotic Network sites in Beijing (central eastern China), Kanpur (Indo-Gangetic Plain, northern India), and Ilorin (Nigeria, Sudanian zone of West Africa) were utilized to study the climatological characteristics of aerosol optical properties. Multiyear climatological averages of spectral single scattering albedo (SSA) versus fine mode fraction (FMF) of aerosol optical depth at 675 nm at all three sites exhibited relatively linear trends up to 50% FMF. This suggests the possibility that external linear mixing of both fine and coarse mode components (weighted by FMF) dominates the SSA variation, where the SSA of each component remains relatively constant for this range of FMF only. However, it is likely that a combination of other factors is also involved in determining the dynamics of SSA as a function of FMF, such as fine mode particles adhering to coarse mode dust. The spectral variation of the climatological averaged aerosol absorption optical depth (AAOD) was nearly linear in logarithmic coordinates over the wavelength range of 440-870 nm for both the Kanpur and Ilorin sites. However, at two sites in China (Beijing and Xianghe), a distinct nonlinearity in spectral AAOD in logarithmic space was observed, suggesting the possibility of anomalously strong absorption in coarse mode aerosols increasing the 870 nm AAOD.

  4. Finite Temperature Densities via the S-Function Method with Application to Electron Screening in Plasmas

    NASA Astrophysics Data System (ADS)

    Watrous, Mitchell James

    1997-12-01

    A new approach to the Green's-function method for the calculation of equilibrium densities within the finite temperature, Kohn-Sham formulation of density functional theory is presented, which extends the method to all temperatures. The contour of integration in the complex energy plane is chosen such that the density is given by a sum of Green's function differences evaluated at the Matsubara frequencies, rather than by the calculation and summation of Kohn-Sham single-particle wave functions. The Green's functions are written in terms of their spectral representation and are calculated as the solutions of their defining differential equations. These differential equations are boundary value problems as opposed to the standard eigenvalue problems. For large values of the complex energy, the differential equations are further simplified from second to first-order by writing the Green's functions in terms of logarithmic derivatives. An asymptotic expression for the Green's functions is derived, which allows the sum over Matsubara poles to be approximated. The method is applied to the screening of nuclei by electrons in finite temperature plasmas. To demonstrate the method's utility, and to illustrate its advantages, the results of previous wave function type calculations for protons and neon nuclei are reproduced. The method is also used to formulate a new screening model for fusion reactions in the solar core, and the predicted reaction rate enhancement factors are compared with existing models.

  5. Significant Figure Rules for General Arithmetic Functions.

    ERIC Educational Resources Information Center

    Graham, D. M.

    1989-01-01

    Provides some significant figure rules used in chemistry including the general theoretical basis; logarithms and antilogarithms; exponentiation (with exactly known exponents); sines and cosines; and the extreme value rule. (YP)

  6. Effect analysis of material properties of picosecond laser ablation for ABS/PVC

    NASA Astrophysics Data System (ADS)

    Tsai, Y. H.; Ho, C. Y.; Chiou, Y. J.

    2017-06-01

    This paper analytically investigates the picosecond laser ablation of ABS/PVC. Pulsed laser ablation is a well-established tool for polymers; however, the ablation mechanism of laser processing of polymers has not yet been thoroughly understood. This study utilized a thermal transport model to analyze the relationship between the ablation rate and the laser fluence. This model considered the energy balance at the decomposition interface and the Arrhenius law as the ablation mechanisms. The calculated variation of the ablation rate with the logarithm of the laser fluence agrees with the measured data. It is also validated in this work that the variation of the ablation rate with the logarithm of the laser fluence obeys Beer's law at low laser fluences. The effects of material properties and processing parameters on the ablation depth per pulse are also discussed for picosecond laser processing of ABS/PVC.

  7. Measurement of Galactic Logarithmic Spiral Arm Pitch Angle Using Two-Dimensional Fast Fourier Transform Decomposition

    NASA Astrophysics Data System (ADS)

    Davis, Benjamin L.; Berrier, J. C.; Shields, D. W.; Kennefick, J.; Kennefick, D.; Seigar, M. S.; Lacy, C. H. S.; Puerari, I.

    2012-01-01

    A logarithmic spiral is a prominent feature appearing in a majority of observed galaxies. This feature has long been associated with the traditional Hubble classification scheme, but historical quotes of pitch angle of spiral galaxies have been almost exclusively qualitative. We have developed a methodology, utilizing Two-Dimensional Fast Fourier Transformations of images of spiral galaxies, in order to isolate and measure the pitch angles of their spiral arms. Our technique provides a quantitative way to measure this morphological feature. This will allow the precise comparison of spiral galaxy evolution to other galactic parameters and test spiral arm genesis theories. In this work, we detail our image processing and analysis of spiral galaxy images and discuss the robustness of our analysis techniques. The authors gratefully acknowledge support for this work from NASA Grant NNX08AW03A.

  8. An algorithm for the numerical evaluation of the associated Legendre functions that runs in time independent of degree and order

    NASA Astrophysics Data System (ADS)

    Bremer, James

    2018-05-01

    We describe a method for the numerical evaluation of normalized versions of the associated Legendre functions P_ν^(-μ) and Q_ν^(-μ) of degrees 0 ≤ ν ≤ 1,000,000 and orders -ν ≤ μ ≤ ν for arguments in the interval (-1, 1). Our algorithm, which runs in time independent of ν and μ, is based on the fact that while the associated Legendre functions themselves are extremely expensive to represent via polynomial expansions, the logarithms of certain solutions of the differential equation defining them are not. We exploit this by numerically precomputing the logarithms of carefully chosen solutions of the associated Legendre differential equation and representing them via piecewise trivariate Chebyshev expansions. These precomputed expansions, which allow for the rapid evaluation of the associated Legendre functions over a large swath of parameter domain mentioned above, are supplemented with asymptotic and series expansions in order to cover it entirely. The results of numerical experiments demonstrating the efficacy of our approach are presented, and our code for evaluating the associated Legendre functions is publicly available.

  9. Evaluating the Use of Problem-Based Video Podcasts to Teach Mathematics in Higher Education

    ERIC Educational Resources Information Center

    Kay, Robin; Kletskin, Ilona

    2012-01-01

    Problem-based video podcasts provide short, web-based, audio-visual explanations of how to solve specific procedural problems in subject areas such as mathematics or science. A series of 59 problem-based video podcasts covering five key areas (operations with functions, solving equations, linear functions, exponential and logarithmic functions,…

  10. Study to investigate and evaluate means of optimizing the Ku-band combined radar/communication functions for the space shuttle

    NASA Technical Reports Server (NTRS)

    Weber, C. L.; Udalov, S.; Alem, W.

    1977-01-01

    The performance of the space shuttle orbiter's Ku-Band integrated radar and communications equipment is analyzed for the radar mode of operation. The block diagram of the rendezvous radar subsystem is described. Power budgets for passive target detection are calculated, based on the estimated values of system losses. Requirements for processing of radar signals in the search and track modes are examined. Time multiplexed, single-channel, angle tracking of passive scintillating targets is analyzed. Radar performance in the presence of main lobe ground clutter is considered and candidate techniques for clutter suppression are discussed. Principal system parameter drivers are examined for the case of stationkeeping at ranges comparable to target dimension. Candidate ranging waveforms for short range operation are analyzed and compared. The logarithmic error discriminant utilized for range, range rate and angle tracking is formulated and applied to the quantitative analysis of radar subsystem tracking loops.

  11. Learning curves in highly skilled chess players: a test of the generality of the power law of practice.

    PubMed

    Howard, Robert W

    2014-09-01

    The power law of practice holds that a power function best interrelates skill performance and amount of practice. However, the law's validity and generality are moot. Some researchers argue that it is an artifact of averaging individual exponential curves while others question whether the law generalizes to complex skills and to performance measures other than response time. The present study tested the power law's generality to development over many years of a very complex cognitive skill, chess playing, with 387 skilled participants, most of whom were grandmasters. A power or logarithmic function best fit grouped data but individuals showed much variability. An exponential function usually was the worst fit to individual data. Groups differing in chess talent were compared and a power function best fit the group curve for the more talented players while a quadratic function best fit that for the less talented. After extreme amounts of practice, a logarithmic function best fit grouped data but a quadratic function best fit most individual curves. Individual variability is great and the power law or an exponential law are not the best descriptions of individual chess skill development. Copyright © 2014 Elsevier B.V. All rights reserved.
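
    The model-comparison step described above can be sketched as follows: fit power, exponential, and logarithmic functions to a practice curve and compare residual error. The data here are synthetic and the functional forms are standard two-parameter versions, not the exact specifications used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of comparing candidate learning-curve forms on practice data.
# The data below are synthetic; only the fitting procedure is illustrated.
def power(t, a, b):
    return a * t ** (-b)

def expon(t, a, b):
    return a * np.exp(-b * t)

def logar(t, a, b):
    return a - b * np.log(t)

rng = np.random.default_rng(5)
t = np.arange(1, 101, dtype=float)
y = 60 * t ** (-0.3) + rng.normal(0, 1.0, t.size)   # synthetic "performance" measure

for name, f, p0 in [("power", power, (60, 0.3)),
                    ("exponential", expon, (60, 0.05)),
                    ("logarithmic", logar, (60, 10))]:
    params, _ = curve_fit(f, t, y, p0=p0, maxfev=10_000)
    sse = np.sum((y - f(t, *params)) ** 2)           # residual sum of squares
    print(f"{name:12s} SSE = {sse:9.1f}")
```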

  12. EOS Interpolation and Thermodynamic Consistency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gammel, J. Tinka

    2015-11-16

    As discussed in LA-UR-08-05451, the current interpolator used by Grizzly, OpenSesame, EOSPAC, and similar routines is the rational function interpolator from Kerley. While the rational function interpolator is well-suited for interpolation on sparse grids with logarithmic spacing and it preserves monotonicity in 1-d, it has some known problems.

  13. Logarithmic compression methods for spectral data

    DOEpatents

    Dunham, Mark E.

    2003-01-01

    A method is provided for logarithmic compression, transmission, and expansion of spectral data. A log Gabor transformation is made of incoming time series data to output spectral phase and logarithmic magnitude values. The output phase and logarithmic magnitude values are compressed by selecting only magnitude values above a selected threshold and corresponding phase values to transmit compressed phase and logarithmic magnitude values. A reverse log Gabor transformation is then performed on the transmitted phase and logarithmic magnitude values to output transmitted time series data to a user.
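
    The sketch below conveys the threshold-compression idea in simplified form, with a plain FFT standing in for the log Gabor transform of the patent: only spectral bins whose log magnitude exceeds a threshold are kept, together with their phases, before reconstruction. The signal, threshold, and sizes are illustrative.

```python
import numpy as np

# Simplified illustration of threshold-based spectral compression: a plain FFT stands
# in for the log Gabor transform. Only bins whose log-magnitude exceeds a threshold
# are kept (with their phases); the rest are discarded before "transmission".
rng = np.random.default_rng(6)
n = 4096
t = np.arange(n) / n
signal = (np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
          + 0.05 * rng.normal(size=n))

spectrum = np.fft.rfft(signal)
log_mag = np.log(np.abs(spectrum) + 1e-12)
phase = np.angle(spectrum)

keep = log_mag > 2.0                              # illustrative threshold
compressed = np.zeros_like(spectrum)
compressed[keep] = np.exp(log_mag[keep]) * np.exp(1j * phase[keep])
recovered = np.fft.irfft(compressed, n)

print("bins kept: %d of %d" % (keep.sum(), keep.size))
print("rms reconstruction error: %.4f" % np.sqrt(np.mean((signal - recovered) ** 2)))
```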

  14. Transcriptional and functional analysis of galactooligosaccharide uptake by lacS in Lactobacillus acidophilus

    PubMed Central

    Andersen, Joakim M.; Barrangou, Rodolphe; Abou Hachem, Maher; Lahtinen, Sampo; Goh, Yong Jun; Svensson, Birte; Klaenhammer, Todd R.

    2011-01-01

    Probiotic microbes rely on their ability to survive in the gastrointestinal tract, adhere to mucosal surfaces, and metabolize available energy sources from dietary compounds, including prebiotics. Genome sequencing projects have proposed models for understanding prebiotic catabolism, but mechanisms remain to be elucidated for many prebiotic substrates. Although β-galactooligosaccharides (GOS) are documented prebiotic compounds, little is known about their utilization by lactobacilli. This study aimed to identify genetic loci in Lactobacillus acidophilus NCFM responsible for the transport and catabolism of GOS. Whole-genome oligonucleotide microarrays were used to survey the differential global transcriptome during logarithmic growth of L. acidophilus NCFM using GOS or glucose as a sole source of carbohydrate. Within the 16.6-kbp gal-lac gene cluster, lacS, a galactoside-pentose-hexuronide permease-encoding gene, was up-regulated 5.1-fold in the presence of GOS. In addition, two β-galactosidases, LacA and LacLM, and enzymes in the Leloir pathway were also encoded by genes within this locus and up-regulated by GOS stimulation. Generation of a lacS-deficient mutant enabled phenotypic confirmation of the functional LacS permease not only for the utilization of lactose and GOS but also lactitol, suggesting a prominent role of LacS in the metabolism of a broad range of prebiotic β-galactosides, known to selectively modulate the beneficial gut microbiota. PMID:22006318

  15. Decibels Made Easy.

    ERIC Educational Resources Information Center

    Tindle, C. T.

    1996-01-01

    Describes a method to teach acoustics to students with minimal mathematical backgrounds. Discusses the uses of charts in teaching topics of sound intensity level and the decibel scale. Avoids the difficulties of working with logarithm functions. (JRH)
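    For readers who do want the logarithms, a few worked conversions using the standard definition of sound intensity level (not taken from the article) look like this:

```python
# Worked decibel conversions using the standard definition L = 10 * log10(I / I0).
import math

I0 = 1e-12                      # reference intensity, W/m^2
I = 1e-5                        # a sound intensity, W/m^2
print(10 * math.log10(I / I0))  # 70 dB

# Doubling the intensity adds about 3 dB:
print(10 * math.log10(2))       # ~3.01

# Two equal 70 dB sources combine to about 73 dB, not 140 dB:
print(10 * math.log10(2 * I / I0))
```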

  16. Fine tuning of optical signals in nanoporous anodic alumina photonic crystals by apodized sinusoidal pulse anodisation.

    PubMed

    Santos, Abel; Law, Cheryl Suwen; Chin Lei, Dominique Wong; Pereira, Taj; Losic, Dusan

    2016-11-03

    In this study, we present an advanced nanofabrication approach to produce gradient-index photonic crystal structures based on nanoporous anodic alumina. An apodisation strategy is for the first time applied to a sinusoidal pulse anodisation process in order to engineer the photonic stop band of nanoporous anodic alumina (NAA) in depth. Four apodisation functions are explored, including linear positive, linear negative, logarithmic positive and logarithmic negative, with the aim of finely tuning the characteristic photonic stop band of these photonic crystal structures. We systematically analyse the effect of the amplitude difference (from 0.105 to 0.840 mA cm^-2), the pore widening time (from 0 to 6 min), the anodisation period (from 650 to 950 s) and the anodisation time (from 15 to 30 h) on the quality and the position of the characteristic photonic stop band and the interferometric colour of these photonic crystal structures using the aforementioned apodisation functions. Our results reveal that a logarithmic negative apodisation function is the best approach to obtaining unprecedentedly well-resolved and narrow photonic stop bands across the UV-visible-NIR spectrum of NAA-based gradient-index photonic crystals. Our study establishes a fully comprehensive rationale towards the development of unique NAA-based photonic crystal structures with finely engineered optical properties for advanced photonic devices such as ultra-sensitive optical sensors, selective optical filters and all-optical platforms for quantum computing.
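    To make the four apodisation profiles concrete, the sketch below generates linear and logarithmic envelopes (positive and negative) and applies them to a sinusoidal current profile; the functional forms, baseline current, and amplitudes are illustrative assumptions rather than the paper's actual anodisation recipe.

```python
# Sketch: apodisation envelopes applied to a sinusoidal anodisation current profile.
# Functional forms and parameter values are illustrative, not taken from the paper.
import numpy as np

t = np.linspace(0.0, 30.0 * 3600.0, 5000)      # 30 h anodisation, in seconds
period = 800.0                                  # pulse period, s (within the 650-950 s range studied)
J_offset = 0.280                                # baseline current density, mA/cm^2 (illustrative)
dA = 0.420                                      # total amplitude difference, mA/cm^2 (illustrative)

s = t / t[-1]                                   # normalised time, 0..1
envelopes = {
    "linear positive":      dA * s,
    "linear negative":      dA * (1.0 - s),
    "logarithmic positive": dA * np.log1p(9.0 * s) / np.log(10.0),
    "logarithmic negative": dA * np.log1p(9.0 * (1.0 - s)) / np.log(10.0),
}

profiles = {name: J_offset + A * np.sin(2 * np.pi * t / period)
            for name, A in envelopes.items()}
print({name: (p.min().round(3), p.max().round(3)) for name, p in profiles.items()})
```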

  17. [Changes in the rates of glucose consumption and lactate release by cells in perfused and nonperfused cultures].

    PubMed

    Berestovskaia, N G; Akatov, V S; Lavrovskaia, V P

    1993-01-01

    The energetic state of Chinese hamster fibroblasts was investigated under stationary culture conditions and under conditions of culture medium perfusion immediately above the cells. Specific rates of glucose utilization and lactate formation under the former conditions were (1.88 +/- 0.2) x 10(-13) and (4.3 +/- 0.56) x 10(-13) mole/cell/h at the logarithmic growth phase, and (0.21 +/- 0.08) x 10(-13) and (0.58 +/- 0.06) x 10(-13) mole/cell/h at the stationary phase, respectively. In the perfused culture, the specific rates of glucose utilization and lactate formation were (4.86 +/- 0.56) x 10(-13) and (11.0 +/- 1.8) x 10(-13) mole/cell/h at the logarithmic growth phase, and (1.57 +/- 0.14) x 10(-13) and (4.11 +/- 0.5) x 10(-13) mole/cell/h at the stationary phase, respectively. It has been proposed that under conditions of stationary culture the fall of the rates, as the culture reaches the survival phase, is due to diffusion-dependent limitations of mass transfer between the medium and the culture. Under perfusion conditions, the fall of the rates can be explained by some deficiency of necessary components and by excessive amounts of metabolic products in the multilayer structure.

  18. Numerical solution of the quantum Lenard-Balescu equation for a non-degenerate one-component plasma

    DOE PAGES

    Scullard, Christian R.; Belt, Andrew P.; Fennell, Susan C.; ...

    2016-09-01

    We present a numerical solution of the quantum Lenard-Balescu equation using a spectral method, namely an expansion in Laguerre polynomials. This method exactly conserves both particles and kinetic energy and facilitates the integration over the dielectric function. To demonstrate the method, we solve the equilibration problem for a spatially homogeneous one-component plasma with various initial conditions. Unlike the more usual Landau/Fokker-Planck system, this method requires no input Coulomb logarithm; the logarithmic terms in the collision integral arise naturally from the equation along with the non-logarithmic order-unity terms. The spectral method can also be used to solve the Landau equation and a quantum version of the Landau equation in which the integration over the wavenumber requires only a lower cutoff. We solve these problems as well and compare them with the full Lenard-Balescu solution in the weak-coupling limit. Finally, we discuss the possible generalization of this method to include spatial inhomogeneity and velocity anisotropy.

  19. Logarithmic black hole entropy corrections and holographic Rényi entropy

    NASA Astrophysics Data System (ADS)

    Mahapatra, Subhash

    2018-01-01

    The entanglement and Rényi entropies for spherical entangling surfaces in CFTs with gravity duals can be explicitly calculated by mapping these entropies first to the thermal entropy on hyperbolic space and then, using the AdS/CFT correspondence, to the Wald entropy of topological black holes. Here we extend this idea by taking into account corrections to the Wald entropy. Using the method based on horizon symmetries and the asymptotic Cardy formula, we calculate corrections to the Wald entropy and find that these corrections are proportional to the logarithm of the area of the horizon. With the corrected expression for the entropy of the black hole, we then find corrections to the Rényi entropies. We calculate these corrections for both Einstein and Gauss-Bonnet gravity duals. Corrections with logarithmic dependence on the area of the entangling surface naturally occur at order G_D^0. The entropic c-function and the inequalities of the Rényi entropy are also satisfied even with the correction terms.

  20. Logarithmic conformal field theory: beyond an introduction

    NASA Astrophysics Data System (ADS)

    Creutzig, Thomas; Ridout, David

    2013-12-01

    This article aims to review a selection of central topics and examples in logarithmic conformal field theory. It begins with the remarkable observation of Cardy that the horizontal crossing probability of critical percolation may be computed analytically within the formalism of boundary conformal field theory. Cardy's derivation relies on certain implicit assumptions which are shown to lead inexorably to indecomposable modules and logarithmic singularities in correlators. For this, a short introduction to the fusion algorithm of Nahm, Gaberdiel and Kausch is provided. While the percolation logarithmic conformal field theory is still not completely understood, there are several examples for which the formalism familiar from rational conformal field theory, including bulk partition functions, correlation functions, modular transformations, fusion rules and the Verlinde formula, has been successfully generalized. This is illustrated for three examples: the singlet model \mathfrak{M}(1,2), related to the triplet model \mathfrak{W}(1,2), symplectic fermions and the fermionic bc ghost system; the fractional level Wess-Zumino-Witten model based on \widehat{\mathfrak{sl}}(2) at k = -1/2, related to the bosonic βγ ghost system; and the Wess-Zumino-Witten model for the Lie supergroup \mathsf{GL}(1|1), related to \mathsf{SL}(2|1) at k = -1/2 and 1, the Bershadsky-Polyakov algebra W_3^{(2)} and the Feigin-Semikhatov algebras W_n^{(2)}. These examples have been chosen because they represent the most accessible, and most useful, members of the three best-understood families of logarithmic conformal field theories: the logarithmic minimal models \mathfrak{W}(q,p), the fractional level Wess-Zumino-Witten models, and the Wess-Zumino-Witten models on Lie supergroups (excluding \mathsf{OSP}(1|2n)). In this review, the emphasis lies on the representation theory of the underlying chiral algebra and the modular data pertaining to the characters of the representations. Each of the archetypal logarithmic conformal field theories is studied here by first determining its irreducible spectrum, which turns out to be continuous, as well as a selection of natural reducible, but indecomposable, modules. This is followed by a detailed description of how to obtain character formulae for each irreducible, a derivation of the action of the modular group on the characters, and an application of the Verlinde formula to compute the Grothendieck fusion rules. In each case, the (genuine) fusion rules are known, so comparisons can be made and favourable conclusions drawn. In addition, each example admits an infinite set of simple currents, hence extended symmetry algebras may be constructed and a series of bulk modular invariants computed. The spectrum of such an extended theory is typically discrete and this is how the triplet model \mathfrak{W}(1,2) arises, for example. Moreover, simple current technology admits a derivation of the extended algebra fusion rules from those of its continuous parent theory. Finally, each example is concluded by a brief description of the computation of some bulk correlators, a discussion of the structure of the bulk state space, and remarks concerning more advanced developments and generalizations.
The final part gives a very short account of the theory of staggered modules, the (simplest class of) representations that are responsible for the logarithmic singularities that distinguish logarithmic theories from their rational cousins. These modules are discussed in a generality suitable to encompass all the examples met in this review and some of the very basic structure theory is proven. Then, the important quantities known as logarithmic couplings are reviewed for Virasoro staggered modules and their role as fundamentally important parameters, akin to the three-point constants of rational conformal field theory, is discussed. An appendix is also provided in order to introduce some of the necessary, but perhaps unfamiliar, language of homological algebra.

  1. Logarithmic conformal field theory

    NASA Astrophysics Data System (ADS)

    Gainutdinov, Azat; Ridout, David; Runkel, Ingo

    2013-12-01

    Conformal field theory (CFT) has proven to be one of the richest and deepest subjects of modern theoretical and mathematical physics research, especially as regards statistical mechanics and string theory. It has also stimulated an enormous amount of activity in mathematics, shaping and building bridges between seemingly disparate fields through the study of vertex operator algebras, a (partial) axiomatisation of a chiral CFT. One can add to this that the successes of CFT, particularly when applied to statistical lattice models, have also served as an inspiration for mathematicians to develop entirely new fields: the Schramm-Loewner evolution and Smirnov's discrete complex analysis being notable examples. When the energy operator fails to be diagonalisable on the quantum state space, the CFT is said to be logarithmic. Consequently, a logarithmic CFT is one whose quantum space of states is constructed from a collection of representations which includes reducible but indecomposable ones. This qualifier arises because of the consequence that certain correlation functions will possess logarithmic singularities, something that contrasts with the familiar case of power law singularities. While such logarithmic singularities and reducible representations were noted by Rozansky and Saleur in their study of the U (1|1) Wess-Zumino-Witten model in 1992, the link between the non-diagonalisability of the energy operator and logarithmic singularities in correlators is usually ascribed to Gurarie's 1993 article (his paper also contains the first usage of the term 'logarithmic conformal field theory'). The class of CFTs that were under control at this time was quite small. In particular, an enormous amount of work from the statistical mechanics and string theory communities had produced a fairly detailed understanding of the (so-called) rational CFTs. However, physicists from both camps were well aware that applications from many diverse fields required significantly more complicated non-rational theories. Examples include critical percolation, supersymmetric string backgrounds, disordered electronic systems, sandpile models describing avalanche processes, and so on. In each case, the non-rationality and non-unitarity of the CFT suggested that a more general theoretical framework was needed. Driven by the desire to better understand these applications, the mid-1990s saw significant theoretical advances aiming to generalise the constructs of rational CFT to a more general class. In 1994, Nahm introduced an algorithm for computing the fusion product of representations which was significantly generalised two years later by Gaberdiel and Kausch who applied it to explicitly construct (chiral) representations upon which the energy operator acts non-diagonalisably. Their work made it clear that underlying the physically relevant correlation functions are classes of reducible but indecomposable representations that can be investigated mathematically to the benefit of applications. In another direction, Flohr had meanwhile initiated the study of modular properties of the characters of logarithmic CFTs, a topic which had already evoked much mathematical interest in the rational case. Since these seminal theoretical papers appeared, the field has undergone rapid development, both theoretically and with regard to applications. 
Logarithmic CFTs are now known to describe non-local observables in the scaling limit of critical lattice models, for example percolation and polymers, and are an integral part of our understanding of quantum strings propagating on supermanifolds. They are also believed to arise as duals of three-dimensional chiral gravity models, fill out hidden sectors in non-rational theories with non-compact target spaces, and describe certain transitions in various incarnations of the quantum Hall effect. Other physical applications range from two-dimensional turbulence and non-equilibrium systems to aspects of the AdS/CFT correspondence and describing supersymmetric sigma models beyond the topological sector. We refer the reader to the reviews in this collection for further applications and details. More recently, our understanding of logarithmic CFT has improved dramatically thanks largely to a better understanding of the underlying mathematical structures. This includes those associated to the vertex operator algebras themselves (representations, characters, modular transformations, fusion, braiding) as well as structures associated with applications to two-dimensional statistical models (diagram algebras, e.g. Temperley-Lieb, quantum groups). Not only are we getting to the point where we understand how these structures differ from standard (rational) theories, but we are starting to tackle applications both in the boundary and bulk settings. It is now clear that the logarithmic case is generic, so it is this case that one should expect to encounter in applications. We therefore feel that it is timely to review what has been accomplished in order to disseminate this improved understanding and motivate further applications. We now give a quick overview of the articles that constitute this special issue. Adamović and Milas provide a detailed summary of their rigorous results pertaining to logarithmic vertex operator (super)algebras constructed from lattices. This survey discusses the C2-cofiniteness of the (p, p') triplet models (this is the generalisation of rationality to the logarithmic setting), describes Zhu's algebra for (some of) these theories and outlines the difficulties involved in explicitly constructing the modules responsible for their logarithmic nature. Cardy gives an account of a popular approach to logarithmic theories that regards them, heuristically at least, as limits of ordinary (but non-rational) CFTs. More precisely, it seems that any given correlator may be computed as a limit of standard (non-logarithmic) correlators; any logarithmic singularities that arise do so because of a degeneration when taking the limit. He then illustrates this phenomenon in several theories describing statistical lattice models including the n → 0 limit of the O(n) model and the Q → 1 limit of the Q-state Potts model. Creutzig and Ridout review the continuum approach to logarithmic CFT, using the percolation (boundary) CFT to detail the connection between module structure and logarithmic singularities in correlators before describing their proposed solution to the thorny issue of generalising modular data and Verlinde formulae to the logarithmic setting. They illustrate this proposal using the three best-understood examples of logarithmic CFTs: the (1, 2) models, related to symplectic fermions; the fractional level WZW model on \widehat{\mathfrak{sl}}(2) at k = -1/2, related to the beta gamma ghosts; and the WZW model on GL(1|1). 
The analysis in each case requires that the spectrum be continuous; C2-cofinite models are only recovered as orbifolds. Flohr and Koehn consider the characters of the irreducible modules in the spectrum of a CFT and discuss why these only span a proper subspace of the space of torus vacuum amplitudes in the logarithmic case. This is illustrated explicitly for the (1, 2) triplet model and conclusions are drawn for the action of the modular group. They then note that the irreducible characters of this model also admit fermionic sum forms which seem to fit well into Nahm's well-known conjecture for rational theories. Quasi-particle interpretations are also introduced, leading to the conclusion that logarithmic C2-cofinite theories are not so terribly different to rational theories, at least in some respects. Fuchs, Schweigert and Stigner address the problem of constructing local logarithmic CFTs starting from the chiral theory. They first review the construction of the local theory in the non-logarithmic setting from an angle that will then generalise to logarithmic theories. In particular, they observe that the bulk space can be understood as a certain coend. The authors then show how to carry out the construction of the bulk space in the category of modules over a factorisable ribbon Hopf algebra, which shares many properties with the braided categories arising from logarithmic chiral theories. The authors proceed to construct the analogue of all-genus correlators in their setting and establish invariance under the mapping class group, i.e. locality of the correlators. Gainutdinov, Jacobsen, Read, Saleur and Vasseur review their approach based on the assumption that certain classes of logarithmic CFTs admit lattice regularisations with local degrees of freedom, for example quantum spin chains (with local interactions). They therefore study the finite-dimensional algebras generated by the Hamiltonian densities (typically the Temperley-Lieb algebras and their extensions) that describe the dynamics of these lattice models. The authors then argue that the lattice algebras exhibit, in finite size, mathematical properties that are in correspondence with those of their continuum limits, allowing one to predict continuum structures directly from the lattice. Moreover, the lattice models considered admit quantum group symmetries that play a central role in the algebraic analysis (representation structure and fusion). Grumiller, Riedler, Rosseel and Zojer review the role that logarithmic CFTs may play in certain versions of the AdS/CFT correspondence, particularly for what is known as topologically massive gravity (TMG). This has been a very active subject over the last five years and the article takes great care to disentangle the contributions from the many groups that have participated. They begin with some general remarks on logarithmic behaviour, much in the spirit of Cardy's review, before detailing the distinction between the chiral (no logs) and logarithmic proposals for critical TMG. The latter is then subjected to various consistency checks before discussing evidence for logarithmic behaviour in more general classes of gravity theories including those with boundaries, supersymmetry and Galilean relativity. Gurarie has written an historical overview of his seminal contributions to this field, putting his results (and those of his collaborators) in the context of understanding applications to condensed matter physics. 
This includes the link between the non-diagonalisability of L0 and logarithmic singularities, a study of the c → 0 catastrophe, and a proposed resolution involving supersymmetric partners for the stress-energy tensor and its logarithmic partner field. Henkel and Rouhani describe a direction in which logarithmic singularities are observed in correlators of non-relativistic field theories. Their review covers the modifications of conformal invariance that are appropriate to non-equilibrium statistical mechanics, strongly anisotropic critical points and certain variants of TMG. The main variation away from the standard relativistic idea of conformal invariance is that time is explicitly distinguished from space when considering dilations and this leads to a variety of algebraic structures to explore. In this review, the link between non-diagonalisable representations and logarithmic singularities in correlators is generalised to these algebras, before two applications of the theory are discussed. Huang and Lepowsky give a non-technical overview of their work on braided tensor structures on suitable categories of representations of vertex operator algebras. They also place their work in historic context and compare it to related approaches. The authors sketch their construction of the so-called P(z)-tensor product of modules of a vertex operator algebra, and the construction of the associativity isomorphisms for this tensor product. They proceed to give a guide to their works leading to the first author's proof of modularity for a class of vertex operator algebras, and to their works, joint with Zhang, on logarithmic intertwining operators and the resulting tensor product theory. Morin-Duchesne and Saint-Aubin have contributed a research article describing their recent characterisation of when the transfer matrix of a periodic loop model fails to be diagonalisable. This generalises their recent result for non-periodic loop models and provides rigorous methods to justify what has often been assumed in the lattice approach to logarithmic CFT. The philosophy here is one of analysing lattice models with finite size, aiming to demonstrate that non-diagonalisability survives the scaling limit. This is extremely difficult in general (see also the review by Gainutdinov et al.), so it is remarkable that it is even possible to demonstrate this at any level of generality. Quella and Schomerus have prepared an extensive review covering their longstanding collaboration on the logarithmic nature of conformal sigma models on Lie supergroups and their cosets with applications to string theory and AdS/CFT. Beginning with a very welcome overview of Lie superalgebras and their representations, harmonic analysis and cohomological reduction, they then apply these mathematical tools to WZW models on type I Lie supergroups and their homogeneous subspaces. Along the way, deformations are discussed and potential dualities in the corresponding string theories are described. Ruelle provides an exhaustive account of his substantial contributions to the study of the abelian sandpile model. This is a statistical model which has the surprising feature that many correlation functions can be computed exactly, in the bulk and on the boundary, even though the spectrum of conformal weights is largely unknown. Nevertheless, there is much evidence suggesting that its scaling limit is described by an, as yet unknown, c = -2 logarithmic CFT. 
Semikhatov and Tipunin present their very recent results regarding the construction of logarithmic chiral W-algebra extensions of a fractional level algebra. The idea is that these algebras are the centralisers of a rank-two Nichols algebra which possesses at least one fermionic generator. In turn, these Nichols algebra generators are represented by screening operators which naturally appear in CFT bosonisation. The major advantage of using these generators is that they give strong hints about the representation theory and fusion rules of the chiral algebra. Simmons has contributed an article describing the calculation of various correlation functions in the logarithmic CFT that describes critical percolation. These calculations are interpreted geometrically in a manner that should be familiar to mathematicians studying Schramm-Loewner evolutions and point towards a (largely unexplored) bridge connecting logarithmic CFT with this branch of mathematics. Of course, the field of logarithmic CFT has benefited greatly from the work of many researchers who are not represented in this special issue. The interested reader will find many links to their work in the bibliographies of the special issue articles and reviews. In summary, logarithmic CFT describes an extension of the incredibly successful methods of rational CFT to a more general setting. This extension is necessary to properly describe many different fundamental phenomena of physical interest. The formalism is moreover highly non-trivial from a mathematical point of view and so logarithmic theories are of significant interest to both physicists and mathematicians. We hope that the collection of articles that follows will serve as an inspiration, and a valuable resource, for both of these communities.

  2. Synthetic analog computation in living cells.

    PubMed

    Daniel, Ramiz; Rubens, Jacob R; Sarpeshkar, Rahul; Lu, Timothy K

    2013-05-30

    A central goal of synthetic biology is to achieve multi-signal integration and processing in living cells for diagnostic, therapeutic and biotechnology applications. Digital logic has been used to build small-scale circuits, but other frameworks may be needed for efficient computation in the resource-limited environments of cells. Here we demonstrate that synthetic analog gene circuits can be engineered to execute sophisticated computational functions in living cells using just three transcription factors. Such synthetic analog gene circuits exploit feedback to implement logarithmically linear sensing, addition, ratiometric and power-law computations. The circuits exhibit Weber's law behaviour as in natural biological systems, operate over a wide dynamic range of up to four orders of magnitude and can be designed to have tunable transfer functions. Our circuits can be composed to implement higher-order functions that are well described by both intricate biochemical models and simple mathematical functions. By exploiting analog building-block functions that are already naturally present in cells, this approach efficiently implements arithmetic operations and complex functions in the logarithmic domain. Such circuits may lead to new applications for synthetic biology and biotechnology that require complex computations with limited parts, need wide-dynamic-range biosensing or would benefit from the fine control of gene expression.
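    As a purely mathematical aside (not a model of the gene circuits themselves), the sketch below shows why a logarithmic encoding makes the operations listed above cheap: sums, differences, and scalings of log-domain signals implement products, ratios, and power laws, and equal fold-changes of the input produce equal increments of the encoded signal, which is the Weber-law behaviour mentioned.

```python
# Mathematical illustration of log-domain computation (not a model of the gene circuits):
# with signals represented by their logarithms, addition implements products,
# subtraction implements ratios, and scaling implements power laws.
import numpy as np

x, y = 250.0, 4.0
u, v = np.log(x), np.log(y)           # "log-linear sensing" of the two inputs

product = np.exp(u + v)               # addition in log domain   -> x * y
ratio = np.exp(u - v)                 # subtraction              -> x / y (ratiometric)
power = np.exp(1.5 * u)               # scaling                  -> x ** 1.5 (power law)

print(product, x * y)
print(ratio, x / y)
print(power, x ** 1.5)

# Weber-law flavour: equal *fold* changes of the input give equal increments of u.
print(np.log(2 * x) - np.log(x), np.log(2 * y) - np.log(y))   # both ln(2)
```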

  3. Neyman-Pearson biometric score fusion as an extension of the sum rule

    NASA Astrophysics Data System (ADS)

    Hube, Jens Peter

    2007-04-01

    We define the biometric performance invariance under strictly monotonic functions on match scores as normalization symmetry. We use this symmetry to clarify the essential difference between the standard score-level fusion approaches of sum rule and Neyman-Pearson. We then express Neyman-Pearson fusion assuming match scores defined using false acceptance rates on a logarithmic scale. We show that by stating Neyman-Pearson in this form, it reduces to sum rule fusion for ROC curves with logarithmic slope. We also introduce a one parameter model of biometric performance and use it to express Neyman-Pearson fusion as a weighted sum rule.
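    A minimal sketch of the fusion rule described above, assuming each matcher's score can be mapped to a false acceptance rate: the fused statistic is simply a sum of scores on the logarithmic FAR scale. The score-to-FAR curves below are invented analytic stand-ins for empirically calibrated ones.

```python
# Sketch: score fusion by summing log-FAR values across matchers.
# The score-to-FAR mappings are invented stand-ins for calibrated curves.
import numpy as np

def far_matcher_a(s):               # assumed: FAR falls exponentially with score
    return np.exp(-3.0 * s)

def far_matcher_b(s):
    return np.exp(-5.0 * s)

def fused_score(sa, sb):
    # Fused statistic: -log10(FAR_a) - log10(FAR_b), i.e. a sum rule on the log-FAR scale.
    return -np.log10(far_matcher_a(sa)) - np.log10(far_matcher_b(sb))

genuine = fused_score(1.8, 1.5)
impostor = fused_score(0.3, 0.2)
print(genuine, impostor)            # genuine comparisons should score higher
```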

  4. Precise Determination of the Absorption Maximum in Wide Bands

    ERIC Educational Resources Information Center

    Eriksson, Karl-Hugo; And Others

    1977-01-01

    A precise method of determining absorption maxima where Gaussian functions occur is described. The method is based on a logarithmic transformation of the Gaussian equation and is suited for a mini-computer. (MR)
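    The idea translates directly into a short numerical sketch: since the logarithm of a Gaussian band is a parabola, a quadratic fit to log-transformed absorbance gives the peak position in closed form. The band parameters and noise level below are illustrative.

```python
# Sketch: locating a Gaussian band maximum via a logarithmic transformation.
# ln A(x) = ln A0 - (x - x0)^2 / (2 s^2) is quadratic in x, so a parabola fit to
# ln(absorbance) gives the peak position x0 = -b / (2 a) in closed form.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(480.0, 520.0, 41)                      # wavelength grid, nm (illustrative)
A0, x0, s = 0.85, 503.2, 9.0
absorbance = A0 * np.exp(-(x - x0)**2 / (2 * s**2)) * (1 + 0.01 * rng.normal(size=x.size))

a, b, c = np.polyfit(x, np.log(absorbance), 2)          # quadratic in x
x_peak = -b / (2 * a)
print(x_peak)                                           # close to 503.2 nm
```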

  5. Generalization and capacity of extensively large two-layered perceptrons.

    PubMed

    Rosen-Zvi, Michal; Engel, Andreas; Kanter, Ido

    2002-09-01

    The generalization ability and storage capacity of a treelike two-layered neural network with a number of hidden units scaling as the input dimension is examined. The mapping from the input to the hidden layer is via Boolean functions; the mapping from the hidden layer to the output is done by a perceptron. The analysis is within the replica framework where an order parameter characterizing the overlap between two networks in the combined space of Boolean functions and hidden-to-output couplings is introduced. The maximal capacity of such networks is found to scale linearly with the logarithm of the number of Boolean functions per hidden unit. The generalization process exhibits a first-order phase transition from poor to perfect learning for the case of discrete hidden-to-output couplings. The critical number of examples per input dimension, alpha(c), at which the transition occurs, again scales linearly with the logarithm of the number of Boolean functions. In the case of continuous hidden-to-output couplings, the generalization error decreases according to the same power law as for the perceptron, with the prefactor being different.

  6. Abelian non-global logarithms from soft gluon clustering

    NASA Astrophysics Data System (ADS)

    Kelley, Randall; Walsh, Jonathan R.; Zuberi, Saba

    2012-09-01

    Most recombination-style jet algorithms cluster soft gluons in a complex way. This leads to previously identified correlations in the soft gluon phase space and introduces logarithmic corrections to jet cross sections, which are known as clustering logarithms. The leading Abelian clustering logarithms occur at least at next-to-leading logarithm (NLL) in the exponent of the distribution. Using the framework of Soft Collinear Effective Theory (SCET), we show that new clustering effects contributing at NLL arise at each order. While numerical resummation of clustering logs is possible, it is unlikely that they can be analytically resummed to NLL. Clustering logarithms make the anti-kT algorithm theoretically preferred, for which they are power suppressed. They can arise in Abelian and non-Abelian terms, and we calculate the Abelian clustering logarithms at O(α_s^2) for the jet mass distribution using the Cambridge/Aachen and kT algorithms, including jet radius dependence, which extends previous results. We find that clustering logarithms can be naturally thought of as a class of non-global logarithms, which have traditionally been tied to non-Abelian correlations in soft gluon emission.

  7. Non-additive non-interacting kinetic energy of rare gas dimers

    NASA Astrophysics Data System (ADS)

    Jiang, Kaili; Nafziger, Jonathan; Wasserman, Adam

    2018-03-01

    Approximations of the non-additive non-interacting kinetic energy (NAKE) as an explicit functional of the density are the basis of several electronic structure methods that provide improved computational efficiency over standard Kohn-Sham calculations. However, within most fragment-based formalisms, there is no unique exact NAKE, making it difficult to develop general, robust approximations for it. When adjustments are made to the embedding formalisms to guarantee uniqueness, approximate functionals may be more meaningfully compared to the exact unique NAKE. We use numerically accurate inversions to study the exact NAKE of several rare-gas dimers within partition density functional theory, a method that provides the uniqueness for the exact NAKE. We find that the NAKE decreases nearly exponentially with atomic separation for the rare-gas dimers. We compute the logarithmic derivative of the NAKE with respect to the bond length for our numerically accurate inversions as well as for several approximate NAKE functionals. We show that standard approximate NAKE functionals do not reproduce the correct behavior for this logarithmic derivative and propose two new NAKE functionals that do. The first of these is based on a re-parametrization of a conjoint Perdew-Burke-Ernzerhof (PBE) functional. The second is a simple, physically motivated non-decomposable NAKE functional that matches the asymptotic decay constant without fitting.

  8. The four-loop six-gluon NMHV ratio function

    DOE PAGES

    Dixon, Lance J.; von Hippel, Matt; McLeod, Andrew J.

    2016-01-11

    We use the hexagon function bootstrap to compute the ratio function which characterizes the next-to-maximally-helicity-violating (NMHV) six-point amplitude in planar N=4 super-Yang-Mills theory at four loops. A powerful constraint comes from dual superconformal invariance, in the form of a Q¯ differential equation, which heavily constrains the first derivatives of the transcendental functions entering the ratio function. At four loops, it leaves only a 34-parameter space of functions. Constraints from the collinear limits, and from the multi-Regge limit at the leading-logarithmic (LL) and next-to-leading-logarithmic (NLL) order, suffice to fix these parameters and obtain a unique result. We test the result against multi-Regge predictions at NNLL and N^3LL, and against predictions from the operator product expansion involving one and two flux-tube excitations; all cross-checks are satisfied. We study the analytical and numerical behavior of the parity-even and parity-odd parts on various lines and surfaces traversing the three-dimensional space of cross ratios. As part of this program, we characterize all irreducible hexagon functions through weight eight in terms of their coproduct. As a result, we also provide representations of the ratio function in particular kinematic regions in terms of multiple polylogarithms.

  9. The four-loop six-gluon NMHV ratio function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dixon, Lance J.; von Hippel, Matt; McLeod, Andrew J.

    2016-01-11

    We use the hexagon function bootstrap to compute the ratio function which characterizes the next-to-maximally-helicity-violating (NMHV) six-point amplitude in planar N = 4 super-Yang-Mills theory at four loops. A powerful constraint comes from dual superconformal invariance, in the form of a Q¯ differential equation, which heavily constrains the first derivatives of the transcendental functions entering the ratio function. At four loops, it leaves only a 34-parameter space of functions. Constraints from the collinear limits, and from the multi-Regge limit at the leading-logarithmic (LL) and next-to-leading-logarithmic (NLL) order, suffice to fix these parameters and obtain a unique result. We test the result against multi-Regge predictions at NNLL and N^3LL, and against predictions from the operator product expansion involving one and two flux-tube excitations; all cross-checks are satisfied. We also study the analytical and numerical behavior of the parity-even and parity-odd parts on various lines and surfaces traversing the three-dimensional space of cross ratios. As part of this program, we characterize all irreducible hexagon functions through weight eight in terms of their coproduct. Furthermore, we provide representations of the ratio function in particular kinematic regions in terms of multiple polylogarithms.

  10. Leading logarithmic corrections to the muonium hyperfine splitting and to the hydrogen Lamb shift

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karshenboim, S.G.

    1994-12-31

    Main leading corrections with the recoil logarithm log(M/m) and the low-energy logarithm log(Zα) to the muonium hyperfine splitting are discussed. Logarithmic corrections have magnitudes of 0.1 to 0.3 kHz. Non-leading higher order corrections are expected to be not larger than 0.1 kHz. The leading logarithmic correction to the hydrogen Lamb shift is also obtained.

  11. Q estimation of seismic data using the generalized S-transform

    NASA Astrophysics Data System (ADS)

    Hao, Yaju; Wen, Xiaotao; Zhang, Bo; He, Zhenhua; Zhang, Rui; Zhang, Jinming

    2016-12-01

    Quality factor, Q, is a parameter that characterizes the energy dissipation during seismic wave propagation. The reservoir pore is one of the main factors that affect the value of Q. Especially, when pore space is filled with oil or gas, the rock usually exhibits a relative low Q value. Such a low Q value has been used as a direct hydrocarbon indicator by many researchers. The conventional Q estimation method based on spectral ratio suffers from the problem of waveform tuning; hence, many researchers have introduced time-frequency analysis techniques to tackle this problem. Unfortunately, the window functions adopted in time-frequency analysis algorithms such as continuous wavelet transform (CWT) and S-transform (ST) contaminate the amplitude spectra because the seismic signal is multiplied by the window functions during time-frequency decomposition. The basic assumption of the spectral ratio method is that there is a linear relationship between natural logarithmic spectral ratio and frequency. However, this assumption does not hold if we take the influence of window functions into consideration. In this paper, we first employ a recently developed two-parameter generalized S-transform (GST) to obtain the time-frequency spectra of seismic traces. We then deduce the non-linear relationship between natural logarithmic spectral ratio and frequency. Finally, we obtain a linear relationship between natural logarithmic spectral ratio and a newly defined parameter γ by ignoring the negligible second order term. The gradient of this linear relationship is 1/Q. Here, the parameter γ is a function of frequency and source wavelet. Numerical examples for VSP and post-stack reflection data confirm that our algorithm is capable of yielding accurate results. The Q-value results estimated from field data acquired in western China show reasonable comparison with oil-producing well location.
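    For context, the sketch below applies the classical spectral-ratio step that the abstract builds on: the natural logarithmic spectral ratio is linear in frequency with slope -πΔt/Q, so a straight-line fit recovers Q. The wavelet, Q value, and travel-time difference are invented, and the generalized S-transform refinement of the paper is not reproduced.

```python
# Sketch of the classical spectral-ratio method (not the GST-based variant in the paper):
# ln[S2(f)/S1(f)] = const - (pi * f * dt) / Q, so the slope of the log spectral ratio
# against frequency gives Q.
import numpy as np

Q_true, dt = 60.0, 0.20                      # attenuation and travel-time difference (illustrative)
f = np.linspace(10.0, 60.0, 200)             # usable frequency band, Hz
S1 = np.exp(-(f / 35.0)**2)                  # reference amplitude spectrum (illustrative wavelet)
S2 = S1 * np.exp(-np.pi * f * dt / Q_true)   # attenuated spectrum

log_ratio = np.log(S2 / S1)
slope, intercept = np.polyfit(f, log_ratio, 1)
Q_est = -np.pi * dt / slope
print(Q_est)                                 # recovers ~60
```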

  12. Jet shapes in dijet events at the LHC in SCET

    NASA Astrophysics Data System (ADS)

    Hornig, Andrew; Makris, Yiannis; Mehen, Thomas

    2016-04-01

    We consider the class of jet shapes known as angularities in dijet production at hadron colliders. These angularities are modified from the original definitions in e+e- collisions to be boost invariant along the beam axis. These shapes apply to the constituents of jets defined with respect to either kT-type (anti-kT, C/A, and kT) algorithms or cone-type algorithms. We present an SCET factorization formula and calculate the ingredients needed to achieve next-to-leading-log (NLL) accuracy in kinematic regions where non-global logarithms are not large. The factorization formula involves previously unstudied "unmeasured beam functions," which are present for finite rapidity cuts around the beams. We derive relations between the jet functions and the shape-dependent part of the soft function that appear in the factorized cross section and those previously calculated for e+e- collisions, and present the calculation of the non-trivial, color-connected part of the soft function to O(α_s). This latter part of the soft function is universal in the sense that it applies to any experimental setup with an out-of-jet p_T veto and rapidity cuts together with two identified jets, and it is independent of the choice of jet (sub-)structure measurement. In addition, we implement the recently introduced soft-collinear refactorization to resum logarithms of the jet size, valid in the region of non-enhanced non-global logarithm effects. While our results are valid for all 2 → 2 channels, we compute explicitly for the qq' → qq' channel the color-flow matrices and plot the NLL resummed differential dijet cross section as an explicit example, which shows that the normalization and scale uncertainty is reduced when the soft function is refactorized. For this channel, we also plot the jet size R dependence, the p_T^cut dependence, and the dependence on the angularity parameter a.

  13. Jet shapes in dijet events at the LHC in SCET

    DOE PAGES

    Hornig, Andrew; Makris, Yiannis; Mehen, Thomas

    2016-04-15

    Here, we consider the class of jet shapes known as angularities in dijet production at hadron colliders. These angularities are modified from the original definitions in e+e- collisions to be boost invariant along the beam axis. These shapes apply to the constituents of jets defined with respect to either kT-type (anti-kT, C/A, and kT) algorithms or cone-type algorithms. We present an SCET factorization formula and calculate the ingredients needed to achieve next-to-leading-log (NLL) accuracy in kinematic regions where non-global logarithms are not large. The factorization formula involves previously unstudied “unmeasured beam functions,” which are present for finite rapidity cuts around the beams. We derive relations between the jet functions and the shape-dependent part of the soft function that appear in the factorized cross section and those previously calculated for e+e- collisions, and present the calculation of the non-trivial, color-connected part of the soft function to O(α_s). This latter part of the soft function is universal in the sense that it applies to any experimental setup with an out-of-jet p_T veto and rapidity cuts together with two identified jets, and it is independent of the choice of jet (sub-)structure measurement. In addition, we implement the recently introduced soft-collinear refactorization to resum logarithms of the jet size, valid in the region of non-enhanced non-global logarithm effects. While our results are valid for all 2 → 2 channels, we compute explicitly for the qq' → qq' channel the color-flow matrices and plot the NLL resummed differential dijet cross section as an explicit example, which shows that the normalization and scale uncertainty is reduced when the soft function is refactorized. For this channel, we also plot the jet size R dependence, the p_T^cut dependence, and the dependence on the angularity parameter a.

  14. Assessing the role of pavement macrotexture in preventing crashes on highways.

    PubMed

    Pulugurtha, Srinivas S; Kusam, Prasanna R; Patel, Kuvleshay J

    2010-02-01

    The objective of this article is to assess the role of pavement macrotexture in preventing crashes on highways in the State of North Carolina. Laser profilometer data obtained from the North Carolina Department of Transportation (NCDOT) for highways comprising four corridors are processed to calculate pavement macrotexture at 100-m (approximately 330-ft) sections according to the American Society for Testing and Materials (ASTM) standards. Crash data collected over the same lengths of the corridors were integrated with the calculated pavement macrotexture for each section. Scatterplots were generated to assess the role of pavement macrotexture on crashes and on the logarithm of crashes. Regression analyses were conducted by considering predictor variables such as million vehicle miles of travel (as a function of traffic volume and length), the number of interchanges, the number of at-grade intersections, the number of grade-separated interchanges, and the number of bridges, culverts, and overhead signs, along with pavement macrotexture, to study the statistical significance of the relationship between pavement macrotexture and crashes (both linear and log-linear) when compared to other predictor variables. The scatterplots and regression analyses indicate a more statistically significant relationship between pavement macrotexture and the logarithm of crashes than between pavement macrotexture and crashes. The coefficient for pavement macrotexture is, in general, negative, indicating that the number of crashes, or its logarithm, decreases as macrotexture increases. The relation between pavement macrotexture and the logarithm of crashes is generally stronger than that between most other predictor variables and crashes or the logarithm of crashes. Based on the results obtained, it can be concluded that maintaining pavement macrotexture at or above a threshold of 1.524 mm (0.06 in.) would likely reduce crashes and provide safe transportation to road users on highways.
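    A compact sketch of the log-linear versus linear comparison described above, using synthetic section-level data and ordinary least squares; the variables, coefficients, and noise model are illustrative, not the NCDOT data.

```python
# Sketch: comparing a linear and a log-linear crash model by least squares.
# Synthetic data; coefficients and variables are illustrative only.
import numpy as np

rng = np.random.default_rng(2)
n = 200
macrotexture = rng.uniform(0.5, 2.5, n)           # mm, per 100-m section
mvmt = rng.uniform(0.1, 3.0, n)                   # million vehicle miles of travel
true_log_crashes = 0.8 + 0.6 * mvmt - 0.9 * macrotexture
crashes = np.exp(true_log_crashes + rng.normal(0, 0.3, n))

X = np.column_stack([np.ones(n), mvmt, macrotexture])

b_lin, *_ = np.linalg.lstsq(X, crashes, rcond=None)          # linear model: crashes ~ X b
b_log, *_ = np.linalg.lstsq(X, np.log(crashes), rcond=None)  # log-linear model: log(crashes) ~ X b

print("linear coefficients:    ", b_lin)
print("log-linear coefficients:", b_log)          # macrotexture coefficient is negative
```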

  15. General equations for optimal selection of diagnostic image acquisition parameters in clinical X-ray imaging.

    PubMed

    Zheng, Xiaoming

    2017-12-01

    The purpose of this work was to examine the effects of relationship functions between diagnostic image quality and radiation dose on the governing equations for image acquisition parameter variations in X-ray imaging. Various equations were derived for the optimal selection of peak kilovoltage (kVp) and exposure parameter (milliAmpere second, mAs) in computed tomography (CT), computed radiography (CR), and direct digital radiography. Logistic, logarithmic, and linear functions were employed to establish the relationship between radiation dose and diagnostic image quality. The radiation dose to the patient, as a function of image acquisition parameters (kVp, mAs) and patient size (d), was used in radiation dose and image quality optimization. Both logistic and logarithmic functions resulted in the same governing equation for optimal selection of image acquisition parameters using a dose efficiency index. For image quality as a linear function of radiation dose, the same governing equation was derived from the linear relationship. The general equations should be used in guiding clinical X-ray imaging through optimal selection of image acquisition parameters. The radiation dose to the patient could be reduced from current levels in medical X-ray imaging.
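    As a numerical illustration only (the functional forms and constants below are assumed, not the paper's derived equations), one can see how a dose efficiency index of the form quality per unit dose picks out an optimal dose once a logarithmic or logistic quality-dose relationship is specified:

```python
# Numerical illustration with assumed functional forms (not the paper's equations):
# with quality a logarithmic or logistic function of dose, the dose maximising a
# simple "dose efficiency index" quality/dose can be found by a 1-d scan.
import numpy as np

D = np.linspace(0.05, 20.0, 4000)                 # relative dose axis

def quality_logarithmic(d, a=1.0, d0=0.5):
    return np.clip(a * np.log(d / d0), 0.0, None) # zero below an assumed visibility threshold d0

def quality_logistic(d, qmax=1.0, d50=2.0, k=3.0):
    return qmax / (1.0 + np.exp(-k * (d - d50)))

for name, q in [("logarithmic", quality_logarithmic(D)),
                ("logistic", quality_logistic(D))]:
    efficiency = q / D                            # dose efficiency index (assumed definition)
    print(name, "optimal relative dose:", D[np.argmax(efficiency)])
```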

  16. Probabilities and statistics for backscatter estimates obtained by a scatterometer with applications to new scatterometer design data

    NASA Technical Reports Server (NTRS)

    Pierson, Willard J., Jr.

    1989-01-01

    The values of the Normalized Radar Backscattering Cross Section (NRCS), sigma(o), obtained by a scatterometer are random variables whose variance is a known function of the expected value. The probability density function can be obtained from the normal distribution. Models express the expected value as a function of the properties of the waves on the ocean and the winds that generated the waves. Point estimates of the expected value were found from various statistics, given the parameters that define the probability density function for each value. Random intervals were derived with a preassigned probability of containing that value. A statistical test to determine whether or not successive values of sigma(o) are truly independent was derived. The maximum likelihood estimates for wind speed and direction were found, given a model for backscatter as a function of the properties of the waves on the ocean. These estimates are biased as a result of the terms in the equation that involve natural logarithms, and calculations of the point estimates of the maximum likelihood values are used to show that the contributions of the logarithmic terms are negligible and that the terms can be omitted.

  17. Nonlinear interactions and their scaling in the logarithmic region of turbulent channels

    NASA Astrophysics Data System (ADS)

    Moarref, Rashad; Sharma, Ati S.; Tropp, Joel A.; McKeon, Beverley J.

    2014-11-01

    The nonlinear interactions in wall turbulence redistribute the turbulent kinetic energy across different scales and different wall-normal locations. To better understand these interactions in the logarithmic region of turbulent channels, we decompose the velocity into a weighted sum of resolvent modes (McKeon & Sharma, J. Fluid Mech., 2010). The resolvent modes represent the linear amplification mechanisms in the Navier-Stokes equations (NSE) and the weights represent the scaling influence of the nonlinearity. An explicit equation for the unknown weights is obtained by projecting the NSE onto the known resolvent modes (McKeon et al., Phys. Fluids, 2013). The weights of triad modes (the modes that directly interact via the quadratic nonlinearity in the NSE) are coupled via interaction coefficients that depend solely on the resolvent modes. We use the hierarchies of self-similar modes in the logarithmic region (Moarref et al., J. Fluid Mech., 2013) to extend the notion of triad modes to triad hierarchies. It is shown that the interaction coefficients for the triad modes that belong to a triad hierarchy follow an exponential function. These scalings can be used to better understand the interaction of flow structures in the logarithmic region and develop analytical results therein. The support of Air Force Office of Scientific Research under Grants FA 9550-09-1-0701 (P.M. Rengasamy Ponnappan) and FA 9550-12-1-0469 (P.M. Doug Smith) is gratefully acknowledged.

  18. The asymptotic form of non-global logarithms, black disc saturation, and gluonic deserts

    NASA Astrophysics Data System (ADS)

    Neill, Duff

    2017-01-01

    We develop an asymptotic perturbation theory for the large logarithmic behavior of the non-linear integro-differential equation describing the soft correlations of QCD jet measurements, the Banfi-Marchesini-Smye (BMS) equation. This equation captures the late-time evolution of radiating color dipoles after a hard collision. This allows us to prove that at large values of the control variable (the non-global logarithm, a function of the infra-red energy scales associated with distinct hard jets in an event), the distribution has a Gaussian tail. We compute the decay width analytically, giving a closed form expression, and find it to be jet geometry independent, up to the number of legs of the dipole in the active jet. Enabling the asymptotic expansion is the correct perturbative seed, where we perturb around an ansatz encoding formally no real emissions, an intuition motivated by the buffer region found in jet dynamics. This must be supplemented with the correct application of the BFKL approximation to the BMS equation in collinear limits. Comparing to the asymptotics of the conformally related evolution equation encountered in small-x physics, the Balitsky-Kovchegov (BK) equation, we find that the asymptotic form of the non-global logarithms directly maps to the black-disc unitarity limit of the BK equation, despite the contrasting physical pictures. Indeed, we recover the equations of saturation physics in the final state dynamics of QCD.

  19. 40 CFR 53.62 - Test procedure: Full wind tunnel test.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... accuracy of 5 percent or better (e.g., hot-wire anemometry). For the wind speeds specified in table F-2 of... candidate sampler as a function of aerodynamic particle diameter (Dae) on semi-logarithmic graph paper where...

  20. 40 CFR 53.62 - Test procedure: Full wind tunnel test.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... accuracy of 5 percent or better (e.g., hot-wire anemometry). For the wind speeds specified in table F-2 of... candidate sampler as a function of aerodynamic particle diameter (Dae) on semi-logarithmic graph paper where...

  1. 40 CFR 53.62 - Test procedure: Full wind tunnel test.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... accuracy of 5 percent or better (e.g., hot-wire anemometry). For the wind speeds specified in table F-2 of... candidate sampler as a function of aerodynamic particle diameter (Dae) on semi-logarithmic graph paper where...

  2. 40 CFR 53.62 - Test procedure: Full wind tunnel test.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... accuracy of 5 percent or better (e.g., hot-wire anemometry). For the wind speeds specified in table F-2 of... candidate sampler as a function of aerodynamic particle diameter (Dae) on semi-logarithmic graph paper where...

  3. CHEMICAL TIME-SERIES SAMPLING

    EPA Science Inventory

    The rationale for chemical time-series sampling has its roots in the same fundamental relationships as govern well hydraulics. Samples of ground water are collected as a function of increasing time of pumpage. The most efficient pattern of collection consists of logarithmically s...

  4. A quick response four decade logarithmic high-voltage stepping supply

    NASA Technical Reports Server (NTRS)

    Doong, H.

    1978-01-01

    An improved high-voltage stepping supply for space instrumentation is described, for applications where low power consumption and fast settling time between steps are required. The high-voltage stepping supply, utilizing an average power of 750 milliwatts, delivers a pair of mirror-image outputs with 64 logarithmic levels. It covers a four-decade range of + or - 2500 to + or - 0.29 volts with an output stability of + or - 0.5 percent or + or - 20 millivolts for all line, load, and temperature variations. The supply provides a typical step settling time of 1 millisecond, with 100 microseconds for the lower two decades. The versatile design features of the high-voltage stepping supply provide a quick-response staircase generator as described, or a fixed voltage with the option to change levels as required over large dynamic ranges without circuit modifications. The concept can be implemented up to + or - 5000 volts. With these design features, the high-voltage stepping supply should find numerous applications in charged particle detection, electro-optical systems, and high-voltage scientific instruments.
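    As a quick numeric check of the stated range, 64 logarithmically spaced levels spanning 2500 V down to 0.29 V cover very nearly four decades with a constant ratio between adjacent steps (a sketch of the level spacing only, not the actual divider network):

```python
# Numeric check: 64 logarithmically spaced levels over the stated four-decade range
# from 2500 V down to 0.29 V (one polarity shown; the supply mirrors it).
import numpy as np

levels = np.geomspace(2500.0, 0.29, 64)
print(levels[:3], levels[-3:])
print("decades spanned:", np.log10(2500.0 / 0.29))     # about 3.94
print("ratio between adjacent steps:", levels[0] / levels[1])
```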

  5. Surface capillary currents: Rediscovery of fluid-structure interaction by forced evolving boundary theory

    NASA Astrophysics Data System (ADS)

    Wang, Chunbai; Mitra, Ambar K.

    2016-01-01

    Any boundary surface evolving in a viscous fluid is driven with surface capillary currents. Using a step function defined on the fluid-structure interface, the surface currents near a flat wall are found to take a logarithmic form. The general flat-plate boundary layer is demonstrated through the interface kinematics. The dynamics analysis elucidates the relationship of the surface currents with the adhering region as well as the no-slip boundary condition. The wall skin friction coefficient, the displacement thickness, and the logarithmic velocity-defect law of the smooth flat-plate boundary-layer flow are derived with the advent of the forced evolving boundary method. This fundamental theory has wide applications in applied science and engineering.

  6. Collective modes in two-dimensional one-component-plasma with logarithmic interaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khrapak, Sergey A.; Forschungsgruppe Komplexe Plasmen, Deutsches Zentrum für Luft- und Raumfahrt, Oberpfaffenhofen; Joint Institute for High Temperatures, Russian Academy of Sciences, Moscow

    The collective modes of a familiar two-dimensional one-component-plasma with the repulsive logarithmic interaction between the particles are analysed using the quasi-crystalline approximation (QCA) combined with molecular dynamics simulation of the equilibrium structural properties. It is found that the dispersion curves in the strongly coupled regime are virtually independent of the coupling strength. Arguments based on the excluded volume consideration for the radial distribution function allow us to derive very simple expressions for the dispersion relations, which show excellent agreement with the exact QCA dispersion over the entire domain of wavelengths. Comparison with the results of the conventional fluid analysis is performed, and the difference is explained.

  7. Species-abundance distribution patterns of soil fungi: contribution to the ecological understanding of their response to experimental fire in Mediterranean maquis (southern Italy).

    PubMed

    Persiani, Anna Maria; Maggi, Oriana

    2013-01-01

    Experimental fires, of both low and high intensity, were lit during summer 2000 and the following 2 y in the Castel Volturno Nature Reserve, southern Italy. Soil samples were collected Jul 2000-Jul 2002 to analyze the soil fungal community dynamics. Species abundance distribution patterns (geometric, logarithmic, log normal, broken-stick) were compared. We plotted datasets with information both on species richness and abundance for total, xerotolerant and heat-stimulated soil microfungi. The xerotolerant fungi conformed to a broken-stick model for both the low- and high-intensity fires at 7 and 84 d after the fire; their distribution subsequently followed logarithmic models in the 2 y following the fire. The distribution of the heat-stimulated fungi changed from broken-stick to logarithmic models and eventually to a log-normal model during the post-fire recovery. Xerotolerant and, to a far greater extent, heat-stimulated soil fungi acquire an important functional role following soil water stress and/or fire disturbance; these disturbances let them occupy unsaturated habitats and become increasingly abundant over time.

  8. Quality parameters analysis of optical imaging systems with enhanced focal depth using the Wigner distribution function

    PubMed

    Zalvidea; Colautti; Sicre

    2000-05-01

    An analysis of the Strehl ratio and the optical transfer function as imaging quality parameters of optical elements with enhanced focal length is carried out by employing the Wigner distribution function. To this end, we use four different pupil functions: a full circular aperture, a hyper-Gaussian aperture, a quartic phase plate, and a logarithmic phase mask. A comparison is performed between the quality parameters and test images formed by these pupil functions at different defocus distances.

  9. Design and Analysis of Compact DNA Strand Displacement Circuits for Analog Computation Using Autocatalytic Amplifiers.

    PubMed

    Song, Tianqi; Garg, Sudhanshu; Mokhtar, Reem; Bui, Hieu; Reif, John

    2018-01-19

    A main goal in DNA computing is to build DNA circuits to compute designated functions using a minimal number of DNA strands. Here, we propose a novel architecture to build compact DNA strand displacement circuits to compute a broad scope of functions in an analog fashion. A circuit by this architecture is composed of three autocatalytic amplifiers, and the amplifiers interact to perform computation. We show DNA circuits to compute functions sqrt(x), ln(x) and exp(x) for x in tunable ranges with simulation results. A key innovation in our architecture, inspired by Napier's use of logarithm transforms to compute square roots on a slide rule, is to make use of autocatalytic amplifiers to do logarithmic and exponential transforms in concentration and time. In particular, we convert from the input that is encoded by the initial concentration of the input DNA strand, to time, and then back again to the output encoded by the concentration of the output DNA strand at equilibrium. This combined use of strand-concentration and time encoding of computational values may have impact on other forms of molecular computation.
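
    The slide-rule analogy above rests on an elementary mathematical identity rather than on the DNA chemistry itself. A minimal numerical sketch of that identity (purely illustrative, with arbitrarily chosen inputs, and not a model of the strand displacement circuits) is:

      import math

      def sqrt_via_log(x: float) -> float:
          """Slide-rule style square root: halve the logarithm, then exponentiate."""
          return math.exp(0.5 * math.log(x))

      for x in (2.0, 9.0, 50.0):
          print(x, sqrt_via_log(x), math.sqrt(x))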

  10. Extraction of quark transversity distribution and Collins fragmentation functions with QCD evolution

    NASA Astrophysics Data System (ADS)

    Kang, Zhong-Bo; Prokudin, Alexei; Sun, Peng; Yuan, Feng

    2016-01-01

    We study the transverse-momentum-dependent (TMD) evolution of the Collins azimuthal asymmetries in e+e- annihilations and semi-inclusive hadron production in deep inelastic scattering processes. All the relevant coefficients are calculated up to the next-to-leading-logarithmic-order accuracy. By applying the TMD evolution at the approximate next-to-leading-logarithmic order in the Collins-Soper-Sterman formalism, we extract transversity distributions for u and d quarks and Collins fragmentation functions from current experimental data by a global analysis of the Collins asymmetries in back-to-back dihadron productions in e+e- annihilations measured by BELLE and BABAR collaborations and semi-inclusive hadron production in deep inelastic scattering data from HERMES, COMPASS, and JLab HALL A experiments. The impact of the evolution effects and the relevant theoretical uncertainties are discussed. We further discuss the TMD interpretation for our results and illustrate the unpolarized quark distribution, transversity distribution, unpolarized quark fragmentation, and Collins fragmentation functions depending on the transverse momentum and the hard momentum scale. We make detailed predictions for future experiments and discuss their impact.

  11. Real-Time Implementation of Nonlinear Processing Functions.

    DTIC Science & Technology

    1981-08-01

    crystal devices and then to use them in a coherent optical data-processing apparatus using halftone masks custom designed at the University of Southern...California. With the halftone mask technique, we have demonstrated logarithmic nonlinear transformation, allowing us to separate multiplicative images...improved. This device allowed nonlinear functions to be implemented directly without the need for specially made halftone masks. Besides

  12. Don't wear that button out!

    NASA Astrophysics Data System (ADS)

    Jue, Brian J.; Bice, Michael D.

    2013-07-01

    As students explore the technological tools available to them for learning mathematics, some will eventually discover what happens when a function button is repeatedly pressed on a calculator. We explore several examples of this, presenting tabular and graphical results for the square root, natural logarithm and sine and cosine functions. Observed behaviour is proven and then discussed in the context of fixed points.
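
    The repeated-button experiment is easy to reproduce outside a calculator; in the sketch below the starting values are arbitrary choices, not values taken from the article.

      import math

      def press_repeatedly(f, x0, presses=60):
          """Apply the same function button `presses` times, starting from x0."""
          x = x0
          for _ in range(presses):
              x = f(x)
          return x

      print(press_repeatedly(math.sqrt, 42.0))   # tends to the fixed point 1
      print(press_repeatedly(math.cos, 0.5))     # tends to the Dottie number, ~0.739085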

  13. The time dependence of rock healing as a universal relaxation process, a tutorial

    NASA Astrophysics Data System (ADS)

    Snieder, Roel; Sens-Schönfelder, Christoph; Wu, Renjie

    2017-01-01

    The material properties of earth materials often change after the material has been perturbed (slow dynamics). For example, the seismic velocity of subsurface materials changes after earthquakes, and granular materials compact after being shaken. Such relaxation processes are associated with observables that change logarithmically with time. Since the logarithm diverges for short and long times, the relaxation cannot, strictly speaking, have a log-time dependence. We present a self-contained description of a relaxation function that consists of a superposition of decaying exponentials that has log-time behaviour for intermediate times, but converges to zero for long times, and is finite for t = 0. The relaxation function depends on two parameters, the minimum and maximum relaxation time. These parameters can, in principle, be extracted from the observed relaxation. As an example, we present a crude model of a fracture that is closing under an external stress. Although the fracture model violates some of the assumptions on which the relaxation function is based, it follows the relaxation function well. We provide qualitative arguments that the relaxation process, just like the Gutenberg-Richter law, is applicable to a wide range of systems and has universal properties.
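
    A minimal numerical sketch of such a relaxation function is given below. The equal weighting of relaxation times per logarithmic interval and the particular values of the minimum and maximum relaxation times are illustrative assumptions, not parameters taken from the tutorial.

      import numpy as np

      def relaxation(t, tau_min=1e-3, tau_max=1e3, n_tau=2000):
          """Superposition of decaying exponentials whose relaxation times are
          spread uniformly in log(tau) between tau_min and tau_max."""
          tau = np.geomspace(tau_min, tau_max, n_tau)
          weights = np.full(n_tau, 1.0 / n_tau)      # equal weight per log-spaced time
          return np.exp(-np.outer(t, 1.0 / tau)) @ weights

      # Finite at t = 0, roughly linear in log(t) for tau_min << t << tau_max,
      # and decaying to zero once t exceeds tau_max.
      for ti in np.geomspace(1e-5, 1e5, 11):
          print(f"t = {ti:10.3g}   C(t) = {relaxation(np.array([ti]))[0]:.4f}")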

  14. The asymptotic form of non-global logarithms, black disc saturation, and gluonic deserts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neill, Duff

    Here, we develop an asymptotic perturbation theory for the large logarithmic behavior of the non-linear integro-differential equation describing the soft correlations of QCD jet measurements, the Banfi-Marchesini-Smye (BMS) equation. Furthermore, this equation captures the late-time evolution of radiating color dipoles after a hard collision. This allows us to prove that at large values of the control variable (the non-global logarithm, a function of the infra-red energy scales associated with distinct hard jets in an event), the distribution has a Gaussian tail. We also compute the decay width analytically, giving a closed-form expression, and find it to be jet-geometry independent, up to the number of legs of the dipole in the active jet. In setting up the asymptotic expansion we find that the perturbative seed is correct: we perturb around an ansatz encoding formally no real emissions, an intuition motivated by the buffer region found in jet dynamics. This must be supplemented with the correct application of the BFKL approximation to the BMS equation in collinear limits. Comparing to the asymptotics of the conformally related evolution equation encountered in small-x physics, the Balitsky-Kovchegov (BK) equation, we find that the asymptotic form of the non-global logarithms maps directly onto the black-disc unitarity limit of the BK equation, despite the contrasting physical pictures. Indeed, we recover the equations of saturation physics in the final state dynamics of QCD.

  15. The asymptotic form of non-global logarithms, black disc saturation, and gluonic deserts

    DOE PAGES

    Neill, Duff

    2017-01-25

    Here, we develop an asymptotic perturbation theory for the large logarithmic behavior of the non-linear integro-differential equation describing the soft correlations of QCD jet measurements, the Banfi-Marchesini-Smye (BMS) equation. Furthermore, this equation captures the late-time evolution of radiating color dipoles after a hard collision. This allows us to prove that at large values of the control variable (the non-global logarithm, a function of the infra-red energy scales associated with distinct hard jets in an event), the distribution has a Gaussian tail. We also compute the decay width analytically, giving a closed-form expression, and find it to be jet-geometry independent, up to the number of legs of the dipole in the active jet. In setting up the asymptotic expansion we find that the perturbative seed is correct: we perturb around an ansatz encoding formally no real emissions, an intuition motivated by the buffer region found in jet dynamics. This must be supplemented with the correct application of the BFKL approximation to the BMS equation in collinear limits. Comparing to the asymptotics of the conformally related evolution equation encountered in small-x physics, the Balitsky-Kovchegov (BK) equation, we find that the asymptotic form of the non-global logarithms maps directly onto the black-disc unitarity limit of the BK equation, despite the contrasting physical pictures. Indeed, we recover the equations of saturation physics in the final state dynamics of QCD.

  16. Global stability and quadratic Hamiltonian structure in Lotka-Volterra and quasi-polynomial systems

    NASA Astrophysics Data System (ADS)

    Szederkényi, Gábor; Hangos, Katalin M.

    2004-04-01

    We show that the global stability of quasi-polynomial (QP) and Lotka-Volterra (LV) systems with the well-known logarithmic Lyapunov function is equivalent to the existence of a local generalized dissipative Hamiltonian description of the LV system with a diagonal quadratic form as a Hamiltonian function. The Hamiltonian function can be calculated and the quadratic dissipativity neighborhood of the origin can be estimated by solving linear matrix inequalities.

  17. Stochastic differential equation (SDE) model of opening gold share price of bursa saham malaysia

    NASA Astrophysics Data System (ADS)

    Hussin, F. N.; Rahman, H. A.; Bahar, A.

    2017-09-01

    The Black-Scholes option pricing model is one of the most widely recognized stochastic differential equation models in mathematical finance. Two parameter estimation methods have been utilized for the geometric Brownian motion (GBM) model: the historical method and the discrete method. The historical method is a statistical method which uses the independence and normality of logarithmic returns, giving the simplest parameter estimates. The discrete method, by contrast, uses the transition density function of the lognormal diffusion process, with estimates derived by the maximum likelihood method. These two methods are used to obtain parameter estimates from Malaysian gold share price data, namely the Financial Times and Stock Exchange (FTSE) Bursa Malaysia Emas and the FTSE Bursa Malaysia Emas Shariah indices. Modelling of the gold share price is essential since fluctuations in gold affect the worldwide economy, including Malaysia. It is found that the discrete method gives better parameter estimates than the historical method, as indicated by the smaller Root Mean Square Error (RMSE) value.
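
    A minimal sketch of the historical method mentioned above is shown here. The price series is synthetic and the unit time step between observations is an assumption made purely for illustration.

      import numpy as np

      def gbm_historical_estimates(prices, dt=1.0):
          """Historical method: estimate GBM drift and volatility from the sample
          mean and standard deviation of logarithmic returns."""
          log_returns = np.diff(np.log(prices))
          sigma = log_returns.std(ddof=1) / np.sqrt(dt)
          mu = log_returns.mean() / dt + 0.5 * sigma**2   # drift of dS/S = mu dt + sigma dW
          return mu, sigma

      prices = np.array([100.0, 101.2, 100.7, 102.3, 103.0, 102.1, 104.0])
      mu_hat, sigma_hat = gbm_historical_estimates(prices)
      print(f"estimated drift = {mu_hat:.4f}, volatility = {sigma_hat:.4f}")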

  18. Spectral analysis of near-wall turbulence in channel flow at Reτ=4200 with emphasis on the attached-eddy hypothesis

    NASA Astrophysics Data System (ADS)

    Agostini, Lionel; Leschziner, Michael

    2017-01-01

    Direct numerical simulation data for channel flow at a friction Reynolds number of 4200, generated by Lozano-Durán and Jiménez [J. Fluid Mech. 759, 432 (2014), 10.1017/jfm.2014.575], are used to examine the properties of near-wall turbulence within subranges of eddy-length scale. Attention is primarily focused on the intermediate layer (mesolayer) covering the logarithmic velocity region within the range of wall-scaled wall-normal distance of 80-1500. The examination is based on a number of statistical properties, including premultiplied and compensated spectra, the premultiplied derivative of the second-order structure function, and three scalar parameters that characterize the anisotropic or isotropic state of the various length-scale subranges. This analysis leads to the delineation of three regions within the map of wall-normal-wise premultiplied spectra, each characterized by distinct turbulence properties. A question of particular interest is whether the Townsend-Perry attached-eddy hypothesis (AEH) can be shown to be valid across the entire mesolayer, in contrast to the usual focus on the outer portion of the logarithmic-velocity layer at high Reynolds numbers, which is populated with very-large-scale motions. This question is addressed by reference to properties of the premultiplied scalewise derivative of the second-order structure function (PMDS2) and joint probability density functions of streamwise-velocity fluctuations and their streamwise and spanwise derivatives. This examination provides evidence, based primarily on the existence of a plateau region in the PMDS2, for the qualified validity of the AEH right down to the lower limit of the logarithmic velocity range.

  19. Efficient dynamic optimization of logic programs

    NASA Technical Reports Server (NTRS)

    Laird, Phil

    1992-01-01

    A summary is given of the dynamic optimization approach to speed up learning for logic programs. The problem is to restructure a recursive program into an equivalent program whose expected performance is optimal for an unknown but fixed population of problem instances. We define the term 'optimal' relative to the source of input instances and sketch an algorithm that can come within a logarithmic factor of optimal with high probability. Finally, we show that finding high-utility unfolding operations (such as EBG) can be reduced to clause reordering.

  20. A Comparative Study of the Dispersion of Multi-Wall Carbon Nanotubes Made by Arc-Discharge and Chemical Vapour Deposition.

    PubMed

    Frømyr, Tomas-Roll; Bourgeaux-Goget, Marie; Hansen, Finn Knut

    2015-05-01

    A method has been developed to characterize the dispersion of multi-wall carbon nanotubes in water using a disc centrifuge for the detection of individual carbon nanotubes, residual aggregates, and contaminants. Carbon nanotubes produced by arc-discharge have been measured and compared with carbon nanotubes produced by chemical vapour deposition. Studies performed on both types of material (see text) indicate that the bundling of the arc-discharge nanotubes is rather strong and that high ultrasound intensity is required to achieve complete dispersion of carbon nanotube bundles. The logarithm of the mode of the particle size distribution of the arc-discharge carbon nanotubes was found to be a linear function of the logarithm of the total ultrasonic energy input in the dispersion process.

  1. Logarithmic entanglement lightcone in many-body localized systems

    NASA Astrophysics Data System (ADS)

    Deng, Dong-Ling; Li, Xiaopeng; Pixley, J. H.; Wu, Yang-Le; Das Sarma, S.

    2017-01-01

    We theoretically study the response of a many-body localized system to a local quench from a quantum information perspective. We find that the local quench triggers entanglement growth throughout the whole system, giving rise to a logarithmic lightcone. This saturates the modified Lieb-Robinson bound for quantum information propagation in many-body localized systems previously conjectured based on the existence of local integrals of motion. In addition, near the localization-delocalization transition, we find that the final states after the local quench exhibit volume-law entanglement. We also show that the local quench induces a deterministic orthogonality catastrophe for highly excited eigenstates, where the typical wave-function overlap between the pre- and postquench eigenstates decays exponentially with the system size.

  2. Function algorithms for MPP scientific subroutines, volume 1

    NASA Technical Reports Server (NTRS)

    Gouch, J. G.

    1984-01-01

    Design documentation and user documentation for function algorithms for the Massively Parallel Processor (MPP) are presented. The contract specifies development of MPP assembler instructions to perform the following functions: natural logarithm; exponential (e to the x power); square root; sine; cosine; and arctangent. To fulfill the requirements of the contract, parallel array and scalar implementations of these functions were developed on the PDP-11/34 Program Development and Management Unit (PDMU) that is resident at the MPP testbed installation located at the NASA Goddard facility.

  3. Binomial test statistics using Psi functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowman, Kimiko o

    2007-01-01

    For the negative binomial model (probability generating function (p + 1 - pt)^{-k}) a logarithmic derivative is the Psi function difference ψ(k + x) - ψ(k); this and its derivatives lead to a test statistic to decide on the validity of a specified model. The test statistic uses a data base so there exists a comparison available between theory and application. Note that the test function is not dominated by outliers. Applications to (i) Fisher's tick data, (ii) accidents data, (iii) Weldon's dice data are included.
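
    The Psi-function difference mentioned above is available directly from standard special-function libraries; the values of k and x below are arbitrary, and the snippet evaluates only this building block, not the full test statistic.

      from scipy.special import digamma   # digamma is the Psi function

      def psi_difference(k: float, x: int) -> float:
          """psi(k + x) - psi(k), the logarithmic-derivative term for the
          negative binomial model discussed in the abstract."""
          return digamma(k + x) - digamma(k)

      # For integer x this equals the finite sum 1/k + 1/(k+1) + ... + 1/(k+x-1).
      print(psi_difference(2.5, 3), sum(1.0 / (2.5 + j) for j in range(3)))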

  4. Graphene Dendrimer-stabilized silver nanoparticles for detection of methimazole using Surface-enhanced Raman scattering with computational assignment

    NASA Astrophysics Data System (ADS)

    Saleh, Tawfik A.; Al-Shalalfeh, Mutasem M.; Al-Saadi, Abdulaziz A.

    2016-08-01

    Graphene functionalized with polyamidoamine dendrimer, decorated with silver nanoparticles (G-D-Ag), was synthesized and evaluated as a substrate with surface-enhanced Raman scattering (SERS) for methimazole (MTZ) detection. Sodium borohydride was used as a reducing agent to cultivate silver nanoparticles on the dendrimer. The obtained G-D-Ag was characterized by using UV-vis spectroscopy, scanning electron microscopy (SEM), high-resolution transmission electron microscopy (TEM), Fourier-transform infrared (FT-IR) spectroscopy and Raman spectroscopy. The SEM image indicated the successful formation of the G-D-Ag. The behavior of MTZ on the G-D-Ag as a reliable and robust substrate was investigated by SERS, which indicated mostly a chemical interaction between G-D-Ag and MTZ. The bands of the MTZ normal spectra at 1538, 1463, 1342, 1278, 1156, 1092, 1016, 600, 525 and 410 cm-1 were enhanced due to the SERS effect. Correlations between the logarithmic scale of MTZ concentrations and SERS signal intensities were established, and a low detection limit of 1.43 × 10-12 M was successfully obtained. The density functional theory (DFT) approach was utilized to provide reliable assignment of the key Raman bands.
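
    The reported correlation between the logarithmic concentration scale and the SERS intensity amounts to an ordinary log-linear calibration. The sketch below uses synthetic placeholder numbers, not the measured SERS data.

      import numpy as np

      # Synthetic calibration points: SERS intensity versus log10 of concentration (M).
      conc = np.array([1e-11, 1e-10, 1e-9, 1e-8, 1e-7])
      intensity = np.array([120.0, 210.0, 305.0, 395.0, 490.0])

      slope, intercept = np.polyfit(np.log10(conc), intensity, deg=1)
      print(f"I = {slope:.1f} * log10(c) + {intercept:.1f}")

      # Invert the fit to estimate the concentration behind a measured intensity.
      measured = 250.0
      print(f"estimated concentration ~ {10 ** ((measured - intercept) / slope):.2e} M")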

  5. Graphene Dendrimer-stabilized silver nanoparticles for detection of methimazole using Surface-enhanced Raman scattering with computational assignment

    PubMed Central

    Saleh, Tawfik A.; Al-Shalalfeh, Mutasem M.; Al-Saadi, Abdulaziz A.

    2016-01-01

    Graphene functionalized with polyamidoamine dendrimer, decorated with silver nanoparticles (G-D-Ag), was synthesized and evaluated as a substrate with surface-enhanced Raman scattering (SERS) for methimazole (MTZ) detection. Sodium borohydride was used as a reducing agent to cultivate silver nanoparticles on the dendrimer. The obtained G-D-Ag was characterized by using UV-vis spectroscopy, scanning electron microscopy (SEM), high-resolution transmission electron microscopy (TEM), Fourier-transform infrared (FT-IR) spectroscopy and Raman spectroscopy. The SEM image indicated the successful formation of the G-D-Ag. The behavior of MTZ on the G-D-Ag as a reliable and robust substrate was investigated by SERS, which indicated mostly a chemical interaction between G-D-Ag and MTZ. The bands of the MTZ normal spectra at 1538, 1463, 1342, 1278, 1156, 1092, 1016, 600, 525 and 410 cm−1 were enhanced due to the SERS effect. Correlations between the logarithmic scale of MTZ concentrations and SERS signal intensities were established, and a low detection limit of 1.43 × 10−12 M was successfully obtained. The density functional theory (DFT) approach was utilized to provide reliable assignment of the key Raman bands. PMID:27572919

  6. A Sampling-Based Bayesian Approach for Cooperative Multiagent Online Search With Resource Constraints.

    PubMed

    Xiao, Hu; Cui, Rongxin; Xu, Demin

    2018-06-01

    This paper presents a cooperative multiagent search algorithm to solve the problem of searching for a target on a 2-D plane under multiple constraints. A Bayesian framework is used to update the local probability density functions (PDFs) of the target when the agents obtain observation information. To obtain the global PDF used for decision making, a sampling-based logarithmic opinion pool algorithm is proposed to fuse the local PDFs, and a particle sampling approach is used to represent the continuous PDF. Then the Gaussian mixture model (GMM) is applied to reconstitute the global PDF from the particles, and a weighted expectation maximization algorithm is presented to estimate the parameters of the GMM. Furthermore, we propose an optimization objective which aims to guide agents to find the target with lower resource consumption while simultaneously keeping the resource consumption of each agent balanced. To this end, a utility-function-based optimization problem is put forward, and it is solved by a gradient-based approach. Several comparative simulations demonstrate that, compared with other existing approaches, the proposed one uses fewer overall resources and shows better performance in balancing the resource consumption.
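
    A minimal sketch of a logarithmic opinion pool over discretised local PDFs is given below; the grid, the two local densities, and the equal weights are illustrative assumptions, and the snippet does not reproduce the paper's sampling-based algorithm or the GMM reconstruction step.

      import numpy as np

      def log_opinion_pool(pdfs, weights):
          """Fuse local PDFs into p(x) proportional to prod_i p_i(x)**w_i."""
          log_fused = sum(w * np.log(p + 1e-300) for w, p in zip(weights, pdfs))
          fused = np.exp(log_fused - log_fused.max())    # subtract max to avoid underflow
          return fused / fused.sum()

      x = np.linspace(-5.0, 5.0, 201)
      p1 = np.exp(-0.5 * (x - 1.0) ** 2); p1 /= p1.sum()   # local PDF held by agent 1
      p2 = np.exp(-0.5 * (x + 0.5) ** 2); p2 /= p2.sum()   # local PDF held by agent 2

      fused = log_opinion_pool([p1, p2], weights=[0.5, 0.5])
      print("fused mean ~", float(np.sum(x * fused)))      # ~0.25 for this toy example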

  7. Re-suspension Process In Turbulent Particle-fluid Mixture Boundary Layers

    NASA Astrophysics Data System (ADS)

    Zwinger, T.; Kluwick, A.

    Many theoretical applications of geophysical flows, such as sediment transport (e.g. Jenkins & Hanes, 1998) and aeolian transport of particles (e.g. Hopwood et al., 1995), utilize concepts for describing the near-wall velocity profiles of particle suspensions originally arising from classical single-phase theories. This approach is supported by experiments indicating the existence of a logarithmic fluid velocity profile, similar to single-phase flows, also in the case of high Reynolds number wall-bounded particle suspension flows with low particle volume fractions (Nishimura & Hunt, 2000). Since the concept of a logarithmic near-wall profile follows from the classical asymptotic theory of high Reynolds number wall-bounded flows, the question arises to what extent this theory can be modified to account for particles being suspended in the ambient fluid. To this end, the asymptotic theory developed by Mellor (1972) is applied to the Favré-averaged equations for the carrier fluid as well as the dispersed phase, derived on the basis of a volume-averaged dispersed two-phase theory (Gray & Lee, 1977). Numerical solutions for profiles of main stream velocities and particle volume fraction in the fully turbulent region of the boundary layer for different turbulent Schmidt numbers are computed applying a finite difference box scheme. In particular, attention is focused on the turbulent re-suspension process of particles from the dense granular flow adjacent to the bounding surface into the suspension. From these results, boundary conditions in the form of wall functions for the velocities as well as the volume fraction of the particles can be derived, and the validity of analogy laws between turbulent mass and momentum transfer at the bounding surface can be proved from an asymptotic point of view. The application of these concepts in the field of snow avalanche simulation (Zwinger, 2000) is discussed.

  8. QLog Solar-Cell Mode Photodiode Logarithmic CMOS Pixel Using Charge Compression and Readout †

    PubMed Central

    Ni, Yang

    2018-01-01

    In this paper, we present a new logarithmic pixel design currently under development at New Imaging Technologies SA (NIT). This new logarithmic pixel design uses charge domain logarithmic signal compression and charge-transfer-based signal readout. This structure gives a linear response in low light conditions and logarithmic response in high light conditions. The charge transfer readout efficiently suppresses the reset (KTC) noise by using true correlated double sampling (CDS) in low light conditions. In high light conditions, thanks to charge domain logarithmic compression, it has been demonstrated that 3000 electrons should be enough to cover a 120 dB dynamic range with a mobile phone camera-like signal-to-noise ratio (SNR) over the whole dynamic range. This low electron count permits the use of ultra-small floating diffusion capacitance (sub-fF) without charge overflow. The resulting large conversion gain permits a single photon detection capability with a wide dynamic range without a complex sensor/system design. A first prototype sensor with 320 × 240 pixels has been implemented to validate this charge domain logarithmic pixel concept and modeling. The first experimental results validate the logarithmic charge compression theory and the low readout noise due to the charge-transfer-based readout. PMID:29443903

  9. QLog Solar-Cell Mode Photodiode Logarithmic CMOS Pixel Using Charge Compression and Readout.

    PubMed

    Ni, Yang

    2018-02-14

    In this paper, we present a new logarithmic pixel design currently under development at New Imaging Technologies SA (NIT). This new logarithmic pixel design uses charge domain logarithmic signal compression and charge-transfer-based signal readout. This structure gives a linear response in low light conditions and logarithmic response in high light conditions. The charge transfer readout efficiently suppresses the reset (KTC) noise by using true correlated double sampling (CDS) in low light conditions. In high light conditions, thanks to charge domain logarithmic compression, it has been demonstrated that 3000 electrons should be enough to cover a 120 dB dynamic range with a mobile phone camera-like signal-to-noise ratio (SNR) over the whole dynamic range. This low electron count permits the use of ultra-small floating diffusion capacitance (sub-fF) without charge overflow. The resulting large conversion gain permits a single photon detection capability with a wide dynamic range without a complex sensor/system design. A first prototype sensor with 320 × 240 pixels has been implemented to validate this charge domain logarithmic pixel concept and modeling. The first experimental results validate the logarithmic charge compression theory and the low readout noise due to the charge-transfer-based readout.

  10. Investigation of logarithmic spiral nanoantennas at optical frequencies

    NASA Astrophysics Data System (ADS)

    Verma, Anamika; Pandey, Awanish; Mishra, Vigyanshu; Singh, Ten; Alam, Aftab; Dinesh Kumar, V.

    2013-12-01

    The first study is reported of a logarithmic spiral antenna in the optical frequency range. Using the finite integration technique, we investigated the spectral and radiation properties of a logarithmic spiral nanoantenna and a complementary structure made of thin gold film. A comparison is made with results for an Archimedean spiral nanoantenna. Such nanoantennas can exhibit broadband behavior that is independent of polarization. Two prominent features of logarithmic spiral nanoantennas are highly directional far field emission and perfectly circularly polarized radiation when excited by a linearly polarized source. The logarithmic spiral nanoantenna promises potential advantages over Archimedean spirals and could be harnessed for several applications in nanophotonics and allied areas.
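
    For orientation, the curve itself has the textbook parameterisation given below (a standard definition, not a design detail taken from the paper):

      r(\theta) = r_0 \, e^{a\theta}

    where r_0 sets the innermost radius and the constant a fixes the pitch angle; the shape's invariance under rescaling is the usual reason logarithmic-spiral geometries lend themselves to broadband behaviour.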

  11. Next-to-leading-logarithmic power corrections for N -jettiness subtraction in color-singlet production

    NASA Astrophysics Data System (ADS)

    Boughezal, Radja; Isgrò, Andrea; Petriello, Frank

    2018-04-01

    We present a detailed derivation of the power corrections to the factorization theorem for the 0-jettiness event shape variable T . Our calculation is performed directly in QCD without using the formalism of effective field theory. We analytically calculate the next-to-leading logarithmic power corrections for small T at next-to-leading order in the strong coupling constant, extending previous computations which obtained only the leading-logarithmic power corrections. We address a discrepancy in the literature between results for the leading-logarithmic power corrections to a particular definition of 0-jettiness. We present a numerical study of the power corrections in the context of their application to the N -jettiness subtraction method for higher-order calculations, using gluon-fusion Higgs production as an example. The inclusion of the next-to-leading-logarithmic power corrections further improves the numerical efficiency of the approach beyond the improvement obtained from the leading-logarithmic power corrections.

  12. Generalizing a Limit Description of the Natural Logarithm

    ERIC Educational Resources Information Center

    Dobbs, David E.

    2010-01-01

    If f is a continuous positive-valued function defined on the closed interval from a to x and if k_0 > 0, then lim_{k→0^+} ∫_a^x f(t)^{k-k_0}…

  13. Microarray and functional analysis of growth-phase dependent gene regulation in Bordetella bronchiseptica

    USDA-ARS?s Scientific Manuscript database

    Growth-phase dependent gene regulation has recently been demonstrated to occur in B. pertussis, with many transcripts, including known virulence factors, significantly decreasing during the transition from logarithmic to stationary-phase growth. Given that B. pertussis is thought to have derived fro...

  14. Boundary layer and fundamental problems of hydrodynamics (compatibility of a logarithmic velocity profile in a turbulent boundary layer with the experience values)

    NASA Astrophysics Data System (ADS)

    Zaryankin, A. E.

    2017-11-01

    The compatibility of L. Prandtl's semiempirical turbulence theory with the actual flow pattern in a turbulent boundary layer is considered in this article, and the final results of boundary-layer calculations based on that theory are analyzed. It is shown that the additional conditions and relationships adopted in order to integrate Prandtl's differential equation, which relates the turbulent stresses in the boundary layer to the transverse velocity gradient, are fulfilled only in the near-wall region, where that equation loses its meaning, and are physically inconsistent over the main part of the integration domain. It is noted that the concept of a laminar sublayer between the wall and the turbulent boundary layer was introduced to give the logarithmic velocity profile a physical meaning, and can be regarded as an adjustment of the actual flow to a formula that is inconsistent with the actual boundary conditions. It is shown that the agreement of the experimental data with the logarithmic profile is obtained by using as the argument not a particular physical quantity but a function of that quantity.

  15. Using polynomials to simplify fixed pattern noise and photometric correction of logarithmic CMOS image sensors.

    PubMed

    Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan

    2015-10-16

    An important class of complementary metal-oxide-semiconductor (CMOS) image sensors are those where pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging, at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected due to mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient.
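
    The following is a schematic of the low-degree-polynomial idea only, not the paper's exact calibration procedure: each pixel's monotonic response is mapped onto a reference response with a polynomial fitted per pixel, after which correction is pure arithmetic. The calibration stimuli, the degree-1 fit, and the idealised logarithmic reference used here are assumptions.

      import numpy as np

      def calibrate_fpn(pixel_responses, reference_response, degree=1):
          """Fit, per pixel, a low-degree polynomial that maps the pixel's response
          onto the reference response over the calibration stimuli.
          pixel_responses: (num_stimuli, num_pixels); reference_response: (num_stimuli,)."""
          num_pixels = pixel_responses.shape[1]
          coeffs = np.empty((degree + 1, num_pixels))
          for j in range(num_pixels):
              coeffs[:, j] = np.polyfit(pixel_responses[:, j], reference_response, degree)
          return coeffs

      def correct_frame(frame, coeffs):
          """Apply the per-pixel polynomials to a raw frame (arithmetic only)."""
          corrected = np.zeros_like(frame, dtype=float)
          for c in coeffs:                  # Horner's scheme, highest degree first
              corrected = corrected * frame + c
          return corrected

      # Toy calibration: five uniform stimuli seen by three pixels with gain/offset mismatch.
      stimuli = np.array([10.0, 20.0, 40.0, 80.0, 160.0])
      reference = np.log(stimuli)                           # idealised logarithmic response
      pixels = np.stack([0.9 * reference + 0.10,
                         1.1 * reference - 0.20,
                         1.0 * reference + 0.05], axis=1)
      coeffs = calibrate_fpn(pixels, reference)
      print(correct_frame(pixels[2], coeffs))               # ~log(40) for every pixel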

  16. Phase pupil functions for focal-depth enhancement derived from a Wigner distribution function.

    PubMed

    Zalvidea, D; Sicre, E E

    1998-06-10

    A method for obtaining phase-retardation functions, which give rise to an increase of the image focal depth, is proposed. To this end, the Wigner distribution function corresponding to a specific aperture that has an associated small depth of focus in image space is conveniently sheared in the phase-space domain to generate a new Wigner distribution function. From this new function a more uniform on-axis image irradiance can be accomplished. This approach is illustrated by comparison of the imaging performance of both the derived phase function and a previously reported logarithmic phase distribution.

  17. Reduction of bromate to bromide coupled to acetate oxidation by anaerobic mixed microbial cultures.

    PubMed

    van Ginkel, C G; van Haperen, A M; van der Togt, B

    2005-01-01

    Bromate, a weakly mutagenic oxidizing agent, exists in surface waters. The biodegradation of bromate was investigated by assessing the ability of mixed cultures of micro-organisms to utilize bromate as electron acceptor and acetate as electron donor. Reduction of bromate was only observed at relatively low concentrations (<3.0 mM) in the absence of molecular oxygen. Under these conditions bromate was reduced stoichiometrically to bromide. Unadapted sludge from an activated sludge treatment plant and a digester reduced bromate without a lag period at a constant rate. Using an enrichment culture adapted to bromate, it was demonstrated that bromate was a terminal electron acceptor for anaerobic growth. Approximately 50% of the acetate was utilized for growth with bromate by the enrichment culture. A doubling time of 20 h was estimated from a logarithmic growth curve. Other electron acceptors, such as perchlorate, chlorate and nitrate, were not reduced, or were reduced only at negligible rates, by the bromate-utilizing microorganisms.

  18. FAST TRACK COMMUNICATION: Freezing and extreme-value statistics in a random energy model with logarithmically correlated potential

    NASA Astrophysics Data System (ADS)

    Fyodorov, Yan V.; Bouchaud, Jean-Philippe

    2008-09-01

    We investigate some implications of the freezing scenario proposed by Carpentier and Le Doussal (CLD) for a random energy model (REM) with logarithmically correlated random potential. We introduce a particular (circular) variant of the model, and show that the integer moments of the partition function in the high-temperature phase are given by the well-known Dyson Coulomb gas integrals. The CLD freezing scenario allows one to use those moments for extracting the distribution of the free energy in both high- and low-temperature phases. In particular, it yields the full distribution of the minimal value in the potential sequence. This provides an explicit new class of extreme-value statistics for strongly correlated variables, manifestly different from the standard Gumbel class.

  19. Helicity evolution at small x : Flavor singlet and nonsinglet observables

    DOE PAGES

    Kovchegov, Yuri V.; Pitonyak, Daniel; Sievert, Matthew D.

    2017-01-30

    We extend our earlier results for the quark helicity evolution at small x to derive the small-x asymptotics of the flavor singlet and flavor nonsinglet quark helicity TMDs and PDFs and of the g_1 structure function. In the flavor singlet case we rederive the evolution equations obtained in our previous paper on the subject, performing additional cross-checks of our results. In the flavor nonsinglet case we construct new small-x evolution equations by employing the large-N_c limit. Here, all evolution equations resum double-logarithmic powers of α_s ln²(1/x) in the polarization-dependent evolution along with the single-logarithmic powers of α_s ln(1/x) in the unpolarized evolution, which includes saturation effects.

  20. Helicity evolution at small x : Flavor singlet and nonsinglet observables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kovchegov, Yuri V.; Pitonyak, Daniel; Sievert, Matthew D.

    We extend our earlier results for the quark helicity evolution at small x to derive the small-x asymptotics of the flavor singlet and flavor nonsinglet quark helicity TMDs and PDFs and of the g_1 structure function. In the flavor singlet case we rederive the evolution equations obtained in our previous paper on the subject, performing additional cross-checks of our results. In the flavor nonsinglet case we construct new small-x evolution equations by employing the large-N_c limit. Here, all evolution equations resum double-logarithmic powers of α_s ln²(1/x) in the polarization-dependent evolution along with the single-logarithmic powers of α_s ln(1/x) in the unpolarized evolution, which includes saturation effects.

  1. Scaling of Rényi entanglement entropies of the free fermi-gas ground state: a rigorous proof.

    PubMed

    Leschke, Hajo; Sobolev, Alexander V; Spitzer, Wolfgang

    2014-04-25

    In a remarkable paper [Phys. Rev. Lett. 96, 100503 (2006)], Gioev and Klich conjectured an explicit formula for the leading asymptotic growth of the spatially bipartite von Neumann entanglement entropy of noninteracting fermions in multidimensional Euclidean space at zero temperature. Based on recent progress by one of us (A. V. S.) in semiclassical functional calculus for pseudodifferential operators with discontinuous symbols, we provide here a complete proof of that formula and of its generalization to Rényi entropies of all orders α>0. The special case α=1/2 is also known under the name logarithmic negativity and often considered to be a particularly useful quantification of entanglement. These formulas exhibiting a "logarithmically enhanced area law" have been used already in many publications.

  2. Gauge boson exchange in AdS d+1

    NASA Astrophysics Data System (ADS)

    D'Hoker, Eric; Freedman, Daniel Z.

    1999-04-01

    We study the amplitude for exchange of massless gauge bosons between pairs of massive scalar fields in anti-de Sitter space. In the AdS/CFT correspondence this amplitude describes the contribution of conserved flavor symmetry currents to 4-point functions of scalar operators in the boundary conformal theory. A concise, covariant, Y2K compatible derivation of the gauge boson propagator in AdS d + 1 is given. Techniques are developed to calculate the two bulk integrals over AdS space leading to explicit expressions or convenient, simple integral representations for the amplitude. The amplitude contains leading power and sub-leading logarithmic singularities in the gauge boson channel and leading logarithms in the crossed channel. The new methods of this paper are expected to have other applications in the study of the Maldacena conjecture.

  3. Reform in Mathematics Education: "What Do We Teach for and Against?"

    ERIC Educational Resources Information Center

    Petric, Marius

    2011-01-01

    This study examines the implementation of a problem-based math curriculum that uses problem situations related to global warming and pollution to involve students in modeling polynomial, exponential, and logarithmic functions. Each instructional module includes activities that engage students in investigating current social justice and…

  4. Demonstrating the Light-Emitting Diode.

    ERIC Educational Resources Information Center

    Johnson, David A.

    1995-01-01

    Describes a simple inexpensive circuit which can be used to quickly demonstrate the basic function and versatility of the solid state diode. Can be used to demonstrate the light-emitting diode (LED) as a light emitter, temperature sensor, light detector with both a linear and logarithmic response, and charge storage device. (JRH)

  5. Brownian motion in time-dependent logarithmic potential: Exact results for dynamics and first-passage properties.

    PubMed

    Ryabov, Artem; Berestneva, Ekaterina; Holubec, Viktor

    2015-09-21

    The paper addresses Brownian motion in the logarithmic potential with time-dependent strength, U(x, t) = g(t)log(x), subject to the absorbing boundary at the origin of coordinates. Such a model can represent the kinetics of diffusion-controlled reactions of charged molecules or the escape of Brownian particles over a time-dependent entropic barrier at the end of a biological pore. We present a simple asymptotic theory which yields the long-time behavior of both the survival probability (first-passage properties) and the moments of the particle position (dynamics). The asymptotic survival probability, i.e., the probability that the particle will not hit the origin before a given time, is a functional of the potential strength. As such, it exhibits a rather varied behavior for different functions g(t). The latter can be grouped into three classes according to the regime of the asymptotic decay of the survival probability. We distinguish 1. the regular regime (power-law decay), 2. the marginal regime (power law times a slow function of time), and 3. the regime of enhanced absorption (decay faster than the power law, e.g., exponential). Results of the asymptotic theory show good agreement with numerical simulations.
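
    A crude Monte Carlo sketch of the setup (overdamped Langevin dynamics in U(x) = g log(x) with an absorbing origin) is given below. The constant potential strength, time step, horizon, and particle count are illustrative assumptions, and the snippet does not implement the paper's asymptotic theory.

      import numpy as np

      rng = np.random.default_rng(0)

      def survival_probability(g=1.0, x0=1.0, t_max=10.0, dt=1e-3, n_particles=5000):
          """Fraction of walkers obeying dx = -(g/x) dt + sqrt(2) dW that have not
          yet hit the origin by time t_max (absorbing boundary at x <= 0)."""
          x = np.full(n_particles, x0)
          alive = np.ones(n_particles, dtype=bool)
          for _ in range(int(t_max / dt)):
              noise = rng.standard_normal(n_particles)
              x[alive] += -g / x[alive] * dt + np.sqrt(2.0 * dt) * noise[alive]
              alive &= x > 0.0
          return alive.mean()

      print(survival_probability())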

  6. Transverse parton distribution functions at next-to-next-to-leading order: the quark-to-quark case.

    PubMed

    Gehrmann, Thomas; Lübbert, Thomas; Yang, Li Lin

    2012-12-14

    We present a calculation of the perturbative quark-to-quark transverse parton distribution function at next-to-next-to-leading order based on a gauge invariant operator definition. We demonstrate for the first time that such a definition works beyond the first nontrivial order. We extract from our calculation the coefficient functions relevant for a next-to-next-to-next-to-leading logarithmic Q(T) resummation in a large class of processes at hadron colliders.

  7. Stress Energy Tensor in LCFT and LOGARITHMIC Sugawara Construction

    NASA Astrophysics Data System (ADS)

    Kogan, Ian I.; Nichols, Alexander

    We discuss the partners of the stress energy tensor and their structure in Logarithmic conformal field theories. In particular we draw attention to the fundamental differences between theories with zero and non-zero central charge. However they are both characterised by at least two independent parameters. We show how, by using a generalised Sugawara construction, one can calculate the logarithmic partner of T. We show that such a construction works in the c=-2 theory using the conformal dimension one primary currents which generate a logarithmic extension of the Kac-Moody algebra. This is an expanded version of a talk presented by A. Nichols at the conference on Logarithmic Conformal Field Theory and its Applications in Tehran Iran, 2001.

  8. Kinetics of the B1-B2 phase transition in KCl under rapid compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Chuanlong; Smith, Jesse S.; Sinogeikin, Stanislav V.

    2016-01-28

    Kinetics of the B1-B2 phase transition in KCl has been investigated under various compression rates (0.03–13.5 GPa/s) in a dynamic diamond anvil cell using time-resolved x-ray diffraction and fast imaging. Our experimental data show that the volume fraction across the transition generally gives sigmoidal curves as a function of pressure during rapid compression. Based upon classical nucleation and growth theories (Johnson-Mehl-Avrami-Kolmogorov theories), we propose a model that is applicable for studying kinetics for the compression rates studied. The fit of the experimental volume fraction as a function of pressure provides information on the effective activation energy and average activation volume at a given compression rate. The resulting parameters are successfully used for interpreting several experimental observables that are compression-rate dependent, such as the transition time, grain size, and over-pressurization. The effective activation energy (Q_eff) is found to decrease linearly with the logarithm of the compression rate. When Q_eff is applied to the Arrhenius equation, this relationship can be used to interpret the experimentally observed linear relationship between the logarithm of the transition time and the logarithm of the compression rate. The decrease of Q_eff with increasing compression rate results in a decrease of the nucleation rate, which is qualitatively in agreement with the observed change of the grain size with compression rate. The observed over-pressurization is also well explained by the model when an exponential relationship between the average activation volume and the compression rate is assumed.

  9. Value function in economic growth model

    NASA Astrophysics Data System (ADS)

    Bagno, Alexander; Tarasyev, Alexandr A.; Tarasyev, Alexander M.

    2017-11-01

    Properties of the value function are examined in an infinite-horizon optimal control problem with an unbounded integrand appearing in the quality functional with a discount factor. Optimal control problems of this type describe solutions in models of economic growth. Necessary and sufficient conditions are derived to ensure that the value function satisfies the infinitesimal stability properties. It is proved that the value function coincides with the minimax solution of the Hamilton-Jacobi equation. A description of the asymptotic growth behavior of the value function is provided for the logarithmic, power and exponential quality functionals, and an example is given to illustrate construction of the value function in economic growth models.

  10. Computing Logarithms by Hand

    ERIC Educational Resources Information Center

    Reed, Cameron

    2016-01-01

    How can old-fashioned tables of logarithms be computed without technology? Today, of course, no practicing mathematician, scientist, or engineer would actually use logarithms to carry out a calculation, let alone worry about deriving them from scratch. But high school students may be curious about the process. This article develops a…
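
    One classic by-hand scheme, given here as an illustration and not necessarily the one developed in the (truncated) article, builds the binary expansion of a base-10 logarithm by repeated squaring, using the fact that squaring a number doubles its logarithm:

      def log10_by_squaring(x: float, bits: int = 40) -> float:
          """Approximate log10(x) using only multiplication, division by 10 and comparison."""
          if x <= 0:
              raise ValueError("x must be positive")
          result = 0.0
          while x >= 10.0:          # pull out the integer part of the logarithm
              x /= 10.0
              result += 1.0
          while x < 1.0:
              x *= 10.0
              result -= 1.0
          weight = 0.5              # each squaring doubles log10(x): harvest one bit per step
          for _ in range(bits):
              x *= x
              if x >= 10.0:
                  x /= 10.0
                  result += weight
              weight /= 2.0
          return result

      print(log10_by_squaring(2.0))   # ~0.30103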

  11. Logarithmic scaling for fluctuations of a scalar concentration in wall turbulence.

    PubMed

    Mouri, Hideaki; Morinaga, Takeshi; Yagi, Toshimasa; Mori, Kazuyasu

    2017-12-01

    Within wall turbulence, there is a sublayer where the mean velocity and the variance of velocity fluctuations vary logarithmically with the height from the wall. This logarithmic scaling is also known for the mean concentration of a passive scalar. By using heat as such a scalar in a laboratory experiment of a turbulent boundary layer, the existence of the logarithmic scaling is shown here for the variance of fluctuations of the scalar concentration. It is reproduced by a model of energy-containing eddies that are attached to the wall.
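
    The scaling referred to above is commonly written, for the scalar variance, in the generic attached-eddy form shown below (the constants are placeholders rather than values measured in this experiment):

      \overline{\theta'^2}(z) = B_\theta - A_\theta \ln\!\left(\frac{z}{\delta}\right)

    for heights z inside the logarithmic sublayer of a boundary layer of thickness δ, mirroring the analogous logarithmic law for the streamwise velocity variance.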

  12. Logarithmic amplifiers.

    PubMed

    Gandler, W; Shapiro, H

    1990-01-01

    Logarithmic amplifiers (log amps), which produce an output signal proportional to the logarithm of the input signal, are widely used in cytometry for measurements of parameters that vary over a wide dynamic range, e.g., cell surface immunofluorescence. Existing log amp circuits all deviate to some extent from ideal performance with respect to dynamic range and fidelity to the logarithmic curve; accuracy in quantitative analysis using log amps therefore requires that log amps be individually calibrated. However, accuracy and precision may be limited by photon statistics and system noise when very low level input signals are encountered.

  13. Stress Energy tensor in LCFT and the Logarithmic Sugawara construction

    NASA Astrophysics Data System (ADS)

    Kogan, Ian I.; Nichols, Alexander

    2002-01-01

    We discuss the partners of the stress energy tensor and their structure in Logarithmic conformal field theories. In particular we draw attention to the fundamental differences between theories with zero and non-zero central charge. However they are both characterised by at least two independent parameters. We show how, by using a generalised Sugawara construction, one can calculate the logarithmic partner of T. We show that such a construction works in the c = -2 theory using the conformal dimension one primary currents which generate a logarithmic extension of the Kac-Moody algebra.

  14. Equilibrium Solutions of the Logarithmic Hamiltonian Leapfrog for the N-body Problem

    NASA Astrophysics Data System (ADS)

    Minesaki, Yukitaka

    2018-04-01

    We prove that a second-order logarithmic Hamiltonian leapfrog for the classical general N-body problem (CGNBP) designed by Mikkola and Tanikawa and some higher-order logarithmic Hamiltonian methods based on symmetric multicompositions of the logarithmic algorithm exactly reproduce the orbits of elliptic relative equilibrium solutions in the original CGNBP. These methods are explicit symplectic methods. Before this proof, only some implicit discrete-time CGNBPs proposed by Minesaki had been analytically shown to trace the orbits of elliptic relative equilibrium solutions. The proof is therefore the first existence proof for explicit symplectic methods. Such logarithmic Hamiltonian methods with a variable time step can also precisely retain periodic orbits in the classical general three-body problem, which generic numerical methods with a constant time step cannot do.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, Mark J.; Saleh, Omar A.

    We calculated the force-extension curves for a flexible polyelectrolyte chain with varying charge separations by performing Monte Carlo simulations of a 5000-bead chain using a screened Coulomb interaction. At all charge separations, the force-extension curves exhibit a Pincus-like scaling regime at intermediate forces and a logarithmic regime at large forces. As the charge separation increases, the Pincus regime shifts to a larger range of forces and the logarithmic regime starts at larger forces. We also found that the force-extension curve for the corresponding neutral chain has a logarithmic regime. Decreasing the bead diameter in the neutral chain simulations removed the logarithmic regime, and the force-extension curve tends to the freely jointed chain limit. In conclusion, this result shows that only excluded volume is required for the high-force logarithmic regime to occur.

  16. Logarithmic M(2,p) minimal models, their logarithmic couplings, and duality

    NASA Astrophysics Data System (ADS)

    Mathieu, Pierre; Ridout, David

    2008-10-01

    A natural construction of the logarithmic extension of the M(2,p) (chiral) minimal models is presented, which generalises our previous model of percolation ( p=3). Its key aspect is the replacement of the minimal model irreducible modules by reducible ones obtained by requiring that only one of the two principal singular vectors of each module vanish. The resulting theory is then constructed systematically by repeatedly fusing these building block representations. This generates indecomposable representations of the type which signify the presence of logarithmic partner fields in the theory. The basic data characterising these indecomposable modules, the logarithmic couplings, are computed for many special cases and given a new structural interpretation. Quite remarkably, a number of them are presented in closed analytic form (for general p). These are the prime examples of "gauge-invariant" data—quantities independent of the ambiguities present in defining the logarithmic partner fields. Finally, mere global conformal invariance is shown to enforce strong constraints on the allowed spectrum: It is not possible to include modules other than those generated by the fusion of the model's building blocks. This generalises the statement that there cannot exist two effective central charges in a c=0 model. It also suggests the existence of a second "dual" logarithmic theory for each p. Such dual models are briefly discussed.

  17. "Turn-on" fluorescence detection of lead ions based on accelerated leaching of gold nanoparticles on the surface of graphene.

    PubMed

    Fu, Xiuli; Lou, Tingting; Chen, Zhaopeng; Lin, Meng; Feng, Weiwei; Chen, Lingxin

    2012-02-01

    A novel platform for effective "turn-on" fluorescence sensing of lead ions (Pb(2+)) in aqueous solution was developed based on gold nanoparticle (AuNP)-functionalized graphene. The AuNP-functionalized graphene exhibited minimal background fluorescence because of the extraordinarily high quenching ability of AuNPs. Interestingly, the AuNP-functionalized graphene underwent fluorescence restoration as well as significant enhancement upon adding Pb(2+), which was attributed to the fact that Pb(2+) could accelerate the leaching rate of the AuNPs on graphene surfaces in the presence of both thiosulfate (S(2)O(3)(2-)) and 2-mercaptoethanol (2-ME). Consequently, this could be utilized as the basis for selective detection of Pb(2+). Under the optimized conditions, the relative fluorescence intensity showed good linearity versus the logarithm of the Pb(2+) concentration in the range of 50-1000 nM (R = 0.9982), with a detection limit of 10 nM. High selectivity over common coexistent metal ions was also demonstrated. Practical application was demonstrated by determining Pb(2+) in tap water and mineral water samples. The Pb(2+)-specific "turn-on" fluorescence sensor, based on Pb(2+)-accelerated leaching of AuNPs on the surface of graphene, provides new opportunities for highly sensitive and selective Pb(2+) detection in aqueous media.

  18. The Role of Hellinger Processes in Mathematical Finance

    NASA Astrophysics Data System (ADS)

    Choulli, T.; Hurd, T. R.

    2001-09-01

    This paper illustrates the natural role that Hellinger processes can play in solving problems from finance. We propose an extension of the concept of Hellinger process applicable to entropy distance and f-divergence distances, where f is a convex logarithmic function or a convex power function with general order q, 0 ≠ q < 1. These concepts lead to a new approach to Merton's optimal portfolio problem and its dual in general Lévy markets.

  19. A meta-cognitive learning algorithm for a Fully Complex-valued Relaxation Network.

    PubMed

    Savitha, R; Suresh, S; Sundararajan, N

    2012-08-01

    This paper presents a meta-cognitive learning algorithm for a single hidden layer complex-valued neural network called "Meta-cognitive Fully Complex-valued Relaxation Network (McFCRN)". McFCRN has two components: a cognitive component and a meta-cognitive component. A Fully Complex-valued Relaxation Network (FCRN) with a fully complex-valued Gaussian-like activation function (sech) in the hidden layer and an exponential activation function in the output layer forms the cognitive component. The meta-cognitive component contains a self-regulatory learning mechanism which controls the learning ability of FCRN by deciding what-to-learn, when-to-learn and how-to-learn from a sequence of training data. The input parameters of the cognitive component are chosen randomly and the output parameters are estimated by minimizing a logarithmic error function. The problem of explicit minimization of magnitude and phase errors in the logarithmic error function is converted to a system of linear equations, and the output parameters of FCRN are computed analytically. McFCRN starts with zero hidden neurons and builds the number of neurons required to approximate the target function. The meta-cognitive component selects the best learning strategy for FCRN to acquire the knowledge from training data and also adapts the learning strategies to best implement human learning components. Performance studies on a function approximation and real-valued classification problems show that the proposed McFCRN performs better than the existing results reported in the literature. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. One Concept and Two Narrations: The Case of the Logarithm

    ERIC Educational Resources Information Center

    Hamdan, May

    2008-01-01

    Through an account of the history of exponential functions as presented in traditional calculus textbooks, I present my observations and remarks on the spiral development of the concept, and my concerns about the general presentations of the subject. In this article I emphasize how the different arrangements and sequencing of the subjects required…

  1. Using Spreadsheets to Discover Meaning for Parameters in Nonlinear Models

    ERIC Educational Resources Information Center

    Green, Kris H.

    2008-01-01

    This paper explores the use of spreadsheets to develop an exploratory environment where mathematics students can develop their own understanding of the parameters of commonly encountered families of functions: linear, logarithmic, exponential and power. The key to this understanding involves opening up the definition of rate of change from the…

  2. Inclusive production of small radius jets in heavy-ion collisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, Zhong-Bo; Ringer, Felix; Vitev, Ivan

    Here, we develop a new formalism to describe the inclusive production of small radius jets in heavy-ion collisions, which is consistent with jet calculations in the simpler proton–proton system. Only at next-to-leading order (NLO) and beyond can the jet radius parameter R and the jet algorithm dependence of the jet cross section be studied and a meaningful comparison to experimental measurements be made. We are able to consistently achieve NLO accuracy by making use of the recently developed semi-inclusive jet functions within Soft Collinear Effective Theory (SCET). Additionally, single logarithms of the jet size parameter, $\alpha_s^n \ln^n R$, are resummed to next-to-leading logarithmic (NLL$_R$) accuracy in proton–proton collisions. The medium-modified semi-inclusive jet functions are obtained within the framework of SCET with Glauber gluons that describe the interaction of jets with the medium. We also present numerical results for the suppression of inclusive jet cross sections in heavy-ion collisions at the LHC, and the formalism developed here can be extended directly to corresponding jet substructure observables.

  3. Inclusive production of small radius jets in heavy-ion collisions

    DOE PAGES

    Kang, Zhong-Bo; Ringer, Felix; Vitev, Ivan

    2017-03-31

    Here, we develop a new formalism to describe the inclusive production of small radius jets in heavy-ion collisions, which is consistent with jet calculations in the simpler proton–proton system. Only at next-to-leading order (NLO) and beyond can the jet radius parameter R and the jet algorithm dependence of the jet cross section be studied and a meaningful comparison to experimental measurements be made. We are able to consistently achieve NLO accuracy by making use of the recently developed semi-inclusive jet functions within Soft Collinear Effective Theory (SCET). Additionally, single logarithms of the jet size parameter, $\alpha_s^n \ln^n R$, are resummed to next-to-leading logarithmic (NLL$_R$) accuracy in proton–proton collisions. The medium-modified semi-inclusive jet functions are obtained within the framework of SCET with Glauber gluons that describe the interaction of jets with the medium. We also present numerical results for the suppression of inclusive jet cross sections in heavy-ion collisions at the LHC, and the formalism developed here can be extended directly to corresponding jet substructure observables.

  4. Stochastic exponential synchronization of memristive neural networks with time-varying delays via quantized control.

    PubMed

    Zhang, Wanli; Yang, Shiju; Li, Chuandong; Zhang, Wei; Yang, Xinsong

    2018-08-01

    This paper focuses on stochastic exponential synchronization of delayed memristive neural networks (MNNs) with the aid of systems with interval parameters, which are established by using the concept of the Filippov solution. A new intermittent controller and an adaptive controller with logarithmic quantization are designed to deal with the difficulties induced by time-varying delays, interval parameters and stochastic perturbations simultaneously. Moreover, these controllers not only reduce the control cost but also save communication channels and bandwidth. Based on novel Lyapunov functions and new analytical methods, several synchronization criteria are established to realize the exponential synchronization of MNNs with stochastic perturbations via intermittent control and adaptive control with or without logarithmic quantization. Finally, numerical simulations are offered to substantiate our theoretical results. Copyright © 2018 Elsevier Ltd. All rights reserved.
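
    Illustrative sketch (not from the original work): the snippet below implements a standard static logarithmic quantizer, whose levels are the geometric sequence ±u0·ρ^j; the paper's quantized controllers may use a different parameterization, and rho, u0 and log_quantize are assumptions.

      import numpy as np

      def log_quantize(v, rho=0.8, u0=1.0):
          """Map v to a level of the set {±u0 * rho**j}, choosing the
          exponent j nearest to v on a logarithmic scale (0 maps to 0)."""
          if v == 0:
              return 0.0
          j = np.round(np.log(abs(v) / u0) / np.log(rho))
          return float(np.sign(v) * u0 * rho ** j)

      print([log_quantize(x) for x in (-2.3, 0.05, 0.7)])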

  5. Entropy and complexity analysis of hydrogenic Rydberg atoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lopez-Rosa, S.; Departamento de Fisica Aplicada II, Universidad de Sevilla, 41012-Sevilla; Toranzo, I. V.

    The internal disorder of hydrogenic Rydberg atoms as contained in their position and momentum probability densities is examined by means of the following information-theoretic spreading quantities: the radial and logarithmic expectation values, the Shannon entropy, and the Fisher information. As well, the complexity measures of Cramer-Rao, Fisher-Shannon, and Lopez-Ruiz-Mancini-Calbet types are investigated in both reciprocal spaces. The leading term of these quantities is rigorously calculated by use of the asymptotic properties of the concomitant entropic functionals of the Laguerre and Gegenbauer orthogonal polynomials which control the wavefunctions of the Rydberg states in both position and momentum spaces. The associated generalized Heisenberg-like, logarithmic and entropic uncertainty relations are also given. Finally, application to linear (l = 0), circular (l = n - 1), and quasicircular (l = n - 2) states is explicitly done.

  6. Quantum corrections to conductivity in graphene with vacancies

    NASA Astrophysics Data System (ADS)

    Araujo, E. N. D.; Brant, J. C.; Archanjo, B. S.; Medeiros-Ribeiro, G.; Alves, E. S.

    2018-06-01

    In this work, different regions of a graphene device were exposed to a 30 keV helium ion beam, creating a series of alternating strips of vacancy-type defects and pristine graphene. From magnetoconductance measurements as a function of temperature, density of carriers and density of strips, we show that the electron-electron interaction is important to explain the logarithmic quantum corrections to the Drude conductivity in graphene with vacancies. It is known that vacancies in graphene behave as local magnetic moments that interact with the conduction electrons and lead to a logarithmic correction to the conductance through the Kondo effect. However, our work shows that it is necessary to account for the non-homogeneity of the sample to avoid misinterpretations about the Kondo physics due to the difficulties in separating the electron-electron interaction from the Kondo effect.

  7. Non-abelian factorisation for next-to-leading-power threshold logarithms

    NASA Astrophysics Data System (ADS)

    Bonocore, D.; Laenen, E.; Magnea, L.; Vernazza, L.; White, C. D.

    2016-12-01

    Soft and collinear radiation is responsible for large corrections to many hadronic cross sections, near thresholds for the production of heavy final states. There is much interest in extending our understanding of this radiation to next-to-leading power (NLP) in the threshold expansion. In this paper, we generalise a previously proposed all-order NLP factorisation formula to include non-abelian corrections. We define a nonabelian radiative jet function, organising collinear enhancements at NLP, and compute it for quark jets at one loop. We discuss in detail the issue of double counting between soft and collinear regions. Finally, we verify our prescription by reproducing all NLP logarithms in Drell-Yan production up to NNLO, including those associated with double real emission. Our results constitute an important step in the development of a fully general resummation formalism for NLP threshold effects.

  8. Maximum entropy perception-action space: a Bayesian model of eye movement selection

    NASA Astrophysics Data System (ADS)

    Colas, Francis; Bessière, Pierre; Girard, Benoît

    2011-03-01

    In this article, we investigate the issue of the selection of eye movements in a free-eye Multiple Object Tracking task. We propose a Bayesian model of retinotopic maps with a complex logarithmic mapping. This model is structured in two parts: a representation of the visual scene, and a decision model based on the representation. We compare different decision models based on different features of the representation and we show that taking into account uncertainty helps predict the eye movements of subjects recorded in a psychophysics experiment. Finally, based on experimental data, we postulate that the complex logarithmic mapping has a functional relevance, as the density of objects in this space is more uniform than expected. This may indicate that the representation space and control strategies are such that the object density is of maximum entropy.

  9. How Do Students Acquire an Understanding of Logarithmic Concepts?

    ERIC Educational Resources Information Center

    Mulqueeny, Ellen

    2012-01-01

    The use of logarithms, an important tool for calculus and beyond, has been reduced to symbol manipulation without understanding in most entry-level college algebra courses. The primary aim of this research, therefore, was to investigate college students' understanding of logarithmic concepts through the use of a series of instructional tasks…

  10. 123s and ABCs: developmental shifts in logarithmic-to-linear responding reflect fluency with sequence values.

    PubMed

    Hurst, Michelle; Monahan, K Leigh; Heller, Elizabeth; Cordes, Sara

    2014-11-01

    When placing numbers along a number line with endpoints 0 and 1000, children generally space numbers logarithmically until around the age of 7, when they shift to a predominantly linear pattern of responding. This developmental shift of responding on the number placement task has been argued to be indicative of a shift in the format of the underlying representation of number (Siegler & Opfer, ). In the current study, we provide evidence from both child and adult participants to suggest that performance on the number placement task may not reflect the structure of the mental number line, but instead is a function of the fluency (i.e. ease) with which the individual can work with the values in the sequence. In Experiment 1, adult participants respond logarithmically when placing numbers on a line with less familiar anchors (1639 to 2897), despite linear responding on control tasks with standard anchors involving a similar range (0 to 1287) and a similar numerical magnitude (2000 to 3000). In Experiment 2, we show a similar developmental shift in childhood from logarithmic to linear responding for a non-numerical sequence with no inherent magnitude (the alphabet). In conclusion, we argue that the developmental trend towards linear behavior on the number line task is a product of successful strategy use and mental fluency with the values of the sequence, resulting from familiarity with endpoints and increased knowledge about general ordering principles of the sequence. A video abstract of this article can be viewed at: http://www.youtube.com/watch?v=zg5Q2LIFk3M. © 2014 John Wiley & Sons Ltd.
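
    Illustrative sketch (not from the original work): the snippet below contrasts a linear and a logarithmic fit to number-line placements, the usual way such response patterns are classified; the target numbers and placements are hypothetical.

      import numpy as np

      # Hypothetical placements of target numbers on a 0-1000 number line.
      targets = np.array([4, 6, 18, 25, 71, 86, 230, 390, 780, 940], dtype=float)
      placements = np.array([150, 190, 320, 360, 520, 560, 700, 780, 930, 980], dtype=float)

      def r_squared(y, y_hat):
          ss_res = np.sum((y - y_hat) ** 2)
          ss_tot = np.sum((y - y.mean()) ** 2)
          return 1 - ss_res / ss_tot

      lin_fit = np.polyval(np.polyfit(targets, placements, 1), targets)                     # placement = a*x + b
      log_fit = np.polyval(np.polyfit(np.log(targets), placements, 1), np.log(targets))     # placement = a*ln(x) + b

      print("R2 linear:", r_squared(placements, lin_fit))
      print("R2 logarithmic:", r_squared(placements, log_fit))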

  11. Factorization for jet radius logarithms in jet mass spectra at the LHC

    DOE PAGES

    Kolodrubetz, Daniel W.; Pietrulewicz, Piotr; Stewart, Iain W.; ...

    2016-12-14

    To predict the jet mass spectrum at a hadron collider it is crucial to account for the resummation of logarithms between the transverse momentum of the jet and its invariant mass m_J. For small jet areas there are additional large logarithms of the jet radius R, which affect the convergence of the perturbative series. We present an analytic framework for exclusive jet production at the LHC which gives a complete description of the jet mass spectrum including realistic jet algorithms and jet vetoes. It factorizes the scales associated with m_J, R, and the jet veto, enabling in addition the systematic resummation of jet radius logarithms in the jet mass spectrum beyond leading logarithmic order. We discuss the factorization formulae for the peak and tail region of the jet mass spectrum and for small and large R, and the relations between the different regimes and how to combine them. Regions of experimental interest are classified which do not involve large nonglobal logarithms. We also present universal results for nonperturbative effects and discuss various jet vetoes.

  12. Collinearly-improved BK evolution meets the HERA data

    DOE PAGES

    Iancu, E.; Madrigal, J. D.; Mueller, A. H.; ...

    2015-10-03

    In a previous publication, we have established a collinearly-improved version of the Balitsky–Kovchegov (BK) equation, which resums to all orders the radiative corrections enhanced by large double transverse logarithms. Here, we study the relevance of this equation as a tool for phenomenology, by confronting it to the HERA data. To that aim, we first improve the perturbative accuracy of our resummation, by including two classes of single-logarithmic corrections: those generated by the first non-singular terms in the DGLAP splitting functions and those expressing the one-loop running of the QCD coupling. The equation thus obtained includes all the next-to-leading order corrections to the BK equation which are enhanced by (single or double) collinear logarithms. Furthermore, we then use numerical solutions to this equation to fit the HERA data for the electron–proton reduced cross-section at small Bjorken x. We obtain good quality fits for physically acceptable initial conditions. Our best fit, which shows a good stability up to virtualities as large as Q² = 400 GeV² for the exchanged photon, uses as an initial condition the running-coupling version of the McLerran–Venugopalan model, with the QCD coupling running according to the smallest dipole prescription.

  13. Reducing bias and analyzing variability in the time-left procedure.

    PubMed

    Trujano, R Emmanuel; Orduña, Vladimir

    2015-04-01

    The time-left procedure was designed to evaluate the psychophysical function for time. Although previous results indicated a linear relationship, it is not clear what role the observed bias toward the time-left option plays in this procedure, and there are no reports of how variability changes with predicted indifference. The purposes of this experiment were to reduce bias experimentally, and to contrast the difference limen (a measure of variability around indifference) with predictions from scalar expectancy theory (linear timing) and the behavioral economic model (logarithmic timing). A control group of 6 rats performed the original time-left procedure with C=60 s and S=5, 10,…, 50, 55 s, whereas a no-bias group of 6 rats performed the same conditions in a modified time-left procedure in which only a single response per choice trial was allowed. Results showed that bias was reduced for the no-bias group, observed indifference grew linearly with predicted indifference for both groups, and the difference limen and Weber ratios decreased as expected indifference increased for the control group, which is consistent with linear timing, whereas for the no-bias group they remained constant, consistent with logarithmic timing. Therefore, the time-left procedure generates results consistent with logarithmic perceived time once bias is experimentally reduced. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Using Polynomials to Simplify Fixed Pattern Noise and Photometric Correction of Logarithmic CMOS Image Sensors

    PubMed Central

    Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan

    2015-01-01

    An important class of complementary metal-oxide-semiconductor (CMOS) image sensors is that in which pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging, at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected due to mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient. PMID:26501287
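
    Illustrative sketch (not from the original work): the snippet below shows the general idea of per-pixel low-degree polynomial FPN calibration against a reference response; the pixel model, noise levels, and variable names are hypothetical and do not reproduce the paper's fixed-point implementation.

      import numpy as np

      # Hypothetical responses of 4 logarithmic pixels to 9 uniform stimuli:
      # each pixel has its own gain/offset mismatch (fixed pattern noise).
      x = np.logspace(0, 4, 9)                      # light stimulus levels
      ideal = 20.0 * np.log10(x) + 100.0            # nominal logarithmic response
      rng = np.random.default_rng(1)
      gain = 1 + 0.05 * rng.standard_normal(4)
      offset = 5.0 * rng.standard_normal(4)
      responses = gain[:, None] * ideal + offset[:, None]

      reference = responses.mean(axis=0)            # target response after correction

      # Per-pixel degree-1 polynomial (approximately linear FPN calibration).
      coeffs = [np.polyfit(responses[p], reference, 1) for p in range(4)]
      corrected = np.array([np.polyval(c, responses[p]) for p, c in enumerate(coeffs)])

      print(np.max(np.abs(corrected - reference)))  # residual FPN after correction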

  15. CFD study of the flow pattern in an ultrasonic horn reactor: Introducing a realistic vibrating boundary condition.

    PubMed

    Rahimi, Masoud; Movahedirad, Salman; Shahhosseini, Shahrokh

    2017-03-01

    Recently, great attention has been paid to predicting the acoustic streaming field distribution inside sonoreactors induced by high-power ultrasonic wave generators. The focus of this paper is to model an ultrasonic vibrating horn and study the induced flow pattern with a newly developed moving boundary condition. The numerical simulation utilizes the modified cavitation model along with the "mixture" model for turbulent flow (RNG, k-ε), and a moving boundary condition with an oscillating parabolic-logarithmic profile applied to the horn tip. This moving boundary provides the situation in which the center of the horn tip vibrates more strongly than the peripheral regions. The velocity field obtained by computational fluid dynamics was in reasonably good agreement with the PIV results. The moving boundary model is more accurate since it better approximates the movement of the horn tip in the ultrasonic-assisted process. From an optimization point of view, the model with the new moving boundary is more suitable than the conventional models for design purposes because the displacement magnitude of the horn tip is the only fitting parameter. After developing and validating the numerical model, the model was utilized to predict various quantities such as the cavitation zone, pressure field and stream function that are not experimentally feasible to measure. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Retention behavior of long chain quaternary ammonium homologues and related nitroso-alkylmethylamines

    USGS Publications Warehouse

    Abidi, S.L.

    1985-01-01

    Several chromatographic methods have been utilized to study the retention behavior of a homologous series of n-alkylbenzyldimethylammonium chlorides (ABDAC) and the corresponding nitroso-n-alkylmethylamines (NAMA). Linear correlation of the logarithmic capacity factor (k') with the number of carbons in the alkyl chain provides useful information on both gas chromatographic (GC) and high-performance liquid chromatographic (HPLC) retention parameters of unknown components. Under all conditions employed, GC methodology has proved effective in achieving complete resolution of the homologous mixture of NAMA despite its obvious inadequacy in the separation of E-Z configurational isomers. Conversely, normal-phase HPLC on silica demonstrates that the selectivity (α) value for an E-Z pair is much higher than that for an adjacent homologous pair. In the reversed-phase HPLC study, three different silica-based column systems were examined under various mobile phase conditions. The extent of variation in k' was found to be a function of the organic modifier, counter-ion concentration, eluent pH, nature of counter-ion, and the polarity and type of stationary phase. The k'—[NaClO4] profiles showed similar trends between the ABDAC and the NAMA series, supporting the dipolar electronic structures of the latter compounds. Mobile phase and stationary phase effects on component separation are described. The methodology presented establishes the utility of HPLC separation techniques as versatile analytical tools for practical application.

  17. Renormalizability of quasiparton distribution functions

    DOE PAGES

    Ishikawa, Tomomi; Ma, Yan-Qing; Qiu, Jian-Wei; ...

    2017-11-21

    Quasi-parton distribution functions have received a lot of attention in both the perturbative QCD and lattice QCD communities in recent years because they not only carry good information on the parton distribution functions, but also can be evaluated by lattice QCD simulations. However, unlike the parton distribution functions, the quasi-parton distribution functions have perturbative ultraviolet power divergences because they are not defined by twist-2 operators. Here in this article, we identify all sources of ultraviolet divergences for the quasi-parton distribution functions in coordinate space, and demonstrate that power divergences, as well as all logarithmic divergences, can be renormalized multiplicatively to all orders in QCD perturbation theory.

  18. Renormalizability of quasiparton distribution functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ishikawa, Tomomi; Ma, Yan-Qing; Qiu, Jian-Wei

    Quasi-parton distribution functions have received a lot of attention in both the perturbative QCD and lattice QCD communities in recent years because they not only carry good information on the parton distribution functions, but also can be evaluated by lattice QCD simulations. However, unlike the parton distribution functions, the quasi-parton distribution functions have perturbative ultraviolet power divergences because they are not defined by twist-2 operators. Here in this article, we identify all sources of ultraviolet divergences for the quasi-parton distribution functions in coordinate space, and demonstrate that power divergences, as well as all logarithmic divergences, can be renormalized multiplicatively to all orders in QCD perturbation theory.

  19. Application of the Artificial Neural Network model for prediction of monthly Standardized Precipitation and Evapotranspiration Index using hydrometeorological parameters and climate indices in eastern Australia

    NASA Astrophysics Data System (ADS)

    Deo, Ravinesh C.; Şahin, Mehmet

    2015-07-01

    The forecasting of drought based on the cumulative influence of rainfall, temperature and evaporation is greatly beneficial for mitigating adverse consequences on water-sensitive sectors such as agriculture, ecosystems, wildlife, tourism, recreation, crop health and hydrologic engineering. Predictive models of drought indices help in assessing water scarcity situations, drought identification and severity characterization. In this paper, we tested the feasibility of the Artificial Neural Network (ANN) as a data-driven model for predicting the monthly Standardized Precipitation and Evapotranspiration Index (SPEI) for eight candidate stations in eastern Australia using predictive variable data from 1915 to 2005 (training) and simulated data for the period 2006-2012. The predictive variables were: monthly rainfall totals, mean temperature, minimum temperature, maximum temperature and evapotranspiration, which were supplemented by large-scale climate indices (Southern Oscillation Index, Pacific Decadal Oscillation, Southern Annular Mode and Indian Ocean Dipole) and the Sea Surface Temperatures (Nino 3.0, 3.4 and 4.0). A total of 30 ANN models were developed with 3-layer ANN networks. To determine the best combination of learning algorithm, hidden transfer function and output function for the optimum model, the Levenberg-Marquardt and Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton backpropagation algorithms were utilized to train the network, the tangent and logarithmic sigmoid equations were used as the activation functions, and the linear, logarithmic and tangent sigmoid equations were used as the output function. The best ANN architecture had 18 input neurons, 43 hidden neurons and 1 output neuron, trained using the Levenberg-Marquardt learning algorithm with the tangent sigmoid equation as both the activation and output function. An evaluation of the model performance based on statistical rules yielded time-averaged Coefficient of Determination, Root Mean Squared Error and Mean Absolute Error ranging from 0.9945-0.9990, 0.0466-0.1117, and 0.0013-0.0130, respectively, for individual stations. Also, the Willmott's Index of Agreement and the Nash-Sutcliffe Coefficient of Efficiency were between 0.932-0.959 and 0.977-0.998, respectively. When checked for the severity (S), duration (D) and peak intensity (I) of drought events determined from the simulated and observed SPEI, differences in drought parameters ranged from -1.41 to 0.64%, -2.17 to 1.92% and -3.21 to 1.21%, respectively. Based on performance evaluation measures, we aver that the Artificial Neural Network model is a useful data-driven tool for forecasting monthly SPEI and its drought-related properties in the region of study.
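
    Illustrative sketch (not from the original work): the snippet below builds a toy forward pass through a 3-layer network with the reported 18-43-1 layout and tangent-sigmoid activation and output functions; the weights are random placeholders and the Levenberg-Marquardt/BFGS training itself is not implemented here.

      import numpy as np

      rng = np.random.default_rng(5)

      def tansig(x):
          return np.tanh(x)                    # tangent sigmoid activation

      def logsig(x):
          return 1.0 / (1.0 + np.exp(-x))      # logarithmic sigmoid alternative

      # Toy 18-43-1 network with random (untrained) weights.
      W1, b1 = 0.1 * rng.standard_normal((43, 18)), np.zeros(43)
      W2, b2 = 0.1 * rng.standard_normal((1, 43)), np.zeros(1)

      def forward(x, hidden=tansig, output=tansig):
          return output(W2 @ hidden(W1 @ x + b1) + b2)

      x = rng.standard_normal(18)              # one standardized predictor vector
      print(forward(x))                        # toy SPEI-like output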

  20. The functional dependence of canopy conductance on water vapor pressure deficit revisited

    NASA Astrophysics Data System (ADS)

    Fuchs, Marcel; Stanghellini, Cecilia

    2018-03-01

    Current research seeking to relate ambient water vapor deficit (D) to foliage conductance (g_F) derives a canopy conductance (g_W) from measured transpiration by inverting the coupled transpiration model to yield g_W = m - n ln(D), where m and n are fitting parameters. In contrast, this paper demonstrates that the relation between coupled g_W and D is g_W = AP/D + B, where P is the barometric pressure, A is the radiative term, and B is the convective term coefficient of the Penman-Monteith equation. A and B are functions of g_F and of meteorological parameters but are mathematically independent of D. Keeping A and B constant implies constancy of g_F. With these premises, the derived g_W is a hyperbolic function of D resembling the logarithmic expression, in contradiction with the pre-set constancy of g_F. Calculations with random inputs that ensure independence between g_F and D reproduce published experimental scatter plots that display a dependence between g_W and D, in contradiction with the premises. For this reason, the dependence of g_W on D is a computational artifact unrelated to any real effect of ambient humidity on stomatal aperture and closure. Data collected in a maize field confirm the inadequacy of the logarithmic function to quantify the relation between canopy conductance and vapor pressure deficit.

  1. Calculation of the transverse parton distribution functions at next-to-next-to-leading order

    NASA Astrophysics Data System (ADS)

    Gehrmann, Thomas; Lübbert, Thomas; Yang, Li Lin

    2014-06-01

    We describe the perturbative calculation of the transverse parton distribution functions in all partonic channels up to next-to-next-to-leading order based on a gauge invariant operator definition. We demonstrate the cancellation of light-cone divergences and show that universal process-independent transverse parton distribution functions can be obtained through a refactorization. Our results serve as the first explicit higher-order calculation of these functions starting from first principles, and can be used to perform next-to-next-to-next-to-leading logarithmic q_T resummation for a large class of processes at hadron colliders.

  2. Simulations of stretching a flexible polyelectrolyte with varying charge separation

    DOE PAGES

    Stevens, Mark J.; Saleh, Omar A.

    2016-07-22

    We calculated the force-extension curves for a flexible polyelectrolyte chain with varying charge separations by performing Monte Carlo simulations of a 5000-bead chain using a screened Coulomb interaction. At all charge separations, the force-extension curves exhibit a Pincus-like scaling regime at intermediate forces and a logarithmic regime at large forces. As the charge separation increases, the Pincus regime shifts to a larger range of forces and the logarithmic regime starts at larger forces. We also found that the force-extension curve for the corresponding neutral chain has a logarithmic regime. Decreasing the diameter of the beads in the neutral chain simulations removed the logarithmic regime, and the force-extension curve tends to the freely jointed chain limit. In conclusion, this result shows that only excluded volume is required for the high force logarithmic regime to occur.

  3. Improved maximum average correlation height filter with adaptive log base selection for object recognition

    NASA Astrophysics Data System (ADS)

    Tehsin, Sara; Rehman, Saad; Awan, Ahmad B.; Chaudry, Qaiser; Abbas, Muhammad; Young, Rupert; Asif, Afia

    2016-04-01

    Sensitivity to the variations in the reference image is a major concern when recognizing target objects. A combinational framework of correlation filters and logarithmic transformation has been previously reported to resolve this issue alongside catering for scale and rotation changes of the object in the presence of distortion and noise. In this paper, we have extended the work to include the influence of different logarithmic bases on the resultant correlation plane. The meaningful changes in correlation parameters along with contraction/expansion in the correlation plane peak have been identified under different scenarios. Based on our research, we propose some specific log bases to be used in logarithmically transformed correlation filters for achieving suitable tolerance to different variations. The study is based upon testing a range of logarithmic bases for different situations and finding an optimal logarithmic base for each particular set of distortions. Our results show improved correlation and target detection accuracies.

  4. Task-switching cost and repetition priming: two overlooked confounds in the first-set procedure of the Sternberg paradigm and how they affect memory set-size effects.

    PubMed

    Jou, Jerwen

    2014-10-01

    Subjects performed Sternberg-type memory recognition tasks (Sternberg paradigm) in four experiments. Category-instance names were used as learning and testing materials. Sternberg's original experiments demonstrated a linear relation between reaction time (RT) and memory-set size (MSS). A few later studies found no relation, and other studies found a nonlinear relation (logarithmic) between the two variables. These deviations were used as evidence undermining Sternberg's serial scan theory. This study identified two confounding variables in the fixed-set procedure of the paradigm (where multiple probes are presented at test for a learned memory set) that could generate a MSS RT function that was either flat or logarithmic rather than linearly increasing. These two confounding variables were task-switching cost and repetition priming. The former factor worked against smaller memory sets and in favour of larger sets whereas the latter factor worked in the opposite way. Results demonstrated that a null or a logarithmic RT-to-MSS relation could be the artefact of the combined effects of these two variables. The Sternberg paradigm has been used widely in memory research, and a thorough understanding of the subtle methodological pitfalls is crucial. It is suggested that a varied-set procedure (where only one probe is presented at test for a learned memory set) is a more contamination-free procedure for measuring the MSS effects, and that if a fixed-set procedure is used, it is worthwhile examining the RT function of the very first trials across the MSSs, which are presumably relatively free of contamination by the subsequent trials.

  5. Promoting convergence: The Phi spiral in abduction of mouse corneal behaviors

    PubMed Central

    Rhee, Jerry; Nejad, Talisa Mohammad; Comets, Olivier; Flannery, Sean; Gulsoy, Eine Begum; Iannaccone, Philip; Foster, Craig

    2015-01-01

    Why do mouse corneal epithelial cells display spiraling patterns? We want to provide an explanation for this curious phenomenon by applying an idealized problem solving process. Specifically, we applied complementary line-fitting methods to measure transgenic epithelial reporter expression arrangements displayed on three mature, live enucleated globes to clarify the problem. Two prominent logarithmic curves were discovered, one of which displayed the ϕ ratio, an indicator of an optimal configuration in phyllotactic systems. We then utilized two different computational approaches to expose our current understanding of the behavior. In one procedure, which involved an isotropic mechanics-based finite element method, we successfully produced logarithmic spiral curves of maximum shear strain based pathlines but computed dimensions displayed pitch angles of 35° (ϕ spiral is ∼17°), which was altered when we fitted the model with published measurements of coarse collagen orientations. We then used model-based reasoning in context of Peircean abduction to select a working hypothesis. Our work serves as a concise example of applying a scientific habit of mind and illustrates nuances of executing a common method to doing integrative science. © 2014 Wiley Periodicals, Inc. Complexity 20: 22–38, 2015 PMID:25755620

  6. Beam Thrust Cross Section for Drell-Yan Production at Next-to-Next-to-Leading-Logarithmic Order

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, Iain W.; Tackmann, Frank J.; Waalewijn, Wouter J.

    2011-01-21

    At the LHC and Tevatron strong initial-state radiation (ISR) plays an important role. It can significantly affect the partonic luminosity available to the hard interaction or contaminate a signal with additional jets and soft radiation. An ideal process to study ISR is isolated Drell-Yan production, pp → X l⁺l⁻ without central jets, where the jet veto is provided by the hadronic event shape beam thrust τ_B. Most hadron collider event shapes are designed to study central jets. In contrast, requiring τ_B << 1 provides an inclusive veto of central jets and measures the spectrum of ISR. For τ_B << 1 we carry out a resummation of α_s^n ln^m τ_B corrections at next-to-next-to-leading-logarithmic order. This is the first resummation at this order for a hadron-hadron collider event shape. Measurements of τ_B at the Tevatron and LHC can provide crucial tests of our understanding of ISR and of τ_B's utility as a central jet veto.

  7. The vibrational properties of Chinese fir wood during moisture sorption process

    Treesearch

    Jiali Jiang; Jianxiong Lu; Zhiyong Cai

    2012-01-01

    The vibrational properties of Chinese fir (Cunninghamia lanceolata) wood were investigated in this study as a function of changes in moisture content (MC) and grain direction. The dynamic modulus of elasticity (DMOE) and the logarithmic decrement (δ) were examined using a cantilever beam vibration testing apparatus. It was observed that DMOE and δ of wood varied...

  8. Estimating leaf area and leaf biomass of open-grown deciduous urban trees

    Treesearch

    David J. Nowak

    1996-01-01

    Logarithmic regression equations were developed to predict leaf area and leaf biomass for open-grown deciduous urban trees based on stem diameter and crown parameters. Equations based on crown parameters produced more reliable estimates. The equations can be used to help quantify forest structure and functions, particularly in urbanizing and urban/suburban areas.

  9. Dry Weight of Several Piedmont Hardwoods

    Treesearch

    Bobby G. Blackmon; Charles W. Ralston

    1968-01-01

    Forty-four sample hardwood trees felled on 24 plots were separated into three above-ground components (stem, branches, and leaves) and weighed for dry matter content. Tree, stand, and site variables were tested for significant relationships with dry weight of tree parts. Weight increase of stems was a logarithmic function of both stem diameter and height, whereas for...

  10. Predicting Body Fat Using Data on the BMI

    ERIC Educational Resources Information Center

    Mills, Terence C.

    2005-01-01

    A data set contained in the "Journal of Statistical Education's" data archive provides a way of exploring regression analysis at a variety of teaching levels. An appropriate functional form for the relationship between percentage body fat and the BMI is shown to be semi-logarithmic, with variation in the BMI accounting for a little over half…
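
    Illustrative sketch (not from the original work): a semi-logarithmic regression of this kind can be fitted as shown below; the (BMI, body fat) pairs are hypothetical and do not reproduce the archived data set.

      import numpy as np

      bmi = np.array([19.5, 22.0, 24.3, 26.8, 29.1, 31.5, 34.0])
      fat = np.array([12.0, 17.5, 21.0, 24.5, 27.0, 29.0, 31.0])  # % body fat

      # Semi-logarithmic form: body fat = a + b * ln(BMI)
      b, a = np.polyfit(np.log(bmi), fat, 1)
      print(f"fat ≈ {a:.1f} + {b:.1f} * ln(BMI)")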

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Yunshan; DeVore, Peter T. S.; Jalali, Bahram

    Optical computing accelerators help alleviate bandwidth and power consumption bottlenecks in electronics. In this paper, we show an approach to implementing logarithmic-type analog co-processors in silicon photonics and use it to perform the exponentiation operation and the recovery of a signal in the presence of multiplicative distortion. Finally, the function is realized by exploiting nonlinear-absorption-enhanced Raman amplification saturation in a silicon waveguide.

  12. Leading chiral logarithms for the nucleon mass

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vladimirov, Alexey A.; Bijnens, Johan

    2016-01-22

    We give a short introduction to the calculation of the leading chiral logarithms and present the results of a recent evaluation of the LLog series for the nucleon mass within heavy baryon theory. The presented results are the first example of an LLog calculation in nucleon ChPT. We also discuss some regularities observed in the leading logarithmic series for the nucleon mass.

  13. A computer graphics display and data compression technique

    NASA Technical Reports Server (NTRS)

    Teague, M. J.; Meyer, H. G.; Levenson, L. (Editor)

    1974-01-01

    The computer program discussed is intended for the graphical presentation of a general dependent variable X that is a function of two independent variables, U and V. The required input to the program is the variation of the dependent variable with one of the independent variables for various fixed values of the other. The computer program is named CRP, and the output is provided by the SD 4060 plotter. Program CRP is an extremely flexible program that offers the user a wide variety of options. The dependent variable may be presented in either a linear or a logarithmic manner. Automatic centering of the plot is provided in the ordinate direction, and the abscissa is scaled automatically for a logarithmic plot. A description of the carpet plot technique is given along with the coordinates system used in the program. Various aspects of the program logic are discussed and detailed documentation of the data card format is presented.

  14. Entanglement entropy of 2D conformal quantum critical points: hearing the shape of a quantum drum.

    PubMed

    Fradkin, Eduardo; Moore, Joel E

    2006-08-04

    The entanglement entropy of a pure quantum state of a bipartite system A ∪ B is defined as the von Neumann entropy of the reduced density matrix obtained by tracing over one of the two parts. In one dimension, the entanglement of critical ground states diverges logarithmically in the subsystem size, with a universal coefficient that for conformally invariant critical points is related to the central charge of the conformal field theory. We find that the entanglement entropy of a standard class of z=2 conformal quantum critical points in two spatial dimensions, in addition to a nonuniversal "area law" contribution linear in the size of the AB boundary, generically has a universal logarithmically divergent correction, which is completely determined by the geometry of the partition and by the central charge of the field theory that describes the critical wave function.

  15. Logarithmic Superdiffusion in Two Dimensional Driven Lattice Gases

    NASA Astrophysics Data System (ADS)

    Krug, J.; Neiss, R. A.; Schadschneider, A.; Schmidt, J.

    2018-03-01

    The spreading of density fluctuations in two-dimensional driven diffusive systems is marginally anomalous. Mode coupling theory predicts that the diffusivity in the direction of the drive diverges with time as (ln t)^{2/3} with a prefactor depending on the macroscopic current-density relation and the diffusion tensor of the fluctuating hydrodynamic field equation. Here we present the first numerical verification of this behavior for a particular version of the two-dimensional asymmetric exclusion process. Particles jump strictly asymmetrically along one of the lattice directions and symmetrically along the other, and an anisotropy parameter p governs the ratio between the two rates. Using a novel massively parallel coupling algorithm that strongly reduces the fluctuations in the numerical estimate of the two-point correlation function, we are able to accurately determine the exponent of the logarithmic correction. In addition, the variation of the prefactor with p provides a stringent test of mode coupling theory.
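
    Illustrative sketch (not from the original work): the exponent of such a logarithmic correction can be estimated by a linear fit in doubly-logarithmic variables, as shown below; the diffusivity values are synthetic, not simulation data from the paper.

      import numpy as np

      # Synthetic diffusivity estimates D(t) obeying D ~ (ln t)^(2/3) with noise.
      t = np.logspace(2, 8, 13)
      rng = np.random.default_rng(2)
      D = 0.7 * np.log(t) ** (2 / 3) * (1 + 0.02 * rng.standard_normal(t.size))

      # Fit D = A * (ln t)^kappa  =>  ln D = ln A + kappa * ln(ln t)
      kappa, lnA = np.polyfit(np.log(np.log(t)), np.log(D), 1)
      print(f"fitted exponent kappa ≈ {kappa:.3f} (mode coupling theory: 2/3)")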

  16. Entanglement entropy of ABJM theory and entropy of topological black hole

    NASA Astrophysics Data System (ADS)

    Nian, Jun; Zhang, Xinyu

    2017-07-01

    In this paper we discuss the supersymmetric localization of the 4D N = 2 off-shell gauged supergravity on the background of the AdS4 neutral topological black hole, which is the gravity dual of the ABJM theory defined on the boundary S^1 × H^2. We compute the large-N expansion of the supergravity partition function. The result gives the black hole entropy with the logarithmic correction, which matches the previous result of the entanglement entropy of the ABJM theory up to some stringy effects. Our result is consistent with the previous on-shell one-loop computation of the logarithmic correction to black hole entropy. It provides an explicit example of the identification of the entanglement entropy of the boundary conformal field theory with the bulk black hole entropy beyond the leading order given by the classical Bekenstein-Hawking formula, which consequently tests the AdS/CFT correspondence at the subleading order.

  17. Two Universality Properties Associated with the Monkey Model of Zipf's Law

    NASA Astrophysics Data System (ADS)

    Perline, Richard; Perline, Ron

    2016-03-01

    The distribution of word probabilities in the monkey model of Zipf's law is associated with two universality properties: (1) the power law exponent converges strongly to $-1$ as the alphabet size increases and the letter probabilities are specified as the spacings from a random division of the unit interval for any distribution with a bounded density function on $[0,1]$; and (2), on a logarithmic scale the version of the model with a finite word length cutoff and unequal letter probabilities is approximately normally distributed in the part of the distribution away from the tails. The first property is proved using a remarkably general limit theorem for the logarithm of sample spacings from Shao and Hahn, and the second property follows from Anscombe's central limit theorem for a random number of i.i.d. random variables. The finite word length model leads to a hybrid Zipf-lognormal mixture distribution closely related to work in other areas.
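
    Illustrative sketch (not from the original work): the snippet below simulates the monkey model with letter probabilities drawn as spacings of a random division of the unit interval, as in property (1), and estimates the rank-frequency exponent; the alphabet size, space probability and fitting range are assumptions.

      import numpy as np
      from collections import Counter

      rng = np.random.default_rng(3)
      n_letters, p_space = 8, 0.2

      # Letter probabilities as spacings of a random division of [0, 1],
      # rescaled so that all letters plus the space sum to 1.
      cuts = np.sort(rng.random(n_letters - 1))
      letter_p = np.diff(np.concatenate(([0.0], cuts, [1.0]))) * (1 - p_space)
      symbols = [chr(ord("a") + i) for i in range(n_letters)] + [" "]
      probs = np.concatenate((letter_p, [p_space]))

      text = "".join(rng.choice(symbols, size=1_000_000, p=probs))
      counts = Counter(w for w in text.split(" ") if w)

      freqs = np.array(sorted(counts.values(), reverse=True), dtype=float)
      ranks = np.arange(1, freqs.size + 1)
      sel = (ranks >= 10) & (ranks <= 1000)          # fit away from head and tail
      slope, _ = np.polyfit(np.log(ranks[sel]), np.log(freqs[sel]), 1)
      print(f"estimated rank-frequency exponent ≈ {slope:.2f} (Zipf: about -1)")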

  18. Method of detecting system function by measuring frequency response

    DOEpatents

    Morrison, John L.; Morrison, William H.; Christophersen, Jon P.; Motloch, Chester G.

    2013-01-08

    Methods of rapidly measuring an impedance spectrum of an energy storage device in-situ over a limited number of logarithmically distributed frequencies are described. An energy storage device is excited with a known input signal, and a response is measured to ascertain the impedance spectrum. An excitation signal is a limited time duration sum-of-sines consisting of a select number of frequencies. In one embodiment, magnitude and phase of each frequency of interest within the sum-of-sines is identified when the selected frequencies and sample rate are logarithmic integer steps greater than two. This technique requires a measurement with a duration of one period of the lowest frequency. In another embodiment, where selected frequencies are distributed in octave steps, the impedance spectrum can be determined using a captured time record that is reduced to a half-period of the lowest frequency.
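
    Illustrative sketch (not from the original work): the snippet below generates an octave-spaced sum-of-sines excitation and recovers the magnitude and phase of each injected frequency by correlation over one period of the lowest frequency; the frequency range and sample rate are assumptions, and no battery model is included.

      import numpy as np

      f_min, n_freqs, fs = 0.1, 8, 1000.0            # lowest freq (Hz), count, sample rate (Hz)
      freqs = f_min * 2.0 ** np.arange(n_freqs)      # octave-spaced: 0.1, 0.2, 0.4, ... Hz

      # One period of the lowest frequency suffices to resolve every component.
      t = np.arange(0.0, 1.0 / f_min, 1.0 / fs)
      excitation = sum(np.sin(2 * np.pi * f * t) for f in freqs)

      # Recover each component from the (here, undistorted) signal by correlation.
      for f in freqs:
          s, c = np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)
          im, re = 2 * np.mean(excitation * s), 2 * np.mean(excitation * c)
          print(f"{f:6.2f} Hz  magnitude {np.hypot(re, im):.3f}  phase {np.arctan2(re, im):+.3f} rad")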

  19. A study of the eigenvectors of low frequency vibrational modes in crystalline cytidine via high pressure Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Lee, Scott A.

    2014-03-01

    High-pressure Raman spectroscopy has been used to study the eigenvectors and eigenvalues of the low-frequency vibrational modes of crystalline cytidine at 295 K by evaluating the logarithmic derivative of the vibrational frequency with respect to pressure: 1/ω dω/dP. Crystalline samples of molecular materials such as cytidine have vibrational modes that are localized within a molecular unit (``internal'' modes) as well as modes in which the molecular units vibrate against each other (``external'' modes). The value of the logarithmic derivative is a diagnostic probe of the nature of the eigenvector of the vibrational modes, making high pressure experiments a very useful probe for such studies. Internal stretching modes have low logarithmic derivatives while external as well as internal torsional and bending modes have higher logarithmic derivatives. All of the Raman modes below 200 cm-1 in cytidine are found to have high logarithmic derivatives, consistent with being either external modes or internal torsional or bending modes.
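
    Illustrative sketch (not from the original work): the snippet below computes the logarithmic pressure derivative (1/ω) dω/dP from mode frequencies measured at several pressures; the frequencies and pressures are hypothetical and merely illustrate the contrast between a stiff internal stretching mode and a soft external mode.

      import numpy as np

      P = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])                                   # pressure (GPa)
      omega_internal = np.array([1250.0, 1251.0, 1252.1, 1253.0, 1254.0, 1254.9])    # cm^-1, stretch
      omega_external = np.array([60.0, 63.5, 66.8, 69.8, 72.5, 75.0])                # cm^-1, lattice

      def logarithmic_derivative(omega, pressure):
          """(1/omega) d(omega)/dP near ambient pressure, from a linear fit."""
          slope = np.polyfit(pressure, omega, 1)[0]
          return slope / omega[0]

      print(logarithmic_derivative(omega_internal, P))   # small: internal stretching mode
      print(logarithmic_derivative(omega_external, P))   # large: external (lattice-type) mode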

  20. Electronic filters, signal conversion apparatus, hearing aids and methods

    NASA Technical Reports Server (NTRS)

    Morley, Jr., Robert E. (Inventor); Engebretson, A. Maynard (Inventor); Engel, George L. (Inventor); Sullivan, Thomas J. (Inventor)

    1994-01-01

    An electronic filter for filtering an electrical signal. Signal processing circuitry therein includes a logarithmic filter having a series of filter stages with inputs and outputs in cascade and respective circuits associated with the filter stages for storing electrical representations of filter parameters. The filter stages include circuits for respectively adding the electrical representations of the filter parameters to the electrical signal to be filtered thereby producing a set of filter sum signals. At least one of the filter stages includes circuitry for producing a filter signal in substantially logarithmic form at its output by combining a filter sum signal for that filter stage with a signal from an output of another filter stage. The signal processing circuitry produces an intermediate output signal, and a multiplexer connected to the signal processing circuit multiplexes the intermediate output signal with the electrical signal to be filtered so that the logarithmic filter operates as both a logarithmic prefilter and a logarithmic postfilter. Other electronic filters, signal conversion apparatus, electroacoustic systems, hearing aids and methods are also disclosed.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolodrubetz, Daniel W.; Pietrulewicz, Piotr; Stewart, Iain W.

    To predict the jet mass spectrum at a hadron collider it is crucial to account for the resummation of logarithms between the transverse momentum of the jet and its invariant mass m_J. For small jet areas there are additional large logarithms of the jet radius R, which affect the convergence of the perturbative series. We present an analytic framework for exclusive jet production at the LHC which gives a complete description of the jet mass spectrum including realistic jet algorithms and jet vetoes. It factorizes the scales associated with m_J, R, and the jet veto, enabling in addition the systematic resummation of jet radius logarithms in the jet mass spectrum beyond leading logarithmic order. We discuss the factorization formulae for the peak and tail region of the jet mass spectrum and for small and large R, and the relations between the different regimes and how to combine them. Regions of experimental interest are classified which do not involve large nonglobal logarithms. We also present universal results for nonperturbative effects and discuss various jet vetoes.

  2. High speed high dynamic range high accuracy measurement system

    DOEpatents

    Deibele, Craig E.; Curry, Douglas E.; Dickson, Richard W.; Xie, Zaipeng

    2016-11-29

    A measuring system includes an input that emulates a bandpass filter with no signal reflections. A directional coupler connected to the input passes the filtered input to electrically isolated measuring circuits. Each of the measuring circuits includes an amplifier that amplifies the signal through logarithmic functions. The output of the measuring system is an accurate high dynamic range measurement.

  3. Detrended fluctuation analysis of short datasets: An application to fetal cardiac data

    NASA Astrophysics Data System (ADS)

    Govindan, R. B.; Wilson, J. D.; Preißl, H.; Eswaran, H.; Campbell, J. Q.; Lowery, C. L.

    2007-02-01

    Using detrended fluctuation analysis (DFA) we perform scaling analysis of short datasets of length 500-1500 data points. We quantify the long range correlation (exponent α) by computing the mean value of the local exponents α_L (in the asymptotic regime). The local exponents are obtained as the (numerical) derivative of the logarithm of the fluctuation function F(s) with respect to the logarithm of the scale factor s: α_L = d log10 F(s) / d log10 s. These local exponents display huge variations and complicate the correct quantification of the underlying correlations. We propose the use of the phase randomized surrogate (PRS), which preserves the long range correlations of the original data, to minimize the variations in the local exponents. Using the numerically generated uncorrelated and long range correlated data, we show that performing DFA on several realizations of PRS and estimating α_L from the averaged fluctuation functions (of all realizations) can minimize the variations in α_L. The application of this approach to the fetal cardiac data (RR intervals) is discussed and we show that there is a statistically significant correlation between α and the gestation age.
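
    Illustrative sketch (not from the original work): the local exponents α_L can be computed from a fluctuation function as shown below; the fluctuation values are synthetic (a power law with scatter), not DFA output from cardiac data.

      import numpy as np

      def local_exponents(s, F):
          """Local exponents alpha_L = d log10 F(s) / d log10 s via a
          centered numerical derivative on the log-log curve."""
          return np.gradient(np.log10(F), np.log10(s))

      # Synthetic fluctuation function for long-range correlated data (alpha = 0.8).
      s = np.unique(np.logspace(0.7, 2.5, 25).astype(int)).astype(float)
      rng = np.random.default_rng(4)
      F = s ** 0.8 * np.exp(0.05 * rng.standard_normal(s.size))

      alpha_L = local_exponents(s, F)
      print(alpha_L.mean())   # estimate of the global exponent alpha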

  4. On a new coordinate system with astrophysical application: Spiral coordinates

    NASA Astrophysics Data System (ADS)

    Campos, L. M. B. C.; Gil, P. J. S.

    In this presentation spiral coordinates are introduced, which are a particular case of conformal coordinates, i.e. orthogonal curvilinear coordinates with equal scale factors along all coordinate axes. The spiral coordinates in the plane have as coordinate curves two families of logarithmic spirals, making constant angles, respectively phi and pi/2 - phi, with all radial lines, where phi is a parameter. They can be obtained from a complex function representing a spiral potential flow, due to the superposition of a source/sink with a vortex; the parameter phi in this case specifies the ratio of the mass flux of the source/sink to the circulation of the vortex. Regardless of hydrodynamical or other interpretations, spiral coordinates are particularly convenient in situations where physical quantities vary only along a logarithmic spiral. The example chosen is the propagation of Alfven waves along a logarithmic spiral, as an approximation to Parker's spiral. The equations of dissipative MHD are written in spiral coordinates, and eliminated to specify the Alfven wave equation in spiral coordinates; the latter is solved exactly in terms of Bessel functions, and the results are analyzed for values of the parameters corresponding to the solar wind.
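
    Illustrative sketch (not from the original work): the two families of coordinate curves can be generated as level lines of w = exp(-i*phi)*log(z), as shown below; the parameter value and the function name coord_curve are assumptions.

      import numpy as np

      phi = np.pi / 6                            # assumed spiral parameter
      theta = np.linspace(0.0, 4 * np.pi, 400)   # polar angle along the curve

      def coord_curve(c, theta, phi, family="real"):
          """Level curve of w = exp(-1j*phi) * log(z): the 'real' family
          (Re w = c) and the 'imag' family (Im w = c) are logarithmic
          spirals crossing every radial line at constant angles."""
          if family == "real":
              r = np.exp((c - np.sin(phi) * theta) / np.cos(phi))
          else:
              r = np.exp((np.cos(phi) * theta - c) / np.sin(phi))
          return r * np.exp(1j * theta)          # curve points in the complex plane

      z1 = coord_curve(0.0, theta, phi, "real")
      z2 = coord_curve(0.0, theta, phi, "imag")
      print(abs(z1[0]), abs(z2[0]))              # both curves pass through z = 1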

  5. Analytic Evolution of Singular Distribution Amplitudes in QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tandogan Kunkel, Asli

    2014-08-01

    Distribution amplitudes (DAs) are the basic functions that contain information about the quark momentum. DAs are necessary to describe hard exclusive processes in quantum chromodynamics. We describe a method of analytic evolution of DAs that have singularities such as nonzero values at the end points of the support region, jumps at some points inside the support region, and cusps. We illustrate the method by applying it to the evolution of a flat (constant) DA and an antisymmetric flat DA, and then use the method for the evolution of the two-photon generalized distribution amplitude. Our approach to DA evolution has advantages over the standard method of expansion in Gegenbauer polynomials [1, 2] and over a straightforward iteration of an initial distribution with the evolution kernel. Expansion in Gegenbauer polynomials requires an infinite number of terms in order to accurately reproduce functions in the vicinity of singular points. Straightforward iteration of an initial distribution produces logarithmically divergent terms at each iteration. In our method the logarithmic singularities are summed from the start, which immediately produces a continuous curve. Afterwards, in order to get precise results, only one or two iterations are needed.

  6. Lattice QCD Thermodynamics and RHIC-BES Particle Production within Generic Nonextensive Statistics

    NASA Astrophysics Data System (ADS)

    Tawfik, Abdel Nasser

    2018-05-01

    The current status of implementing Tsallis (nonextensive) statistics in high-energy physics is briefly reviewed. The remarkably low freezeout temperature, which apparently fails to reproduce the first-principle lattice QCD thermodynamics and the measured particle ratios, etc., is discussed. The present work suggests a novel interpretation for the so-called "Tsallis temperature". It is proposed that the low Tsallis temperature is due to incomplete implementation of Tsallis algebra through exponential and logarithmic functions in high-energy particle production. Substituting Tsallis algebra into the grand-canonical partition function of the hadron resonance gas model does not seem to assure full incorporation of nonextensivity or correlations in that model. The statistics describing the phase-space volume, the number of states and the possible changes in the elementary cells should rather be modified due to interacting correlated subsystems, of which the phase space consists. Alternatively, two asymptotic properties, each associated with a scaling function, are utilized to classify a generalized entropy for such a system with a large ensemble (produced particles) and strong correlations. Both scaling exponents define equivalence classes for all interacting and noninteracting systems and unambiguously characterize any statistical system in its thermodynamic limit. We conclude that the nature of lattice QCD simulations is apparently extensive and accordingly the Boltzmann-Gibbs statistics is fully fulfilled. Furthermore, we found that the ratios of various particle yields at extreme high and extreme low energies of RHIC-BES are likely nonextensive but not necessarily of Tsallis type.

  7. Wide-field fundus autofluorescence abnormalities and visual function in patients with cone and cone-rod dystrophies.

    PubMed

    Oishi, Maho; Oishi, Akio; Ogino, Ken; Makiyama, Yukiko; Gotoh, Norimoto; Kurimoto, Masafumi; Yoshimura, Nagahisa

    2014-05-20

    To evaluate the clinical utility of wide-field fundus autofluorescence (FAF) in patients with cone dystrophy and cone-rod dystrophy. Sixteen patients with cone dystrophy (CD) and 41 patients with cone-rod dystrophy (CRD) were recruited at one institution. The right eye of each patient was included for analysis. We obtained wide-field FAF images using an ultra-widefield retinal imaging device and measured the area of abnormal FAF. The association between the area of abnormal FAF and the results of visual acuity measurements, kinetic perimetry, and electroretinography (ERG) was investigated. The mean age of the participants was 51.4 ± 17.4 years, and the mean logarithm of the minimum angle of resolution was 1.00 ± 0.57. The area of abnormal FAF correlated with the scotoma measured by the Goldmann perimetry I/4e isopter (ρ = 0.79, P < 0.001). The area also correlated with the amplitudes of the rod ERG (ρ = -0.63, P < 0.001), combined ERG a-wave (ρ = -0.72, P < 0.001), combined ERG b-wave (ρ = -0.66, P < 0.001), cone ERG (ρ = -0.44, P = 0.001), and flicker ERG (ρ = -0.47, P < 0.001). The extent of abnormal FAF reflects the severity of functional impairment in patients with cone-dominant retinal dystrophies. Fundus autofluorescence measurements are useful for predicting retinal function in these patients. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.

  8. The proton FL dipole approximation in the KMR and the MRW unintegrated parton distribution functions frameworks

    NASA Astrophysics Data System (ADS)

    Modarres, M.; Masouminia, M. R.; Hosseinkhani, H.; Olanj, N.

    2016-01-01

    In the spirit of performing a complete phenomenological investigation of the merits of the Kimber-Martin-Ryskin (KMR) and Martin-Ryskin-Watt (MRW) unintegrated parton distribution functions (UPDF), we have computed the longitudinal structure function of the proton, FL(x, Q2), from the so-called dipole approximation, using the LO and NLO UPDF prepared in the respective frameworks. The preparation process utilizes the PDF of Martin et al., MSTW2008-LO and MSTW2008-NLO, as inputs. Afterwards, the numerical results undergo a series of comparisons: against the exact kt-factorization and the kt-approximate results derived from the work of Golec-Biernat and Stasto, against each other, and against the experimental data from the ZEUS and H1 Collaborations at HERA. Interestingly, our results show a much better agreement with the exact kt-factorization than with the kt-approximate outcome. In addition, our results are completely consistent with those prepared by embedding the KMR and MRW UPDF directly into the kt-factorization framework. One may point out that the FL prepared from the KMR UPDF shows a better agreement with the exact kt-factorization. This is despite the fact that the MRW formalism employs a better theoretical description of the DGLAP evolution equation and has an NLO expansion. Such an unexpected outcome arises from the different implementation of the angular ordering constraint in the KMR approach, which automatically includes the resummation of ln(1/x) (BFKL) logarithms in the LO DGLAP evolution equation.

  9. Virtual photon structure functions and the parton content of the electron

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drees, M.; Godbole, R.M.

    1994-09-01

    We point out that in processes involving the parton content of the photon the usual effective photon approximation should be modified. The reason is that the parton content of virtual photons is logarithmically suppressed compared to real photons. We describe this suppression using several simple, physically motivated Ansätze. Although the parton content of the electron in general no longer factorizes into an electron flux function and a photon structure function, it can still be expressed as a single integral. Numerical examples are given for e+e- ...

  10. A CMOS current-mode log(x) and log(1/x) functions generator

    NASA Astrophysics Data System (ADS)

    Al-Absi, Munir A.; Al-Tamimi, Karama M.

    2014-08-01

    A novel Complementary Metal Oxide Semiconductor (CMOS) current-mode low-voltage and low-power controllable logarithmic function circuit is presented. The proposed design utilises one Operational Transconductance Amplifier (OTA) and two PMOS transistors biased in the weak inversion region. The proposed design provides high dynamic range, controllable amplitude, and high accuracy, and is insensitive to temperature variations. The circuit operates on a ±0.6 V power supply and consumes 0.3 μW. The functionality of the proposed circuit was verified using HSPICE with 0.35 μm 2P4M CMOS process technology.

  11. Hybrid optimization and Bayesian inference techniques for a non-smooth radiation detection problem

    DOE PAGES

    Stefanescu, Razvan; Schmidt, Kathleen; Hite, Jason; ...

    2016-12-12

    In this paper, we propose several algorithms to recover the location and intensity of a radiation source located in a simulated 250 × 180 m block of an urban center based on synthetic measurements. Radioactive decay and detection are Poisson random processes, so we employ likelihood functions based on this distribution. Owing to the domain geometry and the proposed response model, the negative logarithm of the likelihood is only piecewise continuously differentiable, and it has multiple local minima. To address these difficulties, we investigate three hybrid algorithms composed of mixed optimization techniques. For global optimization, we consider simulated annealing, particle swarm, and genetic algorithms, which rely solely on objective function evaluations; that is, they do not evaluate the gradient of the objective function. By employing early stopping criteria for the global optimization methods, a pseudo-optimum point is obtained. This is subsequently utilized as the initial value by the deterministic implicit filtering method, which is able to find local extrema of non-smooth functions, to finish the search in a narrow domain. These new hybrid techniques, combining global optimization and implicit filtering, address difficulties associated with the non-smooth response, and their performance is shown to significantly decrease the computational time relative to the global optimization methods alone. To quantify uncertainties associated with the source location and intensity, we employ the delayed rejection adaptive Metropolis and DiffeRential Evolution Adaptive Metropolis algorithms. Finally, marginal densities of the source properties are obtained, and the means of the chains compare accurately with the estimates produced by the hybrid algorithms.

  12. A distributed algorithm for demand-side management: Selling back to the grid.

    PubMed

    Latifi, Milad; Khalili, Azam; Rastegarnia, Amir; Zandi, Sajad; Bazzi, Wael M

    2017-11-01

    Demand side energy consumption scheduling is a well-known issue in the smart grid research area. However, there is a lack of a comprehensive method to manage the demand side and consumer behavior in order to obtain an optimum solution. The method needs to address several aspects, including the scale-free requirement and distributed nature of the problem, consideration of renewable resources, allowing consumers to sell electricity back to the main grid, and adaptivity to a local change in the solution point. In addition, the model should allow compensation to consumers and ensure certain satisfaction levels. To tackle these issues, this paper proposes a novel autonomous demand side management technique which minimizes consumer utility costs and maximizes consumer comfort levels in a fully distributed manner. The technique uses a new logarithmic cost function and allows consumers to sell excess electricity (e.g. from renewable resources) back to the grid in order to reduce their electric utility bill. To develop the proposed scheme, we first formulate the problem as a constrained convex minimization problem. Then, it is converted to an unconstrained version using the segmentation-based penalty method. At each consumer location, we deploy an adaptive diffusion approach to obtain the solution in a distributed fashion. The use of adaptive diffusion makes it possible for consumers to find the optimum energy consumption schedule with a small number of information exchanges. Moreover, the proposed method is able to track drifts resulting from changes in the price parameters and consumer preferences. Simulations and numerical results show that our framework can reduce the total load demand peaks, lower the consumer utility bill, and improve the consumer comfort level.
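
    A minimal single-consumer sketch of this kind of logarithmic scheduling objective follows. The comfort term, price, renewable generation, and box constraint are all illustrative assumptions; the paper's exact cost function, segmentation-based penalty, and adaptive-diffusion updates are not reproduced here.

      import numpy as np

      # Hypothetical per-consumer objective: price * (x - renewables) - w * log(1 + x),
      # where the logarithmic term models diminishing comfort from extra consumption x
      # and a negative net load (x - renewables) means energy is sold back to the grid.
      def grad(x, price, w):
          return price - w / (1.0 + x)

      def schedule(price, w, x0, lr=0.5, steps=500, x_max=5.0):
          """Toy projected-gradient scheduling for one consumer; a distributed
          (diffusion) variant would additionally average iterates with neighbours."""
          x = float(x0)
          for _ in range(steps):
              x -= lr * grad(x, price, w)
              x = min(max(x, 0.0), x_max)      # keep consumption within feasible bounds
          return x

      if __name__ == "__main__":
          renewables = 3.0                     # hypothetical local renewable generation
          x_opt = schedule(price=0.30, w=1.0, x0=1.0)
          print("scheduled consumption:", round(x_opt, 3))
          print("net load (negative = sold back):", round(x_opt - renewables, 3))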

  13. Quantum loop corrections of a charged de Sitter black hole

    NASA Astrophysics Data System (ADS)

    Naji, J.

    2018-03-01

    A charged black hole in de Sitter (dS) space is considered, and a logarithmic-corrected entropy is used to study its thermodynamics. Logarithmic corrections of the entropy come from thermal fluctuations, which play the role of quantum loop corrections. In that case we are able to study the effect of quantum loops on black hole thermodynamics and statistics. As a black hole is a gravitational object, this helps to obtain some information about quantum gravity. The first and second laws of thermodynamics are investigated for the logarithmic-corrected case, and we find that they are valid only for the charged dS black hole. We show that the black hole phase transition disappears in the presence of the logarithmic correction.

  14. [Spectral reflectance characteristics and modeling of typical Takyr Solonetzs water content].

    PubMed

    Zhang, Jun-hua; Jia, Ke-li

    2015-03-01

    Based on an analysis of the spectral reflectance of the typical Takyr Solonetzs soil in Ningxia, the relationship between soil water content and spectral reflectance was determined, and a quantitative model for the prediction of soil water content was constructed. The results showed that soil spectral reflectance decreased with increasing soil water content when the water content was below the water holding capacity, but increased with increasing soil water content when it was higher than the water holding capacity. Soil water content presented a significantly negative correlation with the original reflectance (r), smoothed reflectance (R), and logarithm of reflectance (lgR), and a positive correlation with the reciprocal of R (1/R) and the logarithm of the reciprocal [lg(1/R)]. The correlation coefficient of soil water content with R over the whole wavelength range was 0.0013 and 0.0397 higher than that with r and lgR, respectively. The average correlation coefficient of soil water content with 1/R and lg(1/R) at wavelengths of 950-1000 nm was 0.2350 higher than that at 400-950 nm. The relationships of soil water content with the first derivative (R'), the first derivative of the logarithm (lgR)', and the first derivative of the logarithm of the reciprocal [lg(1/R)]' were unstable. Based on the coefficients of r, lg(1/R), R' and (lgR)', different regression models were established to predict soil water content, and the coefficients of determination were 0.7610, 0.8184, 0.8524 and 0.8255, respectively. The coefficient of determination for the power function model of R' reached 0.9447, while the fitting degree between the value predicted by this model and the on-site measured value was 0.8279. The model based on R' had the highest fitting accuracy, while that based on r had the lowest. The results could provide a scientific basis for soil water content prediction and field irrigation in the Takyr Solonetzs region.
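
    For readers wanting to reproduce the modelling step, the sketch below fits a power-function model w = a·x^b by least squares on the log-log scale, as one would for the first-derivative reflectance predictor R'. The data are synthetic placeholders, not the paper's measurements.

      import numpy as np

      # Hedged sketch: fit w = a * x**b by ordinary least squares in log-log space.
      rng = np.random.default_rng(0)
      x = np.linspace(0.01, 0.10, 30)                     # e.g. first-derivative reflectance R'
      w_true = 4.0 * x**0.6                               # hypothetical "true" water-content relation
      w = w_true * np.exp(rng.normal(0.0, 0.03, x.size))  # multiplicative noise

      # ln(w) = ln(a) + b * ln(x): a straight-line fit in log space.
      b, ln_a = np.polyfit(np.log(x), np.log(w), 1)
      a = np.exp(ln_a)

      w_hat = a * x**b
      r2 = 1.0 - np.sum((w - w_hat) ** 2) / np.sum((w - w.mean()) ** 2)
      print(f"a = {a:.3f}, b = {b:.3f}, R^2 = {r2:.3f}")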

  15. Logarithmic spiral trajectories generated by Solar sails

    NASA Astrophysics Data System (ADS)

    Bassetto, Marco; Niccolai, Lorenzo; Quarta, Alessandro A.; Mengali, Giovanni

    2018-02-01

    Analytic solutions to continuous thrust-propelled trajectories are available in a few cases only. An interesting case is offered by the logarithmic spiral, that is, a trajectory characterized by a constant flight path angle and a fixed thrust vector direction in an orbital reference frame. The logarithmic spiral is important from a practical point of view, because it may be passively maintained by a Solar sail-based spacecraft. The aim of this paper is to provide a systematic study concerning the possibility of inserting a Solar sail-based spacecraft into a heliocentric logarithmic spiral trajectory without using any impulsive maneuver. The required conditions to be met by the sail in terms of attitude angle, propulsive performance, parking orbit characteristics, and initial position are thoroughly investigated. The closed-form variations of the osculating orbital parameters are analyzed, and the obtained analytical results are used for investigating the phasing maneuver of a Solar sail along an elliptic heliocentric orbit. In this mission scenario, the phasing orbit is composed of two symmetric logarithmic spiral trajectories connected with a coasting arc.
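
    Since the defining property quoted above is a constant flight path angle, the spiral radius obeys dr/(r dθ) = tan γ, i.e. r(θ) = r0·exp(θ·tan γ). The short sketch below generates such a trajectory; the starting radius and the 5° flight path angle are illustrative assumptions, not sail performance figures from the paper.

      import numpy as np

      # Logarithmic spiral with constant flight path angle gamma:
      # dr/(r dtheta) = tan(gamma)  =>  r(theta) = r0 * exp(theta * tan(gamma)).
      def log_spiral(r0_au, gamma_deg, theta):
          return r0_au * np.exp(theta * np.tan(np.radians(gamma_deg)))

      theta = np.linspace(0.0, 4.0 * np.pi, 400)     # two revolutions
      r = log_spiral(1.0, 5.0, theta)                # start at 1 au, 5-degree flight path angle
      print(f"radius after two revolutions: {r[-1]:.2f} au")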

  16. Electronic filters, repeated signal charge conversion apparatus, hearing aids and methods

    NASA Technical Reports Server (NTRS)

    Morley, Jr., Robert E. (Inventor); Engebretson, A. Maynard (Inventor); Engel, George L. (Inventor); Sullivan, Thomas J. (Inventor)

    1993-01-01

    An electronic filter for filtering an electrical signal. Signal processing circuitry therein includes a logarithmic filter having a series of filter stages with inputs and outputs in cascade and respective circuits associated with the filter stages for storing electrical representations of filter parameters. The filter stages include circuits for respectively adding the electrical representations of the filter parameters to the electrical signal to be filtered thereby producing a set of filter sum signals. At least one of the filter stages includes circuitry for producing a filter signal in substantially logarithmic form at its output by combining a filter sum signal for that filter stage with a signal from an output of another filter stage. The signal processing circuitry produces an intermediate output signal, and a multiplexer connected to the signal processing circuit multiplexes the intermediate output signal with the electrical signal to be filtered so that the logarithmic filter operates as both a logarithmic prefilter and a logarithmic postfilter. Other electronic filters, signal conversion apparatus, electroacoustic systems, hearing aids and methods are also disclosed.

  17. Laser induced phosphorescence uranium analysis

    DOEpatents

    Bushaw, B.A.

    1983-06-10

    A method is described for measuring the uranium content of aqueous solutions wherein a uranyl phosphate complex is irradiated with a 5 nanosecond pulse of 425 nanometer laser light and resultant 520 nanometer emissions are observed for a period of 50 to 400 microseconds after the pulse. Plotting the natural logarithm of emission intensity as a function of time yields an intercept value which is proportional to uranium concentration.
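
    The data reduction described in this method is a straight-line fit of ln(intensity) versus time. The sketch below shows that step on synthetic single-exponential decay data (the lifetime, observation window, and noise level are invented for illustration); the fitted intercept is the quantity that the method relates to uranium concentration.

      import numpy as np

      # Assume single-exponential phosphorescence decay I(t) = I0 * exp(-t/tau), so that
      # ln I(t) is linear in t and the intercept ln(I0) is recovered by a line fit.
      rng = np.random.default_rng(1)
      t_us = np.linspace(50.0, 400.0, 50)                    # observation window, microseconds
      I0, tau_us = 1.0e4, 150.0
      I = I0 * np.exp(-t_us / tau_us) * (1.0 + rng.normal(0.0, 0.02, t_us.size))

      slope, intercept = np.polyfit(t_us, np.log(I), 1)
      print(f"fitted lifetime: {-1.0/slope:.1f} us, intercept ln(I0) = {intercept:.2f}")
      # In the assay described above, the intercept is the value related to uranium concentration.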

  18. Space Mathematics: A Resource for Secondary School Teachers

    NASA Technical Reports Server (NTRS)

    Kastner, Bernice

    1985-01-01

    A collection of mathematical problems related to NASA space science projects is presented. In developing the examples and problems, attention was given to preserving the authenticity and significance of the original setting while keeping the level of mathematics within the secondary school curriculum. Computation and measurement, algebra, geometry, probability and statistics, exponential and logarithmic functions, trigonometry, matrix algebra, conic sections, and calculus are among the areas addressed.

  19. International Workshop on Discrete Time Domain Modelling of Electromagnetic Fields and Networks (2nd) Held in Berlin, Germany on October 28-29, 1993

    DTIC Science & Technology

    1993-10-29

    natural logarithm of the ratio of two maxima a period apart. Both methods are based on the results from the numerical integration. The details of this...check and okay member functions are for software handshaking between the client and server process. Finally, the Forward function is used to initiate a

  20. Laser induced phosphorescence uranium analysis

    DOEpatents

    Bushaw, Bruce A.

    1986-01-01

    A method is described for measuring the uranium content of aqueous solutions wherein a uranyl phosphate complex is irradiated with a 5 nanosecond pulse of 425 nanometer laser light and resultant 520 nanometer emissions are observed for a period of 50 to 400 microseconds after the pulse. Plotting the natural logarithm of emission intensity as a function of time yields an intercept value which is proportional to uranium concentration.

  1. Analog optical computing primitives in silicon photonics

    DOE PAGES

    Jiang, Yunshan; DeVore, Peter T. S.; Jalali, Bahram

    2016-03-15

    Optical computing accelerators help alleviate bandwidth and power consumption bottlenecks in electronics. In this paper, we show an approach to implementing logarithmic-type analog co-processors in silicon photonics and use it to perform the exponentiation operation and the recovery of a signal in the presence of multiplicative distortion. Finally, the function is realized by exploiting nonlinear-absorption-enhanced Raman amplification saturation in a silicon waveguide.
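
    The operations named above have a simple numerical analogue: exponentiation performed in the log domain, and removal of a multiplicative distortion by subtracting logarithms. The sketch below mimics those primitives with floating-point arithmetic only; it is not a model of the silicon-photonic implementation.

      import numpy as np

      # Exponentiation via the log domain: x**a == exp(a * log(x)) for x > 0.
      x = np.linspace(0.1, 2.0, 5)
      a = 0.5
      print(np.allclose(np.exp(a * np.log(x)), x**a))

      # Recovery of a signal under multiplicative distortion by log subtraction.
      signal = np.array([1.0, 2.0, 4.0, 8.0])
      distortion = np.array([0.5, 1.5, 0.8, 1.2])            # hypothetical multiplicative distortion
      measured = signal * distortion
      recovered = np.exp(np.log(measured) - np.log(distortion))
      print(np.allclose(recovered, signal))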

  2. Performance analysis of 60-min to 1-min integration time rain rate conversion models in Malaysia

    NASA Astrophysics Data System (ADS)

    Ng, Yun-Yann; Singh, Mandeep Singh Jit; Thiruchelvam, Vinesh

    2018-01-01

    Utilizing the frequency bands above 10 GHz is in focus nowadays as a result of the fast expansion of radio communication systems in Malaysia. However, rain fade is the critical factor in the attenuation of signal propagation at frequencies above 10 GHz. Malaysia is located in a tropical and equatorial region with high rain intensity throughout the year, and this study reviews rain distribution and evaluates the performance of 60-min to 1-min integration time rain rate conversion methods for Malaysia. Several conversion methods, such as Segal, Chebil & Rahman, Burgeono, Emiliani, Lavergnat and Gole (LG), Simplified Moupfouma, Joo et al., a fourth-order polynomial fit and a logarithmic model, were chosen and their ability to predict the 1-min rain rate was evaluated for 10 sites in Malaysia. The results show that the Chebil & Rahman model, the Lavergnat & Gole model, the fourth-order polynomial fit and the logarithmic model give the best performance in 60-min to 1-min rain rate conversion over the 10 sites. No single model, however, performs best across all 10 sites. When RMSE and SC-RMSE are averaged over the 10 sites, the Chebil & Rahman model is the best method.

  3. Compact exponential product formulas and operator functional derivative

    NASA Astrophysics Data System (ADS)

    Suzuki, Masuo

    1997-02-01

    A new scheme for deriving compact expressions of the logarithm of the exponential product is proposed and it is applied to several exponential product formulas. A generalization of the Dynkin-Specht-Wever (DSW) theorem on free Lie elements is given, and it is used to study the relation between the traditional method (based on the DSW theorem) and the present new scheme. The concept of the operator functional derivative is also proposed, and it is applied to ordered exponentials, such as time-evolution operators for time-dependent Hamiltonians.

  4. A class of nonideal solutions. 1: Definition and properties

    NASA Technical Reports Server (NTRS)

    Zeleznik, F. J.

    1983-01-01

    A class of nonideal solutions is defined by constructing a function to represent the composition dependence of thermodynamic properties for members of the class, and some properties of these solutions are studied. The constructed function has several useful features: (1) its parameters occur linearly; (2) it contains a logarithmic singularity in the dilute solution region and contains ideal solutions and regular solutions as special cases; and (3) it is applicable to N-ary systems and reduces to M-ary systems (M ≤ N) in a form-invariant manner.

  5. Transistor circuit increases range of logarithmic current amplifier

    NASA Technical Reports Server (NTRS)

    Gilmour, G.

    1966-01-01

    Circuit increases the range of a logarithmic current amplifier by combining a commercially available amplifier with a silicon epitaxial transistor. A temperature compensating network is provided for the transistor.

  6. Finite-difference interblock transmissivity for unconfined aquifers and for aquifers having smoothly varying transmissivity

    USGS Publications Warehouse

    Goode, D.J.; Appel, C.A.

    1992-01-01

    More accurate alternatives to the widely used harmonic mean interblock transmissivity are proposed for block-centered finite-difference models of ground-water flow in unconfined aquifers and in aquifers having smoothly varying transmissivity. The harmonic mean is the exact interblock transmissivity for steady-state one-dimensional flow with no recharge if the transmissivity is assumed to be spatially uniform over each finite-difference block, changing abruptly at the block interface. However, the harmonic mean may be inferior to other means if transmissivity varies in a continuous or smooth manner between nodes. Alternative interblock transmissivity functions are analytically derived for the case of steady-state one-dimensional flow with no recharge. The second author has previously derived the exact interblock transmissivity, the logarithmic mean, for one-dimensional flow when transmissivity is a linear function of distance in the direction of flow. We show that the logarithmic mean transmissivity is also exact for uniform flow parallel to the direction of changing transmissivity in a two- or three-dimensional model, regardless of grid orientation relative to the flow vector. For the case of horizontal flow in a homogeneous unconfined or water-table aquifer with a horizontal bottom and with areally distributed recharge, the exact interblock transmissivity is the unweighted arithmetic mean of transmissivity at the nodes. This mean also exhibits no grid-orientation effect for unidirectional flow in a two-dimensional model. For horizontal flow in an unconfined aquifer with no recharge where hydraulic conductivity is a linear function of distance in the direction of flow the exact interblock transmissivity is the product of the arithmetic mean saturated thickness and the logarithmic mean hydraulic conductivity. For several hypothetical two- and three-dimensional cases with smoothly varying transmissivity or hydraulic conductivity, the harmonic mean is shown to yield the least accurate solution to the flow equation of the alternatives considered. Application of the alternative interblock transmissivities to a regional aquifer system model indicates that the changes in computed heads and fluxes are typically small, relative to model calibration error. For this example, the use of alternative interblock transmissivities resulted in an increase in computational effort of less than 3 percent. Numerical algorithms to compute alternative interblock transmissivity functions in a modular three-dimensional flow model are presented and documented.
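
    The three interblock transmissivities compared above are easy to state for two node values T1 and T2 on equal grid spacing (the distance-weighted forms for unequal spacing are omitted). A brief sketch, with illustrative values:

      import numpy as np

      def harmonic_mean(t1, t2):
          return 2.0 * t1 * t2 / (t1 + t2)

      def logarithmic_mean(t1, t2):
          # Exact interblock value when transmissivity varies linearly between nodes.
          if np.isclose(t1, t2):
              return 0.5 * (t1 + t2)        # limit as T1 -> T2
          return (t1 - t2) / np.log(t1 / t2)

      def arithmetic_mean(t1, t2):
          return 0.5 * (t1 + t2)

      if __name__ == "__main__":
          t1, t2 = 10.0, 100.0              # m^2/day, illustrative values
          print("harmonic   :", round(harmonic_mean(t1, t2), 2))
          print("logarithmic:", round(logarithmic_mean(t1, t2), 2))
          print("arithmetic :", round(arithmetic_mean(t1, t2), 2))
          # The harmonic mean is the smallest of the three, consistent with it giving
          # the least accurate solution when transmissivity varies smoothly.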

  7. Logarithmic corrections to black hole entropy from Kerr/CFT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pathak, Abhishek; Porfyriadis, Achilleas P.; Strominger, Andrew

    It has been shown by A. Sen that logarithmic corrections to the black hole area-entropy law are entirely determined macroscopically from the massless particle spectrum. They therefore serve as powerful consistency checks on any proposed enumeration of quantum black hole microstates. Furthermore, Sen’s results include a macroscopic computation of the logarithmic corrections for a five-dimensional near extremal Kerr-Newman black hole. We compute these corrections microscopically using a stringy embedding of the Kerr/CFT correspondence and find perfect agreement.

  8. Logarithmic corrections to black hole entropy from Kerr/CFT

    DOE PAGES

    Pathak, Abhishek; Porfyriadis, Achilleas P.; Strominger, Andrew; ...

    2017-04-14

    It has been shown by A. Sen that logarithmic corrections to the black hole area-entropy law are entirely determined macroscopically from the massless particle spectrum. They therefore serve as powerful consistency checks on any proposed enumeration of quantum black hole microstates. Furthermore, Sen’s results include a macroscopic computation of the logarithmic corrections for a five-dimensional near extremal Kerr-Newman black hole. We compute these corrections microscopically using a stringy embedding of the Kerr/CFT correspondence and find perfect agreement.

  9. Gene ercA, encoding a putative iron-containing alcohol dehydrogenase, is involved in regulation of ethanol utilization in Pseudomonas aeruginosa.

    PubMed

    Hempel, Niels; Görisch, Helmut; Mern, Demissew S

    2013-09-01

    Several two-component regulatory systems are known to be involved in the signal transduction pathway of the ethanol oxidation system in Pseudomonas aeruginosa ATCC 17933. These sensor kinases and response regulators are organized in a hierarchical manner. In addition, a cytoplasmic putative iron-containing alcohol dehydrogenase (Fe-ADH) encoded by ercA (PA1991) has been identified to play an essential role in this regulatory network. The gene ercA (PA1991) is located next to ercS, which encodes a sensor kinase. Inactivation of ercA (PA1991) by insertion of a kanamycin resistance cassette created mutant NH1. NH1 showed poor growth on various alcohols. On ethanol, NH1 grew only with an extremely extended lag phase. During the induction period on ethanol, transcription of structural genes exa and pqqABCDEH, encoding components of initial ethanol oxidation in P. aeruginosa, was drastically reduced in NH1, which indicates the regulatory function of ercA (PA1991). However, transcription in the extremely delayed logarithmic growth phase was comparable to that in the wild type. To date, the involvement of an Fe-ADH in signal transduction processes has not been reported.

  10. Gene ercA, Encoding a Putative Iron-Containing Alcohol Dehydrogenase, Is Involved in Regulation of Ethanol Utilization in Pseudomonas aeruginosa

    PubMed Central

    Hempel, Niels; Görisch, Helmut

    2013-01-01

    Several two-component regulatory systems are known to be involved in the signal transduction pathway of the ethanol oxidation system in Pseudomonas aeruginosa ATCC 17933. These sensor kinases and response regulators are organized in a hierarchical manner. In addition, a cytoplasmic putative iron-containing alcohol dehydrogenase (Fe-ADH) encoded by ercA (PA1991) has been identified to play an essential role in this regulatory network. The gene ercA (PA1991) is located next to ercS, which encodes a sensor kinase. Inactivation of ercA (PA1991) by insertion of a kanamycin resistance cassette created mutant NH1. NH1 showed poor growth on various alcohols. On ethanol, NH1 grew only with an extremely extended lag phase. During the induction period on ethanol, transcription of structural genes exa and pqqABCDEH, encoding components of initial ethanol oxidation in P. aeruginosa, was drastically reduced in NH1, which indicates the regulatory function of ercA (PA1991). However, transcription in the extremely delayed logarithmic growth phase was comparable to that in the wild type. To date, the involvement of an Fe-ADH in signal transduction processes has not been reported. PMID:23813731

  11. Parameter identification of JONSWAP spectrum acquired by airborne LIDAR

    NASA Astrophysics Data System (ADS)

    Yu, Yang; Pei, Hailong; Xu, Chengzhong

    2017-12-01

    In this study, we developed the first linear Joint North Sea Wave Project (JONSWAP) spectrum (JS), which involves a transformation of the JS solution to the natural logarithmic scale. This transformation is convenient for defining the least squares function in terms of the scale and shape parameters. We identified these two wind-dependent parameters to better understand the wind effect on surface waves. Owing to its efficiency and high resolution, we employed the airborne Light Detection and Ranging (LIDAR) system for our measurements. In the absence of actual data, we simulated ocean waves in the MATLAB environment, which can easily be translated into an industrial programming language. We utilized the Longuet-Higgins (LH) random-phase method to generate the time series of wave records and used the fast Fourier transform (FFT) technique to compute the power spectral density. After validating these procedures, we identified the JS parameters by minimizing the mean-square error between the target spectrum and the spectrum estimated by FFT. We determined that the estimation error is related to the amount of available wave record data. Finally, we found the inverse computation of the wind factors (wind speed and wind fetch length) to be robust and sufficiently precise for wave forecasting.
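
    Although the record describes a MATLAB implementation, the simulation-and-estimation pipeline is easy to sketch in Python: superpose cosines with random phases drawn from a target spectrum (the Longuet-Higgins construction), then recover the spectrum with an FFT periodogram. The target spectrum below is a simple single-peaked placeholder, not the JONSWAP form, and all parameter values are illustrative.

      import numpy as np

      rng = np.random.default_rng(42)
      dt, n = 0.5, 2048                                  # sampling interval (s), record length
      f = np.fft.rfftfreq(n, dt)[1:]                     # positive FFT frequencies (Hz)
      fp = 0.1                                           # placeholder peak frequency (Hz)
      S = f**-5 * np.exp(-1.25 * (fp / f)**4)            # placeholder target spectrum (not JONSWAP)
      df = f[1] - f[0]

      # Longuet-Higgins superposition: deterministic amplitudes, uniform random phases.
      t = np.arange(n) * dt
      amp = np.sqrt(2.0 * S * df)
      phases = rng.uniform(0.0, 2.0 * np.pi, f.size)
      eta = np.sum(amp[:, None] * np.cos(2.0 * np.pi * f[:, None] * t[None, :] + phases[:, None]),
                   axis=0)

      # One-sided periodogram (FFT-based) of the synthetic wave record.
      S_est = 2.0 * np.abs(np.fft.rfft(eta))**2 * dt / n
      print("target peak frequency    (Hz):", f[np.argmax(S)])
      print("estimated peak frequency (Hz):", f[np.argmax(S_est[1:])])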

  12. Directed Design of Experiments for Validating Probability of Detection Capability of NDE Systems (DOEPOD)

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.

    2015-01-01

    Directed Design of Experiments for Validating Probability of Detection Capability of NDE Systems (DOEPOD), Manual v.1.2. The capability of an inspection system is established by applying various methodologies to determine the probability of detection (POD). One accepted metric of an adequate inspection system is that there is 95% confidence that the POD is greater than 90% (90/95 POD). Design of experiments for validating probability of detection capability of nondestructive evaluation (NDE) systems (DOEPOD) is a methodology implemented via software to serve as a diagnostic tool providing detailed analysis of POD test data, guidance on establishing data distribution requirements, and resolution of test issues. DOEPOD relies on the observation of occurrences. The DOEPOD capability has been developed to provide an efficient and accurate methodology that yields observed POD and confidence bounds for both hit/miss and signal amplitude testing. DOEPOD does not assume prescribed logarithmic or similar POD functions with assumed adequacy over a wide range of flaw sizes and inspection system technologies, so multi-parameter curve fitting or model optimization approaches to generate a POD curve are not required. DOEPOD applications for supporting inspector qualifications are included.
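
    As a generic illustration of the 90/95 criterion mentioned above (and not of the DOEPOD algorithm itself), the sketch below computes an exact one-sided Clopper-Pearson lower confidence bound on POD from hit/miss counts; the hit/trial pairs are illustrative.

      from scipy.stats import beta

      # Exact one-sided lower confidence bound on POD from h hits out of n trials.
      def pod_lower_bound(hits, trials, confidence=0.95):
          if hits == 0:
              return 0.0
          return beta.ppf(1.0 - confidence, hits, trials - hits + 1)

      if __name__ == "__main__":
          for hits, trials in [(29, 29), (28, 29), (45, 46)]:
              lb = pod_lower_bound(hits, trials)
              print(f"{hits}/{trials}: 95% lower bound on POD = {lb:.3f}",
                    "(meets 90/95)" if lb >= 0.90 else "(does not meet 90/95)")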

  13. Determination of molecular mass values of chondroitin sulfates by fluorophore-assisted carbohydrate electrophoresis (FACE).

    PubMed

    Buzzega, Dania; Maccari, Francesca; Volpi, Nicola

    2010-03-11

    Fluorophore-assisted carbohydrate electrophoresis (FACE) was applied to determine the molecular mass (M) values of various chondroitin sulfate (CS) samples. After labeling with 8-aminonaphthalene-1,3,6-trisulfonic acid (ANTS), FACE was able to resolve each CS sample as a discrete band depending on its M value. After densitometric acquisition, the migration distance of each CS standard was measured, and a third-order polynomial calibration curve was determined by plotting the logarithms of the M values as a function of migration ratio. Purified CS samples of different origin and the European Pharmacopeia CS standard were analyzed by both FACE and conventional high-performance size-exclusion liquid chromatography (HPSEC) methods. The molecular weight value at the top of the chromatographic peak (M(p)), the number-average M(n), weight-average M(w), and polydispersity (M(w)/M(n)) were examined by both techniques and found to be quite similar. This study demonstrates that FACE analysis is a suitable, sensitive and simple method for the determination of the M values of CS macromolecules, with possible utilization in virtually any kind of research and development setting, such as quality control laboratories. Copyright 2009 Elsevier B.V. All rights reserved.
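
    A sketch of the calibration step described above: fit a third-order polynomial to log10(M) versus migration ratio for the standards, then read unknowns off the curve. The standard values below are invented placeholders, not the paper's data.

      import numpy as np

      mig_ratio = np.array([0.25, 0.35, 0.45, 0.55, 0.65, 0.75])   # hypothetical migration ratios
      M_std = np.array([60e3, 42e3, 30e3, 21e3, 15e3, 10e3])       # hypothetical standard masses (Da)

      coeffs = np.polyfit(mig_ratio, np.log10(M_std), 3)            # cubic calibration curve
      calib = np.poly1d(coeffs)

      unknown_ratio = 0.50
      print(f"estimated M of unknown: {10**calib(unknown_ratio):.0f} Da")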

  14. Differential depuration of poliovirus, Escherichia coli, and a coliphage by the common mussel, Mytilus edulis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Power, U.F.; Collins, J.K.

    1989-06-01

    The elimination of sewage effluent-associated poliovirus, Escherichia coli, and a 22-nm icosahedral coliphage by the common mussel, Mytilus edulis, was studied. Both laboratory- and commercial-scale recirculating UV depuration systems were used in this study. In the laboratory system, the logarithms of the poliovirus, E. coli, and coliphage levels were reduced by 1.86, 2.9, and 2.16, respectively, within 52 h of depuration. The relative patterns and rates of elimination of the three organisms suggest that they are eliminated from mussels by different mechanisms during depuration under suitable conditions. Poliovirus was not included in experiments undertaken in the commercial-scale depuration system. The differences in the relative rates and patterns of elimination were maintained for E. coli and coliphage in this system, with the logarithm of the E. coli levels being reduced by 3.18 and the logarithm of the coliphage levels being reduced by 0.87. The results from both depuration systems suggest that E. coli is an inappropriate indicator of the efficiency of virus elimination during depuration. The coliphage used appears to be a more representative indicator. Depuration under stressful conditions appeared to have a negligible effect on poliovirus and coliphage elimination rates from mussels. However, the rate and pattern of E. coli elimination were dramatically affected by these conditions. Therefore, monitoring E. coli counts might prove useful in ensuring that mussels are functioning well during depuration.

  15. Nonlinear isochrones in murine left ventricular pressure-volume loops: how well does the time-varying elastance concept hold?

    PubMed

    Claessens, T E; Georgakopoulos, D; Afanasyeva, M; Vermeersch, S J; Millar, H D; Stergiopulos, N; Westerhof, N; Verdonck, P R; Segers, P

    2006-04-01

    The linear time-varying elastance theory is frequently used to describe the change in ventricular stiffness during the cardiac cycle. The concept assumes that all isochrones (i.e., curves that connect pressure-volume data occurring at the same time) are linear and have a common volume intercept. Of specific interest is the steepest isochrone, the end-systolic pressure-volume relationship (ESPVR), of which the slope serves as an index for cardiac contractile function. Pressure-volume measurements, achieved with a combined pressure-conductance catheter in the left ventricle of 13 open-chest anesthetized mice, showed a marked curvilinearity of the isochrones. We therefore analyzed the shape of the isochrones by using six regression algorithms (two linear, two quadratic, and two logarithmic, each with a fixed or time-varying intercept) and discussed the consequences for the elastance concept. Our main observations were 1) the volume intercept varies considerably with time; 2) isochrones are equally well described by using quadratic or logarithmic regression; 3) linear regression with a fixed intercept shows poor correlation (R² < 0.75) during isovolumic relaxation and early filling; and 4) logarithmic regression is superior in estimating the fixed volume intercept of the ESPVR. In conclusion, the linear time-varying elastance fails to provide a sufficiently robust model to account for changes in pressure and volume during the cardiac cycle in the mouse ventricle. A new framework accounting for the nonlinear shape of the isochrones needs to be developed.

  16. Wetting in a phase separating polymer blend film: quench depth dependence

    PubMed

    Geoghegan; Ermer; Jungst; Krausch; Brenn

    2000-07-01

    We have used ³He nuclear reaction analysis to measure the growth of the wetting layer as a function of immiscibility (quench depth) in blends of deuterated polystyrene and poly(alpha-methylstyrene) undergoing surface-directed spinodal decomposition. We are able to identify three different laws for the surface layer growth with time t. For the deepest quenches, the forces driving phase separation dominate (high thermal noise) and the surface layer grows with a t^(1/3) coarsening behavior. For shallower quenches, a logarithmic behavior is observed, indicative of a low noise system. The crossover from logarithmic growth to t^(1/3) behavior is close to where a wetting transition should occur. We also discuss the possibility of a "plating transition" extending complete wetting to deeper quenches by comparing the surface field with thermal noise. For the shallowest quench, a critical blend exhibits a t^(1/2) behavior. We believe this surface layer growth is driven by the curvature of domains at the surface and shows how the wetting layer forms in the absence of thermal noise. This suggestion is reinforced by a slower growth at later times, indicating that the surface domains have coalesced. Atomic force microscopy measurements in each of the different regimes further support the above. The surface in the region of t^(1/3) growth is initially somewhat rougher than that in the regime of logarithmic growth, indicating the existence of droplets at the surface.

  17. Hydrodynamics of confined colloidal fluids in two dimensions

    NASA Astrophysics Data System (ADS)

    Sané, Jimaan; Padding, Johan T.; Louis, Ard A.

    2009-05-01

    We apply a hybrid molecular dynamics and mesoscopic simulation technique to study the dynamics of two-dimensional colloidal disks in confined geometries. We calculate the velocity autocorrelation functions and observe the predicted t^(-1) long-time hydrodynamic tail that characterizes unconfined fluids, as well as more complex oscillating behavior and negative tails for strongly confined geometries. Because the t^(-1) tail of the velocity autocorrelation function is cut off for longer times in finite systems, the related diffusion coefficient does not diverge but instead depends logarithmically on the overall size of the system. The Langevin equation gives a poor approximation to the velocity autocorrelation function at both short and long times.

  18. Multilayer neural networks with extensively many hidden units.

    PubMed

    Rosen-Zvi, M; Engel, A; Kanter, I

    2001-08-13

    The information processing abilities of a multilayer neural network with a number of hidden units scaling as the input dimension are studied using statistical mechanics methods. The mapping from the input layer to the hidden units is performed by general symmetric Boolean functions, whereas the hidden layer is connected to the output by either discrete or continuous couplings. Introducing an overlap in the space of Boolean functions as order parameter, the storage capacity is found to scale with the logarithm of the number of implementable Boolean functions. The generalization behavior is smooth for continuous couplings and shows a discontinuous transition to perfect generalization for discrete ones.

  19. Thermodynamic variables of first-order entropy corrected Lovelock-AdS black holes: P-V criticality analysis

    NASA Astrophysics Data System (ADS)

    Haldar, Amritendu; Biswas, Ritabrata

    2018-06-01

    We investigate the effect of thermal fluctuations on the thermodynamics of a Lovelock-AdS black hole. Taking the first-order logarithmic correction term in the entropy, we analyze thermodynamic potentials such as the Helmholtz free energy, enthalpy and Gibbs free energy. We find that all the thermodynamic potentials are decreasing functions of the correction coefficient α. We also find that this correction coefficient must be positive by analysing the P-V diagram. Further, we study the P-V criticality and stability and find that the presence of the logarithmic correction is necessary to have critical points and stable phases. When P-V criticality appears, we calculate the critical volume V_c, critical pressure P_c and critical temperature T_c using different equations and show that there is no critical point for this black hole without thermal fluctuations. We also study the geometrothermodynamics of this kind of black hole. The Ricci scalar of the Ruppeiner metric is analysed graphically.

  20. Renormalization of dijet operators at order 1/Q² in soft-collinear effective theory

    NASA Astrophysics Data System (ADS)

    Goerke, Raymond; Inglis-Whalen, Matthew

    2018-05-01

    We make progress towards resummation of power-suppressed logarithms in dijet event shapes such as thrust, which have the potential to improve high-precision fits for the value of the strong coupling constant. Using a newly developed formalism for Soft-Collinear Effective Theory (SCET), we identify and compute the anomalous dimensions of all the operators that contribute to event shapes at order 1/Q². These anomalous dimensions are necessary to resum power-suppressed logarithms in dijet event shape distributions, although an additional matching step and running of observable-dependent soft functions will be necessary to complete the resummation. In contrast to standard SCET, the new formalism does not make reference to modes or λ-scaling. Since the formalism does not distinguish between collinear and ultrasoft degrees of freedom at the matching scale, fewer subleading operators are required when compared to recent similar work. We demonstrate how the overlap subtraction prescription extends to these subleading operators.

  1. Exact density-potential pairs from complex-shifted axisymmetric systems

    NASA Astrophysics Data System (ADS)

    Ciotti, Luca; Marinacci, Federico

    2008-07-01

    In a previous paper, the complex-shift method has been applied to self-gravitating spherical systems, producing new analytical axisymmetric density-potential pairs. We now extend the treatment to the Miyamoto-Nagai disc and the Binney logarithmic halo, and we study the resulting axisymmetric and triaxial analytical density-potential pairs; we also show how to obtain the surface density of shifted systems from the complex shift of the surface density of the parent model. In particular, the systems obtained from Miyamoto-Nagai discs can be used to describe disc galaxies with a peanut-shaped bulge or with a central triaxial bar, depending on the direction of the shift vector. By using a constructive method that can be applied to generic axisymmetric systems, we finally show that the Miyamoto-Nagai and the Satoh discs, and the Binney logarithmic halo cannot be obtained from the complex shift of any spherical parent distribution. As a by-product of this study, we also found two new generating functions in closed form for even and odd Legendre polynomials, respectively.

  2. Entanglement entropy in (3 + 1)-d free U(1) gauge theory

    NASA Astrophysics Data System (ADS)

    Soni, Ronak M.; Trivedi, Sandip P.

    2017-02-01

    We consider the entanglement entropy for a free U(1) theory in 3+1 dimensions in the extended Hilbert space definition. By taking the continuum limit carefully we obtain a replica trick path integral which calculates this entanglement entropy. The path integral is gauge invariant, with a gauge fixing delta function accompanied by a Faddeev-Popov determinant. For a spherical region it follows that the result for the logarithmic term in the entanglement, which is universal, is given by the a-anomaly coefficient. We also consider the extractable part of the entanglement, which corresponds to the number of Bell pairs which can be obtained from entanglement distillation or dilution. For a spherical region we show that the coefficient of the logarithmic term for the extractable part is different from the extended Hilbert space result. We argue that the two results will differ in general, and this difference is accounted for by a massless scalar living on the boundary of the region of interest.

  3. Detailed kinetics of titanium nitride synthesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rode, H.; Hlavacek, V.

    1995-02-01

    A thermogravimetric analyzer is used to study the synthesis of TiN from Ti powder over a wide range of temperature, conversion and heating rate, and for two Ti precursor powders with different morphologies. Conversions to TiN of up to 99% are obtained with negligible oxygen contamination. Nonisothermal initial rate and isothermal data are used in a nonlinear least-squares minimization to determine the most appropriate rate law. The logarithmic rate law offers an excellent agreement between the experimental and calculated conversions to TiN and can predict afterburning, which is an important experimentally observed phenomenon. Due to the form of the logarithmic rate law, the observed activation energy is a function of effective particle size, extent of conversion, and temperature even when the intrinsic activation energy remains constant. This aspect explains discrepancies among activation energies obtained in previous studies. The frequently used sedimentation particle size is a poor measure of the powder reactivity. The BET surface area indicates the powder reactivity much better.

  4. Convergence and stability of the exponential Euler method for semi-linear stochastic delay differential equations.

    PubMed

    Zhang, Ling

    2017-01-01

    The main purpose of this paper is to investigate the strong convergence and exponential mean-square stability of the exponential Euler method for semi-linear stochastic delay differential equations (SLSDDEs). It is proved that the exponential Euler approximation converges to the analytic solution of the SLSDDE with strong order [Formula: see text]. On the one hand, the classical stability theorem for SLSDDEs is given via Lyapunov functions; in this paper, however, we study the exponential mean-square stability of the exact solution of SLSDDEs by using the definition of the logarithmic norm. On the other hand, the implicit Euler scheme for SLSDDEs is known to be exponentially stable in mean square for any step size. In this article we show, by the property of the logarithmic norm, that the explicit exponential Euler method for SLSDDEs shares the same stability for any step size.
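
    The logarithmic norm invoked above is a standard tool: for the Euclidean norm, μ₂(A) is the largest eigenvalue of the symmetric part (A + Aᵀ)/2, and μ₂(A) < 0 gives the decay bound ‖exp(At)‖ ≤ exp(μ₂(A)t). A minimal sketch (the test matrix is arbitrary):

      import numpy as np

      def log_norm_2(A):
          """Logarithmic norm for the 2-norm: largest eigenvalue of (A + A^T)/2."""
          sym = 0.5 * (A + A.T)
          return np.max(np.linalg.eigvalsh(sym))

      if __name__ == "__main__":
          A = np.array([[-3.0, 1.0],
                        [0.0, -2.0]])
          # Negative value => ||exp(At)|| <= exp(mu_2(A) t), i.e. a contractive linear flow.
          print("mu_2(A) =", round(log_norm_2(A), 3))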

  5. Simulating the component counts of combinatorial structures.

    PubMed

    Arratia, Richard; Barbour, A D; Ewens, W J; Tavaré, Simon

    2018-02-09

    This article describes and compares methods for simulating the component counts of random logarithmic combinatorial structures such as permutations and mappings. We exploit the Feller coupling for simulating permutations to provide a very fast method for simulating logarithmic assemblies more generally. For logarithmic multisets and selections, this approach is replaced by an acceptance/rejection method based on a particular conditioning relationship that represents the distribution of the combinatorial structure as that of independent random variables conditioned on a weighted sum. We show how to improve its acceptance rate. We illustrate the method by estimating the probability that a random mapping has no repeated component sizes, and establish the asymptotic distribution of the difference between the number of components and the number of distinct component sizes for a very general class of logarithmic structures. Copyright © 2018. Published by Elsevier Inc.
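
    A hedged sketch of the Feller coupling mentioned above, in the formulation commonly stated for uniform random permutations: with independent Bernoulli(1/i) variables ξ_1, ..., ξ_n and a final 1 appended, the spacings between successive ones are distributed as the cycle lengths. A direct cycle-counting construction is included purely as a sanity check; the parameters are illustrative.

      import numpy as np

      rng = np.random.default_rng(7)

      def cycles_feller(n):
          """Cycle lengths via the Feller coupling: spacings between ones in xi_1..xi_n, 1."""
          xi = rng.random(n) < 1.0 / np.arange(1, n + 1)      # xi_1 is always 1
          ones = np.flatnonzero(np.append(xi, True))
          return np.diff(ones)

      def cycles_direct(n):
          """Cycle lengths of an explicitly generated uniform random permutation."""
          perm = rng.permutation(n)
          seen = np.zeros(n, dtype=bool)
          lengths = []
          for start in range(n):
              if not seen[start]:
                  length, j = 0, start
                  while not seen[j]:
                      seen[j] = True
                      j = perm[j]
                      length += 1
                  lengths.append(length)
          return np.array(lengths)

      if __name__ == "__main__":
          n, trials = 50, 2000
          mean_feller = np.mean([len(cycles_feller(n)) for _ in range(trials)])
          mean_direct = np.mean([len(cycles_direct(n)) for _ in range(trials)])
          print("harmonic number H_n   :", round(np.sum(1.0 / np.arange(1, n + 1)), 3))
          print("mean #cycles (Feller) :", round(mean_feller, 3))
          print("mean #cycles (direct) :", round(mean_direct, 3))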

  6. Double Resummation for Higgs Production

    NASA Astrophysics Data System (ADS)

    Bonvini, Marco; Marzani, Simone

    2018-05-01

    We present the first double-resummed prediction of the inclusive cross section for the main Higgs production channel in proton-proton collisions, namely, gluon fusion. Our calculation incorporates to all orders in perturbation theory two distinct towers of logarithmic corrections which are enhanced, respectively, at threshold, i.e., large x , and in the high-energy limit, i.e., small x . Large-x logarithms are resummed to next-to-next-to-next-to-leading logarithmic accuracy, while small-x ones to leading logarithmic accuracy. The double-resummed cross section is furthermore matched to the state-of-the-art fixed-order prediction at next-to-next-to-next-to-leading accuracy. We find that double resummation corrects the Higgs production rate by 2% at the currently explored center-of-mass energy of 13 TeV and its impact reaches 10% at future circular colliders at 100 TeV.

  7. Coordination of Storage Lipid Synthesis and Membrane Biogenesis

    PubMed Central

    Gaspar, Maria L.; Hofbauer, Harald F.; Kohlwein, Sepp D.; Henry, Susan A.

    2011-01-01

    Despite the importance of triacylglycerols (TAG) and steryl esters (SE) in phospholipid synthesis in cells transitioning from stationary-phase into active growth, there is no direct evidence for their requirement in synthesis of phosphatidylinositol (PI) or other membrane phospholipids in logarithmically growing yeast cells. We report that the dga1Δlro1Δare1Δare2Δ strain, which lacks the ability to synthesize both TAG and SE, is not able to sustain normal growth in the absence of inositol (Ino− phenotype) at 37 °C especially when choline is present. Unlike many other strains exhibiting an Ino− phenotype, the dga1Δlro1Δare1Δare2Δ strain does not display a defect in INO1 expression. However, the mutant exhibits slow recovery of PI content compared with wild type cells upon reintroduction of inositol into logarithmically growing cultures. The tgl3Δtgl4Δtgl5Δ strain, which is able to synthesize TAG but unable to mobilize it, also exhibits attenuated PI formation under these conditions. However, unlike dga1Δlro1Δare1Δare2Δ, the tgl3Δtgl4Δtgl5Δ strain does not display an Ino− phenotype, indicating that failure to mobilize TAG is not fully responsible for the growth defect of the dga1Δlro1Δare1Δare2Δ strain in the absence of inositol. Moreover, synthesis of phospholipids, especially PI, is dramatically reduced in the dga1Δlro1Δare1Δare2Δ strain even when it is grown continuously in the presence of inositol. The mutant also utilizes a greater proportion of newly synthesized PI than wild type for the synthesis of inositol-containing sphingolipids, especially in the absence of inositol. Thus, we conclude that storage lipid synthesis actively influences membrane phospholipid metabolism in logarithmically growing cells. PMID:20972264

  8. Coordination of storage lipid synthesis and membrane biogenesis: evidence for cross-talk between triacylglycerol metabolism and phosphatidylinositol synthesis.

    PubMed

    Gaspar, Maria L; Hofbauer, Harald F; Kohlwein, Sepp D; Henry, Susan A

    2011-01-21

    Despite the importance of triacylglycerols (TAG) and steryl esters (SE) in phospholipid synthesis in cells transitioning from stationary-phase into active growth, there is no direct evidence for their requirement in synthesis of phosphatidylinositol (PI) or other membrane phospholipids in logarithmically growing yeast cells. We report that the dga1Δlro1Δare1Δare2Δ strain, which lacks the ability to synthesize both TAG and SE, is not able to sustain normal growth in the absence of inositol (Ino(-) phenotype) at 37 °C especially when choline is present. Unlike many other strains exhibiting an Ino(-) phenotype, the dga1Δlro1Δare1Δare2Δ strain does not display a defect in INO1 expression. However, the mutant exhibits slow recovery of PI content compared with wild type cells upon reintroduction of inositol into logarithmically growing cultures. The tgl3Δtgl4Δtgl5Δ strain, which is able to synthesize TAG but unable to mobilize it, also exhibits attenuated PI formation under these conditions. However, unlike dga1Δlro1Δare1Δare2Δ, the tgl3Δtgl4Δtgl5Δ strain does not display an Ino(-) phenotype, indicating that failure to mobilize TAG is not fully responsible for the growth defect of the dga1Δlro1Δare1Δare2Δ strain in the absence of inositol. Moreover, synthesis of phospholipids, especially PI, is dramatically reduced in the dga1Δlro1Δare1Δare2Δ strain even when it is grown continuously in the presence of inositol. The mutant also utilizes a greater proportion of newly synthesized PI than wild type for the synthesis of inositol-containing sphingolipids, especially in the absence of inositol. Thus, we conclude that storage lipid synthesis actively influences membrane phospholipid metabolism in logarithmically growing cells.

  9. Better Resolved Low Frequency Dispersions by the Apt Use of Kramers-Kronig Relations, Differential Operators, and All-In-1 Modeling

    PubMed Central

    van Turnhout, J.

    2016-01-01

    The dielectric spectra of colloidal systems often contain a typical low-frequency dispersion, which usually remains unnoticed because of the presence of strong conduction losses. The KK relations offer a means for converting ε′ into ε″ data. This allows us to calculate conduction-free ε″ spectra in which the low-frequency dispersion shows up undisturbed. This interconversion can be done on-line with a moving frame of logarithmically spaced ε′ data. The coefficients of the conversion frames were obtained by kernel matching and by using symbolic differential operators. Logarithmic derivatives and differences of ε′ and ε″ provide another option for conduction-free data analysis. These difference-based functions, actually derived from approximations to the distribution function, have the additional advantage of improving the resolution power of dielectric studies. A high resolution is important because of the rich relaxation structure of colloidal suspensions. The development of all-in-1 modeling facilitates the conduction-free and high-resolution data analysis. This mathematical tool allows the apart-together fitting of multiple data sets and multiple model functions. It also proved useful to bypass the KK conversion altogether. This was achieved by approximating the combined ε′ and ε″ data with a complex rational fractional power function. The all-in-1 minimization also turned out to be highly useful for the dielectric modeling of a suspension with the complex dipolar coefficient. It guarantees a secure correction for the electrode polarization, so that the modeling with the help of the differences of ε′ and ε″ can zoom in on the genuine colloidal relaxations. PMID:27242997
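
    One of the conduction-free options mentioned above, the logarithmic-derivative loss ε″_der = -(π/2)·∂ε′/∂ln ω, is easy to sketch numerically: ohmic conduction adds σ/(ε₀ω) to ε″ but leaves ε′ untouched, so differentiating ε′ recovers a loss peak that the raw ε″ hides. The Debye parameters and conduction level below are invented for illustration.

      import numpy as np

      # Single Debye relaxation plus a hypothetical conduction term that only affects eps''.
      w = np.logspace(-2, 4, 200)                 # angular frequency grid (rad/s)
      d_eps, tau, eps_inf = 10.0, 1.0, 3.0
      sigma_term = 50.0 / w                       # hypothetical conduction contribution to eps''

      eps_r = eps_inf + d_eps / (1.0 + (w * tau) ** 2)                   # eps'
      eps_i = d_eps * w * tau / (1.0 + (w * tau) ** 2) + sigma_term      # measured eps''

      # Conduction-free derivative loss: -(pi/2) * d eps' / d ln(omega).
      eps_i_der = -0.5 * np.pi * np.gradient(eps_r, np.log(w))
      k = np.argmax(eps_i_der)
      print("derivative-loss peak at omega ~", round(w[k], 3), "(expected near 1/tau = 1.0)")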

  10. Pathogen Inactivated Plasma Concentrated: Preparation and Uses

    DTIC Science & Technology

    2004-09-01

    Results: Both UVC and ozone yielded a PPV logarithmic reduction factor (LRF) of 6, for a...technology to be marketed; the industry name is Plas+SD [2]. This process functions by attacking the lipid sheaths that surround enveloped viruses

  11. Generating series for GUE correlators

    NASA Astrophysics Data System (ADS)

    Dubrovin, Boris; Yang, Di

    2017-11-01

    We extend to the Toda lattice hierarchy the approach of Bertola et al. (Phys D Nonlinear Phenom 327:30-57, 2016; IMRN, 2016) to computation of logarithmic derivatives of tau-functions in terms of the so-called matrix resolvents of the corresponding difference Lax operator. As a particular application we obtain explicit generating series for connected GUE correlators. On this basis an efficient recursive procedure for computing the correlators in full genera is developed.

  12. A critical assessment of viscous models of trench topography and corner flow

    NASA Technical Reports Server (NTRS)

    Zhang, J.; Hager, B. H.; Raefsky, A.

    1984-01-01

    Stresses for Newtonian viscous flow in a simple geometry (e.g., corner flow, bending flow) are obtained in order to study the effect of imposed velocity boundary conditions. Stress for a delta function velocity boundary condition decays as 1/R²; for a step function velocity, stress goes as 1/R; for a discontinuity in curvature, the stress singularity is logarithmic. For corner flow, which has a discontinuity of velocity at a certain point, the corresponding stress has a 1/R singularity. However, for a more realistic circular-slab model, the stress singularity becomes logarithmic. Thus the stress distribution is very sensitive to the boundary conditions, and in evaluating the applicability of viscous models of trench topography it is essential to use realistic geometries. Topography and seismicity data from northern Honshu, Japan, were used to construct a finite element model, with flow assumed tangent to the top of the grid, for both Newtonian and non-Newtonian flow (power-law rheology with exponent 3). Normal stresses at the top of the grid are compared to the observed trench topography and gravity anomalies. There is poor agreement. Purely viscous models of subducting slabs with specified velocity boundary conditions do not predict normal stress patterns compatible with observed topography and gravity. Elasticity and plasticity appear to be important for the subduction process.

  13. Topologically massive gravity and the AdS/CFT correspondence

    NASA Astrophysics Data System (ADS)

    Skenderis, Kostas; Taylor, Marika; van Rees, Balt C.

    2009-09-01

    We set up the AdS/CFT correspondence for topologically massive gravity (TMG) in three dimensions. The first step in this procedure is to determine the appropriate fall-off conditions at infinity. These cannot be fixed a priori as they depend on the bulk theory under consideration and are derived by solving asymptotically the non-linear field equations. We discuss in detail the asymptotic structure of the field equations for TMG, showing that it contains leading and subleading logarithms, determine the map between bulk fields and CFT operators, obtain the appropriate counterterms needed for holographic renormalization and compute holographically one- and two-point functions at and away from the "chiral point" (μ = 1). The 2-point functions at the chiral point are those of a logarithmic CFT (LCFT) with cL = 0, cR = 3l/GN and b = -3l/GN, where b is a parameter characterizing different c = 0 LCFTs. The bulk correlators away from the chiral point (μ ≠ 1) smoothly limit to the LCFT ones as μ → 1. Away from the chiral point, the CFT contains a state of negative norm and the expectation value of the energy momentum tensor in that state is also negative, reflecting a corresponding bulk instability due to negative energy modes.

  14. Passive advection of a vector field: Anisotropy, finite correlation time, exact solution, and logarithmic corrections to ordinary scaling

    NASA Astrophysics Data System (ADS)

    Antonov, N. V.; Gulitskiy, N. M.

    2015-10-01

    In this work we study the generalization of the problem considered in [Phys. Rev. E 91, 013002 (2015), 10.1103/PhysRevE.91.013002] to the case of finite correlation time of the environment (velocity) field. The model describes a vector (e.g., magnetic) field, passively advected by a strongly anisotropic turbulent flow. Inertial-range asymptotic behavior is studied by means of the field theoretic renormalization group and the operator product expansion. The advecting velocity field is Gaussian, with finite correlation time and preassigned pair correlation function. Due to the presence of the distinguished direction n, all the multiloop diagrams in this model vanish, so that the results obtained are exact. The inertial-range behavior of the model is described by two regimes (the limits of vanishing or infinite correlation time) that correspond to the two nontrivial fixed points of the RG equations. Their stability depends on the relation between the exponents in the energy spectrum E ∝ k⊥^(1-ξ) and the dispersion law ω ∝ k⊥^(2-η). In contrast to the well-known isotropic Kraichnan model, where various correlation functions exhibit anomalous scaling behavior with infinite sets of anomalous exponents, here the corrections to ordinary scaling are polynomials of logarithms of the integral turbulence scale L.

  15. Quasi Sturmian basis for the two-electron continuum

    NASA Astrophysics Data System (ADS)

    Zaytsev, A. S.; Ancarani, L. U.; Zaytsev, S. A.

    2016-02-01

    A new type of basis functions is proposed to describe a two-electron continuum which arises as a final state in electron-impact ionization and double photoionization of atomic systems. We name these functions, which are calculated in terms of the recently introduced quasi Sturmian functions, Convoluted Quasi Sturmian functions (CQS); by construction, they look asymptotically like a six-dimensional spherical wave. The driven equation describing an (e, 3e) process on helium in the framework of the Temkin-Poet model is solved numerically in the entire space (rather than in a finite region of space) using expansions on CQS basis functions. We show that quite rapid convergence of the solution expansion can be achieved by multiplying the basis functions by the logarithmic phase factor corresponding to the Coulomb electron-electron interaction.

  16. Operator algebra as an application of logarithmic representation of infinitesimal generators

    NASA Astrophysics Data System (ADS)

    Iwata, Yoritaka

    2018-02-01

    The operator algebra is introduced based on the framework of the logarithmic representation of infinitesimal generators. In conclusion, a set of generally unbounded infinitesimal generators is characterized as a module over the Banach algebra.

  17. Entropy production of doubly stochastic quantum channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Müller-Hermes, Alexander, E-mail: muellerh@posteo.net; Department of Mathematical Sciences, University of Copenhagen, 2100 Copenhagen; Stilck França, Daniel, E-mail: dsfranca@mytum.de

    2016-02-15

    We study the entropy increase of quantum systems evolving under primitive, doubly stochastic Markovian noise and thus converging to the maximally mixed state. This entropy increase can be quantified by a logarithmic-Sobolev constant of the Liouvillian generating the noise. We prove a universal lower bound on this constant that stays invariant under taking tensor-powers. Our methods involve a new comparison method to relate logarithmic-Sobolev constants of different Liouvillians and a technique to compute logarithmic-Sobolev inequalities of Liouvillians with eigenvectors forming a projective representation of a finite abelian group. Our bounds improve upon similar results established before, and as an application we prove an upper bound on continuous-time quantum capacities. In the last part of this work we study entropy production estimates of discrete-time doubly stochastic quantum channels by extending the framework of discrete-time logarithmic-Sobolev inequalities to the quantum case.

  18. Enzyme-guided plasmonic biosensor based on dual-functional nanohybrid for sensitive detection of thrombin.

    PubMed

    Yan, Jing; Wang, Lida; Tang, Longhua; Lin, Lei; Liu, Yang; Li, Jinghong

    2015-08-15

    Rapid and sensitive methodologies for the detection of proteins are in urgent demand for clinical diagnostics. Localized surface plasmon resonance (LSPR) of metal nanostructures has the potential to address this problem owing to its sensitive optical properties and strong electromagnetic near-field enhancements. In this work, an enzyme-mediated plasmonic biosensor based on a dual-functional nanohybrid was developed for the detection of thrombin. By utilizing an LSPR-responsive nanohybrid and an aptamer-enzyme conjugated reporting probe, the sensing platform offers enhanced signal, stability, and simplicity. The enzymatic reaction catalyzed the reduction of Au(3+) to Au(0) in situ, leading to rapid crystal growth of gold nanoparticles (AuNPs). The LSPR absorbance band and color changed along with nanoparticle generation, which can be monitored in real time by UV-visible spectrophotometry and by the naked eye. The nanohybrid, constructed from gold and magnetic nanoparticles, acts as a dual-functional plasmonic unit: it not only produces the signal, but also endows the sensor with the capability of magnetic separation. Simultaneously, the introduction of the enzyme effectively regulates the programmed crystal growth of AuNPs. In addition, the enzyme serves as a signal amplifier owing to its high catalytic efficiency. The response of the plasmonic sensor varies linearly with the logarithm of thrombin concentration up to 10 nM, with a limit of detection of 200 pM. The proposed strategy shows good analytical performance for thrombin determination. This simple, disposable method is promising for developing universal platforms for protein monitoring, drug discovery, and point-of-care diagnostics. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. BASIC2 INTERPRETER; minimal BASIC language. [MCS-80, 8080-based microcomputers; 8080 Assembly language]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGoldrick, P.R.; Allison, T.G.

    The BASIC2 INTERPRETER was developed to provide a high-level, easy-to-use language for performing both control and computational functions in the MCS-80. The package is supplied as two alternative implementations, hardware and software. The "software" implementation provides the following capabilities: entry and editing of BASIC programs, device-independent I/O, special functions to allow access from BASIC to any I/O port, formatted printing, special INPUT/OUTPUT-and-proceed statements to allow I/O without interrupting BASIC program execution, full arithmetic expressions, limited string manipulation (10 or fewer characters), shorthand forms for common BASIC keywords, immediate-mode BASIC statement execution, and the capability of running a BASIC program that is stored in PROM. The allowed arithmetic operations are addition, subtraction, multiplication, division, and raising a number to a positive integral power. In the second, or "hardware", implementation of BASIC2, which requires an Am9511 Arithmetic Processing Unit (APU) interfaced to the 8080 microprocessor, arithmetic operations are performed by the APU. The following additional built-in functions are available in this implementation: square root, sine, cosine, tangent, arcsine, arccosine, arctangent, exponential, logarithm base e, and logarithm base 10. MCS-80, 8080-based microcomputers; 8080 Assembly language; approximately 8K bytes of RAM to store the assembled interpreter, additional user program space, and necessary peripheral devices. The hardware implementation requires an Am9511 Arithmetic Processing Unit and an interface board (reference 2).

  20. Logarithms in the Year 10 A.C.

    ERIC Educational Resources Information Center

    Kalman, Dan; Mitchell, Charles E.

    1981-01-01

    An alternative application of logarithms in the high school algebra curriculum that is not undermined by the existence and widespread availability of calculators is presented. The importance and use of linear relationships are underscored in the proposed lessons. (MP)

  1. Hilbert and Blaschke phases in the temporal coherence function of stationary broadband light.

    PubMed

    Fernández-Pousa, Carlos R; Maestre, Haroldo; Torregrosa, Adrián J; Capmany, Juan

    2008-10-27

    We show that the minimal phase of the temporal coherence function γ(τ) of stationary light having a partially coherent symmetric spectral peak can be computed as a relative logarithmic Hilbert transform of its amplitude with respect to its asymptotic behavior. The procedure is applied to experimental data from amplified spontaneous emission broadband sources in the 1.55 μm band with subpicosecond coherence times, providing examples of degrees of coherence with both minimal and non-minimal phase. In the latter case, the Blaschke phase is retrieved and the position of the Blaschke zeros determined.
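
    A minimal Python sketch of the kind of computation described above: recovering a minimum-phase estimate from a sampled amplitude via a logarithmic Hilbert transform. The grid, the Gaussian test amplitude, and the sign convention are illustrative assumptions, not the paper's data or exact procedure (which also references the amplitude to its asymptotic behavior).

    ```python
    # Minimal sketch: recover a minimum-phase estimate of a coherence function
    # from its measured amplitude via a logarithmic Hilbert transform.
    # Assumptions: |gamma(tau)| is sampled on a uniform tau grid, is strictly
    # positive over the window, and the overall sign may need to be flipped
    # depending on the Fourier convention adopted.
    import numpy as np
    from scipy.signal import hilbert

    def minimum_phase_from_amplitude(amplitude):
        """Return a minimum-phase estimate (radians) from a sampled amplitude."""
        log_amp = np.log(amplitude)
        # scipy.signal.hilbert returns the analytic signal a + i*H[a];
        # its imaginary part is the Hilbert transform of log_amp.
        return -np.imag(hilbert(log_amp))

    # Toy example: a Gaussian coherence peak on a picosecond-scale grid.
    tau = np.linspace(-5e-12, 5e-12, 1024)          # seconds
    amp = np.exp(-(tau / 1e-12) ** 2) + 1e-6        # avoid log(0)
    phase_est = minimum_phase_from_amplitude(amp)
    print(phase_est[:5])
    ```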

  2. Cointegration of output, capital, labor, and energy

    NASA Astrophysics Data System (ADS)

    Stresing, R.; Lindenberger, D.; Kümmel, R.

    2008-11-01

    Cointegration analysis is applied to linear combinations of the time series of (the logarithms of) output, capital, labor, and energy for Germany, Japan, and the USA since 1960. The computed cointegration vectors represent the output elasticities of the aggregate energy-dependent Cobb-Douglas function. The output elasticities give the economic weights of the production factors capital, labor, and energy. We find that the elasticity of labor is much smaller, and that of energy much larger, than the cost shares of these factors. In standard economic theory, output elasticities equal cost shares. Our heterodox findings support results obtained with LINEX production functions.
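
    A minimal sketch, in Python, of the Engle-Granger-style procedure the abstract describes: regress the logarithm of output on the logarithms of capital, labor, and energy, read the slopes as Cobb-Douglas output elasticities, and test the residual for stationarity. The series below are synthetic placeholders, not the German, Japanese, or US data.

    ```python
    # Synthetic stand-in for the (log) output, capital, labor, energy series.
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.tsa.stattools import adfuller

    rng = np.random.default_rng(0)
    n = 60
    ln_k = np.cumsum(rng.normal(0.02, 0.01, n))          # log capital (random walk)
    ln_l = np.cumsum(rng.normal(0.01, 0.01, n))          # log labor
    ln_e = np.cumsum(rng.normal(0.03, 0.01, n))          # log energy
    ln_y = 0.3 * ln_k + 0.2 * ln_l + 0.5 * ln_e + rng.normal(0, 0.01, n)

    X = sm.add_constant(np.column_stack([ln_k, ln_l, ln_e]))
    fit = sm.OLS(ln_y, X).fit()
    alpha, beta, gamma = fit.params[1:]                   # output elasticities
    adf_stat, pvalue, *_ = adfuller(fit.resid)            # residual stationarity test
    print(f"elasticities K={alpha:.2f} L={beta:.2f} E={gamma:.2f}, ADF p={pvalue:.3f}")
    ```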

  3. Compact exponential product formulas and operator functional derivative

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suzuki, M.

    1997-02-01

    A new scheme for deriving compact expressions of the logarithm of the exponential product is proposed and it is applied to several exponential product formulas. A generalization of the Dynkin–Specht–Wever (DSW) theorem on free Lie elements is given, and it is used to study the relation between the traditional method (based on the DSW theorem) and the present new scheme. The concept of the operator functional derivative is also proposed, and it is applied to ordered exponentials, such as time-evolution operators for time-dependent Hamiltonians. © 1997 American Institute of Physics.

  4. Extraction of partonic transverse momentum distributions from semi-inclusive deep inelastic scattering and Drell-Yan data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pisano, Cristian; Bacchetta, Alessandro; Delcarro, Filippo

    We present a first attempt at a global fit of unpolarized quark transverse momentum dependent distribution and fragmentation functions from available data on semi-inclusive deep-inelastic scattering, Drell-Yan, and Z boson production processes. This analysis is performed in the low transverse momentum region, at leading order in perturbative QCD and with the inclusion of energy scale evolution effects at next-to-leading logarithmic accuracy.

  5. The Coast Artillery Journal. Volume 57, Number 6, December 1922

    DTIC Science & Technology

    1922-12-01

    theorems; Chapter III, to application; Chapters IV, V and VI, to infinitesimals and differentials, trigonometric functions, and logarithms and...taneously." There are chapters on complex numbers with simple and direct discussion of the roots of unity; on elementary theorems on the roots of an...through the centuries from the time of Pythagoras, an interest shared on the one extreme by nearly every noted mathematician and on the other extreme by

  6. Advantages of using a logarithmic scale in pressure-volume diagrams for Carnot and other heat engine cycles

    NASA Astrophysics Data System (ADS)

    Shieh, Lih-Yir; Kan, Hung-Chih

    2014-04-01

    We demonstrate that plotting the P-V diagram of an ideal gas Carnot cycle on a logarithmic scale results in a more intuitive approach for deriving the final form of the efficiency equation. The same approach also facilitates the derivation of the efficiency of other thermodynamic engines that employ adiabatic ideal gas processes, such as the Brayton cycle, the Otto cycle, and the Diesel engine. We finally demonstrate that logarithmic plots of isothermal and adiabatic processes help with visualization in approximating an arbitrary process in terms of an infinite number of Carnot cycles.
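
    A minimal Python sketch of the idea: on log-log axes an ideal-gas isotherm (PV = nRT) becomes a straight line of slope -1 and an adiabat (PV^γ = const) a straight line of slope -γ, so a Carnot cycle plots as a parallelogram. The temperatures, volumes, and γ are arbitrary illustrative values.

    ```python
    # Plot ideal-gas isotherms and adiabats on log-log P-V axes.
    import numpy as np
    import matplotlib.pyplot as plt

    gamma, R_gas, n_mol = 5.0 / 3.0, 8.314, 1.0       # monatomic ideal gas, 1 mol
    T_hot, T_cold = 600.0, 300.0                      # K, arbitrary

    V = np.logspace(-3, -1, 200)                      # m^3
    for T in (T_hot, T_cold):                         # isotherms: slope -1
        plt.loglog(V, n_mol * R_gas * T / V, label=f"isotherm T={T:.0f} K")
    for V0 in (2e-3, 8e-3):                           # adiabats: slope -gamma
        P0 = n_mol * R_gas * T_hot / V0
        plt.loglog(V, P0 * (V0 / V) ** gamma, "--", label="adiabat")

    plt.xlabel("V (m^3)"); plt.ylabel("P (Pa)"); plt.legend(); plt.show()
    ```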

  7. Prediction of Soil pH Hyperspectral Spectrum in Guanzhong Area of Shaanxi Province Based on PLS

    NASA Astrophysics Data System (ADS)

    Liu, Jinbao; Zhang, Yang; Wang, Huanyuan; Cheng, Jie; Tong, Wei; Wei, Jing

    2017-12-01

    The soil pH of Fufeng County, Yangling County, and Wugong County in Shaanxi Province was studied. The spectral reflectance was measured with an ASD FieldSpec HR portable field spectrometer, and its spectral characteristics were analyzed. The first derivative of the original spectral reflectance, the second derivative, the logarithm of the reciprocal (log(1/R)), and the first- and second-order derivatives of the reciprocal logarithm were used to establish soil pH spectral prediction models. The results showed that the correlation between the reflectance spectra after SNV pre-treatment and soil pH was significantly improved. The optimal prediction model of soil pH, established by the partial least squares method, was based on the first derivative of the reciprocal logarithm of spectral reflectance. With 10 principal component factors, the calibration coefficient of determination was Rc2 = 0.9959, the root mean square error of calibration RMSEC = 0.0076, and the standard error of calibration SEC = 0.0077; the validation coefficient of determination was Rv2 = 0.9893, the root mean square error of prediction RMSEP = 0.0157, and the standard error of prediction SEP = 0.0160. The model was stable, its fitting and prediction abilities were high, and soil pH can be estimated quickly from the spectra.
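
    A minimal sketch, assuming Python with scikit-learn, of the modelling chain described above: pre-treat the reflectance spectra, take the first derivative of log(1/R), and fit a 10-factor partial least squares model. The array shapes, the Savitzky-Golay settings, the ordering of the pre-treatments, and the random stand-in data are assumptions, not the study's actual processing.

    ```python
    # PLS regression of soil pH on pre-treated reflectance spectra (sketch).
    import numpy as np
    from scipy.signal import savgol_filter
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    def first_derivative_of_log_reciprocal(spectra):
        """First derivative of log(1/R); an 11-point Savitzky-Golay window is assumed."""
        return savgol_filter(np.log(1.0 / spectra), window_length=11,
                             polyorder=2, deriv=1, axis=1)

    def snv(spectra):
        """Standard normal variate: centre and scale each spectrum (applied here after the derivative)."""
        return (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

    # Hypothetical measured spectra and lab pH values (random placeholders).
    reflectance = np.random.default_rng(1).uniform(0.05, 0.6, size=(120, 500))
    ph = np.random.default_rng(2).uniform(7.5, 8.8, size=120)

    X = snv(first_derivative_of_log_reciprocal(reflectance))
    pls = PLSRegression(n_components=10)              # 10 factors, as in the abstract
    pred = cross_val_predict(pls, X, ph, cv=10).ravel()
    rmse = np.sqrt(np.mean((pred - ph) ** 2))
    print(f"cross-validated RMSE = {rmse:.4f}")
    ```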

  8. Resumming double non-global logarithms in the evolution of a jet

    NASA Astrophysics Data System (ADS)

    Hatta, Y.; Iancu, E.; Mueller, A. H.; Triantafyllopoulos, D. N.

    2018-02-01

    We consider the Banfi-Marchesini-Smye (BMS) equation which resums 'non-global' energy logarithms in the QCD evolution of the energy lost by a pair of jets via soft radiation at large angles. We identify a new physical regime where, besides the energy logarithms, one has to also resum (anti)collinear logarithms. Such a regime occurs when the jets are highly collimated (boosted) and the relative angles between successive soft gluon emissions are strongly increasing. These anti-collinear emissions can violate the correct time-ordering for time-like cascades and result in large radiative corrections enhanced by double collinear logs, making the BMS evolution unstable beyond leading order. We isolate the first such correction in a recent calculation of the BMS equation to next-to-leading order by Caron-Huot. To overcome this difficulty, we construct a 'collinearly-improved' version of the leading-order BMS equation which resums the double collinear logarithms to all orders. Our construction is inspired by a recent treatment of the Balitsky-Kovchegov (BK) equation for the high-energy evolution of a space-like wavefunction, where similar time-ordering issues occur. We show that the conformal mapping relating the leading-order BMS and BK equations correctly predicts the physical time-ordering, but it fails to predict the detailed structure of the collinear improvement.

  9. Improvement of correlation-based centroiding methods for point source Shack-Hartmann wavefront sensor

    NASA Astrophysics Data System (ADS)

    Li, Xuxu; Li, Xinyang; Wang, Caixia

    2018-03-01

    This paper proposes an efficient approach to decrease the computational cost of correlation-based centroiding methods used for point-source Shack-Hartmann wavefront sensors. Four typical similarity functions are compared, i.e., the absolute difference function (ADF), ADF square (ADF2), square difference function (SDF), and cross-correlation function (CCF), using a Gaussian spot model. By combining them with fast search algorithms, such as three-step search (TSS), two-dimensional logarithmic search (TDL), cross search (CS), and orthogonal search (OS), computational costs can be reduced drastically without affecting the accuracy of centroid detection. Specifically, OS reduces the computation by 90%. A comprehensive simulation indicates that CCF exhibits better performance than the other functions under various light-level conditions. In addition, the effectiveness of the fast search algorithms is verified.
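
    A minimal Python sketch of correlation-based centroiding with the absolute difference function (ADF): the integer offset minimising the ADF between a subaperture image and a reference template is taken as the spot displacement. A brute-force search is shown for clarity; the TSS/TDL/CS/OS fast searches of the paper evaluate the same similarity at far fewer candidate offsets. The Gaussian spot parameters are illustrative.

    ```python
    # ADF-based spot-shift estimation for a Shack-Hartmann subaperture (sketch).
    import numpy as np

    def gaussian_spot(size, center, sigma=1.5):
        y, x = np.mgrid[0:size, 0:size]
        return np.exp(-((x - center[0]) ** 2 + (y - center[1]) ** 2) / (2 * sigma ** 2))

    def adf(image, template, dx, dy):
        """Absolute difference between the image and the template shifted by (dx, dy)."""
        shifted = np.roll(np.roll(template, dy, axis=0), dx, axis=1)
        return np.abs(image - shifted).sum()

    size = 16
    reference = gaussian_spot(size, (size // 2, size // 2))
    image = gaussian_spot(size, (size // 2 + 3, size // 2 - 2))   # true shift (3, -2)

    offsets = range(-5, 6)                                        # brute-force search window
    best = min(((adf(image, reference, dx, dy), dx, dy)
                for dx in offsets for dy in offsets), key=lambda t: t[0])
    print("estimated shift:", best[1:])                           # expected (3, -2)
    ```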

  10. Evidence of Temporal Postdischarge Decontamination of Bacteria by Gliding Electric Discharges: Application to Hafnia alvei▿

    PubMed Central

    Kamgang-Youbi, Georges; Herry, Jean-Marie; Bellon-Fontaine, Marie-Noëlle; Brisset, Jean-Louis; Doubla, Avaly; Naïtali, Murielle

    2007-01-01

    This study aimed to characterize the bacterium-destroying properties of a gliding arc plasma device during electric discharges and also under temporal postdischarge conditions (i.e., when the discharge was switched off). This phenomenon was reported for the first time in the literature in the case of the plasma destruction of microorganisms. When cells of a model bacterium, Hafnia alvei, were exposed to electric discharges, followed or not followed by temporal postdischarges, the survival curves exhibited a shoulder and then log-linear decay. These destruction kinetics were modeled using GinaFiT, a freeware tool to assess microbial survival curves, and adjustment parameters were determined. The efficiency of postdischarge treatments was clearly affected by the discharge time (t*); both the shoulder length and the inactivation rate kmax were linearly modified as a function of t*. Nevertheless, all conditions tested (t* ranging from 2 to 5 min) made it possible to achieve an abatement of at least 7 decimal logarithm units. Postdischarge treatment was also efficient against bacteria not subjected to direct discharge, and the disinfecting properties of “plasma-activated water” were dependent on the treatment time for the solution. Water treated with plasma for 2 min achieved a 3.7-decimal-logarithm-unit reduction in 20 min after application to cells, and abatement greater than 7 decimal logarithm units resulted from the same contact time with water activated with plasma for 10 min. These disinfecting properties were maintained during storage of activated water for 30 min. After that, they declined as the storage time increased. PMID:17557841
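
    A minimal Python sketch of fitting a shoulder plus log-linear survival model of the Geeraerd type (the model family implemented in GInaFiT) to plate-count data, returning the shoulder length and the inactivation rate kmax. The data points and starting values are synthetic placeholders, not the study's measurements.

    ```python
    # Fit a shoulder + log-linear inactivation curve (Geeraerd-type, no tail).
    import numpy as np
    from scipy.optimize import curve_fit

    def log10_survivors(t, log10_n0, kmax, shoulder):
        """log10 N(t) with shoulder length `shoulder` (min) and rate `kmax` (1/min)."""
        core = np.exp(kmax * shoulder) / (1.0 + (np.exp(kmax * shoulder) - 1.0) * np.exp(-kmax * t))
        return log10_n0 - kmax * t / np.log(10.0) + np.log10(core)

    t = np.array([0, 1, 2, 3, 4, 6, 8, 10, 12], dtype=float)          # min (synthetic)
    logN = np.array([8.0, 7.95, 7.8, 7.2, 6.4, 4.8, 3.2, 1.6, 0.3])   # log10 CFU/mL (synthetic)

    popt, _ = curve_fit(log10_survivors, t, logN, p0=[8.0, 2.0, 2.0])
    log10_n0, kmax, shoulder = popt
    print(f"N0 = 10^{log10_n0:.2f}, kmax = {kmax:.2f} /min, shoulder = {shoulder:.2f} min")
    ```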

  11. Evaluation of empirical process design relationships for ozone disinfection of water and wastewater

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finch, G.R.; Smith, D.W.

    A research program was undertaken to examine the dose-response of Escherichia coli ATCC 11775 in ozone demand-free phosphate buffer solution and in a high quality secondary wastewater effluent with a total organic carbon content of 8 mg/L and a chemical oxygen demand of 26 mg/L. The studies were conducted in bench-scale batch reactors for both water types. In addition, studies using secondary effluent also were conducted in a pilot-scale, semi-batch reactor to evaluate scale-up effects. It was found that the ozone dose was the most important design parameter in both types of water. Contact time was of some importance in the ozone demand-free water and had no detectable effect in the secondary effluent. Pilot-scale data confirmed the results obtained at bench-scale for the secondary effluent. Regression analysis of the logarithm of the E. coli response on the logarithm of the utilized ozone dose revealed that there was lack-of-fit using the model form which has been used frequently for the design of wastewater disinfection systems. This occurred as a result of a marked tailing effect of the log-log plot as the ozone dose increased and the kill increased. It was postulated that this was caused by some unknown physiological differences within the E. coli population due to age or another factor.

  12. Magnitude and Determinants of the Ratio between Prevalence of Low Vision and Blindness in Rapid Assessment of Avoidable Blindness Surveys.

    PubMed

    Kaphle, Dinesh; Lewallen, Susan

    2017-10-01

    To determine the magnitude and determinants of the ratio between prevalence of low vision and prevalence of blindness in rapid assessment of avoidable blindness (RAAB) surveys globally. Standard RAAB reports were downloaded from the repository or requested from principal investigators. Potential predictor variables included prevalence of uncorrected refractive error (URE) as well as gross domestic product (GDP) per capita and health expenditure per capita of the country, across World Bank regions. Univariate and multivariate linear regression were used to investigate the correlation between potential predictor variables and the ratio. The results of 94 surveys from 43 countries showed that the ratio ranged from 1.35 in Mozambique to 11.03 in India with a median value of 3.90 (interquartile range 3.06-5.38). Univariate regression analysis showed that prevalence of URE (p = 0.04), logarithm of GDP per capita (p = 0.01) and logarithm of health expenditure per capita (p = 0.03) were significantly associated with a higher ratio. However, only prevalence of URE was found to be significant in multivariate regression analysis (p = 0.03). There is a wide variation in the ratio of the prevalence of low vision to the prevalence of blindness. Eye care service utilization indicators such as the prevalence of URE may explain some of the variation across the regions.

  13. Determination of the n-octanol/water partition coefficients of weakly ionizable basic compounds by reversed-phase high-performance liquid chromatography with neutral model compounds.

    PubMed

    Liang, Chao; Han, Shu-ying; Qiao, Jun-qin; Lian, Hong-zhen; Ge, Xin

    2014-11-01

    A strategy to utilize neutral model compounds for lipophilicity measurement of ionizable basic compounds by reversed-phase high-performance liquid chromatography is proposed in this paper. The applicability of the novel protocol was justified by theoretical derivation. Meanwhile, the linear relationships between logarithm of apparent n-octanol/water partition coefficients (logKow '') and logarithm of retention factors corresponding to the 100% aqueous fraction of mobile phase (logkw ) were established for a basic training set, a neutral training set and a mixed training set of these two. As proved in theory, the good linearity and external validation results indicated that the logKow ''-logkw relationships obtained from a neutral model training set were always reliable regardless of mobile phase pH. Afterwards, the above relationships were adopted to determine the logKow of harmaline, a weakly dissociable alkaloid. As far as we know, this is the first report on experimental logKow data for harmaline (logKow = 2.28 ± 0.08). Introducing neutral compounds into a basic model training set or using neutral model compounds alone is recommended to measure the lipophilicity of weakly ionizable basic compounds especially those with high hydrophobicity for the advantages of more suitable model compound choices and convenient mobile phase pH control. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
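
    A minimal Python sketch of the calibration step described above: fit a straight line between literature log Kow values and measured log kw for a training set of model compounds, then read off log Kow for a test solute from its measured log kw. All numbers are illustrative placeholders, not the paper's data.

    ```python
    # Calibrate a log Kow - log kw relationship and apply it to a test solute.
    import numpy as np

    log_kw_train = np.array([0.45, 0.92, 1.38, 1.85, 2.31, 2.77])   # hypothetical measured retention
    log_kow_train = np.array([1.10, 1.70, 2.35, 2.90, 3.55, 4.10])  # hypothetical literature log Kow

    slope, intercept = np.polyfit(log_kw_train, log_kow_train, deg=1)
    r = np.corrcoef(log_kw_train, log_kow_train)[0, 1]

    log_kw_test = 1.30                                              # hypothetical measurement
    log_kow_est = slope * log_kw_test + intercept
    print(f"fit: slope={slope:.2f}, intercept={intercept:.2f}, r={r:.3f}")
    print(f"estimated log Kow = {log_kow_est:.2f}")
    ```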

  14. Treatment of Advanced Glaucoma Study: a multicentre randomised controlled trial comparing primary medical treatment with primary trabeculectomy for people with newly diagnosed advanced glaucoma-study protocol.

    PubMed

    King, Anthony J; Fernie, Gordon; Azuara-Blanco, Augusto; Burr, Jennifer M; Garway-Heath, Ted; Sparrow, John M; Vale, Luke; Hudson, Jemma; MacLennan, Graeme; McDonald, Alison; Barton, Keith; Norrie, John

    2017-10-26

    Presentation with advanced glaucoma is the major risk factor for lifetime blindness. Effective intervention at diagnosis is expected to minimise risk of further visual loss in this group of patients. To compare the clinical effectiveness and cost-effectiveness of primary medical management with primary surgery for people presenting with advanced open-angle glaucoma (OAG). Design: A prospective, pragmatic multicentre randomised controlled trial (RCT). Twenty-seven UK hospital eye services. Four hundred and forty patients presenting with advanced OAG, according to the Hodapp-Parrish-Anderson classification of visual field loss. Participants will be randomised to medical treatment or augmented trabeculectomy (1:1 allocation minimised by centre and presence of advanced disease in both eyes). The primary outcome is vision-related quality of life measured by the National Eye Institute-Visual Function Questionnaire-25 at 24 months. Secondary outcomes include generic EQ-5D-5L, Health Utility Index-3 and glaucoma-related health status (Glaucoma Utility Index), patient experience, visual field measured by mean deviation value, logarithm of the minimum angle of resolution (logMAR) visual acuity, intraocular pressure, adverse events, standards for driving and eligibility for blind certification. Incremental cost per quality-adjusted life-year (QALY) based on EQ-5D-5L and glaucoma profile instrument will be estimated. The study will report the comparative effectiveness and cost-effectiveness of medical treatment against augmented trabeculectomy in patients presenting with advanced glaucoma in terms of patient-reported health and visual function, clinical outcomes and incremental cost per QALY at 2 years. Treatment of Advanced Glaucoma Study will be the first RCT reporting outcomes from the perspective of those with advanced glaucoma. ISRCTN56878850, Pre-results. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  15. Size-dependent standard deviation for growth rates: Empirical results and theoretical modeling

    NASA Astrophysics Data System (ADS)

    Podobnik, Boris; Horvatic, Davor; Pammolli, Fabio; Wang, Fengzhong; Stanley, H. Eugene; Grosse, I.

    2008-05-01

    We study annual logarithmic growth rates R of various economic variables such as exports, imports, and foreign debt. For each of these variables we find that the distributions of R can be approximated by double exponential (Laplace) distributions in the central parts and power-law distributions in the tails. For each of these variables we further find a power-law dependence of the standard deviation σ(R) on the average size of the economic variable with a scaling exponent surprisingly close to that found for the gross domestic product (GDP) [Phys. Rev. Lett. 81, 3275 (1998)]. By analyzing annual logarithmic growth rates R of wages of 161 different occupations, we find a power-law dependence of the standard deviation σ(R) on the average value of the wages with a scaling exponent β ≈ 0.14 close to those found for the growth of exports, imports, debt, and the growth of the GDP. In contrast to these findings, we observe for payroll data collected from 50 states of the USA that the standard deviation σ(R) of the annual logarithmic growth rate R increases monotonically with the average value of payroll. However, also in this case we observe a power-law dependence of σ(R) on the average payroll with a scaling exponent β ≈ -0.08. Based on these observations we propose a stochastic process for multiple cross-correlated variables where for each variable (i) the distribution of logarithmic growth rates decays exponentially in the central part, (ii) the distribution of the logarithmic growth rate decays algebraically in the far tails, and (iii) the standard deviation of the logarithmic growth rate depends algebraically on the average size of the stochastic variable.

  16. Size-dependent standard deviation for growth rates: empirical results and theoretical modeling.

    PubMed

    Podobnik, Boris; Horvatic, Davor; Pammolli, Fabio; Wang, Fengzhong; Stanley, H Eugene; Grosse, I

    2008-05-01

    We study annual logarithmic growth rates R of various economic variables such as exports, imports, and foreign debt. For each of these variables we find that the distributions of R can be approximated by double exponential (Laplace) distributions in the central parts and power-law distributions in the tails. For each of these variables we further find a power-law dependence of the standard deviation sigma(R) on the average size of the economic variable with a scaling exponent surprisingly close to that found for the gross domestic product (GDP) [Phys. Rev. Lett. 81, 3275 (1998)]. By analyzing annual logarithmic growth rates R of wages of 161 different occupations, we find a power-law dependence of the standard deviation sigma(R) on the average value of the wages with a scaling exponent beta approximately 0.14 close to those found for the growth of exports, imports, debt, and the growth of the GDP. In contrast to these findings, we observe for payroll data collected from 50 states of the USA that the standard deviation sigma(R) of the annual logarithmic growth rate R increases monotonically with the average value of payroll. However, also in this case we observe a power-law dependence of sigma(R) on the average payroll with a scaling exponent beta approximately -0.08 . Based on these observations we propose a stochastic process for multiple cross-correlated variables where for each variable (i) the distribution of logarithmic growth rates decays exponentially in the central part, (ii) the distribution of the logarithmic growth rate decays algebraically in the far tails, and (iii) the standard deviation of the logarithmic growth rate depends algebraically on the average size of the stochastic variable.
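
    A minimal Python sketch of the scaling analysis described in the two records above: bin units by average size, compute the standard deviation of the logarithmic growth rates in each bin, and estimate the exponent β from the slope of a log-log fit σ(R) ~ size^(-β). The synthetic data are generated with β = 0.14 so the recovered value can be checked against it.

    ```python
    # Estimate the size-dependence exponent of growth-rate fluctuations.
    import numpy as np

    rng = np.random.default_rng(3)
    sizes = np.logspace(2, 8, 400)                       # average size of each unit (synthetic)
    beta_true = 0.14
    growth = rng.normal(0.0, sizes ** (-beta_true))      # one annual log growth rate per unit

    bins = np.logspace(2, 8, 13)
    centers, sigmas = [], []
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (sizes >= lo) & (sizes < hi)
        if mask.sum() > 5:
            centers.append(np.sqrt(lo * hi))             # geometric bin centre
            sigmas.append(growth[mask].std())

    slope, _ = np.polyfit(np.log(centers), np.log(sigmas), 1)
    print(f"estimated beta = {-slope:.2f}")              # typically close to 0.14
    ```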

  17. A new family of distribution functions for spherical galaxies

    NASA Astrophysics Data System (ADS)

    Gerhard, Ortwin E.

    1991-06-01

    The present study describes a new family of anisotropic distribution functions for stellar systems designed to keep control of the orbit distribution at fixed energy. These are quasi-separable functions of energy and angular momentum, and they are specified in terms of a circularity function h(x) which fixes the distribution of orbits on the potential's energy surfaces outside some anisotropy radius. Detailed results are presented for a particular set of radially anisotropic circularity functions h_α(x). In the scale-free logarithmic potential, exact analytic solutions are shown to exist for all scale-free circularity functions. Intrinsic and projected velocity dispersions are calculated and the expected properties are presented in extensive tables and graphs. Several applications of the quasi-separable distribution functions are discussed. They include the effects of anisotropy or a dark halo on line-broadening functions, the radial orbit instability in anisotropic spherical systems, and violent relaxation in spherical collapse.

  18. Transverse vetoes with rapidity cutoff in SCET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hornig, Andrew; Kang, Daekyoung; Makris, Yiannis

    We consider di-jet production in hadron collisions where a transverse veto is imposed on radiation for (pseudo-)rapidities in the central region only, where this central region is defined with rapidity cutoff. For the case where the transverse measurement (e.g., transverse energy or min p T for jet veto) is parametrically larger relative to the typical transverse momentum beyond the cutoff, the cross section is insensitive to the cutoff parameter and is factorized in terms of collinear and soft degrees of freedom. The virtuality for these degrees of freedom is set by the transverse measurement, as in typical transverse-momentum dependent observables such as Drell-Yan, Higgs production, and the event shape broadening. This paper focuses on the other region, where the typical transverse momentum below and beyond the cutoff is of similar size. In this region the rapidity cutoff further resolves soft radiation into (u)soft and soft-collinear radiation with different rapidities but identical virtuality. This gives rise to rapidity logarithms of the rapidity cutoff parameter which we resum using renormalization group methods. We factorize the cross section in this region in terms of soft and collinear functions in the framework of soft-collinear effective theory, then further refactorize the soft function as a convolution of the (u)soft and soft-collinear functions. All these functions are calculated at one-loop order. As an example, we calculate a differential cross section for a specific partonic channel, qq′ → qq′, for the jet shape angularities and show that the refactorization allows us to resum the rapidity logarithms and significantly reduce theoretical uncertainties in the jet shape spectrum.

  19. Transverse vetoes with rapidity cutoff in SCET

    DOE PAGES

    Hornig, Andrew; Kang, Daekyoung; Makris, Yiannis; ...

    2017-12-11

    We consider di-jet production in hadron collisions where a transverse veto is imposed on radiation for (pseudo-)rapidities in the central region only, where this central region is defined with rapidity cutoff. For the case where the transverse measurement (e.g., transverse energy or min p T for jet veto) is parametrically larger relative to the typical transverse momentum beyond the cutoff, the cross section is insensitive to the cutoff parameter and is factorized in terms of collinear and soft degrees of freedom. The virtuality for these degrees of freedom is set by the transverse measurement, as in typical transverse-momentum dependent observables such as Drell-Yan, Higgs production, and the event shape broadening. This paper focuses on the other region, where the typical transverse momentum below and beyond the cutoff is of similar size. In this region the rapidity cutoff further resolves soft radiation into (u)soft and soft-collinear radiation with different rapidities but identical virtuality. This gives rise to rapidity logarithms of the rapidity cutoff parameter which we resum using renormalization group methods. We factorize the cross section in this region in terms of soft and collinear functions in the framework of soft-collinear effective theory, then further refactorize the soft function as a convolution of the (u)soft and soft-collinear functions. All these functions are calculated at one-loop order. As an example, we calculate a differential cross section for a specific partonic channel, qq′ → qq′, for the jet shape angularities and show that the refactorization allows us to resum the rapidity logarithms and significantly reduce theoretical uncertainties in the jet shape spectrum.

  20. Resumming double logarithms in the QCD evolution of color dipoles

    DOE PAGES

    Iancu, E.; Madrigal, J. D.; Mueller, A. H.; ...

    2015-05-01

    The higher-order perturbative corrections, beyond leading logarithmic accuracy, to the BFKL evolution in QCD at high energy are well known to suffer from a severe lack-of-convergence problem, due to radiative corrections enhanced by double collinear logarithms. Via an explicit calculation of Feynman graphs in light cone (time-ordered) perturbation theory, we show that the corrections enhanced by double logarithms (either energy-collinear, or double collinear) are associated with soft gluon emissions which are strictly ordered in lifetime. These corrections can be resummed to all orders by solving an evolution equation which is non-local in rapidity. This equation can be equivalently rewritten in local form, but with modified kernel and initial conditions, which resum double collinear logs to all orders. We extend this resummation to the next-to-leading order BFKL and BK equations. The first numerical studies of the collinearly-improved BK equation demonstrate the essential role of the resummation in both stabilizing and slowing down the evolution.

  1. Coulomb Logarithm in Nonideal and Degenerate Plasmas

    NASA Astrophysics Data System (ADS)

    Filippov, A. V.; Starostin, A. N.; Gryaznov, V. K.

    2018-03-01

    Various methods for determining the Coulomb logarithm in the kinetic theory of transport are considered, together with various choices of the plasma screening constant, both including and disregarding the contribution of the ion component and the boundary value of the electron wavevector. The correlation of ions is taken into account using the Ornstein-Zernike integral equation in the hypernetted-chain approximation. It is found that the effect of ion correlation in a nondegenerate plasma is weak, while in a degenerate plasma this effect must be taken into account when screening is determined by the electron component alone. The calculated values of the electrical conductivity of a hydrogen plasma are compared with the values determined experimentally in the megabar pressure range. It is shown that the values of the Coulomb logarithm can indeed be smaller than unity. Special experiments are proposed for a more exact determination of the Coulomb logarithm in a magnetic field at extremely high pressures, for which electron scattering by ions prevails.

  2. The energy distribution of subjets and the jet shape

    DOE PAGES

    Kang, Zhong-Bo; Ringer, Felix; Waalewijn, Wouter J.

    2017-07-13

    We present a framework that describes the energy distribution of subjets of radius r within a jet of radius R. We consider both an inclusive sample of subjets as well as subjets centered around a predetermined axis, from which the jet shape can be obtained. For r << R we factorize the physics at angular scales r and R to resum the logarithms of r/R. For central subjets, we consider both the standard jet axis and the winner-take-all axis, which involve double and single logarithms of r/R, respectively. All relevant one-loop matching coefficients are given, and an inconsistency in some previous results for cone jets is resolved. Our results for the standard jet shape differ from previous calculations at next-to-leading logarithmic order, because we account for the recoil of the standard jet axis due to soft radiation. Numerical results are presented for an inclusive subjet sample for pp → jet + X at next-to-leading order plus leading logarithmic order.

  3. The energy distribution of subjets and the jet shape

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, Zhong-Bo; Ringer, Felix; Waalewijn, Wouter J.

    We present a framework that describes the energy distribution of subjets of radius r within a jet of radius R. We consider both an inclusive sample of subjets as well as subjets centered around a predetermined axis, from which the jet shape can be obtained. For r << R we factorize the physics at angular scales r and R to resum the logarithms of r/R. For central subjets, we consider both the standard jet axis and the winner-take-all axis, which involve double and single logarithms of r/R, respectively. All relevant one-loop matching coefficients are given, and an inconsistency in some previous results for cone jets is resolved. Our results for the standard jet shape differ from previous calculations at next-to-leading logarithmic order, because we account for the recoil of the standard jet axis due to soft radiation. Numerical results are presented for an inclusive subjet sample for pp → jet + X at next-to-leading order plus leading logarithmic order.

  4. Volatilities, Traded Volumes, and Price Increments in Derivative Securities

    NASA Astrophysics Data System (ADS)

    Kim, Kyungsik; Lim, Gyuchang; Kim, Soo Yong; Scalas, Enrico

    2007-03-01

    We apply detrended fluctuation analysis (DFA) to the statistics of Korean treasury bond (KTB) futures, from which the logarithmic increments, volatilities, and traded volumes are estimated over a specific time lag. In our case, the logarithmic increment of futures prices has no long-memory property, while the volatility and the traded volume do exhibit long memory. To determine whether the volatility clustering is due to an inherent higher-order correlation not detected by applying the DFA directly to the logarithmic increments of the KTB futures, it is important to shuffle the original tick data of futures prices and to generate a geometric Brownian random walk with the same mean and standard deviation. Comparison of the three tick data sets shows that the higher-order correlation inherent in the logarithmic increments produces the volatility clustering. In particular, the result of the DFA on volatilities and traded volumes may support the hypothesis of price changes.

  5. Volatilities, traded volumes, and the hypothesis of price increments in derivative securities

    NASA Astrophysics Data System (ADS)

    Lim, Gyuchang; Kim, SooYong; Scalas, Enrico; Kim, Kyungsik

    2007-08-01

    A detrended fluctuation analysis (DFA) is applied to the statistics of Korean treasury bond (KTB) futures, from which the logarithmic increments, volatilities, and traded volumes are estimated over a specific time lag. In this study, the logarithmic increment of futures prices has no long-memory property, while the volatility and the traded volume exhibit long memory. To determine whether the volatility clustering is due to an inherent higher-order correlation not detected by the direct application of the DFA to the logarithmic increments of KTB futures, it is important to shuffle the original tick data of futures prices and to generate a geometric Brownian random walk with the same mean and standard deviation. It was found from a comparison of the three tick data sets that the higher-order correlation inherent in the logarithmic increments leads to volatility clustering. In particular, the result of the DFA on volatilities and traded volumes supports the hypothesis of price changes.
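
    A minimal Python implementation sketch of detrended fluctuation analysis (DFA-1) as used in these two records: build the profile of the series, detrend it in windows of size s, and estimate the scaling exponent from log F(s) versus log s. An exponent near 0.5 indicates no long memory (as reported for the logarithmic price increments); larger values indicate long memory (volatilities, volumes). White noise is used as a stand-in input.

    ```python
    # Detrended fluctuation analysis with linear detrending (DFA-1).
    import numpy as np

    def dfa_exponent(x, scales):
        y = np.cumsum(x - np.mean(x))                    # profile of the series
        fluct = []
        for s in scales:
            n_seg = len(y) // s
            segs = y[:n_seg * s].reshape(n_seg, s)
            t = np.arange(s)
            res = []
            for seg in segs:
                coeffs = np.polyfit(t, seg, 1)           # local linear trend
                res.append(np.mean((seg - np.polyval(coeffs, t)) ** 2))
            fluct.append(np.sqrt(np.mean(res)))
        slope, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
        return slope

    increments = np.random.default_rng(4).normal(size=20000)   # stand-in for log returns
    scales = np.array([16, 32, 64, 128, 256, 512])
    print(f"DFA exponent ~ {dfa_exponent(increments, scales):.2f}")  # ~0.5 for white noise
    ```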

  6. Nonlinear coherent optical image processing using logarithmic transmittance of bacteriorhodopsin films

    NASA Astrophysics Data System (ADS)

    Downie, John D.

    1995-08-01

    The transmission properties of some bacteriorhodopsin-film spatial light modulators are uniquely suited to allow nonlinear optical image-processing operations to be applied to images with multiplicative noise characteristics. A logarithmic amplitude-transmission characteristic of the film permits the conversion of multiplicative noise to additive noise, which may then be linearly filtered out in the Fourier plane of the transformed image. I present experimental results demonstrating the principle and the capability for several different image and noise situations, including deterministic noise and speckle. The bacteriorhodopsin film studied here displays the logarithmic transmission response for write intensities spanning a dynamic range greater than 2 orders of magnitude.

  7. Next-to-leading order Balitsky-Kovchegov equation with resummation

    DOE PAGES

    Lappi, T.; Mantysaari, H.

    2016-05-03

    Here, we solve the Balitsky-Kovchegov evolution equation at next-to-leading order accuracy including a resummation of large single and double transverse momentum logarithms to all orders. We numerically determine an optimal value for the constant under the large transverse momentum logarithm that enables including a maximal amount of the full NLO result in the resummation. When this value is used, the contribution from the α_s^2 terms without large logarithms is found to be small at large saturation scales and at small dipoles. Close to initial conditions relevant for phenomenological applications, these fixed-order corrections are shown to be numerically important.

  8. Nonlinear Coherent Optical Image Processing Using Logarithmic Transmittance of Bacteriorhodopsin Films

    NASA Technical Reports Server (NTRS)

    Downie, John D.

    1995-01-01

    The transmission properties of some bacteriorhodopsin-film spatial light modulators are uniquely suited to allow nonlinear optical image-processing operations to be applied to images with multiplicative noise characteristics. A logarithmic amplitude-transmission characteristic of the film permits the conversion of multiplicative noise to additive noise, which may then be linearly filtered out in the Fourier plane of the transformed image. I present experimental results demonstrating the principle and the capability for several different image and noise situations, including deterministic noise and speckle. The bacteriorhodopsin film studied here displays the logarithmic transmission response for write intensities spanning a dynamic range greater than 2 orders of magnitude.

  9. Resummation of jet veto logarithms at N3LLa + NNLO for W+W- production at the LHC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dawson, S.; Jaiswal, P.; Li, Ye

    We compute the resummed on-shell W+W- production cross section under a jet veto at the LHC to partial N3LL order matched to the fixed-order NNLO result. Differential NNLO cross sections are obtained from an implementation of qT subtraction in Sherpa. The two-loop virtual corrections to the qq¯→W+W- amplitude, used in both fixed-order and resummation predictions, are extracted from the public code qqvvamp. We perform resummation using soft collinear effective theory, with approximate beam functions where only the logarithmic terms are included at two-loop. In addition to scale uncertainties from the hard matching scale and the factorization scale, rapidity scale variations are obtained within the analytic regulator approach. Our resummation results show a decrease in the jet veto cross section compared to NNLO fixed-order predictions, with reduced scale uncertainties compared to NNLL+NLO resummed predictions. We include the loop-induced gg contribution with jet veto resummation to NLL+LO. The prediction shows good agreement with recent LHC measurements.

  10. Resummation of jet veto logarithms at N3LLa + NNLO for W+W- production at the LHC

    DOE PAGES

    Dawson, S.; Jaiswal, P.; Li, Ye; ...

    2016-12-01

    We compute the resummed on-shell W+W- production cross section under a jet veto at the LHC to partial N3LL order matched to the fixed-order NNLO result. Differential NNLO cross sections are obtained from an implementation of qT subtraction in Sherpa. The two-loop virtual corrections to the qq¯→W+W- amplitude, used in both fixed-order and resummation predictions, are extracted from the public code qqvvamp. We perform resummation using soft collinear effective theory, with approximate beam functions where only the logarithmic terms are included at two-loop. In addition to scale uncertainties from the hard matching scale and the factorization scale, rapidity scale variations are obtained within the analytic regulator approach. Our resummation results show a decrease in the jet veto cross section compared to NNLO fixed-order predictions, with reduced scale uncertainties compared to NNLL+NLO resummed predictions. We include the loop-induced gg contribution with jet veto resummation to NLL+LO. The prediction shows good agreement with recent LHC measurements.

  11. Climatology of contribution-weighted tropical rain rates based on TRMM 3B42

    NASA Astrophysics Data System (ADS)

    Venugopal, V.; Wallace, J. M.

    2016-10-01

    The climatology of annual mean tropical rain rate is investigated based on merged Tropical Rainfall Measuring Mission (TRMM) 3B42 data. At 0.25° × 0.25° spatial resolution, and 3-hourly temporal resolution, half the rain is concentrated within only ~1% of the area of the tropics at any given instant. When plotted as a function of the logarithm of rain rate, the cumulative contribution of rate-ranked rain occurrences to the annual mean rainfall in each grid box is S-shaped and its derivative, the contribution-weighted rain rate spectrum, is Gaussian-shaped. The 50% intercept of the cumulative contribution R50 is almost equivalent to the contribution-weighted mean logarithmic rain rate R̄L based on all significant rain occurrences. The spatial patterns of R50 and R̄L are similar to those obtained by mapping the fraction of the annual accumulation explained by rain occurrences with rates above various specified thresholds. The geographical distribution of R50 confirms the existence of patterns noted in prior analyses based on TRMM precipitation radar data and reveals several previously unnoticed features.
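
    A minimal Python sketch of the two statistics discussed above, applied to a synthetic lognormal sample of rain rates: R50, the rain rate at which rate-ranked occurrences have contributed half of the total accumulation, and the contribution-weighted mean logarithmic rain rate. For a lognormal distribution the two come out close, illustrating the near-equivalence noted in the abstract.

    ```python
    # Compute R50 and the contribution-weighted mean logarithmic rain rate.
    import numpy as np

    rng = np.random.default_rng(5)
    rates = rng.lognormal(mean=0.5, sigma=1.0, size=100000)      # mm/h, synthetic sample

    sorted_rates = np.sort(rates)
    cum_contrib = np.cumsum(sorted_rates) / sorted_rates.sum()   # S-shaped vs log(rate)
    r50 = sorted_rates[np.searchsorted(cum_contrib, 0.5)]        # 50% intercept

    weighted_mean_log = np.sum(rates * np.log(rates)) / rates.sum()
    rl_bar = np.exp(weighted_mean_log)                           # contribution-weighted mean

    print(f"R50 = {r50:.2f} mm/h, RL_bar = {rl_bar:.2f} mm/h")   # expected to be similar
    ```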

  12. The oxygen content of ocean bottom waters, the burial efficiency of organic carbon, and the regulation of atmospheric oxygen

    NASA Technical Reports Server (NTRS)

    Betts, J. N.; Holland, H. D.

    1991-01-01

    Data for the burial efficiency of organic carbon with marine sediments have been compiled for 69 locations. The burial efficiency as here defined is the ratio of the quantity of organic carbon which is ultimately buried to that which reaches the sediment-water interface. As noted previously, the sedimentation rate exerts a dominant influence on the burial efficiency. The logarithm of the burial efficiency is linearly related to the logarithm of the sedimentation rate at low sedimentation rates. At high sedimentation rates the burial efficiency can exceed 50% and becomes nearly independent of the sedimentation rate. The residual of the burial efficiency after the effect of the sedimentation rate has been subtracted is a weak function of the O2 concentration in bottom waters. The scatter is sufficiently large, so that the effect of the O2 concentration in bottom waters on the burial efficiency of organic matter could be either negligible or a minor but significant part of the mechanism that controls the level of O2 in the atmosphere.

  13. The logarithmic Cardy case: Boundary states and annuli

    NASA Astrophysics Data System (ADS)

    Fuchs, Jürgen; Gannon, Terry; Schaumann, Gregor; Schweigert, Christoph

    2018-05-01

    We present a model-independent study of boundary states in the Cardy case that covers all conformal field theories for which the representation category of the chiral algebra is a - not necessarily semisimple - modular tensor category. This class, which we call finite CFTs, includes all rational theories, but goes much beyond these, and in particular comprises many logarithmic conformal field theories. We show that the following two postulates for a Cardy case are compatible beyond rational CFT and lead to a universal description of boundary states that realizes a standard mathematical setup: First, for bulk fields, the pairing of left and right movers is given by (a coend involving) charge conjugation; and second, the boundary conditions are given by the objects of the category of chiral data. For rational theories our proposal reproduces the familiar result for the boundary states of the Cardy case. Further, with the help of sewing we compute annulus amplitudes. Our results show in particular that these possess an interpretation as partition functions, a constraint that for generic finite CFTs is much more restrictive than for rational ones.

  14. Deposition and persistence of beachcast seabird carcasses

    USGS Publications Warehouse

    van Pelt, Thomas I.; Piatt, John F.

    1995-01-01

    Following a massive wreck of guillemots (Uria aalge) in late winter and spring of 1993, we monitored the deposition and subsequent disappearance of 398 beachcast guillemot carcasses on two beaches in Resurrection Bay, Alaska, during a 100 day period. Deposition of carcasses declined logarithmically with time after the original event. Since fresh carcasses were more likely to be removed between counts than older carcasses, persistence rates increased logarithmically over time. Scavenging appeared to be the primary cause of carcass removal, followed by burial in beach debris and sand. Along-shore transport was negligible. We present an equation which estimates the number of carcasses deposited at time zero from beach surveys conducted some time later, using non-linear persistence rates that are a function of time. We use deposition rates to model the accumulation of beached carcasses, accounting for further deposition subsequent to the original event. Finally, we present a general method for extrapolating from a single count the number of carcasses cumulatively deposited on surveyed beaches, and discuss how our results can be used to assess the magnitude of mass seabird mortality events from beach surveys.

  15. Dynamical conductivity at the dirty superconductor-metal quantum phase transition.

    PubMed

    Del Maestro, Adrian; Rosenow, Bernd; Hoyos, José A; Vojta, Thomas

    2010-10-01

    We study the transport properties of ultrathin disordered nanowires in the neighborhood of the superconductor-metal quantum phase transition. To this end we combine numerical calculations with analytical strong-disorder renormalization group results. The quantum critical conductivity at zero temperature diverges logarithmically as a function of frequency. In the metallic phase, it obeys activated scaling associated with an infinite-randomness quantum critical point. We extend the scaling theory to higher dimensions and discuss implications for experiments.

  16. Longitudinal structure function from logarithmic slopes of F2 at low x

    NASA Astrophysics Data System (ADS)

    Boroun, G. R.

    2018-01-01

    Using Laplace transform techniques, I calculate the longitudinal structure function FL(x, Q^2) from the scaling violations of the proton structure function F2(x, Q^2) and make a critical study of this relationship between the structure functions at leading order (LO) up to next-to-next-to-leading order (NNLO) at small x. Furthermore, I consider heavy quark contributions to the relation between the structure functions, which leads to a compact formula for Nf = 3 + heavy. The nonlinear corrections to the longitudinal structure function at LO up to NNLO are shown for Nf = 4 (light quark flavors), based on the nonlinear corrections at R = 2 and R = 4 GeV^-1. The results are compared with experimental data on the longitudinal proton structure function FL in the range 6.5 ≤ Q^2 ≤ 800 GeV^2.

  17. Postseismic deformation following the 2010 Mw 8.8 Maule and 2014 Mw 8.1 Pisagua megathrust earthquakes in Chile

    NASA Astrophysics Data System (ADS)

    Weiss, J. R.; Saunders, A.; Qiu, Q.; Foster, J. H.; Gomez, D.; Bevis, M. G.; Smalley, R., Jr.; Cimbaro, S.; Lenzano, L. E.; Barón, J.; Baez, J. C.; Echalar, A.; Avery, J.; Wright, T. J.

    2017-12-01

    We use a large regional network of continuous GPS sites to investigate postseismic deformation following the Mw 8.8 Maule and Mw 8.1 Pisagua earthquakes in Chile. Geodetic observations of surface displacements associated with megathrust earthquakes aid our understanding of the subduction zone earthquake cycle including postseismic processes such as afterslip and viscoelastic relaxation. The observations also help place constraints on the rheology and structure of the crust and upper mantle. We first empirically model the data and find that, while single-term logarithmic functions adequately fit the postseismic timeseries, they do a poor job of characterizing the rapid displacements in the days to weeks following the earthquakes. Combined exponential-logarithmic functions better capture the inferred near-field transition between afterslip and viscous relaxation, however displacements are best fit by three-term exponential functions with characteristic decay times of 15, 250, and 1500 days. Viscoelastic modeling of the velocity field and timeseries following the Maule earthquake suggests that the rheology is complex but is consistent with a 100-km-thick asthenosphere channel of viscosity 1018 Pa s sandwiched between a 40-km-thick elastic lid and a strong viscoelastic upper mantle. Variations in lid thickness of up to 40 km may be present and in some locations rapid deformation within the first months to years following the Maule event requires an even lower effective viscosity or a significant contribution from afterslip. We investigate this further by jointly inverting the GPS data for the time evolution of afterslip and viscous flow in the mantle wedge surrounding the Maule event.
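
    A minimal Python sketch of the empirical comparison described above: fit both a single logarithmic function and a three-term exponential with decay times fixed at 15, 250, and 1500 days to a postseismic displacement time series, and compare the misfits. The series is synthetic; the amplitudes and noise level are arbitrary.

    ```python
    # Compare logarithmic and three-term exponential fits to a postseismic series.
    import numpy as np
    from scipy.optimize import curve_fit

    def log_model(t, a, tau):
        return a * np.log(1.0 + t / tau)

    def exp3_model(t, a1, a2, a3):
        return (a1 * (1 - np.exp(-t / 15.0)) +
                a2 * (1 - np.exp(-t / 250.0)) +
                a3 * (1 - np.exp(-t / 1500.0)))

    t = np.arange(1.0, 2000.0, 5.0)                                  # days since earthquake
    truth = exp3_model(t, 30.0, 60.0, 90.0)                          # mm, synthetic signal
    obs = truth + np.random.default_rng(6).normal(0, 2.0, t.size)

    p_log, _ = curve_fit(log_model, t, obs, p0=[50.0, 50.0])
    p_exp, _ = curve_fit(exp3_model, t, obs, p0=[10.0, 10.0, 10.0])

    rms_log = np.sqrt(np.mean((obs - log_model(t, *p_log)) ** 2))
    rms_exp = np.sqrt(np.mean((obs - exp3_model(t, *p_exp)) ** 2))
    print(f"RMS misfit: logarithmic {rms_log:.2f} mm, 3-term exponential {rms_exp:.2f} mm")
    ```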

  18. The application of reduced absorption cross section on the identification of the compounds with similar function-groups

    NASA Astrophysics Data System (ADS)

    Yu, Fei; Zuo, Jian; Mu, Kai-jun; Zhang, Zhen-wei; Zhang, Liang-liang; Zhang, Lei-wei; Zhang, Cun-lin

    2013-08-01

    Terahertz spectroscopy is a powerful tool for materials investigation. Low-frequency vibrations are usually investigated by means of the absorption coefficient alone, regardless of the refractive index, which leads to the disregard of some inherent low-frequency vibrational information of the chemical compounds. Moreover, due to scattering inside the sample, the absorption features are somewhat distorted, so that absorption-based material identification is not reliable enough. Here, a statistical parameter named the reduced absorption cross section (RACS) is introduced. It can not only help us investigate the molecular dynamics but also distinguish one chemical compound from another that has similar functional groups. Experiments are carried out on L-Tyrosine and L-Phenylalanine and on mixtures of the two at different mass ratios as an example of the application of RACS. The results show that the RACS spectra of L-Tyrosine and L-Phenylalanine preserve the spectral fingerprint information of the absorption spectrum. The log plots of the RACS of the two amino acids show power-law behavior, σ_R(ν) ~ ν^α, i.e., there is a linear relation between wavenumber and RACS in the double logarithmic plot. The exponents α are therefore the slopes of the RACS curves in the double logarithmic plot. The large differences in the exponents α between the two amino acids and their mixtures can be seen directly from the slopes of the RACS curves. We can thus use the RACS analytical method to distinguish complex compounds with similar functional groups, and mixtures, from others that have similar absorption peaks in the THz region.
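
    The power-law exponent α described above is, in practice, the slope of a straight-line fit in the double logarithmic plot. A minimal sketch of that fit in Python follows; the wavenumber grid and exponent below are illustrative placeholders, not values from the paper.

        import numpy as np

        # Synthetic stand-in for a measured RACS curve: sigma_R ~ nu^alpha with noise.
        rng = np.random.default_rng(0)
        nu = np.linspace(10.0, 100.0, 50)      # wavenumber grid (illustrative)
        alpha_true = 2.3                       # assumed exponent, for demonstration only
        sigma_R = nu**alpha_true * rng.lognormal(0.0, 0.05, nu.size)

        # Slope of the double logarithmic plot = power-law exponent alpha.
        slope, intercept = np.polyfit(np.log10(nu), np.log10(sigma_R), 1)
        print(f"fitted exponent alpha ~= {slope:.2f}")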

  19. Product and Quotient Rules from Logarithmic Differentiation

    ERIC Educational Resources Information Center

    Chen, Zhibo

    2012-01-01

    A new application of logarithmic differentiation is presented, which provides an alternative elegant proof of two basic rules of differentiation: the product rule and the quotient rule. The proof can intrigue students, help promote their critical thinking and rigorous reasoning and deepen their understanding of previously encountered concepts. The…

  20. Regularized Laplacian determinants of self-similar fractals

    NASA Astrophysics Data System (ADS)

    Chen, Joe P.; Teplyaev, Alexander; Tsougkas, Konstantinos

    2018-06-01

    We study the spectral zeta functions of the Laplacian on fractal sets which are locally self-similar fractafolds, in the sense of Strichartz. These functions are known to meromorphically extend to the entire complex plane, and the locations of their poles, sometimes referred to as complex dimensions, are of special interest. We give examples of locally self-similar sets such that their complex dimensions are not on the imaginary axis, which allows us to interpret their Laplacian determinant as the regularized product of their eigenvalues. We then investigate a connection between the logarithm of the determinant of the discrete graph Laplacian and the regularized one.

  1. Sulfate passivation in the lead-acid system as a capacity limiting process

    NASA Astrophysics Data System (ADS)

    Kappus, W.; Winsel, A.

    1982-10-01

    Calculations of the discharge capacity of Pb and PbO2 electrodes as a function of various parameters are presented. They are based on the solution-precipitation mechanism for the discharge reaction and its formulation by Winsel et al. A logarithmic pore size distribution is used to fit experimental porosigrams of Pb and PbO2 electrodes. Based on this pore size distribution, the capacity is calculated as a function of current, BET surface area, and porosity of the PbSO4 diaphragm. The PbSO4 supersaturation, as the driving force of the diffusive transport, is chosen as a free parameter.

  2. Public Transport Systems in Poland: From Bialystok to Zielona Góra by Bus and Tram Using Universal Statistics of Complex Networks

    NASA Astrophysics Data System (ADS)

    Sienkiewicz, J.; Holyst, J. A.

    2005-05-01

    We have examined the topology of 21 public transport networks in Poland. Our data exhibit several universal features of the considered systems when they are analyzed from the point of view of evolving networks. Depending on the assumed definition of the network topology, the degree distribution can follow a power law p(k) ~ k^(-γ) or can be described by an exponential function p(k) ~ exp(-αk). In the first case one observes that mean distances between two nodes are a linear function of the logarithm of their degree product.

  3. From statistics of regular tree-like graphs to distribution function and gyration radius of branched polymers

    NASA Astrophysics Data System (ADS)

    Grosberg, Alexander Y.; Nechaev, Sergei K.

    2015-08-01

    We consider a flexible branched polymer with quenched branch structure and show that its conformational entropy as a function of its gyration radius R, at large R, obeys, in the scaling sense, ΔS ~ R^2/(a^2 L), with a the bond length (or Kuhn segment) and L defined as an average spanning distance. We show that this estimate is valid up to at most a logarithmic correction for any tree. We do so by explicitly computing the largest eigenvalues of Kramers matrices for both regular and 'sparse' three-branched trees, uncovering along the way their peculiar mathematical properties.

  4. Definition and Evolution of Transverse Momentum Distributions

    NASA Astrophysics Data System (ADS)

    Echevarría, Miguel G.; Idilbi, Ahmad; Scimemi, Ignazio

    We consider the definition of unpolarized transverse-momentum-dependent parton distribution functions while staying on the light-cone. By imposing a requirement of identical treatment of the two collinear sectors, our approach, compatible with a generic factorization theorem with the soft function included, is valid for all non-ultraviolet regulators (as it should be), an issue which causes much confusion in the whole field. We explain how large logarithms can be resummed in a way that can be considered an alternative to the use of the Collins-Soper evolution equation. The evolution properties are also discussed, and gauge invariance, in both classes of gauges, regular and singular, is emphasized.

  5. Discrete Scale Invariance of Human Large EEG Voltage Deflections is More Prominent in Waking than Sleep Stage 2.

    PubMed

    Zorick, Todd; Mandelkern, Mark A

    2015-01-01

    Electroencephalography (EEG) is typically viewed through the lens of spectral analysis. Recently, multiple lines of evidence have demonstrated that the underlying neuronal dynamics are characterized by scale-free avalanches. These results suggest that techniques from statistical physics may be used to analyze EEG signals. We utilized a publicly available database of fourteen subjects with waking and sleep stage 2 EEG tracings per subject, and observe that power-law dynamics of critical-state neuronal avalanches are not sufficient to fully describe essential features of EEG signals. We hypothesized that this could reflect the phenomenon of discrete scale invariance (DSI) in EEG large voltage deflections (LVDs) as being more prominent in waking consciousness. We isolated LVDs, and analyzed logarithmically transformed LVD size probability density functions (PDF) to assess for DSI. We find evidence of increased DSI in waking, as opposed to sleep stage 2 consciousness. We also show that the signatures of DSI are specific for EEG LVDs, and not a general feature of fractal simulations with similar statistical properties to EEG. Removing only LVDs from waking EEG produces a reduction in power in the alpha and beta frequency bands. These findings may represent a new insight into the understanding of the cortical dynamics underlying consciousness.

  6. INTERNAL LIMITING MEMBRANE PEELING VERSUS INVERTED FLAP TECHNIQUE FOR TREATMENT OF FULL-THICKNESS MACULAR HOLES: A COMPARATIVE STUDY IN A LARGE SERIES OF PATIENTS.

    PubMed

    Rizzo, Stanislao; Tartaro, Ruggero; Barca, Francesco; Caporossi, Tomaso; Bacherini, Daniela; Giansanti, Fabrizio

    2017-12-08

    The inverted flap (IF) technique has recently been introduced in macular hole (MH) surgery. The IF technique has shown an increased success rate in the case of large MHs and in MHs associated with high myopia. This study reports the anatomical and functional results in a large series of patients affected by MH treated using pars plana vitrectomy and gas tamponade combined with internal limiting membrane (ILM) peeling or IF. This is a retrospective, consecutive, nonrandomized comparative study of patients affected by idiopathic or myopic MH treated using small-gauge pars plana vitrectomy (25- or 23-gauge) between January 2011 and May 2016. The patients were divided into two groups according to the ILM removal technique (complete removal vs. IF). A subgroup analysis was performed according to the MH diameter (MH < 400 µm and MH ≥ 400 µm), axial length (AL < 26 mm and AL ≥ 26 mm), and the presence of chorioretinal atrophy in the macular area (present or absent). We included 620 eyes of 570 patients affected by an MH; 300 patients underwent pars plana vitrectomy and ILM peeling, and 320 patients underwent pars plana vitrectomy and IF. Overall, 84.94% of the patients had complete anatomical success, characterized by MH closure after the operation. In particular, among the patients who underwent only ILM peeling the closure rate was 78.75%, whereas among the patients who underwent the IF technique it was 91.93% (P = 0.001). Among the patients affected by a full-thickness MH ≥400 µm, success was achieved in 95.6% of the cases in the IF group and in 78.6% in the ILM peeling group (P = 0.001); among the patients with an axial length ≥26 mm, success was achieved in 88.4% of the cases in the IF group and in 38.9% in the ILM peeling group (P = 0.001). Average preoperative best-corrected visual acuity was 0.77 (SD = 0.32) logarithm of the minimum angle of resolution (20/118 Snellen) in the peeling group and 0.74 (SD = 0.33) logarithm of the minimum angle of resolution (20/110 Snellen) in the IF group (P = 0.31). Mean postoperative best-corrected visual acuity was 0.52 (SD = 0.42) logarithm of the minimum angle of resolution (20/66 Snellen) in the peeling group and 0.43 (SD = 0.31) logarithm of the minimum angle of resolution (20/53 Snellen) in the IF group (P = 0.003). Vitrectomy combined with the inverted ILM flap technique appears to be an effective surgical approach for idiopathic and myopic large MHs, improving both functional and anatomical outcomes.

  7. Comments on "The multisynapse neural network and its application to fuzzy clustering".

    PubMed

    Yu, Jian; Hao, Pengwei

    2005-05-01

    In the above-mentioned paper, Wei and Fahn proposed a neural architecture, the multisynapse neural network, to solve constrained optimization problems including high-order, logarithmic, and sinusoidal forms, etc. As one of its main applications, a fuzzy bidirectional associative clustering network (FBACN) was proposed for fuzzy-partition clustering according to the objective-functional method. The connection between the objective-functional-based fuzzy c-partition algorithms and FBACN is the Lagrange multiplier approach. Unfortunately, the Lagrange multiplier approach was incorrectly applied so that FBACN does not equivalently minimize its corresponding constrained objective-function. Additionally, Wei and Fahn adopted traditional definition of fuzzy c-partition, which is not satisfied by FBACN. Therefore, FBACN can not solve constrained optimization problems, either.

  8. Critical N = (1, 1) general massive supergravity

    NASA Astrophysics Data System (ADS)

    Deger, Nihat Sadik; Moutsopoulos, George; Rosseel, Jan

    2018-04-01

    In this paper we study the supermultiplet structure of N = (1, 1) General Massive Supergravity at non-critical and critical points of its parameter space. To do this, we first linearize the theory around its maximally supersymmetric AdS3 vacuum and obtain the full linearized Lagrangian including fermionic terms. At generic values of the parameters, the linearized modes can be organized as two massless and two massive multiplets, which supersymmetry relates in the standard way. At critical points logarithmic modes appear, and we find that at three such points some of the supersymmetry transformations are non-invertible in the logarithmic multiplets. However, at the fourth critical point, there is a massive logarithmic multiplet with invertible supersymmetry transformations.

  9. Extracellular metalloproteinases in Phytomonas serpens.

    PubMed

    Vermelho, Alane B; Almeida, Flávia V S; Bronzato, Leandro S; Branquinha, Marta H

    2003-03-01

    The detection of extracellular proteinases in Phytomonas serpens, a trypanosomatid isolated from tomato fruits, is demonstrated in this paper. Maximal production occurred at the end of the logarithmic phase of growth. These enzymes exhibited selective substrate utilization in SDS-PAGE, being more active with gelatin; hemoglobin and bovine serum albumin were not degraded. Three proteinases were detected in SDS-PAGE-gelatin, with apparent molecular masses between 94 and 70 kDa. The proteolytic activity was completely blocked by 1,10-phenanthroline and strongly inhibited by EDTA, whereas a partial inhibition was observed with trans-epoxysuccinyl-L-leucylamido-(4-guanidino) butane (E-64) and soybean trypsin inhibitor; phenylmethylsulfonyl fluoride weakly inhibited the enzymes. This inhibition profile indicated that these extracellular proteinases belong to the metalloproteinase class.

  10. Next-to-leading logarithmic QCD contribution of the electromagnetic dipole operator to B¯→Xsγγ with a massive strange quark

    NASA Astrophysics Data System (ADS)

    Asatrian, H. M.; Greub, C.

    2014-05-01

    We calculate the O(α_s) corrections to the double differential decay width dΓ_77/(ds_1 ds_2) for the process B̄ → X_s γγ, originating from diagrams involving the electromagnetic dipole operator O_7. The kinematical variables s_1 and s_2 are defined as s_i = (p_b - q_i)^2/m_b^2, where p_b, q_1, q_2 are the momenta of the b quark and the two photons. We introduce a nonzero mass m_s for the strange quark to regulate configurations where the gluon or one of the photons becomes collinear with the strange quark, and retain terms which are logarithmic in m_s while discarding terms which go to zero in the limit m_s → 0. When combining virtual and bremsstrahlung corrections, the infrared and collinear singularities induced by soft and/or collinear gluons drop out. By our cuts the photons do not become soft, but one of them can become collinear with the strange quark. This implies that in the final result a single logarithm of m_s survives. In principle, the configurations with collinear photon emission could be treated using fragmentation functions. In a related work we find that similar results can be obtained when simply interpreting m_s appearing in the final result as a constituent mass. We do so in the present paper and vary m_s between 400 and 600 MeV in the numerics. This work extends a previous paper by us, where only the leading power terms with respect to the (normalized) hadronic mass s_3 = (p_b - q_1 - q_2)^2/m_b^2 were taken into account in the underlying triple differential decay width dΓ_77/(ds_1 ds_2 ds_3).

  11. Genetic linkage map construction and QTL mapping of salt tolerance traits in Zoysiagrass (Zoysia japonica).

    PubMed

    Guo, Hailin; Ding, Wanwen; Chen, Jingbo; Chen, Xuan; Zheng, Yiqi; Wang, Zhiyong; Liu, Jianxiu

    2014-01-01

    Zoysiagrass (Zoysia Willd.) is an important warm-season turfgrass that is grown in many parts of the world. Salt tolerance is an important trait in zoysiagrass breeding programs. In this study, a genetic linkage map was constructed using sequence-related amplified polymorphism markers and random amplified polymorphic DNA markers based on an F1 population comprising 120 progeny derived from a cross between Zoysia japonica Z105 (salt-tolerant accession) and Z061 (salt-sensitive accession). The linkage map covered 1211 cM with an average marker distance of 5.0 cM and contained 24 linkage groups with 242 marker loci (217 sequence-related amplified polymorphism markers and 25 random amplified polymorphic DNA markers). Quantitative trait loci affecting the salt tolerance of zoysiagrass were identified using the constructed genetic linkage map. Two significant quantitative trait loci (qLF-1 and qLF-2) for leaf firing percentage were detected: qLF-1, at 36.3 cM on linkage group LG4 with a logarithm of odds value of 3.27, explained 13.1% of the total variation of leaf firing; qLF-2, at 42.3 cM on LG5 with a logarithm of odds value of 2.88, explained 29.7% of the total variation of leaf firing. A significant quantitative trait locus (qSCW-1) for reduced percentage of dry shoot clipping weight was detected at 44.1 cM on LG5, with a logarithm of odds value of 4.0, which explained 65.6% of the total variation. This study provides important information for further functional analysis of salt-tolerance genes in zoysiagrass. Molecular markers linked with quantitative trait loci for salt tolerance will be useful in zoysiagrass breeding programs using marker-assisted selection.

  12. Dynamical conductivity at the dirty superconductor-metal quantum phase transition

    NASA Astrophysics Data System (ADS)

    Hoyos, J. A.; Del Maestro, Adrian; Rosenow, Bernd; Vojta, Thomas

    2011-03-01

    We study the transport properties of ultrathin disordered nanowires in the neighborhood of the superconductor-metal quantum phase transition. To this end we combine numerical calculations with analytical strong-disorder renormalization group results. The quantum critical conductivity at zero temperature diverges logarithmically as a function of frequency. In the metallic phase, it obeys activated scaling associated with an infinite-randomness quantum critical point. We extend the scaling theory to higher dimensions and discuss implications for experiments. Financial support: Fapesp, CNPq, NSF, and Research Corporation.

  13. Enhancement of concentration range of chromatographically detectable components with array detector mass spectrometry

    DOEpatents

    Enke, Christie

    2013-02-19

    Methods and instruments for high dynamic range analysis of sample components are described. A sample is subjected to time-dependent separation, ionized, and the ions dispersed with a constant integration time across an array of detectors according to the ions' m/z values. Each of the detectors in the array has a dynamically adjustable gain or a logarithmic response function, producing an instrument capable of detecting a ratio of responses of 4 or more orders of magnitude.

  14. How Many Is a Zillion? Sources of Number Distortion

    ERIC Educational Resources Information Center

    Rips, Lance J.

    2013-01-01

    When young children attempt to locate the positions of numerals on a number line, the positions are often logarithmically rather than linearly distributed. This finding has been taken as evidence that the children represent numbers on a mental number line that is logarithmically calibrated. This article reports a statistical simulation showing…

  15. Logarithmic Transformations in Regression: Do You Transform Back Correctly?

    ERIC Educational Resources Information Center

    Dambolena, Ismael G.; Eriksen, Steven E.; Kopcso, David P.

    2009-01-01

    The logarithmic transformation is often used in regression analysis for a variety of purposes such as the linearization of a nonlinear relationship between two or more variables. We have noticed that when this transformation is applied to the response variable, the computation of the point estimate of the conditional mean of the original response…

  16. Spatially averaged flow over a wavy boundary revisited

    USGS Publications Warehouse

    McLean, S.R.; Wolfe, S.R.; Nelson, J.M.

    1999-01-01

    Vertical profiles of streamwise velocity measured over bed forms are commonly used to deduce boundary shear stress for the purpose of estimating sediment transport. These profiles may be derived locally or from some sort of spatial average. Arguments for using the latter procedure are based on the assumption that spatial averaging of the momentum equation effectively removes local accelerations from the problem. Using analogies based on steady, uniform flows, it has been argued that the spatially averaged velocity profiles are approximately logarithmic and can be used to infer values of boundary shear stress. This technique of using logarithmic profiles is investigated using detailed laboratory measurements of flow structure and boundary shear stress over fixed two-dimensional bed forms. Spatial averages over the length of the bed form of mean velocity measurements at constant distances from the mean bed elevation yield vertical profiles that are highly logarithmic even though the effect of the bottom topography is observed throughout the water column. However, logarithmic fits of these averaged profiles do not yield accurate estimates of the measured total boundary shear stress. Copyright 1999 by the American Geophysical Union.
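
    The procedure evaluated in this study, inferring boundary shear stress from a logarithmic fit to a spatially averaged velocity profile, can be sketched with a generic law-of-the-wall fit. This is not the authors' analysis; the velocity data, von Kármán constant, and fluid density below are illustrative assumptions.

        import numpy as np

        kappa, rho = 0.41, 1000.0    # von Karman constant; water density (kg/m^3), assumed
        z = np.array([0.02, 0.04, 0.08, 0.16, 0.32])     # heights above mean bed (m), illustrative
        u = np.array([0.31, 0.36, 0.41, 0.46, 0.51])     # spatially averaged velocities (m/s), illustrative

        # Log law: u(z) = (u_*/kappa) * ln(z/z0), i.e. u is linear in ln(z).
        slope, intercept = np.polyfit(np.log(z), u, 1)
        u_star = kappa * slope                 # shear velocity (m/s)
        z0 = np.exp(-intercept / slope)        # roughness length (m)
        tau_b = rho * u_star**2                # inferred boundary shear stress (Pa)
        print(f"u_* = {u_star:.3f} m/s, z0 = {z0:.4f} m, tau_b = {tau_b:.2f} Pa")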

  17. Logarithmic r-θ mapping for hybrid optical neural network filter for multiple objects recognition within cluttered scenes

    NASA Astrophysics Data System (ADS)

    Kypraios, Ioannis; Young, Rupert C. D.; Chatwin, Chris R.; Birch, Phil M.

    2009-04-01

    The window unit in the design of the complex logarithmic r-θ mapping for the hybrid optical neural network filter allows multiple objects of the same class to be detected within the input image. Additionally, the architecture of the neural network unit of the complex logarithmic r-θ mapping for the hybrid optical neural network filter becomes attractive for accommodating the recognition of multiple objects of different classes within the input image by modifying the output layer of the unit. We test the overall filter for the recognition of multiple objects of the same and of different classes within cluttered input images and video sequences of cluttered scenes. The logarithmic r-θ mapping for the hybrid optical neural network filter is shown to exhibit, with a single pass over the input data, simultaneous in-plane rotation, out-of-plane rotation, scale, log r-θ map translation and shift invariance, as well as good clutter tolerance, by correctly recognizing the different objects within the cluttered scenes. We also record in our results additional information extracted from the cluttered scenes about the objects' relative position, scale and in-plane rotation.

  18. [Ophthalmologic reading charts : Part 2: Current logarithmically scaled reading charts].

    PubMed

    Radner, W

    2016-12-01

    To analyze currently available reading charts regarding print size, logarithmic print size progression, and the background of test-item standardization. For the present study, the following logarithmically scaled reading charts were investigated using a measuring microscope (iNexis VMA 2520; Nikon, Tokyo): Eschenbach, Zeiss, OCULUS, MNREAD (Minnesota Near Reading Test), Colenbrander, and RADNER. Calculations were made according to EN-ISO 8596 and the International Research Council recommendations. Modern reading charts and cards exhibit a logarithmic progression of print sizes. The RADNER reading charts comprise four different cards with standardized test items (sentence optotypes), a well-defined stop criterion, accurate letter sizes, and a high print quality. Numbers and Landolt rings are also given in the booklet. The OCULUS cards have currently been reissued according to recent standards and also exhibit a high print quality. In addition to letters, numbers, Landolt rings, and examples taken from a timetable and the telephone book, sheet music is also offered. The Colenbrander cards use short sentences of 44 characters, including spaces, and exhibit inaccuracy at smaller letter sizes, as do the MNREAD cards. The MNREAD cards use sentences of 60 characters, including spaces, and have a high print quality. Modern reading charts show that international standards can be achieved with test items similar to optotypes, by using recent technology and developing new concepts of test-item standardization. Accurate print sizes, high print quality, and a logarithmic progression should become the minimum requirements for reading charts and reading cards in ophthalmology.
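
    For reference, the logarithmic print-size progression these charts share follows the usual logMAR convention of 0.1 log-unit steps, so consecutive print sizes differ by a factor of 10^0.1 ≈ 1.2589. A short sketch of how such a geometric ladder of print sizes can be generated; the base size of 8 pt and the number of steps are arbitrary assumptions for illustration.

        # LogMAR-style print size ladder: each step is 0.1 log units (factor 10**0.1 ~ 1.2589).
        BASE_SIZE_PT = 8.0      # assumed smallest print size, for illustration only
        N_STEPS = 10

        sizes = [BASE_SIZE_PT * 10 ** (0.1 * k) for k in range(N_STEPS)]
        for k, s in enumerate(sizes):
            print(f"step {k:2d}: {s:6.2f} pt")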

  19. Linear air-fuel sensor development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garzon, F.; Miller, C.

    1996-12-14

    The electrochemical zirconia solid electrolyte oxygen sensor is extensively used for monitoring oxygen concentrations in various fields. Such sensors are currently utilized in automobiles to monitor the exhaust gas composition and control the air-to-fuel ratio, thus reducing harmful emission components and improving fuel economy. Zirconia oxygen sensors are divided into two classes of devices: (1) potentiometric or logarithmic air/fuel sensors; and (2) amperometric or linear air/fuel sensors. The potentiometric sensors are ideally suited to monitor the air-to-fuel ratio close to the complete combustion stoichiometry, a value of about 14.8 to 1 parts by volume. This occurs because the oxygen concentration changes by many orders of magnitude as the air/fuel ratio is varied through the stoichiometric value. However, the potentiometric sensor is not very sensitive to changes in oxygen partial pressure away from the stoichiometric point due to the logarithmic dependence of the output voltage signal on the oxygen partial pressure. It is often advantageous to operate gasoline-powered piston engines with excess combustion air; this improves fuel economy and reduces hydrocarbon emissions. To maintain stable combustion away from stoichiometry, and to enable engines to operate in the excess oxygen (lean burn) region, several limiting-current amperometric sensors have been reported. These sensors are based on the electrochemical oxygen ion pumping of a zirconia electrolyte. They typically show reproducible limiting-current plateaus with an applied voltage, caused by the gas diffusion overpotential at the cathode.

  20. An Investigation of Students' Errors in Logarithms

    ERIC Educational Resources Information Center

    Ganesan, Raman; Dindyal, Jaguthsing

    2014-01-01

    In this study we set out to investigate the errors made by students in logarithms. A test with 16 items was administered to 89 Secondary three students (Year 9). The errors made by the students were categorized using four categories from a framework by Movshovitz-Hadar, Zaslavsky, and Inbar (1987). It was found that students in the top third were…

  1. Decay of Correlations, Quantitative Recurrence and Logarithm Law for Contracting Lorenz Attractors

    NASA Astrophysics Data System (ADS)

    Galatolo, Stefano; Nisoli, Isaia; Pacifico, Maria Jose

    2018-03-01

    In this paper we prove that a class of skew product maps with non-uniformly hyperbolic base has exponential decay of correlations. We apply this to obtain a logarithm law for the hitting time associated with a contracting Lorenz attractor at all points having a well-defined local dimension, and a quantitative recurrence estimate.

  2. Dead-time compensation for a logarithmic display rate meter

    DOEpatents

    Larson, John A.; Krueger, Frederick P.

    1988-09-20

    An improved circuit is provided for application to a radiation survey meter that uses a detector that is subject to dead time. The circuit compensates for dead time over a wide range of count rates by producing a dead-time pulse for each detected event, a live-time pulse that spans the interval between dead-time pulses, and circuits that average the value of these pulses over time. The logarithm of each of these values is obtained and the logarithms are subtracted to provide a signal that is proportional to a count rate that is corrected for the effects of dead time. The circuit produces a meter indication and is also capable of producing an audible indication of detected events.

  3. Dead-time compensation for a logarithmic display rate meter

    DOEpatents

    Larson, J.A.; Krueger, F.P.

    1987-10-05

    An improved circuit is provided for application to a radiation survey meter that uses a detector that is subject to dead time. The circuit compensates for dead time over a wide range of count rates by producing a dead-time pulse for each detected event, a live-time pulse that spans the interval between dead-time pulses, and circuits that average the value of these pulses over time. The logarithm of each of these values is obtained and the logarithms are subtracted to provide a signal that is proportional to a count rate that is corrected for the effects of dead time. The circuit produces a meter indication and is also capable of producing an audible indication of detected events. 5 figs.
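
    The "subtract the logarithms" scheme of these two patent records can be illustrated numerically. Below is a minimal sketch, assuming a non-paralyzable detector model with a fixed per-event dead time τ (an assumption not stated in the abstracts): the averaged dead-time fraction D = mτ and live-time fraction L = 1 − mτ (m the measured rate) recover the true rate as exp(log D − log L − log τ).

        import math

        TAU = 50e-6              # assumed per-event dead time (s)
        true_rate = 5000.0       # true event rate (counts/s), chosen for the demonstration

        measured_rate = true_rate / (1.0 + true_rate * TAU)   # non-paralyzable dead-time model
        dead_fraction = measured_rate * TAU                   # average of the dead-time pulses
        live_fraction = 1.0 - dead_fraction                   # average of the live-time pulses

        # Subtracting logarithms, as in the rate-meter circuit, yields the corrected rate.
        corrected = math.exp(math.log(dead_fraction) - math.log(live_fraction) - math.log(TAU))
        print(f"measured {measured_rate:.0f} cps -> corrected {corrected:.0f} cps (true {true_rate:.0f} cps)")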

  4. GKS. Minimal Graphical Kernel System C Binding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simons, R.W.

    1985-10-01

    GKS (the Graphical Kernel System) is both an American National Standard (ANS) and an ISO international standard graphics package. It conforms to ANS X3.124-1985 and to the May 1985 draft proposal for the GKS C Language Binding standard under development by the X3H3 Technical Committee. This implementation includes level ma (the lowest level of the ANS) and some routines from level mb. The following graphics capabilities are supported: two-dimensional lines, markers, text, and filled areas; control over color, line type, and character height and alignment; multiple simultaneous workstations and multiple transformations; and locator and choice input. Tektronix 4014 and 4115 terminals are supported, and support for other devices may be added. Since this implementation was developed under UNIX, it uses makefiles, C shell scripts, the ar library maintainer, editor scripts, and other UNIX utilities. Therefore, implementing it under another operating system may require considerable effort. Also included with GKS is the small plot package (SPP), a direct descendant of the WEASEL plot package developed at Sandia. SPP is built on GKS; therefore, all of the capabilities of GKS are available. It is not necessary to use GKS functions, since entire plots can be produced using only SPP functions, but the addition of GKS will give the programmer added power and flexibility. SPP provides single-call plot commands, linear and logarithmic axis commands, control for optional plotting of tick marks and tick mark labels, and permits plotting of data with or without markers and connecting lines.

  5. Exact infinite-time statistics of the Loschmidt echo for a quantum quench.

    PubMed

    Campos Venuti, Lorenzo; Jacobson, N Tobias; Santra, Siddhartha; Zanardi, Paolo

    2011-07-01

    The equilibration dynamics of a closed quantum system is encoded in the long-time distribution function of generic observables. In this Letter we consider the Loschmidt echo generalized to finite temperature, and show that we can obtain an exact expression for its long-time distribution for a closed system described by a quantum XY chain following a sudden quench. In the thermodynamic limit the logarithm of the Loschmidt echo becomes normally distributed, whereas for small quenches in the opposite, quasicritical regime, the distribution function acquires a universal double-peaked form indicating poor equilibration. These findings, obtained by a central limit theorem-type result, extend to completely general models in the small-quench regime.

  6. Performance evaluation of MLP and RBF feed forward neural network for the recognition of off-line handwritten characters

    NASA Astrophysics Data System (ADS)

    Rishi, Rahul; Choudhary, Amit; Singh, Ravinder; Dhaka, Vijaypal Singh; Ahlawat, Savita; Rao, Mukta

    2010-02-01

    In this paper we propose a system for the classification of handwritten text. At a broad level, the system is composed of a preprocessing module, a supervised learning module and a recognition module. The preprocessing module digitizes the documents and extracts features (tangent values) for each character. A radial basis function (RBF) network is used in the learning and recognition modules. The objective is to analyze and improve the performance of the Multi Layer Perceptron (MLP) using RBF transfer functions in place of the logarithmic sigmoid function. The results of 35 experiments indicate that the feed-forward MLP performs accurately and exhaustively with RBF. With the change in the weight update mechanism and the feature-based preprocessing module, the proposed system achieves good recognition performance.

  7. Macronuclear Cytology of Synchronized Tetrahymena pyriformis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cameron, I. L.; Padilla, G. M.; Miller, Jr., O. L.

    1966-05-01

    Elliott, Kennedy and Bak ('62) and Elliott ('63) followed fine structural changes in macronuclei of Tetrahymena pyriformis which were synchronized by the heat shock method of Scherbaum and Zeuthen ('54). Using Elliott's morphological descriptions as a basis, we designed our investigations with two main objectives: first, to again study the morphological changes which occur in the macronucleus of Tetrahymena synchronized by the heat shock method; second, to compare these observations with Tetrahymena synchronized by an alternate method recently reported by Padilla and Cameron ('64). We were therefore able to compare the results from two different synchronization methods and to contrast these findings with the macronuclear cytology of Tetrahymena taken from a logarithmically growing culture. Comparison of cells treated in these three different ways enables us to evaluate the two different synchronization methods and to gain more information on the structural changes taking place in the macronucleus of Tetrahymena as a function of the cell cycle. Our observations were confined primarily to nucleolar morphology. The results indicate that cells synchronized by the Padilla and Cameron method more closely resemble logarithmically growing Tetrahymena in their macronuclear structure than do cells obtained by the Scherbaum and Zeuthen synchronization method.

  8. A new real-time guidance strategy for aerodynamic ascent flight

    NASA Astrophysics Data System (ADS)

    Yamamoto, Takayuki; Kawaguchi, Jun'ichiro

    2007-12-01

    Reusable launch vehicles are conceived to constitute the future space transportation system. If these vehicles use air-breathing propulsion and lift, taking off horizontally, their optimal steering exhibits completely different behavior from that of conventional rocket flight. In this paper, a new guidance strategy is proposed. The method derives from the optimality condition for steering, and an analysis concludes that the steering function takes a form composed of linear and logarithmic terms, which involve only four parameters. Parameter optimization of this method shows that the terminal horizontal velocity acquired is almost the same as that obtained by direct numerical optimization, which supports the parameterized linear-logarithmic steering law. It is also shown that there exists a simple linear relation between the terminal states and the parameters to be corrected. This relation makes it easy to determine the parameters that satisfy the terminal boundary conditions in real time. The paper presents guidance results for practical application cases. The results show that the guidance performs well and satisfies the specified terminal boundary conditions. The strategy built and presented here guarantees a robust solution in real time without any optimization process, and it is found to be quite practical.

  9. Logarithmic field dependence of the Thermal Conductivity in La_2-xSr_xCuO_4

    NASA Astrophysics Data System (ADS)

    Krishana, K.; Ong, N. P.; Kimura, T.

    1997-03-01

    We have investigated the thermal conductivity κ of La_2-xSr_xCuO_4 in fields B up to 14 tesla. To minimize errors caused by the field sensitivity of the thermocouple sensors, we used a sensitive null-detection technique. We find that below Tc, κ varies as -log B in high fields, and in the low-field limit it approaches a constant. The κ vs. B data at these temperatures collapse onto a universal curve, which fits very well to an expression involving the digamma function and reminiscent of 2-D weak localization. The field scale derived from this scaling is linear in T. The logarithmic dependence of κ strongly suggests an electronic origin for the anomaly in κ below Tc. Our experiment precludes conventional vortex scattering of phonons as the source of the anomaly: the data fit poorly to such models, and the derived mean free paths are non-monotonic and 5 to 8 times larger than obtained from heat capacity. Also, comparison of the x = 0.17 and x = 0.08 samples gives field scales opposite to what is expected from vortex scattering.

  10. Portable geiger counter with logarithmic scale (in Portuguese)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oliveira, L.A.C.; de Andrade Chagas, E.; de Bittencourt, F.A.

    1971-06-01

    From the 23rd annual meeting of the Brazilian Society for the Advancement of Science; Curitiba, Brazil (4 Jul 1971). A portable scaler with a logarithmic scale covering 3 decades, 1 to 10, 10 to 10^2, and 10^2 to 10^3 cps, is presented. Electrical energy is supplied at 6 volts by 4 D-type batteries. (INIS)

  11. Regional Frequency Computation Users Manual.

    DTIC Science & Technology

    1972-07-01

    increment of flow used to prevent infinite logarithms for events with zero flow; X = mean logarithm of flow events; N = total years of record; S = unbiased… (the remainder of the extracted program-listing text is illegible)

  12. A new type of density-management diagram for slash pine plantations

    Treesearch

    Curtis L. VanderSchaaf

    2006-01-01

    Many Density-Management Diagrams (DMD) have been developed for conifer species throughout the world based on stand density index (SDI). The diagrams often plot the logarithm of average tree size (volume, weight, or quadratic mean diameter) over the logarithm of trees per unit area. A new type of DMD is presented for slash pine (Pinus elliottii var elliottii)...

  13. MUTATIONAL AND TRANSCRIPTIONAL RESPONSES OF STATIONARY- AND LOGARITHMIC-PHASE SALMONELLA TO MX: CORRELATION OF MUTATIONAL RESPONSE TO CHANGES IN GENE EXPRESSION

    EPA Science Inventory

    We measured the mutational and transcriptional response of stationary-phase and logarithmic-phase S. typhimurium TA100 to 3 concentrations of the drinking water mutagen 3-chloro-4-(dichloromethyl)-5-hydroxy-2(5H)-furanone (MX). The mutagenicity of MX in strain TA100 was evaluated...

  14. A numerical solution for two-dimensional Fredholm integral equations of the second kind with kernels of the logarithmic potential form

    NASA Technical Reports Server (NTRS)

    Gabrielsen, R. E.; Uenal, A.

    1981-01-01

    Two-dimensional Fredholm integral equations with logarithmic potential kernels are numerically solved. The convergence of these numerical solutions to the true solutions is demonstrated explicitly. The results are based on a previous work in which numerical solutions were obtained for Fredholm integral equations of the second kind with continuous kernels.

  15. Kinetics of drug release from ointments: Role of transient-boundary layer.

    PubMed

    Xu, Xiaoming; Al-Ghabeish, Manar; Krishnaiah, Yellela S R; Rahman, Ziyaur; Khan, Mansoor A

    2015-10-15

    In the current work, an in vitro release testing method suitable for ointment formulations was developed using acyclovir as a model drug. Release studies were carried out using enhancer cells on acyclovir ointments prepared with oleaginous, absorption, and water-soluble bases. The kinetics and mechanism of drug release were found to be highly dependent on the type of ointment base. In oleaginous bases, drug release followed a unique logarithmic-time dependent profile; in both absorption and water-soluble bases, drug release exhibited linearity with respect to the square root of time (Higuchi model), albeit with differences in the overall release profile. To help understand the underlying cause of the logarithmic-time dependency of drug release, a novel transient-boundary hypothesis was proposed, verified, and compared to the Higuchi theory. Furthermore, the impact of drug solubility (under various pH conditions) and temperature on drug release was assessed. Additionally, conditions under which deviations from logarithmic-time drug release kinetics occur were determined using in situ UV fiber optics. Overall, the results suggest that for oleaginous ointments containing dispersed drug particles, the kinetics and mechanism of drug release are controlled by expansion of the transient boundary layer, and drug release increases linearly with respect to logarithmic time. Published by Elsevier B.V.
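
    The two release laws contrasted above can be compared on the same data set with ordinary least squares: a logarithmic-time model Q(t) = a + b·ln t versus the Higuchi model Q(t) = k·√t. A sketch with synthetic release data follows; the numbers are placeholders, not the acyclovir measurements.

        import numpy as np

        t = np.array([0.5, 1, 2, 4, 8, 16, 24], dtype=float)        # hours, illustrative
        Q = np.array([12.0, 18.5, 25.0, 31.2, 37.8, 44.1, 47.9])    # cumulative release (%), illustrative

        # Logarithmic-time model: Q = a + b*ln(t)
        b, a = np.polyfit(np.log(t), Q, 1)
        Q_log = a + b * np.log(t)

        # Higuchi model: Q = k*sqrt(t), least-squares slope with no intercept
        k = np.sum(np.sqrt(t) * Q) / np.sum(t)
        Q_hig = k * np.sqrt(t)

        def r2(y, yhat):
            return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - np.mean(y)) ** 2)

        print(f"log-time fit: Q = {a:.1f} + {b:.1f} ln t, R^2 = {r2(Q, Q_log):.3f}")
        print(f"Higuchi fit:  Q = {k:.1f} sqrt(t),       R^2 = {r2(Q, Q_hig):.3f}")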

  16. Utility of Fite-Faraco stain for both mast cell count and bacillary index in skin biopsies of leprosy patients.

    PubMed

    Chatura, K R; Sangeetha, S

    2012-01-01

    To assess the utility of a single stain for both mast cell count and bacillary index (BI), skin biopsies from 50 leprosy patients were stained with the Fite-Faraco (FF) stain and viewed under oil immersion; the BI was calculated using Ridley's logarithmic scale, and mast cells were counted as the number of cells per mm^2. The mean mast cell count per mm^2 at the tuberculoid pole was lowest in TT (7.9) and highest in BT (14.23). At the lepromatous end, it was highest in BL (9.21), while in LL it was 8.23. The highest counts were seen in the borderline types overall. The correlation coefficient between histopathological diagnosis and BI is 0.822, a significant positive correlation. The correlation coefficient between histopathological diagnosis and mast cell count was found to be -0.17, a negative but not significant correlation. The FF stain was utilised to visualise both bacilli, for estimation of BI, and mast cells, for the mast cell count, an approach seldom attempted in the literature.

  17. A step-by-step guide to non-linear regression analysis of experimental data using a Microsoft Excel spreadsheet.

    PubMed

    Brown, A M

    2001-06-01

    The objective of this present study was to introduce a simple, easily understood method for carrying out non-linear regression analysis based on user input functions. While it is relatively straightforward to fit data with simple functions such as linear or logarithmic functions, fitting data with more complicated non-linear functions is more difficult. Commercial specialist programmes are available that will carry out this analysis, but these programmes are expensive and are not intuitive to learn. An alternative method described here is to use the SOLVER function of the ubiquitous spreadsheet programme Microsoft Excel, which employs an iterative least squares fitting routine to produce the optimal goodness of fit between data and function. The intent of this paper is to lead the reader through an easily understood step-by-step guide to implementing this method, which can be applied to any function in the form y=f(x), and is well suited to fast, reliable analysis of data in all fields of biology.
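
    The same iterative least-squares idea that the guide carries out with Excel's SOLVER can be reproduced in a few lines of Python. This is an analogue of the procedure, not the spreadsheet method itself; the model function and data points below are arbitrary illustrations.

        import numpy as np
        from scipy.optimize import curve_fit

        # User-defined model in the form y = f(x); here a saturating exponential is assumed.
        def model(x, a, b):
            return a * (1.0 - np.exp(-b * x))

        x = np.array([0.5, 1, 2, 4, 8, 16], dtype=float)   # illustrative data
        y = np.array([1.9, 3.4, 5.5, 7.6, 9.1, 9.7])

        # Iterative least squares: minimizes the sum of squared residuals, as SOLVER does.
        popt, pcov = curve_fit(model, x, y, p0=(10.0, 0.1))
        perr = np.sqrt(np.diag(pcov))                      # 1-sigma parameter uncertainties
        print("a = %.2f +/- %.2f, b = %.2f +/- %.2f" % (popt[0], perr[0], popt[1], perr[1]))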

  18. A law of the iterated logarithm for Grenander’s estimator

    PubMed Central

    Dümbgen, Lutz; Wellner, Jon A.; Wolff, Malcolm

    2016-01-01

    In this note we prove the following law of the iterated logarithm for the Grenander estimator of a monotone decreasing density: if f(t_0) > 0, f′(t_0) < 0, and f′ is continuous in a neighborhood of t_0, then lim sup_{n→∞} (n/(2 log log n))^{1/3} (f̂_n(t_0) − f(t_0)) = |f(t_0) f′(t_0)/2|^{1/3} · 2M almost surely, where M ≡ sup_{g∈G} T_g = (3/4)^{1/3} and T_g ≡ argmax_u {g(u) − u^2}; here G is the two-sided Strassen limit set on R. The proof relies on laws of the iterated logarithm for local empirical processes, Groeneboom's switching relation, and properties of Strassen's limit set analogous to distributional properties of Brownian motion. PMID:28042197

  19. Fechner's law: where does the log transform come from?

    PubMed

    Laming, Donald

    2010-01-01

    This paper looks at Fechner's law in the light of 150 years of subsequent study. In combination with the normal, equal variance, signal-detection model, Fechner's law provides a numerically accurate account of discriminations between two separate stimuli, essentially because the logarithmic transform delivers a model for Weber's law. But it cannot be taken to be a measure of internal sensation because an equally accurate account is provided by a χ^2 model in which stimuli are scaled by their physical magnitude. The logarithmic transform of Fechner's law arises because, for the number of degrees of freedom typically required in the χ^2 model, the logarithm of a χ^2 variable is, to a good approximation, normal.
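
    The key approximation invoked here, that the logarithm of a χ^2 variable with moderately many degrees of freedom is close to normal, is easy to check numerically. A small sketch; the choice of 50 degrees of freedom is an arbitrary illustration.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        dof = 50                                   # illustrative number of degrees of freedom
        samples = rng.chisquare(dof, size=200_000)
        log_samples = np.log(samples)

        # Skewness of chi^2 vs its logarithm: the log transform removes most of the asymmetry.
        print(f"skew(chi2)     = {stats.skew(samples):.3f}")      # roughly sqrt(8/dof) ~ 0.40
        print(f"skew(log chi2) = {stats.skew(log_samples):.3f}")  # close to 0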

  20. Elastic scattering of virtual photons via a quark loop in the double-logarithmic approximation

    NASA Astrophysics Data System (ADS)

    Ermolaev, B. I.; Ivanov, D. Yu.; Troyan, S. I.

    2018-04-01

    We calculate the amplitude of elastic photon-photon scattering via a single quark loop in the double-logarithmic approximation, presuming all external photons to be off-shell and unpolarized. At the same time we account for the running coupling effects. We consider this process in the forward kinematics at arbitrary relations between t and the external photon virtualities. We obtain explicit expressions for the photon-photon scattering amplitudes in all double-logarithmic kinematic regions. Then we calculate the small-x asymptotics of the obtained amplitudes and compare them with the parent amplitudes, thereby fixing the applicability regions of the asymptotics, i.e., fixing the applicability region for the nonvacuum Reggeons. We find that these Reggeons should be used at x < 10^-8 only.

  1. Qubit assisted enhancement of quantum correlations in an optomechanical system

    NASA Astrophysics Data System (ADS)

    Chakraborty, Subhadeep; Sarma, Amarendra K.

    2018-05-01

    We perform a theoretical study of quantum correlations in an optomechanical system where the mechanical mirror is perturbatively coupled to an auxiliary qubit. In our study, we consider the logarithmic negativity to quantify the degree of stationary entanglement between the cavity field and the mechanical mirror, and the Gaussian quantum discord as a witness of the quantumness of the correlation beyond entanglement. Utilizing experimentally feasible parameters, we show that both entanglement and quantum discord increase significantly with increasing mirror-qubit coupling. Moreover, we find that in the presence of the mirror-qubit coupling, entanglement can be generated at a considerably lower optomechanical coupling strength and is extremely robust against the environmental temperature. Overall, our proposed scheme offers considerable advantages for realizing continuous-variable quantum information and communication.

  2. SAR-based change detection using hypothesis testing and Markov random field modelling

    NASA Astrophysics Data System (ADS)

    Cao, W.; Martinis, S.

    2015-04-01

    The objective of this study is to automatically detect changed areas caused by natural disasters from bi-temporal co-registered and calibrated TerraSAR-X data. The technique in this paper consists of two steps. Firstly, an automatic coarse detection step is applied based on a statistical hypothesis test to initialize the classification. The original analytical formula as proposed in the constant false alarm rate (CFAR) edge detector is reviewed and rewritten in a compact form of the incomplete beta function, which is a built-in routine in commercial scientific software such as MATLAB and IDL. Secondly, a post-classification step is introduced to optimize the noisy classification result of the previous step. Generally, an optimization problem can be formulated as a Markov random field (MRF) on which the quality of a classification is measured by an energy function. The optimal classification based on the MRF is related to the lowest energy value. Previous studies provide methods for the optimization problem using MRFs, such as the iterated conditional modes (ICM) algorithm. Recently, a novel algorithm was presented based on graph-cut theory. This method transforms an MRF to an equivalent graph and solves the optimization problem by a max-flow/min-cut algorithm on the graph. In this study this graph-cut algorithm is applied iteratively to improve the coarse classification. At each iteration the parameters of the energy function for the current classification are set via the logarithmic probability density function (PDF). The relevant parameters are estimated by the method of logarithmic cumulants (MoLC). Experiments are performed on two flood events in Germany and Australia in 2011 and a forest fire on La Palma in 2009, using pre- and post-event TerraSAR-X data. The results show convincing coarse classifications and considerable improvement by the graph-cut post-classification step.
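
    The hypothesis-test step can be illustrated with a simplified stand-in: for multilook SAR intensities with independent pixels, the ratio of two local mean intensities is F-distributed under the no-change hypothesis, and the F cumulative distribution function is exactly the regularized incomplete beta function mentioned above. The sketch below is a generic ratio test of this kind, not the exact CFAR statistic of the paper; the number of looks and the window size are assumptions.

        import numpy as np
        from scipy import stats

        LOOKS = 3       # assumed number of looks of the SAR product
        N_PIX = 49      # assumed 7x7 local window

        def change_p_value(mean_pre, mean_post, looks=LOOKS, n_pix=N_PIX):
            # Under no change (and independent pixels), the ratio of the two window
            # means is F-distributed with 2*n_pix*looks degrees of freedom in both
            # numerator and denominator; the F CDF is the regularized incomplete beta.
            dof = 2 * n_pix * looks
            ratio = mean_pre / mean_post
            cdf = stats.f.cdf(ratio, dof, dof)
            return 2.0 * min(cdf, 1.0 - cdf)    # two-sided p-value

        # Example: a window whose mean backscatter dropped to 40% of its pre-event value.
        print(f"p-value = {change_p_value(1.0, 0.4):.3g}")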

  3. A Bid Price Equation For Timber Sales on the Ouachita and Ozark National Forests

    Treesearch

    Michael M. Huebschmann; Thomas B. Lynch; David K. Lewis; Daniel S. Tilley; James M. Guldin

    2004-01-01

    Data from 150 timber sales on the Ouachita and Ozark National Forests in Arkansas and southeastern Oklahoma were used to develop an equation that relates bid prices to timber sale variables. Variables used to predict the natural logarithm of the real, winning total bid price are the natural logarithms of total sawtimber volume per sale, total pulpwood volume per sale...

  4. Linear and Logarithmic Speed-Accuracy Trade-Offs in Reciprocal Aiming Result from Task-Specific Parameterization of an Invariant Underlying Dynamics

    ERIC Educational Resources Information Center

    Bongers, Raoul M.; Fernandez, Laure; Bootsma, Reinoud J.

    2009-01-01

    The authors examined the origins of linear and logarithmic speed-accuracy trade-offs from a dynamic systems perspective on motor control. In each experiment, participants performed 2 reciprocal aiming tasks: (a) a velocity-constrained task in which movement time was imposed and accuracy had to be maximized, and (b) a distance-constrained task in…

  5. Polaron in the dilute critical Bose condensate

    NASA Astrophysics Data System (ADS)

    Pastukhov, Volodymyr

    2018-05-01

    The properties of an impurity immersed in a dilute D-dimensional Bose gas at temperatures close to its second-order phase transition point are considered. Particularly by means of the 1/N-expansion, we calculate the leading-order polaron energy and the damping rate in the limit of vanishing boson–boson interaction. It is shown that the perturbative effective mass and the quasiparticle residue diverge logarithmically in the long-length limit, signalling the non-analytic behavior of the impurity spectrum and pole-free structure of the polaron Green’s function in the infrared region, respectively.

  6. Temperature Scaling Law for Quantum Annealing Optimizers.

    PubMed

    Albash, Tameem; Martin-Mayor, Victor; Hen, Itay

    2017-09-15

    Physical implementations of quantum annealing unavoidably operate at finite temperatures. We point to a fundamental limitation of fixed finite temperature quantum annealers that prevents them from functioning as competitive scalable optimizers and show that to serve as optimizers annealer temperatures must be appropriately scaled down with problem size. We derive a temperature scaling law dictating that temperature must drop at the very least in a logarithmic manner but also possibly as a power law with problem size. We corroborate our results by experiment and simulations and discuss the implications of these to practical annealers.

  7. Anomalous symmetry breaking in classical two-dimensional diffusion of coherent atoms

    NASA Astrophysics Data System (ADS)

    Pugatch, Rami; Bhattacharyya, Dipankar; Amir, Ariel; Sagi, Yoav; Davidson, Nir

    2014-03-01

    The electromagnetically induced transparency (EIT) spectrum of atoms diffusing in and out of a narrow beam is measured and shown to manifest the two-dimensional δ-function anomaly in a classical setting. In the limit of small-area beams, the EIT line shape is independent of power, and equal to the renormalized local density of states of a free particle Hamiltonian. The measured spectra for different powers and beam sizes collapses to a single universal curve with a characteristic logarithmic Van Hove singularity close to resonance.

  8. Singularity Preserving Numerical Methods for Boundary Integral Equations

    NASA Technical Reports Server (NTRS)

    Kaneko, Hideaki (Principal Investigator)

    1996-01-01

    In the past twelve months (May 8, 1995 - May 8, 1996), under the cooperative agreement with the Division of Multidisciplinary Optimization at NASA Langley, we have accomplished the following five projects: a note on the finite element method with singular basis functions; numerical quadrature for weakly singular integrals; superconvergence of the degenerate kernel method; superconvergence of the iterated collocation method for Hammerstein equations; and a singularity preserving Galerkin method for Hammerstein equations with logarithmic kernel. This final report consists of five papers describing these projects. Each project is preceded by a brief abstract.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumitru, Adrian; Skokov, Vladimir

    The conventional and linearly polarized Weizsäcker-Williams gluon distributions at small x are defined from the two-point function of the gluon field in light-cone gauge. They appear in the cross section for dijet production in deep inelastic scattering at high energy. We determine these functions in the small-x limit from solutions of the JIMWLK evolution equations and show that they exhibit approximate geometric scaling. Also, we discuss the functional distributions of these WW gluon distributions over the JIMWLK ensemble at rapidity Y ~ 1/α_s. These are determined by a 2d Liouville action for the logarithm of the covariant gauge function g^2 tr A^+(q)A^+(-q). For transverse momenta on the order of the saturation scale we observe large variations across configurations (evolution trajectories) of the linearly polarized distribution, up to several times its average, and even to negative values.

  10. Behavior of Caulobacter Crescentus Diagnosed Using a 3-Channel Microfluidic Device

    NASA Astrophysics Data System (ADS)

    Tang, Jay; Morse, Michael; Colin, Remy; Wilson, Laurence

    2015-03-01

    Many motile microorganisms are able to detect chemical gradients in their surroundings in order to bias their motion towards more favorable conditions. We study the biased motility of Caulobacter crescentus, a singly flagellated bacterium which alternates between forward and backward swimming, driven by its flagellar motor, which switches rotation direction. We observe the swimming patterns of C. crescentus in an oxygen gradient, which is established by flowing atmospheric air and pure nitrogen through a 3-parallel-channel microfluidic device. In this setup, oxygen diffuses through the PDMS device and the bacterial medium, creating a linear gradient. Using low-magnification, dark-field microscopy, individual cells are tracked over a large field of view, with particular interest in the cells' motion relative to the oxygen gradient. Utilizing observable differences between backward and forward swimming motion, motor switching events can be identified. By analyzing the run time intervals between motor switches as a function of a cell's local oxygen level, we demonstrate that C. crescentus displays aerotactic behavior by extending forward swimming run times while moving up an oxygen gradient, resulting in directed motility towards oxygen sources. Additionally, the motor switching response is sensitive to both the steepness of the gradient experienced and the background oxygen level, with cells exhibiting a logarithmic response to oxygen levels. Work funded by the United States National Science Foundation and by the Rowland Institute at Harvard University.

  11. Magnetic resonance spectroscopic analysis of neurometabolite changes in the developing rat brain at 7T.

    PubMed

    Ramu, Jaivijay; Konak, Tetyana; Liachenko, Serguei

    2016-11-15

    We utilized proton magnetic resonance spectroscopy to evaluate the metabolic profile of the hippocampus and anterior cingulate cortex of the developing rat brain from postnatal days 14-70. Measured metabolite concentrations were modeled using linear, exponential, or logarithmic functions, and the time point at which the data reached a plateau (i.e. when that portion of the data could be fit to a horizontal line) was estimated and interpreted as the time when the brain had reached maturity with respect to that metabolite. N-acetyl-aspartate and myo-inositol increased within the observed period. Glutathione did not vary significantly, while taurine decreased initially and then stabilized. Phosphocreatine and total creatine had a tendency to increase towards the end of the experiment. Some differences between our data and the published literature were observed in the concentrations and dynamics of phosphocreatine, myo-inositol, and GABA in the hippocampus and creatine, GABA, glutamine, choline and N-acetyl-aspartate in the cortex. Such differences may be attributed to experimental conditions, analysis approaches and animal species. The latter is supported by differences between the in-house rat colony and rats from Charles River Labs. Spectroscopy provides a valuable tool for non-invasive brain neurochemical profiling for use in developmental neurobiology research. Special attention needs to be paid to important sources of variation like animal strain and commercial source. Published by Elsevier B.V.

  12. An analog gamma correction scheme for high dynamic range CMOS logarithmic image sensors.

    PubMed

    Cao, Yuan; Pan, Xiaofang; Zhao, Xiaojin; Wu, Huisi

    2014-12-15

    In this paper, a novel analog gamma correction scheme for a logarithmic image sensor, dedicated to minimizing the quantization noise of high dynamic range applications, is presented. The proposed implementation exploits a non-linear voltage-controlled-oscillator (VCO) based analog-to-digital converter (ADC) to perform the gamma correction during the analog-to-digital conversion. As a result, the quantization noise does not increase while the high dynamic range of the logarithmic image sensor is preserved. Moreover, by combining the gamma correction with the analog-to-digital conversion, the silicon area and overall power consumption can be greatly reduced. The proposed gamma correction scheme is validated by the reported simulation results and the experimental results measured for our designed test structure, which is fabricated with a 0.35 μm standard complementary-metal-oxide-semiconductor (CMOS) process.
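    The benefit of folding gamma correction into the conversion can be illustrated with a purely numerical comparison. The sketch below is not the authors' VCO-based circuit; it simply contrasts applying a gamma curve digitally after a uniform quantizer with quantizing a signal that has already been gamma-corrected. The 10-bit depth, the γ = 2.2 value, and the `quantize` helper are illustrative assumptions.

```python
import numpy as np

def quantize(x, bits):
    """Uniform quantizer on [0, 1] (illustrative stand-in for an ideal linear ADC)."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

gamma = 2.2
x = np.linspace(1e-3, 1.0, 100_000)        # normalized sensor output
ideal = x ** (1.0 / gamma)                 # ideal gamma-corrected signal

# (a) gamma applied digitally after a 10-bit linear conversion
post = quantize(x, 10) ** (1.0 / gamma)
# (b) gamma folded into the conversion (the already-corrected signal is quantized)
pre = quantize(ideal, 10)

for name, y in [("gamma after ADC ", post), ("gamma before ADC", pre)]:
    err = y - ideal
    print(f"{name}: rms error = {np.sqrt(np.mean(err ** 2)):.2e}, "
          f"max error = {np.abs(err).max():.2e}")
```

    Running the sketch shows a much larger maximum error, concentrated in the dark end of the range, when the gamma curve is applied after quantization, which is the effect an analog-domain correction is designed to avoid.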

  13. Path Loss Prediction Formula in Urban Area for the Fourth-Generation Mobile Communication Systems

    NASA Astrophysics Data System (ADS)

    Kitao, Koshiro; Ichitsubo, Shinichi

    A site-general type prediction formula is created based on the measurement results in an urban area in Japan, assuming that the prediction frequency range required for Fourth-Generation (4G) Mobile Communication Systems is from 3 to 6 GHz, the distance range is 0.1 to 3 km, and the base station (BS) height range is from 10 to 100 m. Based on the measurement results, the path loss (dB) is found to be proportional to the logarithm of the distance (m), the logarithm of the BS height (m), and the logarithm of the frequency (GHz). Furthermore, we examine the extension of existing formulae such as the Okumura-Hata, Walfisch-Ikegami, and Sakagami formulae for 4G systems and propose a prediction formula based on the Extended Sakagami formula.
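    Site-general formulae of this class combine logarithmic terms in distance, base-station height and frequency. The sketch below shows only the generic form; the coefficients k0, k_d, k_h and k_f are placeholder assumptions, not the fitted Extended Sakagami values reported by the authors.

```python
import math

def path_loss_db(distance_m, bs_height_m, freq_ghz,
                 k0=30.0, k_d=35.0, k_h=-10.0, k_f=20.0):
    """Generic log-distance path-loss form:
    PL(dB) = k0 + k_d*log10(d) + k_h*log10(h_bs) + k_f*log10(f).
    All coefficients here are illustrative placeholders."""
    return (k0 + k_d * math.log10(distance_m)
               + k_h * math.log10(bs_height_m)
               + k_f * math.log10(freq_ghz))

# Example: 1 km link, 50 m base station, 3.5 GHz carrier
print(round(path_loss_db(1000.0, 50.0, 3.5), 1), "dB")
```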

  14. An Analog Gamma Correction Scheme for High Dynamic Range CMOS Logarithmic Image Sensors

    PubMed Central

    Cao, Yuan; Pan, Xiaofang; Zhao, Xiaojin; Wu, Huisi

    2014-01-01

    In this paper, a novel analog gamma correction scheme for a logarithmic image sensor, dedicated to minimizing the quantization noise of high dynamic range applications, is presented. The proposed implementation exploits a non-linear voltage-controlled-oscillator (VCO) based analog-to-digital converter (ADC) to perform the gamma correction during the analog-to-digital conversion. As a result, the quantization noise does not increase while the high dynamic range of the logarithmic image sensor is preserved. Moreover, by combining the gamma correction with the analog-to-digital conversion, the silicon area and overall power consumption can be greatly reduced. The proposed gamma correction scheme is validated by the reported simulation results and the experimental results measured for our designed test structure, which is fabricated with a 0.35 μm standard complementary-metal-oxide-semiconductor (CMOS) process. PMID:25517692

  15. Eccentric Capitellar Ossification Limits the Utility of the Radiocapitellar Line in Young Children.

    PubMed

    Fader, Lauren M; Laor, Tal; Eismann, Emily A; Cornwall, Roger; Little, Kevin J

    2016-03-01

    The radiocapitellar line (RCL) has long been used for the radiographic evaluation of elbow alignment. In children, the capitellar ossific nucleus serves as a proxy for the entire capitellum, but this substitution has not been verified. Using magnetic resonance imaging (MRI), we sought to understand how maturation of the ossific nucleus of the capitellum affects the utility of the RCL throughout skeletal maturation of the elbow. The RCL was drawn on coronal and sagittal MRIs in 82 children (43 boys, 39 girls; age range, 1 to 13 y) with at least 3 patients in each 1-year interval age group. The perpendicular distance of the RCL from the center of both the cartilaginous capitellum and the capitellar ossific nucleus was measured relative to its total width, and a percent offset for each measurement was calculated. Logarithmic regression analysis was performed to analyze the effect of age and sex on percent offset. The RCL reliably intersected the central third of the cartilaginous capitellum at all ages in both planes. Although the RCL intersected the ossified capitellum in all but 3 measurements, it intersected the central third of the ossified capitellum less often in younger children in both the sagittal (B=0.47, P<0.001) and coronal (B=0.31, P=0.002) planes. Percent offset decreased significantly with age in a logarithmic manner in both the sagittal (r=0.57, P<0.001) and coronal (r=-0.47, P<0.001) planes. 95% confidence intervals predict that the sagittal plane RCL will accurately intersect the central third of the ossified capitellum by age 10 years in girls and age 11 years in boys, but not in the coronal plane. Eccentric ossification of the capitellum explains RCL variability in young children. The RCL does not reliably intersect the central third of the ossified capitellum until ages 10 years in girls and 11 years in boys in the sagittal plane. The RCL should be used within its limitations in skeletally immature children and should be combined with advanced imaging if necessary.

  16. The theory of maximally and minimally even sets, the one- dimensional antiferromagnetic Ising model, and the continued fraction compromise of musical scales

    NASA Astrophysics Data System (ADS)

    Douthett, Elwood (Jack) Moser, Jr.

    1999-10-01

    Cyclic configurations of white and black sites, together with convex (concave) functions used to weight path length, are investigated. The weights of the white set and black set are the sums of the weights of the paths connecting the white sites and black sites, respectively, and the weight between sets is the sum of the weights of the paths that connect sites opposite in color. It is shown that when the weights of all configurations of a fixed number of white and a fixed number of black sites are compared, minimum (maximum) weight of a white set, minimum (maximum) weight of a black set, and maximum (minimum) weight between sets occur simultaneously. Such configurations are called maximally even configurations. Similarly, the configurations whose weights are the opposite extremes occur simultaneously and are called minimally even configurations. Algorithms that generate these configurations are constructed and applied to the one-dimensional antiferromagnetic spin-1/2 Ising model. Next the goodness of continued fractions as applied to musical intervals (frequency ratios and their base 2 logarithms) is explored. It is shown that, for the intermediate convergents between two consecutive principal convergents of an irrational number, the first half of the intermediate convergents are poorer approximations than the preceding principal convergent while the second half are better approximations; the goodness of a middle intermediate convergent can only be determined by calculation. These convergents are used to determine which equal-tempered systems have intervals that most closely approximate the musical fifth (p_n/q_n ≈ log2(3/2)). The goodness of exponentiated convergents (2^(p_n/q_n) ≈ 3/2) is also investigated. It is shown that, with the exception of a middle convergent, the goodness of the exponential form agrees with that of its logarithmic counterpart. As in the case of the logarithmic form, the goodness of a middle intermediate convergent in the exponential form can only be determined by calculation. A desirability function is constructed that simultaneously measures how well multiple intervals fit in a given equal-tempered system. These measurements are made for octave (base 2) and tritave (base 3) systems. Combinatorial properties important to music modulation are considered. These considerations lead to the construction of maximally even scales as partitions of an equal-tempered system.
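    The role of the principal convergents can be reproduced in a few lines: the convergents of log2(3/2) have denominators (12, 41, 53, ...) corresponding to familiar equal temperaments, and the cents error measures how good each tempered fifth is. This is a minimal sketch of the standard convergent recursion only, not the dissertation's treatment of intermediate convergents or of the desirability function.

```python
from fractions import Fraction
import math

def convergents(x, n_terms=8):
    """Principal continued-fraction convergents p/q of a real number x."""
    p_prev, q_prev, p, q = 1, 0, int(math.floor(x)), 1
    frac = x - math.floor(x)
    yield Fraction(p, q)
    for _ in range(n_terms - 1):
        a = int(1.0 / frac)
        frac = 1.0 / frac - a
        p, p_prev = a * p + p_prev, p
        q, q_prev = a * q + q_prev, q
        yield Fraction(p, q)

fifth = math.log2(1.5)                     # the "ideal" fifth, log2(3/2)
for c in convergents(fifth, 8):
    cents_error = 1200 * abs(float(c) - fifth)
    print(f"{c}  ->  {c.denominator}-tone equal temperament, "
          f"fifth off by {cents_error:.2f} cents")
```

    The 7/12 convergent reproduces the familiar 12-tone system, whose fifth is about 1.96 cents flat; the later convergents give the 41- and 53-tone systems.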

  17. Anharmonic effects in the quantum cluster equilibrium method

    NASA Astrophysics Data System (ADS)

    von Domaros, Michael; Perlt, Eva

    2017-03-01

    The well-established quantum cluster equilibrium (QCE) model provides a statistical thermodynamic framework to apply high-level ab initio calculations of finite cluster structures to macroscopic liquid phases using the partition function. So far, the harmonic approximation has been applied throughout such calculations. In this article, we apply an important correction in the evaluation of the one-particle partition function and account for anharmonicity. To this end, we implemented an analytical approximation to the Morse partition function and the derivatives of its logarithm with respect to temperature, which are required for the evaluation of thermodynamic quantities. This anharmonic QCE approach has been applied to liquid hydrogen chloride, and cluster distributions, the molar volume, the volumetric thermal expansion coefficient, and the isobaric heat capacity have been calculated. An improved description of all properties is observed if anharmonic effects are considered.
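    As a rough illustration of the quantity involved (not the analytical approximation implemented by the authors), the Morse vibrational partition function can be evaluated by direct summation over its finite set of bound levels, and the temperature derivative of its logarithm obtained by finite differences. The HCl-like spectroscopic constants below are approximate values assumed for the example.

```python
import numpy as np

KB_CM = 0.6950348  # Boltzmann constant in cm^-1 per K

def morse_partition_function(T, omega_e, omega_exe):
    """Vibrational partition function of a Morse oscillator by direct summation
    over its bound levels, E_v = omega_e*(v+1/2) - omega_exe*(v+1/2)^2
    (energies in cm^-1, zero of energy placed at the v = 0 level)."""
    v_max = int(np.floor(omega_e / (2.0 * omega_exe) - 0.5))
    v = np.arange(v_max + 1)
    E = omega_e * (v + 0.5) - omega_exe * (v + 0.5) ** 2
    return np.sum(np.exp(-(E - E[0]) / (KB_CM * T)))

def dlnZ_dT(T, omega_e, omega_exe, h=1e-3):
    """Temperature derivative of ln Z by central finite differences, the
    ingredient needed for internal energies and heat capacities."""
    zp = morse_partition_function(T + h, omega_e, omega_exe)
    zm = morse_partition_function(T - h, omega_e, omega_exe)
    return (np.log(zp) - np.log(zm)) / (2.0 * h)

# HCl-like constants (approximate): omega_e ~ 2990 cm^-1, omega_e*x_e ~ 53 cm^-1
for T in (200.0, 300.0, 500.0):
    Z = morse_partition_function(T, 2990.0, 53.0)
    print(f"T = {T:5.0f} K   Z = {Z:.6f}   dlnZ/dT = {dlnZ_dT(T, 2990.0, 53.0):.3e}")
```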

  18. Magnetotelluric inversion via reverse time migration algorithm of seismic data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ha, Taeyoung; Shin, Changsoo

    2007-07-01

    We propose a new algorithm for two-dimensional magnetotelluric (MT) inversion. Our algorithm is an MT inversion based on the steepest descent method, borrowed from the backpropagation technique of seismic inversion or reverse time migration introduced in the mid-1980s by Lailly and Tarantola. The steepest descent direction can be calculated efficiently by using the symmetry of the numerical Green's function derived from a mixed finite element method proposed by Nedelec for Maxwell's equations, without calculating the Jacobian matrix explicitly. We construct three different objective functions by taking the logarithm of the complex apparent resistivity, as introduced in the recent waveform inversion algorithm by Shin and Min. These objective functions can be naturally separated into amplitude inversion, phase inversion and simultaneous inversion. We demonstrate our algorithm by showing three inversion results for synthetic data.

  19. Logarithmic Sobolev Inequalities on Path Spaces Over Riemannian Manifolds

    NASA Astrophysics Data System (ADS)

    Hsu, Elton P.

    Let Wo(M) be the space of paths of unit time length on a connected, complete Riemannian manifold M such that γ(0) = o, a fixed point on M, and ν the Wiener measure on Wo(M) (the law of Brownian motion on M starting at o). If the Ricci curvature is bounded by c, then the following logarithmic Sobolev inequality holds:

  20. Measuring Academic Progress of Students with Learning Difficulties: A Comparison of the Semi-Logarithmic Chart and Equal Interval Graph Paper.

    ERIC Educational Resources Information Center

    Marston, Doug; Deno, Stanley L.

    The accuracy of predictions of future student performance on the basis of graphing data on semi-logarithmic charts and equal interval graphs was examined. All 83 low-achieving students in grades 3 to 6 read randomly-selected lists of words from the Harris-Jacobson Word List for 1 minute. The number of words read correctly and words read…

  1. Pars plana Ahmed valve and vitrectomy in patients with glaucoma associated with posterior segment disease.

    PubMed

    Wallsh, Josh O; Gallemore, Ron P; Taban, Mehran; Hu, Charles; Sharareh, Behnam

    2013-01-01

    To assess the safety and efficacy of a modified technique for pars plana placement of the Ahmed valve in combination with pars plana vitrectomy in the treatment of glaucoma associated with posterior segment disease. Thirty-nine eyes with glaucoma associated with posterior segment disease underwent pars plana vitrectomy combined with Ahmed valve placement. All valves were placed in the pars plana using a modified technique, without the pars plana clip, and using a scleral patch graft. The 24 eyes diagnosed with neovascular glaucoma had an improvement in intraocular pressure from 37.6 mmHg to 13.8 mmHg and best-corrected visual acuity from 2.13 logarithm of minimum angle of resolution to 1.40 logarithm of minimum angle of resolution. Fifteen eyes diagnosed with steroid-induced glaucoma had an improvement in intraocular pressure from 27.9 mmHg to 14.1 mmHg and best-corrected visual acuity from 1.38 logarithm of minimum angle of resolution to 1.13 logarithm of minimum angle of resolution. Complications included four cases of cystic bleb formation and one case of choroidal detachment and explantation for hypotony. Ahmed valve placement through the pars plana during vitrectomy is an effective option for managing complex cases of glaucoma without the use of the pars plana clip.

  2. A Planar Microfluidic Mixer Based on Logarithmic Spirals

    PubMed Central

    Scherr, Thomas; Quitadamo, Christian; Tesvich, Preston; Park, Daniel Sang-Won; Tiersch, Terrence; Hayes, Daniel; Choi, Jin-Woo; Nandakumar, Krishnaswamy

    2013-01-01

    A passive, planar micromixer design based on logarithmic spirals is presented. The device was fabricated using polydimethylsiloxane soft photolithography techniques, and mixing performance was characterized via numerical simulation and fluorescent microscopy. Mixing efficiency initially declined as Reynolds number increased, and this trend continued until a Reynolds number of 15 where a minimum was reached at 53%. Mixing efficiency then began to increase reaching a maximum mixing efficiency of 86% at Re = 67. Three-dimensional simulations of fluid mixing in this design were compared to other planar geometries such as the Archimedes spiral and Meandering-S mixers. The implementation of logarithmic curvature offers several unique advantages that enhance mixing, namely a variable cross-sectional area and a logarithmically varying radius of curvature that creates 3-D Dean vortices. These flow phenomena were observed in simulations with multilayered fluid folding and validated with confocal microscopy. This design provides improved mixing performance over a broader range of Reynolds numbers than other reported planar mixers, all while avoiding external force fields, more complicated fabrication processes, and the introduction of flow obstructions or cavities that may unintentionally affect sensitive or particulate-containing samples. Due to the planar design requiring only single-step lithographic features, this compact geometry could be easily implemented into existing micro-total analysis systems requiring effective rapid mixing. PMID:23956497

  3. A planar microfluidic mixer based on logarithmic spirals

    NASA Astrophysics Data System (ADS)

    Scherr, Thomas; Quitadamo, Christian; Tesvich, Preston; Sang-Won Park, Daniel; Tiersch, Terrence; Hayes, Daniel; Choi, Jin-Woo; Nandakumar, Krishnaswamy; Monroe, W. Todd

    2012-05-01

    A passive, planar micromixer design based on logarithmic spirals is presented. The device was fabricated using polydimethylsiloxane soft photolithography techniques, and mixing performance was characterized via numerical simulation and fluorescent microscopy. Mixing efficiency initially declined as the Reynolds number increased, and this trend continued until a Reynolds number of 15 where a minimum was reached at 53%. Mixing efficiency then began to increase reaching a maximum mixing efficiency of 86% at Re = 67. Three-dimensional (3D) simulations of fluid mixing in this design were compared to other planar geometries such as the Archimedes spiral and Meandering-S mixers. The implementation of logarithmic curvature offers several unique advantages that enhance mixing, namely a variable cross-sectional area and a logarithmically varying radius of curvature that creates 3D Dean vortices. These flow phenomena were observed in simulations with multilayered fluid folding and validated with confocal microscopy. This design provides improved mixing performance over a broader range of Reynolds numbers than other reported planar mixers, all while avoiding external force fields, more complicated fabrication processes and the introduction of flow obstructions or cavities that may unintentionally affect sensitive or particulate-containing samples. Due to the planar design requiring only single-step lithographic features, this compact geometry could be easily implemented into existing micro-total analysis systems requiring effective rapid mixing.

  4. Coherence and entanglement measures based on Rényi relative entropies

    NASA Astrophysics Data System (ADS)

    Zhu, Huangjun; Hayashi, Masahito; Chen, Lin

    2017-11-01

    We study systematically resource measures of coherence and entanglement based on Rényi relative entropies, which include the logarithmic robustness of coherence, geometric coherence, and conventional relative entropy of coherence together with their entanglement analogues. First, we show that each Rényi relative entropy of coherence is equal to the corresponding Rényi relative entropy of entanglement for any maximally correlated state. By virtue of this observation, we establish a simple operational connection between entanglement measures and coherence measures based on Rényi relative entropies. We then prove that all these coherence measures, including the logarithmic robustness of coherence, are additive. Accordingly, all these entanglement measures are additive for maximally correlated states. In addition, we derive analytical formulas for Rényi relative entropies of entanglement of maximally correlated states and bipartite pure states, which reproduce a number of classic results on the relative entropy of entanglement and logarithmic robustness of entanglement in a unified framework. Several nontrivial bounds for Rényi relative entropies of coherence (entanglement) are further derived, which improve over results known previously. Moreover, we determine all states whose relative entropy of coherence is equal to the logarithmic robustness of coherence. As an application, we provide an upper bound for the exact coherence distillation rate, which is saturated for pure states.

  5. Validation of SplitVectors Encoding for Quantitative Visualization of Large-Magnitude-Range Vector Fields

    PubMed Central

    Zhao, Henan; Bryant, Garnett W.; Griffin, Wesley; Terrill, Judith E.; Chen, Jian

    2017-01-01

    We designed and evaluated SplitVectors, a new vector field display approach to help scientists perform new discrimination tasks on large-magnitude-range scientific data shown in three-dimensional (3D) visualization environments. SplitVectors uses scientific notation to display vector magnitude, thus improving legibility. We present an empirical study comparing the SplitVectors approach with three other approaches - direct linear representation, logarithmic, and text display commonly used in scientific visualizations. Twenty participants performed three domain analysis tasks: reading numerical values (a discrimination task), finding the ratio between values (a discrimination task), and finding the larger of two vectors (a pattern detection task). Participants used both mono and stereo conditions. Our results suggest the following: (1) SplitVectors improve accuracy by about 10 times compared to linear mapping and by four times to logarithmic in discrimination tasks; (2) SplitVectors have no significant differences from the textual display approach, but reduce cluttering in the scene; (3) SplitVectors and textual display are less sensitive to data scale than linear and logarithmic approaches; (4) using logarithmic can be problematic as participants' confidence was as high as directly reading from the textual display, but their accuracy was poor; and (5) Stereoscopy improved performance, especially in more challenging discrimination tasks. PMID:28113469
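    The core encoding step, splitting a magnitude into a mantissa and a power-of-ten exponent so that the two parts can drive separate visual channels, is simple to sketch. The mapping of the two parts onto glyphs is the paper's design and is not reproduced here.

```python
import math

def split_magnitude(value):
    """Split a vector magnitude into (mantissa, exponent) so that
    value = mantissa * 10**exponent with 1 <= mantissa < 10.
    How the two parts are drawn is left to the visualization design."""
    if value == 0:
        return 0.0, 0
    exponent = math.floor(math.log10(abs(value)))
    mantissa = value / 10 ** exponent
    return mantissa, exponent

for magnitude in (3.2e-4, 0.57, 8.1e5):
    m, e = split_magnitude(magnitude)
    print(f"{magnitude:10.4g}  ->  {m:.3f} x 10^{e}")
```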

  6. Validation of SplitVectors Encoding for Quantitative Visualization of Large-Magnitude-Range Vector Fields.

    PubMed

    Henan Zhao; Bryant, Garnett W; Griffin, Wesley; Terrill, Judith E; Jian Chen

    2017-06-01

    We designed and evaluated SplitVectors, a new vector field display approach to help scientists perform new discrimination tasks on large-magnitude-range scientific data shown in three-dimensional (3D) visualization environments. SplitVectors uses scientific notation to display vector magnitude, thus improving legibility. We present an empirical study comparing the SplitVectors approach with three other approaches - direct linear representation, logarithmic, and text display commonly used in scientific visualizations. Twenty participants performed three domain analysis tasks: reading numerical values (a discrimination task), finding the ratio between values (a discrimination task), and finding the larger of two vectors (a pattern detection task). Participants used both mono and stereo conditions. Our results suggest the following: (1) SplitVectors improve accuracy by about 10 times compared to linear mapping and by four times to logarithmic in discrimination tasks; (2) SplitVectors have no significant differences from the textual display approach, but reduce cluttering in the scene; (3) SplitVectors and textual display are less sensitive to data scale than linear and logarithmic approaches; (4) using logarithmic can be problematic as participants' confidence was as high as directly reading from the textual display, but their accuracy was poor; and (5) Stereoscopy improved performance, especially in more challenging discrimination tasks.

  7. The social media index: measuring the impact of emergency medicine and critical care websites.

    PubMed

    Thoma, Brent; Sanders, Jason L; Lin, Michelle; Paterson, Quinten S; Steeg, Jordon; Chan, Teresa M

    2015-03-01

    The number of educational resources created for emergency medicine and critical care (EMCC) that incorporate social media has increased dramatically. With no way to assess their impact or quality, it is challenging for educators to receive scholarly credit and for learners to identify respected resources. The Social Media index (SMi) was developed to help address this. We used data from social media platforms (Google PageRanks, Alexa Ranks, Facebook Likes, Twitter Followers, and Google+ Followers) for EMCC blogs and podcasts to derive three normalized (ordinal, logarithmic, and raw) formulas. The most statistically robust formula was assessed for 1) temporal stability using repeated measures and website age, and 2) correlation with impact by applying it to EMCC journals and measuring the correlation with known journal impact metrics. The logarithmic version of the SMi containing four metrics was the most statistically robust. It correlated significantly with website age (Spearman r=0.372; p<0.001) and repeated measures through seven months (r=0.929; p<0.001). When applied to EMCC journals, it correlated significantly with all impact metrics except number of articles published. The strongest correlations were seen with the Immediacy Index (r=0.609; p<0.001) and Article Influence Score (r=0.608; p<0.001). The SMi's temporal stability and correlation with journal impact factors suggests that it may be a stable indicator of impact for medical education websites. Further study is needed to determine whether impact correlates with quality and how learners and educators can best utilize this tool.

  8. Phenomenology of single-inclusive jet production with jet radius and threshold resummation

    NASA Astrophysics Data System (ADS)

    Liu, Xiaohui; Moch, Sven-Olaf; Ringer, Felix

    2018-03-01

    We perform a detailed study of inclusive jet production cross sections at the LHC and compare the QCD theory predictions based on the recently developed formalism for threshold and jet radius joint resummation at next-to-leading logarithmic accuracy to inclusive jet data collected by the CMS Collaboration at √S = 7 and 13 TeV. We compute the cross sections at next-to-leading order in QCD with and without the joint resummation for different choices of jet radii R and observe that the joint resummation leads to crucial improvements in the description of the data. Comprehensive studies with different parton distribution functions demonstrate the necessity of considering the joint resummation in fits of those functions based on the LHC jet data.

  9. Height growth of solutions and a discrete Painlevé equation

    NASA Astrophysics Data System (ADS)

    Al-Ghassani, A.; Halburd, R. G.

    2015-07-01

    Consider a discrete equation whose right side is of degree two in yn and whose coefficients an, bn and cn are rational functions of n with rational coefficients. Suppose that there is a solution such that for all sufficiently large n, yn is rational and the height of yn dominates the height of the coefficient functions an, bn and cn. We show that if the logarithmic height of yn grows no faster than a power of n, then either the equation is the well-known discrete Painlevé equation dPII or its autonomous version, or yn is also an admissible solution of a discrete Riccati equation. This provides further evidence that slow height growth is a good detector of integrability.
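    The diagnostic used here, the growth rate of the logarithmic height h(p/q) = log max(|p|, |q|) along an orbit, is easy to compute for any rational recurrence. The sketch below contrasts the known 5-periodic Lyness map, whose heights stay bounded, with a generic degree-two map, whose heights grow much faster; neither map is the dPII equation discussed in the paper, and the initial values are arbitrary.

```python
from fractions import Fraction
from math import log

def log_height(r: Fraction) -> float:
    """Logarithmic height of a rational p/q in lowest terms: log max(|p|, |q|).
    Fraction already reduces to lowest terms."""
    return log(max(abs(r.numerator), abs(r.denominator), 1))

def iterate(step, y0, y1, n):
    """Iterate a second-order recurrence y_{n+1} = step(y_n, y_{n-1}) over Q."""
    ys = [Fraction(y0), Fraction(y1)]
    for _ in range(n):
        ys.append(step(ys[-1], ys[-2]))
    return ys

# Two illustrative maps (neither is the dPII equation of the paper):
lyness  = lambda y, yp: (y + 1) / yp            # 5-periodic map: heights stay bounded
generic = lambda y, yp: (y * y + yp) / (y + 2)  # generic degree-two map

for name, step in [("Lyness (periodic)", lyness), ("generic degree-two", generic)]:
    ys = iterate(step, Fraction(2, 3), Fraction(5, 7), 12)
    heights = [round(log_height(y), 2) for y in ys]
    print(f"{name:20s} h(y_n) = {heights}")
```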

  10. Correlation Length of Energy-Containing Structures in the Base of the Solar Corona

    NASA Astrophysics Data System (ADS)

    Abramenko, V.; Zank, G. P.; Dosch, A. M.; Yurchyshyn, V.

    2013-12-01

    An essential parameter for models of coronal heating and fast solar wind acceleration that rely on the dissipation of MHD turbulence is the characteristic energy-containing length of the squared velocity and magnetic field fluctuations transverse to the mean magnetic field inside a coronal hole (CH) at the base of the corona. The characteristic length scale directly defines the heating rate. Rather surprisingly, almost nothing is known observationally about this critical parameter. Currently, only a very rough estimate of the characteristic length has been available, based on the fact that the network spacing is about 30000 km. We estimated this parameter from observations of photospheric random motions and magnetic fields measured in the photosphere inside coronal holes. We found that the characteristic length scale in the photosphere is about 600-2000 km, which is much smaller than that adopted in previous models. Our results provide a critical input parameter for current models of coronal heating and should yield an improved understanding of fast solar wind acceleration. Fig. 1 -- Plotted is the natural logarithm of the correlation function of the transverse velocity fluctuations u^2 versus the spatial lag r for the two CHs. The color code refers to accumulation time intervals of 2 (blue), 5 (green), 10 (red), and 20 (black) minutes. The values of the Batchelor integral length λ, the correlation length ς, and the e-folding length L in km are shown. Fig. 2 -- Plot of the natural logarithm of the correlation function of magnetic fluctuations b^2 versus the spatial lag r. The inset shows this plot with linear axes.
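    For a sampled correlation function, the e-folding length and the integral correlation length can be estimated directly. The short sketch below does this for a synthetic exponentially decaying correlation; the 800 km decay scale is an arbitrary illustration, and the Batchelor integral length used by the authors has its own definition that is not reproduced here.

```python
import numpy as np

def correlation_scales(r, C):
    """Estimate two characteristic scales from a sampled correlation function C(r):
    the e-folding length (lag at which C falls to C(0)/e) and the integral
    correlation length (area under C divided by C(0))."""
    r = np.asarray(r, dtype=float)
    C = np.asarray(C, dtype=float)
    target = C[0] / np.e
    idx = np.argmax(C < target)                     # first lag below C(0)/e
    # linear interpolation between the two bracketing samples
    e_fold = np.interp(target, [C[idx], C[idx - 1]], [r[idx], r[idx - 1]])
    l_int = np.sum(0.5 * (C[1:] + C[:-1]) * np.diff(r)) / C[0]   # trapezoid rule
    return e_fold, l_int

# synthetic example: exponentially decaying correlations with an 800 km scale
r = np.linspace(0.0, 5000.0, 501)                   # lag in km
C = np.exp(-r / 800.0)
e_fold, l_int = correlation_scales(r, C)
print(f"e-folding length ~ {e_fold:.0f} km, integral length ~ {l_int:.0f} km")
```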

  11. Testing the Hole-in-the-Pipe Model of nitric and nitrous oxide emissions from soils using the TRAGNET Database

    NASA Astrophysics Data System (ADS)

    Davidson, Eric A.; Verchot, Louis V.

    2000-12-01

    Because several soil properties and processes affect emissions of nitric oxide (NO) and nitrous oxide (N2O) from soils, it has been difficult to develop effective and robust algorithms to predict emissions of these gases in biogeochemical models. The conceptual "hole-in-the-pipe" (HIP) model has been used effectively to interpret results of numerous studies, but the ranges of climatic conditions and soil properties are often relatively narrow for each individual study. The Trace Gas Network (TRAGNET) database offers a unique opportunity to test the validity of one manifestation of the HIP model across a broad range of sites, including temperate and tropical climates, grasslands and forests, and native vegetation and agricultural crops. The logarithm of the sum of NO + N2O emissions was positively and significantly correlated with the logarithm of the sum of extractable soil NH4+ + NO3-. The logarithm of the ratio of NO:N2O emissions was negatively and significantly correlated with water-filled pore space (WFPS). These analyses confirm the applicability of the HIP model concept, that indices of soil N availability correlate with the sum of NO+N2O emissions, while soil water content is a strong and robust controller of the ratio of NO:N2O emissions. However, these parameterizations have only broad-brush accuracy because of unaccounted variation among studies in the soil depths where gas production occurs, where soil N and water are measured, and other factors. Although accurate predictions at individual sites may still require site-specific parameterization of these empirical functions, the parameterizations presented here, particularly the one for WFPS, may be appropriate for global biogeochemical modeling. Moreover, this integration of data sets demonstrates the broad ranging applicability of the HIP conceptual approach for understanding soil emissions of NO and N2O.
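    In a biogeochemical model, the two regressions are typically combined so that the N-availability index sets the total flux through the "pipe" and WFPS partitions it between NO and N2O. The sketch below shows that combination with purely hypothetical coefficients; none of the numbers are the TRAGNET-fitted values.

```python
def hip_fluxes(n_index, wfps, a=0.5, b=1.0, c=1.5, d=2.5):
    """Hole-in-the-pipe style parameterization sketch: the total NO+N2O flux
    scales with a soil-N availability index (log-log relation), while the
    NO:N2O ratio decreases with water-filled pore space (WFPS).
    All coefficients are hypothetical placeholders."""
    total = a * n_index ** b                 # log(total) linear in log(N index)
    ratio = 10.0 ** (c - d * wfps)           # log(NO/N2O) linear in WFPS
    no = total * ratio / (1.0 + ratio)
    n2o = total / (1.0 + ratio)
    return no, n2o

for wfps in (0.3, 0.6, 0.9):
    no, n2o = hip_fluxes(n_index=10.0, wfps=wfps)
    print(f"WFPS = {wfps:.1f}:  NO = {no:.2f},  N2O = {n2o:.2f} (arbitrary units)")
```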

  12. Rock Failure Analysis Based on a Coupled Elastoplastic-Logarithmic Damage Model

    NASA Astrophysics Data System (ADS)

    Abdia, M.; Molladavoodi, H.; Salarirad, H.

    2017-12-01

    The rock materials surrounding underground excavations typically demonstrate nonlinear mechanical response and irreversible behavior, in particular under high in-situ stress states. The dominant causes of irreversible behavior are plastic flow and the damage process. Plastic flow is controlled by the presence of local shear stresses which cause frictional sliding. During this process, the net number of bonds remains practically unchanged. The overall macroscopic consequence of plastic flow is that the elastic properties (e.g. the stiffness of the material) are insensitive to this type of irreversible change. The main cause of irreversible changes in quasi-brittle materials such as rock is the damage process occurring within the material. From a microscopic viewpoint, damage initiates with the nucleation and growth of microcracks. When the microcrack length reaches a critical value, the microcracks coalesce and, finally, localized meso-cracks appear. The macroscopic and phenomenological consequence of the damage process is stiffness degradation, dilatation and a softening response. In this paper, a coupled elastoplastic-logarithmic damage model was used to simulate the irreversible deformations and stiffness degradation of rock materials under loading. In this model, damage evolution and plastic flow rules were formulated in the framework of irreversible thermodynamics principles. To take into account the stiffness degradation and softening in the post-peak region, a logarithmic damage variable was implemented. Also, a plastic model with a Drucker-Prager yield function was used to model plastic strains. Then, an algorithm was proposed to calculate the numerical steps based on the proposed coupled plastic and damage constitutive model. The developed model has been programmed in a VC++ environment. It was then used as a separate, new constitutive model in the DEM code UDEC. Finally, the experimental behavior of oolitic limestone was simulated based on the developed model. The irreversible strains, softening and stiffness degradation were reproduced in the numerical results. Furthermore, the confinement pressure dependency of rock behavior was simulated in accordance with experimental observations.

  13. Sivers asymmetry in the pion induced Drell-Yan process at COMPASS within transverse momentum dependent factorization

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoyu; Lu, Zhun

    2018-03-01

    We investigate the Sivers asymmetry in the pion-induced single polarized Drell-Yan process in the theoretical framework of transverse momentum dependent (TMD) factorization up to next-to-leading logarithmic order of QCD. Within the TMD evolution formalism of parton distribution functions, the recently extracted nonperturbative Sudakov form factor for the pion distribution functions as well as the one for the Sivers function of the proton are applied to numerically estimate the Sivers asymmetry in π-p Drell-Yan at the kinematics of COMPASS at CERN. In the low b region, the Sivers function in b-space can be expressed as the convolution of the perturbatively calculable hard coefficients and the corresponding collinear correlation function, of which the Qiu-Sterman function is the most relevant one. The effect of the energy-scale dependence of the Qiu-Sterman function on the asymmetry is also studied. We find that our prediction of the Sivers asymmetries as functions of xp, xπ, xF and q⊥ is consistent with the recent COMPASS measurement.

  14. A viable logarithmic f(R) model for inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amin, M.; Khalil, S.; Salah, M.

    2016-08-18

    Inflation in the framework of f(R) modified gravity is revisited. We study the conditions that f(R) should satisfy in order to lead to a viable inflationary model in the original form and in the Einstein frame. Based on these criteria we propose a new logarithmic model as a potential candidate for f(R) theories aiming to describe inflation consistent with observations from the Planck satellite (2015). The model predicts a scalar spectral index of 0.9615

  15. A simplified implementation of van der Waals density functionals for first-principles molecular dynamics applications

    NASA Astrophysics Data System (ADS)

    Wu, Jun; Gygi, François

    2012-06-01

    We present a simplified implementation of the non-local van der Waals correlation functional introduced by Dion et al. [Phys. Rev. Lett. 92, 246401 (2004)] and reformulated by Román-Pérez et al. [Phys. Rev. Lett. 103, 096102 (2009)]. The proposed numerical approach removes the logarithmic singularity of the kernel function. Complete expressions of the self-consistent correlation potential and of the stress tensor are given. Combined with various choices of exchange functionals, five versions of van der Waals density functionals are implemented. Applications to the computation of the interaction energy of the benzene-water complex and to the computation of the equilibrium cell parameters of the benzene crystal are presented. As an example of crystal structure calculation involving a mixture of hydrogen bonding and dispersion interactions, we compute the equilibrium structure of two polymorphs of aspirin (2-acetoxybenzoic acid, C9H8O4) in the P21/c monoclinic structure.

  16. Scaling in the vicinity of the four-state Potts fixed point

    NASA Astrophysics Data System (ADS)

    Blöte, H. W. J.; Guo, Wenan; Nightingale, M. P.

    2017-08-01

    We study a self-dual generalization of the Baxter-Wu model, employing results obtained by transfer matrix calculations of the magnetic scaling dimension and the free energy. While the pure critical Baxter-Wu model displays the critical behavior of the four-state Potts fixed point in two dimensions, in the sense that logarithmic corrections are absent, the introduction of different couplings in the up- and down triangles moves the model away from this fixed point, so that logarithmic corrections appear. Real couplings move the model into the first-order range, away from the behavior displayed by the nearest-neighbor, four-state Potts model. We also use complex couplings, which bring the model in the opposite direction characterized by the same type of logarithmic corrections as present in the four-state Potts model. Our finite-size analysis confirms in detail the existing renormalization theory describing the immediate vicinity of the four-state Potts fixed point.

  17. Universal principles governing multiple random searchers on complex networks: The logarithmic growth pattern and the harmonic law

    NASA Astrophysics Data System (ADS)

    Weng, Tongfeng; Zhang, Jie; Small, Michael; Harandizadeh, Bahareh; Hui, Pan

    2018-03-01

    We propose a unified framework to evaluate and quantify the search time of multiple random searchers traversing independently and concurrently on complex networks. We find that the intriguing behaviors of multiple random searchers are governed by two basic principles—the logarithmic growth pattern and the harmonic law. Specifically, the logarithmic growth pattern characterizes how the search time increases with the number of targets, while the harmonic law explores how the search time of multiple random searchers varies relative to that needed by individual searchers. Numerical and theoretical results demonstrate these two universal principles established across a broad range of random search processes, including generic random walks, maximal entropy random walks, intermittent strategies, and persistent random walks. Our results reveal two fundamental principles governing the search time of multiple random searchers, which are expected to facilitate investigation of diverse dynamical processes like synchronization and spreading.
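    The setting is easy to reproduce for the simplest case of generic random walks: place k independent walkers on a network and record the time until the first one reaches a target. The sketch below (using networkx on an arbitrary Erdős-Rényi test graph) only estimates how the mean search time shrinks with k; it does not implement the paper's analytical framework or the other walk types it covers.

```python
import random
import networkx as nx

def search_time(G, k, target, rng, max_steps=100_000):
    """Steps until the first of k independent random walkers, started at
    uniformly random nodes, hits the target node."""
    walkers = [rng.choice(list(G.nodes)) for _ in range(k)]
    for step in range(max_steps):
        if target in walkers:
            return step
        walkers = [rng.choice(list(G.neighbors(w))) for w in walkers]
    return max_steps

rng = random.Random(1)
G = nx.erdos_renyi_graph(200, 0.05, seed=1)
G = G.subgraph(max(nx.connected_components(G), key=len)).copy()  # giant component
target = next(iter(G.nodes))

for k in (1, 2, 4, 8):
    trials = [search_time(G, k, target, rng) for _ in range(200)]
    print(f"k = {k}: mean search time ~ {sum(trials) / len(trials):.1f} steps")
```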

  18. Chemical origins of frictional aging.

    PubMed

    Liu, Yun; Szlufarska, Izabela

    2012-11-02

    Although the basic laws of friction are simple enough to be taught in elementary physics classes and although friction has been widely studied for centuries, in the current state of knowledge it is still not possible to predict a friction force from fundamental principles. One of the highly debated topics in this field is the origin of static friction. For most macroscopic contacts between two solids, static friction will increase logarithmically with time, a phenomenon that is referred to as aging of the interface. One known reason for the logarithmic growth of static friction is the deformation creep in plastic contacts. However, this mechanism cannot explain frictional aging observed in the absence of roughness and plasticity. Here, we discover molecular mechanisms that can lead to a logarithmic increase of friction based purely on interfacial chemistry. Predictions of our model are consistent with published experimental data on the friction of silica.

  19. A Probabilistic Model for Predicting Attenuation of Viruses During Percolation in Unsaturated Natural Barriers

    NASA Astrophysics Data System (ADS)

    Faulkner, B. R.; Lyon, W. G.

    2001-12-01

    We present a probabilistic model for predicting virus attenuation. The solution employs the assumption of complete mixing. Monte Carlo methods are used to generate ensemble simulations of virus attenuation due to physical, biological, and chemical factors. The model generates a probability of failure to achieve 4-log attenuation. We tabulated data from related studies to develop probability density functions for input parameters, and utilized a database of soil hydraulic parameters based on the 12 USDA soil categories. Regulators can use the model based on limited information such as boring logs, climate data, and soil survey reports for a particular site of interest. Plackett-Burman sensitivity analysis indicated the most important main effects on probability of failure to achieve 4-log attenuation in our model were mean logarithm of saturated hydraulic conductivity (+0.396), mean water content (+0.203), mean solid-water mass transfer coefficient (-0.147), and the mean solid-water equilibrium partitioning coefficient (-0.144). Using the model, we predicted the probability of failure of a one-meter thick proposed hydrogeologic barrier and a water content of 0.3. With the currently available data and the associated uncertainty, we predicted soils classified as sand would fail (p=0.999), silt loams would also fail (p=0.292), but soils classified as clays would provide the required 4-log attenuation (p=0.001). The model is extendible in the sense that probability density functions of parameters can be modified as future studies refine the uncertainty, and the lightweight object-oriented design of the computer model (implemented in Java) will facilitate reuse with modified classes. This is an abstract of a proposed presentation and does not necessarily reflect EPA policy.

  20. Calibration of redox potential in sperm wash media and evaluation of oxidation-reduction potential values in various assisted reproductive technology culture media using MiOXSYS system.

    PubMed

    Panner Selvam, M K; Henkel, R; Sharma, R; Agarwal, A

    2018-03-01

    Oxidation-reduction potential describes the balance between the oxidants and antioxidants in fluids including semen. Various artificial culture media are used in andrology and IVF laboratories for sperm preparation and to support the development of fertilized oocytes under in vitro conditions. The composition and conditions of these media are vital for optimal functioning of the gametes. Currently, there are no data on the status of redox potential of sperm processing and assisted reproduction media. The purpose of this study was to compare the oxidation-reduction potential values of the different media and to calibrate the oxidation-reduction potential values of the sperm wash medium using oxidative stress inducer cumene hydroperoxide and antioxidant ascorbic acid. Redox potential was measured in 10 different media ranging from sperm wash media, freezing media and assisted reproductive technology one-step medium to sequential media. Oxidation-reduction potential values of the sequential culture medium and one-step culture medium were lower and significantly different (p < 0.05) from the sperm wash media. Calibration of the sperm wash media using the oxidant cumene hydroperoxide and antioxidant ascorbic acid demonstrated that oxidation-reduction potential and the concentration of oxidant or antioxidant are logarithmically dependent. This study highlights the importance of calibrating the oxidation-reduction potential levels of the sperm wash media in order to utilize it as a reference value to identify the physiological range of oxidation-reduction potential that does not have any adverse effect on normal physiological sperm function. © 2017 American Society of Andrology and European Academy of Andrology.

  1. Desktop publishing and validation of custom near visual acuity charts.

    PubMed

    Marran, Lynn; Liu, Lei; Lau, George

    2008-11-01

    Customized visual acuity (VA) assessment is an important part of basic and clinical vision research. Desktop computer based distance VA measurements have been utilized, and shown to be accurate and reliable, but computer based near VA measurements have not been attempted, mainly due to the limited spatial resolution of computer monitors. In this paper, we demonstrate how to use desktop publishing to create printed custom near VA charts. We created a set of six near VA charts in a logarithmic progression, 20/20 through 20/63, with multiple lines of the same acuity level, different letter arrangements in each line and a random noise background. This design allowed repeated measures of subjective accommodative amplitude without the potential artifact of familiarity of the optotypes. The background maintained a constant and spatial frequency rich peripheral stimulus for accommodation across the six different acuity levels. The paper describes in detail how pixel-wise accurate black and white bitmaps of Sloan optotypes were used to create the printed custom VA charts. At all acuity levels, the physical sizes of the printed custom optotypes deviated no more than 0.034 log units from that of the standard, satisfying the 0.05 log unit ISO criterion we used to demonstrate physical equivalence. Also, at all acuity levels, log unit differences in the mean target distance for which reliable recognition of letters first occurred for the printed custom optotypes compared to the standard were found to be below 0.05, satisfying the 0.05 log unit ISO criterion we used to demonstrate functional equivalence. It is possible to use desktop publishing to create custom near VA charts that are physically and functionally equivalent to standard VA charts produced by a commercial printing process.
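    The size calculations behind such charts follow directly from the logMAR scale: a 20/20 optotype subtends 5 arcmin overall, and each 0.1 logMAR step scales that angle by 10^0.1. The sketch below computes nominal letter heights for the six chart levels and the log-unit deviation of a hypothetical printed letter, assuming a 40 cm near test distance (the actual test distance is not stated in this abstract).

```python
import math

ARCMIN = math.pi / (180 * 60)          # one minute of arc, in radians

def optotype_height_mm(logmar, distance_m=0.40):
    """Physical height of an optotype subtending 5 arcmin * 10**logMAR at the
    given test distance (0.40 m assumed here for a near chart)."""
    angle = 5 * ARCMIN * 10 ** logmar
    return 2 * distance_m * math.tan(angle / 2) * 1000.0

def log_unit_error(printed_mm, logmar, distance_m=0.40):
    """Deviation of a printed letter from its nominal size, in log units
    (the acceptance criterion used in the paper was 0.05 log units)."""
    return abs(math.log10(printed_mm / optotype_height_mm(logmar, distance_m)))

# 20/20 .. 20/63 in 0.1 logMAR steps, matching the six custom charts
for logmar, snellen in zip([0.0, 0.1, 0.2, 0.3, 0.4, 0.5],
                           ["20/20", "20/25", "20/32", "20/40", "20/50", "20/63"]):
    print(f"{snellen:6s} logMAR {logmar:.1f}: {optotype_height_mm(logmar):.2f} mm")

print("example check:", round(log_unit_error(printed_mm=0.62, logmar=0.0), 3), "log units")
```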

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larche, Michael R.; Prowant, Matthew S.; Bruillard, Paul J.

    This study compares different approaches for imaging the internal architecture of graphite/epoxy composites using backscattered ultrasound. Two cases are studied. In the first, near-surface defects in thin graphite/epoxy plates are imaged. The same backscattered waveforms were used to produce peak-to-peak, logarithm of signal energy, as well as entropy images of different types. All of the entropy images exhibit better border delineation and defect contrast than either the peak-to-peak or logarithm of signal energy images. The best results are obtained using the joint entropy of the backscattered waveforms with a reference function. Two different references are examined. The first is a reflection of the insonifying pulse from a stainless steel reflector. The second is an approximate optimum obtained from an iterative parametric search. The joint entropy images produced using this reference exhibit three times the contrast obtained in previous studies. These plates were later destructively analyzed to determine the size and location of near-surface defects, and the results were found to agree with the defect location and shape indicated by the entropy images. In the second study, images of long carbon graphite fibers (50% by weight) in polypropylene thermoplastic are obtained as a first step toward ultrasonic determination of the distributions of fiber position and orientation.

  3. The nature of arms in spiral galaxies. IV. Symmetries and asymmetries

    NASA Astrophysics Data System (ADS)

    del Río, M. S.; Cepa, J.

    1999-01-01

    A Fourier analysis of the intensity distribution in the planes of nine spiral galaxies is performed. In terms of the arm classification scheme of Elmegreen & Elmegreen (1987), seven of the galaxies have well-defined arms (classes 12 and 9) and two have intermediate-type arms (class 5). The galaxies studied are NGC 157, 753, 895, 4321, 6764, 6814, 6951, 7479 and 7723. For each object, Johnson B-band images are available, which are decomposed into angular components for different angular periodicities. No a priori assumption is made concerning the form of the arms. The base function used in the analysis is a logarithmic spiral. The main result obtained with this method is that the dominant component (or mode) usually changes at corotation. In some cases, this change to a different mode persists only for a short range about corotation, but in other cases the change is permanent. The agreement between pitch angles found with this method and by fitting logarithmic spirals to mean arm positions (del Río & Cepa 1998b, hereafter Paper III) is good, except for those cases where bars are strong and dominant. Finally, a comparison is made with the "symmetrization" method introduced by Elmegreen, Elmegreen & Montenegro (1992, hereafter EEM), which also shows the different symmetric components.

  4. Log-polar mapping-based scale space tracking with adaptive target response

    NASA Astrophysics Data System (ADS)

    Li, Dongdong; Wen, Gongjian; Kuai, Yangliu; Zhang, Ximing

    2017-05-01

    Correlation filter-based tracking has exhibited impressive robustness and accuracy in recent years. Standard correlation filter-based trackers are restricted to translation estimation and equipped with a fixed target response. These trackers produce inferior performance when encountering a significant scale variation or appearance change. We propose a log-polar mapping-based scale space tracker with an adaptive target response. This tracker transforms the scale variation of the target in the Cartesian space into a shift along the logarithmic axis in the log-polar space. A one-dimensional scale correlation filter is learned online to estimate the shift along the logarithmic axis. With the log-polar representation, scale estimation is achieved accurately without a multiresolution pyramid. To achieve an adaptive target response, a variance of the Gaussian function is computed from the response map and updated online with a learning rate parameter. Our log-polar mapping-based scale correlation filter and adaptive target response can be combined with any correlation filter-based tracker. In addition, the scale correlation filter can be extended to a two-dimensional correlation filter to achieve joint estimation of scale variation and in-plane rotation. Experiments performed on the OTB50 benchmark demonstrate that our tracker achieves superior performance against state-of-the-art trackers.
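    The key property exploited here, that an isotropic rescaling becomes a pure shift along the log-radius axis, can be checked with a small resampling sketch. The code below is plain NumPy/SciPy, not the tracker's correlation-filter pipeline, and the grid sizes and the 1.25 test scale factor are arbitrary choices.

```python
import numpy as np
from scipy.ndimage import map_coordinates, zoom

def log_polar(image, r_max=60.0, n_rho=64, n_theta=64):
    """Resample an image onto a log-polar grid centred on the image centre.
    Rows index log-radius (0 .. log(r_max)), columns index angle, so an
    isotropic rescaling of the scene becomes a row (log-radius) shift."""
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    rho = np.linspace(0.0, np.log(r_max), n_rho)            # log-radius axis
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    r = np.exp(rho)[:, None]
    ys = cy + r * np.sin(theta)[None, :]
    xs = cx + r * np.cos(theta)[None, :]
    return map_coordinates(image, [ys, xs], order=1, mode='nearest')

rng = np.random.default_rng(0)
img = zoom(rng.random((16, 16)), 8, order=1)                # smooth 128x128 test scene
lp_a = log_polar(img)
lp_b = log_polar(zoom(img, 1.25, order=1))                  # same scene, scaled by ~1.25

# The scale factor shows up as a row shift of ~ log(1.25) / d_rho in log-polar space.
d_rho = np.log(60.0) / 63
errors = [np.mean((np.roll(lp_a, s, axis=0)[8:] - lp_b[8:]) ** 2) for s in range(8)]
print("measured shift:", int(np.argmin(errors)), "rows;",
      "expected:", round(np.log(1.25) / d_rho, 1), "rows")
```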

  5. Effects of intermolecular interactions on absorption intensities of the fundamental and the first, second, and third overtones of OH stretching vibrations of methanol and t-butanol‑d9 in n-hexane studied by visible/near-infrared/infrared spectroscopy.

    PubMed

    Morisawa, Yusuke; Suga, Arisa

    2018-05-15

    Visible (Vis), near-infrared (NIR) and IR spectra in the 15,600-2500 cm-1 region were measured for methanol, methanol-d3, and t-butanol-d9 in n-hexane to investigate effects of intermolecular interaction on absorption intensities of the fundamental and the first, second, and third overtones of their OH stretching vibrations. The relative area intensities of OH stretching bands of free and hydrogen-bonded species were plotted versus the vibrational quantum number using logarithm plots (V=1-4) for 0.5 M methanol, 0.5 M methanol-d3, and 0.5 M t-butanol-d9 in n-hexane. In the logarithm plots the relative intensities of free species yield a linear dependence irrespective of the solutes while those of hydrogen-bonded species deviate significantly from the linearity. The observed results suggest that the modifications in dipole moment functions of the OH bond induced by the formation of the hydrogen bondings change transient dipole moment, leading to the deviations of the dependences of relative absorption intensities on the vibrational quantum number from the linearity. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Effects of intermolecular interactions on absorption intensities of the fundamental and the first, second, and third overtones of OH stretching vibrations of methanol and t-butanol‑d9 in n-hexane studied by visible/near-infrared/infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Morisawa, Yusuke; Suga, Arisa

    2018-05-01

    Visible (Vis), near-infrared (NIR) and IR spectra in the 15,600-2500 cm-1 region were measured for methanol, methanol-d3, and t-butanol-d9 in n-hexane to investigate effects of intermolecular interaction on absorption intensities of the fundamental and the first, second, and third overtones of their OH stretching vibrations. The relative area intensities of OH stretching bands of free and hydrogen-bonded species were plotted versus the vibrational quantum number using logarithm plots (V = 1-4) for 0.5 M methanol, 0.5 M methanol-d3, and 0.5 M t-butanol-d9 in n-hexane. In the logarithm plots the relative intensities of free species yield a linear dependence irrespective of the solutes while those of hydrogen-bonded species deviate significantly from the linearity. The observed results suggest that the modifications in dipole moment functions of the OH bond induced by the formation of the hydrogen bondings change transient dipole moment, leading to the deviations of the dependences of relative absorption intensities on the vibrational quantum number from the linearity.

  7. Quantifying fluctuations in market liquidity: analysis of the bid-ask spread.

    PubMed

    Plerou, Vasiliki; Gopikrishnan, Parameswaran; Stanley, H Eugene

    2005-04-01

    Quantifying the statistical features of the bid-ask spread offers the possibility of understanding some aspects of market liquidity. Using quote data for the 116 most frequently traded stocks on the New York Stock Exchange over the two-year period 1994-1995, we analyze the fluctuations of the average bid-ask spread S over a time interval Δt. We find that S is characterized by a distribution that decays as a power law P[S > x] ~ x^(-ζ_S), with an exponent ζ_S ≈ 3 for all 116 stocks analyzed. Our analysis of the autocorrelation function of S shows long-range power-law correlations, ⟨S(t)S(t + τ)⟩ ~ τ^(-μ_S), similar to those previously found for the volatility. We next examine the relationship between the bid-ask spread and the volume Q, and find that S ~ ln Q; we find that a similar logarithmic relationship holds between the transaction-level bid-ask spread and the trade size. We then study the relationship between S and other indicators of market liquidity such as the frequency of trades N and the frequency of quote updates U, and find S ~ ln N and S ~ ln U. Lastly, we show that the bid-ask spread and the volatility are also related logarithmically.
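    Both reported regularities, a power-law tail for the spread distribution and a spread that grows with the logarithm of volume, are straightforward to estimate. The sketch below does so on synthetic stand-in data (not the NYSE quote data), using an ordinary least-squares fit for S versus ln Q and a Hill estimator for a tail exponent.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins: a spread that grows with the logarithm of volume, plus noise
volume = rng.lognormal(mean=8.0, sigma=1.5, size=50_000)
spread = 0.02 + 0.01 * np.log(volume) + rng.normal(0.0, 0.01, size=50_000)

# 1) check S ~ ln Q with an ordinary least-squares fit of S on ln Q
slope, intercept = np.polyfit(np.log(volume), spread, 1)
print(f"S ~ {intercept:.3f} + {slope:.3f} ln Q")

# 2) Hill estimator for a tail exponent from the largest 1000 order statistics
tail = np.sort(rng.pareto(3.0, size=50_000) + 1.0)[-1000:]   # exponent 3 by construction
zeta_hat = 1.0 / np.mean(np.log(tail / tail[0]))
print(f"Hill tail-exponent estimate: {zeta_hat:.2f} (true value 3)")
```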

  8. Using sky radiances measured by ground based AERONET Sun-Radiometers for cirrus cloud detection

    NASA Astrophysics Data System (ADS)

    Sinyuk, A.; Holben, B. N.; Eck, T. F.; Slutsker, I.; Lewis, J. R.

    2013-12-01

    Screening of cirrus clouds using observations of optical depth (OD) only has proven to be a difficult task, due mostly to some clouds having temporally and spatially stable OD. On the other hand, the sky radiance measurements, which in the AERONET protocol are taken throughout the day, may contain additional cloud information. In this work the potential of using sky radiances for cirrus cloud detection is investigated. The detection is based on differences in the angular shape of sky radiances due to cirrus clouds and aerosol. The range of scattering angles from 3 to 6 degrees was selected for two primary reasons: high sensitivity to the presence of cirrus clouds, and close proximity to the Sun. The angular shape of sky radiances was parametrized by its curvature, a parameter defined as a combination of the first and second derivatives with respect to scattering angle. We demonstrate that the slope of the logarithm of curvature versus the logarithm of scattering angle in this selected range of scattering angles is sensitive to cirrus cloud presence. We also demonstrate that restricting the values of the slope to below some threshold value can be used for cirrus cloud screening. The threshold value of the slope was estimated using collocated measurements from AERONET and MPLNET lidars.

  9. Estimation of Psychophysical Thresholds Based on Neural Network Analysis of DPOAE Input/Output Functions

    NASA Astrophysics Data System (ADS)

    Naghibolhosseini, Maryam; Long, Glenis

    2011-11-01

    The distortion product otoacoustic emission (DPOAE) input/output (I/O) function may provide a potential tool for evaluating cochlear compression. Hearing loss raises the lowest sound level that is audible to a listener, which affects cochlear compression and thus the dynamic range of hearing. Although the slope of the I/O function is highly variable when the total DPOAE is used, separating the nonlinear-generator component from the reflection component reduces this variability. We separated the two components using least squares fit (LSF) analysis of logarithmically sweeping tones, and confirmed that the separated generator component provides more consistent I/O functions than the total DPOAE. In this paper we estimated the slope of the I/O functions of the generator components at different sound levels using LSF analysis. An artificial neural network (ANN) was then used to estimate psychophysical thresholds from the estimated slopes of the I/O functions. DPOAE I/O functions determined in this way may help to estimate hearing thresholds and cochlear health.

  10. Thermochemical Data for Propellant Ingredients and their Products of Explosion

    DTIC Science & Technology

    1949-12-01

    gases except perhaps at temperatures below 2000°K. The logarithms of all the equilibrium constants except Ko have been tabulated since these logarithms...have almost constant first differences. Linear interpolation may lead to an error of a unit or two in the third decimal place for Ko but the...dissociation products OH, H and KO will be formed and at still higher temperatures the other dissociation products O2, O, N and C will begin to appear

  11. On the Existence of the Logarithmic Surface Layer in the Inner Core of Hurricanes

    DTIC Science & Technology

    2012-01-01

    characteristics of eyewall boundary layer of Hurricane Hugo (1989). Mon. Wea. Rev., 139, 1447-1462. Zhang, J. A., Montgomery, M. T. 2012 Observational...the inner core of hurricanes. Roger K. Smith and Michael T. Montgomery, Meteorological Institute, University of Munich, Munich, Germany; Dept. of...logarithmic surface layer", or log layer, in the boundary layer of the rapidly-rotating core of a hurricane. One such study argues that boundary-layer

  12. Efficient Queries of Stand-off Annotations for Natural Language Processing on Electronic Medical Records.

    PubMed

    Luo, Yuan; Szolovits, Peter

    2016-01-01

    In natural language processing, stand-off annotation uses the starting and ending positions of an annotation to anchor it to the text and stores the annotation content separately from the text. We address the fundamental problem of efficiently storing stand-off annotations when applying natural language processing on narrative clinical notes in electronic medical records (EMRs) and efficiently retrieving such annotations that satisfy position constraints. Efficient storage and retrieval of stand-off annotations can facilitate tasks such as mapping unstructured text to electronic medical record ontologies. We first formulate this problem as the interval query problem, for which the optimal query/update time is in general logarithmic. We next perform a tight time complexity analysis on the basic interval tree query algorithm and show its nonoptimality when applied to a collection of 13 query types from Allen's interval algebra. We then study two closely related state-of-the-art interval query algorithms, propose query reformulations, and develop augmentations to the second algorithm. Our proposed algorithm achieves logarithmic stabbing-max query time complexity and solves the stabbing-interval query tasks on all of Allen's relations in logarithmic time, attaining the theoretical lower bound. Updating time is kept logarithmic and the space requirement is kept linear at the same time. We also discuss interval management in external memory models and higher dimensions.
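
    As a concrete illustration of the interval query setting, the sketch below builds a static centered interval tree over stand-off annotations given as (start, end, payload) tuples and answers a stabbing query (all annotations covering a character position) in O(log n + k) time. This is the generic textbook structure that the paper's analysis starts from, not the augmented algorithm the authors propose.

      class IntervalTree:
          """Static centered interval tree for stabbing queries."""

          def __init__(self, intervals):
              if not intervals:
                  self.center = None
                  return
              points = sorted(p for iv in intervals for p in iv[:2])
              self.center = points[len(points) // 2]
              left, right, here = [], [], []
              for iv in intervals:
                  if iv[1] < self.center:
                      left.append(iv)
                  elif iv[0] > self.center:
                      right.append(iv)
                  else:
                      here.append(iv)
              # Intervals crossing the center, sorted by each endpoint.
              self.by_start = sorted(here, key=lambda iv: iv[0])
              self.by_end = sorted(here, key=lambda iv: iv[1], reverse=True)
              self.left = IntervalTree(left) if left else None
              self.right = IntervalTree(right) if right else None

          def stab(self, pos):
              """Return all stored intervals that contain position `pos`."""
              if self.center is None:
                  return []
              out = []
              if pos < self.center:
                  for iv in self.by_start:   # start <= pos suffices: end >= center > pos
                      if iv[0] > pos:
                          break
                      out.append(iv)
                  if self.left:
                      out.extend(self.left.stab(pos))
              elif pos > self.center:
                  for iv in self.by_end:     # end >= pos suffices: start <= center < pos
                      if iv[1] < pos:
                          break
                      out.append(iv)
                  if self.right:
                      out.extend(self.right.stab(pos))
              else:
                  out.extend(self.by_start)
              return out

    For example, stabbing position 4 in [(0, 5, "ann1"), (3, 9, "ann2"), (7, 12, "ann3")] returns the first two annotations.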

  13. Efficient Queries of Stand-off Annotations for Natural Language Processing on Electronic Medical Records

    PubMed Central

    Luo, Yuan; Szolovits, Peter

    2016-01-01

    In natural language processing, stand-off annotation uses the starting and ending positions of an annotation to anchor it to the text and stores the annotation content separately from the text. We address the fundamental problem of efficiently storing stand-off annotations when applying natural language processing on narrative clinical notes in electronic medical records (EMRs) and efficiently retrieving such annotations that satisfy position constraints. Efficient storage and retrieval of stand-off annotations can facilitate tasks such as mapping unstructured text to electronic medical record ontologies. We first formulate this problem as the interval query problem, for which the optimal query/update time is in general logarithmic. We next perform a tight time complexity analysis on the basic interval tree query algorithm and show its nonoptimality when applied to a collection of 13 query types from Allen's interval algebra. We then study two closely related state-of-the-art interval query algorithms, propose query reformulations, and develop augmentations to the second algorithm. Our proposed algorithm achieves logarithmic stabbing-max query time complexity and solves the stabbing-interval query tasks on all of Allen's relations in logarithmic time, attaining the theoretical lower bound. Updating time is kept logarithmic and the space requirement is kept linear at the same time. We also discuss interval management in external memory models and higher dimensions. PMID:27478379

  14. A study of the eigenvectors of the vibrational modes in crystalline cytidine via high-pressure Raman spectroscopy.

    PubMed

    Lee, Scott A; Pinnick, David A; Anderson, A

    2015-01-01

    Raman spectroscopy has been used to study the eigenvectors and eigenvalues of the vibrational modes of crystalline cytidine at 295 K and high pressures by evaluating the logarithmic derivative of the vibrational frequency ω with respect to pressure P, (1/ω)(dω/dP). Crystalline samples of molecular materials have strong intramolecular bonds and weak intermolecular bonds. This hierarchy of bonding strengths causes the vibrational optical modes localized within a molecular unit ("internal" modes) to be relatively high in frequency while the modes in which the molecular units vibrate against each other ("external" modes) have relatively low frequencies. The value of the logarithmic derivative is a useful diagnostic probe of the nature of the eigenvector of the vibrational modes because stretching modes (which are predominantly internal to the molecule) have low logarithmic derivatives while external modes have higher logarithmic derivatives. In crystalline cytidine, the modes at 85.8, 101.4, and 110.6 cm⁻¹ are external modes in which the molecules of the unit cell vibrate against each other in either translational or librational motions (or some linear combination thereof). All of the modes above 320 cm⁻¹ are predominantly internal stretching modes. The remaining modes below 320 cm⁻¹ include external modes and internal modes, mostly involving either torsional or bending motions of groups of atoms within a molecule.

  15. Threshold and Jet Radius Joint Resummation for Single-Inclusive Jet Production

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Xiaohui; Moch, Sven -Olaf; Ringer, Felix

    Here, we present the first threshold and jet radius jointly resummed cross section for single-inclusive hadronic jet production. We work at next-to-leading logarithmic accuracy and our framework allows for a systematic extension beyond the currently achieved precision. Long-standing numerical issues are overcome by performing the resummation directly in momentum space within soft collinear effective theory. We present the first numerical results for the LHC and observe an improved description of the available data. Our results are of immediate relevance for LHC precision phenomenology including the extraction of parton distribution functions and the QCD strong coupling constant.

  16. The effect of temperature on size and development in three species of benthic copepod.

    PubMed

    Abdullahi, B A; Laybourn-Parry, Johanna

    1985-09-01

    The effect of temperature on the size and development times of three benthic cyclopoid copepods, Acanthocyclops viridis, A. vernalis and Macrocyclops albidus, was investigated within the normal environmental temperature range (5°C-20°C). Adult weight decreased as temperature increased. All three species completed their development at 5°C, and development times at all temperatures are presented as curvilinear logarithmic temperature functions. The duration of development decreases as temperature rises. The results are compared with those reported elsewhere for benthic and planktonic species, and the ecological implications are discussed.

  17. Properties of a center/surround retinex. Part 1: Signal processing design

    NASA Technical Reports Server (NTRS)

    Rahman, Zia-ur

    1995-01-01

    The last version of Edwin Land's retinex model for human vision's lightness and color constancy has been implemented. Previous research has established the mathematical foundations of Land's retinex but has not examined specific design issues and their effects on the properties of the retinex operation. Here we describe the signal processing design of the retinex. We find that the placement of the logarithmic function is important and produces best results when placed after the surround formation. We also find that best rendition is obtained for a 'canonical' gain-offset applied after the retinex operation.
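
    A minimal single-scale sketch of the design choice discussed above: the surround is formed first (here a Gaussian blur stands in for the surround function), and the logarithm is applied only afterwards, followed by a gain-offset. The Gaussian surround and the gain and offset values are illustrative assumptions, not the exact parameters of the paper.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def single_scale_retinex(image, sigma=80.0, gain=1.0, offset=0.0):
          """Center/surround retinex with the log placed after surround formation:
          R = log(I) - log(surround(I)), then a 'canonical' gain-offset."""
          img = image.astype(float) + 1.0          # avoid log(0)
          surround = gaussian_filter(img, sigma)   # surround formed before the log
          retinex = np.log(img) - np.log(surround)
          return gain * retinex + offset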

  18. Applying the log-normal distribution to target detection

    NASA Astrophysics Data System (ADS)

    Holst, Gerald C.

    1992-09-01

    Holst and Pickard experimentally determined that MRT responses tend to follow a log-normal distribution. The log-normal distribution appeared reasonable because nearly all visual psychological data are plotted on a logarithmic scale. It has the additional advantage of being bounded to positive values, an important consideration since probability of detection is often plotted in linear coordinates. Review of published data suggests that the log-normal distribution may have universal applicability. Specifically, the log-normal distribution obtained from MRT tests appears to fit the target transfer function and the probability of detection of rectangular targets.

  19. A first determination of the unpolarized quark TMDs from a global analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bacchetta, Alessandro; Delcarro, Filippo; Pisano, Cristian

    Transverse momentum dependent distribution and fragmentation functions of unpolarized quarks inside unpolarized protons are extracted, for the first time, through a simultaneous analysis of semi-inclusive deep-inelastic scattering, Drell-Yan and Z boson hadroproduction processes. This study is performed at leading order in perturbative QCD, with energy scale evolution at the next-to-leading logarithmic accuracy. Moreover, some specific choices are made to deal with low scale evolution around 1 GeV2. Since only data in the low transverse momentum region are considered, no matching to fixed-order calculations at high transverse momentum is needed.

  20. Non-autonomous Hénon-Heiles systems

    NASA Astrophysics Data System (ADS)

    Hone, Andrew N. W.

    1998-07-01

    Scaling similarity solutions of three integrable PDEs, namely the Sawada-Kotera, fifth order KdV and Kaup-Kupershmidt equations, are considered. It is shown that the resulting ODEs may be written as non-autonomous Hamiltonian equations, which are time-dependent generalizations of the well-known integrable Hénon-Heiles systems. The (time-dependent) Hamiltonians are given by logarithmic derivatives of the tau-functions (inherited from the original PDEs). The ODEs for the similarity solutions also have inherited Bäcklund transformations, which may be used to generate sequences of rational solutions as well as other special solutions related to the first Painlevé transcendent.

  1. Investigation of the effect of temperature on aging behavior of Fe-doped lead zirconate titanate

    NASA Astrophysics Data System (ADS)

    Promsawat, Napatporn; Promsawat, Methee; Janphuang, Pattanaphong; Marungsri, Boonruang; Luo, Zhenhua; Pojprapai, Soodkhet

    The aging degradation behavior of Fe-doped lead zirconate titanate (PZT) subjected to different heat-treatment temperatures was investigated over 1000 h. The aging degradation of the piezoelectric properties of PZT was indicated by the decrease in the piezoelectric charge coefficient, the electric-field-induced strain and the remanent polarization. It was found that the aging degradation became more pronounced at temperatures above 50% of the PZT's Curie temperature. A mathematical model based on the linear logarithmic stretched exponential function was applied to explain the aging behavior. A qualitative aging model based on polar macrodomain switchability was proposed.
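
    The abstract refers to a stretched-exponential aging model; one common form is p(t) = p0 exp[-(t/τ)^β]. The fit below is a generic sketch under that assumption: the functional form, the synthetic d33-versus-time data and the starting values are illustrative, not taken from the paper.

      import numpy as np
      from scipy.optimize import curve_fit

      def stretched_exponential(t, p0, tau, beta):
          """Generic stretched-exponential decay often used for aging data."""
          return p0 * np.exp(-(t / tau) ** beta)

      # Hypothetical aging data: piezoelectric coefficient vs time in hours.
      t_hours = np.array([1, 10, 50, 100, 300, 600, 1000], dtype=float)
      d33 = np.array([400, 390, 378, 371, 360, 353, 347], dtype=float)

      params, _ = curve_fit(stretched_exponential, t_hours, d33,
                            p0=(400.0, 1e4, 0.5),
                            bounds=([0.0, 1.0, 0.01], [1000.0, 1e7, 1.0]))
      print("p0, tau, beta =", params)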

  2. Slow Lévy flights

    NASA Astrophysics Data System (ADS)

    Boyer, Denis; Pineda, Inti

    2016-02-01

    Among Markovian processes, the hallmark of Lévy flights is superdiffusion, or faster-than-Brownian dynamics. Here we show that Lévy laws, as well as Gaussian distributions, can also be the limit distributions of processes with long-range memory that exhibit very slow diffusion, logarithmic in time. These processes are path dependent and anomalous motion emerges from frequent relocations to already visited sites. We show how the central limit theorem is modified in this context, keeping the usual distinction between analytic and nonanalytic characteristic functions. A fluctuation-dissipation relation is also derived. Our results may have important applications in the study of animal and human displacements.

  3. Threshold and Jet Radius Joint Resummation for Single-Inclusive Jet Production

    DOE PAGES

    Liu, Xiaohui; Moch, Sven -Olaf; Ringer, Felix

    2017-11-20

    Here, we present the first threshold and jet radius jointly resummed cross section for single-inclusive hadronic jet production. We work at next-to-leading logarithmic accuracy and our framework allows for a systematic extension beyond the currently achieved precision. Long-standing numerical issues are overcome by performing the resummation directly in momentum space within soft collinear effective theory. We present the first numerical results for the LHC and observe an improved description of the available data. Our results are of immediate relevance for LHC precision phenomenology including the extraction of parton distribution functions and the QCD strong coupling constant.

  4. Entanglement entropy of critical spin liquids.

    PubMed

    Zhang, Yi; Grover, Tarun; Vishwanath, Ashvin

    2011-08-05

    Quantum spin liquids are phases of matter whose internal structure is not captured by a local order parameter. Particularly intriguing are critical spin liquids, where strongly interacting excitations control low energy properties. Here we calculate their bipartite entanglement entropy that characterizes their quantum structure. In particular we calculate the Renyi entropy S(2) on model wave functions obtained by Gutzwiller projection of a Fermi sea. Although the wave functions are not sign positive, S(2) can be calculated on relatively large systems (>324 spins) using the variational Monte Carlo technique. On the triangular lattice we find that entanglement entropy of the projected Fermi sea state violates the boundary law, with S(2) enhanced by a logarithmic factor. This is an unusual result for a bosonic wave function reflecting the presence of emergent fermions. These techniques can be extended to study a wide class of other phases.

  5. Elliptic polylogarithms and iterated integrals on elliptic curves. Part I: general formalism

    NASA Astrophysics Data System (ADS)

    Broedel, Johannes; Duhr, Claude; Dulat, Falko; Tancredi, Lorenzo

    2018-05-01

    We introduce a class of iterated integrals, defined through a set of linearly independent integration kernels on elliptic curves. As a direct generalisation of multiple polylogarithms, we construct our set of integration kernels ensuring that they have at most simple poles, implying that the iterated integrals have at most logarithmic singularities. We study the properties of our iterated integrals and their relationship to the multiple elliptic polylogarithms from the mathematics literature. On the one hand, we find that our iterated integrals span essentially the same space of functions as the multiple elliptic polylogarithms. On the other, our formulation allows for a more direct use to solve a large variety of problems in high-energy physics. We demonstrate the use of our functions in the evaluation of the Laurent expansion of some hypergeometric functions for values of the indices close to half integers.

  6. Rough-to-smooth transition of an equilibrium neutral constant stress layer

    NASA Technical Reports Server (NTRS)

    Logan, E., Jr.; Fichtl, G. H.

    1975-01-01

    The purpose of this research on the rough-to-smooth transition of an equilibrium neutral constant stress layer is to develop a model for low-level atmospheric flow over terrains of abruptly changing roughness, such as those occurring near the windward end of a landing strip, and to use the model to derive functions which define the extent of the region affected by the roughness change and allow adequate prediction of wind and shear stress profiles at all points within the region. A model consisting of two bounding logarithmic layers and an intermediate velocity defect layer is assumed, and dimensionless velocity and stress distribution functions which meet all boundary and matching conditions are hypothesized. The functions are used in an asymptotic form of the equation of motion to derive a relation which governs the growth of the internal boundary layer. The growth relation is used to predict the variation of surface shear stress.

  7. Frustration in Condensed Matter and Protein Folding

    NASA Astrophysics Data System (ADS)

    Lorelli, S.; Cabot, A.; Sundarprasad, N.; Boekema, C.

    Using computer modeling we study frustration in condensed matter and protein folding. Frustration is due to random and/or competing interactions. One definition of frustration is the sum of squares of the differences between actual and expected distances between characters. If this sum is non-zero, then the system is said to have frustration. A simulation tracks the movement of characters to lower their frustration. Our research is conducted on frustration as a function of temperature using a logarithmic scale. At absolute zero, the relaxation for frustration is a power function for randomly assigned patterns or an exponential function for regular patterns like Thomson figures. These findings have implications for protein folding; we attempt to apply our frustration modeling to protein folding and dynamics. We use coding in Python to simulate different ways a protein can fold. An algorithm is being developed to find the lowest frustration (and thus energy) states possible. Research supported by SJSU & AFC.
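
    The definition quoted above, frustration as the sum of squared differences between actual and expected pairwise distances, can be written out directly; the coordinate and expected-distance arrays below are hypothetical placeholders.

      import numpy as np

      def frustration(positions, expected_dist):
          """Sum of squares of (actual - expected) pairwise distances.

          positions:     (n, d) array of character coordinates
          expected_dist: (n, n) array of expected pairwise distances
          """
          diff = positions[:, None, :] - positions[None, :, :]
          actual = np.sqrt((diff ** 2).sum(axis=-1))
          i, j = np.triu_indices(len(positions), k=1)   # count each pair once
          return float(((actual[i, j] - expected_dist[i, j]) ** 2).sum())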

  8. Estimation of a monotone percentile residual life function under random censorship.

    PubMed

    Franco-Pereira, Alba M; de Uña-Álvarez, Jacobo

    2013-01-01

    In this paper, we introduce a new estimator of a percentile residual life function with censored data under a monotonicity constraint. Specifically, it is assumed that the percentile residual life is a decreasing function. This assumption is useful when estimating the percentile residual life of units, which degenerate with age. We establish a law of the iterated logarithm for the proposed estimator, and its n-equivalence to the unrestricted estimator. The asymptotic normal distribution of the estimator and its strong approximation to a Gaussian process are also established. We investigate the finite sample performance of the monotone estimator in an extensive simulation study. Finally, data from a clinical trial in primary biliary cirrhosis of the liver are analyzed with the proposed methods. One of the conclusions of our work is that the restricted estimator may be much more efficient than the unrestricted one. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. RELIC: a novel dye-bias correction method for Illumina Methylation BeadChip.

    PubMed

    Xu, Zongli; Langie, Sabine A S; De Boever, Patrick; Taylor, Jack A; Niu, Liang

    2017-01-03

    The Illumina Infinium HumanMethylation450 BeadChip and its successor, the Infinium MethylationEPIC BeadChip, have been extensively utilized in epigenome-wide association studies. Both arrays use two fluorescent dyes (Cy3-green/Cy5-red) to measure methylation level at CpG sites. However, performance differences between the dyes can result in biased estimates of methylation levels. Here we describe a novel method, called REgression on Logarithm of Internal Control probes (RELIC), to correct for dye bias on the whole array by utilizing the intensity values of paired internal control probes that monitor the two color channels. We evaluate the method in several datasets against other widely used dye-bias correction methods. Results on data quality improvement showed that RELIC correction statistically significantly outperforms alternative dye-bias correction methods. We incorporated the method into the R package ENmix, which is freely available from the Bioconductor website ( https://www.bioconductor.org/packages/release/bioc/html/ENmix.html ). RELIC is an efficient and robust method to correct for dye bias in Illumina Methylation BeadChip data. It outperforms alternative methods and is conveniently implemented in the R package ENmix to facilitate DNA methylation studies.

  10. A measurement of the proton structure function F2(x, Q2)

    NASA Astrophysics Data System (ADS)

    Ahmed, T.; Aid, S.; Akhundov, A.; Andreev, V.; Andrieu, B.; Appuhn, R.-D.; Arpagaus, M.; Babaev, A.; Baehr, J.; Bán, J.; Baranov, P.; Barrelet, E.; Bartel, W.; Barth, M.; Bassler, U.; Beck, H. P.; Behrend, H.-J.; Belousov, A.; Berger, Ch.; Bergstein, H.; Bernardi, G.; Bernet, R.; Bertrand-Coremans, G.; Besançon, M.; Beyer, R.; Biddulph, P.; Bizot, J. C.; Blobel, V.; Borras, K.; Botterweck, F.; Boudry, V.; Braemer, A.; Brasse, F.; Braunschweig, W.; Brisson, V.; Bruncko, D.; Brune, C.; Buchholz, R.; Büngener, L.; Bürger, J.; Büsser, F. W.; Buniatian, A.; Burke, S.; Buschhorn, G.; Campbell, A. J.; Carli, T.; Charles, F.; Clarke, D.; Clegg, A. B.; Clerbaux, B.; Colombo, M.; Contreras, J. G.; Cormack, C.; Coughlan, J. A.; Courau, A.; Coutures, Ch.; Cozzika, G.; Criegge, L.; Cussans, D. G.; Cvach, J.; Dagoret, S.; Dainton, J. B.; Danilov, M.; Dau, W. D.; Daum, K.; David, M.; Deffur, E.; Delcourt, B.; Del Buono, L.; De Roeck, A.; De Wolf, E. A.; Di Nezza, P.; Dollfus, C.; Dowell, J. D.; Dreis, H. B.; Droutskoi, V.; Duboc, J.; Düllmann, D.; Dünger, O.; Duhm, H.; Ebert, J.; Ebert, T. R.; Eckerlin, G.; Efremenko, V.; Egli, S.; Ehrlichmann, H.; Eichenberger, S.; Eichler, R.; Eisele, F.; Eisenhandler, E.; Ellison, R. J.; Elsen, E.; Erdmann, M.; Erdmann, W.; Evrard, E.; Favart, L.; Fedotov, A.; Feeken, D.; Felst, R.; Feltesse, J.; Ferencei, J.; Ferrarotto, F.; Flamm, K.; Fleischer, M.; Flieser, M.; Flügge, G.; Fomenko, A.; Fominykh, B.; Forbush, M.; Formánek, J.; Foster, J. M.; Franke, G.; Fretwurst, E.; Gabathuler, E.; Gabathuler, K.; Gamerdinger, K.; Garvey, J.; Gayler, J.; Gebauer, M.; Gellrich, A.; Genzel, H.; Gerhards, R.; Goerlach, U.; Goerlich, L.; Gogitidze, N.; Goldberg, M.; Goldner, D.; Gonzalez-Pineiro, B.; Gorelov, I.; Goritchev, P.; Grab, C.; Grässler, H.; Grässler, R.; Greenshaw, T.; Grindhammer, G.; Gruber, A.; Gruber, C.; Haack, J.; Haidt, D.; Hajduk, L.; Hamon, O.; Hampel, M.; Hanlon, E. M.; Hapke, M.; Haynes, W. J.; Heatherington, J.; Heinzelmann, G.; Henderson, R. C. W.; Henschel, H.; Herma, R.; Herynek, I.; Hess, M. F.; Hildesheim, W.; Hill, P.; Hiller, K. H.; Hilton, C. D.; Hladký, J.; Hoeger, K. C.; Höppner, M.; Horisberger, R.; Hudgson, V. L.; Huet, Ph.; Hütte, M.; Hufnagel, H.; Ibbotson, M.; Itterbeck, H.; Jabiol, M.-A.; Jacholkowska, A.; Jacobsson, C.; Jaffre, M.; Janoth, J.; Jansen, T.; Jönsson, L.; Johannsen, K.; Johnson, D. P.; Johnson, L.; Jung, H.; Kalmus, P. I. P.; Kant, D.; Kaschowitz, R.; Kasselmann, P.; Kathage, U.; Katzy, J.; Kaufmann, H. H.; Kazarian, S.; Kenyon, I. R.; Kermiche, S.; Keuker, C.; Kiesling, C.; Klein, M.; Kleinwort, C.; Knies, G.; Ko, W.; Köhler, T.; Köhne, J.; Kolanoski, H.; Kole, F.; Kolya, S. D.; Korbel, V.; Korn, M.; Kostka, P.; Kotelnikov, S. K.; Krämerkämper, T.; Krasny, M. W.; Krehbiel, H.; Krücker, D.; Krüger, U.; Krüner-Marquis, U.; Kubenka, J. P.; Küster, H.; Kuhlen, M.; Kurča, T.; Kurzhöfer, J.; Kuznik, B.; Lacour, D.; Lamarche, F.; Lander, R.; Landon, M. P. J.; Lange, W.; Lanius, P.; Laporte, J.-F.; Lebedev, A.; Leverenz, C.; Levonian, S.; Ley, Ch.; Lindner, A.; Lindström, G.; Linsel, F.; Lipinski, J.; List, B.; Loch, P.; Lohmander, H.; Lopez, G. C.; Lubimov, V.; Lüke, D.; Magnussen, N.; Malinovski, E.; Mani, S.; Maraček, R.; Marage, P.; Marks, J.; Marshall, R.; Martens, J.; Martin, R.; Martyn, H.-U.; Martyniak, J.; Masson, S.; Mavroidis, T.; Maxfield, S. J.; McMahon, S. J.; Mehta, A.; Meier, K.; Mercer, D.; Merz, T.; Meyer, C. A.; Meyer, H.; Meyer, J.; Mikocki, S.; Milstead, D.; Moreau, F.; Morris, J. 
V.; Mroczko, E.; Müller, G.; Müller, K.; Murín, P.; Nagovizin, V.; Nahnhauer, R.; Naroska, B.; Naumann, Th.; Newman, P. R.; Newton, D.; Neyret, D.; Nguyen, H. K.; Nicholls, T. C.; Niebergall, F.; Niebuhr, C.; Nisius, R.; Nowak, G.; Noyes, G. W.; Nyberg-Werther, M.; Oakden, M.; Oberlack, H.; Obrock, U.; Olsson, J. E.; Panaro, E.; Panitch, A.; Pascaud, C.; Patel, G. D.; Peppel, E.; Perez, E.; Phillips, J. P.; Pichler, Ch.; Pitzl, D.; Pope, G.; Prell, S.; Prosi, R.; Rädel, G.; Raupach, F.; Reimer, P.; Reinshagen, S.; Ribarics, P.; Rick, H.; Riech, V.; Riedlberger, J.; Riess, S.; Rietz, M.; Rizvi, E.; Robertson, S. M.; Robmann, P.; Roloff, H. E.; Roosen, R.; Rosenbauer, K.; Rostovtsev, A.; Rouse, F.; Royon, C.; Rüter, K.; Rusakov, S.; Rybicki, K.; Rylko, R.; Sahlmann, N.; Sanchez, E.; Sankey, D. P. C.; Savitsky, M.; Schacht, P.; Schiek, S.; Schleper, P.; von Schlippe, W.; Schmidt, C.; Schmidt, D.; Schmidt, G.; Schöning, A.; Schröder, V.; Schuhmann, E.; Schwab, B.; Schwind, A.; Seehausen, U.; Sefkow, F.; Seidel, M.; Sell, R.; Semenov, A.; Shekelyan, V.; Sheviakov, I.; Shooshtari, H.; Shtarkov, L. N.; Siegmon, G.; Siewert, U.; Sirois, Y.; Skillicorn, I. O.; Smirnov, P.; Smith, J. R.; Soloviev, Y.; Spiekermann, J.; Spitzer, H.; Starosta, R.; Steenbock, M.; Steffen, P.; Steinberg, R.; Stella, B.; Stephens, K.; Stier, J.; Stiewe, J.; Stösslein, U.; Strachota, J.; Straumann, U.; Struczinski, W.; Sutton, J. P.; Tapprogge, S.; Taylor, R. E.; Tchernyshov, V.; Thiebaux, C.; Thompson, G.; Truöl, P.; Turnau, J.; Tutas, J.; Uelkes, P.; Usik, A.; Valkár, S.; Valkárová, A.; Vallée, C.; Van Esch, P.; Van Mechelen, P.; Vartapetian, A.; Vazdik, Y.; Vecko, M.; Verrecchia, P.; Villet, G.; Wacker, K.; Wagener, A.; Wagener, M.; Walker, I. W.; Walther, A.; Weber, G.; Weber, M.; Wegener, D.; Wegner, A.; Wellisch, H. P.; West, L. R.; Willard, S.; Winde, M.; Winter, G.-G.; Wright, A. E.; Wünsch, E.; Wulff, N.; Yiou, T. P.; Žáček, J.; Zarbock, D.; Zhang, Z.; Zhokin, A.; Zimmer, M.; Zimmermann, W.; Zomer, F.; Zuber, K.; H1 Collaboration

    1995-02-01

    A measurement of the proton structure function F2(x, Q2) is reported for momentum transfers squared Q2 between 4.5 GeV2 and 1600 GeV2 and for Bjorken x between 1.8 × 10^-4 and 0.13, using data collected by the HERA experiment H1 in 1993. It is observed that F2 increases significantly with decreasing x, confirming our previous measurement made with one tenth of the data available in this analysis. The Q2 dependence is approximately logarithmic over the full kinematic range covered. The subsample of deep inelastic events with a large pseudo-rapidity gap in the hadronic energy flow close to the proton remnant is used to measure the "diffractive" contribution to F2.

  11. [Models for biomass estimation of four shrub species planted in urban area of Xi'an city, Northwest China].

    PubMed

    Yao, Zheng-Yang; Liu, Jian-Jun

    2014-01-01

    Four common greening shrub species (i.e., Ligustrum quihoui, Buxus bodinieri, Berberis xinganensis and Buxus megistophylla) in Xi'an City were selected to develop the highest-correlation, best-fit estimation models for organ (branch, leaf and root) and total biomass against different independent variables. The results indicated that the optimal organ and total biomass models of the four shrubs were power-function models (CAR models), except for the leaf biomass model of B. megistophylla, which was a logarithmic-function model (VAR model). The independent variables included basal diameter, crown diameter, crown diameter multiplied by height, canopy area and canopy volume. B. megistophylla differed significantly from the other three shrub species in the selection of independent variables, which were basal diameter and crown-related factors, respectively.

  12. Hard diffraction in the QCD dipole picture

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Peschanski, R.

    1996-02-01

    Using the QCD dipole picture of the BFKL pomeron, the gluon contribution to the cross-section for single diffractive dissociation in deep-inelastic high-energy scattering is calculated. The resulting contribution to the proton diffractive structure function integrated over t is given in terms of the relevant variables xP, Q2, and β = xBj/xP. It factorizes into an explicit xP-dependent hard Pomeron flux factor and a structure function. The flux factor is found to have substantial logarithmic corrections which may account for the recent measurements of the Pomeron intercept in this process. The triple Pomeron coupling is shown to be strongly enhanced by the resummation of leading logs. The obtained pattern of scaling violation at small β is similar to that for F2 at small xBj.

  13. The Time-Dependent Wavelet Spectrum of HH 1 and 2

    NASA Astrophysics Data System (ADS)

    Raga, A. C.; Reipurth, B.; Esquivel, A.; González-Gómez, D.; Riera, A.

    2018-04-01

    We have calculated the wavelet spectra of four epochs (spanning ≈20 yr) of Hα and [S II] HST images of HH 1 and 2. From these spectra we calculated the distribution functions of the (angular) radii of the emission structures. We found that the size distributions have maxima (corresponding to the characteristic sizes of the observed structures) with radii that are logarithmically spaced, with factors of ≈2-3 between the successive peaks. The positions of these peaks generally showed small shifts towards larger sizes as a function of time. This result indicates that the structures of HH 1 and 2 have a general expansion (seen at all scales), and/or are the result of a sequence of merging events resulting in the formation of knots with larger characteristic sizes.

  14. Structure-Function Relationship between Flicker-Defined Form Perimetry and Spectral-Domain Optical Coherence Tomography in Glaucoma Suspects.

    PubMed

    Reznicek, Lukas; Muth, Daniel; Vogel, Michaela; Hirneiß, Christoph

    2017-03-01

    To evaluate the relationship between functional parameters of repeated flicker-defined form perimetry (FDF) and structural parameters of spectral-domain optical coherence tomography (SD-OCT) in glaucoma suspects with normal findings in achromatic standard automated perimetry (SAP). Patients with optic nerve heads (ONH) clinically suspicious for glaucoma and normal SAP findings were enrolled in this prospective study. Each participant underwent visual field (VF) testing with FDF perimetry, using the Heidelberg Edge Perimeter (HEP, Heidelberg Engineering, Heidelberg, Germany) at two consecutive visits. Peripapillary retinal nerve fiber layer (RNFL) thickness was obtained by SD-OCT (Spectralis, Heidelberg Engineering, Heidelberg, Germany). Correlations and regression analyses of global and sectoral peripapillary RNFL thickness with corresponding global and regional VF sensitivities were investigated. A consecutive series of 65 study eyes of 36 patients was prospectively included. The second FDF test (HEP II) was used for analysis. Cluster-point-based suspicious VF defects were found in 34 eyes (52%). Significant correlations were observed between mean global MD (PSD) of HEP II and SD-OCT-based global peripapillary RNFL thickness (r = 0.380, p = 0.003 for MD and r = -0.516, p < 0.001 for PSD) and RNFL classification scores (R² = 0.157, p = 0.002 for MD and R² = 0.172, p = 0.001 for PSD). Correlations between mean global MD and PSD of HEP II and sectoral peripapillary RNFL thickness and classification scores were highest between function and structure for the temporal superior and temporal inferior sectors, whereas sectoral MD and PSD correlated more weakly with sectoral RNFL thickness. Correlations between linear RNFL values and untransformed logarithmic MD values for each segment were less significant than correlations between logarithmic MD values and RNFL thickness. In glaucoma suspects with normal SAP, global and sectoral peripapillary RNFL thickness is correlated with sensitivity and VF defects in FDF perimetry.

  15. What Governs Friction of Silicon Oxide in Humid Environment: Contact Area between Solids, Water Meniscus around the Contact, or Water Layer Structure?

    PubMed

    Chen, Lei; Xiao, Chen; Yu, Bingjun; Kim, Seong H; Qian, Linmao

    2017-09-26

    In order to understand the interfacial parameters governing the friction force (Ft) between silicon oxide surfaces in humid environment, the sliding speed (v) and relative humidity (RH) dependences of Ft were measured for a silica sphere (1 μm radius) sliding on a silicon oxide (SiOx) surface, using atomic force microscopy (AFM), and analyzed with a mathematical model describing interfacial contacts under a dynamic condition. Generally, Ft decreases logarithmically with increasing v to a cutoff value below which its dependence on interfacial chemistry and sliding condition is relatively weak. Above the cutoff value, the logarithmic v dependence could be divided into two regimes: (i) when RH is lower than 50%, Ft is a function of both v and RH; (ii) in contrast, at RH ≥ 50%, Ft is a function of v only, but not RH. These complicated v and RH dependences were hypothesized to originate from the structure of the water layer adsorbed on the surface and the water meniscus around the annulus of the contact area. This hypothesis was tested by analyzing Ft as a function of the water meniscus area (Am) and volume (Vm) estimated from a thermally activated water-bridge formation model. Surprisingly, it was found that Ft varies linearly with Vm and correlates poorly with Am at RH < 50%; and then its Vm dependence becomes weaker as RH increases above 50%. Comparing the friction data with the attenuated total reflection infrared (ATR-IR) spectroscopy analysis result of the adsorbed water layer, it appeared that the solidlike water layer structure formed on the silica surface plays a critical role in friction at RH < 50% and its contribution diminishes at RH ≥ 50%. These findings give a deeper insight into the role of water condensation in friction of the silicon oxide single asperity contact under ambient conditions.

  16. Optical Logarithmic Transformation of Speckle Images with Bacteriorhodopsin Films

    NASA Technical Reports Server (NTRS)

    Downie, John D.

    1995-01-01

    The application of logarithmic transformations to speckle images is sometimes desirable in converting the speckle noise distribution into an additive, constant-variance noise distribution. The optical transmission properties of some bacteriorhodopsin films are well suited to implement such a transformation optically in a parallel fashion. I present experimental results of the optical conversion of a speckle image into a transformed image with signal-independent noise statistics, using the real-time photochromic properties of bacteriorhodopsin. The original and transformed noise statistics are confirmed by histogram analysis.
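
    The logarithmic transformation referred to above turns multiplicative speckle, I = R·n, into an additive term, log I = log R + log n, whose variance no longer depends on the local signal level. The digital sketch below illustrates that effect numerically; it is a statistical toy, not a model of the bacteriorhodopsin film itself.

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical clean reflectance and fully developed speckle
      # (unit-mean exponential intensity noise, multiplicative).
      reflectance = rng.uniform(10.0, 200.0, size=(256, 256))
      speckle = rng.exponential(1.0, size=reflectance.shape)
      observed = reflectance * speckle

      # After the log, the noise term log(speckle) is additive and its
      # variance is independent of the underlying reflectance.
      log_image = np.log(observed)
      print("additive noise variance:", np.var(np.log(speckle)))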

  17. The exponentiated Hencky energy: anisotropic extension and case studies

    NASA Astrophysics Data System (ADS)

    Schröder, Jörg; von Hoegen, Markus; Neff, Patrizio

    2017-10-01

    In this paper we propose an anisotropic extension of the isotropic exponentiated Hencky energy, based on logarithmic strain invariants. Unlike other elastic formulations, the isotropic exponentiated Hencky elastic energy has been derived solely on differential geometric grounds, involving the geodesic distance of the deformation gradient F to the group of rotations. We formally extend this approach towards anisotropy by defining additional anisotropic logarithmic strain invariants with the help of suitable structural tensors and consider our findings for selected case studies.

  18. A law of iterated logarithm for the subfractional Brownian motion and an application.

    PubMed

    Qi, Hongsheng; Yan, Litan

    2018-01-01

    Let [Formula: see text] be a sub-fractional Brownian motion with Hurst index [Formula: see text]. In this paper, we give a local law of the iterated logarithm of the form [Formula: see text] almost surely, for all [Formula: see text], where [Formula: see text] for [Formula: see text]. As an application, we introduce the [Formula: see text]-variation of [Formula: see text] driven by [Formula: see text] [Formula: see text] with [Formula: see text].

  19. Logarithmic singularities and quantum oscillations in magnetically doped topological insulators

    NASA Astrophysics Data System (ADS)

    Nandi, D.; Sodemann, Inti; Shain, K.; Lee, G. H.; Huang, K.-F.; Chang, Cui-Zu; Ou, Yunbo; Lee, S. P.; Ward, J.; Moodera, J. S.; Kim, P.; Yacoby, A.

    2018-02-01

    We report magnetotransport measurements on magnetically doped (Bi,Sb)2Te3 films grown by molecular beam epitaxy. In Hall bar devices, we observe logarithmic dependence of transport coefficients in temperature and bias voltage which can be understood to arise from electron-electron interaction corrections to the conductivity and self-heating. Submicron scale devices exhibit intriguing quantum oscillations at high magnetic fields with dependence on bias voltage. The observed quantum oscillations can be attributed to bulk and surface transport.

  20. An Estimation of the Logarithmic Timescale in Ergodic Dynamics

    NASA Astrophysics Data System (ADS)

    Gomez, Ignacio S.

    An estimation of the logarithmic timescale in quantum systems having an ergodic dynamics in the semiclassical limit is presented. The estimation is based on an extension of Krieger's finite generator theorem for discretized σ-algebras and uses the time rescaling property of the Kolmogorov-Sinai entropy. The results are in agreement with those obtained in the literature, but with simpler mathematics and within the context of ergodic theory. Moreover, some consequences of the Poincaré recurrence theorem are also explored.

  1. Method for determining formation quality factor from seismic data

    DOEpatents

    Taner, M. Turhan; Treitel, Sven

    2005-08-16

    A method is disclosed for calculating the quality factor Q from a seismic data trace. The method includes calculating a first and a second minimum phase inverse wavelet at a first and a second time interval along the seismic data trace, synthetically dividing the first wavelet by the second wavelet, Fourier transforming the result of the synthetic division, calculating the logarithm of this quotient of Fourier transforms and determining the slope of a best fit line to the logarithm of the quotient.
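
    A simplified spectral-ratio analogue of the final log-slope step: take the logarithm of the ratio of the amplitude spectra of two wavelets separated by a travel time Δt, fit a line over a usable frequency band, and convert the slope to Q through the standard attenuation model A2/A1 = exp(-π f Δt / Q). The band limits and the attenuation model are generic assumptions; the patent's minimum-phase inverse-wavelet division step is not reproduced here.

      import numpy as np

      def q_from_spectral_ratio(w1, w2, dt_sample, delta_t, fmin=10.0, fmax=60.0):
          """Estimate Q from two wavelets separated by delta_t seconds.

          ln(A2/A1) = -pi * f * delta_t / Q, so the slope of the log spectral
          ratio versus frequency equals -pi * delta_t / Q.
          """
          n = max(len(w1), len(w2))
          freqs = np.fft.rfftfreq(n, d=dt_sample)
          a1 = np.abs(np.fft.rfft(w1, n))
          a2 = np.abs(np.fft.rfft(w2, n))
          band = (freqs >= fmin) & (freqs <= fmax) & (a1 > 0) & (a2 > 0)
          slope, _ = np.polyfit(freqs[band], np.log(a2[band] / a1[band]), 1)
          return -np.pi * delta_t / slope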

  2. Natural Strain

    NASA Technical Reports Server (NTRS)

    Freed, Alan D.

    1997-01-01

    Logarithmic strain is the preferred measure of strain used by materials scientists, who typically refer to it as the "true strain." It was Nadai who gave it the name "natural strain," which seems more appropriate. This strain measure was proposed by Ludwik for the one-dimensional extension of a rod with length l. It was defined via the integral of dl/l, to which Ludwik gave the name "effective specific strain." Today it is named after Hencky, who extended Ludwik's measure to three-dimensional analysis by defining logarithmic strains for the three principal directions.
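
    In symbols, Ludwik's one-dimensional definition and its extension to the three principal directions read (written here for reference; l_0 is the initial length and the λ_i are the principal stretches):

      \varepsilon = \int_{l_0}^{l} \frac{dl}{l} = \ln\frac{l}{l_0},
      \qquad
      \varepsilon_i = \ln \lambda_i , \quad i = 1, 2, 3.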

  3. Aging Wiener-Khinchin theorem and critical exponents of 1/f^{β} noise.

    PubMed

    Leibovich, N; Dechant, A; Lutz, E; Barkai, E

    2016-11-01

    The power spectrum of a stationary process may be calculated in terms of the autocorrelation function using the Wiener-Khinchin theorem. We here generalize the Wiener-Khinchin theorem for nonstationary processes and introduce a time-dependent power spectrum 〈S_{t_{m}}(ω)〉 where t_{m} is the measurement time. For processes with an aging autocorrelation function of the form 〈I(t)I(t+τ)〉=t^{Υ}ϕ_{EA}(τ/t), where ϕ_{EA}(x) is a nonanalytic function when x is small, we find aging 1/f^{β} noise. Aging 1/f^{β} noise is characterized by five critical exponents. We derive the relations between the scaled autocorrelation function and these exponents. We show that our definition of the time-dependent spectrum retains its interpretation as a density of Fourier modes and discuss the relation to the apparent infrared divergence of 1/f^{β} noise. We illustrate our results for blinking-quantum-dot models, single-file diffusion, and Brownian motion in a logarithmic potential.

  4. Stochastic analysis of three-dimensional flow in a bounded domain

    USGS Publications Warehouse

    Naff, R.L.; Vecchia, A.V.

    1986-01-01

    A commonly accepted first-order approximation of the equation for steady state flow in a fully saturated spatially random medium has the form of Poisson's equation. This form allows for the advantageous use of Green's functions to solve for the random output (hydraulic heads) in terms of a convolution over the random input (the logarithm of hydraulic conductivity). A solution for steady state three- dimensional flow in an aquifer bounded above and below is presented; consideration of these boundaries is made possible by use of Green's functions to solve Poisson's equation. Within the bounded domain the medium hydraulic conductivity is assumed to be a second-order stationary random process as represented by a simple three-dimensional covariance function. Upper and lower boundaries are taken to be no-flow boundaries; the mean flow vector lies entirely in the horizontal dimensions. The resulting hydraulic head covariance function exhibits nonstationary effects resulting from the imposition of boundary conditions. Comparisons are made with existing infinite domain solutions.

  5. Embedded function methods for supersonic turbulent boundary layers

    NASA Technical Reports Server (NTRS)

    He, J.; Kazakia, J. Y.; Walker, J. D. A.

    1990-01-01

    The development of embedded functions to represent the mean velocity and total enthalpy distributions in the wall layer of a supersonic turbulent boundary layer is considered. The asymptotic scaling laws (in the limit of large Reynolds number) for high speed compressible flows are obtained to facilitate eventual implementation of the embedded functions in a general prediction method. A self-consistent asymptotic structure is derived, as well as a compressible law of the wall in which the velocity and total enthalpy are logarithmic within the overlap zone, but in the Howarth-Dorodnitsyn variable. Simple outer region turbulence models are proposed (some of which are modifications of existing incompressible models) to reflect the effects of compressibility. As a test of the methodology and the new turbulence models, a set of self-similar outer region profiles is obtained for constant pressure flow; these are then coupled with embedded functions in the wall layer. The composite profiles thus obtained are compared directly with experimental data and good agreement is obtained for flows with Mach numbers up to 10.
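
    For reference, the incompressible law of the wall that the compressible result generalizes has the familiar overlap-zone form below, with κ the von Kármán constant and C an additive constant; in the compressible case described above the same logarithmic form holds, but with the Howarth-Dorodnitsyn variable in place of the physical wall distance:

      u^{+} = \frac{1}{\kappa} \ln y^{+} + C .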

  6. Potential function of element measurement for form-finding of wide sense tensegrity

    NASA Astrophysics Data System (ADS)

    Soe, C. K.; Obiya, H.; Koga, D.; Nizam, Z. M.; Ijima, K.

    2018-04-01

    Tensegrity is a unique morphological structure in which disconnected compression members and connected tension members hold the whole structure in self-equilibrium. Much research has been done on tensegrity structures because of the challenging nature of their form-finding analysis. This study investigates the trends in, and groups into patterns, the shapes that a tensegrity structure can take under the same connectivity and support conditions. In this study, the tangent stiffness method adopts two different functions for the element measurement, namely a power function and a logarithmic function. The numerical examples are based on a simplex initial shape with a statically determinate support condition to examine the pure effectiveness of the two proposed methods. The tangent stiffness method, which can evaluate the strict rigid-body displacement of elements, has the advantage of allowing various measure potentials to be defined and virtual element stiffnesses to be used freely. The numerical examples reveal the dominant trends and patterns of the equilibrium solutions, even though many related solutions exist under the same circumstances.

  7. Growth Substrate- and Phase-Specific Expression of Biphenyl, Benzoate, and C1 Metabolic Pathways in Burkholderia xenovorans LB400

    PubMed Central

    Denef, V. J.; Patrauchan, M. A.; Florizone, C.; Park, J.; Tsoi, T. V.; Verstraete, W.; Tiedje, J. M.; Eltis, L. D.

    2005-01-01

    Recent microarray experiments suggested that Burkholderia xenovorans LB400, a potent polychlorinated biphenyl (PCB)-degrading bacterium, utilizes up to three apparently redundant benzoate pathways and a C1 metabolic pathway during biphenyl and benzoate metabolism. To better characterize the roles of these pathways, we performed quantitative proteome profiling of cells grown on succinate, benzoate, or biphenyl and harvested during either mid-logarithmic growth or the transition between the logarithmic and stationary growth phases. The Bph enzymes, catabolizing biphenyl, were ∼16-fold more abundant in biphenyl- versus succinate-grown cells. Moreover, the upper and lower bph pathways were independently regulated. Expression of each benzoate pathway depended on growth substrate and phase. Proteins specifying catabolism via benzoate dihydroxylation and catechol ortho-cleavage (ben-cat pathway) were approximately an order of magnitude more abundant in benzoate- versus biphenyl-grown cells at the same growth phase. The chromosomal copy of the benzoyl-coenzyme A (CoA) (boxC) pathway was also expressed during growth on biphenyl: BoxC proteins were approximately twice as abundant as Ben and Cat proteins under these conditions. By contrast, proteins of the megaplasmid copy of the benzoyl-CoA (boxM) pathway were only detected in transition-phase benzoate-grown cells. Other proteins detected at increased levels in benzoate- and biphenyl-grown cells included general stress response proteins potentially induced by reactive oxygen species formed during aerobic aromatic catabolism. Finally, C1 metabolic enzymes were present in biphenyl-grown cells during transition phase. This study provides insights into the physiological roles and integration of apparently redundant catabolic pathways in large-genome bacteria and establishes a basis for investigating the PCB-degrading abilities of this strain. PMID:16291673

  8. Pollution potential leaching index as a tool to assess water leaching risk of arsenic in excavated urban soils.

    PubMed

    Li, Jining; Kosugi, Tomoya; Riya, Shohei; Hashimoto, Yohey; Hou, Hong; Terada, Akihiko; Hosomi, Masaaki

    2018-01-01

    Leaching of hazardous trace elements from excavated urban soils during construction of cities has received considerable attention in recent years in Japan. A new concept, the pollution potential leaching index (PPLI), was applied to assess the risk of arsenic (As) leaching from excavated soils. Sequential leaching tests (SLT) with two liquid-to-solid (L/S) ratios (10 and 20 L kg⁻¹) were conducted to determine the PPLI values, which represent the critical cumulative L/S ratios at which the average As concentrations in the cumulative leachates are reduced to critical values (10 or 5 µg L⁻¹). Two models (a logarithmic function model and an empirical two-site first-order leaching model) were compared to estimate the PPLI values. The fractionations of As before and after SLT were extracted according to a five-step sequential extraction procedure. Ten alkaline excavated soils were obtained from different construction projects in Japan. Although their total As contents were low (from 6.75 to 79.4 mg kg⁻¹), the As leaching was not negligible. Different L/S ratios at each step of the SLT had little influence on the cumulative As release or PPLI values. Experimentally determined PPLI values were in agreement with those from model estimations. A five-step SLT with an L/S of 10 L kg⁻¹ at each step, combined with a logarithmic function fitting, was suggested for the easy estimation of PPLI. Results of the sequential extraction procedure showed that large portions of the more labile As fractions (non-specifically and specifically sorbed fractions) were removed during long-term leaching, and so were small, but non-negligible, portions of strongly bound As fractions. Copyright © 2017 Elsevier Inc. All rights reserved.
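
    A sketch of the logarithmic-function estimation of PPLI suggested above: fit the cumulative As release against the logarithm of the cumulative L/S ratio, then solve for the L/S at which the average leachate concentration (cumulative mass divided by cumulative L/S) falls to the criterion. The functional form follows the abstract, but the data, bracket and criterion below are illustrative assumptions, not the paper's calibration.

      import numpy as np
      from scipy.optimize import brentq, curve_fit

      # Hypothetical sequential leaching test: cumulative L/S (L/kg) and
      # cumulative leached As (micrograms per kg of soil).
      ls = np.array([10, 20, 30, 40, 50], dtype=float)
      cum_as = np.array([310, 420, 480, 525, 560], dtype=float)

      def log_model(x, a, b):
          """Logarithmic cumulative-release model M(L/S) = a + b*ln(L/S)."""
          return a + b * np.log(x)

      (a, b), _ = curve_fit(log_model, ls, cum_as)

      def avg_conc(x):
          """Average concentration of the cumulative leachate, micrograms per L."""
          return log_model(x, a, b) / x

      critical = 10.0   # criterion in micrograms per L
      ppli = brentq(lambda x: avg_conc(x) - critical, 10.0, 1000.0)
      print("PPLI (L/kg):", round(ppli, 1))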

  9. Nanoscale multiphase phase field approach for stress- and temperature-induced martensitic phase transformations with interfacial stresses at finite strains

    NASA Astrophysics Data System (ADS)

    Basak, Anup; Levitas, Valery I.

    2018-04-01

    A thermodynamically consistent, novel multiphase phase field approach for stress- and temperature-induced martensitic phase transformations at finite strains and with interfacial stresses has been developed. The model considers a single order parameter to describe the austenite↔martensite transformations, and another N order parameters describing the N variants, constrained to a plane in an N-dimensional order parameter space. In the free energy model, the coexistence of three or more phases at a single material point (multiphase junction) and the deviation of each variant-variant transformation path from a straight line have been penalized. Some shortcomings of the existing models are resolved. Three different kinematic models (KMs) for the transformation deformation gradient tensors are assumed: (i) In KM-I the transformation deformation gradient tensor is a linear function of the Bain tensors for the variants. (ii) In KM-II the natural logarithm of the transformation deformation gradient is taken as a linear combination of the natural logarithms of the Bain tensors multiplied by the interpolation functions. (iii) In KM-III it is derived using the twinning equation from the crystallographic theory. The instability criteria for all the phase transformations have been derived for all the kinematic models, and their comparative study is presented. A large strain finite element procedure has been developed and used for studying the evolution of some complex microstructures in nanoscale samples under various loading conditions. Also, the stresses within variant-variant boundaries, the sample size effect, the effect of penalizing the triple junctions, and twinned microstructures have been studied. The present approach can be extended for studying grain growth, solidification, para↔ferroelectric transformations, and diffusive phase transformations.
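
    For KM-II, the verbal rule above can be transcribed compactly as follows, where the U_k are the Bain stretch tensors of the N variants and the φ_k are the interpolation functions of the order parameters; the symbols are introduced here for illustration and are not necessarily the paper's notation:

      \ln \mathbf{F}_t = \sum_{k=1}^{N} \varphi_k \, \ln \mathbf{U}_k ,
      \qquad\text{i.e.}\qquad
      \mathbf{F}_t = \exp\Big( \sum_{k=1}^{N} \varphi_k \, \ln \mathbf{U}_k \Big).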

  10. Autonomic Recovery Is Delayed in Chinese Compared with Caucasian following Treadmill Exercise.

    PubMed

    Sun, Peng; Yan, Huimin; Ranadive, Sushant M; Lane, Abbi D; Kappus, Rebecca M; Bunsawat, Kanokwan; Baynard, Tracy; Hu, Min; Li, Shichang; Fernhall, Bo

    2016-01-01

    Caucasian populations have a higher prevalence of cardiovascular disease (CVD) when compared with their Chinese counterparts and CVD is associated with autonomic function. It is unknown whether autonomic function during exercise recovery differs between Caucasians and Chinese. The present study investigated autonomic recovery following an acute bout of treadmill exercise in healthy Caucasians and Chinese. Sixty-two participants (30 Caucasian and 32 Chinese, 50% male) performed an acute bout of treadmill exercise at 70% of heart rate reserve. Heart rate variability (HRV) and baroreflex sensitivity (BRS) were obtained during 5-min epochs at pre-exercise, 30-min, and 60-min post-exercise. HRV was assessed using frequency [natural logarithm of high (LnHF) and low frequency (LnLF) powers, normalized high (nHF) and low frequency (nLF) powers, and LF/HF ratio] and time domains [Root mean square of successive differences (RMSSD), natural logarithm of RMSSD (LnRMSSD) and R-R interval (RRI)]. Spontaneous BRS included both up-up and down-down sequences. At pre-exercise, no group differences were observed for any HR, HRV and BRS parameters. During exercise recovery, significant race-by-time interactions were observed for LnHF, nHF, nLF, LF/HF, LnRMSSD, RRI, HR, and BRS (up-up). The declines in LnHF, nHF, RMSSD, RRI and BRS (up-up) and the increases in LF/HF, nLF and HR were blunted in Chinese when compared to Caucasians from pre-exercise to 30-min to 60-min post-exercise. Chinese exhibited delayed autonomic recovery following an acute bout of treadmill exercise. This delayed autonomic recovery may result from greater sympathetic dominance and extended vagal withdrawal in Chinese. Chinese Clinical Trial Register ChiCTR-IPR-15006684.

  11. Autonomic Recovery Is Delayed in Chinese Compared with Caucasian following Treadmill Exercise

    PubMed Central

    Sun, Peng; Yan, Huimin; Ranadive, Sushant M.; Lane, Abbi D.; Kappus, Rebecca M.; Bunsawat, Kanokwan; Baynard, Tracy; Hu, Min; Li, Shichang; Fernhall, Bo

    2016-01-01

    Caucasian populations have a higher prevalence of cardiovascular disease (CVD) when compared with their Chinese counterparts and CVD is associated with autonomic function. It is unknown whether autonomic function during exercise recovery differs between Caucasians and Chinese. The present study investigated autonomic recovery following an acute bout of treadmill exercise in healthy Caucasians and Chinese. Sixty-two participants (30 Caucasian and 32 Chinese, 50% male) performed an acute bout of treadmill exercise at 70% of heart rate reserve. Heart rate variability (HRV) and baroreflex sensitivity (BRS) were obtained during 5-min epochs at pre-exercise, 30-min, and 60-min post-exercise. HRV was assessed using frequency [natural logarithm of high (LnHF) and low frequency (LnLF) powers, normalized high (nHF) and low frequency (nLF) powers, and LF/HF ratio] and time domains [Root mean square of successive differences (RMSSD), natural logarithm of RMSSD (LnRMSSD) and R–R interval (RRI)]. Spontaneous BRS included both up-up and down-down sequences. At pre-exercise, no group differences were observed for any HR, HRV and BRS parameters. During exercise recovery, significant race-by-time interactions were observed for LnHF, nHF, nLF, LF/HF, LnRMSSD, RRI, HR, and BRS (up-up). The declines in LnHF, nHF, RMSSD, RRI and BRS (up-up) and the increases in LF/HF, nLF and HR were blunted in Chinese when compared to Caucasians from pre-exercise to 30-min to 60-min post-exercise. Chinese exhibited delayed autonomic recovery following an acute bout of treadmill exercise. This delayed autonomic recovery may result from greater sympathetic dominance and extended vagal withdrawal in Chinese. Trial Registration: Chinese Clinical Trial Register ChiCTR-IPR-15006684 PMID:26784109

  12. High-Throughput Analytical Techniques for Determination of Residues of 653 Multiclass Pesticides and Chemical Pollutants in Tea, Part VI: Study of the Degradation of 271 Pesticide Residues in Aged Oolong Tea by Gas Chromatography-Tandem Mass Spectrometry and Its Application in Predicting the Residue Concentrations of Target Pesticides.

    PubMed

    Chang, Qiao-Ying; Pang, Guo-Fang; Fan, Chun-Lin; Chen, Hui; Wang, Zhi-Bin

    2016-07-01

    The degradation rates of 271 pesticide residues in aged Oolong tea at two spray concentrations, named a and b (a < b), were monitored for 120 days using GC-tandem MS (GC-MS/MS). To study the degradation trends and establish regression equations, determination days were plotted as horizontal ordinates and the pesticide residue concentrations were plotted as vertical ordinates. Here, we consider the degradation equations of 271 pesticides over 40 and 120 days, summarize the degradation rates in six aspects (A-F), and discuss the degradation trends of the 271 pesticides in aged Oolong tea in detail. The results indicate that >70% of the determined pesticides coincide with the degradation regularity of trends A, B, and E, i.e., the concentration of pesticide will decrease within 4 months. Next, 20 representative pesticides were selected for further study at higher spray concentrations, named c and d (d > c > b > a), in aged Oolong tea over another 90 days. The determination days were plotted on the x-axis, and the differences between each determined result and the first-time-determined value of the target pesticides were plotted on the y-axis. A logarithmic function was obtained by fitting the 90-day determination results, allowing the degradation value of a target pesticide on a specific day to be calculated. These logarithmic functions at concentration d were applied to predict the residue concentrations of pesticides at concentration c. The results revealed that 70% of the 20 pesticides had low deviation ratios between predicted and measured results.
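
    As a rough illustration of the fitting step described in this record, the sketch below fits a logarithmic function of the form drop(t) = a·ln(t) + b to hypothetical residue-difference data with SciPy. The data values, parameter names, and the exact functional form are assumptions for illustration, not the paper's measurements or regression coefficients.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical determination days and residue differences (mg/kg) for one
# target pesticide; the real study used 90-day GC-MS/MS measurements.
days = np.array([1, 7, 14, 30, 45, 60, 75, 90], dtype=float)
residue_drop = np.array([0.02, 0.15, 0.22, 0.30, 0.33, 0.36, 0.38, 0.39])

def log_model(t, a, b):
    """Logarithmic degradation model: drop(t) = a*ln(t) + b."""
    return a * np.log(t) + b

params, _ = curve_fit(log_model, days, residue_drop)
a, b = params
print(f"fitted: drop(t) = {a:.3f}*ln(t) + {b:.3f}")

# Predict the degradation value of the target pesticide on a specific day.
print("predicted drop at day 50:", round(log_model(50.0, a, b), 3))
```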

  13. Controls on the Stability of Atmospheric O2 over Geologic Time Scales (Invited)

    NASA Astrophysics Data System (ADS)

    Rothman, D.; Bosak, T.

    2013-12-01

    The concentration of free oxygen in Earth's surface environment represents a balance between the accumulation of O2, due to long-term burial of organic carbon in sediments, and the consumption of O2 by weathering processes and the oxidation of reduced gases. The stability of modern O2 levels is typically attributed to a negative feedback that emerges when the production and consumption fluxes are expressed as a function of O2 concentration. Empirical studies of modern burial of organic carbon suggest that the production of O2 is a logarithmically decreasing function of the "oxygen exposure time" (OET), the duration of time over which sedimentary organic carbon is exposed to O2. The OET hypothesis implies that a fraction of organic matter is physically protected from anaerobic decay by its association with clay-sized mineral surface area, but susceptible to aerobic decay, either oxidatively or via free extracellular hydrolytic enzymes. By assuming that the long-term aerobic degradation is diffusion-limited, we predict the logarithmic decay of the OET curve. We note, however, that exposure to O2 may enhance not only degradation but also physical protection due to the precipitation of iron oxides and clay minerals. When the rate of transformation from the unprotected state to the protected state exceeds a small fraction of the average oxidative degradation rate, our theoretical OET curve develops a maximum at small O2 exposure times. In this case, the equilibrium O2 concentration can lose its stability. These observations may help explain major fluctuations in Earth's carbon cycle and the rise of O2 during the Proterozoic (2000-542 Ma).

  14. Entanglement between random and clean quantum spin chains

    NASA Astrophysics Data System (ADS)

    Juhász, Róbert; Kovács, István A.; Roósz, Gergő; Iglói, Ferenc

    2017-08-01

    The entanglement entropy in clean, as well as in random, quantum spin chains has a logarithmic size dependence at the critical point. Here, we study the entanglement of composite systems that consist of a clean subsystem and a random subsystem, both being critical. In the composite antiferromagnetic XX chain with a sharp interface, the entropy is found to grow in a double-logarithmic fashion, S ~ ln ln(L), where L is the length of the chain. We have also considered an extended defect at the interface, where the disorder penetrates into the homogeneous region in such a way that the strength of disorder decays with the distance l from the contact point as ~ l^(-κ). For κ < 1/2, the entropy scales as S(κ) ≃ [ln 2 (1 - 2κ)/6] ln L, while for κ ≥ 1/2, when the extended interface defect is an irrelevant perturbation, we recover the double-logarithmic scaling. These results are explained through strong-disorder RG arguments.

  15. Spatiotemporal characterization of Ensemble Prediction Systems - the Mean-Variance of Logarithms (MVL) diagram

    NASA Astrophysics Data System (ADS)

    Gutiérrez, J. M.; Primo, C.; Rodríguez, M. A.; Fernández, J.

    2008-02-01

    We present a novel approach to characterize and graphically represent the spatiotemporal evolution of ensembles using a simple diagram. To this aim we analyze the fluctuations obtained as differences between each member of the ensemble and the control. The lognormal character of these fluctuations suggests a characterization in terms of the first two moments of the logarithmic transformed values. On one hand, the mean is associated with the exponential growth in time. On the other hand, the variance accounts for the spatial correlation and localization of fluctuations. In this paper we introduce the MVL (Mean-Variance of Logarithms) diagram to intuitively represent the interplay and evolution of these two quantities. We show that this diagram uncovers useful information about the spatiotemporal dynamics of the ensemble. Some universal features of the diagram are also described, associated either with the nonlinear system or with the ensemble method and illustrated using both toy models and numerical weather prediction systems.
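
    A minimal numerical sketch of the two quantities behind the MVL diagram: for each time, take the logarithm of the fluctuation magnitudes (member-minus-control differences), then record the mean and variance of those logarithms. The toy ensemble below is synthetic; the growth rate, ensemble size, and lognormal form of the fluctuations are assumptions, not output from any forecasting system.

```python
import numpy as np

rng = np.random.default_rng(0)
n_members, n_gridpoints, n_times = 20, 500, 10

mvl_points = []
for t in range(1, n_times + 1):
    # Toy fluctuations: member-minus-control differences whose magnitudes are
    # lognormally distributed and grow (on average) exponentially in time.
    fluct = rng.lognormal(mean=0.3 * t, sigma=1.0, size=(n_members, n_gridpoints))
    log_fluct = np.log(fluct)
    # The MVL diagram plots the variance of the logs against their mean
    # as the forecast time evolves.
    mvl_points.append((log_fluct.mean(), log_fluct.var()))

for t, (m, v) in enumerate(mvl_points, start=1):
    print(f"t={t:2d}  mean(ln|fluct|)={m:5.2f}  var(ln|fluct|)={v:5.2f}")
```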

  16. The effect of multiplicity of stellar encounters and the diffusion coefficients in a locally homogeneous three-dimensional stellar medium: Removing the classical divergence

    NASA Astrophysics Data System (ADS)

    Rastorguev, A. S.; Utkin, N. D.; Chumak, O. V.

    2017-08-01

    Agekyan's λ-factor that allows for the effect of multiplicity of stellar encounters with large impact parameters has been used for the first time to directly calculate the diffusion coefficients in the phase space of a stellar system. Simple estimates show that the cumulative effect, i.e., the total contribution of distant encounters to the change in the velocity of a test star, given the multiplicity of stellar encounters, is finite, and the logarithmic divergence inherent in the classical description of diffusion is removed, as was shown previously by Kandrup using a different, more complex approach. In this case, the expressions for the diffusion coefficients, as in the classical description, contain the logarithm of the ratio of two independent quantities: the mean interparticle distance and the impact parameter of a close encounter. However, the physical meaning of this logarithmic factor changes radically: it reflects not the divergence but the presence of two characteristic length scales inherent in the stellar medium.

  17. SU(3) Landau gauge gluon and ghost propagators using the logarithmic lattice gluon field definition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ilgenfritz, Ernst-Michael; Humboldt-Universitaet zu Berlin, Institut fuer Physik, 12489 Berlin; Menz, Christoph

    2011-03-01

    We study the Landau gauge gluon and ghost propagators of SU(3) gauge theory, employing the logarithmic definition for the lattice gluon fields and implementing the corresponding form of the Faddeev-Popov matrix. This is necessary in order to consistently compare lattice data for the bare propagators with that of higher-loop numerical stochastic perturbation theory. In this paper we provide such a comparison, and introduce what is needed for an efficient lattice study. When comparing our data for the logarithmic definition to that of the standard lattice Landau gauge we clearly see the propagators to be multiplicatively related. The data of the associated ghost-gluon coupling matches up almost completely. For the explored lattice spacings and sizes, discretization artifacts, finite size, and Gribov-copy effects are small. At weak coupling and large momentum, the bare propagators and the ghost-gluon coupling are seen to be approached by those of higher-order numerical stochastic perturbation theory.

  18. Gravitational Field as a Pressure Force from Logarithmic Lagrangians and Non-Standard Hamiltonians: The Case of Stellar Halo of Milky Way

    NASA Astrophysics Data System (ADS)

    El-Nabulsi, Rami Ahmad

    2018-03-01

    Recently, the notion of non-standard Lagrangians was discussed widely in literature in an attempt to explore the inverse variational problem of nonlinear differential equations. Different forms of non-standard Lagrangians were introduced in literature and have revealed nice mathematical and physical properties. One interesting form related to the inverse variational problem is the logarithmic Lagrangian, which has a number of motivating features related to the Liénard-type and Emden nonlinear differential equations. Such types of Lagrangians lead to nonlinear dynamics based on non-standard Hamiltonians. In this communication, we show that some new dynamical properties are obtained in stellar dynamics if standard Lagrangians are replaced by Logarithmic Lagrangians and their corresponding non-standard Hamiltonians. One interesting consequence concerns the emergence of an extra pressure term, which is related to the gravitational field suggesting that gravitation may act as a pressure in a strong gravitational field. The case of the stellar halo of the Milky Way is considered.

  19. Logarithmic entropy of Kehagias-Sfetsos black hole with self-gravitation in asymptotically flat IR modified Hořava gravity

    NASA Astrophysics Data System (ADS)

    Liu, Molin; Lu, Junwang

    2011-05-01

    Motivated by the recent logarithmic entropy of Hořava-Lifshitz gravity, we investigate Hawking radiation for the Kehagias-Sfetsos black hole from the tunneling perspective. After considering the effect of self-gravitation, we calculate the emission rate and entropy of quantum tunneling by using the Kraus-Parikh-Wilczek method. Both massless and massive particles are considered in this Letter. Interestingly, the two types of tunneling particles have the same emission rate Γ and entropy S_b, whose analytical formulae are Γ = exp[π(r_in^2 - r_out^2)/2 + (π/α) ln(r_in/r_out)] and S_b = A/4 + (π/α) ln(A/4), respectively. Here, α is the Hořava-Lifshitz field parameter. The results show that the logarithmic entropy of Hořava-Lifshitz gravity can be explained well by self-gravitation, which is totally different from other methods. The study of this semiclassical tunneling process may shed light on understanding Hořava-Lifshitz gravity.

  20. Blue spectra of Kalb-Ramond axions and fully anisotropic string cosmologies

    NASA Astrophysics Data System (ADS)

    Giovannini, Massimo

    1999-03-01

    The inhomogeneities associated with massless Kalb-Ramond axions can be amplified not only in isotropic (four-dimensional) string cosmological models but also in the fully anisotropic case. If the background geometry is isotropic, the axions (which are not part of the homogeneous background) develop growing modes outside the horizon, leading, ultimately, to logarithmic energy spectra which are "red" in frequency and increase at large distance scales. We show that this conclusion can be avoided not only in the case of higher dimensional backgrounds with contracting internal dimensions but also in the case of string cosmological scenarios which are completely anisotropic in four dimensions. In this case the logarithmic energy spectra turn out to be "blue" in frequency and, consequently, decreasing at large distance scales. We elaborate on anisotropic dilaton-driven models and we argue that, incidentally, the background models leading to blue (or flat) logarithmic energy spectra for axionic fluctuations are likely to be isotropized by the effect of string tension corrections.

  1. The existence of inflection points for generalized log-aesthetic curves satisfying G1 data

    NASA Astrophysics Data System (ADS)

    Karpagavalli, R.; Gobithaasan, R. U.; Miura, K. T.; Shanmugavel, Madhavan

    2015-12-01

    Log-Aesthetic (LA) curves have been implemented in CAD/CAM systems for various design tasks. LA curves possess a linear Logarithmic Curvature Graph (LCG) with gradient (shape parameter) denoted as α. In 2009, a generalized form of LA curves called Generalized Log-Aesthetic Curves (GLAC) was proposed, which has an extra shape parameter ν compared to LA curves. Recently, a G1-continuous GLAC algorithm has been proposed that exploits the extra shape parameter using four control points. This paper discusses the existence of inflection points in a GLAC segment satisfying G1 Hermite data and the effect of an inflection point on the convex hull property. It is found that the existence of an inflection point can be avoided by manipulating the value of α. Numerical experiments show that increasing α may remove the inflection point (if any) in a GLAC segment.

  2. A Fan-tastic Alternative to Bulbs: Learning Circuits with Fans

    NASA Astrophysics Data System (ADS)

    Ekey, Robert; Edwards, Andrea; McCullough, Roy; Reitz, William; Mitchell, Brandon

    2017-01-01

    The incandescent bulb has been a useful tool for teaching basic electrical circuits, as brightness is related to the current or power flowing through a bulb. This has led to the development of qualitative pedagogical treatments for examining resistive combinations in simple circuits using bulbs and batteries, first introduced by James Evans and thoroughly expanded upon by McDermott and others. This paper argues that replacing bulbs with small computer fans provides similar, if not greater, insight into experimental results, which can be qualitatively observed using a variety of senses. The magnitude of current through a fan is related to the frequency of the rotating fan blades, which can be seen, heard, and felt by the students. Experiments using incandescent bulbs only utilize vision, which is not ideal because the human eye's perception of brightness is skewed: the response to light intensity is logarithmic rather than linear.

  3. Resolving Mixed Algal Species in Hyperspectral Images

    PubMed Central

    Mehrubeoglu, Mehrube; Teng, Ming Y.; Zimba, Paul V.

    2014-01-01

    We investigated a lab-based hyperspectral imaging system's response to pure (single) and mixed (two) algal cultures containing known algae types and volumetric combinations in order to characterize the system's performance. The spectral responses to volumetric changes in single algal cultures and in combinations of algal mixtures with known ratios were tested. Constrained linear spectral unmixing was applied to extract the algal content of the mixtures based on the abundances that produced the lowest root mean square (RMS) error. Percent prediction error was computed as the difference between the actual percent volumetric content and the abundances at minimum RMS error. The best prediction errors were computed as 0.4%, 0.4% and 6.3% for the mixed spectra from three independent experiments. The worst prediction errors were found to be 5.6%, 5.4% and 13.4% for the same order of experiments. Additionally, the Beer-Lambert law was utilized to relate transmittance to different volumes of pure algal suspensions, demonstrating linear logarithmic trends for the optical property measurements. PMID:24451451
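
    The Beer-Lambert relation invoked in this record links transmittance to path length and concentration through a logarithm. The short sketch below evaluates the textbook form A = -log10(T) = ε·l·c with invented absorptivity, path length, and concentration values; it is an illustration of the law, not the study's calibration.

```python
import numpy as np

# Beer-Lambert law: A = -log10(T) = epsilon * l * c
epsilon = 0.8                                    # hypothetical absorptivity (L / (g*cm))
path_length_cm = 1.0                             # hypothetical cuvette path length
concentrations = np.array([0.1, 0.2, 0.4, 0.8])  # hypothetical suspension densities (g/L)

absorbance = epsilon * path_length_cm * concentrations
transmittance = 10.0 ** (-absorbance)

for c, T in zip(concentrations, transmittance):
    # -log10(T) grows linearly with concentration: the "linear logarithmic
    # trend" the abstract refers to.
    print(f"c={c:.1f} g/L  T={T:.3f}  -log10(T)={-np.log10(T):.3f}")
```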

  4. Fluorophore-assisted carbohydrate electrophoresis for the determination of molecular mass of heparins and low-molecular-weight (LMW) heparins.

    PubMed

    Buzzega, Dania; Maccari, Francesca; Volpi, Nicola

    2008-11-01

    We report the use of fluorophore-assisted carbohydrate electrophoresis (FACE) to determine the molecular mass (M) values of heparins (Heps) and low-molecular-weight (LMW) Hep derivatives. Heps are labeled with 8-aminonaphthalene-1,3,6-trisulfonic acid, and FACE is able to resolve each fraction as a discrete band depending on its M. After densitometric acquisition, the migration distance of each Hep standard is acquired and a third-degree polynomial calibration standard curve is determined by plotting the logarithms of the M values as a function of the migration ratio. Purified Hep samples having different properties, pharmaceutical Heps and various LMW-Heps were analyzed by both FACE and conventional high-performance size-exclusion liquid chromatography (HPSEC) methods. The molecular weight value at the top of the chromatographic peak (Mp), the number-average Mn, the weight-average Mw and the polydispersity (Mw/Mn) were examined by both techniques and found to be similar. This approach offers certain advantages over the HPSEC method. The derivatization process with 8-aminonaphthalene-1,3,6-trisulfonic acid is complete after 4 h, so that many samples may be analyzed in a day, also considering that multiple samples can be run simultaneously and in parallel and that a single FACE analysis requires approx. 15 min. Furthermore, FACE is a very sensitive method, as it requires approx. 5-10 microg of Heps, about 10-100-fold less than the samples and standards used in HPSEC evaluation. Finally, the utilization of mini-gels allows the use of very low amounts of reagents, with neither expensive equipment nor any complicated procedures having to be applied. This study demonstrates that FACE analysis is a sensitive method for the determination of the M values of Heps and LMW-Heps, with possible utilization in virtually any kind of research and development setting, such as quality control laboratories, due to its rapid, parallel analysis of multiple samples by means of common, simple and widely used analytical laboratory equipment.
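
    The calibration step described here (a third-degree polynomial of the logarithm of molecular mass against migration ratio) can be sketched as follows. The standard masses and migration ratios are invented placeholders, not the paper's standards; only the fitting procedure is illustrated.

```python
import numpy as np

# Hypothetical heparin standards: known molecular masses (Da) and their
# measured migration ratios from the densitometric scan.
masses = np.array([3000, 6000, 9000, 12000, 16000, 20000], dtype=float)
migration_ratio = np.array([0.92, 0.78, 0.66, 0.57, 0.48, 0.42])

# Third-degree polynomial: log10(M) as a function of migration ratio.
coeffs = np.polyfit(migration_ratio, np.log10(masses), deg=3)
calib = np.poly1d(coeffs)

# Estimate the mass of an unknown band from its migration ratio.
unknown_ratio = 0.60
estimated_mass = 10 ** calib(unknown_ratio)
print(f"estimated M of unknown band: ~{estimated_mass:.0f} Da")
```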

  5. Holographic Rényi entropy in AdS3/LCFT2 correspondence

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Song, Feng-yan; Zhang, Jia-ju

    2014-03-01

    The recent study in AdS3/CFT2 correspondence shows that the tree level contribution and 1-loop correction of holographic Rényi entanglement entropy (HRE) exactly match the direct CFT computation in the large central charge limit. This allows the Rényi entanglement entropy to be a new window to study the AdS/CFT correspondence. In this paper we generalize the study of Rényi entanglement entropy in pure AdS3 gravity to the massive gravity theories at the critical points. For the cosmological topological massive gravity (CTMG), the dual conformal field theory (CFT) could be a chiral conformal field theory or a logarithmic conformal field theory (LCFT), depending on the asymptotic boundary conditions imposed. In both cases, by studying the short interval expansion of the Rényi entanglement entropy of two disjoint intervals with small cross ratio x, we find that the classical and 1-loop HRE are in exact match with the CFT results, up to order x^6. To this order, the difference between the massless graviton and logarithmic mode can be seen clearly. Moreover, for the cosmological new massive gravity (CNMG) at critical point, which could be dual to a logarithmic CFT as well, we find the similar agreement in the CNMG/LCFT correspondence. Furthermore we read the 2-loop correction of graviton and logarithmic mode to HRE from CFT computation. It has distinct feature from the one in pure AdS3 gravity.

  6. Magnetic hierarchical deposition

    NASA Astrophysics Data System (ADS)

    Posazhennikova, Anna I.; Indekeu, Joseph O.

    2014-11-01

    We consider random deposition of debris or blocks on a line, with block sizes following a rigorous hierarchy: the linear size equals 1/λ^n in generation n, in terms of a rescaling factor λ. Without interactions between the blocks, this model is described by a logarithmic fractal, studied previously, which is characterized by a constant increment of the length, area or volume upon proliferation. We study to what extent the logarithmic fractality survives if each block is equipped with an Ising (pseudo-)spin s = ±1 and the interactions between those spins are switched on (ranging from antiferromagnetic to ferromagnetic). It turns out that the dependence of the surface topology on the interaction sign and strength is not trivial. For instance, deep in the ferromagnetic regime, our numerical experiments and analytical results reveal a sharp crossover from a Euclidean transient, consisting of aggregated domains of aligned spins, to an asymptotic logarithmic fractal growth. In contrast, deep into the antiferromagnetic regime the surface roughness is important and is shown analytically to be controlled by vacancies induced by frustrated spins. Finally, in the weak interaction regime, we demonstrate that the non-interacting model is extremal in the sense that the effect of the introduction of interactions is only quadratic in the magnetic coupling strength. In all regimes, we demonstrate the adequacy of a mean-field approximation whenever vacancies are rare. In sum, the logarithmic fractal character is robust with respect to the introduction of spatial correlations in the hierarchical deposition process.

  7. Effects of air temperature and velocity on the drying kinetics and product particle size of starch from arrowroot (Maranta arundinacae)

    NASA Astrophysics Data System (ADS)

    Caparanga, Alvin R.; Reyes, Rachael Anne L.; Rivas, Reiner L.; De Vera, Flordeliza C.; Retnasamy, Vithyacharan; Aris, Hasnizah

    2017-11-01

    This study utilized a 3^k factorial design with k = 2 varying factors, namely temperature and air velocity. The effects of temperature and air velocity on the drying rate curves and on the average particle diameter of the arrowroot starch were investigated. Extracted arrowroot starch samples were dried based on the designed parameters until constant weight was obtained. The resulting initial moisture content of the arrowroot starch was 49.4%. Higher temperatures corresponded to higher drying rates and faster drying times, while air velocity had approximately negligible or little effect. The drying rate is a function of temperature and time. A constant-rate period was not observed for the drying rate of arrowroot starch. The drying curves were fitted against five mathematical models: Lewis, Page, Henderson and Pabis, Logarithmic, and Midilli. The Midilli model was the best fit for the experimental data, since it yielded the highest R^2 and the lowest RMSE values for all runs. Scanning electron microscopy (SEM) was used for qualitative analysis and for determination of the average particle diameter of the starch granules. The starch granules' average particle diameter ranged from 12.06 to 24.60 μm. ANOVA showed that the particle diameters for each run varied significantly from each other, and the Taguchi design showed that high temperatures yield a lower average particle diameter, while high air velocities yield a higher average particle diameter.
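
    To illustrate how drying curves can be fitted against one of the named thin-layer models, the sketch below fits the commonly cited "logarithmic" form MR = a·exp(-k·t) + c to invented moisture-ratio data with SciPy. The model form is the standard literature version and the data points, initial guesses, and goodness-of-fit values are placeholders, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical moisture-ratio data for one drying run (time in minutes).
t_min = np.array([0, 10, 20, 40, 60, 90, 120, 180], dtype=float)
MR = np.array([1.00, 0.72, 0.53, 0.31, 0.20, 0.11, 0.07, 0.04])

def logarithmic_model(t, a, k, c):
    """Thin-layer 'logarithmic' drying model: MR = a*exp(-k*t) + c."""
    return a * np.exp(-k * t) + c

p, _ = curve_fit(logarithmic_model, t_min, MR, p0=(1.0, 0.02, 0.0))
residuals = MR - logarithmic_model(t_min, *p)
rmse = np.sqrt(np.mean(residuals ** 2))
r2 = 1 - np.sum(residuals ** 2) / np.sum((MR - MR.mean()) ** 2)
print(f"a={p[0]:.3f}, k={p[1]:.4f} 1/min, c={p[2]:.3f}  RMSE={rmse:.4f}  R2={r2:.4f}")
```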

  8. Analysis of multigrid methods on massively parallel computers: Architectural implications

    NASA Technical Reports Server (NTRS)

    Matheson, Lesley R.; Tarjan, Robert E.

    1993-01-01

    We study the potential performance of multigrid algorithms running on massively parallel computers with the intent of discovering whether presently envisioned machines will provide an efficient platform for such algorithms. We consider the domain parallel version of the standard V cycle algorithm on model problems, discretized using finite difference techniques in two and three dimensions on block structured grids of size 10^6 and 10^9, respectively. Our models of parallel computation were developed to reflect the computing characteristics of the current generation of massively parallel multicomputers. These models are based on an interconnection network of 256 to 16,384 message passing, 'workstation size' processors executing in an SPMD mode. The first model accomplishes interprocessor communications through a multistage permutation network. The communication cost is a logarithmic function which is similar to the costs in a variety of different topologies. The second model allows single stage communication costs only. Both models were designed with information provided by machine developers and utilize implementation derived parameters. With the medium grain parallelism of the current generation and the high fixed cost of an interprocessor communication, our analysis suggests an efficient implementation requires the machine to support the efficient transmission of long messages, (up to 1000 words) or the high initiation cost of a communication must be significantly reduced through an alternative optimization technique. Furthermore, with variable length message capability, our analysis suggests the low diameter multistage networks provide little or no advantage over a simple single stage communications network.

  9. Impact of the bottom drag coefficient on saltwater intrusion in the extremely shallow estuary

    NASA Astrophysics Data System (ADS)

    Lyu, Hanghang; Zhu, Jianrong

    2018-02-01

    The interactions between the extremely shallow, funnel-shaped topography and dynamic processes in the North Branch (NB) of the Changjiang Estuary produce a particular type of saltwater intrusion, saltwater spillover (SSO), from the NB into the South Branch (SB). This dominant type of saltwater intrusion threatens the winter water supplies of reservoirs located in the estuary. Simulated SSO was weaker than actual SSO in previous studies, and this problem has not been solved until now. The improved ECOM-si model with the advection scheme HSIMT-TVD was applied in this study. Logarithmic and Chézy-Manning formulas of the bottom drag coefficient (BDC) were established in the model to investigate the associated effect on saltwater intrusion in the NB. Modeled data and data collected at eight measurement stations located in the NB from February 19 to March 1, 2017, were compared, and three skill assessment indicators, the correlation coefficient (CC), root-mean-square error (RMSE), and skill score (SS), of water velocity and salinity were used to quantitatively validate the model. The results indicated that the water velocities modeled using the Chézy-Manning formula of BDC were slightly more accurate than those based on the logarithmic BDC formula, but the salinities produced by the latter formula were more accurate than those of the former. The results showed that the BDC increases when water depth decreases during ebb tide, and the results based on the Chézy-Manning formula were smaller than those based on the logarithmic formula. Additionally, the landward net water flux in the upper reaches of the NB during spring tide increases based on the Chézy-Manning formula, and saltwater intrusion in the NB was enhanced, especially in the upper reaches of the NB. At a transect in the upper reaches of the NB, the net transect water flux (NTWF) is upstream in spring tide and downstream in neap tide, and the values produced by the Chézy-Manning formula are much larger than those based on the logarithmic formula. Notably, SSO during spring tide was 1.8 times larger based on the Chézy-Manning formula than that based on the logarithmic formula. The model underestimated SSO and salinity at the hydrological stations in the SB based on the logarithmic BDC formula but successfully simulated SSO and the temporal variations in salinity in the SB using the Chézy-Manning formula of BDC.
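
    For orientation, the two bottom-drag-coefficient parameterizations named in this record are usually written in the textbook forms sketched below. The von Karman constant, roughness length, Manning coefficient, and reference-height choice are assumed illustrative values, not the coefficients used in the paper's ECOM-si configuration.

```python
import numpy as np

KAPPA = 0.4   # von Karman constant
G = 9.81      # gravitational acceleration (m/s^2)

def bdc_logarithmic(depth_m, z0_m=0.001, z_ref_m=None):
    """Logarithmic-law drag coefficient, Cd = [kappa / ln(z_ref/z0)]^2.

    z_ref defaults to mid-depth here purely as an illustrative choice.
    """
    z_ref = z_ref_m if z_ref_m is not None else 0.5 * depth_m
    return (KAPPA / np.log(z_ref / z0_m)) ** 2

def bdc_chezy_manning(depth_m, n_manning=0.02):
    """Chezy-Manning drag coefficient, Cd = g * n^2 / h^(1/3)."""
    return G * n_manning ** 2 / depth_m ** (1.0 / 3.0)

# Both formulas give a Cd that grows as the water column becomes shallower,
# consistent with the depth dependence discussed in the abstract.
for h in (1.0, 2.0, 5.0, 10.0):
    print(f"h={h:5.1f} m  log-law Cd={bdc_logarithmic(h):.4f}  "
          f"Chezy-Manning Cd={bdc_chezy_manning(h):.4f}")
```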

  10. Convoluted Quasi Sturmian basis for the two-electron continuum

    NASA Astrophysics Data System (ADS)

    Ancarani, Lorenzo Ugo; Zaytsev, A. S.; Zaytsev, S. A.

    2016-09-01

    In the construction of solutions for the Coulomb three-body scattering problem one encounters a series of mathematical and numerical difficulties, one of which is the cumbersome boundary conditions the wave function should obey. We propose to describe the continuum of a Coulomb three-body system with a set of two-particle functions, named Convoluted Quasi Sturmian (CQS) functions. They are built using the recently introduced Quasi Sturmian (QS) functions, which have the merit of possessing a closed form. Unlike a simple product of two one-particle functions, by construction the CQS functions look asymptotically like a six-dimensional outgoing spherical wave. The proposed CQS basis is tested through the study of the double ionization of helium by high-energy electron impact in the framework of the Temkin-Poet model. An adequate logarithmic-like phase factor is further included in order to take into account the Coulomb interelectronic interaction and formally build the correct asymptotic behavior when all interparticle distances are large. With such a phase factor (which can be easily extended to take into account higher partial waves), rapid convergence of the expansion can be obtained.

  11. Implied adjusted volatility functions: Empirical evidence from Australian index option market

    NASA Astrophysics Data System (ADS)

    Harun, Hanani Farhah; Hafizah, Mimi

    2015-02-01

    This study aims to investigate implied adjusted volatility functions using different Leland option pricing models and to assess whether the use of a specified implied adjusted volatility function can lead to an improvement in option valuation accuracy. The implied adjusted volatility is investigated in the context of Standard and Poor's/Australian Stock Exchange (S&P/ASX) 200 index options over the course of 2001-2010, which covers the global financial crisis from mid-2007 until the end of 2008. Both in-sample and out-of-sample tests resulted in approximately similar pricing errors across the different Leland models. Results indicate that symmetric and asymmetric models of both the moneyness ratio and the logarithmic transformation of moneyness provide the overall best results in both the during- and post-crisis periods. We find that each interval (pre-, during and post-crisis) is subject to a different implied adjusted volatility function that best explains the index options. Hence, it is important to identify the intervals beforehand when investigating the implied adjusted volatility function.

  12. Heavy dark matter annihilation from effective field theory.

    PubMed

    Ovanesyan, Grigory; Slatyer, Tracy R; Stewart, Iain W

    2015-05-29

    We formulate an effective field theory description for SU(2)_{L} triplet fermionic dark matter by combining nonrelativistic dark matter with gauge bosons in the soft-collinear effective theory. For a given dark matter mass, the annihilation cross section to line photons is obtained with 5% precision by simultaneously including Sommerfeld enhancement and the resummation of electroweak Sudakov logarithms at next-to-leading logarithmic order. Using these results, we present more accurate and precise predictions for the gamma-ray line signal from annihilation, updating both existing constraints and the reach of future experiments.

  13. Two-Jet Rate in e+e- at Next-to-Next-to-Leading-Logarithmic Order

    NASA Astrophysics Data System (ADS)

    Banfi, Andrea; McAslan, Heather; Monni, Pier Francesco; Zanderighi, Giulia

    2016-10-01

    We present the first next-to-next-to-leading-logarithmic resummation for the two-jet rate in e+e- annihilation in the Durham and Cambridge algorithms. The results are obtained by extending the ares method to observables involving any global, recursively infrared and collinear safe jet algorithm in e+e- collisions. As opposed to other methods, this approach does not require a factorization theorem for the observables. We present predictions matched to next-to-next-to-leading order and a comparison to LEP data.

  14. Compensating for Electrode Polarization in Dielectric Spectroscopy Studies of Colloidal Suspensions: Theoretical Assessment of Existing Methods

    PubMed Central

    Chassagne, Claire; Dubois, Emmanuelle; Jiménez, María L.; van der Ploeg, J. P. M; van Turnhout, Jan

    2016-01-01

    Dielectric spectroscopy can be used to determine the dipole moment of colloidal particles from which important interfacial electrokinetic properties, for instance their zeta potential, can be deduced. Unfortunately, dielectric spectroscopy measurements are hampered by electrode polarization (EP). In this article, we review several procedures to compensate for this effect. First EP in electrolyte solutions is described: the complex conductivity is derived as function of frequency, for two cell geometries (planar and cylindrical) with blocking electrodes. The corresponding equivalent circuit for the electrolyte solution is given for each geometry. This equivalent circuit model is extended to suspensions. The complex conductivity of a suspension, in the presence of EP, is then calculated from the impedance. Different methods for compensating for EP are critically assessed, with the help of the theoretical findings. Their limit of validity is given in terms of characteristic frequencies. We can identify with one of these frequencies the frequency range within which data uncorrected for EP may be used to assess the dipole moment of colloidal particles. In order to extract this dipole moment from the measured data, two methods are reviewed: one is based on the use of existing models for the complex conductivity of suspensions, the other is the logarithmic derivative method. An extension to multiple relaxations of the logarithmic derivative method is proposed. PMID:27486575
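
    The "logarithmic derivative method" mentioned in this record is commonly implemented in the dielectric literature as an estimate of the loss from the slope of the real permittivity on a log-frequency axis, eps''_der ≈ -(π/2)·∂eps'/∂ln ω. The sketch below applies that formula to a synthetic Debye-type spectrum; the relaxation parameters are invented, the prefactor is the commonly quoted one (treated here as an assumption), and the comparison is purely illustrative.

```python
import numpy as np

# Synthetic Debye-type permittivity spectrum (invented parameters).
eps_inf, delta_eps, tau = 3.0, 10.0, 1e-3        # high-freq limit, strength, relaxation time (s)
omega = np.logspace(0, 7, 400)                   # angular frequencies (rad/s)
eps_real = eps_inf + delta_eps / (1 + (omega * tau) ** 2)
eps_imag = delta_eps * omega * tau / (1 + (omega * tau) ** 2)

# Logarithmic-derivative estimate of the loss:
#   eps''_der(omega) ~ -(pi/2) * d eps'(omega) / d ln(omega)
eps_der = -(np.pi / 2) * np.gradient(eps_real, np.log(omega))

peak = int(np.argmax(eps_imag))
print(f"at omega={omega[peak]:.3e} rad/s: Debye eps''={eps_imag[peak]:.3f}, "
      f"log-derivative estimate={eps_der[peak]:.3f}")
```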

  15. Detection of a Divot in the Scattering Population's Size Distribution

    NASA Astrophysics Data System (ADS)

    Shankman, Cory; Gladman, B.; Kaib, N.; Kavelaars, J.; Petit, J.

    2012-10-01

    Via joint analysis of the calibrated Canada-France Ecliptic Plane Survey (CFEPS, Petit et al. 2011, AJ 142, 131), which found scattering Kuiper Belt objects, and models of their orbital distribution, we show that there should be enough kilometer-scale scattering objects to supply the Jupiter Family Comets (JFCs). Surprisingly, our analysis favours a divot (an abrupt drop and then recovery) in the size distribution at a diameter of 100 km, which results in a temporary flattening of the cumulative size distribution until it returns to a collisional equilibrium slope. Using the absolutely calibrated CFEPS survey we estimate that there are 2 × 10^9 scattering objects with H_g < 18, which is sufficient to provide the currently estimated JFC resupply rate. We also find that the primordial disk from which the scattering objects came must have had a "hot" initial inclination distribution before the giant planets scattered it out. We find that a divot in the absolute magnitude number distribution, with a bright-end logarithmic slope of 0.8, a drop at a g-band H magnitude of 9, and a faint-side logarithmic slope of 0.5, satisfies our data and simultaneously explains several existing nagging puzzles about Kuiper Belt luminosity functions (see Gladman et al., this meeting). Multiple explanations of how such a feature could have arisen will be discussed. This research was supported by the Natural Sciences and Engineering Research Council of Canada.

  16. High-Accuracy Comparison Between the Post-Newtonian and Self-Force Dynamics of Black-Hole Binaries

    NASA Astrophysics Data System (ADS)

    Blanchet, Luc; Detweiler, Steven; Le Tiec, Alexandre; Whiting, Bernard F.

    The relativistic motion of a compact binary system moving in circular orbit is investigated using the post-Newtonian (PN) approximation and the perturbative self-force (SF) formalism. A particular gauge-invariant observable quantity is computed as a function of the binary's orbital frequency. The conservative effect induced by the gravitational SF is obtained numerically with high precision, and compared to the PN prediction developed to high order. The PN calculation involves the computation of the 3PN regularized metric at the location of the particle. Its divergent self-field is regularized by means of dimensional regularization. The poles ∝ (d - 3)^{-1} that occur within dimensional regularization at the 3PN order disappear from the final gauge-invariant result. The leading 4PN and next-to-leading 5PN conservative logarithmic contributions originating from gravitational wave tails are also obtained. Making use of these exact PN results, some previously unknown PN coefficients are measured up to the very high 7PN order by fitting to the numerical SF data. Using just the 2PN and new logarithmic terms, the value of the 3PN coefficient is also confirmed numerically with very high precision. The consistency of this cross-cultural comparison provides a crucial test of the very different regularization methods used in both SF and PN formalisms, and illustrates the complementarity of these approximation schemes when modeling compact binary systems.

  17. Chagas disease vector control and Taylor's law

    PubMed Central

    Rodríguez-Planes, Lucía I.; Gaspe, María S.; Cecere, María C.; Cardinal, Marta V.

    2017-01-01

    Background: Large spatial and temporal fluctuations in the population density of living organisms have profound consequences for biodiversity conservation, food production, pest control and disease control, especially vector-borne disease control. Chagas disease vector control based on insecticide spraying could benefit from improved concepts and methods to deal with spatial variations in vector population density. Methodology/Principal findings: We show that Taylor's law (TL) of fluctuation scaling describes accurately the mean and variance over space of relative abundance, by habitat, of four insect vectors of Chagas disease (Triatoma infestans, Triatoma guasayana, Triatoma garciabesi and Triatoma sordida) in 33,908 searches of people's dwellings and associated habitats in 79 field surveys in four districts in the Argentine Chaco region, before and after insecticide spraying. As TL predicts, the logarithm of the sample variance of bug relative abundance closely approximates a linear function of the logarithm of the sample mean of abundance in different habitats. Slopes of TL indicate spatial aggregation or variation in habitat suitability. Predictions of new mathematical models of the effect of vector control measures on TL agree overall with field data before and after community-wide spraying of insecticide. Conclusions/Significance: A spatial Taylor's law identifies key habitats with high average infestation and spatially highly variable infestation, providing a new instrument for the control and elimination of the vectors of a major human disease. PMID:29190728
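
    Taylor's law as used in this record is the log-log linear relation log(variance) = log(a) + b·log(mean) of abundance across habitats. A minimal sketch of estimating the slope b from per-habitat means and variances follows; the numbers are hypothetical, not the survey data.

```python
import numpy as np

# Hypothetical per-habitat sample means and variances of bug counts.
mean_abundance = np.array([0.2, 0.5, 1.1, 2.3, 4.0, 7.5])
var_abundance = np.array([0.3, 0.9, 2.6, 6.8, 14.0, 33.0])

# Taylor's law: log10(var) = log10(a) + b * log10(mean).
b, log_a = np.polyfit(np.log10(mean_abundance), np.log10(var_abundance), deg=1)
print(f"TL slope b = {b:.2f}, intercept log10(a) = {log_a:.2f}")
# A slope b > 1 is conventionally read as spatial aggregation of infestation.
```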

  18. Theoretical and experimental study on fiber-optic displacement sensor with bowknot bending modulation

    NASA Astrophysics Data System (ADS)

    Zheng, Yong; Huang, Da; Zhu, Zheng-Wei

    2018-03-01

    A novel and simple fiber-optic sensor for measuring a large displacement range in civil engineering has been developed. The sensor incorporates an extremely simple bowknot bending modulation that increases its sensitivity to bending, a light source and a detector. In this paper, to better understand the working principle and improve the performance of the sensor, the transduction of displacement to light loss is described analytically using the geometry of the sensor and the principle of optical fiber loss. Results of the calibration tests show a logarithmic functional relationship between light loss and displacement with two calibrated parameters. The sensor has a response over a wide displacement range of 44.7 mm with an initial accuracy of 2.65 mm, while for a small displacement range of 34 mm it shows a better accuracy of 0.98 mm. Direct shear tests on six models with the same dimensions were conducted to investigate the application of the sensor for warning of shear and sliding failure in civil engineering materials or geo-materials. The results show that the sliding displacement of the sliding body can be captured relatively accurately by the theoretical logarithmic relation between sliding distance and optical loss for a given structure, over a large dynamic range of 22.32 mm with an accuracy of 0.99 mm, which suggests that the sensor has a promising prospect for monitoring in civil engineering, especially for landslides.

  19. Fast and selective determination of total protein in milk powder via titration of moving reaction boundary electrophoresis.

    PubMed

    Guo, Cheng-ye; Wang, Hou-yu; Liu, Xiao-ping; Fan, Liu-yin; Zhang, Lei; Cao, Cheng-xi

    2013-05-01

    In this paper, moving reaction boundary titration (MRBT) was developed for rapid and accurate quantification of total protein in infant milk powder, from the concept of moving reaction boundary (MRB) electrophoresis. In the method, the MRB was formed by the hydroxide ions and the acidic residues of milk proteins immobilized via cross-linked polyacrylamide gel (PAG), an acid-base indicator was used to denote the boundary motion. As a proof of concept, we chose five brands of infant milk powders to study the feasibility of MRBT method. The calibration curve of MRB velocity versus logarithmic total protein content of infant milk powder sample was established based on the visual signal of MRB motion as a function of logarithmic milk protein content. Weak influence of nonprotein nitrogen (NPN) reagents (e.g., melamine and urea) on MRBT method was observed, due to the fact that MRB was formed with hydroxide ions and the acidic residues of captured milk proteins, rather than the alkaline residues or the NPN reagents added. The total protein contents in infant milk powder samples detected via the MRBT method were in good agreement with those achieved by the classic Kjeldahl method. In addition, the developed method had much faster measuring speed compared with the Kjeldahl method. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Validation of Simplified Urban-Canopy Aerodynamic Parametrizations Using a Numerical Simulation of an Actual Downtown Area

    NASA Astrophysics Data System (ADS)

    Ramirez, N.; Afshari, Afshin; Norford, L.

    2018-07-01

    A steady-state Reynolds-averaged Navier-Stokes computational fluid dynamics (CFD) investigation of boundary-layer flow over a major portion of downtown Abu Dhabi is conducted. The results are used to derive the shear stress and characterize the logarithmic region for eight sub-domains, where the sub-domains overlap and are overlaid in the streamwise direction. They are characterized by a high frontal area index initially, which decreases significantly beyond the fifth sub-domain. The plan area index is relatively stable throughout the domain. For each sub-domain, the estimated local roughness length and displacement height derived from CFD results are compared to prevalent empirical formulations. We further validate and tune a mixing-length model proposed by Coceal and Belcher (Q J R Meteorol Soc 130:1349-1372, 2004). Finally, the in-canopy wind-speed attenuation is analysed as a function of fetch. It is shown that, while there is some room for improvement in Macdonald's empirical formulations (Boundary-Layer Meteorol 97:25-45, 2000), Coceal and Belcher's mixing model in combination with the resolution method of Di Sabatino et al. (Boundary-Layer Meteorol 127:131-151, 2008) can provide a robust estimation of the average wind speed in the logarithmic region. Within the roughness sublayer, a properly parametrized Cionco exponential model is shown to be quite accurate.
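
    The "logarithmic region" referred to in this record is conventionally described by the log-law wind profile u(z) = (u*/κ)·ln((z - d)/z0). The sketch below evaluates that standard profile for assumed friction velocity, roughness length, and displacement height; the values are placeholders, not the Abu Dhabi estimates from the paper.

```python
import numpy as np

KAPPA = 0.4   # von Karman constant

def log_law_wind(z_m, u_star=0.5, z0=1.2, d=10.0):
    """Mean wind speed in the logarithmic region above an urban canopy.

    u_star: friction velocity (m/s); z0: roughness length (m);
    d: displacement height (m). All parameter values are illustrative.
    """
    z = np.asarray(z_m, dtype=float)
    return (u_star / KAPPA) * np.log((z - d) / z0)

for z in (15.0, 25.0, 50.0, 100.0, 200.0):
    print(f"z={z:6.1f} m  U={log_law_wind(z):5.2f} m/s")
```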

  1. Generalized Lenard-Balescu calculations of electron-ion temperature relaxation in beryllium plasma.

    PubMed

    Fu, Zhen-Guo; Wang, Zhigang; Li, Da-Fang; Kang, Wei; Zhang, Ping

    2015-09-01

    The problem of electron-ion temperature relaxation in beryllium plasma at various densities (0.185-18.5g/cm^{3}) and temperatures [(1.0-8)×10^{3} eV] is investigated by using the generalized Lenard-Balescu theory. We consider the correlation effects between electrons and ions via classical and quantum static local field corrections. The numerical results show that the electron-ion pair distribution function at the origin approaches the maximum when the electron-electron coupling parameter equals unity. The classical result of the Coulomb logarithm is in agreement with the quantum result in both the weak (Γ_{ee}<10^{-2}) and strong (Γ_{ee}>1) electron-electron coupling ranges, whereas it deviates from the quantum result at intermediate values of the coupling parameter (10^{-2}<Γ_{ee}<1). We find that with increasing density of Be, the Coulomb logarithm will decrease and the corresponding relaxation rate ν_{ie} will increase. In addition, a simple fitting law ν_{ie}/ν_{ie}^{(0)}=a(ρ_{Be}/ρ_{0})^{b} is determined, where ν_{ie}^{(0)} is the relaxation rate corresponding to the normal metal density of Be and ρ_{0}, a, and b are the fitting parameters related to the temperature and the degree of ionization 〈Z〉 of the system. Our results are expected to be useful for future inertial confinement fusion experiments involving Be plasma.

  2. A scale-entropy diffusion equation to describe the multi-scale features of turbulent flames near a wall

    NASA Astrophysics Data System (ADS)

    Queiros-Conde, D.; Foucher, F.; Mounaïm-Rousselle, C.; Kassem, H.; Feidt, M.

    2008-12-01

    Multi-scale features of turbulent flames near a wall display two kinds of scale-dependent fractal features. In scale-space, an unique fractal dimension cannot be defined and the fractal dimension of the front is scale-dependent. Moreover, when the front approaches the wall, this dependency changes: fractal dimension also depends on the wall-distance. Our aim here is to propose a general geometrical framework that provides the possibility to integrate these two cases, in order to describe the multi-scale structure of turbulent flames interacting with a wall. Based on the scale-entropy quantity, which is simply linked to the roughness of the front, we thus introduce a general scale-entropy diffusion equation. We define the notion of “scale-evolutivity” which characterises the deviation of a multi-scale system from the pure fractal behaviour. The specific case of a constant “scale-evolutivity” over the scale-range is studied. In this case, called “parabolic scaling”, the fractal dimension is a linear function of the logarithm of scale. The case of a constant scale-evolutivity in the wall-distance space implies that the fractal dimension depends linearly on the logarithm of the wall-distance. We then verified experimentally, that parabolic scaling represents a good approximation of the real multi-scale features of turbulent flames near a wall.

  3. Quantum square-well with logarithmic central spike

    NASA Astrophysics Data System (ADS)

    Znojil, Miloslav; Semorádová, Iveta

    2018-01-01

    The singular repulsive barrier V(x) = -g ln(|x|) inside a square well is interpreted and studied as a linear analog of the state-dependent interaction ℒ_eff(x) = -g ln[ψ*(x)ψ(x)] in the nonlinear Schrödinger equation. In the linearized case, Rayleigh-Schrödinger perturbation theory is shown to provide a closed-form spectrum at sufficiently small g or after an amendment of the unperturbed Hamiltonian. At any spike strength g, the model remains solvable numerically, by the matching of wave functions. Analytically, the singularity is shown to be regularized via the change of variables x = exp(y), which interchanges the roles of the asymptotic and central boundary conditions.

  4. On Connected Diagrams and Cumulants of Erdős-Rényi Matrix Models

    NASA Astrophysics Data System (ADS)

    Khorunzhiy, O.

    2008-08-01

    Regarding the adjacency matrices of n-vertex graphs and related graph Laplacian we introduce two families of discrete matrix models constructed both with the help of the Erdős-Rényi ensemble of random graphs. Corresponding matrix sums represent the characteristic functions of the average number of walks and closed walks over the random graph. These sums can be considered as discrete analogues of the matrix integrals of random matrix theory. We study the diagram structure of the cumulant expansions of logarithms of these matrix sums and analyze the limiting expressions as n → ∞ in the cases of constant and vanishing edge probabilities.

  5. On the Malthusian theory of long swings.

    PubMed

    Waterman, A M

    1987-05-01

    "In the Essay on Population economic growth consists of alternating surges of population (during which real wages fall and the rate of profit rises) and capital (during which the reverse occurs). A series of temporary equilibria exists at which wages are maximal, the rate of profit minimal, and fully employed work-force in technically determined relation to fixed capital stock. Between these equilibria occur episodes of excess labour, below-maximum wages, above minimum profit-rate and capital accumulation. Malthus's 'ratios' presuppose a logarithmic production function that implies first, that the full-employment real wage will fall to subsistence; secondly, that the full-employment 'wages fund' is constant." (SUMMARY IN FRE) excerpt

  6. The structure of polarization maps of skin histological sections in the Fourier domain for the tasks of benign and malignant formations differentiation

    NASA Astrophysics Data System (ADS)

    Ushenko, V. A.; Dubolazov, A. V.; Savich, V. O.; Novakovskaya, O. Y.; Olar, O. V.; Marchuk, Y. F.

    2015-02-01

    The optical model of birefringent networks of biological tissues is presented. The technique of Fourier polarimetry for selection of manifestations of linear and circular birefringence of protein fibrils is suggested. The results of investigations of statistical (statistical moments of the 1st-4th orders), correlation (dispersion and excess of autocorrelation functions) and scalar-self-similar (logarithmic dependencies of power spectra) structure of Fourier spectra of polarization azimuths distribution of laser images of skin samples are presented. The criteria of differentiation of postoperative biopsy of benign (keratoma) and malignant (adenocarcinoma) skin tumors are determined.

  7. Scalar Casimir energies in M4 × SN for even N

    NASA Astrophysics Data System (ADS)

    Kantowski, R.; Milton, Kimball A.

    1987-01-01

    We construct a Green's-function formalism for computing vacuum-fluctuation energies of scalar fields in 4+N dimensions, where the extra N dimensions are compactified into a hypersphere SN of radius a. In all cases a leading cosmological energy term u_cosmo ~ a^N/b^(4+N) results. Here b is an ultraviolet cutoff at the Planck scale. In all cases an unambiguous Casimir energy is computed. For odd N these energies agree with those calculated by Candelas and Weinberg. For even N, the Casimir energy is logarithmically divergent: u_Casimir ~ (α_N/a^4) ln(a/b). The coefficients α_N are computed in terms of Bernoulli numbers.

  8. A quantum diffusion law

    NASA Astrophysics Data System (ADS)

    Satpathi, Urbashi; Sinha, Supurna; Sorkin, Rafael D.

    2017-12-01

    We analyse diffusion at low temperature by bringing the fluctuation-dissipation theorem (FDT) to bear on a physically natural, viscous response-function R(t) . The resulting diffusion-law exhibits several distinct regimes of time and temperature, each with its own characteristic rate of spreading. As with earlier analyses, we find logarithmic spreading in the quantum regime, indicating that this behavior is robust. A consistent R(t) must satisfy the key physical requirements of Wightman positivity and passivity, and we prove that ours does so. We also prove in general that these two conditions are equivalent when the FDT holds. Given current technology, our diffusion law can be tested in a laboratory with ultra cold atoms.

  9. Application of a hybrid computer to sweep frequency data processing

    NASA Technical Reports Server (NTRS)

    Milner, E. J.; Bruton, W. M.

    1973-01-01

    A hybrid computer program is presented which can process as many as 10 channels of sweep frequency data simultaneously. The program needs only the sine sweep signal used to drive the system, and its corresponding quadrature component, to process the data. It can handle a maximum frequency range of 0.5 to 500 hertz. Magnitude and phase are calculated at logarithmically spaced points covering the frequency range of interest. When the sweep is completed, these results are stored in digital form. Thus, a tabular listing and/or a plot of any processed data channel, or of the transfer function relating any two of them, is immediately available.
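
    Logarithmically spaced evaluation points over the 0.5-500 Hz range described above can be generated directly; the short sketch below does so and evaluates an illustrative first-order transfer function at those points. The point density and the example system are assumptions, not details of the NASA program.

```python
import numpy as np

# 0.5 Hz to 500 Hz spans three decades; use ~10 points per decade.
freqs_hz = np.logspace(np.log10(0.5), np.log10(500.0), num=31)

# Illustrative first-order system H = 1 / (1 + j f/f_c), f_c = 10 Hz,
# standing in for the ratio of two processed data channels.
f_c = 10.0
H = 1.0 / (1.0 + 1j * freqs_hz / f_c)
magnitude_db = 20 * np.log10(np.abs(H))
phase_deg = np.degrees(np.angle(H))

for f, m, p in zip(freqs_hz[::6], magnitude_db[::6], phase_deg[::6]):
    print(f"f={f:8.3f} Hz  |H|={m:7.2f} dB  phase={p:7.2f} deg")
```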

  10. Characterizing Weak-Link Effects in Mo/Au Transition-Edge Sensors

    NASA Technical Reports Server (NTRS)

    Smith, Stephen

    2011-01-01

    We are developing Mo/Au bilayer transition-edge sensors (TESs) for applications in X-ray astronomy. Critical current measurements on these TESs show they act as weak superconducting links, exhibiting oscillatory, Fraunhofer-like behavior with applied magnetic field. In this contribution we investigate the implications of this behavior for TES detectors under operational bias conditions. This includes characterizing the logarithmic resistance sensitivity with temperature, alpha, and with current, beta, as a function of applied magnetic field and bias point within the resistive transition. Results show that these important device parameters exhibit similar oscillatory behavior with applied magnetic field, which in turn affects the signal responsivity, noise and energy resolution.
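
    For reference, the logarithmic sensitivity parameters alpha and beta of a TES are commonly defined as below; this is the standard convention in the TES literature, assumed here rather than quoted from the paper.

```latex
\alpha \equiv \left.\frac{\partial \ln R}{\partial \ln T}\right|_{I}
       = \frac{T}{R}\left.\frac{\partial R}{\partial T}\right|_{I},
\qquad
\beta \equiv \left.\frac{\partial \ln R}{\partial \ln I}\right|_{T}
      = \frac{I}{R}\left.\frac{\partial R}{\partial I}\right|_{T}
```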

  11. Fractal analysis of phasic laser images of the myocardium for the purpose of diagnostics of acute coronary insufficiency

    NASA Astrophysics Data System (ADS)

    Wanchuliak, O. Y.; Bachinskyi, V. T.

    2011-09-01

    In this work, on the basis of the Mueller-matrix description of optical anisotropy, the possibility of monitoring time changes of myocardium tissue birefringence has been considered. An optical model of the polycrystalline networks of the myocardium is suggested. The results of investigating the interrelation between the values of correlation parameters (correlation area, asymmetry coefficient and autocorrelation function excess) and fractal parameters (dispersion of logarithmic dependencies of power spectra) are presented. They characterize the distributions of Mueller matrix elements in the points of laser images of myocardium histological sections. The criteria for differentiating the causes of death are determined.

  12. A Wide Dynamic Range Tapped Linear Array Image Sensor

    NASA Astrophysics Data System (ADS)

    Washkurak, William D.; Chamberlain, Savvas G.; Prince, N. Daryl

    1988-08-01

    Detectors for acousto-optic signal processing applications require fast transient response as well as wide dynamic range. There are two major choices of detectors: conductive or integration mode. Conductive mode detectors have an initial transient period before they reach their equilibrium state. The duration of this period depends on the light level as well as the detector capacitance. At low light levels a conductive mode detector is very slow; the response time is typically on the order of milliseconds. Generally, to obtain fast transient response an integrating mode detector is preferred. With integrating mode detectors, the dynamic range is determined by the charge storage capability of the transport shift registers and the noise level of the image sensor. The conventional method used to improve dynamic range is to increase the shift register charge storage capability. To achieve a dynamic range of fifty thousand, assuming two hundred noise-equivalent electrons, a charge storage capability of ten million electrons would be required. To accommodate this amount of charge, unrealistic shift register widths would be required. Therefore, with an integrating mode detector it is difficult to achieve a dynamic range of over four orders of magnitude of input light intensity. Another alternative is to solve the problem at the photodetector and not the shift register. DALSA's wide dynamic range detector utilizes an optimized, ion-implant-doped, profiled MOSFET photodetector specifically designed for wide dynamic range. When this new detector operates at high speed and at low light levels, the photons are collected and stored in an integrating fashion. However, at bright light levels where transient periods are short, the detector switches into a conductive mode. The light intensity is logarithmically compressed into small charge packets, easily carried by the CCD shift register. As a result of the logarithmic conversion, dynamic ranges of over six orders of magnitude are obtained. To achieve the short integration times necessary in acousto-optic applications, the wide dynamic range detector has been implemented in a tapped array architecture with eight outputs and 256 photoelements. Operation of each output at 16 MHz yields detector integration times of 2 microseconds. Buried-channel two-phase CCD shift register technology is utilized to minimize image sensor noise, improve video output rates, and increase ease of operation.

  13. Fermi-edge singularity and the functional renormalization group

    NASA Astrophysics Data System (ADS)

    Kugler, Fabian B.; von Delft, Jan

    2018-05-01

    We study the Fermi-edge singularity, describing the response of a degenerate electron system to optical excitation, in the framework of the functional renormalization group (fRG). Results for the (interband) particle-hole susceptibility from various implementations of fRG (one- and two-particle-irreducible, multi-channel Hubbard–Stratonovich, flowing susceptibility) are compared to the summation of all leading logarithmic (log) diagrams, achieved by a (first-order) solution of the parquet equations. For the (zero-dimensional) special case of the x-ray-edge singularity, we show that the leading log formula can be analytically reproduced in a consistent way from a truncated, one-loop fRG flow. However, reviewing the underlying diagrammatic structure, we show that this derivation relies on fortuitous partial cancellations that are special to the form of, and the accuracy applied to, the x-ray-edge singularity, and it does not generalize.

  14. A class of reduced-order models in the theory of waves and stability.

    PubMed

    Chapman, C J; Sorokin, S V

    2016-02-01

    This paper presents a class of approximations to a type of wave field for which the dispersion relation is transcendental. The approximations have two defining characteristics: (i) they give the field shape exactly when the frequency and wavenumber lie on a grid of points in the (frequency, wavenumber) plane and (ii) the approximate dispersion relations are polynomials that pass exactly through points on this grid. Thus, the method is interpolatory in nature, but the interpolation takes place in (frequency, wavenumber) space, rather than in physical space. Full details are presented for a non-trivial example, that of antisymmetric elastic waves in a layer. The method is related to partial fraction expansions and barycentric representations of functions. An asymptotic analysis is presented, involving Stirling's approximation to the psi function, and a logarithmic correction to the polynomial dispersion relation.
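
    The logarithmic correction mentioned above involves Stirling-type asymptotics of the psi (digamma) function. As a quick numerical illustration (not the paper's derivation), the sketch below compares psi(x) with its standard large-argument expansion psi(x) ≈ ln x − 1/(2x) − 1/(12x²).

    ```python
    # A quick numerical check (not the paper's derivation) of Stirling's
    # large-argument approximation to the psi (digamma) function,
    #   psi(x) ~ ln(x) - 1/(2x) - 1/(12 x^2),
    # which is the source of the logarithmic term mentioned in the abstract.
    import numpy as np
    from scipy.special import digamma

    def psi_stirling(x):
        return np.log(x) - 1.0 / (2.0 * x) - 1.0 / (12.0 * x ** 2)

    for x in (2.0, 5.0, 10.0, 50.0):
        exact = digamma(x)
        approx = psi_stirling(x)
        print(f"x = {x:5.1f}: psi = {exact:.8f}, Stirling = {approx:.8f}, "
              f"error = {exact - approx:+.2e}")
    ```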

  15. A computational exploration of the McCoy-Tracy-Wu solutions of the third Painlevé equation

    NASA Astrophysics Data System (ADS)

    Fasondini, Marco; Fornberg, Bengt; Weideman, J. A. C.

    2018-01-01

    The method recently developed by the authors for the computation of the multivalued Painlevé transcendents on their Riemann surfaces (Fasondini et al., 2017) is used to explore families of solutions to the third Painlevé equation that were identified by McCoy et al. (1977) and which contain a pole-free sector. Limiting cases, in which the solutions are singular functions of the parameters, are also investigated and it is shown that a particular set of limiting solutions is expressible in terms of special functions. Solutions that are single-valued, logarithmically (infinitely) branched and algebraically branched, with any number of distinct sheets, are encountered. The algebraically branched solutions have multiple pole-free sectors on their Riemann surfaces that are accounted for by using asymptotic formulae and Bäcklund transformations.
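
    For reference, the third Painlevé equation in its standard form reads w'' = (w')²/w − w'/t + (αw² + β)/t + γw³ + δ/w. The sketch below integrates it naively along the real axis with arbitrary parameters and initial data; unlike the authors' method, it cannot continue solutions onto their Riemann surfaces or through poles, and the integration simply stops when a pole is met.

    ```python
    # A naive real-axis integration (not the authors' Riemann-surface method) of
    # the third Painleve equation in its standard form,
    #   w'' = (w')^2/w - w'/t + (alpha*w^2 + beta)/t + gamma*w^3 + delta/w.
    # Parameters and initial data are arbitrary illustrative choices.
    import numpy as np
    from scipy.integrate import solve_ivp

    alpha, beta, gamma, delta = 1.0, -1.0, 1.0, -1.0  # arbitrary parameter choice

    def painleve_iii(t, y):
        w, wp = y
        wpp = (wp**2 / w - wp / t + (alpha * w**2 + beta) / t
               + gamma * w**3 + delta / w)
        return [wp, wpp]

    sol = solve_ivp(painleve_iii, (1.0, 6.0), [1.0, 0.5],
                    rtol=1e-10, atol=1e-12, dense_output=True)
    print(f"integrated up to t = {sol.t[-1]:.3f} (the solver stops at a pole)")
    for t in np.linspace(1.0, sol.t[-1], 5):
        print(f"t = {t:5.3f}, w(t) = {sol.sol(t)[0]: .6f}")
    ```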

  16. Multiple utility constrained multi-objective programs using Bayesian theory

    NASA Astrophysics Data System (ADS)

    Abbasian, Pooneh; Mahdavi-Amiri, Nezam; Fazlollahtabar, Hamed

    2018-03-01

    A utility function is an important tool for representing a decision maker's (DM's) preference. We adjoin utility functions to multi-objective optimization problems. In current studies, usually one utility function is used for each objective function. Situations may arise, however, in which a single objective has multiple utility functions. Here, we consider a constrained multi-objective problem in which each objective has multiple utility functions. We induce the probability of the utilities for each objective function using Bayesian theory. Illustrative examples considering dependence and independence of variables are worked through to demonstrate the usefulness of the proposed model.
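
    One possible reading of "inducing the probability of the utilities" (a guess at the mechanics, not the authors' formulation) is to treat the candidate utility functions of an objective as competing hypotheses, update their probabilities from observed preference data via Bayes' rule, and score the objective by the posterior-weighted utility, as in the sketch below.

    ```python
    # A hedged sketch of one possible reading of the approach: treat the candidate
    # utility functions of a single objective as hypotheses, update their
    # probabilities with Bayes' rule from (synthetic) preference observations, and
    # score the objective by the posterior-weighted expected utility. This
    # illustrates the general idea only; it is not the authors' model.
    import math

    # Two candidate utilities for one objective value x in (0, 1].
    utilities = {
        "linear": lambda x: x,
        "log":    lambda x: 1.0 + math.log(x),   # hypothetical normalization
    }
    prior = {"linear": 0.5, "log": 0.5}

    # Synthetic preference observations: the DM "accepts" outcome x with a
    # probability modelled as the candidate utility (a stand-in likelihood).
    observations = [(0.9, True), (0.4, False), (0.7, True)]

    def likelihood(u, x, accepted):
        p = min(max(u(x), 1e-6), 1.0)
        return p if accepted else 1.0 - p

    posterior = dict(prior)
    for x, accepted in observations:
        for name, u in utilities.items():
            posterior[name] *= likelihood(u, x, accepted)
    total = sum(posterior.values())
    posterior = {k: v / total for k, v in posterior.items()}

    def expected_utility(x):
        return sum(posterior[name] * utilities[name](x) for name in utilities)

    print("posterior:", {k: round(v, 3) for k, v in posterior.items()})
    print("expected utility at x = 0.6:", round(expected_utility(0.6), 3))
    ```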

  17. Thermodynamics of computation and information distance

    NASA Astrophysics Data System (ADS)

    Bennett, Charles H.; Gacs, Peter; Li, Ming; Vitanyi, Paul M. R. B.; Zurek, Wojciech H.

    1993-06-01

    Intuitively, the minimal information distance between x and y is the length of the shortest program for a universal computer to transform x into y and y into x. This measure is shown to be, up to a logarithmic additive term, equal to the maximum of the conditional Kolmogorov complexities E(sub 1)(x,y) = max(K(y/x), K(x/y)). Any reasonable distance to measure similarity of pictures should be an effectively approximable, symmetric, positive function of x and y satisfying a reasonable normalization condition and obeying the triangle inequality. It turns out that E(sub 1) is minimal up to an additive constant among all such distances. Hence it is a universal 'picture distance', which accounts for any effective similarity between pictures. A third information distance, based on the idea that the aim should be for dissipationless computations, and hence for reversible ones, is given by the length E(sub 2)(x,y) = KR(y/x) = KR(x/y) of the shortest reversible program that transforms x into y and y into x on a universal reversible computer. It is shown that E(sub 2) = E(sub 1) as well, up to a logarithmic additive term. It is remarkable that three so differently motivated definitions turn out to define one and the same notion. Another information distance, E(sub 3), is obtained by minimizing the total amount of information flowing in and out during a reversible computation in which the program is not retained, in other words the number of extra bits (apart from x) that must be irreversibly supplied at the beginning, plus the number of garbage bits (apart from y) that must be irreversibly erased at the end of the computation to obtain a 'clean' y. This distance is within a logarithmic additive term of the sum of the conditional complexities, E(sub 3)(x, y) = K(y/x) + K(x/y). Using the physical theory of reversible computation, the simple difference K(x) - K(y) is shown to be an appropriate (universal, antisymmetric, and transitive) measure of the amount of thermodynamic work required to transform string x into string y by the most efficient process.
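
    Kolmogorov complexity is uncomputable, so E(sub 1) cannot be evaluated exactly; in related later work it is approximated by replacing K with the output length of a real compressor (the normalized compression distance). The sketch below is a minimal zlib-based illustration of that approximation, not the construction used in the paper.

    ```python
    # Kolmogorov complexity K is uncomputable, so E1 cannot be computed exactly.
    # The normalized compression distance of related later work approximates it
    # by replacing K with a real compressor's output length. This is a minimal
    # zlib-based sketch of that approximation, not the paper's construction.
    import zlib

    def c(data: bytes) -> int:
        """Compressed length, standing in for Kolmogorov complexity K."""
        return len(zlib.compress(data, 9))

    def ncd(x: bytes, y: bytes) -> float:
        cx, cy, cxy = c(x), c(y), c(x + y)
        return (cxy - min(cx, cy)) / max(cx, cy)

    s1 = b"the quick brown fox jumps over the lazy dog " * 20
    s2 = b"the quick brown fox jumps over the lazy cat " * 20
    noise = bytes(range(256)) * 4

    print(f"ncd(s1, s2)    = {ncd(s1, s2):.3f}   # similar strings, small distance")
    print(f"ncd(s1, noise) = {ncd(s1, noise):.3f}   # unrelated data, distance near 1")
    ```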

  18. Granuloma Weight and the α1-acute Phase Protein Response in Rats Injected with Turpentine

    PubMed Central

    Darcy, D. A.

    1970-01-01

    Rats of 6 different age (and weight) groups were injected with turpentine subcutaneously in a single depot at 4 different doses per kg. body weight. In each age/weight group the weight of the turpentine granuloma produced at 48 hr was proportional to log turpentine dose. The 48 hr response of the α1-AP (acute phase) globulin was also proportional to log turpentine dose and was proportional to the granuloma weight. When rats of different age/weight groups were compared it was found that granuloma weight increased logarithmically with body weight for a given turpentine dose per kg. body weight. More remarkably, granuloma weight increased logarithmically with body weight for a constant volume of turpentine injected per rat, thus 0·2 ml. of turpentine gave an 0·65 g. granuloma in 60 g. (4-week old) rats and a 5 g. granuloma in 371 g. (40-week old) rats. The possibility of an age influence on this phenomenon was not excluded by these experiments. The α1-AP globulin response also increased logarithmically with body weight for a given turpentine dose per kg. body weight. For a constant volume of turpentine per rat, the response increased logarithmically with body weight and directly with granuloma weight. It was concluded that this acute phase protein response is closely correlated with the size of the lesion. There was some evidence, however, that the age of the rat may make a contribution to the response. The histology of the granulomata is described. PMID:4190826

  19. Finite-time singularities in the dynamics of hyperinflation in an economy

    NASA Astrophysics Data System (ADS)

    Szybisz, Martín A.; Szybisz, Leszek

    2009-08-01

    The dynamics of hyperinflation episodes is studied by applying a theoretical approach based on collective “adaptive inflation expectations” with a positive nonlinear feedback proposed in the literature. In such a description it is assumed that the growth rate of the logarithmic price, r(t), changes with a velocity obeying a power law, which leads to a finite-time singularity at a critical time tc. By revising that model we found that, indeed, there are two types of singular solutions for the logarithmic price, p(t). One is given by the already reported form p(t) ≈ (tc − t)^(−α) (with α > 0) and the other exhibits a logarithmic divergence, p(t) ≈ ln[1/(tc − t)]. The singularity is a signature of an economic crash. In the present work we express p(t) explicitly in terms of the parameters introduced throughout the formulation, avoiding the use of any combination of them defined in the original paper. This procedure allows us to examine simultaneously the time series of r(t) and p(t), performing a linked error analysis of the determined parameters. For the first time this approach is applied to the analysis of the very extreme historical hyperinflations that occurred in Greece (1941-1944) and Yugoslavia (1991-1994). The case of Greece is compatible with a logarithmic singularity. The study is completed with an analysis of the hyperinflation spiral currently experienced in Zimbabwe. According to our results, an economic crash in that country is predicted for the near term. The robustness of the results to changes of the initial time of the series and the differences with a linear feedback are discussed.
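
    Fitting the power-law singular form quoted above to a log-price series is straightforward with standard least squares, as in the sketch below. The data are synthetic, and the constant offset and all numerical values are illustrative assumptions rather than the parametrization used in the paper.

    ```python
    # A minimal sketch of fitting the power-law singular form quoted above,
    #   p(t) = A * (tc - t)^(-alpha) + B,
    # to a log-price series. The data are synthetic with added noise; the offset
    # B and all numbers are illustrative assumptions, not the paper's setup.
    import numpy as np
    from scipy.optimize import curve_fit

    def singular_form(t, A, tc, alpha, B):
        return A * (tc - t) ** (-alpha) + B

    # Synthetic "log price" data with a singularity at tc_true = 10.
    rng = np.random.default_rng(1)
    tc_true, A_true, alpha_true, B_true = 10.0, 2.0, 0.5, 1.0
    t = np.linspace(0.0, 9.0, 60)
    p = singular_form(t, A_true, tc_true, alpha_true, B_true)
    p += rng.normal(scale=0.05, size=t.size)

    # Fit with a starting guess; bounds keep tc beyond the observed window.
    popt, pcov = curve_fit(singular_form, t, p,
                           p0=[1.0, 9.5, 1.0, 0.0],
                           bounds=([0.0, 9.1, 0.01, -10.0],
                                   [10.0, 20.0, 5.0, 10.0]))
    perr = np.sqrt(np.diag(pcov))
    for name, val, err in zip(("A", "tc", "alpha", "B"), popt, perr):
        print(f"{name:5s} = {val:7.3f} +/- {err:.3f}")
    ```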

  20. Multiplicative surrogate standard deviation: a group metric for the glycemic variability of individual hospitalized patients.

    PubMed

    Braithwaite, Susan S; Umpierrez, Guillermo E; Chase, J Geoffrey

    2013-09-01

    Group metrics are described to quantify blood glucose (BG) variability of hospitalized patients. The "multiplicative surrogate standard deviation" (MSSD) is the reverse-transformed group mean of the standard deviations (SDs) of the logarithmically transformed BG data set of each patient. The "geometric group mean" (GGM) is the reverse-transformed group mean of the means of the logarithmically transformed BG data set of each patient. Before reverse transformation is performed, the mean of means and mean of SDs each has its own SD, which becomes a multiplicative standard deviation (MSD) after reverse transformation. Statistical predictions and comparisons of parametric or nonparametric tests remain valid after reverse transformation. A subset of a previously published BG data set of 20 critically ill patients from the first 72 h of treatment under the SPRINT protocol was transformed logarithmically. After rank ordering according to the SD of the logarithmically transformed BG data of each patient, the cohort was divided into two equal groups, those having lower or higher variability. For the entire cohort, the GGM was 106 (÷/× 1.07) mg/dl, and MSSD was 1.24 (÷/× 1.07). For the subgroups having lower and higher variability, respectively, the GGM did not differ, 104 (÷/× 1.07) versus 109 (÷/× 1.07) mg/dl, but the MSSD differed, 1.17 (÷/× 1.03) versus 1.31 (÷/× 1.05), p = .00004. By using the MSSD with its MSD, groups can be characterized and compared according to glycemic variability of individual patient members. © 2013 Diabetes Technology Society.
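
    The definitions above translate directly into code, assuming "reverse transformation" means exponentiating natural-log-transformed glucose values. The sketch below uses synthetic per-patient series and illustrates the definitions only; it does not reproduce the SPRINT analysis.

    ```python
    # A minimal sketch of the group metrics described above, assuming "reverse
    # transformation" means exponentiating natural-log-transformed glucose
    # values. Patient data are synthetic; this illustrates the definitions only.
    import numpy as np

    # Synthetic per-patient blood glucose series (mg/dl).
    rng = np.random.default_rng(2)
    patients = [rng.lognormal(mean=np.log(110), sigma=s, size=24)
                for s in (0.05, 0.08, 0.12, 0.20)]

    log_means = np.array([np.log(bg).mean() for bg in patients])
    log_sds   = np.array([np.log(bg).std(ddof=1) for bg in patients])

    ggm  = np.exp(log_means.mean())          # geometric group mean (mg/dl)
    mssd = np.exp(log_sds.mean())            # multiplicative surrogate SD (factor)
    ggm_msd  = np.exp(log_means.std(ddof=1)) # multiplicative SD of the GGM
    mssd_msd = np.exp(log_sds.std(ddof=1))   # multiplicative SD of the MSSD

    print(f"GGM  = {ggm:6.1f} (÷/× {ggm_msd:.2f}) mg/dl")
    print(f"MSSD = {mssd:6.2f} (÷/× {mssd_msd:.2f})")
    ```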
