Sample records for logarithmic barrier method

  1. Comparing barrier algorithms

    NASA Technical Reports Server (NTRS)

    Arenstorf, Norbert S.; Jordan, Harry F.

    1987-01-01

    A barrier is a method for synchronizing a large number of concurrent computer processes. After considering some basic synchronization mechanisms, a collection of barrier algorithms with either linear or logarithmic depth is presented. A graphical model is described that profiles the execution of the barriers and other parallel programming constructs. This model shows how the interaction between the barrier algorithms and the work that they synchronize can impact their performance. One result is that logarithmic tree-structured barriers show good performance when synchronizing fixed-length work, while linear self-scheduled barriers show better performance when synchronizing fixed-length work with an embedded critical section. The linear barriers are better able to exploit the process skew associated with critical sections. Timing experiments performed on an eighteen-processor Flex/32 shared-memory multiprocessor that support these conclusions are detailed.
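
    As a hedged illustration of the linear-versus-logarithmic distinction discussed above, the sketch below implements a generic log-depth dissemination barrier with Python threads; it is not one of the specific Flex/32 algorithms compared in the report, and the thread count and helper names are made up.

    ```python
    import math
    import threading

    def run_dissemination_barrier(num_threads=8):
        """Log-depth (dissemination) barrier: in round k, thread i signals
        thread (i + 2**k) % P and waits on (i - 2**k) % P, so all P threads
        synchronize in ceil(log2(P)) rounds rather than the O(P) steps of a
        central-counter (linear) barrier."""
        rounds = math.ceil(math.log2(num_threads))
        # one single-use Event per (round, sender); no sense reversal needed here
        events = [[threading.Event() for _ in range(num_threads)] for _ in range(rounds)]
        done = []
        lock = threading.Lock()

        def worker(tid):
            for k in range(rounds):
                predecessor = (tid - 2 ** k) % num_threads
                events[k][tid].set()           # signal my round-k successor
                events[k][predecessor].wait()  # wait for my round-k predecessor
            with lock:
                done.append(tid)

        threads = [threading.Thread(target=worker, args=(i,)) for i in range(num_threads)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print(len(done), "threads passed the barrier")

    if __name__ == "__main__":
        run_dissemination_barrier()
    ```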

  2. Microbial ranking of porous packaging materials (exposure chamber method), ASTM method: collaborative study.

    PubMed

    Placencia, A M; Peeler, J T

    1999-01-01

    A collaborative study involving 11 laboratories was conducted to measure the microbial barrier effectiveness of porous medical packaging. Two randomly cut samples from each of 6 commercially available porous materials and one positive and one negative control were tested by one operator in each of 11 laboratories. Microbial barrier effectiveness was measured in terms of logarithm reduction value (LRV), which reflects the log10 microbial penetration of the material being tested. The logarithm of the final concentration is subtracted from that of the initial concentration to obtain the LRV. Thus the higher the LRV, the better the barrier. Repeatability standard deviations ranged from 6.42 to 16.40; reproducibility standard deviations ranged from 15.50 to 22.70. Materials B(53), C(50), D(CT), and E(45MF) differ significantly from the positive control. The microbial ranking of porous packaging materials (exposure chamber method), ASTM method, has been adopted First Action by AOAC INTERNATIONAL.
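
    The logarithm reduction value defined above is a one-line computation; the sketch below restates it in Python with made-up challenge and penetration counts purely for illustration.

    ```python
    import math

    def log_reduction_value(initial_count, final_count):
        """LRV = log10(initial concentration) - log10(final concentration);
        a higher LRV indicates a better microbial barrier."""
        return math.log10(initial_count) - math.log10(final_count)

    # e.g. a 1e6 CFU challenge with 1e2 CFU penetrating the material gives LRV = 4
    print(log_reduction_value(1e6, 1e2))
    ```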

  3. Distributed Optimal Consensus Over Resource Allocation Network and Its Application to Dynamical Economic Dispatch.

    PubMed

    Li, Chaojie; Yu, Xinghuo; Huang, Tingwen; He, Xing

    2018-06-01

    The resource allocation problem is studied and reformulated by a distributed interior point method via a logarithmic barrier. With the facilitation of the graph Laplacian, a fully distributed continuous-time multiagent system is developed for solving the problem. Specifically, to avoid the high singularity of the logarithmic barrier at the boundary, an adaptive parameter switching strategy is introduced into this dynamical multiagent system. The convergence rate of the distributed algorithm is obtained. Moreover, a novel distributed primal-dual dynamical multiagent system is designed in a smart grid scenario to seek the saddle point of dynamical economic dispatch, which coincides with the optimal solution. The dual decomposition technique is applied to transform the optimization problem into easily solvable resource allocation subproblems with local inequality constraints. The good performance of the new dynamical systems is, respectively, verified by a numerical example and the IEEE six-bus test system-based simulations.
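
    To make the logarithmic-barrier idea behind this record concrete, here is a minimal centralized sketch for a toy economic-dispatch-style allocation problem; it is not the paper's distributed multiagent scheme, and the cost coefficients, step sizes, and function names are invented.

    ```python
    import numpy as np

    def dispatch_log_barrier(a, b, demand, mu=5.0, outer=6, inner=500, lr=0.05):
        """Centralized sketch of a logarithmic-barrier method for
            minimize   sum_i (a_i * x_i**2 + b_i * x_i)
            subject to sum_i x_i = demand,  x_i > 0.
        Positivity enters through -(1/t) * sum_i log(x_i) with t increased
        geometrically; the equality constraint is preserved by projecting the
        gradient onto the hyperplane {v : sum(v) = 0}."""
        n = len(a)
        x = np.full(n, demand / n)            # strictly positive, feasible start
        t = 1.0
        for _ in range(outer):
            for _ in range(inner):
                grad = t * (2.0 * a * x + b) - 1.0 / x   # gradient of t*f(x) - sum(log x)
                grad -= grad.mean()                      # keep sum(x) = demand
                step = lr / t
                x_new = x - step * grad
                while np.any(x_new <= 0):                # backtrack to stay interior
                    step *= 0.5
                    x_new = x - step * grad
                x = x_new
            t *= mu                                      # tighten the barrier
        return x

    a = np.array([0.10, 0.05, 0.20])   # quadratic cost coefficients (made up)
    b = np.array([2.0, 3.0, 1.0])      # linear cost coefficients (made up)
    x = dispatch_log_barrier(a, b, demand=10.0)
    print(np.round(x, 3), round(float(x.sum()), 3))
    ```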

  4. Logarithmic compression methods for spectral data

    DOEpatents

    Dunham, Mark E.

    2003-01-01

    A method is provided for logarithmic compression, transmission, and expansion of spectral data. A log Gabor transformation is made of incoming time series data to output spectral phase and logarithmic magnitude values. The output phase and logarithmic magnitude values are compressed by selecting only magnitude values above a selected threshold and corresponding phase values to transmit compressed phase and logarithmic magnitude values. A reverse log Gabor transformation is then performed on the transmitted phase and logarithmic magnitude values to output transmitted time series data to a user.
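
    A rough Python sketch of the compress/expand cycle described in this patent record is given below; it substitutes a plain FFT for the log Gabor transform and uses an arbitrary -40 dB threshold, so it only illustrates the keep-strong-log-magnitude-bins-plus-phase idea.

    ```python
    import numpy as np

    def compress_spectrum(signal, threshold_db=-40.0):
        """Keep only log-magnitude values within threshold_db of the peak,
        together with their phases (a plain FFT stands in for the log Gabor
        transform used in the patent)."""
        spec = np.fft.rfft(signal)
        log_mag = 20.0 * np.log10(np.abs(spec) + 1e-12)   # logarithmic magnitude (dB)
        phase = np.angle(spec)
        keep = log_mag > (log_mag.max() + threshold_db)   # retain strong bins only
        idx = np.flatnonzero(keep)
        return idx, log_mag[keep], phase[keep], len(spec)

    def expand_spectrum(idx, log_mag, phase, n_bins, n_samples):
        """Rebuild the sparse spectrum and transform back to a time series."""
        spec = np.zeros(n_bins, dtype=complex)
        spec[idx] = (10.0 ** (log_mag / 20.0)) * np.exp(1j * phase)
        return np.fft.irfft(spec, n=n_samples)

    t = np.linspace(0.0, 1.0, 1024, endpoint=False)
    x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
    idx, lm, ph, nb = compress_spectrum(x)
    x_rec = expand_spectrum(idx, lm, ph, nb, len(x))
    print(len(idx), "of", nb, "bins kept; max reconstruction error", np.max(np.abs(x - x_rec)))
    ```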

  5. Equilibrium Solutions of the Logarithmic Hamiltonian Leapfrog for the N-body Problem

    NASA Astrophysics Data System (ADS)

    Minesaki, Yukitaka

    2018-04-01

    We prove that a second-order logarithmic Hamiltonian leapfrog for the classical general N-body problem (CGNBP) designed by Mikkola and Tanikawa and some higher-order logarithmic Hamiltonian methods based on symmetric multicompositions of the logarithmic algorithm exactly reproduce the orbits of elliptic relative equilibrium solutions in the original CGNBP. These methods are explicit symplectic methods. Before this proof, only some implicit discrete-time CGNBPs proposed by Minesaki had been analytically shown to trace the orbits of elliptic relative equilibrium solutions. The proof is therefore the first existence proof for explicit symplectic methods. Such logarithmic Hamiltonian methods with a variable time step can also precisely retain periodic orbits in the classical general three-body problem, which generic numerical methods with a constant time step cannot do.

  6. Characterization of DBD Plasma Actuators Performance Without External Flow - Part I: Thrust-Voltage Quadratic Relationship in Logarithmic Space for Sinusoidal Excitation

    NASA Technical Reports Server (NTRS)

    Ashpis, David E.; Laun, Matthew C.

    2016-01-01

    Results of characterization of Dielectric Barrier Discharge (DBD) plasma actuators without external flow are presented. The results include aerodynamic and electric performance of the actuators without external flow for different geometrical parameters, dielectric materials and applied voltage level and wave form.

  7. Simulating the component counts of combinatorial structures.

    PubMed

    Arratia, Richard; Barbour, A D; Ewens, W J; Tavaré, Simon

    2018-02-09

    This article describes and compares methods for simulating the component counts of random logarithmic combinatorial structures such as permutations and mappings. We exploit the Feller coupling for simulating permutations to provide a very fast method for simulating logarithmic assemblies more generally. For logarithmic multisets and selections, this approach is replaced by an acceptance/rejection method based on a particular conditioning relationship that represents the distribution of the combinatorial structure as that of independent random variables conditioned on a weighted sum. We show how to improve its acceptance rate. We illustrate the method by estimating the probability that a random mapping has no repeated component sizes, and establish the asymptotic distribution of the difference between the number of components and the number of distinct component sizes for a very general class of logarithmic structures. Copyright © 2018. Published by Elsevier Inc.
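
    For the permutation case, the Feller coupling mentioned above has a very short simulation; the sketch below (with made-up sample sizes) draws independent Bernoulli(1/i) variables and reads off cycle counts from the spacings between 1's.

    ```python
    import numpy as np

    def cycle_counts_feller(n, rng):
        """Feller coupling: xi_i ~ Bernoulli(1/i) independently, i = 1..n; the
        spacings between successive 1's in xi_1 ... xi_n 1 have the same joint
        distribution as the cycle counts of a uniform random permutation of n."""
        xi = rng.random(n) < 1.0 / np.arange(1, n + 1)    # xi[0] is always True
        ones = np.flatnonzero(np.append(xi, True))        # positions of 1's, plus sentinel
        spacings = np.diff(ones)                          # cycle lengths, summing to n
        return np.bincount(spacings, minlength=n + 1)     # counts[j] = #cycles of length j

    rng = np.random.default_rng(0)
    n = 1000
    total_cycles = [cycle_counts_feller(n, rng).sum() for _ in range(2000)]
    print(np.mean(total_cycles), "~ harmonic number H_n ~", np.log(n) + 0.5772)
    ```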

  8. Electronic filters, signal conversion apparatus, hearing aids and methods

    NASA Technical Reports Server (NTRS)

    Morley, Jr., Robert E. (Inventor); Engebretson, A. Maynard (Inventor); Engel, George L. (Inventor); Sullivan, Thomas J. (Inventor)

    1994-01-01

    An electronic filter for filtering an electrical signal. Signal processing circuitry therein includes a logarithmic filter having a series of filter stages with inputs and outputs in cascade and respective circuits associated with the filter stages for storing electrical representations of filter parameters. The filter stages include circuits for respectively adding the electrical representations of the filter parameters to the electrical signal to be filtered thereby producing a set of filter sum signals. At least one of the filter stages includes circuitry for producing a filter signal in substantially logarithmic form at its output by combining a filter sum signal for that filter stage with a signal from an output of another filter stage. The signal processing circuitry produces an intermediate output signal, and a multiplexer connected to the signal processing circuit multiplexes the intermediate output signal with the electrical signal to be filtered so that the logarithmic filter operates as both a logarithmic prefilter and a logarithmic postfilter. Other electronic filters, signal conversion apparatus, electroacoustic systems, hearing aids and methods are also disclosed.

  9. Programming of the complex logarithm function in the solution of the cracked anisotropic plate loaded by a point force

    NASA Astrophysics Data System (ADS)

    Zaal, K. J. J. M.

    1991-06-01

    In programming solutions of complex function theory, the multivalued complex logarithm is replaced by a single-valued logarithmic function, introducing a discontinuity along the branch cut into the programmed solution which was not present in the mathematical solution. Recently, Liaw and Kamel presented their solution of the infinite anisotropic centrally cracked plate loaded by an arbitrary point force, which they used as Green's function in a boundary element method intended to evaluate the stress intensity factor at the tip of a crack originating from an elliptical hole. Their solution may be used as Green's function of many more numerical methods involving anisotropic elasticity. In programming applications of Liaw and Kamel's solution, the standard definition of the logarithmic function with the branch cut at the nonpositive real axis cannot provide a reliable computation of the displacement field. Either the branch cut should be redefined outside the domain of the logarithmic function, after proving that the domain is limited to a part of the plane, or the logarithmic function should be defined on its Riemann surface. A two-dimensional line fractal can provide the link between all mesh points on the plane essential to evaluate the logarithm function on its Riemann surface. As an example, a two-dimensional line fractal is defined for a mesh once used by Erdogan and Arin.

  10. Next-to-leading-logarithmic power corrections for N-jettiness subtraction in color-singlet production

    NASA Astrophysics Data System (ADS)

    Boughezal, Radja; Isgrò, Andrea; Petriello, Frank

    2018-04-01

    We present a detailed derivation of the power corrections to the factorization theorem for the 0-jettiness event shape variable T. Our calculation is performed directly in QCD without using the formalism of effective field theory. We analytically calculate the next-to-leading logarithmic power corrections for small T at next-to-leading order in the strong coupling constant, extending previous computations which obtained only the leading-logarithmic power corrections. We address a discrepancy in the literature between results for the leading-logarithmic power corrections to a particular definition of 0-jettiness. We present a numerical study of the power corrections in the context of their application to the N-jettiness subtraction method for higher-order calculations, using gluon-fusion Higgs production as an example. The inclusion of the next-to-leading-logarithmic power corrections further improves the numerical efficiency of the approach beyond the improvement obtained from the leading-logarithmic power corrections.

  11. Electronic filters, repeated signal charge conversion apparatus, hearing aids and methods

    NASA Technical Reports Server (NTRS)

    Morley, Jr., Robert E. (Inventor); Engebretson, A. Maynard (Inventor); Engel, George L. (Inventor); Sullivan, Thomas J. (Inventor)

    1993-01-01

    An electronic filter for filtering an electrical signal. Signal processing circuitry therein includes a logarithmic filter having a series of filter stages with inputs and outputs in cascade and respective circuits associated with the filter stages for storing electrical representations of filter parameters. The filter stages include circuits for respectively adding the electrical representations of the filter parameters to the electrical signal to be filtered thereby producing a set of filter sum signals. At least one of the filter stages includes circuitry for producing a filter signal in substantially logarithmic form at its output by combining a filter sum signal for that filter stage with a signal from an output of another filter stage. The signal processing circuitry produces an intermediate output signal, and a multiplexer connected to the signal processing circuit multiplexes the intermediate output signal with the electrical signal to be filtered so that the logarithmic filter operates as both a logarithmic prefilter and a logarithmic postfilter. Other electronic filters, signal conversion apparatus, electroacoustic systems, hearing aids and methods are also disclosed.

  12. Entropy production of doubly stochastic quantum channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Müller-Hermes, Alexander (Department of Mathematical Sciences, University of Copenhagen, 2100 Copenhagen; E-mail: muellerh@posteo.net); Stilck França, Daniel (E-mail: dsfranca@mytum.de)

    2016-02-15

    We study the entropy increase of quantum systems evolving under primitive, doubly stochastic Markovian noise and thus converging to the maximally mixed state. This entropy increase can be quantified by a logarithmic-Sobolev constant of the Liouvillian generating the noise. We prove a universal lower bound on this constant that stays invariant under taking tensor-powers. Our methods involve a new comparison method to relate logarithmic-Sobolev constants of different Liouvillians and a technique to compute logarithmic-Sobolev inequalities of Liouvillians with eigenvectors forming a projective representation of a finite abelian group. Our bounds improve upon similar results established before, and as an application we prove an upper bound on continuous-time quantum capacities. In the last part of this work we study entropy production estimates of discrete-time doubly stochastic quantum channels by extending the framework of discrete-time logarithmic-Sobolev inequalities to the quantum case.

  13. Logarithmic current measurement circuit with improved accuracy and temperature stability and associated method

    DOEpatents

    Ericson, M. Nance; Rochelle, James M.

    1994-01-01

    A logarithmic current measurement circuit for operating upon an input electric signal utilizes a quad, dielectrically isolated, well-matched, monolithic bipolar transistor array. One group of circuit components within the circuit cooperate with two transistors of the array to convert the input signal logarithmically to provide a first output signal which is temperature-dependent, and another group of circuit components cooperate with the other two transistors of the array to provide a second output signal which is temperature-dependent. A divider ratios the first and second output signals to provide a resultant output signal which is independent of temperature. The method of the invention includes the operating steps performed by the measurement circuit.

  14. Numerical solution of the quantum Lenard-Balescu equation for a non-degenerate one-component plasma

    DOE PAGES

    Scullard, Christian R.; Belt, Andrew P.; Fennell, Susan C.; ...

    2016-09-01

    We present a numerical solution of the quantum Lenard-Balescu equation using a spectral method, namely an expansion in Laguerre polynomials. This method exactly conserves both particles and kinetic energy and facilitates the integration over the dielectric function. To demonstrate the method, we solve the equilibration problem for a spatially homogeneous one-component plasma with various initial conditions. Unlike the more usual Landau/Fokker-Planck system, this method requires no input Coulomb logarithm; the logarithmic terms in the collision integral arise naturally from the equation along with the non-logarithmic order-unity terms. The spectral method can also be used to solve the Landau equation and a quantum version of the Landau equation in which the integration over the wavenumber requires only a lower cutoff. We solve these problems as well and compare them with the full Lenard-Balescu solution in the weak-coupling limit. Finally, we discuss the possible generalization of this method to include spatial inhomogeneity and velocity anisotropy.

  15. Antibacterial Activity of a Novel Peptide-Modified Lysin Against Acinetobacter baumannii and Pseudomonas aeruginosa

    PubMed Central

    Yang, Hang; Wang, Mengyue; Yu, Junping; Wei, Hongping

    2015-01-01

    The global emergence of multidrug-resistant (MDR) bacteria is a growing threat to public health worldwide. Natural bacteriophage lysins are promising alternatives in the treatment of infections caused by Gram-positive pathogens, but not Gram-negative ones, like Acinetobacter baumannii and Pseudomonas aeruginosa, due to the barriers posed by their outer membranes. Recently, modifying a natural lysin with an antimicrobial peptide was found to break these barriers and kill Gram-negative pathogens. Herein, a new peptide-modified lysin (PlyA) was constructed by fusing the cecropin A peptide residues 1–8 (KWKLFKKI) with the OBPgp279 lysin and its antibacterial activity was studied. PlyA showed good and broad antibacterial activities against logarithmic phase A. baumannii and P. aeruginosa, but much reduced activities against the cells in stationary phase. Addition of outer membrane permeabilizers (EDTA and citric acid) could enhance the antibacterial activity of PlyA against stationary phase cells. Finally, no antibacterial activity of PlyA could be observed in some bio-matrices, such as culture media, milk, and sera. In conclusion, we reported here a novel peptide-modified lysin with significant antibacterial activity against both logarithmic (without OMPs) and stationary phase (with OMPs) A. baumannii and P. aeruginosa cells in buffer, but further optimization is needed to achieve broad activity in diverse bio-matrices. PMID:26733995

  16. Solving the Schroedinger equation for helium atom and its isoelectronic ions with the free iterative complement interaction (ICI) method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakashima, Hiroyuki; Nakatsuji, Hiroshi

    2007-12-14

    The Schroedinger equation was solved very accurately for the helium atom and its isoelectronic ions (Z=1-10) with the free iterative complement interaction (ICI) method followed by the variational principle. We obtained highly accurate wave functions and energies of the helium atom and its isoelectronic ions. For helium, the calculated energy was -2.903 724 377 034 119 598 311 159 245 194 404 446 696 905 37 a.u., correct to over 40 digits, and for H⁻, it was -0.527 751 016 544 377 196 590 814 566 747 511 383 045 02 a.u. These results prove numerically that with the free ICI method, we can calculate the solutions of the Schroedinger equation as accurately as one desires. We examined several types of scaling function g and initial function ψ₀ of the free ICI method. The performance was good when logarithm functions were used in the initial function because the logarithm function is physically essential for the three-particle collision area. The best performance was obtained when we introduced a new logarithm function containing not only r₁ and r₂ but also r₁₂ in the same logarithm function.

  17. Method for determining formation quality factor from seismic data

    DOEpatents

    Taner, M. Turhan; Treitel, Sven

    2005-08-16

    A method is disclosed for calculating the quality factor Q from a seismic data trace. The method includes calculating a first and a second minimum phase inverse wavelet at a first and a second time interval along the seismic data trace, synthetically dividing the first wavelet by the second wavelet, Fourier transforming the result of the synthetic division, calculating the logarithm of this quotient of Fourier transforms and determining the slope of a best fit line to the logarithm of the quotient.
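
    As a hedged sketch of the spectral-ratio idea underlying this patent record, the Python below divides the spectra of two wavelets, takes the logarithm, and converts the fitted slope to Q via the classical constant-Q relation; it uses plain spectral division rather than the patent's minimum-phase inverse wavelets, and all numbers are synthetic.

    ```python
    import numpy as np

    def estimate_q(wavelet_early, wavelet_late, dt, travel_time):
        """Classical spectral-ratio relation for constant Q:
            ln|W2(f)/W1(f)| = const - (pi * f * travel_time) / Q,
        so Q follows from the slope of a line fitted to the log spectral ratio."""
        f = np.fft.rfftfreq(len(wavelet_early), d=dt)
        s1 = np.abs(np.fft.rfft(wavelet_early)) + 1e-12
        s2 = np.abs(np.fft.rfft(wavelet_late)) + 1e-12
        band = (f > 5.0) & (f < 60.0)                 # restrict to a usable band
        slope, _ = np.polyfit(f[band], np.log(s2[band] / s1[band]), 1)
        return -np.pi * travel_time / slope

    # synthetic check: attenuate a wavelet with a known Q and recover it
    dt, q_true, t_travel = 0.002, 80.0, 0.5
    t = np.arange(0.0, 0.256, dt)
    w1 = np.exp(-((t - 0.05) / 0.01) ** 2) * np.cos(2 * np.pi * 30 * t)
    f = np.fft.rfftfreq(len(w1), d=dt)
    w2 = np.fft.irfft(np.fft.rfft(w1) * np.exp(-np.pi * f * t_travel / q_true), n=len(w1))
    print(estimate_q(w1, w2, dt, t_travel))           # should be close to 80
    ```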

  18. Extended Phase-Space Methods for Enhanced Sampling in Molecular Simulations: A Review.

    PubMed

    Fujisaki, Hiroshi; Moritsugu, Kei; Matsunaga, Yasuhiro; Morishita, Tetsuya; Maragliano, Luca

    2015-01-01

    Molecular Dynamics simulations are a powerful approach to study biomolecular conformational changes or protein-ligand, protein-protein, and protein-DNA/RNA interactions. Straightforward applications, however, are often hampered by incomplete sampling, since in a typical simulated trajectory the system will spend most of its time trapped by high energy barriers in restricted regions of the configuration space. Over the years, several techniques have been designed to overcome this problem and enhance space sampling. Here, we review a class of methods that rely on the idea of extending the set of dynamical variables of the system by adding extra ones associated with functions describing the process under study. In particular, we illustrate the Temperature Accelerated Molecular Dynamics (TAMD), Logarithmic Mean Force Dynamics (LogMFD), and Multiscale Enhanced Sampling (MSES) algorithms. We also discuss combinations with techniques for searching reaction paths. We show the advantages presented by this approach and how it allows one to quickly sample important regions of the free-energy landscape via automatic exploration.

  19. On the logarithmic-singularity correction in the kernel function method of subsonic lifting-surface theory

    NASA Technical Reports Server (NTRS)

    Lan, C. E.; Lamar, J. E.

    1977-01-01

    A logarithmic-singularity correction factor is derived for use in kernel function methods associated with Multhopp's subsonic lifting-surface theory. Because of the form of the factor, a relation was formulated between the numbers of chordwise and spanwise control points needed for good accuracy. This formulation is developed and discussed. Numerical results are given to show the improvement of the computation with the new correction factor.

  20. A new "Logicle" display method avoids deceptive effects of logarithmic scaling for low signals and compensated data.

    PubMed

    Parks, David R; Roederer, Mario; Moore, Wayne A

    2006-06-01

    In immunofluorescence measurements and most other flow cytometry applications, fluorescence signals of interest can range down to essentially zero. After fluorescence compensation, some cell populations will have low means and include events with negative data values. Logarithmic presentation has been very useful in providing informative displays of wide-ranging flow cytometry data, but it fails to adequately display cell populations with low means and high variances and, in particular, offers no way to include negative data values. This has led to a great deal of difficulty in interpreting and understanding flow cytometry data, has often resulted in incorrect delineation of cell populations, and has led many people to question the correctness of compensation computations that were, in fact, correct. We identified a set of criteria for creating data visualization methods that accommodate the scaling difficulties presented by flow cytometry data. On the basis of these, we developed a new data visualization method that provides important advantages over linear or logarithmic scaling for display of flow cytometry data, a scaling we refer to as "Logicle" scaling. Logicle functions represent a particular generalization of the hyperbolic sine function with one more adjustable parameter than linear or logarithmic functions. Finally, we developed methods for objectively and automatically selecting an appropriate value for this parameter. The Logicle display method provides more complete, appropriate, and readily interpretable representations of data that includes populations with low-to-zero means, including distributions resulting from fluorescence compensation procedures, than can be produced using either logarithmic or linear displays. The method includes a specific algorithm for evaluating actual data distributions and deriving parameters of the Logicle scaling function appropriate for optimal display of that data. It is critical to note that Logicle visualization does not change the data values or the descriptive statistics computed from them. Copyright 2006 International Society for Analytical Cytology.

  1. Effect of localized states on the current-voltage characteristics of metal-semiconductor contacts with thin interfacial layer

    NASA Astrophysics Data System (ADS)

    Chattopadhyay, P.

    1994-10-01

    The role of discrete localized states on the current-voltage characteristics of the metal-semiconductor contact is examined. It is seen that, because of these localized states, the logarithmic current vs voltage characteristics become nonlinear. Such nonlinearity is found to be sensitive to the temperature and to the energy and density of the localized states. The predicted temperature dependence of the barrier height and the current-voltage characteristics are in agreement with the experimental results of Aboelfotoh [Phys. Rev. B 39, 5070 (1989)].

  2. An interior-point method for total variation regularized positron emission tomography image reconstruction

    NASA Astrophysics Data System (ADS)

    Bai, Bing

    2012-03-01

    There has been a lot of work on total variation (TV) regularized tomographic image reconstruction recently. Many of them use gradient-based optimization algorithms with a differentiable approximation of the TV functional. In this paper we apply TV regularization in Positron Emission Tomography (PET) image reconstruction. We reconstruct the PET image in a Bayesian framework, using Poisson noise model and TV prior functional. The original optimization problem is transformed to an equivalent problem with inequality constraints by adding auxiliary variables. Then we use an interior point method with logarithmic barrier functions to solve the constrained optimization problem. In this method, a series of points approaching the solution from inside the feasible region are found by solving a sequence of subproblems characterized by an increasing positive parameter. We use preconditioned conjugate gradient (PCG) algorithm to solve the subproblems directly. The nonnegativity constraint is enforced by bend line search. The exact expression of the TV functional is used in our calculations. Simulation results show that the algorithm converges fast and the convergence is insensitive to the values of the regularization and reconstruction parameters.
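
    The interior-point loop described here (a sequence of barrier subproblems with an increasing positive parameter) can be sketched on a much simpler problem; the toy code below applies a logarithmic barrier to nonnegative least squares with plain gradient descent, whereas the paper uses a Poisson likelihood, a TV prior, and preconditioned conjugate gradients. All sizes and names are made up.

    ```python
    import numpy as np

    def barrier_nnls(A, y, mu=10.0, outer=6, inner=300):
        """Path-following sketch: replace the nonnegativity constraint x >= 0 by
        a logarithmic barrier and solve a sequence of smooth subproblems
            minimize  t * ||A x - y||**2 - sum_i log(x_i)
        with t increased geometrically, here by simple gradient descent."""
        n = A.shape[1]
        x = np.ones(n)                          # strictly feasible starting point
        lip_data = np.linalg.norm(A, 2) ** 2    # Lipschitz constant of the data term
        t = 1.0
        for _ in range(outer):
            for _ in range(inner):
                grad = 2.0 * t * A.T @ (A @ x - y) - 1.0 / x
                step = 1.0 / (2.0 * t * lip_data + np.max(1.0 / x ** 2))
                x_new = x - step * grad
                while np.any(x_new <= 0):       # backtrack to stay in the interior
                    step *= 0.5
                    x_new = x - step * grad
                x = x_new
            t *= mu                             # follow the central path
        return x

    rng = np.random.default_rng(1)
    A = rng.standard_normal((30, 10))
    x_true = np.abs(rng.standard_normal(10))
    y = A @ x_true + 0.01 * rng.standard_normal(30)
    print(np.round(barrier_nnls(A, y), 3))
    print(np.round(x_true, 3))
    ```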

  3. Autonomous facial recognition system inspired by human visual system based logarithmical image visualization technique

    NASA Astrophysics Data System (ADS)

    Wan, Qianwen; Panetta, Karen; Agaian, Sos

    2017-05-01

    Autonomous facial recognition systems are widely used in real-life applications, such as homeland border security, law enforcement identification and authentication, and video-based surveillance analysis. Issues like low image quality and non-uniform illumination, as well as variations in poses and facial expressions, can impair the performance of recognition systems. To address the non-uniform illumination challenge, we present a novel robust autonomous facial recognition system based on the so-called logarithmical image visualization technique, which is inspired by the human visual system. In this paper, the proposed method, for the first time, utilizes the logarithmical image visualization technique coupled with the local binary pattern to perform discriminative feature extraction for a facial recognition system. The Yale database, the Yale-B database and the ATT database are used for computer simulation accuracy and efficiency testing. The extensive computer simulations demonstrate the method's efficiency, accuracy, and robustness to illumination variation for facial recognition.

  4. Single ricin detection by atomic force microscopy chemomechanical mapping

    NASA Astrophysics Data System (ADS)

    Chen, Guojun; Zhou, Jianfeng; Park, Bosoon; Xu, Bingqian

    2009-07-01

    The authors report on a study of detecting ricin molecules immobilized on a chemically modified Au(111) surface by chemomechanically mapping the molecular interactions with a chemically modified atomic force microscopy (AFM) tip. AFM images resolved the different fold-up conformations of single ricin molecules as well as the intramolecular structure of their A- and B-chains. An AFM force spectroscopy study of the interaction indicates that the unbinding force has a linear relation with the logarithmic force loading rate, which agrees well with calculations using a one-barrier bond dissociation model.
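
    The linear force-versus-log(loading rate) behaviour noted above is commonly analysed with the standard Bell-Evans single-barrier model; the sketch below fits that relation to forces generated from the model itself (all parameters made up) to recover a barrier width and off-rate, and is only an illustration of the analysis, not of this study's data.

    ```python
    import numpy as np

    # Bell-Evans relation for the most probable rupture force at loading rate r:
    #   F* = (kB*T / x_b) * ln(r * x_b / (k_off * kB * T)),
    # i.e. F* is linear in ln(r); a straight-line fit gives x_b and k_off.
    kT = 4.11e-21                        # J (room temperature)
    x_b_true, k_off_true = 0.5e-9, 1.0   # barrier width 0.5 nm, off-rate 1 /s (made up)

    r = np.logspace(-10, -7, 8)          # loading rates in N/s (0.1 - 100 nN/s)
    force = (kT / x_b_true) * np.log(r * x_b_true / (k_off_true * kT))

    slope, intercept = np.polyfit(np.log(r), force, 1)
    x_b = kT / slope
    k_off = x_b / (kT * np.exp(intercept / slope))
    print("x_b =", x_b, "m;  k_off =", k_off, "1/s")   # recovers 0.5e-9 and 1.0
    ```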

  5. Two-Jet Rate in e+e- at Next-to-Next-to-Leading-Logarithmic Order

    NASA Astrophysics Data System (ADS)

    Banfi, Andrea; McAslan, Heather; Monni, Pier Francesco; Zanderighi, Giulia

    2016-10-01

    We present the first next-to-next-to-leading-logarithmic resummation for the two-jet rate in e+e- annihilation in the Durham and Cambridge algorithms. The results are obtained by extending the ARES method to observables involving any global, recursively infrared and collinear safe jet algorithm in e+e- collisions. As opposed to other methods, this approach does not require a factorization theorem for the observables. We present predictions matched to next-to-next-to-leading order and a comparison to LEP data.

  6. Transfer couplings and hindrance far below the barrier for 40Ca + 96Zr

    DOE PAGES

    Stefanini, A. M.; Montagnoli, G.; Esbensen, H.; ...

    2015-01-29

    The sub-barrier fusion excitation function of 40Ca + 96Zr has been measured down to cross sections ≃2.4 µb, i.e. two orders of magnitude smaller than obtained in the previous experiment, where the sub-barrier fusion of this system was found to be greatly enhanced with respect to 40Ca + 90Zr, and the need of coupling to transfer channels was suggested. The purpose of this work was to investigate the behavior of 40Ca + 96Zr fusion far below the barrier. The smooth trend of the excitation function has been found to continue, and the logarithmic slope increases very slowly. No indication of hindrance shows up, and a comparison with 48Ca + 96Zr is very useful in this respect. A new CC analysis of the complete excitation function has been performed, including explicitly one- and two-nucleon Q > 0 transfer channels. Such transfer couplings bring significant cross section enhancements, even at the level of a few µb. Locating the hindrance threshold, if any, in 40Ca + 96Zr would require challenging measurements of cross sections in the sub-µb range.
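
    The "logarithmic slope" quoted in this kind of analysis is usually the logarithmic derivative L(E) = d ln(σE)/dE of the excitation function; the short sketch below computes it by finite differences from a made-up table of cross sections, not from the data of this experiment.

    ```python
    import numpy as np

    # Logarithmic slope of a fusion excitation function, L(E) = d ln(sigma * E) / dE,
    # evaluated by finite differences on an invented set of points (E in MeV, sigma in mb).
    E = np.array([88.0, 90.0, 92.0, 94.0, 96.0])
    sigma = np.array([2.4e-3, 2.1e-2, 0.15, 0.95, 4.8])
    L = np.gradient(np.log(sigma * E), E)      # units: MeV**-1
    print(np.round(L, 2))
    ```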

  7. A simplified application of the method of operators to the calculation of disturbed motions of an airplane

    NASA Technical Reports Server (NTRS)

    Jones, Robert T

    1937-01-01

    A simplified treatment of the application of Heaviside's operational methods to problems of airplane dynamics is given. Certain graphical methods and logarithmic formulas that lessen the amount of computation involved are explained. The problem representing a gust disturbance or control manipulation is taken up and it is pointed out that in certain cases arbitrary control manipulations may be dealt with as though they imposed specific constraints on the airplane, thus avoiding the necessity of any integration. The application of the calculations described in the text is illustrated by several examples chosen to show the use of the methods and the practicability of the graphical and logarithmic computations described.

  8. The impact of a cold chain break on the survival of Salmonella enterica and Listeria monocytogenes on minimally processed 'Conference' pears during their shelf life.

    PubMed

    Colás-Medà, Pilar; Viñas, Inmaculada; Alegre, Isabel; Abadias, Maribel

    2017-07-01

    In recent years, improved detection methods and increased fresh-cut processing of produce have led to an increased number of outbreaks associated with fresh fruits and vegetables. During fruit and vegetable processing, natural protective barriers are removed and tissues are cut, causing nutrient rich exudates and providing attachment sites for microbes. Consequently, fresh-cut produce is more susceptible to microbial proliferation than whole produce. The aim of this study was to examine the impact of storage temperature on the growth and survival of Listeria monocytogenes and Salmonella enterica on a fresh-cut 'Conference' pear over an 8 day storage period. Pears were cut, dipped in antioxidant solution, artificially inoculated with L. monocytogenes and S. enterica, packed under modified atmospheric conditions simulating commercial applications and stored in properly refrigerated conditions (constant storage at 4 °C for 8 days) or in temperature abuse conditions (3 days at 4 °C plus 5 days at 8 °C). After 8 days of storage, both conditions resulted in a significant decrease of S. enterica populations on pear wedges. In contrast, when samples were stored at 4 °C for 8 days, L. monocytogenes populations increased 1.6 logarithmic units, whereas under the temperature abuse conditions, L. monocytogenes populations increased 2.2 logarithmic units. Listeria monocytogenes was able to grow on fresh-cut pears processed under the conditions described here, despite low pH, refrigeration and use of modified atmosphere. © 2016 Society of Chemical Industry.

  9. Coulomb Logarithm in Nonideal and Degenerate Plasmas

    NASA Astrophysics Data System (ADS)

    Filippov, A. V.; Starostin, A. N.; Gryaznov, V. K.

    2018-03-01

    Various methods for determining the Coulomb logarithm in the kinetic theory of transport are considered, together with various variants of the choice of the plasma screening constant, taking into account or disregarding the contribution of the ion component and the boundary value of the electron wavevector. The correlation of ions is taken into account using the Ornstein-Zernike integral equation in the hypernetted-chain approximation. It is found that the effect of ion correlation in a nondegenerate plasma is weak, while in a degenerate plasma, this effect must be taken into account when screening is determined by the electron component alone. The calculated values of the electrical conductivity of a hydrogen plasma are compared with the values determined experimentally in the megabar pressure range. It is shown that the values of the Coulomb logarithm can indeed be smaller than unity. Special experiments are proposed for a more exact determination of the Coulomb logarithm in a magnetic field at extremely high pressures, for which electron scattering by ions prevails.
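
    For orientation, the elementary textbook estimate of the Coulomb logarithm, ln Λ = ln(λ_D / b_min), can be evaluated in a few lines; the sketch below (with arbitrarily chosen hydrogen-plasma conditions) shows how the naive estimate collapses toward and below unity at megabar-range densities, which is the regime the paper's more careful treatment addresses.

    ```python
    import numpy as np
    from scipy.constants import e, k, epsilon_0, m_e, hbar

    def coulomb_log_classical(n_e, T_e):
        """Textbook estimate ln(Lambda) = ln(lambda_D / b_min), with b_min the
        larger of the classical distance of closest approach and the thermal de
        Broglie length (SI units: n_e in m**-3, T_e in K). Ion screening,
        correlations and degeneracy, treated in the paper, are ignored here."""
        lambda_d = np.sqrt(epsilon_0 * k * T_e / (n_e * e ** 2))
        b_classical = e ** 2 / (4.0 * np.pi * epsilon_0 * k * T_e)
        b_quantum = hbar / np.sqrt(m_e * k * T_e)
        return np.log(lambda_d / max(b_classical, b_quantum))

    print(coulomb_log_classical(1e19, 1e5))   # dilute, hot plasma: ln(Lambda) ~ 10
    print(coulomb_log_classical(1e29, 2e4))   # dense hydrogen: the naive estimate
                                              # falls below 1 (here even negative),
                                              # signalling its breakdown
    ```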

  10. Logarithmic Laplacian Prior Based Bayesian Inverse Synthetic Aperture Radar Imaging.

    PubMed

    Zhang, Shuanghui; Liu, Yongxiang; Li, Xiang; Bi, Guoan

    2016-04-28

    This paper presents a novel Inverse Synthetic Aperture Radar (ISAR) imaging algorithm based on a new sparse prior, known as the logarithmic Laplacian prior. The newly proposed logarithmic Laplacian prior has a narrower main lobe with higher tail values than the Laplacian prior, which helps to achieve performance improvement on sparse representation. The logarithmic Laplacian prior is used for ISAR imaging within the Bayesian framework to achieve a better focused radar image. In the proposed method of ISAR imaging, the phase errors are jointly estimated based on the minimum entropy criterion to accomplish autofocusing. The maximum a posteriori (MAP) estimation and the maximum likelihood estimation (MLE) are utilized to estimate the model parameters to avoid a manual tuning process. Additionally, the fast Fourier transform (FFT) and the Hadamard product are used to reduce the required computation. Experimental results based on both simulated and measured data validate that the proposed algorithm outperforms the traditional sparse ISAR imaging algorithms in terms of resolution improvement and noise suppression.

  11. Large-amplitude nuclear motion formulated in terms of dissipation of quantum fluctuations

    NASA Astrophysics Data System (ADS)

    Kuzyakin, R. A.; Sargsyan, V. V.; Adamian, G. G.; Antonenko, N. V.

    2017-01-01

    The potential-barrier penetrability and quasi-stationary thermal-decay rate of a metastable state are formulated in terms of microscopic quantum diffusion. Apart from linear coupling in momentum between the collective and internal subsystems, the formalism embraces the more general case of linear couplings in both the momentum and the coordinates. The developed formalism is then used for describing the process of projectile-nucleus capture by a target nucleus at incident energies near and below the Coulomb barrier. The capture partial probability, which determines the cross section for formation of a dinuclear system, is derived in analytical form. The total and partial capture cross sections, mean and root-mean-square angular momenta of the formed dinuclear system, astrophysical S factors, logarithmic derivatives, and barrier distributions are derived for various reactions. Also investigated are the effects of nuclear static deformation and neutron transfer between the interacting nuclei on the capture cross section and its isotopic dependence, and the entrance-channel effects on the capture process. The results of calculations for reactions involving both spherical and deformed nuclei are in good agreement with available experimental data.

  12. Logarithmic entropy of Kehagias-Sfetsos black hole with self-gravitation in asymptotically flat IR modified Hořava gravity

    NASA Astrophysics Data System (ADS)

    Liu, Molin; Lu, Junwang

    2011-05-01

    Motivated by the recent logarithmic entropy of Hořava-Lifshitz gravity, we investigate Hawking radiation for the Kehagias-Sfetsos black hole from the tunneling perspective. After considering the effect of self-gravitation, we calculate the emission rate and entropy of quantum tunneling by using the Kraus-Parikh-Wilczek method. Meanwhile, both massless and massive particles are considered in this Letter. Interestingly, the two types of tunneling particles have the same emission rate Γ and entropy S_b, whose analytical formulae are Γ = exp[π(r_in² − r_out²)/2 + (π/α) ln(r_in/r_out)] and S_b = A/4 + (π/α) ln(A/4), respectively. Here, α is the Hořava-Lifshitz field parameter. The results show that the logarithmic entropy of Hořava-Lifshitz gravity could be explained well by the self-gravitation, which is totally different from other methods. The study of this semiclassical tunneling process may shed light on understanding Hořava-Lifshitz gravity.

  13. Prediction of Soil pH Hyperspectral Spectrum in Guanzhong Area of Shaanxi Province Based on PLS

    NASA Astrophysics Data System (ADS)

    Liu, Jinbao; Zhang, Yang; Wang, Huanyuan; Cheng, Jie; Tong, Wei; Wei, Jing

    2017-12-01

    The soil pH of Fufeng County, Yangling County and Wugong County in Shaanxi Province was studied. The spectral reflectance was measured with an ASD FieldSpec HR portable field spectrometer, and its spectral characteristics were analyzed. The first derivative of the original spectral reflectance, the second derivative, the logarithm of the reciprocal reflectance, the first-order differential of the reciprocal logarithm, and the second-order differential of the reciprocal logarithm were used to establish soil pH spectral prediction models. The results showed that the correlation between the reflectance spectra after SNV pre-treatment and the soil pH was significantly improved. The optimal prediction model of soil pH established by the partial least squares method was the model based on the first-order differential of the reciprocal logarithm of the spectral reflectance. The number of principal component factors was 10, the calibration coefficient of determination Rc² = 0.9959, the model root mean square error RMSEC = 0.0076, and the calibration deviation SEC = 0.0077; the validation coefficient of determination Rv² = 0.9893, the predicted root mean square error RMSEP = 0.0157, and the validation deviation SEP = 0.0160. The model was stable, both the fitting ability and the prediction ability were high, and the soil pH can be measured quickly.
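
    A minimal sketch of the kind of pipeline described above (first derivative of log(1/R) followed by partial least squares) is shown below on synthetic spectra; the data, wavelength range, and number of components are invented, and scikit-learn's PLSRegression stands in for whatever software the authors used.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.metrics import mean_squared_error

    # Synthetic reflectance spectra whose shape depends weakly on pH (made up).
    rng = np.random.default_rng(0)
    n_samples, n_bands = 120, 200
    ph = rng.uniform(6.5, 8.5, n_samples)
    wavelengths = np.linspace(400, 2400, n_bands)
    R = 0.3 + 0.1 * np.exp(-((wavelengths - 1900) / 150) ** 2)[None, :] * ph[:, None] / 8.0
    R += 0.005 * rng.standard_normal((n_samples, n_bands))

    # Preprocessing: first derivative of log(1/R) along the wavelength axis.
    X = np.gradient(np.log(1.0 / R), axis=1)

    train, test = slice(0, 90), slice(90, None)
    pls = PLSRegression(n_components=10).fit(X[train], ph[train])
    pred = pls.predict(X[test]).ravel()
    print("RMSEP:", mean_squared_error(ph[test], pred) ** 0.5)
    ```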

  14. Viscoelastic subdiffusion: from anomalous to normal.

    PubMed

    Goychuk, Igor

    2009-10-01

    We study viscoelastic subdiffusion in bistable and periodic potentials within the generalized Langevin equation approach. Our results justify the (ultra)slow fluctuating rate view of the corresponding bistable non-Markovian dynamics, which displays bursting and anticorrelation of the residence times in two potential wells. The transition kinetics is asymptotically stretched exponential when the potential barrier V₀ exceeds the thermal energy k_BT several times [V₀ ≈ (2-10) k_BT], and it cannot be described by the non-Markovian rate theory (NMRT). The well-known NMRT result approximates, however, ever better with increasing barrier height, the most probable logarithm of the residence times. Moreover, the rate description is gradually restored when the barrier height exceeds a fuzzy borderline which depends on the power-law exponent α of free subdiffusion. Such potential-free subdiffusion is ergodic. Surprisingly, in periodic potentials it is not sensitive to the barrier height in the long-time asymptotic limit. However, the transient to this asymptotic regime is extremely slow and it does profoundly depend on the barrier height. The time scale of such subdiffusion can exceed the mean residence time in a potential well or in a finite spatial domain by many orders of magnitude. All these features are in sharp contrast with an alternative subdiffusion mechanism involving jumps among traps with a divergent mean residence time in these traps.

  15. Rectification of depth measurement using pulsed thermography with logarithmic peak second derivative method

    NASA Astrophysics Data System (ADS)

    Li, Xiaoli; Zeng, Zhi; Shen, Jingling; Zhang, Cunlin; Zhao, Yuejin

    2018-03-01

    The logarithmic peak second derivative (LPSD) method is the most popular method for depth prediction in pulsed thermography. It is widely accepted that this method is independent of defect size. The theoretical model for the LPSD method is based on the one-dimensional solution of heat conduction, which does not consider the effect of defect size. When a decay term that accounts for the defect aspect ratio is introduced into the solution to correct for the three-dimensional thermal diffusion effect, the analytical model shows that the LPSD method is in fact affected by defect size. Furthermore, we constructed the relation between the characteristic time of the LPSD method and the defect aspect ratio, which was verified with experimental results from stainless steel and glass fiber reinforced plate (GFRP) samples. We also propose an improved LPSD method for depth prediction when the effect of defect size is considered, and the rectification results for the stainless steel and GFRP samples are presented and discussed.

  16. A comparison of lidar inversion methods for cirrus applications

    NASA Technical Reports Server (NTRS)

    Elouragini, Salem; Flamant, Pierre H.

    1992-01-01

    Several methods for inverting the lidar equation are suggested as means to derive the cirrus optical properties (backscatter coefficient beta, extinction coefficient alpha, and optical depth delta) at one wavelength. The lidar equation can be inverted in a linear or logarithmic form; either solution assumes a linear relationship beta = kappa * alpha, where kappa is the lidar ratio. A number of problems prevent us from calculating alpha (or beta) with good accuracy. Some of these are as follows: (1) the multiple scattering effect (most authors neglect it); (2) an absolute calibration of the lidar system (difficult and sometimes not possible); (3) lack of accuracy on the lidar ratio kappa (taken as constant, but in fact it varies with range and cloud species); and (4) the determination of the boundary condition for the logarithmic solution, which depends on the signal-to-noise ratio (SNR) at cloud top. An inversion in a linear form needs an absolute calibration of the system. In practice one uses molecular backscattering below the cloud to calibrate the system. This method is not always reliable because the turbidity of the lower atmosphere is variable. For a logarithmic solution, a reference extinction coefficient alpha_f at cloud top is required. Several methods to determine alpha_f were suggested. We tested these methods at low SNR. This led us to propose two new methods referenced as S1 and S2.

  17. The orientation distribution of tunneling-related quantities

    NASA Astrophysics Data System (ADS)

    Seif, W. M.; Refaie, A. I.; Botros, M. M.

    2018-03-01

    In nuclear tunneling processes involving deformed nuclei, most of the tunneling-related quantities depend on the relative orientations of the participating nuclei. In the presence of different multipole deformations, we study the variation of a few relevant quantities for the α-decay and sub-barrier fusion processes with the orientation degree of freedom. The knocking frequency and the penetration probability are evaluated within the Wentzel-Kramers-Brillouin approximation. The interaction potential is calculated with a Skyrme-type nucleon-nucleon interaction. We found that the width of the potential pocket, the Coulomb barrier radius, the penetration probability, the α-decay width, and the fusion cross-section follow consistently the orientation-angle variation of the radius of the deformed nucleus. The orientation distribution patterns of the pocket width, the barrier radius, the logarithms of the penetrability, the decay width, and the fusion cross-section are found to be highly analogous to the pattern of the deformed-nucleus radius. The orientation-angle distributions of the internal pocket depth, the Coulomb barrier height and width, as well as the knocking frequency, inversely follow the variation of the deformed-nucleus radius. The predicted orientation behaviors will be of special interest in predicting the optimum orientations for tunneling processes.

  18. Magnetic field induced suppression of the forward bias current in Bi2Se3/Si Schottky barrier diodes

    NASA Astrophysics Data System (ADS)

    Jin, Haoming; Hebard, Arthur

    Schottky diodes formed by van der Waals bonding between freshly cleaved flakes of the topological insulator Bi2Se3 and doped silicon substrates show electrical characteristics in good agreement with thermionic emission theory. The motivation is to use magnetic fields to modulate the conductance of the topologically protected conducting surface state. This surface state, in close proximity to the semiconductor surface, may play an important role in determining the nature of the Schottky barrier. Current-voltage (I-V) and capacitance-voltage (C-V) characteristics were obtained for temperatures in the range 50-300 K and magnetic fields, both perpendicular and parallel to the interface, as high as 7 T. The I-V curves show more than 6 decades of linearity on semi-logarithmic plots, allowing extraction of parameters such as the ideality (η), the zero-voltage Schottky barrier height (SBH), and the series resistance (Rs). In forward bias we observe a field-induced decrease in current which becomes increasingly more pronounced at higher voltages and lower temperatures, and is found to be correlated with changes in Rs rather than with the other barrier parameters. A comparison of the changes in Rs in both field directions will be made with the magnetoresistance obtained in Bi2Se3 transport measurements. The work is supported by NSF through DMR 1305783.

  19. Activated dynamics in dense fluids of attractive nonspherical particles. II. Elasticity, barriers, relaxation, fragility, and self-diffusion

    NASA Astrophysics Data System (ADS)

    Tripathy, Mukta; Schweizer, Kenneth S.

    2011-04-01

    In paper II of this series we apply the center-of-mass version of Nonlinear Langevin Equation theory to study how short-range attractive interactions influence the elastic shear modulus, transient localization length, activated dynamics, and kinetic arrest of a variety of nonspherical particle dense fluids (and the spherical analog) as a function of volume fraction and attraction strength. The activation barrier (roughly the natural logarithm of the dimensionless relaxation time) is predicted to be a rich function of particle shape, volume fraction, and attraction strength, and the dynamic fragility varies significantly with particle shape. At fixed volume fraction, the barrier grows in a parabolic manner with inverse temperature nondimensionalized by an onset value, analogous to what has been established for thermal glass-forming liquids. Kinetic arrest boundaries lie at significantly higher volume fractions and attraction strengths relative to their dynamic crossover analogs, but their particle shape dependence remains the same. A limited universality of barrier heights is found based on the concept of an effective mean-square confining force. The mean hopping time and self-diffusion constant in the attractive glass region of the nonequilibrium phase diagram is predicted to vary nonmonotonically with attraction strength or inverse temperature, qualitatively consistent with recent computer simulations and colloid experiments.

  20. Evaluation of Low-Voltage Distribution Network Index Based on Improved Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Fan, Hanlu; Gao, Suzhou; Fan, Wenjie; Zhong, Yinfeng; Zhu, Lei

    2018-01-01

    In order to evaluate the development level of the low-voltage distribution network objectively and scientifically, a hierarchical analysis method is utilized to construct the evaluation index model of the low-voltage distribution network. Based on principal component analysis and the characteristic logarithmic distribution of the index data, a logarithmic centering method is adopted to improve the principal component analysis algorithm. The algorithm can decorrelate and reduce the dimensions of the evaluation model, and the comprehensive score has a better degree of dispersion. A clustering method is adopted to analyse the comprehensive scores because the scores of the station areas are concentrated. A stratified evaluation of the station areas is thereby realized. An example is given to verify the objectivity and scientificity of the evaluation method.

  1. Multiplicative noise removal via a learned dictionary.

    PubMed

    Huang, Yu-Mei; Moisan, Lionel; Ng, Michael K; Zeng, Tieyong

    2012-11-01

    Multiplicative noise removal is a challenging image processing problem, and most existing methods are based on the maximum a posteriori formulation and the logarithmic transformation of multiplicative denoising problems into additive denoising problems. Sparse representations of images have been shown to be efficient approaches for image recovery. Following this idea, in this paper, we propose to learn a dictionary from the logarithmic transformed image, and then to use it in a variational model built for noise removal. Extensive experimental results suggest that in terms of visual quality, peak signal-to-noise ratio, and mean absolute deviation error, the proposed algorithm outperforms state-of-the-art methods.
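
    The log-transform step this paper builds on is easy to demonstrate in isolation: multiplicative noise f = u·n becomes additive after taking logarithms, so an additive denoiser can be applied in the log domain and the result exponentiated back. The sketch below uses a Gaussian filter as a stand-in for the paper's learned-dictionary variational model, with made-up test data.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def remove_multiplicative_noise(image, sigma=2.0):
        """Multiplicative model f = u * n; in the log domain this is additive,
        log f = log u + log n, so denoise log f and exponentiate back."""
        log_f = np.log(np.maximum(image, 1e-6))
        log_u = gaussian_filter(log_f, sigma)   # any additive denoiser fits here
        return np.exp(log_u)

    rng = np.random.default_rng(0)
    clean = np.ones((64, 64)); clean[16:48, 16:48] = 4.0
    speckle = rng.gamma(shape=10.0, scale=1.0 / 10.0, size=clean.shape)  # mean-1 noise
    noisy = clean * speckle
    print("noisy MAE:", np.abs(noisy - clean).mean(),
          "denoised MAE:", np.abs(remove_multiplicative_noise(noisy) - clean).mean())
    ```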

  2. Fusion of 48Ti+58Fe and 58Ni+54Fe below the Coulomb barrier

    NASA Astrophysics Data System (ADS)

    Stefanini, A. M.; Montagnoli, G.; Corradi, L.; Courtin, S.; Bourgin, D.; Fioretto, E.; Goasduff, A.; Grebosz, J.; Haas, F.; Mazzocco, M.; Mijatović, T.; Montanari, D.; Pagliaroli, M.; Parascandolo, C.; Scarlassara, F.; Strano, E.; Szilner, S.; Toniolo, N.; Torresi, D.

    2015-12-01

    Background: No data on the fusion excitation function of 48Ti+58Fe in the energy region near the Coulomb barrier existed prior to the present work, while fusion of 58Ni+54Fe was investigated in detail some years ago, down to very low energies, and clear evidence of fusion hindrance was noticed at relatively high cross sections. 48Ti and 58Fe are soft and have a low-lying quadrupole excitation at ≈800-900 keV only. Instead, 58Ni and 54Fe have a closed shell (protons and neutrons, respectively) and are rather rigid. Purpose: We aim to investigate (1) the possible influence of the different structures of the involved nuclei on the fusion excitation functions far below the barrier and, in particular, (2) whether hindrance is observed in 48Ti+58Fe, and to compare the results with current coupled-channels models. Methods: 48Ti beams from the XTU Tandem accelerator of INFN-Laboratori Nazionali di Legnaro were used. The experimental setup was based on an electrostatic beam separator, and fusion-evaporation residues (ERs) were detected at very forward angles. Angular distributions of ERs were measured. Results: Fusion cross sections of 48Ti+58Fe have been obtained in a range of nearly six orders of magnitude around the Coulomb barrier, down to σ ≃ 2 μb. The sub-barrier cross sections of 48Ti+58Fe are much larger than those of 58Ni+54Fe. Significant differences are also observed in the logarithmic derivatives and astrophysical S factors. No evidence of hindrance is observed, because coupled-channels calculations using a standard Woods-Saxon potential are able to reproduce the data in the whole measured energy range. Analogous calculations for 58Ni+54Fe predict clearly too large cross sections at low energies. The two fusion barrier distributions are wide and display a complex structure that is only qualitatively fit by the calculations. Conclusions: It is pointed out that all these different trends originate from the dissimilar low-energy nuclear structures of the involved nuclei. In particular, the strong quadrupole excitations in 48Ti and 58Fe produce the relative cross section enhancement and make the barrier distribution ≈2 MeV wider, thus probably pushing the threshold for hindrance below the measured limit.

  3. Brownian motion in time-dependent logarithmic potential: Exact results for dynamics and first-passage properties.

    PubMed

    Ryabov, Artem; Berestneva, Ekaterina; Holubec, Viktor

    2015-09-21

    The paper addresses Brownian motion in the logarithmic potential with time-dependent strength, U(x, t) = g(t)log(x), subject to the absorbing boundary at the origin of coordinates. Such a model can represent the kinetics of diffusion-controlled reactions of charged molecules or the escape of Brownian particles over a time-dependent entropic barrier at the end of a biological pore. We present a simple asymptotic theory which yields the long-time behavior of both the survival probability (first-passage properties) and the moments of the particle position (dynamics). The asymptotic survival probability, i.e., the probability that the particle will not hit the origin before a given time, is a functional of the potential strength. As such, it exhibits a rather varied behavior for different functions g(t). The latter can be grouped into three classes according to the regime of the asymptotic decay of the survival probability. We distinguish 1. the regular regime (power-law decay), 2. the marginal regime (power law times a slow function of time), and 3. the regime of enhanced absorption (decay faster than the power law, e.g., exponential). Results of the asymptotic theory show good agreement with numerical simulations.

  4. Homotopy method for optimization of variable-specific-impulse low-thrust trajectories

    NASA Astrophysics Data System (ADS)

    Chi, Zhemin; Yang, Hongwei; Chen, Shiyu; Li, Junfeng

    2017-11-01

    The homotopy method has been used as a useful tool in solving fuel-optimal trajectories with constant-specific-impulse low thrust. However, the specific impulse is often variable for many practical solar electric power-limited thrusters. This paper investigates the application of the homotopy method for optimization of variable-specific-impulse low-thrust trajectories. Difficulties arise when the two commonly-used homotopy functions are employed for trajectory optimization. The optimal power throttle level and the optimal specific impulse are coupled with the commonly-used quadratic and logarithmic homotopy functions. To overcome these difficulties, a modified logarithmic homotopy function is proposed to serve as a gateway for trajectory optimization, leading to decoupled expressions of both the optimal power throttle level and the optimal specific impulse. The homotopy method based on this homotopy function is proposed. Numerical simulations validate the feasibility and high efficiency of the proposed method.

  5. Alternating current (AC) iontophoretic transport across human epidermal membrane: effects of AC frequency and amplitude.

    PubMed

    Yan, Guang; Xu, Qingfang; Anissimov, Yuri G; Hao, Jinsong; Higuchi, William I; Li, S Kevin

    2008-03-01

    As a continuing effort to understand the mechanisms of alternating current (AC) transdermal iontophoresis and the iontophoretic transport pathways in the stratum corneum (SC), the objectives of the present study were to determine the interplay of AC frequency, AC voltage, and iontophoretic transport of ionic and neutral permeants across human epidermal membrane (HEM) and use AC as a means to characterize the transport pathways. Constant AC voltage iontophoresis experiments were conducted with HEM in 0.10 M tetraethyl ammonium pivalate (TEAP). AC frequencies ranging from 0.0001 to 25 Hz and AC applied voltages of 0.5 and 2.5 V were investigated. Tetraethyl ammonium (TEA) and arabinose (ARA) were the ionic and neutral model permeants, respectively. In data analysis, the logarithm of the permeability coefficients of HEM for the model permeants was plotted against the logarithm of the HEM electrical resistance for each AC condition. As expected, linear correlations between the logarithms of permeability coefficients and the logarithms of resistances of HEM were observed, and the permeability data were first normalized and then compared at the same HEM electrical resistance using these correlations. Transport enhancement of the ionic permeant was significantly larger than that of the neutral permeant during AC iontophoresis. The fluxes of the ionic permeant during AC iontophoresis of 2.5 V in the frequency range from 5 to 1,000 Hz were relatively constant and were approximately 4 times over those of passive transport. When the AC frequency decreased from 5 to 0.001 Hz at 2.5 V, flux enhancement increased to around 50 times over passive transport. While the AC frequency for achieving the full effect of iontophoretic enhancement at low AC frequency was lower than anticipated, the frequency for approaching passive diffusion transport at high frequency was higher than expected from the HEM morphology. These observations are consistent with a transport model of multiple barriers in series and the previous hypothesis that the iontophoresis pathways across HEM under AC behave like a series of reservoirs interconnected by short pore pathways.
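
    The normalization step described above, correlating log permeability with log membrane resistance and then comparing data at a common resistance, can be sketched as a simple log-log regression. The numbers below are made-up placeholders for the resistances and permeability coefficients; only the analysis pattern, not the data, reflects the study.

    ```python
    import numpy as np

    # Sketch of the described analysis step (illustrative data, not the study's):
    # regress log10(permeability) on log10(HEM resistance), then use the fit to
    # normalize all permeability values to a common reference resistance.
    log_R = np.log10(np.array([5.0e3, 8.0e3, 1.2e4, 2.0e4, 3.5e4]))       # ohm*cm^2 (hypothetical)
    log_P = np.log10(np.array([4.0e-6, 2.6e-6, 1.8e-6, 1.1e-6, 0.7e-6]))  # cm/s (hypothetical)

    slope, intercept = np.polyfit(log_R, log_P, 1)      # linear log-log correlation

    def normalize_to(log_R_ref, log_R_obs, log_P_obs):
        """Shift observed permeabilities along the fitted line to a reference resistance."""
        return log_P_obs + slope * (log_R_ref - log_R_obs)

    log_P_norm = normalize_to(np.log10(1.0e4), log_R, log_P)
    print("slope =", round(slope, 3), " normalized P (cm/s):", 10**log_P_norm)
    ```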

  6. Quantum square-well with logarithmic central spike

    NASA Astrophysics Data System (ADS)

    Znojil, Miloslav; Semorádová, Iveta

    2018-01-01

    A singular repulsive barrier V(x) = -g ln(|x|) inside a square well is interpreted and studied as a linear analog of the state-dependent interaction ℒ_eff(x) = -g ln[ψ*(x)ψ(x)] in the nonlinear Schrödinger equation. In the linearized case, Rayleigh-Schrödinger perturbation theory is shown to provide a closed-form spectrum at sufficiently small g or after an amendment of the unperturbed Hamiltonian. At any spike strength g, the model remains solvable numerically, by the matching of wave functions. Analytically, the singularity is shown to be regularized via the change of variables x = e^y, which interchanges the roles of the asymptotic and central boundary conditions.
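
    A rough numerical illustration of such a spectrum (assuming units with ħ = 2m = 1 and placeholder values of g, the well half-width L, and the grid size) can be obtained by finite-difference diagonalization, rather than the wave-function matching used in the paper:

    ```python
    import numpy as np

    # Illustrative sketch (units hbar = 2m = 1): low-lying spectrum of a particle in a
    # box x in (-L, L) with the central logarithmic spike V(x) = -g*ln(|x|), obtained
    # by finite-difference diagonalization of the Hamiltonian on a uniform grid.
    L, N, g = 1.0, 1000, 1.0
    x = np.linspace(-L, L, N + 2)[1:-1]          # interior grid points (hard walls at +-L)
    h = x[1] - x[0]

    V = -g * np.log(np.abs(x))                   # grid avoids x = 0 exactly when N is even
    H = (np.diag(np.full(N, 2.0 / h**2) + V)
         + np.diag(np.full(N - 1, -1.0 / h**2), 1)
         + np.diag(np.full(N - 1, -1.0 / h**2), -1))

    energies = np.linalg.eigvalsh(H)
    print("lowest levels:", energies[:4])
    ```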

  7. Precise Determination of the Absorption Maximum in Wide Bands

    ERIC Educational Resources Information Center

    Eriksson, Karl-Hugo; And Others

    1977-01-01

    A precise method of determining absorption maxima where Gaussian functions occur is described. The method is based on a logarithmic transformation of the Gaussian equation and is suited for a mini-computer. (MR)
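
    The idea is easy to reproduce: the logarithm of a Gaussian band is a parabola, so a quadratic fit to log-transformed absorbances near the peak pins down the band maximum. The sketch below uses synthetic data and placeholder band parameters, not values from the article.

    ```python
    import numpy as np

    # Sketch of the described idea: for a Gaussian band A(x) = A0*exp(-(x-x0)^2/(2 s^2)),
    # ln A is a parabola in x, so a quadratic fit to ln(absorbance) near the peak
    # locates the band maximum x0 precisely.  The data below are synthetic.
    rng = np.random.default_rng(1)
    wavelength = np.linspace(480.0, 520.0, 41)                   # nm
    absorb = 0.80 * np.exp(-(wavelength - 501.3)**2 / (2 * 9.0**2))
    absorb += rng.normal(0.0, 0.002, absorb.size)                # small measurement noise

    c2, c1, c0 = np.polyfit(wavelength, np.log(absorb), 2)       # ln A = c2*x^2 + c1*x + c0
    x_max = -c1 / (2.0 * c2)                                     # vertex of the parabola
    print("estimated absorption maximum: %.2f nm" % x_max)
    ```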

  8. Kinetics of drug release from ointments: Role of transient-boundary layer.

    PubMed

    Xu, Xiaoming; Al-Ghabeish, Manar; Krishnaiah, Yellela S R; Rahman, Ziyaur; Khan, Mansoor A

    2015-10-15

    In the current work, an in vitro release testing method suitable for ointment formulations was developed using acyclovir as a model drug. Release studies were carried out using enhancer cells on acyclovir ointments prepared with oleaginous, absorption, and water-soluble bases. The kinetics and mechanism of drug release were found to be highly dependent on the type of ointment base. In oleaginous bases, drug release followed a unique logarithmic-time dependent profile; in both absorption and water-soluble bases, drug release exhibited linearity with respect to the square root of time (Higuchi model), albeit with differences in the overall release profile. To help understand the underlying cause of the logarithmic-time dependency of drug release, a novel transient-boundary hypothesis was proposed, verified, and compared to the Higuchi theory. Furthermore, the impact of drug solubility (under various pH conditions) and temperature on drug release was assessed. Additionally, conditions under which deviations from logarithmic-time drug release kinetics occur were determined using in situ UV fiber-optics. Overall, the results suggest that for oleaginous ointments containing dispersed drug particles, the kinetics and mechanism of drug release are controlled by expansion of a transient boundary layer, and drug release increases linearly with respect to logarithmic time. Published by Elsevier B.V.
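
    The two competing release laws mentioned above can be compared with a short least-squares fit; a sketch follows, with purely illustrative release data standing in for the study's measurements (the logarithmic-time law is fitted as Q = a + b ln t and the Higuchi law as Q = k √t).

    ```python
    import numpy as np

    # Sketch comparing the two release laws discussed: a logarithmic-time profile
    # Q(t) = a + b*ln(t) and the Higuchi square-root law Q(t) = k*sqrt(t).
    # The release data below are synthetic placeholders, not the study's measurements.
    t = np.array([0.5, 1, 2, 4, 6, 8, 12, 24], dtype=float)      # hours
    Q = np.array([12., 19., 26., 33., 37., 40., 44., 51.])       # cumulative release (ug/cm^2)

    b, a = np.polyfit(np.log(t), Q, 1)                # logarithmic-time model
    k = np.sum(np.sqrt(t) * Q) / np.sum(t)            # Higuchi model through the origin

    rss_log = np.sum((Q - (a + b * np.log(t)))**2)
    rss_hig = np.sum((Q - k * np.sqrt(t))**2)
    print("RSS log-time: %.1f   RSS Higuchi: %.1f" % (rss_log, rss_hig))
    ```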

  9. Abelian non-global logarithms from soft gluon clustering

    NASA Astrophysics Data System (ADS)

    Kelley, Randall; Walsh, Jonathan R.; Zuberi, Saba

    2012-09-01

    Most recombination-style jet algorithms cluster soft gluons in a complex way. This leads to previously identified correlations in the soft gluon phase space and introduces logarithmic corrections to jet cross sections, which are known as clustering logarithms. The leading Abelian clustering logarithms occur at least at next-to-leading logarithm (NLL) in the exponent of the distribution. Using the framework of Soft Collinear Effective Theory (SCET), we show that new clustering effects contributing at NLL arise at each order. While numerical resummation of clustering logs is possible, it is unlikely that they can be analytically resummed to NLL. Clustering logarithms are power suppressed for the anti-kT algorithm, which makes it theoretically preferred. They can arise in Abelian and non-Abelian terms, and we calculate the Abelian clustering logarithms at O(α_s^2) for the jet mass distribution using the Cambridge/Aachen and kT algorithms, including jet radius dependence, which extends previous results. We find that clustering logarithms can be naturally thought of as a class of non-global logarithms, which have traditionally been tied to non-Abelian correlations in soft gluon emission.

  10. High-temperature properties of joint interface of VPS-tungsten coated CFC

    NASA Astrophysics Data System (ADS)

    Tamura, S.; Liu, X.; Tokunaga, K.; Tsunekawa, Y.; Okumiya, M.; Noda, N.; Yoshida, N.

    2004-08-01

    Tungsten coated carbon fiber composite (CFC) is a candidate material for the high heat flux components in fusion reactors. In order to investigate the high-temperature properties at the joint interface of the coating, heat load experiments using an electron beam were performed on VPS-tungsten coated CX-2002U samples. After a heat load test for 3.6 ks at 1400 °C, the tungsten-rhenium multilayer (a diffusion barrier for carbon) at the joint interface of the coating was clearly observed. At temperatures above 1600 °C, however, the multilayer disappeared and a tungsten carbide layer formed in the VPS-tungsten coating. At temperatures below 1800 °C, the thickness of this layer increased logarithmically with loading time. At 2000 °C, the growth of the tungsten carbide layer was proportional to the square root of the loading time. These results indicate that the diffusion barrier for carbon cannot be expected to suppress carbide formation at the joint interface of the VPS-tungsten coating above 1600 °C.

  11. Leading logarithmic corrections to the muonium hyperfine splitting and to the hydrogen Lamb shift

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karshenboim, S.G.

    1994-12-31

    Main leading corrections with the recoil logarithm log(M/m) and the low-energy logarithm log(Zα) to the muonium hyperfine splitting are discussed. Logarithmic corrections have magnitudes of 0.1-0.3 kHz. Non-leading higher order corrections are expected to be not larger than 0.1 kHz. The leading logarithmic correction to the hydrogen Lamb shift is also obtained.

  12. On the complexity of a combined homotopy interior method for convex programming

    NASA Astrophysics Data System (ADS)

    Yu, Bo; Xu, Qing; Feng, Guochen

    2007-03-01

    In [G.C. Feng, Z.H. Lin, B. Yu, Existence of an interior pathway to a Karush-Kuhn-Tucker point of a nonconvex programming problem, Nonlinear Anal. 32 (1998) 761-768; G.C. Feng, B. Yu, Combined homotopy interior point method for nonlinear programming problems, in: H. Fujita, M. Yamaguti (Eds.), Advances in Numerical Mathematics, Proceedings of the Second Japan-China Seminar on Numerical Mathematics, Lecture Notes in Numerical and Applied Analysis, vol. 14, Kinokuniya, Tokyo, 1995, pp. 9-16; Z.H. Lin, B. Yu, G.C. Feng, A combined homotopy interior point method for convex programming problem, Appl. Math. Comput. 84 (1997) 193-211], a combined homotopy was constructed for solving non-convex programming and convex programming with weaker conditions, without assuming the logarithmic barrier function to be strictly convex and the solution set to be bounded. It was proven that a smooth interior path from an interior point of the feasible set to a Karush-Kuhn-Tucker (KKT) point of the problem exists. This shows that combined homotopy interior point methods can solve problems that commonly used interior point methods cannot solve. However, so far, there is no result on its complexity, even for linear programming. The main difficulty is that the objective function is not monotonically decreasing on the combined homotopy path. In this paper, by using a piecewise technique, under commonly used conditions, polynomiality of a combined homotopy interior point method is established for convex nonlinear programming.
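
    Because this record, like much of the surrounding literature, builds on the logarithmic barrier, a minimal barrier-method sketch may help fix ideas: the constrained problem is replaced by a sequence of unconstrained minimizations of f(x) - (1/t) Σ log(-g_i(x)) with increasing t. The toy objective, constraints, and step sizes below are hypothetical; this is not the combined homotopy method of the paper.

    ```python
    import numpy as np

    # A minimal logarithmic-barrier sketch (hypothetical objective and constraints,
    # not the combined homotopy of the paper): minimize f(x) subject to g_i(x) <= 0
    # by successively minimizing f(x) - (1/t) * sum(log(-g_i(x))) for increasing t.
    def f(x):      return (x[0] - 1.0)**2 + (x[1] - 2.0)**2
    def grad_f(x): return np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] - 2.0)])

    constraints = [                          # pairs (g_i, grad g_i) with g_i(x) <= 0
        (lambda x: x[0] + x[1] - 2.0, lambda x: np.array([1.0, 1.0])),
        (lambda x: -x[0],             lambda x: np.array([-1.0, 0.0])),
        (lambda x: -x[1],             lambda x: np.array([0.0, -1.0])),
    ]

    def barrier_grad(x, t):
        g = grad_f(x)
        for gi, dgi in constraints:
            g += dgi(x) / (t * (-gi(x)))     # gradient of -(1/t) * log(-g_i(x))
        return g

    def solve(x0, t=1.0, mu=10.0, outer=8, inner=200):
        x = np.array(x0, dtype=float)
        for _ in range(outer):               # follow the central path by increasing t
            for _ in range(inner):           # crude damped gradient descent on the barrier
                d = -barrier_grad(x, t)
                step = 1e-2
                while not all(gi(x + step * d) < 0.0 for gi, _ in constraints):
                    step *= 0.5              # backtrack to stay strictly feasible
                    if step < 1e-12:
                        break
                if step >= 1e-12:
                    x = x + step * d
            t *= mu
        return x

    print("approximate minimizer:", solve([0.5, 0.5]))   # should approach (0.5, 1.5)
    ```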

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jentschura, Ulrich D.; National Institute of Standards and Technology, Gaithersburg, Maryland 20899-8401; Mohr, Peter J.

    We describe the calculation of hydrogenic (one-loop) Bethe logarithms for all states with principal quantum numbers n ≤ 200. While, in principle, the calculation of the Bethe logarithm is a rather easy computational problem involving only the nonrelativistic (Schrödinger) theory of the hydrogen atom, certain calculational difficulties affect highly excited states, and in particular states for which the principal quantum number is much larger than the orbital angular momentum quantum number. Two evaluation methods are contrasted. One of these is based on the calculation of the principal value of a specific integral over a virtual photon energy. The other method relies directly on the spectral representation of the Schrödinger-Coulomb propagator. Selected numerical results are presented. The full set of values is available at arXiv.org/quant-ph/0504002.

  14. Logarithm conformal mapping brings the cloaking effect

    PubMed Central

    Xu, Lin; Chen, Huanyang

    2014-01-01

    Over the past years, invisibility cloaks have been extensively discussed since transformation optics emerged. Generally, the electromagnetic parameters of invisibility cloaks are complicated tensors that are difficult to realize. As a special method of transformation optics, conformal mapping lets us design invisibility cloaks with isotropic materials having a graded refractive index distribution. However, for all proposed isotropic cloaks, the refractive index spans such a wide range that it challenges current experimental fabrication. In this work, we propose two new kinds of logarithm conformal mappings for invisibility device designs. For one of the mappings, the refractive index distribution of the conformal cloak varies from 0 to 9.839, which is more feasible for future implementation. Numerical simulations using the finite element method are performed to confirm the theoretical analysis. PMID:25359138
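
    For orientation, in two-dimensional conformal transformation optics an analytic map w = f(z) from physical space to an empty virtual space is commonly associated with the isotropic index profile n(z) = |df/dz|. The sketch below evaluates this recipe for a generic logarithm-type map f(z) = a log(z) as a stand-in; the paper's own mappings and its quoted index range (0 to 9.839) are not reproduced here.

    ```python
    import numpy as np

    # Hedged sketch of the conformal-mapping recipe this line of work builds on: for an
    # analytic map w = f(z) from physical space z to an empty virtual space (unit index),
    # the required isotropic refractive-index profile is n(z) = |df/dz|.  The map below,
    # f(z) = a*log(z), is only a stand-in for the paper's logarithm mappings.
    a = 1.0

    def index_profile(z):
        # derivative of f(z) = a*log(z) is a/z
        return np.abs(a / z)

    # sample the profile on an annulus around the origin (avoiding the branch point)
    r = np.linspace(0.2, 3.0, 200)
    theta = np.linspace(0.0, 2.0 * np.pi, 181)
    R, T = np.meshgrid(r, theta)
    Z = R * np.exp(1j * T)
    n = index_profile(Z)
    print("index range on the annulus: %.3f to %.3f" % (n.min(), n.max()))
    ```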

  15. Effects of extracellular polymeric substances on the bioaccumulation of mercury and its toxicity toward the cyanobacterium Microcystis aeruginosa.

    PubMed

    Chen, Ho-Wen; Huang, Winn-Jung; Wu, Ting-Hsiang; Hon, Chen-Lin

    2014-01-01

    This investigation examines how extracellular polymeric substances (EPSs) and environmental factors affect the bioaccumulation and toxicity of inorganic mercury (+2 oxidation state, Hg(II)) using a culture of Microcystis aeruginosa, which dominates eutrophic reservoir populations. The identified EPSs were classified as carbohydrates and proteins. Evaluation of the bioaccumulation of Hg(II) in cells by multiple regression analysis reveals that the concentration of EPSs in filtrate, the initial concentration of Hg(II) in medium, and the culture age significantly affected the amount of Hg(II) accumulated. Composition profiles revealed that the concentrations of soluble carbohydrates were significantly higher in Hg(II)-accumulated cells than in the control ones. Preliminary results based on scanning electron microscopic (SEM) map investigations suggest that most of the Hg(II) was accumulated in the cytoplasm (intracellular). Additionally, the effective concentrations (EC50) of Hg(II) that inhibit the growth of M. aeruginosa were 38.6 μg L(-1) in the logarithmic phase and 17.5 μg L(-1) in the stationary phase. As expected, the production of more EPSs in the logarithmic phase typically implies higher EC50 values because EPSs may be regarded as a protective barrier of cells against an external Hg(II) load, enabling them to be less influenced by Hg(II).

  16. Chromatographic behaviour predicts the ability of potential nootropics to permeate the blood-brain barrier.

    PubMed

    Farsa, Oldřich

    2013-01-01

    The log BB parameter is the logarithm of the ratio of a compound's equilibrium concentrations in the brain tissue versus the blood plasma. This parameter is a useful descriptor in assessing the ability of a compound to permeate the blood-brain barrier. The aim of this study was to develop a Hansch-type linear regression QSAR model that correlates the parameter log BB and the retention time of drugs and other organic compounds on a reversed-phase HPLC containing an embedded amide moiety. The retention time was expressed by the capacity factor log k'. The second aim was to estimate the brain's absorption of 2-(azacycloalkyl)acetamidophenoxyacetic acids, which are analogues of piracetam, nefiracetam, and meclofenoxate. Notably, these acids may be novel nootropics. Two simple regression models that relate log BB and log k' were developed from an assay performed using a reversed-phase HPLC that contained an embedded amide moiety. Both the quadratic and linear models yielded statistical parameters comparable to previously published models of log BB dependence on various structural characteristics. The models predict that four members of the substituted phenoxyacetic acid series have a strong chance of permeating the barrier and being absorbed in the brain. The results of this study show that a reversed-phase HPLC system containing an embedded amide moiety is a functional in vitro surrogate of the blood-brain barrier. These results suggest that racetam-type nootropic drugs containing a carboxylic moiety could be more poorly absorbed than analogues devoid of the carboxyl group, especially if the compounds penetrate the barrier by a simple diffusion mechanism.

  17. Representational change and strategy use in children's number line estimation during the first years of primary school

    PubMed Central

    2012-01-01

    Background The objective of this study was to scrutinize number line estimation behaviors displayed by children in mathematics classrooms during the first three years of schooling. We extend existing research by not only mapping potential logarithmic-linear shifts but also provide a new perspective by studying in detail the estimation strategies of individual target digits within a number range familiar to children. Methods Typically developing children (n = 67) from Years 1-3 completed a number-to-position numerical estimation task (0-20 number line). Estimation behaviors were first analyzed via logarithmic and linear regression modeling. Subsequently, using an analysis of variance we compared the estimation accuracy of each digit, thus identifying target digits that were estimated with the assistance of arithmetic strategy. Results Our results further confirm a developmental logarithmic-linear shift when utilizing regression modeling; however, uniquely we have identified that children employ variable strategies when completing numerical estimation, with levels of strategy advancing with development. Conclusion In terms of the existing cognitive research, this strategy factor highlights the limitations of any regression modeling approach, or alternatively, it could underpin the developmental time course of the logarithmic-linear shift. Future studies need to systematically investigate this relationship and also consider the implications for educational practice. PMID:22217191

  18. High-energy evolution to three loops

    NASA Astrophysics Data System (ADS)

    Caron-Huot, Simon; Herranen, Matti

    2018-02-01

    The Balitsky-Kovchegov equation describes the high-energy growth of gauge theory scattering amplitudes as well as the nonlinear saturation effects which stop it. We obtain the three-loop corrections to the equation in planar N = 4 super Yang-Mills theory. Our method exploits a recently established equivalence with the physics of soft wide-angle radiation, so-called non-global logarithms, and thus yields at the same time the three-loop evolution equation for non-global logarithms. As a by-product of our analysis, we develop a Lorentz-covariant method to subtract infrared and collinear divergences in cross-section calculations in the planar limit. We compare our result in the linear regime with a recent prediction for the so-called Pomeron trajectory, and compare its collinear limit with predictions from the spectrum of twist-two operators.

  19. Spatiotemporal characterization of Ensemble Prediction Systems - the Mean-Variance of Logarithms (MVL) diagram

    NASA Astrophysics Data System (ADS)

    Gutiérrez, J. M.; Primo, C.; Rodríguez, M. A.; Fernández, J.

    2008-02-01

    We present a novel approach to characterize and graphically represent the spatiotemporal evolution of ensembles using a simple diagram. To this aim we analyze the fluctuations obtained as differences between each member of the ensemble and the control. The lognormal character of these fluctuations suggests a characterization in terms of the first two moments of the logarithmic transformed values. On one hand, the mean is associated with the exponential growth in time. On the other hand, the variance accounts for the spatial correlation and localization of fluctuations. In this paper we introduce the MVL (Mean-Variance of Logarithms) diagram to intuitively represent the interplay and evolution of these two quantities. We show that this diagram uncovers useful information about the spatiotemporal dynamics of the ensemble. Some universal features of the diagram are also described, associated either with the nonlinear system or with the ensemble method and illustrated using both toy models and numerical weather prediction systems.
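
    A hedged sketch of how such a diagram can be assembled: at each lead time, take the logarithm of the absolute member-minus-control fluctuations, then record their mean (growth) and variance (localization). The synthetic ensemble below is only a placeholder for real forecast fields.

    ```python
    import numpy as np

    # Sketch of the MVL construction described above: for each lead time, take the
    # logarithm of the absolute difference between each ensemble member and the control
    # at every grid point, then record the mean and variance of those logs.  The
    # "ensemble" below is a synthetic random array, purely for illustration.
    rng = np.random.default_rng(2)
    n_members, n_grid, n_times = 20, 500, 40

    # fluctuations grow roughly exponentially in time, with lognormal spatial variability
    amplitude = np.exp(0.2 * np.arange(n_times))
    fluct = rng.lognormal(mean=0.0, sigma=1.0, size=(n_times, n_members, n_grid))
    fluct *= amplitude[:, None, None]

    log_fluct = np.log(np.abs(fluct))
    mvl_mean = log_fluct.mean(axis=(1, 2))       # one MVL point per lead time
    mvl_var = log_fluct.var(axis=(1, 2))

    for t in (0, n_times // 2, n_times - 1):
        print("t=%2d  mean=%6.2f  var=%5.2f" % (t, mvl_mean[t], mvl_var[t]))
    ```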

  20. Logarithmic black hole entropy corrections and holographic Rényi entropy

    NASA Astrophysics Data System (ADS)

    Mahapatra, Subhash

    2018-01-01

    The entanglement and Rényi entropies for spherical entangling surfaces in CFTs with gravity duals can be explicitly calculated by mapping these entropies first to the thermal entropy on hyperbolic space and then, using the AdS/CFT correspondence, to the Wald entropy of topological black holes. Here we extend this idea by taking into account corrections to the Wald entropy. Using the method based on horizon symmetries and the asymptotic Cardy formula, we calculate corrections to the Wald entropy and find that these corrections are proportional to the logarithm of the area of the horizon. With the corrected expression for the entropy of the black hole, we then find corrections to the Rényi entropies. We calculate these corrections for both Einstein and Gauss-Bonnet gravity duals. Corrections with logarithmic dependence on the area of the entangling surface naturally occur at order G_D^0. The entropic c-function and the inequalities of the Rényi entropy are also satisfied even with the correction terms.

  1. Convergence and stability of the exponential Euler method for semi-linear stochastic delay differential equations.

    PubMed

    Zhang, Ling

    2017-01-01

    The main purpose of this paper is to investigate the strong convergence and exponential stability in mean square of the exponential Euler method for semi-linear stochastic delay differential equations (SLSDDEs). It is proved that the exponential Euler approximation converges to the analytic solution of the SLSDDE with strong order [Formula: see text]. On the one hand, the classical stability theorem for SLSDDEs is given via Lyapunov functions; in this paper, however, we study the exponential stability in mean square of the exact solution to SLSDDEs by using the logarithmic norm. On the other hand, the implicit Euler scheme for SLSDDEs is known to be exponentially stable in mean square for any step size, and in this article we show that the explicit exponential Euler method for SLSDDEs shares the same stability for any step size, again by the properties of the logarithmic norm.

  2. QLog Solar-Cell Mode Photodiode Logarithmic CMOS Pixel Using Charge Compression and Readout †

    PubMed Central

    Ni, Yang

    2018-01-01

    In this paper, we present a new logarithmic pixel design currently under development at New Imaging Technologies SA (NIT). This new logarithmic pixel design uses charge domain logarithmic signal compression and charge-transfer-based signal readout. This structure gives a linear response in low light conditions and logarithmic response in high light conditions. The charge transfer readout efficiently suppresses the reset (KTC) noise by using true correlated double sampling (CDS) in low light conditions. In high light conditions, thanks to charge domain logarithmic compression, it has been demonstrated that 3000 electrons should be enough to cover a 120 dB dynamic range with a mobile phone camera-like signal-to-noise ratio (SNR) over the whole dynamic range. This low electron count permits the use of ultra-small floating diffusion capacitance (sub-fF) without charge overflow. The resulting large conversion gain permits a single photon detection capability with a wide dynamic range without a complex sensor/system design. A first prototype sensor with 320 × 240 pixels has been implemented to validate this charge domain logarithmic pixel concept and modeling. The first experimental results validate the logarithmic charge compression theory and the low readout noise due to the charge-transfer-based readout. PMID:29443903

  3. QLog Solar-Cell Mode Photodiode Logarithmic CMOS Pixel Using Charge Compression and Readout.

    PubMed

    Ni, Yang

    2018-02-14

    In this paper, we present a new logarithmic pixel design currently under development at New Imaging Technologies SA (NIT). This new logarithmic pixel design uses charge domain logarithmic signal compression and charge-transfer-based signal readout. This structure gives a linear response in low light conditions and logarithmic response in high light conditions. The charge transfer readout efficiently suppresses the reset (KTC) noise by using true correlated double sampling (CDS) in low light conditions. In high light conditions, thanks to charge domain logarithmic compression, it has been demonstrated that 3000 electrons should be enough to cover a 120 dB dynamic range with a mobile phone camera-like signal-to-noise ratio (SNR) over the whole dynamic range. This low electron count permits the use of ultra-small floating diffusion capacitance (sub-fF) without charge overflow. The resulting large conversion gain permits a single photon detection capability with a wide dynamic range without a complex sensor/system design. A first prototype sensor with 320 × 240 pixels has been implemented to validate this charge domain logarithmic pixel concept and modeling. The first experimental results validate the logarithmic charge compression theory and the low readout noise due to the charge-transfer-based readout.

  4. Free Energy Reconstruction from Logarithmic Mean-Force Dynamics Using Multiple Nonequilibrium Trajectories.

    PubMed

    Morishita, Tetsuya; Yonezawa, Yasushige; Ito, Atsushi M

    2017-07-11

    Efficient and reliable estimation of the mean force (MF), the derivatives of the free energy with respect to a set of collective variables (CVs), has been a challenging problem because free energy differences are often computed by integrating the MF. Among various methods for computing free energy differences, logarithmic mean-force dynamics (LogMFD) [ Morishita et al., Phys. Rev. E 2012 , 85 , 066702 ] invokes the conservation law in classical mechanics to integrate the MF, which allows us to estimate the free energy profile along the CVs on-the-fly. Here, we present a method called parallel dynamics, which improves the estimation of the MF by employing multiple replicas of the system and is straightforwardly incorporated in LogMFD or a related method. In the parallel dynamics, the MF is evaluated by a nonequilibrium path-ensemble using the multiple replicas based on the Crooks-Jarzynski nonequilibrium work relation. Thanks to the Crooks relation, realizing full-equilibrium states is no longer mandatory for estimating the MF. Additionally, sampling in the hidden subspace orthogonal to the CV space is highly improved with appropriate weights for each metastable state (if any), which is hardly achievable by typical free energy computational methods. We illustrate how to implement parallel dynamics by combining it with LogMFD, which we call logarithmic parallel dynamics (LogPD). Biosystems of alanine dipeptide and adenylate kinase in explicit water are employed as benchmark systems to which LogPD is applied to demonstrate the effect of multiple replicas on the accuracy and efficiency in estimating the free energy profiles using parallel dynamics.

  5. Investigation of logarithmic spiral nanoantennas at optical frequencies

    NASA Astrophysics Data System (ADS)

    Verma, Anamika; Pandey, Awanish; Mishra, Vigyanshu; Singh, Ten; Alam, Aftab; Dinesh Kumar, V.

    2013-12-01

    We report the first study of a logarithmic spiral antenna in the optical frequency range. Using the finite integration technique, we investigated the spectral and radiation properties of a logarithmic spiral nanoantenna and a complementary structure made of thin gold film. A comparison is made with results for an Archimedean spiral nanoantenna. Such nanoantennas can exhibit broadband behavior that is independent of polarization. Two prominent features of logarithmic spiral nanoantennas are highly directional far-field emission and perfectly circularly polarized radiation when excited by a linearly polarized source. The logarithmic spiral nanoantenna promises potential advantages over Archimedean spirals and could be harnessed for several applications in nanophotonics and allied areas.

  6. True logarithmic amplification of frequency clock in SS-OCT for calibration

    PubMed Central

    Liu, Bin; Azimi, Ehsan; Brezinski, Mark E.

    2011-01-01

    With swept source optical coherence tomography (SS-OCT), imprecise signal calibration prevents optimal imaging of biological tissues such as the coronary artery. This work demonstrates an approach that uses a true logarithmic amplifier to precondition the clock signal, in an effort to minimize noise and phase errors for optimal calibration. The method was validated and tested with a high-speed SS-OCT system. The experimental results demonstrate its ability to optimize the calibration and improve imaging performance. In particular, this hardware-based approach is suitable for real-time calibration in a high-speed system where computation time is constrained. PMID:21698036

  7. Decibels Made Easy.

    ERIC Educational Resources Information Center

    Tindle, C. T.

    1996-01-01

    Describes a method to teach acoustics to students with minimal mathematical backgrounds. Discusses the uses of charts in teaching topics of sound intensity level and the decibel scale. Avoids the difficulties of working with logarithm functions. (JRH)

  8. Chromatographic Behaviour Predicts the Ability of Potential Nootropics to Permeate the Blood-Brain Barrier

    PubMed Central

    Farsa, Oldřich

    2013-01-01

    The log BB parameter is the logarithm of the ratio of a compound’s equilibrium concentrations in the brain tissue versus the blood plasma. This parameter is a useful descriptor in assessing the ability of a compound to permeate the blood-brain barrier. The aim of this study was to develop a Hansch-type linear regression QSAR model that correlates the parameter log BB and the retention time of drugs and other organic compounds on a reversed-phase HPLC containing an embedded amide moiety. The retention time was expressed by the capacity factor log k′. The second aim was to estimate the brain’s absorption of 2-(azacycloalkyl)acetamidophenoxyacetic acids, which are analogues of piracetam, nefiracetam, and meclofenoxate. Notably, these acids may be novel nootropics. Two simple regression models that relate log BB and log k′ were developed from an assay performed using a reversed-phase HPLC that contained an embedded amide moiety. Both the quadratic and linear models yielded statistical parameters comparable to previously published models of log BB dependence on various structural characteristics. The models predict that four members of the substituted phenoxyacetic acid series have a strong chance of permeating the barrier and being absorbed in the brain. The results of this study show that a reversed-phase HPLC system containing an embedded amide moiety is a functional in vitro surrogate of the blood-brain barrier. These results suggest that racetam-type nootropic drugs containing a carboxylic moiety could be more poorly absorbed than analogues devoid of the carboxyl group, especially if the compounds penetrate the barrier by a simple diffusion mechanism. PMID:23641330

  9. Cotunneling and polaronic effect in granular systems

    NASA Astrophysics Data System (ADS)

    Ioselevich, A. S.; Sivak, V. V.

    2017-06-01

    We theoretically study the conductivity in arrays of metallic grains due to the variable-range multiple cotunneling of electrons with short-range (screened) Coulomb interaction. The system is supposed to be coupled to random stray charges in the dielectric matrix that are only loosely bound to their spatial positions by elastic forces. The flexibility of the stray charges gives rise to a polaronic effect, which leads to the onset of Arrhenius-type conductivity behavior at low temperatures, replacing conventional Mott variable-range hopping. The effective activation energy depends logarithmically on temperature due to fluctuations of the polaron barrier heights. We present a unified theory that covers both the weak and strong polaron effect regimes of hopping in granular metals and describes the crossover from elastic to inelastic cotunneling.

  10. Stress Energy Tensor in LCFT and LOGARITHMIC Sugawara Construction

    NASA Astrophysics Data System (ADS)

    Kogan, Ian I.; Nichols, Alexander

    We discuss the partners of the stress energy tensor and their structure in Logarithmic conformal field theories. In particular we draw attention to the fundamental differences between theories with zero and non-zero central charge. However they are both characterised by at least two independent parameters. We show how, by using a generalised Sugawara construction, one can calculate the logarithmic partner of T. We show that such a construction works in the c=-2 theory using the conformal dimension one primary currents which generate a logarithmic extension of the Kac-Moody algebra. This is an expanded version of a talk presented by A. Nichols at the conference on Logarithmic Conformal Field Theory and its Applications in Tehran Iran, 2001.

  11. Sub-barrier fusion of Si+Si systems

    NASA Astrophysics Data System (ADS)

    Colucci, G.; Montagnoli, G.; Stefanini, A. M.; Bourgin, D.; Čolović, P.; Corradi, L.; Courtin, S.; Faggian, M.; Fioretto, E.; Galtarossa, F.; Goasduff, A.; Haas, F.; Mazzocco, M.; Scarlassara, F.; Stefanini, C.; Strano, E.; Urbani, M.; Szilner, S.; Zhang, G. L.

    2017-11-01

    The near- and sub-barrier fusion excitation function has been measured for the system 30Si+30Si at the Laboratori Nazionali di Legnaro of INFN, using the 30Si beam of the XTU Tandem accelerator in the energy range 47-90 MeV. A set-up based on a beam electrostatic deflector was used for detecting fusion evaporation residues. The measured cross sections have been compared to previous data on 28Si+28Si, and Coupled Channels (CC) calculations have been performed using M3Y+repulsion and Woods-Saxon potentials, where the low-lying 2+ and 3- excitations have been included. A weak imaginary potential was found to be necessary to reproduce the low energy 28Si+28Si data. This probably simulates the effect of the oblate deformation of this nucleus. In contrast, 30Si is a spherical nucleus; 30Si+30Si is nicely fit by CC calculations, and no imaginary potential is needed. For this system, no maximum shows up in the astrophysical S factor, so there is no evidence for hindrance, as confirmed by the comparison with CC calculations. The logarithmic derivative of the two symmetric systems highlights their different low energy trends. A difference can also be noted in the two barrier distributions, where the high-energy peak present in 28Si+28Si is not observed for 30Si+30Si, probably due to the weaker couplings in the latter case.

  12. Multilevel Monte Carlo for two phase flow and Buckley–Leverett transport in random heterogeneous porous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Müller, Florian, E-mail: florian.mueller@sam.math.ethz.ch; Jenny, Patrick, E-mail: jenny@ifd.mavt.ethz.ch; Meyer, Daniel W., E-mail: meyerda@ethz.ch

    2013-10-01

    Monte Carlo (MC) is a well known method for quantifying uncertainty arising for example in subsurface flow problems. Although robust and easy to implement, MC suffers from slow convergence. Extending MC by means of multigrid techniques yields the multilevel Monte Carlo (MLMC) method. MLMC has proven to greatly accelerate MC for several applications including stochastic ordinary differential equations in finance, elliptic stochastic partial differential equations and also hyperbolic problems. In this study, MLMC is combined with a streamline-based solver to assess uncertain two phase flow and Buckley–Leverett transport in random heterogeneous porous media. The performance of MLMC is compared to MC for a two dimensional reservoir with a multi-point Gaussian logarithmic permeability field. The influence of the variance and the correlation length of the logarithmic permeability on the MLMC performance is studied.
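
    A generic MLMC estimator, reduced to a toy scalar SDE with Euler-Maruyama discretization, is sketched below; the level-wise telescoping sum E[P_L] = E[P_0] + Σ E[P_l - P_{l-1}] is the core idea, while the solver, sample counts, and quantity of interest here are placeholders rather than the streamline-based two-phase flow setup of the study.

    ```python
    import numpy as np

    # Generic multilevel Monte Carlo sketch (toy scalar SDE, Euler-Maruyama), not the
    # streamline-based two-phase flow solver of the study.  Level l uses M0 * 2**l time
    # steps; the MLMC estimator sums the level-wise corrections E[P_l - P_{l-1}].
    rng = np.random.default_rng(3)
    T, M0 = 1.0, 4

    def euler_payoff(n_steps, dW_fine):
        """Terminal value of dX = -X dt + 0.5 dW from X0 = 1, reusing the fine increments."""
        dW = dW_fine.reshape(n_steps, -1).sum(axis=1)   # coarsen the Brownian increments
        x, dt = 1.0, T / n_steps
        for k in range(n_steps):
            x += -x * dt + 0.5 * dW[k]
        return x

    def mlmc(levels=4, samples=(4000, 2000, 1000, 500)):
        estimate = 0.0
        for l in range(levels):
            n_fine = M0 * 2**l
            corr = []
            for _ in range(samples[l]):
                dW_fine = rng.normal(0.0, np.sqrt(T / n_fine), n_fine)
                p_fine = euler_payoff(n_fine, dW_fine)
                p_coarse = euler_payoff(n_fine // 2, dW_fine) if l > 0 else 0.0
                corr.append(p_fine - p_coarse)
            estimate += np.mean(corr)       # telescoping sum over levels
        return estimate

    print("MLMC estimate of E[X(T)]:", mlmc())   # exact mean is exp(-1) for this toy SDE
    ```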

  13. Method of detecting system function by measuring frequency response

    DOEpatents

    Morrison, John L.; Morrison, William H.; Christophersen, Jon P.; Motloch, Chester G.

    2013-01-08

    Methods of rapidly measuring an impedance spectrum of an energy storage device in-situ over a limited number of logarithmically distributed frequencies are described. An energy storage device is excited with a known input signal, and a response is measured to ascertain the impedance spectrum. An excitation signal is a limited time duration sum-of-sines consisting of a select number of frequencies. In one embodiment, magnitude and phase of each frequency of interest within the sum-of-sines is identified when the selected frequencies and sample rate are logarithmic integer steps greater than two. This technique requires a measurement with a duration of one period of the lowest frequency. In another embodiment, where selected frequencies are distributed in octave steps, the impedance spectrum can be determined using a captured time record that is reduced to a half-period of the lowest frequency.
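
    The excitation described, a sum of sines at logarithmically (octave) spaced frequencies lasting one period of the lowest frequency, can be sketched in a few lines; the frequencies, sample rate, and amplitude scaling below are placeholders, not values from the patent.

    ```python
    import numpy as np

    # Sketch of the excitation described: a finite-duration sum-of-sines whose
    # frequencies are octave (factor-of-two) steps, lasting one period of the lowest
    # frequency.  Frequencies, amplitude, and sample rate are placeholder values.
    f_low, n_octaves, fs = 0.1, 6, 100.0            # Hz, count, samples/s
    freqs = f_low * 2.0 ** np.arange(n_octaves)     # logarithmically (octave) spaced

    duration = 1.0 / f_low                          # one period of the lowest frequency
    t = np.arange(0.0, duration, 1.0 / fs)
    excitation = sum(np.sin(2.0 * np.pi * f * t) for f in freqs) / n_octaves

    # the device response at each excitation frequency could then be read off a transform
    spectrum = np.fft.rfft(excitation * np.hanning(t.size))
    print("samples: %d, frequencies (Hz): %s" % (t.size, np.round(freqs, 3)))
    ```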

  14. Three site Higgsless model at one loop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chivukula, R. Sekhar; Simmons, Elizabeth H.; Matsuzaki, Shinya

    2007-04-01

    In this paper we compute the one loop chiral-logarithmic corrections to all O(p^4) counterterms in the three site Higgsless model. The calculation is performed using the background field method for both the chiral and gauge fields, and using Landau gauge for the quantum fluctuations of the gauge fields. The results agree with our previous calculations of the chiral-logarithmic corrections to the S and T parameters in 't Hooft-Feynman gauge. The work reported here includes a complete evaluation of all one loop divergences in an SU(2)xU(1) nonlinear sigma model, corresponding to an electroweak effective Lagrangian in the absence of custodial symmetry.

  15. Surface capillary currents: Rediscovery of fluid-structure interaction by forced evolving boundary theory

    NASA Astrophysics Data System (ADS)

    Wang, Chunbai; Mitra, Ambar K.

    2016-01-01

    Any boundary surface evolving in a viscous fluid is driven by surface capillary currents. Using a step function defined for the fluid-structure interface, the surface currents near a flat wall are found to have a logarithmic form. The general flat-plate boundary layer is demonstrated through the interface kinematics. The dynamics analysis elucidates the relationship of the surface currents with the adhering region as well as the no-slip boundary condition. The wall skin friction coefficient, displacement thickness, and the logarithmic velocity-defect law of the smooth flat-plate boundary-layer flow are derived with the forced evolving boundary method. This fundamental theory has wide applications in applied science and engineering.

  16. Top Quark Mass Calibration for Monte Carlo Event Generators

    NASA Astrophysics Data System (ADS)

    Butenschoen, Mathias; Dehnadi, Bahman; Hoang, André H.; Mateu, Vicent; Preisser, Moritz; Stewart, Iain W.

    2016-12-01

    The most precise top quark mass measurements use kinematic reconstruction methods, determining the top mass parameter of a Monte Carlo event generator, m_t^MC. Because of hadronization and parton-shower dynamics, relating m_t^MC to a field theory mass is difficult. We present a calibration procedure to determine this relation using hadron level QCD predictions for observables with kinematic mass sensitivity. Fitting e+e- 2-jettiness calculations at next-to-leading-logarithmic and next-to-next-to-leading-logarithmic order to pythia 8.205, m_t^MC differs from the pole mass by 900 and 600 MeV, respectively, and agrees with the MSR mass within uncertainties, m_t^MC ≃ m_t^MSR(1 GeV).

  17. Logarithmic conformal field theory

    NASA Astrophysics Data System (ADS)

    Gainutdinov, Azat; Ridout, David; Runkel, Ingo

    2013-12-01

    Conformal field theory (CFT) has proven to be one of the richest and deepest subjects of modern theoretical and mathematical physics research, especially as regards statistical mechanics and string theory. It has also stimulated an enormous amount of activity in mathematics, shaping and building bridges between seemingly disparate fields through the study of vertex operator algebras, a (partial) axiomatisation of a chiral CFT. One can add to this that the successes of CFT, particularly when applied to statistical lattice models, have also served as an inspiration for mathematicians to develop entirely new fields: the Schramm-Loewner evolution and Smirnov's discrete complex analysis being notable examples. When the energy operator fails to be diagonalisable on the quantum state space, the CFT is said to be logarithmic. Consequently, a logarithmic CFT is one whose quantum space of states is constructed from a collection of representations which includes reducible but indecomposable ones. This qualifier arises because of the consequence that certain correlation functions will possess logarithmic singularities, something that contrasts with the familiar case of power law singularities. While such logarithmic singularities and reducible representations were noted by Rozansky and Saleur in their study of the U (1|1) Wess-Zumino-Witten model in 1992, the link between the non-diagonalisability of the energy operator and logarithmic singularities in correlators is usually ascribed to Gurarie's 1993 article (his paper also contains the first usage of the term 'logarithmic conformal field theory'). The class of CFTs that were under control at this time was quite small. In particular, an enormous amount of work from the statistical mechanics and string theory communities had produced a fairly detailed understanding of the (so-called) rational CFTs. However, physicists from both camps were well aware that applications from many diverse fields required significantly more complicated non-rational theories. Examples include critical percolation, supersymmetric string backgrounds, disordered electronic systems, sandpile models describing avalanche processes, and so on. In each case, the non-rationality and non-unitarity of the CFT suggested that a more general theoretical framework was needed. Driven by the desire to better understand these applications, the mid-1990s saw significant theoretical advances aiming to generalise the constructs of rational CFT to a more general class. In 1994, Nahm introduced an algorithm for computing the fusion product of representations which was significantly generalised two years later by Gaberdiel and Kausch who applied it to explicitly construct (chiral) representations upon which the energy operator acts non-diagonalisably. Their work made it clear that underlying the physically relevant correlation functions are classes of reducible but indecomposable representations that can be investigated mathematically to the benefit of applications. In another direction, Flohr had meanwhile initiated the study of modular properties of the characters of logarithmic CFTs, a topic which had already evoked much mathematical interest in the rational case. Since these seminal theoretical papers appeared, the field has undergone rapid development, both theoretically and with regard to applications. 
Logarithmic CFTs are now known to describe non-local observables in the scaling limit of critical lattice models, for example percolation and polymers, and are an integral part of our understanding of quantum strings propagating on supermanifolds. They are also believed to arise as duals of three-dimensional chiral gravity models, fill out hidden sectors in non-rational theories with non-compact target spaces, and describe certain transitions in various incarnations of the quantum Hall effect. Other physical applications range from two-dimensional turbulence and non-equilibrium systems to aspects of the AdS/CFT correspondence and describing supersymmetric sigma models beyond the topological sector. We refer the reader to the reviews in this collection for further applications and details. More recently, our understanding of logarithmic CFT has improved dramatically thanks largely to a better understanding of the underlying mathematical structures. This includes those associated to the vertex operator algebras themselves (representations, characters, modular transformations, fusion, braiding) as well as structures associated with applications to two-dimensional statistical models (diagram algebras, eg. Temperley-Lieb quantum groups). Not only are we getting to the point where we understand how these structures differ from standard (rational) theories, but we are starting to tackle applications both in the boundary and bulk settings. It is now clear that the logarithmic case is generic, so it is this case that one should expect to encounter in applications. We therefore feel that it is timely to review what has been accomplished in order to disseminate this improved understanding and motivate further applications. We now give a quick overview of the articles that constitute this special issue. Adamović and Milas provide a detailed summary of their rigorous results pertaining to logarithmic vertex operator (super)algebras constructed from lattices. This survey discusses the C2-cofiniteness of the (p, p') triplet models (this is the generalisation of rationality to the logarithmic setting), describes Zhu's algebra for (some of) these theories and outlines the difficulties involved in explicitly constructing the modules responsible for their logarithmic nature. Cardy gives an account of a popular approach to logarithmic theories that regards them, heuristically at least, as limits of ordinary (but non-rational) CFTs. More precisely, it seems that any given correlator may be computed as a limit of standard (non-logarithmic) correlators, any logarithmic singularities that arise do so because of a degeneration when taking the limit. He then illustrates this phenomenon in several theories describing statistical lattice models including the n → 0 limit of the O(n ) model and the Q → 1 limit of the Q-state Potts model. Creutzig and Ridout review the continuum approach to logarithmic CFT, using the percolation (boundary) CFT to detail the connection between module structure and logarithmic singularities in correlators before describing their proposed solution to the thorny issue of generalising modular data and Verlinde formulae to the logarithmic setting. They illustrate this proposal using the three best-understood examples of logarithmic CFTs: the (1, 2) models, related to symplectic fermions; the fractional level WZW model on , related to the beta gamma ghosts; and the WZW model on GL(1|1). 
The analysis in each case requires that the spectrum be continuous; C2-cofinite models are only recovered as orbifolds. Flohr and Koehn consider the characters of the irreducible modules in the spectrum of a CFT and discuss why these only span a proper subspace of the space of torus vacuum amplitudes in the logarithmic case. This is illustrated explicitly for the (1, 2) triplet model and conclusions are drawn for the action of the modular group. They then note that the irreducible characters of this model also admit fermionic sum forms which seem to fit well into Nahm's well-known conjecture for rational theories. Quasi-particle interpretations are also introduced, leading to the conclusion that logarithmic C2-cofinite theories are not so terribly different to rational theories, at least in some respects. Fuchs, Schweigert and Stigner address the problem of constructing local logarithmic CFTs starting from the chiral theory. They first review the construction of the local theory in the non-logarithmic setting from an angle that will then generalise to logarithmic theories. In particular, they observe that the bulk space can be understood as a certain coend. The authors then show how to carry out the construction of the bulk space in the category of modules over a factorisable ribbon Hopf algebra, which shares many properties with the braided categories arising from logarithmic chiral theories. The authors proceed to construct the analogue of all-genus correlators in their setting and establish invariance under the mapping class group, i.e. locality of the correlators. Gainutdinov, Jacobsen, Read, Saleur and Vasseur review their approach based on the assumption that certain classes of logarithmic CFTs admit lattice regularisations with local degrees of freedom, for example quantum spin chains (with local interactions). They therefore study the finite-dimensional algebras generated by the hamiltonian densities (typically the Temperley-Lieb algebras and their extensions) that describe the dynamics of these lattice models. The authors then argue that the lattice algebras exhibit, in finite size, mathematical properties that are in correspondence with those of their continuum limits, allowing one to predict continuum structures directly from the lattice. Moreover, the lattice models considered admit quantum group symmetries that play a central role in the algebraic analysis (representation structure and fusion). Grumiller, Riedler, Rosseel and Zojer review the role that logarithmic CFTs may play in certain versions of the AdS/CFT correspondence, particularly for what is known as topologically massive gravity (TMG). This has been a very active subject over the last five years and the article takes great care to disentangle the contributions from the many groups that have participated. They begin with some general remarks on logarithmic behaviour, much in the spirit of Cardy's review, before detailing the distinction between the chiral (no logs) and logarithmic proposals for critical TMG. The latter is then subjected to various consistency checks before discussing evidence for logarithmic behaviour in more general classes of gravity theories including those with boundaries, supersymmetry and galilean relativity. Gurarie has written an historical overview of his seminal contributions to this field, putting his results (and those of his collaborators) in the context of understanding applications to condensed matter physics.
This includes the link between the non-diagonalisability of L0 and logarithmic singularities, a study of the c → 0 catastrophe, and a proposed resolution involving supersymmetric partners for the stress-energy tensor and its logarithmic partner field. Henkel and Rouhani describe a direction in which logarithmic singularities are observed in correlators of non-relativistic field theories. Their review covers the appropriate modifications of conformal invariance that are appropriate to non-equilibrium statistical mechanics, strongly anisotropic critical points and certain variants of TMG. The main variation away from the standard relativistic idea of conformal invariance is that time is explicitly distinguished from space when considering dilations and this leads to a variety of algebraic structures to explore. In this review, the link between non-diagonalisable representations and logarithmic singularities in correlators is generalised to these algebras, before two applications of the theory are discussed. Huang and Lepowsky give a non-technical overview of their work on braided tensor structures on suitable categories of representations of vertex operator algebras. They also place their work in historic context and compare it to related approaches. The authors sketch their construction of the so-called P(z)-tensor product of modules of a vertex operator algebra, and the construction of the associativity isomorphisms for this tensor product. They proceed to give a guide to their works leading to the first author's proof of modularity for a class of vertex operator algebras, and to their works, joint with Zhang, on logarithmic intertwining operators and the resulting tensor product theory. Morin-Duchesne and Saint-Aubin have contributed a research article describing their recent characterisation of when the transfer matrix of a periodic loop model fails to be diagonalisable. This generalises their recent result for non-periodic loop models and provides rigorous methods to justify what has often been assumed in the lattice approach to logarithmic CFT. The philosophy here is one of analysing lattice models with finite size, aiming to demonstrate that non-diagonalisability survives the scaling limit. This is extremely difficult in general (see also the review by Gainutdinov et al), so it is remarkable that it is even possible to demonstrate this at any level of generality. Quella and Schomerus have prepared an extensive review covering their longstanding collaboration on the logarithmic nature of conformal sigma models on Lie supergroups and their cosets with applications to string theory and AdS/CFT. Beginning with a very welcome overview of Lie superalgebras and their representations, harmonic analysis and cohomological reduction, they then apply these mathematical tools to WZW models on type I Lie supergroups and their homogeneous subspaces. Along the way, deformations are discussed and potential dualities in the corresponding string theories are described. Ruelle provides an exhaustive account of his substantial contributions to the study of the abelian sandpile model. This is a statistical model which has the surprising feature that many correlation functions can be computed exactly, in the bulk and on the boundary, even though the spectrum of conformal weights is largely unknown. Nevertheless, there is much evidence suggesting that its scaling limit is described by an, as yet unknown, c = -2 logarithmic CFT.
Semikhatov and Tipunin present their very recent results regarding the construction of logarithmic chiral W-algebra extensions of a fractional level algebra. The idea is that these algebras are the centralisers of a rank-two Nichols algebra which possesses at least one fermionic generator. In turn, these Nichols algebra generators are represented by screening operators which naturally appear in CFT bosonisation. The major advantage of using these generators is that they give strong hints about the representation theory and fusion rules of the chiral algebra. Simmons has contributed an article describing the calculation of various correlation functions in the logarithmic CFT that describes critical percolation. These calculations are interpreted geometrically in a manner that should be familiar to mathematicians studying Schramm-Loewner evolutions and point towards a (largely unexplored) bridge connecting logarithmic CFT with this branch of mathematics. Of course, the field of logarithmic CFT has benefited greatly from the work of many researchers who are not represented in this special issue. The interested reader will find many links to their work in the bibliographies of the special issue articles and reviews. In summary, logarithmic CFT describes an extension of the incredibly successful methods of rational CFT to a more general setting. This extension is necessary to properly describe many different fundamental phenomena of physical interest. The formalism is moreover highly non-trivial from a mathematical point of view and so logarithmic theories are of significant interest to both physicists and mathematicians. We hope that the collection of articles that follows will serve as an inspiration, and a valuable resource, for both of these communities.

  18. Macronuclear Cytology of Synchronized Tetrahymena pyriformis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cameron, I. L.; Padilla, G. M.; Miller, Jr., O. L.

    1966-05-01

    Elliott, Kennedy and Bak ('62) and Elliott ('63) followed fine structural changes in macronuclei of Tetrahymena pyriformis which were synchronized by the heat shock method of Scherbaum and Zeuthen ('54). Using Elliott's morphological descriptions as a basis, we designed our investigations with two main objectives: first, to again study the morphological changes which occur in the macronucleus of Tetrahymena synchronized by the heat shock method; second, to compare these observations with Tetrahymena synchronized by an alternate method recently reported by Padilla and Cameron ('64). We were therefore able to compare the results from two different synchronization methods and to contrast these findings with the macronuclear cytology of Tetrahymena taken from a logarithmically growing culture. Comparison of cells treated in these three different ways enables us to evaluate the two different synchronization methods and to gain more information on the structural changes taking place in the macronucleus of Tetrahymena as a function of the cell cycle. Our observations were confined primarily to nucleolar morphology. The results indicate that cells synchronized by the Padilla and Cameron method more closely resemble logarithmically growing Tetrahymena in macronuclear structure than do cells obtained by the Scherbaum and Zeuthen synchronization method.

  19. Computing Logarithms by Hand

    ERIC Educational Resources Information Center

    Reed, Cameron

    2016-01-01

    How can old-fashioned tables of logarithms be computed without technology? Today, of course, no practicing mathematician, scientist, or engineer would actually use logarithms to carry out a calculation, let alone worry about deriving them from scratch. But high school students may be curious about the process. This article develops a…
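    The record does not reproduce the article's derivation; as a purely illustrative sketch of one classical by-hand route (an atanh-type series, which may differ from the method the article actually develops), a small table of common logarithms can be built from scratch as follows.

    ```python
    # Illustrative sketch only: one classical way to compute common logarithms
    # "by hand", using the series
    #   ln(x) = 2 * (y + y^3/3 + y^5/5 + ...),  y = (x - 1)/(x + 1),
    # which converges for every x > 0. Not necessarily the article's method.

    def ln_series(x: float, terms: int = 60) -> float:
        """Natural logarithm of x > 0 via the atanh-type series."""
        y = (x - 1.0) / (x + 1.0)
        total, power = 0.0, y
        for k in range(terms):
            total += power / (2 * k + 1)
            power *= y * y
        return 2.0 * total

    LN10 = ln_series(10.0)

    def log10_by_hand(x: float) -> float:
        return ln_series(x) / LN10

    if __name__ == "__main__":
        for n in range(2, 10):
            print(n, round(log10_by_hand(n), 6))   # e.g. 2 -> 0.30103
    ```

    Because the series converges for every positive argument, a full table can be filled in entry by entry with nothing more than arithmetic.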

  20. Logarithmic scaling for fluctuations of a scalar concentration in wall turbulence.

    PubMed

    Mouri, Hideaki; Morinaga, Takeshi; Yagi, Toshimasa; Mori, Kazuyasu

    2017-12-01

    Within wall turbulence, there is a sublayer where the mean velocity and the variance of velocity fluctuations vary logarithmically with the height from the wall. This logarithmic scaling is also known for the mean concentration of a passive scalar. By using heat as such a scalar in a laboratory experiment of a turbulent boundary layer, the existence of the logarithmic scaling is shown here for the variance of fluctuations of the scalar concentration. It is reproduced by a model of energy-containing eddies that are attached to the wall.

  1. Mathematical model for logarithmic scaling of velocity fluctuations in wall turbulence.

    PubMed

    Mouri, Hideaki

    2015-12-01

    For wall turbulence, moments of velocity fluctuations are known to be logarithmic functions of the height from the wall. This logarithmic scaling is due to the existence of a characteristic velocity and to the nonexistence of any characteristic height in the range of the scaling. By using the mathematics of random variables, we obtain its necessary and sufficient conditions. They are compared with characteristics of a phenomenological model of eddies attached to the wall and also with those of the logarithmic scaling of the mean velocity.

  2. Logarithmic amplifiers.

    PubMed

    Gandler, W; Shapiro, H

    1990-01-01

    Logarithmic amplifiers (log amps), which produce an output signal proportional to the logarithm of the input signal, are widely used in cytometry for measurements of parameters that vary over a wide dynamic range, e.g., cell surface immunofluorescence. Existing log amp circuits all deviate to some extent from ideal performance with respect to dynamic range and fidelity to the logarithmic curve; accuracy in quantitative analysis using log amps therefore requires that log amps be individually calibrated. However, accuracy and precision may be limited by photon statistics and system noise when very low level input signals are encountered.

  3. Stress Energy tensor in LCFT and the Logarithmic Sugawara construction

    NASA Astrophysics Data System (ADS)

    Kogan, Ian I.; Nichols, Alexander

    2002-01-01

    We discuss the partners of the stress energy tensor and their structure in Logarithmic conformal field theories. In particular we draw attention to the fundamental differences between theories with zero and non-zero central charge. However they are both characterised by at least two independent parameters. We show how, by using a generalised Sugawara construction, one can calculate the logarithmic partner of T. We show that such a construction works in the c = -2 theory using the conformal dimension one primary currents which generate a logarithmic extension of the Kac-Moody algebra.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, Mark J.; Saleh, Omar A.

    We calculated the force-extension curves for a flexible polyelectrolyte chain with varying charge separations by performing Monte Carlo simulations of a 5000 bead chain using a screened Coulomb interaction. At all charge separations, the force-extension curves exhibit a Pincus-like scaling regime at intermediate forces and a logarithmic regime at large forces. As the charge separation increases, the Pincus regime shifts to a larger range of forces and the logarithmic regime starts at larger forces. We also found that the force-extension curve for the corresponding neutral chain has a logarithmic regime. Decreasing the bead diameter in the neutral chain simulations removed the logarithmic regime, and the force-extension curve tends to the freely jointed chain limit. In conclusion, this result shows that only excluded volume is required for the high-force logarithmic regime to occur.

  5. Logarithmic M(2,p) minimal models, their logarithmic couplings, and duality

    NASA Astrophysics Data System (ADS)

    Mathieu, Pierre; Ridout, David

    2008-10-01

    A natural construction of the logarithmic extension of the M(2,p) (chiral) minimal models is presented, which generalises our previous model of percolation ( p=3). Its key aspect is the replacement of the minimal model irreducible modules by reducible ones obtained by requiring that only one of the two principal singular vectors of each module vanish. The resulting theory is then constructed systematically by repeatedly fusing these building block representations. This generates indecomposable representations of the type which signify the presence of logarithmic partner fields in the theory. The basic data characterising these indecomposable modules, the logarithmic couplings, are computed for many special cases and given a new structural interpretation. Quite remarkably, a number of them are presented in closed analytic form (for general p). These are the prime examples of "gauge-invariant" data—quantities independent of the ambiguities present in defining the logarithmic partner fields. Finally, mere global conformal invariance is shown to enforce strong constraints on the allowed spectrum: It is not possible to include modules other than those generated by the fusion of the model's building blocks. This generalises the statement that there cannot exist two effective central charges in a c=0 model. It also suggests the existence of a second "dual" logarithmic theory for each p. Such dual models are briefly discussed.

  6. Are infant mortality rate declines exponential? The general pattern of 20th century infant mortality rate decline

    PubMed Central

    Bishai, David; Opuni, Marjorie

    2009-01-01

    Background Time trends in infant mortality for the 20th century show a curvilinear pattern that most demographers have assumed to be approximately exponential. Virtually all cross-country comparisons and time series analyses of infant mortality have studied the logarithm of infant mortality to account for the curvilinear time trend. However, there is no evidence that the log transform is the best fit for infant mortality time trends. Methods We use maximum likelihood methods to determine the best transformation to fit time trends in infant mortality reduction in the 20th century and to assess the importance of the proper transformation in identifying the relationship between infant mortality and gross domestic product (GDP) per capita. We apply the Box Cox transform to infant mortality rate (IMR) time series from 18 countries to identify the best fitting value of lambda for each country and for the pooled sample. For each country, we test the value of λ against the null that λ = 0 (logarithmic model) and against the null that λ = 1 (linear model). We then demonstrate the importance of selecting the proper transformation by comparing regressions of ln(IMR) on same year GDP per capita against Box Cox transformed models. Results Based on chi-squared test statistics, infant mortality decline is best described as an exponential decline only for the United States. For the remaining 17 countries we study, IMR decline is neither best modelled as logarithmic nor as a linear process. Imposing a logarithmic transform on IMR can lead to bias in fitting the relationship between IMR and GDP per capita. Conclusion The assumption that IMR declines are exponential is enshrined in the Preston curve and in nearly all cross-country as well as time series analyses of IMR data since Preston's 1975 paper, but this assumption is seldom correct. Statistical analyses of IMR trends should assess the robustness of findings to transformations other than the log transform. PMID:19698144
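    For reference, the Box-Cox family used above has the standard textbook form (not anything specific to this study):

    ```latex
    y^{(\lambda)} =
    \begin{cases}
    \dfrac{y^{\lambda}-1}{\lambda}, & \lambda \neq 0,\\[4pt]
    \ln y, & \lambda = 0,
    \end{cases}
    ```

    so that testing the fitted λ against 0 and against 1 is exactly the comparison between the logarithmic and (affinely shifted) linear models described in the abstract.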

  7. How Do Students Acquire an Understanding of Logarithmic Concepts?

    ERIC Educational Resources Information Center

    Mulqueeny, Ellen

    2012-01-01

    The use of logarithms, an important tool for calculus and beyond, has been reduced to symbol manipulation without understanding in most entry-level college algebra courses. The primary aim of this research, therefore, was to investigate college students' understanding of logarithmic concepts through the use of a series of instructional tasks…

  8. Design of a Programmable Gain, Temperature Compensated Current-Input Current-Output CMOS Logarithmic Amplifier.

    PubMed

    Ming Gu; Chakrabartty, Shantanu

    2014-06-01

    This paper presents the design of a programmable gain, temperature compensated, current-mode CMOS logarithmic amplifier that can be used for biomedical signal processing. Unlike conventional logarithmic amplifiers that use a transimpedance technique to generate a voltage signal as a logarithmic function of the input current, the proposed approach directly produces a current output as a logarithmic function of the input current. Also, unlike a conventional transimpedance amplifier the gain of the proposed logarithmic amplifier can be programmed using floating-gate trimming circuits. The synthesis of the proposed circuit is based on the Hart's extended translinear principle which involves embedding a floating-voltage source and a linear resistive element within a translinear loop. Temperature compensation is then achieved using a translinear-based resistive cancelation technique. Measured results from prototypes fabricated in a 0.5 μm CMOS process show that the amplifier has an input dynamic range of 120 dB and a temperature sensitivity of 230 ppm/°C (27 °C- 57°C), while consuming less than 100 nW of power.
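    Schematically, the current-in/current-out behaviour described above is of the generic form below; I_ref and I_u are reference and unit currents introduced here only for illustration, not parameters quoted in the paper:

    ```latex
    I_{\mathrm{out}} \;=\; G \, I_{u} \, \ln\!\left(\frac{I_{\mathrm{in}}}{I_{\mathrm{ref}}}\right),
    ```

    with the programmable gain G set by the floating-gate trimming and the temperature compensation intended to suppress the residual dependence on the thermal voltage V_T = k_B T / q.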

  9. Factorization for jet radius logarithms in jet mass spectra at the LHC

    DOE PAGES

    Kolodrubetz, Daniel W.; Pietrulewicz, Piotr; Stewart, Iain W.; ...

    2016-12-14

    To predict the jet mass spectrum at a hadron collider it is crucial to account for the resummation of logarithms between the transverse momentum of the jet and its invariant mass m_J. For small jet areas there are additional large logarithms of the jet radius R, which affect the convergence of the perturbative series. We present an analytic framework for exclusive jet production at the LHC which gives a complete description of the jet mass spectrum including realistic jet algorithms and jet vetoes. It factorizes the scales associated with m_J, R, and the jet veto, enabling in addition the systematic resummation of jet radius logarithms in the jet mass spectrum beyond leading logarithmic order. We discuss the factorization formulae for the peak and tail region of the jet mass spectrum and for small and large R, and the relations between the different regimes and how to combine them. Regions of experimental interest are classified which do not involve large nonglobal logarithms. We also present universal results for nonperturbative effects and discuss various jet vetoes.

  10. Analytic Evolution of Singular Distribution Amplitudes in QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tandogan Kunkel, Asli

    2014-08-01

    Distribution amplitudes (DAs) are the basic functions that contain information about the quark momentum. DAs are necessary to describe hard exclusive processes in quantum chromodynamics. We describe a method of analytic evolution of DAs that have singularities such as nonzero values at the end points of the support region, jumps at some points inside the support region and cusps. We illustrate the method by applying it to the evolution of a flat (constant) DA and an antisymmetric flat DA, and then use the method for the evolution of the two-photon generalized distribution amplitude. Our approach to DA evolution has advantages over the standard method of expansion in Gegenbauer polynomials [1, 2] and over a straightforward iteration of an initial distribution with the evolution kernel. Expansion in Gegenbauer polynomials requires an infinite number of terms in order to accurately reproduce functions in the vicinity of singular points. Straightforward iteration of an initial distribution produces logarithmically divergent terms at each iteration. In our method the logarithmic singularities are summed from the start, which immediately produces a continuous curve. Afterwards, in order to get precise results, only one or two iterations are needed.

  11. Two-level Schwarz methods for nonconforming finite elements and discontinuous coefficients

    NASA Technical Reports Server (NTRS)

    Sarkis, Marcus

    1993-01-01

    Two-level domain decomposition methods are developed for a simple nonconforming approximation of second order elliptic problems. A bound is established for the condition number of these iterative methods, which grows only logarithmically with the number of degrees of freedom in each subregion. This bound holds for two and three dimensions and is independent of jumps in the value of the coefficients.

  12. Theoretical Studies of Relaxation and Optical Properties of Polymers

    NASA Astrophysics Data System (ADS)

    Jin, Bih-Yaw

    1993-01-01

    This thesis is composed of two parts. In part one, the empirical correlation between the logarithm of tunneling splittings and the temperature at which the spin-lattice relaxation time is minimum for methyl groups in different molecular crystals is explained successfully by taking multiphonon processes into account. We show that one-phonon transitions dominate in the low barrier limit. However, in the intermediate barrier range and high barrier limit, it is necessary to include multiphonon processes. We also show that the empirical correlation depends only logarithmically on the details of the phonon bath. In part two, we have investigated the optical and relaxation properties of conjugated polymers. The connection between the vibronic picture of Raman scattering and the third-order perturbation approach in solid state physics is clarified in chapter 2. Starting from the Kramers-Heisenberg-Dirac formula for Raman scattering, we derive expressions for the Condon and Herzberg-Teller terms from a simple two-level system to a two-band system, i.e. polyacetylene, by using the traditional vibronic picture. Both the Condon and Herzberg-Teller terms contribute to two-band processes, while three-band processes consist only of Herzberg-Teller terms in the solid state limit. Close to resonance the Condon term dominates and converges to the usual solid state result. In the off-resonance region the Herzberg-Teller term is comparable to the Condon term for both small molecules and solid state systems. In chapter 3, we will concentrate on the lattice relaxation of the lowest optically allowed 1B_u state, especially the effect of electron correlation on the excited state geometric relaxation for finite polyenes. We have examined the competition between electron-electron interaction and electron-phonon coupling on the formation of localized lattice distortion in the 1B_u state for finite polyenes with chain lengths up to 30 double bonds. The chain length dependence of the lattice relaxation in the 1B_u state has been studied thoroughly within singly excited configuration interaction for the short-range Hubbard, extended Hubbard and long-range Pariser-Parr-Pople models. We have found that local distortion is not favored until a critical chain length is reached. Beyond this critical length, which is a function of the electron-electron interaction and electron-phonon coupling strength, a self-trapped exciton is formed rather than the separated soliton-antisoliton configuration as expected in the independent electron theory.

  13. A Probabilistic Model for Predicting Attenuation of Viruses During Percolation in Unsaturated Natural Barriers

    NASA Astrophysics Data System (ADS)

    Faulkner, B. R.; Lyon, W. G.

    2001-12-01

    We present a probabilistic model for predicting virus attenuation. The solution employs the assumption of complete mixing. Monte Carlo methods are used to generate ensemble simulations of virus attenuation due to physical, biological, and chemical factors. The model generates a probability of failure to achieve 4-log attenuation. We tabulated data from related studies to develop probability density functions for input parameters, and utilized a database of soil hydraulic parameters based on the 12 USDA soil categories. Regulators can use the model based on limited information such as boring logs, climate data, and soil survey reports for a particular site of interest. Plackett-Burman sensitivity analysis indicated the most important main effects on probability of failure to achieve 4-log attenuation in our model were mean logarithm of saturated hydraulic conductivity (+0.396), mean water content (+0.203), mean solid-water mass transfer coefficient (-0.147), and the mean solid-water equilibrium partitioning coefficient (-0.144). Using the model, we predicted the probability of failure of a one-meter thick proposed hydrogeologic barrier and a water content of 0.3. With the currently available data and the associated uncertainty, we predicted soils classified as sand would fail (p=0.999), silt loams would also fail (p=0.292), but soils classified as clays would provide the required 4-log attenuation (p=0.001). The model is extendible in the sense that probability density functions of parameters can be modified as future studies refine the uncertainty, and the lightweight object-oriented design of the computer model (implemented in Java) will facilitate reuse with modified classes. This is an abstract of a proposed presentation and does not necessarily reflect EPA policy.
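    As a purely illustrative sketch of this kind of ensemble calculation (the attenuation model, parameter names, and distributions below are placeholders, not the study's calibrated model), a probability of failure to reach 4-log attenuation can be estimated by sampling inputs and counting failing realizations.

    ```python
    # Hypothetical Monte Carlo sketch of a "probability of failure to achieve
    # 4-log attenuation" calculation. The toy response surface and parameter
    # distributions are placeholders, NOT the study's calibrated model.
    import random

    def simulated_log_attenuation(rng: random.Random) -> float:
        log_Ks = rng.gauss(-5.0, 1.0)   # log10 saturated hydraulic conductivity (placeholder)
        theta  = rng.gauss(0.30, 0.05)  # water content (placeholder)
        k_mt   = rng.gauss(0.8, 0.3)    # solid-water mass transfer coefficient (placeholder)
        # Toy model: slower percolation and stronger sorption give more attenuation.
        return 4.0 - 0.8 * (log_Ks + 5.0) - 3.0 * (theta - 0.30) + 1.5 * (k_mt - 0.8)

    def probability_of_failure(n: int = 100_000, seed: int = 1) -> float:
        rng = random.Random(seed)
        failures = sum(1 for _ in range(n) if simulated_log_attenuation(rng) < 4.0)
        return failures / n

    if __name__ == "__main__":
        print(f"P(failure to reach 4-log attenuation) ≈ {probability_of_failure():.3f}")
    ```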

  14. The Logarithmic Tail of Néel Walls

    NASA Astrophysics Data System (ADS)

    Melcher, Christof

    We study the multiscale problem of a parametrized planar 180° rotation of magnetization states in a thin ferromagnetic film. In an appropriate scaling and when the film thickness is comparable to the Bloch line width, the underlying variational principle takes a form in which the reduced stray-field operator approximates (-Δ)^{1/2} as the quality factor Q tends to zero. We show that the associated Néel wall profile u exhibits a very long logarithmic tail. The proof relies on limiting elliptic regularity methods on the basis of the associated Euler-Lagrange equation and symmetrization arguments on the basis of the variational principle. Finally we study the renormalized limit behavior as Q tends to zero.

  15. A Comparative Study of the Dispersion of Multi-Wall Carbon Nanotubes Made by Arc-Discharge and Chemical Vapour Deposition.

    PubMed

    Frømyr, Tomas-Roll; Bourgeaux-Goget, Marie; Hansen, Finn Knut

    2015-05-01

    A method has been developed to characterize the dispersion of multi-wall carbon nanotubes in water using a disc centrifuge for the detection of individual carbon nanotubes, residual aggregates, and contaminants. Carbon nanotubes produced by arc-discharge have been measured and compared with carbon nanotubes produced by chemical vapour deposition. Studies performed on pristine arc-discharge nanotubes indicate that the binding within the nanotube bundles is rather strong and that high ultrasound intensity is required to achieve complete dispersion of the bundles. The logarithm of the mode of the particle size distribution of the arc-discharge carbon nanotubes was found to be a linear function of the logarithm of the total ultrasonic energy input in the dispersion process.

  16. Top Quark Mass Calibration for Monte Carlo Event Generators.

    PubMed

    Butenschoen, Mathias; Dehnadi, Bahman; Hoang, André H; Mateu, Vicent; Preisser, Moritz; Stewart, Iain W

    2016-12-02

    The most precise top quark mass measurements use kinematic reconstruction methods, determining the top mass parameter of a Monte Carlo event generator m_{t}^{MC}. Because of hadronization and parton-shower dynamics, relating m_{t}^{MC} to a field theory mass is difficult. We present a calibration procedure to determine this relation using hadron level QCD predictions for observables with kinematic mass sensitivity. Fitting e^{+}e^{-} 2-jettiness calculations at next-to-leading-logarithmic and next-to-next-to-leading-logarithmic order to pythia 8.205, m_{t}^{MC} differs from the pole mass by 900 and 600 MeV, respectively, and agrees with the MSR mass within uncertainties, m_{t}^{MC}≃m_{t,1  GeV}^{MSR}.

  17. On Complicated Expansions of Solutions to ODES

    NASA Astrophysics Data System (ADS)

    Bruno, A. D.

    2018-03-01

    Polynomial ordinary differential equations are studied by asymptotic methods. The truncated equation associated with a vertex or a nonhorizontal edge of the polygon of the initial equation is assumed to have a solution containing the logarithm of the independent variable. It is shown that, under very weak constraints, this nonpower asymptotic form of solutions to the original equation can be extended to an asymptotic expansion of these solutions. This is an expansion in powers of the independent variable with coefficients being Laurent series in decreasing powers of the logarithm. Such expansions are sometimes called psi-series. Algorithms for such computations are described. Six examples are given. Four of them are concerned with Painlevé equations. An unexpected property of these expansions is revealed.

  18. Simulations of stretching a flexible polyelectrolyte with varying charge separation

    DOE PAGES

    Stevens, Mark J.; Saleh, Omar A.

    2016-07-22

    We calculated the force-extension curves for a flexible polyelectrolyte chain with varying charge separations by performing Monte Carlo simulations of a 5000 bead chain using a screened Coulomb interaction. At all charge separations, the force-extension curves exhibit a Pincus-like scaling regime at intermediate forces and a logarithmic regime at large forces. As the charge separation increases, the Pincus regime shifts to a larger range of forces and the logarithmic regime starts at larger forces. We also found that the force-extension curve for the corresponding neutral chain has a logarithmic regime. Decreasing the bead diameter in the neutral chain simulations removed the logarithmic regime, and the force-extension curve tends to the freely jointed chain limit. In conclusion, this result shows that only excluded volume is required for the high-force logarithmic regime to occur.

  19. Improved maximum average correlation height filter with adaptive log base selection for object recognition

    NASA Astrophysics Data System (ADS)

    Tehsin, Sara; Rehman, Saad; Awan, Ahmad B.; Chaudry, Qaiser; Abbas, Muhammad; Young, Rupert; Asif, Afia

    2016-04-01

    Sensitivity to the variations in the reference image is a major concern when recognizing target objects. A combinational framework of correlation filters and logarithmic transformation has been previously reported to resolve this issue alongside catering for scale and rotation changes of the object in the presence of distortion and noise. In this paper, we have extended the work to include the influence of different logarithmic bases on the resultant correlation plane. The meaningful changes in correlation parameters along with contraction/expansion in the correlation plane peak have been identified under different scenarios. Based on our research, we propose some specific log bases to be used in logarithmically transformed correlation filters for achieving suitable tolerance to different variations. The study is based upon testing a range of logarithmic bases for different situations and finding an optimal logarithmic base for each particular set of distortions. Our results show improved correlation and target detection accuracies.

  20. Establishment of a Method for Measuring Antioxidant Capacity in Urine, Based on Oxidation Reduction Potential and Redox Couple I2/KI

    PubMed Central

    Cao, Tinghui; He, Min; Bai, Tianyu

    2016-01-01

    Objectives. To establish a new method for determination of the antioxidant capacity of human urine based on the redox couple I2/KI and to evaluate the redox status of healthy and diseased individuals. Methods. The method was based on the linear relationship between the oxidation reduction potential (ORP) and the logarithm of the concentration ratio of I2/KI. The ORP of a solution with a known concentration ratio of I2/KI will change when reacted with urine. To determine the accuracy of the method, both vitamin C and urine were reacted separately with the I2/KI solution. The new method was compared with the traditional method of iodine titration and then used to measure the antioxidant capacity of urine samples from 30 diabetic patients and 30 healthy subjects. Results. A linear relationship was found between the logarithm of the concentration ratio of I2/KI and ORP (R² = 0.998). Both vitamin C and urine concentration showed a linear relationship with ORP (R² = 0.994 and 0.986, resp.). The precision of the method was in the acceptable range and the results of the two methods had a linear correlation (R² = 0.987). Differences in ORP values between the diabetic group and the control group were statistically significant (P < 0.05). Conclusions. A new method for measuring the antioxidant capacity of clinical urine has been established. PMID:28115919
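    The linearity exploited above is of the Nernst type; for the iodine/iodide couple it can be written schematically as (standard electrochemistry, not the paper's fitted calibration):

    ```latex
    E \;=\; E^{\circ} + \frac{RT}{2F}\,\ln\!\frac{a_{\mathrm{I_2}}}{a_{\mathrm{I^-}}^{2}}
      \;\approx\; a + b\,\log_{10}\!\frac{[\mathrm{I_2}]}{[\mathrm{KI}]},
    ```

    so a measured ORP maps linearly onto the logarithm of the concentration ratio, which is what the calibration curve with R² = 0.998 reflects.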

  1. Benjamin Banneker and the Law of Sines

    ERIC Educational Resources Information Center

    Mahoney, John F.

    2005-01-01

    Benjamin Banneker, a self-taught mathematician, surveyor and astronomer, published annual almanacs containing his astronomical observations and predictions. Banneker, who also used logarithms to apply the Law of Sines, believed that the method used to solve a mathematical problem depends on the tools available.

  2. Choice of crystal surface finishing for a dual-ended readout depth-of-interaction (DOI) detector.

    PubMed

    Fan, Peng; Ma, Tianyu; Wei, Qingyang; Yao, Rutao; Liu, Yaqiang; Wang, Shi

    2016-02-07

    The objective of this study was to choose the crystal surface finishing for a dual-ended readout (DER) DOI detector. Through Monte Carlo simulations and experimental studies, we evaluated 4 crystal surface finishing options as combinations of crystal surface polishing (diffuse or specular) and reflector (diffuse or specular) options on a DER detector. We also tested one linear and one logarithm DOI calculation algorithm. The figures of merit used were DOI resolution, DOI positioning error, and energy resolution. Both the simulation and experimental results show that (1) choosing a diffuse type in either surface polishing or reflector would improve DOI resolution but degrade energy resolution; (2) crystal surface finishing with a diffuse polishing combined with a specular reflector appears a favorable candidate with a good balance of DOI and energy resolution; and (3) the linear and logarithm DOI calculation algorithms show overall comparable DOI error, and the linear algorithm was better for photon interactions near the ends of the crystal while the logarithm algorithm was better near the center. These results provide useful guidance in DER DOI detector design in choosing the crystal surface finishing and DOI calculation methods.
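    For a dual-ended readout detector, the two classes of estimator compared above are commonly built from the light outputs at the two crystal ends; the sketch below shows generic linear and logarithmic forms with assumed calibration constants, not the paper's exact parameterisation.

    ```python
    # Generic sketch of linear vs. logarithmic DOI estimators for a dual-ended
    # readout detector with light outputs s1, s2 at the two crystal ends.
    # The calibration constants a and b are assumed placeholders, not values
    # taken from the paper.
    import math

    def doi_linear(s1: float, s2: float, a: float = 1.0, b: float = 0.0) -> float:
        """DOI from the signal asymmetry (s1 - s2)/(s1 + s2), scaled linearly."""
        return a * (s1 - s2) / (s1 + s2) + b

    def doi_logarithmic(s1: float, s2: float, a: float = 1.0, b: float = 0.0) -> float:
        """DOI from the logarithm of the signal ratio ln(s1/s2), scaled linearly."""
        return a * math.log(s1 / s2) + b

    if __name__ == "__main__":
        s1, s2 = 420.0, 300.0   # example light outputs in arbitrary units
        print(doi_linear(s1, s2), doi_logarithmic(s1, s2))
    ```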

  3. Leading chiral logarithms for the nucleon mass

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vladimirov, Alexey A.; Bijnens, Johan

    2016-01-22

    We give a short introduction to the calculation of the leading chiral logarithms, and present the results of the recent evaluation of the LLog series for the nucleon mass within the heavy baryon theory. The presented results are the first example of an LLog calculation in nucleon ChPT. We also discuss some regularities observed in the leading logarithmic series for the nucleon mass.

  4. Robust Bioinformatics Recognition with VLSI Biochip Microsystem

    NASA Technical Reports Server (NTRS)

    Lue, Jaw-Chyng L.; Fang, Wai-Chi

    2006-01-01

    A microsystem architecture for real-time, on-site, robust bioinformatic pattern recognition and analysis has been proposed. This system is compatible with on-chip DNA analysis means such as polymerase chain reaction (PCR) amplification. A corresponding novel artificial neural network (ANN) learning algorithm using a new sigmoid-logarithmic transfer function based on the error backpropagation (EBP) algorithm is invented. Our results show the trained new ANN can recognize low fluorescence patterns better than the conventional sigmoidal ANN does. A differential logarithmic imaging chip is designed for calculating the logarithm of the relative intensities of fluorescence signals. The single-rail logarithmic circuit and a prototype ANN chip are designed, fabricated and characterized.

  5. Isotopic effects in sub-barrier fusion of Si + Si systems

    NASA Astrophysics Data System (ADS)

    Colucci, G.; Montagnoli, G.; Stefanini, A. M.; Esbensen, H.; Bourgin, D.; Čolović, P.; Corradi, L.; Faggian, M.; Fioretto, E.; Galtarossa, F.; Goasduff, A.; Grebosz, J.; Haas, F.; Mazzocco, M.; Scarlassara, F.; Stefanini, C.; Strano, E.; Szilner, S.; Urbani, M.; Zhang, G. L.

    2018-04-01

    Background: Recent measurements of fusion cross sections for the 28Si+28Si system revealed a rather unsystematic behavior; i.e., they drop faster near the barrier than at lower energies. This was tentatively attributed to the large oblate deformation of 28Si because coupled-channels (CC) calculations largely underestimate the 28Si+28Si cross sections at low energies, unless a weak imaginary potential is applied, probably simulating the deformation. 30Si has no permanent deformation and its low-energy excitations are of a vibrational nature. Previous measurements of this system reached only 4 mb, which is not sufficient to obtain information on effects that should show up at lower energies. Purpose: The aim of the present experiment was twofold: (i) to clarify the underlying fusion dynamics by measuring the symmetric case 30Si+30Si in an energy range from around the Coulomb barrier to deep sub-barrier energies, and (ii) to compare the results with the behavior of 28Si+28Si involving two deformed nuclei. Methods: 30Si beams from the XTU tandem accelerator of the Laboratori Nazionali di Legnaro of the Istituto Nazionale di Fisica Nucleare were used, bombarding thin metallic 30Si targets (50 μg/cm²) enriched to 99.64% in mass 30. An electrostatic beam deflector allowed the detection of fusion evaporation residues (ERs) at very forward angles, and angular distributions of ERs were measured. Results: The excitation function of 30Si+30Si was measured down to the level of a few microbarns. It has a regular shape, at variance with the unusual trend of 28Si+28Si. The extracted logarithmic derivative does not reach the LCS limit at low energies, so that no maximum of the S factor shows up. CC calculations were performed including the low-lying 2+ and 3- excitations. Conclusions: Using a Woods-Saxon potential the experimental cross sections at low energies are overpredicted, and this is a clear sign of hindrance, while the calculations performed with a M3Y + repulsion potential nicely fit the data at low energies, without the need of an imaginary potential. The comparison with the results for 28Si+28Si strengthens the explanation of the oblate shape of 28Si being the reason for the irregular behavior of that system.

  6. Program for Calculating the Cubic and Fifth Roots of a Number by Newton's Method (610 IBM Electronic Computer); PROGRAMMA PER IL CALCOLO DELLE RADICI TERZA E QUINTA DI UN NUMERO, COL METODO DI NEWTON (CALCOLATORE ELETTRONICO IBM 610)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giucci, D.

    1963-01-01

    A program was devised for calculating the cubic and fifth roots of a number by Newton's method using the 610 IBM electronic computer. For convenience a program was added for obtaining nth roots by the logarithmic method. (auth)
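    Independently of the historical IBM 610 program, the underlying arithmetic is easy to reproduce; the sketch below shows cube and fifth roots by Newton's method together with an nth-root-by-logarithms variant for comparison (illustrative code, not a transcription of the original program).

    ```python
    # Illustrative sketch: cube and fifth roots by Newton's method, plus the
    # "logarithmic method" (n-th root via exp(ln(a)/n)) for comparison.
    import math

    def newton_nth_root(a: float, n: int, tol: float = 1e-12, max_iter: int = 100) -> float:
        """Solve x**n = a for a > 0 using the iteration x <- ((n-1)*x + a/x**(n-1)) / n."""
        x = a if a >= 1.0 else 1.0          # simple positive starting guess
        for _ in range(max_iter):
            x_new = ((n - 1) * x + a / x ** (n - 1)) / n
            if abs(x_new - x) < tol * max(1.0, abs(x_new)):
                return x_new
            x = x_new
        return x

    def log_nth_root(a: float, n: int) -> float:
        """n-th root by the logarithmic method: a**(1/n) = exp(ln(a)/n)."""
        return math.exp(math.log(a) / n)

    if __name__ == "__main__":
        print(newton_nth_root(2.0, 3), log_nth_root(2.0, 3))   # cube roots of 2
        print(newton_nth_root(2.0, 5), log_nth_root(2.0, 5))   # fifth roots of 2
    ```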

  7. Strength and life criteria for corrugated fiberboard by three methods

    Treesearch

    Thomas J. Urbanik

    1997-01-01

    The conventional test method for determining the stacking life of corrugated containers at a fixed load level does not adequately predict a safe load when storage time is fixed. This study introduced multiple load levels and related the probability of time at failure to load. A statistical analysis of logarithm-of-time failure data varying with load level predicts the...

  8. Analysis of aircraft spectrometer data with logarithmic residuals

    NASA Technical Reports Server (NTRS)

    Green, A. A.; Craig, M. D.

    1985-01-01

    Spectra from airborne systems must be analyzed in terms of their mineral-related absorption features. Methods for removing backgrounds and extracting these features one at a time from reflectance spectra are discussed. Methods for converting radiance spectra into a form similar to reflectance spectra so that the feature extraction procedures can be implemented on aircraft spectrometer data are also discussed.

  9. Symmetry Properties of Potentiometric Titration Curves.

    ERIC Educational Resources Information Center

    Macca, Carlo; Bombi, G. Giorgio

    1983-01-01

    Demonstrates how the symmetry properties of titration curves can be efficiently and rigorously treated by means of a simple method, assisted by the use of logarithmic diagrams. Discusses the symmetry properties of several typical titration curves, comparing the graphical approach and an explicit mathematical treatment. (Author/JM)

  10. Exact density-potential pairs from complex-shifted axisymmetric systems

    NASA Astrophysics Data System (ADS)

    Ciotti, Luca; Marinacci, Federico

    2008-07-01

    In a previous paper, the complex-shift method has been applied to self-gravitating spherical systems, producing new analytical axisymmetric density-potential pairs. We now extend the treatment to the Miyamoto-Nagai disc and the Binney logarithmic halo, and we study the resulting axisymmetric and triaxial analytical density-potential pairs; we also show how to obtain the surface density of shifted systems from the complex shift of the surface density of the parent model. In particular, the systems obtained from Miyamoto-Nagai discs can be used to describe disc galaxies with a peanut-shaped bulge or with a central triaxial bar, depending on the direction of the shift vector. By using a constructive method that can be applied to generic axisymmetric systems, we finally show that the Miyamoto-Nagai and the Satoh discs, and the Binney logarithmic halo cannot be obtained from the complex shift of any spherical parent distribution. As a by-product of this study, we also found two new generating functions in closed form for even and odd Legendre polynomials, respectively.

  11. Traps in AlGaN/GaN/SiC heterostructures studied by deep level transient spectroscopy

    NASA Astrophysics Data System (ADS)

    Fang, Z.-Q.; Look, D. C.; Kim, D. H.; Adesida, I.

    2005-10-01

    AlGaN/GaN/SiC Schottky barrier diodes (SBDs), with and without Si3N4 passivation, have been characterized by temperature-dependent current-voltage and capacitance-voltage measurements, and deep level transient spectroscopy (DLTS). A dominant trap A1, with an activation energy of 1.0 eV and an apparent capture cross section of 2 × 10⁻¹² cm², has been observed in both unpassivated and passivated SBDs. Based on the well-known logarithmic dependence of the DLTS peak height on filling pulse width for a line-defect related trap, A1, which is commonly observed in thin GaN layers grown by various techniques, is believed to be associated with threading dislocations. At high temperatures, the DLTS signal sometimes becomes negative, likely due to an artificial surface-state effect.
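    The dislocation signature invoked above is the filling-pulse dependence of the DLTS peak amplitude; for a line-defect related trap it grows roughly logarithmically with pulse width, schematically (with t₀ an assumed normalisation constant):

    ```latex
    \Delta C(t_p) \;\propto\; \ln\!\left(1 + \frac{t_p}{t_0}\right),
    ```

    in contrast to the exponential saturation ΔC ∝ 1 − e^{−t_p/τ} expected for isolated point defects.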

  12. Top Quark Mass Calibration for Monte Carlo Event Generators

    DOE PAGES

    Butenschoen, Mathias; Dehnadi, Bahman; Hoang, André H.; ...

    2016-11-29

    The most precise top quark mass measurements use kinematic reconstruction methods, determining the top mass parameter of a Monte Carlo event generator m_{t}^{MC}. Because of hadronization and parton-shower dynamics, relating m_{t}^{MC} to a field theory mass is difficult. Here, we present a calibration procedure to determine this relation using hadron level QCD predictions for observables with kinematic mass sensitivity. Fitting e^{+}e^{-} 2-jettiness calculations at next-to-leading-logarithmic and next-to-next-to-leading-logarithmic order to PYTHIA 8.205, m_{t}^{MC} differs from the pole mass by 900 and 600 MeV, respectively, and agrees with the MSR mass within uncertainties, m_{t}^{MC} ≃ m_{t,1 GeV}^{MSR}.

  13. On the problem of data assimilation by means of synchronization

    NASA Astrophysics Data System (ADS)

    Szendro, Ivan G.; RodríGuez, Miguel A.; López, Juan M.

    2009-10-01

    The potential use of synchronization as a method for data assimilation is investigated in a Lorenz96 model. Data representing the reality are obtained from a Lorenz96 model with added noise. We study the assimilation scheme by means of synchronization for different noise intensities. We use a novel plot representation of the synchronization error in a phase diagram consisting of two variables: the amplitude and the width of the error after a suitable logarithmic transformation (the so-called mean-variance of logarithms diagram). Our main result concerns the existence of an "optimal" coupling for which the synchronization is maximal. We finally show how this allows us to quantify the degree of assimilation, providing a criterion for the selection of optimal couplings and validity of models.

  14. Gauge boson exchange in AdS d+1

    NASA Astrophysics Data System (ADS)

    D'Hoker, Eric; Freedman, Daniel Z.

    1999-04-01

    We study the amplitude for exchange of massless gauge bosons between pairs of massive scalar fields in anti-de Sitter space. In the AdS/CFT correspondence this amplitude describes the contribution of conserved flavor symmetry currents to 4-point functions of scalar operators in the boundary conformal theory. A concise, covariant, Y2K compatible derivation of the gauge boson propagator in AdS d + 1 is given. Techniques are developed to calculate the two bulk integrals over AdS space leading to explicit expressions or convenient, simple integral representations for the amplitude. The amplitude contains leading power and sub-leading logarithmic singularities in the gauge boson channel and leading logarithms in the crossed channel. The new methods of this paper are expected to have other applications in the study of the Maldacena conjecture.

  15. Nonlinear Dot Plots.

    PubMed

    Rodrigues, Nils; Weiskopf, Daniel

    2018-01-01

    Conventional dot plots use a constant dot size and are typically applied to show the frequency distribution of small data sets. Unfortunately, they are not designed for a high dynamic range of frequencies. We address this problem by introducing nonlinear dot plots. Adopting the idea of nonlinear scaling from logarithmic bar charts, our plots allow for dots of varying size so that columns with a large number of samples are reduced in height. For the construction of these diagrams, we introduce an efficient two-way sweep algorithm that leads to a dense and symmetrical layout. We compensate aliasing artifacts at high dot densities by a specifically designed low-pass filtering method. Examples of nonlinear dot plots are compared to conventional dot plots as well as linear and logarithmic histograms. Finally, we include feedback from an expert review.

  16. Bio-Inspired Microsystem for Robust Genetic Assay Recognition

    PubMed Central

    Lue, Jaw-Chyng; Fang, Wai-Chi

    2008-01-01

    A compact integrated system-on-chip (SoC) architecture solution for robust, real-time, and on-site genetic analysis has been proposed. This microsystem solution is noise-tolerable and suitable for analyzing the weak fluorescence patterns from a PCR prepared dual-labeled DNA microchip assay. In the architecture, a preceding VLSI differential logarithm microchip is designed for effectively computing the logarithm of the normalized input fluorescence signals. A posterior VLSI artificial neural network (ANN) processor chip is used for analyzing the processed signals from the differential logarithm stage. A single-channel logarithmic circuit was fabricated and characterized. A prototype ANN chip with unsupervised winner-take-all (WTA) function was designed, fabricated, and tested. An ANN learning algorithm using a novel sigmoid-logarithmic transfer function based on the supervised backpropagation (BP) algorithm is proposed for robustly recognizing low-intensity patterns. Our results show that the trained new ANN can recognize low-fluorescence patterns better than an ANN using the conventional sigmoid function. PMID:18566679

  17. A study of the eigenvectors of low frequency vibrational modes in crystalline cytidine via high pressure Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Lee, Scott A.

    2014-03-01

    High-pressure Raman spectroscopy has been used to study the eigenvectors and eigenvalues of the low-frequency vibrational modes of crystalline cytidine at 295 K by evaluating the logarithmic derivative of the vibrational frequency with respect to pressure: 1/ω dω/dP. Crystalline samples of molecular materials such as cytidine have vibrational modes that are localized within a molecular unit ("internal" modes) as well as modes in which the molecular units vibrate against each other ("external" modes). The value of the logarithmic derivative is a diagnostic probe of the nature of the eigenvector of the vibrational modes, making high pressure experiments a very useful probe for such studies. Internal stretching modes have low logarithmic derivatives while external as well as internal torsional and bending modes have higher logarithmic derivatives. All of the Raman modes below 200 cm⁻¹ in cytidine are found to have high logarithmic derivatives, consistent with being either external modes or internal torsional or bending modes.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolodrubetz, Daniel W.; Pietrulewicz, Piotr; Stewart, Iain W.

    To predict the jet mass spectrum at a hadron collider it is crucial to account for the resummation of logarithms between the transverse momentum of the jet and its invariant mass m_J. For small jet areas there are additional large logarithms of the jet radius R, which affect the convergence of the perturbative series. We present an analytic framework for exclusive jet production at the LHC which gives a complete description of the jet mass spectrum including realistic jet algorithms and jet vetoes. It factorizes the scales associated with m_J, R, and the jet veto, enabling in addition the systematic resummation of jet radius logarithms in the jet mass spectrum beyond leading logarithmic order. We discuss the factorization formulae for the peak and tail region of the jet mass spectrum and for small and large R, and the relations between the different regimes and how to combine them. Regions of experimental interest are classified which do not involve large nonglobal logarithms. We also present universal results for nonperturbative effects and discuss various jet vetoes.

  19. A non-local structural derivative model for characterization of ultraslow diffusion in dense colloids

    NASA Astrophysics Data System (ADS)

    Liang, Yingjie; Chen, Wen

    2018-03-01

    Ultraslow diffusion has been observed in numerous complicated systems. Its mean squared displacement (MSD) is not a power law function of time, but instead a logarithmic function, and in some cases grows even more slowly than the logarithmic rate. The distributed-order fractional diffusion equation model simply does not work for the general ultraslow diffusion. A recent study has used the local structural derivative to describe ultraslow diffusion dynamics by using the inverse Mittag-Leffler function as the structural function, in which the MSD is a function of the inverse Mittag-Leffler function. In this study, a new stretched logarithmic diffusion law and its underlying non-local structural derivative diffusion model are proposed to characterize the ultraslow diffusion in aging dense colloidal glass at both short and long waiting times. It is observed that the aging dynamics of dense colloids is a class of the stretched logarithmic ultraslow diffusion processes. Compared with the power, the logarithmic, and the inverse Mittag-Leffler diffusion laws, the stretched logarithmic diffusion law has better precision in fitting the MSD of the colloidal particles at high densities. The corresponding non-local structural derivative diffusion equation manifests a clear physical mechanism, and its structural function is equivalent to the first-order derivative of the MSD.

  20. Dominance, Information, and Hierarchical Scaling of Variance Space.

    ERIC Educational Resources Information Center

    Ceurvorst, Robert W.; Krus, David J.

    1979-01-01

    A method for computation of dominance relations and for construction of their corresponding hierarchical structures is presented. The link between dominance and variance allows integration of the mathematical theory of information with least squares statistical procedures without recourse to logarithmic transformations of the data. (Author/CTM)

  1. Fast and selective determination of total protein in milk powder via titration of moving reaction boundary electrophoresis.

    PubMed

    Guo, Cheng-ye; Wang, Hou-yu; Liu, Xiao-ping; Fan, Liu-yin; Zhang, Lei; Cao, Cheng-xi

    2013-05-01

    In this paper, moving reaction boundary titration (MRBT) was developed for rapid and accurate quantification of total protein in infant milk powder, from the concept of moving reaction boundary (MRB) electrophoresis. In the method, the MRB was formed by the hydroxide ions and the acidic residues of milk proteins immobilized via cross-linked polyacrylamide gel (PAG), an acid-base indicator was used to denote the boundary motion. As a proof of concept, we chose five brands of infant milk powders to study the feasibility of MRBT method. The calibration curve of MRB velocity versus logarithmic total protein content of infant milk powder sample was established based on the visual signal of MRB motion as a function of logarithmic milk protein content. Weak influence of nonprotein nitrogen (NPN) reagents (e.g., melamine and urea) on MRBT method was observed, due to the fact that MRB was formed with hydroxide ions and the acidic residues of captured milk proteins, rather than the alkaline residues or the NPN reagents added. The total protein contents in infant milk powder samples detected via the MRBT method were in good agreement with those achieved by the classic Kjeldahl method. In addition, the developed method had much faster measuring speed compared with the Kjeldahl method. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. New graphic methods for determining the depth and thickness of strata and the projection of dip

    USGS Publications Warehouse

    Palmer, Harold S.

    1919-01-01

    Geologists, both in the field and in the office, frequently encounter trigonometric problems the solution of which, though simple enough, is somewhat laborious by the use of trigonometric and logarithmic tables. Charts, tables, and diagrams of various types for facilitating the computations have been published, and a new method may seem to be a superfluous addition to the literature.

  3. Z-boson decays to a vector quarkonium plus a photon

    NASA Astrophysics Data System (ADS)

    Bodwin, Geoffrey T.; Chung, Hee Sok; Ee, June-Haak; Lee, Jungil

    2018-01-01

    We compute the decay rates for the processes Z → V + γ, where Z is the Z boson, γ is the photon, and V is one of the vector quarkonia J/ψ or ϒ(nS), with n = 1, 2, or 3. Our computations include corrections through relative orders αs and v², and resummations of logarithms of m_Z²/m_Q², to all orders in αs, at next-to-leading-logarithmic accuracy. (v is the velocity of the heavy quark Q or the heavy antiquark Q̄ in the quarkonium rest frame, and m_Z and m_Q are the masses of Z and Q, respectively.) Our calculations are the first to include both the order-αs correction to the light-cone distribution amplitude and the resummation of logarithms of m_Z²/m_Q², and are the first calculations for the ϒ(2S) and ϒ(3S) final states. The resummations of logarithms of m_Z²/m_Q² that are associated with the order-αs and order-v² corrections are carried out by making use of the Abel-Padé method. We confirm the analytic result for the order-v² correction that was presented in a previous publication, and we correct the relative sign of the direct and indirect amplitudes and some choices of scales in that publication. Our branching fractions for Z → J/ψ + γ and Z → ϒ(1S) + γ differ by 2.0σ and −4.0σ, respectively, from the branching fractions that are given in the most recent publication on this topic (in units of the uncertainties that are given in that publication). However, we argue that the uncertainties in the rates are underestimated in that publication.

  4. Quantum loop corrections of a charged de Sitter black hole

    NASA Astrophysics Data System (ADS)

    Naji, J.

    2018-03-01

    A charged black hole in de Sitter (dS) space is considered, and a logarithmically corrected entropy is used to study its thermodynamics. Logarithmic corrections of the entropy come from thermal fluctuations, which play the role of a quantum loop correction. In that case we are able to study the effect of the quantum loop on black hole thermodynamics and statistics. As a black hole is a gravitational object, it helps to obtain some information about quantum gravity. The first and second laws of thermodynamics are investigated for the logarithmically corrected case, and we find that they are only valid for the charged dS black hole. We show that the black hole phase transition disappears in the presence of the logarithmic correction.

  5. Ultrafiltrative deinking of flexographic ONP : the role of surfactants

    Treesearch

    Bradley H. Upton; Gopal A. Krishnagopalan; Said Abubakr

    1999-01-01

    Ultrafiltration is a potentially viable method of removing finely dispersed flexographic pigments from the deinking water loop. This work examines the effects of surface-active materials on ultrafiltration efficiency. A logarithmic relationship between permeate flux and pigment concentration was demonstrated at ink concentrations above 0.4%, permeation rates becoming...

  6. Small range logarithm calculation on Intel Quartus II Verilog

    NASA Astrophysics Data System (ADS)

    Mustapha, Muhazam; Mokhtar, Anis Shahida; Ahmad, Azfar Asyrafie

    2018-02-01

    The logarithm function is the inverse of the exponential function. This paper implements a power series for the natural logarithm function using Verilog HDL in Quartus II. The design is written at the RTL level in order to decrease the number of megafunctions. Simulations were done to determine the precision and the number of LEs used so that the output is calculated accurately. It is found that the accuracy of the system is only valid for the range of 1 to e.
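    The Verilog source is not reproduced in the record; as a behavioural reference for a similarly restricted range, the Mercator series about x = 1 (a generic choice that may differ from the series actually implemented in the paper) can be sketched as follows.

    ```python
    # Behavioural sketch of a small-range natural logarithm via the Mercator series
    #   ln(x) = (x-1) - (x-1)^2/2 + (x-1)^3/3 - ...,  valid for 0 < x <= 2
    # (convergence is slow near x = 2). This is a generic reference model,
    # not the paper's Verilog implementation.

    def ln_mercator(x: float, terms: int = 200) -> float:
        if not (0.0 < x <= 2.0):
            raise ValueError("series only converges for 0 < x <= 2")
        u = x - 1.0
        total, power = 0.0, u
        for k in range(1, terms + 1):
            total += power / k if k % 2 == 1 else -power / k
            power *= u
        return total

    if __name__ == "__main__":
        import math
        for x in (1.1, 1.5, 1.9):
            print(x, ln_mercator(x), math.log(x))
    ```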

  7. Promoting convergence: The Phi spiral in abduction of mouse corneal behaviors

    PubMed Central

    Rhee, Jerry; Nejad, Talisa Mohammad; Comets, Olivier; Flannery, Sean; Gulsoy, Eine Begum; Iannaccone, Philip; Foster, Craig

    2015-01-01

    Why do mouse corneal epithelial cells display spiraling patterns? We want to provide an explanation for this curious phenomenon by applying an idealized problem solving process. Specifically, we applied complementary line-fitting methods to measure transgenic epithelial reporter expression arrangements displayed on three mature, live enucleated globes to clarify the problem. Two prominent logarithmic curves were discovered, one of which displayed the ϕ ratio, an indicator of an optimal configuration in phyllotactic systems. We then utilized two different computational approaches to expose our current understanding of the behavior. In one procedure, which involved an isotropic mechanics-based finite element method, we successfully produced logarithmic spiral curves of maximum shear strain based pathlines but computed dimensions displayed pitch angles of 35° (ϕ spiral is ∼17°), which was altered when we fitted the model with published measurements of coarse collagen orientations. We then used model-based reasoning in context of Peircean abduction to select a working hypothesis. Our work serves as a concise example of applying a scientific habit of mind and illustrates nuances of executing a common method to doing integrative science. © 2014 Wiley Periodicals, Inc. Complexity 20: 22–38, 2015 PMID:25755620

  8. Continuous time random walk model with asymptotical probability density of waiting times via inverse Mittag-Leffler function

    NASA Astrophysics Data System (ADS)

    Liang, Yingjie; Chen, Wen

    2018-04-01

    The mean squared displacement (MSD) of the traditional ultraslow diffusion is a logarithmic function of time. Recently, the continuous time random walk model is employed to characterize this ultraslow diffusion dynamics by connecting the heavy-tailed logarithmic function and its variation as the asymptotical waiting time density. In this study we investigate the limiting waiting time density of a general ultraslow diffusion model via the inverse Mittag-Leffler function, whose special case includes the traditional logarithmic ultraslow diffusion model. The MSD of the general ultraslow diffusion model is analytically derived as an inverse Mittag-Leffler function, and is observed to increase even more slowly than that of the logarithmic function model. The occurrence of very long waiting time in the case of the inverse Mittag-Leffler function has the largest probability compared with the power law model and the logarithmic function model. The Monte Carlo simulations of one dimensional sample path of a single particle are also performed. The results show that the inverse Mittag-Leffler waiting time density is effective in depicting the general ultraslow random motion.

  9. Logarithmic spiral trajectories generated by Solar sails

    NASA Astrophysics Data System (ADS)

    Bassetto, Marco; Niccolai, Lorenzo; Quarta, Alessandro A.; Mengali, Giovanni

    2018-02-01

    Analytic solutions to continuous thrust-propelled trajectories are available in a few cases only. An interesting case is offered by the logarithmic spiral, that is, a trajectory characterized by a constant flight path angle and a fixed thrust vector direction in an orbital reference frame. The logarithmic spiral is important from a practical point of view, because it may be passively maintained by a Solar sail-based spacecraft. The aim of this paper is to provide a systematic study concerning the possibility of inserting a Solar sail-based spacecraft into a heliocentric logarithmic spiral trajectory without using any impulsive maneuver. The required conditions to be met by the sail in terms of attitude angle, propulsive performance, parking orbit characteristics, and initial position are thoroughly investigated. The closed-form variations of the osculating orbital parameters are analyzed, and the obtained analytical results are used for investigating the phasing maneuver of a Solar sail along an elliptic heliocentric orbit. In this mission scenario, the phasing orbit is composed of two symmetric logarithmic spiral trajectories connected with a coasting arc.
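    In polar form, the trajectory class discussed above satisfies the standard logarithmic-spiral relation (generic geometry, with γ the constant flight path angle and r₀, θ₀ the initial conditions):

    ```latex
    r(\theta) \;=\; r_0 \, e^{(\theta-\theta_0)\tan\gamma},
    ```

    so the spiral is fixed entirely by the initial radius, the initial polar angle, and the flight path angle that the sail must maintain.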

  10. Optimization of non-linear gradient in hydrophobic interaction chromatography for the analytical characterization of antibody-drug conjugates.

    PubMed

    Bobály, Balázs; Randazzo, Giuseppe Marco; Rudaz, Serge; Guillarme, Davy; Fekete, Szabolcs

    2017-01-20

    The goal of this work was to evaluate the potential of non-linear gradients in hydrophobic interaction chromatography (HIC), to improve the separation between the different homologous species (drug-to-antibody, DAR) of commercial antibody-drug conjugates (ADC). The selectivities between Brentuximab Vedotin species were measured using three different gradient profiles, namely linear, power function based and logarithmic ones. The logarithmic gradient provides the most equidistant retention distribution for the DAR species and offers the best overall separation of cysteine linked ADC in HIC. Another important advantage of the logarithmic gradient, is its peak focusing effect for the DAR0 species, which is particularly useful to improve the quantitation limit of DAR0. Finally, the logarithmic behavior of DAR species of ADC in HIC was modelled using two different approaches, based on i) the linear solvent strength theory (LSS) and two scouting linear gradients and ii) a new derived equation and two logarithmic scouting gradients. In both cases, the retention predictions were excellent and systematically below 3% compared to the experimental values. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Stochastic exponential synchronization of memristive neural networks with time-varying delays via quantized control.

    PubMed

    Zhang, Wanli; Yang, Shiju; Li, Chuandong; Zhang, Wei; Yang, Xinsong

    2018-08-01

    This paper focuses on stochastic exponential synchronization of delayed memristive neural networks (MNNs) with the aid of systems with interval parameters, which are established by using the concept of Filippov solution. A new intermittent controller and a new adaptive controller with logarithmic quantization are constructed to deal simultaneously with the difficulties induced by time-varying delays, interval parameters, and stochastic perturbations. Moreover, these controllers not only reduce the control cost but also save communication channels and bandwidth. Based on novel Lyapunov functions and new analytical methods, several synchronization criteria are established to realize the exponential synchronization of MNNs with stochastic perturbations via intermittent control and adaptive control with or without logarithmic quantization. Finally, numerical simulations are offered to substantiate our theoretical results. Copyright © 2018 Elsevier Ltd. All rights reserved.
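
    The abstract does not specify the quantizer, so the sketch below implements the commonly used sector-bounded logarithmic quantizer as a stand-in: levels u_i = u_0 ρ^i with density ρ ∈ (0, 1), for which the relative quantization error is bounded by δ = (1 - ρ)/(1 + ρ). Parameter values are illustrative only.

```python
import math

def log_quantize(v, u0=1.0, rho=0.5):
    """Sector-bounded logarithmic quantizer with levels u_i = u0 * rho**i (i in Z).
    Maps v to the level whose sector (u_i/(1+delta), u_i/(1-delta)] contains |v|,
    so the relative quantization error never exceeds delta = (1-rho)/(1+rho)."""
    if v == 0.0:
        return 0.0
    delta = (1.0 - rho) / (1.0 + rho)
    a = abs(v)
    # index of the level whose sector contains |v|
    i = math.floor(math.log(a * (1.0 - delta) / u0) / math.log(rho))
    return math.copysign(u0 * rho ** i, v)

for v in (0.03, 0.4, 0.76, 1.5, -2.2):
    q = log_quantize(v)
    print(f"v = {v:6.2f}  q(v) = {q:8.5f}  relative error = {abs(q - v) / abs(v):.3f}")
```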

  12. Singularity Preserving Numerical Methods for Boundary Integral Equations

    NASA Technical Reports Server (NTRS)

    Kaneko, Hideaki (Principal Investigator)

    1996-01-01

    In the past twelve months (May 8, 1995 - May 8, 1996), under the cooperative agreement with the Division of Multidisciplinary Optimization at NASA Langley, we have accomplished the following five projects: a note on the finite element method with singular basis functions; numerical quadrature for weakly singular integrals; superconvergence of the degenerate kernel method; superconvergence of the iterated collocation method for Hammerstein equations; and a singularity preserving Galerkin method for Hammerstein equations with logarithmic kernel. This final report consists of five papers describing these projects. Each project is preceded by a brief abstract.

  13. Limitations of the background field method applied to Rayleigh-Bénard convection

    NASA Astrophysics Data System (ADS)

    Nobili, Camilla; Otto, Felix

    2017-09-01

    We consider Rayleigh-Bénard convection as modeled by the Boussinesq equations, in the case of infinite Prandtl numbers and with no-slip boundary condition. There is a broad interest in bounds of the upwards heat flux, as given by the Nusselt number Nu, in terms of the forcing via the imposed temperature difference, as given by the Rayleigh number in the turbulent regime Ra ≫ 1. In several studies, the background field method applied to the temperature field has been used to provide upper bounds on Nu in terms of Ra. In these applications, the background field method comes in the form of a variational problem where one optimizes a stratified temperature profile subject to a certain stability condition; the method is believed to capture the marginal stability of the boundary layer. The best available upper bound via this method is Nu ≲ Ra^{1/3} (ln Ra)^{1/15}; it proceeds via the construction of a stable temperature background profile that increases logarithmically in the bulk. In this paper, we show that the background temperature field method cannot provide a tighter upper bound in terms of the power of the logarithm. However, by another method, one does obtain the tighter upper bound Nu ≲ Ra^{1/3} (ln ln Ra)^{1/3}, so that the result of this paper implies that the background temperature field method is unphysical in the sense that it cannot provide the optimal bound.

  14. Transistor circuit increases range of logarithmic current amplifier

    NASA Technical Reports Server (NTRS)

    Gilmour, G.

    1966-01-01

    Circuit increases the range of a logarithmic current amplifier by combining a commercially available amplifier with a silicon epitaxial transistor. A temperature compensating network is provided for the transistor.

  15. Logarithmic corrections to black hole entropy from Kerr/CFT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pathak, Abhishek; Porfyriadis, Achilleas P.; Strominger, Andrew

    It has been shown by A. Sen that logarithmic corrections to the black hole area-entropy law are entirely determined macroscopically from the massless particle spectrum. They therefore serve as powerful consistency checks on any proposed enumeration of quantum black hole microstates. Furthermore, Sen’s results include a macroscopic computation of the logarithmic corrections for a five-dimensional near extremal Kerr-Newman black hole. We compute these corrections microscopically using a stringy embedding of the Kerr/CFT correspondence and find perfect agreement.

  16. Logarithmic corrections to black hole entropy from Kerr/CFT

    DOE PAGES

    Pathak, Abhishek; Porfyriadis, Achilleas P.; Strominger, Andrew; ...

    2017-04-14

    It has been shown by A. Sen that logarithmic corrections to the black hole area-entropy law are entirely determined macroscopically from the massless particle spectrum. They therefore serve as powerful consistency checks on any proposed enumeration of quantum black hole microstates. Furthermore, Sen’s results include a macroscopic computation of the logarithmic corrections for a five-dimensional near extremal Kerr-Newman black hole. We compute these corrections microscopically using a stringy embedding of the Kerr/CFT correspondence and find perfect agreement.

  17. Integral definition of the logarithmic function and the derivative of the exponential function in calculus

    NASA Astrophysics Data System (ADS)

    Vaninsky, Alexander

    2015-04-01

    Defining the logarithmic function as a definite integral with a variable upper limit, an approach used by some popular calculus textbooks, is problematic. We discuss the disadvantages of such a definition and provide a way to fix the problem. We also consider a definition-based, rigorous derivation of the derivative of the exponential function that is easier, more intuitive, and complies with the standard definitions of the number e, the logarithmic, and the exponential functions.
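
    For reference, the integral definition under discussion and the resulting derivative of the exponential function can be stated compactly as follows; this is the standard presentation, not necessarily the particular remedy the author proposes.

```latex
\ln x \;:=\; \int_{1}^{x}\frac{\mathrm{d}t}{t}\quad (x>0), \qquad \exp := \ln^{-1}, \qquad e := \exp(1);
\quad
\text{if } y=\exp(x), \text{ then } x=\ln y,\;\; \frac{\mathrm{d}x}{\mathrm{d}y}=\frac{1}{y}
\;\;\Longrightarrow\;\; \frac{\mathrm{d}}{\mathrm{d}x}\,e^{x}=y=e^{x}.
```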

  18. Construction of Logarithm Tables for Galois Fields

    ERIC Educational Resources Information Center

    Torres-Jimenez, Jose; Rangel-Valdez, Nelson; Gonzalez-Hernandez, Ana Loreto; Avila-George, Himer

    2011-01-01

    A branch of mathematics commonly used in cryptography is Galois fields GF(p^n). Two basic operations performed in GF(p^n) are the addition and the multiplication. While the addition is generally easy to compute, the multiplication requires a special treatment. A well-known method to compute the multiplication is based on…
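
    To make the log/antilog-table idea concrete, here is a minimal sketch for the prime-field case GF(p) with a known primitive root; the article itself treats extension fields GF(p^n), which require polynomial arithmetic rather than integer arithmetic modulo p.

```python
def discrete_log_table(p, g):
    """Build log/antilog tables for the multiplicative group GF(p)*,
    where g is a primitive root modulo p."""
    antilog = [1] * (p - 1)                 # antilog[k] = g**k mod p
    for k in range(1, p - 1):
        antilog[k] = (antilog[k - 1] * g) % p
    log = {a: k for k, a in enumerate(antilog)}
    return log, antilog

def gf_multiply(a, b, p, log, antilog):
    """Multiply field elements via table lookups: a*b = g**(log a + log b)."""
    if a == 0 or b == 0:
        return 0
    return antilog[(log[a] + log[b]) % (p - 1)]

log, antilog = discrete_log_table(7, 3)     # 3 is a primitive root mod 7
print(gf_multiply(4, 5, 7, log, antilog))   # 4*5 = 20 = 6 (mod 7)
```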

  19. Application of the Kano-Hamilton multiangle inversion method in clear atmospheres

    Treesearch

    Mariana Adam; Vladimir A. Kovalev; Cyle Wold; Jenny Newton; Markus Pahlow; Wei M. Hao; Marc B. Parlange

    2007-01-01

    An improved measurement methodology and a data-processing technique for multiangle data obtained with an elastic scanning lidar in clear atmospheres are introduced. Azimuthal and slope scans are combined to reduce the atmospheric heterogeneity. Vertical profiles of optical depth and intercept (proportional to the logarithm of the backscatter coefficient) are determined...

  20. Entanglement properties of the antiferromagnetic-singlet transition in the Hubbard model on bilayer square lattices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Chia-Chen; Singh, Rajiv R. P.; Scalettar, Richard T.

    Here, we calculate the bipartite Rényi entanglement entropy of an L x L x 2 bilayer Hubbard model using a determinantal quantum Monte Carlo method recently proposed by Grover [Phys. Rev. Lett. 111, 130402 (2013)]. Two types of bipartition are studied: (i) one that divides the lattice into two L x L planes, and (ii) one that divides the lattice into two equal-size (L x L/2 x 2) bilayers. Furthermore, we compare our calculations with those for the tight-binding model studied by the correlation matrix method. As expected, the entropy for bipartition (i) scales as L^2, while the latter scales with L with possible logarithmic corrections. The onset of the antiferromagnet to singlet transition shows up as a saturation of the former to a maximal value and the latter to a small value in the singlet phase. We also comment on the large uncertainties in the numerical results with increasing U, which would have to be overcome before the critical behavior and logarithmic corrections can be quantified.

  1. Entanglement properties of the antiferromagnetic-singlet transition in the Hubbard model on bilayer square lattices

    DOE PAGES

    Chang, Chia-Chen; Singh, Rajiv R. P.; Scalettar, Richard T.

    2014-10-10

    Here, we calculate the bipartite Rényi entanglement entropy of an L x L x 2 bilayer Hubbard model using a determinantal quantum Monte Carlo method recently proposed by Grover [Phys. Rev. Lett. 111, 130402 (2013)]. Two types of bipartition are studied: (i) one that divides the lattice into two L x L planes, and (ii) one that divides the lattice into two equal-size (L x L/2 x 2) bilayers. Furthermore, we compare our calculations with those for the tight-binding model studied by the correlation matrix method. As expected, the entropy for bipartition (i) scales as L^2, while the latter scales with L with possible logarithmic corrections. The onset of the antiferromagnet to singlet transition shows up as a saturation of the former to a maximal value and the latter to a small value in the singlet phase. We also comment on the large uncertainties in the numerical results with increasing U, which would have to be overcome before the critical behavior and logarithmic corrections can be quantified.

  2. Generalized Skew Coefficients of Annual Peak Flows for Rural, Unregulated Streams in West Virginia

    USGS Publications Warehouse

    Atkins, John T.; Wiley, Jeffrey B.; Paybins, Katherine S.

    2009-01-01

    Generalized skew was determined from analysis of records from 147 streamflow-gaging stations in or near West Virginia. The analysis followed guidelines established by the Interagency Advisory Committee on Water Data described in Bulletin 17B, except that stations having 50 or more years of record were used instead of stations meeting the less restrictive recommendation of 25 or more years of record. The generalized-skew analysis included contouring, averaging, and regression of station skews. The best method was considered the one with the smallest mean square error (MSE). MSE is defined here as the sum, over all peaks, of the squared differences between each individual base-10 logarithm of peak flow and the mean of all such logarithms, divided by the number of peaks. Contouring of station skews was the best method for determining generalized skew for West Virginia, with an MSE of about 0.2174. This MSE is an improvement over the MSE of about 0.3025 for the national map presented in Bulletin 17B.
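
    Written out with notation introduced here (Q_i the individual annual peak flows, N the number of peaks), the verbal MSE definition above reads:

```latex
\mathrm{MSE} \;=\; \frac{1}{N}\sum_{i=1}^{N}\Bigl(\log_{10}Q_i-\overline{\log_{10}Q}\Bigr)^{2},
\qquad
\overline{\log_{10}Q} \;=\; \frac{1}{N}\sum_{i=1}^{N}\log_{10}Q_i .
```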

  3. The exponentiated Hencky-logarithmic strain energy. Part II: Coercivity, planar polyconvexity and existence of minimizers

    NASA Astrophysics Data System (ADS)

    Neff, Patrizio; Lankeit, Johannes; Ghiba, Ionel-Dumitrel; Martin, Robert; Steigmann, David

    2015-08-01

    We consider a family of isotropic volumetric-isochoric decoupled strain energies based on the Hencky-logarithmic (true, natural) strain tensor log U, where μ > 0 is the infinitesimal shear modulus, κ is the infinitesimal bulk modulus, λ is the first Lamé constant, k and k̂ are dimensionless parameters, F is the gradient of deformation, U is the right stretch tensor and dev log U is the deviatoric part (the projection onto the traceless tensors) of the strain tensor log U. For small elastic strains, the energies reduce to first order to the classical quadratic Hencky energy, which is known to be not rank-one convex. The main result in this paper is that in plane elastostatics the energies of the family are polyconvex for a suitable range of the dimensionless parameters, extending a previous finding on their rank-one convexity. Our method uses a judicious application of Steigmann's polyconvexity criteria based on the representation of the energy in terms of the principal invariants of the stretch tensor U. These energies also satisfy suitable growth and coercivity conditions. We formulate the equilibrium equations, and we prove the existence of minimizers by the direct methods of the calculus of variations.

  4. Micromorphic approach for gradient-extended thermo-elastic-plastic solids in the logarithmic strain space

    NASA Astrophysics Data System (ADS)

    Aldakheel, Fadi

    2017-11-01

    The coupled thermo-mechanical strain gradient plasticity theory that accounts for microstructure-based size effects is outlined within this work. It extends the recent work of Miehe et al. (Comput Methods Appl Mech Eng 268:704-734, 2014) to account for thermal effects at finite strains. From the computational viewpoint, the finite element design of the coupled problem is not straightforward and requires additional strategies due to the difficulties near the elastic-plastic boundaries. To simplify the finite element formulation, we extend it toward the micromorphic approach to gradient thermo-plasticity model in the logarithmic strain space. The key point is the introduction of dual local-global field variables via a penalty method, where only the global fields are restricted by boundary conditions. Hence, the problem of restricting the gradient variable to the plastic domain is relaxed, which makes the formulation very attractive for finite element implementation as discussed in Forest (J Eng Mech 135:117-131, 2009) and Miehe et al. (Philos Trans R Soc A Math Phys Eng Sci 374:20150170, 2016).

  5. Compensating for Electrode Polarization in Dielectric Spectroscopy Studies of Colloidal Suspensions: Theoretical Assessment of Existing Methods

    PubMed Central

    Chassagne, Claire; Dubois, Emmanuelle; Jiménez, María L.; van der Ploeg, J. P. M; van Turnhout, Jan

    2016-01-01

    Dielectric spectroscopy can be used to determine the dipole moment of colloidal particles from which important interfacial electrokinetic properties, for instance their zeta potential, can be deduced. Unfortunately, dielectric spectroscopy measurements are hampered by electrode polarization (EP). In this article, we review several procedures to compensate for this effect. First EP in electrolyte solutions is described: the complex conductivity is derived as function of frequency, for two cell geometries (planar and cylindrical) with blocking electrodes. The corresponding equivalent circuit for the electrolyte solution is given for each geometry. This equivalent circuit model is extended to suspensions. The complex conductivity of a suspension, in the presence of EP, is then calculated from the impedance. Different methods for compensating for EP are critically assessed, with the help of the theoretical findings. Their limit of validity is given in terms of characteristic frequencies. We can identify with one of these frequencies the frequency range within which data uncorrected for EP may be used to assess the dipole moment of colloidal particles. In order to extract this dipole moment from the measured data, two methods are reviewed: one is based on the use of existing models for the complex conductivity of suspensions, the other is the logarithmic derivative method. An extension to multiple relaxations of the logarithmic derivative method is proposed. PMID:27486575
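
    As an illustration of the logarithmic derivative method mentioned above, the sketch below applies the usual estimate ε''_der(ω) ≈ -(π/2) ∂ε'(ω)/∂ln ω to a synthetic single Debye relaxation; the function name and test parameters are ours, and the paper's extension to multiple relaxations is not reproduced.

```python
import numpy as np

def logarithmic_derivative_loss(freq_hz, eps_real):
    """Logarithmic-derivative estimate of the dielectric loss,
    eps''_der(omega) ~ -(pi/2) d eps'(omega) / d ln(omega),
    which suppresses the electrode-polarization/conduction contribution.
    freq_hz should be positive and (ideally) logarithmically spaced."""
    ln_omega = np.log(2.0 * np.pi * np.asarray(freq_hz, dtype=float))
    return -0.5 * np.pi * np.gradient(np.asarray(eps_real, dtype=float), ln_omega)

# synthetic single Debye relaxation: eps'(w) = eps_inf + d_eps / (1 + (w*tau)^2)
freq = np.logspace(0, 6, 400)
tau, d_eps, eps_inf = 1.0e-3, 5.0, 2.0
omega = 2.0 * np.pi * freq
eps_prime = eps_inf + d_eps / (1.0 + (omega * tau) ** 2)

loss_der = logarithmic_derivative_loss(freq, eps_prime)
# the estimated loss peaks at the relaxation frequency f = 1/(2*pi*tau) ~ 159 Hz
print("peak at f =", freq[np.argmax(loss_der)], "Hz; expected ~", 1.0 / (2.0 * np.pi * tau))
```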

  6. Double Resummation for Higgs Production

    NASA Astrophysics Data System (ADS)

    Bonvini, Marco; Marzani, Simone

    2018-05-01

    We present the first double-resummed prediction of the inclusive cross section for the main Higgs production channel in proton-proton collisions, namely, gluon fusion. Our calculation incorporates to all orders in perturbation theory two distinct towers of logarithmic corrections which are enhanced, respectively, at threshold, i.e., large x , and in the high-energy limit, i.e., small x . Large-x logarithms are resummed to next-to-next-to-next-to-leading logarithmic accuracy, while small-x ones to leading logarithmic accuracy. The double-resummed cross section is furthermore matched to the state-of-the-art fixed-order prediction at next-to-next-to-next-to-leading accuracy. We find that double resummation corrects the Higgs production rate by 2% at the currently explored center-of-mass energy of 13 TeV and its impact reaches 10% at future circular colliders at 100 TeV.

  7. Fusion hindrance for the positive Q -value system 12C+30Si

    NASA Astrophysics Data System (ADS)

    Montagnoli, G.; Stefanini, A. M.; Jiang, C. L.; Hagino, K.; Galtarossa, F.; Colucci, G.; Bottoni, S.; Broggini, C.; Caciolli, A.; Čolović, P.; Corradi, L.; Courtin, S.; Depalo, R.; Fioretto, E.; Fruet, G.; Gal, A.; Goasduff, A.; Heine, M.; Hu, S. P.; Kaur, M.; Mijatović, T.; Mazzocco, M.; Montanari, D.; Scarlassara, F.; Strano, E.; Szilner, S.; Zhang, G. X.

    2018-02-01

    Background: The fusion reaction 12C+30Si is a link between heavier cases studied in recent years, and the light heavy-ion systems, e.g., 12C+12C, 16O+16O that have a prominent role in the dynamics of stellar evolution. 12C+30Si fusion itself is not a relevant process for astrophysics, but it is important to establish its behavior below the barrier, where couplings to low-lying collective modes and the hindrance phenomenon may determine the cross sections. The excitation function is presently completely unknown below the barrier for the 12C+30Si reaction, thus no reliable extrapolation into the astrophysical regime for the C+C and O+O cases can be performed. Purpose: Our aim was to carry out a complete measurement of the fusion excitation function of 12C+30Si from well below to above the Coulomb barrier, so as to clear up the consequence of couplings to low-lying states of 30Si, and whether the hindrance effect appears in this relatively light system which has a positive Q value for fusion. This would have consequences for the extrapolated behavior to even lighter systems. Methods: The inverse kinematics was used by sending 30Si beams delivered from the XTU Tandem accelerator of INFN-Laboratori Nazionali di Legnaro onto thin 12C (50 μg/cm²) targets enriched to 99.9% in mass 12. The fusion evaporation residues (ER) were detected at very forward angles, following beam separation by means of an electrostatic deflector. Angular distributions of ER were measured at E_beam = 45, 59, and 80 MeV, and they were angle integrated to derive total fusion cross sections. Results: The fusion excitation function of 12C+30Si was measured with high statistical accuracy, covering more than five orders of magnitude down to a lowest cross section ≃ 3 μb. The logarithmic slope and the S factor have been extracted and we have convincing phenomenological evidence of the hindrance effect. These results have been compared with the calculations performed within the model that considers a damping of the coupling strength well inside the Coulomb barrier. Conclusions: The experimental data are consistent with the coupled-channels calculations. A better fit is obtained by using the Yukawa-plus-exponential potential and a damping of the coupling strengths inside the barrier. The degree of hindrance is much smaller than the one in heavier systems. Also a phenomenological estimate reproduces quite closely the hindrance threshold for 12C+30Si, so that an extrapolation to the C+C and O+O cases can be reliably performed.

  8. Q estimation of seismic data using the generalized S-transform

    NASA Astrophysics Data System (ADS)

    Hao, Yaju; Wen, Xiaotao; Zhang, Bo; He, Zhenhua; Zhang, Rui; Zhang, Jinming

    2016-12-01

    Quality factor, Q, is a parameter that characterizes the energy dissipation during seismic wave propagation. The reservoir pore is one of the main factors that affect the value of Q. In particular, when the pore space is filled with oil or gas, the rock usually exhibits a relatively low Q value. Such a low Q value has been used as a direct hydrocarbon indicator by many researchers. The conventional Q estimation method based on spectral ratio suffers from the problem of waveform tuning; hence, many researchers have introduced time-frequency analysis techniques to tackle this problem. Unfortunately, the window functions adopted in time-frequency analysis algorithms such as continuous wavelet transform (CWT) and S-transform (ST) contaminate the amplitude spectra because the seismic signal is multiplied by the window functions during time-frequency decomposition. The basic assumption of the spectral ratio method is that there is a linear relationship between natural logarithmic spectral ratio and frequency. However, this assumption does not hold if we take the influence of window functions into consideration. In this paper, we first employ a recently developed two-parameter generalized S-transform (GST) to obtain the time-frequency spectra of seismic traces. We then deduce the non-linear relationship between natural logarithmic spectral ratio and frequency. Finally, we obtain a linear relationship between natural logarithmic spectral ratio and a newly defined parameter γ by ignoring the negligible second order term. The gradient of this linear relationship is 1/Q. Here, the parameter γ is a function of frequency and source wavelet. Numerical examples for VSP and post-stack reflection data confirm that our algorithm is capable of yielding accurate results. The Q-value results estimated from field data acquired in western China show a reasonable correspondence with the oil-producing well locations.
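
    For context, the conventional spectral-ratio method referred to above assumes ln[A2(f)/A1(f)] = -π f Δt/Q + const, so Q follows from the fitted slope. The sketch below (with made-up numbers) implements that baseline only; the paper's GST-based, window-corrected variant is not reproduced.

```python
import numpy as np

def spectral_ratio_q(freq, amp_top, amp_bottom, dt_travel):
    """Conventional spectral-ratio Q estimate:
    ln(A2(f)/A1(f)) = -pi*f*dt/Q + const, hence Q = -pi*dt/slope."""
    y = np.log(np.asarray(amp_bottom) / np.asarray(amp_top))
    slope, _ = np.polyfit(np.asarray(freq), y, 1)
    return -np.pi * dt_travel / slope

# synthetic check: attenuate a flat spectrum with Q_true = 80 over dt = 0.5 s
f = np.linspace(10.0, 60.0, 51)
Q_true, dt = 80.0, 0.5
a1 = np.ones_like(f)
a2 = a1 * np.exp(-np.pi * f * dt / Q_true)
print(spectral_ratio_q(f, a1, a2, dt))   # ~ 80
```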

  9. A factorization approach to next-to-leading-power threshold logarithms

    NASA Astrophysics Data System (ADS)

    Bonocore, D.; Laenen, E.; Magnea, L.; Melville, S.; Vernazza, L.; White, C. D.

    2015-06-01

    Threshold logarithms become dominant in partonic cross sections when the selected final state forces gluon radiation to be soft or collinear. Such radiation factorizes at the level of scattering amplitudes, and this leads to the resummation of threshold logarithms which appear at leading power in the threshold variable. In this paper, we consider the extension of this factorization to include effects suppressed by a single power of the threshold variable. Building upon the Low-Burnett-Kroll-Del Duca (LBKD) theorem, we propose a decomposition of radiative amplitudes into universal building blocks, which contain all effects ultimately responsible for next-to-leading-power (NLP) threshold logarithms in hadronic cross sections for electroweak annihilation processes. In particular, we provide a NLO evaluation of the radiative jet function, responsible for the interference of next-to-soft and collinear effects in these cross sections. As a test, using our expression for the amplitude, we reproduce all abelian-like NLP threshold logarithms in the NNLO Drell-Yan cross section, including the interplay of real and virtual emissions. Our results are a significant step towards developing a generally applicable resummation formalism for NLP threshold effects, and illustrate the breakdown of next-to-soft theorems for gauge theory amplitudes at loop level.

  10. The ABC (in any D) of logarithmic CFT

    NASA Astrophysics Data System (ADS)

    Hogervorst, Matthijs; Paulos, Miguel; Vichi, Alessandro

    2017-10-01

    Logarithmic conformal field theories have a vast range of applications, from critical percolation to systems with quenched disorder. In this paper we thoroughly examine the structure of these theories based on their symmetry properties. Our analysis is model-independent and holds for any spacetime dimension. Our results include a determination of the general form of correlation functions and conformal block decompositions, clearing the path for future bootstrap applications. Several examples are discussed in detail, including logarithmic generalized free fields, holographic models, self-avoiding random walks and critical percolation.

  11. Two degree of freedom internal model control-PID design for LFC of power systems via logarithmic approximations.

    PubMed

    Singh, Jay; Chattterjee, Kalyan; Vishwakarma, C B

    2018-01-01

    A load frequency controller has been designed for reduced-order models of single-area and two-area reheat hydro-thermal power systems through internal model control - proportional integral derivative (IMC-PID) control techniques. The controller design method is based on two-degree-of-freedom (2DOF) internal model control combined with a model order reduction technique. Here, instead of the full-order system model, a reduced-order model is used for the 2DOF-IMC-PID design, and the resulting controller is applied directly to the full-order system model. A logarithm-based model order reduction technique is proposed to reduce the single-area and two-area high-order power systems for controller design. The proposed IMC-PID design based on the reduced-order model achieves good dynamic response and robustness against load disturbance with the original high-order system. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  12. Mean-variance portfolio optimization by using time series approaches based on logarithmic utility function

    NASA Astrophysics Data System (ADS)

    Soeryana, E.; Fadhlina, N.; Sukono; Rusyaman, E.; Supian, S.

    2017-01-01

    Investors in stocks also face risk, because daily stock prices fluctuate. To minimize this risk, investors usually form an investment portfolio; a portfolio consisting of several stocks is intended to achieve an optimal composition of the investment. This paper discusses mean-variance portfolio optimization for stocks with non-constant mean and volatility, based on a logarithmic utility function. The non-constant mean is analysed using Autoregressive Moving Average (ARMA) models, while the non-constant volatility is analysed using Generalized Autoregressive Conditional Heteroscedastic (GARCH) models. The optimization is performed using the Lagrangian multiplier technique. As a numerical illustration, the method is applied to several Islamic stocks in Indonesia, yielding the proportion of investment in each stock analysed.
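
    A minimal sketch of the Lagrangian-multiplier step is given below, assuming the mean vector and covariance matrix have already been estimated (e.g. from ARMA and GARCH fits); the numbers are purely hypothetical and the logarithmic-utility weighting of the paper is not reproduced.

```python
import numpy as np

def min_variance_weights(mu, cov, target_return):
    """Closed-form minimum-variance portfolio with a target mean return and a
    full-investment constraint, from the Lagrangian first-order conditions
    (short selling allowed)."""
    mu = np.asarray(mu, dtype=float)
    cov = np.asarray(cov, dtype=float)
    ones = np.ones(len(mu))
    inv = np.linalg.inv(cov)
    A, B, C = ones @ inv @ ones, ones @ inv @ mu, mu @ inv @ mu
    D = A * C - B * B
    lam = (C - B * target_return) / D        # multiplier for sum(w) = 1
    gam = (A * target_return - B) / D        # multiplier for w @ mu = target
    return inv @ (lam * ones + gam * mu)

mu = [0.010, 0.012, 0.008]                   # hypothetical (e.g. ARMA) mean forecasts
cov = np.array([[4.0, 1.2, 0.8],
                [1.2, 3.0, 0.6],
                [0.8, 0.6, 2.5]]) * 1e-4     # hypothetical (e.g. GARCH) covariance
w = min_variance_weights(mu, cov, 0.010)
print(w, w.sum(), w @ np.asarray(mu))        # weights, budget check, achieved return
```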

  13. Non-linear regime of the Generalized Minimal Massive Gravity in critical points

    NASA Astrophysics Data System (ADS)

    Setare, M. R.; Adami, H.

    2016-03-01

    The Generalized Minimal Massive Gravity (GMMG) theory is realized by adding the CS deformation term, the higher derivative deformation term, and an extra term to pure Einstein gravity with a negative cosmological constant. In the present paper we obtain exact solutions to the GMMG field equations in the non-linear regime of the model. GMMG model about AdS_3 space is conjectured to be dual to a 2-dimensional CFT. We study the theory in critical points corresponding to the central charges c_-=0 or c_+=0, in the non-linear regime. We show that AdS_3 wave solutions are present, and have logarithmic form in critical points. Then we study the AdS_3 non-linear deformation solution. Furthermore we obtain logarithmic deformation of extremal BTZ black hole. After that using Abbott-Deser-Tekin method we calculate the energy and angular momentum of these types of black hole solutions.

  14. Higgs-boson production at small transverse momentum

    NASA Astrophysics Data System (ADS)

    Becher, Thomas; Neubert, Matthias; Wilhelm, Daniel

    2013-05-01

    Using methods from effective field theory, we have recently developed a novel, systematic framework for the calculation of the cross sections for electroweak gauge-boson production at small and very small transverse momentum q_T, in which large logarithms of the scale ratio m_V/q_T are resummed to all orders. This formalism is applied to the production of Higgs bosons in gluon fusion at the LHC. The production cross section receives logarithmically enhanced corrections from two sources: the running of the hard matching coefficient and the collinear factorization anomaly. The anomaly leads to the dynamical generation of a non-perturbative scale q_* ~ m_H e^{-const/α_s(m_H)} ≈ 8 GeV, which protects the process from receiving large long-distance hadronic contributions. We present numerical predictions for the transverse-momentum spectrum of Higgs bosons produced at the LHC, finding that it is quite insensitive to hadronic effects.

  15. Betel-quid and alcohol use were associated with lipid accumulation product among male factory workers in Taiwan.

    PubMed

    Huang, Chih-Fang; Chen, Chao-Tung; Wang, Pei-Ming; Koo, Malcolm

    2015-05-01

    In this study, cardiometabolic risk associated with betel-quid, alcohol and cigarette use, based on a simple index, the lipid accumulation product (LAP), was investigated in Taiwanese male factory workers. Male factory workers were recruited during their annual routine health examination at a hospital in south Taiwan. The risk of cardiometabolic disorders was estimated by the use of LAP, calculated as (waist circumference [cm] - 65) × (triglyceride concentration [mmol/l]). Multiple linear regression analyses were conducted to assess the risk factors of natural logarithm-transformed LAP. Of the 815 participants, 40% (325/815) were current alcohol users, 30% (248/815) were current smokers and 7% (53/815) were current betel-quid users. Current betel-quid use, alcohol use, older age, lack of exercise and higher body mass index were found to be significant and independent factors associated with natural logarithm-transformed LAP. Betel-quid and alcohol, but not cigarette use, were independent risk factors of logarithm-transformed LAP, adjusting for age, exercise and body mass index in male Taiwanese factory workers. LAP can be considered a simple and useful method for screening of cardiometabolic risk. © The Author 2014. Published by Oxford University Press on behalf of Royal Society of Tropical Medicine and Hygiene. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  16. Operator algebra as an application of logarithmic representation of infinitesimal generators

    NASA Astrophysics Data System (ADS)

    Iwata, Yoritaka

    2018-02-01

    The operator algebra is introduced based on the framework of logarithmic representation of infinitesimal generators. In conclusion a set of generally-unbounded infinitesimal generators is characterized as a module over the Banach algebra.

  17. Using the Logarithmic Concentration Diagram, Log "C", to Teach Acid-Base Equilibrium

    ERIC Educational Resources Information Center

    Kovac, Jeffrey

    2012-01-01

    Acid-base equilibrium is one of the most important and most challenging topics in a typical general chemistry course. This article introduces an alternative to the algebraic approach generally used in textbooks, the graphical log "C" method. Log "C" diagrams provide conceptual insight into the behavior of aqueous acid-base systems and allow…

  18. Logarithms in the Year 10 A.C.

    ERIC Educational Resources Information Center

    Kalman, Dan; Mitchell, Charles E.

    1981-01-01

    An alternative application of logarithms in the high school algebra curriculum that is not undermined by the existence and widespread availability of calculators is presented. The importance and use of linear relationships are underscored in the proposed lessons. (MP)

  19. Performance analysis of 60-min to 1-min integration time rain rate conversion models in Malaysia

    NASA Astrophysics Data System (ADS)

    Ng, Yun-Yann; Singh, Mandeep Singh Jit; Thiruchelvam, Vinesh

    2018-01-01

    Utilizing the frequency band above 10 GHz is currently in focus as a result of the fast expansion of radio communication systems in Malaysia. However, rain fade is the critical factor in attenuation of signal propagation at frequencies above 10 GHz. Malaysia is located in a tropical and equatorial region with high rain intensity throughout the year, and this study reviews the rain distribution and evaluates the performance of 60-min to 1-min integration time rain rate conversion methods for Malaysia. Several conversion methods, such as Segal, Chebil & Rahman, Burgeono, Emiliani, Lavergnat and Gole (LG), Simplified Moupfouma, Joo et al., a fourth-order polynomial fit and a logarithmic model, were chosen and their performance in predicting the 1-min rain rate was evaluated for 10 sites in Malaysia. The results show that the Chebil & Rahman model, the Lavergnat & Gole model, the fourth-order polynomial fit and the logarithmic model give the best performance in 60-min to 1-min rain rate conversion over the 10 sites. No single model performs best across all 10 sites; however, by averaging RMSE and SC-RMSE over the 10 sites, the Chebil & Rahman model is the best method.

  20. The nature of arms in spiral galaxies. IV. Symmetries and asymmetries

    NASA Astrophysics Data System (ADS)

    del Río, M. S.; Cepa, J.

    1999-01-01

    A Fourier analysis of the intensity distribution in the planes of nine spiral galaxies is performed. In terms of the arm classification scheme of Elmegreen & Elmegreen (1987), seven of the galaxies have well-defined arms (classes 12 and 9) and two have intermediate-type arms (class 5). The galaxies studied are NGC 157, 753, 895, 4321, 6764, 6814, 6951, 7479 and 7723. For each object Johnson B-band images are available, which are decomposed into angular components for different angular periodicities. No a priori assumption is made concerning the form of the arms. The base function used in the analysis is a logarithmic spiral. The main result obtained with this method is that the dominant component (or mode) usually changes at corotation. In some cases, this change to a different mode persists only for a short range about corotation, but in other cases the change is permanent. The agreement between pitch angles found with this method and by fitting logarithmic spirals to mean arm positions (del Río & Cepa 1998b, hereafter Paper III) is good, except for those cases where bars are strong and dominant. Finally, a comparison is made with the "symmetrization" method introduced by Elmegreen, Elmegreen & Montenegro (1992, hereafter EEM), which also shows the different symmetric components.

  1. Prediction of infarction volume and infarction growth rate in acute ischemic stroke.

    PubMed

    Kamran, Saadat; Akhtar, Naveed; Alboudi, Ayman; Kamran, Kainat; Ahmad, Arsalan; Inshasi, Jihad; Salam, Abdul; Shuaib, Ashfaq; Qidwai, Uvais

    2017-08-08

    The prediction of infarction volume after stroke onset depends on the shape of the growth dynamics of the infarction. To understand growth patterns that predict lesion volume changes, we studied currently available models described in the literature and compared them with the Adaptive Neuro-Fuzzy Inference System [ANFIS], a method previously unused in the prediction of infarction growth and infarction volume (IV). We included 67 patients with malignant middle cerebral artery [MMCA] stroke who underwent decompressive hemicraniectomy. All patients had at least three cranial CT scans prior to the surgery. The rate of growth and volume of infarction measured on the third CT was predicted with ANFIS without statistically significant difference compared to the ground truth [P = 0.489]. This was not possible with linear, logarithmic or exponential methods. ANFIS was able to predict infarction volume [IV3] over a wide range of volume [163.7-600 cm³] and time [22-110 hours]. The cross correlation [CRR] between the predicted IV3 and the original data was 82% for ANFIS, followed by 70% for the logarithmic, 63% for the exponential and 48% for the linear method, respectively. Our study shows that ANFIS is superior to previously defined methods in the prediction of infarction growth rate (IGR) with reasonable accuracy, over wide time and volume ranges.

  2. Tolerance of ciliated protozoan Paramecium bursaria (Protozoa, Ciliophora) to ammonia and nitrites

    NASA Astrophysics Data System (ADS)

    Xu, Henglong; Song, Weibo; Lu, Lu; Alan, Warren

    2005-09-01

    The tolerance to ammonia and nitrites in the freshwater ciliate Paramecium bursaria was measured in a conventional open system. The ciliate was exposed to different concentrations of ammonia and nitrites for 2 h and 12 h in order to determine the lethal concentrations. Linear regression analysis using the probit scale method revealed that the 2 h-LC50 value (with 95% confidence intervals) was 95.94 mg/L for ammonia and 27.35 mg/L for nitrite. There was a linear correlation between the mortality probit scale and the logarithmic concentration of ammonia, fit by the regression equation y = 7.32x - 9.51 (R² = 0.98; y, mortality probit scale; x, logarithmic concentration of ammonia), by which the 2 h-LC50 value for ammonia was found to be 95.50 mg/L. The linear correlation between the mortality probit scale and the logarithmic concentration of nitrite followed the regression equation y = 2.86x + 0.89 (R² = 0.95; y, mortality probit scale; x, logarithmic concentration of nitrite). Regression analysis of the toxicity curves showed that the correlation between exposure time at the ammonia-N LC50 value and the ammonia-N LC50 value followed the regression equation y = 2862.85 e^(-0.08x) (R² = 0.95; y, duration of exposure at the LC50 value; x, LC50 value), and that between exposure time at the nitrite-N LC50 value and the nitrite-N LC50 value followed y = 127.15 e^(-0.13x) (R² = 0.91; y, exposure time at the LC50 value; x, LC50 value). The results demonstrate that the tolerance to ammonia in P. bursaria is considerably higher than that of the larvae or juveniles of some metazoa, e.g. cultured prawns and oysters. In addition, ciliates, as bacterial predators, are likely to play a positive role in maintaining and improving water quality in aquatic environments with high-level ammonium, such as sewage treatment systems.
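
    The probit-regression recipe used above (regress the mortality probit on log10 concentration and read off the concentration at probit 5) can be sketched as follows. The dose-mortality numbers are hypothetical, chosen only to give an LC50 of the same order as reported for ammonia; this illustrates the standard method, not the authors' data.

```python
import numpy as np
from scipy import stats

def lc50_from_probit(conc_mg_l, mortality_fraction):
    """Fit the mortality probit (5 + z-score of the mortality fraction) against
    log10 concentration and return the LC50, i.e. the concentration at probit 5."""
    p = np.clip(np.asarray(mortality_fraction, dtype=float), 1e-6, 1 - 1e-6)
    probit = 5.0 + stats.norm.ppf(p)
    x = np.log10(np.asarray(conc_mg_l, dtype=float))
    slope, intercept, *_ = stats.linregress(x, probit)
    return 10.0 ** ((5.0 - intercept) / slope)

# hypothetical dose-mortality data with an LC50 in the 90-95 mg/L range
conc = [40, 60, 80, 100, 120, 160]
mort = [0.05, 0.18, 0.36, 0.55, 0.70, 0.90]
print(lc50_from_probit(conc, mort))
```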

  3. Z-Boson Decays To A Vector Quarkonium Plus A Photon

    DOE PAGES

    Bodwin, Geoffrey T.; Chung, Hee Sok; Ee, June-Haak; ...

    2018-01-18

    We compute the decay rates for the processes Z → V + γ, where Z is the Z-boson, γ is the photon, and V is one of the vector quarkonia J/ψ or Υ(nS), with n = 1, 2, or 3. Our computations include corrections through relative orders α_s and v² and resummations of logarithms of m_Z²/m_Q², to all orders in α_s, at next-to-leading-logarithmic accuracy. (v is the velocity of the heavy quark Q or the heavy antiquark Q̄ in the quarkonium rest frame, and m_Z and m_Q are the masses of Z and Q, respectively.) Our calculations are the first to include both the order-α_s correction to the light-cone distribution amplitude and the resummation of logarithms of m_Z²/m_Q², and are the first calculations for the Υ(2S) and Υ(3S) final states. The resummations of logarithms of m_Z²/m_Q² that are associated with the order-α_s and order-v² corrections are carried out by making use of the Abel-Padé method. We confirm the analytic result for the order-v² correction that was presented in a previous publication, and we correct the relative sign of the direct and indirect amplitudes and some choices of scales in that publication. In conclusion, our branching fractions for Z → J/ψ + γ and Z → Υ(1S) + γ differ by 2.0σ and -4.0σ, respectively, from the branching fractions that are given in the most recent publication on this topic (in units of the uncertainties that are given in that publication). However, we argue that the uncertainties in the rates are underestimated in that publication.

  4. Z-Boson Decays To A Vector Quarkonium Plus A Photon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bodwin, Geoffrey T.; Chung, Hee Sok; Ee, June-Haak

    We compute the decay rates for the processes Z → V + γ, where Z is the Z-boson, γ is the photon, and V is one of the vector quarkonia J/ψ or Υ(nS), with n = 1, 2, or 3. Our computations include corrections through relative orders α_s and v² and resummations of logarithms of m_Z²/m_Q², to all orders in α_s, at next-to-leading-logarithmic accuracy. (v is the velocity of the heavy quark Q or the heavy antiquark Q̄ in the quarkonium rest frame, and m_Z and m_Q are the masses of Z and Q, respectively.) Our calculations are the first to include both the order-α_s correction to the light-cone distribution amplitude and the resummation of logarithms of m_Z²/m_Q², and are the first calculations for the Υ(2S) and Υ(3S) final states. The resummations of logarithms of m_Z²/m_Q² that are associated with the order-α_s and order-v² corrections are carried out by making use of the Abel-Padé method. We confirm the analytic result for the order-v² correction that was presented in a previous publication, and we correct the relative sign of the direct and indirect amplitudes and some choices of scales in that publication. In conclusion, our branching fractions for Z → J/ψ + γ and Z → Υ(1S) + γ differ by 2.0σ and -4.0σ, respectively, from the branching fractions that are given in the most recent publication on this topic (in units of the uncertainties that are given in that publication). However, we argue that the uncertainties in the rates are underestimated in that publication.

  5. Advantages of using a logarithmic scale in pressure-volume diagrams for Carnot and other heat engine cycles

    NASA Astrophysics Data System (ADS)

    Shieh, Lih-Yir; Kan, Hung-Chih

    2014-04-01

    We demonstrate that plotting the P-V diagram of an ideal gas Carnot cycle on a logarithmic scale results in a more intuitive approach for deriving the final form of the efficiency equation. The same approach also facilitates the derivation of the efficiency of other thermodynamic engines that employ adiabatic ideal gas processes, such as the Brayton cycle, the Otto cycle, and the Diesel engine. We finally demonstrate that logarithmic plots of isothermal and adiabatic processes help with visualization in approximating an arbitrary process in terms of an infinite number of Carnot cycles.
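
    The reason the logarithmic plot is convenient is that both families of ideal-gas curves become straight lines, which is the observation underlying the article's argument:

```latex
\text{isotherm: } PV=nRT \;\Rightarrow\; \ln P = -\ln V + \ln(nRT) \quad(\text{slope } -1),
\qquad
\text{adiabat: } PV^{\gamma}=C \;\Rightarrow\; \ln P = -\gamma\ln V + \ln C \quad(\text{slope } -\gamma).
```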

  6. Resumming double non-global logarithms in the evolution of a jet

    NASA Astrophysics Data System (ADS)

    Hatta, Y.; Iancu, E.; Mueller, A. H.; Triantafyllopoulos, D. N.

    2018-02-01

    We consider the Banfi-Marchesini-Smye (BMS) equation which resums 'non-global' energy logarithms in the QCD evolution of the energy lost by a pair of jets via soft radiation at large angles. We identify a new physical regime where, besides the energy logarithms, one also has to resum (anti)collinear logarithms. Such a regime occurs when the jets are highly collimated (boosted) and the relative angles between successive soft gluon emissions are strongly increasing. These anti-collinear emissions can violate the correct time-ordering for time-like cascades and result in large radiative corrections enhanced by double collinear logs, making the BMS evolution unstable beyond leading order. We isolate the first such correction in a recent calculation of the BMS equation to next-to-leading order by Caron-Huot. To overcome this difficulty, we construct a 'collinearly-improved' version of the leading-order BMS equation which resums the double collinear logarithms to all orders. Our construction is inspired by a recent treatment of the Balitsky-Kovchegov (BK) equation for the high-energy evolution of a space-like wavefunction, where similar time-ordering issues occur. We show that the conformal mapping relating the leading-order BMS and BK equations correctly predicts the physical time-ordering, but it fails to predict the detailed structure of the collinear improvement.

  7. Using History to Teach Mathematics: The Case of Logarithms

    NASA Astrophysics Data System (ADS)

    Panagiotou, Evangelos N.

    2011-01-01

    Many authors have discussed the question of why we should use the history of mathematics in mathematics education. For example, Fauvel (For Learn Math, 11(2): 3-6, 1991) mentions at least fifteen arguments for applying the history of mathematics in teaching and learning mathematics. Knowing how to introduce history into mathematics lessons is a more difficult step. We found, however, that only a limited number of articles contain instructions on how to use the material, as opposed to numerous general articles suggesting the use of the history of mathematics as a didactical tool. The present article focuses on converting the history of logarithms into material appropriate for teaching 11th-grade students without any knowledge of calculus. History uncovers that logarithms were invented prior to the exponential function and shows that the logarithms are not an arbitrary product, as is the case when we leap straight to the definition given in all modern textbooks, but they are a response to a problem. We describe step by step the historical evolution of the concept, in a way appropriate for use in class, until the definition of the logarithm as area under the hyperbola. Next, we present the formal development of the theory and define the exponential function. The teaching sequence has been successfully undertaken in two high school classrooms.

  8. Reconstructing free-energy landscapes for nonequilibrium periodic potentials

    NASA Astrophysics Data System (ADS)

    López-Alamilla, N. J.; Jack, Michael W.; Challis, K. J.

    2018-03-01

    We present a method for reconstructing the free-energy landscape of overdamped Brownian motion on a tilted periodic potential. Our approach exploits the periodicity of the system by using the k -space form of the Smoluchowski equation and we employ an iterative approach to determine the nonequilibrium tilt. We reconstruct landscapes for a number of example potentials to show the applicability of the method to both deep and shallow wells and near-to- and far-from-equilibrium regimes. The method converges logarithmically with the number of Fourier terms in the potential.

  9. A Numerical Comparison of Barrier and Modified Barrier Methods for Large-Scale Bound-Constrained Optimization

    NASA Technical Reports Server (NTRS)

    Nash, Stephen G.; Polyak, R.; Sofer, Ariela

    1994-01-01

    When a classical barrier method is applied to the solution of a nonlinear programming problem with inequality constraints, the Hessian matrix of the barrier function becomes increasingly ill-conditioned as the solution is approached. As a result, it may be desirable to consider alternative numerical algorithms. We compare the performance of two methods motivated by barrier functions. The first is a stabilized form of the classical barrier method, where a numerically stable approximation to the Newton direction is used when the barrier parameter is small. The second is a modified barrier method where a barrier function is applied to a shifted form of the problem, and the resulting barrier terms are scaled by estimates of the optimal Lagrange multipliers. The condition number of the Hessian matrix of the resulting modified barrier function remains bounded as the solution to the constrained optimization problem is approached. Both of these techniques can be used in the context of a truncated-Newton method, and hence can be applied to large problems, as well as on parallel computers. In this paper, both techniques are applied to problems with bound constraints and we compare their practical behavior.
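
    As a concrete reference point for the classical scheme compared in the paper, here is a minimal logarithmic-barrier loop for bound constraints l ≤ x ≤ u. The stabilized and modified-barrier variants and the truncated-Newton inner solver are not reproduced; the inner subproblems are simply handed to L-BFGS-B, and all names and tolerances are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def log_barrier_solve(f, grad_f, x0, lower, upper,
                      mu0=1.0, shrink=0.2, tol=1e-8, max_outer=30):
    """Classical logarithmic-barrier method for bound constraints l <= x <= u:
    minimize f(x) - mu * sum(log(x - l) + log(u - x)) for decreasing mu."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    x, mu = np.asarray(x0, float), mu0
    for _ in range(max_outer):
        def phi(x):
            return f(x) - mu * (np.log(x - lower).sum() + np.log(upper - x).sum())
        def dphi(x):
            return grad_f(x) - mu * (1.0 / (x - lower) - 1.0 / (upper - x))
        res = minimize(phi, x, jac=dphi, method="L-BFGS-B",
                       bounds=list(zip(lower + 1e-12, upper - 1e-12)))
        x = res.x
        if mu < tol:
            break
        mu *= shrink
    return x

# example: min (x-2)^2 + (y+1)^2 subject to 0 <= x, y <= 1  ->  x* = (1, 0)
f = lambda z: (z[0] - 2.0) ** 2 + (z[1] + 1.0) ** 2
g = lambda z: np.array([2.0 * (z[0] - 2.0), 2.0 * (z[1] + 1.0)])
lo, hi = np.zeros(2), np.ones(2)
print(log_barrier_solve(f, g, np.array([0.5, 0.5]), lo, hi))
```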

  10. Tunneling spin polarization in planar tunnel junctions: measurements using NbN superconducting electrodes and evidence for Kondo-assisted tunneling

    NASA Astrophysics Data System (ADS)

    Yang, Hyunsoo

    2006-03-01

    The fundamental origin of tunneling magnetoresistance in magnetic tunnel junctions (MTJs) is the spin-polarized tunneling current, which can be measured directly using superconducting tunneling spectroscopy (STS). The STS technique was first developed by Meservey and Tedrow using aluminum superconducting electrodes. Al has been widely used because of its low spin orbit scattering. However, measurements must be made at low temperatures (<0.4 K) because of the low superconducting transition temperature of Al. Here, we demonstrate that superconducting electrodes formed from NbN can be used to measure tunneling spin polarization (TSP) at higher temperatures up to ~1.2 K. The tunneling magnetoresistance and polarization of the tunneling current in MTJs are highly sensitive to the detailed structure of the tunneling barrier. Using MgO tunnel barriers we find TSP values as high as 90% at 0.25 K. The TMR is, however, depressed by insertion of ultra-thin layers of both non-magnetic and magnetic metals in the middle of the MgO barrier. For ultra-thin, discontinuous magnetic layers of CoFe, we find evidence of Kondo assisted tunneling, from increased conductance at low temperatures (<50 K) and bias voltage (<20 mV). Over the same temperature and bias voltage regimes the tunneling magnetoresistance is strongly depressed. We present other evidence of Kondo resonance including the logarithmic temperature dependence of the zero bias conductance peak. We infer the Kondo temperature from both the spectral width of this conductance peak as well as the temperature dependence of the TMR depression. The Kondo temperature is sensitive to the thickness of the inserted CoFe layer and decreases with increased CoFe thickness. * performed in collaboration with S-H. Yang, C. Kaiser, and S. Parkin.

  11. Imaging early pathogenesis of bubonic plague: are neutrophils commandeered for lymphatic transport of bacteria?

    PubMed

    Bland, David M; Anderson, Deborah M

    2013-11-05

    Vector-borne infections begin in the dermis when a pathogen is introduced by an arthropod during a blood meal. Several barriers separate an invading pathogen from its replicative niche, including phagocytic cells in the dermis that activate immunity by engulfing would-be pathogens and migrating to the lymph node. In addition, neutrophils circulating in the blood are rapidly recruited when the dermal barriers are penetrated. For flea-borne disease, no insect-encoded immune-suppressive molecules have yet been described that might influence the establishment of infection, leaving the bacteria on their own to defend against the mammalian immune system. Shortly after a flea transmits Yersinia pestis to a mammalian host, the bacteria are transported to the lymph node, where they grow logarithmically and later spread systemically. Even a single cell of Y. pestis can initiate a lethal case of plague. In their article, J. G. Shannon et al. [mBio 4(5):e00170-13, 2013, doi:10.1128/mBio.00170-13] used intravital microscopy to visualize trafficking of Y. pestis in transgenic mice in vivo, which allowed them to examine interactions between bacteria and specific immune cells. Bacteria appeared to preferentially interact with neutrophils but had no detectable interactions with dendritic cells. These findings suggest that Y. pestis infection of neutrophils not only prevents their activation but may even result in their return to circulation and migration to distal sites.

  12. Use of biopartitioning micellar chromatography and RP-HPLC for the determination of blood-brain barrier penetration of α-adrenergic/imidazoline receptor ligands, and QSPR analysis.

    PubMed

    Vucicevic, J; Popovic, M; Nikolic, K; Filipic, S; Obradovic, D; Agbaba, D

    2017-03-01

    For this study, 31 compounds, including 16 imidazoline/α-adrenergic receptor (IRs/α-ARs) ligands and 15 central nervous system (CNS) drugs, were characterized in terms of the retention factors (k) obtained using biopartitioning micellar and classical reversed phase chromatography (log k_BMC and log k_wRP, respectively). Based on the retention factor (log k_wRP) and the slope of the linear curve (S), the isocratic parameter (φ_0) was calculated. The obtained retention factors were correlated with experimental log BB values for the group of examined compounds. High correlations were obtained between the logarithm of the biopartitioning micellar chromatography (BMC) retention factor and effective permeability (r(log k_BMC/log BB): 0.77), while for the RP-HPLC system the correlations were lower (r(log k_wRP/log BB): 0.58; r(S/log BB): -0.50; r(φ_0/P_e): 0.61). Based on the log k_BMC retention data and calculated molecular parameters of the examined compounds, quantitative structure-permeability relationship (QSPR) models were developed using partial least squares, stepwise multiple linear regression, support vector machine and artificial neural network methodologies. A high degree of structural diversity of the analysed IRs/α-ARs ligands and CNS drugs provides a wide applicability domain of the QSPR models for estimation of blood-brain barrier penetration of related compounds.

  13. Size-dependent standard deviation for growth rates: Empirical results and theoretical modeling

    NASA Astrophysics Data System (ADS)

    Podobnik, Boris; Horvatic, Davor; Pammolli, Fabio; Wang, Fengzhong; Stanley, H. Eugene; Grosse, I.

    2008-05-01

    We study annual logarithmic growth rates R of various economic variables such as exports, imports, and foreign debt. For each of these variables we find that the distributions of R can be approximated by double exponential (Laplace) distributions in the central parts and power-law distributions in the tails. For each of these variables we further find a power-law dependence of the standard deviation σ(R) on the average size of the economic variable with a scaling exponent surprisingly close to that found for the gross domestic product (GDP) [Phys. Rev. Lett. 81, 3275 (1998)]. By analyzing annual logarithmic growth rates R of wages of 161 different occupations, we find a power-law dependence of the standard deviation σ(R) on the average value of the wages with a scaling exponent β≈0.14 close to those found for the growth of exports, imports, debt, and the growth of the GDP. In contrast to these findings, we observe for payroll data collected from 50 states of the USA that the standard deviation σ(R) of the annual logarithmic growth rate R increases monotonically with the average value of payroll. However, also in this case we observe a power-law dependence of σ(R) on the average payroll with a scaling exponent β≈-0.08 . Based on these observations we propose a stochastic process for multiple cross-correlated variables where for each variable (i) the distribution of logarithmic growth rates decays exponentially in the central part, (ii) the distribution of the logarithmic growth rate decays algebraically in the far tails, and (iii) the standard deviation of the logarithmic growth rate depends algebraically on the average size of the stochastic variable.

  14. Size-dependent standard deviation for growth rates: empirical results and theoretical modeling.

    PubMed

    Podobnik, Boris; Horvatic, Davor; Pammolli, Fabio; Wang, Fengzhong; Stanley, H Eugene; Grosse, I

    2008-05-01

    We study annual logarithmic growth rates R of various economic variables such as exports, imports, and foreign debt. For each of these variables we find that the distributions of R can be approximated by double exponential (Laplace) distributions in the central parts and power-law distributions in the tails. For each of these variables we further find a power-law dependence of the standard deviation sigma(R) on the average size of the economic variable with a scaling exponent surprisingly close to that found for the gross domestic product (GDP) [Phys. Rev. Lett. 81, 3275 (1998)]. By analyzing annual logarithmic growth rates R of wages of 161 different occupations, we find a power-law dependence of the standard deviation sigma(R) on the average value of the wages with a scaling exponent beta approximately 0.14 close to those found for the growth of exports, imports, debt, and the growth of the GDP. In contrast to these findings, we observe for payroll data collected from 50 states of the USA that the standard deviation sigma(R) of the annual logarithmic growth rate R increases monotonically with the average value of payroll. However, also in this case we observe a power-law dependence of sigma(R) on the average payroll with a scaling exponent beta approximately -0.08 . Based on these observations we propose a stochastic process for multiple cross-correlated variables where for each variable (i) the distribution of logarithmic growth rates decays exponentially in the central part, (ii) the distribution of the logarithmic growth rate decays algebraically in the far tails, and (iii) the standard deviation of the logarithmic growth rate depends algebraically on the average size of the stochastic variable.

  15. A new real-time guidance strategy for aerodynamic ascent flight

    NASA Astrophysics Data System (ADS)

    Yamamoto, Takayuki; Kawaguchi, Jun'ichiro

    2007-12-01

    Reusable launch vehicles are conceived as the backbone of future space transportation systems. When such vehicles use air-breathing propulsion and take off horizontally using lift, their optimal steering exhibits completely different behavior from that of conventional rocket flight. In this paper, a new guidance strategy is proposed. The method is derived from the optimality condition for steering, and the analysis concludes that the steering function takes a form comprising linear and logarithmic terms that involve only four parameters. Parameter optimization of this method shows that the achieved terminal horizontal velocity is almost the same as that obtained by direct numerical optimization, which supports the parameterized linear-logarithmic steering law. It is also shown that a simple linear relation exists between the terminal states and the parameters to be corrected; this relation allows the parameters to be determined in real time so as to satisfy the terminal boundary conditions. The paper presents guidance results for practical application cases. The results show that the guidance performs well and satisfies the specified terminal boundary conditions. The strategy presented here guarantees a robust solution in real time without any onboard optimization process and is found to be quite practical.

  16. Multilayer material characterization using thermographic signal reconstruction

    NASA Astrophysics Data System (ADS)

    Shepard, Steven M.; Beemer, Maria Frendberg

    2016-02-01

    Active-thermography has become a well-established Nondestructive Testing (NDT) method for detection of subsurface flaws. In its simplest form, flaw detection is based on visual identification of contrast between a flaw and local intact regions in an IR image sequence of the surface temperature as the sample responds to thermal stimulation. However, additional information and insight can be obtained from the sequence, even in the absence of a flaw, through analysis of the logarithmic derivatives of individual pixel time histories using the Thermographic Signal Reconstruction (TSR) method. For example, the response of a flaw-free multilayer sample to thermal stimulation can be viewed as a simple transition between the responses of infinitely thick samples of the individual constituent layers over the lifetime of the thermal diffusion process. The transition is represented compactly and uniquely by the logarithmic derivatives, based on the ratio of thermal effusivities of the layers. A spectrum of derivative responses relative to thermal effusivity ratios allows prediction of the time scale and detectability of the interface, and measurement of the thermophysical properties of one layer if the properties of the other are known. A similar transition between steady diffusion states occurs for flat bottom holes, based on the hole aspect ratio.

  17. Identification of Intensity Ratio Break Points from Photon Arrival Trajectories in Ratiometric Single Molecule Spectroscopy

    PubMed Central

    Bingemann, Dieter; Allen, Rachel M.

    2012-01-01

    We describe a model-free statistical method to analyze dual-channel photon arrival trajectories from single-molecule spectroscopy and to identify break points in the intensity ratio. Photons are binned with a short bin size to calculate the logarithm of the intensity ratio for each bin. Stochastic photon counting noise leads to a near-normal distribution of this logarithm, and the standard Student t-test is used to find statistically significant changes in this quantity. In stochastic simulations we determine the significance threshold for the t-test's p-value at a given level of confidence. Tests of the method's sensitivity and accuracy indicate that the analysis reliably locates break points with significant changes in the intensity ratio, with little or no error, in realistic trajectories containing large numbers of small change points, while still identifying a large fraction of the frequent break points with small intensity changes. Based on these results we present an approach to estimate confidence intervals for the identified break point locations and recommend a bin size to choose for the analysis. The method proves powerful and reliable in the analysis of simulated and actual data of single-molecule reorientation in a glassy matrix. PMID:22837704
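
    A minimal sketch of this kind of binned t-test search is given below; the window size, p-value threshold, and toy Poisson data are illustrative choices, not the paper's calibrated values.

```python
import numpy as np
from scipy import stats

def find_break_points(n_a, n_b, window=20, p_threshold=1e-4):
    """Sketch of a model-free break-point search on binned dual-channel photon
    counts n_a, n_b (one entry per time bin). For every candidate bin, the log
    intensity ratio in the 'window' bins before and after is compared with a
    Student t-test; positions whose p-value falls below p_threshold are reported."""
    log_ratio = np.log((n_a + 0.5) / (n_b + 0.5))   # pseudo-counts avoid log(0)
    breaks = []
    for i in range(window, len(log_ratio) - window):
        left = log_ratio[i - window:i]
        right = log_ratio[i:i + window]
        _, p = stats.ttest_ind(left, right, equal_var=False)
        if p < p_threshold:
            breaks.append(i)
    return breaks

# Toy trajectory: the intensity ratio changes abruptly at bin 300.
rng = np.random.default_rng(1)
n_a = rng.poisson(np.where(np.arange(600) < 300, 40, 80))
n_b = rng.poisson(40, size=600)
print(find_break_points(n_a, n_b)[:5])
```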

  18. Logarithmic temporal axis manipulation and its application for measuring auditory contributions in F0 control using a transformed auditory feedback procedure

    NASA Astrophysics Data System (ADS)

    Yanaga, Ryuichiro; Kawahara, Hideki

    2003-10-01

    A new parameter extraction procedure based on a logarithmic transformation of the temporal axis was applied to investigate auditory effects on voice F0 control and to overcome artifacts due to natural fluctuations and nonlinearities in speech production mechanisms. The proposed method may add complementary information, in terms of the dynamic aspects of F0 control, to recent findings obtained with the frequency shift feedback method [Burnett and Larson, J. Acoust. Soc. Am. 112 (2002)]. In a series of experiments, the dependence of the F0-control system parameters on subject, F0, and style (musical expression and speaking) was tested with six participants: three male and three female students specializing in music education. They were asked to sustain the Japanese vowel /a/ for about 10 s repeatedly, up to 2 min in total, while hearing F0-modulated feedback speech, with the modulation driven by an M-sequence. The results qualitatively replicated a previous finding [Kawahara and Williams, Vocal Fold Physiology (1995)] and provided more accurate estimates. Implications for designing an artificial singer will also be discussed. [Work partly supported by Grant-in-Aid for Scientific Research (B) 14380165 and Wakayama University.]

  19. Mouse epileptic seizure detection with multiple EEG features and simple thresholding technique

    NASA Astrophysics Data System (ADS)

    Tieng, Quang M.; Anbazhagan, Ashwin; Chen, Min; Reutens, David C.

    2017-12-01

    Objective. Epilepsy is a common neurological disorder characterized by recurrent, unprovoked seizures. The search for new treatments for seizures and epilepsy relies upon studies in animal models of epilepsy. To capture data on seizures, many applications require prolonged electroencephalography (EEG) with recordings that generate voluminous data. The desire for efficient evaluation of these recordings motivates the development of automated seizure detection algorithms. Approach. A new seizure detection method is proposed, based on multiple features and a simple thresholding technique. The features are derived from chaos theory, information theory and the power spectrum of EEG recordings and optimally exploit both linear and nonlinear characteristics of EEG data. Main result. The proposed method was tested with real EEG data from an experimental mouse model of epilepsy and distinguished seizures from other patterns with high sensitivity and specificity. Significance. The proposed approach introduces two new features: negative logarithm of adaptive correlation integral and power spectral coherence ratio. The combination of these new features with two previously described features, entropy and phase coherence, improved seizure detection accuracy significantly. Negative logarithm of adaptive correlation integral can also be used to compute the duration of automatically detected seizures.

  20. Resumming double logarithms in the QCD evolution of color dipoles

    DOE PAGES

    Iancu, E.; Madrigal, J. D.; Mueller, A. H.; ...

    2015-05-01

    The higher-order perturbative corrections, beyond leading logarithmic accuracy, to the BFKL evolution in QCD at high energy are well known to suffer from a severe lack-of-convergence problem, due to radiative corrections enhanced by double collinear logarithms. Via an explicit calculation of Feynman graphs in light cone (time-ordered) perturbation theory, we show that the corrections enhanced by double logarithms (either energy-collinear, or double collinear) are associated with soft gluon emissions which are strictly ordered in lifetime. These corrections can be resummed to all orders by solving an evolution equation which is non-local in rapidity. This equation can be equivalently rewritten in local form, but with modified kernel and initial conditions, which resum double collinear logs to all orders. We extend this resummation to the next-to-leading order BFKL and BK equations. The first numerical studies of the collinearly-improved BK equation demonstrate the essential role of the resummation in both stabilizing and slowing down the evolution.

  1. The energy distribution of subjets and the jet shape

    DOE PAGES

    Kang, Zhong-Bo; Ringer, Felix; Waalewijn, Wouter J.

    2017-07-13

    We present a framework that describes the energy distribution of subjets of radius r within a jet of radius R. We consider both an inclusive sample of subjets as well as subjets centered around a predetermined axis, from which the jet shape can be obtained. For r << R we factorize the physics at angular scales r and R to resum the logarithms of r/R. For central subjets, we consider both the standard jet axis and the winner-take-all axis, which involve double and single logarithms of r/R, respectively. All relevant one-loop matching coefficients are given, and an inconsistency in some previous results for cone jets is resolved. Our results for the standard jet shape differ from previous calculations at next-to-leading logarithmic order, because we account for the recoil of the standard jet axis due to soft radiation. Numerical results are presented for an inclusive subjet sample for pp → jet + X at next-to-leading order plus leading logarithmic order.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, Zhong-Bo; Ringer, Felix; Waalewijn, Wouter J.

    We present a framework that describes the energy distribution of subjets of radius r within a jet of radius R. We consider both an inclusive sample of subjets as well as subjets centered around a predetermined axis, from which the jet shape can be obtained. For r << R we factorize the physics at angular scales r and R to resum the logarithms of r/R. For central subjets, we consider both the standard jet axis and the winner-take-all axis, which involve double and single logarithms of r/R, respectively. All relevant one-loop matching coefficients are given, and an inconsistency in some previous results for cone jets is resolved. Our results for the standard jet shape differ from previous calculations at next-to-leading logarithmic order, because we account for the recoil of the standard jet axis due to soft radiation. Numerical results are presented for an inclusive subjet sample for pp → jet + X at next-to-leading order plus leading logarithmic order.

  3. Dissipative quantum trajectories in complex space: Damped harmonic oscillator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chou, Chia-Chun, E-mail: ccchou@mx.nthu.edu.tw

    Dissipative quantum trajectories in complex space are investigated in the framework of the logarithmic nonlinear Schrödinger equation. The logarithmic nonlinear Schrödinger equation provides a phenomenological description for dissipative quantum systems. Substituting the wave function expressed in terms of the complex action into the complex-extended logarithmic nonlinear Schrödinger equation, we derive the complex quantum Hamilton–Jacobi equation including the dissipative potential. It is shown that dissipative quantum trajectories satisfy a quantum Newtonian equation of motion in complex space with a friction force. Exact dissipative complex quantum trajectories are analyzed for the wave and solitonlike solutions to the logarithmic nonlinear Schrödinger equation for the damped harmonic oscillator. These trajectories converge to the equilibrium position as time evolves. It is indicated that dissipative complex quantum trajectories for the wave and solitonlike solutions are identical to dissipative complex classical trajectories for the damped harmonic oscillator. This study develops a theoretical framework for dissipative quantum trajectories in complex space.

  4. Volatilities, Traded Volumes, and Price Increments in Derivative Securities

    NASA Astrophysics Data System (ADS)

    Kim, Kyungsik; Lim, Gyuchang; Kim, Soo Yong; Scalas, Enrico

    2007-03-01

    We apply detrended fluctuation analysis (DFA) to the statistics of Korean treasury bond (KTB) futures, from which the logarithmic increments, volatilities, and traded volumes are estimated over a specific time lag. In our case the logarithmic increment of futures prices has no long-memory property, while the volatility and the traded volume do exhibit long memory. To determine whether the volatility clustering is due to an inherent higher-order correlation not detected by applying the DFA directly to the logarithmic increments of the KTB futures, it is important to shuffle the original tick data of futures prices and to generate a geometric Brownian random walk with the same mean and standard deviation. Comparison of the three tick data sets shows that the higher-order correlation inherent in the logarithmic increments produces the volatility clustering. In particular, the result of the DFA on volatilities and traded volumes may support the hypothesis of price changes.

  5. Volatilities, traded volumes, and the hypothesis of price increments in derivative securities

    NASA Astrophysics Data System (ADS)

    Lim, Gyuchang; Kim, SooYong; Scalas, Enrico; Kim, Kyungsik

    2007-08-01

    A detrended fluctuation analysis (DFA) is applied to the statistics of Korean treasury bond (KTB) futures, from which the logarithmic increments, volatilities, and traded volumes are estimated over a specific time lag. In this study, the logarithmic increment of futures prices has no long-memory property, while the volatility and the traded volume do exhibit long memory. To determine whether the volatility clustering is due to an inherent higher-order correlation not detected by the direct application of the DFA to the logarithmic increments of KTB futures, it is important to shuffle the original tick data of futures prices and to generate a geometric Brownian random walk with the same mean and standard deviation. It was found from a comparison of the three tick data sets that the higher-order correlation inherent in the logarithmic increments leads to volatility clustering. In particular, the result of the DFA on volatilities and traded volumes supports the hypothesis of price changes.
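
    For readers unfamiliar with the technique used in the two records above, here is a minimal DFA-1 sketch applied to synthetic series standing in for the log increments and volatilities; it is not the authors' code, and the toy series have no long memory, so both exponents come out near 0.5.

```python
import numpy as np

def dfa(x, scales):
    """Minimal detrended fluctuation analysis (DFA-1): returns the fluctuation
    function F(s) for the given box sizes; the scaling exponent is the slope of
    log F(s) versus log s."""
    y = np.cumsum(x - np.mean(x))          # integrated profile
    F = []
    for s in scales:
        n_boxes = len(y) // s
        ms = []
        for k in range(n_boxes):
            seg = y[k * s:(k + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)   # linear detrending in each box
            ms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(ms)))
    return np.array(F)

# Illustrative use on synthetic 'log increments' and a stand-in 'volatility' series.
rng = np.random.default_rng(2)
increments = rng.normal(size=2 ** 14)          # uncorrelated -> exponent ~ 0.5
volatility = np.abs(increments)                # toy stand-in for a volatility series
scales = np.array([16, 32, 64, 128, 256, 512])
for name, series in [("increments", increments), ("volatility", volatility)]:
    alpha = np.polyfit(np.log(scales), np.log(dfa(series, scales)), 1)[0]
    print(name, round(alpha, 2))
```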

  6. Monte Carlo renormalization-group study of the Baxter-Wu model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Novotny, M.A.; Landau, D.P.; Swendsen, R.H.

    1982-07-01

    The effectiveness of a Monte Carlo renormalization-group method is studied by applying it to the Baxter-Wu model (Ising spins on a triangular lattice with three-spin interactions). The calculations yield three relevant eigenvalues in good agreement with exact or conjectured results. We demonstrate that the method is capable of distinguishing between models expected to be in the same universality class, when one of them (four-state Potts) exhibits logarithmic corrections to the usual power-law singularities and the other (Baxter-Wu) does not.

  7. Threshold resummation of soft gluons in hadronic reactions - an introduction.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berger, E. L.

    The authors discuss the motivation for resummation of the effects of initial-state soft gluon radiation, to all orders in the strong coupling strength, for processes in which the near-threshold region in the partonic subenergy is important. The author summarizes the method of perturbative resummation and its application to the calculation of the total cross section for top quark production at hadron colliders. Comments are included on the differences between the treatment of subleading logarithmic terms in this method and in other approaches.

  8. Hierarchical random additive process and logarithmic scaling of generalized high order, two-point correlations in turbulent boundary layer flow

    NASA Astrophysics Data System (ADS)

    Yang, X. I. A.; Marusic, I.; Meneveau, C.

    2016-06-01

    Townsend [Townsend, The Structure of Turbulent Shear Flow (Cambridge University Press, Cambridge, UK, 1976)] hypothesized that the logarithmic region in high-Reynolds-number wall-bounded flows consists of space-filling, self-similar attached eddies. Invoking this hypothesis, we express streamwise velocity fluctuations in the inertial layer in high-Reynolds-number wall-bounded flows as a hierarchical random additive process (HRAP): u_z^+ = Σ_{i=1}^{N_z} a_i. Here u is the streamwise velocity fluctuation, + indicates normalization in wall units, z is the wall-normal distance, and the a_i are independently, identically distributed random additives, each of which is associated with an attached eddy in the wall-attached hierarchy. The number of random additives is N_z ~ ln(δ/z), where δ is the boundary layer thickness and ln is the natural logarithm. Due to its simplified structure, such a process leads to predictions of the scaling behaviors for various turbulence statistics in the logarithmic layer. Besides reproducing the known logarithmic scaling of moments, structure functions, and the two-point correlation function ⟨u_z(x) u_z(x+r)⟩, new logarithmic laws in generalized two-point statistics such as ⟨u_z^2(x) u_z^2(x+r)⟩^(1/2) and ⟨u_z^3(x) u_z^3(x+r)⟩^(1/3) can be derived using the HRAP formalism. Supporting empirical evidence for the logarithmic scaling in such statistics is found from the Melbourne High Reynolds Number Boundary Layer Wind Tunnel measurements. We also show that, at high Reynolds numbers, the above mentioned new logarithmic laws can be derived by assuming the arrival of an attached eddy at a generic point in the flow field to be a Poisson process [Woodcock and Marusic, Phys. Fluids 27, 015104 (2015), 10.1063/1.4905301]. Taken together, the results provide new evidence supporting the essential ingredients of the attached eddy hypothesis to describe streamwise velocity fluctuations of large, momentum transporting eddies in wall-bounded turbulence, while observed deviations suggest the need for further extensions of the model.
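
    A minimal numerical sketch of the HRAP idea is given below: the velocity fluctuation at height z is modeled as a sum of N_z ~ ln(δ/z) i.i.d. additives (taken here to be Gaussian, an assumption of this sketch), so its variance grows logarithmically as the wall is approached. The numbers are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hierarchical random additive process sketch: at wall distance z the velocity
# fluctuation is the sum of N_z ~ ln(delta/z) independent, identically
# distributed additives, so var(u+) is predicted to grow like ln(delta/z).
delta = 1.0
z_values = np.array([0.02, 0.05, 0.1, 0.2])
n_samples = 200_000
for z in z_values:
    N_z = max(int(np.log(delta / z)), 1)          # number of attached-eddy additives
    u_plus = rng.normal(0.0, 1.0, size=(n_samples, N_z)).sum(axis=1)
    print(f"z/delta = {z:4.2f}  N_z = {N_z}  var(u+) = {u_plus.var():.2f}")
```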

  9. Triode carbon nanotube field emission display using barrier rib structure and manufacturing method thereof

    DOEpatents

    Han, In-taek; Kim, Jong-min

    2003-01-01

    A triode carbon nanotube field emission display (FED) using a barrier rib structure and a manufacturing method thereof are provided. In a triode carbon nanotube FED employing barrier ribs, barrier ribs are formed on cathode lines by a screen printing method, a mesh structure is mounted on the barrier ribs, and a spacer is inserted between the barrier ribs through slots of the mesh structure, thereby stably fixing the mesh structure and the spacer within a FED panel due to support by the barrier ribs.

  10. A Lagrange multiplier and Hopfield-type barrier function method for the traveling salesman problem.

    PubMed

    Dang, Chuangyin; Xu, Lei

    2002-02-01

    A Lagrange multiplier and Hopfield-type barrier function method is proposed for approximating a solution of the traveling salesman problem. The method is derived from applications of Lagrange multipliers and a Hopfield-type barrier function and attempts to produce a solution of high quality by generating a minimum point of a barrier problem for a sequence of descending values of the barrier parameter. For any given value of the barrier parameter, the method searches for a minimum point of the barrier problem in a feasible descent direction, which has a desired property that lower and upper bounds on variables are always satisfied automatically if the step length is a number between zero and one. At each iteration, the feasible descent direction is found by updating Lagrange multipliers with a globally convergent iterative procedure. For any given value of the barrier parameter, the method converges to a stationary point of the barrier problem without any condition on the objective function. Theoretical and numerical results show that the method seems more effective and efficient than the softassign algorithm.
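
    For contrast with the Hopfield-type barrier used above (and with the document's broader theme), the following is a minimal, generic logarithmic-barrier sketch for a toy inequality-constrained problem; it is not the paper's traveling-salesman algorithm, and the step sizes and schedules are illustrative.

```python
import numpy as np

def log_barrier_minimize(grad_f, x0, constraints, mu0=1.0, shrink=0.5,
                         n_outer=25, n_inner=300, lr=1e-2):
    """Minimal logarithmic-barrier sketch: minimize f subject to g_i(x) <= 0 by
    gradient descent on f(x) - mu * sum_i log(-g_i(x)) for a decreasing sequence
    of barrier weights mu. 'constraints' is a list of (g, grad_g) pairs."""
    x = np.asarray(x0, dtype=float)
    mu = mu0
    for _ in range(n_outer):
        for _ in range(n_inner):
            # gradient of the barrier objective: grad f(x) - mu * sum_i grad g_i(x) / g_i(x)
            grad = grad_f(x) - mu * sum(grad_g(x) / g(x) for g, grad_g in constraints)
            x_new = x - lr * grad
            while not all(g(x_new) < 0 for g, _ in constraints):
                x_new = (x + x_new) / 2.0          # backtrack to stay strictly feasible
            x = x_new
        mu *= shrink
    return x

# Toy example: minimize (x - 2)^2 subject to x <= 1, i.e. g(x) = x - 1 <= 0.
grad_f = lambda x: 2.0 * (x - 2.0)
constraints = [(lambda x: x - 1.0, lambda x: np.ones_like(x))]   # (g, grad g)
print(log_barrier_minimize(grad_f, np.array([0.0]), constraints))  # approaches [1.0]
```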

  11. Nonlinear coherent optical image processing using logarithmic transmittance of bacteriorhodopsin films

    NASA Astrophysics Data System (ADS)

    Downie, John D.

    1995-08-01

    The transmission properties of some bacteriorhodopsin-film spatial light modulators are uniquely suited to allow nonlinear optical image-processing operations to be applied to images with multiplicative noise characteristics. A logarithmic amplitude-transmission characteristic of the film permits the conversion of multiplicative noise to additive noise, which may then be linearly filtered out in the Fourier plane of the transformed image. I present experimental results demonstrating the principle and the capability for several different image and noise situations, including deterministic noise and speckle. The bacteriorhodopsin film studied here displays the logarithmic transmission response for write intensities spanning a dynamic range greater than 2 orders of magnitude.
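
    The underlying principle lends itself to a short numerical sketch: taking the logarithm turns multiplicative noise into additive noise, which an ordinary linear filter can then suppress. A simple spatial mean filter stands in for the Fourier-plane filtering of the optical system, and the test image and noise model are synthetic.

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(4)
clean = np.outer(np.hanning(128), np.hanning(128)) + 0.1      # smooth test image
speckle = rng.gamma(shape=10.0, scale=0.1, size=clean.shape)  # multiplicative noise, mean 1
noisy = clean * speckle

log_noisy = np.log(noisy)                 # multiplicative -> additive noise
filtered = uniform_filter(log_noisy, 7)   # linear low-pass filtering
restored = np.exp(filtered)               # back to the intensity domain

for name, img in [("noisy", noisy), ("restored", restored)]:
    print(name, "rms error:", np.sqrt(np.mean((img - clean) ** 2)).round(3))
```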

  12. Next-to-leading order Balitsky-Kovchegov equation with resummation

    DOE PAGES

    Lappi, T.; Mantysaari, H.

    2016-05-03

    Here, we solve the Balitsky-Kovchegov evolution equation at next-to-leading order accuracy including a resummation of large single and double transverse momentum logarithms to all orders. We numerically determine an optimal value for the constant under the large transverse momentum logarithm that enables including a maximal amount of the full NLO result in the resummation. When this value is used, the contribution from the α_s^2 terms without large logarithms is found to be small at large saturation scales and at small dipoles. Close to initial conditions relevant for phenomenological applications, these fixed-order corrections are shown to be numerically important.

  13. Nonlinear Coherent Optical Image Processing Using Logarithmic Transmittance of Bacteriorhodopsin Films

    NASA Technical Reports Server (NTRS)

    Downie, John D.

    1995-01-01

    The transmission properties of some bacteriorhodopsin-film spatial light modulators are uniquely suited to allow nonlinear optical image-processing operations to be applied to images with multiplicative noise characteristics. A logarithmic amplitude-transmission characteristic of the film permits the conversion of multiplicative noise to additive noise, which may then be linearly filtered out in the Fourier plane of the transformed image. I present experimental results demonstrating the principle and the capability for several different image and noise situations, including deterministic noise and speckle. The bacteriorhodopsin film studied here displays the logarithmic transmission response for write intensities spanning a dynamic range greater than 2 orders of magnitude.

  14. LOGARITHMIC AMPLIFIER

    DOEpatents

    Wade, E.J.; Stone, R.S.

    1959-03-10

    Electronic amplifier circuits, especially a logarithmic amplifier characterized by its greatly improved stability, are discussed. According to the invention, means are provided to feed back the output voltage to a diode in the amplifier input circuit, the diode being utilized to produce the logarithmic characteristic. The diode is compensated by a second diode connected in opposition therewith and having its filament operated from the same source as the filament of the logarithmic diode. A bias current of relatively large value compared with the signal current is continuously passed through the compensating diode to render the diode insensitive to variations in the signal current. In this way errors due to such variations are controlled, so that the stability of the amplifier will be unimpaired.

  15. Optimization of the Monte Carlo code for modeling of photon migration in tissue.

    PubMed

    Zołek, Norbert S; Liebert, Adam; Maniewski, Roman

    2006-10-01

    The Monte Carlo method is frequently used to simulate light transport in turbid media because of its simplicity and flexibility, which allow complicated geometrical structures to be analyzed. Monte Carlo simulations are, however, time consuming because of the necessity to track the paths of individual photons. The time-consuming computation is mainly associated with the calculation of logarithmic and trigonometric functions as well as the generation of pseudo-random numbers. In this paper, the Monte Carlo algorithm was developed and optimized by approximating the logarithmic and trigonometric functions. The approximations were based on polynomial and rational functions, and the errors of these approximations are less than 1% of the values of the original functions. The proposed algorithm was verified by simulations of the time-resolved reflectance at several source-detector separations. The results of the calculation using the approximated algorithm were compared with those of the Monte Carlo simulations obtained with an exact computation of the logarithm and trigonometric functions as well as with the solution of the diffusion equation. The errors of the moments of the simulated distributions of times of flight of photons (total number of photons, mean time of flight and variance) are less than 2% for a range of optical properties typical of living tissues. The proposed approximated algorithm speeds up the Monte Carlo simulations by a factor of 4. The developed code can be used on parallel machines, allowing further acceleration.
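
    The sketch below shows one common way to build such a fast logarithm (e.g., for sampling photon path lengths as −ln(ξ)): split x into mantissa and exponent and approximate ln on [0.5, 1) with a low-order polynomial. The coefficients are fitted here and are not the paper's polynomial or rational approximations.

```python
import numpy as np

# Fit a 4th-order polynomial to ln(m) on the mantissa range [0.5, 1).
m_grid = np.linspace(0.5, 1.0, 1001)
coeffs = np.polyfit(m_grid, np.log(m_grid), 4)

LN2 = np.log(2.0)

def fast_log(x):
    """Approximate natural logarithm via x = m * 2**e with m in [0.5, 1):
    ln(x) ~ p(m) + e * ln(2), where p is the fitted polynomial."""
    m, e = np.frexp(x)
    return np.polyval(coeffs, m) + e * LN2

# Compare against the exact logarithm over a range typical of uniform deviates
# used for path-length sampling; a rational fit (as in the paper) can reduce
# the error further.
xi = np.random.default_rng(5).uniform(1e-6, 1.0, size=100_000)
abs_err = np.abs(fast_log(xi) - np.log(xi))
print("max absolute error:", abs_err.max())
```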

  16. Review of potential subsurface permeable barrier emplacement and monitoring technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riggsbee, W.H.; Treat, R.L.; Stansfield, H.J.

    1994-02-01

    This report focuses on subsurface permeable barrier technologies potentially applicable to existing waste disposal sites. This report describes candidate subsurface permeable barriers, methods for emplacing these barriers, and methods used to monitor the barrier performance. Two types of subsurface barrier systems are described: those that apply to contamination in the unsaturated zone, and those that apply to groundwater and to mobile contamination near the groundwater table. These barriers may be emplaced either horizontally or vertically depending on waste and site characteristics. Materials for creating permeable subsurface barriers are emplaced using one of three basic methods: injection, in situ mechanical mixing, or excavation-insertion. Injection is the emplacement of dissolved reagents or colloidal suspensions into the soil at elevated pressures. In situ mechanical mixing is the physical blending of the soil and the barrier material underground. Excavation-insertion is the removal of a soil volume and the addition of barrier materials to the space created. Major vertical barrier emplacement technologies include trenching-backfilling; slurry trenching; and vertical drilling and injection, including boring (earth augering), cable tool drilling, rotary drilling, sonic drilling, jetting methods, injection-mixing in drilled holes, and deep soil mixing. Major horizontal barrier emplacement technologies include horizontal drilling, microtunneling, compaction boring, horizontal emplacement, longwall mining, hydraulic fracturing, and jetting methods.

  17. Ask the Experts

    ERIC Educational Resources Information Center

    Science Teacher, 2005

    2005-01-01

    This article features questions regarding logarithmic functions and hair growth. The first question is, "What is the underlying natural phenomenon that causes the natural log function to show up so frequently in scientific equations?" There are two reasons for this. The first is simply that the logarithm of a number is often used as a replacement…

  18. Product and Quotient Rules from Logarithmic Differentiation

    ERIC Educational Resources Information Center

    Chen, Zhibo

    2012-01-01

    A new application of logarithmic differentiation is presented, which provides an alternative elegant proof of two basic rules of differentiation: the product rule and the quotient rule. The proof can intrigue students, help promote their critical thinking and rigorous reasoning and deepen their understanding of previously encountered concepts. The…

  19. Adhesive flexible barrier film, method of forming same, and organic electronic device including same

    DOEpatents

    Blizzard, John Donald; Weidner, William Kenneth

    2013-02-05

    An adhesive flexible barrier film comprises a substrate and a barrier layer disposed on the substrate. The barrier layer is formed from a barrier composition comprising an organosilicon compound. The adhesive flexible barrier film also comprises an adhesive layer disposed on the barrier layer and formed from an adhesive composition. A method of forming the adhesive flexible barrier film comprises the steps of disposing the barrier composition on the substrate to form the barrier layer, disposing the adhesive composition on the barrier layer to form the adhesive layer, and curing the barrier layer and the adhesive layer. The adhesive flexible barrier film may be utilized in organic electronic devices.

  20. Long-range epidemic spreading in a random environment.

    PubMed

    Juhász, Róbert; Kovács, István A; Iglói, Ferenc

    2015-03-01

    Modeling long-range epidemic spreading in a random environment, we consider a quenched, disordered, d-dimensional contact process with infection rates decaying with distance as 1/r^(d+σ). We study the dynamical behavior of the model at and below the epidemic threshold by a variant of the strong-disorder renormalization-group method and by Monte Carlo simulations in one and two spatial dimensions. Starting from a single infected site, the average survival probability is found to decay as P(t) ∼ t^(-d/z) up to multiplicative logarithmic corrections. Below the epidemic threshold, a Griffiths phase emerges, where the dynamical exponent z varies continuously with the control parameter and tends to z_c = d+σ as the threshold is approached. At the threshold, the spatial extension of the infected cluster (in surviving trials) is found to grow as R(t) ∼ t^(1/z_c) with a multiplicative logarithmic correction, and the average number of infected sites in surviving trials is found to increase as N_s(t) ∼ (ln t)^χ with χ = 2 in one dimension.

  1. On alternative q-Weibull and q-extreme value distributions: Properties and applications

    NASA Astrophysics Data System (ADS)

    Zhang, Fode; Ng, Hon Keung Tony; Shi, Yimin

    2018-01-01

    Tsallis statistics and Tsallis distributions have been attracting a significant amount of research work in recent years. Importantly, Tsallis statistics and q-distributions have been applied in different disciplines. Yet, a relationship between some existing q-Weibull distributions and q-extreme value distributions that parallels the well-established relationship between the conventional Weibull and extreme value distributions through a logarithmic transformation has not been established. In this paper, we propose an alternative q-Weibull distribution that leads to a q-extreme value distribution via the q-logarithm transformation. Some important properties of the proposed q-Weibull and q-extreme value distributions are studied. Maximum likelihood and least squares estimation methods are used to estimate the parameters of the q-Weibull distribution, and their performance is investigated through a Monte Carlo simulation study. The methodologies and the usefulness of the proposed distributions are illustrated by fitting the 2014 traffic fatalities data from the National Highway Traffic Safety Administration.
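
    The transformation mentioned above is the standard Tsallis q-logarithm; a minimal sketch of it and of its inverse (the q-exponential) follows, with only a numerical round-trip check and the q → 1 limit, not the proposed distributions themselves.

```python
import numpy as np

def q_log(x, q):
    """Tsallis q-logarithm: ln_q(x) = (x**(1-q) - 1) / (1 - q), which reduces to
    the natural logarithm as q -> 1."""
    x = np.asarray(x, dtype=float)
    if np.isclose(q, 1.0):
        return np.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def q_exp(x, q):
    """Inverse of the q-logarithm: exp_q(x) = [1 + (1-q) x]_+ ** (1/(1-q))."""
    x = np.asarray(x, dtype=float)
    if np.isclose(q, 1.0):
        return np.exp(x)
    base = np.maximum(1.0 + (1.0 - q) * x, 0.0)
    return base ** (1.0 / (1.0 - q))

x = np.linspace(0.1, 5.0, 5)
print(np.allclose(q_exp(q_log(x, 1.3), 1.3), x))                 # round trip: True
print(np.allclose(q_log(x, 1.001), np.log(x), atol=2e-3))        # approaches ln(x) as q -> 1
```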

  2. Testing quantum gravity through dumb holes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pourhassan, Behnam, E-mail: b.pourhassan@du.ac.ir; Faizal, Mir, E-mail: f2mir@uwaterloo.ca; Irving K. Barber School of Arts and Sciences, University of British Columbia - Okanagan, Kelowna, BC V1V 1V7

    We propose a method to test the effects of quantum fluctuations on black holes by analyzing the effects of thermal fluctuations on dumb holes, the analogs for black holes. The proposal is based on the Jacobson formalism, where the Einstein field equations are viewed as thermodynamical relations, and so the quantum fluctuations are generated from the thermal fluctuations. It is well known that all approaches to quantum gravity generate logarithmic corrections to the entropy of a black hole and the coefficient of this term varies according to the different approaches to the quantum gravity. It is possible to demonstrate that such logarithmic terms are also generated from thermal fluctuations in dumb holes. In this paper, we claim that it is possible to experimentally test such corrections for dumb holes, and also obtain the correct coefficient for them. This fact can then be used to predict the effects of quantum fluctuations on realistic black holes, and so it can also be used, in principle, to experimentally test the different approaches to quantum gravity.

  3. Evaluation of a HDR image sensor with logarithmic response for mobile video-based applications

    NASA Astrophysics Data System (ADS)

    Tektonidis, Marco; Pietrzak, Mateusz; Monnin, David

    2017-10-01

    The performance of mobile video-based applications using conventional LDR (Low Dynamic Range) image sensors highly depends on the illumination conditions. As an alternative, HDR (High Dynamic Range) image sensors with logarithmic response are capable to acquire illumination-invariant HDR images in a single shot. We have implemented a complete image processing framework for a HDR sensor, including preprocessing methods (nonuniformity correction (NUC), cross-talk correction (CTC), and demosaicing) as well as tone mapping (TM). We have evaluated the HDR sensor for video-based applications w.r.t. the display of images and w.r.t. image analysis techniques. Regarding the display we have investigated the image intensity statistics over time, and regarding image analysis we assessed the number of feature correspondences between consecutive frames of temporal image sequences. For the evaluation we used HDR image data recorded from a vehicle on outdoor or combined outdoor/indoor itineraries, and we performed a comparison with corresponding conventional LDR image data.

  4. Spectral Elements Analysis for Viscoelastic Fluids at High Weissenberg Number Using Logarithmic conformation Tensor Model

    NASA Astrophysics Data System (ADS)

    Jafari, Azadeh; Deville, Michel O.; Fiétier, Nicolas

    2008-09-01

    This study discusses the capability of the constitutive laws for the matrix logarithm of the conformation tensor (LCT model) within the framework of the spectral elements method. The high Weissenberg number problems (HWNP) usually produce a lack of convergence of the numerical algorithms. Even though the question whether the HWNP is a purely numerical problem or rather a breakdown of the constitutive law of the model has remained somewhat of a mystery, it has been recognized that the selection of an appropriate constitutive equation constitutes a very crucial step although implementing a suitable numerical technique is still important for successful discrete modeling of non-Newtonian flows. The LCT model formulation of the viscoelastic equations originally suggested by Fattal and Kupferman is applied for 2-dimensional (2D) FENE-CR model. The Planar Poiseuille flow is considered as a benchmark problem to test this representation at high Weissenberg number. The numerical results are compared with numerical solution of the standard constitutive equation.

  5. Characterization of DBD Plasma Actuators Performance without External Flow. Part I: Thrust-Voltage Quadratic Relationship in Logarithmic Space for Sinusoidal Excitation

    NASA Technical Reports Server (NTRS)

    Ashpis, David E.; Laun, Matthew C.

    2016-01-01

    We present results of thrust measurements of Dielectric Barrier Discharge (DBD) plasma actuators. We have used a test setup, measurement, and data processing methodology that we developed in prior work. The tests were conducted with High Density Polyethylene (HDPE) actuators of three thicknesses. The applied voltage driving the actuators was a pure sinusoidal waveform. The test setup used suspended actuators with a partial liquid interface. The tests were conducted at low ambient humidity. The thrust was measured with an analytical balance and the results were corrected for anti-thrust to isolate the plasma-generated thrust. Applying this approach resulted in smooth and repeatable data. It also enabled curve fitting that yielded quadratic relations between the plasma thrust and voltage in log-log space at constant frequencies. The results contrast with power-law relationships reported in the literature, which appear to be rough approximations over a limited voltage range.

  6. Two-diode behavior in metal-ferroelectric-semiconductor structures with bismuth titanate interfacial layer

    NASA Astrophysics Data System (ADS)

    Durmuş, Perihan; Altindal, Şemsettin

    2017-10-01

    In this study, the electrical parameters of the Al/Bi4Ti3O12/p-Si metal-ferroelectric-semiconductor (MFS) structure and their temperature dependence were investigated using current-voltage (I-V) data measured between 120 K and 300 K. Semi-logarithmic I-V plots revealed that the fabricated structure exhibits two-diode behavior, which leads to two sets of ideality factor, reverse saturation current, and zero-bias barrier height (BH) values. The obtained parameters suggest that the current conduction mechanism (CCM) deviates strongly from thermionic emission theory, particularly at low temperatures. High interface-state densities and the nkT/q versus kT/q plot supported the idea of a deviation from thermionic emission. In addition, ln(I)-ln(V) plots suggested that the CCM varies from one bias region to another and depends on temperature as well. Series resistance values were calculated using Ohm's law and Cheung's functions, and they decreased drastically with increasing temperature.
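
    A minimal sketch of the usual semi-logarithmic analysis follows: in the exponential forward-bias region ln(I) is linear in V, so the slope gives the ideality factor and the intercept the saturation current, from which a zero-bias barrier height can be estimated. The data, diode area, and Richardson constant below are synthetic or nominal values, not those of the paper.

```python
import numpy as np

q = 1.602e-19      # C
k = 1.381e-23      # J/K
T = 300.0          # K
A = 7.85e-3        # diode area in cm^2 (assumed)
A_star = 32.0      # Richardson constant for p-Si, A cm^-2 K^-2 (nominal value)

# Synthetic forward I-V data generated with n = 1.8 and I0 = 1e-8 A.
V = np.linspace(0.15, 0.45, 30)
I = 1e-8 * np.exp(q * V / (1.8 * k * T))

# Linear fit of ln(I) versus V: slope = q/(n k T), intercept = ln(I0).
slope, intercept = np.polyfit(V, np.log(I), 1)
n = q / (slope * k * T)
I0 = np.exp(intercept)
# Zero-bias barrier height from I0 = A * A_star * T^2 * exp(-q*Phi_B0 / kT).
phi_b0 = (k * T / q) * np.log(A * A_star * T ** 2 / I0)
print(f"n = {n:.2f},  I0 = {I0:.2e} A,  Phi_B0 = {phi_b0:.2f} eV")
```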

  7. Deducing the Kinetics of Protein Synthesis In Vivo from the Transition Rates Measured In Vitro

    PubMed Central

    Rudorf, Sophia; Thommen, Michael; Rodnina, Marina V.; Lipowsky, Reinhard

    2014-01-01

    The molecular machinery of life relies on complex multistep processes that involve numerous individual transitions, such as molecular association and dissociation steps, chemical reactions, and mechanical movements. The corresponding transition rates can be typically measured in vitro but not in vivo. Here, we develop a general method to deduce the in-vivo rates from their in-vitro values. The method has two basic components. First, we introduce the kinetic distance, a new concept by which we can quantitatively compare the kinetics of a multistep process in different environments. The kinetic distance depends logarithmically on the transition rates and can be interpreted in terms of the underlying free energy barriers. Second, we minimize the kinetic distance between the in-vitro and the in-vivo process, imposing the constraint that the deduced rates reproduce a known global property such as the overall in-vivo speed. In order to demonstrate the predictive power of our method, we apply it to protein synthesis by ribosomes, a key process of gene expression. We describe the latter process by a codon-specific Markov model with three reaction pathways, corresponding to the initial binding of cognate, near-cognate, and non-cognate tRNA, for which we determine all individual transition rates in vitro. We then predict the in-vivo rates by the constrained minimization procedure and validate these rates by three independent sets of in-vivo data, obtained for codon-dependent translation speeds, codon-specific translation dynamics, and missense error frequencies. In all cases, we find good agreement between theory and experiment without adjusting any fit parameter. The deduced in-vivo rates lead to smaller error frequencies than the known in-vitro rates, primarily by an improved initial selection of tRNA. The method introduced here is relatively simple from a computational point of view and can be applied to any biomolecular process, for which we have detailed information about the in-vitro kinetics. PMID:25358034

  8. Laser induced phosphorescence uranium analysis

    DOEpatents

    Bushaw, B.A.

    1983-06-10

    A method is described for measuring the uranium content of aqueous solutions wherein a uranyl phosphate complex is irradiated with a 5 nanosecond pulse of 425 nanometer laser light and resultant 520 nanometer emissions are observed for a period of 50 to 400 microseconds after the pulse. Plotting the natural logarithm of emission intensity as a function of time yields an intercept value which is proportional to uranium concentration.
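
    A short sketch of the stated analysis: for an (approximately) exponential emission decay, a straight-line fit of ln(intensity) versus time gives an intercept whose exponential is proportional, through an instrument calibration factor, to the uranium concentration. The lifetime and count levels below are assumed, synthetic values.

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(50e-6, 400e-6, 60)     # s, observation window after the pulse
tau = 180e-6                           # s, assumed phosphorescence lifetime
I0 = 2.4e4                             # counts at t = 0, proportional to [U]
intensity = I0 * np.exp(-t / tau) * rng.normal(1.0, 0.02, size=t.size)

# Linear fit of the natural logarithm of the emission intensity versus time.
slope, intercept = np.polyfit(t, np.log(intensity), 1)
print(f"lifetime = {-1/slope*1e6:.0f} us, extrapolated intercept = {np.exp(intercept):.3g} counts")
```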

  9. Fourth International Congress on Industrial and Applied Mathematics. Book of Abstracts

    DTIC Science & Technology

    1999-01-01

    Dipartimento di Matematica , Universita’ di Pavia, Italy) Logarithmic Sobolev inequalities for kinetic semiconductor equations In this paper we analyze the...terms of Whitney forms. FERNANDES, Paolo (Istituto per la Matematica Applicata del Consiglio Nazionale delle Ricerche, Italy) Dealing with realistic... Matematica dell Universita di Pavia, Italy. PERUGIA, Ilaria (Diaprtimento di Matematica , Universita’ di Pavia - Italy) An adaptive field-based method

  10. Estimating equations estimates of trends

    USGS Publications Warehouse

    Link, W.A.; Sauer, J.R.

    1994-01-01

    The North American Breeding Bird Survey monitors changes in bird populations through time using annual counts at fixed survey sites. The usual method of estimating trends has been to use the logarithm of the counts in a regression analysis. It is contended that this procedure is reasonably satisfactory for more abundant species, but produces biased estimates for less abundant species. An alternative estimation procedure based on estimating equations is presented.

  11. International Workshop on Discrete Time Domain Modelling of Electromagnetic Fields and Networks (2nd) Held in Berlin, Germany on October 28-29, 1993

    DTIC Science & Technology

    1993-10-29

    natural logarithm of the ratio of two maxima a period apart. Both methods are based on the results from the numerical integration. The details of this... check and okay member functions are for software handshaking between the client and server process. Finally, the Forward function is used to initiate a

  12. Laser induced phosphorescence uranium analysis

    DOEpatents

    Bushaw, Bruce A.

    1986-01-01

    A method is described for measuring the uranium content of aqueous solutions wherein a uranyl phosphate complex is irradiated with a 5 nanosecond pulse of 425 nanometer laser light and resultant 520 nanometer emissions are observed for a period of 50 to 400 microseconds after the pulse. Plotting the natural logarithm of emission intensity as a function of time yields an intercept value which is proportional to uranium concentration.

  13. Comparison of two estimation methods for surface area concentration using number concentration and mass concentration of combustion-related ultrafine particles

    NASA Astrophysics Data System (ADS)

    Park, Ji Young; Raynor, Peter C.; Maynard, Andrew D.; Eberly, Lynn E.; Ramachandran, Gurumurthy

    Recent research has suggested that the adverse health effects caused by nanoparticles are associated with their surface area (SA) concentrations. In this study, SA was estimated in two ways using number and mass concentrations and compared with the SA measured using a diffusion charger (DC), SA_meas. Aerosol measurements were made twice: once starting in October 2002 and again starting in December 2002 in Mysore, India in residences that used kerosene or liquefied petroleum gas (LPG) for cooking. Mass, number, and SA concentrations and size distributions by number were measured in each residence. The first estimation method (SA_PSD) used the size distribution by number to estimate SA. The second method (SA_INV) used a simple inversion scheme that incorporated number and mass concentrations while assuming a lognormal size distribution with a known geometric standard deviation. SA_PSD was, on average, 2.4 times greater (range = 1.6-3.4) than SA_meas, while SA_INV was, on average, 6.0 times greater (range = 4.6-7.7) than SA_meas. The logarithms of SA_PSD and SA_INV were found to be statistically significant predictors of the logarithm of SA_meas. The study showed that particle number and mass concentration measurements can be used to estimate SA with a correction factor that ranges between 2 and 6.
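
    A simple inversion of the kind described for SA_INV can be sketched with the Hatch-Choate relations for a lognormal size distribution; the sketch below assumes spherical particles of known density and known geometric standard deviation, and is not necessarily the authors' exact scheme. All input numbers are illustrative.

```python
import numpy as np

def surface_area_from_N_and_M(number_conc, mass_conc, gsd, density):
    """Inversion sketch: assume spherical particles with a lognormal number size
    distribution of known geometric standard deviation 'gsd' and known material
    density; recover the count median diameter (CMD) from the mass-to-number
    ratio via the Hatch-Choate relations, then return the surface area
    concentration. Units: number_conc in 1/cm^3, mass_conc and density in g/cm^3,
    diameters in cm, result in cm^2/cm^3."""
    ln2_gsd = np.log(gsd) ** 2
    # mass_conc = number_conc * (pi/6) * density * CMD^3 * exp(4.5 * ln^2 GSD)
    cmd = ((6.0 * mass_conc) /
           (np.pi * density * number_conc * np.exp(4.5 * ln2_gsd))) ** (1.0 / 3.0)
    # SA_conc = number_conc * pi * CMD^2 * exp(2 * ln^2 GSD)
    return number_conc * np.pi * cmd ** 2 * np.exp(2.0 * ln2_gsd)

# Illustrative numbers (not from the study): 1e5 particles/cm^3, 50 ug/m^3,
# GSD = 1.8, unit density.
N = 1.0e5                       # 1/cm^3
M = 50e-6 / 1e6                 # g/cm^3 (50 ug/m^3)
print(surface_area_from_N_and_M(N, M, gsd=1.8, density=1.0), "cm^2/cm^3")
```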

  14. College Institutional Characteristics and the Use of Barrier Methods among Undergraduate Students

    ERIC Educational Resources Information Center

    Griner, Stacey B.; Thompson, Erika L.; Vamos, Cheryl A.; Logan, Rachel; Vázquez-Otero, Coralia; Daley, Ellen M.

    2017-01-01

    Sexually transmitted infections (STIs) may be prevented through the use of barrier methods, but rates of use among US college students are low. Previous research focuses on individual-level factors influencing barrier method use, but few studies consider community-level influences. This study examined consistency of barrier use by college…

  15. Critical N = (1, 1) general massive supergravity

    NASA Astrophysics Data System (ADS)

    Deger, Nihat Sadik; Moutsopoulos, George; Rosseel, Jan

    2018-04-01

    In this paper we study the supermultiplet structure of N = (1, 1) General Massive Supergravity at non-critical and critical points of its parameter space. To do this, we first linearize the theory around its maximally supersymmetric AdS3 vacuum and obtain the full linearized Lagrangian including fermionic terms. At generic values, the linearized modes can be organized into two massless and two massive multiplets, which supersymmetry relates in the standard way. At critical points logarithmic modes appear, and we find that at three of these points some of the supersymmetry transformations are non-invertible in logarithmic multiplets. However, at the fourth critical point, there is a massive logarithmic multiplet with invertible supersymmetry transformations.

  16. Military Geodesy and Geospace Science. Unit Three

    DTIC Science & Technology

    1981-02-01

    methods to be used. Film speed is a single number expressing the relative sensitivity of different films by summarizing some of the important... both illustrated in Fig. 3.2-7. The Federal Method B speed S_B is defined as S_B = 0.5/H_B (3.2-9), where H_B is the exposure required for a density of 0.30... must be replaced by an integral. [Figure residue: a characteristic curve plotting density (above base plus fog) against the logarithm of exposure (lux-sec), labeled "(a) Federal Method B speed".]

  17. A parameterized logarithmic image processing method with Laplacian of Gaussian filtering for lung nodule enhancement in chest radiographs.

    PubMed

    Chen, Sheng; Yao, Liping; Chen, Bao

    2016-11-01

    The enhancement of lung nodules in chest radiographs (CXRs) plays an important role in the manual as well as computer-aided detection (CADe) of lung cancer. In this paper, we propose a parameterized logarithmic image processing (PLIP) method combined with a Laplacian of Gaussian (LoG) filter to enhance lung nodules in CXRs. We first apply several LoG filters with varying parameters to the original CXR to enhance nodule-like structures as well as edges in the image. We then apply the PLIP model, which can enhance lung nodule images with high contrast and is beneficial for extracting effective features for nodule detection in a CADe scheme. Our method combines the advantages of both the PLIP and LoG algorithms and can enhance lung nodules in chest radiographs with high contrast. To test our nodule enhancement method, we evaluated a CADe scheme with relatively high nodule-detection performance on a publicly available database containing 140 nodules in 140 CXRs enhanced through our method. The CADe scheme attained sensitivities of 81 and 70 % at an average false positive (FP) rate of 5.0 and 2.0 per image, respectively, in a leave-one-out cross-validation test. By contrast, the CADe scheme based on the original images recorded sensitivities of 77 and 63 % at 5.0 and 2.0 FPs per image, respectively. We also introduced a measure of enhancement based on entropy evaluation to assess our method objectively. Experimental results show that the proposed method provides effective enhancement of lung nodules in CXRs for both radiologists and CADe schemes.
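
    The following simplified sketch combines multiscale LoG blob responses with a plain logarithmic contrast stretch; it stands in for, but is not, the paper's PLIP model, and the scales, weight, and toy image are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def enhance_nodules(image, sigmas=(2.0, 4.0, 8.0), alpha=0.6):
    """Emphasize bright blob-like (nodule-like) structures with multiscale
    Laplacian-of-Gaussian responses, then apply a simple logarithmic contrast
    stretch (a stand-in for the PLIP model of the paper)."""
    image = image.astype(float)
    # -(s^2) * LoG responds positively to bright blobs; keep the strongest scale.
    blobs = np.max([-(s ** 2) * gaussian_laplace(image, s) for s in sigmas], axis=0)
    blobs = np.clip(blobs, 0.0, None)
    combined = image + alpha * blobs
    combined -= combined.min()
    return np.log1p(combined) / np.log1p(combined.max())   # log contrast stretch

# Toy chest-radiograph-like image: smooth background plus a faint Gaussian blob.
y, x = np.mgrid[0:256, 0:256]
background = 0.4 + 0.2 * np.sin(x / 40.0)
nodule = 0.08 * np.exp(-((x - 128) ** 2 + (y - 128) ** 2) / (2 * 6.0 ** 2))
enhanced = enhance_nodules(background + nodule)
print(enhanced.shape, float(enhanced[128, 128]))
```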

  18. Model Robust Calibration: Method and Application to Electronically-Scanned Pressure Transducers

    NASA Technical Reports Server (NTRS)

    Walker, Eric L.; Starnes, B. Alden; Birch, Jeffery B.; Mays, James E.

    2010-01-01

    This article presents the application of a recently developed statistical regression method to the controlled instrument calibration problem. The statistical method of Model Robust Regression (MRR), developed by Mays, Birch, and Starnes, is shown to improve instrument calibration by reducing the reliance of the calibration on a predetermined parametric (e.g. polynomial, exponential, logarithmic) model. This is accomplished by allowing fits from the predetermined parametric model to be augmented by a certain portion of a fit to the residuals from the initial regression using a nonparametric (locally parametric) regression technique. The method is demonstrated for the absolute scale calibration of silicon-based pressure transducers.
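
    The core idea can be sketched in a few lines: fit the parametric model, smooth its residuals nonparametrically, and add back a fraction of that residual fit. The Gaussian kernel smoother and the fixed mixing fraction used below are simplifications of this sketch, not the MRR estimator itself (which selects the mix from the data).

```python
import numpy as np

def kernel_smooth(x, y, x_eval, bandwidth):
    """Nadaraya-Watson kernel smoother, used as a stand-in for the
    nonparametric (locally parametric) component of the fit."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w @ y) / w.sum(axis=1)

def model_robust_fit(x, y, degree=1, lam=0.5, bandwidth=0.05):
    """Sketch of the model-robust idea: parametric fit plus a fraction lam of a
    nonparametric fit to the parametric residuals."""
    coef = np.polyfit(x, y, degree)
    parametric = np.polyval(coef, x)
    residual_fit = kernel_smooth(x, y - parametric, x, bandwidth)
    return parametric + lam * residual_fit

# Toy calibration curve: nearly linear response with a mild systematic bow.
rng = np.random.default_rng(7)
x = np.linspace(0.0, 1.0, 200)
y = 2.0 * x + 0.15 * np.sin(3.0 * np.pi * x) + rng.normal(0.0, 0.02, x.size)
for lam in (0.0, 0.5, 1.0):
    resid = y - model_robust_fit(x, y, lam=lam)
    print(f"lambda = {lam:.1f}  RMS residual = {np.sqrt(np.mean(resid ** 2)):.4f}")
```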

  19. FINE GRAIN NUCLEAR EMULSION

    DOEpatents

    Oliver, A.J.

    1962-04-24

    A method of preparing nuclear track emulsions having mean grain sizes less than 0.1 microns is described. The method comprises adding silver nitrate to potassium bromide at a rate at which there is always a constant, critical excess of silver ions. For minimum grain size, the silver ion concentration is maintained at the critical level of about pAg 2.0 to 5.0 during precipitation, pAg being defined as the negative logarithm of the silver ion concentration. It is preferred to eliminate the excess silver at the conclusion of the precipitation steps. The emulsion is processed by methods in all other respects generally similar to the methods of the prior art. (AEC)

  20. Children's Early Mental Number Line: Logarithmic or Decomposed Linear?

    ERIC Educational Resources Information Center

    Moeller, Korbinean; Pixner, Silvia; Kaufmann, Liane; Nuerk, Hans-Christoph

    2009-01-01

    Recently, the nature of children's mental number line has received much investigation. In the number line task, children are required to mark a presented number on a physical number line with fixed endpoints. Typically, it was observed that the estimations of younger/inexperienced children were accounted for best by a logarithmic function, whereas…

  1. Approximating Exponential and Logarithmic Functions Using Polynomial Interpolation

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.; Yang, Yajun

    2017-01-01

    This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is…

  2. How Many Is a Zillion? Sources of Number Distortion

    ERIC Educational Resources Information Center

    Rips, Lance J.

    2013-01-01

    When young children attempt to locate the positions of numerals on a number line, the positions are often logarithmically rather than linearly distributed. This finding has been taken as evidence that the children represent numbers on a mental number line that is logarithmically calibrated. This article reports a statistical simulation showing…

  3. Logarithmic Transformations in Regression: Do You Transform Back Correctly?

    ERIC Educational Resources Information Center

    Dambolena, Ismael G.; Eriksen, Steven E.; Kopcso, David P.

    2009-01-01

    The logarithmic transformation is often used in regression analysis for a variety of purposes such as the linearization of a nonlinear relationship between two or more variables. We have noticed that when this transformation is applied to the response variable, the computation of the point estimate of the conditional mean of the original response…
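
    The retransformation issue this article addresses can be illustrated numerically: after regressing ln(y) on x, exponentiating the fitted value estimates the conditional median of y rather than the mean, and under normal errors the mean needs the lognormal correction exp(σ²/2) (Duan's nonparametric "smearing" factor is a common alternative). The sketch below uses synthetic data; it illustrates the general point, not the article's specific examples.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 50_000
x = rng.uniform(0.0, 1.0, n)
sigma = 0.8
y = np.exp(1.0 + 2.0 * x + rng.normal(0.0, sigma, n))   # true E[y|x] = exp(1 + 2x + sigma^2/2)

b1, b0 = np.polyfit(x, np.log(y), 1)
resid = np.log(y) - (b0 + b1 * x)
s2 = resid.var(ddof=2)

x0 = 0.5
naive = np.exp(b0 + b1 * x0)                 # biased low as an estimate of the conditional mean
corrected = naive * np.exp(s2 / 2.0)         # lognormal (normal-error) correction
smearing = naive * np.mean(np.exp(resid))    # Duan's smearing estimator
true_mean = np.exp(1.0 + 2.0 * x0 + sigma ** 2 / 2.0)
print(round(naive, 2), round(corrected, 2), round(smearing, 2), round(true_mean, 2))
```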

  4. Spatially averaged flow over a wavy boundary revisited

    USGS Publications Warehouse

    McLean, S.R.; Wolfe, S.R.; Nelson, J.M.

    1999-01-01

    Vertical profiles of streamwise velocity measured over bed forms are commonly used to deduce boundary shear stress for the purpose of estimating sediment transport. These profiles may be derived locally or from some sort of spatial average. Arguments for using the latter procedure are based on the assumption that spatial averaging of the momentum equation effectively removes local accelerations from the problem. Using analogies based on steady, uniform flows, it has been argued that the spatially averaged velocity profiles are approximately logarithmic and can be used to infer values of boundary shear stress. This technique of using logarithmic profiles is investigated using detailed laboratory measurements of flow structure and boundary shear stress over fixed two-dimensional bed forms. Spatial averages over the length of the bed form of mean velocity measurements at constant distances from the mean bed elevation yield vertical profiles that are highly logarithmic even though the effect of the bottom topography is observed throughout the water column. However, logarithmic fits of these averaged profiles do not yield accurate estimates of the measured total boundary shear stress. Copyright 1999 by the American Geophysical Union.
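
    The fitting procedure whose accuracy is questioned above can be summarized in a short sketch: fit the logarithmic law u(z) = (u*/κ) ln(z/z0) to a velocity profile and convert the fitted u* into a boundary shear stress τ = ρ u*². The profile below is synthetic and noise-free, not the laboratory data.

```python
import numpy as np

kappa = 0.41          # von Karman constant
rho = 1000.0          # kg/m^3 (water)
u_star_true, z0_true = 0.05, 1.0e-3           # m/s, m (assumed values)

z = np.linspace(0.01, 0.2, 25)                # m, heights above the mean bed
u = (u_star_true / kappa) * np.log(z / z0_true)

# Linear fit of u against ln(z): slope = u*/kappa, intercept = -(u*/kappa) ln(z0).
slope, intercept = np.polyfit(np.log(z), u, 1)
u_star = kappa * slope
z0 = np.exp(-intercept / slope)
tau = rho * u_star ** 2
print(f"u* = {u_star:.3f} m/s, z0 = {z0:.1e} m, tau = {tau:.2f} Pa")
```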

  5. Logarithmic r-θ mapping for hybrid optical neural network filter for multiple objects recognition within cluttered scenes

    NASA Astrophysics Data System (ADS)

    Kypraios, Ioannis; Young, Rupert C. D.; Chatwin, Chris R.; Birch, Phil M.

    2009-04-01

    The window unit in the design of the complex logarithmic r-θ mapping for the hybrid optical neural network filter can allow multiple objects of the same class to be detected within the input image. Additionally, the architecture of the neural network unit of the complex logarithmic r-θ mapping for the hybrid optical neural network filter becomes attractive for accommodating the recognition of multiple objects of different classes within the input image by modifying the output layer of the unit. We test the overall filter for the recognition of multiple objects of the same and of different classes within cluttered input images and video sequences of cluttered scenes. The logarithmic r-θ mapping for the hybrid optical neural network filter is shown to exhibit, with a single pass over the input data, simultaneous in-plane rotation, out-of-plane rotation, scale, log r-θ map translation and shift invariance, and good clutter tolerance by correctly recognizing the different objects within the cluttered scenes. We record in our results additional extracted information from the cluttered scenes about the objects' relative position, scale and in-plane rotation.
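
    A minimal sketch of a logarithmic r-θ (log-polar) mapping is given below, using a plain nearest-neighbour resampling implemented in numpy; it illustrates why such a mapping is attractive here (scale changes of the input become translations along the log-r axis), and it is not the filter architecture of the paper.

```python
import numpy as np

def log_polar_map(image, n_r=64, n_theta=128):
    """Sample the input on a grid that is logarithmic in radius and uniform in
    angle about the image centre, using nearest-neighbour lookup. Scale changes
    and in-plane rotations of the input become translations in this map."""
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    r = np.exp(np.linspace(0.0, np.log(r_max), n_r))          # logarithmic radii
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, theta, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return image[ys, xs]                      # shape (n_r, n_theta)

# A scaled copy of a pattern shifts along the log-r axis of the mapped image.
y, x = np.mgrid[0:129, 0:129]
ring_r20 = (np.abs(np.hypot(x - 64, y - 64) - 20) < 2).astype(float)
ring_r40 = (np.abs(np.hypot(x - 64, y - 64) - 40) < 2).astype(float)
print(log_polar_map(ring_r20).sum(axis=1).argmax(),
      log_polar_map(ring_r40).sum(axis=1).argmax())   # peak position shifts along log-r
```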

  6. HARDI DATA DENOISING USING VECTORIAL TOTAL VARIATION AND LOGARITHMIC BARRIER

    PubMed Central

    Kim, Yunho; Thompson, Paul M.; Vese, Luminita A.

    2010-01-01

    In this work, we wish to denoise HARDI (High Angular Resolution Diffusion Imaging) data arising in medical brain imaging. Diffusion imaging is a relatively new and powerful method to measure the three-dimensional profile of water diffusion at each point in the brain. These images can be used to reconstruct fiber directions and pathways in the living brain, providing detailed maps of fiber integrity and connectivity. HARDI data is a powerful new extension of diffusion imaging, which goes beyond the diffusion tensor imaging (DTI) model: mathematically, intensity data is given at every voxel and at any direction on the sphere. Unfortunately, HARDI data is usually highly contaminated with noise, depending on the b-value which is a tuning parameter pre-selected to collect the data. Larger b-values help to collect more accurate information in terms of measuring diffusivity, but more noise is generated by many factors as well. So large b-values are preferred, if we can satisfactorily reduce the noise without losing the data structure. Here we propose two variational methods to denoise HARDI data. The first one directly denoises the collected data S, while the second one denoises the so-called sADC (spherical Apparent Diffusion Coefficient), a field of radial functions derived from the data. These two quantities are related by an equation of the form S = S_0 exp(−b · sADC) (in the noise-free case). By applying these two different models, we will be able to determine which quantity will most accurately preserve data structure after denoising. The theoretical analysis of the proposed models is presented, together with experimental results and comparisons for denoising synthetic and real HARDI data. PMID:20802839
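
    The relation quoted above inverts directly: sADC = −(1/b) ln(S/S_0). The short sketch below checks this on synthetic, noise-contaminated data (the b-value, signal level, and Rician-like noise model are assumptions of the sketch, not the paper's acquisition parameters).

```python
import numpy as np

rng = np.random.default_rng(9)
b = 3000.0                    # s/mm^2, an assumed (large) b-value
S0 = 1.0
sADC_true = 0.7e-3            # mm^2/s
S_clean = S0 * np.exp(-b * sADC_true)
# Rician-like magnitude noise standing in for the acquisition noise.
S_noisy = np.abs(S_clean + rng.normal(0.0, 0.02, size=1000) +
                 1j * rng.normal(0.0, 0.02, size=1000))

sADC_est = -np.log(S_noisy / S0) / b
print(f"true sADC = {sADC_true:.2e}, mean estimate = {sADC_est.mean():.2e} mm^2/s")
```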

  7. [Ophthalmologic reading charts : Part 2: Current logarithmically scaled reading charts].

    PubMed

    Radner, W

    2016-12-01

    To analyze currently available reading charts regarding print size, logarithmic print size progression, and the background of test-item standardization. For the present study, the following logarithmically scaled reading charts were investigated using a measuring microscope (iNexis VMA 2520; Nikon, Tokyo): Eschenbach, Zeiss, OCULUS, MNREAD (Minnesota Near Reading Test), Colenbrander, and RADNER. Calculations were made according to EN-ISO 8596 and the International Research Council recommendations. Modern reading charts and cards exhibit a logarithmic progression of print sizes. The RADNER reading charts comprise four different cards with standardized test items (sentence optotypes), a well-defined stop criterion, accurate letter sizes, and a high print quality. Numbers and Landolt rings are also given in the booklet. The OCULUS cards have currently been reissued according to recent standards and also exhibit a high print quality. In addition to letters, numbers, Landolt rings, and examples taken from a timetable and the telephone book, sheet music is also offered. The Colenbrander cards use short sentences of 44 characters, including spaces, and exhibit inaccuracy at smaller letter sizes, as do the MNREAD cards. The MNREAD cards use sentences of 60 characters, including spaces, and have a high print quality. Modern reading charts show that international standards can be achieved with test items similar to optotypes, by using recent technology and developing new concepts of test-item standardization. Accurate print sizes, high print quality, and a logarithmic progression should become the minimum requirements for reading charts and reading cards in ophthalmology.

  8. Logarithmic spiral flap for circular or oval defects on the lateral surface of the nose and nasal ala: a series of 15 cases.

    PubMed

    Moreno-Artero, E; Redondo, P

    2015-10-01

    A large number of flaps, particularly rotation and transposition flaps, have been described for the closure of skin defects left by oncologic surgery of the nose. The logarithmic spiral flap is a variant of the rotation flap. We present a series of 15 patients with different types of skin tumor on the nose. The skin defect resulting from excision of the tumor by micrographic surgery was reconstructed using various forms of the logarithmic spiral flap. There are 3 essential aspects to flap design: commencement of the pedicle at the upper or lower border of the wound, a width of the distal end of the flap equal to the vertical diameter of the defect, and a progressive increase in the radius of the spiral from the distal end of the flap to its base. The cosmetic and functional results of surgical reconstruction were satisfactory, and no patient required additional treatment to improve scar appearance. The logarithmic spiral flap is useful for the closure of circular or oval defects situated on the lateral surface of the nose and nasal ala. The flap initiates at one of the borders of the wound as a pedicle with a radius that increases progressively to create a spiral. We propose the logarithmic spiral flap as an excellent option for the closure of circular or oval defects of the nose. Copyright © 2015 Elsevier España, S.L.U. and AEDV. All rights reserved.

  9. The complete two-loop integrated jet thrust distribution in soft-collinear effective theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    von Manteuffel, Andreas; Schabinger, Robert M.; Zhu, Hua Xing

    2014-03-01

    In this work, we complete the calculation of the soft part of the two-loop integrated jet thrust distribution in e+e- annihilation. This jet mass observable is based on the thrust cone jet algorithm, which involves a veto scale for out-of-jet radiation. The previously uncomputed part of our result depends in a complicated way on the jet cone size, r, and at intermediate stages of the calculation we actually encounter a new class of multiple polylogarithms. We employ an extension of the coproduct calculus to systematically exploit functional relations and represent our results concisely. In contrast to the individual contributions, the sum of all global terms can be expressed in terms of classical polylogarithms. Our explicit two-loop calculation enables us to clarify the small r picture discussed in earlier work. In particular, we show that the resummation of the logarithms of r that appear in the previously uncomputed part of the two-loop integrated jet thrust distribution is inextricably linked to the resummation of the non-global logarithms. Furthermore, we find that the logarithms of r which cannot be absorbed into the non-global logarithms in the way advocated in earlier work have coefficients fixed by the two-loop cusp anomalous dimension. We also show that in many cases one can straightforwardly predict potentially large logarithmic contributions to the integrated jet thrust distribution at L loops by making use of analogous contributions to the simpler integrated hemisphere soft function.

  10. Intensity-distance attenuation law in the continental Portugal using intensity data points

    NASA Astrophysics Data System (ADS)

    Le Goff, Boris; Bezzeghoud, Mourad; Borges, José Fernando

    2013-04-01

    Several attempts have been made to evaluate how intensity attenuates with epicentral distance in the Iberian Peninsula [1, 2]. So far, the results are either unsatisfactory or do not use the intensity data points of the available events. We developed a new intensity attenuation law for continental Portugal, using macroseismic reports that provide intensity data points, instrumental magnitudes and instrumental locations. We collected 31 events from the Instituto Portugues do Mar e da Atmosfera (IPMA, Portugal; formerly IM), covering the period between 1909 and 1997, with a largest magnitude of 8.2, close to the African-Eurasian plate boundary. For each event, the intensity data points are plotted versus distance and different trend lines are fitted (linear, exponential and logarithmic). The best fits are obtained with the logarithmic trend lines. We adopt an attenuation equation of the form I = c0(M) + c1(M)·ln(R) (1), where I, M and R are, respectively, the intensity, the magnitude and the epicentral distance. To solve this equation, we investigate two methods. The first consists in plotting the slopes of the different logarithmic trends versus magnitude, to estimate the parameter c1(M) and to evaluate how the intensity behaves as a function of magnitude; a second plot, of the intercepts versus magnitude, allows us to determine the other parameter, c0(M). The second method uses inverse theory: from the data, we recover the model parameters using a linear inverse matrix. Both parameters, c0(M) and c1(M), are provided with their associated errors. A sensitivity test will be performed, using the macroseismic data, to estimate the resolving power of both methods. This new attenuation law will be used with the Bakun and Wentworth method [3] in order to re-estimate the epicentral region and the magnitude of the 1909 Benavente event. This attenuation law may also be adapted for use in Probabilistic Seismic Hazard Analysis. [1] Lopez Casado, C., Molina Palacios, S., Delgado, J., and Pelaez, J.A., 2000, BSSA, 90, 1, pp. 34-47 [2] Sousa, M. L., and Oliveira, C. S., 1997, Natural Hazards, 14: 207-225 [3] Bakun, W. H., and Wentworth, C. M., 1997, BSSA, vol. 87, No. 6, pp. 1502-1521
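
    As a sketch of the first method described above (not the authors' code), the following least-squares fit recovers c0 and c1 for a single event from hypothetical intensity data points; repeating it across events of different magnitude would give the trends c0(M) and c1(M).

        # Minimal sketch: least-squares estimation of the attenuation law
        # I = c0(M) + c1(M) * ln(R) for one event, using hypothetical
        # intensity data points (I, epicentral distance R in km).
        import numpy as np

        R = np.array([10.0, 25.0, 60.0, 120.0, 250.0])   # epicentral distances (assumed)
        I = np.array([7.0, 6.1, 5.0, 4.2, 3.1])          # macroseismic intensities (assumed)

        # Linear model in ln(R): columns [1, ln(R)], parameters [c0, c1]
        A = np.column_stack([np.ones_like(R), np.log(R)])
        (c0, c1), *_ = np.linalg.lstsq(A, I, rcond=None)

        print(f"c0 = {c0:.2f}, c1 = {c1:.2f}")
        # Repeating this fit for events of different magnitude M gives the
        # trends c0(M) and c1(M) used in the first method above.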

  11. Freezing transition of the directed polymer in a 1+d random medium: Location of the critical temperature and unusual critical properties

    NASA Astrophysics Data System (ADS)

    Monthus, Cécile; Garel, Thomas

    2006-07-01

    In dimension d ≥ 3, the directed polymer in a random medium undergoes a phase transition between a free phase at high temperature and a low-temperature disorder-dominated phase. For the latter phase, Fisher and Huse have proposed a droplet theory based on the scaling of the free-energy fluctuations ΔF(l) ~ l^θ at scale l. On the other hand, in related growth models belonging to the Kardar-Parisi-Zhang universality class, Forrest and Tang have found that the height-height correlation function is logarithmic at the transition. For the directed polymer model at criticality, this translates into logarithmic free-energy fluctuations ΔF_Tc(l) ~ (ln l)^σ with σ = 1/2. In this paper, we propose a droplet scaling analysis exactly at criticality based on this logarithmic scaling. Our main conclusion is that the typical correlation length ξ(T) of the low-temperature phase diverges as ln ξ(T) ~ [-ln(Tc-T)]^(1/σ) ~ [-ln(Tc-T)]^2, instead of the usual power law ξ(T) ~ (Tc-T)^(-ν). Furthermore, the logarithmic dependence of ΔF_Tc(l) leads to the conclusion that the critical temperature Tc actually coincides with the explicit upper bound T2 derived by Derrida and co-workers, where T2 corresponds to the temperature below which the ratio of the disorder averages, avg(Z_L^2)/[avg(Z_L)]^2, diverges exponentially in L. Finally, since the Fisher-Huse droplet theory was initially introduced for the spin-glass phase, we briefly mention the similarities with and differences from the directed polymer model. If one speculates that the free energy of droplet excitations for spin glasses is also logarithmic at Tc, one obtains a logarithmic decay for the disorder-averaged mean square correlation function at criticality, C^2(r) ~ 1/(ln r)^σ, instead of the usual power law 1/r^(d-2+η).

  12. Misperceptions regarding protective barrier method use for safer sex among African-American women who have sex with women.

    PubMed

    Muzny, Christina A; Harbison, Hanne S; Pembleton, Elizabeth S; Hook, Edward W; Austin, Erika L

    2013-05-01

    Barrier methods for HIV and sexually transmissible infection (STI) prevention among women who have sex with women (WSW) are available, although their effectiveness has not been systematically investigated. These methods are infrequently used by WSW. As part of a larger study on STI risk perceptions and safer sex among African-American WSW, we discovered several misperceptions regarding barrier methods that may be associated with their limited use. Participants were recruited from the Jefferson County Health Department STI Clinic and through word of mouth in Birmingham, Alabama, for focus group discussions exploring perceptions of STI risk and safer sex. Seven focus groups with 29 participants were conducted (age range: 19-43 years). Several misperceptions regarding barrier methods were identified, notably the conflation of dental dams and female condoms. Descriptions of the use of barrier methods were qualified with phrases suggesting their hypothetical, rather than actual, use. Additional evidence that barrier methods are not actually used came from beliefs that dental dams and female condoms are available in major grocery stores or department store chains. Those providing sexual health services to WSW should be cautious in assuming that WSW have accurate information regarding barrier methods for safer sex. Sexual health services provided to WSW should include an accurate description of what barrier methods are, how to distinguish them from barrier methods more commonly used during heterosexual sex (female and male condoms), and how to use them correctly. Future studies are needed to address how effectively these measures reduce transmission of STIs among WSW.

  13. Infrared Pyrometry From Room Temperature To 700 Degrees C

    NASA Technical Reports Server (NTRS)

    Wheeler, Donald R.; Jones, William R., Jr.; Pepper, Stephen V.

    1989-01-01

    Consistent readings obtained when specimens prepared appropriately. New method largely overcomes limitations. Transmission of infrared increased by replacing customary metal-coated glass viewing port with quartz viewing port covered with tantalum mesh. Commercially available infrared microscope with focal distance of 53 cm focuses on spot only 1 mm wide on specimen. Microscope operated as radiometer. Output of detector varies by several orders of magnitude, processed by logarithmic amplifier before reading.

  14. Pattern recognition neural-net by spatial mapping of biology visual field

    NASA Astrophysics Data System (ADS)

    Lin, Xin; Mori, Masahiko

    2000-05-01

    The method of spatial mapping found in the biological visual field is applied to artificial neural networks for pattern recognition. Through a coordinate transform known as complex-logarithm mapping, followed by a Fourier transform, the input images are converted into scale-, rotation- and shift-invariant patterns, and then fed into a multilayer neural network for learning and recognition. The results of a computer simulation and an optical experimental system are described.
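
    A minimal sketch of this transform chain is given below: the Fourier-transform magnitude removes sensitivity to shifts, and resampling it on a complex-logarithmic (log-polar) grid turns scale changes and rotations into translations of the map. The grid sizes and the random test image are arbitrary assumptions, not the authors' implementation.

        # Minimal sketch: Fourier-transform magnitude (shift invariant) followed
        # by a log-polar (complex-logarithm) resampling, so that scaling and
        # rotation of the input become translations of the resulting map.
        import numpy as np

        def logpolar_fft_signature(img, n_r=64, n_theta=64):
            # Shift invariance: magnitude of the 2-D Fourier transform
            mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
            cy, cx = np.array(mag.shape) / 2.0
            r_max = min(cy, cx)
            # Log-spaced radii and uniform angles define the log-polar grid
            rs = np.exp(np.linspace(0.0, np.log(r_max - 1), n_r))
            thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
            ys = (cy + rs[:, None] * np.sin(thetas)[None, :]).astype(int)
            xs = (cx + rs[:, None] * np.cos(thetas)[None, :]).astype(int)
            return mag[ys, xs]   # scale/rotation appear as shifts in this map

        sig = logpolar_fft_signature(np.random.rand(128, 128))
        print(sig.shape)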

  15. Static versus Dynamic Disposition: The Role of GeoGebra in Representing Polynomial-Rational Inequalities and Exponential-Logarithmic Functions

    ERIC Educational Resources Information Center

    Caglayan, Günhan

    2014-01-01

    This study investigates prospective secondary mathematics teachers' visual representations of polynomial and rational inequalities, and graphs of exponential and logarithmic functions with GeoGebra Dynamic Software. Five prospective teachers in a university in the United States participated in this research study, which was situated within a…

  16. An Investigation of Students' Errors in Logarithms

    ERIC Educational Resources Information Center

    Ganesan, Raman; Dindyal, Jaguthsing

    2014-01-01

    In this study we set out to investigate the errors made by students in logarithms. A test with 16 items was administered to 89 Secondary three students (Year 9). The errors made by the students were categorized using four categories from a framework by Movshovitz-Hadar, Zaslavsky, and Inbar (1987). It was found that students in the top third were…

  17. Can One Take the Logarithm or the Sine of a Dimensioned Quantity or a Unit? Dimensional Analysis Involving Transcendental Functions

    ERIC Educational Resources Information Center

    Matta, Cherif F.; Massa, Lou; Gubskaya, Anna V.; Knoll, Eva

    2011-01-01

    The fate of dimensions of dimensioned quantities that are inserted into the argument of transcendental functions such as logarithms, exponentiation, trigonometric, and hyperbolic functions is discussed. Emphasis is placed on common misconceptions that are not often systematically examined in undergraduate courses of physical sciences. The argument…

  18. Decay of Correlations, Quantitative Recurrence and Logarithm Law for Contracting Lorenz Attractors

    NASA Astrophysics Data System (ADS)

    Galatolo, Stefano; Nisoli, Isaia; Pacifico, Maria Jose

    2018-03-01

    In this paper we prove that a class of skew product maps with a non-uniformly hyperbolic base has exponential decay of correlations. We apply this to obtain a logarithm law for the hitting time associated with a contracting Lorenz attractor at all points having a well-defined local dimension, and a quantitative recurrence estimate.

  19. Dead-time compensation for a logarithmic display rate meter

    DOEpatents

    Larson, John A.; Krueger, Frederick P.

    1988-09-20

    An improved circuit is provided for application to a radiation survey meter that uses a detector that is subject to dead time. The circuit compensates for dead time over a wide range of count rates by producing a dead-time pulse for each detected event, a live-time pulse that spans the interval between dead-time pulses, and circuits that average the value of these pulses over time. The logarithm of each of these values is obtained and the logarithms are subtracted to provide a signal that is proportional to a count rate that is corrected for the effects of dead time. The circuit produces a meter indication and is also capable of producing an audible indication of detected events.
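
    One way to read the scheme described above is sketched numerically below: the averaged dead-time and live-time signals are proportional to the dead and live fractions of time, and subtracting their logarithms (up to the constant logarithm of the dead time) gives the logarithm of the corrected count rate. The dead time and rates are assumed values, and the sketch is an interpretation of the patent abstract, not the patented circuit.

        # Numerical sketch of the difference-of-logarithms idea: averaging a
        # dead-time signal and a live-time signal and subtracting their
        # logarithms yields (up to a constant) the log of the corrected rate.
        import numpy as np

        tau = 50e-6                 # dead time per detected event, s (assumed)
        true_rate = 5000.0          # true event rate, counts/s (assumed)

        measured_rate = true_rate / (1.0 + true_rate * tau)   # non-paralyzable model
        dead_fraction = measured_rate * tau                   # averaged dead-time signal
        live_fraction = 1.0 - dead_fraction                   # averaged live-time signal

        # log(dead) - log(live) - log(tau) = log(corrected rate)
        log_corrected = np.log(dead_fraction) - np.log(live_fraction) - np.log(tau)
        print(np.exp(log_corrected))   # ~= true_rate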

  20. Dead-time compensation for a logarithmic display rate meter

    DOEpatents

    Larson, J.A.; Krueger, F.P.

    1987-10-05

    An improved circuit is provided for application to a radiation survey meter that uses a detector that is subject to dead time. The circuit compensates for dead time over a wide range of count rates by producing a dead-time pulse for each detected event, a live-time pulse that spans the interval between dead-time pulses, and circuits that average the value of these pulses over time. The logarithm of each of these values is obtained and the logarithms are subtracted to provide a signal that is proportional to a count rate that is corrected for the effects of dead time. The circuit produces a meter indication and is also capable of producing an audible indication of detected events. 5 figs.

  1. Non-additive non-interacting kinetic energy of rare gas dimers

    NASA Astrophysics Data System (ADS)

    Jiang, Kaili; Nafziger, Jonathan; Wasserman, Adam

    2018-03-01

    Approximations of the non-additive non-interacting kinetic energy (NAKE) as an explicit functional of the density are the basis of several electronic structure methods that provide improved computational efficiency over standard Kohn-Sham calculations. However, within most fragment-based formalisms, there is no unique exact NAKE, making it difficult to develop general, robust approximations for it. When adjustments are made to the embedding formalisms to guarantee uniqueness, approximate functionals may be more meaningfully compared to the exact unique NAKE. We use numerically accurate inversions to study the exact NAKE of several rare-gas dimers within partition density functional theory, a method that provides the uniqueness for the exact NAKE. We find that the NAKE decreases nearly exponentially with atomic separation for the rare-gas dimers. We compute the logarithmic derivative of the NAKE with respect to the bond length for our numerically accurate inversions as well as for several approximate NAKE functionals. We show that standard approximate NAKE functionals do not reproduce the correct behavior for this logarithmic derivative and propose two new NAKE functionals that do. The first of these is based on a re-parametrization of a conjoint Perdew-Burke-Ernzerhof (PBE) functional. The second is a simple, physically motivated non-decomposable NAKE functional that matches the asymptotic decay constant without fitting.
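
    To make the quantity concrete, the sketch below computes the logarithmic derivative d ln(Tnad)/dR by finite differences for a synthetic, nearly exponentially decaying NAKE curve; the bond lengths, prefactor and decay constant are invented placeholders, not results from the paper.

        # Minimal sketch: if the non-additive kinetic energy decays nearly
        # exponentially with bond length R, Tnad(R) ~ A*exp(-k*R), then the
        # logarithmic derivative d ln(Tnad)/dR is approximately the constant -k.
        import numpy as np

        R = np.array([5.0, 5.5, 6.0, 6.5, 7.0])            # bond lengths (assumed units)
        Tnad = 1.0e-3 * np.exp(-1.8 * (R - 5.0))           # synthetic NAKE values

        log_deriv = np.gradient(np.log(Tnad), R)           # d ln(Tnad) / dR
        print(log_deriv)   # roughly constant, ~ -1.8 for this synthetic decay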

  2. Measuring aging rates of mice subjected to caloric restriction and genetic disruption of growth hormone signaling

    PubMed Central

    Koopman, Jacob J.E.; van Heemst, Diana; van Bodegom, David; Bonkowski, Michael S.; Sun, Liou Y.; Bartke, Andrzej

    2016-01-01

    Caloric restriction and genetic disruption of growth hormone signaling have been shown to counteract aging in mice. The effects of these interventions on aging are examined through age-dependent survival or through the increase in age-dependent mortality rates on a logarithmic scale fitted to the Gompertz model. However, these methods have limitations that impede a fully comprehensive disclosure of these effects. Here we examine the effects of these interventions on murine aging through the increase in age-dependent mortality rates on a linear scale without fitting them to a model like the Gompertz model. Whereas these interventions negligibly and non-consistently affected the aging rates when examined through the age-dependent mortality rates on a logarithmic scale, they caused the aging rates to increase at higher ages and to higher levels when examined through the age-dependent mortality rates on a linear scale. These results add to the debate whether these interventions postpone or slow aging and to the understanding of the mechanisms by which they affect aging. Since different methods yield different results, it is worthwhile to compare their results in future research to obtain further insights into the effects of dietary, genetic, and other interventions on the aging of mice and other species. PMID:26959761
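
    The comparison described above can be sketched as follows: age-dependent mortality rates are computed from survival counts and then inspected both on a linear scale and on a logarithmic scale (the scale used in a Gompertz fit). The survival table is a synthetic placeholder, not the study's mouse data.

        # Minimal sketch: age-dependent mortality rates from survival counts,
        # examined on a linear scale and on a logarithmic scale.
        import numpy as np

        ages = np.arange(0, 36, 3)                           # age in months (assumed)
        alive = np.array([100, 99, 97, 94, 89, 82, 71, 57,
                          41, 26, 13, 4])                    # survivors (assumed)

        deaths = alive[:-1] - alive[1:]
        interval = np.diff(ages)
        hazard = deaths / (alive[:-1] * interval)            # mortality rate per month

        print("linear-scale rates:", np.round(hazard, 4))
        print("log-scale rates   :", np.round(np.log(hazard), 2))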

  3. Measuring aging rates of mice subjected to caloric restriction and genetic disruption of growth hormone signaling.

    PubMed

    Koopman, Jacob J E; van Heemst, Diana; van Bodegom, David; Bonkowski, Michael S; Sun, Liou Y; Bartke, Andrzej

    2016-03-01

    Caloric restriction and genetic disruption of growth hormone signaling have been shown to counteract aging in mice. The effects of these interventions on aging are examined through age-dependent survival or through the increase in age-dependent mortality rates on a logarithmic scale fitted to the Gompertz model. However, these methods have limitations that impede a fully comprehensive disclosure of these effects. Here we examine the effects of these interventions on murine aging through the increase in age-dependent mortality rates on a linear scale without fitting them to a model like the Gompertz model. Whereas these interventions negligibly and non-consistently affected the aging rates when examined through the age-dependent mortality rates on a logarithmic scale, they caused the aging rates to increase at higher ages and to higher levels when examined through the age-dependent mortality rates on a linear scale. These results add to the debate whether these interventions postpone or slow aging and to the understanding of the mechanisms by which they affect aging. Since different methods yield different results, it is worthwhile to compare their results in future research to obtain further insights into the effects of dietary, genetic, and other interventions on the aging of mice and other species.

  4. Transformation of arbitrary distributions to the normal distribution with application to EEG test-retest reliability.

    PubMed

    van Albada, S J; Robinson, P A

    2007-04-15

    Many variables in the social, physical, and biosciences, including neuroscience, are non-normally distributed. To improve the statistical properties of such data, or to allow parametric testing, logarithmic or logit transformations are often used. Box-Cox transformations or ad hoc methods are sometimes used for parameters for which no transformation is known to approximate normality. However, these methods do not always give good agreement with the Gaussian. A transformation is discussed that maps probability distributions as closely as possible to the normal distribution, with exact agreement for continuous distributions. To illustrate, the transformation is applied to a theoretical distribution, and to quantitative electroencephalographic (qEEG) measures from repeat recordings of 32 subjects which are highly non-normal. Agreement with the Gaussian was better than using logarithmic, logit, or Box-Cox transformations. Since normal data have previously been shown to have better test-retest reliability than non-normal data under fairly general circumstances, the implications of our transformation for the test-retest reliability of parameters were investigated. Reliability was shown to improve with the transformation, where the improvement was comparable to that using Box-Cox. An advantage of the general transformation is that it does not require laborious optimization over a range of parameters or a case-specific choice of form.
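
    A generic rank-based version of such a transformation (empirical CDF followed by the inverse normal CDF) is sketched below; it is an illustration in the spirit of the method described above, not the authors' exact estimator, and the lognormal test sample is an arbitrary choice.

        # Minimal sketch: map a skewed sample toward normality by passing ranks
        # through the empirical CDF and then the inverse normal CDF.
        import numpy as np
        from scipy import stats

        x = stats.lognorm.rvs(s=1.0, size=500, random_state=0)   # skewed sample

        ranks = stats.rankdata(x)
        u = ranks / (len(x) + 1.0)          # empirical CDF values in (0, 1)
        z = stats.norm.ppf(u)               # mapped to standard normal quantiles

        print(stats.skew(x), stats.skew(z)) # skewness shrinks toward 0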

  5. Estimating leaf nitrogen accumulation in maize based on canopy hyperspectrum data

    NASA Astrophysics Data System (ADS)

    Gu, Xiaohe; Wang, Lizhi; Song, Xiaoyu; Xu, Xingang

    2016-10-01

    Leaf nitrogen accumulation (LNA) has an important influence on the formation of crop yield and grain protein. Monitoring the leaf nitrogen accumulation of the crop canopy quantitatively and in real time is helpful for assessing crop nutrition status, diagnosing group growth and managing fertilization precisely. The study aimed to develop a universal method to monitor the LNA of maize from hyperspectral data, which could provide support for mapping the LNA of maize at the county scale. The correlations between LNA and hyperspectral reflectance and its mathematical transformations were analyzed. The feature bands and their transformations were then screened to develop the optimal model for estimating LNA based on multiple linear regression. In-situ samples were used to evaluate the accuracy of the estimation model. Results showed that the model based on the first-order differential of the logarithmic transformation (lgP') of reflectance reached the highest correlation coefficient (0.889) with the lowest RMSE (0.646 g·m-2) and was considered the optimal model for estimating LNA in maize. The determination coefficient (R2) for the test samples was 0.831, with an RMSE of 1.901 g·m-2. This indicates that the first-order differential of the logarithmic transformation of the hyperspectral reflectance responds well to the LNA of maize. Based on this transformation, the optimal LNA estimation model achieves good accuracy with high stability.
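
    The feature construction described above can be sketched as follows: take the first differential of the log-transformed reflectance (lgP') and regress LNA on a few selected bands by multiple linear regression. All spectra, band positions and LNA values below are synthetic placeholders, not the study's data.

        # Minimal sketch: first differential of log-transformed reflectance (lgP')
        # followed by a multiple linear regression on a few selected bands.
        import numpy as np

        rng = np.random.default_rng(0)
        wavelengths = np.arange(400, 1000, 10)                    # nm (assumed)
        spectra = 0.2 + 0.6 * rng.random((30, wavelengths.size))  # 30 canopy spectra
        lna = rng.uniform(1.0, 8.0, 30)                           # LNA, g/m^2 (assumed)

        lgP = np.log10(spectra)
        lgP_prime = np.gradient(lgP, wavelengths, axis=1)         # first differential

        bands = [5, 20, 40]                                       # selected feature bands (assumed)
        X = np.column_stack([lgP_prime[:, b] for b in bands] + [np.ones(30)])
        coef, *_ = np.linalg.lstsq(X, lna, rcond=None)            # multiple linear regression
        print(coef)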

  6. Mapping soil total nitrogen of cultivated land at county scale by using hyperspectral image

    NASA Astrophysics Data System (ADS)

    Gu, Xiaohe; Zhang, Li Yan; Shu, Meiyan; Yang, Guijun

    2018-02-01

    Monitoring the total nitrogen content (TNC) in the soil of cultivated land quantitatively and understanding its spatial distribution are helpful for crop growing, soil fertility adjustment and the sustainable development of agriculture. The study aimed to develop a universal method to map the total nitrogen content in the soil of cultivated land from HSI imagery at the county scale. Several mathematical transformations were used to improve the expressive ability of the HSI image. The correlations between soil TNC and reflectance and its mathematical transformations were analyzed. The susceptible bands and their transformations were then screened to develop an optimized model for mapping soil TNC in Anping County based on multiple linear regression. Results showed that the 14th, 16th, 19th, 37th and 60th bands, under different mathematical transformations, were screened as susceptible bands. Differential transformation helped reduce the interference of noise with the diagnostic ability of the target spectrum. The determination coefficient of the first-order differential of the logarithmic transformation was the largest (0.505), while its RMSE was the lowest. The study confirmed the first-order differential of the logarithmic transformation as the optimal inversion model for soil TNC, which was used to map the soil TNC of cultivated land in the study area.

  7. Barrier Methods of Birth Control: Spermicide, Condom, Sponge, Diaphragm, and Cervical Cap

    MedlinePlus

    Barrier Methods of Birth Control: Spermicide, Condom, Sponge, Diaphragm, and Cervical Cap (FAQ022, March 2018; PDF format available).

  8. Method of installing subsurface barrier

    DOEpatents

    Nickelson, Reva A.; Richardson, John G.; Kostelnik, Kevin M.; Sloan, Paul A.

    2007-10-09

    Systems, components, and methods relating to subterranean containment barriers. Laterally adjacent tubular casings having male interlock structures and multiple female interlock structures defining recesses for receiving a male interlock structure are used to create subterranean barriers for containing and treating buried waste and its effluents. The multiple female interlock structures enable the barriers to be varied around subsurface objects and to form barrier sidewalls. The barrier may be used for treating and monitoring a zone of interest.

  9. A novel method for calculating the energy barriers for carbon diffusion in ferrite under heterogeneous stress

    NASA Astrophysics Data System (ADS)

    Tchitchekova, Deyana S.; Morthomas, Julien; Ribeiro, Fabienne; Ducher, Roland; Perez, Michel

    2014-07-01

    A novel method for accurate and efficient evaluation of the change in energy barriers for carbon diffusion in ferrite under heterogeneous stress is introduced. This method, called Linear Combination of Stress States, is based on the knowledge of the effects of simple stresses (uniaxial or shear) on these diffusion barriers. Then, it is assumed that the change in energy barriers under a complex stress can be expressed as a linear combination of these already known simple stress effects. The modifications of energy barriers by either uniaxial traction/compression and shear stress are determined by means of atomistic simulations with the Climbing Image-Nudge Elastic Band method and are stored as a set of functions. The results of this method are compared to the predictions of anisotropic elasticity theory. It is shown that, linear anisotropic elasticity fails to predict the correct energy barrier variation with stress (especially with shear stress) whereas the proposed method provides correct energy barrier variation for stresses up to ˜3 GPa. This study provides a basis for the development of multiscale models of diffusion under non-uniform stress.
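
    A schematic sketch of the Linear Combination of Stress States idea is given below: the barrier change for each simple loading (here uniaxial xx and shear xy) is stored as a tabulated function of stress magnitude, and the effect of a combined stress is estimated as the sum of the stored simple-stress effects. The tabulated values are invented placeholders, not the CI-NEB results of the paper.

        # Schematic sketch: store simple-stress barrier changes as functions of
        # stress magnitude and combine them linearly for a composite loading.
        import numpy as np

        # Precomputed simple-stress effects (placeholders): barrier change (eV)
        # tabulated against stress magnitude (GPa) for uniaxial xx and shear xy.
        sigma_grid = np.linspace(-3.0, 3.0, 13)
        dE_xx = -0.010 * sigma_grid            # would come from CI-NEB calculations
        dE_xy = 0.002 * sigma_grid ** 2        # shear effects need not be linear

        def barrier_change(sig_xx, sig_xy):
            """Linear combination of the stored simple-stress effects (eV)."""
            return (np.interp(sig_xx, sigma_grid, dE_xx)
                    + np.interp(sig_xy, sigma_grid, dE_xy))

        # Example: 1 GPa tension along x combined with 0.5 GPa xy shear
        print(barrier_change(1.0, 0.5))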

  10. A novel method for calculating the energy barriers for carbon diffusion in ferrite under heterogeneous stress.

    PubMed

    Tchitchekova, Deyana S; Morthomas, Julien; Ribeiro, Fabienne; Ducher, Roland; Perez, Michel

    2014-07-21

    A novel method for accurate and efficient evaluation of the change in energy barriers for carbon diffusion in ferrite under heterogeneous stress is introduced. This method, called Linear Combination of Stress States, is based on the knowledge of the effects of simple stresses (uniaxial or shear) on these diffusion barriers. Then, it is assumed that the change in energy barriers under a complex stress can be expressed as a linear combination of these already known simple stress effects. The modifications of energy barriers by either uniaxial traction/compression and shear stress are determined by means of atomistic simulations with the Climbing Image-Nudge Elastic Band method and are stored as a set of functions. The results of this method are compared to the predictions of anisotropic elasticity theory. It is shown that, linear anisotropic elasticity fails to predict the correct energy barrier variation with stress (especially with shear stress) whereas the proposed method provides correct energy barrier variation for stresses up to ∼3 GPa. This study provides a basis for the development of multiscale models of diffusion under non-uniform stress.

  11. Urban sound energy reduction by means of sound barriers

    NASA Astrophysics Data System (ADS)

    Iordache, Vlad; Ionita, Mihai Vlad

    2018-02-01

    In the urban environment, various heating, ventilation and air conditioning appliances designed to maintain indoor comfort become vectors of urban acoustic pollution due to the sound energy they produce. Acoustic barriers are the recommended method for reducing sound energy in the urban environment. The current method for sizing these acoustic barriers is laborious and is not practical for arbitrary 3D locations of the noisy equipment and the reception point. In this study we develop, based on the same method, a new simplified tool for acoustic barrier sizing that maintains the precision of the classical method. Abacuses (design charts) for acoustic barrier sizing are built that can be used for different 3D locations of the source and reception points, for several frequencies and several acoustic barrier heights. The case study presented in the article confirms the rapidity and ease of use of these abacuses in the design of acoustic barriers.

  12. Environmental barrier material for organic light emitting device and method of making

    DOEpatents

    Graff, Gordon L [West Richland, WA; Gross, Mark E [Pasco, WA; Affinito, John D [Kennewick, WA; Shi, Ming-Kun [Richland, WA; Hall, Michael [West Richland, WA; Mast, Eric [Richland, WA

    2003-02-18

    An encapsulated organic light emitting device. The device includes a first barrier stack comprising at least one first barrier layer and at least one first polymer layer. There is an organic light emitting layer stack adjacent to the first barrier stack. A second barrier stack is adjacent to the organic light emitting layer stack. The second barrier stack has at least one second barrier layer and at least one second polymer layer. A method of making the encapsulated organic light emitting device is also provided.

  13. Subterranean barriers, methods, and apparatuses for forming, inspecting, selectively heating, and repairing same

    DOEpatents

    Nickelson, Reva A.; Sloan, Paul A.; Richardson, John G.; Walsh, Stephanie; Kostelnik, Kevin M.

    2009-04-07

    A subterranean barrier and method for forming same are disclosed, the barrier including a plurality of casing strings wherein at least one casing string of the plurality of casing strings may be affixed to at least another adjacent casing string of the plurality of casing strings through at least one weld, at least one adhesive joint, or both. A method and system for nondestructively inspecting a subterranean barrier is disclosed. For instance, a radiographic signal may be emitted from within a casing string toward an adjacent casing string and the radiographic signal may be detected from within the adjacent casing string. A method of repairing a barrier including removing at least a portion of a casing string and welding a repair element within the casing string is disclosed. A method of selectively heating at least one casing string forming at least a portion of a subterranean barrier is disclosed.

  14. Monotonicity and Logarithmic Concavity of Two Functions Involving Exponential Function

    ERIC Educational Resources Information Center

    Liu, Ai-Qi; Li, Guo-Fu; Guo, Bai-Ni; Qi, Feng

    2008-01-01

    The function 1/x^2 − e^(−x)/(1 − e^(−x))^2 for x > 0 is proved to be strictly decreasing. As an application of this monotonicity, the logarithmic concavity of the function t/(e^(at) − e^((a−1)t)) for a…

  15. Portable geiger counter with logarithmic scale (in Portuguese)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oliveira, L.A.C.; de Andrade Chagas, E.; de Bittencourt, F.A.

    1971-06-01

    From the 23rd annual meeting of the Brazilian Society for the Advancement of Science; Curitiba, Brazil (4 Jul 1971). A portable scaler with a logarithmic scale covering 3 decades, 1 to 10, 10 to 10^2, and 10^2 to 10^3 cps, is presented. Electrical energy is supplied at 6 volts by 4 D-type batteries. (INIS)

  16. Regional Frequency Computation Users Manual.

    DTIC Science & Technology

    1972-07-01

    Increment of flow used to prevent infinite logarithms for events with zero flow; X = mean logarithm of flow events; N = total years of record; S = unbiased…

  17. A new type of density-management diagram for slash pine plantations

    Treesearch

    Curtis L. VanderSchaaf

    2006-01-01

    Many Density-Management Diagrams (DMD) have been developed for conifer species throughout the world based on stand density index (SDI). The diagrams often plot the logarithm of average tree size (volume, weight, or quadratic mean diameter) over the logarithm of trees per unit area. A new type of DMD is presented for slash pine (Pinus elliottii var elliottii)...

  18. MUTATIONAL AND TRANSCRIPTIONAL RESPONSES OF STATIONARY- AND LOGARITHMIC-PHASE SALMONELLA TO MX: CORRELATION OF MUTATIONAL RESPONSE TO CHANGES IN GENE EXPRESSION

    EPA Science Inventory

    We measured the mutational and transcriptional response of stationary-phase and logarithmic-phase S. typhimurium TA100 to 3 concentrations of the drinking water mutagen 3-chloro-4-(dichloromethyl)-5-hydroxy-2(5H)-furanone (MX). The mutagenicity of MX in strain TA100 was evaluated...

  19. A numerical solution for two-dimensional Fredholm integral equations of the second kind with kernels of the logarithmic potential form

    NASA Technical Reports Server (NTRS)

    Gabrielsen, R. E.; Uenal, A.

    1981-01-01

    Two-dimensional Fredholm integral equations with logarithmic potential kernels are numerically solved. The explicit convergence of these solutions to the true solutions is demonstrated. The results are based on a previous work in which numerical solutions were obtained for Fredholm integral equations of the second kind with continuous kernels.

  20. Uplink Downlink Rate Balancing and Throughput Scaling in FDD Massive MIMO Systems

    NASA Astrophysics Data System (ADS)

    Bergel, Itsik; Perets, Yona; Shamai, Shlomo

    2016-05-01

    In this work we extend the concept of uplink-downlink rate balancing to frequency division duplex (FDD) massive MIMO systems. We consider a base station with a large number of antennas serving many single-antenna users. We first show that any unused capacity in the uplink can be traded off for higher throughput in the downlink in a system that uses either dirty paper (DP) coding or linear zero-forcing (ZF) precoding. We then study the scaling of the system throughput with the number of antennas in the cases of linear beamforming (BF) precoding, ZF precoding, and DP coding. We show that the downlink throughput is proportional to the logarithm of the number of antennas. While this logarithmic scaling is lower than the linear scaling of the rate in the uplink, it can still bring significant throughput gains. For example, we demonstrate through analysis and simulation that increasing the number of antennas from 4 to 128 will increase the throughput by more than a factor of 5. We also show that a logarithmic scaling of downlink throughput as a function of the number of receive antennas can be achieved even when the number of transmit antennas only increases logarithmically with the number of receive antennas.

  1. Logarithmic conformal field theory: beyond an introduction

    NASA Astrophysics Data System (ADS)

    Creutzig, Thomas; Ridout, David

    2013-12-01

    This article aims to review a selection of central topics and examples in logarithmic conformal field theory. It begins with the remarkable observation of Cardy that the horizontal crossing probability of critical percolation may be computed analytically within the formalism of boundary conformal field theory. Cardy’s derivation relies on certain implicit assumptions which are shown to lead inexorably to indecomposable modules and logarithmic singularities in correlators. For this, a short introduction to the fusion algorithm of Nahm, Gaberdiel and Kausch is provided. While the percolation logarithmic conformal field theory is still not completely understood, there are several examples for which the formalism familiar from rational conformal field theory, including bulk partition functions, correlation functions, modular transformations, fusion rules and the Verlinde formula, has been successfully generalized. This is illustrated for three examples: the singlet model \\mathfrak {M} (1,2), related to the triplet model \\mathfrak {W} (1,2), symplectic fermions and the fermionic bc ghost system; the fractional level Wess-Zumino-Witten model based on \\widehat{\\mathfrak {sl}} \\left( 2 \\right) at k=-\\frac{1}{2}, related to the bosonic βγ ghost system; and the Wess-Zumino-Witten model for the Lie supergroup \\mathsf {GL} \\left( 1 {\\mid} 1 \\right), related to \\mathsf {SL} \\left( 2 {\\mid} 1 \\right) at k=-\\frac{1}{2} and 1, the Bershadsky-Polyakov algebra W_3^{(2)} and the Feigin-Semikhatov algebras W_n^{(2)}. These examples have been chosen because they represent the most accessible, and most useful, members of the three best-understood families of logarithmic conformal field theories. The logarithmic minimal models \\mathfrak {W} (q,p), the fractional level Wess-Zumino-Witten models, and the Wess-Zumino-Witten models on Lie supergroups (excluding \\mathsf {OSP} \\left( 1 {\\mid} 2n \\right)). In this review, the emphasis lies on the representation theory of the underlying chiral algebra and the modular data pertaining to the characters of the representations. Each of the archetypal logarithmic conformal field theories is studied here by first determining its irreducible spectrum, which turns out to be continuous, as well as a selection of natural reducible, but indecomposable, modules. This is followed by a detailed description of how to obtain character formulae for each irreducible, a derivation of the action of the modular group on the characters, and an application of the Verlinde formula to compute the Grothendieck fusion rules. In each case, the (genuine) fusion rules are known, so comparisons can be made and favourable conclusions drawn. In addition, each example admits an infinite set of simple currents, hence extended symmetry algebras may be constructed and a series of bulk modular invariants computed. The spectrum of such an extended theory is typically discrete and this is how the triplet model \\mathfrak {W} (1,2) arises, for example. Moreover, simple current technology admits a derivation of the extended algebra fusion rules from those of its continuous parent theory. Finally, each example is concluded by a brief description of the computation of some bulk correlators, a discussion of the structure of the bulk state space, and remarks concerning more advanced developments and generalizations. 
The final part gives a very short account of the theory of staggered modules, the (simplest class of) representations that are responsible for the logarithmic singularities that distinguish logarithmic theories from their rational cousins. These modules are discussed in a generality suitable to encompass all the examples met in this review and some of the very basic structure theory is proven. Then, the important quantities known as logarithmic couplings are reviewed for Virasoro staggered modules and their role as fundamentally important parameters, akin to the three-point constants of rational conformal field theory, is discussed. An appendix is also provided in order to introduce some of the necessary, but perhaps unfamiliar, language of homological algebra.

  2. Evaluation of data transformations used with the square root and schoolfield models for predicting bacterial growth rate.

    PubMed Central

    Alber, S A; Schaffner, D W

    1992-01-01

    A comparison was made between mathematical variations of the square root and Schoolfield models for predicting growth rate as a function of temperature. The statistical consequences of the square root and natural logarithm transformations of growth rate used in several variations of the Schoolfield and square root models were examined. Growth rate variances of Yersinia enterocolitica in brain heart infusion broth increased as a function of temperature. The ability of the two data transformations to correct for the heterogeneity of variance was evaluated. A natural logarithm transformation of growth rate was more effective than a square root transformation at correcting for the heterogeneity of variance. The square root model was more accurate than the Schoolfield model when both models used the natural logarithm transformation. PMID:1444367
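
    The variance-stabilization question examined above can be sketched with synthetic data: growth rates whose spread increases with temperature are compared before and after square root and natural logarithm transformations. The group means, spreads and sample sizes are arbitrary assumptions, not the study's measurements.

        # Minimal sketch: compare group variances of heteroscedastic "growth rates"
        # before and after square root and natural logarithm transformations.
        import numpy as np

        rng = np.random.default_rng(1)
        temps = [5, 15, 25, 35]
        # Mean growth rate and noise both increase with temperature (assumed)
        groups = [rng.normal(loc=0.1 * t, scale=0.02 * t, size=50) for t in temps]
        groups = [np.clip(g, 1e-3, None) for g in groups]      # keep rates positive

        for name, f in [("raw", lambda g: g),
                        ("sqrt", np.sqrt),
                        ("ln", np.log)]:
            print(name, [round(np.var(f(g)), 4) for g in groups])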

  3. A law of the iterated logarithm for Grenander’s estimator

    PubMed Central

    Dümbgen, Lutz; Wellner, Jon A.; Wolff, Malcolm

    2016-01-01

    In this note we prove the following law of the iterated logarithm for the Grenander estimator of a monotone decreasing density: If f(t0) > 0, f′(t0) < 0, and f′ is continuous in a neighborhood of t0, then lim sup_{n→∞} (n / (2 log log n))^(1/3) (f̂_n(t0) − f(t0)) = |f(t0) f′(t0)/2|^(1/3) · 2M almost surely, where M ≡ sup_{g∈G} T_g = (3/4)^(1/3) and T_g ≡ argmax_u {g(u) − u^2}; here G is the two-sided Strassen limit set on R. The proof relies on laws of the iterated logarithm for local empirical processes, Groeneboom's switching relation, and properties of Strassen's limit set analogous to distributional properties of Brownian motion. PMID:28042197

  4. Fechner's law: where does the log transform come from?

    PubMed

    Laming, Donald

    2010-01-01

    This paper looks at Fechner's law in the light of 150 years of subsequent study. In combination with the normal, equal variance, signal-detection model, Fechner's law provides a numerically accurate account of discriminations between two separate stimuli, essentially because the logarithmic transform delivers a model for Weber's law. But it cannot be taken to be a measure of internal sensation because an equally accurate account is provided by a χ² model in which stimuli are scaled by their physical magnitude. The logarithmic transform of Fechner's law arises because, for the number of degrees of freedom typically required in the χ² model, the logarithm of a χ² variable is, to a good approximation, normal. This argument is set within a general theory of sensory discrimination.

  5. Elastic scattering of virtual photons via a quark loop in the double-logarithmic approximation

    NASA Astrophysics Data System (ADS)

    Ermolaev, B. I.; Ivanov, D. Yu.; Troyan, S. I.

    2018-04-01

    We calculate the amplitude of elastic photon-photon scattering via a single quark loop in the double-logarithmic approximation, presuming all external photons to be off-shell and unpolarized. At the same time we account for the running coupling effects. We consider this process in the forward kinematics at arbitrary relations between t and the external photon virtualities. We obtain explicit expressions for the photon-photon scattering amplitudes in all double-logarithmic kinematic regions. Then we calculate the small-x asymptotics of the obtained amplitudes and compare them with the parent amplitudes, thereby fixing the applicability regions of the asymptotics, i.e., fixing the applicability region for the nonvacuum Reggeons. We find that these Reggeons should be used at x < 10^-8 only.

  6. Study on Hyperspectral Estimation Model of Total Nitrogen Content in Soil of Shaanxi Province

    NASA Astrophysics Data System (ADS)

    Liu, Jinbao; Dong, Zhenyu; Chen, Xi

    2018-01-01

    Hyperspectral remote sensing technology has been widely used in soil nutrient prediction. The soil studied here is a representative soil type of Shaanxi Province. In this study, the soil total nitrogen content of Shaanxi soil was the research target, and the reflectance spectra of soil samples were measured using the ASD method. After pre-treatment of the spectra, the first-order differential, second-order differential and logarithmic transformations of reflectance were computed, and hyperspectral estimation models were established using the least squares regression and principal component regression methods. The results show that these transformations significantly improve the correlation between the reflectance spectrum and the total nitrogen content of the soil. The correlation between the original reflectance and soil total nitrogen content was examined over the range 350-2500 nm. The correlation coefficient between soil total nitrogen content and the first derivative of reflectance exceeds 0.5 at 142 nm, 1963 nm, 2204 nm and 2307 nm, while the second derivative shows a significant positive correlation at 1114 nm, 1470 nm, 1967 nm, 2372 nm and 2402 nm. Correlation analysis after a reciprocal logarithmic transformation of the reflectance found that the effect is not obvious. For the best model, Rc2 = 0.7102 and RMSEC = 0.0788, while Rv2 = 0.8480 and RMSEP = 0.0663, which allows rapid prediction of the total nitrogen content in the region. The results show that the principal component regression model is the best.

  7. Expectation values of twist fields and universal entanglement saturation of the free massive boson

    NASA Astrophysics Data System (ADS)

    Blondeau-Fournier, Olivier; Doyon, Benjamin

    2017-07-01

    The evaluation of vacuum expectation values (VEVs) in massive integrable quantum field theory (QFT) is a nontrivial renormalization-group ‘connection problem’—relating large and short distance asymptotics—and is in general unsolved. This is particularly relevant in the context of entanglement entropy, where VEVs of branch-point twist fields give universal saturation predictions. We propose a new method to compute VEVs of twist fields associated to continuous symmetries in QFT. The method is based on a differential equation in the continuous symmetry parameter, and gives VEVs as infinite form-factor series which truncate at two-particle level in free QFT. We verify the method by studying U(1) twist fields in free models, which are simply related to the branch-point twist fields. We provide the first exact formulae for the VEVs of such fields in the massive uncompactified free boson model, checking against an independent calculation based on angular quantization. We show that logarithmic terms, overlooked in the original work of Callan and Wilczek (1994 Phys. Lett. B 333 55-61), appear both in the massless and in the massive situations. This implies that, in agreement with numerical form-factor observations by Bianchini and Castro-Alvaredo (2016 Nucl. Phys. B 913 879-911), the standard power-law short-distance behavior is corrected by a logarithmic factor. We discuss how this gives universal formulae for the saturation of entanglement entropy of a single interval in near-critical harmonic chains, including loglog corrections.

  8. Method of recognizing the high-speed railway noise barriers based on the distance image

    NASA Astrophysics Data System (ADS)

    Ma, Le; Shao, Shuangyun; Feng, Qibo; Liu, Bingqian; Kim, Chol Ryong

    2016-10-01

    Damage to or loss of noise barriers is one of the important hidden dangers to the safety of high-speed railways. In order to obtain vibration information on the noise barriers, online detection systems based on laser vision were proposed. The systems capture images of the laser stripe on the noise barriers and export data files containing distance information between the detection system mounted on the train and the noise barriers. The vibration status or damage of the noise barriers can be estimated from this distance information. In this paper, we focus on a method for automatically separating the noise barrier area from the background. The test results showed that the proposed method achieves good efficiency and accuracy.

  9. A uniform technique for flood frequency analysis.

    USGS Publications Warehouse

    Thomas, W.O.

    1985-01-01

    This uniform technique consisted of fitting the logarithms of annual peak discharges to a Pearson Type III distribution using the method of moments. The objective was to adopt a consistent approach for the estimation of floodflow frequencies that could be used in computing average annual flood losses for project evaluation. In addition, a consistent approach was needed for defining equitable flood-hazard zones as part of the National Flood Insurance Program. -from ASCE Publications Information
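
    A minimal sketch of the procedure summarized above (fitting the logarithms of annual peaks to a Pearson Type III distribution by the method of moments) follows; the annual peak discharges are invented values, not a gaged record, and the quantile step uses scipy's pearson3 distribution rather than any agency-specific skew weighting.

        # Minimal sketch: log-Pearson Type III flood frequency by the method of
        # moments (mean, standard deviation, skew of the log discharges).
        import numpy as np
        from scipy import stats

        peaks = np.array([820, 1150, 960, 2300, 1750, 640, 1320,
                          980, 3100, 1440, 770, 1980, 1210, 890, 2650])  # cfs (assumed)

        logq = np.log10(peaks)
        mean, std, skew = logq.mean(), logq.std(ddof=1), stats.skew(logq, bias=False)

        # 100-year flood: quantile with annual exceedance probability 0.01
        q100 = 10 ** stats.pearson3.ppf(0.99, skew, loc=mean, scale=std)
        print(f"estimated 100-year peak: {q100:.0f} cfs")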

  10. An algorithm for the numerical evaluation of the associated Legendre functions that runs in time independent of degree and order

    NASA Astrophysics Data System (ADS)

    Bremer, James

    2018-05-01

    We describe a method for the numerical evaluation of normalized versions of the associated Legendre functions P_ν^(-μ) and Q_ν^(-μ) of degrees 0 ≤ ν ≤ 1,000,000 and orders -ν ≤ μ ≤ ν for arguments in the interval (-1, 1). Our algorithm, which runs in time independent of ν and μ, is based on the fact that while the associated Legendre functions themselves are extremely expensive to represent via polynomial expansions, the logarithms of certain solutions of the differential equation defining them are not. We exploit this by numerically precomputing the logarithms of carefully chosen solutions of the associated Legendre differential equation and representing them via piecewise trivariate Chebyshev expansions. These precomputed expansions, which allow for the rapid evaluation of the associated Legendre functions over the large swath of parameter domain mentioned above, are supplemented with asymptotic and series expansions in order to cover it entirely. The results of numerical experiments demonstrating the efficacy of our approach are presented, and our code for evaluating the associated Legendre functions is publicly available.

  11. Modeling leaching of viruses by the Monte Carlo method.

    PubMed

    Faulkner, Barton R; Lyon, William G; Khan, Faruque A; Chattopadhyay, Sandip

    2003-11-01

    A predictive screening model was developed for fate and transport of viruses in the unsaturated zone by applying the final value theorem of Laplace transformation to previously developed governing equations. A database of input parameters allowed Monte Carlo analysis with the model. The resulting kernel densities of predicted attenuation during percolation indicated very small, but finite probabilities of failure for all homogeneous USDA classified soils to attenuate reovirus 3 by 99.99% in one-half meter of gravity drainage. The logarithm of saturated hydraulic conductivity and water to air-water interface mass transfer coefficient affected virus fate and transport about 3 times more than any other parameter, including the logarithm of inactivation rate of suspended viruses. Model results suggest extreme infiltration events may play a predominant role in leaching of viruses in soils, since such events could impact hydraulic conductivity. The air-water interface also appears to play a predominating role in virus transport and fate. Although predictive modeling may provide insight into actual attenuation of viruses, hydrogeologic sensitivity assessments for the unsaturated zone should include a sampling program.

  12. Logarithmic detrapping response for holes injected into SiO2 and the influence of thermal activation and electric fields

    NASA Astrophysics Data System (ADS)

    Lakshmanna, V.; Vengurlekar, A. S.

    1988-05-01

    Relaxation of trapped holes that are introduced into silicon dioxide from silicon by the avalanche injection method is studied under various conditions of thermal activation and external electric fields. It is found that the flat band voltage recovery in time follows a universal behavior in that the response at high temperatures is a time scaled extension of the response at low temperatures. Similar universality exists in the detrapping response at different external bias fields. The recovery characteristics show a logarithmic time dependence in the time regime studied (up to 6000 s). We find that the recovery is thermally activated with the activation energy varying from 0.5 eV for a field of 2 MV/cm to 1.0 eV for a field of -1 MV/cm. There is little discharge in 3000 s at room temperature for negative fields beyond -4 MV/cm. The results suggest that the recovery is due to tunneling of electrons in the silicon conduction band into the oxide either to compensate or to remove the charge of trapped holes.

  13. Solution of the Fokker-Planck equation with a logarithmic potential and mixed eigenvalue spectrum

    NASA Astrophysics Data System (ADS)

    Guarnieri, F.; Moon, W.; Wettlaufer, J. S.

    2017-09-01

    Motivated by a problem in climate dynamics, we investigate the solution of a Bessel-like process with a negative constant drift, described by a Fokker-Planck equation with a potential V(x) = -[b ln(x) + a x], for b > 0 and a < 0. The problem belongs to a family of Fokker-Planck equations with logarithmic potentials closely related to the Bessel process, which has been extensively studied for its applications in physics, biology, and finance. The Bessel-like process we consider can be solved by seeking solutions through an expansion into a complete set of eigenfunctions. The associated imaginary-time Schrödinger equation exhibits a mix of discrete and continuous eigenvalue spectra, corresponding to the quantum Coulomb potential describing the bound states of the hydrogen atom. We present a technique to evaluate the normalization factor of the continuous spectrum of eigenfunctions that relies solely upon their asymptotic behavior. We demonstrate the technique by solving the Brownian motion problem and the Bessel process, both with a constant negative drift. We conclude with a comparison to other analytical methods and with numerical solutions.

  14. Fast parallel molecular algorithms for DNA-based computation: solving the elliptic curve discrete logarithm problem over GF2.

    PubMed

    Li, Kenli; Zou, Shuting; Xv, Jin

    2008-01-01

    Elliptic curve cryptographic algorithms convert input data to unrecognizable encryption and the unrecognizable data back again into its original decrypted form. The security of this form of encryption hinges on the enormous difficulty that is required to solve the elliptic curve discrete logarithm problem (ECDLP), especially over GF(2^n), n ∈ Z+. This paper describes an effective method to find solutions to the ECDLP by means of a molecular computer. We propose that this research accomplishment would represent a breakthrough for applied biological computation and this paper demonstrates that in principle this is possible. Three DNA-based algorithms: a parallel adder, a parallel multiplier, and a parallel inverse over GF(2^n) are described. The biological operation time of all of these algorithms is polynomial with respect to n. Considering this analysis, cryptography using a public key might be less secure. In this respect, a principal contribution of this paper is to provide enhanced evidence of the potential of molecular computing to tackle such ambitious computations.
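
    As background, and not specific to this paper, the computational task the molecular algorithms target can be stated as:

      \text{Given } P, Q \in E(\mathrm{GF}(2^n)) \text{ with } Q = kP, \text{ find the integer } k, \quad 0 \le k < \operatorname{ord}(P).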

  15. Fast Parallel Molecular Algorithms for DNA-Based Computation: Solving the Elliptic Curve Discrete Logarithm Problem over GF(2^n)

    PubMed Central

    Li, Kenli; Zou, Shuting; Xv, Jin

    2008-01-01

    Elliptic curve cryptographic algorithms convert input data to unrecognizable encryption and the unrecognizable data back again into its original decrypted form. The security of this form of encryption hinges on the enormous difficulty that is required to solve the elliptic curve discrete logarithm problem (ECDLP), especially over GF(2^n), n ∈ Z+. This paper describes an effective method to find solutions to the ECDLP by means of a molecular computer. We propose that this research accomplishment would represent a breakthrough for applied biological computation and this paper demonstrates that in principle this is possible. Three DNA-based algorithms: a parallel adder, a parallel multiplier, and a parallel inverse over GF(2^n) are described. The biological operation time of all of these algorithms is polynomial with respect to n. Considering this analysis, cryptography using a public key might be less secure. In this respect, a principal contribution of this paper is to provide enhanced evidence of the potential of molecular computing to tackle such ambitious computations. PMID:18431451

  16. Predictive Model and Software for Inbreeding-Purging Analysis of Pedigreed Populations

    PubMed Central

    García-Dorado, Aurora; Wang, Jinliang; López-Cortegano, Eugenio

    2016-01-01

    The inbreeding depression of fitness traits can be a major threat to the survival of populations experiencing inbreeding. However, its accurate prediction requires taking into account the genetic purging induced by inbreeding, which can be achieved using a "purged inbreeding coefficient". We have developed a method to compute purged inbreeding at the individual level in pedigreed populations with overlapping generations. Furthermore, we derive the inbreeding depression slope for individual logarithmic fitness, which is larger than that for the logarithm of the population fitness average. In addition, we provide new software, PURGd, based on these theoretical results, which allows pedigree data to be analyzed to detect purging and to estimate the purging coefficient, the parameter necessary to predict the joint consequences of inbreeding and purging. The software also calculates the purged inbreeding coefficient for each individual, as well as standard and ancestral inbreeding. Analysis of simulation data shows that this software produces reasonably accurate estimates of the inbreeding depression rate and of the purging coefficient that are useful for predictive purposes. PMID:27605515

  17. Deposition and persistence of beachcast seabird carcasses

    USGS Publications Warehouse

    van Pelt, Thomas I.; Piatt, John F.

    1995-01-01

    Following a massive wreck of guillemots (Uria aalge) in late winter and spring of 1993, we monitored the deposition and subsequent disappearance of 398 beachcast guillemot carcasses on two beaches in Resurrection Bay, Alaska, during a 100 day period. Deposition of carcasses declined logarithmically with time after the original event. Since fresh carcasses were more likely to be removed between counts than older carcasses, persistence rates increased logarithmically over time. Scavenging appeared to be the primary cause of carcass removal, followed by burial in beach debris and sand. Along-shore transport was negligible. We present an equation which estimates the number of carcasses deposited at time zero from beach surveys conducted some time later, using non-linear persistence rates that are a function of time. We use deposition rates to model the accumulation of beached carcasses, accounting for further deposition subsequent to the original event. Finally, we present a general method for extrapolating from a single count the number of carcasses cumulatively deposited on surveyed beaches, and discuss how our results can be used to assess the magnitude of mass seabird mortality events from beach surveys.

  18. A Bid Price Equation For Timber Sales on the Ouachita and Ozark National Forests

    Treesearch

    Michael M. Huebschmann; Thomas B. Lynch; David K. Lewis; Daniel S. Tilley; James M. Guldin

    2004-01-01

    Data from 150 timber sales on the Ouachita and Ozark National Forests in Arkansas and southeastern Oklahoma were used to develop an equation that relates bid prices to timber sale variables. Variables used to predict the natural logarithm of the real, winning total bid price are the natural logarithms of total sawtimber volume per sale, total pulpwood volume per sale...

  19. Approximating exponential and logarithmic functions using polynomial interpolation

    NASA Astrophysics Data System (ADS)

    Gordon, Sheldon P.; Yang, Yajun

    2017-04-01

    This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is analysed. The results of interpolating polynomials are compared with those of Taylor polynomials.
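
    An illustrative sketch of the comparison discussed above (not the article's own code): it interpolates ln(x) at four equally spaced nodes on [1, 2] and compares the maximum error against a degree-3 Taylor polynomial centred at x = 1.5.

      # Degree-3 interpolating polynomial vs. degree-3 Taylor polynomial for ln(x) on [1, 2]
      import numpy as np

      deg = 3
      nodes = np.linspace(1.0, 2.0, deg + 1)                    # interpolation nodes
      interp = np.polynomial.Polynomial.fit(nodes, np.log(nodes), deg)

      def taylor_ln(x, x0=1.5, deg=3):
          # Taylor expansion of ln(x) about x0: ln(x0) + sum_k (-1)^(k+1) (x - x0)^k / (k x0^k)
          return np.log(x0) + sum((-1) ** (k + 1) * (x - x0) ** k / (k * x0 ** k)
                                  for k in range(1, deg + 1))

      x = np.linspace(1.0, 2.0, 1001)
      print("max error, interpolation:", np.max(np.abs(interp(x) - np.log(x))))
      print("max error, Taylor       :", np.max(np.abs(taylor_ln(x) - np.log(x))))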

  20. Linear and Logarithmic Speed-Accuracy Trade-Offs in Reciprocal Aiming Result from Task-Specific Parameterization of an Invariant Underlying Dynamics

    ERIC Educational Resources Information Center

    Bongers, Raoul M.; Fernandez, Laure; Bootsma, Reinoud J.

    2009-01-01

    The authors examined the origins of linear and logarithmic speed-accuracy trade-offs from a dynamic systems perspective on motor control. In each experiment, participants performed 2 reciprocal aiming tasks: (a) a velocity-constrained task in which movement time was imposed and accuracy had to be maximized, and (b) a distance-constrained task in…

  1. Role of Loop-Clamping Side Chains in Catalysis by Triosephosphate Isomerase.

    PubMed

    Zhai, Xiang; Amyes, Tina L; Richard, John P

    2015-12-09

    The side chains of Y208 and S211 from loop 7 of triosephosphate isomerase (TIM) form hydrogen bonds to backbone amides and carbonyls from loop 6 to stabilize the caged enzyme-substrate complex. The effect of seven mutations [Y208T, Y208S, Y208A, Y208F, S211G, S211A, Y208T/S211G] on the kinetic parameters for TIM catalyzed reactions of the whole substrates dihydroxyacetone phosphate and d-glyceraldehyde 3-phosphate [(k(cat)/K(m))(GAP) and (k(cat)/K(m))DHAP] and of the substrate pieces glycolaldehyde and phosphite dianion (k(cat)/K(HPi)K(GA)) are reported. The linear logarithmic correlation between these kinetic parameters, with slope of 1.04 ± 0.03, shows that most mutations of TIM result in an identical change in the activation barriers for the catalyzed reactions of whole substrate and substrate pieces, so that the transition states for these reactions are stabilized by similar interactions with the protein catalyst. The second linear logarithmic correlation [slope = 0.53 ± 0.16] between k(cat) for isomerization of GAP and K(d)(⧧) for phosphite dianion binding to the transition state for wildtype and many mutant TIM-catalyzed reactions of substrate pieces shows that ca. 50% of the wildtype TIM dianion binding energy, eliminated by these mutations, is expressed at the wildtype Michaelis complex, and ca. 50% is only expressed at the wildtype transition state. Negative deviations from this correlation are observed when the mutation results in a decrease in enzyme reactivity at the catalytic site. The main effect of Y208T, Y208S, and Y208A mutations is to cause a reduction in the total intrinsic dianion binding energy, but the effect of Y208F extends to the catalytic site.

  2. The Ponseti method in Latin America: initial impact and barriers to its diffusion and implementation.

    PubMed

    Boardman, Allison; Jayawardena, Asitha; Oprescu, Florin; Cook, Thomas; Morcuende, Jose A

    2011-01-01

    The Ponseti method for correcting clubfoot is a safe, effective, and minimally invasive treatment that has recently been implemented in Latin America. This study evaluates the initial impact and unique barriers to the diffusion of the Ponseti method throughout this region. Structured interviews were conducted with 30 physicians practicing the Ponseti method in three socioeconomically diverse countries: Chile, Peru and Guatemala. Since learning the Ponseti method, these physicians have treated approximately 1,740 clubfoot patients, with an estimated 1,705 (98%) patients treated using the Ponseti method, and 35 (2%) patients treated using surgical techniques. The barriers were classified into the following themes: physician education, health care system of the country, culture and beliefs of patients, physical distance and transport, financial barriers for patients, and parental compliance with the method. The results yielded several common barriers throughout Latin America including lack of physician education, physical distance to the treatment centers, and financial barriers for patients. Information from this study can be used to inform, and to implement and evaluate specific strategies to improve the diffusion of the Ponseti method for treating clubfoot throughout Latin America.

  3. A feedback control strategy for the airfoil system under non-Gaussian colored noise excitation.

    PubMed

    Huang, Yong; Tao, Gang

    2014-09-01

    The stability of a binary airfoil with feedback control under stochastic disturbances, modeled as non-Gaussian colored noise, is studied in this paper. First, using established approximation theories and methods, the non-Gaussian colored noise is simplified to an Ornstein-Uhlenbeck process. Then, via the stochastic averaging method and the logarithmic polar transformation, a one-dimensional diffusion process is obtained. Finally, by applying the boundary conditions, the largest Lyapunov exponent, which determines the almost-sure stability of the system, and the effective region of the control parameters are calculated.
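
    For reference, the stability criterion invoked here is the standard one (included as background, not taken from the paper): the largest Lyapunov exponent

      \lambda = \lim_{t \to \infty} \frac{1}{t} \ln \frac{\lVert x(t) \rVert}{\lVert x(0) \rVert},

    with the trivial solution almost surely stable when \lambda < 0; the logarithmic polar transformation \rho = \ln \lVert x \rVert reduces estimating \lambda to a time average of the drift of \rho.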

  4. Pdf prediction of supersonic hydrogen flames

    NASA Technical Reports Server (NTRS)

    Eifler, P.; Kollmann, W.

    1993-01-01

    A hybrid method for the prediction of supersonic turbulent flows with combustion is developed consisting of a second order closure for the velocity field and a multi-scalar pdf method for the local thermodynamic state. It is shown that for non-premixed flames and chemical equilibrium mixture fraction, the logarithm of the (dimensionless) density, internal energy per unit mass and the divergence of the velocity have several advantages over other sets of scalars. The closure model is applied to a supersonic non-premixed flame burning hydrogen with air supplied by a supersonic coflow and the results are compared with a limited set of experimental data.

  5. A feedback control strategy for the airfoil system under non-Gaussian colored noise excitation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Yong, E-mail: hy@njust.edu.cn; Tao, Gang, E-mail: taogang@njust.edu.cn

    2014-09-01

    The stability of a binary airfoil with feedback control under stochastic disturbances, modeled as non-Gaussian colored noise, is studied in this paper. First, using established approximation theories and methods, the non-Gaussian colored noise is simplified to an Ornstein-Uhlenbeck process. Then, via the stochastic averaging method and the logarithmic polar transformation, a one-dimensional diffusion process is obtained. Finally, by applying the boundary conditions, the largest Lyapunov exponent, which determines the almost-sure stability of the system, and the effective region of the control parameters are calculated.

  6. Assessing the role of pavement macrotexture in preventing crashes on highways.

    PubMed

    Pulugurtha, Srinivas S; Kusam, Prasanna R; Patel, Kuvleshay J

    2010-02-01

    The objective of this article is to assess the role of pavement macrotexture in preventing crashes on highways in the State of North Carolina. Laser profilometer data obtained from the North Carolina Department of Transportation (NCDOT) for highways comprising four corridors are processed to calculate pavement macrotexture at 100-m (approximately 330-ft) sections according to the American Society for Testing and Materials (ASTM) standards. Crash data collected over the same lengths of the corridors were integrated with the calculated pavement macrotexture for each section. Scatterplots were generated to assess the role of pavement macrotexture on crashes and on the logarithm of crashes. Regression analyses were conducted with predictor variables such as million vehicle miles of travel (as a function of traffic volume and length), the number of interchanges, the number of at-grade intersections, the number of grade-separated interchanges, and the number of bridges, culverts, and overhead signs, along with pavement macrotexture, to study the statistical significance of the relationship between pavement macrotexture and crashes (both linear and log-linear) relative to the other predictor variables. The scatterplots and regression analyses indicate a more statistically significant relationship between pavement macrotexture and the logarithm of crashes than between pavement macrotexture and crashes. The coefficient for pavement macrotexture is, in general, negative, indicating that the number of crashes, or its logarithm, decreases as macrotexture increases. The relationship between pavement macrotexture and the logarithm of crashes is generally stronger than that between most other predictor variables and crashes or the logarithm of crashes. Based on the results obtained, it can be concluded that maintaining pavement macrotexture at or above a threshold of 1.524 mm (0.06 in.) would possibly reduce crashes and provide safe transportation to road users on highways.
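
    A sketch of the log-linear specification described above; the data, variable names, and coefficients are hypothetical placeholders, not NCDOT values.

      # Fit ln(crashes) on ln(MVMT) and macrotexture with ordinary least squares (synthetic data)
      import numpy as np

      rng = np.random.default_rng(0)
      n = 200
      macrotexture = rng.uniform(0.5, 2.5, n)                    # mm, per 100-m section
      mvmt = rng.uniform(0.1, 5.0, n)                            # million vehicle miles of travel
      crashes = np.exp(1.0 + 0.4 * np.log(mvmt) - 0.8 * macrotexture
                       + rng.normal(0.0, 0.3, n))                # synthetic log-linear data

      X = np.column_stack([np.ones(n), np.log(mvmt), macrotexture])
      coef, *_ = np.linalg.lstsq(X, np.log(crashes), rcond=None)
      print("intercept, ln(MVMT), macrotexture:", np.round(coef, 3))
      # a negative macrotexture coefficient reproduces the reported direction of the effect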

  7. Examining empirical evidence of the effect of superfluidity on the fusion barrier

    NASA Astrophysics Data System (ADS)

    Scamps, Guillaume

    2018-04-01

    Background: Recent time-dependent Hartree-Fock-Bogoliubov (TDHFB) calculations predict that superfluidity enhances fluctuations of the fusion barrier. This effect is not fully understood and not yet experimentally revealed. Purpose: The goal of this study is to empirically investigate the effect of superfluidity on the distribution width of the fusion barrier. Method: Two new methods are proposed in the present study. First, the local regression method is introduced and used to determine the barrier distribution. The second method, which requires only the calculation of an integral of the cross section, is developed to determine accurately the fluctuations of the barrier. This integral method, showing the best performance, is systematically applied to 115 fusion reactions. Results: Fluctuations of the barrier for open-shell systems are, on average, larger than those for magic or semimagic nuclei. This is due to the deformation and the superfluidity. To disentangle these two effects, a comparison is made between the experimental width and the width estimated from a model that takes into account the tunneling, the deformation, and the vibration effect. This study reveals that superfluidity enhances the fusion barrier width. Conclusions: This analysis shows that the predicted effect of superfluidity on the width of the barrier is real and is of the order of 1 MeV.

  8. Exome sequencing and genome-wide linkage analysis in 17 families illustrate the complex contribution of TTN truncating variants to dilated cardiomyopathy.

    PubMed

    Norton, Nadine; Li, Duanxiang; Rampersaud, Evadnie; Morales, Ana; Martin, Eden R; Zuchner, Stephan; Guo, Shengru; Gonzalez, Michael; Hedges, Dale J; Robertson, Peggy D; Krumm, Niklas; Nickerson, Deborah A; Hershberger, Ray E

    2013-04-01

    BACKGROUND- Familial dilated cardiomyopathy (DCM) is a genetically heterogeneous disease with >30 known genes. TTN truncating variants were recently implicated in a candidate gene study to cause 25% of familial and 18% of sporadic DCM cases. METHODS AND RESULTS- We used an unbiased genome-wide approach using both linkage analysis and variant filtering across the exome sequences of 48 individuals affected with DCM from 17 families to identify genetic cause. Linkage analysis ranked the TTN region as falling under the second highest genome-wide multipoint linkage peak, multipoint logarithm of odds, 1.59. We identified 6 TTN truncating variants carried by individuals affected with DCM in 7 of 17 DCM families (logarithm of odds, 2.99); 2 of these 7 families also had novel missense variants that segregated with disease. Two additional novel truncating TTN variants did not segregate with DCM. Nucleotide diversity at the TTN locus, including missense variants, was comparable with 5 other known DCM genes. The average number of missense variants in the exome sequences from the DCM cases or the ≈5400 cases from the Exome Sequencing Project was ≈23 per individual. The average number of TTN truncating variants in the Exome Sequencing Project was 0.014 per individual. We also identified a region (chr9q21.11-q22.31) with no known DCM genes with a maximum heterogeneity logarithm of odds score of 1.74. CONCLUSIONS- These data suggest that TTN truncating variants contribute to DCM cause. However, the lack of segregation of all identified TTN truncating variants illustrates the challenge of determining variant pathogenicity even with full exome sequencing.

  9. In vivo measurements of skin barrier: comparison of different methods and advantages of laser scanning microscopy

    NASA Astrophysics Data System (ADS)

    Patzelt, A.; Sterry, W.; Lademann, J.

    2010-12-01

    A major function of the skin is to provide a protective barrier at the interface between external environment and the organism. For skin barrier measurement, a multiplicity of methods is available. As standard methods, the determination of the transepidermal water loss (TEWL) as well as the measurement of the stratum corneum hydration, are widely accepted, although they offer some obvious disadvantages such as increased interference liability. Recently, new optical and spectroscopic methods have been introduced to investigate skin barrier properties in vivo. Especially, laser scanning microscopy has been shown to represent an excellent tool to study skin barrier integrity in many areas of relevance such as cosmetology, occupation, diseased skin, and wound healing.

  10. An analog gamma correction scheme for high dynamic range CMOS logarithmic image sensors.

    PubMed

    Cao, Yuan; Pan, Xiaofang; Zhao, Xiaojin; Wu, Huisi

    2014-12-15

    In this paper, a novel analog gamma correction scheme with a logarithmic image sensor, dedicated to minimizing the quantization noise of high-dynamic-range applications, is presented. The proposed implementation exploits a non-linear voltage-controlled-oscillator (VCO) based analog-to-digital converter (ADC) to perform the gamma correction during the analog-to-digital conversion. As a result, the quantization noise does not increase while the high dynamic range of the logarithmic image sensor is preserved. Moreover, by combining the gamma correction with the analog-to-digital conversion, the silicon area and overall power consumption can be greatly reduced. The proposed gamma correction scheme is validated by the reported simulation results and the experimental results measured for our designed test structure, which is fabricated with a 0.35 μm standard complementary-metal-oxide-semiconductor (CMOS) process.

  11. Path Loss Prediction Formula in Urban Area for the Fourth-Generation Mobile Communication Systems

    NASA Astrophysics Data System (ADS)

    Kitao, Koshiro; Ichitsubo, Shinichi

    A site-general prediction formula is created based on measurement results in an urban area in Japan, assuming that the prediction frequency range required for Fourth-Generation (4G) Mobile Communication Systems is from 3 to 6 GHz, the distance range is 0.1 to 3 km, and the base station (BS) height range is from 10 to 100 m. Based on the measurement results, the path loss (dB) is found to be proportional to the logarithm of the distance (m), the logarithm of the BS height (m), and the logarithm of the frequency (GHz). Furthermore, we examine the extension of existing formulae such as the Okumura-Hata, Walfisch-Ikegami, and Sakagami formulae for 4G systems and propose a prediction formula based on the Extended Sakagami formula.
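
    In generic form, a prediction formula of the kind described has the shape below; the coefficients are placeholders, and the paper fits its own values when extending the Okumura-Hata, Walfisch-Ikegami, and Sakagami formulae:

      L_{\mathrm{path}}\,[\mathrm{dB}] = \alpha + \beta \log_{10} d\,[\mathrm{m}] + \gamma \log_{10} h_{b}\,[\mathrm{m}] + \delta \log_{10} f\,[\mathrm{GHz}].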

  12. An Analog Gamma Correction Scheme for High Dynamic Range CMOS Logarithmic Image Sensors

    PubMed Central

    Cao, Yuan; Pan, Xiaofang; Zhao, Xiaojin; Wu, Huisi

    2014-01-01

    In this paper, a novel analog gamma correction scheme with a logarithmic image sensor, dedicated to minimizing the quantization noise of high-dynamic-range applications, is presented. The proposed implementation exploits a non-linear voltage-controlled-oscillator (VCO) based analog-to-digital converter (ADC) to perform the gamma correction during the analog-to-digital conversion. As a result, the quantization noise does not increase while the high dynamic range of the logarithmic image sensor is preserved. Moreover, by combining the gamma correction with the analog-to-digital conversion, the silicon area and overall power consumption can be greatly reduced. The proposed gamma correction scheme is validated by the reported simulation results and the experimental results measured for our designed test structure, which is fabricated with a 0.35 μm standard complementary-metal-oxide-semiconductor (CMOS) process. PMID:25517692

  13. Method of sealing casings of subsurface materials management system

    DOEpatents

    Nickelson, Reva A.; Richardson, John G.; Kostelnik, Kevin M.; Sloan, Paul A.

    2007-02-06

    Systems, components, and methods relating to subterranean containment barriers. Laterally adjacent tubular casings having male interlock structures and multiple female interlock structures defining recesses for receiving a male interlock structure are used to create subterranean barriers for containing and treating buried waste and its effluents. The multiple female interlock structures enable the barriers to be varied around subsurface objects and to form barrier sidewalls. The barrier may be used for treating and monitoring a zone of interest.

  14. Logarithmic Sobolev Inequalities on Path Spaces Over Riemannian Manifolds

    NASA Astrophysics Data System (ADS)

    Hsu, Elton P.

    Let W_o(M) be the space of paths of unit time length on a connected, complete Riemannian manifold M such that γ(0) = o, a fixed point on M, and ν the Wiener measure on W_o(M) (the law of Brownian motion on M starting at o). If the Ricci curvature is bounded by c, then the following logarithmic Sobolev inequality holds:
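
    The record ends before the inequality itself; as orientation only, logarithmic Sobolev inequalities on path space have the generic shape below, where the constant C(c), depending on the Ricci bound c, is the content of the result and is not reproduced here:

      \int_{W_o(M)} F^2 \log F^2 \, d\nu - \Big( \int_{W_o(M)} F^2 \, d\nu \Big) \log \Big( \int_{W_o(M)} F^2 \, d\nu \Big) \le C(c) \int_{W_o(M)} \lvert DF \rvert^2 \, d\nu

    for suitable cylindrical functions F with gradient DF on path space.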

  15. Measuring Academic Progress of Students with Learning Difficulties: A Comparison of the Semi-Logarithmic Chart and Equal Interval Graph Paper.

    ERIC Educational Resources Information Center

    Marston, Doug; Deno, Stanley L.

    The accuracy of predictions of future student performance on the basis of graphing data on semi-logarithmic charts and equal interval graphs was examined. All 83 low-achieving students in grades 3 to 6 read randomly-selected lists of words from the Harris-Jacobson Word List for 1 minute. The number of words read correctly and words read…

  16. Representational change and strategy use in children's number line estimation during the first years of primary school.

    PubMed

    White, Sonia L J; Szűcs, Dénes

    2012-01-04

    The objective of this study was to scrutinize number line estimation behaviors displayed by children in mathematics classrooms during the first three years of schooling. We extend existing research by not only mapping potential logarithmic-linear shifts but also provide a new perspective by studying in detail the estimation strategies of individual target digits within a number range familiar to children. Typically developing children (n = 67) from Years 1-3 completed a number-to-position numerical estimation task (0-20 number line). Estimation behaviors were first analyzed via logarithmic and linear regression modeling. Subsequently, using an analysis of variance we compared the estimation accuracy of each digit, thus identifying target digits that were estimated with the assistance of arithmetic strategy. Our results further confirm a developmental logarithmic-linear shift when utilizing regression modeling; however, uniquely we have identified that children employ variable strategies when completing numerical estimation, with levels of strategy advancing with development. In terms of the existing cognitive research, this strategy factor highlights the limitations of any regression modeling approach, or alternatively, it could underpin the developmental time course of the logarithmic-linear shift. Future studies need to systematically investigate this relationship and also consider the implications for educational practice.
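
    An illustrative sketch of the logarithmic-versus-linear model comparison used in such analyses, run on synthetic 0-20 number-line estimates rather than the study's data:

      # Compare R^2 of a linear and a logarithmic fit to compressed (log-like) estimates
      import numpy as np

      targets = np.arange(1, 21)
      rng = np.random.default_rng(1)
      estimates = 20 * np.log(targets + 1) / np.log(21) + rng.normal(0.0, 1.0, targets.size)

      def r_squared(y, yhat):
          return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

      lin = np.polyfit(targets, estimates, 1)                     # y = a*x + b
      log_fit = np.polyfit(np.log(targets), estimates, 1)         # y = a*ln(x) + b
      print("R^2 linear:     ", round(r_squared(estimates, np.polyval(lin, targets)), 3))
      print("R^2 logarithmic:", round(r_squared(estimates, np.polyval(log_fit, np.log(targets))), 3))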

  17. Pars plana Ahmed valve and vitrectomy in patients with glaucoma associated with posterior segment disease.

    PubMed

    Wallsh, Josh O; Gallemore, Ron P; Taban, Mehran; Hu, Charles; Sharareh, Behnam

    2013-01-01

    To assess the safety and efficacy of a modified technique for pars plana placement of the Ahmed valve in combination with pars plana vitrectomy in the treatment of glaucoma associated with posterior segment disease. Thirty-nine eyes with glaucoma associated with posterior segment disease underwent pars plana vitrectomy combined with Ahmed valve placement. All valves were placed in the pars plana using a modified technique, without the pars plana clip, and using a scleral patch graft. The 24 eyes diagnosed with neovascular glaucoma had an improvement in intraocular pressure from 37.6 mmHg to 13.8 mmHg and best-corrected visual acuity from 2.13 logarithm of minimum angle of resolution to 1.40 logarithm of minimum angle of resolution. Fifteen eyes diagnosed with steroid-induced glaucoma had an improvement in intraocular pressure from 27.9 mmHg to 14.1 mmHg and best-corrected visual acuity from 1.38 logarithm of minimum angle of resolution to 1.13 logarithm of minimum angle of resolution. Complications included four cases of cystic bleb formation and one case of choroidal detachment and explantation for hypotony. Ahmed valve placement through the pars plana during vitrectomy is an effective option for managing complex cases of glaucoma without the use of the pars plana clip.

  18. A Planar Microfluidic Mixer Based on Logarithmic Spirals

    PubMed Central

    Scherr, Thomas; Quitadamo, Christian; Tesvich, Preston; Park, Daniel Sang-Won; Tiersch, Terrence; Hayes, Daniel; Choi, Jin-Woo; Nandakumar, Krishnaswamy

    2013-01-01

    A passive, planar micromixer design based on logarithmic spirals is presented. The device was fabricated using polydimethylsiloxane soft photolithography techniques, and mixing performance was characterized via numerical simulation and fluorescent microscopy. Mixing efficiency initially declined as Reynolds number increased, and this trend continued until a Reynolds number of 15 where a minimum was reached at 53%. Mixing efficiency then began to increase reaching a maximum mixing efficiency of 86% at Re = 67. Three-dimensional simulations of fluid mixing in this design were compared to other planar geometries such as the Archimedes spiral and Meandering-S mixers. The implementation of logarithmic curvature offers several unique advantages that enhance mixing, namely a variable cross-sectional area and a logarithmically varying radius of curvature that creates 3-D Dean vortices. These flow phenomena were observed in simulations with multilayered fluid folding and validated with confocal microscopy. This design provides improved mixing performance over a broader range of Reynolds numbers than other reported planar mixers, all while avoiding external force fields, more complicated fabrication processes, and the introduction of flow obstructions or cavities that may unintentionally affect sensitive or particulate-containing samples. Due to the planar design requiring only single-step lithographic features, this compact geometry could be easily implemented into existing micro-total analysis systems requiring effective rapid mixing. PMID:23956497

  19. A planar microfluidic mixer based on logarithmic spirals

    NASA Astrophysics Data System (ADS)

    Scherr, Thomas; Quitadamo, Christian; Tesvich, Preston; Sang-Won Park, Daniel; Tiersch, Terrence; Hayes, Daniel; Choi, Jin-Woo; Nandakumar, Krishnaswamy; Monroe, W. Todd

    2012-05-01

    A passive, planar micromixer design based on logarithmic spirals is presented. The device was fabricated using polydimethylsiloxane soft photolithography techniques, and mixing performance was characterized via numerical simulation and fluorescent microscopy. Mixing efficiency initially declined as the Reynolds number increased, and this trend continued until a Reynolds number of 15 where a minimum was reached at 53%. Mixing efficiency then began to increase reaching a maximum mixing efficiency of 86% at Re = 67. Three-dimensional (3D) simulations of fluid mixing in this design were compared to other planar geometries such as the Archimedes spiral and Meandering-S mixers. The implementation of logarithmic curvature offers several unique advantages that enhance mixing, namely a variable cross-sectional area and a logarithmically varying radius of curvature that creates 3D Dean vortices. These flow phenomena were observed in simulations with multilayered fluid folding and validated with confocal microscopy. This design provides improved mixing performance over a broader range of Reynolds numbers than other reported planar mixers, all while avoiding external force fields, more complicated fabrication processes and the introduction of flow obstructions or cavities that may unintentionally affect sensitive or particulate-containing samples. Due to the planar design requiring only single-step lithographic features, this compact geometry could be easily implemented into existing micro-total analysis systems requiring effective rapid mixing.

  20. Coherence and entanglement measures based on Rényi relative entropies

    NASA Astrophysics Data System (ADS)

    Zhu, Huangjun; Hayashi, Masahito; Chen, Lin

    2017-11-01

    We study systematically resource measures of coherence and entanglement based on Rényi relative entropies, which include the logarithmic robustness of coherence, geometric coherence, and conventional relative entropy of coherence together with their entanglement analogues. First, we show that each Rényi relative entropy of coherence is equal to the corresponding Rényi relative entropy of entanglement for any maximally correlated state. By virtue of this observation, we establish a simple operational connection between entanglement measures and coherence measures based on Rényi relative entropies. We then prove that all these coherence measures, including the logarithmic robustness of coherence, are additive. Accordingly, all these entanglement measures are additive for maximally correlated states. In addition, we derive analytical formulas for Rényi relative entropies of entanglement of maximally correlated states and bipartite pure states, which reproduce a number of classic results on the relative entropy of entanglement and logarithmic robustness of entanglement in a unified framework. Several nontrivial bounds for Rényi relative entropies of coherence (entanglement) are further derived, which improve over results known previously. Moreover, we determine all states whose relative entropy of coherence is equal to the logarithmic robustness of coherence. As an application, we provide an upper bound for the exact coherence distillation rate, which is saturated for pure states.
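
    For reference, one commonly used member of this family is the Petz-type Rényi relative entropy (the record does not specify whether the sandwiched variant is also used):

      D_{\alpha}(\rho \| \sigma) = \frac{1}{\alpha - 1} \log \operatorname{Tr}\!\left( \rho^{\alpha} \sigma^{1-\alpha} \right), \qquad \alpha \in (0,1) \cup (1,\infty),

    which recovers the ordinary relative entropy \operatorname{Tr}\,\rho(\log \rho - \log \sigma) in the limit \alpha \to 1.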

  1. Validation of SplitVectors Encoding for Quantitative Visualization of Large-Magnitude-Range Vector Fields

    PubMed Central

    Zhao, Henan; Bryant, Garnett W.; Griffin, Wesley; Terrill, Judith E.; Chen, Jian

    2017-01-01

    We designed and evaluated SplitVectors, a new vector field display approach to help scientists perform new discrimination tasks on large-magnitude-range scientific data shown in three-dimensional (3D) visualization environments. SplitVectors uses scientific notation to display vector magnitude, thus improving legibility. We present an empirical study comparing the SplitVectors approach with three other approaches - direct linear representation, logarithmic, and text display commonly used in scientific visualizations. Twenty participants performed three domain analysis tasks: reading numerical values (a discrimination task), finding the ratio between values (a discrimination task), and finding the larger of two vectors (a pattern detection task). Participants used both mono and stereo conditions. Our results suggest the following: (1) SplitVectors improve accuracy by about 10 times compared to linear mapping and by four times to logarithmic in discrimination tasks; (2) SplitVectors have no significant differences from the textual display approach, but reduce cluttering in the scene; (3) SplitVectors and textual display are less sensitive to data scale than linear and logarithmic approaches; (4) using logarithmic can be problematic as participants' confidence was as high as directly reading from the textual display, but their accuracy was poor; and (5) Stereoscopy improved performance, especially in more challenging discrimination tasks. PMID:28113469

  2. Validation of SplitVectors Encoding for Quantitative Visualization of Large-Magnitude-Range Vector Fields.

    PubMed

    Henan Zhao; Bryant, Garnett W; Griffin, Wesley; Terrill, Judith E; Jian Chen

    2017-06-01

    We designed and evaluated SplitVectors, a new vector field display approach to help scientists perform new discrimination tasks on large-magnitude-range scientific data shown in three-dimensional (3D) visualization environments. SplitVectors uses scientific notation to display vector magnitude, thus improving legibility. We present an empirical study comparing the SplitVectors approach with three other approaches - direct linear representation, logarithmic, and text display commonly used in scientific visualizations. Twenty participants performed three domain analysis tasks: reading numerical values (a discrimination task), finding the ratio between values (a discrimination task), and finding the larger of two vectors (a pattern detection task). Participants used both mono and stereo conditions. Our results suggest the following: (1) SplitVectors improve accuracy by about 10 times compared to linear mapping and by four times to logarithmic in discrimination tasks; (2) SplitVectors have no significant differences from the textual display approach, but reduce cluttering in the scene; (3) SplitVectors and textual display are less sensitive to data scale than linear and logarithmic approaches; (4) using logarithmic can be problematic as participants' confidence was as high as directly reading from the textual display, but their accuracy was poor; and (5) Stereoscopy improved performance, especially in more challenging discrimination tasks.

  3. Current collectors for improved safety

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdelmalak, Michael Naguib; Allu, Srikanth; Dudney, Nancy J.

    A battery electrode assembly includes a current collector with conduction barrier regions having a conductive state in which electrical conductivity through the conduction barrier region is permitted, and a safety state in which electrical conductivity through the conduction barrier regions is reduced. The conduction barrier regions change from the conductive state to the safety state when the current collector receives a short-threatening event. An electrode material can be connected to the current collector. The conduction barrier regions can define electrical isolation subregions. A battery is also disclosed, along with methods for making the electrode assembly, methods for making a battery, and methods for operating a battery.

  4. Experimental evaluation of optimization method for developing ultraviolet barrier coatings

    NASA Astrophysics Data System (ADS)

    Gonome, Hiroki; Okajima, Junnosuke; Komiya, Atsuki; Maruyama, Shigenao

    2014-01-01

    Ultraviolet (UV) barrier coatings can be used to protect many industrial products from UV attack. This study introduces a method of optimizing UV barrier coatings using pigment particles. The radiative properties of the pigment particles were evaluated theoretically, and the optimum particle size was decided from the absorption efficiency and the back-scattering efficiency. UV barrier coatings were prepared with zinc oxide (ZnO) and titanium dioxide (TiO2). The transmittance of the UV barrier coating was calculated theoretically. The radiative transfer in the UV barrier coating was modeled using the radiation element method by ray emission model (REM2). In order to validate the calculated results, the transmittances of these coatings were measured by a spectrophotometer. A UV barrier coating with a low UV transmittance and high VIS transmittance could be achieved. The calculated transmittance showed a spectral tendency similar to that of the measured one. The use of appropriate particles with optimum size, coating thickness and volume fraction will result in effective UV barrier coatings, which can thus be achieved through the application of optical engineering.

  5. Electronic filters, signal conversion apparatus, hearing aids and methods

    NASA Technical Reports Server (NTRS)

    Morley, Jr., Robert E. (Inventor); Engebretson, A. Maynard (Inventor); Engel, George L. (Inventor); Sullivan, Thomas J. (Inventor)

    1992-01-01

    An electronic filter for filtering an electrical signal. Signal processing circuitry therein includes a logarithmic filter having a series of filter stages with inputs and outputs in cascade and respective circuits. This invention was made with U.S. Government support under Veterans Administration Contract VA KV 674P857 and National Aeronautics and Space Administration (NASA) Research Grant No. NAG10-0040. The U.S. Government has certain rights in this invention.

  6. [The influence of various acoustic stimuli upon the cumulative action potential (SAP) of the auditory nerves in guinea pigs (author's transl)].

    PubMed

    Hofmann, G; Kraak, W

    1976-08-31

    The impact of various acoustic stimuli upon the cumulative action potential of the auditory nerves in guinea pigs is investigated by means of the averaging method. It was found that the potential amplitude within the measuring range increases with the logarithm of the rising sonic pressure velocity. Unlike the evoked response audiometry (ERA), this potential seems unsuitable for furnishing information of the frequency-dependent threshold course.

  7. The advantages of logarithmically scaled data for electromagnetic inversion

    NASA Astrophysics Data System (ADS)

    Wheelock, Brent; Constable, Steven; Key, Kerry

    2015-06-01

    Non-linear inversion algorithms traverse a data misfit space over multiple iterations of trial models in search of either a global minimum or some target misfit contour. The success of the algorithm in reaching that objective depends upon the smoothness and predictability of the misfit space. For any given observation, there is no absolute form a datum must take, and therefore no absolute definition for the misfit space; in fact, there are many alternatives. However, not all misfit spaces are equal in terms of promoting the success of inversion. In this work, we appraise three common forms that complex data take in electromagnetic geophysical methods: real and imaginary components, a power of amplitude and phase, and logarithmic amplitude and phase. We find that the optimal form is logarithmic amplitude and phase. Single-parameter misfit curves of log-amplitude and phase data for both magnetotelluric and controlled-source electromagnetic methods are the smoothest of the three data forms and do not exhibit flattening at low model resistivities. Synthetic, multiparameter, 2-D inversions illustrate that log-amplitude and phase is the most robust data form, converging to the target misfit contour in the fewest steps regardless of starting model and the amount of noise added to the data; inversions using the other two data forms run slower or fail under various starting models and proportions of noise. It is observed that inversion with log-amplitude and phase data is nearly two times faster in converging to a solution than with other data types. We also assess the statistical consequences of transforming data in the ways discussed in this paper. With the exception of real and imaginary components, which are assumed to be Gaussian, all other data types do not produce an expected mean-squared misfit value of 1.00 at the true model (a common assumption) as the errors in the complex data become large. We recommend that real and imaginary data with errors larger than 10 per cent of the complex amplitude be withheld from a log-amplitude and phase inversion rather than retaining them with large error-bars.
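
    A brief sketch of the data transformation discussed above; the sample responses and the 5 per cent error level are placeholders, not survey data.

      # Recast complex EM responses as log-amplitude and phase, with first-order error propagation
      import numpy as np

      Z = np.array([1.2e-12 + 0.8e-12j, 3.5e-13 - 1.1e-13j])     # complex responses (assumed)
      sigma = 0.05 * np.abs(Z)                                    # 5% amplitude errors (assumed)

      log_amp = np.log10(np.abs(Z))                               # logarithmic amplitude
      phase = np.degrees(np.angle(Z))                             # phase in degrees
      log_amp_err = sigma / (np.abs(Z) * np.log(10))              # d(log10|Z|) = d|Z| / (|Z| ln 10)
      phase_err = np.degrees(sigma / np.abs(Z))                   # small-angle propagation

      for la, le, ph, pe in zip(log_amp, log_amp_err, phase, phase_err):
          print(f"log10|Z| = {la:.3f} +/- {le:.3f},  phase = {ph:.1f} +/- {pe:.1f} deg")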

  8. Method for applying a diffusion barrier interlayer for high temperature components

    DOEpatents

    Wei, Ronghua; Cheruvu, Narayana S.

    2016-03-08

    A coated substrate and a method of forming a diffusion barrier coating system between a substrate and a MCrAl coating, including a diffusion barrier coating deposited onto at least a portion of a substrate surface, wherein the diffusion barrier coating comprises a nitride, oxide or carbide of one or more transition metals and/or metalloids and a MCrAl coating, wherein M includes a transition metal or a metalloid, deposited on at least a portion of the diffusion barrier coating, wherein the diffusion barrier coating restricts the inward diffusion of aluminum of the MCrAl coating into the substrate.

  9. Subsurface materials management and containment system, components thereof and methods relating thereto

    DOEpatents

    Nickelson, Reva A.; Richardson, John G.; Kostelnik, Kevin M.; Sloan, Paul A.

    2006-04-18

    Systems, components, and methods relating to subterranean containment barriers. Laterally adjacent tubular casings having male interlock structures and multiple female interlock structures defining recesses for receiving a male interlock structure are used to create subterranean barriers for containing and treating buried waste and its effluents. The multiple female interlock structures enable the barriers to be varied around subsurface objects and to form barrier sidewalls. The barrier may be used for treating and monitoring a zone of interest.

  10. Particle Swarm-Based Translation Control for Immersed Tunnel Element in the Hong Kong-Zhuhai-Macao Bridge Project

    NASA Astrophysics Data System (ADS)

    Li, Jun-jun; Yang, Xiao-jun; Xiao, Ying-jie; Xu, Bo-wei; Wu, Hua-feng

    2018-03-01

    The immersed tunnel is an important part of the Hong Kong-Zhuhai-Macao Bridge (HZMB) project. During floating of an immersed tunnel element, translation, which includes straight and transverse movements, is the main working mode. To decide the magnitude and direction of the towing force for each tug, a particle swarm-based translation control method is presented for the non-powered immersed tunnel element. A linearly weighted logarithmic function is exploited to avoid weak subgoals. In simulation, the particle swarm-based control method is evaluated and compared with the traditional empirical method in the case of the HZMB project. Simulation results show that the presented method delivers a performance improvement in terms of enhanced surplus towing force.

  11. Auto-correlation of journal impact factor for consensus research reporting statements: a cohort study.

    PubMed

    Shanahan, Daniel R

    2016-01-01

    Background. The Journal Citation Reports journal impact factors (JIFs) are widely used to rank and evaluate journals, standing as a proxy for the relative importance of a journal within its field. However, numerous criticisms have been made of the use of a JIF to evaluate importance. This problem is exacerbated when the use of JIFs is extended to evaluate not only the journals, but the papers therein. The purpose of this study was therefore to investigate the relationship between the number of citations and journal IF for identical articles published simultaneously in multiple journals. Methods. Eligible articles were consensus research reporting statements listed on the EQUATOR Network website that were published simultaneously in three or more journals. The correlation between the citation count for each article and the median journal JIF over the published period, and between the citation count and the number of article accesses, was calculated for each reporting statement. Results. Nine research reporting statements were included in this analysis, representing 85 articles published across 58 journals in biomedicine. The number of citations was strongly correlated with the JIF for six of the nine reporting guidelines, with moderate correlation shown for the remaining three guidelines (median r = 0.66, 95% CI [0.45 to 0.90]). There was also a strong positive correlation between the number of citations and the number of article accesses (median r = 0.71, 95% CI [0.5 to 0.8]), although the number of data points for this analysis was limited. When adjusted for the individual reporting guidelines, each logarithm unit of JIF predicted a median increase of 0.8 logarithm units of citation counts (95% CI [-0.4 to 5.2]), and each logarithm unit of article accesses predicted a median increase of 0.1 logarithm units of citation counts (95% CI [-0.9 to 1.4]). This model explained 26% of the variance in citations (median adjusted r^2 = 0.26, range 0.18 to 1.0). Conclusion. The impact factor of the journal in which a reporting statement was published was shown to influence the number of citations that statement will gather over time. Similarly, the number of article accesses also influenced the number of citations, although to a lesser extent than the impact factor. This demonstrates that citation counts are not purely a reflection of scientific merit and that the impact factor is, in fact, auto-correlated.

  12. Absence of Nosocomial Transmission of Imported Lassa Fever during Use of Standard Barrier Nursing Methods.

    PubMed

    Grahn, Anna; Bråve, Andreas; Tolfvenstam, Thomas; Studahl, Marie

    2018-06-01

    Nosocomial transmission of Lassa virus (LASV) is reported to be low when care for the index patient includes proper barrier nursing methods. We investigated whether asymptomatic LASV infection occurred in healthcare workers who used standard barrier nursing methods during the first 15 days of caring for a patient with Lassa fever in Sweden. Of 76 persons who were defined as having been potentially exposed to LASV, 53 provided blood samples for detection of LASV IgG. These persons also responded to a detailed questionnaire to evaluate exposure to different body fluids from the index patient. LASV-specific IgG was not detected in any of the 53 persons. Five of 53 persons had not been using proper barrier nursing methods. Our results strengthen the argument for a low risk of secondary transmission of LASV in humans when standard barrier nursing methods are used and the patient has only mild symptoms.

  13. Will molecular dynamics simulations of proteins ever reach equilibrium?

    PubMed

    Genheden, Samuel; Ryde, Ulf

    2012-06-28

    We show that conformational entropies calculated for five proteins and protein-ligand complexes with dihedral-distribution histogramming, the von Mises approach, or quasi-harmonic analysis do not converge to any useful precision even if molecular dynamics (MD) simulations of 380-500 ns length are employed (the uncertainty is 12-89 kJ mol^-1). To explain this, we suggest a simple protein model involving dihedrals with effective barriers forming a uniform distribution and show that for such a model, the entropy increases logarithmically with time until all significantly populated dihedral states have been sampled, in agreement with the simulations (during the simulations, 52-70% of the available dihedral phase space has been visited). This is also confirmed by the analysis of the trajectories of a 1 ms simulation of bovine pancreatic trypsin inhibitor (31 kJ mol^-1 difference in the entropy between the first and second part of the simulation). Strictly speaking, this means that it is practically impossible to equilibrate MD simulations of proteins. We discuss the implications of such a lack of strict equilibration of protein MD simulations and show that ligand-binding free energies estimated with the MM/GBSA method (molecular mechanics with generalised Born and surface-area solvation) vary by 3-15 kJ mol^-1 during a 500 ns simulation (the higher estimate is caused by rare conformational changes), although they involve a questionable but well-converged normal-mode entropy estimate, whereas free energies estimated by free-energy perturbation vary by less than 0.6 kJ mol^-1 for the same simulation.

  14. Assessing probabilistic predictions of ENSO phase and intensity from the North American Multimodel Ensemble

    NASA Astrophysics Data System (ADS)

    Tippett, Michael K.; Ranganathan, Meghana; L'Heureux, Michelle; Barnston, Anthony G.; DelSole, Timothy

    2017-05-01

    Here we examine the skill of three, five, and seven-category monthly ENSO probability forecasts (1982-2015) from single and multi-model ensemble integrations of the North American Multimodel Ensemble (NMME) project. Three-category forecasts are typical and provide probabilities for the ENSO phase (El Niño, La Niña or neutral). Additional forecast categories indicate the likelihood of ENSO conditions being weak, moderate or strong. The level of skill observed for differing numbers of forecast categories can help to determine the appropriate degree of forecast precision. However, the dependence of the skill score itself on the number of forecast categories must be taken into account. For reliable forecasts with same quality, the ranked probability skill score (RPSS) is fairly insensitive to the number of categories, while the logarithmic skill score (LSS) is an information measure and increases as categories are added. The ignorance skill score decreases to zero as forecast categories are added, regardless of skill level. For all models, forecast formats and skill scores, the northern spring predictability barrier explains much of the dependence of skill on target month and forecast lead. RPSS values for monthly ENSO forecasts show little dependence on the number of categories. However, the LSS of multimodel ensemble forecasts with five and seven categories show statistically significant advantages over the three-category forecasts for the targets and leads that are least affected by the spring predictability barrier. These findings indicate that current prediction systems are capable of providing more detailed probabilistic forecasts of ENSO phase and amplitude than are typically provided.
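
    For reference, the ranked probability score and its skill score take the standard forms below, where K is the number of categories (3, 5, or 7 here), p_j the forecast probabilities, o_j the observed category indicator, and overbars denote averages over forecasts:

      \mathrm{RPS} = \sum_{k=1}^{K} \Big( \sum_{j=1}^{k} p_j - \sum_{j=1}^{k} o_j \Big)^2, \qquad \mathrm{RPSS} = 1 - \frac{\overline{\mathrm{RPS}}_{\mathrm{forecast}}}{\overline{\mathrm{RPS}}_{\mathrm{climatology}}}.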

  15. Osteonal effects on elastic modulus and fatigue life in equine bone.

    PubMed

    Gibson, V A; Stover, S M; Gibeling, J C; Hazelwood, S J; Martin, R B

    2006-01-01

    We hypothesized that recently formed, incompletely mineralized, and thus relatively deformable osteons in the equine third metacarpus enhance in vitro load-controlled fatigue life in two ways. Macroscopically, there is a compliance effect, because reduced tissue elastic modulus diminishes the stress required to reach a given strain. Microscopically, there is a cement line effect, in which new osteons and their cement lines more effectively serve as barriers to crack propagation. We studied 18 beams (4 x 10 x 100 mm) from the medial, lateral, and dorsal cortices of metacarpal bones from 6 thoroughbred racehorses. Following load-controlled fatigue testing to fracture in 4-point bending, a transverse, 100 μm thick, basic fuchsin-stained cross-section was taken from the load-bearing region. The number and diameter of all intact (and thus recently formed/compliant) secondary osteons in a 3.8 x 3.8 mm region in the center of the section were determined. The associated area fraction and cement line length of intact osteons were calculated, and the relationships between these variables, elastic modulus (E), and the logarithm of fatigue life (logN(F)) were analyzed. As expected, logN(F) was negatively correlated with E, which was in turn negatively correlated with intact osteon area fraction and density. (LogN(F))/E increased in proportion to intact osteon density and nonlinearly with cement line density (mm/mm^2). These results support the hypothesis that remodeling extends load-controlled fatigue life both through the creation of osteonal barriers to microdamage propagation and through modulus reduction.

  16. A viable logarithmic f(R) model for inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amin, M.; Khalil, S.; Salah, M.

    2016-08-18

    Inflation in the framework of f(R) modified gravity is revisited. We study the conditions that f(R) should satisfy in order to lead to a viable inflationary model in the original form and in the Einstein frame. Based on these criteria we propose a new logarithmic model as a potential candidate for f(R) theories aiming to describe inflation consistent with observations from Planck satellite (2015). The model predicts scalar spectral index 0.9615

  17. Graviton 1-loop partition function for 3-dimensional massive gravity

    NASA Astrophysics Data System (ADS)

    Gaberdiel, Matthias R.; Grumiller, Daniel; Vassilevich, Dmitri

    2010-11-01

    The graviton 1-loop partition function in Euclidean topologically massive gravity (TMG) is calculated using heat kernel techniques. The partition function does not factorize holomorphically, and at the chiral point it has the structure expected from a logarithmic conformal field theory. This gives strong evidence for the proposal that the dual conformal field theory to TMG at the chiral point is indeed logarithmic. We also generalize our results to new massive gravity.

  18. SPECIFIC HEAT INDICATOR

    DOEpatents

    Horn, F.L.; Binns, J.E.

    1961-05-01

    Apparatus for continuously and automatically measuring and computing the specific heat of a flowing solution is described. The invention provides for the continuous measurement of all the parameters required for the mathematical solution of this characteristic. The parameters are converted to logarithmic functions which are added and subtracted in accordance with the solution and a null-seeking servo reduces errors due to changing voltage drops to a minimum. Logarithmic potentiometers are utilized in a unique manner to accomplish these results.

  19. Cost drivers and resource allocation in military health care systems.

    PubMed

    Fulton, Larry; Lasdon, Leon S; McDaniel, Reuben R

    2007-03-01

    This study illustrates the feasibility of incorporating technical efficiency considerations in the funding of military hospitals and identifies the primary drivers for hospital costs. Secondary data collected for 24 U.S.-based Army hospitals and medical centers for the years 2001 to 2003 are the basis for this analysis. Technical efficiency was measured by using data envelopment analysis; subsequently, efficiency estimates were included in logarithmic-linear cost models that specified cost as a function of volume, complexity, efficiency, time, and facility type. These logarithmic-linear models were compared against stochastic frontier analysis models. A parsimonious, three-variable, logarithmic-linear model composed of volume, complexity, and efficiency variables exhibited a strong linear relationship with observed costs (R(2) = 0.98). This model also proved reliable in forecasting (R(2) = 0.96). Based on our analysis, as much as $120 million might be reallocated to improve the United States-based Army hospital performance evaluated in this study.
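
    One way to write the parsimonious three-variable specification described above (whether the efficiency term enters in levels or in logs is not stated in the record, so its form here is an assumption):

      \ln(\mathrm{Cost}_i) = \beta_0 + \beta_1 \ln(\mathrm{Volume}_i) + \beta_2 \ln(\mathrm{Complexity}_i) + \beta_3\, \mathrm{Efficiency}_i + \varepsilon_i.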

  20. Scaling in the vicinity of the four-state Potts fixed point

    NASA Astrophysics Data System (ADS)

    Blöte, H. W. J.; Guo, Wenan; Nightingale, M. P.

    2017-08-01

    We study a self-dual generalization of the Baxter-Wu model, employing results obtained by transfer matrix calculations of the magnetic scaling dimension and the free energy. While the pure critical Baxter-Wu model displays the critical behavior of the four-state Potts fixed point in two dimensions, in the sense that logarithmic corrections are absent, the introduction of different couplings in the up- and down triangles moves the model away from this fixed point, so that logarithmic corrections appear. Real couplings move the model into the first-order range, away from the behavior displayed by the nearest-neighbor, four-state Potts model. We also use complex couplings, which bring the model in the opposite direction characterized by the same type of logarithmic corrections as present in the four-state Potts model. Our finite-size analysis confirms in detail the existing renormalization theory describing the immediate vicinity of the four-state Potts fixed point.

  1. Optical Processing of Speckle Images with Bacteriorhodopsin for Pattern Recognition

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Tucker, Deanne (Technical Monitor)

    1994-01-01

    Logarithmic processing of images with multiplicative noise characteristics can be utilized to transform the image into one with an additive noise distribution. This simplifies subsequent image processing steps for applications such as image restoration or correlation for pattern recognition. One particularly common form of multiplicative noise is speckle, for which the logarithmic operation not only produces additive noise, but also makes it of constant variance (signal-independent). We examine the optical transmission properties of some bacteriorhodopsin films here and find them well suited to implement such a pointwise logarithmic transformation optically in a parallel fashion. We present experimental results of the optical conversion of speckle images into transformed images with additive, signal-independent noise statistics using the real-time photochromic properties of bacteriorhodopsin. We provide an example of improved correlation performance in terms of correlation peak signal-to-noise for such a transformed speckle image.
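
    The pointwise logarithm that turns multiplicative, signal-dependent speckle into approximately additive, signal-independent noise can be illustrated numerically. The following is a simple simulation of that transformation, not the optical bacteriorhodopsin implementation described above.

    ```python
    # Sketch: log transform of a speckle-corrupted image.
    # Multiplicative speckle (unit-mean exponential for fully developed speckle)
    # becomes additive, signal-independent noise after the transform.
    import numpy as np

    rng = np.random.default_rng(1)
    signal = rng.uniform(10.0, 200.0, size=(128, 128))   # hypothetical clean image
    speckle = rng.exponential(scale=1.0, size=signal.shape)
    observed = signal * speckle                           # multiplicative noise model

    log_observed = np.log(observed)                       # = log(signal) + log(speckle)
    noise_term = log_observed - np.log(signal)

    # The residual noise no longer depends on the signal level:
    lo = noise_term[signal < 50.0]
    hi = noise_term[signal > 150.0]
    print(lo.var(), hi.var())   # similar variances regardless of signal strength
    ```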

  2. Universal principles governing multiple random searchers on complex networks: The logarithmic growth pattern and the harmonic law

    NASA Astrophysics Data System (ADS)

    Weng, Tongfeng; Zhang, Jie; Small, Michael; Harandizadeh, Bahareh; Hui, Pan

    2018-03-01

    We propose a unified framework to evaluate and quantify the search time of multiple random searchers traversing independently and concurrently on complex networks. We find that the intriguing behaviors of multiple random searchers are governed by two basic principles—the logarithmic growth pattern and the harmonic law. Specifically, the logarithmic growth pattern characterizes how the search time increases with the number of targets, while the harmonic law explores how the search time of multiple random searchers varies relative to that needed by individual searchers. Numerical and theoretical results demonstrate these two universal principles established across a broad range of random search processes, including generic random walks, maximal entropy random walks, intermittent strategies, and persistent random walks. Our results reveal two fundamental principles governing the search time of multiple random searchers, which are expected to facilitate investigation of diverse dynamical processes like synchronization and spreading.

  3. Late-time structure of the Bunch-Davies de Sitter wavefunction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anninos, Dionysios; Anous, Tarek; Freedman, Daniel Z.

    2015-11-30

    We examine the late time behavior of the Bunch-Davies wavefunction for interacting light fields in a de Sitter background. We use perturbative techniques developed in the framework of AdS/CFT, and analytically continue to compute tree and loop level contributions to the Bunch-Davies wavefunction. We consider self-interacting scalars of general mass, but focus especially on the massless and conformally coupled cases. We show that certain contributions grow logarithmically in conformal time both at tree and loop level. We also consider gauge fields and gravitons. The four-dimensional Fefferman-Graham expansion of classical asymptotically de Sitter solutions is used to show that the wavefunction contains no logarithmic growth in the pure graviton sector at tree level. Finally, assuming a holographic relation between the wavefunction and the partition function of a conformal field theory, we interpret the logarithmic growths in the language of conformal field theory.

  4. Confirming the Lanchestrian linear-logarithmic model of attrition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartley, D.S. III.

    1990-12-01

    This paper is the fourth in a series of reports on the breakthrough research in historical validation of attrition in conflict. Significant defense policy decisions, including weapons acquisition and arms reduction, are based in part on models of conflict. Most of these models are driven by their attrition algorithms, usually forms of the Lanchester square and linear laws. None of these algorithms have been validated. The results of this paper confirm the results of earlier papers, using a large database of historical results. The homogeneous linear-logarithmic Lanchestrian attrition model is validated to the extent possible with current initial and final force size data and is consistent with the Iwo Jima data. A particular differential linear-logarithmic model is described that fits the data very well. A version of Helmbold's victory predicting parameter is also confirmed, with an associated probability function. 37 refs., 73 figs., 68 tabs.

  5. Chemical origins of frictional aging.

    PubMed

    Liu, Yun; Szlufarska, Izabela

    2012-11-02

    Although the basic laws of friction are simple enough to be taught in elementary physics classes and although friction has been widely studied for centuries, in the current state of knowledge it is still not possible to predict a friction force from fundamental principles. One of the highly debated topics in this field is the origin of static friction. For most macroscopic contacts between two solids, static friction will increase logarithmically with time, a phenomenon that is referred to as aging of the interface. One known reason for the logarithmic growth of static friction is the deformation creep in plastic contacts. However, this mechanism cannot explain frictional aging observed in the absence of roughness and plasticity. Here, we discover molecular mechanisms that can lead to a logarithmic increase of friction based purely on interfacial chemistry. Predictions of our model are consistent with published experimental data on the friction of silica.

  6. Where to restore ecological connectivity? Detecting barriers and quantifying restoration benefits.

    PubMed

    McRae, Brad H; Hall, Sonia A; Beier, Paul; Theobald, David M

    2012-01-01

    Landscape connectivity is crucial for many ecological processes, including dispersal, gene flow, demographic rescue, and movement in response to climate change. As a result, governmental and non-governmental organizations are focusing efforts to map and conserve areas that facilitate movement to maintain population connectivity and promote climate adaptation. In contrast, little focus has been placed on identifying barriers-landscape features which impede movement between ecologically important areas-where restoration could most improve connectivity. Yet knowing where barriers most strongly reduce connectivity can complement traditional analyses aimed at mapping best movement routes. We introduce a novel method to detect important barriers and provide example applications. Our method uses GIS neighborhood analyses in conjunction with effective distance analyses to detect barriers that, if removed, would significantly improve connectivity. Applicable in least-cost, circuit-theoretic, and simulation modeling frameworks, the method detects both complete (impermeable) barriers and those that impede but do not completely block movement. Barrier mapping complements corridor mapping by broadening the range of connectivity conservation alternatives available to practitioners. The method can help practitioners move beyond maintaining currently important areas to restoring and enhancing connectivity through active barrier removal. It can inform decisions on trade-offs between restoration and protection; for example, purchasing an intact corridor may be substantially more costly than restoring a barrier that blocks an alternative corridor. And it extends the concept of centrality to barriers, highlighting areas that most diminish connectivity across broad networks. Identifying which modeled barriers have the greatest impact can also help prioritize error checking of land cover data and collection of field data to improve connectivity maps. Barrier detection provides a different way to view the landscape, broadening thinking about connectivity and fragmentation while increasing conservation options.

  7. Studies of phosphatidylcholine vesicles by spectroturbidimetric and dynamic light scattering methods

    NASA Astrophysics Data System (ADS)

    Khlebtsov, B. N.; Kovler, L. A.; Bogatyrev, V. A.; Khlebtsov, N. G.; Shchyogolev, S. Yu.

    2003-09-01

    A spectroturbidimetric method for the determination of the average size and thickness of the shell in polydisperse suspensions of liposome particles is discussed. The method is based on measuring the wavelength exponent of a suspension (a slope of the logarithmic turbidity spectrum) and the specific turbidity (the turbidity per unit mass concentration of the dispersed substance). The inverse problem was solved using an exact calculation of characteristics of light scattering for polydisperse suspensions of spherical bilayer particles with allowance for the spectral dependence of optical constants. A practical realization of this method is illustrated by the experimental determinations of the structural parameters of liposomes prepared from egg lecithin. Comparison experiments to determine the liposome size by the dynamic (quasielastic) light scattering method were performed as an independent control.
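
    The wavelength exponent used in this method is the (negative) slope of the turbidity spectrum on log-log axes. Below is a minimal sketch of extracting it from a measured spectrum; the spectrum here is synthetic, not the liposome data.

    ```python
    # Sketch: estimate the wavelength exponent w from a turbidity spectrum tau(lambda),
    # assuming local power-law behavior tau ~ lambda^(-w), i.e.
    # w = -d(log tau)/d(log lambda), obtained here as a least-squares slope.
    import numpy as np

    def wavelength_exponent(wavelength_nm, turbidity):
        x = np.log(wavelength_nm)
        y = np.log(turbidity)
        slope, _intercept = np.polyfit(x, y, 1)
        return -slope

    # Synthetic spectrum with w = 2.3 plus a little noise:
    lam = np.linspace(400.0, 700.0, 61)
    tau = 1e6 * lam**-2.3 * (1.0 + 0.01 * np.random.default_rng(2).standard_normal(lam.size))
    print(wavelength_exponent(lam, tau))   # close to 2.3
    ```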

  8. The Employers' Perspective on Barriers and Facilitators to Employment of People with Intellectual Disability: A Differential Mixed-Method Approach

    ERIC Educational Resources Information Center

    Kocman, Andreas; Fischer, Linda; Weber, Germain

    2018-01-01

    Background: Obtaining employment is among the most important ambitions of people with intellectual disability. Progress towards comprehensive inclusive employment is hampered by numerous barriers. Limited research is available on these barriers and strategies to overcome them. Method: A mixed method approach in a sample of 30 HR-managers was used…

  9. In situ retrieval of contaminants or other substances using a barrier system and leaching solutions and components, processes and methods relating thereto

    DOEpatents

    Nickelson, Reva A.; Walsh, Stephanie; Richardson, John G.; Dick, John R.; Sloan, Paul A.

    2005-06-28

    Processes and methods relating to treating contaminants and collecting desired substances from a zone of interest using subterranean collection and containment barriers. Tubular casings having interlock structures are used to create subterranean barriers for containing and treating buried waste and its effluents. The subterranean barrier includes an effluent collection system. Treatment solutions provided to the zone of interest pass therethrough and are collected by the barrier and treated or recovered, allowing on-site remediation. Barrier components may be used in the treatment by collecting or removing contaminants or other materials from the zone of interest.

  10. Method of in situ retrieval of contaminants or other substances using a barrier system and leaching solutions

    DOEpatents

    Nickelson, Reva A.; Walsh, Stephanie; Richardson, John G.; Dick, John R.; Sloan, Paul A.

    2006-12-26

    Processes and methods relating to treating contaminants and collecting desired substances from a zone of interest using subterranean collection and containment barriers. Tubular casings having interlock structures are used to create subterranean barriers for containing and treating buried waste and its effluents. The subterranean barrier includes an effluent collection system. Treatment solutions provided to the zone of interest pass therethrough and are collected by the barrier and treated or recovered, allowing on-site remediation. Barrier components may be used in the treatment by collecting or removing contaminants or other materials from the zone of interest.

  11. Simulation of solute transport across low-permeability barrier walls

    USGS Publications Warehouse

    Harte, P.T.; Konikow, Leonard F.; Hornberger, G.Z.

    2006-01-01

    Low-permeability, non-reactive barrier walls are often used to contain contaminants in an aquifer. Rates of solute transport through such barriers are typically many orders of magnitude slower than rates through the aquifer. Nevertheless, the success of remedial actions may be sensitive to these low rates of transport. Two numerical simulation methods for representing low-permeability barriers in a finite-difference groundwater-flow and transport model were tested. In the first method, the hydraulic properties of the barrier were represented directly on grid cells, and in the second method, the intercell hydraulic-conductance values were adjusted to approximate the reduction in horizontal flow, allowing use of a coarser and computationally efficient grid. The alternative methods were tested and evaluated on the basis of hypothetical test problems and a field case involving tetrachloroethylene (PCE) contamination at a Superfund site in New Hampshire. For all cases, advective transport across the barrier was negligible, but preexisting numerical approaches to calculate dispersion yielded dispersive fluxes that were greater than expected. A transport model (MODFLOW-GWT) was modified to (1) allow different dispersive and diffusive properties to be assigned to the barrier than the adjacent aquifer and (2) more accurately calculate dispersion from concentration gradients and solute fluxes near barriers. The new approach yields reasonable and accurate concentrations for the test cases. © 2006.

  12. Efficient free energy calculations by combining two complementary tempering sampling methods.

    PubMed

    Xie, Liangxu; Shen, Lin; Chen, Zhe-Ning; Yang, Mingjun

    2017-01-14

    Although energy barriers can be efficiently crossed in the reaction coordinate (RC) guided sampling, this type of method suffers from identification of the correct RCs or requirements of high dimensionality of the defined RCs for a given system. If only the approximate RCs with significant barriers are used in the simulations, hidden energy barriers with small to medium height would exist in other degrees of freedom (DOFs) relevant to the target process and consequently cause the problem of insufficient sampling. To address the sampling in this so-called hidden barrier situation, here we propose an effective approach to combine temperature accelerated molecular dynamics (TAMD), an efficient RC-guided sampling method, with the integrated tempering sampling (ITS), a generalized ensemble sampling method. In this combined ITS-TAMD method, the sampling along the major RCs with high energy barriers is guided by TAMD and the sampling of the rest of the DOFs with lower but not negligible barriers is enhanced by ITS. The performance of ITS-TAMD to three systems in the processes with hidden barriers has been examined. In comparison to the standalone TAMD or ITS approach, the present hybrid method shows three main improvements. (1) Sampling efficiency can be improved at least five times even if in the presence of hidden energy barriers. (2) The canonical distribution can be more accurately recovered, from which the thermodynamic properties along other collective variables can be computed correctly. (3) The robustness of the selection of major RCs suggests that the dimensionality of necessary RCs can be reduced. Our work shows more potential applications of the ITS-TAMD method as the efficient and powerful tool for the investigation of a broad range of interesting cases.

  13. Efficient free energy calculations by combining two complementary tempering sampling methods

    NASA Astrophysics Data System (ADS)

    Xie, Liangxu; Shen, Lin; Chen, Zhe-Ning; Yang, Mingjun

    2017-01-01

    Although energy barriers can be efficiently crossed in the reaction coordinate (RC) guided sampling, this type of method suffers from identification of the correct RCs or requirements of high dimensionality of the defined RCs for a given system. If only the approximate RCs with significant barriers are used in the simulations, hidden energy barriers with small to medium height would exist in other degrees of freedom (DOFs) relevant to the target process and consequently cause the problem of insufficient sampling. To address the sampling in this so-called hidden barrier situation, here we propose an effective approach to combine temperature accelerated molecular dynamics (TAMD), an efficient RC-guided sampling method, with the integrated tempering sampling (ITS), a generalized ensemble sampling method. In this combined ITS-TAMD method, the sampling along the major RCs with high energy barriers is guided by TAMD and the sampling of the rest of the DOFs with lower but not negligible barriers is enhanced by ITS. The performance of ITS-TAMD to three systems in the processes with hidden barriers has been examined. In comparison to the standalone TAMD or ITS approach, the present hybrid method shows three main improvements. (1) Sampling efficiency can be improved at least five times even if in the presence of hidden energy barriers. (2) The canonical distribution can be more accurately recovered, from which the thermodynamic properties along other collective variables can be computed correctly. (3) The robustness of the selection of major RCs suggests that the dimensionality of necessary RCs can be reduced. Our work shows more potential applications of the ITS-TAMD method as the efficient and powerful tool for the investigation of a broad range of interesting cases.

  14. Vision-based calibration of parallax barrier displays

    NASA Astrophysics Data System (ADS)

    Ranieri, Nicola; Gross, Markus

    2014-03-01

    Static and dynamic parallax barrier displays became very popular over the past years. Especially for single viewer applications like tablets, phones and other hand-held devices, parallax barriers provide a convenient solution to render stereoscopic content. In our work we present a computer vision based calibration approach to relate image layer and barrier layer of parallax barrier displays with unknown display geometry for static or dynamic viewer positions using homographies. We provide the math and methods to compose the required homographies on the fly and present a way to compute the barrier without the need of any iteration. Our GPU implementation is stable and general and can be used to reduce latency and increase refresh rate of existing and upcoming barrier methods.

  15. Quantification and scaling of multipartite entanglement in continuous variable systems.

    PubMed

    Adesso, Gerardo; Serafini, Alessio; Illuminati, Fabrizio

    2004-11-26

    We present a theoretical method to determine the multipartite entanglement between different partitions of multimode, fully or partially symmetric Gaussian states of continuous variable systems. For such states, we determine the exact expression of the logarithmic negativity and show that it coincides with that of equivalent two-mode Gaussian states. Exploiting this reduction, we demonstrate the scaling of the multipartite entanglement with the number of modes and its reliable experimental estimate by direct measurements of the global and local purities.
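
    For reference, the logarithmic negativity referred to in this abstract is the standard entanglement monotone; its usual trace-norm definition and its symplectic-eigenvalue form for a two-mode Gaussian state are recalled below (textbook conventions, not expressions taken from the paper).

    ```latex
    % Standard definitions (not reproduced from this paper): the logarithmic
    % negativity of a state \rho across a bipartition A|B, and its form for a
    % two-mode Gaussian state in terms of the smallest symplectic eigenvalue
    % \tilde{\nu}_- of the partially transposed covariance matrix (the factor
    % inside the log depends on the vacuum normalization convention).
    E_N(\rho) = \log \left\lVert \rho^{T_A} \right\rVert_1 ,
    \qquad
    E_N = \max\left\{ 0,\; -\log \tilde{\nu}_- \right\}.
    ```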

  16. Enhancement of concentration range of chromatographically detectable components with array detector mass spectrometry

    DOEpatents

    Enke, Christie

    2013-02-19

    Methods and instruments for high dynamic range analysis of sample components are described. A sample is subjected to time-dependent separation, ionized, and the ions dispersed with a constant integration time across an array of detectors according to the ions' m/z values. Each of the detectors in the array has a dynamically adjustable gain or a logarithmic response function, producing an instrument capable of detecting a ratio of responses of 4 or more orders of magnitude.

  17. Barriers to GPs' use of evidence-based medicine: a systematic review

    PubMed Central

    Zwolsman, Sandra; te Pas, Ellen; Hooft, Lotty; Waard, Margreet Wieringa-de; van Dijk, Nynke

    2012-01-01

    Background GPs report various barriers to the use and practice of evidence-based medicine (EBM). A review of research on these barriers may help solve problems regarding the uptake of evidence in clinical outpatient practice. Aim To determine the barriers encountered by GPs in the practice of EBM and to come up with solutions to the barriers identified. Design A systematic review of the literature. Method The following databases were searched: MEDLINE® (PubMed®), Embase, CINAHL®, ERIC, and the Cochrane Library, until February 2011. Primary studies (all methods, all languages) that explore the barriers that GPs encounter in the practice of EBM were included. Results A total of 14 700 articles were identified, of which 22 fulfilled all inclusion criteria. Of the latter, nine concerned qualitative, 12 concerned quantitative, and one concerned both qualitative and quantitative research methods. The barriers described in the articles cover the categories: evidence (including the accompanying EBM steps), the GP’s preferences (experience, expertise, education), and the patient’s preferences. The particular GP setting also has important barriers to the use of EBM. Barriers found in this review, among others, include lack of time, EBM skills, and available evidence; patient-related factors; and the attitude of the GP. Conclusion Various barriers are encountered when using EBM in GP practice. Interventions that help GPs to overcome these barriers are needed, both within EBM education and in clinical practice. PMID:22781999

  18. Extracting Damping Ratio from Dynamic Data and Numerical Solutions

    NASA Technical Reports Server (NTRS)

    Casiano, M. J.

    2016-01-01

    There are many ways to extract damping parameters from data or models. This Technical Memorandum provides a quick reference for some of the more common approaches used in dynamics analysis. Described are six methods of extracting damping from data: the half-power method, logarithmic decrement (decay rate) method, an autocorrelation/power spectral density fitting method, a frequency response fitting method, a random decrement fitting method, and a newly developed half-quadratic gain method. Additionally, state-space models and finite element method modeling tools, such as COMSOL Multiphysics (COMSOL), provide a theoretical damping via complex frequency. Each method has its advantages which are briefly noted. There are also likely many other advanced techniques in extracting damping within the operational modal analysis discipline, where an input excitation is unknown; however, these approaches discussed here are objective, direct, and can be implemented in a consistent manner.
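
    Of the methods listed, the logarithmic decrement (decay rate) method is the simplest to sketch: the damping ratio is extracted from the ratio of successive peaks of a free decay. The snippet below is a generic illustration of that formula, not the implementation in the Technical Memorandum.

    ```python
    # Sketch: logarithmic decrement (decay rate) method.
    # For a lightly damped free decay with peaks x_0, x_1, ..., x_n one period apart:
    #   delta = (1/n) * ln(x_0 / x_n),   zeta = delta / sqrt(4*pi^2 + delta^2).
    import numpy as np

    def damping_from_peaks(peaks):
        peaks = np.asarray(peaks, dtype=float)
        n = len(peaks) - 1
        delta = np.log(peaks[0] / peaks[-1]) / n
        zeta = delta / np.sqrt(4.0 * np.pi**2 + delta**2)
        return delta, zeta

    # Synthetic check: free decay with known damping ratio zeta = 0.02.
    zeta_true, fn = 0.02, 5.0
    t = np.arange(0.0, 4.0, 1e-4)
    wd = 2.0 * np.pi * fn * np.sqrt(1.0 - zeta_true**2)
    x = np.exp(-zeta_true * 2.0 * np.pi * fn * t) * np.cos(wd * t)
    peak_idx = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    print(damping_from_peaks(x[peak_idx]))   # zeta estimate close to 0.02
    ```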

  19. Thermochemical Data for Propellant Ingredients and their Products of Explosion

    DTIC Science & Technology

    1949-12-01

    gases except perhaps at temperatures below 2000°K. The logarithms of all the equilibrium constants except Ko have been tabulated since these logarithms have almost constant first differences. Linear interpolation may lead to an error of a unit or two in the third decimal place for Ko, but the ... dissociation products OH, H and KO will be formed and at still higher temperatures the other dissociation products O2, O, N and C will begin to appear

  20. On the Existence of the Logarithmic Surface Layer in the Inner Core of Hurricanes

    DTIC Science & Technology

    2012-01-01

    Roger K. Smith (Meteorological Institute, University of Munich) and Michael T. Montgomery address the question of whether a "logarithmic surface layer", or log layer, exists in the boundary layer of the rapidly-rotating core of a hurricane. One such study argues that boundary-layer ...

  1. Logarithmic corrections to entropy of magnetically charged AdS4 black holes

    NASA Astrophysics Data System (ADS)

    Jeon, Imtak; Lal, Shailesh

    2017-11-01

    Logarithmic terms are quantum corrections to black hole entropy determined completely from classical data, thus providing a strong check for candidate theories of quantum gravity purely from physics in the infrared. We compute these terms in the entropy associated to the horizon of a magnetically charged extremal black hole in AdS4×S7 using the quantum entropy function and discuss the possibility of matching against recently derived microscopic expressions.

  2. Estimating ice-affected streamflow by extended Kalman filtering

    USGS Publications Warehouse

    Holtschlag, D.J.; Grewal, M.S.

    1998-01-01

    An extended Kalman filter was developed to automate the real-time estimation of ice-affected streamflow on the basis of routine measurements of stream stage and air temperature and on the relation between stage and streamflow during open-water (ice-free) conditions. The filter accommodates three dynamic modes of ice effects: sudden formation/ablation, stable ice conditions, and eventual elimination. The utility of the filter was evaluated by applying it to historical data from two long-term streamflow-gauging stations, St. John River at Dickey, Maine and Platte River at North Bend, Nebr. Results indicate that the filter was stable and that parameters converged for both stations, producing streamflow estimates that are highly correlated with published values. For the Maine station, logarithms of estimated streamflows are within 8% of the logarithms of published values 87.2% of the time during periods of ice effects and within 15% 96.6% of the time. Similarly, for the Nebraska station, logarithms of estimated streamflows are within 8% of the logarithms of published values 90.7% of the time and within 15% 97.7% of the time. In addition, the correlation between temporal updates and published streamflows on days of direct measurements at the Maine station was 0.777 and 0.998 for ice-affected and open-water periods, respectively; for the Nebraska station, corresponding correlations were 0.864 and 0.997.

  3. Efficient Queries of Stand-off Annotations for Natural Language Processing on Electronic Medical Records.

    PubMed

    Luo, Yuan; Szolovits, Peter

    2016-01-01

    In natural language processing, stand-off annotation uses the starting and ending positions of an annotation to anchor it to the text and stores the annotation content separately from the text. We address the fundamental problem of efficiently storing stand-off annotations when applying natural language processing on narrative clinical notes in electronic medical records (EMRs) and efficiently retrieving such annotations that satisfy position constraints. Efficient storage and retrieval of stand-off annotations can facilitate tasks such as mapping unstructured text to electronic medical record ontologies. We first formulate this problem into the interval query problem, for which the optimal query/update time is in general logarithmic. We next perform a tight time complexity analysis on the basic interval tree query algorithm and show its nonoptimality when applied to a collection of 13 query types from Allen's interval algebra. We then study two closely related state-of-the-art interval query algorithms, proposed query reformulations, and augmentations to the second algorithm. Our proposed algorithm achieves logarithmic time stabbing-max query time complexity and solves the stabbing-interval query tasks on all of Allen's relations in logarithmic time, attaining the theoretic lower bound. Updating time is kept logarithmic and the space requirement is kept linear at the same time. We also discuss interval management in external memory models and higher dimensions.
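
    As a generic illustration of logarithmic-time interval queries of the kind discussed above, the sketch below implements a classic centered interval tree with a stabbing query (report all intervals containing a point). It is not the augmented stabbing-max structure proposed by the authors.

    ```python
    # Sketch of a classic centered interval tree supporting stabbing queries
    # (report all intervals containing a point) in roughly O(log n + k) time
    # for reasonably balanced input. Generic illustration only.
    class IntervalTreeNode:
        def __init__(self, intervals):
            lows = sorted(lo for lo, hi in intervals)
            self.center = lows[len(lows) // 2]
            left, right, here = [], [], []
            for lo, hi in intervals:
                if hi < self.center:
                    left.append((lo, hi))
                elif lo > self.center:
                    right.append((lo, hi))
                else:
                    here.append((lo, hi))
            # Intervals overlapping the center, sorted two ways for fast scans.
            self.by_low = sorted(here)                          # ascending start
            self.by_high = sorted(here, key=lambda iv: -iv[1])  # descending end
            self.left = IntervalTreeNode(left) if left else None
            self.right = IntervalTreeNode(right) if right else None

        def stab(self, point, out):
            if point < self.center:
                for lo, hi in self.by_low:
                    if lo > point:
                        break
                    out.append((lo, hi))
                if self.left:
                    self.left.stab(point, out)
            elif point > self.center:
                for lo, hi in self.by_high:
                    if hi < point:
                        break
                    out.append((lo, hi))
                if self.right:
                    self.right.stab(point, out)
            else:
                out.extend(self.by_low)
            return out

    intervals = [(1, 5), (2, 3), (4, 10), (7, 9), (8, 12)]
    tree = IntervalTreeNode(intervals)
    print(tree.stab(8, []))   # -> intervals containing 8: (4, 10), (7, 9), (8, 12)
    ```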

  4. Efficient Queries of Stand-off Annotations for Natural Language Processing on Electronic Medical Records

    PubMed Central

    Luo, Yuan; Szolovits, Peter

    2016-01-01

    In natural language processing, stand-off annotation uses the starting and ending positions of an annotation to anchor it to the text and stores the annotation content separately from the text. We address the fundamental problem of efficiently storing stand-off annotations when applying natural language processing on narrative clinical notes in electronic medical records (EMRs) and efficiently retrieving such annotations that satisfy position constraints. Efficient storage and retrieval of stand-off annotations can facilitate tasks such as mapping unstructured text to electronic medical record ontologies. We first formulate this problem into the interval query problem, for which the optimal query/update time is in general logarithmic. We next perform a tight time complexity analysis on the basic interval tree query algorithm and show its nonoptimality when applied to a collection of 13 query types from Allen’s interval algebra. We then study two closely related state-of-the-art interval query algorithms, proposed query reformulations, and augmentations to the second algorithm. Our proposed algorithm achieves logarithmic time stabbing-max query time complexity and solves the stabbing-interval query tasks on all of Allen’s relations in logarithmic time, attaining the theoretic lower bound. Updating time is kept logarithmic and the space requirement is kept linear at the same time. We also discuss interval management in external memory models and higher dimensions. PMID:27478379

  5. Nonlinear interactions and their scaling in the logarithmic region of turbulent channels

    NASA Astrophysics Data System (ADS)

    Moarref, Rashad; Sharma, Ati S.; Tropp, Joel A.; McKeon, Beverley J.

    2014-11-01

    The nonlinear interactions in wall turbulence redistribute the turbulent kinetic energy across different scales and different wall-normal locations. To better understand these interactions in the logarithmic region of turbulent channels, we decompose the velocity into a weighted sum of resolvent modes (McKeon & Sharma, J. Fluid Mech., 2010). The resolvent modes represent the linear amplification mechanisms in the Navier-Stokes equations (NSE) and the weights represent the scaling influence of the nonlinearity. An explicit equation for the unknown weights is obtained by projecting the NSE onto the known resolvent modes (McKeon et al., Phys. Fluids, 2013). The weights of triad modes -the modes that directly interact via the quadratic nonlinearity in the NSE- are coupled via interaction coefficients that depend solely on the resolvent modes. We use the hierarchies of self-similar modes in the logarithmic region (Moarref et al., J. Fluid Mech., 2013) to extend the notion of triad modes to triad hierarchies. It is shown that the interaction coefficients for the triad modes that belong to a triad hierarchy follow an exponential function. These scalings can be used to better understand the interaction of flow structures in the logarithmic region and develop analytical results therein. The support of Air Force Office of Scientific Research under Grants FA 9550-09-1-0701 (P.M. Rengasamy Ponnappan) and FA 9550-12-1-0469 (P.M. Doug Smith) is gratefully acknowledged.

  6. A study of the eigenvectors of the vibrational modes in crystalline cytidine via high-pressure Raman spectroscopy.

    PubMed

    Lee, Scott A; Pinnick, David A; Anderson, A

    2015-01-01

    Raman spectroscopy has been used to study the eigenvectors and eigenvalues of the vibrational modes of crystalline cytidine at 295 K and high pressures by evaluating the logarithmic derivative of the vibrational frequency ω with respect to pressure P: [Formula: see text]. Crystalline samples of molecular materials have strong intramolecular bonds and weak intermolecular bonds. This hierarchy of bonding strengths causes the vibrational optical modes localized within a molecular unit ("internal" modes) to be relatively high in frequency while the modes in which the molecular units vibrate against each other ("external" modes) have relatively low frequencies. The value of the logarithmic derivative is a useful diagnostic probe of the nature of the eigenvector of the vibrational modes because stretching modes (which are predominantly internal to the molecule) have low logarithmic derivatives while external modes have higher logarithmic derivatives. In crystalline cytidine, the modes at 85.8, 101.4, and 110.6 cm(-1) are external in which the molecules of the unit cell vibrate against each other in either translational or librational motions (or some linear combination thereof). All of the modes above 320 cm(-1) are predominantly internal stretching modes. The remaining modes below 320 cm(-1) include external modes and internal modes, mostly involving either torsional or bending motions of groups of atoms within a molecule.
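
    The quantity described in words here, the logarithmic derivative of the mode frequency with respect to pressure, is conventionally written as follows; this is the standard identity, given that the paper's own formula is elided in this record.

    ```latex
    % Conventional form of the logarithmic pressure derivative of a mode
    % frequency \omega (standard identity; the abstract's own expression is
    % elided above as "[Formula: see text]").
    \frac{d\ln\omega}{dP} = \frac{1}{\omega}\,\frac{d\omega}{dP}
    ```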

  7. The asymptotic form of non-global logarithms, black disc saturation, and gluonic deserts

    NASA Astrophysics Data System (ADS)

    Neill, Duff

    2017-01-01

    We develop an asymptotic perturbation theory for the large logarithmic behavior of the non-linear integro-differential equation describing the soft correlations of QCD jet measurements, the Banfi-Marchesini-Smye (BMS) equation. This equation captures the late-time evolution of radiating color dipoles after a hard collision. This allows us to prove that at large values of the control variable (the non-global logarithm, a function of the infra-red energy scales associated with distinct hard jets in an event), the distribution has a Gaussian tail. We compute the decay width analytically, giving a closed form expression, and find it to be jet geometry independent, up to the number of legs of the dipole in the active jet. Enabling the asymptotic expansion is the correct perturbative seed, where we perturb around an ansatz encoding formally no real emissions, an intuition motivated by the buffer region found in jet dynamics. This must be supplemented with the correct application of the BFKL approximation to the BMS equation in collinear limits. Comparing to the asymptotics of the conformally related evolution equation encountered in small-x physics, the Balitsky-Kovchegov (BK) equation, we find that the asymptotic form of the non-global logarithms directly maps to the black-disc unitarity limit of the BK equation, despite the contrasting physical pictures. Indeed, we recover the equations of saturation physics in the final state dynamics of QCD.

  8. Graphical evaluation of complexometric titration curves.

    PubMed

    Guinon, J L

    1985-04-01

    A graphical method, based on logarithmic concentration diagrams, for construction, without any calculations, of complexometric titration curves is examined. The titration curves obtained for different kinds of unidentate, bidentate and quadridentate ligands clearly show why only chelating ligands are usually used in titrimetric analysis. The method has also been applied to two practical cases where unidentate ligands are used: (a) the complexometric determination of mercury(II) with halides and (b) the determination of cyanide with silver, which involves both a complexation and a precipitation system; for this purpose construction of the diagrams for the HgCl(2)/HgCl(+)/Hg(2+) and Ag(CN)(2)(-)/AgCN/CN(-) systems is considered in detail.

  9. Learning investment indicators through data extension

    NASA Astrophysics Data System (ADS)

    Dvořák, Marek

    2017-07-01

    Stock prices in the form of time series were analysed using single and multivariate statistical methods. After simple data preprocessing in the form of logarithmic differences, we augmented this univariate time series to a multivariate representation. This method uses sliding windows to calculate several dozen new variables with simple statistical tools such as first and second moments, as well as more complicated statistics such as autoregression coefficients and residual analysis, followed by an optional quadratic transformation that was further used for data extension. These were used as explanatory variables in a regularized logistic LASSO regression that tried to estimate a Buy-Sell Index (BSI) from real stock market data.
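
    A minimal sketch of the pipeline described above (logarithmic differences, sliding-window summary statistics, and an L1-regularized logistic regression) is given below. The window length, feature set, and labels are placeholders, not the author's Buy-Sell Index construction.

    ```python
    # Sketch: log-return features over sliding windows feeding an L1 (LASSO)
    # logistic regression. Window size, features, and labels are illustrative only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    WINDOW = 20

    def window_features(log_returns, window=WINDOW):
        rows = []
        for i in range(window, len(log_returns)):
            w = log_returns[i - window:i]
            # first/second moments plus a lag-1 autocorrelation summary
            rows.append([w.mean(), w.std(), w.min(), w.max(),
                         np.corrcoef(w[:-1], w[1:])[0, 1]])
        return np.asarray(rows)

    rng = np.random.default_rng(3)
    prices = 100.0 * np.exp(np.cumsum(rng.normal(0.0, 0.01, 1000)))
    r = np.diff(np.log(prices))                  # logarithmic differences
    X = window_features(r)
    y = (r[WINDOW:] > 0).astype(int)             # hypothetical up/down label

    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
    clf.fit(X, y)
    print(clf.coef_)                             # inspect which features survive the L1 penalty
    ```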

  10. Method of making dense, conformal, ultra-thin cap layers for nanoporous low-k ILD by plasma assisted atomic layer deposition

    DOEpatents

    Jiang, Ying-Bing [Albuquerque, NM; Cecchi, Joseph L [Albuquerque, NM; Brinker, C Jeffrey [Albuquerque, NM

    2011-05-24

    Barrier layers and methods for forming barrier layers on a porous layer are provided. The methods can include chemically adsorbing a plurality of first molecules on a surface of the porous layer in a chamber and forming a first layer of the first molecules on the surface of the porous layer. A plasma can then be used to react a plurality of second molecules with the first layer of first molecules to form a first layer of a barrier layer. The barrier layers can seal the pores of the porous material, function as a diffusion barrier, be conformal, and/or have a negligible impact on the overall ILD k value of the porous material.

  11. Where to Restore Ecological Connectivity? Detecting Barriers and Quantifying Restoration Benefits

    PubMed Central

    McRae, Brad H.; Hall, Sonia A.; Beier, Paul; Theobald, David M.

    2012-01-01

    Landscape connectivity is crucial for many ecological processes, including dispersal, gene flow, demographic rescue, and movement in response to climate change. As a result, governmental and non-governmental organizations are focusing efforts to map and conserve areas that facilitate movement to maintain population connectivity and promote climate adaptation. In contrast, little focus has been placed on identifying barriers—landscape features which impede movement between ecologically important areas—where restoration could most improve connectivity. Yet knowing where barriers most strongly reduce connectivity can complement traditional analyses aimed at mapping best movement routes. We introduce a novel method to detect important barriers and provide example applications. Our method uses GIS neighborhood analyses in conjunction with effective distance analyses to detect barriers that, if removed, would significantly improve connectivity. Applicable in least-cost, circuit-theoretic, and simulation modeling frameworks, the method detects both complete (impermeable) barriers and those that impede but do not completely block movement. Barrier mapping complements corridor mapping by broadening the range of connectivity conservation alternatives available to practitioners. The method can help practitioners move beyond maintaining currently important areas to restoring and enhancing connectivity through active barrier removal. It can inform decisions on trade-offs between restoration and protection; for example, purchasing an intact corridor may be substantially more costly than restoring a barrier that blocks an alternative corridor. And it extends the concept of centrality to barriers, highlighting areas that most diminish connectivity across broad networks. Identifying which modeled barriers have the greatest impact can also help prioritize error checking of land cover data and collection of field data to improve connectivity maps. Barrier detection provides a different way to view the landscape, broadening thinking about connectivity and fragmentation while increasing conservation options. PMID:23300719

  12. Electroweak gauge-boson production at small q T : Infrared safety from the collinear anomaly

    NASA Astrophysics Data System (ADS)

    Becher, Thomas; Neubert, Matthias; Wilhelm, Daniel

    2012-02-01

    Using methods from effective field theory, we develop a novel, systematic framework for the calculation of the cross sections for electroweak gauge-boson production at small and very small transverse momentum q_T, in which large logarithms of the scale ratio M_V/q_T are resummed to all orders. These cross sections receive logarithmically enhanced corrections from two sources: the running of the hard matching coefficient and the collinear factorization anomaly. The anomaly leads to the dynamical generation of a non-perturbative scale q_* ~ M_V exp[-const/α_s(M_V)], which protects the processes from receiving large long-distance hadronic contributions. Expanding the cross sections in either α_s or q_T generates strongly divergent series, which must be resummed. As a by-product, we obtain an explicit non-perturbative expression for the intercept of the cross sections at q_T = 0, including the normalization and first-order α_s(q_*) correction. We perform a detailed numerical comparison of our predictions with the available data on the transverse-momentum distribution in Z-boson production at the Tevatron and LHC.

  13. Legal barriers in accessing opioid medicines: results of the ATOME quick scan of national legislation of eastern European countries.

    PubMed

    Vranken, Marjolein J M; Mantel-Teeuwisse, Aukje K; Jünger, Saskia; Radbruch, Lukas; Lisman, John; Scholten, Willem; Payne, Sheila; Lynch, Tom; Schutjens, Marie-Hélène D B

    2014-12-01

    Overregulation of controlled medicines is one of the factors contributing to limited access to opioid medicines. The purpose of this study was to identify legal barriers to access to opioid medicines in 12 Eastern European countries participating in the Access to Opioid Medication in Europe project, using a quick scan method. A quick scan method to identify legal barriers was developed focusing on eight different categories of barriers. Key experts in 12 European countries were requested to send relevant legislation. Legislation was quick scanned using World Health Organization guidelines. Overly restrictive provisions and provisions that contain stigmatizing language and incorrect definitions were identified. The selected provisions were scored into two categories: 1) barrier and 2) uncertain, and reviewed by two authors. A barrier was recorded if both authors agreed the selected provision to be a barrier (Category 1). National legislation was obtained from 11 of 12 countries. All 11 countries showed legal barriers in the areas of prescribing (most frequently observed barrier). Ten countries showed barriers in the areas of dispensing and showed stigmatizing language and incorrect use of definitions in their legislation. Most barriers were identified in the legislation of Bulgaria, Greece, Lithuania, Serbia, and Slovenia. The Cypriot legislation showed the fewest total number of barriers. The selected countries have in common, as their main barriers, prescribing and dispensing restrictions, the use of stigmatizing language, and incorrect use of definitions. The practical impact of these barriers identified using a quick scan method needs to be validated by other means. Copyright © 2014 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.

  14. Structure-function relationships using spectral-domain optical coherence tomography: comparison with scanning laser polarimetry.

    PubMed

    Aptel, Florent; Sayous, Romain; Fortoul, Vincent; Beccat, Sylvain; Denis, Philippe

    2010-12-01

    To evaluate and compare the regional relationships between visual field sensitivity and retinal nerve fiber layer (RNFL) thickness as measured by spectral-domain optical coherence tomography (OCT) and scanning laser polarimetry. Prospective cross-sectional study. One hundred and twenty eyes of 120 patients (40 with healthy eyes, 40 with suspected glaucoma, and 40 with glaucoma) were tested on Cirrus-OCT, GDx VCC, and standard automated perimetry. Raw data on RNFL thickness were extracted for 256 peripapillary sectors of 1.40625 degrees each for the OCT measurement ellipse and 64 peripapillary sectors of 5.625 degrees each for the GDx VCC measurement ellipse. Correlations between peripapillary RNFL thickness in 6 sectors and visual field sensitivity in the 6 corresponding areas were evaluated using linear and logarithmic regression analysis. Receiver operating curve areas were calculated for each instrument. With spectral-domain OCT, the correlations (r(2)) between RNFL thickness and visual field sensitivity ranged from 0.082 (nasal RNFL and corresponding visual field area, linear regression) to 0.726 (supratemporal RNFL and corresponding visual field area, logarithmic regression). By comparison, with GDx-VCC, the correlations ranged from 0.062 (temporal RNFL and corresponding visual field area, linear regression) to 0.362 (supratemporal RNFL and corresponding visual field area, logarithmic regression). In pairwise comparisons, these structure-function correlations were generally stronger with spectral-domain OCT than with GDx VCC and with logarithmic regression than with linear regression. The largest areas under the receiver operating curve were seen for OCT superior thickness (0.963 ± 0.022; P < .001) in eyes with glaucoma and for OCT average thickness (0.888 ± 0.072; P < .001) in eyes with suspected glaucoma. The structure-function relationship was significantly stronger with spectral-domain OCT than with scanning laser polarimetry, and was better expressed logarithmically than linearly. Measurements with these 2 instruments should not be considered to be interchangeable. Copyright © 2010 Elsevier Inc. All rights reserved.
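
    The linear versus logarithmic comparison reported here amounts to regressing sensitivity on retinal nerve fiber layer thickness directly and on its logarithm, and comparing the resulting r(2) values. The sketch below does this on synthetic data; it is not the study's measurements or exact regression convention.

    ```python
    # Sketch: comparing linear and logarithmic structure-function fits,
    # i.e. sensitivity ~ a + b*thickness versus sensitivity ~ a + b*log(thickness).
    # Synthetic data; not the OCT/GDx measurements from the study.
    import numpy as np

    def r_squared(x, y):
        slope, intercept = np.polyfit(x, y, 1)
        resid = y - (slope * x + intercept)
        return 1.0 - resid.var() / y.var()

    rng = np.random.default_rng(5)
    thickness = rng.uniform(40.0, 140.0, 120)                               # RNFL thickness, microns
    sensitivity = 8.0 * np.log(thickness) - 10.0 + rng.normal(0, 1.0, 120)  # dB-like scale

    print("linear fit      r^2:", r_squared(thickness, sensitivity))
    print("logarithmic fit r^2:", r_squared(np.log(thickness), sensitivity))
    ```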

  15. Devices for overcoming biological barriers: the use of physical forces to disrupt the barriers.

    PubMed

    Mitragotri, Samir

    2013-01-01

    Overcoming biological barriers including skin, mucosal membranes, blood brain barrier as well as cell and nuclear membrane constitutes a key hurdle in the field of drug delivery. While these barriers serve the natural protective function in the body, they limit delivery of drugs into the body. A variety of methods have been developed to overcome these barriers including formulations, targeting peptides and device-based technologies. This review focuses on the use of physical methods including acoustic devices, electric devices, high-pressure devices, microneedles and optical devices for disrupting various barriers in the body including skin and other membranes. A summary of the working principles of these devices and their ability to enhance drug delivery is presented. Copyright © 2012. Published by Elsevier B.V.

  16. Direct control and characterization of a Schottky barrier by scanning tunneling microscopy

    NASA Technical Reports Server (NTRS)

    Bell, L. D.; Kaiser, W. J.; Hecht, M. H.; Grunthaner, F. J.

    1988-01-01

    Scanning tunneling microscopy (STM) methods are used to directly control the barrier height of a metal tunnel tip-semiconductor tunnel junction. Barrier behavior is measured by tunnel current-voltage spectroscopy and compared to theory. A unique surface preparation method is used to prepare a low surface state density Si surface. Control of band bending with this method enables STM investigation of semiconductor subsurface properties.

  17. Fredholm-Volterra Integral Equation with a Generalized Singular Kernel and its Numerical Solutions

    NASA Astrophysics Data System (ADS)

    El-Kalla, I. L.; Al-Bugami, A. M.

    2010-11-01

    In this paper, the existence and uniqueness of solution of the Fredholm-Volterra integral equation (F-VIE), with a generalized singular kernel, are discussed and proved in the space L2(Ω)×C(0,T). The Fredholm integral term (FIT) is considered in position while the Volterra integral term (VIT) is considered in time. Using a numerical technique we obtain a system of Fredholm integral equations (SFIEs). This system of integral equations can be reduced to a linear algebraic system (LAS) of equations by using two different methods: the Toeplitz matrix method and the Product Nyström method. Numerical examples are considered when the generalized kernel takes the following forms: Carleman function, logarithmic form, Cauchy kernel, and Hilbert kernel.

  18. Optical Logarithmic Transformation of Speckle Images with Bacteriorhodopsin Films

    NASA Technical Reports Server (NTRS)

    Downie, John D.

    1995-01-01

    The application of logarithmic transformations to speckle images is sometimes desirable in converting the speckle noise distribution into an additive, constant-variance noise distribution. The optical transmission properties of some bacteriorhodopsin films are well suited to implement such a transformation optically in a parallel fashion. I present experimental results of the optical conversion of a speckle image into a transformed image with signal-independent noise statistics, using the real-time photochromic properties of bacteriorhodopsin. The original and transformed noise statistics are confirmed by histogram analysis.

  19. The exponentiated Hencky energy: anisotropic extension and case studies

    NASA Astrophysics Data System (ADS)

    Schröder, Jörg; von Hoegen, Markus; Neff, Patrizio

    2017-10-01

    In this paper we propose an anisotropic extension of the isotropic exponentiated Hencky energy, based on logarithmic strain invariants. Unlike other elastic formulations, the isotropic exponentiated Hencky elastic energy has been derived solely on differential geometric grounds, involving the geodesic distance of the deformation gradient F to the group of rotations. We formally extend this approach towards anisotropy by defining additional anisotropic logarithmic strain invariants with the help of suitable structural tensors and consider our findings for selected case studies.

  20. A law of iterated logarithm for the subfractional Brownian motion and an application.

    PubMed

    Qi, Hongsheng; Yan, Litan

    2018-01-01

    Let [Formula: see text] be a sub-fractional Brownian motion with Hurst index [Formula: see text]. In this paper, we give a local law of the iterated logarithm of the form [Formula: see text] almost surely, for all [Formula: see text], where [Formula: see text] for [Formula: see text]. As an application, we introduce the [Formula: see text]-variation of [Formula: see text] driven by [Formula: see text] [Formula: see text] with [Formula: see text].

  1. Logarithmic singularities and quantum oscillations in magnetically doped topological insulators

    NASA Astrophysics Data System (ADS)

    Nandi, D.; Sodemann, Inti; Shain, K.; Lee, G. H.; Huang, K.-F.; Chang, Cui-Zu; Ou, Yunbo; Lee, S. P.; Ward, J.; Moodera, J. S.; Kim, P.; Yacoby, A.

    2018-02-01

    We report magnetotransport measurements on magnetically doped (Bi,Sb)2Te3 films grown by molecular beam epitaxy. In Hall bar devices, we observe logarithmic dependence of transport coefficients in temperature and bias voltage which can be understood to arise from electron-electron interaction corrections to the conductivity and self-heating. Submicron scale devices exhibit intriguing quantum oscillations at high magnetic fields with dependence on bias voltage. The observed quantum oscillations can be attributed to bulk and surface transport.

  2. An Estimation of the Logarithmic Timescale in Ergodic Dynamics

    NASA Astrophysics Data System (ADS)

    Gomez, Ignacio S.

    An estimation of the logarithmic timescale in quantum systems having an ergodic dynamics in the semiclassical limit is presented. The estimation is based on an extension of Krieger’s finite generator theorem for discretized σ-algebras and on the time rescaling property of the Kolmogorov-Sinai entropy. The results are in agreement with those obtained in the literature but with simpler mathematics and within the context of ergodic theory. Moreover, some consequences of Poincaré’s recurrence theorem are also explored.

  3. Natural Strain

    NASA Technical Reports Server (NTRS)

    Freed, Alan D.

    1997-01-01

    Logarithmic strain is the preferred measure of strain used by materials scientists, who typically refer to it as the "true strain." It was Nadai who gave it the name "natural strain," which seems more appropriate. This strain measure was proposed by Ludwik for the one-dimensional extension of a rod with length l. It was defined via the integral of dl/l, to which Ludwik gave the name "effective specific strain." Today, it is named after Hencky, who extended Ludwik's measure to three-dimensional analysis by defining logarithmic strains for the three principal directions.
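
    The one-dimensional definition attributed to Ludwik, and Hencky's extension to the three principal directions, can be written compactly as follows (standard notation).

    ```latex
    % Natural (logarithmic) strain of a rod stretched from length l_0 to l,
    % and Hencky's extension along the three principal stretches \lambda_i
    % (standard notation, not quoted from the article).
    \varepsilon = \int_{l_0}^{l} \frac{\mathrm{d}l}{l} = \ln\frac{l}{l_0},
    \qquad
    \varepsilon_i = \ln \lambda_i \quad (i = 1, 2, 3).
    ```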

  4. Perceived Barriers and Facilitators to School Social Work Practice: A Mixed-Methods Study

    ERIC Educational Resources Information Center

    Teasley, Martell; Canifield, James P.; Archuleta, Adrian J.; Crutchfield, Jandel; Chavis, Annie McCullough

    2012-01-01

    Understanding barriers to practice is a growing area within school social work research. Using a convenience sample of 284 school social workers, this study replicates the efforts of a mixed-method investigation designed to identify barriers and facilitators to school social work practice within different geographic locations. Time constraints and…

  5. A reaction-diffusion model of the Darien Gap Sterile Insect Release Method

    NASA Astrophysics Data System (ADS)

    Alford, John G.

    2015-05-01

    The Sterile Insect Release Method (SIRM) is used as a biological control for invasive insect species. SIRM involves introducing large quantities of sterilized male insects into a wild population of invading insects. A fertile/sterile mating produces offspring that are not viable and the wild insect population will eventually be eradicated. A U.S. government program maintains a permanent sterile fly barrier zone in the Darien Gap between Panama and Colombia to control the screwworm fly (Cochliomyia hominivorax), an insect that feeds off of living tissue in mammals and has devastating effects on livestock. This barrier zone is maintained by regular releases of massive quantities of sterilized male screwworm flies from aircraft. We analyze a reaction-diffusion model of the Darien Gap barrier zone. Simulations of the model equations yield two types of spatially inhomogeneous steady-state solutions representing a sterile fly barrier that does not prevent invasion and a barrier that does prevent invasion. We investigate steady-state solutions using both phase plane methods and monotone iteration methods and describe how barrier width and the sterile fly release rate affect steady-state behavior.
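
    A minimal numerical sketch of a one-dimensional reaction-diffusion barrier of this general kind is shown below. The kinetic term, boundary treatment, and all parameter values are hypothetical stand-ins chosen for illustration, not the model analyzed in the paper.

    ```python
    # Generic 1-D reaction-diffusion sketch of a sterile-release barrier zone.
    # The kinetics (logistic growth scaled by the fertile-mating fraction w/(w+S),
    # plus constant mortality) and all parameters are hypothetical illustrations.
    import numpy as np

    D, r, m = 1.0, 1.0, 0.3          # diffusion, growth, mortality (made up)
    L, dx, dt, steps = 100.0, 0.5, 0.05, 8000
    x = np.arange(0.0, L + dx, dx)

    S = np.where((x > 40.0) & (x < 60.0), 5.0, 0.0)   # sterile flies in the barrier zone
    w = np.where(x < 20.0, 1.0, 0.0)                  # wild population invading from the left

    for _ in range(steps):
        lap = (np.roll(w, 1) - 2.0 * w + np.roll(w, -1)) / dx**2
        lap[0] = lap[-1] = 0.0                        # ignore the wrapped-around ends
        growth = r * w * (w / (w + S + 1e-12)) * (1.0 - w) - m * w
        w = np.clip(w + dt * (D * lap + growth), 0.0, 1.0)
        w[0] = 1.0                                    # constant source of invaders on the left

    print("max density beyond the barrier:", w[x > 60.0].max())
    ```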

  6. Parametric State Space Structuring

    NASA Technical Reports Server (NTRS)

    Ciardo, Gianfranco; Tilgner, Marco

    1997-01-01

    Structured approaches based on Kronecker operators for the description and solution of the infinitesimal generator of a continuous-time Markov chains are receiving increasing interest. However, their main advantage, a substantial reduction in the memory requirements during the numerical solution, comes at a price. Methods based on the "potential state space" allocate a probability vector that might be much larger than actually needed. Methods based on the "actual state space", instead, have an additional logarithmic overhead. We present an approach that realizes the advantages of both methods with none of their disadvantages, by partitioning the local state spaces of each submodel. We apply our results to a model of software rendezvous, and show how they reduce memory requirements while, at the same time, improving the efficiency of the computation.

  7. The algebraic decoding of the (41, 21, 9) quadratic residue code

    NASA Technical Reports Server (NTRS)

    Reed, Irving S.; Truong, T. K.; Chen, Xuemin; Yin, Xiaowei

    1992-01-01

    A new algebraic approach for decoding the quadratic residue (QR) codes, in particular the (41, 21, 9) QR code, is presented. The key ideas behind this decoding technique are a systematic application of the Sylvester resultant method to the Newton identities associated with the code syndromes to find the error-locator polynomial, and next a method for determining error locations by solving certain quadratic, cubic and quartic equations over GF(2^m) in a new way which uses Zech's logarithms for the arithmetic. The algorithms developed here are suitable for implementation in a programmable microprocessor or special-purpose VLSI chip. It is expected that the algebraic methods developed here can apply generally to other codes such as the BCH and Reed-Solomon codes.
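
    As a small illustration of the Zech-logarithm arithmetic mentioned above, the sketch below builds a Zech table for GF(2^4) with the primitive polynomial x^4 + x + 1; the field and polynomial are chosen here for illustration only, and the decoder's actual field may differ:

    ```python
    # Build a Zech-logarithm table for GF(2^4), then use it to add two field elements
    # that are given by their discrete logarithms (powers of the primitive element alpha).
    PRIM_POLY = 0b10011   # x^4 + x + 1 (illustrative choice)
    M, Q = 4, 16

    # Discrete-log tables: power[i] = alpha^i as a bit pattern, log[element] = i.
    power, log = [0] * (Q - 1), {}
    a = 1
    for i in range(Q - 1):
        power[i] = a
        log[a] = i
        a <<= 1
        if a & Q:
            a ^= PRIM_POLY

    # Zech logarithm Z(n): alpha^Z(n) = 1 + alpha^n (undefined where 1 + alpha^n = 0).
    zech = {n: log[1 ^ power[n]] for n in range(Q - 1) if (1 ^ power[n]) != 0}

    def add_logs(i, j):
        """Return log(alpha^i + alpha^j), or None if the sum is zero."""
        if i == j:
            return None                      # alpha^i + alpha^i = 0 in characteristic 2
        d = (j - i) % (Q - 1)
        return (i + zech[d]) % (Q - 1)

    # Example: express alpha^3 + alpha^7 again as a power of alpha.
    k = add_logs(3, 7)
    assert power[k] == power[3] ^ power[7]
    ```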

  8. Study on Hyperspectral Characteristics and Estimation Model of Soil Mercury Content

    NASA Astrophysics Data System (ADS)

    Liu, Jinbao; Dong, Zhenyu; Sun, Zenghui; Ma, Hongchao; Shi, Lei

    2017-12-01

    In this study, the mercury content of 44 soil samples from the Guan Zhong area of Shaanxi Province was used as the data source, and the reflectance spectra of the soils were obtained with an ASD Field Spec HR instrument (350-2500 nm). The reflection characteristics of different mercury contents and the effect of different pre-treatment methods on the soil heavy-metal spectral inversion model were compared. First-order differential, second-order differential and logarithmic transformations of the reflectance were carried out after NOR, MSC and SNV pre-treatments, and the bands sensitive to mercury content under the different mathematical transformations were selected. A hyperspectral estimation model was then established by regression. The chemical analysis shows that there is serious Hg pollution in the study area. The results show that: (1) the reflectivity decreases with increasing mercury content, and the sensitive regions for mercury are located at 392-455 nm, 923-1040 nm and 1806-1969 nm; (2) the NOR, MSC and SNV transformations combined with differential transformations can enhance the heavy-metal information in the soil spectra, and combining highly correlated bands can improve the stability and prediction ability of the model; (3) the partial least squares regression model based on the logarithm of the original reflectance performs best and has the highest precision (Rc² = 0.9912, RMSEC = 0.665; Rv² = 0.9506, RMSEP = 1.93), enabling rapid prediction of the mercury content in this region.
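
    A minimal sketch of the kind of partial least squares regression on log-transformed reflectance described above; the synthetic data, array shapes, and component count are assumptions, not the study's data or calibration:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in data: 44 samples x 2151 bands (350-2500 nm), plus Hg contents.
    rng = np.random.default_rng(0)
    reflectance = rng.uniform(0.05, 0.6, size=(44, 2151))
    hg = rng.uniform(0.1, 5.0, size=44)

    # Log-transform the reflectance, as in the best-performing model of the study.
    X = np.log10(reflectance)

    X_train, X_test, y_train, y_test = train_test_split(X, hg, test_size=0.25, random_state=0)

    pls = PLSRegression(n_components=5)   # component count is an arbitrary choice here
    pls.fit(X_train, y_train)
    y_pred = pls.predict(X_test).ravel()

    rmsep = float(np.sqrt(np.mean((y_test - y_pred) ** 2)))
    print(f"RMSEP on held-out samples: {rmsep:.3f}")
    ```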

  9. Structuring unbreakable hydrophobic barriers in paper

    NASA Astrophysics Data System (ADS)

    Nargang, Tobias M.; Kotz, Frederik; Rapp, Bastian E.

    2018-02-01

    Hydrophobic barriers are one of the key elements of microfluidic paper-based analytical devices (μPADs). μPADs are simple and cost efficient, and tests can be carried out on them without the need for high-standard laboratories. To carry out such a test, a method is needed to create stable hydrophobic barriers. Commonly used methods like printing wax or polystyrene have the major drawback that these barriers are stiff and break if bent, which means they will no longer be able to retain a liquid sample. Here we present silanes to structure hydrophobic barriers via polycondensation and show a silanization method which combines the advantages of flexible silane/siloxane layers with the short processing times of UV-light based structuring. The barriers are created by using methoxy silanes which are mixed with a photo acid generator (PAG) as photoinitiator. A photosensitizer was also added to the mixture to increase the effectiveness of the PAG. After the PAG is activated by UV light, the silane is hydrolyzed and coupled to the cellulose via polycondensation. The created hydrophobic barriers are highly stable and do not break when bent.

  10. Circularly polarized antennas for active holographic imaging through barriers

    DOEpatents

    McMakin, Douglas L [Richland, WA; Severtsen, Ronald H [Richland, WA; Lechelt, Wayne M [West Richland, WA; Prince, James M [Kennewick, WA

    2011-07-26

    Circularly-polarized antennas and their methods of use for active holographic imaging through barriers. The antennas are dielectrically loaded to optimally match the dielectric constant of the barrier through which images are to be produced. The dielectric loading helps to remove barrier-front surface reflections and to couple electromagnetic energy into the barrier.

  11. Device and method for producing a containment barrier underneath and around in-situ buried waste

    DOEpatents

    Gardner, Bradley M.; Smith, Ann M.; Hanson, Richard W.; Hodges, Richard T.

    1998-01-01

    An apparatus for building a horizontal underground barrier by cutting through soil and depositing a slurry, preferably one which cures into a hardened material. The apparatus includes a digging means for cutting and removing soil to create a void under the surface of the ground and injection means for inserting barrier-forming material into the void. In one embodiment, the digging means is a continuous cutting chain. Mounted on the continuous cutting chain are cutter teeth for cutting through soil and discharge paddles for removing the loosened soil. This invention includes a barrier placement machine, a method for building an underground horizontal containment barrier using the barrier placement machine, and the underground containment system. Preferably the underground containment barrier goes underneath and around the site to be contained in a bathtub-type containment.

  12. A theoretical stochastic control framework for adapting radiotherapy to hypoxia

    NASA Astrophysics Data System (ADS)

    Saberian, Fatemeh; Ghate, Archis; Kim, Minsun

    2016-10-01

    Hypoxia, that is, insufficient oxygen partial pressure, is a known cause of reduced radiosensitivity in solid tumors, and especially in head-and-neck tumors. It is thus believed to adversely affect the outcome of fractionated radiotherapy. Oxygen partial pressure varies spatially and temporally over the treatment course and exhibits inter-patient and intra-tumor variation. Emerging advances in non-invasive functional imaging offer the future possibility of adapting radiotherapy plans to this uncertain spatiotemporal evolution of hypoxia over the treatment course. We study the potential benefits of such adaptive planning via a theoretical stochastic control framework using computer-simulated evolution of hypoxia on computer-generated test cases in head-and-neck cancer. The exact solution of the resulting control problem is computationally intractable. We develop an approximation algorithm, called certainty equivalent control, that calls for the solution of a sequence of convex programs over the treatment course; dose-volume constraints are handled using a simple constraint generation method. These convex programs are solved using an interior point algorithm with a logarithmic barrier via Newton’s method and backtracking line search. Convexity of various formulations in this paper is guaranteed by a sufficient condition on radiobiological tumor-response parameters. This condition is expected to hold for head-and-neck tumors and for other similarly responding tumors where the linear dose-response parameter is larger than the quadratic dose-response parameter. We perform numerical experiments on four test cases by using a first-order vector autoregressive process with exponential and rational-quadratic covariance functions from the spatiotemporal statistics literature to simulate the evolution of hypoxia. Our results suggest that dynamic planning could lead to a considerable improvement in the number of tumor cells remaining at the end of the treatment course. Through these simulations, we also gain insights into when and why dynamic planning is likely to yield the largest benefits.
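
    As a generic illustration of the interior-point machinery described above, the sketch below minimizes a small linear objective under linear inequality constraints using a logarithmic barrier, Newton's method, and backtracking line search; the problem data, tolerances, and parameters are illustrative assumptions, not the paper's treatment-planning formulation:

    ```python
    import numpy as np

    # Minimize c @ x subject to A x <= b via a logarithmic barrier with damped Newton steps.
    rng = np.random.default_rng(1)
    n = 5
    A = rng.normal(size=(20, n))
    b = rng.uniform(1.0, 2.0, size=20)             # x = 0 is strictly feasible since b > 0
    A = np.vstack([A, np.eye(n), -np.eye(n)])      # box constraints keep the problem bounded
    b = np.concatenate([b, 5.0 * np.ones(2 * n)])
    c = rng.normal(size=n)
    m = A.shape[0]

    def barrier(x, t):
        """Barrier objective t * c @ x - sum(log(slack))."""
        return t * c @ x - np.sum(np.log(b - A @ x))

    x, t, mu = np.zeros(n), 1.0, 10.0
    while m / t > 1e-6:                            # outer loop: duality-gap bound m/t
        for _ in range(50):                        # inner loop: Newton's method
            slack = b - A @ x
            grad = t * c + A.T @ (1.0 / slack)
            hess = A.T @ ((1.0 / slack**2)[:, None] * A)
            dx = np.linalg.solve(hess, -grad)
            if -grad @ dx / 2.0 < 1e-9:            # Newton decrement stopping test
                break
            step, alpha, beta = 1.0, 0.25, 0.5
            while (np.min(b - A @ (x + step * dx)) <= 0.0 or
                   barrier(x + step * dx, t) > barrier(x, t) + alpha * step * grad @ dx):
                step *= beta                       # backtracking line search
            x = x + step * dx
        t *= mu                                    # tighten the barrier

    print("approximate minimizer:", np.round(x, 4))
    ```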

  13. Ion-barrier for memristors/ReRAM and methods thereof

    DOEpatents

    Haase, Gad S.

    2017-11-28

    The present invention relates to memristive devices including a resistance-switching element and a barrier element. In particular examples, the barrier element is a monolayer of a transition metal chalcogenide that sufficiently inhibits diffusion of oxygen atoms or ions out of the switching element. As the location of these atoms and ions determines the state of the device, inhibiting diffusion would provide enhanced state retention and device reliability. Other types of barrier elements, as well as methods for forming such elements, are described herein.

  14. Entanglement between random and clean quantum spin chains

    NASA Astrophysics Data System (ADS)

    Juhász, Róbert; Kovács, István A.; Roósz, Gergő; Iglói, Ferenc

    2017-08-01

    The entanglement entropy in clean, as well as in random, quantum spin chains has a logarithmic size dependence at the critical point. Here, we study the entanglement of composite systems that consist of a clean subsystem and a random subsystem, both being critical. In the composite antiferromagnetic XX chain with a sharp interface, the entropy is found to grow in a double-logarithmic fashion, S ~ ln ln(L), where L is the length of the chain. We have also considered an extended defect at the interface, where the disorder penetrates into the homogeneous region in such a way that the strength of disorder decays with the distance l from the contact point as ~ l^{-κ}. For κ < 1/2, the entropy scales as S(κ) ≃ [ln 2 (1 - 2κ)/6] ln L, while for κ ≥ 1/2, when the extended interface defect is an irrelevant perturbation, we recover the double-logarithmic scaling. These results are explained through strong-disorder RG arguments.

  15. The effect of multiplicity of stellar encounters and the diffusion coefficients in a locally homogeneous three-dimensional stellar medium: Removing the classical divergence

    NASA Astrophysics Data System (ADS)

    Rastorguev, A. S.; Utkin, N. D.; Chumak, O. V.

    2017-08-01

    Agekyan's λ-factor that allows for the effect of multiplicity of stellar encounters with large impact parameters has been used for the first time to directly calculate the diffusion coefficients in the phase space of a stellar system. Simple estimates show that the cumulative effect, i.e., the total contribution of distant encounters to the change in the velocity of a test star, given the multiplicity of stellar encounters, is finite, and the logarithmic divergence inherent in the classical description of diffusion is removed, as was shown previously by Kandrup using a different, more complex approach. In this case, the expressions for the diffusion coefficients, as in the classical description, contain the logarithm of the ratio of two independent quantities: the mean interparticle distance and the impact parameter of a close encounter. However, the physical meaning of this logarithmic factor changes radically: it reflects not the divergence but the presence of two characteristic length scales inherent in the stellar medium.

  16. Parallel, exhaustive processing underlies logarithmic search functions: Visual search with cortical magnification.

    PubMed

    Wang, Zhiyuan; Lleras, Alejandro; Buetti, Simona

    2018-04-17

    Our lab recently found evidence that efficient visual search (with a fixed target) is characterized by logarithmic Reaction Time (RT) × Set Size functions whose steepness is modulated by the similarity between target and distractors. To determine whether this pattern of results was based on low-level visual factors uncontrolled by previous experiments, we minimized the possibility of crowding effects in the display, compensated for the cortical magnification factor by magnifying search items based on their eccentricity, and compared search performance on such displays to performance on displays without magnification compensation. In both cases, the RT × Set Size functions were found to be logarithmic, and the modulation of the log slopes by target-distractor similarity was replicated. Consistent with previous results in the literature, cortical magnification compensation eliminated most target eccentricity effects. We conclude that the log functions and their modulation by target-distractor similarity relations reflect a parallel exhaustive processing architecture for early vision.

  17. SU(3) Landau gauge gluon and ghost propagators using the logarithmic lattice gluon field definition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ilgenfritz, Ernst-Michael; Humboldt-Universitaet zu Berlin, Institut fuer Physik, 12489 Berlin; Menz, Christoph

    2011-03-01

    We study the Landau gauge gluon and ghost propagators of SU(3) gauge theory, employing the logarithmic definition for the lattice gluon fields and implementing the corresponding form of the Faddeev-Popov matrix. This is necessary in order to consistently compare lattice data for the bare propagators with that of higher-loop numerical stochastic perturbation theory. In this paper we provide such a comparison, and introduce what is needed for an efficient lattice study. When comparing our data for the logarithmic definition to that of the standard lattice Landau gauge we clearly see the propagators to be multiplicatively related. The data of the associated ghost-gluon coupling matches up almost completely. For the explored lattice spacings and sizes, discretization artifacts, finite-size, and Gribov-copy effects are small. At weak coupling and large momentum, the bare propagators and the ghost-gluon coupling are seen to be approached by those of higher-order numerical stochastic perturbation theory.

  18. Impact of long-range interactions on the disordered vortex lattice

    NASA Astrophysics Data System (ADS)

    Koopmann, J. A.; Geshkenbein, V. B.; Blatter, G.

    2003-07-01

    The interaction between the vortex lines in a type-II superconductor is mediated by currents. In the absence of transverse screening this interaction is long ranged, stiffening up the vortex lattice as expressed by the dispersive elastic moduli. The effect of disorder is strongly reduced, resulting in a mean-squared displacement correlator ⟨[u(R,L) − u(0,0)]²⟩ characterized by a mere logarithmic growth with distance. Finite screening cuts the interaction on the scale of the London penetration depth λ and limits the above behavior to distances R < λ. Using a functional renormalization-group approach, we derive the flow equation for the disorder correlation function and calculate the disorder-averaged mean-squared relative displacement ∝ ln^{2σ}(R/a₀). The logarithmic growth (2σ = 1) in the perturbative regime at small distances [A. I. Larkin and Yu. N. Ovchinnikov, J. Low Temp. Phys. 34, 409 (1979)] crosses over to a sub-logarithmic growth with 2σ = 0.348 at large distances.

  19. Gravitational Field as a Pressure Force from Logarithmic Lagrangians and Non-Standard Hamiltonians: The Case of Stellar Halo of Milky Way

    NASA Astrophysics Data System (ADS)

    El-Nabulsi, Rami Ahmad

    2018-03-01

    Recently, the notion of non-standard Lagrangians has been discussed widely in the literature in an attempt to explore the inverse variational problem of nonlinear differential equations. Different forms of non-standard Lagrangians have been introduced and have revealed nice mathematical and physical properties. One interesting form related to the inverse variational problem is the logarithmic Lagrangian, which has a number of motivating features related to the Liénard-type and Emden nonlinear differential equations. Such types of Lagrangians lead to nonlinear dynamics based on non-standard Hamiltonians. In this communication, we show that some new dynamical properties are obtained in stellar dynamics if standard Lagrangians are replaced by logarithmic Lagrangians and their corresponding non-standard Hamiltonians. One interesting consequence concerns the emergence of an extra pressure term, which is related to the gravitational field, suggesting that gravitation may act as a pressure in a strong gravitational field. The case of the stellar halo of the Milky Way is considered.

  20. Logarithmic violation of scaling in anisotropic kinematic dynamo model

    NASA Astrophysics Data System (ADS)

    Antonov, N. V.; Gulitskiy, N. M.

    2016-01-01

    Inertial-range asymptotic behavior of a vector (e.g., magnetic) field, passively advected by a strongly anisotropic turbulent flow, is studied by means of the field theoretic renormalization group and the operator product expansion. The advecting velocity field is Gaussian, not correlated in time, with the pair correlation function of the form ∝ δ(t − t′)/k_⊥^{d−1+ξ}, where k_⊥ = |k_⊥| and k_⊥ is the component of the wave vector perpendicular to the distinguished direction. The stochastic advection-diffusion equation for the transverse (divergence-free) vector field includes, as special cases, the kinematic dynamo model for magnetohydrodynamic turbulence and the linearized Navier-Stokes equation. In contrast to the well known isotropic Kraichnan's model, where various correlation functions exhibit anomalous scaling behavior with infinite sets of anomalous exponents, here the dependence on the integral turbulence scale L has a logarithmic behavior: instead of power-like corrections to ordinary scaling, determined by naive (canonical) dimensions, the anomalies manifest themselves as polynomials of logarithms of L.

  1. Blue spectra of Kalb-Ramond axions and fully anisotropic string cosmologies

    NASA Astrophysics Data System (ADS)

    Giovannini, Massimo

    1999-03-01

    The inhomogeneities associated with massless Kalb-Ramond axions can be amplified not only in isotropic (four-dimensional) string cosmological models but also in the fully anisotropic case. If the background geometry is isotropic, the axions (which are not part of the homogeneous background) develop outside the horizon, the growing modes leading, ultimately, to logarithmic energy spectra which are "red" in frequency and increase at large distance scales. We show that this conclusion can be avoided not only in the case of higher dimensional backgrounds with contracting internal dimensions but also in the case of string cosmological scenarios which are completely anisotropic in four dimensions. In this case the logarithmic energy spectra turn out to be "blue" in frequency and, consequently, decreasing at large distance scales. We elaborate on anisotropic dilaton-driven models and we argue that, incidentally, the background models leading to blue (or flat) logarithmic energy spectra for axionic fluctuations are likely to be isotropized by the effect of string tension corrections.

  2. Flexible barrier film, method of forming same, and organic electronic device including same

    DOEpatents

    Blizzard, John; Tonge, James Steven; Weidner, William Kenneth

    2013-03-26

    A flexible barrier film has a thickness of from greater than zero to less than 5,000 nanometers and a water vapor transmission rate of no more than 1×10⁻² g/m²/day at 22 °C and 47% relative humidity. The flexible barrier film is formed from a composition, which comprises a multi-functional acrylate. The composition further comprises the reaction product of an alkoxy-functional organometallic compound and an alkoxy-functional organosilicon compound. A method of forming the flexible barrier film includes the steps of disposing the composition on a substrate and curing the composition to form the flexible barrier film. The flexible barrier film may be utilized in organic electronic devices.

  3. Swedish nurses encounter barriers when promoting healthy habits in children.

    PubMed

    Ljungkrona-Falk, Lena; Brekke, Hilde; Nyholm, Maria

    2014-12-01

    To increase the understanding of difficulties in promoting healthy habits to parents, we explore barriers in health-care provision. The aim of this study is to describe nurses' perceived barriers when discussing healthy food habits, physical activity and their child's body weight with parents. A mixed-method approach was chosen. Nurses (n = 76) working at 29 different Child Health Care Centers in an area in west Sweden were included in the study. Three focus group interviews were conducted and 17 nurses were selected according to maximum variation. Data were categorized and qualitative content analysis was the chosen analysis method. In the second method, data were obtained from a questionnaire distributed to all 76 nurses. The latent content was formulated into a theme: even with encouragement and support, the nurses perceive barriers of both an external and internal nature. The results identified four main barriers: experienced barriers in the workplace (internal and external); the nurse's own fear and uncertainty; perceived obstacles in nurse-parent interactions; and modern society impeding parents' ability to promote healthy habits. The nurses' perceived barriers were confirmed by the results from 62 of the nurses who completed the questionnaire. Despite education and professional support, the health professionals perceived both external and internal barriers in promoting healthy habits to parents when implementing a new method of health promotion in primary care. Further qualitative studies are needed to gain deeper understanding of the perceived barriers when promoting healthy habits to parents.

  4. Holographic Rényi entropy in AdS3/LCFT2 correspondence

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Song, Feng-yan; Zhang, Jia-ju

    2014-03-01

    The recent study in AdS3/CFT2 correspondence shows that the tree level contribution and 1-loop correction of holographic Rényi entanglement entropy (HRE) exactly match the direct CFT computation in the large central charge limit. This allows the Rényi entanglement entropy to be a new window to study the AdS/CFT correspondence. In this paper we generalize the study of Rényi entanglement entropy in pure AdS3 gravity to the massive gravity theories at the critical points. For the cosmological topological massive gravity (CTMG), the dual conformal field theory (CFT) could be a chiral conformal field theory or a logarithmic conformal field theory (LCFT), depending on the asymptotic boundary conditions imposed. In both cases, by studying the short interval expansion of the Rényi entanglement entropy of two disjoint intervals with small cross ratio x, we find that the classical and 1-loop HRE are in exact agreement with the CFT results, up to order x^6. To this order, the difference between the massless graviton and logarithmic mode can be seen clearly. Moreover, for the cosmological new massive gravity (CNMG) at the critical point, which could be dual to a logarithmic CFT as well, we find similar agreement in the CNMG/LCFT correspondence. Furthermore, we read off the 2-loop corrections of the graviton and the logarithmic mode to the HRE from the CFT computation; these have distinct features from those in pure AdS3 gravity.

  5. Magnetic hierarchical deposition

    NASA Astrophysics Data System (ADS)

    Posazhennikova, Anna I.; Indekeu, Joseph O.

    2014-11-01

    We consider random deposition of debris or blocks on a line, with block sizes following a rigorous hierarchy: the linear size equals 1/λ^n in generation n, in terms of a rescaling factor λ. Without interactions between the blocks, this model is described by a logarithmic fractal, studied previously, which is characterized by a constant increment of the length, area or volume upon proliferation. We study to what extent the logarithmic fractality survives, if each block is equipped with an Ising (pseudo-)spin s=±1 and the interactions between those spins are switched on (ranging from antiferromagnetic to ferromagnetic). It turns out that the dependence of the surface topology on the interaction sign and strength is not trivial. For instance, deep in the ferromagnetic regime, our numerical experiments and analytical results reveal a sharp crossover from a Euclidean transient, consisting of aggregated domains of aligned spins, to an asymptotic logarithmic fractal growth. In contrast, deep into the antiferromagnetic regime the surface roughness is important and is shown analytically to be controlled by vacancies induced by frustrated spins. Finally, in the weak interaction regime, we demonstrate that the non-interacting model is extremal in the sense that the effect of the introduction of interactions is only quadratic in the magnetic coupling strength. In all regimes, we demonstrate the adequacy of a mean-field approximation whenever vacancies are rare. In sum, the logarithmic fractal character is robust with respect to the introduction of spatial correlations in the hierarchical deposition process.

  6. The asymptotic form of non-global logarithms, black disc saturation, and gluonic deserts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neill, Duff

    Here, we develop an asymptotic perturbation theory for the large logarithmic behavior of the non-linear integro-differential equation describing the soft correlations of QCD jet measurements, the Banfi-Marchesini-Smye (BMS) equation. Furthermore, this equation captures the late-time evolution of radiating color dipoles after a hard collision. This allows us to prove that at large values of the control variable (the non-global logarithm, a function of the infra-red energy scales associated with distinct hard jets in an event), the distribution has a Gaussian tail. We also compute the decay width analytically, giving a closed form expression, and find it to be jet geometry independent, up to the number of legs of the dipole in the active jet. By enabling the asymptotic expansion we find that the perturbative seed is correct; we perturb around an ansatz encoding formally no real emissions, an intuition motivated by the buffer region found in jet dynamics. This must be supplemented with the correct application of the BFKL approximation to the BMS equation in collinear limits. Comparing to the asymptotics of the conformally related evolution equation encountered in small-x physics, the Balitsky-Kovchegov (BK) equation, we find that the asymptotic form of the non-global logarithms directly maps to the black-disc unitarity limit of the BK equation, despite the contrasting physical pictures. Indeed, we recover the equations of saturation physics in the final state dynamics of QCD.

  7. The asymptotic form of non-global logarithms, black disc saturation, and gluonic deserts

    DOE PAGES

    Neill, Duff

    2017-01-25

    Here, we develop an asymptotic perturbation theory for the large logarithmic behavior of the non-linear integro-differential equation describing the soft correlations of QCD jet measurements, the Banfi-Marchesini-Smye (BMS) equation. Furthermore, this equation captures the late-time evolution of radiating color dipoles after a hard collision. This allows us to prove that at large values of the control variable (the non-global logarithm, a function of the infra-red energy scales associated with distinct hard jets in an event), the distribution has a Gaussian tail. We also compute the decay width analytically, giving a closed form expression, and find it to be jet geometry independent, up to the number of legs of the dipole in the active jet. By enabling the asymptotic expansion we find that the perturbative seed is correct; we perturb around an ansatz encoding formally no real emissions, an intuition motivated by the buffer region found in jet dynamics. This must be supplemented with the correct application of the BFKL approximation to the BMS equation in collinear limits. Comparing to the asymptotics of the conformally related evolution equation encountered in small-x physics, the Balitsky-Kovchegov (BK) equation, we find that the asymptotic form of the non-global logarithms directly maps to the black-disc unitarity limit of the BK equation, despite the contrasting physical pictures. Indeed, we recover the equations of saturation physics in the final state dynamics of QCD.

  8. Rat model of blood-brain barrier disruption to allow targeted neurovascular therapeutics.

    PubMed

    Martin, Jacob A; Maris, Alexander S; Ehtesham, Moneeb; Singer, Robert J

    2012-11-30

    Endothelial cells with tight junctions along with the basement membrane and astrocyte end feet surround cerebral blood vessels to form the blood-brain barrier(1). The barrier selectively excludes molecules from crossing between the blood and the brain based upon their size and charge. This function can impede the delivery of therapeutics for neurological disorders. A number of chemotherapeutic drugs, for example, will not effectively cross the blood-brain barrier to reach tumor cells(2). Thus, improving the delivery of drugs across the blood-brain barrier is an area of interest. The most prevalent methods for enhancing the delivery of drugs to the brain are direct cerebral infusion and blood-brain barrier disruption(3). Direct intracerebral infusion guarantees that therapies reach the brain; however, this method has a limited ability to disperse the drug(4). Blood-brain barrier disruption (BBBD) allows drugs to flow directly from the circulatory system into the brain and thus more effectively reach dispersed tumor cells. Three methods of barrier disruption include osmotic barrier disruption, pharmacological barrier disruption, and focused ultrasound with microbubbles. Osmotic disruption, pioneered by Neuwelt, uses a hypertonic solution of 25% mannitol that dehydrates the cells of the blood-brain barrier causing them to shrink and disrupt their tight junctions. Barrier disruption can also be accomplished pharmacologically with vasoactive compounds such as histamine(5) and bradykinin(6). This method, however, is selective primarily for the brain-tumor barrier(7). Additionally, RMP-7, an analog of the peptide bradykinin, was found to be inferior when compared head-to-head with osmotic BBBD with 25% mannitol(8). Another method, focused ultrasound (FUS) in conjunction with microbubble ultrasound contrast agents, has also been shown to reversibly open the blood-brain barrier(9). In comparison to FUS, though, 25% mannitol has a longer history of safety in human patients that makes it a proven tool for translational research(10-12). In order to accomplish BBBD, mannitol must be delivered at a high rate directly into the brain's arterial circulation. In humans, an endovascular catheter is guided to the brain where rapid, direct flow can be accomplished. This protocol models human BBBD as closely as possible. Following a cut-down to the bifurcation of the common carotid artery, a catheter is inserted retrograde into the ECA and used to deliver mannitol directly into the internal carotid artery (ICA) circulation. Propofol and N2O anesthesia are used for their ability to maximize the effectiveness of barrier disruption(13). If executed properly, this procedure has the ability to safely, effectively, and reversibly open the blood-brain barrier and improve the delivery of drugs that do not ordinarily reach the brain (8,13,14).

  9. Method and device for detecting impact events on a security barrier which includes a hollow rebar allowing insertion and removal of an optical fiber

    DOEpatents

    Pies, Ross E.

    2016-03-29

    A method and device for the detection of impact events on a security barrier. A hollow rebar is formed within a security barrier, whereby the hollow rebar is completely surrounded by the security barrier. An optical fiber passes through the interior of the hollow rebar. An optical transmitter and an optical receiver are both optically connected to the optical fiber and connected to optical electronics. The optical electronics are configured to provide notification upon the detection of an impact event at the security barrier based on the detection of disturbances within the optical fiber.

  10. Probiotic Properties and Cellular Antioxidant Activity of Lactobacillus plantarum MA2 Isolated from Tibetan Kefir Grains.

    PubMed

    Tang, Wei; Li, Chao; He, Zengguo; Pan, Fen; Pan, Shuo; Wang, Yanping

    2017-11-20

    Lactobacillus plantarum MA2 was isolated from traditional Chinese Tibetan kefir grains. Its antioxidant properties had been demonstrated in vitro and in vivo previously. In the present study, the probiotic characteristics of this strain were further evaluated by investigating its acid and bile salt tolerances, cell surface hydrophobicity, and autoaggregation, respectively. In addition, the cellular antioxidant activity (CAA) assay was applied to test the antioxidant capacity of the isolate in different growth phases. The same method was also used to quantitatively evaluate the antioxidant capacity of its fermentation supernatant, cell-free extract, and intact cells. The results of probiotic characteristic tests showed that MA2 could survive at pH 2.5 and 0.3% bile salt. Meanwhile, the measurements of cell surface hydrophobicity and autoaggregation were 45.29 ± 2.15 and 6.30 ± 0.34%, respectively. The results of cellular antioxidant activity tests indicated that MA2 had high antioxidant potential. The CAA value of logarithmic phase cell-free extract of MA2 (39,450.00 ± 424.05 μmol quercetin equivalents/100 g sample) was significantly higher than that in stationary phase cell-free extract (3395.98 ± 126.06 μmol quercetin equivalents/100 g sample) and that of fermentation supernatant in logarithmic phase (2174.41 ± 224.47 μmol quercetin equivalents/100 g sample) (p < 0.05). The CAA method was successfully applied to evaluate the antioxidant capacity of MA2 in this study, which suggests that it could be used as a useful method for lactic acid bacteria antioxidant potential evaluation.

  11. Facilitators and barriers to exercise adherence in patients with osteopenia and osteoporosis: a systematic review.

    PubMed

    Rodrigues, I B; Armstrong, J J; Adachi, J D; MacDermid, J C

    2017-03-01

    The aim of this study was to categorize the facilitators and barriers of exercise and identify methods to promote exercise adherence in the osteoporosis population. Despite the fair methodological quality of included randomized controlled trials (RCTs), less than 75 % identified facilitators and barriers to exercise. Methods to promote and measure exercise adherence were poorly reported. Several studies have shown exercise to be successful in maintaining or increasing BMD in individuals with low bone mass. Yet, adherence to exercise is poor, with 50 % of those registered in an exercise program dropping out within the first 6 months, lack of time being the number one barrier in many populations. However, in the osteoporosis population, the main facilitator and barrier to exercise is still unclear. The aim of this study is to examine the extent to which RCTs reported the facilitators and the barriers to exercise and identified methods to promote adherence to an exercise program. PubMed, CINAHL, EMBASE, and the Cochrane Review were queried using predefined search criteria, and the resulting citations were imported into DistillerSR. Screening was carried out by two independent reviewers, and articles were included in the analysis by consensus. The methodological quality of included studies was assessed using the PEDro scale. Fifty-four RCTs examining exercise interventions in patients with osteopenia or osteoporosis were included. A spectrum of facilitators and barriers to exercise for osteoporotic patients was identified; however, no one facilitator was more frequently reported than the others. The most commonly reported barriers were lack of time and transportation. In most RCTs, methods to promote and measure exercise adherence were unsatisfactory. Of the 54 papers, 72 % reported an adherence rate to an exercise program; the lowest reported rate was 51.7 %, and the highest 100 %. Most RCTs found were of fair quality; however, less than three quarters identified facilitators and barriers to exercise. Reporting of methods to promote and measure exercise adherence was low. Future work should be directed toward identifying major facilitators and barriers to exercise adherence within RCTs. Only then can methods be identified to leverage facilitators and overcome barriers, thus strengthening the evidence for efficacy of optimal interventional exercise programs. This review has been registered in PROSPERO under registration number CRD42016039941.

  12. A Legendre tau-spectral method for solving time-fractional heat equation with nonlocal conditions.

    PubMed

    Bhrawy, A H; Alghamdi, M A

    2014-01-01

    We develop the tau-spectral method to solve the time-fractional heat equation (T-FHE) with a nonlocal condition. In order to achieve a highly accurate solution of this problem, the operational matrix of fractional integration (described in the Riemann-Liouville sense) for shifted Legendre polynomials is investigated in conjunction with the tau-spectral scheme, and the Legendre operational polynomials are used as the basis functions. The main advantage of the presented scheme is that it converts the T-FHE with a nonlocal condition to a system of algebraic equations, which simplifies the problem. To demonstrate the validity and applicability of the developed spectral scheme, two numerical examples are presented. Logarithmic graphs of the maximum absolute errors are presented to demonstrate the exponential convergence of the proposed method. Comparisons between our spectral method and other methods confirm that our method is more accurate than those used to solve similar problems.

  13. A Legendre tau-Spectral Method for Solving Time-Fractional Heat Equation with Nonlocal Conditions

    PubMed Central

    Bhrawy, A. H.; Alghamdi, M. A.

    2014-01-01

    We develop the tau-spectral method to solve the time-fractional heat equation (T-FHE) with a nonlocal condition. In order to achieve a highly accurate solution of this problem, the operational matrix of fractional integration (described in the Riemann-Liouville sense) for shifted Legendre polynomials is investigated in conjunction with the tau-spectral scheme, and the Legendre operational polynomials are used as the basis functions. The main advantage of the presented scheme is that it converts the T-FHE with a nonlocal condition to a system of algebraic equations, which simplifies the problem. To demonstrate the validity and applicability of the developed spectral scheme, two numerical examples are presented. Logarithmic graphs of the maximum absolute errors are presented to demonstrate the exponential convergence of the proposed method. Comparisons between our spectral method and other methods confirm that our method is more accurate than those used to solve similar problems. PMID:25057507

  14. Penetration through the Skin Barrier.

    PubMed

    Nielsen, Jesper Bo; Benfeldt, Eva; Holmgaard, Rikke

    2016-01-01

    The skin is a strong and flexible organ with barrier properties essential for maintaining homeostasis and thereby human life. Characterizing this barrier is the ability to prevent some chemicals from crossing the barrier while allowing others, including medicinal products, to pass at varying rates. During recent decades, the latter has received increased attention as a route for intentionally delivering drugs to patients. This has stimulated research in methods for sampling, measuring and predicting percutaneous penetration. Previous chapters have described how different endogenous, genetic and exogenous factors may affect barrier characteristics. The present chapter introduces the theory for barrier penetration (Fick's law), and describes and discusses different methods for measuring the kinetics of percutaneous penetration of chemicals, including in vitro methods (static and flow-through diffusion cells) as well as in vivo methods (microdialysis and microperfusion). Then follows a discussion with examples of how different characteristics of the skin (age, site and integrity) and of the penetrants (size, solubility, ionization, logPow and vehicles) affect the kinetics of percutaneous penetration. Finally, a short discussion of the advantages and challenges of each method is provided, which will hopefully allow the reader to improve decision making and treatment planning, as well as the evaluation of experimental studies of percutaneous penetration of chemicals.
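
    For orientation, the steady-state form of Fick's first law commonly used to describe passive diffusion across the skin barrier; the symbols below are chosen here for illustration and may differ from the chapter's own notation:

    ```latex
    % Fick's first law: flux proportional to the concentration gradient
    J = -D \,\frac{\partial C}{\partial x}
    % Steady-state flux across a membrane of thickness h, with partition coefficient K,
    % applied (donor) concentration \Delta C, and permeability coefficient K_p
    J_{ss} = \frac{K\,D}{h}\,\Delta C = K_p\,\Delta C
    ```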

  15. Room temperature current-voltage (I-V) characteristics of Ag/InGaN/n-Si Schottky barrier diode

    NASA Astrophysics Data System (ADS)

    Erdoğan, Erman; Kundakçı, Mutlu

    2017-02-01

    Metal-semiconductor (MS) contacts, or Schottky barrier diodes (SBDs), have significant potential in integrated device technology. In the present paper, the electrical characterization of an Ag/InGaN/n-Si Schottky diode has been systematically carried out by the simple thermionic emission (TE) method and the Norde function, based on the I-V characteristics. Ag ohmic and Schottky contacts were deposited on the InGaN/n-Si film by the thermal evaporation technique under a vacuum pressure of 1×10⁻⁵ mbar. The ideality factor, barrier height and series resistance of this diode are determined from the I-V curve. These parameters are calculated by the TE and Norde methods, and the findings are given in a comparative manner. The results show consistency between the two methods and also good agreement with other results obtained in the literature. The ideality factor and barrier height have been determined to be 2.84 and 0.78 eV at room temperature using the simple TE method, while the barrier height obtained with the Norde method is calculated as 0.79 eV.
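
    A generic sketch of how the ideality factor and barrier height are typically extracted from forward I-V data with the thermionic emission model; the synthetic data, Richardson constant, and diode area below are placeholders, not values from the paper:

    ```python
    import numpy as np

    q, k, T = 1.602e-19, 1.381e-23, 300.0       # charge, Boltzmann constant, temperature (K)
    A_area, A_star = 1e-3, 112.0                # diode area (cm^2) and Richardson constant
                                                # (A cm^-2 K^-2); both are placeholder values

    # Synthetic forward I-V data generated from the TE model itself, just for demonstration.
    n_true, phi_b_true = 2.8, 0.78              # "unknown" parameters to recover
    I0_true = A_area * A_star * T**2 * np.exp(-q * phi_b_true / (k * T))
    V = np.linspace(0.15, 0.5, 50)              # bias range where ln(I) vs V is roughly linear
    I = I0_true * (np.exp(q * V / (n_true * k * T)) - 1.0)

    # Thermionic emission analysis: fit ln(I) = ln(I0) + qV/(n k T) in the linear region.
    slope, intercept = np.polyfit(V, np.log(I), 1)
    n = q / (k * T * slope)                     # ideality factor
    I0 = np.exp(intercept)                      # saturation current
    phi_b = (k * T / q) * np.log(A_area * A_star * T**2 / I0)   # barrier height (eV)

    print(f"ideality factor n = {n:.2f}, barrier height = {phi_b:.2f} eV")
    ```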

  16. Impact of the bottom drag coefficient on saltwater intrusion in the extremely shallow estuary

    NASA Astrophysics Data System (ADS)

    Lyu, Hanghang; Zhu, Jianrong

    2018-02-01

    The interactions between the extremely shallow, funnel-shaped topography and dynamic processes in the North Branch (NB) of the Changjiang Estuary produce a particular type of saltwater intrusion, saltwater spillover (SSO), from the NB into the South Branch (SB). This dominant type of saltwater intrusion threatens the winter water supplies of reservoirs located in the estuary. Simulated SSO was weaker than actual SSO in previous studies, and this problem has not been solved until now. The improved ECOM-si model with the advection scheme HSIMT-TVD was applied in this study. Logarithmic and Chézy-Manning formulas of the bottom drag coefficient (BDC) were established in the model to investigate the associated effect on saltwater intrusion in the NB. Modeled data and data collected at eight measurement stations located in the NB from February 19 to March 1, 2017, were compared, and three skill assessment indicators, the correlation coefficient (CC), root-mean-square error (RMSE), and skill score (SS), of water velocity and salinity were used to quantitatively validate the model. The results indicated that the water velocities modeled using the Chézy-Manning formula of BDC were slightly more accurate than those based on the logarithmic BDC formula, but the salinities produced by the latter formula were more accurate than those of the former. The results showed that the BDC increases when water depth decreases during ebb tide, and the results based on the Chézy-Manning formula were smaller than those based on the logarithmic formula. Additionally, the landward net water flux in the upper reaches of the NB during spring tide increases based on the Chézy-Manning formula, and saltwater intrusion in the NB was enhanced, especially in the upper reaches of the NB. At a transect in the upper reaches of the NB, the net transect water flux (NTWF) is upstream in spring tide and downstream in neap tide, and the values produced by the Chézy-Manning formula are much larger than those based on the logarithmic formula. Notably, SSO during spring tide was 1.8 times larger based on the Chézy-Manning formula than that based on the logarithmic formula. The model underestimated SSO and salinity at the hydrological stations in the SB based on the logarithmic BDC formula but successfully simulated SSO and the temporal variations in salinity in the SB using the Chézy-Manning formula of BDC.
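
    For context, two commonly used parameterizations of the bottom drag coefficient of the kind compared above, sketched as simple functions; the roughness length, Manning coefficient, and von Kármán constant values are generic assumptions, not the ones calibrated in the study:

    ```python
    import numpy as np

    KAPPA = 0.4        # von Karman constant
    G = 9.81           # gravitational acceleration (m s^-2)

    def bdc_logarithmic(z_b, z0=1.0e-3):
        """Law-of-the-wall drag coefficient; z_b is the height of the lowest
        velocity point above the bed and z0 the bottom roughness length (m)."""
        return (KAPPA / np.log(z_b / z0)) ** 2

    def bdc_chezy_manning(h, n=0.02):
        """Chezy-Manning drag coefficient for total water depth h (m) and
        Manning roughness coefficient n (s m^-1/3)."""
        return G * n ** 2 / h ** (1.0 / 3.0)

    # In very shallow water the two formulations diverge noticeably, which is the
    # sensitivity exploited for the extremely shallow North Branch.
    for h in (1.0, 2.0, 5.0, 10.0):
        print(h, bdc_logarithmic(z_b=0.5 * h), bdc_chezy_manning(h))
    ```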

  17. Device and method for producing a containment barrier underneath and around in-situ buried waste

    DOEpatents

    Gardner, B.M.; Smith, A.M.; Hanson, R.W.; Hodges, R.T.

    1998-08-11

    An apparatus is described for building a horizontal underground barrier by cutting through soil and depositing a slurry, preferably one which cures into a hardened material. The apparatus includes a digging means for cutting and removing soil to create a void under the surface of the ground and injection means for inserting barrier-forming material into the void. In one embodiment, the digging means is a continuous cutting chain. Mounted on the continuous cutting chain are cutter teeth for cutting through soil and discharge paddles for removing the loosened soil. This invention includes a barrier placement machine, a method for building an underground horizontal containment barrier using the barrier placement machine, and the underground containment system. Preferably the underground containment barrier goes underneath and around the site to be contained in a bathtub-type containment. 15 figs.

  18. Oblique wave trapping by vertical permeable membrane barriers located near a wall

    NASA Astrophysics Data System (ADS)

    Koley, Santanu; Sahoo, Trilochan

    2017-12-01

    The effectiveness of a vertical partial flexible porous membrane wave barrier located near a rigid vertical impermeable seawall for trapping obliquely incident surface gravity waves is analyzed in water of uniform depth under the assumption of linear water wave theory and small amplitude membrane barrier response. From the general formulation of the submerged membrane barrier, results for bottom-standing and surface-piercing barriers are computed and analyzed in special cases. Using the eigenfunction expansion method, the boundary-value problems are converted into series relations and then the required unknowns are obtained using the least squares approximation method. Various physical quantities of interest, such as the reflection coefficient, wave energy dissipation, and wave forces acting on the membrane barrier and the seawall, are computed and analyzed for different values of the wave and structural parameters. The study will be useful in the design of the membrane wave barrier for the creation of a tranquility zone on the lee side of the barrier to protect the seawall.

  19. DATASPACE - A PROGRAM FOR THE LOGARITHMIC INTERPOLATION OF TEST DATA

    NASA Technical Reports Server (NTRS)

    Ledbetter, F. E.

    1994-01-01

    Scientists and engineers work with the reduction, analysis, and manipulation of data. In many instances, the recorded data must meet certain requirements before standard numerical techniques may be used to interpret it. For example, the analysis of a linear viscoelastic material requires knowledge of one of two time-dependent properties, the stress relaxation modulus E(t) or the creep compliance D(t), one of which may be derived from the other by a numerical method if the recorded data points are evenly spaced or increasingly spaced with respect to the time coordinate. The problem is that most laboratory data are variably spaced, making the use of numerical techniques difficult. To ease this difficulty in the case of stress relaxation data analysis, NASA scientists developed DATASPACE (A Program for the Logarithmic Interpolation of Test Data), to establish a logarithmically increasing time interval in the relaxation data. The program is generally applicable to any situation in which a data set needs increasingly spaced abscissa values. DATASPACE first takes the logarithm of the abscissa values, then uses a cubic spline interpolation routine (which minimizes interpolation error) to create an evenly spaced array from the log values. This array is returned from the log abscissa domain to the abscissa domain and written to an output file for further manipulation. As a result of the interpolation in the log abscissa domain, the data is increasingly spaced. In the case of stress relaxation data, the array is closely spaced at short times and widely spaced at long times, thus avoiding the distortion inherent in evenly spaced time coordinates. The interpolation routine gives results which compare favorably with the recorded data. The experimental data curve is retained and the interpolated points reflect the desired spacing. DATASPACE is written in FORTRAN 77 for IBM PC compatibles with a math co-processor running MS-DOS and Apple Macintosh computers running MacOS. With minor modifications the source code is portable to any platform that supports an ANSI FORTRAN 77 compiler. MicroSoft FORTRAN v2.1 is required for the Macintosh version. An executable is included with the PC version. DATASPACE is available on a 5.25 inch 360K MS-DOS format diskette (standard distribution) or on a 3.5 inch 800K Macintosh format diskette. This program was developed in 1991. IBM PC is a trademark of International Business Machines Corporation. MS-DOS is a registered trademark of Microsoft Corporation. Macintosh and MacOS are trademarks of Apple Computer, Inc.
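
    A minimal sketch of the resampling idea the program implements (log-transform the time axis, interpolate with a cubic spline onto an evenly spaced grid in log time, then transform back); the function and variable names here are illustrative, not DATASPACE's:

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    def log_respace(t, y, n_points=50):
        """Resample (t, y) data onto a grid that is evenly spaced in log10(t),
        i.e. increasingly spaced in t, using cubic-spline interpolation."""
        log_t = np.log10(t)
        spline = CubicSpline(log_t, y)
        log_t_new = np.linspace(log_t[0], log_t[-1], n_points)
        return 10.0 ** log_t_new, spline(log_t_new)

    # Example: irregularly sampled stress-relaxation-like data E(t).
    t = np.sort(np.random.default_rng(2).uniform(0.01, 1000.0, size=120))
    E = 10.0 + 5.0 * np.exp(-t / 50.0)
    t_new, E_new = log_respace(t, E)
    # t_new is closely spaced at short times and widely spaced at long times.
    ```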

  20. Method for contamination control and barrier apparatus with filter for containing waste materials that include dangerous particulate matter

    DOEpatents

    Pinson, Paul A.

    1998-01-01

    A container for hazardous waste materials that includes air or other gas carrying dangerous particulate matter has incorporated in barrier material, preferably in the form of a flexible sheet, one or more filters for the dangerous particulate matter sealably attached to such barrier material. The filter is preferably a HEPA type filter and is preferably chemically bonded to the barrier materials. The filter or filters are preferably flexibly bonded to the barrier material marginally and peripherally of the filter or marginally and peripherally of air or other gas outlet openings in the barrier material, which may be a plastic bag. The filter may be provided with a backing panel of barrier material having an opening or openings for the passage of air or other gas into the filter or filters. Such backing panel is bonded marginally and peripherally thereof to the barrier material or to both it and the filter or filters. A coupling or couplings for deflating and inflating the container may be incorporated. Confining a hazardous waste material in such a container, rapidly deflating the container and disposing of the container, constitutes one aspect of the method of the invention. The chemical bonding procedure for producing the container constitutes another aspect of the method of the invention.

  1. Method for contamination control and barrier apparatus with filter for containing waste materials that include dangerous particulate matter

    DOEpatents

    Pinson, P.A.

    1998-02-24

    A container for hazardous waste materials that includes air or other gas carrying dangerous particulate matter has incorporated barrier material, preferably in the form of a flexible sheet, and one or more filters for the dangerous particulate matter sealably attached to such barrier material. The filter is preferably a HEPA type filter and is preferably chemically bonded to the barrier materials. The filter or filters are preferably flexibly bonded to the barrier material marginally and peripherally of the filter or marginally and peripherally of air or other gas outlet openings in the barrier material, which may be a plastic bag. The filter may be provided with a backing panel of barrier material having an opening or openings for the passage of air or other gas into the filter or filters. Such backing panel is bonded marginally and peripherally thereof to the barrier material or to both it and the filter or filters. A coupling or couplings for deflating and inflating the container may be incorporated. Confining a hazardous waste material in such a container, rapidly deflating the container and disposing of the container, constitutes one aspect of the method of the invention. The chemical bonding procedure for producing the container constitutes another aspect of the method of the invention. 3 figs.

  2. Career Barriers Perceived by Hard-of-Hearing Adolescents: Implications for Practice From a Mixed-Methods Study

    ERIC Educational Resources Information Center

    Punch, Renee; Creed, Peter A.; Hyde, Merv B.

    2006-01-01

    This study incorporated both quantitative and qualitative methods to examine the perception of career barriers by hard-of-hearing high school students being educated in regular classes with itinerant teacher support. Sixty-five students in Years 10, 11, and 12 completed a questionnaire about potential general and hearing-related barriers, and 12…

  3. Complementary Barrier Infrared Detector (CBIRD) Contact Methods

    NASA Technical Reports Server (NTRS)

    Ting, David Z.; Hill, Cory J.; Gunapala, Sarath D.

    2013-01-01

    The performance of the CBIRD detector is enhanced by using new device contacting methods that have been developed. The detector structure features a narrow gap absorber sandwiched between a pair of complementary, unipolar barriers that are, in turn, surrounded by contact layers. In this innovation, the contact adjacent to the hole barrier is doped n-type, while the contact adjacent to the electron barrier is doped p-type. The contact layers can have wider bandgaps than the absorber layer, so long as good electrical contacts are made to them. If good electrical contacts are made to either (or both) of the barriers, then one could contact the barrier(s) directly, obviating the need for additional contact layers. Both the left and right contacts can be doped either n-type or p-type. Having an n-type contact layer next to the electron barrier creates a second p-n junction (the first being the one between the hole barrier and the absorber) over which applied bias could drop. This reduces the voltage drop over the absorber, thereby reducing dark current generation in the absorber region.

  4. Two Enhancements of the Logarithmic Least-Squares Method for Analyzing Subjective Comparisons

    DTIC Science & Technology

    1989-03-25

    error term. For this model, the total sum of squares (SSTO), defined as SSTO = Σ_{i=1}^{n} (y_i − ȳ)², can be partitioned into error and regression sums ... of the regression line around the mean value. Mathematically, for the model given by equation A.4, SSTO = SSE + SSR (A.6), where SSTO is the total ... sum of squares (i.e., the variance of the y_i's), SSE is the error sum of squares, and SSR is the regression sum of squares. SSTO, SSE, and SSR are given
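
    The report's title refers to the logarithmic least-squares method for analyzing subjective pairwise comparisons; as background, and without reproducing the report's two enhancements (which are not described in the excerpt above), the standard geometric-mean solution of that least-squares problem can be sketched as follows:

    ```python
    import numpy as np

    def llsm_weights(A):
        """Logarithmic least-squares (geometric mean) priority weights for a
        reciprocal pairwise comparison matrix A, where A[i, j] estimates w_i / w_j.
        Minimizes the sum over i < j of (ln A[i, j] - ln w_i + ln w_j)^2 up to scaling."""
        A = np.asarray(A, dtype=float)
        gm = np.exp(np.mean(np.log(A), axis=1))   # row-wise geometric means
        return gm / gm.sum()                      # normalize the weights to sum to 1

    # Example: three alternatives compared subjectively on a ratio scale.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3., 1.0, 2.0],
                  [1/5., 1/2., 1.0]])
    print(llsm_weights(A))
    ```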

  5. Entanglement entropy of electromagnetic edge modes.

    PubMed

    Donnelly, William; Wall, Aron C

    2015-03-20

    The vacuum entanglement entropy of Maxwell theory, when evaluated by standard methods, contains an unexpected term with no known statistical interpretation. We resolve this two-decades old puzzle by showing that this term is the entanglement entropy of edge modes: classical solutions determined by the electric field normal to the entangling surface. We explain how the heat kernel regularization applied to this term leads to the negative divergent expression found by Kabat. This calculation also resolves a recent puzzle concerning the logarithmic divergences of gauge fields in 3+1 dimensions.

  6. Lattice Virasoro algebra and corner transfer matrices in the Baxter eight-vertex model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Itoyama, H.; Thacker, H.B.

    1987-04-06

    A lattice Virasoro algebra is constructed for the Baxter eight-vertex model. The operator L_0 is obtained from the logarithm of the corner transfer matrix and is given by the first moment of the XYZ spin-chain Hamiltonian. The algebra is valid even when the Hamiltonian includes a mass term, in which case it represents lattice coordinate transformations which distinguish between even and odd sublattices. We apply the quantum inverse scattering method to demonstrate that the Virasoro algebra follows from the Yang-Baxter relations.

  7. Entropic Inference

    NASA Astrophysics Data System (ADS)

    Caticha, Ariel

    2011-03-01

    In this tutorial we review the essential arguments behind entropic inference. We focus on the epistemological notion of information and its relation to the Bayesian beliefs of rational agents. The problem of updating from a prior to a posterior probability distribution is tackled through an eliminative induction process that singles out the logarithmic relative entropy as the unique tool for inference. The resulting method of Maximum relative Entropy (ME) includes as special cases both MaxEnt and Bayes' rule, and therefore unifies the two themes of these workshops—the Maximum Entropy and the Bayesian methods—into a single general inference scheme.

  8. Ratiometric and colorimetric near-infrared sensors for multi-channel detection of cyanide ion and their application to measure β-glucosidase

    PubMed Central

    Xing, Panfei; Xu, Yongqian; Li, Hongjuan; Liu, Shuhui; Lu, Aiping; Sun, Shiguo

    2015-01-01

    A near-infrared sensor for cyanide ion (CN−) was developed via internal charge transfer (ICT). This sensor can selectively detect CN− either through dual-ratiometric fluorescence (logarithm of I414/I564 and I803/I564) or under various absorption (356 and 440 nm) and emission (414, 564 and 803 nm) channels. In particular, the proposed method can be employed to measure β-glucosidase by detecting CN− traces in commercial amygdalin samples. PMID:26549546

  9. Compact exponential product formulas and operator functional derivative

    NASA Astrophysics Data System (ADS)

    Suzuki, Masuo

    1997-02-01

    A new scheme for deriving compact expressions of the logarithm of the exponential product is proposed and it is applied to several exponential product formulas. A generalization of the Dynkin-Specht-Wever (DSW) theorem on free Lie elements is given, and it is used to study the relation between the traditional method (based on the DSW theorem) and the present new scheme. The concept of the operator functional derivative is also proposed, and it is applied to ordered exponentials, such as time-evolution operators for time-dependent Hamiltonians.

  10. Linear Titration Curves of Acids and Bases.

    PubMed

    Joseph, N R

    1959-05-29

    The Henderson-Hasselbalch equation, by a simple transformation, becomes pH - pK = pA - pB, where pA and pB are the negative logarithms of acid and base concentrations. Sigmoid titration curves then reduce to straight lines; titration curves of polyelectrolytes, to families of straight lines. The method is applied to the titration of the dipeptide glycyl aminotricarballylic acid, with four titrable groups. Results are expressed as Cartesian and d'Ocagne nomograms. The latter is of a general form applicable to polyelectrolytes of any degree of complexity.
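
    The linear form quoted above follows directly from the logarithmic Henderson-Hasselbalch equation; a minimal LaTeX sketch of the algebra, writing pA = −log[acid] and pB = −log[base] as in the abstract, is:

      \mathrm{pH} = \mathrm{p}K + \log_{10}\frac{[\mathrm{base}]}{[\mathrm{acid}]}
                  = \mathrm{p}K + \log_{10}[\mathrm{base}] - \log_{10}[\mathrm{acid}]
                  = \mathrm{p}K + \mathrm{p}A - \mathrm{p}B
      \quad\Longrightarrow\quad
      \mathrm{pH} - \mathrm{p}K = \mathrm{p}A - \mathrm{p}B .

    Plotting pH − pK against pA − pB therefore turns a sigmoid titration curve into a straight line, as described above.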

  11. New approach to the resummation of logarithms in Higgs-boson decays to a vector quarkonium plus a photon

    NASA Astrophysics Data System (ADS)

    Bodwin, Geoffrey T.; Chung, Hee Sok; Ee, June-Haak; Lee, Jungil

    2017-03-01

    We present a calculation of the rates for Higgs-boson decays to a vector heavy-quarkonium state plus a photon, where the heavy-quarkonium states are the J/ψ and the ϒ(nS) states, with n = 1, 2, or 3. The calculation is carried out in the light-cone formalism, combined with nonrelativistic QCD factorization, and is accurate at leading order in mQ²/mH², where mQ is the heavy-quark mass and mH is the Higgs-boson mass. The calculation contains corrections through next-to-leading order in the strong-coupling constant αs and the square of the heavy-quark velocity v, and includes a resummation of logarithms of mH²/mQ² at next-to-leading logarithmic accuracy. We have developed a new method, which makes use of Abel summation, accelerated through the use of Padé approximants, to deal with divergences in the resummed expressions for the quarkonium light-cone distribution amplitudes. This approach allows us to make definitive calculations of the resummation effects. Contributions from the order-αs and order-v² corrections to the light-cone distribution amplitudes that we obtain with this new method differ substantially from the corresponding contributions that one obtains from a model light-cone distribution amplitude [M. König and M. Neubert, J. High Energy Phys. 08 (2015) 012, 10.1007/JHEP08(2015)012]. Our results for the real parts of the direct-process amplitudes are considerably smaller than those from one earlier calculation [G. T. Bodwin, H. S. Chung, J.-H. Ee, J. Lee, and F. Petriello, Phys. Rev. D 90, 113010 (2014), 10.1103/PhysRevD.90.113010], reducing the sensitivity to the Higgs-boson-heavy-quark couplings, and are somewhat smaller than those from another earlier calculation [M. König and M. Neubert, J. High Energy Phys. 08 (2015) 012, 10.1007/JHEP08(2015)012]. However, our results for the standard-model Higgs-boson branching fractions are in good agreement with those in M. König and M. Neubert, J. High Energy Phys. 08 (2015) 012, 10.1007/JHEP08(2015)012.

  12. Exact Asymptotics of the Freezing Transition of a Logarithmically Correlated Random Energy Model

    NASA Astrophysics Data System (ADS)

    Webb, Christian

    2011-12-01

    We consider a logarithmically correlated random energy model, namely a model for directed polymers on a Cayley tree, which was introduced by Derrida and Spohn. We prove asymptotic properties of a generating function of the partition function of the model by studying a discrete time analogy of the KPP-equation—thus translating Bramson's work on the KPP-equation into a discrete time case. We also discuss connections to extreme value statistics of a branching random walk and a rescaled multiplicative cascade measure beyond the critical point.

  13. Heavy dark matter annihilation from effective field theory.

    PubMed

    Ovanesyan, Grigory; Slatyer, Tracy R; Stewart, Iain W

    2015-05-29

    We formulate an effective field theory description for SU(2)_{L} triplet fermionic dark matter by combining nonrelativistic dark matter with gauge bosons in the soft-collinear effective theory. For a given dark matter mass, the annihilation cross section to line photons is obtained with 5% precision by simultaneously including Sommerfeld enhancement and the resummation of electroweak Sudakov logarithms at next-to-leading logarithmic order. Using these results, we present more accurate and precise predictions for the gamma-ray line signal from annihilation, updating both existing constraints and the reach of future experiments.

  14. Some properties of the Catalan-Qi function related to the Catalan numbers.

    PubMed

    Qi, Feng; Mahmoud, Mansour; Shi, Xiao-Ting; Liu, Fang-Fang

    2016-01-01

    In the paper, the authors find some properties of the Catalan numbers, the Catalan function, and the Catalan-Qi function which is a generalization of the Catalan numbers. Concretely speaking, the authors present a new expression, asymptotic expansions, integral representations, logarithmic convexity, complete monotonicity, minimality, logarithmically complete monotonicity, a generating function, and inequalities of the Catalan numbers, the Catalan function, and the Catalan-Qi function. As by-products, an exponential expansion and a double inequality for the ratio of two gamma functions are derived.

  15. Oil spill cleanup method and apparatus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayes, F.M.

    1980-06-24

    A method for removing oil from the surface of water where an oil spill has occurred, particularly in obstructed or shallow areas, which comprises partially surrounding a hovercraft with a floating oil-collecting barrier, there being no barrier at the front of the hovercraft, moving the oil-barrier-surrounded-hovercraft into oil contaminated water, and collecting oil gathered within the barrier behind the hovercraft through a suction line which carries the oil to a storage tank aboard the hovercraft. The invention also embodies the hovercraft adapted to effect an oil spill cleanup.

  16. Multilayer thermal barrier coating systems

    DOEpatents

    Vance, Steven J.; Goedjen, John G.; Sabol, Stephen M.; Sloan, Kelly M.

    2000-01-01

    The present invention generally describes multilayer thermal barrier coating systems and methods of making the multilayer thermal barrier coating systems. The thermal barrier coating systems comprise a first ceramic layer, a second ceramic layer, a thermally grown oxide layer, a metallic bond coating layer and a substrate. The thermal barrier coating systems have improved high temperature thermal and chemical stability for use in gas turbine applications.

  17. Coherent-Anomaly Method in Critical Phenomena. III. Mean-Field Transfer-Matrix Method in the 2D Ising Model

    NASA Astrophysics Data System (ADS)

    Hu, Xiao; Katori, Makoto; Suzuki, Masuo

    1987-11-01

    Two kinds of systematic mean-field transfer-matrix methods are formulated in the 2-dimensional Ising spin system, by introducing Weiss-like and Bethe-like approximations. All the critical exponents as well as the true critical point can be estimated in these methods following the CAM procedure. The numerical results for the above system are Tc* ≃ 2.271 (J/kB), γ = γ' ≃ 1.749, β ≃ 0.131 and δ ≃ 15.1. The specific heat is confirmed to be continuous and to have a logarithmic divergence at the true critical point, i.e., α = α' = 0. Thus, the finite-degree-of-approximation scaling ansatz is shown to be correct and very powerful in practical estimations of the critical exponents as well as the true critical point.

  18. Quality measures in applications of image restoration.

    PubMed

    Kriete, A; Naim, M; Schafer, L

    2001-01-01

    We describe a new method for the estimation of image quality in image restoration applications. We demonstrate this technique on a simulated data set of fluorescent beads, in comparison with restoration by three different deconvolution methods. Both the number of iterations and a regularisation factor are varied to enforce changes in the resulting image quality. First, the data sets are directly compared by an accuracy measure. These values serve to validate the image quality descriptor, which is developed on the basis of optical information theory. This most general measure takes into account the spectral energies and the noise, weighted in a logarithmic fashion. It is demonstrated that this method is particularly helpful as a user-oriented method to control the output of iterative image restorations and to eliminate the guesswork in choosing a suitable number of iterations.

  19. The uncertainty of nitrous oxide emissions from grazed grasslands: A New Zealand case study

    NASA Astrophysics Data System (ADS)

    Kelliher, Francis M.; Henderson, Harold V.; Cox, Neil R.

    2017-01-01

    Agricultural soils emit nitrous oxide (N2O), a greenhouse gas and the primary source of nitrogen oxides which deplete stratospheric ozone. Agriculture has been estimated to be the largest anthropogenic N2O source. In New Zealand (NZ), pastoral agriculture uses half the land area. To estimate the annual N2O emissions from NZ's agricultural soils, the nitrogen (N) inputs have been determined and multiplied by an emission factor (EF), the mass fraction of N inputs emitted as N2O-N. To estimate the associated uncertainty, we developed an analytical method. For comparison, another estimate was determined by Monte Carlo numerical simulation. For both methods, expert judgement was used to estimate the N input uncertainty. The EF uncertainty was estimated by meta-analysis of the results from 185 NZ field trials. For the analytical method, assuming a normal distribution and independence of the terms used to calculate the emissions (correlation = 0), the estimated 95% confidence limit was ±57%. When there was a normal distribution and an estimated correlation of 0.4 between N input and EF, the latter inferred from experimental data involving six NZ soils, the analytical method estimated a 95% confidence limit of ±61%. The EF data from 185 NZ field trials had a logarithmic normal distribution. For the Monte Carlo method, assuming a logarithmic normal distribution for EF, a normal distribution for the other terms and independence of all terms, the estimated 95% confidence limits were -32% and +88% or ±60% on average. When there were the same distribution assumptions and a correlation of 0.4 between N input and EF, the Monte Carlo method estimated 95% confidence limits were -34% and +94% or ±64% on average. For the analytical and Monte Carlo methods, EF uncertainty accounted for 95% and 83% of the emissions uncertainty when the correlation between N input and EF was 0 and 0.4, respectively. As the first uncertainty analysis of an agricultural soils N2O emissions inventory using "country-specific" field trials to estimate EF uncertainty, this can be a potentially informative case study for the international scientific community.
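
    The Monte Carlo approach described above can be sketched in a few lines of Python; the distributions and parameter values below are hypothetical placeholders (normal N input, log-normal EF, independent terms), not the study's inputs.

      import numpy as np

      rng = np.random.default_rng(1)
      n_draws = 100_000

      # hypothetical inputs: N input (normal) and emission factor EF (log-normal)
      n_input = rng.normal(loc=1.0e6, scale=0.1e6, size=n_draws)       # kg N per year
      ef = rng.lognormal(mean=np.log(0.01), sigma=0.5, size=n_draws)   # emitted fraction

      emissions = n_input * ef                                         # kg N2O-N per year
      lo, mid, hi = np.percentile(emissions, [2.5, 50.0, 97.5])
      print(f"median {mid:.3g}, 95% interval [{lo:.3g}, {hi:.3g}]")
      print(f"relative limits: {100 * (lo / mid - 1):+.0f}% / {100 * (hi / mid - 1):+.0f}%")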

  20. Method of manufacturing lightweight thermo-barrier material

    NASA Technical Reports Server (NTRS)

    Blair, Winford (Inventor)

    1987-01-01

    A method of manufacturing thermal barrier structures comprising at least three dimpled cores separated by flat plate material with the outer surface of the flat plate material joined together by diffusion bonding.

  1. Coherent-Anomaly Method in Critical Phenomena. III.

    NASA Astrophysics Data System (ADS)

    Hu, Xiao; Katori, Makoto; Suzuki, Masuo

    Two kinds of systematic mean-field transfer-matrix methods are formulated in the 2-dimensional Ising spin system, by introducing Weiss-like and Bethe-like approximations. All the critical exponents as well as the true critical point can be estimated in these methods following the CAM procedure. The numerical results of the above system are Tc* = 2.271 (J/kB), γ=γ' ≃ 1.749, β≃0.131 and δ ≃ 15.1. The specific heat is confirmed to be continuous and to have a logarithmic divergence at the true critical point, i.e., α=α'=0. Thus, the finite-degree-of-approximation scaling ansatz is shown to be correct and very powerful in practical estimations of the critical exponents as well as the true critical point.

  2. Method and system for controlling the position of a beam of light

    DOEpatents

    Steinkraus, Jr., Robert F.; Johnson, Gary W [Livermore, CA; Ruggiero, Anthony J [Livermore, CA

    2011-08-09

    A method and system for laser beam tracking and pointing is based on a conventional position sensing detector (PSD) or quadrant cell but with the use of amplitude-modulated light. A combination of logarithmic automatic gain control, filtering, and synchronous detection offers high angular precision with exceptional dynamic range and sensitivity, while maintaining wide bandwidth. Use of modulated light enables the tracking of multiple beams simultaneously through the use of different modulation frequencies. It also makes the system resistant to interfering light sources such as ambient light. Beam pointing is accomplished by feeding back errors in the measured beam position to a beam steering element, such as a steering mirror. Closed-loop tracking performance is superior to existing methods, especially under conditions of atmospheric scintillation.

  3. Full Waveform Modeling of Transient Electromagnetic Response Based on Temporal Interpolation and Convolution Method

    NASA Astrophysics Data System (ADS)

    Qi, Youzheng; Huang, Ling; Wu, Xin; Zhu, Wanhua; Fang, Guangyou; Yu, Gang

    2017-07-01

    Quantitative modeling of the transient electromagnetic (TEM) response requires consideration of the full transmitter waveform, i.e., not only the specific current waveform in a half cycle but also the bipolar repetition. In this paper, we present a novel temporal interpolation and convolution (TIC) method to facilitate accurate TEM modeling. We first calculate the temporal basis response on a logarithmic scale using fast digital-filter-based methods. Then, we introduce a function named hamlogsinc, in the framework of discrete signal processing theory, to reconstruct the basis function and convolve it with the positive half of the waveform. Finally, a superposition procedure is used to account for the effect of previous bipolar waveforms. Comparisons with the established fast Fourier transform method demonstrate that our TIC method achieves the same accuracy with a shorter computing time.
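
    The convolution step can be illustrated schematically. The sketch below is not the authors' implementation (in particular, the hamlogsinc reconstruction is not reproduced); it assumes a hypothetical basis response and a linear turn-off ramp, and uses NumPy only.

      import numpy as np

      dt = 1e-6                                # uniform sample interval, s (hypothetical)
      t = np.arange(0.0, 5e-3, dt)

      # hypothetical basis (impulse) response of the earth: a power-law-like decay
      basis = (t + 1e-4) ** -1.5

      # hypothetical transmitter current: linear turn-off ramp lasting 100 microseconds
      n_ramp = int(100e-6 / dt)
      didt = np.zeros_like(t)
      didt[:n_ramp] = -1.0 / (n_ramp * dt)     # dI/dt during the ramp, A/s

      # response to the half-cycle waveform = convolution of basis response with dI/dt
      resp = -np.convolve(basis, didt)[:t.size] * dt

    Responses to earlier bipolar half-cycles would then be superposed with alternating sign and the appropriate time shifts, as the abstract describes.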

  4. Barrier methods of birth control - slideshow

    MedlinePlus

    Barrier methods of birth control - series—Female normal anatomy (MedlinePlus presentation, ency/presentations/100107.htm).

  5. Polymeric hydrogen diffusion barrier, high-pressure storage tank so equipped, method of fabricating a storage tank and method of preventing hydrogen diffusion

    DOEpatents

    Lessing, Paul A [Idaho Falls, ID

    2008-07-22

    An electrochemically active hydrogen diffusion barrier which comprises an anode layer, a cathode layer, and an intermediate electrolyte layer, which is conductive to protons and substantially impermeable to hydrogen. A catalytic metal present in or adjacent to the anode layer catalyzes an electrochemical reaction that converts any hydrogen that diffuses through the electrolyte layer to protons and electrons. The protons and electrons are transported to the cathode layer and reacted to form hydrogen. The hydrogen diffusion barrier is applied to a polymeric substrate used in a storage tank to store hydrogen under high pressure. A storage tank equipped with the electrochemically active hydrogen diffusion barrier, a method of fabricating the storage tank, and a method of preventing hydrogen from diffusing out of a storage tank are also disclosed.

  6. Polymeric hydrogen diffusion barrier, high-pressure storage tank so equipped, method of fabricating a storage tank and method of preventing hydrogen diffusion

    DOEpatents

    Lessing, Paul A.

    2004-09-07

    An electrochemically active hydrogen diffusion barrier which comprises an anode layer, a cathode layer, and an intermediate electrolyte layer, which is conductive to protons and substantially impermeable to hydrogen. A catalytic metal present in or adjacent to the anode layer catalyzes an electrochemical reaction that converts any hydrogen that diffuses through the electrolyte layer to protons and electrons. The protons and electrons are transported to the cathode layer and reacted to form hydrogen. The hydrogen diffusion barrier is applied to a polymeric substrate used in a storage tank to store hydrogen under high pressure. A storage tank equipped with the electrochemically active hydrogen diffusion barrier, a method of fabricating the storage tank, and a method of preventing hydrogen from diffusing out of a storage tank are also disclosed.

  7. Effect of various infection-control methods for light-cure units on the cure of composite resins.

    PubMed

    Chong, S L; Lam, Y K; Lee, F K; Ramalingam, L; Yeo, A C; Lim, C C

    1998-01-01

    This study (1) compared the curing-light intensity with various barrier infection-control methods used to prevent cross-contamination, (2) compared the Knoop hardness value of cured composite resin when various barrier control methods were used, and (3) correlated the hardness of the composite resin with the light-intensity output when different infection-control methods were used. The light-cure unit tips were covered with barriers, such as cellophane wrap, plastic gloves, Steri-shields, and finger cots. The control group had no barrier. Composite resins were then cured for each of the five groups, and their Knoop hardness values recorded. The results showed that there was a statistically significant difference in the light-intensity output among the five groups. However, there was no statistically significant difference in Knoop hardness values among the groups. There was also no correlation between the Knoop hardness of the composite resin and either the light-intensity output or the infection-control method used. Therefore, any of the five infection-control methods could be used as barriers for preventing cross-contamination of the light-cure unit tip, since the light-intensity output for all five groups exceeded the recommended value of 300 W/m2. However, to allow a greater margin of error in clinical situations, the authors recommend that the plastic glove or the cellophane wrap be used to wrap the light-cure tip, since these barriers allowed the highest light-intensity output.

  8. Stochastic differential equation (SDE) model of opening gold share price of bursa saham malaysia

    NASA Astrophysics Data System (ADS)

    Hussin, F. N.; Rahman, H. A.; Bahar, A.

    2017-09-01

    The Black-Scholes option pricing model is one of the most widely recognized stochastic differential equation models in mathematical finance. Two parameter-estimation methods have been applied to the geometric Brownian motion (GBM) model: the historical method and the discrete method. The historical method is a statistical approach that relies on the independence and normality of logarithmic returns, yielding the simplest parameter estimates. The discrete method instead uses the transition density of the lognormal diffusion process, with estimates derived by maximum likelihood. These two methods are used to estimate parameters from samples of Malaysian gold share price data, namely the Financial Times and Stock Exchange (FTSE) Bursa Malaysia Emas and FTSE Bursa Malaysia Emas Shariah indices. Modelling the gold share price is important because fluctuations in gold prices affect the worldwide economy, including Malaysia's. The discrete method is found to give better parameter estimates than the historical method, as indicated by its smaller root mean square error (RMSE).
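
    The historical estimator mentioned above can be sketched as follows; the price series is hypothetical, and the sketch simply assumes i.i.d. normal logarithmic returns under a GBM.

      import numpy as np

      # hypothetical daily closing prices of a gold share index
      prices = np.array([100.0, 101.2, 100.8, 102.5, 103.1, 102.9, 104.0])
      dt = 1.0 / 252                       # one trading day, in years

      log_ret = np.diff(np.log(prices))    # logarithmic returns

      sigma_hat = log_ret.std(ddof=1) / np.sqrt(dt)          # annualized volatility
      mu_hat = log_ret.mean() / dt + 0.5 * sigma_hat ** 2    # GBM drift

      print(f"mu ~ {mu_hat:.3f} per year, sigma ~ {sigma_hat:.3f} per sqrt(year)")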

  9. Barriers that practitioners face when initiating insulin therapy in general practice settings and how they can be overcome

    PubMed Central

    Bin Rsheed, Abdulaziz; Chenoweth, Ian

    2017-01-01

    AIM To explore primary care physicians’ perspectives on possible barriers to the use of insulin. METHODS This systematic review was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Eight electronic databases were searched (between January 1, 1994 and August 31, 2014) for relevant studies. A search for grey literature and a review of the references in the retrieved studies were also conducted. Studies that focused on healthcare providers’ perspectives on possible barriers to insulin initiation with type 2 diabetic patients were included, as well as articles suggesting solutions for these barriers. Review articles and studies that only considered patients’ perspectives were excluded. RESULTS A total of 19 studies met the inclusion criteria and were therefore included in this study: 10 of these studies used qualitative methods, 8 used quantitative methods and 1 used mixed methods. Studies included a range of different health care settings. The findings are reported under four broad categories: The perceptions of primary care physicians about the barriers to initiate insulin therapy for type 2 diabetes patients, how primary care physicians assess patients prior to initiating insulin, professional roles and possible solutions to overcome these barriers. The barriers described were many and covered doctor, patient, system and technological aspects. Interventions that focused on doctor training and support, or IT-based decision support were few, and did not result in significant improvement. CONCLUSION Primary care physicians’ known delay in insulin initiation is multifactorial. Published reports of attempts to find solutions for these barriers were limited in number. PMID:28138362

  10. Waste management barriers in developing country hospitals: Case study and AHP analysis.

    PubMed

    Delmonico, Diego V de Godoy; Santos, Hugo H Dos; Pinheiro, Marco Ap; de Castro, Rosani; de Souza, Regiane M

    2018-01-01

    Healthcare waste management is an essential field for both researchers and practitioners. Although there have been few studies using statistical methods for its evaluation, it has been the subject of several studies in different contexts. Furthermore, the known precarious practices for waste management in developing countries raise questions about its potential barriers. This study aims to investigate the barriers in healthcare waste management and their relevance. For this purpose, this paper analyses waste management practices in two Brazilian hospitals by using case study and the Analytic Hierarchy Process method. The barriers were organized into three categories - human factors, management, and infrastructure, and the main findings suggest that cost and employee awareness were the most significant barriers. These results highlight the main barriers to more sustainable waste management, and provide an empirical basis for multi-criteria evaluation of the literature.

  11. Effect of SiN x diffusion barrier thickness on the structural properties and photocatalytic activity of TiO2 films obtained by sol-gel dip coating and reactive magnetron sputtering.

    PubMed

    Ghazzal, Mohamed Nawfal; Aubry, Eric; Chaoui, Nouari; Robert, Didier

    2015-01-01

    We investigate the effect of the thickness of the silicon nitride (SiN x ) diffusion barrier on the structural properties and photocatalytic efficiency of TiO2 films obtained with different processes. We show that the structural properties and photocatalytic efficiency of TiO2 films produced using soft chemistry (sol-gel) and physical methods (reactive sputtering) are affected differentially by the intercalating SiN x diffusion barrier. Increasing the thickness of the SiN x diffusion barrier induced a gradual decrease of the crystallite size of TiO2 films obtained by the sol-gel process. However, TiO2 obtained using the reactive sputtering method showed no dependence on the thickness of the SiN x diffusion barrier. The SiN x diffusion barrier showed a beneficial effect on the photocatalytic efficiency of TiO2 films regardless of the synthesis method used. The proposed mechanism leading to the improvement in the photocatalytic efficiency of the TiO2 films obtained by each process was discussed.

  12. Granuloma Weight and the α1-acute Phase Protein Response in Rats Injected with Turpentine

    PubMed Central

    Darcy, D. A.

    1970-01-01

    Rats of 6 different age (and weight) groups were injected with turpentine subcutaneously in a single depot at 4 different doses per kg. body weight. In each age/weight group the weight of the turpentine granuloma produced at 48 hr was proportional to log turpentine dose. The 48 hr response of the α1-AP (acute phase) globulin was also proportional to log turpentine dose and was proportional to the granuloma weight. When rats of different age/weight groups were compared it was found that granuloma weight increased logarithmically with body weight for a given turpentine dose per kg. body weight. More remarkably, granuloma weight increased logarithmically with body weight for a constant volume of turpentine injected per rat, thus 0·2 ml. of turpentine gave an 0·65 g. granuloma in 60 g. (4-week old) rats and a 5 g. granuloma in 371 g. (40-week old) rats. The possibility of an age influence on this phenomenon was not excluded by these experiments. The α1-AP globulin response also increased logarithmically with body weight for a given turpentine dose per kg. body weight. For a constant volume of turpentine per rat, the response increased logarithmically with body weight and directly with granuloma weight. It was concluded that this acute phase protein response is closely correlated with the size of the lesion. There was some evidence, however, that the age of the rat may make a contribution to the response. The histology of the granulomata is described. PMID:4190826

  13. The critical role of logarithmic transformation in Nernstian equilibrium potential calculations.

    PubMed

    Sawyer, Jemima E R; Hennebry, James E; Revill, Alexander; Brown, Angus M

    2017-06-01

    The membrane potential, arising from uneven distribution of ions across cell membranes containing selectively permeable ion channels, is of fundamental importance to cell signaling. The necessity of maintaining the membrane potential may be appreciated by expressing Ohm's law as current = voltage/resistance and recognizing that no current flows when voltage = 0, i.e., transmembrane voltage gradients, created by uneven transmembrane ion concentrations, are an absolute requirement for the generation of currents that precipitate the action and synaptic potentials that consume >80% of the brain's energy budget and underlie the electrical activity that defines brain function. The concept of the equilibrium potential is vital to understanding the origins of the membrane potential. The equilibrium potential defines a potential at which there is no net transmembrane ion flux, where the work created by the concentration gradient is balanced by the transmembrane voltage difference, and derives from a relationship describing the work done by the diffusion of ions down a concentration gradient. The Nernst equation predicts the equilibrium potential and, as such, is fundamental to understanding the interplay between transmembrane ion concentrations and equilibrium potentials. Logarithmic transformation of the ratio of internal and external ion concentrations lies at the heart of the Nernst equation, but most undergraduate neuroscience students have little understanding of the logarithmic function. To compound this, no current undergraduate neuroscience textbooks describe the effect of logarithmic transformation in appreciable detail, leaving the majority of students with little insight into how ion concentrations determine, or how ion perturbations alter, the membrane potential. Copyright © 2017 the American Physiological Society.
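
    The logarithmic dependence discussed above is easy to demonstrate numerically; the sketch below uses hypothetical potassium concentrations and the standard Nernst relation E = (RT/zF) ln([ion]out/[ion]in).

      import math

      R = 8.314      # gas constant, J mol^-1 K^-1
      F = 96485.0    # Faraday constant, C mol^-1
      T = 310.0      # body temperature, K
      z = 1          # valence of K+

      def nernst(c_out, c_in):
          """Equilibrium potential (volts) for the given outside/inside concentrations."""
          return (R * T) / (z * F) * math.log(c_out / c_in)

      # hypothetical K+ concentrations (mM): each doubling of [K+]out shifts E_K by a
      # fixed amount, the signature of the logarithmic transformation
      for c_out in (2.5, 5.0, 10.0, 20.0):
          print(f"[K+]out = {c_out:5.1f} mM -> E_K = {1000 * nernst(c_out, 140.0):7.1f} mV")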

  14. Finite-time singularities in the dynamics of hyperinflation in an economy

    NASA Astrophysics Data System (ADS)

    Szybisz, Martín A.; Szybisz, Leszek

    2009-08-01

    The dynamics of hyperinflation episodes is studied by applying a theoretical approach based on collective “adaptive inflation expectations” with a positive nonlinear feedback proposed in the literature. In such a description it is assumed that the growth rate of the logarithmic price, r(t), changes with a velocity obeying a power law, which leads to a finite-time singularity at a critical time tc. By revising that model we found that, indeed, there are two types of singular solutions for the logarithmic price, p(t). One is given by the already reported form p(t) ≈ (tc − t)^(−α) (with α > 0) and the other exhibits a logarithmic divergence, p(t) ≈ ln[1/(tc − t)]. The singularity is a signature for an economic crash. In the present work we express p(t) explicitly in terms of the parameters introduced throughout the formulation, avoiding the use of any combination of them defined in the original paper. This procedure allows us to examine simultaneously the time series of r(t) and p(t), performing a linked error analysis of the determined parameters. For the first time this approach is applied to analyzing the very extreme historical hyperinflations that occurred in Greece (1941-1944) and Yugoslavia (1991-1994). The case of Greece is compatible with a logarithmic singularity. The study is completed with an analysis of the hyperinflation spiral currently experienced in Zimbabwe. According to our results, an economic crash in this country is predicted to be imminent. The robustness of the results to changes of the initial time of the series and the differences with a linear feedback are discussed.
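
    In LaTeX form, the two singular solutions for the logarithmic price quoted above read (with t_c the critical time):

      p(t) \approx (t_c - t)^{-\alpha} \quad (\alpha > 0)
      \qquad \text{or} \qquad
      p(t) \approx \ln\!\frac{1}{t_c - t} .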

  15. Multiplicative surrogate standard deviation: a group metric for the glycemic variability of individual hospitalized patients.

    PubMed

    Braithwaite, Susan S; Umpierrez, Guillermo E; Chase, J Geoffrey

    2013-09-01

    Group metrics are described to quantify blood glucose (BG) variability of hospitalized patients. The "multiplicative surrogate standard deviation" (MSSD) is the reverse-transformed group mean of the standard deviations (SDs) of the logarithmically transformed BG data set of each patient. The "geometric group mean" (GGM) is the reverse-transformed group mean of the means of the logarithmically transformed BG data set of each patient. Before reverse transformation is performed, the mean of means and mean of SDs each has its own SD, which becomes a multiplicative standard deviation (MSD) after reverse transformation. Statistical predictions and comparisons of parametric or nonparametric tests remain valid after reverse transformation. A subset of a previously published BG data set of 20 critically ill patients from the first 72 h of treatment under the SPRINT protocol was transformed logarithmically. After rank ordering according to the SD of the logarithmically transformed BG data of each patient, the cohort was divided into two equal groups, those having lower or higher variability. For the entire cohort, the GGM was 106 (÷/× 1.07) mg/dl, and MSSD was 1.24 (÷/× 1.07). For the subgroups having lower and higher variability, respectively, the GGM did not differ, 104 (÷/× 1.07) versus 109 (÷/× 1.07) mg/dl, but the MSSD differed, 1.17 (÷/× 1.03) versus 1.31 (÷/× 1.05), p = .00004. By using the MSSD with its MSD, groups can be characterized and compared according to glycemic variability of individual patient members. © 2013 Diabetes Technology Society.
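
    A minimal NumPy sketch of the two group metrics as defined above (hypothetical per-patient glucose series): the GGM is the exponentiated group mean of the per-patient means of log BG, and the MSSD is the exponentiated group mean of the per-patient SDs of log BG.

      import numpy as np

      # hypothetical BG series (mg/dl), one list per patient
      patients = [
          [110, 95, 130, 120, 105],
          [180, 90, 150, 200, 110],
          [100, 102, 98, 101, 99],
      ]

      log_means = [np.mean(np.log(p)) for p in patients]        # per-patient mean of log BG
      log_sds = [np.std(np.log(p), ddof=1) for p in patients]   # per-patient SD of log BG

      ggm = np.exp(np.mean(log_means))    # geometric group mean
      mssd = np.exp(np.mean(log_sds))     # multiplicative surrogate standard deviation

      print(f"GGM ~ {ggm:.0f} mg/dl, MSSD ~ {mssd:.2f} (multiplicative)")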

  16. Wave propagation model of heat conduction and group speed

    NASA Astrophysics Data System (ADS)

    Zhang, Long; Zhang, Xiaomin; Peng, Song

    2018-03-01

    In view of the finite relaxation model of non-Fourier's law, the Cattaneo and Vernotte (CV) model and Fourier's law are presented in this work for comparing wave propagation modes. Independent variable translation is applied to solve the partial differential equation. Results show that the general form of the time spatial distribution of temperature for the three media comprises two solutions: those corresponding to the positive and negative logarithmic heating rates. The former shows that a group of heat waves whose spatial distribution follows the exponential function law propagates at a group speed; the speed of propagation is related to the logarithmic heating rate. The total speed of all the possible heat waves can be combined to form the group speed of the wave propagation. The latter indicates that the spatial distribution of temperature, which follows the exponential function law, decays with time. These features show that propagation accelerates when heated and decelerates when cooled. For the model media that follow Fourier's law and correspond to the positive heat rate of heat conduction, the propagation mode is also considered the propagation of a group of heat waves because the group speed has no upper bound. For the finite relaxation model with non-Fourier media, the interval of group speed is bounded and the maximum speed can be obtained when the logarithmic heating rate is exactly the reciprocal of relaxation time. And for the CV model with a non-Fourier medium, the interval of group speed is also bounded and the maximum value can be obtained when the logarithmic heating rate is infinite.

  17. Logarithmic and power law input-output relations in sensory systems with fold-change detection.

    PubMed

    Adler, Miri; Mayo, Avi; Alon, Uri

    2014-08-01

    Two central biophysical laws describe sensory responses to input signals. One is a logarithmic relationship between input and output, and the other is a power law relationship. These laws are sometimes called the Weber-Fechner law and the Stevens power law, respectively. The two laws are found in a wide variety of human sensory systems including hearing, vision, taste, and weight perception; they also occur in the responses of cells to stimuli. However the mechanistic origin of these laws is not fully understood. To address this, we consider a class of biological circuits exhibiting a property called fold-change detection (FCD). In these circuits the response dynamics depend only on the relative change in input signal and not its absolute level, a property which applies to many physiological and cellular sensory systems. We show analytically that by changing a single parameter in the FCD circuits, both logarithmic and power-law relationships emerge; these laws are modified versions of the Weber-Fechner and Stevens laws. The parameter that determines which law is found is the steepness (effective Hill coefficient) of the effect of the internal variable on the output. This finding applies to major circuit architectures found in biological systems, including the incoherent feed-forward loop and nonlinear integral feedback loops. Therefore, if one measures the response to different fold changes in input signal and observes a logarithmic or power law, the present theory can be used to rule out certain FCD mechanisms, and to predict their cooperativity parameter. We demonstrate this approach using data from eukaryotic chemotaxis signaling.

  18. Sediment data sources and estimated annual suspended-sediment loads of rivers and streams in Colorado

    USGS Publications Warehouse

    Elliott, J.G.; DeFeyter, K.L.

    1986-01-01

    Sources of sediment data collected by several government agencies through water year 1984 are summarized for Colorado. The U.S. Geological Survey has collected suspended-sediment data at 243 sites; these data are stored in the U.S. Geological Survey 's water data storage and retrieval system. The U.S. Forest Service has collected suspended-sediment and bedload data at an additional 225 sites, and most of these data are stored in the U.S. Environmental Protection Agency 's water-quality-control information system. Additional unpublished sediment data are in the possession of the collecting entities. Annual suspended-sediment loads were computed for 133 U.S. Geological Survey sediment-data-collection sites using the daily mean water-discharge/sediment-transport-curve method. Sediment-transport curves were derived for each site by one of three techniques: (1) Least-squares linear regression of all pairs of suspended-sediment and corresponding water-discharge data, (2) least-squares linear regression of data sets subdivided on the basis of hydrograph season; and (3) graphical fit to a logarithm-logarithm plot of data. The curve-fitting technique used for each site depended on site-specific characteristics. Sediment-data sources and estimates of annual loads of suspended, bed, and total sediment from several other reports also are summarized. (USGS)
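
    Technique (1) above, a least-squares transport curve fitted on logarithmic axes, can be sketched as follows; the discharge/concentration pairs are hypothetical, and the fit is log C = a + b log Q.

      import numpy as np

      # hypothetical paired samples: water discharge Q (ft^3/s) and suspended-sediment
      # concentration C (mg/L)
      Q = np.array([12.0, 35.0, 80.0, 150.0, 400.0, 900.0])
      C = np.array([20.0, 55.0, 140.0, 260.0, 700.0, 1600.0])

      b, a = np.polyfit(np.log10(Q), np.log10(C), 1)   # log-log least squares

      def transport_curve(q):
          """Concentration predicted by the fitted power-law rating curve."""
          return 10.0 ** (a + b * np.log10(q))

      print(f"C ~ {10.0 ** a:.2f} * Q^{b:.2f}")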

  19. AMICO: optimized detection of galaxy clusters in photometric surveys

    NASA Astrophysics Data System (ADS)

    Bellagamba, Fabio; Roncarelli, Mauro; Maturi, Matteo; Moscardini, Lauro

    2018-02-01

    We present Adaptive Matched Identifier of Clustered Objects (AMICO), a new algorithm for the detection of galaxy clusters in photometric surveys. AMICO is based on the Optimal Filtering technique, which maximizes the signal-to-noise ratio (S/N) of the clusters. In this work, we focus on the new iterative approach to the extraction of cluster candidates from the map produced by the filter. In particular, we provide a definition of membership probability for the galaxies close to any cluster candidate, which allows us to remove its imprint from the map and thus detect smaller structures. As demonstrated in our tests, this method deblends close-by and aligned structures in more than 50 per cent of the cases for objects at a radial distance equal to 0.5 × R200 or a redshift distance equal to 2 × σz, where σz is the typical uncertainty of the photometric redshifts. Running AMICO on mocks derived from N-body simulations and semi-analytical modelling of the galaxy evolution, we obtain a consistent mass-amplitude relation through the redshift range 0.3 < z < 1, with a logarithmic slope of ∼0.55 and a logarithmic scatter of ∼0.14. The fraction of false detections decreases steeply with S/N and is negligible at S/N > 5.

  20. Three-dimensional representations of photo-induced electron transfer rates in pyrene-(CH2)n-N,N'-dimethylaniline systems obtained by three electron transfer theories.

    PubMed

    Rujkorakarn, Rong; Tanaka, Fumio

    2009-01-01

    The observed rates of photo-induced electron transfer (ET) from N,N'-dimethylaniline (DMA) to the excited pyrene (Py) in confined systems of pyrene-(CH2)n-N,N'-dimethylaniline (PnD: n = 1-3) were studied by molecular dynamics simulation (MD) and three kinds of electron transfer theories. ET parameters contained in Marcus theory (M theory), Bixon and Jortner theory (BJ theory) and Kakitani and Mataga theory (KM theory) were determined so as to fit the calculated fluorescence intensities to those obtained from the observed ET rates, according to a non-linear least squares method. Three-dimensional profiles of the logarithm of the calculated ET rates, as functions of two of the three ET parameters R, ε0 and −ΔG°, were systematically examined with the best-fit ET parameters of P1D. Bell-shaped dependences of the ET rate on R, on ε0, and on −ΔG° were predicted by M theory and KM theory. The profiles of the logarithm of the ET rate calculated by BJ theory exhibited oscillatory dependences not only on −ΔG°, but also on R and on ε0. The relationship between the ET state and the charge-transfer complex was discussed in terms of BJ theory.

  1. Water quality trend analysis for the Karoon River in Iran.

    PubMed

    Naddafi, K; Honari, H; Ahmadi, M

    2007-11-01

    The Karoon River basin, with an area of 67,000 km², is located in the southern part of Iran. Discharge and water-quality variables have been monitored monthly at the Gatvand and Khorramshahr stations of the Karoon River for the periods 1967-2005 and 1969-2005, respectively. In this paper the time series of monthly water-quality parameters and discharge were analyzed statistically to test for trends and to identify the best-fitting models. The Kolmogorov-Smirnov test was used to select the theoretical distribution that best fitted the data, and simple regression was used to examine the concentration-time relationships. These relationships are more strongly correlated at the Khorramshahr station than at the Gatvand station; the exponential model fits the concentration-time relationships better at Khorramshahr, whereas the logarithmic model fits better at Gatvand. The correlation coefficients are positive for all variables at the Khorramshahr station; at the Gatvand station they are also positive except for magnesium (Mg2+), bicarbonate (HCO3-), and temporary hardness, which show decreasing trends. Overall, the logarithmic and exponential models best describe the concentration-time relationships at the two stations.

  2. Universality of the logarithmic velocity profile restored

    NASA Astrophysics Data System (ADS)

    Luchini, Paolo

    2017-11-01

    The logarithmic velocity profile of wall-bounded turbulent flow, despite its widespread adoption in research and in teaching, exhibits discrepancies with both experiments and numerical simulations that have been repeatedly observed in the literature; serious doubts ensued about its precise form and universality, leading to the formulation of alternate theories and hindering ongoing experimental efforts to measure von Kármán's constant. By comparing different geometries of pipe, plane-channel and plane-Couette flow, here we show that such discrepancies can be physically interpreted, and analytically accounted for, through an equally universal higher-order correction caused by the pressure gradient. Inclusion of this term produces a tenfold increase in the adherence of the predicted profile to existing experiments and numerical simulations in all three geometries. Universality of the logarithmic law then emerges beyond doubt and a satisfactorily simple formulation is established. Among the consequences of this formulation is a strongly increased confidence that the Reynolds number of present-day direct numerical simulations is actually high enough to uncover asymptotic behaviour, but research efforts are still needed in order to increase their accuracy.
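
    For reference, the classical logarithmic law discussed above is, in wall units,

      u^+ = \frac{1}{\kappa} \ln y^+ + B ,

    with \kappa von Kármán's constant and B the additive constant; the higher-order pressure-gradient correction derived in the paper is an addition to this baseline and is not reproduced here.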

  3. Optimizing the Determination of Roughness Parameters for Model Urban Canopies

    NASA Astrophysics Data System (ADS)

    Huq, Pablo; Rahman, Auvi

    2018-05-01

    We present an objective optimization procedure to determine the roughness parameters for very rough boundary-layer flow over model urban canopies. For neutral stratification the mean velocity profile above a model urban canopy is described by the logarithmic law together with the set of roughness parameters of displacement height d, roughness length z_0 , and friction velocity u_* . Traditionally, values of these roughness parameters are obtained by fitting the logarithmic law through (all) the data points comprising the velocity profile. The new procedure generates unique velocity profiles from subsets or combinations of the data points of the original velocity profile, after which all possible profiles are examined. Each of the generated profiles is fitted to the logarithmic law for a sequence of values of d, with the representative value of d obtained from the minima of the summed least-squares errors for all the generated profiles. The representative values of z_0 and u_* are identified by the peak in the bivariate histogram of z_0 and u_* . The methodology has been verified against laboratory datasets of flow above model urban canopies.
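
    The fitting step that the optimization wraps can be sketched as follows (hypothetical velocity profile): for each trial displacement height d, a least-squares fit of U(z) = (u*/κ) ln((z − d)/z0) yields u* and z0, and d is chosen to minimize the residual.

      import numpy as np

      kappa = 0.40
      # hypothetical mean-velocity profile above a model canopy: heights z (m), speeds U (m/s)
      z = np.array([0.15, 0.20, 0.30, 0.45, 0.60, 0.80])
      U = np.array([2.1, 2.6, 3.3, 4.0, 4.5, 5.0])

      best = None
      for d in np.linspace(0.0, 0.12, 121):            # scan candidate displacement heights
          x = np.log(z - d)
          slope, intercept = np.polyfit(x, U, 1)       # U = slope*ln(z - d) + intercept
          resid = np.sum((U - (slope * x + intercept)) ** 2)
          u_star = kappa * slope                       # since slope = u*/kappa
          z0 = np.exp(-intercept / slope)              # since intercept = -(u*/kappa) ln z0
          if best is None or resid < best[0]:
              best = (resid, d, u_star, z0)

      _, d, u_star, z0 = best
      print(f"d ~ {d:.3f} m, u* ~ {u_star:.3f} m/s, z0 ~ {z0:.4f} m")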

  4. A spring-block analogy for the dynamics of stock indexes

    NASA Astrophysics Data System (ADS)

    Sándor, Bulcsú; Néda, Zoltán

    2015-06-01

    A spring-block chain placed on a running conveyor belt is considered for modeling stylized facts observed in the dynamics of stock indexes. Individual stocks are modeled by the blocks, while the stock-stock correlations are introduced via simple elastic forces acting in the springs. The dragging effect of the moving belt corresponds to the expected economic growth. The spring-block system produces collective behavior and avalanche like phenomena, similar to the ones observed in stock markets. An artificial index is defined for the spring-block chain, and its dynamics is compared with the one measured for the Dow Jones Industrial Average. For certain parameter regions the model reproduces qualitatively well the dynamics of the logarithmic index, the logarithmic returns, the distribution of the logarithmic returns, the avalanche-size distribution and the distribution of the investment horizons. A noticeable success of the model is that it is able to account for the gain-loss asymmetry observed in the inverse statistics. Our approach has mainly a pedagogical value, bridging between a complex socio-economic phenomena and a basic (mechanical) model in physics.

  5. Resummed Differential Cross Sections for Top-Quark Pairs at the LHC.

    PubMed

    Pecjak, Benjamin D; Scott, Darren J; Wang, Xing; Yang, Li Lin

    2016-05-20

    We present state of the art resummation predictions for differential cross sections in top-quark pair production at the LHC. They are derived from a formalism which allows the simultaneous resummation of both soft and small-mass logarithms, which endanger the convergence of fixed-order perturbative series in the boosted regime, where the partonic center-of-mass energy is much larger than the mass of the top quark. We combine such a double resummation at next-to-next-to-leading logarithmic′ (NNLL′) accuracy with standard soft-gluon resummation at next-to-next-to-leading logarithmic accuracy and with next-to-leading-order calculations, so that our results are applicable throughout the whole phase space. We find that the resummation effects on the differential distributions are significant, bringing theoretical predictions into better agreement with experimental data compared to fixed-order calculations. Moreover, such effects are not well described by the next-to-next-to-leading-order approximation of the resummation formula, especially in the high-energy tails of the distributions, highlighting the importance of all-orders resummation in dedicated studies of boosted top production.

  6. Electronic clinical predictive thermometer using logarithm for temperature prediction

    NASA Technical Reports Server (NTRS)

    Cambridge, Vivien J. (Inventor); Koger, Thomas L. (Inventor); Nail, William L. (Inventor); Diaz, Patrick (Inventor)

    1998-01-01

    A thermometer that rapidly predicts body temperature based on the temperature signals received from a temperature sensing probe when it comes into contact with the body. The logarithms of the differences between the temperature signals in a selected time frame are determined. A line is fit through the logarithms and the slope of the line is used as a system time constant in predicting the final temperature of the body. The time constant in conjunction with predetermined additional constants are used to compute the predicted temperature. Data quality in the time frame is monitored and if unacceptable, a different time frame of temperature signals is selected for use in prediction. The processor switches to a monitor mode if data quality over a limited number of time frames is unacceptable. Determining the start time on which the measurement time frame for prediction is based is performed by summing the second derivatives of temperature signals over time frames. When the sum of second derivatives in a particular time frame exceeds a threshold, the start time is established.
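
    The prediction idea can be sketched as follows; the probe readings are hypothetical, and the sketch assumes a single-exponential approach to equilibrium, so the logarithms of successive temperature differences fall on a line whose slope gives the system time constant.

      import numpy as np

      dt = 1.0                                                 # sampling interval, s (hypothetical)
      temps = np.array([35.0, 35.9, 36.5, 36.9, 37.17, 37.35]) # probe readings, deg C

      diffs = np.diff(temps)                     # successive temperature increments
      t_mid = dt * np.arange(diffs.size)

      slope, intercept = np.polyfit(t_mid, np.log(diffs), 1)
      tau = -1.0 / slope                         # system time constant from the fitted slope

      # the remaining rise is a geometric series of future increments
      r = np.exp(-dt / tau)
      predicted_final = temps[-1] + diffs[-1] * r / (1.0 - r)
      print(f"tau ~ {tau:.1f} s, predicted final temperature ~ {predicted_final:.2f} deg C")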

  7. Simulated Stochastic Approximation Annealing for Global Optimization with a Square-Root Cooling Schedule

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, Faming; Cheng, Yichen; Lin, Guang

    2014-06-13

    Simulated annealing has been widely used in the solution of optimization problems. As known by many researchers, the global optima cannot be guaranteed to be located by simulated annealing unless a logarithmic cooling schedule is used. However, the logarithmic cooling schedule is so slow that no one can afford to have such a long CPU time. This paper proposes a new stochastic optimization algorithm, the so-called simulated stochastic approximation annealing algorithm, which is a combination of simulated annealing and the stochastic approximation Monte Carlo algorithm. Under the framework of stochastic approximation Markov chain Monte Carlo, it is shown that the new algorithm can work with a cooling schedule in which the temperature can decrease much faster than in the logarithmic cooling schedule, e.g., a square-root cooling schedule, while guaranteeing the global optima to be reached when the temperature tends to zero. The new algorithm has been tested on a few benchmark optimization problems, including feed-forward neural network training and protein-folding. The numerical results indicate that the new algorithm can significantly outperform simulated annealing and other competitors.
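
    The difference between the two cooling schedules mentioned above is easy to see numerically; the constants below are hypothetical, with the logarithmic schedule written as T_k = c / ln(k + 1) and the square-root schedule as T_k = T_0 / sqrt(k).

      import math

      c, T0 = 10.0, 10.0   # hypothetical schedule constants

      def temp_logarithmic(k):
          """Classical schedule that preserves the convergence guarantee, but decays very slowly."""
          return c / math.log(k + 1)

      def temp_square_root(k):
          """Much faster schedule of the kind used with stochastic approximation annealing."""
          return T0 / math.sqrt(k)

      for k in (10, 100, 1000, 10_000, 100_000):
          print(f"k={k:>7}: log schedule T={temp_logarithmic(k):7.3f}, "
                f"sqrt schedule T={temp_square_root(k):7.3f}")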

  8. Fragmentation functions beyond fixed order accuracy

    NASA Astrophysics Data System (ADS)

    Anderle, Daniele P.; Kaufmann, Tom; Stratmann, Marco; Ringer, Felix

    2017-03-01

    We give a detailed account of the phenomenology of all-order resummations of logarithmically enhanced contributions at small momentum fraction of the observed hadron in semi-inclusive electron-positron annihilation and the timelike scale evolution of parton-to-hadron fragmentation functions. The formalism to perform resummations in Mellin moment space is briefly reviewed, and all relevant expressions up to next-to-next-to-leading logarithmic order are derived, including their explicit dependence on the factorization and renormalization scales. We discuss the details pertinent to a proper numerical implementation of the resummed results comprising an iterative solution to the timelike evolution equations, the matching to known fixed-order expressions, and the choice of the contour in the Mellin inverse transformation. First extractions of parton-to-pion fragmentation functions from semi-inclusive annihilation data are performed at different logarithmic orders of the resummations in order to estimate their phenomenological relevance. To this end, we compare our results to corresponding fits up to fixed, next-to-next-to-leading order accuracy and study the residual dependence on the factorization scale in each case.

  9. Reliability of environmental sampling culture results using the negative binomial intraclass correlation coefficient.

    PubMed

    Aly, Sharif S; Zhao, Jianyang; Li, Ben; Jiang, Jiming

    2014-01-01

    The Intraclass Correlation Coefficient (ICC) is commonly used to estimate the similarity between quantitative measures obtained from different sources. Overdispersed data are traditionally transformed so that a linear mixed model (LMM) based ICC can be estimated. A common transformation used is the natural logarithm. The reliability of environmental sampling of fecal slurry on freestall pens has been estimated for Mycobacterium avium subsp. paratuberculosis using natural-logarithm-transformed culture results. Recently, the negative binomial ICC was defined based on a generalized linear mixed model for negative binomial distributed data. The current study reports on the negative binomial ICC estimate, which includes fixed effects, using culture results of environmental samples. Simulations using a wide variety of inputs and negative binomial distribution parameters (r; p) showed better performance of the new negative binomial ICC compared to the ICC based on the LMM, even when the negative binomial data were logarithm- and square-root-transformed. A second comparison that targeted a wider range of ICC values showed that the mean of the estimated ICC closely approximated the true ICC.

  10. Species-abundance distribution patterns of soil fungi: contribution to the ecological understanding of their response to experimental fire in Mediterranean maquis (southern Italy).

    PubMed

    Persiani, Anna Maria; Maggi, Oriana

    2013-01-01

    Experimental fires, of both low and high intensity, were lit during summer 2000 and the following 2 y in the Castel Volturno Nature Reserve, southern Italy. Soil samples were collected Jul 2000-Jul 2002 to analyze the soil fungal community dynamics. Species abundance distribution patterns (geometric, logarithmic, log normal, broken-stick) were compared. We plotted datasets with information both on species richness and abundance for total, xerotolerant and heat-stimulated soil microfungi. The xerotolerant fungi conformed to a broken-stick model for both the low- and high intensity fires at 7 and 84 d after the fire; their distribution subsequently followed logarithmic models in the 2 y following the fire. The distribution of the heat-stimulated fungi changed from broken-stick to logarithmic models and eventually to a log-normal model during the post-fire recovery. Xerotolerant and, to a far greater extent, heat-stimulated soil fungi acquire an important functional role following soil water stress and/or fire disturbance; these disturbances let them occupy unsaturated habitats and become increasingly abundant over time.

  11. Error Analysis for RADAR Neighbor Matching Localization in Linear Logarithmic Strength Varying Wi-Fi Environment

    PubMed Central

    Tian, Zengshan; Xu, Kunjie; Yu, Xiang

    2014-01-01

    This paper studies the statistical errors for the fingerprint-based RADAR neighbor matching localization with the linearly calibrated reference points (RPs) in logarithmic received signal strength (RSS) varying Wi-Fi environment. To the best of our knowledge, little comprehensive analysis work has appeared on the error performance of neighbor matching localization with respect to the deployment of RPs. However, in order to achieve the efficient and reliable location-based services (LBSs) as well as the ubiquitous context-awareness in Wi-Fi environment, much attention has to be paid to the highly accurate and cost-efficient localization systems. To this end, the statistical errors by the widely used neighbor matching localization are significantly discussed in this paper to examine the inherent mathematical relations between the localization errors and the locations of RPs by using a basic linear logarithmic strength varying model. Furthermore, based on the mathematical demonstrations and some testing results, the closed-form solutions to the statistical errors by RADAR neighbor matching localization can be an effective tool to explore alternative deployment of fingerprint-based neighbor matching localization systems in the future. PMID:24683349

  12. Error analysis for RADAR neighbor matching localization in linear logarithmic strength varying Wi-Fi environment.

    PubMed

    Zhou, Mu; Tian, Zengshan; Xu, Kunjie; Yu, Xiang; Wu, Haibo

    2014-01-01

    This paper studies the statistical errors of fingerprint-based RADAR neighbor matching localization with linearly calibrated reference points (RPs) in a linear logarithmic received signal strength (RSS) varying Wi-Fi environment. To the best of our knowledge, little comprehensive analysis has appeared on the error performance of neighbor matching localization with respect to the deployment of RPs. However, in order to achieve efficient and reliable location-based services (LBSs) as well as ubiquitous context-awareness in Wi-Fi environments, much attention has to be paid to highly accurate and cost-efficient localization systems. To this end, the statistical errors of the widely used neighbor matching localization are discussed in detail in this paper to examine the inherent mathematical relations between the localization errors and the locations of RPs by using a basic linear logarithmic strength varying model. Furthermore, based on the mathematical demonstrations and some testing results, the closed-form solutions to the statistical errors of RADAR neighbor matching localization can be an effective tool for exploring alternative deployments of fingerprint-based neighbor matching localization systems in the future.

  13. Adjustments for the display of quantized ion channel dwell times in histograms with logarithmic bins.

    PubMed

    Stark, J A; Hladky, S B

    2000-02-01

    Dwell-time histograms are often plotted as part of patch-clamp investigations of ion channel currents. The advantages of plotting these histograms with a logarithmic time axis were demonstrated in earlier work (J. Physiol. (Lond.) 378:141-174; Pflügers Arch. 410:530-553; Biophys. J. 52:1047-1054). Sigworth and Sine argued that the interpretation of such histograms is simplified if the counts are presented in a manner similar to that of a probability density function. However, when ion channel records are recorded as a discrete time series, the dwell times are quantized. As a result, the mapping of dwell times to logarithmically spaced bins is highly irregular; bins may be empty, and significant irregularities may extend beyond the duration of 100 samples. Using simple approximations based on the nature of the binning process and the transformation rules for probability density functions, we develop adjustments for the display of the counts to compensate for this effect. Tests with simulated data suggest that this procedure provides a faithful representation of the data.
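
    As a rough illustration of the irregular mapping described above, the following sketch log-bins synthetic quantized dwell times; it only illustrates the problem, not the authors' adjustment procedure, and the sampling interval, mean dwell time and bin width are arbitrary choices:

        import numpy as np

        # Synthetic quantized dwell times: an exponential distribution sampled at interval dt
        rng = np.random.default_rng(0)
        dt = 1.0e-4                      # sampling interval (s); dwell times are multiples of dt
        tau = 2.0e-3                     # assumed mean dwell time (s)
        dwells = rng.exponential(tau, 20000)
        dwells_q = np.maximum(np.round(dwells / dt), 1.0) * dt

        # Logarithmically spaced bin edges, 10 bins per decade
        edges = 10.0 ** np.arange(np.log10(dt), np.log10(dwells_q.max()) + 0.1, 0.1)
        counts, _ = np.histogram(dwells_q, bins=edges)

        # Naive density-style display (counts / bin width / N); short-time bins map irregularly
        density = counts / (np.diff(edges) * dwells_q.size)
        print("empty log bins:", int((counts == 0).sum()), "of", counts.size)

    With these settings the second and several other short-time bins contain no attainable quantized dwell times at all, which is exactly the irregularity the adjustments in the paper are meant to compensate for.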

  14. On the enhanced sampling over energy barriers in molecular dynamics simulations.

    PubMed

    Gao, Yi Qin; Yang, Lijiang

    2006-09-21

    We present here calculations of free energies of multidimensional systems using an efficient sampling method. The method uses a transformed potential energy surface, which allows an efficient sampling of both low and high energy spaces and accelerates transitions over barriers. It allows efficient sampling of the configuration space over and only over the desired energy range(s). It does not require predetermined or selected reaction coordinate(s). We apply this method to study the dynamics of slow barrier crossing processes in a disaccharide and a dipeptide system.

  15. YIP Formal Synthesis of Software-Based Control Protocols for Fractionated, Composable Autonomous Systems

    DTIC Science & Technology

    2016-07-08

    Systems Using Automata Theory and Barrier Certificates: We developed a sound but incomplete method for the computational verification of specifications... The method merges ideas from automata-based model checking with those from control theory, including so-called barrier certificates and optimization-based... "Automata theory meets barrier certificates: Temporal logic verification of nonlinear systems," IEEE Transactions on Automatic Control, 2015.

  16. Barrier modification in sub-barrier fusion reaction 64Ni+100Mo using Wong formula with Skyrme forces in semiclassical formalism

    NASA Astrophysics Data System (ADS)

    Kumar, Raj; Gupta, Raj K.

    2011-09-01

    We obtain the nuclear proximity potential by using the semiclassical extended Thomas-Fermi (ETF) approach in the Skyrme energy density formalism (SEDF), and use it in the extended l-summed Wong formula under the frozen density approximation. This method has the advantage of allowing the use of different Skyrme forces, giving different barriers. Thus, for a given reaction, we could choose a Skyrme force with proper barrier characteristics, not requiring extra "barrier lowering" or "barrier narrowing" for a best fit to data. For the 64Ni+100Mo reaction, the l-summed Wong formula, with the effects of deformations and orientations of nuclei included, fits the fusion-evaporation cross section data exactly for the force GSkI, while requiring additional barrier modifications for the forces SIII and SV. However, the same procedure applied to other similar reactions, such as 58,64Ni+58,64Ni, fits the data best for the SIII force. Hence, the barrier modification effects in the l-summed Wong expression depend on the choice of Skyrme force in the semiclassical ETF method.

  17. Validating the operational bias and hypothesis of universal exponent in landslide frequency-area distribution.

    PubMed

    Huang, Jr-Chuan; Lee, Tsung-Yu; Teng, Tse-Yang; Chen, Yi-Chin; Huang, Cho-Ying; Lee, Cheing-Tung

    2014-01-01

    The exponent decay in the landslide frequency-area distribution is widely used for assessing the consequences of landslides, and some studies argue that the slope of the exponent decay is universal and independent of mechanisms and environmental settings. However, the documented exponent slopes are diverse, and data processing is therefore hypothesized to be the source of this inconsistency. An elaborated statistical experiment and two actual landslide inventories were used here to demonstrate the influence of data processing on the determination of the exponent. Seven categories with different landslide numbers were generated from a predefined inverse-gamma distribution and then analyzed by three data processing procedures (logarithmic binning, LB; normalized logarithmic binning, NLB; and cumulative distribution function, CDF). Five different bin widths were also considered while applying LB and NLB. Following that, maximum likelihood estimation was used to estimate the exponent slopes. The results showed that the exponents estimated by CDF were unbiased, while LB and NLB performed poorly. The two binning-based methods led to considerable biases that increased with landslide number and bin width. The standard deviations of the estimated exponents depended not just on the landslide number but also on the binning method and bin width. Both extremely few and extremely plentiful landslide numbers reduced the confidence of the estimated exponents, which could be attributed to limited landslide numbers and considerable operational bias, respectively. The diverse documented exponents in the literature should therefore be adjusted accordingly. Our study strongly suggests that the considerable bias due to data processing and the data quality should be constrained in order to advance the understanding of landslide processes.
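
    A minimal sketch of the contrast between a binning-based fit and a CDF/maximum-likelihood estimate of a heavy-tail exponent; for simplicity it draws from a Pareto distribution rather than the inverse-gamma distribution used in the study, and all parameter values are illustrative:

        import numpy as np

        rng = np.random.default_rng(1)
        alpha_true, x_min, n = 2.4, 1.0, 5000
        # Pareto sample with density proportional to x**(-alpha_true) for x >= x_min
        x = x_min * (1.0 - rng.random(n)) ** (-1.0 / (alpha_true - 1.0))

        # (1) CDF-style maximum-likelihood estimate, no binning
        alpha_mle = 1.0 + n / np.sum(np.log(x / x_min))

        # (2) logarithmic binning followed by a least-squares fit in log-log space
        edges = np.logspace(np.log10(x_min), np.log10(x.max()), 25)
        counts, _ = np.histogram(x, bins=edges)
        widths, centers = np.diff(edges), np.sqrt(edges[:-1] * edges[1:])
        keep = counts > 0
        slope, _ = np.polyfit(np.log10(centers[keep]), np.log10(counts[keep] / widths[keep]), 1)

        print(f"true exponent {alpha_true}, MLE {alpha_mle:.2f}, log-binned fit {-slope:.2f}")

    Repeating this with different sample sizes and bin widths reproduces the qualitative point of the abstract: the likelihood-based estimate stays close to the true exponent, while the binned fit drifts with the binning choices.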

  18. [Comparison of commercial HIV-1 viral load tests by using proficiency test results in China, 2013-2015].

    PubMed

    Zhang, L; Jin, C; Jiang, Z; Tang, T; Jiang, Y; Pan, P L

    2017-09-10

    Objective: To compare the bio-equivalence among commercial HIV-1 viral load tests, including EasyQ HIV-1 v2.0 (EasyQ) from bioMerieux NucliSens of France; VERSANT HIV-1 RNA 3.0 assay (bDNA) from Siemens Healthcare Diagnostics of USA; COBAS AmpliPrep/COBAS TaqMan HIV-1 test (Taqman) from Roche Molecular Diagnosis of USA; Abbott Real Time HIV-1 Kit (M2000) from Abbott Molecular of USA; and two domestic HIV-1 viral load test kits (domestic kit) from DaAn Gene Company of Sun Yat-Sen University and Liaoning Bio-Pharmaceutical company of Northeast pharmaceutical group, by using proficiency test results in China from 2013 to 2015. Methods: A total of 2 954 proficiency test results, obtained from 22 positive samples in 6 proficiency tests across 155 laboratories conducted by China CDC, were analyzed for 2013-2015. The results from each sample were first logarithmically transformed and then grouped according to the method used, and the mean value of the logarithmic results was calculated. Subsequently, the 22 clusters of mean values were analyzed by Bland-Altman analysis for consistency and by linear regression analysis for interdependency. Results: The results indicated that, taking Taqman as the reference, EasyQ, M2000, bDNA and the domestic kits had good consistency (90%-100%) and interdependency. Conclusion: All the viral load tests were bio-equivalent. Moreover, according to the conversion formula derived from domestic proficiency test results, all viral load results could be converted, which is critical for epidemiological analysis.
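
    A minimal sketch of the log-transform plus Bland-Altman comparison described above, using synthetic paired measurements rather than the actual proficiency-test data:

        import numpy as np

        rng = np.random.default_rng(2)
        true_log = rng.uniform(2.0, 6.0, 22)                 # 22 samples, log10 copies/mL
        assay_a = true_log + rng.normal(0.00, 0.15, 22)      # reference assay
        assay_b = true_log + rng.normal(0.05, 0.15, 22)      # comparison assay

        diff = assay_b - assay_a                             # differences on the log10 scale
        bias = diff.mean()
        loa = 1.96 * diff.std(ddof=1)                        # 95% limits of agreement
        print(f"mean bias {bias:+.2f} log10; limits of agreement {bias - loa:.2f} to {bias + loa:.2f}")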

  19. C-Reactive Protein As a Marker of Melanoma Progression

    PubMed Central

    Fang, Shenying; Wang, Yuling; Sui, Dawen; Liu, Huey; Ross, Merrick I.; Gershenwald, Jeffrey E.; Cormier, Janice N.; Royal, Richard E.; Lucci, Anthony; Schacherer, Christopher W.; Gardner, Julie M.; Reveille, John D.; Bassett, Roland L.; Wang, Li-E; Wei, Qingyi; Amos, Christopher I.; Lee, Jeffrey E.

    2015-01-01

    Purpose To investigate the association between blood levels of C-reactive protein (CRP) in patients with melanoma and overall survival (OS), melanoma-specific survival (MSS), and disease-free survival. Patients and Methods Two independent sets of plasma samples from a total of 1,144 patients with melanoma (587 initial and 557 confirmatory) were available for CRP determination. Kaplan-Meier method and Cox regression were used to evaluate the relationship between CRP and clinical outcome. Among 115 patients who underwent sequential blood draws, we evaluated the relationship between change in disease status and change in CRP using nonparametric tests. Results Elevated CRP level was associated with poorer OS and MSS in the initial, confirmatory, and combined data sets (combined data set: OS hazard ratio, 1.44 per unit increase of logarithmic CRP; 95% CI, 1.30 to 1.59; P < .001; MSS hazard ratio, 1.51 per unit increase of logarithmic CRP; 95% CI, 1.36 to 1.68; P < .001). These findings persisted after multivariable adjustment. As compared with CRP < 10 mg/L, CRP ≥ 10 mg/L conferred poorer OS in patients with any-stage, stage I/II, or stage III/IV disease and poorer disease-free survival in those with stage I/II disease. In patients who underwent sequential evaluation of CRP, an association was identified between an increase in CRP and melanoma disease progression. Conclusion CRP is an independent prognostic marker in patients with melanoma. CRP measurement should be considered for incorporation into prospective studies of outcome in patients with melanoma and clinical trials of systemic therapies for those with melanoma. PMID:25779565

  20. Evaluation of Vertical Lacunarity Profiles in Forested Areas Using Airborne Laser Scanning Point Clouds

    NASA Astrophysics Data System (ADS)

    Székely, B.; Kania, A.; Standovár, T.; Heilmeier, H.

    2016-06-01

    The horizontal variation and vertical layering of the vegetation are important properties of the canopy structure determining the habitat; the three-dimensional (3D) distribution of objects (shrub layers, understory vegetation, etc.) is related to environmental factors (e.g., illumination, visibility). It has been shown that gaps in forests and mosaic-like structures are essential to biodiversity; various methods have been introduced to quantify this property. As the distribution of gaps in the vegetation is a multi-scale phenomenon, scale-independent methods are preferred in order to capture it in its entirety; one of these is the calculation of lacunarity. We used Airborne Laser Scanning point clouds measured over a forest plantation situated in a former floodplain. The flat topographic relief ensured that tree growth is independent of topographic effects. The tree pattern in the plantation crops provided various quasi-regular and irregular patterns, as well as various stand ages. The point clouds were voxelized, and layers of voxels were treated as two-dimensional images. These images, computed for a certain vicinity of each reference point, were used as input for the calculation of lacunarity curves, providing a stack of lacunarity curves for each reference point. These sets of curves were compared to reveal spatial changes of this property. As the dynamic range of the lacunarity values is very large, the natural logarithms of the values were considered. Because the logarithms of the lacunarity functions show canopy-related variations, we analysed these variations along transects. The spatial variation can be related to forest properties and ecology-specific aspects.
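
    A minimal sketch of a gliding-box lacunarity curve for one voxel layer treated as a binary image; this uses a generic estimator and a synthetic occupancy image, not the exact processing chain of the study:

        import numpy as np
        from scipy.ndimage import uniform_filter

        rng = np.random.default_rng(3)
        layer = (rng.random((256, 256)) < 0.15).astype(float)   # binary occupancy image for one voxel layer

        def lacunarity(image, box):
            # Gliding-box lacunarity: Lambda(box) = E[M^2] / E[M]^2 for box masses M
            mass = uniform_filter(image, size=box, mode="constant") * box * box
            half = box // 2
            valid = mass[half:-half, half:-half]                 # drop boxes affected by zero padding
            return (valid ** 2).mean() / valid.mean() ** 2

        boxes = [2, 4, 8, 16, 32, 64]
        log_curve = [np.log(lacunarity(layer, b)) for b in boxes]  # natural logs, as in the analysis above
        print(dict(zip(boxes, np.round(log_curve, 3))))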

  1. Generation of Magnetohydrodynamic Waves in Low Solar Atmospheric Flux Tubes by Photospheric Motions

    NASA Astrophysics Data System (ADS)

    Mumford, S. J.; Fedun, V.; Erdélyi, R.

    2015-01-01

    Recent ground- and space-based observations reveal the presence of small-scale motions between convection cells in the solar photosphere. In these regions, small-scale magnetic flux tubes are generated via the interaction of granulation motion and the background magnetic field. This paper studies the effects of these motions on magnetohydrodynamic (MHD) wave excitation from broadband photospheric drivers. Numerical experiments of linear MHD wave propagation in a magnetic flux tube embedded in a realistic gravitationally stratified solar atmosphere between the photosphere and the low chromosphere (above β = 1) are performed. Horizontal and vertical velocity field drivers mimic granular buffeting and solar global oscillations. A uniform torsional driver as well as Archimedean and logarithmic spiral drivers mimic observed torsional motions in the solar photosphere. The results are analyzed using a novel method for extracting the parallel, perpendicular, and azimuthal components of the perturbations, which caters to both the linear and non-linear cases. Employing this method yields the identification of the wave modes excited in the numerical simulations and enables a comparison of excited modes via velocity perturbations and wave energy flux. The wave energy flux distribution is calculated to enable the quantification of the relative strengths of excited modes. The torsional drivers primarily excite Alfvén modes (≈60% of the total flux) with small contributions from the slow kink mode and, for the logarithmic spiral driver, small amounts of the slow sausage mode. The horizontal and vertical drivers primarily excite slow kink or fast sausage modes, respectively, with small variations dependent upon flux surface radius.

  2. A deep learning framework for the automated inspection of complex dual-energy x-ray cargo imagery

    NASA Astrophysics Data System (ADS)

    Rogers, Thomas W.; Jaccard, Nicolas; Griffin, Lewis D.

    2017-05-01

    Previously, we investigated the use of Convolutional Neural Networks (CNNs) to detect so-called Small Metallic Threats (SMTs) hidden amongst legitimate goods inside a cargo container. We trained a CNN from scratch on data produced by a Threat Image Projection (TIP) framework that generates images with realistic variation to robustify performance. The system achieved 90% detection of containers that contained a single SMT, while raising 6% false positives on benign containers. The best CNN architecture used the raw high-energy image (single-energy) and its logarithm as input channels. Use of the logarithm improved performance, echoing studies on human operator performance; however, it is an unexpected result with CNNs. In this work, we (i) investigate methods to exploit material information captured in dual-energy images, and (ii) introduce a new CNN training scheme that generates 'spot-the-difference' benign and threat pairs on-the-fly. To the best of our knowledge, this is the first time that CNNs have been applied directly to raw dual-energy X-ray imagery, in any field. To exploit dual-energy imagery, we experiment with adapting several physics-derived approaches to material discrimination from the cargo literature, and introduce three novel variants. We hypothesise that CNNs can implicitly learn about the material characteristics of objects from the raw dual-energy images, and use this to suppress false positives. The best performing method is able to detect 95% of containers containing a single SMT, while raising 0.4% false positives on benign containers. This is a step-change improvement in performance over our prior work.
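
    A minimal sketch of assembling the raw high-energy image and its logarithm as a two-channel network input; the array shape and the standardisation step are illustrative assumptions, not a description of the authors' exact preprocessing:

        import numpy as np

        rng = np.random.default_rng(4)
        high = rng.uniform(1.0, 65535.0, size=(640, 1024)).astype(np.float32)   # raw high-energy image

        log_channel = np.log(high)                           # logarithm channel, as in the architecture above
        stacked = np.stack([high, log_channel], axis=0)      # shape (2, H, W), channels-first input

        # per-channel standardisation before feeding the network
        mean = stacked.mean(axis=(1, 2), keepdims=True)
        std = stacked.std(axis=(1, 2), keepdims=True)
        stacked = (stacked - mean) / std
        print(stacked.shape)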

  3. A Mathematical Model to Predict Endothelial Cell Density Following Penetrating Keratoplasty With Selective Dropout From Graft Failure

    PubMed Central

    Riddlesworth, Tonya D.; Kollman, Craig; Lass, Jonathan H.; Patel, Sanjay V.; Stulting, R. Doyle; Benetz, Beth Ann; Gal, Robin L.; Beck, Roy W.

    2014-01-01

    Purpose. We constructed several mathematical models that predict endothelial cell density (ECD) for patients after penetrating keratoplasty (PK) for a moderate-risk condition (principally Fuchs' dystrophy or pseudophakic/aphakic corneal edema). Methods. In a subset (n = 591) of Cornea Donor Study participants, postoperative ECD was determined by a central reading center. Various statistical models were considered to estimate the ECD trend longitudinally over 10 years of follow-up. A biexponential model with and without a logarithm transformation was fit using the Gauss-Newton nonlinear least squares algorithm. To account for correlated data, a log-polynomial model was fit using the restricted maximum likelihood method. A sensitivity analysis for the potential bias due to selective dropout was performed using Bayesian analysis techniques. Results. The three models using a logarithm transformation yield similar trends, whereas the model without the transform predicts higher ECD values. The adjustment for selective dropout turns out to be negligible. However, this is possibly due to the relatively low rate of graft failure in this cohort (19% at 10 years). Fuchs' dystrophy and pseudophakic/aphakic corneal edema (PACE) patients had similar ECD decay curves, with the PACE group having slightly higher cell densities by 10 years. Conclusions. Endothelial cell loss after PK can be modeled via a log-polynomial model, which accounts for the correlated data from repeated measures on the same subject. This model is not significantly affected by the selective dropout due to graft failure. Our findings warrant further study on how this may extend to ECD following endothelial keratoplasty. PMID:25425307
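
    A minimal sketch of fitting a biexponential decay with and without a logarithm transform by nonlinear least squares; the ECD values are synthetic, and scipy's trust-region least-squares routine stands in for the Gauss-Newton algorithm mentioned above:

        import numpy as np
        from scipy.optimize import curve_fit

        def biexp(t, a1, k1, a2, k2):
            return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

        rng = np.random.default_rng(5)
        t = np.linspace(0.0, 10.0, 11)                                   # years after keratoplasty
        ecd = biexp(t, 1800.0, 0.9, 1200.0, 0.05) * rng.normal(1.0, 0.05, t.size)   # cells/mm^2, with noise

        p0 = (1500.0, 1.0, 1500.0, 0.1)
        fit_raw, _ = curve_fit(biexp, t, ecd, p0=p0, bounds=(0.0, np.inf))
        fit_log, _ = curve_fit(lambda tt, *p: np.log(biexp(tt, *p)), t, np.log(ecd),
                               p0=p0, bounds=(0.0, np.inf))
        print("raw-scale fit:", np.round(fit_raw, 3))
        print("log-scale fit:", np.round(fit_log, 3))

    Fitting on the log scale down-weights the early, high-count observations relative to the raw-scale fit, which is one way to see why the two approaches can predict different long-term cell densities.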

  4. Comparison of Methodologies of Activation Barrier Measurements for Reactions with Deactivation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Zhenhua; Yan, Binhang; Zhang, Li

    In this work, methodologies of activation barrier measurements for reactions with deactivation were theoretically analyzed. Reforming of ethane with CO 2 was introduced as an example for reactions with deactivation to experimentally evaluate these methodologies. Both the theoretical and experimental results showed that due to catalyst deactivation, the conventional method would inevitably lead to a much lower activation barrier, compared to the intrinsic value, even though heat and mass transport limitations were excluded. In this work, an optimal method was identified in order to provide a reliable and efficient activation barrier measurement for reactions with deactivation.

  5. Comparison of Methodologies of Activation Barrier Measurements for Reactions with Deactivation

    DOE PAGES

    Xie, Zhenhua; Yan, Binhang; Zhang, Li; ...

    2017-01-25

    In this work, methodologies of activation barrier measurements for reactions with deactivation were theoretically analyzed. Reforming of ethane with CO 2 was introduced as an example for reactions with deactivation to experimentally evaluate these methodologies. Both the theoretical and experimental results showed that due to catalyst deactivation, the conventional method would inevitably lead to a much lower activation barrier, compared to the intrinsic value, even though heat and mass transport limitations were excluded. In this work, an optimal method was identified in order to provide a reliable and efficient activation barrier measurement for reactions with deactivation.
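
    A toy numerical illustration of how deactivation depresses a conventionally measured activation barrier when rates are collected sequentially during a temperature sweep; every parameter here is invented, and this is not the methodology proposed in the paper:

        import numpy as np

        R = 8.314                       # J/(mol K)
        Ea_true = 90.0e3                # J/mol, assumed intrinsic barrier
        A = 1.0e8                       # pre-exponential factor (arbitrary units)
        kd = 2.0e-5                     # first-order deactivation constant, 1/s (~7% activity loss per hour)

        temps = np.array([773.0, 798.0, 823.0, 848.0, 873.0])   # K, visited in ascending order
        time_on_stream = np.arange(temps.size) * 3600.0          # one hour spent at each temperature

        activity = np.exp(-kd * time_on_stream)                  # catalyst activity decays during the sweep
        rate = activity * A * np.exp(-Ea_true / (R * temps))     # measured (apparent) rates

        slope, _ = np.polyfit(1.0 / temps, np.log(rate), 1)      # conventional Arrhenius fit
        print(f"true Ea = {Ea_true / 1e3:.0f} kJ/mol, apparent Ea = {-slope * R / 1e3:.0f} kJ/mol")

    Because the later (hotter) points are suppressed by the accumulated deactivation, the Arrhenius slope flattens and the apparent barrier comes out well below the intrinsic value, consistent with the trend described in the abstract.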

  6. In situ formation of phosphate barriers in soil

    DOEpatents

    Moore, Robert C.

    2002-01-01

    Reactive barriers and methods for making reactive barriers in situ in soil for sequestering soil contaminants including actinides and heavy metals. The barrier includes phosphate, and techniques are disclosed for forming specifically apatite barriers. The method includes injecting dilute reagents into soil in proximity to a contamination plume or source, such as a waste drum, to achieve complete or partial encapsulation of the waste. Controlled temperature and pH facilitate rapid formation of apatite, for example, where dilute aqueous calcium chloride and dilute aqueous sodium phosphate are the selected reagents. Mixing of reagents to form precipitate is mediated and enhanced through movement of reagents in soil as a result of phenomena including capillary action, movement of groundwater, soil washing and reagent injection pressure.

  7. Method Producing an SNS Superconducting Junction with Weak Link Barrier

    NASA Technical Reports Server (NTRS)

    Hunt, Brian D. (Inventor)

    1999-01-01

    A method of producing a high temperature superconductor Josephson element and an improved SNS weak link barrier element is provided. A YBaCuO superconducting electrode film is deposited on a substrate at a temperature of approximately 800 C. A weak link barrier layer of a nonsuperconducting film of N-YBaCuO is deposited over the electrode at a temperature range of 520 C. to 540 C. at a lower deposition rate. Subsequently a superconducting counter-electrode film layer of YBaCuO is deposited over the weak link barrier layer at approximately 800 C. The weak link barrier layer has a thickness of approximately 50 A and the SNS element can be constructed to provide an edge geometry junction.

  8. Resonances for Symmetric Two-Barrier Potentials

    ERIC Educational Resources Information Center

    Fernandez, Francisco M.

    2011-01-01

    We describe a method for the accurate calculation of bound-state and resonance energies for one-dimensional potentials. We calculate the shape resonances for symmetric two-barrier potentials and compare them with those coming from the Siegert approximation, the complex scaling method and the box-stabilization method. A comparison of the…

  9. Material Barriers to Diffusive Mixing

    NASA Astrophysics Data System (ADS)

    Haller, George; Karrasch, Daniel

    2017-11-01

    Transport barriers, as zero-flux surfaces, are ill-defined in purely advective mixing, in which the flux of any passive scalar is zero through all material surfaces. For this reason, Lagrangian Coherent Structures (LCSs) have been argued to play the role of mixing barriers as most repelling, attracting or shearing material lines. These three kinematic concepts, however, can also be defined in different ways, both within rigorous mathematical treatments and within the realm of heuristic diagnostics. This has led to an ever-growing number of different LCS methods, each generally identifying different objects as transport barriers. In this talk, we examine which of these methods have actual relevance for diffusive transport barriers. The latter barriers are arguably the practically relevant inhibitors in the mixing of physically relevant tracers, such as temperature, salinity, vorticity or potential vorticity. We demonstrate the role of the most effective diffusion barriers in analytical examples and observational data. Supported in part by the DFG Priority Program on Turbulent Superstructures.

  10. The Employers' perspective on barriers and facilitators to employment of people with intellectual disability: A differential mixed-method approach.

    PubMed

    Kocman, Andreas; Fischer, Linda; Weber, Germain

    2018-01-01

    Obtaining employment is among the most important ambitions of people with intellectual disability. Progress towards comprehensive inclusive employment is hampered by numerous barriers. Limited research is available on these barriers and strategies to overcome them. A mixed method approach in a sample of 30 HR-managers was used to assess (i) differences in perceived barriers for employment of people with specific disabilities and mental disorders; (ii) barriers specific to employing people with intellectual disability; (iii) strategies to overcome these barriers. Employers perceive more barriers for hiring people with intellectual disability and mental disorders than for physical disabilities. Employment for this population is hampered by a perceived lack of skills and legal issues. Strategies perceived as beneficial are supplying information, changes in organizational strategies and legal changes. Employers' differentiated expectations and reservations towards hiring individuals with specific disabilities need to be taken into account to increase employment for people with intellectual disability. © 2017 John Wiley & Sons Ltd.

  11. Puncture detecting barrier materials

    DOEpatents

    Hermes, R.E.; Ramsey, D.R.; Stampfer, J.F.; Macdonald, J.M.

    1998-03-31

    A method and apparatus for continuous real-time monitoring of the integrity of protective barrier materials, particularly protective barriers against toxic, radioactive and biologically hazardous materials has been developed. Conductivity, resistivity or capacitance between conductive layers in the multilayer protective materials is measured by using leads connected to electrically conductive layers in the protective barrier material. The measured conductivity, resistivity or capacitance significantly changes upon a physical breach of the protective barrier material. 4 figs.

  12. Puncture detecting barrier materials

    DOEpatents

    Hermes, Robert E.; Ramsey, David R.; Stampfer, Joseph F.; Macdonald, John M.

    1998-01-01

    A method and apparatus for continuous real-time monitoring of the integrity of protective barrier materials, particularly protective barriers against toxic, radioactive and biologically hazardous materials has been developed. Conductivity, resistivity or capacitance between conductive layers in the multilayer protective materials is measured by using leads connected to electrically conductive layers in the protective barrier material. The measured conductivity, resistivity or capacitance significantly changes upon a physical breach of the protective barrier material.

  13. An integral equation method for calculating sound field diffracted by a rigid barrier on an impedance ground.

    PubMed

    Zhao, Sipei; Qiu, Xiaojun; Cheng, Jianchun

    2015-09-01

    This paper proposes a different method for calculating a sound field diffracted by a rigid barrier based on the integral equation method, where a virtual boundary is assumed above the rigid barrier to divide the whole space into two subspaces. Based on the Kirchhoff-Helmholtz equation, the sound field in each subspace is determined with the source inside and the boundary conditions on the surface, and then the diffracted sound field is obtained by using the continuation conditions on the virtual boundary. Simulations are carried out to verify the feasibility of the proposed method. Compared to the MacDonald method and other existing methods, the proposed method is a rigorous solution for whole space and is also much easier to understand.

  14. Multi-layer light-weight protective coating and method for application

    NASA Technical Reports Server (NTRS)

    Wiedemann, Karl E. (Inventor); Clark, Ronald K. (Inventor); Taylor, Patrick J. (Inventor)

    1992-01-01

    A thin, light-weight, multi-layer coating is provided for protecting metals and their alloys from environmental attack at high temperatures. A reaction barrier is applied to the metal substrate and a diffusion barrier is then applied to the reaction barrier. A sealant layer may also be applied to the diffusion barrier if desired. The reaction barrier is either non-reactive or passivating with respect to the metal substrate and the diffusion barrier. The diffusion barrier is either non-reactive or passivating with respect to the reaction barrier and the sealant layer. The sealant layer is immiscible with the diffusion barrier and has a softening point below the expected use temperature of the metal.

  15. Synthesizing qualitative and quantitative evidence on non-financial access barriers: implications for assessment at the district level.

    PubMed

    O'Connell, Thomas S; Bedford, K Juliet A; Thiede, Michael; McIntyre, Di

    2015-06-09

    A key element of the global drive to universal health coverage is ensuring access to needed health services for everyone, and to pursue this goal in an equitable way. This requires concerted efforts to reduce disparities in access through understanding and acting on barriers facing communities with the lowest utilisation levels. Financial barriers dominate the empirical literature on health service access. Unless the full range of access barriers are investigated, efforts to promote equitable access to health care are unlikely to succeed. This paper therefore focuses on exploring the nature and extent of non-financial access barriers. We draw upon two structured literature reviews on barriers to access and utilization of maternal, newborn and child health services in Ghana, Bangladesh, Vietnam and Rwanda. One review analyses access barriers identified in published literature using qualitative research methods; the other in published literature using quantitative analysis of household survey data. We then synthesised the key qualitative and quantitative findings through a conjoint iterative analysis. Five dominant themes on non-financial access barriers were identified: ethnicity; religion; physical accessibility; decision-making, gender and autonomy; and knowledge, information and education. The analysis highlighted that non-financial factors pose considerable barriers to access, many of which relate to the acceptability dimension of access and are challenging to address. Another key finding is that quantitative research methods, while yielding important findings, are inadequate for understanding non-financial access barriers in sufficient detail to develop effective responses. Qualitative research is critical in filling this gap. The analysis also indicates that the nature of non-financial access barriers vary considerably, not only between countries but also between different communities within individual countries. To adequately understand access barriers as a basis for developing effective strategies to address them, mixed-methods approaches are required. From an equity perspective, communities with the lowest utilisation levels should be prioritised and the access barriers specific to that community identified. It is, therefore, critical to develop approaches that can be used at the district level to diagnose and act upon access barriers if we are to pursue an equitable path to universal health coverage.

  16. Zeros and logarithmic asymptotics of Sobolev orthogonal polynomials for exponential weights

    NASA Astrophysics Data System (ADS)

    Díaz Mendoza, C.; Orive, R.; Pijeira Cabrera, H.

    2009-12-01

    We obtain the (contracted) weak zero asymptotics for orthogonal polynomials with respect to Sobolev inner products with exponential weights on the real semiaxis, with γ > 0, which include as particular cases the counterparts of the so-called Freud weights (i.e., when φ has polynomial growth at infinity) and Erdős weights (when φ grows faster than any polynomial at infinity). In addition, the boundedness of the distance of the zeros of these Sobolev orthogonal polynomials to the convex hull of the support and, as a consequence, a result on logarithmic asymptotics are derived.

  17. Ergodic Transition in a Simple Model of the Continuous Double Auction

    PubMed Central

    Radivojević, Tijana; Anselmi, Jonatha; Scalas, Enrico

    2014-01-01

    We study a phenomenological model for the continuous double auction, whose aggregate order process is equivalent to two independent M/M/1 queues. The continuous double auction defines a continuous-time random walk for trade prices. The conditions for ergodicity of the auction are derived and, as a consequence, three possible regimes in the behavior of prices and logarithmic returns are observed. In the ergodic regime, prices are unstable and one can observe heteroskedastic behavior in the logarithmic returns. On the contrary, non-ergodicity triggers stability of prices, even if two different regimes can be seen. PMID:24558377

  18. Atmospheric Dispersion Capability for T2VOC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oldenburg, Curtis M.

    2005-09-19

    Atmospheric transport by variable-K theory dispersion has been added to T2VOC. The new code, T2VOCA, models flow and transport in the subsurface identically to T2VOC, but also includes the capability for modeling passive multicomponent variable-K theory dispersion in an atmospheric region assumed to be flat and horizontal, with a logarithmic wind profile. The specification of the logarithmic wind profile in the T2VOC input file is automated through the use of a build code called ATMDISPV. The new capability is demonstrated on 2-D and 3-D example problems described in this report.
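
    For reference, the logarithmic wind profile commonly used for a neutral surface layer (a textbook form; the exact parameterization in T2VOCA/ATMDISPV may differ) is

        u(z) = \frac{u_*}{\kappa} \ln\!\left(\frac{z}{z_0}\right),

    where u_* is the friction velocity, \kappa \approx 0.4 the von Kármán constant, and z_0 the roughness length.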

  19. Born-Infeld Gravity Revisited

    NASA Astrophysics Data System (ADS)

    Setare, M. R.; Sahraee, M.

    2013-12-01

    In this paper, we investigate the behavior of linearized gravitational excitations in Born-Infeld gravity in AdS3 space. We obtain the linearized equation of motion and show that this higher-order gravity propagates two gravitons, one massless and one massive, on the AdS3 background. In contrast to R2 models such as TMG or NMG, Born-Infeld gravity does not have a critical point for any regular choice of parameters. The logarithmic solution is therefore not a solution of this model, so one cannot find a logarithmic conformal field theory as a dual model for Born-Infeld gravity.

  20. Numerical Simulation of Atmospheric Boundary Layer Flow Over Battlefield-scale Complex Terrain: Surface Fluxes From Resolved and Subgrid Scales

    DTIC Science & Technology

    2015-07-06

    Subgrid-scale surface stresses due to the unresolved topography h′(x, y) are represented by the equilibrium logarithmic law: τ_w,Δ13 / ρ = −u_τ² (ũ/U) = −[κU / log(z/z₀)]² (ũ/U), where z₀ is a momentum roughness length. An analogous equilibrium logarithmic-law expression is used for the passive scalar fluxes q̇″ (neutral stratification; stability correction terms not needed).

  1. Ergodic transition in a simple model of the continuous double auction.

    PubMed

    Radivojević, Tijana; Anselmi, Jonatha; Scalas, Enrico

    2014-01-01

    We study a phenomenological model for the continuous double auction, whose aggregate order process is equivalent to two independent M/M/1 queues. The continuous double auction defines a continuous-time random walk for trade prices. The conditions for ergodicity of the auction are derived and, as a consequence, three possible regimes in the behavior of prices and logarithmic returns are observed. In the ergodic regime, prices are unstable and one can observe a heteroskedastic behavior in the logarithmic returns. On the contrary, non-ergodicity triggers stability of prices, even if two different regimes can be seen.

  2. Non-renormalization for non-supersymmetric black holes

    DOE PAGES

    Charles, Anthony M.; Larsen, Finn; Mayerson, Daniel R.

    2017-08-11

    We analyze large logarithmic corrections to 4D black hole entropy and relate them to the Weyl anomaly. We use duality to show that counter-terms in Einstein-Maxwell theory can be expressed in terms of geometry alone, with no dependence on matter terms. We analyze the two known N = 2 supersymmetric invariants for various non-supersymmetric black holes and find that both reduce to the Euler invariant. The c-anomaly therefore vanishes in these theories and the coefficient of the large logarithms becomes topological. It is therefore independent of continuous black hole parameters, such as the mass, even far from extremality.

  3. Glass-(nAg, nCu) biocide coatings on ceramic oxide substrates.

    PubMed

    Esteban-Tejeda, Leticia; Malpartida, Francisco; Díaz, Luis Antonio; Torrecillas, Ramón; Rojo, Fernando; Moya, José Serafín

    2012-01-01

    The present work was focused on obtaining biocide coatings, constituted by a glassy soda-lime matrix containing silver or copper nanoparticles, on ceramic (alumina- and zirconia-based) substrates. Both glassy coatings showed high biocide activity against Gram-negative and Gram-positive bacteria and yeast, reducing cell numbers by more than three logarithms. Silver nanoparticles had significantly higher biocide activity than copper nanoparticles, since the lixiviation levels required to reduce cell numbers by more than three logarithms were approximately 1-2 µg/cm² for the silver nanoparticles and 10-15 µg/cm² for the copper nanoparticles.

  4. Non-renormalization for non-supersymmetric black holes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Charles, Anthony M.; Larsen, Finn; Mayerson, Daniel R.

    We analyze large logarithmic corrections to 4D black hole entropy and relate them to the Weyl anomaly. We use duality to show that counter-terms in Einstein-Maxwell theory can be expressed in terms of geometry alone, with no dependence on matter terms. We analyze the two known N = 2 supersymmetric invariants for various non-supersymmetric black holes and find that both reduce to the Euler invariant. The c-anomaly therefore vanishes in these theories and the coefficient of the large logarithms becomes topological. It is therefore independent of continuous black hole parameters, such as the mass, even far from extremality.

  5. Entanglement dynamics in critical random quantum Ising chain with perturbations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Yichen, E-mail: ychuang@caltech.edu

    We simulate the entanglement dynamics in a critical random quantum Ising chain with generic perturbations using the time-evolving block decimation algorithm. Starting from a product state, we observe super-logarithmic growth of entanglement entropy with time. The numerical result is consistent with the analytical prediction of Vosk and Altman using a real-space renormalization group technique. - Highlights: • We study the dynamical quantum phase transition between many-body localized phases. • We simulate the dynamics of a very long random spin chain with matrix product states. • We observe numerically super-logarithmic growth of entanglement entropy with time.

  6. Development of methods for skin barrier peeling tests.

    PubMed

    Omura, Yuko; Kazuharu, Seki; Kenji, Oishi

    2006-01-01

    We sought to develop a more effective method to evaluate the adhesive properties of skin barriers. The experimental design was based on 3 principles: partial control, randomization, and repetition. Using these principles, 180-degree peeling tests were conducted as specified in a standardized methodology (JIS Z0297) to the extent possible. However, the use of a stainless steel plate as a proxy for skin barrier application may result in stretching and breaking of the skin barrier, making it impossible to obtain suitable measurements. Tests were therefore conducted in constant temperature/humidity chambers using a Tensilon Automatic Elongation Tester, where a sample was fixed on the side of a sample immobilization device, a sturdy metal (aluminum) box from which the air was drawn off with a vacuum pump. A fluorocarbon polymer film was applied to the adhesive surface of a sample skin barrier, and the film was peeled off in the volte-face (180-degree) direction in order to measure adhesive strength. The films exhibit such properties as (a) ease of removal from the adhesive surface, (b) no resistance to a 180-degree fold-back due to the thinness and flexibility of the material, and (c) tolerance of elongation. The adhesive properties of skin barriers were thus measured by peeling the fluorocarbon polymer films in the 180-degree direction. Twelve specimen skin barrier products were selected for measurement, providing results with satisfactory reproducibility; results based on the conventional stainless steel plate testing method acted as a control. The newly developed testing method enables chronological measurement of skin barriers applied to fluorocarbon polymer films after 24 hours, 48 hours, and longer periods.

  7. Vented Cavity Radiant Barrier Assembly And Method

    DOEpatents

    Dinwoodie, Thomas L.; Jackaway, Adam D.

    2000-05-16

    A vented cavity radiant barrier assembly (2) includes a barrier (12), typically a PV module, having inner and outer surfaces (18, 22). A support assembly (14) is secured to the barrier and extends inwardly from the inner surface of the barrier to a building surface (14) creating a vented cavity (24) between the building surface and the barrier inner surface. A low emissivity element (20) is mounted at or between the building surface and the barrier inner surface. At least part of the cavity exit (30) is higher than the cavity entrance (28) to promote cooling air flow through the cavity.

  8. Time and materials needed to survey, inject systemic fungicides, and install root-graft barriers for Dutch elm disease management

    Treesearch

    William N., Jr. Cannon; Jack H. Barger; Charles J. Kostichka; Charles J. Kostichka

    1986-01-01

    Dutch elm disease control practice in 15 communities showed a wide range of time and material required to apply control methods. The median time used for each method was: sanitation survey, 9.8 hours per square mile; symptom survey, 96 hours per thousand elms; systemic fungicide injection, 1.4 hours per elm; and root-graft barrier installation, 2.2 hours per barrier (5...

  9. Analysis of Nanoporosity in Moisture Permeation Barrier Layers by Electrochemical Impedance Spectroscopy.

    PubMed

    Perrotta, Alberto; García, Santiago J; Michels, Jasper J; Andringa, Anne-Marije; Creatore, Mariadriana

    2015-07-29

    Water permeation in inorganic moisture permeation barriers occurs through macroscale defects/pinholes and nanopores, the latter with sizes approaching the water kinetic diameter (0.27 nm). Both permeation paths can be identified by the calcium test, i.e., a time-consuming and expensive optical method for determining the water vapor transmission rate (WVTR) through barrier layers. Recently, we have shown that ellipsometric porosimetry (EP, i.e., a combination of spectroscopic ellipsometry and isothermal adsorption studies) is a valid method to classify and quantify the nanoporosity and correlate it with WVTR values. Nevertheless, no information is obtained about the macroscale defects or the kinetics of water permeation through the barrier, both essential in assessing the quality of the barrier layer. In this study, electrochemical impedance spectroscopy (EIS) is shown to be a sensitive and versatile method to obtain information on nanoporosity and macroscale defects, water permeation, and diffusivity of moisture barrier layers, complementing the barrier property characterization obtained by means of EP and the calcium test. EIS is performed on thin SiO2 barrier layers deposited by plasma-enhanced CVD. It allows the determination of the relative water uptake in the SiO2 layers, found to be in agreement with the nanoporosity content inferred by EP. Furthermore, the kinetics of water permeation is followed by EIS, and the diffusivity (D) is determined and found to be in accordance with literature values. Moreover, differently from EP, EIS data are shown to be sensitive to the presence of local macrodefects, correlated with barrier failure during the calcium test.

  10. Cross-Cultural Differences in Undergraduate Students' Perceptions of Online Barriers

    ERIC Educational Resources Information Center

    Olesova, Larisa; Yang, Dazhi; Richardson, Jennifer C.

    2011-01-01

    The intent of this study was to learn about students' perceived barriers and the impact of those barriers on the quality of online discussions between two distinct cultural groups in Eastern and Northern Siberia (Russia). A mixed-methods approach utilizing surveys and interviews was used to investigate (1) the types of barriers the students…

  11. Subsurface materials management and containment system

    DOEpatents

    Nickelson, Reva A.; Richardson, John G.; Kosteinik, Kevin M.; Sloan, Paul A.

    2004-07-06

    Systems, components, and methods relating to subterranean containment barriers. Laterally adjacent tubular casings having male interlock structures and multiple female interlock structures defining recesses for receiving a male interlock structure are used to create subterranean barriers for containing and treating buried waste and its effluents. The multiple female interlock structures enable the barriers to be varied around subsurface objects and to form barrier sidewalls. The barrier may be used for treating and monitoring a zone of interest.

  12. Subsurface materials management and containment system

    DOEpatents

    Nickelson, Reva A.; Richardson, John G.; Kostelnik, Kevin M.; Sloan, Paul A.

    2006-10-17

    Systems, components, and methods relating to subterranean containment barriers. Laterally adjacent tubular casings having male interlock structures and multiple female interlock structures defining recesses for receiving a male interlock structure are used to create subterranean barriers for containing and treating buried waste and its effluents. The multiple female interlock structures enable the barriers to be varied around subsurface objects and to form barrier sidewalls. The barrier may be used for treating and monitoring a zone of interest.

  13. Power-limited low-thrust trajectory optimization with operation point detection

    NASA Astrophysics Data System (ADS)

    Chi, Zhemin; Li, Haiyang; Jiang, Fanghua; Li, Junfeng

    2018-06-01

    The power-limited solar electric propulsion system is considered more practical in mission design. An accurate mathematical model of the propulsion system, based on experimental data from the power generation system, is used in this paper. An indirect method is used to deal with the time-optimal and fuel-optimal control problems, in which the solar electric propulsion system is described using a finite number of operation points, which are characterized by different pairs of thruster input power. In order to guarantee integration accuracy for the discrete power-limited problem, a power operation detection technique is embedded in a fixed-step fourth-order Runge-Kutta algorithm. Moreover, the logarithmic homotopy method and a normalization technique are employed to overcome the difficulties caused by using indirect methods. Three numerical simulations with actual propulsion systems are given to substantiate the feasibility and efficiency of the proposed method.
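
    For reference, a commonly used logarithmic-homotopy form of the fuel-optimal performance index in low-thrust trajectory optimization is shown below; this is the standard smoothing from the homotopy literature, stated generically rather than as the exact functional of this paper,

        J_\varepsilon = \frac{T_{\max}}{c} \int_{t_0}^{t_f} \left\{\, u - \varepsilon \ln\big[\,u\,(1-u)\,\big] \right\} dt, \qquad 0 < u < 1,

    where u is the throttle, T_max the maximum thrust, c the exhaust velocity, and the homotopy parameter \varepsilon is driven from 1 toward 0 so that the smooth solutions approach the bang-bang fuel-optimal control.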

  14. Implementation of the reduced charge state method of calculating impurity transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crume, E.C. Jr.; Arnurius, D.E.

    1982-07-01

    A recent review article by Hirshman and Sigmar includes expressions needed to calculate the parallel friction coefficients, the essential ingredients of the plateau-Pfirsch-Schlüter transport coefficients, using the method of reduced charge states. These expressions have been collected, and an expanded notation is introduced in some cases to facilitate differentiation between reduced charge state and full charge state quantities. A form of the Coulomb logarithm relevant to the method of reduced charge states is introduced. This method of calculating the f_ij^(ab) has been implemented in the impurity transport simulation code IMPTAR and has resulted in an overall reduction in computation time of approximately 25% for a typical simulation of impurity transport in the Impurity Study Experiment (ISX-B). Results obtained using this treatment are almost identical to those obtained using an earlier approximate theory of Hirshman.

  15. A Discrete-Vortex Method for Studying the Wing Rock of Delta Wings

    NASA Technical Reports Server (NTRS)

    Gainer, Thomas G.

    2002-01-01

    A discrete-vortex method is developed to investigate the wing rock problem associated with highly swept wings. The method uses two logarithmic vortices placed above the wing to represent the vortex flow field and uses boundary conditions based on conical flow, vortex rate of change of momentum, and other considerations to position the vortices and determine their strengths. A relationship based on the time analogy and conical-flow assumptions is used to determine the hysteretic positions of the vortices during roll oscillations. Static and dynamic vortex positions and wing rock amplitudes and frequencies calculated by using the method are generally in good agreement with available experimental data. The results verify that wing rock is caused by hysteretic deflections of the vortices and indicate that the stabilizing moments that limit wing rock amplitudes are the result of the one primary vortex moving outboard of the wing where it has little influence on the wing.

  16. Effective distances for epidemics spreading on complex networks.

    PubMed

    Iannelli, Flavio; Koher, Andreas; Brockmann, Dirk; Hövel, Philipp; Sokolov, Igor M

    2017-01-01

    We show that the recently introduced logarithmic metrics used to predict disease arrival times on complex networks are approximations of more general network-based measures derived from random-walk theory. Using daily air-traffic transportation data, we perform numerical experiments to compare the infection arrival time with this alternative metric, which is obtained by accounting for multiple walks instead of only the most probable path. The comparison with direct simulations reveals a higher correlation compared to the shortest-path approach used previously. In addition, our method allows us to connect fundamental observables in epidemic spreading with the cumulant-generating function of the hitting time for a Markov chain. Our results provide a general and computationally efficient approach using only algebraic methods.

  17. Effective distances for epidemics spreading on complex networks

    NASA Astrophysics Data System (ADS)

    Iannelli, Flavio; Koher, Andreas; Brockmann, Dirk; Hövel, Philipp; Sokolov, Igor M.

    2017-01-01

    We show that the recently introduced logarithmic metrics used to predict disease arrival times on complex networks are approximations of more general network-based measures derived from random-walk theory. Using daily air-traffic transportation data, we perform numerical experiments to compare the infection arrival time with this alternative metric, which is obtained by accounting for multiple walks instead of only the most probable path. The comparison with direct simulations reveals a higher correlation compared to the shortest-path approach used previously. In addition, our method allows us to connect fundamental observables in epidemic spreading with the cumulant-generating function of the hitting time for a Markov chain. Our results provide a general and computationally efficient approach using only algebraic methods.
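
    A minimal sketch of the logarithmic shortest-path effective distance that these abstracts compare against (the 1 − ln P form introduced by Brockmann and Helbing), computed on a small synthetic mobility matrix; this is not the authors' multiple-walk generalization:

        import numpy as np
        import networkx as nx

        # Small synthetic flux matrix F[m, n]: traffic from node n to node m (not real air-traffic data)
        F = np.array([[0.0, 50.0, 10.0, 0.0],
                      [50.0, 0.0, 30.0, 5.0],
                      [10.0, 30.0, 0.0, 40.0],
                      [0.0, 5.0, 40.0, 0.0]])
        P = F / F.sum(axis=0, keepdims=True)          # column-normalised transition probabilities P(m|n)

        G = nx.DiGraph()
        for n in range(P.shape[0]):
            for m in range(P.shape[0]):
                if m != n and P[m, n] > 0.0:
                    G.add_edge(n, m, weight=1.0 - np.log(P[m, n]))   # effective length of hop n -> m

        d_eff = nx.single_source_dijkstra_path_length(G, 0, weight="weight")
        print({node: round(dist, 2) for node, dist in d_eff.items()})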

  18. Dealing with Liars: Misbehavior Identification via Rényi-Ulam Games

    NASA Astrophysics Data System (ADS)

    Kozma, William; Lazos, Loukas

    We address the problem of identifying misbehaving nodes that refuse to forward packets in wireless multi-hop networks. We map the process of locating the misbehaving nodes to the classic Rényi-Ulam game of 20 questions. Compared to previous methods, our mapping allows the evaluation of node behavior on a per-packet basis, without the need for energy-expensive overhearing techniques or intensive acknowledgment schemes. Furthermore, it copes with colluding adversaries that coordinate their behavioral patterns to avoid identification and frame honest nodes. We show via simulations that our algorithms reduce the communication overhead for identifying misbehaving nodes by at least one order of magnitude compared to other methods, while increasing the identification delay logarithmically with the path size.

  19. An object recognition method based on fuzzy theory and BP networks

    NASA Astrophysics Data System (ADS)

    Wu, Chuan; Zhu, Ming; Yang, Dong

    2006-01-01

    It is difficult to choose eigenvectors when a neural network recognizes objects. If the eigenvectors are not chosen appropriately, the eigenvectors of different objects may be similar, or the eigenvectors of the same object may differ under scaling, shifting, and rotation. In order to solve this problem, the image is edge-detected, the membership function is reconstructed, and a new threshold segmentation method based on fuzzy theory is proposed to obtain the binary image. The moment invariants of the binary image are extracted and normalized. Because the moment invariants are sometimes too small to compute with effectively, the logarithms of the moment invariants are taken as the input eigenvectors of the BP network. The experimental results demonstrate that the proposed approach can recognize objects effectively, correctly and quickly.
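
    A minimal sketch of log-scaled moment-invariant features of the kind described above, using OpenCV's standard Hu moments on a synthetic binary image; the network itself is omitted, and the signed-log scaling is a common convention rather than the paper's exact recipe:

        import numpy as np
        import cv2

        # Synthetic binary image of a segmented object (a filled circle stands in for the real silhouette)
        img = np.zeros((128, 128), dtype=np.uint8)
        cv2.circle(img, (64, 64), 30, 255, -1)

        hu = cv2.HuMoments(cv2.moments(img)).flatten()          # seven moment invariants
        # Signed log scaling: the invariants span many orders of magnitude
        features = -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
        print(np.round(features, 3))                            # candidate input vector for the BP network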

  20. The Connection between Teaching Methods and Attribution Errors

    ERIC Educational Resources Information Center

    Wieman, Carl; Welsh, Ashley

    2016-01-01

    We collected data at a large, very selective public university on what math and science instructors felt was the biggest barrier to their students' learning. We also determined the extent of each instructor's use of research-based effective teaching methods. Instructors using fewer effective methods were more likely to say the greatest barrier to…
