Wilson Dslash Kernel From Lattice QCD Optimization
Joo, Balint; Smelyanskiy, Mikhail; Kalamkar, Dhiraj D.; Vaidyanathan, Karthikeyan
2015-07-01
Lattice Quantum Chromodynamics (LQCD) is a numerical technique used for calculations in theoretical nuclear and high-energy physics. LQCD is traditionally one of the first applications ported to many new high performance computing architectures, and indeed LQCD practitioners have been known to design and build custom LQCD computers. Lattice QCD kernels are frequently used as benchmarks (e.g. 168.wupwise in the SPEC suite) and are generally well understood, and as such are ideal for illustrating several optimization techniques. In this chapter we detail our work on optimizing the Wilson-Dslash kernels for the Intel Xeon Phi; however, as we will show, the technique gives excellent performance on the regular Xeon architecture as well.
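The structure that makes Dslash-style kernels popular optimization targets (a gauge-link-weighted nearest-neighbour stencil, hence low arithmetic intensity and heavy memory traffic) can be sketched in a toy 2D form. This is an illustrative sketch only: U(1) phases stand in for the SU(3) link matrices, a complex scalar stands in for the four-spinor, and Python stands in for the vectorized kernels the chapter actually describes.

```python
import cmath

def hop(psi, links, L):
    """Apply a 2D nearest-neighbour 'hopping' operator, the skeleton of a
    Dslash-style stencil: gauge-link-weighted sums over neighbours with
    periodic boundaries. Links are U(1) phases standing in for SU(3)."""
    out = {}
    for x in range(L):
        for y in range(L):
            acc = 0j
            for dx, dy, mu in ((1, 0, 0), (-1, 0, 0), (0, 1, 1), (0, -1, 1)):
                nx, ny = (x + dx) % L, (y + dy) % L
                if dx + dy > 0:
                    u = links[((x, y), mu)]                 # forward: link at this site
                else:
                    u = links[((nx, ny), mu)].conjugate()   # backward: conjugated neighbour link
                acc += u * psi[(nx, ny)]
            out[(x, y)] = acc
    return out

L = 4
psi = {(x, y): 1 + 0j for x in range(L) for y in range(L)}
unit = {((x, y), mu): 1 + 0j for x in range(L) for y in range(L) for mu in (0, 1)}
phase = {k: cmath.exp(0.3j) for k in unit}
flat = hop(psi, unit, L)      # constant field, unit links: 4 per site
twisted = hop(psi, phase, L)  # uniform link phase theta: 4*cos(theta) per site
```

With unit links a constant field simply counts its four neighbours; a uniform phase theta on every link multiplies that by cos(theta), because forward hops use the link and backward hops its conjugate.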
NASA Astrophysics Data System (ADS)
These are the proceedings of the QCD Evolution 2015 Workshop, held 26-30 May 2015 at Jefferson Lab, Newport News, Virginia, USA. The workshop is a continuation of a series of workshops held in four consecutive years: 2011, 2012 and 2013 at Jefferson Lab, and 2014 in Santa Fe, NM. With the rapid developments in our understanding of the evolution of parton distributions, including low-x, TMDs, GPDs, higher-twist correlation functions, and the associated progress in perturbative QCD, lattice QCD and effective field theory techniques, we looked forward with great enthusiasm to the 2015 meeting. Special attention was also paid to the participation of experimentalists, as the topics discussed are of immediate importance for the JLab 12 GeV experimental program and a future Electron Ion Collider.
QCDNUM: Fast QCD evolution and convolution
NASA Astrophysics Data System (ADS)
Botje, M.
2011-02-01
The QCDNUM program numerically solves the evolution equations for parton densities and fragmentation functions in perturbative QCD. Un-polarised parton densities can be evolved up to next-to-next-to-leading order in powers of the strong coupling constant, while polarised densities or fragmentation functions can be evolved up to next-to-leading order. Other types of evolution can be accessed by feeding alternative sets of evolution kernels into the program. A versatile convolution engine provides tools to compute parton luminosities, cross-sections in hadron-hadron scattering, and deep inelastic structure functions in the zero-mass scheme or in generalised mass schemes. Input to these calculations are either the QCDNUM evolved densities, or those read in from an external parton density repository. Included in the software distribution are packages to calculate zero-mass structure functions in un-polarised deep inelastic scattering, and heavy flavour contributions to these structure functions in the fixed flavour number scheme.
Program summary
Program title: QCDNUM
Version: 17.00
Catalogue identifier: AEHV_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHV_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU Public Licence
No. of lines in distributed program, including test data, etc.: 45 736
No. of bytes in distributed program, including test data, etc.: 911 569
Distribution format: tar.gz
Programming language: Fortran-77
Computer: All
Operating system: All
RAM: Typically 3 Mbytes
Classification: 11.5
Nature of problem: Evolution of the strong coupling constant and parton densities, up to next-to-next-to-leading order in perturbative QCD. Computation of observable quantities by Mellin convolution of the evolved densities with partonic cross-sections.
Solution method: Parametrisation of the parton densities as linear or quadratic splines on a discrete grid, and evolution of the spline
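The Mellin convolution that QCDNUM's engine evaluates has the generic form (f ⊗ g)(x) = ∫_x^1 (dz/z) f(z) g(x/z). A brute-force numerical sketch (midpoint rule, rather than QCDNUM's spline machinery) can be checked against simple analytic cases:

```python
import math

def mellin_convolution(f, g, x, n=4000):
    """(f ⊗ g)(x) = ∫_x^1 dz/z f(z) g(x/z), midpoint rule.
    A brute-force sketch of the convolution engine's job; QCDNUM itself
    works with spline representations on a discrete grid for speed."""
    h = (1.0 - x) / n
    total = 0.0
    for i in range(n):
        z = x + (i + 0.5) * h
        total += f(z) * g(x / z) / z * h
    return total

x = 0.3
# analytic checks: (z ⊗ 1)(x) = 1 - x  and  (z ⊗ z)(x) = x ln(1/x)
conv1 = mellin_convolution(lambda z: z, lambda z: 1.0, x)
conv2 = mellin_convolution(lambda z: z, lambda z: z, x)
```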
Sivers Asymmetry with QCD Evolution
NASA Astrophysics Data System (ADS)
Echevarria, Miguel G.; Idilbi, Ahmad; Kang, Zhong-Bo; Vitev, Ivan
2015-02-01
We analyze the Sivers asymmetry in both Drell-Yan (DY) production and semi-inclusive deep inelastic scattering (SIDIS), while considering properly defined transverse momentum dependent parton distribution and fragmentation functions and their QCD evolution. After finding a universal non-perturbative spin-independent Sudakov factor that can describe reasonably well the world's data of SIDIS, DY lepton pair and W/Z production in unpolarized scatterings, we perform a global fitting of all the experimental data on the Sivers asymmetry in SIDIS from HERMES, COMPASS and Jefferson Lab. Then we make predictions for the asymmetry in DY lepton pair and W boson production, which could be compared to the future experimental data in order to test the sign change of the Sivers function.
Jet quenching from QCD evolution
NASA Astrophysics Data System (ADS)
Chien, Yang-Ting; Emerman, Alexander; Kang, Zhong-Bo; Ovanesyan, Grigory; Vitev, Ivan
2016-04-01
Recent advances in soft-collinear effective theory with Glauber gluons have led to the development of a new method that gives a unified description of inclusive hadron production in reactions with nucleons and heavy nuclei. We show how this approach, based on the generalization of the DGLAP evolution equations to include final-state medium-induced parton shower corrections for large Q2 processes, can be combined with initial-state effects for applications to jet quenching phenomenology. We demonstrate that the traditional parton energy loss calculations can be regarded as a special soft-gluon emission limit of the general QCD evolution framework. We present phenomenological comparison of the SCET_G-based results on the suppression of inclusive charged hadron and neutral pion production in √s_NN = 2.76 TeV lead-lead collisions at the Large Hadron Collider to experimental data. We also show theoretical predictions for the upcoming √s_NN ≈ 5.1 TeV Pb+Pb run at the LHC.
QCD Evolution of Helicity and Transversity TMDs
Prokudin, Alexei
2014-01-01
We examine the QCD evolution of the helicity and transversity parton distribution functions when including also their dependence on transverse momentum. Using an appropriate definition of these polarized transverse momentum distributions (TMDs), we describe their dependence on the factorization scale and rapidity cutoff, which is essential for phenomenological applications.
R evolution: Improving perturbative QCD
NASA Astrophysics Data System (ADS)
Hoang, André H.; Jain, Ambar; Scimemi, Ignazio; Stewart, Iain W.
2010-07-01
Perturbative QCD results in the MS-bar scheme can be dramatically improved by switching to a scheme that accounts for the dominant power law dependence on the factorization scale in the operator product expansion. We introduce the "MSR scheme" which achieves this in a Lorentz and gauge invariant way and has a very simple relation to MS-bar. Results in MSR depend on a cutoff parameter R, in addition to the μ of MS-bar. R variations can be used to independently estimate (i) the size of power corrections, and (ii) higher-order perturbative corrections (much like μ in MS-bar). We give two examples at three-loop order, the ratio of mass splittings in the B*-B and D*-D systems, and the Ellis-Jaffe sum rule as a function of momentum transfer Q in deep inelastic scattering. Comparing to data, the perturbative MSR results work well even for Q ~ 1 GeV, and power corrections are reduced compared to MS-bar.
Resumming double logarithms in the QCD evolution of color dipoles
NASA Astrophysics Data System (ADS)
Iancu, E.; Madrigal, J. D.; Mueller, A. H.; Soyez, G.; Triantafyllopoulos, D. N.
2015-05-01
The higher-order perturbative corrections, beyond leading logarithmic accuracy, to the BFKL evolution in QCD at high energy are well known to suffer from a severe lack-of-convergence problem, due to radiative corrections enhanced by double collinear logarithms. Via an explicit calculation of Feynman graphs in light cone (time-ordered) perturbation theory, we show that the corrections enhanced by double logarithms (either energy-collinear, or double collinear) are associated with soft gluon emissions which are strictly ordered in lifetime. These corrections can be resummed to all orders by solving an evolution equation which is non-local in rapidity. This equation can be equivalently rewritten in local form, but with modified kernel and initial conditions, which resum double collinear logs to all orders. We extend this resummation to the next-to-leading order BFKL and BK equations. The first numerical studies of the collinearly-improved BK equation demonstrate the essential role of the resummation in both stabilizing and slowing down the evolution.
The QCD evolution of TMD in the covariant approach
NASA Astrophysics Data System (ADS)
Efremov, A. V.; Teryaev, O. V.; Zavada, P.
2016-02-01
The procedure for calculation of the QCD evolution of transverse momentum dependent distributions within the covariant approach is suggested. The standard collinear QCD evolution together with the requirements of relativistic invariance and rotational symmetry of the nucleon in its rest frame represent the basic ingredients of our approach. The obtained results are compared with the predictions of some other approaches.
QCD EVOLUTION AND TMD/SPIN EXPERIMENTS
Jian-Ping Chen
2012-12-01
Transverse spin and transverse momentum dependent (TMD) distribution studies have been one of the main focuses of hadron physics in recent years. The initial exploratory semi-inclusive deep-inelastic scattering (SIDIS) experiments with transversely polarized proton and deuteron targets from HERMES and COMPASS attracted great attention and led to very active efforts in both experiment and theory. QCD factorization has been carefully studied. A SIDIS experiment on the neutron with a polarized 3He target was performed at JLab. Recently published results will be shown. Precision TMD experiments are planned at JLab after the 12 GeV energy upgrade. The approved experiments with a new SoLID spectrometer on both the proton and the neutron will be presented. Proper QCD evolution treatments beyond the collinear case become crucial for the precision study of the TMDs. Experimentally, Q2 evolution and higher-twist effects are often closely related. The experience of studying higher-twist effects in the case of moments of the spin structure functions will be discussed.
QCD evolution of the Sivers asymmetry
NASA Astrophysics Data System (ADS)
Echevarria, Miguel G.; Idilbi, Ahmad; Kang, Zhong-Bo; Vitev, Ivan
2014-04-01
We study the QCD evolution of the Sivers effect in both semi-inclusive deep inelastic scattering (SIDIS) and Drell-Yan production (DY). We pay close attention to the nonperturbative spin-independent Sudakov factor in the evolution formalism and find a universal form which can describe reasonably well the experimental data on the transverse momentum distributions in SIDIS, DY lepton pair and W/Z production. With this Sudakov factor at hand, we perform a global fitting of all the experimental data on the Sivers asymmetry in SIDIS from HERMES, COMPASS and Jefferson Lab. We then make predictions for the Sivers asymmetry in DY lepton pair and W production that can be compared to the future experimental measurements to test the sign change of the Sivers functions between SIDIS and DY processes and constrain the sea quark Sivers functions.
Correlations and discreteness in nonlinear QCD evolution
Armesto, N.; Milhano, J.
2006-06-01
We consider modifications of the standard nonlinear QCD evolution in an attempt to account for some of the missing ingredients discussed recently, such as correlations, discreteness in gluon emission and Pomeron loops. The evolution is numerically performed using the Balitsky-Kovchegov equation on individual configurations defined by a given initial value of the saturation scale, for reduced rapidities y = (α_s N_c / π) Y < 10. We consider the effects of averaging over configurations as a way to implement correlations, using three types of Gaussian averaging around a mean saturation scale. Further, we heuristically mimic discreteness in gluon emission by considering a modified evolution in which the tails of the gluon distributions are cut off. The approach to scaling and the behavior of the saturation scale with rapidity in these modified evolutions are studied and compared with the standard mean-field results. For the large but finite values of rapidity explored, no strong quantitative difference in scaling for transverse momenta around the saturation scale is observed. At larger transverse momenta, the influence of the modifications in the evolution seems most noticeable in the first steps of the evolution. No influence on the rapidity behavior of the saturation scale due to the averaging procedure is found. In the cutoff evolution the rapidity evolution of the saturation scale is slowed down and strongly depends on the value of the cutoff. Our results stress the need to go beyond simple modifications of evolution by developing proper theoretical tools that implement such recently discussed ingredients.
Evolution of fluctuations near QCD critical point
Stephanov, M. A.
2010-03-01
We propose to describe the time evolution of quasistationary fluctuations near QCD critical point by a system of stochastic Boltzmann-Langevin-Vlasov-type equations. We derive the equations and study the system analytically in the linearized regime. Known results for equilibrium stationary fluctuations as well as the critical scaling of diffusion coefficient are reproduced. We apply the approach to the long-standing question of the fate of the critical point fluctuations during the hadronic rescattering stage of the heavy-ion collision after chemical freeze-out. We find that if conserved particle number fluctuations survive the rescattering, so do, under a certain additional condition, the fluctuations of nonconserved quantities, such as mean transverse momentum. We derive a simple analytical formula for the magnitude of this memory effect.
QCD Evolution of the Transverse Momentum Dependent Correlations
Zhou, Jian; Liang, Zuo-Tang; Yuan, Feng
2008-12-10
We study the QCD evolution for the twist-three quark-gluon correlation functions associated with the transverse momentum odd quark distributions. Different from that for the leading twist quark distributions, these evolution equations involve more general twist-three functions beyond the correlation functions themselves. They provide important information on nucleon structure, and can be studied in the semi-inclusive hadron production in deep inelastic scattering and Drell-Yan lepton pair production in pp scattering process.
Analytic Evolution of Singular Distribution Amplitudes in QCD
Radyushkin, Anatoly V.; Tandogan Kunkel, Asli
2014-03-01
We describe a method of analytic evolution of distribution amplitudes (DA) that have singularities, such as non-zero values at the end-points of the support region, jumps at some points inside the support region and cusps. We illustrate the method by applying it to the evolution of a flat (constant) DA and an antisymmetric flat DA, and then use it for the evolution of the two-photon generalized distribution amplitude. Our approach has advantages over the standard method of expansion in Gegenbauer polynomials, which requires an infinite number of terms in order to accurately reproduce functions in the vicinity of singular points, and over a straightforward iteration of an initial distribution with the evolution kernel. The latter produces logarithmically divergent terms at each iteration, while in our method the logarithmic singularities are summed from the start, which immediately produces a continuous curve, with only one or two iterations needed afterwards in order to get rather precise results.
NASA Astrophysics Data System (ADS)
Fomin, Fedor V.
Preprocessing (data reduction or kernelization) as a strategy of coping with hard problems is universally used in almost every implementation. The history of preprocessing, like applying reduction rules simplifying truth functions, can be traced back to the 1950's [6]. A natural question in this regard is how to measure the quality of preprocessing rules proposed for a specific problem. For a long time the mathematical analysis of polynomial time preprocessing algorithms was neglected. The basic reason for this anomaly was that if we start with an instance I of an NP-hard problem and can show that in polynomial time we can replace this with an equivalent instance I' with |I'| < |I| then that would imply P=NP in classical complexity.
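A textbook example of preprocessing with a provable guarantee is Buss' kernelization for k-Vertex-Cover: any vertex of degree greater than k must be in every cover of size at most k, and once that rule is exhausted a yes-instance retains at most k² edges. A small sketch of the standard rules (the brute-force checker is only for validating the kernel on tiny instances):

```python
from itertools import combinations

def buss_kernel(edges, k):
    """Buss' kernelization for k-Vertex-Cover.
    Rule 1: a vertex of degree > k must be in any cover of size <= k.
    Rule 2: after Rule 1, a yes-instance has at most k*k edges.
    Returns (kernel_edges, k') or None if the instance is a provable 'no'."""
    edges = {frozenset(e) for e in edges}
    changed = True
    while changed and k >= 0:
        changed = False
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        for v, d in deg.items():
            if d > k:
                edges = {e for e in edges if v not in e}  # take v into the cover
                k -= 1
                changed = True
                break
    if k < 0 or len(edges) > k * k:
        return None
    return edges, k

def has_vertex_cover(edges, k):
    """Brute force, only for checking the kernel on tiny instances."""
    edges = [tuple(e) for e in edges]
    if not edges:
        return True
    verts = sorted({v for e in edges for v in e})
    for size in range(k + 1):
        for cover in combinations(verts, size):
            s = set(cover)
            if all(u in s or v in s for u, v in edges):
                return True
    return False

def decide(edges, k):
    kern = buss_kernel(edges, k)
    return kern is not None and has_vertex_cover(*kern)

# hypothetical toy instances: a degree-5 star plus a triangle, and two triangles
star_plus_triangle = [("c", f"l{i}") for i in range(5)] + [("a", "b"), ("b", "d"), ("a", "d")]
two_triangles = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]
```

On the first instance with k = 3 the kernel immediately takes the star centre, leaving only the triangle; on the second the kernel cannot shrink anything and the brute-force check on the (already small) kernel returns "no".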
Iterative filtering decomposition based on local spectral evolution kernel
Wang, Yang; Wei, Guo-Wei; Yang, Siyang
2011-01-01
Synthesizing information, achieving understanding, and deriving insight from increasingly massive, time-varying, noisy and possibly conflicting data sets are some of the most challenging tasks in the present information age. Traditional technologies, such as the Fourier transform and wavelet multi-resolution analysis, are inadequate to handle all of the above-mentioned tasks. The empirical mode decomposition (EMD) has emerged as a new powerful tool for resolving many challenging problems in data processing and analysis. Recently, an iterative filtering decomposition (IFD) has been introduced to address the stability and efficiency problems of the EMD. Another data analysis technique is the local spectral evolution kernel (LSEK), which provides a near perfect low pass filter with desirable time-frequency localizations. The present work utilizes the LSEK to further stabilize the IFD, and offers an efficient, flexible and robust scheme for information extraction, complexity reduction, and signal and image understanding. The performance of the present LSEK based IFD is intensively validated over a wide range of data processing tasks, including mode decomposition, analysis of time-varying data, information extraction from nonlinear dynamic systems, etc. The utility, robustness and usefulness of the proposed LSEK based IFD are demonstrated via a large number of applications, such as the analysis of stock market data, the decomposition of ocean wave magnitudes, the understanding of physiologic signals and information recovery from noisy images. The performance of the proposed method is compared with that of existing methods in the literature. Our results indicate that the LSEK based IFD improves both the efficiency and the stability of conventional EMD algorithms. PMID:22350559
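The core of iterative filtering can be sketched with a boxcar moving average playing the low-pass role that the LSEK fills (with far better frequency localization) in the paper: repeatedly subtracting the low-pass of the signal leaves the fastest oscillation as the first extracted mode. A toy sketch under that substitution, with a synthetic two-tone signal:

```python
import math

def moving_average(x, w):
    # centered boxcar low-pass with reflection padding; the LSEK of the
    # paper plays this role with much better frequency localization
    half = w // 2
    padded = x[half:0:-1] + x + x[-2:-half - 2:-1]
    csum = [0.0]
    for v in padded:
        csum.append(csum[-1] + v)
    return [(csum[i + w] - csum[i]) / w for i in range(len(x))]

def first_imf(x, w, iters=3):
    # iterative filtering: h <- h - lowpass(h); the fixed point keeps only
    # the components the filter rejects, i.e. the fastest oscillation
    h = list(x)
    for _ in range(iters):
        m = moving_average(h, w)
        h = [a - b for a, b in zip(h, m)]
    return h

n, dt = 2000, 0.005
w = 51  # boxcar spans exactly one period of the fast mode, so it kills it
slow = [math.sin(2 * math.pi * 0.2 * i * dt) for i in range(n)]
fast = [math.sin(2 * math.pi * i / w) for i in range(n)]
signal = [a + b for a, b in zip(slow, fast)]
imf = first_imf(signal, w)  # away from the edges this recovers `fast`
```

The window length is deliberately matched to the fast period here; choosing the filter scale adaptively (and with a smoother kernel) is exactly the part the LSEK-based scheme improves.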
NASA Astrophysics Data System (ADS)
Echevarria, Miguel G.; Idilbi, Ahmad; Scimemi, Ignazio
2014-07-01
By considering semi-inclusive deep-inelastic scattering and the (complementary) q_T-spectrum for Drell-Yan lepton pair production, we derive the QCD evolution for all the leading-twist transverse momentum dependent distribution and fragmentation functions. We argue that all of those functions evolve with Q2 following a single evolution kernel. This kernel is independent of the underlying kinematics and it is also spin independent. These features hold, in impact parameter space, for all values of b_T. The evolution kernel presented has all of its large logarithms resummed up to next-to-next-to-leading logarithmic accuracy, which is the highest possible accuracy given the existing perturbative calculations. As a study case we apply this kernel to investigate the evolution of the Collins function, one of the ingredients that have recently attracted much attention within the phenomenological studies of spin asymmetries. Our analysis can be readily implemented to revisit previously obtained fits that involve data at different scales for other spin-dependent functions. Such improved fits are important to get better predictions (with the correct evolution kernel) for certain upcoming experiments aiming to measure the Sivers function, Collins function, transversity, and other spin-dependent functions as well.
Non-Markovian Quantum Evolution: Time-Local Generators and Memory Kernels
NASA Astrophysics Data System (ADS)
Chruściński, Dariusz; Należyty, Paweł
2016-06-01
In this paper we provide a basic introduction to the topic of quantum non-Markovian evolution presenting both time-local and memory kernel approach to the evolution of open quantum systems. We start with the standard notion of a classical Markovian stochastic process and generalize it to classical Markovian stochastic evolution which in turn becomes a starting point of the quantum setting. Our approach is based on the notion of P-divisible, CP-divisible maps and their refinements to k-divisible maps. Basic methods enabling one to detect non-Markovianity of the quantum evolution are also presented. Our analysis is illustrated by several simple examples.
NASA Astrophysics Data System (ADS)
Fleming, Sean
In this talk I review recent experimental and theoretical results in QCD. Since the topic is too vast to cover within given time constraints I choose to highlight some of the subjects that I find particularly exciting. On the experimental side I focus on measurements made at the Tevatron. Specifically jet production rates, and the cross section for B meson production. In addition I discuss an interesting measurement made by the Belle collaboration of double exclusive charmonium production. On the theory side I quickly review recent advances in computing hadronic cross sections at subleading order in perturbation theory. I then move on to soft-collinear effective theory. After a lightning review of the formalism I discuss recently published results on color-suppressed B → D decays.
How to impose initial conditions for QCD evolution of double parton distributions?
NASA Astrophysics Data System (ADS)
Golec-Biernat, Krzysztof; Lewandowska, Emilia
2014-07-01
Double parton distribution functions are used in the QCD description of double parton scattering. The double parton distributions evolve with hard scales through QCD evolution equations which obey nontrivial momentum and valence quark number sum rules. We describe an attempt to construct initial conditions for the evolution equations which exactly fulfill these sum rules and discuss its shortcomings. We also discuss the factorization of the double parton distributions into a product of two single parton distribution functions at small values of the parton momentum fractions.
COLLINEAR SPLITTING, PARTON EVOLUTION AND THE STRANGE-QUARK ASYMMETRY OF THE NUCLEON IN NNLO QCD.
Rodrigo, G.; Catani, S.; de Florian, D.; Vogelsang, W.
2004-04-25
We consider the collinear limit of QCD amplitudes at one-loop order, and their factorization properties directly in color space. These results apply to the multiple collinear limit of an arbitrary number of QCD partons, and are a basic ingredient in many higher-order computations. In particular, we discuss the triple collinear limit and its relation to flavor asymmetries in the QCD evolution of parton densities at three loops. As a phenomenological consequence of this new effect, and of the fact that the nucleon has non-vanishing quark valence densities, we study the perturbative generation of a strange-antistrange asymmetry s(x) - s̄(x) in the nucleon's sea.
Efficient evolution of unpolarized and polarized parton distributions with QCD-PEGASUS
NASA Astrophysics Data System (ADS)
Vogt, A.
2005-07-01
The FORTRAN package QCD-PEGASUS is presented. This program provides fast, flexible and accurate solutions of the evolution equations for unpolarized and polarized parton distributions of hadrons in perturbative QCD. The evolution is performed using the symbolic moment-space solutions on a one-fits-all Mellin inversion contour. User options include the order of the evolution including the next-to-next-to-leading order in the unpolarized case, the type of the evolution including an emulation of brute-force solutions, the evolution with a fixed number n of flavors or in the variable-n scheme, and the evolution with a renormalization scale unequal to the factorization scale. The initial distributions are needed in a form facilitating the computation of the complex Mellin moments.
Program summary
Title of program: QCD-PEGASUS
Version: 1.0
Catalogue identifier: ADVN
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVN
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
License: GNU Public License
Computers: all
Operating systems: all
Program language: FORTRAN 77 (using the common compiler extension of procedure names with more than six characters)
Memory required to execute: negligible (<1 MB)
Other programs called: none
External files needed: none
Number of lines in distributed program, including test data, etc.: 8157
Number of bytes in distributed program, including test data, etc.: 240 578
Distribution format: tar.gz
Nature of the physical problem: Solution of the evolution equations for the unpolarized and polarized parton distributions of hadrons at leading order (LO), next-to-leading order and next-to-next-to-leading order of perturbative QCD. Evolution performed either with a fixed number n of effectively massless quark flavors or in the variable-n scheme. The calculation of observables from the parton distributions is not part of the present package.
Method of solution: Analytic solution in Mellin space (beyond LO in
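The moment-space strategy PEGASUS exploits is that each Mellin moment evolves independently, by an ordinary differential equation with a closed-form LO solution. A toy sketch in an illustrative convention (b0, gamma_N, a0 are made-up numbers, not fitted values): with a = α_s/4π, da/dlnQ² = −b0·a² and df_N/dlnQ² = −γ_N·a·f_N, so f_N(Q²) = f_N(Q0²)·(a(Q²)/a(Q0²))^(γ_N/b0). The numerical integration below reproduces that analytic solution:

```python
# LO moment-space evolution in a toy convention (not PEGASUS' actual
# normalizations): each Mellin moment f_N evolves by an independent ODE.
b0, gamma_N = 0.61, 4.0   # illustrative values only
a0, f0 = 0.025, 1.0       # a(Q0^2) and the moment f_N(Q0^2)
L_max = 6.0               # ln(Q^2 / Q0^2)

def a_of(L):
    # closed-form one-loop running of a = alpha_s/(4 pi)
    return a0 / (1.0 + b0 * a0 * L)

def evolve_rk4(steps=400):
    # RK4 integration of df/dL = -gamma_N * a(L) * f
    h = L_max / steps
    f, L = f0, 0.0
    def rhs(Lx, fx):
        return -gamma_N * a_of(Lx) * fx
    for _ in range(steps):
        k1 = rhs(L, f)
        k2 = rhs(L + h / 2, f + h * k1 / 2)
        k3 = rhs(L + h / 2, f + h * k2 / 2)
        k4 = rhs(L + h, f + h * k3)
        f += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        L += h
    return f

numeric = evolve_rk4()
analytic = f0 * (a_of(L_max) / a0) ** (gamma_N / b0)
```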
Method of Analytic Evolution of Flat Distribution Amplitudes in QCD
Tandogan, Asli; Radyushkin, Anatoly V.
2011-11-01
A new analytical method of performing ERBL evolution is described. The main goal is to develop an approach that works for distribution amplitudes that do not vanish at the end points, for which the standard method of expansion in Gegenbauer polynomials is inefficient. Two cases of the initial DA are considered: a purely flat DA, given by the same constant for all x, and an antisymmetric DA given by opposite constants for x < 1/2 or x > 1/2. For a purely flat DA, the evolution is governed by an overall (x(1-x))^t dependence on the evolution parameter t times a factor that was calculated as an expansion in t. For an antisymmetric flat DA, an extra overall factor |1-2x|^(2t) appears due to a jump at x = 1/2. A good convergence was observed in the t ≲ 1/2 region. For larger t, one can use the standard method of the Gegenbauer expansion.
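The inefficiency of the Gegenbauer expansion for a flat DA can be seen directly: writing φ(x) = 1 as 6x(1−x) Σ a_n C_n^{3/2}(2x−1), the coefficients a_n = 2(2n+3)/(3(n+1)(n+2)) (even n) fall off so slowly that even thousands of terms leave a visible error at the midpoint. A numerical sketch of the standard expansion (not the paper's improved method):

```python
def flat_da_partial_sum(x, n_max):
    """Partial sum of the Gegenbauer expansion of the flat DA phi(x) = 1:
    phi(x) = 6 x (1-x) * sum_{n even} a_n C_n^{3/2}(2x-1),
    with a_n = 2(2n+3) / (3(n+1)(n+2)) for the flat DA."""
    xi = 2.0 * x - 1.0
    c_prev, c = 1.0, 3.0 * xi          # C_0^{3/2}, C_1^{3/2}
    total = 1.0 * c_prev               # a_0 = 1
    for m in range(2, n_max + 1):
        # Gegenbauer recurrence: m C_m = (2m+1) xi C_{m-1} - (m+1) C_{m-2}
        c_prev, c = c, ((2 * m + 1) * xi * c - (m + 1) * c_prev) / m
        if m % 2 == 0:
            a_m = 2.0 * (2 * m + 3) / (3.0 * (m + 1) * (m + 2))
            total += a_m * c
    return 6.0 * x * (1.0 - x) * total

# at the midpoint the truncated series creeps toward phi = 1 very slowly
err_short = abs(flat_da_partial_sum(0.5, 20) - 1.0)
err_long = abs(flat_da_partial_sum(0.5, 4000) - 1.0)
```

The partial sums oscillate around 1 with an amplitude decaying only like n^(-1/2), which is exactly the behaviour that motivates an analytic treatment of the end-point singularities.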
Renormalization group evolution of multi-gluon correlators in high energy QCD
Dumitru, A.; Venugopalan, R.; Jalilian-Marian, J.; Lappi, T.; Schenke, B.
2011-11-06
Many-body QCD in leading high energy Regge asymptotics is described by the Balitsky-JIMWLK hierarchy of renormalization group equations for the x evolution of multi-point Wilson line correlators. These correlators are universal and ubiquitous in final states in deeply inelastic scattering and hadronic collisions. For instance, recently measured di-hadron correlations at forward rapidity in deuteron-gold collisions at the Relativistic Heavy Ion Collider (RHIC) are sensitive to four and six point correlators of Wilson lines in the small x color fields of the dense nuclear target. We evaluate these correlators numerically by solving the functional Langevin equation that describes the Balitsky-JIMWLK hierarchy. We compare the results to mean-field Gaussian and large Nc approximations used in previous phenomenological studies. We comment on the implications of our results for quantitative studies of multi-gluon final states in high energy QCD.
Statistical physics in QCD evolution towards high energies
NASA Astrophysics Data System (ADS)
Munier, Stéphane
2015-08-01
The concepts and methods used for the study of disordered systems have proven useful in the analysis of the evolution equations of quantum chromodynamics in the high-energy regime: Indeed, parton branching in the semi-classical approximation relevant at high energies and at a fixed impact parameter is a peculiar branching-diffusion process, and parton branching supplemented by saturation effects (such as gluon recombination) is a reaction-diffusion process. In this review article, we first introduce the basic concepts in the context of simple toy models, we study the properties of the latter, and show how the results obtained for the simple models may be taken over to quantum chromodynamics.
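The reaction-diffusion analogue referred to above is the FKPP equation ∂_t u = ∂_x²u + u − u², whose "pulled" front advances at asymptotic speed 2 in these units, approached slowly from below. A finite-difference toy shows the front and its speed:

```python
# Toy FKPP reaction-diffusion front, the statistical-physics analogue of
# saturated high-energy QCD evolution: du/dt = d2u/dx2 + u(1 - u).
# The pulled front's asymptotic speed is 2 in these units; a finite-time
# simulation approaches it slowly from below (~ 2 - 3/(2t)).

def fkpp_front_speed(nx=420, dx=0.25, dt=0.02, t_end=30.0):
    # step initial condition: saturated (u = 1) on the left
    u = [1.0 if i * dx < 10.0 else 0.0 for i in range(nx)]
    steps = int(round(t_end / dt))
    positions = []
    for s in range(1, steps + 1):
        new = u[:]
        for i in range(1, nx - 1):
            lap = (u[i - 1] - 2.0 * u[i] + u[i + 1]) / (dx * dx)
            new[i] = u[i] + dt * (lap + u[i] * (1.0 - u[i]))
        u = new
        if s * dt >= t_end / 2.0 and s % 50 == 0:
            # front position: first grid point where u drops below 1/2
            x_front = next(i * dx for i in range(nx) if u[i] < 0.5)
            positions.append((s * dt, x_front))
    (t0, x0), (t1, x1) = positions[0], positions[-1]
    return (x1 - x0) / (t1 - t0)

speed = fkpp_front_speed()
```

In the QCD dictionary the front position is the saturation scale and the front speed its growth rate with rapidity, which is what makes this toy model informative about the evolution equations discussed in the review.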
Real time evolution of non-Gaussian cumulants in the QCD critical regime
Mukherjee, Swagato; Venugopalan, Raju; Yin, Yi
2015-09-23
In this study, we derive a coupled set of equations that describe the nonequilibrium evolution of cumulants of critical fluctuations for spacetime trajectories on the crossover side of the QCD phase diagram. In particular, novel expressions are obtained for the nonequilibrium evolution of non-Gaussian skewness and kurtosis cumulants. By utilizing a simple model of the spacetime evolution of a heavy-ion collision, we demonstrate that, depending on the relaxation rate of critical fluctuations, skewness and kurtosis can differ significantly in magnitude as well as in sign from equilibrium expectations. Memory effects are important and shown to persist even for trajectories that skirt the edge of the critical regime. We use phenomenologically motivated parametrizations of freeze-out curves and of the beam-energy dependence of the net baryon chemical potential to explore the implications of our model study for the critical-point search in heavy-ion collisions.
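The qualitative memory effect can be illustrated with a single relaxation-rate equation, dκ/dt = (κ_eq(t) − κ)/τ: when τ is large compared to the passage time through the critical region, κ at late times remembers the peak of κ_eq rather than tracking its final value. This is a one-variable toy with made-up parameters, not the paper's coupled cumulant equations:

```python
import math

# Toy relaxation model of a critical cumulant: kappa relaxes toward a
# time-dependent equilibrium value kappa_eq(t) at rate 1/tau.  Slow
# relaxation (large tau) retains a memory of the peak; fast relaxation
# tracks kappa_eq.  All numbers below are illustrative.

def kappa_eq(t):
    # equilibrium cumulant peaks as the trajectory passes the critical region
    return 1.0 + 4.0 * math.exp(-((t - 5.0) ** 2) / 2.0)

def evolve(tau, t_end=10.0, dt=0.001):
    # forward-Euler integration of dkappa/dt = (kappa_eq - kappa)/tau
    kappa = kappa_eq(0.0)
    t = 0.0
    while t < t_end:
        kappa += dt * (kappa_eq(t) - kappa) / tau
        t += dt
    return kappa

slow = evolve(tau=3.0)   # retains memory of the peak near t = 5
fast = evolve(tau=0.05)  # equilibrates, ends close to kappa_eq(t_end)
```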
Transverse momentum dependent parton distribution and fragmentation functions with QCD evolution
NASA Astrophysics Data System (ADS)
Aybat, S. Mert; Rogers, Ted C.
2011-06-01
We assess the current phenomenological status of transverse momentum dependent (TMD) parton distribution functions (PDFs) and fragmentation functions (FFs) and study the effect of consistently including perturbative QCD (pQCD) evolution. Our goal is to initiate the process of establishing reliable, QCD-evolved parametrizations for the TMD PDFs and TMD FFs that can be used both to test TMD factorization and to search for evidence of the breakdown of TMD factorization that is expected for certain processes. In this article, we focus on spin-independent processes because they provide the simplest illustration of the basic steps and can already be used in direct tests of TMD factorization. Our calculations are based on the Collins-Soper-Sterman (CSS) formalism, supplemented by recent theoretical developments which have clarified the precise definitions of the TMD PDFs and TMD FFs needed for a valid TMD-factorization theorem. Starting with these definitions, we numerically generate evolved TMD PDFs and TMD FFs using as input existing parametrizations for the collinear PDFs, collinear FFs, nonperturbative factors in the CSS factorization formalism, and recent fixed-scale fits. We confirm that evolution has important consequences, both qualitatively and quantitatively, and argue that it should be included in future phenomenological studies of TMD functions. Our analysis is also suggestive of extensions to processes that involve spin-dependent functions such as the Boer-Mulders, Sivers, or Collins functions, which we intend to pursue in future publications. At our website [http://projects.hepforge.org/tmd/], we have made available the tables and calculations needed to obtain the TMD parametrizations presented herein.
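At the core of the CSS evolution discussed here is a Sudakov exponent that resums logarithms between the scale set by the transverse distance b and the hard scale Q. The sketch below evaluates a schematic fixed-coupling version of that exponent with LO coefficients; the fixed coupling, its value, and the b0 shorthand are simplifying assumptions, not the authors' implementation.

```python
import math

CF = 4.0 / 3.0

def sudakov(Q, b, alpha_s=0.3, b0=1.1229):  # b0 = 2*exp(-gamma_E), conventional shorthand
    """Schematic fixed-coupling CSS Sudakov exponent
    S(b, Q) = int_{mu_b^2}^{Q^2} dmu^2/mu^2 [A ln(Q^2/mu^2) + B]
    with LO quark coefficients A = CF*alpha_s/pi, B = -3*CF*alpha_s/(2*pi)."""
    mu_b = b0 / b                  # b-dependent lower scale
    if mu_b >= Q:
        return 0.0
    A = CF * alpha_s / math.pi
    B = -3.0 * CF * alpha_s / (2.0 * math.pi)
    L = math.log(Q * Q / (mu_b * mu_b))
    return 0.5 * A * L * L + B * L   # analytic result of the mu^2 integral

# the evolution factor exp(-S) suppresses large b more strongly at larger Q:
print(math.exp(-sudakov(10.0, 1.0)), math.exp(-sudakov(91.0, 1.0)))
```

The stronger suppression of large b at larger Q is what broadens the transverse momentum distributions under evolution, one of the qualitative consequences the paper emphasizes.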
Linear vs non-linear QCD evolution in the neutrino-nucleon cross section
NASA Astrophysics Data System (ADS)
Albacete, Javier L.; Illana, José I.; Soto-Ontoso, Alba
2016-03-01
Evidence for an extraterrestrial flux of ultra-high-energy neutrinos, of the order of a PeV, has opened a new era in neutrino astronomy. An essential ingredient for the determination of neutrino fluxes from the number of observed events is precise knowledge of the neutrino-nucleon cross section. In this work, based on [1], we present a quantitative study of σνN in the neutrino energy range 10^4 < Eν < 10^14 GeV within two complementary QCD approaches: NLO DGLAP evolution using different sets of PDFs, and BK small-x evolution with running coupling and kinematical corrections. Further, we translate this theoretical uncertainty into upper bounds on the ultra-high-energy neutrino flux for different experiments.
Markovian Monte Carlo program EvolFMC v.2 for solving QCD evolution equations
NASA Astrophysics Data System (ADS)
Jadach, S.; Płaczek, W.; Skrzypek, M.; Stokłosa, P.
2010-02-01
We present the program EvolFMC v.2 that solves the evolution equations in QCD for the parton momentum distributions by means of the Monte Carlo technique based on the Markovian process. The program solves the DGLAP-type evolution as well as modified-DGLAP ones. In both cases the evolution can be performed in the LO or NLO approximation. The quarks are treated as massless. The overall technical precision of the code has been established at 5×10. This way, for the first time ever, we demonstrate that with the Monte Carlo method one can solve the evolution equations with precision comparable to the other numerical methods. New version program summary. Program title: EvolFMC v.2 Catalogue identifier: AEFN_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEFN_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including binary test data, etc.: 66 456 (7407 lines of C++ code) No. of bytes in distributed program, including test data, etc.: 412 752 Distribution format: tar.gz Programming language: C++ Computer: PC, Mac Operating system: Linux, Mac OS X RAM: Less than 256 MB Classification: 11.5 External routines: ROOT ( http://root.cern.ch/drupal/) Nature of problem: Solution of the QCD evolution equations for the parton momentum distributions of the DGLAP- and modified-DGLAP-type in the LO and NLO approximations. Solution method: Monte Carlo simulation of the Markovian process of a multiple emission of partons. Restrictions: Limited to the case of massless partons. Implemented in the LO and NLO approximations only. Weighted events only. Unusual features: Modified-DGLAP evolutions included up to the NLO level. Additional comments: Technical precision established at 5×10. Running time: For 10^6 events at 100 GeV: DGLAP NLO: 27s; C-type modified DGLAP NLO: 150s (MacBook Pro with Mac OS X v.10
Collins, John; Rogers, Ted
2015-04-01
There is considerable controversy about the size and importance of non-perturbative contributions to the evolution of transverse momentum dependent (TMD) parton distribution functions. Standard fits to relatively high-energy Drell-Yan data give evolution that when taken to lower Q is too rapid to be consistent with recent data in semi-inclusive deeply inelastic scattering. Some authors provide very different forms for TMD evolution, even arguing that non-perturbative contributions at large transverse distance bT are not needed or are irrelevant. Here, we systematically analyze the issues, both perturbative and non-perturbative. We make a motivated proposal for the parameterization of the non-perturbative part of the TMD evolution kernel that could give consistency: with the variety of apparently conflicting data, with theoretical perturbative calculations where they are applicable, and with general theoretical non-perturbative constraints on correlation functions at large distances. We propose and use a scheme- and scale-independent function A(bT) that gives a tool to compare and diagnose different proposals for TMD evolution. We also advocate for phenomenological studies of A(bT) as a probe of TMD evolution. The results are important generally for applications of TMD factorization. In particular, they are important to making predictions for proposed polarized Drell-Yan experiments to measure the Sivers function.
QCD evolution of (un)polarized gluon TMDPDFs and the Higgs qT-distribution
NASA Astrophysics Data System (ADS)
Echevarria, Miguel G.; Kasemets, Tomas; Mulders, Piet J.; Pisano, Cristian
2015-07-01
We provide the proper definition of all the leading-twist (un)polarized gluon transverse momentum dependent parton distribution functions (TMDPDFs), by considering the Higgs boson transverse momentum distribution in hadron-hadron collisions and deriving the factorization theorem in terms of them. We show that the evolution of all the (un)polarized gluon TMDPDFs is driven by a universal evolution kernel, which can be resummed up to next-to-next-to-leading-logarithmic accuracy. Considering the proper definition of gluon TMDPDFs, we perform an explicit next-to-leading-order calculation of the unpolarized (f_1^g), linearly polarized (h_1^{⊥g}) and helicity (g_{1L}^g) gluon TMDPDFs, and show that, as expected, they are free from rapidity divergences. As a byproduct, we obtain the Wilson coefficients of the refactorization of these TMDPDFs at large transverse momentum. In particular, the coefficient of g_{1L}^g, which has never been calculated before, constitutes a new and necessary ingredient for a reliable phenomenological extraction of this quantity, for instance at RHIC or the future AFTER@LHC or Electron-Ion Collider. The coefficients of f_1^g and h_1^{⊥g} have never been calculated in the present formalism, although they could be obtained by carefully collecting and recasting previous results in the new TMD formalism. We apply these results to analyze the contribution of linearly polarized gluons at different scales, relevant, for instance, for the inclusive production of the Higgs boson and the C-even pseudoscalar bottomonium state η_b. Applying our resummation scheme we finally provide predictions for the Higgs boson qT-distribution at the LHC.
Dumitru, Adrian; Jalilian-Marian, Jamal
2010-10-01
Present knowledge of QCD n-point functions of Wilson lines at high energies is rather limited. In practical applications, it is therefore customary to factorize higher n-point functions into products of two-point functions (dipoles) which satisfy the Balitsky-Kovchegov evolution equation. We employ the Jalilian-Marian-Iancu-McLerran-Weigert-Leonidov-Kovner formalism to derive explicit evolution equations for the 4- and 6-point functions of fundamental Wilson lines and show that if the Gaussian approximation is carried out before the rapidity evolution step is taken, then many leading-order N_c contributions are missed. Our evolution equations could specifically be used to improve calculations of forward dijet angular correlations, recently measured by the STAR Collaboration in deuteron-gold collisions at the RHIC collider. Forward dijets in proton-proton collisions at the LHC probe QCD evolution at even smaller light-cone momentum fractions. Such correlations may provide insight into genuine differences between the Jalilian-Marian-Iancu-McLerran-Weigert-Leonidov-Kovner and Balitsky-Kovchegov approaches.
Analytic solution to leading order coupled DGLAP evolution equations: A new perturbative QCD tool
NASA Astrophysics Data System (ADS)
Block, Martin M.; Durand, Loyal; Ha, Phuoc; McKay, Douglas W.
2011-03-01
We have analytically solved the LO perturbative QCD singlet DGLAP equations [V. N. Gribov and L. N. Lipatov, Sov. J. Nucl. Phys. 15, 438 (1972); G. Altarelli and G. Parisi, Nucl. Phys. B126, 298 (1977); Y. L. Dokshitzer, Sov. Phys. JETP 46, 641 (1977)] using Laplace transform techniques. Newly developed, highly accurate numerical inverse Laplace transform algorithms [M. M. Block, Eur. Phys. J. C 65, 1 (2010); M. M. Block, Eur. Phys. J. C 68, 683 (2010)] allow us to write fully decoupled solutions for the singlet structure function Fs(x,Q^2) and G(x,Q^2) as Fs(x,Q^2)=Fs(Fs0(x0),G0(x0)) and G(x,Q^2)=G(Fs0(x0),G0(x0)), where the x0 are the Bjorken x values at Q_0^2. Here Fs and G are known functions—found using LO DGLAP splitting functions—of the initial boundary conditions Fs0(x)≡Fs(x,Q_0^2) and G0(x)≡G(x,Q_0^2), i.e., the chosen starting functions at the virtuality Q_0^2. For both G(x) and Fs(x), we are able to either devolve or evolve each separately and rapidly, with very high numerical accuracy—a computational fractional precision of O(10^-9). Armed with this powerful new tool in the perturbative QCD arsenal, we compare our numerical results from the above equations with the published MSTW2008 and CTEQ6L LO gluon and singlet Fs distributions [A. D. Martin, W. J. Stirling, R. S. Thorne, and G. Watt, Eur. Phys. J. C 63, 189 (2009)], starting from their initial values at Q_0^2=1 GeV^2 and 1.69 GeV^2, respectively, using their choice of αs(Q^2). This allows an important independent check on the accuracies of their evolution codes and, therefore, the computational accuracies of their published parton distributions. Our method completely decouples the two LO distributions, at the same time guaranteeing that both G and Fs satisfy the singlet coupled DGLAP equations. It also allows one to easily obtain the effects of
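The key mathematical fact exploited by such transform methods is that a Mellin (equivalently, Laplace in ln 1/x) transform converts the convolutions appearing in the DGLAP equations into ordinary products, which is what allows the solutions to decouple. A small numerical check of that property, with hypothetical smooth densities:

```python
def mellin(h, N, n=4000):
    """Mellin moment M[h](N) = int_0^1 x^(N-1) h(x) dx, midpoint rule."""
    dx = 1.0 / n
    return sum(((i + 0.5) * dx) ** (N - 1) * h((i + 0.5) * dx) for i in range(n)) * dx

def convolve(f, g, x, n=1000):
    """Mellin convolution (f ⊗ g)(x) = int_x^1 dz/z f(z) g(x/z), midpoint rule."""
    dz = (1.0 - x) / n
    total = 0.0
    for i in range(n):
        z = x + (i + 0.5) * dz
        total += f(z) * g(x / z) / z * dz
    return total

f = lambda x: x * (1.0 - x)        # hypothetical input density
g = lambda x: (1.0 - x) ** 2       # hypothetical kernel

N = 3.0
lhs = mellin(lambda x: convolve(f, g, x), N, n=400)
rhs = mellin(f, N) * mellin(g, N)
print(lhs, rhs)   # the transform turns the convolution into a product
```

In transform space the coupled integro-differential DGLAP system becomes algebraic, which is why the solutions can be written in the fully decoupled form quoted in the abstract.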
Small-x Evolution of Structure Functions in the Next-to-Leading Order
Chirilli, Giovanni A.
2009-12-17
The high-energy behavior of amplitudes in gauge theories can be reformulated in terms of the evolution of Wilson-line operators. In the leading order this evolution is governed by the nonlinear Balitsky-Kovchegov (BK) equation. The NLO corrections define the scale of the running-coupling constant in the BK equation and in QCD, its kernel has both conformal and non-conformal parts. To separate the conformally invariant effects from the running-coupling effects, we calculate the NLO evolution of the color dipoles in the conformal N = 4 SYM theory, then we define the 'composite dipole operator' with the rapidity cutoff preserving conformal invariance, and the resulting Moebius invariant kernel for this operator agrees with the forward NLO BFKL calculation.In QCD, the NLO kernel for the composite operators resolves in a sum of the conformal part and the running-coupling part.
Small-x Evolution of Structure Functions in the Next-to-Leading Order
Giovanni Antonio Chirilli
2009-12-01
The high-energy behavior of amplitudes in gauge theories can be reformulated in terms of the evolution of Wilson-line operators. In the leading order this evolution is governed by the nonlinear Balitsky-Kovchegov (BK) equation. The NLO corrections define the scale of the running coupling constant in the BK equation and in QCD, its kernel has both conformal and non-conformal parts. To separate the conformally invariant effects from the running-coupling effects, we calculate the NLO evolution of the color dipoles in the conformal N = 4 SYM theory, then we define the "composite dipole operator" with the rapidity cutoff preserving conformal invariance, and the resulting Möbius invariant kernel for this operator agrees with the forward NLO BFKL calculation. In QCD, the NLO kernel for the composite operators resolves in a sum of the conformal part and the running-coupling part.
A new approach to parton recombination in the QCD evolution equations
NASA Astrophysics Data System (ADS)
Wei Zhu
1999-06-01
Parton recombination is reconsidered in perturbation theory without using the AGK cutting rules in the leading order of the recombination. We use time-ordered perturbation theory to sum the cut diagrams, which are neglected in the GLR evolution equation. We present a set of new evolution equations including parton recombination.
Bornyakov, V.G.
2005-06-01
Possibilities that are provided by a lattice regularization of QCD for studying nonperturbative properties of QCD are discussed. A review of some recent results obtained from computer calculations in lattice QCD is given. In particular, the results for the QCD vacuum structure, the hadron mass spectrum, and the strong coupling constant are considered.
Small-x evolution of structure functions in the next-to-leading order
Giovanni A. Chirilli
2010-01-01
The high-energy behavior of amplitudes in gauge theories can be reformulated in terms of the evolution of Wilson-line operators. In the leading order this evolution is governed by the non-linear Balitsky-Kovchegov (BK) equation. In QCD the NLO kernel has both conformal and non-conformal parts. To separate the conformally invariant effects from the running-coupling effects, we calculate the NLO evolution of the color dipoles in the conformal N = 4 SYM theory, then we define the "composite dipole operator", and the resulting Mobius invariant kernel for this operator agrees with the forward NLO BFKL calculation.
Extraction of quark transversity distribution and Collins fragmentation functions with QCD evolution
Kang, Zhong-Bo; Prokudin, Alexei; Sun, Peng; Yuan, Feng
2016-01-13
In this paper, we study the transverse momentum dependent (TMD) evolution of the Collins azimuthal asymmetries in e+e- annihilations and semi-inclusive hadron production in deep inelastic scattering (SIDIS) processes. All the relevant coefficients are calculated up to the next-to-leading logarithmic (NLL) order accuracy. By applying the TMD evolution at the approximate NLL order in the Collins-Soper-Sterman (CSS) formalism, we extract transversity distributions for u and d quarks and Collins fragmentation functions from current experimental data by a global analysis of the Collins asymmetries in back-to-back di-hadron productions in e+e- annihilations measured by BELLE and BABAR Collaborations and SIDIS data from HERMES, COMPASS, and JLab HALL A experiments. The impact of the evolution effects and the relevant theoretical uncertainties are discussed. We further discuss the TMD interpretation for our results, and illustrate the unpolarized quark distribution, transversity distribution, unpolarized quark fragmentation and Collins fragmentation functions depending on the transverse momentum and the hard momentum scale. Finally, we give predictions and discuss impact of future experiments.
Extraction of quark transversity distribution and Collins fragmentation functions with QCD evolution
NASA Astrophysics Data System (ADS)
Kang, Zhong-Bo; Prokudin, Alexei; Sun, Peng; Yuan, Feng
2016-01-01
We study the transverse-momentum-dependent (TMD) evolution of the Collins azimuthal asymmetries in e+e- annihilations and semi-inclusive hadron production in deep inelastic scattering processes. All the relevant coefficients are calculated up to the next-to-leading-logarithmic-order accuracy. By applying the TMD evolution at the approximate next-to-leading-logarithmic order in the Collins-Soper-Sterman formalism, we extract transversity distributions for u and d quarks and Collins fragmentation functions from current experimental data by a global analysis of the Collins asymmetries in back-to-back dihadron productions in e+e- annihilations measured by BELLE and BABAR collaborations and semi-inclusive hadron production in deep inelastic scattering data from HERMES, COMPASS, and JLab HALL A experiments. The impact of the evolution effects and the relevant theoretical uncertainties are discussed. We further discuss the TMD interpretation for our results and illustrate the unpolarized quark distribution, transversity distribution, unpolarized quark fragmentation, and Collins fragmentation functions depending on the transverse momentum and the hard momentum scale. We make detailed predictions for future experiments and discuss their impact.
Two-loop conformal generators for leading-twist operators in QCD
NASA Astrophysics Data System (ADS)
Braun, V. M.; Manashov, A. N.; Moch, S.; Strohmaier, M.
2016-03-01
QCD evolution equations in minimal subtraction schemes have a hidden symmetry: one can construct three operators that commute with the evolution kernel and form an SL(2) algebra, i.e. they satisfy (exactly) the SL(2) commutation relations. In this paper we find explicit expressions for these operators to two-loop accuracy going over to QCD in non-integer d = 4 - 2ɛ space-time dimensions at the intermediate stage. In this way conformal symmetry of QCD is restored on quantum level at the specially chosen (critical) value of the coupling, and at the same time the theory is regularized allowing one to use the standard renormalization procedure for the relevant Feynman diagrams. Quantum corrections to conformal generators in d = 4 - 2ɛ effectively correspond to the conformal symmetry breaking in the physical theory in four dimensions and the SL(2) commutation relations lead to nontrivial constraints on the renormalization group equations for composite operators. This approach is valid to all orders in perturbation theory and the result includes automatically all terms that can be identified as due to a nonvanishing QCD β-function (in the physical theory in four dimensions). Our result can be used to derive three-loop evolution equations for flavor-nonsinglet quark-antiquark operators including mixing with the operators containing total derivatives. These equations govern, e.g., the scale dependence of generalized hadron parton distributions and light-cone meson distribution amplitudes.
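The SL(2) commutation relations that the constructed operators must satisfy, [S+, S-] = 2 S0 and [S0, S±] = ±S±, can be checked concretely on the smallest matrix representation; this is a generic algebra check, not the two-loop construction of the paper.

```python
def comm(A, B):
    """Matrix commutator [A, B] = AB - BA for small dense matrices."""
    n = len(A)
    AB = [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
    BA = [[sum(B[i][k] * A[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
    return [[AB[i][j] - BA[i][j] for j in range(n)] for i in range(n)]

# spin-1/2 representation of the SL(2) algebra:
Splus  = [[0.0, 1.0], [0.0, 0.0]]
Sminus = [[0.0, 0.0], [1.0, 0.0]]
S0     = [[0.5, 0.0], [0.0, -0.5]]

print(comm(Splus, Sminus))  # [[1.0, 0.0], [0.0, -1.0]], i.e. 2*S0
print(comm(S0, Splus))      # [[0.0, 1.0], [0.0, 0.0]], i.e. +S+
```

In the paper the nontrivial content is that quantum-corrected generators in d = 4 - 2ɛ satisfy these same relations exactly, which constrains the renormalization group equations for the composite operators.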
The Chroma Software System for Lattice QCD
Robert Edwards; Balint Joo
2004-06-01
We describe aspects of the Chroma software system for lattice QCD calculations. Chroma is an open source C++ based software system developed using the software infrastructure of the US SciDAC initiative. Chroma interfaces with output from the BAGEL assembly generator for optimized lattice fermion kernels on some architectures. It can be run on workstations, clusters and the QCDOC supercomputer.
Caprio, Michael A; Martinez, Jeannette C; Porter, Patrick A; Bynum, Ed
2016-02-01
Seeds or kernels on hybrid plants are primarily F(2) tissue and will segregate for heterozygous alleles present in the parental F(1) hybrids. In the case of plants expressing Bt-toxins, the F(2) tissue in the kernels will express toxins as they would segregate in any F(2) tissue. In the case of plants expressing two unlinked toxins, the kernels on a Bt plant fertilized by another Bt plant would express anywhere from 0 to 2 toxins. Larvae of corn earworm [Helicoverpa zea (Boddie)] feed on a number of kernels during development and would therefore be exposed to local habitats (kernels) that varied in their toxin expression. Three models were developed for plants expressing two Bt-toxins, one where the traits are unlinked, a second where the traits were linked and a third model assuming that maternal traits were expressed in all kernels as well as paternally inherited traits. Results suggest that increasing larval movement rates off of expressing kernels tended to increase durability while increasing movement rates off of nonexpressing kernels always decreased durability. An ideal block refuge (no pollen flow between blocks and refuges) was more durable than a seed blend because the refuge expressed no toxins, while pollen contamination from plants expressing toxins in a seed blend reduced durability. A linked-trait model in an ideal refuge model predicted the longest durability. The results suggest that using a seed-blend strategy for a kernel feeding insect on a hybrid crop could dramatically reduce durability through the loss of refuge due to extensive cross-pollination. PMID:26527792
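The 0-to-2-toxin segregation in F2 kernel tissue described above follows from simple Mendelian ratios: for two unlinked dominant single-locus insertions with both parents hemizygous (an illustrative simplification of the paper's models), 9/16 of kernels express both toxins, 6/16 express one, and 1/16 express neither. A quick Monte Carlo check:

```python
import random

def kernel_toxins(n=100000, seed=42):
    """F2 segregation sketch for two unlinked hemizygous Bt traits:
    each allele is inherited independently from each Aa parent, and a
    kernel expresses a toxin if it carries at least one copy (dominance)."""
    random.seed(seed)
    counts = [0, 0, 0]
    for _ in range(n):
        expressed = 0
        for _trait in range(2):                  # two unlinked toxins
            maternal = random.random() < 0.5     # allele from each parent
            paternal = random.random() < 0.5
            if maternal or paternal:             # dominant expression
                expressed += 1
        counts[expressed] += 1
    return [c / n for c in counts]

p0, p1, p2 = kernel_toxins()
print(p0, p1, p2)   # roughly 1/16, 6/16, 9/16
```

This per-kernel heterogeneity is what exposes a larva moving among kernels to a mosaic of toxin doses, the starting point of the durability models in the abstract.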
Initial-state splitting kernels in cold nuclear matter
NASA Astrophysics Data System (ADS)
Ovanesyan, Grigory; Ringer, Felix; Vitev, Ivan
2016-09-01
We derive medium-induced splitting kernels for energetic partons that undergo interactions in dense QCD matter before a hard-scattering event at large momentum transfer Q2. Working in the framework of the effective theory SCETG, we compute the splitting kernels beyond the soft gluon approximation. We present numerical studies that compare our new results with previous findings. We expect the full medium-induced splitting kernels to be most relevant for the extension of initial-state cold nuclear matter energy loss phenomenology in both p+A and A+A collisions.
Hess, Peter O.
2006-09-25
A review is presented on the contributions of Mexican scientists to QCD phenomenology. These contributions range from constituent quark models (CQM) with a fixed number of quarks (antiquarks) to those where the number of quarks is not conserved. Glueball spectra have also been treated with phenomenological models. Several other approaches are mentioned.
QCD at nonzero chemical potential: Recent progress on the lattice
NASA Astrophysics Data System (ADS)
Aarts, Gert; Attanasio, Felipe; Jäger, Benjamin; Seiler, Erhard; Sexty, Dénes; Stamatescu, Ion-Olimpiu
2016-01-01
We summarise recent progress in simulating QCD at nonzero baryon density using complex Langevin dynamics. After a brief outline of the main idea, we discuss gauge cooling as a means to control the evolution. Subsequently we present a status report for heavy dense QCD and its phase structure, full QCD with staggered quarks, and full QCD with Wilson quarks, both directly and using the hopping parameter expansion to all orders.
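The complex Langevin idea can be illustrated on the standard Gaussian toy model, where the exact answer is known: the field is complexified and evolved with a drift from the complex action plus real noise. Step size, trajectory length, and σ below are illustrative; this is the textbook warm-up, not the gauge-theory simulation of the abstract.

```python
import math, random

def complex_langevin(sigma, n_steps=200000, dt=0.01, seed=3):
    """Complex Langevin sketch for the Gaussian action S(z) = sigma*z^2/2:
    dz = -sigma*z*dt + sqrt(2*dt)*eta, with real Gaussian noise eta.
    For Re(sigma) > 0 the long-time average of z^2 approaches 1/sigma."""
    random.seed(seed)
    z = complex(1.0, 0.0)
    acc = complex(0.0, 0.0)
    n_meas = 0
    for step in range(n_steps):
        z += -sigma * z * dt + math.sqrt(2.0 * dt) * random.gauss(0.0, 1.0)
        if step > n_steps // 2:      # discard thermalization
            acc += z * z
            n_meas += 1
    return acc / n_meas

sigma = complex(1.0, 1.0)
est = complex_langevin(sigma)
print(est, 1.0 / sigma)   # exact result is 0.5 - 0.5j
```

In QCD at nonzero chemical potential the action is complex in just this sense, and techniques such as the gauge cooling mentioned above are needed to keep the complexified evolution under control.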
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to compute and keep in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL can perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. PMID:25528318
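The sampling idea can be sketched as follows: instead of the full n×n kernel matrix, map each point to the empirical kernel feature vector against a small random landmark subsample, then run ordinary winner-take-all competitive learning in that low-dimensional space. This is a simplified illustration of subspace-via-sampling, not the paper's exact AKCL algorithm; all names and parameters are illustrative.

```python
import math, random

def rbf(a, b, gamma=1.0):
    return math.exp(-gamma * sum((u - v) ** 2 for u, v in zip(a, b)))

def approx_kernel_competitive_learning(X, k=2, n_landmarks=10, lr=0.1,
                                       epochs=20, seed=0):
    """Map each point to [k(x, l_1), ..., k(x, l_m)] over random landmarks,
    then do winner-take-all prototype updates in that feature space."""
    rng = random.Random(seed)
    landmarks = rng.sample(X, min(n_landmarks, len(X)))
    feats = [[rbf(x, l) for l in landmarks] for x in X]
    protos = [list(f) for f in rng.sample(feats, k)]   # prototype init
    for _ in range(epochs):
        for f in feats:
            # winner = nearest prototype; move it toward the sample
            w = min(range(k), key=lambda j: sum((a - b) ** 2
                                                for a, b in zip(protos[j], f)))
            protos[w] = [p + lr * (a - p) for p, a in zip(protos[w], f)]
    return [min(range(k), key=lambda j: sum((a - b) ** 2
                                            for a, b in zip(protos[j], f)))
            for f in feats]

# two well-separated blobs should receive two distinct labels
rng = random.Random(1)
X = [[rng.gauss(0, 0.2), rng.gauss(0, 0.2)] for _ in range(30)] + \
    [[rng.gauss(4, 0.2), rng.gauss(4, 0.2)] for _ in range(30)]
labels = approx_kernel_competitive_learning(X)
print(labels[:5], labels[-5:])
```

The cost is O(n·m) kernel evaluations for m landmarks rather than O(n²), which is the source of the speedup the paper quantifies.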
NASA Astrophysics Data System (ADS)
Cao, Shanshan; Qin, Guang-You; Bass, Steffen A.
2014-12-01
We study heavy flavor evolution and hadronization in relativistic heavy-ion collisions. The in-medium evolution of heavy quarks is described using our modified Langevin framework that incorporates both collisional and radiative energy loss mechanisms. The subsequent hadronization process for heavy quarks is calculated with a fragmentation plus recombination model. We find significant contribution from gluon radiation to heavy quark energy loss at high pT; the recombination mechanism can greatly enhance the D meson production at medium pT. Our calculation provides a good description of the D meson nuclear modification at the LHC. In addition, we explore the angular correlation functions of heavy flavor pairs which may provide us a potential candidate for distinguishing different energy loss mechanisms of heavy quarks inside the QGP.
Small-x Evolution in the Next-to-Leading Order
Ian Balitsky
2009-10-01
The high-energy behavior of amplitudes in gauge theories can be reformulated in terms of the evolution of Wilson-line operators. In the leading order this evolution is governed by the non-linear BK equation. The NLO corrections define the scale of the running-coupling constant in the BK equation and in QCD, its kernel has both conformal and non-conformal parts. To separate the conformally invariant effects from the running-coupling effects, we calculate the NLO evolution of the color dipoles in the conformal N=4 SYM theory, then we define the 'composite dipole operator' with the rapidity cutoff preserving conformal invariance, and the resulting Möbius invariant kernel for this operator agrees with the forward NLO BFKL calculation.
Sparse representation with kernels.
Gao, Shenghua; Tsang, Ivor Wai-Hung; Chia, Liang-Tien
2013-02-01
Recent research has shown the initial success of sparse coding (Sc) in solving many computer vision tasks. Motivated by the fact that kernel trick can capture the nonlinear similarity of features, which helps in finding a sparse representation of nonlinear features, we propose kernel sparse representation (KSR). Essentially, KSR is a sparse coding technique in a high dimensional feature space mapped by an implicit mapping function. We apply KSR to feature coding in image classification, face recognition, and kernel matrix approximation. More specifically, by incorporating KSR into spatial pyramid matching (SPM), we develop KSRSPM, which achieves a good performance for image classification. Moreover, KSR-based feature coding can be shown as a generalization of efficient match kernel and an extension of Sc-based SPM. We further show that our proposed KSR using a histogram intersection kernel (HIK) can be considered a soft assignment extension of HIK-based feature quantization in the feature coding process. Besides feature coding, comparing with sparse coding, KSR can learn more discriminative sparse codes and achieve higher accuracy for face recognition. Moreover, KSR can also be applied to kernel matrix approximation in large scale learning tasks, and it demonstrates its robustness to kernel matrix approximation, especially when a small fraction of the data is used. Extensive experimental results demonstrate promising results of KSR in image classification, face recognition, and kernel matrix approximation. All these applications prove the effectiveness of KSR in computer vision and machine learning tasks. PMID:23014744
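The core of kernel sparse representation is that both residual correlations and norms in the implicit feature space reduce to kernel evaluations. Below is a greedy matching-pursuit sketch of that idea (a simplification for illustration, not the paper's exact KSR formulation; the dictionary and kernel parameters are hypothetical).

```python
import math

def rbf(a, b, gamma=0.5):
    return math.exp(-gamma * sum((u - v) ** 2 for u, v in zip(a, b)))

def kernel_matching_pursuit(x, dictionary, n_atoms=2, kernel=rbf):
    """Greedy kernel sparse coding: pick the atom whose feature-space
    correlation with the residual is largest, using only kernel values:
    <r, phi(d_i)> = k(x, d_i) - sum_j a_j k(d_j, d_i)."""
    coeffs = {}
    for _ in range(n_atoms):
        best, best_corr = None, 0.0
        for i, d in enumerate(dictionary):
            corr = kernel(x, d) - sum(a * kernel(dictionary[j], d)
                                      for j, a in coeffs.items())
            if abs(corr) > abs(best_corr):
                best, best_corr = i, corr
        if best is None:
            break
        # update, normalized by the atom's self-similarity k(d, d)
        coeffs[best] = coeffs.get(best, 0.0) + best_corr / kernel(
            dictionary[best], dictionary[best])
    return coeffs

dictionary = [[0.0, 0.0], [3.0, 3.0], [0.1, -0.1]]  # hypothetical atoms
code = kernel_matching_pursuit([0.05, 0.0], dictionary)
print(code)   # the weight concentrates on atoms near the input
```

The sparse code depends on the data only through kernel evaluations, which is what lets the same machinery serve feature coding, face recognition, and kernel matrix approximation in the paper.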
Melacci, Stefano; Gori, Marco
2013-11-01
Supervised examples and prior knowledge on regions of the input space have been profitably integrated in kernel machines to improve the performance of classifiers in different real-world contexts. The proposed solutions, which rely on the unified supervision of points and sets, have been mostly based on specific optimization schemes in which, as usual, the kernel function operates on points only. In this paper, arguments from variational calculus are used to support the choice of a special class of kernels, referred to as box kernels, which emerges directly from the choice of the kernel function associated with a regularization operator. It is proven that there is no need to search for kernels to incorporate the structure deriving from the supervision of regions of the input space, because the optimal kernel arises as a consequence of the chosen regularization operator. Although most of the given results hold for sets, we focus attention on boxes, whose labeling is associated with their propositional description. Based on different assumptions, some representer theorems are given that dictate the structure of the solution in terms of box kernel expansion. Successful results are given for problems of medical diagnosis, image, and text categorization. PMID:24051728
Melacci, Stefano; Gori, Marco
2013-04-12
Supervised examples and prior knowledge on regions of the input space have been profitably integrated in kernel machines to improve the performance of classifiers in different real-world contexts. The proposed solutions, which rely on the unified supervision of points and sets, have been mostly based on specific optimization schemes in which, as usual, the kernel function operates on points only. In this paper, arguments from variational calculus are used to support the choice of a special class of kernels, referred to as box kernels, which emerges directly from the choice of the kernel function associated with a regularization operator. It is proven that there is no need to search for kernels to incorporate the structure deriving from the supervision of regions of the input space, since the optimal kernel arises as a consequence of the chosen regularization operator. Although most of the given results hold for sets, we focus attention on boxes, whose labeling is associated with their propositional description. Based on different assumptions, some representer theorems are given which dictate the structure of the solution in terms of box kernel expansion. Successful results are given for problems of medical diagnosis, image, and text categorization. PMID:23589591
Duff, I.
1994-12-31
This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: 'Current status of user level sparse BLAS'; 'Current status of the sparse BLAS toolkit'; and 'Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit'.
Long range two-particle rapidity correlations in A+A collisions from high energy QCD evolution
NASA Astrophysics Data System (ADS)
Dusling, Kevin; Gelis, François; Lappi, Tuomas; Venugopalan, Raju
2010-05-01
Long range rapidity correlations in A+A collisions are sensitive to strong color field dynamics at early times after the collision. These can be computed in a factorization formalism (Gelis, Lappi and Venugopalan (2009) [1]) which expresses the n-gluon inclusive spectrum at arbitrary rapidity separations in terms of the multi-parton correlations in the nuclear wavefunctions. This formalism includes all radiative and rescattering contributions, to leading accuracy in α_s ΔY, where ΔY is the rapidity separation between either one of the measured gluons and a projectile, or between the measured gluons themselves. In this paper, we use a mean field approximation for the evolution of the nuclear wavefunctions to obtain a compact result for inclusive two gluon correlations in terms of the unintegrated gluon distributions in the nuclear projectiles. The unintegrated gluon distributions satisfy the Balitsky-Kovchegov equation, which we solve with running coupling and with initial conditions constrained by existing data on electron-nucleus collisions. Our results are valid for arbitrary rapidity separations between measured gluons having transverse momenta p, q ≳ Q, where Q is the saturation scale in the nuclear wavefunctions. We compare our results to data on long range rapidity correlations observed in the near-side ridge at RHIC and make predictions for similar long range rapidity correlations at the LHC.
Probing QCD at high energy via correlations
Jalilian-Marian, Jamal
2011-04-26
A hadron or nucleus at high energy or small x_Bj contains many gluons and may be described as a Color Glass Condensate. Angular and rapidity correlations of two particles produced in high energy hadron-hadron collisions are a sensitive probe of the high gluon density regime of QCD. Evolution equations which describe the rapidity dependence of these correlation functions are derived from a QCD effective action.
Random walk through recent CDF QCD results
C. Mesropian
2003-04-09
We present recent results on jet fragmentation, jet evolution in jet and minimum bias events, and underlying event studies. The results presented in this talk address significant questions relevant to QCD and, in particular, to jet studies. One topic discussed is jet fragmentation and the possibility of describing it down to very small momentum scales in terms of pQCD. Another topic is the studies of underlying event energy originating from fragmentation of partons not associated with the hard scattering.
Analog forecasting with dynamics-adapted kernels
NASA Astrophysics Data System (ADS)
Zhao, Zhizhen; Giannakis, Dimitrios
2016-09-01
Analog forecasting is a nonparametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from kernel methods developed in harmonic analysis and machine learning and state-space reconstruction for dynamical systems. A key ingredient of our approach is to replace single-analog forecasting with weighted ensembles of analogs constructed using local similarity kernels. The kernels used here employ a number of dynamics-dependent features designed to improve forecast skill, including Takens’ delay-coordinate maps (to recover information in the initial data lost through partial observations) and a directional dependence on the dynamical vector field generating the data. Mathematically, our approach is closely related to kernel methods for out-of-sample extension of functions, and we discuss alternative strategies based on the Nyström method and the multiscale Laplacian pyramids technique. We illustrate these techniques in applications to forecasting in a low-order deterministic model for atmospheric dynamics with chaotic metastability, and interannual-scale forecasting in the North Pacific sector of a comprehensive climate model. We find that forecasts based on kernel-weighted ensembles have significantly higher skill than the conventional approach following a single analog.
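The weighted-ensemble idea in this abstract can be sketched in a few lines: embed the historical record with a delay-coordinate map, weight every historical analog by a similarity kernel applied to the current initial data, and average their observed futures. This is an illustrative reduction with a plain Gaussian kernel, not the authors' full method (which adds directional, dynamics-dependent features); the function names are hypothetical.

```python
import numpy as np

def delay_embed(series, q):
    """Takens delay-coordinate map: each state is q consecutive samples."""
    return np.stack([series[i:i + q] for i in range(len(series) - q + 1)])

def kernel_analog_forecast(history, initial, lead, q=3, eps=1.0):
    """Forecast `lead` steps ahead with a kernel-weighted ensemble of analogs."""
    X = delay_embed(history, q)
    X_now = X[:len(X) - lead]              # analogs whose future is known
    targets = history[q - 1 + lead:]       # value observed `lead` steps later
    d2 = np.sum((X_now - np.asarray(initial)) ** 2, axis=1)
    w = np.exp(-d2 / eps)                  # Gaussian similarity kernel
    w /= w.sum()                           # normalized ensemble weights
    return float(w @ targets)              # weighted ensemble forecast
```

As eps shrinks this degenerates to Lorenz's single best analog; a larger eps trades variance for bias by averaging over a broader ensemble.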
Inheritance of Kernel Color in Corn: Explanations and Investigations.
ERIC Educational Resources Information Center
Ford, Rosemary H.
2000-01-01
Offers a new perspective on traditional problems in genetics on kernel color in corn, including information about genetic regulation, metabolic pathways, and evolution of genes. (Contains 15 references.) (ASK)
Modeling QCD for Hadron Physics
Tandy, P. C.
2011-10-24
We review the approach to modeling soft hadron physics observables based on the Dyson-Schwinger equations of QCD. The focus is on light quark mesons and in particular the pseudoscalar and vector ground states, their decays and electromagnetic couplings. We detail the wide variety of observables that can be correlated by a ladder-rainbow kernel with one infrared parameter fixed to the chiral quark condensate. A recently proposed novel perspective in which the quark condensate is contained within hadrons and not the vacuum is mentioned. The valence quark parton distributions, in the pion and kaon, as measured in the Drell Yan process, are investigated with the same ladder-rainbow truncation of the Dyson-Schwinger and Bethe-Salpeter equations.
Robotic Intelligence Kernel: Communications
Walton, Mike C.
2009-09-16
The INL Robotic Intelligence Kernel-Comms is the communication server that transmits information between one or more robots using the RIK and one or more user interfaces. It supports event handling and multiple hardware communication protocols.
Robotic Intelligence Kernel: Driver
2009-09-16
The INL Robotic Intelligence Kernel-Driver is built on top of the RIK-A and implements a dynamic autonomy structure. The RIK-D is used to orchestrate hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a single cognitive behavior kernel that provides intrinsic intelligence for a wide variety of unmanned ground vehicle systems.
Nuclear reactions from lattice QCD
Briceño, Raúl A.; Davoudi, Zohreh; Luu, Thomas C.
2015-01-13
One of the overarching goals of nuclear physics is to rigorously compute properties of hadronic systems directly from the fundamental theory of strong interactions, Quantum Chromodynamics (QCD). In particular, the hope is to perform reliable calculations of nuclear reactions which will impact our understanding of environments that occur during big bang nucleosynthesis, the evolution of stars and supernovae, and within nuclear reactors and high energy/density facilities. Such calculations, being truly ab initio, would include all two-nucleon and three-nucleon (and higher) interactions in a consistent manner. Currently, lattice QCD provides the only reliable option for performing calculations of some of the low-energy hadronic observables. With the aim of bridging the gap between lattice QCD and nuclear many-body physics, the Institute for Nuclear Theory held a workshop on Nuclear Reactions from Lattice QCD in March 2013. In this review article, we report on the topics discussed in this workshop and the path planned to move forward in the upcoming years.
Linearized Kernel Dictionary Learning
NASA Astrophysics Data System (ADS)
Golts, Alona; Elad, Michael
2016-06-01
In this paper we present a new approach to incorporating kernels into dictionary learning. The kernel K-SVD algorithm (KKSVD), which has been introduced recently, shows an improvement in classification performance relative to its linear counterpart K-SVD. However, this algorithm requires the storage and handling of a very large kernel matrix, which leads to high computational cost, while also limiting its use to setups with a small number of training examples. We address these problems by combining two ideas: first, we approximate the kernel matrix using a cleverly sampled subset of its columns via the Nyström method; second, as we wish to avoid using this matrix altogether, we decompose it by SVD to form new "virtual samples," on which any linear dictionary learning can be employed. Our method, termed "Linearized Kernel Dictionary Learning" (LKDL), can be seamlessly applied as a pre-processing stage on top of any efficient off-the-shelf dictionary learning scheme, effectively "kernelizing" it. We demonstrate the effectiveness of our method on several tasks of both supervised and unsupervised classification and show the efficiency of the proposed scheme, its easy integration and performance boosting properties.
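The two ideas in the abstract, Nyström column sampling followed by an SVD that turns kernel columns into explicit "virtual samples," can be sketched compactly. The snippet below is a hedged illustration, assuming a Gaussian kernel and uniform column sampling; it is not the LKDL reference code, and every name is ours.

```python
import numpy as np

def gaussian_kernel(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def virtual_samples(X, m=50, gamma=0.5, seed=0):
    """Nystrom + SVD virtual samples F with F @ F.T approximating K(X, X).

    Rows of F behave like ordinary linear features, so any off-the-shelf
    linear dictionary learning (e.g. K-SVD) can be run on them directly.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(X.shape[0], size=min(m, X.shape[0]), replace=False)
    C = gaussian_kernel(X, X[idx], gamma)        # n x m sampled kernel columns
    W = gaussian_kernel(X[idx], X[idx], gamma)   # m x m core block
    U, s, _ = np.linalg.svd(W)                   # W is symmetric PSD
    F = C @ U / np.sqrt(np.maximum(s, 1e-12))    # K ~ C W^+ C.T = F F.T
    return F
```

When m equals the number of samples the factorization is exact; smaller m gives the low-rank Nyström approximation that keeps the full kernel matrix out of memory.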
Foundations of Perturbative QCD
NASA Astrophysics Data System (ADS)
Collins, John
2011-04-01
1. Introduction; 2. Why QCD?; 3. Basics of QCD; 4. Infra-red safety and non-safety; 5. Libby-Sterman analysis and power counting; 6. Parton model to parton theory I; 7. Parton model to parton theory II; 8. Factorization; 9. Corrections to the parton model in QCD; 10. Factorization and subtractions; 11. DIS in QCD; 12. Fragmentation; 13. TMD factorization; 14. Hadron-hadron collisions; 15. More advanced topics; Appendices; References; Index.
Supersymmetric QCD and high energy cosmic rays: Fragmentation functions of supersymmetric QCD
NASA Astrophysics Data System (ADS)
Corianò, Claudio; Faraggi, Alon E.
2002-04-01
The supersymmetric evolution of the fragmentation functions (or timelike evolution) within N=1 QCD is discussed and predictions for the fragmentation functions of the theory (into final protons) are given. We use a backward running of the supersymmetric DGLAP equations, using a method developed in previous works. We start from the usual QCD parametrizations at low energy and run the DGLAP back, up to an intermediate scale-assumed to be supersymmetric-where we switch-on supersymmetry. From there on we assume the applicability of an N=1 supersymmetric evolution (ESAP). We elaborate on the possible application of these results to high energy cosmic rays near the GZK cutoff.
QCD dynamics in mesons at soft and hard scales
Nguyen, T.; Souchlas, N. A.; Tandy, P. C.
2010-07-27
Using a ladder-rainbow kernel previously established for the soft scale of light quark hadrons, we explore, within a Dyson-Schwinger approach, phenomena that mix soft and hard scales of QCD. The difference between vector and axial vector current correlators is examined to estimate the four quark chiral condensate and the leading distance scale for the onset of non-perturbative phenomena in QCD. The valence quark distributions, in the pion and kaon, defined in deep inelastic scattering, and measured in the Drell Yan process, are investigated with the same ladder-rainbow truncation of the Dyson-Schwinger and Bethe-Salpeter equations.
LeFebvre, W.
1994-08-01
For many years, the popular program top has aided system administrators in examining process resource usage on their machines. Yet few are familiar with the techniques involved in obtaining this information. Most of what is displayed by top is available only in the dark recesses of kernel memory. Extracting this information requires familiarity not only with how bytes are read from the kernel, but also with what data needs to be read. The wide variety of systems and variants of the Unix operating system in today's marketplace makes writing such a program very challenging. This paper explores the tremendous diversity in kernel information across the many platforms and the solutions employed by top to achieve and maintain ease of portability in the presence of such divergent systems.
Calculates Thermal Neutron Scattering Kernel.
Energy Science and Technology Software Center (ESTSC)
1989-11-10
Version 00 THRUSH computes the thermal neutron scattering kernel by the phonon expansion method for both coherent and incoherent scattering processes. The calculation of the coherent part is suitable only for calculating the scattering kernel for heavy water.
Robotic Intelligence Kernel: Architecture
Energy Science and Technology Software Center (ESTSC)
2009-09-16
The INL Robotic Intelligence Kernel Architecture (RIK-A) is a multi-level architecture that supports a dynamic autonomy structure. The RIK-A is used to coalesce hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a framework that can be used to create behaviors for humans to interact with the robot.
Robotic Intelligence Kernel: Visualization
Energy Science and Technology Software Center (ESTSC)
2009-09-16
The INL Robotic Intelligence Kernel-Visualization is the software that supports the user interface. It uses the RIK-C software to communicate information to and from the robot. The RIK-V illustrates the data in a 3D display and provides an operating picture wherein the user can task the robot.
NASA Technical Reports Server (NTRS)
Spafford, Eugene H.; Mckendry, Martin S.
1986-01-01
An overview of the internal structure of the Clouds kernel was presented. An indication of how these structures will interact in the prototype Clouds implementation is given. Many specific details have yet to be determined and await experimentation with an actual working system.
NLO evolution of color dipoles in N=4 SYM
Balitsky, Ian; Chirilli, Giovanni
2009-01-01
High-energy behavior of amplitudes in a gauge theory can be reformulated in terms of the evolution of Wilson-line operators. In the leading logarithmic approximation it is given by the conformally invariant BK equation for the evolution of color dipoles. In QCD, the next-to-leading order BK equation has both conformal and non-conformal parts, the latter providing the running of the coupling constant. To separate the conformally invariant effects from the running-coupling effects, we calculate the NLO evolution of the color dipoles in the conformal N=4 SYM theory. We define the "composite dipole operator" with the rapidity cutoff preserving conformal invariance. The resulting Möbius invariant kernel agrees with the forward NLO BFKL calculation of Ref. 1.
NASA Astrophysics Data System (ADS)
Wilczek, Frank
Introduction; Symmetry and the Phenomena of QCD; Apparent and Actual Symmetries; Asymptotic Freedom; Confinement; Chiral Symmetry Breaking; Chiral Anomalies and Instantons; High Temperature QCD: Asymptotic Properties; Significance of High Temperature QCD; Numerical Indications for Quasi-Free Behavior; Ideas About Quark-Gluon Plasma; Screening Versus Confinement; Models of Chiral Symmetry Breaking; More Refined Numerical Experiments; High-Temperature QCD: Phase Transitions; Yoga of Phase Transitions and Order Parameters; Application to Glue Theories; Application to Chiral Transitions; Close Up on Two Flavors; A Genuine Critical Point! (?); High-Density QCD: Methods; Hopes, Doubts, and Fruition; Another Renormalization Group; Pairing Theory; Taming the Magnetic Singularity; High-Density QCD: Color-Flavor Locking and Quark-Hadron Continuity; Gauge Symmetry (Non)Breaking; Symmetry Accounting; Elementary Excitations; A Modified Photon; Quark-Hadron Continuity; Remembrance of Things Past; More Quarks; Fewer Quarks and Reality.
None
2011-10-06
Modern QCD - Lecture 3 We will introduce processes with initial-state hadrons and discuss parton distributions, sum rules, as well as the need for a factorization scale once radiative corrections are taken into account. We will then discuss the DGLAP equation, the evolution of parton densities, as well as ways in which parton densities are extracted from data.
Urban, Federico R.; Zhitnitsky, Ariel R.
2010-08-30
We review two mechanisms rooted in the infrared sector of QCD which, by exploiting the properties of the QCD ghost, as introduced by Veneziano, provide new insight on the cosmological dark energy problem, first, in the form of a Casimir-like energy from quantising QCD in a box, and second, in the form of additional, time-dependent, vacuum energy density in an expanding universe. Based on [1, 2].
Wilson loops and QCD/string scattering amplitudes
Makeenko, Yuri; Olesen, Poul
2009-07-15
We generalize modern ideas about the duality between Wilson loops and scattering amplitudes in N=4 super Yang-Mills theory to large N QCD by deriving a general relation between QCD meson scattering amplitudes and Wilson loops. We then investigate properties of the open-string disk amplitude integrated over reparametrizations. When the Wilson loop is approximated by the area behavior, we find that the QCD scattering amplitude is a convolution of the standard Koba-Nielsen integrand and a kernel. As usual, poles originate from the first factor, whereas no (momentum-dependent) poles can arise from the kernel. We show that the kernel becomes a constant when the number of external particles becomes large. The usual Veneziano amplitude then emerges in the kinematical regime where the Wilson loop can be reliably approximated by the area behavior. In this case, we obtain a direct duality between Wilson loops and scattering amplitudes when spatial variables and momenta are interchanged, in analogy with the N=4 super Yang-Mills theory case.
Kernel optimization in discriminant analysis.
You, Di; Hamsici, Onur C; Martinez, Aleix M
2011-03-01
Kernel mapping is one of the most used approaches to intrinsically derive nonlinear classifiers. The idea is to use a kernel function which maps the original nonlinearly separable problem to a space of intrinsically larger dimensionality where the classes are linearly separable. A major problem in the design of kernel methods is to find the kernel parameters that make the problem linear in the mapped representation. This paper derives the first criterion that specifically aims to find a kernel representation where the Bayes classifier becomes linear. We illustrate how this result can be successfully applied in several kernel discriminant analysis algorithms. Experimental results, using a large number of databases and classifiers, demonstrate the utility of the proposed approach. The paper also shows (theoretically and experimentally) that a kernel version of Subclass Discriminant Analysis yields the highest recognition rates. PMID:20820072
MC Kernel: Broadband Waveform Sensitivity Kernels for Seismic Tomography
NASA Astrophysics Data System (ADS)
Stähler, Simon C.; van Driel, Martin; Auer, Ludwig; Hosseini, Kasra; Sigloch, Karin; Nissen-Meyer, Tarje
2016-04-01
We present MC Kernel, a software implementation to calculate seismic sensitivity kernels on arbitrary tetrahedral or hexahedral grids across the whole observable seismic frequency band. Seismic sensitivity kernels are the basis for seismic tomography, since they map measurements to model perturbations. Their calculation over the whole frequency range was so far only possible with approximative methods (Dahlen et al. 2000). Fully numerical methods were restricted to the lower frequency range (usually below 0.05 Hz, Tromp et al. 2005). With our implementation, it is possible to compute accurate sensitivity kernels for global tomography across the observable seismic frequency band. These kernels rely on wavefield databases computed via AxiSEM (www.axisem.info), and thus on spherically symmetric models. The advantage is that frequencies up to 0.2 Hz and higher can be accessed. Since the usage of irregular, adapted grids is an integral part of regularisation in seismic tomography, MC Kernel works in an inversion-grid-centred fashion: a Monte-Carlo integration method is used to project the kernel onto each basis function, which allows the desired precision of the kernel estimation to be controlled. It also means that the code concentrates calculation effort on regions of interest without prior assumptions on the kernel shape. The code makes extensive use of redundancies in calculating kernels for different receivers or frequency-pass-bands for one earthquake, to facilitate its usage in large-scale global seismic tomography.
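The "project the kernel onto each basis function by Monte-Carlo integration" step amounts to estimating the integral of K(x) times a basis function b_j(x) with random samples, which also yields a standard-error estimate that can be used to control precision. The toy below illustrates that idea only; it is not MC Kernel's actual code, and the names and the uniform-sampling choice are ours.

```python
import numpy as np

def mc_project(kernel, basis_funcs, lo, hi, n_samples=100000, seed=1):
    """Monte-Carlo projection of a kernel onto basis functions over a box.

    Returns the estimated coefficients and their standard errors, so a
    caller can keep sampling until a desired precision is reached.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    pts = rng.uniform(lo, hi, size=(n_samples, len(lo)))  # uniform in the box
    vol = np.prod(hi - lo)                                # box volume
    kv = kernel(pts)                                      # kernel at samples
    coeffs, errs = [], []
    for b in basis_funcs:
        f = kv * b(pts)                                   # integrand samples
        coeffs.append(vol * f.mean())                     # MC estimate
        errs.append(vol * f.std() / np.sqrt(n_samples))   # 1-sigma MC error
    return np.array(coeffs), np.array(errs)
```

Anchoring the integration on the inversion grid's own basis functions, as MC Kernel does, concentrates effort exactly where the model is parametrized.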
Lee, Myung Hee; Liu, Yufeng
2013-12-01
The continuum regression technique provides an appealing regression framework connecting ordinary least squares, partial least squares and principal component regression in one family. It offers some insight on the underlying regression model for a given application. Moreover, it helps to provide deep understanding of various regression techniques. Despite the useful framework, however, the current development on continuum regression is only for linear regression. In many applications, nonlinear regression is necessary. The extension of continuum regression from linear models to nonlinear models using kernel learning is considered. The proposed kernel continuum regression technique is quite general and can handle very flexible regression model estimation. An efficient algorithm is developed for fast implementation. Numerical examples have demonstrated the usefulness of the proposed technique. PMID:24058224
Norniella, Olga; /Barcelona, IFAE
2005-01-01
Recent QCD measurements from the CDF collaboration at the Tevatron are presented, together with future prospects as the luminosity increases. The measured inclusive jet cross section is compared to pQCD NLO predictions. Precise measurements on jet shapes and hadronic energy flows are compared to different phenomenological models that describe gluon emissions and the underlying event in hadron-hadron interactions.
Lattice QCD in rotating frames.
Yamamoto, Arata; Hirono, Yuji
2013-08-23
We formulate lattice QCD in rotating frames to study the physics of QCD matter under rotation. We construct the lattice QCD action with the rotational metric and apply it to the Monte Carlo simulation. As the first application, we calculate the angular momenta of gluons and quarks in the rotating QCD vacuum. This new framework is useful to analyze various rotation-related phenomena in QCD. PMID:24010426
Exclusive QCD processes, quark-hadron duality, and the transition to perturbative QCD
NASA Astrophysics Data System (ADS)
Corianò, Claudio; Li, Hsiang-nan; Savkli, Cetin
1998-07-01
Experiments at CEBAF will scan the intermediate-energy region of the QCD dynamics for the nucleon form factors and for Compton Scattering. These experiments will definitely clarify the role of resummed perturbation theory and of quark-hadron duality (QCD sum rules) in this regime. With this perspective in mind, we review the factorization theorem of perturbative QCD for exclusive processes at intermediate energy scales, which embodies the transverse degrees of freedom of a parton and the Sudakov resummation of the corresponding large logarithms. We concentrate on the pion and proton electromagnetic form factors and on pion Compton scattering. New ingredients, such as the evolution of the pion wave function and the complete two-loop expression of the Sudakov factor, are included. The sensitivity of our predictions to the infrared cutoff for the Sudakov evolution is discussed. We also elaborate on QCD sum rule methods for Compton Scattering, which provide an alternative description of this process. We show that, by comparing the local duality analysis to resummed perturbation theory, it is possible to describe the transition of exclusive processes to perturbative QCD.
Heavy quarkonium production at collider energies: Factorization and evolution
NASA Astrophysics Data System (ADS)
Kang, Zhong-Bo; Ma, Yan-Qing; Qiu, Jian-Wei; Sterman, George
2014-08-01
We present a perturbative QCD factorization formalism for inclusive production of heavy quarkonia at large transverse momentum p_T at collider energies, including both leading power (LP) and next-to-leading power (NLP) behavior in p_T. We demonstrate that both LP and NLP contributions can be factorized in terms of perturbatively calculable short-distance partonic coefficient functions and universal nonperturbative fragmentation functions, and derive the evolution equations that are implied by the factorization. We identify projection operators for all channels of the factorized LP and NLP infrared safe short-distance partonic hard parts, and corresponding operator definitions of fragmentation functions. For the NLP, we focus on the contributions involving the production of a heavy quark pair, a necessary condition for producing a heavy quarkonium. We evaluate the first nontrivial order of evolution kernels for all relevant fragmentation functions, and discuss the role of NLP contributions.
Brodsky, Stanley J.; de Teramond, Guy F.; /Costa Rica U.
2012-02-16
-front QCD Hamiltonian 'Light-Front Holography'. Light-Front Holography is in fact one of the most remarkable features of the AdS/CFT correspondence. The Hamiltonian equation of motion in the light-front (LF) is frame independent and has a structure similar to eigenmode equations in AdS space. This makes a direct connection of QCD with AdS/CFT methods possible. Remarkably, the AdS equations correspond to the kinetic energy terms of the partons inside a hadron, whereas the interaction terms build confinement and correspond to the truncation of AdS space in an effective dual gravity approximation. One can also study the gauge/gravity duality starting from the bound-state structure of hadrons in QCD quantized in the light-front. The LF Lorentz-invariant Hamiltonian equation for the relativistic bound-state system is P_μ P^μ |ψ(P)⟩ = (P⁺P⁻ − P_⊥²)|ψ(P)⟩ = M²|ψ(P)⟩, with P^± = P⁰ ± P³, where the LF time evolution operator P⁻ is determined canonically from the QCD Lagrangian. To a first semiclassical approximation, where quantum loops and quark masses are not included, this leads to a LF Hamiltonian equation which describes the bound-state dynamics of light hadrons in terms of an invariant impact variable ζ which measures the separation of the partons within the hadron at equal light-front time τ = x⁰ + x³. This allows us to identify the holographic variable z in AdS space with the impact variable ζ. The resulting Lorentz-invariant Schrödinger equation for general spin incorporates color confinement and is systematically improvable. Light-front holographic methods were originally introduced by matching the electromagnetic current matrix elements in AdS space with the corresponding expression using LF theory in physical space time.
It was also shown that one obtains identical holographic mapping using the matrix elements of the energy-momentum tensor by perturbing
Kernel Phase and Kernel Amplitude in Fizeau Imaging
NASA Astrophysics Data System (ADS)
Pope, Benjamin J. S.
2016-09-01
Kernel phase interferometry is an approach to high angular resolution imaging which enhances the performance of speckle imaging with adaptive optics. Kernel phases are self-calibrating observables that generalize the idea of closure phases from non-redundant arrays to telescopes with arbitrarily shaped pupils, by considering a matrix-based approximation to the diffraction problem. In this paper I discuss the recent history of kernel phase, in particular in the matrix-based study of sparse arrays, and propose an analogous generalization of the closure amplitude to kernel amplitudes. This new approach can self-calibrate throughput and scintillation errors in optical imaging, which extends the power of kernel phase-like methods to symmetric targets where amplitude and not phase calibration can be a significant limitation, and will enable further developments in high angular resolution astronomy.
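The "matrix-based approximation to the diffraction problem" behind kernel phase is linear algebra at heart: instrumental pupil-plane phase errors enter the measured Fourier phases through a transfer matrix A, and kernel phases are linear combinations lying in the left null space of A, so the instrumental term cancels. A minimal numerical sketch follows, with hypothetical names and a random matrix standing in for a real pupil model.

```python
import numpy as np

def kernel_phase_operator(A, tol=1e-10):
    """Rows spanning the left null space of A, i.e. K @ A = 0.

    Applied to measured phases, K removes the instrumental term A @ e
    while keeping a self-calibrating projection of the target phases.
    """
    U, s, Vt = np.linalg.svd(A)          # full SVD: U is square
    rank = int(np.sum(s > tol * s[0]))   # numerical rank of A
    return U[:, rank:].T                 # left-singular vectors beyond the rank

# measured = target + A @ pupil_errors  ->  K @ measured = K @ target
```

For a model with m measured phases, this yields m minus rank(A) self-calibrating observables, the arbitrary-pupil generalization of closure phase.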
NASA Astrophysics Data System (ADS)
Geiger, Klaus
1997-08-01
VNI is a general-purpose Monte Carlo event generator, which includes the simulation of lepton-lepton, lepton-hadron, lepton-nucleus, hadron-hadron, hadron-nucleus, and nucleus-nucleus collisions. On the basis of a renormalization-group-improved parton description and quantum-kinetic theory, it simulates the real-time evolution of parton cascades in conjunction with a self-consistent hadronization scheme that is governed by the dynamics itself. Starting from a specific initial state (determined by the colliding beam particles), the program follows the causal evolution in time of the phase-space densities of partons, pre-hadronic parton clusters, and final-state hadrons, in position space, momentum space, and color space. The parton evolution is described in terms of a space-time generalization of the familiar momentum-space description of multiple (semi)hard interactions in QCD, involving 2 → 2 parton collisions, 2 → 1 parton fusion processes, and 1 → 2 radiation processes. The formation of color-singlet pre-hadronic clusters and their decays into hadrons, on the other hand, are treated by using a spatial criterion motivated by confinement and a non-perturbative model for hadronization. This article gives a brief review of the physics underlying VNI, followed by a detailed description of the program itself. The program description emphasizes easy-to-use pragmatism: it explains how to use the program (including a simple example), annotates input and control parameters, and discusses the output data it provides.
NASA Astrophysics Data System (ADS)
Lutz, Matthias F. M.; Lange, Jens Sören; Pennington, Michael; Bettoni, Diego; Brambilla, Nora; Crede, Volker; Eidelman, Simon; Gillitzer, Albrecht; Gradl, Wolfgang; Lang, Christian B.; Metag, Volker; Nakano, Takashi; Nieves, Juan; Neubert, Sebastian; Oka, Makoto; Olsen, Stephen L.; Pappagallo, Marco; Paul, Stephan; Pelizäus, Marc; Pilloni, Alessandro; Prencipe, Elisabetta; Ritman, Jim; Ryan, Sinead; Thoma, Ulrike; Uwer, Ulrich; Weise, Wolfram
2016-04-01
We report on the EMMI Rapid Reaction Task Force meeting 'Resonances in QCD', which took place at GSI October 12-14, 2015. A group of 26 people met to discuss the physics of resonances in QCD. The aim of the meeting was defined by the following three key questions: What is needed to understand the physics of resonances in QCD? Where does QCD lead us to expect resonances with exotic quantum numbers? What experimental efforts are required to arrive at a coherent picture? For light mesons and baryons only those with up, down and strange quark content were considered. For heavy-light and heavy-heavy meson systems, those with charm quarks were the focus. This document summarizes the discussions by the participants, which in turn led to the coherent conclusions we present here.
NASA Astrophysics Data System (ADS)
Deur, Alexandre; Brodsky, Stanley J.; de Téramond, Guy F.
2016-09-01
We review the present theoretical and empirical knowledge of αs, the fundamental coupling underlying the interactions of quarks and gluons in Quantum Chromodynamics (QCD). The dependence of αs(Q2) on momentum transfer Q encodes the underlying dynamics of hadron physics, from color confinement in the infrared domain to asymptotic freedom at short distances. We review constraints on αs(Q2) at high Q2, as predicted by perturbative QCD, and its analytic behavior at small Q2, based on models of nonperturbative dynamics. In the introductory part of this review, we explain the phenomenological meaning of the coupling, the reason for its running, and the challenges facing a complete understanding of its analytic behavior in the infrared domain. In the second, more technical, part of the review, we discuss the behavior of αs(Q2) in the high momentum transfer domain of QCD. We review how αs is defined, including its renormalization scheme dependence, the definition of its renormalization scale, the utility of effective charges, as well as "Commensurate Scale Relations" which connect the various definitions of the QCD coupling without renormalization-scale ambiguity. We also report recent significant measurements and advanced theoretical analyses which have led to precise QCD predictions at high energy. As an example of an important optimization procedure, we discuss the "Principle of Maximum Conformality", which enhances QCD's predictive power by removing the dependence of the predictions for physical observables on the choice of theoretical conventions such as the renormalization scheme. In the last part of the review, we discuss the challenge of understanding the analytic behavior of αs(Q2) in the low momentum transfer domain. We survey various theoretical models for the nonperturbative strongly coupled regime, such as the light-front holographic approach to QCD. This new framework predicts the form of the quark-confinement potential underlying hadron spectroscopy and
Skands, Peter Z.; /Fermilab
2005-07-01
Recent developments in QCD phenomenology have spurred on several improved approaches to Monte Carlo event generation, relative to the post-LEP state of the art. In this brief review, the emphasis is placed on approaches for (1) consistently merging fixed-order matrix element calculations with parton shower descriptions of QCD radiation, (2) improving the parton shower algorithms themselves, and (3) improving the description of the underlying event in hadron collisions.
Small-x evolution in the next-to-leading order
Giovanni Antonio Chirilli
2009-12-01
After a brief introduction to deep inelastic scattering in the Bjorken limit and in the Regge limit, we discuss the operator product expansion in terms of non-local string operators and in terms of Wilson lines. We show how the high-energy behavior of amplitudes in gauge theories can be reformulated in terms of the evolution of Wilson-line operators. In the leading order this evolution is governed by the non-linear Balitsky-Kovchegov (BK) equation. In order to see whether this equation is relevant for existing or future deep inelastic scattering (DIS) accelerators (such as the Electron Ion Collider (EIC) or the Large Hadron electron Collider (LHeC)), one needs to know the next-to-leading order (NLO) corrections. In addition, the NLO corrections define the scale of the running coupling constant in the BK equation and therefore determine the magnitude of the leading-order cross sections. In Quantum Chromodynamics (QCD), the next-to-leading order BK equation has both conformal and non-conformal parts. The NLO kernel for the composite operators resolves into a sum of the conformal part and the running-coupling part. The NLO QCD kernel of the BK equation is presented.
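For reference, the leading-order BK equation for the color-dipole amplitude N(x,y;Y) discussed above reads:

```latex
\frac{\partial N(x,y;Y)}{\partial Y}
=\frac{\bar{\alpha}_{s}}{2\pi}\int d^{2}z\,
\frac{(x-y)^{2}}{(x-z)^{2}(z-y)^{2}}
\bigl[N(x,z;Y)+N(z,y;Y)-N(x,y;Y)-N(x,z;Y)\,N(z,y;Y)\bigr]
```

The linear terms reproduce BFKL evolution of the dipole, while the quadratic term unitarizes the amplitude; the NLO corrections discussed in this work modify both the kernel and the argument of the running coupling.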
Bruemmer, David J.
2009-11-17
A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors that incorporate robot attributes, and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between an operator intervention and a robot initiative and may include multiple levels, with at least a teleoperation mode configured to maximize the operator intervention and minimize the robot initiative, and an autonomous mode configured to minimize the operator intervention and maximize the robot initiative. Within the RIK, at least the cognitive level includes the dynamic autonomy structure.
Ultrahigh energy neutrinos and nonlinear QCD dynamics
Machado, Magno V.T.
2004-09-01
The ultrahigh energy neutrino-nucleon cross sections are computed taking into account different phenomenological implementations of the nonlinear QCD dynamics. Based on the color dipole framework, the results for the saturation model supplemented by the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) evolution as well as for the Balitskii-Fadin-Kuraev-Lipatov (BFKL) formalism in the geometric scaling regime are presented. They are contrasted with recent calculations using next-to-leading order DGLAP and unified BFKL-DGLAP formalisms.
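As a concrete example of the color-dipole saturation framework mentioned above, the Golec-Biernat-Wüsthoff (GBW) form of the dipole cross section is (σ0, x0 and λ are parameters fitted to DIS data):

```latex
\sigma_{\mathrm{dip}}(x,r)=\sigma_{0}\left[1-\exp\!\left(-\frac{r^{2}\,Q_{s}^{2}(x)}{4}\right)\right],
\qquad Q_{s}^{2}(x)=Q_{0}^{2}\left(\frac{x_{0}}{x}\right)^{\lambda}
```

Small dipoles (r → 0) see color transparency, σ ∝ r², while large dipoles saturate at σ0; the x-dependent saturation scale Qs(x) is what drives the geometric scaling regime exploited in the BFKL-based calculation.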
Tracking flame base movement and interaction with ignition kernels using topological methods
NASA Astrophysics Data System (ADS)
Mascarenhas, A.; Grout, R. W.; Yoo, C. S.; Chen, J. H.
2009-07-01
We segment the stabilization region in a simulation of a lifted jet flame based on its topology induced by the YOH field. Our segmentation method yields regions that correspond to the flame base and to potential auto-ignition kernels. We apply a region-overlap-based tracking method to follow the flame base and the kernels over time, to study the evolution of kernels, and to detect when the kernels merge with the flame. The combination of our segmentation and tracking methods allows us to observe flame stabilization via merging between the flame base and kernels; we also obtain YCH2O histories inside the kernels and detect a distinct decrease in radical concentration during the transition to a developed flame.
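The region-overlap tracking step can be sketched as follows. This is a minimal illustration (not the authors' code): regions in two consecutive labeled segmentations are linked whenever they share cells, so a kernel label can be followed until it overlaps, i.e. merges with, the flame-base region:

```python
import numpy as np

def overlap_links(labels_t0, labels_t1):
    """Return {(id_t0, id_t1): overlap_count} for overlapping region pairs.

    Label 0 denotes background; any cell where both frames carry a
    nonzero label contributes to the link between those two region ids.
    """
    mask = (labels_t0 > 0) & (labels_t1 > 0)
    pairs, counts = np.unique(
        np.stack([labels_t0[mask], labels_t1[mask]]), axis=1, return_counts=True)
    return {tuple(p): int(c) for p, c in zip(pairs.T, counts)}

# Toy 1-D example: region 1 (flame base) persists; region 2 (an ignition
# kernel) at t0 overlaps only region 1 at t1 -> interpreted as a merge.
t0 = np.array([1, 1, 0, 2, 2, 2])
t1 = np.array([1, 1, 1, 1, 0, 0])
links = overlap_links(t0, t1)
```

A link table like this, computed frame by frame, yields the kernel lifetimes and merge events described in the abstract; the same function applies unchanged to 2-D or 3-D label arrays.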
FOREWORD: Extreme QCD 2012 (xQCD)
NASA Astrophysics Data System (ADS)
Alexandru, Andrei; Bazavov, Alexei; Liu, Keh-Fei
2013-04-01
The Extreme QCD 2012 conference, held at the George Washington University in August 2012, celebrated the 10th event in the series. It has been held annually since 2003 at different locations: San Carlos (2011), Bad Honnef (2010), Seoul (2009), Raleigh (2008), Rome (2007), Brookhaven (2006), Swansea (2005), Argonne (2004), and Nara (2003). As usual, it was a very productive and inspiring meeting that brought together experts in the field of finite-temperature QCD, both theoretical and experimental. On the experimental side, we heard about recent results from major experiments, such as PHENIX and STAR at Brookhaven National Laboratory, ALICE and CMS at CERN, and also about the constraints on the QCD phase diagram coming from astronomical observations of one of the largest laboratories one can imagine, neutron stars. The theoretical contributions covered a wide range of topics, including QCD thermodynamics at zero and finite chemical potential, new ideas to overcome the sign problem in the latter case, fluctuations of conserved charges and how they allow one to connect calculations in lattice QCD with experimentally measured quantities, finite-temperature behavior of theories with many flavors of fermions, properties and the fate of heavy quarkonium states in the quark-gluon plasma, and many others. The participants took the time to write up and revise their contributions and submit them for publication in these proceedings. Thanks to their efforts, we have now a good record of the ideas presented and discussed during the workshop. We hope that this will serve both as a reminder and as a reference for the participants and for other researchers interested in the physics of nuclear matter at high temperatures and density. To preserve the atmosphere of the event the contributions are ordered in the same way as the talks at the conference. We are honored to have helped organize the 10th meeting in this series, a milestone that reflects the lasting interest in this
ERIC Educational Resources Information Center
Mayr, Ernst
1978-01-01
Traces the history of evolution theory from Lamarck and Darwin to the present. Discusses natural selection in detail. Suggests that, besides biological evolution, there is also a cultural evolution which is more rapid than the former. (MA)
Harris, R.
1992-05-01
We present measurements of jet production and isolated prompt photon production in p{bar p} collisions at {radical}s = 1.8 TeV from the 1988--89 run of the Collider Detector at Fermilab (CDF). To test QCD with jets, the inclusive jet cross section (p{bar p} {yields} J + X) and two jet angular distributions (p{bar P} {yields} JJ + X) are compared to QCD predictions and are used to search for composite quarks. The ratio of the scaled jet cross sections at two Tevatron collision energies ({radical}s= 546 and 1800 GeV) is compared to QCD predictions for X{sub T} scaling violations. Also, we present the first evidence for QCD interference effects (color coherence) in third jet production (p{bar p} {yields} JJJ + X). To test QCD with photons, we present measurements of the transverse momentum spectrum of single isolated prompt photon production (p{bar p} {yields} {gamma} + X), double isolated prompt photon production (p{bar p} {yields} {gamma}{gamma} + X), and the angular distribution of photon-jet events (p{bar p} {yields} {gamma} J + X). We have also measured the isolated production ratio of {eta} and {pi}{sup 0} mesons (p{bar p} {yields} {eta} + X)/(p{bar p} {yields} {pi}{sup 0} + X) = 1.02 {plus minus} .15(stat) {plus minus} .23(sys).
Blazey, G.C.
1995-05-01
Selected recent Quantum Chromodynamics (QCD) results from the D0 and CDF experiments at the Fermilab Tevatron are presented and discussed. The inclusive jet and inclusive triple differential dijet cross sections are compared to next-to-leading order QCD calculations. The sensitivity of the dijet cross section to parton distribution functions (for hadron momentum fractions {approximately} 0.01 to {approximately} 0.4) will constrain the gluon distribution of the proton. Two analyses of dijet production at large rapidity separation are presented. The first analysis tests the contributions of higher order processes to dijet production and can be considered a test of BFKL or GLAP parton evolution. The second analysis yields a strong rapidity gap signal consistent with colorless exchange between the scattered partons. The prompt photon inclusive cross section is consistent with next-to-leading order QCD only at the highest transverse momenta. The discrepancy at lower momenta may be indicative of higher order processes imparting a transverse momentum or ``k{sub T}`` to the partonic interaction. The first measurement of the strong coupling constant from the Tevatron is also presented. The coupling constant can be determined from the ratio of W + 1 jet to W + 0 jet cross sections and a next-to-leading order QCD calculation.
Electroweak symmetry breaking via QCD.
Kubo, Jisuke; Lim, Kher Sham; Lindner, Manfred
2014-08-29
We propose a new mechanism to generate the electroweak scale within the framework of QCD, which is extended to include conformally invariant scalar degrees of freedom belonging to a larger irreducible representation of SU(3)c. The electroweak symmetry breaking is triggered dynamically via the Higgs portal by the condensation of the colored scalar field around 1 TeV. The mass of the colored boson is restricted to be 350 GeV≲mS≲3 TeV, with the upper bound obtained from perturbative renormalization group evolution. This implies that the colored boson can be produced at the LHC. If the colored boson is electrically charged, the branching fraction of the Higgs boson decaying into two photons can slightly increase, and moreover, it can be produced at future linear colliders. Our idea of nonperturbative electroweak scale generation can serve as a new starting point for more realistic model building in solving the hierarchy problem. PMID:25215976
Kernel Methods on Riemannian Manifolds with Gaussian RBF Kernels.
Jayasumana, Sadeep; Hartley, Richard; Salzmann, Mathieu; Li, Hongdong; Harandi, Mehrtash
2015-12-01
In this paper, we develop an approach to exploiting kernel methods with manifold-valued data. In many computer vision problems, the data can be naturally represented as points on a Riemannian manifold. Due to the non-Euclidean geometry of Riemannian manifolds, usual Euclidean computer vision and machine learning algorithms yield inferior results on such data. In this paper, we define Gaussian radial basis function (RBF)-based positive definite kernels on manifolds that permit us to embed a given manifold with a corresponding metric in a high-dimensional reproducing kernel Hilbert space. These kernels make it possible to utilize algorithms developed for linear spaces on nonlinear manifold-valued data. Since the Gaussian RBF defined with an arbitrary metric is not always positive definite, we present a unified framework for analyzing the positive definiteness of the Gaussian RBF on a generic metric space. We then use the proposed framework to identify positive definite kernels on two specific manifolds commonly encountered in computer vision: the Riemannian manifold of symmetric positive definite matrices and the Grassmann manifold, i.e., the Riemannian manifold of linear subspaces of a Euclidean space. We show that many popular algorithms designed for Euclidean spaces, such as support vector machines, discriminant analysis and principal component analysis, can be generalized to Riemannian manifolds with the help of such positive definite Gaussian kernels. PMID:26539851
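A minimal sketch of this construction on the SPD manifold: with the log-Euclidean metric d(X, Y) = ||log X − log Y||_F, the Gaussian RBF k = exp(−γ d²) is positive definite, because the matrix logarithm embeds the manifold in a linear space. The demo below (my illustration, not the paper's code) builds the Gram matrix and checks positive semi-definiteness numerically:

```python
import numpy as np

def sym_logm(X):
    """Matrix logarithm of a symmetric positive definite matrix via eigh."""
    w, V = np.linalg.eigh(X)
    return (V * np.log(w)) @ V.T

def spd_rbf_gram(mats, gamma=0.5):
    """Gram matrix of the log-Euclidean Gaussian RBF kernel on SPD matrices."""
    logs = [sym_logm(X) for X in mats]
    n = len(logs)
    K = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            d2 = np.linalg.norm(logs[i] - logs[j], "fro") ** 2
            K[i, j] = np.exp(-gamma * d2)
    return K

rng = np.random.default_rng(1)
spd_mats = [a @ a.T + 0.1 * np.eye(3)            # random SPD samples
            for a in (rng.normal(size=(3, 3)) for _ in range(5))]
K = spd_rbf_gram(spd_mats)
assert np.min(np.linalg.eigvalsh(K)) > -1e-10    # Gram matrix is PSD
```

Such a Gram matrix can be passed directly to any kernel machine (e.g. an SVM with a precomputed kernel), which is exactly how the paper generalizes Euclidean algorithms to manifold-valued data.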
Deur, Alexandre; Brodsky, Stanley J.; de Téramond, Guy F.
2016-05-09
Here, we review present knowledge on $\alpha_s$, the Quantum Chromodynamics (QCD) running coupling. The dependence of $\alpha_s(Q^2)$ on momentum transfer $Q$ encodes the underlying dynamics of hadron physics, from color confinement in the infrared domain to asymptotic freedom at short distances. We survey our present theoretical and empirical knowledge of $\alpha_s(Q^2)$, including constraints at high $Q^2$ predicted by perturbative QCD, and constraints at small $Q^2$ based on models of nonperturbative dynamics. In the first, introductory, part of this review, we explain the phenomenological meaning of the coupling, the reason for its running, and the challenges facing a complete understanding of its analytic behavior in the infrared domain. In the second, more technical, part of the review, we discuss $\alpha_s(Q^2)$ in the high momentum transfer domain of QCD. We review how $\alpha_s$ is defined, including its renormalization scheme dependence, the definition of its renormalization scale, the utility of effective charges, as well as "Commensurate Scale Relations" which connect the various definitions of the QCD coupling without renormalization scale ambiguity. We also report recent important experimental measurements and advanced theoretical analyses which have led to precise QCD predictions at high energy. As an example of an important optimization procedure, we discuss the "Principle of Maximum Conformality", which enhances QCD's predictive power by removing the dependence of the predictions for physical observables on the choice of the gauge and renormalization scheme. In the last part of the review, we discuss $\alpha_s(Q^2)$ in the low momentum transfer domain, where there has been no consensus on how to define $\alpha_s(Q^2)$ or its analytic behavior. We discuss the various approaches used for low-energy calculations. Among them, we discuss the light-front holographic approach to QCD in the strongly coupled
Mixing state of bi-component mixtures under aggregation with a product kernel
NASA Astrophysics Data System (ADS)
Fernández-Díaz, J. M.; Gómez-García, G. J.
2010-05-01
We analyze the aggregation of a two-component system with a product kernel, to determine its evolution in time during progressive mixing. The evolution is governed by the Smoluchowski equation, which yields gelation after a certain time. In the past, equilibrium (or asymptotic) solutions have been used to study the mixing of bi-component mixtures for non-gelling kernels. In this letter we show that asymptotic solutions fail to describe the mixing behavior in the product-kernel case (even before gelation). Moreover, an equilibrium composition is never reached; on the contrary, particles of every composition exist at all times.
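The bi-component, product-kernel setting can be explored with a finite-N (Marcus-Lushnikov) Monte Carlo toy. In this sketch (my illustration, not the letter's method) each particle carries masses (a, b) of the two components, pairs merge with probability proportional to the product of their total masses, and the composition distribution can be followed in time:

```python
import random

def aggregate(particles, steps, seed=0):
    """Monte Carlo aggregation with product kernel K(i, j) = m_i * m_j."""
    rng = random.Random(seed)
    parts = [list(p) for p in particles]
    for _ in range(steps):
        if len(parts) < 2:
            break
        # product kernel: choose each partner with probability ∝ its total mass
        weights = [a + b for a, b in parts]
        i = rng.choices(range(len(parts)), weights=weights)[0]
        j = i
        while j == i:
            j = rng.choices(range(len(parts)), weights=weights)[0]
        parts[i][0] += parts[j][0]          # merging sums both components
        parts[i][1] += parts[j][1]
        del parts[j]
    return parts

pop = [(1, 0)] * 20 + [(0, 1)] * 20         # initially unmixed monomers
out = aggregate(pop, steps=30)
# component masses are conserved; survivors carry mixed compositions
```

Because the product kernel preferentially merges heavy particles, a single massive cluster (the finite-N analogue of the gel) emerges quickly, while particles of intermediate compositions persist alongside it, consistent with the letter's point that no equilibrium composition is approached.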
Brodsky, Stanley J.; /SLAC
2007-07-06
I discuss a number of novel topics in QCD, including the use of the AdS/CFT correspondence between Anti-de Sitter space and conformal gauge theories to obtain an analytically tractable approximation to QCD in the regime where the QCD coupling is large and constant. In particular, there is an exact correspondence between the fifth-dimension coordinate z of AdS space and a specific impact variable {zeta} which measures the separation of the quark constituents within the hadron in ordinary space-time. This connection allows one to compute the analytic form of the frame-independent light-front wavefunctions of mesons and baryons, the fundamental entities which encode hadron properties and allow the computation of exclusive scattering amplitudes. I also discuss a number of novel phenomenological features of QCD. Initial- and final-state interactions from gluon-exchange, normally neglected in the parton model, have a profound effect in QCD hard-scattering reactions, leading to leading-twist single-spin asymmetries, diffractive deep inelastic scattering, diffractive hard hadronic reactions, the breakdown of the Lam Tung relation in Drell-Yan reactions, and nuclear shadowing and non-universal antishadowing--leading-twist physics not incorporated in the light-front wavefunctions of the target computed in isolation. I also discuss tests of hidden color in nuclear wavefunctions, the use of diffraction to materialize the Fock states of a hadronic projectile and test QCD color transparency, and anomalous heavy quark effects. The presence of direct higher-twist processes where a proton is produced in the hard subprocess can explain the large proton-to-pion ratio seen in high centrality heavy ion collisions.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 8 2011-01-01 2011-01-01 false Kernel weight. 981.9 Section 981.9 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels,...
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Kernel weight. 981.9 Section 981.9 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels,...
Cusp Kernels for Velocity-Changing Collisions
NASA Astrophysics Data System (ADS)
McGuyer, B. H.; Marsland, R., III; Olsen, B. A.; Happer, W.
2012-05-01
We introduce an analytical kernel, the "cusp" kernel, to model the effects of velocity-changing collisions on optically pumped atoms in low-pressure buffer gases. Like the widely used Keilson-Storer kernel [J. Keilson and J. E. Storer, Q. Appl. Math. 10, 243 (1952)], cusp kernels are characterized by a single parameter and preserve a Maxwellian velocity distribution. Cusp kernels and their superpositions are more useful than Keilson-Storer kernels, because they are more similar to real kernels inferred from measurements or theory and are easier to invert to find steady-state velocity distributions.
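For comparison, the one-dimensional Keilson-Storer kernel cited above, with memory parameter 0 ≤ α ≤ 1 and thermal speed u, is

```latex
W_{\mathrm{KS}}(v'\to v)=\frac{1}{u\sqrt{\pi\,(1-\alpha^{2})}}\,
\exp\!\left[-\frac{(v-\alpha v')^{2}}{u^{2}\,(1-\alpha^{2})}\right]
```

It preserves the Maxwellian because a collision maps $v' \to \alpha v'$ plus Gaussian noise of variance $u^2(1-\alpha^2)/2$, so the stationary variance $\alpha^2 (u^2/2) + u^2(1-\alpha^2)/2 = u^2/2$ is exactly thermal; cusp kernels retain this single-parameter, Maxwellian-preserving structure while better matching measured kernels.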
QCD Phase Diagram and the Constant Mass Approximation
NASA Astrophysics Data System (ADS)
Ahmad, A.; Ayala, A.; Bashir, A.; Gutiérrez, E.; Raya, A.
2015-11-01
Dynamical generation of quark masses in the infrared region of QCD plays an important role in understanding the peculiar nature of the physics of hadrons. As is known, the solution of the QCD gap equation for the quark mass function is flat at low momentum, but smoothly evolves to the perturbative behavior at high momentum. In this work, we use an effective truncation of the QCD gap equation valid up to 1 GeV, and implement it at finite temperature and chemical potential to understand the QCD phase diagram for the chiral symmetry breaking-restoration and confinement-deconfinement phase transitions from the Schwinger-Dyson equations point of view. Our effective kernel contains a gluon dressing function with two light quark flavors Nf = 2, with current quark mass 0.0035 GeV. An effective coupling, adjusted to reproduce the behavior of the chiral condensate at finite T, complements our truncation. We find the critical end point of the phase diagram located at temperature TE = 0.1245 GeV and baryonic chemical potential μEB = 0.211 GeV.
Lattice QCD for parallel computers
NASA Astrophysics Data System (ADS)
Quadling, Henley Sean
Lattice QCD is an important tool in the investigation of Quantum Chromodynamics (QCD). This is particularly true at lower energies, where traditional perturbative techniques fail and other non-perturbative theoretical efforts are not entirely satisfactory. Important features of QCD such as confinement and the masses of the low-lying hadronic states have been demonstrated and calculated in lattice QCD simulations; in calculations such as these, non-lattice techniques in QCD have failed. However, despite the incredible advances in computer technology, a full solution of lattice QCD may still lie in the distant future. Much effort is being expended in the search for ways to reduce the computational burden so that an adequate solution of lattice QCD is possible in the near future. There has been considerable progress in recent years, especially in the research of improved lattice actions. In this thesis, a new approach to lattice QCD algorithms is introduced, which results in very significant efficiency improvements. The new approach is explained in detail, evaluated, and verified by comparing physics results with current lattice QCD simulations. The new sub-lattice layout methodology has been specifically designed for current and future hardware. Together with concurrent research into improved lattice actions and more efficient numerical algorithms, the very significant efficiency improvements demonstrated in this thesis can play an important role in allowing lattice QCD researchers access to much more realistic simulations. The techniques presented in this thesis also allow ambitious QCD simulations to be performed on cheap clusters of commodity computers.
Quark-gluon vertex model and lattice-QCD data
Bhagwat, M.S.; Tandy, P.C.
2004-11-01
A model for the dressed-quark-gluon vertex, at zero gluon momentum, is formed from a nonperturbative extension of the two Feynman diagrams that contribute at one loop in perturbation theory. The required input is an existing ladder-rainbow model Bethe-Salpeter kernel from an approach based on the Dyson-Schwinger equations; no new parameters are introduced. The model includes an Ansatz for the triple-gluon vertex. Two of the three vertex amplitudes from the model provide a pointwise description of the recent quenched-lattice-QCD data. An estimate of the effects of quenching is made.
Non-perturbative QCD Modeling and Meson Physics
Nguyen, T.; Souchlas, N. A.; Tandy, P. C.
2009-04-20
Using a ladder-rainbow kernel previously established for light quark hadron physics, we explore the extension to masses and electroweak decay constants of ground state pseudoscalar and vector quarkonia and heavy-light mesons in the c- and b-quark regions. We make a systematic study of the effectiveness of a constituent mass concept as a replacement for a heavy quark dressed propagator for such states. The difference between vector and axial vector current correlators is explored within the same model to provide an estimate of the four quark chiral condensate and the leading distance scale for the onset of non-perturbative phenomena in QCD.
Soft and Hard Scale QCD Dynamics in Mesons
NASA Astrophysics Data System (ADS)
Nguyen, T.; Souchlas, N. A.; Tandy, P. C.
2011-09-01
Using a ladder-rainbow kernel previously established for light quark hadron physics, we explore the extension to masses and electroweak decay constants of ground state pseudoscalar and vector quarkonia and heavy-light mesons in the c- and b-quark regions. We make a systematic study of the effectiveness of a constituent mass concept as a replacement for a heavy quark dressed propagator for such states. The difference between vector and axial vector current correlators is explored within the same model to provide an estimate of the four quark chiral condensate and the leading distance scale for the onset of non-perturbative phenomena in QCD.
Devlin, T.; CDF Collaboration
1996-10-01
The CDF collaboration is engaged in a broad program of QCD measurements at the Fermilab Tevatron Collider. I will discuss inclusive jet production at center-of-mass energies of 1800 GeV and 630 GeV, properties of events with very high total transverse energy and dijet angular distributions.
Plunkett, R.; The CDF Collaboration
1991-10-01
Results are presented for hadronic jet and direct photon production at {radical}s = 1800 GeV. The data are compared with next-to-leading order QCD calculations. A new limit on the scale of possible composite structure of the quarks is also reported. 12 refs., 4 figs.
Brodsky, Stanley J.; Deshpande, Abhay L.; Gao, Haiyan; McKeown, Robert D.; Meyer, Curtis A.; Meziani, Zein-Eddine; Milner, Richard G.; Qiu, Jianwei; Richards, David G.; Roberts, Craig D.
2015-02-26
This White Paper presents the recommendations and scientific conclusions from the Town Meeting on QCD and Hadronic Physics that took place in the period 13-15 September 2014 at Temple University as part of the NSAC 2014 Long Range Planning process. The meeting was held in coordination with the Town Meeting on Phases of QCD and included a full day of joint plenary sessions of the two meetings. The goals of the meeting were to report and highlight progress in hadron physics in the seven years since the 2007 Long Range Plan (LRP07), and present a vision for the future by identifying the key questions and plausible paths to solutions which should define the next decade. The introductory summary details the recommendations and their supporting rationales, as determined at the Town Meeting on QCD and Hadron Physics, and the endorsements that were voted upon. The larger document is organized as follows. Section 2 highlights major progress since the 2007 LRP. It is followed, in Section 3, by a brief overview of the physics program planned for the immediate future. Finally, Section 4 provides an overview of the physics motivations and goals associated with the next QCD frontier: the Electron-Ion-Collider.
Andreas S. Kronfeld
2002-09-30
After reviewing some of the mathematical foundations and numerical difficulties facing lattice QCD, I review the status of several calculations relevant to experimental high-energy physics. The topics considered are moments of structure functions, which may prove relevant to searches for new phenomena at the LHC, and several aspects of flavor physics, which are relevant to understanding CP and flavor violation.
Radyushkin, Anatoly V.; Efremov, Anatoly Vasilievich; Ginzburg, Ilya F.
2013-04-01
We discuss some problems concerning the application of perturbative QCD to high-energy soft processes. We show that summing the contributions of the lowest-twist operators for the non-singlet $t$-channel leads to a Regge-like amplitude. The singlet case is also discussed.
Lincoln, Don
2016-06-28
The strongest force in the universe is the strong nuclear force, and it governs the behavior of quarks and gluons inside protons and neutrons. The name of the theory that governs this force is quantum chromodynamics, or QCD. In this video, Fermilab's Dr. Don Lincoln explains the intricacies of this dominant component of the Standard Model.
Nathan Isgur
1997-03-01
The author presents an idiosyncratic view of baryons which calls for a marriage between quark-based and hadronic models of QCD. He advocates a treatment based on valence quark plus glue dominance of hadron structure, with the sea of qq̄ pairs (in the form of virtual hadron pairs) as important corrections.
Brodsky, Stanley J.; /SLAC /Southern Denmark U., CP3-Origins
2011-08-12
I review a number of topics where conventional wisdom in hadron physics has been challenged. For example, hadrons can be produced at large transverse momentum directly within a hard higher-twist QCD subprocess, rather than from jet fragmentation. Such 'direct' processes can explain the deviations from perturbative QCD predictions in measurements of inclusive hadron cross sections at fixed x_T = 2p_T/√s, as well as the 'baryon anomaly', the anomalously large proton-to-pion ratio seen in high-centrality heavy-ion collisions. Initial-state and final-state interactions of the struck quark, the soft-gluon rescattering associated with its Wilson line, lead to Bjorken-scaling single-spin asymmetries, diffractive deep inelastic scattering, the breakdown of the Lam-Tung relation in Drell-Yan reactions, as well as nuclear shadowing and antishadowing. The Gribov-Glauber theory predicts that antishadowing of nuclear structure functions is not universal, but instead depends on the flavor quantum numbers of each quark and antiquark, thus explaining the anomalous nuclear dependence measured in deep-inelastic neutrino scattering. Since shadowing and antishadowing arise from the physics of leading-twist diffractive deep inelastic scattering, one cannot attribute such phenomena to the structure of the nucleus itself. It is thus important to distinguish 'static' structure functions, the probability distributions computed from the square of the target light-front wavefunctions, versus 'dynamical' structure functions which include the effects of the final-state rescattering of the struck quark. The importance of the J = 0 photon-quark QCD contact interaction in deeply virtual Compton scattering is also emphasized. The scheme-independent BLM method for setting the renormalization scale is discussed. Eliminating the renormalization scale ambiguity greatly improves the precision of QCD predictions and increases the sensitivity of searches for new physics at the LHC.
Nawa, Kanabu; Suganuma, Hideo; Kojo, Toru
2007-04-15
We study baryons in holographic QCD with the D4/D8/D8 multi-D-brane system. In holographic QCD, the baryon appears as a topologically nontrivial chiral soliton in a four-dimensional effective theory of mesons. We call this topological soliton a brane-induced Skyrmion. Some review of D4/D8/D8 holographic QCD is presented from the viewpoints of recent hadron physics and QCD phenomenology. A four-dimensional effective theory with pions and ρ mesons is uniquely derived from the non-Abelian Dirac-Born-Infeld (DBI) action of the D8 brane with the D4 supergravity background at the leading order of large N_c, without small-amplitude expansion of the meson fields to discuss chiral solitons. For the hedgehog configuration of the pion and ρ-meson fields, we derive the energy functional and the Euler-Lagrange equation of the brane-induced Skyrmion from the meson effective action induced by holographic QCD. Performing the numerical calculation, we obtain the soliton solution and figure out the pion profile F(r) and the ρ-meson profile G̃(r) of the brane-induced Skyrmion, with its total energy, energy density distribution, and root-mean-square radius. These results are compared with the experimental quantities of baryons and also with the profiles of the standard Skyrmion without ρ mesons. We analyze interaction terms of pions and ρ mesons in the brane-induced Skyrmion, and find a significant ρ-meson component appearing in the core region of a baryon.
QCD with many fermions and QCD topology
NASA Astrophysics Data System (ADS)
Shuryak, Edward
2013-04-01
Major nonperturbative phenomena in QCD - confinement and chiral symmetry breaking - are known to be related to certain topological objects. Recent lattice advances into the domain of many, Nf = O(10), fermion flavors have shown that both phase transitions shift in this case to much stronger coupling. We discuss confinement in terms of monopole Bose condensation, and discuss how it is affected by fermions "riding" on the monopoles, ending with the Nf dependence of the critical line. Chiral symmetry breaking is discussed in terms of the (anti-)self-dual dyons, the instanton constituents. The fermionic zero modes of those have a different meaning and lead to strong interaction between dyons and antidyons. We report some qualitative consequences of this theory and also some information about our first direct numerical study of the dyonic ensemble, with respect to both chiral symmetry breaking and confinement (via back-reaction to the holonomy potential).
Domain transfer multiple kernel learning.
Duan, Lixin; Tsang, Ivor W; Xu, Dong
2012-03-01
Cross-domain learning methods have shown promising results by leveraging labeled patterns from the auxiliary domain to learn a robust classifier for the target domain which has only a limited number of labeled samples. To cope with the considerable change between feature distributions of different domains, we propose a new cross-domain kernel learning framework into which many existing kernel methods can be readily incorporated. Our framework, referred to as Domain Transfer Multiple Kernel Learning (DTMKL), simultaneously learns a kernel function and a robust classifier by minimizing both the structural risk functional and the distribution mismatch between the labeled and unlabeled samples from the auxiliary and target domains. Under the DTMKL framework, we also propose two novel methods by using SVM and prelearned classifiers, respectively. Comprehensive experiments on three domain adaptation data sets (i.e., TRECVID, 20 Newsgroups, and email spam data sets) demonstrate that DTMKL-based methods outperform existing cross-domain learning and multiple kernel learning methods. PMID:21646679
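The distribution mismatch that DTMKL minimizes between the auxiliary (source) and target domains is quantified by the Maximum Mean Discrepancy (MMD) between the two samples. A minimal sketch in NumPy, assuming a single fixed RBF kernel rather than the learned combination of base kernels used in the paper; all names are illustrative:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian RBF kernel matrix between the rows of X and Y."""
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

def mmd2(X_src, X_tgt, gamma=1.0):
    """Biased estimator of the squared Maximum Mean Discrepancy:
    the distance between the kernel mean embeddings of two samples."""
    k_ss = rbf_kernel(X_src, X_src, gamma).mean()
    k_tt = rbf_kernel(X_tgt, X_tgt, gamma).mean()
    k_st = rbf_kernel(X_src, X_tgt, gamma).mean()
    return k_ss + k_tt - 2.0 * k_st

rng = np.random.default_rng(0)
# identically distributed samples vs. a mean-shifted target domain
same = mmd2(rng.normal(0.0, 1.0, (200, 3)), rng.normal(0.0, 1.0, (200, 3)))
shifted = mmd2(rng.normal(0.0, 1.0, (200, 3)), rng.normal(2.0, 1.0, (200, 3)))
# the shifted pair should show a much larger discrepancy than the matched pair
```

In the DTMKL framework this mismatch term is minimized jointly with the SVM structural risk, with the kernel itself a learned convex combination of base kernels.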
Aligning Biomolecular Networks Using Modular Graph Kernels
NASA Astrophysics Data System (ADS)
Towfic, Fadi; Greenlee, M. Heather West; Honavar, Vasant
Comparative analysis of biomolecular networks constructed using measurements from different conditions, tissues, and organisms offers a powerful approach to understanding the structure, function, dynamics, and evolution of complex biological systems. We explore a class of algorithms for aligning large biomolecular networks by breaking down such networks into subgraphs and computing the alignment of the networks based on the alignment of their subgraphs. The resulting subnetworks are compared using graph kernels as scoring functions. We provide implementations of the resulting algorithms as part of BiNA, an open source biomolecular network alignment toolkit. Our experiments using Drosophila melanogaster, Saccharomyces cerevisiae, Mus musculus and Homo sapiens protein-protein interaction networks extracted from the DIP repository of protein-protein interaction data demonstrate that the performance of the proposed algorithms (as measured by % GO term enrichment of subnetworks identified by the alignment) is competitive with some of the state-of-the-art algorithms for pairwise alignment of large protein-protein interaction networks. Our results also show that the inter-species similarity scores computed based on graph kernels can be used to cluster the species into a species tree that is consistent with the known phylogenetic relationships among the species.
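The subgraph-scoring idea can be illustrated with a deliberately simple graph kernel: comparing two subnetworks by the inner product of their vertex-degree histograms. This is only a toy stand-in for the richer kernels used in BiNA; all names are illustrative:

```python
import numpy as np

def degree_histogram_kernel(adj_a, adj_b, max_deg=10):
    """Toy graph kernel: inner product of the vertex-degree histograms
    of two graphs given as dense adjacency matrices."""
    ha = np.bincount(adj_a.sum(1).astype(int), minlength=max_deg + 1)[:max_deg + 1]
    hb = np.bincount(adj_b.sum(1).astype(int), minlength=max_deg + 1)[:max_deg + 1]
    return float(ha @ hb)

# two small "subnetworks": a triangle and a 3-node path
tri = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
score_same = degree_histogram_kernel(tri, tri)   # self-similarity
score_diff = degree_histogram_kernel(tri, path)  # cross-similarity
```

In an alignment setting, such kernel scores over pairs of subgraphs feed the scoring function that drives the network-to-network alignment.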
RTOS kernel in portable electrocardiograph
NASA Astrophysics Data System (ADS)
Centeno, C. A.; Voos, J. A.; Riva, G. G.; Zerbini, C.; Gonzalez, E. A.
2011-12-01
This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which the uCOS-II RTOS can be embedded. The decision to use the kernel is based on its benefits: its license for educational use and its intrinsic time control and peripheral management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements imposed by the kernel structure. The kernel's own tools were used for time estimation and evaluation of the resources used by each process. After this feasibility analysis, the cyclic code was migrated to a structure based on separate processes, or tasks, able to synchronize events, resulting in an electrocardiograph running on a single Central Processing Unit (CPU) under an RTOS.
Kovacs, E.; CDF Collaboration
1996-02-01
We present results for the inclusive jet cross section and the dijet mass distribution. The inclusive cross section and dijet mass both exhibit significant deviations from the predictions of NLO QCD for jets with E_T > 200 GeV, or dijet masses > 400 GeV/c². We show that it is possible, within a global QCD analysis that includes the CDF inclusive jet data, to modify the gluon distribution at high x. The resulting increase in the jet cross-section predictions is 25-35%. Owing to the presence of k_T smearing effects, the direct photon data does not provide as strong a constraint on the gluon distribution as previously thought. A comparison of the CDF and UA2 jet data, which have a common range in x, is plagued by theoretical and experimental uncertainties, and cannot at present confirm the CDF excess or the modified gluon distribution.
NASA Astrophysics Data System (ADS)
Dudek, Jozef J.
2016-03-01
I describe how hadron-hadron scattering amplitudes are related to the eigenstates of QCD in a finite cubic volume. The discrete spectrum of such eigenstates can be determined from correlation functions computed using lattice QCD, and the corresponding scattering amplitudes extracted. I review results from the Hadron Spectrum Collaboration who have used these finite volume methods to study ππ elastic scattering, including the ρ resonance, as well as coupled-channel πK, ηK scattering. The very recent extension to the case where an external current acts is also presented, considering the reaction πγ* → ππ, from which the unstable ρ → πγ transition form factor is extracted. Ongoing calculations are advertised and the outlook for finite volume approaches is presented.
Exponentially modified QCD coupling
Cvetic, Gorazd; Valenzuela, Cristian
2008-04-01
We present a specific class of models for an infrared-finite analytic QCD coupling, such that at large spacelike energy scales the coupling differs from the perturbative one by less than any inverse power of the energy scale. This condition is motivated by the Institute for Theoretical and Experimental Physics operator product expansion philosophy. Allowed by the ambiguity in the analytization of the perturbative coupling, the proposed class of couplings has three parameters. In the intermediate energy region, the proposed coupling has low loop-level and renormalization scheme dependence. The present modification of perturbative QCD must be considered as a phenomenological attempt, with the aim of enlarging the applicability range of the theory of the strong interactions at low energies.
Gupta, R.
1998-12-31
The goal of the lectures on lattice QCD (LQCD) is to provide an overview of both the technical issues and the progress made so far in obtaining phenomenologically useful numbers. The lectures consist of three parts. The author`s charter is to provide an introduction to LQCD and outline the scope of LQCD calculations. In the second set of lectures, Guido Martinelli will discuss the progress they have made so far in obtaining results, and their impact on Standard Model phenomenology. Finally, Martin Luescher will discuss the topical subjects of chiral symmetry, improved formulation of lattice QCD, and the impact these improvements will have on the quality of results expected from the next generation of simulations.
Dudek, Jozef J.; Edwards, Robert G.
2012-03-21
In this study, we present the first comprehensive study of hybrid baryons using lattice QCD methods. Using a large basis of composite QCD interpolating fields we extract an extensive spectrum of baryon states and isolate those of hybrid character using their relatively large overlap onto operators which sample gluonic excitations. We consider the spectrum of Nucleon and Delta states at several quark masses finding a set of positive parity hybrid baryons with quantum numbers $N_{1/2^+},\,N_{1/2^+},\,N_{3/2^+},\,N_{3/2^+},\,N_{5/2^+}$ and $\Delta_{1/2^+},\,\Delta_{3/2^+}$ at an energy scale above the first band of `conventional' excited positive parity baryons. This pattern of states is compatible with a color octet gluonic excitation having $J^{P}=1^{+}$ as previously reported in the hybrid meson sector and with a comparable energy scale for the excitation, suggesting a common bound-state construction for hybrid mesons and baryons.
Density Estimation with Mercer Kernels
NASA Technical Reports Server (NTRS)
Macready, William G.
2003-01-01
We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As is usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.
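As a sketch of the idea, consider the simplest limit in which the feature map is the identity, so the construction reduces to an ordinary Gaussian kernel density estimate with one component per data point (the paper's feature-space mixture and EM fit are not reproduced; names are illustrative):

```python
import numpy as np

def kde(x_query, data, bandwidth=0.5):
    """Gaussian kernel density estimate: average of isotropic Gaussians
    of width `bandwidth` centered on each data point."""
    d = data.shape[1]
    norm = (2.0 * np.pi * bandwidth**2) ** (-d / 2.0)
    diffs = x_query[:, None, :] - data[None, :, :]   # (n_query, n_data, d)
    sq = (diffs**2).sum(-1)
    return norm * np.exp(-sq / (2.0 * bandwidth**2)).mean(axis=1)

rng = np.random.default_rng(1)
data = rng.normal(0.0, 1.0, (500, 1))            # samples from N(0, 1)
dens = kde(np.array([[0.0], [4.0]]), data)       # density at the mode and in the tail
# the estimate near the mode should far exceed the estimate in the tail
```

The Mercer-kernel construction generalizes this by fitting a small mixture of Gaussians in the kernel-induced feature space, rather than placing one component on every sample.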
Giannetti, P.
1991-05-01
Recent analyses of jet data taken at the Fermilab Tevatron Collider at √s = 1.8 TeV are presented. Inclusive jet, dijet, trijet, and direct photon measurements are compared to QCD parton-level calculations at orders α_s³ or α_s². The large total transverse energy events are well described by the Herwig shower Monte Carlo. 19 refs., 20 figs., 1 tab.
Bjorken, J.D.
1996-10-01
New directions for exploring QCD at future high-energy colliders are sketched. These include jets within jets, BFKL dynamics, soft and hard diffraction, searches for disoriented chiral condensate, and doing a better job on minimum bias physics. The new experimental opportunities include electron-ion collisions at HERA, a new collider detector at the C0 region of the Tevatron, and the FELIX initiative at the LHC.
Kronfeld, A.S.; Allison, I.F.; Aubin, C.; Bernard, C.; Davies, C.T.H.; DeTar, C.; Di Pierro, M.; Freeland, E.D.; Gottlieb, Steven; Gray, A.; Gregor, E.; Heller, U.M.; Hetrick, J.E.; El-Khadra, Aida X.; Levkova, L.; Mackenzie, P.B.; Maresca, F.; Menscher, D.; Nobes, M.; Okamoto, M.; Oktay, M.B.; /Fermilab /Glasgow U. /Columbia U. /Washington U., St. Louis /Utah U. /DePaul U. /Art Inst. of Chicago /Indiana U. /Ohio State U. /Arizona U. /APS, New York /U. Pacific, Stockton /Illinois U., Urbana /Cornell U., LEPP /Simon Fraser U. /UC, Santa Barbara
2005-09-01
In the past year, we calculated with lattice QCD three quantities that were unknown or poorly known. They are the q² dependence of the form factor in semileptonic D → Kℓν decay, the decay constant of the D meson, and the mass of the B_c meson. In this talk, we summarize these calculations, with emphasis on their (subsequent) confirmation by experiments.
Roberts, C.D.
1994-09-01
The Dyson-Schwinger equations (DSEs) are a tower of coupled integral equations that relate the Green functions of QCD to one another. Solving these equations provides the solution of QCD. This tower of equations includes the equation for the quark self-energy, which is the analogue of the gap equation in superconductivity, and the Bethe-Salpeter equation, the solution of which is the quark-antiquark bound-state amplitude in QCD. The application of this approach to solving Abelian and non-Abelian gauge theories is reviewed. The nonperturbative DSE approach is being developed as both: (1) a computationally less intensive alternative and (2) a complement to numerical simulations of the lattice action of QCD. In recent years, significant progress has been made with the DSE approach so that it is now possible to make sensible and direct comparisons between quantities calculated using this approach and the results of numerical simulations of Abelian gauge theories. Herein the application of the DSE approach to the calculation of pion observables is described: the π-π scattering lengths (a_0^0, a_0^2, a_1^1, a_2^2) and associated partial-wave amplitudes; the π⁰ → γγ decay width; and the charged pion form factor, F_π(q²). Since this approach provides a straightforward, microscopic description of dynamical chiral symmetry breaking (DχSB) and confinement, the calculation of pion observables is a simple and elegant illustrative example of its power and efficacy. The relevant DSEs are discussed in the calculation of pion observables and concluding remarks are presented.
Hadronic Resonances from Lattice QCD
Lichtl, Adam C.; Bulava, John; Morningstar, Colin; Edwards, Robert; Mathur, Nilmani; Richards, David; Fleming, George; Juge, K. Jimmy; Wallace, Stephen J.
2007-10-26
The determination of the pattern of hadronic resonances as predicted by Quantum Chromodynamics requires the use of non-perturbative techniques. Lattice QCD has emerged as the dominant tool for such calculations, and has produced many QCD predictions which can be directly compared to experiment. The concepts underlying lattice QCD are outlined, methods for calculating excited states are discussed, and results from an exploratory Nucleon and Delta baryon spectrum study are presented.
Hadronic Resonances from Lattice QCD
John Bulava; Robert Edwards; George Fleming; K. Jimmy Juge; Adam C. Lichtl; Nilmani Mathur; Colin Morningstar; David Richards; Stephen J. Wallace
2007-06-16
The determination of the pattern of hadronic resonances as predicted by Quantum Chromodynamics requires the use of non-perturbative techniques. Lattice QCD has emerged as the dominant tool for such calculations, and has produced many QCD predictions which can be directly compared to experiment. The concepts underlying lattice QCD are outlined, methods for calculating excited states are discussed, and results from an exploratory Nucleon and Delta baryon spectrum study are presented.
Resource Letter QCD-1: Quantum chromodynamics
NASA Astrophysics Data System (ADS)
Kronfeld, Andreas S.; Quigg, Chris
2010-11-01
This Resource Letter provides a guide to the literature on quantum chromodynamics (QCD), the relativistic quantum field theory of the strong interactions. Journal articles, books, and other documents are cited for the following topics: Quarks and color, the parton model, Yang-Mills theory, experimental evidence for color, QCD as a color gauge theory, asymptotic freedom, QCD for heavy hadrons, QCD on the lattice, the QCD vacuum, pictures of quark confinement, early and modern applications of perturbative QCD, the determination of the strong coupling and quark masses, QCD and the hadron spectrum, hadron decays, the quark-gluon plasma, the strong nuclear interaction, and QCD's role in nuclear physics.
Technology Transfer Automated Retrieval System (TEKTRAN)
Oat (Avena sativa L.) kernels appear to contain much higher polar lipid concentrations than other plant tissues. We have extracted, identified, and quantified polar lipids from 18 oat genotypes grown in replicated plots in three environments in order to determine genotypic or environmental variation...
Accelerating the Original Profile Kernel
Hamp, Tobias; Goldberg, Tatyana; Rost, Burkhard
2013-01-01
One of the most accurate multi-class protein classification systems continues to be the profile-based SVM kernel introduced by the Leslie group. Unfortunately, its CPU requirements render it too slow for practical application to large-scale classification tasks. Here, we introduce several software improvements that enable significant acceleration. Using various non-redundant data sets, we demonstrate that our new implementation reaches a speed-up as high as 14-fold for calculating the same kernel matrix. Some predictions are over 200 times faster, making the kernel possibly the top contender in terms of its speed/performance trade-off. Additionally, we explain how to parallelize various computations and provide an integrative program that reduces creating a production-quality classifier to a single program call. The new implementation is available as a Debian package under a free academic license and does not depend on commercial software. For non-Debian based distributions, the source package ships with a traditional Makefile-based installer. Download and installation instructions can be found at https://rostlab.org/owiki/index.php/Fast_Profile_Kernel. Bugs and other issues may be reported at https://rostlab.org/bugzilla3/enter_bug.cgi?product=fastprofkernel. PMID:23825697
Adaptive wiener image restoration kernel
Yuan, Ding
2007-06-05
A method and device for the restoration of electro-optical image data using an adaptive Wiener filter begins by constructing the imaging system's optical transfer function and the Fourier transforms of the noise and the image. A spatial representation of the imaged object is then restored by spatial convolution of the image with a Wiener restoration kernel.
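A minimal, non-adaptive sketch of this frequency-domain construction in NumPy: the restoration kernel is H*/(|H|² + NSR), where H is the optical transfer function (the FFT of the point-spread function) and NSR a noise-to-signal ratio. The patented method adapts that ratio locally; here it is a single global constant, and all names are illustrative:

```python
import numpy as np

def wiener_restore(blurred, psf, nsr=0.01):
    """Restore a blurred image with a Wiener kernel built from the
    optical transfer function H = FFT(psf) and a constant NSR."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H)**2 + nsr)   # Wiener restoration kernel
    return np.real(np.fft.ifft2(W * G))

# blur a simple test image with a 3x3 box PSF, then restore it
img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0
psf = np.ones((3, 3)) / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
restored = wiener_restore(blurred, psf)
# the restored image should be closer to the original than the blurred one
```

The NSR term regularizes the division: where |H| is small (frequencies the optics suppressed), the kernel damps amplification instead of blowing up the noise.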
Local Observed-Score Kernel Equating
ERIC Educational Resources Information Center
Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.
2014-01-01
Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2013 CFR
2013-01-01
... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2010 CFR
2010-01-01
... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2014 CFR
2014-01-01
... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2011 CFR
2011-01-01
... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2012 CFR
2012-01-01
... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...
NLO Hierarchy of Wilson Lines Evolution
Balitsky, Ian
2015-03-01
The high-energy behavior of QCD amplitudes can be described in terms of the rapidity evolution of Wilson lines. I present the hierarchy of evolution equations for Wilson lines at next-to-leading order.
Confronting QCD with the experimental hadronic spectral functions from tau decay
Dominguez, C. A.; Nasrallah, N. F.; Schilcher, K.
2009-09-01
The (nonstrange) vector and axial-vector spectral functions extracted from τ decay by the ALEPH Collaboration are confronted with QCD in the framework of a finite energy sum rule involving a polynomial kernel tuned to suppress the region beyond the kinematical end point, where there are no longer data. This effectively allows a QCD finite energy sum rule analysis to be performed beyond the region of the existing data. Results show excellent agreement between data and perturbative QCD in the remarkably wide energy range s = 3-10 GeV², leaving room for a dimension d = 4 vacuum condensate consistent with values in the literature. A hypothetical dimension d = 2 term in the operator product expansion is found to be extremely small, consistent with zero. Fixed-order and contour-improved perturbation theory are used, with both leading to similar results within errors. Full consistency is found between vector and axial-vector channel results.
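A generic pinched-kernel finite energy sum rule of the type described takes the following form, where P(s) is a polynomial chosen to vanish near the end point s₀ of the data (the specific polynomial and channel conventions used by the authors are not reproduced here):

```latex
\frac{1}{\pi}\int_{0}^{s_0} ds\, P(s)\,\mathrm{Im}\,\Pi(s)\Big|_{\mathrm{data}}
 \;=\; -\,\frac{1}{2\pi i}\oint_{|s|=s_0} ds\, P(s)\,\Pi(s)\Big|_{\mathrm{QCD}}
```

The left-hand side is saturated by the measured spectral function, the right-hand side by the QCD correlator on the circle |s| = s₀; choosing P(s₀) = 0 suppresses the contribution near the timelike axis, which is exactly the region beyond the data that the tuned kernel is designed to de-emphasize.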
QCD coupling constants and VDM
Erkol, G.; Ozpineci, A.; Zamiralov, V. S.
2012-10-23
QCD sum rules for the coupling constants of vector mesons with baryons are constructed. The corresponding QCD sum rules for electric charges and magnetic moments are also derived and, using the vector-meson-dominance (VDM) model, related to the coupling constants. The role of VDM as a criterion for the mutual validity of the sum rules is considered.
QCD: Questions, challenges, and dilemmas
Bjorken, J.
1996-11-01
An introduction to some outstanding issues in QCD is presented, with emphasis on work by Diakonov and co-workers on the influence of the instanton vacuum on low-energy QCD observables. This includes the calculation of input valence-parton distributions for deep-inelastic scattering. 35 refs., 3 figs.
NASA Astrophysics Data System (ADS)
Bartels, Jochen
2006-06-01
I summarize the present status of the AGK cutting rules in perturbative QCD. Particular attention is given to the application of the AGK analysis to diffraction and multiple scattering in DIS at HERA and to pp collisions at the LHC. I also discuss the bootstrap conditions which appear in pQCD.
Sakai, Tadakatsu; Sugimoto, Shigeki
2005-12-02
We propose a holographic dual of QCD with massless flavors on the basis of a D4/D8-brane configuration within a probe approximation. We are led to a five-dimensional Yang-Mills theory on a curved space-time along with a Chern-Simons five-form on it, both of which provide us with a unifying framework to study the massless pion and an infinite number of massive vector mesons. We make sample computations of the physical quantities that involve the mesons and compare them with the experimental data. It is found that most of the results of this model are compatible with the experiments.
NASA Astrophysics Data System (ADS)
Sakai, Tadakatsu; Sugimoto, Shigeki
2005-12-01
We propose a holographic dual of QCD with massless flavors on the basis of a D4/D8-brane configuration within a probe approximation. We are led to a five-dimensional Yang-Mills theory on a curved space-time along with a Chern-Simons five-form on it, both of which provide us with a unifying framework to study the massless pion and an infinite number of massive vector mesons. We make sample computations of the physical quantities that involve the mesons and compare them with the experimental data. It is found that most of the results of this model are compatible with the experiments.
Sekhar Chivukula
2010-01-08
The symmetries of a quantum field theory can be realized in a variety of ways. Symmetries can be realized explicitly, approximately, through spontaneous symmetry breaking, or, via an anomaly, quantum effects can dynamically eliminate a symmetry of the theory that was present at the classical level. Quantum Chromodynamics (QCD), the modern theory of the strong interactions, exemplifies each of these possibilities. The interplay of these effects determines the spectrum of particles that we observe and, ultimately, accounts for 99% of the mass of ordinary matter.
Cool QCD: Hadronic Physics and QCD in Nuclei
NASA Astrophysics Data System (ADS)
Cates, Gordon
2015-10-01
QCD is the only strongly-coupled theory given to us by Nature, and it gives rise to a host of striking phenomena. Two examples in hadronic physics include the dynamic generation of mass and the confinement of quarks. Indeed, the vast majority of the mass of visible matter is due to the kinetic and potential energy of the massless gluons and the essentially massless quarks. QCD also gives rise to the force that binds protons and neutrons into nuclei, including subtle effects that have historically been difficult to understand. Describing these phenomena in terms of QCD has represented a daunting task, but remarkable progress has been achieved in both theory and experiment. Both CEBAF at Jefferson Lab and RHIC at Brookhaven National Lab have provided unprecedented experimental tools for investigating QCD, and upgrades at both facilities promise even greater opportunities in the future. Also important are programs at Fermilab as well as the LHC at CERN. Looking further ahead, an electron-ion collider (EIC) has the potential to answer whole new sets of questions regarding the role of gluons in nuclear matter, an issue that lies at the heart of the generation of mass. On the theoretical side, rapid progress in supercomputers is enabling stunning progress in Lattice QCD calculations, and approximate forms of QCD are also providing deep new physical insight. In this talk I will describe both recent advances in Cool QCD as well as the exciting scientific opportunities that exist for the future.
Nonperturbative QCD Calculations
NASA Astrophysics Data System (ADS)
Dellby, Niklas
1995-01-01
The research described in this thesis is an exact transformation of the Yang-Mills quantum chromodynamics (QCD) Lagrangian into a form that is suitable for nonperturbative calculations. The conventional Yang-Mills Lagrangian has proven to be an excellent basis for perturbative calculations, but in nonperturbative calculations it is difficult to separate gauge problems from physical properties. To mitigate this problem, I develop a new equivalent Lagrangian that is not only expressed completely in terms of the field strengths of the gauge field but is also manifestly Lorentz and gauge invariant. The new Lagrangian is quadratic in derivatives, with non-linear local couplings, thus it is ideally suited for a numerical calculation. The field-strength Lagrangian is of such a form that it is possible to do a straightforward numerical stationary-path expansion and find the fundamental QCD properties. This thesis examines several approximations analytically, investigating different ways to utilize the new Lagrangian. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)
Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampap, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G
2007-04-11
The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results.
Dudek, Jozef J.; Edwards, Robert G.
2012-03-21
In this study, we present the first comprehensive study of hybrid baryons using lattice QCD methods. Using a large basis of composite QCD interpolating fields we extract an extensive spectrum of baryon states and isolate those of hybrid character using their relatively large overlap onto operators which sample gluonic excitations. We consider the spectrum of Nucleon and Delta states at several quark masses, finding a set of positive parity hybrid baryons with quantum numbers $N_{1/2^+}, N_{1/2^+}, N_{3/2^+}, N_{3/2^+}, N_{5/2^+}$, and $\Delta_{1/2^+}, \Delta_{3/2^+}$ at an energy scale above the first band of 'conventional' excited positive parity baryons. This pattern of states is compatible with a color octet gluonic excitation having $J^{P}=1^{+}$ as previously reported in the hybrid meson sector and with a comparable energy scale for the excitation, suggesting a common bound-state construction for hybrid mesons and baryons.
None
2011-10-06
Modern QCD - Lecture 1. Starting from the QCD Lagrangian we will revisit some basic QCD concepts and derive fundamental properties like gauge invariance and isospin symmetry, and will discuss the Feynman rules of the theory. We will then focus on the gauge group of QCD and derive the Casimirs CF and CA and some useful color identities.
QCD Factorization and PDFs from Lattice QCD Calculation
NASA Astrophysics Data System (ADS)
Ma, Yan-Qing; Qiu, Jian-Wei
2015-02-01
In this talk, we review a QCD factorization based approach to extract parton distribution and correlation functions from lattice QCD calculations of single hadron matrix elements of quark-gluon operators. We argue that although the lattice QCD calculations are done in Euclidean space, the nonperturbative collinear behavior of the matrix elements is the same as in Minkowski space, and could be systematically factorized into parton distribution functions with infrared safe matching coefficients. The matching coefficients can be calculated perturbatively by applying the factorization formalism to asymptotic partonic states.
Derivation of aerodynamic kernel functions
NASA Technical Reports Server (NTRS)
Dowell, E. H.; Ventres, C. S.
1973-01-01
The method of Fourier transforms is used to determine the kernel function which relates the pressure on a lifting surface to the prescribed downwash within the framework of Dowell's (1971) shear flow model. This model is intended to improve upon the potential flow aerodynamic model by allowing for the aerodynamic boundary layer effects neglected in the potential flow model. For simplicity, incompressible, steady flow is considered. The proposed method is illustrated by deriving known results from potential flow theory.
Kernel Near Principal Component Analysis
MARTIN, SHAWN B.
2002-07-01
We propose a novel algorithm based on Principal Component Analysis (PCA). First, we present an interesting approximation of PCA using Gram-Schmidt orthonormalization. Next, we combine our approximation with the kernel functions from Support Vector Machines (SVMs) to provide a nonlinear generalization of PCA. After benchmarking our algorithm in the linear case, we explore its use in both the linear and nonlinear cases. We include applications to face data analysis, handwritten digit recognition, and fluid flow.
RKRD: Runtime Kernel Rootkit Detection
NASA Astrophysics Data System (ADS)
Grover, Satyajit; Khosravi, Hormuzd; Kolar, Divya; Moffat, Samuel; Kounavis, Michael E.
In this paper we address the problem of protecting computer systems against stealth malware. The problem is important because the number of known types of stealth malware increases exponentially. Existing approaches have some advantages for ensuring system integrity but sophisticated techniques utilized by stealthy malware can thwart them. We propose Runtime Kernel Rootkit Detection (RKRD), a hardware-based, event-driven, secure and inclusionary approach to kernel integrity that addresses some of the limitations of the state of the art. Our solution is based on the principles of using virtualization hardware for isolation, verifying signatures coming from trusted code as opposed to malware for scalability, and performing system checks driven by events. Our RKRD implementation is guided by our goals of strong isolation, no modifications to target guest OS kernels, easy deployment, minimal infrastructure impact, and minimal performance overhead. We developed a system prototype and conducted a number of experiments which show that the performance impact of our solution is negligible.
Kernel CMAC with improved capability.
Horváth, Gábor; Szabó, Tamás
2007-02-01
The cerebellar model articulation controller (CMAC) has some attractive features, namely fast learning capability and the possibility of efficient digital hardware implementation. Although CMAC was proposed many years ago, several questions have remained open even to this day. The most important ones concern its modeling and generalization capabilities. The limits of its modeling capability were addressed in the literature, and recently, certain questions of its generalization property were also investigated. This paper deals with both the modeling and the generalization properties of CMAC. First, a new interpolation model is introduced. Then, a detailed analysis of the generalization error is given, and an analytical expression of this error for some special cases is presented. It is shown that this generalization error can be rather significant, and a simple regularized training algorithm to reduce this error is proposed. The results related to the modeling capability show that there are differences between the one-dimensional (1-D) and the multidimensional versions of CMAC. This paper discusses the reasons for this difference and suggests a new kernel-based interpretation of CMAC. The kernel interpretation gives a unified framework. Applying this approach, both the 1-D and the multidimensional CMACs can be constructed with similar modeling capability. Finally, this paper shows that the regularized training algorithm can be applied to the kernel interpretations too, which results in a network with significantly improved approximation capabilities. PMID:17278566
NASA Astrophysics Data System (ADS)
Ulmschneider, Peter
When we are looking for intelligent life outside the Earth, there is a fundamental question: Assuming that life has formed on an extraterrestrial planet, will it also develop toward intelligence? As this is hotly debated, we will now describe the development of life on Earth in more detail in order to show that there are good reasons why evolution should culminate in intelligent beings.
Visualizing and Interacting with Kernelized Data.
Barbosa, A; Paulovich, F V; Paiva, A; Goldenstein, S; Petronetto, F; Nonato, L G
2016-03-01
Kernel-based methods have experienced substantial progress in the last years, turning out to be an essential mechanism for data classification, clustering and pattern recognition. The effectiveness of kernel-based techniques, though, depends largely on the capability of the underlying kernel to properly embed data in the feature space associated to the kernel. However, visualizing how a kernel embeds the data in a feature space is not so straightforward, as the embedding map and the feature space are implicitly defined by the kernel. In this work, we present a novel technique to visualize the action of a kernel, that is, how the kernel embeds data into a high-dimensional feature space. The proposed methodology relies on a solid mathematical formulation to map kernelized data onto a visual space. Our approach is faster and more accurate than most existing methods while still allowing interactive manipulation of the projection layout, a game-changing trait that other kernel-based projection techniques do not have. PMID:26829242
Lattice QCD and Nuclear Physics
Konstantinos Orginos
2007-03-01
A steady stream of developments in Lattice QCD has made it possible today to begin to address the question of how nuclear physics emerges from the underlying theory of strong interactions. A central role in this understanding is played by both the effective field theory description of nuclear forces and the ability to perform accurate non-perturbative calculations in low energy QCD. Here I present some recent results that attempt to extract important low energy constants of the effective field theory of nuclear forces from lattice QCD.
Hadron physics in holographic QCD
NASA Astrophysics Data System (ADS)
Santra, A. B.; Lombardo, U.; Bonanno, A.
2012-07-01
Hadron physics deals with the study of strongly interacting subatomic particles such as neutrons, protons, pions and others, collectively known as baryons and mesons. The physics of the strong interaction is difficult, and there are several approaches to understanding it. In recent years, however, an approach called holographic QCD, based on string theory (or gauge-gravity duality), has become popular, providing an alternative description of strong interaction physics. In this article, we aim to discuss the development of strong interaction physics through QCD and string theory, leading to holographic QCD.
Non-perturbative QCD effects in q T spectra of Drell-Yan and Z-boson production
NASA Astrophysics Data System (ADS)
D'Alesio, Umberto; Echevarria, Miguel G.; Melis, Stefano; Scimemi, Ignazio
2014-11-01
The factorization theorems for transverse momentum distributions of dilepton/boson production, recently formulated by Collins and Echevarria-Idilbi-Scimemi in terms of well-defined transverse momentum dependent distributions (TMDs), allow for a systematic and quantitative analysis of non-perturbative QCD effects in the cross sections involving these quantities. In this paper we perform a global fit using all currently available data for Drell-Yan and Z-boson production at hadron colliders within this framework. The perturbatively calculable pieces of our estimates are included using a complete resummation at next-to-next-to-leading-logarithmic accuracy. Performing the matching of transverse momentum distributions onto the standard collinear parton distribution functions and recalling that the corresponding matching coefficient can be partially exponentiated, we find that this exponentiated part is spin-independent and resummable. We argue that the inclusion of higher order perturbative pieces is necessary when data from lower energy scales are analyzed. We consider non-perturbative corrections both to the intrinsic nucleon structure and to the evolution kernel and find that the non-perturbative part of the TMDs can be parametrized in terms of a minimal set of parameters (namely 2-3). When all corrections are included, the global fit gives a χ²/d.o.f. ≲ 1 and a very precise prediction for vector boson production at the Large Hadron Collider (LHC).
QCD Corrections and New Physics - Proceedings of the International Symposium
NASA Astrophysics Data System (ADS)
Kodaira, Jiro; Onogi, Tetsuya; Sasaki, Ken
1998-09-01
The Table of Contents for the full book PDF is as follows: * Preface * Opening Address * Top Quark Physics * Threshold Resummation of Soft Gluons in Hadronic Reactions - An Introduction * Recent Results from CDF * Top Quark Physics: Overview * Complete Description of Polarization Effects in Top Quark Decays Including Higher Order QCD Corrections * Top Pair Production in e+e- and γγ Processes * Structure Functions I * Highlights of Physics at HERA * Some Aspects of the BFKL Evolution * Structure Functions II * New Result from SMC on g_{1}^ρ * Studies of the Nucleon Spin Structure by HERMES * Recent Developments in Perturbative QCD: Q2 Evolution of Chiral-Odd Distributions h1(x,Q2) and hL(x,Q2) * The Small x Behavior of g1 in the Resummed Approach * Jet Physics * QCD Results from LEP1 and LEP2 * Twenty Years of Jet Physics : Old and New * Multi-Parton Loop Amplitudes and Next-to-Leading Order Jet Cross-Sections * Heavy Meson * PQCD Analysis of Inclusive Heavy Hadrons Decays * Strong Coupling Constant from Lattice QCD * Heavy-Light Decay Constant from Lattice NRQCD * Concluding Remarks * Program * Organizing Committee * List of Participants
Nonlinear projection trick in kernel methods: an alternative to the kernel trick.
Kwak, Nojun
2013-12-01
In kernel methods such as kernel principal component analysis (PCA) and support vector machines, the so-called kernel trick is used to avoid direct calculations in a high (virtually infinite) dimensional kernel space. In this brief, based on the fact that the effective dimensionality of a kernel space is less than the number of training samples, we propose an alternative to the kernel trick that explicitly maps the input data into a reduced dimensional kernel space. This is easily obtained by the eigenvalue decomposition of the kernel matrix. The proposed method is named the nonlinear projection trick in contrast to the kernel trick. With this technique, the applicability of the kernel methods is widened to arbitrary algorithms that do not use the dot product. The equivalence between the kernel trick and the nonlinear projection trick is shown for several conventional kernel methods. In addition, we extend PCA-L1, which uses L1-norm instead of L2-norm (or dot product), into a kernel version and show the effectiveness of the proposed approach. PMID:24805227
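The mechanics of the projection trick can be sketched in a few lines: eigendecompose the kernel matrix, keep the eigenpairs above a tolerance (the effective dimensionality), and map points into the resulting reduced space so that kernel evaluations become ordinary dot products. The sketch below is our own illustration assuming an RBF kernel; the function names are not from the paper:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise squared distances -> Gaussian (RBF) kernel matrix
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nonlinear_projection(X, gamma=1.0, tol=1e-10):
    """Explicitly embed training data into a reduced kernel space.

    Returns the embedded points Y and a function mapping new points
    into the same space, so kernel values become plain dot products
    (the 'nonlinear projection trick')."""
    K = rbf_kernel(X, X, gamma)
    w, U = np.linalg.eigh(K)          # K = U diag(w) U^T, w ascending
    keep = w > tol                    # effective dimensionality
    w, U = w[keep], U[:, keep]
    Y = U * np.sqrt(w)                # training embedding: Y @ Y.T ~ K
    def embed(Xnew):
        k = rbf_kernel(Xnew, X, gamma)  # cross-kernel with training set
        return k @ U / np.sqrt(w)       # project into the same basis
    return Y, embed

X = np.random.default_rng(0).normal(size=(20, 3))
Y, embed = nonlinear_projection(X, gamma=0.5)
K = rbf_kernel(X, X, 0.5)
# Dot products in the embedded space reproduce kernel values:
assert np.allclose(Y @ Y.T, K, atol=1e-8)
assert np.allclose(embed(X), Y, atol=1e-8)
```

Any dot-product-based algorithm can then run directly on `Y` without ever referencing the implicit feature space.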
Image texture analysis of crushed wheat kernels
NASA Astrophysics Data System (ADS)
Zayas, Inna Y.; Martin, C. R.; Steele, James L.; Dempster, Richard E.
1992-03-01
The development of new approaches for wheat hardness assessment may impact the grain industry in marketing, milling, and breeding. This study used image texture features for wheat hardness evaluation. Application of digital imaging to grain for grading purposes is principally based on morphometrical (shape and size) characteristics of the kernels. A composite sample of 320 kernels for 17 wheat varieties was collected after testing and crushing with a single kernel hardness characterization meter. Six wheat classes were represented: HRW, HRS, SRW, SWW, Durum, and Club. In this study, parameters which characterize texture or spatial distribution of gray levels of an image were determined and used to classify images of crushed wheat kernels. The texture parameters of crushed wheat kernel images were different depending on class, hardness and variety of the wheat. Image texture analysis of crushed wheat kernels showed promise for use in class, hardness, milling quality, and variety discrimination.
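Gray-level spatial-distribution parameters of the kind described above are commonly computed from a gray-level co-occurrence matrix (GLCM). The sketch below is a generic illustration of that idea, not the authors' exact feature set:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one pixel offset:
    P[i, j] = relative frequency of gray level j occurring at
    offset (dx, dy) from gray level i."""
    q = img.astype(int) * levels // (int(img.max()) + 1)  # quantize
    h, w = q.shape
    P = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            P[q[y, x], q[y + dy, x + dx]] += 1
    return P / P.sum()

def texture_features(P):
    # Classic Haralick-style summaries of the co-occurrence matrix
    i, j = np.indices(P.shape)
    return {
        "contrast":    float((P * (i - j) ** 2).sum()),
        "energy":      float((P ** 2).sum()),
        "homogeneity": float((P / (1.0 + np.abs(i - j))).sum()),
    }

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(32, 32))   # stand-in for a kernel image
feats = texture_features(glcm(img))
assert 0.0 < feats["energy"] <= 1.0 and feats["contrast"] >= 0.0
```

Feature vectors of this sort, computed per kernel image, could then feed a standard classifier for class or hardness discrimination.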
QCD analogy for quantum gravity
NASA Astrophysics Data System (ADS)
Holdom, Bob; Ren, Jing
2016-06-01
Quadratic gravity presents us with a renormalizable, asymptotically free theory of quantum gravity. When its couplings grow strong at some scale, as in QCD, then this strong scale sets the Planck mass. QCD has a gluon that does not appear in the physical spectrum. Quadratic gravity has a spin-2 ghost that we conjecture does not appear in the physical spectrum. We discuss how the QCD analogy leads to this conjecture and to the possible emergence of general relativity. Certain aspects of the QCD path integral and its measure are also similar for quadratic gravity. With the addition of the Einstein-Hilbert term, quadratic gravity has a dimensionful parameter that seems to control a quantum phase transition and the size of a mass gap in the strong phase.
Excited Baryons in Holographic QCD
de Teramond, Guy F.; Brodsky, Stanley J.; /SLAC /Southern Denmark U., CP3-Origins
2011-11-08
The light-front holographic QCD approach is used to describe baryon spectroscopy and the systematics of nucleon transition form factors. Baryon spectroscopy and the excitation dynamics of nucleon resonances encoded in the nucleon transition form factors can provide fundamental insight into the strong-coupling dynamics of QCD. The transition from the hard-scattering perturbative domain to the non-perturbative region is sensitive to the detailed dynamics of confined quarks and gluons. Computations of such phenomena from first principles in QCD are clearly very challenging. The most successful theoretical approach thus far has been to quantize QCD on discrete lattices in Euclidean space-time; however, dynamical observables in Minkowski space-time, such as the time-like hadronic form factors are not amenable to Euclidean numerical lattice computations.
Molecular Hydrodynamics from Memory Kernels.
Lesnicki, Dominika; Vuilleumier, Rodolphe; Carof, Antoine; Rotenberg, Benjamin
2016-04-01
The memory kernel for a tagged particle in a fluid, computed from molecular dynamics simulations, decays algebraically as t^{-3/2}. We show how the hydrodynamic Basset-Boussinesq force naturally emerges from this long-time tail and generalize the concept of hydrodynamic added mass. This mass term is negative in the present case of a molecular solute, which is at odds with incompressible hydrodynamics predictions. Lastly, we discuss the various contributions to the friction, the associated time scales, and the crossover between the molecular and hydrodynamic regimes upon increasing the solute radius. PMID:27104730
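The connection between the algebraic tail and the Basset-Boussinesq force can be made explicit in the generalized Langevin equation; the following is a standard schematic relation assembled from the abstract's ingredients, not an equation quoted from the paper:

```latex
m\,\dot v(t) \;=\; -\int_0^{t} K(t-s)\, v(s)\,\mathrm{d}s \;+\; R(t),
\qquad K(t) \sim t^{-3/2}\ \text{at long times,}
```

where, after integration by parts, a kernel decaying as $t^{-3/2}$ acting on $v(s)$ corresponds to the familiar $(t-s)^{-1/2}$ Basset history kernel acting on the acceleration, $F_B(t) \propto \int_0^t \dot v(s)/\sqrt{t-s}\,\mathrm{d}s$.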
KERNEL PHASE IN FIZEAU INTERFEROMETRY
Martinache, Frantz
2010-11-20
The detection of high contrast companions at small angular separation appears feasible in conventional direct images using the self-calibration properties of interferometric observable quantities. The friendly notion of closure phase, which is key to the recent observational successes of non-redundant aperture masking interferometry used with adaptive optics, appears to be one example of a wide family of observable quantities that are not contaminated by phase noise. In the high-Strehl regime, soon to be available thanks to the coming generation of extreme adaptive optics systems on ground-based telescopes, and already available from space, closure phase like information can be extracted from any direct image, even taken with a redundant aperture. These new phase-noise immune observable quantities, called kernel phases, are determined a priori from the knowledge of the geometry of the pupil only. Re-analysis of archive data acquired with the Hubble Space Telescope NICMOS instrument using this new kernel-phase algorithm demonstrates the power of the method as it clearly detects and locates with milliarcsecond precision a known companion to a star at angular separation less than the diffraction limit.
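In the high-Strehl linearization the construction can be stated compactly (notation ours, following the general idea rather than the paper's exact conventions): the Fourier phases $\Phi$ measured in the image respond linearly to residual pupil-plane phase errors $\varphi$, and kernel phases are the linear combinations that annihilate that response:

```latex
\Phi \;\approx\; \Phi_0 \;+\; \mathbf{A}\,\varphi ,
\qquad
\mathbf{K}\,\mathbf{A} = 0
\;\;\Longrightarrow\;\;
\mathbf{K}\,\Phi \;=\; \mathbf{K}\,\Phi_0 ,
```

so the quantities $\mathbf{K}\,\Phi$ depend only on the target, not on the phase noise, and the rows of $\mathbf{K}$ are fixed a priori by the pupil geometry through $\mathbf{A}$.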
Nuclear chromodynamics: applications of QCD to relativistic multiquark systems
Brodsky, S.J.; Ji, C.R.
1984-07-01
We review the applications of quantum chromodynamics to nuclear multiquark systems. In particular, predictions are given for the deuteron reduced form factor in the high momentum transfer region, hidden color components in nuclear wavefunctions, and the short distance effective force between nucleons. A new antisymmetrization technique is presented which allows a basis for relativistic multiquark wavefunctions and solutions to their evolution to short distances. Areas in which conventional nuclear theory conflicts with QCD are also briefly reviewed. 48 references.
QCD measurements at the Tevatron
Bandurin, Dmitry; /Florida State U.
2011-12-01
Selected quantum chromodynamics (QCD) measurements performed at the Fermilab Run II Tevatron p p̄ collider running at √s = 1.96 TeV by the CDF and D0 Collaborations are presented. The inclusive jet, dijet production and three-jet cross section measurements are used to test perturbative QCD calculations, constrain parton distribution function (PDF) determinations, and extract a precise value of the strong coupling constant, α_s(m_Z) = 0.1161^{+0.0041}_{-0.0048}. Inclusive photon production cross-section measurements reveal an inability of next-to-leading-order (NLO) perturbative QCD (pQCD) calculations to describe low-energy photons arising directly in the hard scatter. The diphoton production cross-sections check the validity of the NLO pQCD predictions, soft-gluon resummation methods implemented in theoretical calculations, and contributions from the parton-to-photon fragmentation diagrams. Events with W/Z+jets production are used to measure many kinematic distributions, allowing extensive tests and tunes of predictions from NLO pQCD and Monte-Carlo (MC) event generators. The charged-particle transverse momentum (p_T) and multiplicity distributions in inclusive minimum bias events are used to tune non-perturbative QCD models, including those describing multiple parton interactions (MPI). Events with inclusive production of γ and 2 or 3 jets are used to study the increasingly important MPI phenomenon at high p_T, measure an effective interaction cross section, σ_eff = 16.4 ± 2.3 mb, and limit existing MPI models.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume...
Code of Federal Regulations, 2010 CFR
2010-01-01
... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume...
Code of Federal Regulations, 2013 CFR
2013-01-01
..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than...
Code of Federal Regulations, 2012 CFR
2012-01-01
... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume...
Code of Federal Regulations, 2014 CFR
2014-01-01
..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than...
Kernel spectral clustering with memory effect
NASA Astrophysics Data System (ADS)
Langone, Rocco; Alzate, Carlos; Suykens, Johan A. K.
2013-05-01
Evolving graphs describe many natural phenomena changing over time, such as social relationships, trade markets, metabolic networks, etc. In this framework, performing community detection and analyzing the cluster evolution represents a critical task. Here we propose a new model for this purpose, where the smoothness of the clustering results over time can be considered as valid prior knowledge. It is based on a constrained optimization formulation typical of Least Squares Support Vector Machines (LS-SVM), where the objective function is designed to explicitly incorporate temporal smoothness. The latter allows the model to cluster the current data well and to be consistent with the recent history. We also propose new model selection criteria in order to carefully choose the hyper-parameters of our model, which is a crucial issue for achieving good performance. We successfully test the model on four toy problems and on a real world network. We also compare our model with Evolutionary Spectral Clustering, which is a state-of-the-art algorithm for community detection of evolving networks, illustrating that kernel spectral clustering with memory effect can achieve better or equal performance.
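The temporal-smoothness idea can be illustrated with a minimal evolutionary-spectral-style sketch: blend the current affinity matrix with the previous snapshot's before clustering, so the partition stays consistent with recent history. This is our own toy variant of plain spectral clustering, not the paper's LS-SVM formulation:

```python
import numpy as np

def affinity(X, gamma=1.0):
    # Gaussian (RBF) affinity matrix between all pairs of points
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def spectral_2way(W):
    # Normalized Laplacian; the Fiedler vector's sign gives a 2-way split
    d = W.sum(1)
    L = np.eye(len(W)) - W / np.sqrt(d[:, None] * d[None, :])
    _, vecs = np.linalg.eigh(L)
    return (vecs[:, 1] > 0).astype(int)

def memory_spectral(W_now, W_prev, alpha=0.3):
    # Temporal-smoothness prior: cluster a blend of today's affinities
    # with yesterday's, so partitions evolve consistently over time.
    return spectral_2way((1 - alpha) * W_now + alpha * W_prev)

rng = np.random.default_rng(0)
blob = lambda c: np.asarray(c) + 0.2 * rng.normal(size=(10, 2))
X_prev = np.vstack([blob([0, 0]), blob([5, 5])])      # snapshot at t-1
X_now = np.vstack([blob([0.3, 0]), blob([5, 4.7])])   # drifted at t
labels = memory_spectral(affinity(X_now), affinity(X_prev))
# Each blob ends up in its own cluster (up to label swap):
assert len(set(labels[:10])) == 1 and len(set(labels[10:])) == 1
assert labels[0] != labels[10]
```

Larger `alpha` weights history more heavily, trading responsiveness to change for stability of the partition.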
Hairpin Vortex Dynamics in a Kernel Experiment
NASA Astrophysics Data System (ADS)
Meng, H.; Yang, W.; Sheng, J.
1998-11-01
A surface-mounted trapezoidal tab is known to shed hairpin-like vortices and generate a pair of counter-rotating vortices in its wake. Such a flow serves as a kernel experiment for studying the dynamics of these vortex structures. Created by and scaled with the tab, the vortex structures are more orderly and larger than those in natural wall turbulence and thus suitable for measurement by Particle Image Velocimetry (PIV) and visualization by Planar Laser Induced Fluorescence (PLIF). Time-series PIV provides insight into the evolution, self-enhancement, regeneration, and interaction of hairpin vortices, as well as interactions of the hairpins with the pressure-induced counter-rotating vortex pair (CVP). The topology of the wake structure indicates that the hairpin "heads" are formed from lifted shear-layer instability and "legs" from stretching by the CVP, which passes the energy to the hairpins. The CVP diminishes after one tab height, while the hairpins persist until 10-20 tab heights downstream. It is concluded that the lift-up of the near-surface viscous fluids is the key to hairpin vortex dynamics. Whether from the pumping action of the CVP or the ejection by an existing hairpin, the 3D lift-up of near-surface vorticity contributes to the increase of hairpin vortex strength and the creation of secondary hairpins. http://www.mne.ksu.edu/ meng/labhome.html
Corn kernel oil and corn fiber oil
Technology Transfer Automated Retrieval System (TEKTRAN)
Unlike most edible plant oils that are obtained directly from oil-rich seeds by either pressing or solvent extraction, corn seeds (kernels) have low levels of oil (4%) and commercial corn oil is obtained from the corn germ (embryo) which is an oil-rich portion of the kernel. Commercial corn oil cou...
Andersen, Jens O.; Leganger, Lars E.; Strickland, Michael; Su, Nan
2011-10-15
In this brief report we compare the predictions of a recent next-to-next-to-leading order hard-thermal-loop perturbation theory (HTLpt) calculation of the QCD trace anomaly to available lattice data. We focus on the trace anomaly scaled by T² in two cases: N_f = 0 and N_f = 3. When using the canonical value of μ = 2πT for the renormalization scale, we find that for Yang-Mills theory (N_f = 0) agreement between HTLpt and lattice data for the T²-scaled trace anomaly begins at temperatures on the order of 8T_c, while treating the subtracted piece as an interaction term when including quarks (N_f = 3) agreement begins already at temperatures above 2T_c. In both cases we find that at very high temperatures the T²-scaled trace anomaly increases with temperature in accordance with the predictions of HTLpt.
Recent QCD results from the Tevatron
Pickarz, Henryk; CDF and D0 collaboration
1997-02-01
Recent QCD results from the CDF and D0 detectors at the Tevatron proton-antiproton collider are presented. An outlook for future QCD tests at the Tevatron collider is also briefly discussed. 27 refs., 11 figs.
Kenneth Wilson and Lattice QCD
NASA Astrophysics Data System (ADS)
Ukawa, Akira
2015-09-01
We discuss the physics and computation of lattice QCD, a space-time lattice formulation of quantum chromodynamics, and Kenneth Wilson's seminal role in its development. We start with the fundamental issue of confinement of quarks in the theory of the strong interactions, and discuss how lattice QCD provides a framework for understanding this phenomenon. A conceptual issue with lattice QCD is a conflict of space-time lattice with chiral symmetry of quarks. We discuss how this problem is resolved. Since lattice QCD is a non-linear quantum dynamical system with infinite degrees of freedom, quantities which are analytically calculable are limited. On the other hand, it provides an ideal case of massively parallel numerical computations. We review the long and distinguished history of parallel-architecture supercomputers designed and built for lattice QCD. We discuss algorithmic developments, in particular the difficulties posed by the fermionic nature of quarks, and their resolution. The triad of efforts toward better understanding of physics, better algorithms, and more powerful supercomputers have produced major breakthroughs in our understanding of the strong interactions. We review the salient results of this effort in understanding the hadron spectrum, the Cabibbo-Kobayashi-Maskawa matrix elements and CP violation, and quark-gluon plasma at high temperatures. We conclude with a brief summary and a future perspective.
Threefold Complementary Approach to Holographic QCD
Brodsky, Stanley J.; de Teramond, Guy F.; Dosch, Hans Gunter
2013-12-27
A complementary approach, derived from (a) higher-dimensional anti-de Sitter (AdS) space, (b) light-front quantization and (c) the invariance properties of the full conformal group in one dimension leads to a nonperturbative relativistic light-front wave equation which incorporates essential spectroscopic and dynamical features of hadron physics. The fundamental conformal symmetry of the classical QCD Lagrangian in the limit of massless quarks is encoded in the resulting effective theory. The mass scale for confinement emerges from the isomorphism between the conformal group and SO(2,1). This scale appears in the light-front Hamiltonian by mapping to the evolution operator in the formalism of de Alfaro, Fubini and Furlan, which retains the conformal invariance of the action. Remarkably, the specific form of the confinement interaction and the corresponding modification of AdS space are uniquely determined in this procedure.
Bayesian Kernel Mixtures for Counts
Canale, Antonio; Dunson, David B.
2011-01-01
Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviations from the Poisson. As a broad class of alternative models, we propose to use nonparametric mixtures of rounded continuous kernels. An efficient Gibbs sampler is developed for posterior computation, and a simulation study is performed to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. The methods are illustrated through applications to a developmental toxicity study and marketing data. This article has supplementary material online. PMID:22523437
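The rounding idea is simple to state concretely: draw from a continuous mixture, then map each draw to a count through fixed thresholds. The following minimal numeric sketch of the induced count pmf uses our own threshold convention for illustration; the paper's construction is more general:

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def rounded_gaussian_pmf(j, mu, sigma):
    """P(Y = j) when a latent Gaussian Y* is rounded to a count:
    Y = 0 if Y* < 1, else Y = floor(Y*).  (Threshold convention is
    ours for illustration; the paper's construction is more general.)"""
    hi = norm_cdf((j + 1 - mu) / sigma)
    lo = 0.0 if j == 0 else norm_cdf((j - mu) / sigma)
    return hi - lo

def mixture_count_pmf(j, weights, mus, sigmas):
    # A mixture of rounded Gaussians: a flexible count distribution
    # that, unlike a Poisson mixture, can have variance below the mean.
    return sum(w * rounded_gaussian_pmf(j, m, s)
               for w, m, s in zip(weights, mus, sigmas))

# The pmf telescopes, so it sums to ~1 over a wide enough range:
total = sum(mixture_count_pmf(j, [0.6, 0.4], [2.0, 7.0], [0.5, 1.5])
            for j in range(100))
assert abs(total - 1.0) < 1e-9
```

A narrow component (small `sigma`) concentrates mass on few counts, which is how the model achieves variance below the mean.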
QCD sum rules on the complex Borel plane
NASA Astrophysics Data System (ADS)
Araki, Ken-Ji; Ohtani, Keisuke; Gubler, Philipp; Oka, Makoto
2014-07-01
Borel-transformed QCD sum rules conventionally use a real-valued parameter (the Borel mass) for specifying the exponential weight over which hadronic spectral functions are averaged. In this paper, it is shown that the Borel mass can be generalized to have complex values and that new classes of sum rules can be derived from the resulting averages over the spectral functions. The real and imaginary parts of these novel sum rules turn out to have damped oscillating kernels and potentially contain a larger amount of information on the hadronic spectrum than the real-valued QCD sum rules. As a first practical test, we have formulated complex Borel sum rules for the φ-meson channel and have analyzed them using the maximum entropy method, by which we can extract the most probable spectral function from the sum rules without strong assumptions on its functional form. As a result, it is demonstrated that, compared to earlier studies, the complex-valued sum rules allow us to extract the spectral function with a significantly improved resolution and thus to study more detailed structures of the hadronic spectrum than previously possible.
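The mechanism behind the damped oscillating kernels can be made explicit (notation assumed here, not copied from the paper): writing the complex Borel mass as M² = |M²|e^{iθ}, the conventional exponential kernel factorizes into a damping envelope times an oscillation,

```latex
e^{-s/M^2}
  = \exp\!\left(-\frac{s\cos\theta}{|M^2|}\right)
    \left[\cos\!\left(\frac{s\sin\theta}{|M^2|}\right)
        + i\,\sin\!\left(\frac{s\sin\theta}{|M^2|}\right)\right],
\qquad M^2 = |M^2|\,e^{i\theta},
```

so the real and imaginary parts of the sum rule weight the spectral function with damped cosines and sines rather than a pure decaying exponential.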
LATTICE QCD THERMODYNAMICS WITH WILSON QUARKS.
EJIRI,S.
2007-11-20
We review studies of QCD thermodynamics by lattice QCD simulations with dynamical Wilson quarks. After explaining the basic properties of QCD with Wilson quarks at finite temperature including the phase structure and the scaling properties around the chiral phase transition, we discuss the critical temperature, the equation of state and heavy-quark free energies.
Lattice QCD input for axion cosmology
NASA Astrophysics Data System (ADS)
Berkowitz, Evan; Buchoff, Michael I.; Rinaldi, Enrico
2015-08-01
One intriguing beyond-the-Standard-Model particle is the QCD axion, which could simultaneously provide a solution to the Strong CP Problem and account for some, if not all, of the dark matter density in the Universe. This particle is a pseudo-Nambu-Goldstone boson of the conjectured Peccei-Quinn symmetry of the Standard Model. Its mass and interactions are suppressed by a heavy symmetry-breaking scale, f_a, the value of which is roughly greater than 10^9 GeV (or, conversely, the axion mass, m_a, is roughly less than 10^4 μeV). The density of axions in the Universe, which cannot exceed the relic dark matter density and is a quantity of great interest in axion experiments like ADMX, is a result of the early-Universe interplay between cosmological evolution and the axion mass as a function of temperature. The latter quantity is proportional to the second derivative of the temperature-dependent QCD free energy with respect to the CP-violating phase, θ. However, this quantity is generically nonperturbative, and previous calculations have only employed instanton models at the high temperatures of interest (roughly 1 GeV). In this and future works, we aim to calculate the temperature-dependent axion mass at small θ from first-principles lattice calculations, with controlled statistical and systematic errors. Once calculated, this temperature-dependent axion mass is input for the classical evolution equations of the axion density of the Universe, which is required to be less than or equal to the dark matter density. Due to a variety of lattice systematic effects at the very high temperatures required, we perform a calculation of the leading small-θ cumulant of the theta vacua on large-volume lattices for SU(3) Yang-Mills with high statistics as a first proof of concept, before attempting a full QCD calculation in the future. From these pure-glue results, the misalignment mechanism yields the axion mass bound m_a ≥ (14.6 ± 0.1) μeV when Peccei-Quinn breaking occurs
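The chain of standard relations underlying this program (textbook definitions, not results quoted from the paper) connects the lattice observable to the axion mass:

```latex
\chi(T) \;=\; \left.\frac{\partial^2 F(\theta,T)}{\partial\theta^2}\right|_{\theta=0}
       \;=\; \frac{\langle Q^2\rangle}{V},
\qquad
m_a^2(T)\,f_a^2 \;=\; \chi(T),
```

where Q is the topological charge and V the lattice four-volume; the temperature-dependent χ(T) then feeds the classical misalignment evolution of the axion density.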
A Framework for Lattice QCD Calculations on GPUs
Winter, Frank; Clark, M A; Edwards, Robert G; Joo, Balint
2014-08-01
Computing platforms equipped with accelerators like GPUs have proven to provide great computational power. However, exploiting such platforms for existing scientific applications is not a trivial task. Current GPU programming frameworks such as CUDA C/C++ require low-level programming from the developer in order to achieve high-performance code. As a result, porting of applications to GPUs is typically limited to time-dominant algorithms and routines, leaving the remainder unaccelerated, which can create a serious Amdahl's law bottleneck. The lattice QCD application Chroma makes it possible to explore a different porting strategy. The layered structure of the software architecture logically separates the data-parallel layer from the application layer. The QCD Data-Parallel software layer provides data types and expressions with stencil-like operations suitable for lattice field theory, and Chroma implements algorithms in terms of this high-level interface. Thus, by porting the low-level layer, one can effectively move the whole application in one swing to a different platform. The QDP-JIT/PTX library, the reimplementation of the low-level layer, provides a framework for lattice QCD calculations for the CUDA architecture. The complete software interface is supported, and thus applications can be run unaltered on GPU-based parallel computers. This reimplementation was possible due to the availability of a JIT compiler (part of the NVIDIA Linux kernel driver) which translates an assembly-like language (PTX) to GPU code. The expression template technique is used to build PTX code generators, and a software cache manages the GPU memory. This reimplementation allows us to deploy an efficient implementation of the full gauge-generation program with dynamical fermions on large-scale GPU-based machines such as Titan and Blue Waters, which accelerates the algorithm by more than an order of magnitude.
J.J. Sakurai Prize for Theoretical Particle Physics: 40 Years of Lattice QCD
NASA Astrophysics Data System (ADS)
Lepage, Peter
2016-03-01
Lattice QCD was invented in 1973-74 by Ken Wilson, who passed away in 2013. This talk will describe the evolution of lattice QCD through the past 40 years with particular emphasis on its first years, and on the past decade, when lattice QCD simulations finally came of age. Thanks to theoretical breakthroughs in the late 1990s and early 2000s, lattice QCD simulations now produce the most accurate theoretical calculations in the history of strong-interaction physics. They play an essential role in high-precision experimental studies of physics within and beyond the Standard Model of Particle Physics. The talk will include a non-technical review of the conceptual ideas behind this revolutionary development in (highly) nonlinear quantum physics, together with a survey of its current impact on theoretical and experimental particle physics, and prospects for the future. Work supported by the National Science Foundation.
Glueball decay in holographic QCD
Hashimoto, Koji; Tan, C.-I; Terashima, Seiji
2008-04-15
Using holographic QCD based on D4-branes and D8-anti-D8-branes, we have computed couplings of glueballs to light mesons. We describe glueball decay by explicitly calculating its decay widths and branching ratios. Interestingly, while glueballs remain less well understood both theoretically and experimentally, our results are found to be consistent with the experimental data for the scalar glueball candidate f_0(1500). More generally, holographic QCD predicts that decay of any glueball to 4π^0 is suppressed, and that mixing of the lightest glueball with qq mesons is small.
QCD: Challenges for the future
Burrows, P.; Dawson, S.; Orr, L.; Smith, W.H.
1997-01-13
Despite many experimental verifications of the correctness of our basic understanding of QCD, there remain numerous open questions in strong interaction physics, and we focus on the role of future colliders in addressing these questions. We discuss possible advances in the measurement of α_s, in the study of parton distribution functions, and in the understanding of low-x physics at present colliders and potential new facilities. We also touch briefly on the role of spin physics in advancing our understanding of QCD.
Neutron star structure from QCD
NASA Astrophysics Data System (ADS)
Fraga, Eduardo S.; Kurkela, Aleksi; Vuorinen, Aleksi
2016-03-01
In this review article, we argue that our current understanding of the thermodynamic properties of cold QCD matter, originating from first principles calculations at high and low densities, can be used to efficiently constrain the macroscopic properties of neutron stars. In particular, we demonstrate that combining state-of-the-art results from Chiral Effective Theory and perturbative QCD with the current bounds on neutron star masses, the Equation of State of neutron star matter can be obtained to an accuracy better than 30% at all densities.
The supercritical pomeron in QCD.
White, A. R.
1998-06-29
Deep-inelastic diffractive scaling violations have provided fundamental insight into the QCD pomeron, suggesting a single gluon inner structure rather than that of a perturbative two-gluon bound state. This talk outlines a derivation of a high-energy, transverse momentum cut-off, confining solution of QCD. The pomeron, in first approximation, is a single reggeized gluon plus a ''wee parton'' component that compensates for the color and particle properties of the gluon. This solution corresponds to a super-critical phase of Reggeon Field Theory.
QCD inequalities for hadron interactions.
Detmold, William
2015-06-01
We derive generalizations of the Weingarten-Witten QCD mass inequalities for particular multihadron systems. For systems of any number of identical pseudoscalar mesons of maximal isospin, these inequalities prove that near-threshold interactions between the constituent mesons must be repulsive and that no bound states can form in these channels. Similar constraints in less symmetric systems are also extracted. These results are compatible with experimental results (where known) and recent lattice QCD calculations, and also lead to a more stringent bound on the nucleon mass than previously derived, m_N ≥ (3/2) m_π. PMID:26196617
Yun, J.C.
1990-10-10
In this paper we report a recent QCD analysis of new data taken with the CDF detector. CDF recorded an integrated luminosity of 4.4 nb^-1 during the 1988-1989 run at a center-of-mass (CMS) energy of 1.8 TeV. The major topics of this report are inclusive jet, dijet, trijet and direct photon analyses. These measurements are compared to QCD predictions. For the inclusive jet and dijet analyses, tests of quark compositeness are emphasized. 11 refs., 6 figs.
QCD corrections to triboson production
NASA Astrophysics Data System (ADS)
Lazopoulos, Achilleas; Melnikov, Kirill; Petriello, Frank
2007-07-01
We present a computation of the next-to-leading order QCD corrections to the production of three Z bosons at the Large Hadron Collider. We calculate these corrections using a completely numerical method that combines sector decomposition to extract infrared singularities with contour deformation of the Feynman parameter integrals to avoid internal loop thresholds. The NLO QCD corrections to pp→ZZZ are approximately 50% and are badly underestimated by the leading order scale dependence. However, the kinematic dependence of the corrections is minimal in phase space regions accessible at leading order.
Lattice QCD clusters at Fermilab
Holmgren, D.; Mackenzie, Paul B.; Singh, Anitoj; Simone, Jim; /Fermilab
2004-12-01
As part of the DOE SciDAC ''National Infrastructure for Lattice Gauge Computing'' project, Fermilab builds and operates production clusters for lattice QCD simulations. This paper will describe these clusters. The design of lattice QCD clusters requires careful attention to balancing memory bandwidth, floating point throughput, and network performance. We will discuss our investigations of various commodity processors, including Pentium 4E, Xeon, Opteron, and PPC970. We will also discuss our early experiences with the emerging Infiniband and PCI Express architectures. Finally, we will present our predictions and plans for future clusters.
Nucleon Structure from Lattice QCD
David Richards
2007-09-05
Recent advances in lattice field theory, in computer technology and in chiral perturbation theory have enabled lattice QCD to emerge as a powerful quantitative tool in understanding hadron structure. I describe recent progress in the computation of the nucleon form factors and moments of parton distribution functions, before proceeding to describe lattice studies of the Generalized Parton Distributions (GPDs). In particular, I show how lattice studies of GPDs contribute to building a three-dimensional picture of the proton. I conclude by describing the prospects for studying the structure of resonances from lattice QCD.
Putting Priors in Mixture Density Mercer Kernels
NASA Technical Reports Server (NTRS)
Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd
2004-01-01
This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite-dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn kernel functions directly from data, rather than using predefined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.
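The kernel family named here can be illustrated with a minimal single-model sketch: a Mercer kernel built from the posterior membership probabilities of a mixture density. This is a simplified reading (one fixed 1-D Gaussian mixture rather than an ensemble of Bayesian-fitted models), with all parameters invented for illustration.

```python
import numpy as np

def posteriors(x, means, sds, weights):
    """Component membership probabilities P(k|x) under a 1-D Gaussian mixture."""
    x = np.atleast_1d(x)[:, None]
    dens = weights * np.exp(-0.5 * ((x - means) / sds) ** 2) / (sds * np.sqrt(2 * np.pi))
    return dens / dens.sum(axis=1, keepdims=True)

def mixture_density_kernel(xa, xb, means, sds, weights):
    """K(x, x') = sum_k P(k|x) P(k|x'): an inner product of posterior
    vectors, hence a symmetric positive semi-definite Mercer kernel."""
    return posteriors(xa, means, sds, weights) @ posteriors(xb, means, sds, weights).T

# Invented two-component mixture; in the paper the mixture is fitted to data.
means = np.array([-2.0, 2.0])
sds = np.array([1.0, 1.0])
w = np.array([0.5, 0.5])
x = np.linspace(-4.0, 4.0, 9)
K = mixture_density_kernel(x, x, means, sds, w)
print(K.shape)  # (9, 9) Gram matrix
```

Because K is an inner product of probability vectors, it is positive semi-definite by construction, and two points score high similarity exactly when the mixture assigns them to the same components.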
Huang, Lulu; Massa, Lou
2010-01-01
The Kernel Energy Method (KEM) provides a way to calculate the ab-initio energy of very large biological molecules. The results are accurate, and the computational time is reduced. However, by use of a list of double kernel interactions, a significant additional reduction of computational effort may be achieved while still retaining ab-initio accuracy. A numerical comparison of the indices that name the known double interactions in question allows one to list higher order interactions having the property of topological continuity within the full molecule of interest. When that list of interactions is unpacked, as a kernel expansion which weights the relative importance of each kernel in an expression for the total molecular energy, high accuracy and a further significant reduction in computational effort result. A KEM molecular energy calculation based upon the HF/STO3G chemical model is applied to the protein insulin, as an illustration. PMID:21243065
NASA Astrophysics Data System (ADS)
Boz, Tamer; Giudice, Pietro; Hands, Simon; Skullerud, Jon-Ivar; Williams, Anthony G.
2016-01-01
QCD at high chemical potential has interesting properties such as deconfinement of quarks. Two-color QCD, which enables numerical simulations on the lattice, constitutes a laboratory to study QCD at high chemical potential. Among the interesting properties of two-color QCD at high density is the diquark condensation, for which we present recent results obtained on a finer lattice compared to previous studies. The quark propagator in two-color QCD at non-zero chemical potential is referred to as the Gor'kov propagator. We express the Gor'kov propagator in terms of form factors and present recent lattice simulation results.
Kernel map compression for speeding the execution of kernel-based methods.
Arif, Omar; Vela, Patricio A
2011-06-01
The use of Mercer kernel methods in statistical learning theory provides for strong learning capabilities, as seen in kernel principal component analysis and support vector machines. Unfortunately, after learning, the computational complexity of execution through a kernel is of the order of the size of the training set, which is quite large for many applications. This paper proposes a two-step procedure for arriving at a compact and computationally efficient execution procedure. After learning in the kernel space, the proposed extension exploits the universal approximation capabilities of generalized radial basis function neural networks to efficiently approximate and replace the projections onto the empirical kernel map used during execution. Sample applications demonstrate significant compression of the kernel representation with graceful performance loss. PMID:21550884
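The two-step compression idea (learn in kernel space, then approximate the empirical kernel map with a smaller radial basis expansion) can be sketched as follows. The center selection (random subsampling) and all sizes are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(X, C, gamma):
    """Gaussian kernel matrix k(x, c) = exp(-gamma * ||x - c||^2)."""
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# A learned projection onto the empirical kernel map of n = 500 points:
# g(x) = sum_i alpha_i k(x_i, x), which costs O(n) kernel evaluations per query.
gamma = 0.2
X = rng.normal(size=(500, 2))
alpha = rng.normal(size=500)
g = rbf(X, X, gamma) @ alpha

# Compression: keep m = 25 centers and fit weights by least squares so the
# small expansion reproduces g on the training points (O(m) per query).
C = X[rng.choice(len(X), size=25, replace=False)]
beta, *_ = np.linalg.lstsq(rbf(X, C, gamma), g, rcond=None)
g_tilde = rbf(X, C, gamma) @ beta

rel_err = np.linalg.norm(g - g_tilde) / np.linalg.norm(g)
print(rel_err)
```

The 20x smaller expansion reproduces the projection with modest relative error, illustrating the "compression with graceful performance loss" trade-off described in the abstract.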
Constructing perturbation theory kernels for large-scale structure in generalized cosmologies
NASA Astrophysics Data System (ADS)
Taruya, Atsushi
2016-07-01
We present a simple numerical scheme for perturbation theory (PT) calculations of large-scale structure. Solving the evolution equations for perturbations numerically, we construct the PT kernels as building blocks of statistical calculations, from which the power spectrum and/or correlation function can be systematically computed. The scheme is especially applicable to generalized structure formation, including modified gravity, in which the analytic construction of PT kernels is intractable. As an illustration, we show several examples for power spectrum calculations in f(R) gravity and ΛCDM models.
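For orientation, in the standard ΛCDM case (under the usual Einstein-de Sitter approximation for the time dependence) the second-order density kernel that such a numerical scheme must reproduce has the closed form

```latex
F_2(\mathbf{k}_1,\mathbf{k}_2)
  = \frac{5}{7}
  + \frac{1}{2}\,\frac{\mathbf{k}_1\cdot\mathbf{k}_2}{k_1 k_2}
    \left(\frac{k_1}{k_2}+\frac{k_2}{k_1}\right)
  + \frac{2}{7}\left(\frac{\mathbf{k}_1\cdot\mathbf{k}_2}{k_1 k_2}\right)^{2};
```

in modified-gravity models no such closed form exists, which is what makes a numerical construction of the kernels necessary.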
Lattice QCD in Background Fields
William Detmold, Brian Tiburzi, Andre Walker-Loud
2009-06-01
Electromagnetic properties of hadrons can be computed by lattice simulations of QCD in background fields. We demonstrate new techniques for the investigation of charged hadron properties in electric fields. Our current calculations employ large electric fields, motivating us to analyze chiral dynamics in strong QED backgrounds, and subsequently uncover surprising non-perturbative effects present at finite volume.
Renormalization in Coulomb gauge QCD
NASA Astrophysics Data System (ADS)
Andraši, A.; Taylor, John C.
2011-04-01
In the Coulomb gauge of QCD, the Hamiltonian contains a non-linear Christ-Lee term, which may alternatively be derived from a careful treatment of ambiguous Feynman integrals at 2-loop order. We investigate how and if UV divergences from higher order graphs can be consistently absorbed by renormalization of the Christ-Lee term. We find that they cannot.
QCD Phase Transitions, Volume 15
Schaefer, T.; Shuryak, E.
1999-03-20
The title of the workshop, ''The QCD Phase Transitions'', in fact happened to be too narrow for its real contents. It would be more accurate to say that it was devoted to different phases of QCD and QCD-related gauge theories, with strong emphasis on discussion of the underlying non-perturbative mechanisms which manifest themselves as all those phases. Before we go to specifics, let us emphasize one important aspect of the present status of non-perturbative Quantum Field Theory in general. It remains true that its studies do not get attention proportional to the intellectual challenge they deserve, and that the theorists working on it remain very fragmented. The efforts to create a Theory of Everything including Quantum Gravity have attracted the lion's share of attention and young talent. Nevertheless, in the last few years there was also tremendous progress and even some shift of attention toward emphasis on the unity of non-perturbative phenomena. For example, we have seen some efforts to connect the lessons from recent progress in supersymmetric theories with that in QCD, as derived from phenomenology and the lattice. Another example is the Maldacena conjecture and related developments, which connect three things together: string theory, supergravity and the N=4 supersymmetric gauge theory. Although the progress mentioned is remarkable by itself, if we listened to each other more we might have a chance to strengthen the field and reach a better understanding of the spectacular non-perturbative physics.
Seven topics in perturbative QCD
Buras, A.J.
1980-09-01
The following topics of perturbative QCD are discussed: (1) deep inelastic scattering; (2) higher order corrections to e^+e^- annihilation, to photon structure functions and to quarkonia decays; (3) higher order corrections to fragmentation functions and to various semi-inclusive processes; (4) higher twist contributions; (5) exclusive processes; (6) transverse momentum effects; (7) jet and photon physics.
Basics of QCD perturbation theory
Soper, D.E.
1997-06-01
This is an introduction to the use of QCD perturbation theory, emphasizing generic features of the theory that enable one to separate short-time and long-time effects. The author also covers some important classes of applications: electron-positron annihilation to hadrons, deeply inelastic scattering, and hard processes in hadron-hadron collisions. 31 refs., 38 figs.
Experimenting with Langevin lattice QCD
Gavai, R.V.; Potvin, J.; Sanielevici, S.
1987-05-01
We report on the status of our investigations of the effects of systematic errors upon the practical merits of Langevin updating in full lattice QCD. We formulate some rules for the safe use of this updating procedure and some observations on problems which may be common to all approximate fermion algorithms.
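For reference, a minimal Langevin update for a single degree of freedom with action S(φ) = φ²/2 shows the O(ε) step-size bias that such systematic-error studies must control (a toy model, not a lattice QCD implementation):

```python
import numpy as np

rng = np.random.default_rng(4)

def langevin_sample(dSdphi, phi0, eps, n_steps):
    """First-order Langevin update: phi -> phi - eps * dS/dphi + sqrt(2*eps) * eta.
    The finite step size eps introduces the O(eps) systematic error that
    Langevin-based lattice algorithms must quantify and control."""
    phi = phi0
    out = np.empty(n_steps)
    for i in range(n_steps):
        phi = phi - eps * dSdphi(phi) + np.sqrt(2.0 * eps) * rng.normal()
        out[i] = phi
    return out

# Gaussian action S(phi) = phi^2 / 2, exact <phi^2> = 1.
samples = langevin_sample(lambda p: p, 0.0, 0.01, 200_000)
print(samples[1000:].var())  # near 1, up to O(eps) bias and statistics
```

For this Gaussian action the discretized chain is exactly solvable, so the measured ⟨φ²⟩ can be checked against the known 1/(1 - ε/2) step-size bias, which is the kind of control the abstract's "rules for safe use" are about.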
7 CFR 51.2296 - Three-fourths half kernel.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Three-fourths half kernel. 51.2296 Section 51.2296... STANDARDS) United States Standards for Shelled English Walnuts (Juglans Regia) Definitions § 51.2296 Three-fourths half kernel. Three-fourths half kernel means a portion of a half of a kernel which has more...
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2013 CFR
2013-01-01
... weight of delivery 10,000 10,000 2. Percent of edible kernel weight 53.0 84.0 3. Less weight loss in... 7 Agriculture 8 2013-01-01 2013-01-01 false Adjusted kernel weight. 981.401 Section 981.401... Administrative Rules and Regulations § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel...
UPDATE OF GRAY KERNEL DISEASE OF MACADAMIA - 2006
Technology Transfer Automated Retrieval System (TEKTRAN)
Gray kernel is an important disease of macadamia that affects the quality of kernels with gray discoloration and a permeating, foul odor that can render entire batches of nuts unmarketable. We report on the successful production of gray kernel in raw macadamia kernels artificially inoculated with s...
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2011 CFR
2011-01-01
... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams;...
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2010 CFR
2010-01-01
... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams;...
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2014 CFR
2014-01-01
... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams;...
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2012 CFR
2012-01-01
... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams;...
7 CFR 51.2125 - Split or broken kernels.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Split or broken kernels. 51.2125 Section 51.2125... STANDARDS) United States Standards for Grades of Shelled Almonds Definitions § 51.2125 Split or broken kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will...
7 CFR 51.2125 - Split or broken kernels.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Split or broken kernels. 51.2125 Section 51.2125... STANDARDS) United States Standards for Grades of Shelled Almonds Definitions § 51.2125 Split or broken kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will...
KITTEN Lightweight Kernel 0.1 Beta
Energy Science and Technology Software Center (ESTSC)
2007-12-12
The Kitten Lightweight Kernel is a simplified OS (operating system) kernel that is intended to manage a compute node's hardware resources. It provides a set of mechanisms to user-level applications for utilizing hardware resources (e.g., allocating memory, creating processes, accessing the network). Kitten is much simpler than general-purpose OS kernels, such as Linux or Windows, but includes all of the essential functionality needed to support HPC (high-performance computing) MPI, PGAS and OpenMP applications. Kitten provides unique capabilities such as physically contiguous application memory, transparent large page support, and noise-free tick-less operation, which enable HPC applications to obtain greater efficiency and scalability than with general-purpose OS kernels.
Biological sequence classification with multivariate string kernels.
Kuksa, Pavel P
2013-01-01
String kernel-based machine learning methods have yielded great success in practical tasks of structured/sequential data analysis. They often exhibit state-of-the-art performance on many practical tasks of sequence analysis such as biological sequence classification, remote homology detection, or protein superfamily and fold prediction. However, typical string kernel methods rely on the analysis of discrete 1D string data (e.g., DNA or amino acid sequences). In this paper, we address the multiclass biological sequence classification problems using multivariate representations in the form of sequences of feature vectors (as in biological sequence profiles, or sequences of individual amino acid physicochemical descriptors) and a class of multivariate string kernels that exploit these representations. On three protein sequence classification tasks, the proposed multivariate representations and kernels show significant 15-20 percent improvements compared to existing state-of-the-art sequence classification methods. PMID:24384708
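As a point of comparison, the discrete 1D baseline these multivariate kernels generalize is the classic k-spectrum kernel, sketched here (the multivariate version, not shown, replaces exact k-mer identity with similarity over per-position feature vectors):

```python
from collections import Counter

def spectrum_kernel(s, t, k=3):
    """k-spectrum kernel: the inner product of k-mer count vectors.
    The multivariate kernels in the paper replace exact k-mer identity
    with similarity over per-position feature vectors; this is only the
    discrete 1D baseline they generalize."""
    cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
    return sum(cs[m] * ct[m] for m in cs)

print(spectrum_kernel("ACGTACGT", "ACGTT"))  # → 4 (shared ACG and CGT counts)
```

The limitation the abstract points to is visible here: two amino acids with nearly identical physicochemical properties still count as a complete mismatch under exact k-mer comparison.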
Biological Sequence Analysis with Multivariate String Kernels.
Kuksa, Pavel P
2013-03-01
String kernel-based machine learning methods have yielded great success in practical tasks of structured/sequential data analysis. They often exhibit state-of-the-art performance on many practical tasks of sequence analysis such as biological sequence classification, remote homology detection, or protein superfamily and fold prediction. However, typical string kernel methods rely on analysis of discrete one-dimensional (1D) string data (e.g., DNA or amino acid sequences). In this work we address the multi-class biological sequence classification problems using multivariate representations in the form of sequences of feature vectors (as in biological sequence profiles, or sequences of individual amino acid physico-chemical descriptors) and a class of multivariate string kernels that exploit these representations. On a number of protein sequence classification tasks, the proposed multivariate representations and kernels show significant 15-20% improvements compared to existing state-of-the-art sequence classification methods. PMID:23509193
Axion cosmology, lattice QCD and the dilute instanton gas
NASA Astrophysics Data System (ADS)
Borsanyi, Sz.; Dierigl, M.; Fodor, Z.; Katz, S. D.; Mages, S. W.; Nogradi, D.; Redondo, J.; Ringwald, A.; Szabo, K. K.
2016-01-01
Axions are one of the most attractive dark matter candidates. The evolution of their number density in the early universe can be determined by calculating the topological susceptibility χ(T) of QCD as a function of the temperature. Lattice QCD provides an ab initio technique to carry out such a calculation. A full result needs two ingredients: physical quark masses and a controlled continuum extrapolation from non-vanishing to zero lattice spacings. We determine χ(T) in the quenched framework (infinitely large quark masses) and extrapolate its values to the continuum limit. The results are compared with the prediction of the dilute instanton gas approximation (DIGA). A nice agreement is found for the temperature dependence, whereas the overall normalization of the DIGA result still differs from the non-perturbative continuum-extrapolated lattice results by a factor of order ten. We discuss the consequences of our findings for the prediction of the amount of axion dark matter.
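The quantity being computed reduces, at θ = 0, to the variance of the topological charge per unit four-volume. A toy estimator (synthetic charges and illustrative numbers, not lattice data):

```python
import numpy as np

rng = np.random.default_rng(2)

def topological_susceptibility(Q, volume):
    """chi = (<Q^2> - <Q>^2) / V from topological charges Q measured on an
    ensemble of configurations of 4-volume V (<Q> = 0 at theta = 0)."""
    Q = np.asarray(Q, dtype=float)
    return (np.mean(Q**2) - np.mean(Q) ** 2) / volume

# Toy ensemble: synthetic integer charges with variance ~ chi_true * V.
V = 16**3 * 8
chi_true = 1.0e-4  # illustrative number in lattice units, not a lattice result
Q = rng.normal(0.0, np.sqrt(chi_true * V), size=10_000).round()
print(topological_susceptibility(Q, V))  # close to chi_true
```

On a real lattice the hard parts are precisely what this toy hides: measuring Q on each configuration, accumulating enough statistics at high temperature where χ is tiny, and taking the continuum limit.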
Variational Dirichlet Blur Kernel Estimation.
Zhou, Xu; Mateos, Javier; Zhou, Fugen; Molina, Rafael; Katsaggelos, Aggelos K
2015-12-01
Blind image deconvolution involves two key objectives: 1) latent image estimation and 2) blur estimation. For latent image estimation, we propose a fast deconvolution algorithm, which uses an image prior of nondimensional Gaussianity measure to enforce sparsity and an undetermined boundary condition methodology to reduce boundary artifacts. For blur estimation, a linear inverse problem with normalization and nonnegativity constraints must be solved. However, the normalization constraint is ignored in many blind image deblurring methods, mainly because it makes the problem less tractable. In this paper, we show that the normalization constraint can be very naturally incorporated into the estimation process by using a Dirichlet distribution to approximate the posterior distribution of the blur. Making use of the variational Dirichlet approximation, we provide a blur posterior approximation that considers the uncertainty of the estimate and removes noise in the estimated kernel. Experiments with synthetic and real data demonstrate that the proposed method is very competitive with the state-of-the-art blind image restoration methods. PMID:26390458
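The appeal of the Dirichlet parameterization is that normalization and nonnegativity hold by construction. A minimal sketch (the pseudo-count values are invented for illustration):

```python
import numpy as np

def dirichlet_mean_kernel(a):
    """Posterior-mean blur under a Dirichlet(a) approximation: entries are
    nonnegative and sum to one by construction, so the normalization
    constraint never has to be imposed after the fact."""
    a = np.asarray(a, dtype=float)
    return a / a.sum()

k = dirichlet_mean_kernel([0.2, 5.0, 9.0, 5.0, 0.2])  # invented pseudo-counts
print(k)  # a valid blur kernel: nonnegative, sums to one
```

Any update expressed on the Dirichlet parameters therefore stays on the probability simplex, which is the property the abstract contrasts with methods that ignore the normalization constraint.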
Weighted Bergman Kernels and Quantization
NASA Astrophysics Data System (ADS)
Engliš, Miroslav
Let Ω be a bounded pseudoconvex domain in C^N, φ, ψ two positive functions on Ω such that -log ψ, -log φ are plurisubharmonic, and z ∈ Ω a point at which -log φ is smooth and strictly plurisubharmonic. We show that as k → ∞, the Bergman kernels with respect to the weights φ^k ψ have an asymptotic expansion
TICK: Transparent Incremental Checkpointing at Kernel Level
Energy Science and Technology Software Center (ESTSC)
2004-10-25
TICK is a software package implemented in Linux 2.6 that allows the save and restore of user processes, without any change to the user code or binary. With TICK a process can be suspended by the Linux kernel upon receiving an interrupt and saved in a file. This file can later be thawed in another computer running Linux (potentially the same computer). TICK is implemented as a Linux kernel module, in Linux version 2.6.5.
PET image reconstruction using kernel method.
Wang, Guobao; Qi, Jinyi
2015-01-01
Image reconstruction from low-count positron emission tomography (PET) projection data is challenging because the inverse problem is ill-posed. Prior information can be used to improve image quality. Inspired by the kernel methods in machine learning, this paper proposes a kernel based method that models PET image intensity in each pixel as a function of a set of features obtained from prior information. The kernel-based image model is incorporated into the forward model of PET projection data and the coefficients can be readily estimated by the maximum likelihood (ML) or penalized likelihood image reconstruction. A kernelized expectation-maximization algorithm is presented to obtain the ML estimate. Computer simulations show that the proposed approach can achieve better bias versus variance trade-off and higher contrast recovery for dynamic PET image reconstruction than the conventional maximum likelihood method with and without post-reconstruction denoising. Compared with other regularization-based methods, the kernel method is easier to implement and provides better image quality for low-count data. Application of the proposed kernel method to a 4-D dynamic PET patient dataset showed promising results. PMID:25095249
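The kernelized EM iteration the abstract describes can be sketched compactly: the image is modeled as x = Kα, the kernel matrix K is folded into the forward model, and the ordinary ML-EM multiplicative update is run on the coefficients α. The tiny system matrix `P` and kernel matrix `K` below are random placeholders, not a real PET geometry or the authors' feature construction.

```python
import numpy as np

def kem_reconstruct(y, P, K, n_iter=50, eps=1e-12):
    """Kernelized EM (KEM) sketch: image model x = K @ alpha, so the
    effective system matrix is A = P @ K and the standard ML-EM
    multiplicative update is applied to the coefficients alpha."""
    A = P @ K
    sens = A.sum(axis=0) + eps              # sensitivity image: A^T 1
    alpha = np.ones(K.shape[1])             # nonnegative initialization
    for _ in range(n_iter):
        proj = A @ alpha + eps              # forward projection of coefficients
        alpha *= (A.T @ (y / proj)) / sens  # EM update preserves nonnegativity
    return K @ alpha                        # reconstructed image x = K alpha

# Tiny synthetic example (hypothetical P and K, not real PET data)
rng = np.random.default_rng(0)
P = rng.random((8, 6))                      # toy system matrix
K = np.eye(6) + 0.1 * rng.random((6, 6))    # toy kernel matrix from "features"
x_true = rng.random(6) + 0.5
y = rng.poisson(P @ x_true * 50) / 50.0     # noisy low-count projection data
x_hat = kem_reconstruct(y, P, K)
```

Note that setting K to the identity recovers conventional ML-EM, which is why the method is easy to add to an existing reconstruction code.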
On the interface between perturbative and nonperturbative QCD
NASA Astrophysics Data System (ADS)
Deur, Alexandre; Brodsky, Stanley J.; de Téramond, Guy F.
2016-06-01
The QCD running coupling αs (Q2) sets the strength of the interactions of quarks and gluons as a function of the momentum transfer Q. The Q2 dependence of the coupling is required to describe hadronic interactions at both large and short distances. In this article we adopt the light-front holographic approach to strongly-coupled QCD, a formalism which incorporates confinement, predicts the spectroscopy of hadrons composed of light quarks, and describes the low-Q2 analytic behavior of the strong coupling αs (Q2). The high-Q2 dependence of the coupling αs (Q2) is specified by perturbative QCD and its renormalization group equation. The matching of the high and low Q2 regimes of αs (Q2) then determines the scale Q0, which sets the interface between perturbative and nonperturbative hadron dynamics. The value of Q0 can be used to set the factorization scale for DGLAP evolution of hadronic structure functions and the ERBL evolution of distribution amplitudes. We discuss the scheme-dependence of the value of Q0 and the infrared fixed-point of the QCD coupling. Our analysis is carried out for the MS-bar, g1, MOM and V renormalization schemes. Our results show that the discrepancies in the value of αs at large distances seen in the literature can be explained by different choices of renormalization schemes. We also provide the formulae to compute αs (Q2) over the entire range of space-like momentum transfer for the different renormalization schemes discussed in this article.
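The matching procedure described above can be illustrated numerically: take the Gaussian infrared coupling of light-front holographic QCD and the one-loop perturbative running coupling, and bisect for the momentum scale where they agree. The functional forms follow the two regimes named in the abstract, but the values of `kappa` and `lam` (Λ) are placeholder inputs, not the paper's scheme-dependent fit results, and real analyses match higher-order couplings and the slope as well.

```python
import math

def alpha_ir(q2, kappa=0.5):
    """Holographic (infrared) coupling, normalized to pi at Q^2 = 0."""
    return math.pi * math.exp(-q2 / (4.0 * kappa**2))

def alpha_uv(q2, lam=0.34, nf=3):
    """One-loop perturbative running coupling (illustrative only)."""
    beta0 = 11.0 - 2.0 * nf / 3.0
    return 4.0 * math.pi / (beta0 * math.log(q2 / lam**2))

def match_q0_sq(lo=0.5, hi=4.0, tol=1e-10):
    """Bisect for Q0^2 (in GeV^2) where the IR and UV couplings cross."""
    f = lambda q2: alpha_ir(q2) - alpha_uv(q2)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid        # sign change in [lo, mid]
        else:
            lo = mid        # sign change in [mid, hi]
    return 0.5 * (lo + hi)

q0_sq = match_q0_sq()
```

Changing the renormalization scheme shifts Λ and hence the crossing point, which is the scheme-dependence of Q0 that the article analyzes.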
Evaluating the Gradient of the Thin Wire Kernel
NASA Technical Reports Server (NTRS)
Wilton, Donald R.; Champagne, Nathan J.
2008-01-01
Recently, a formulation for evaluating the thin wire kernel was developed that employs a change of variable to smooth the kernel integrand, canceling its singularity. Hence, the typical expansion of the wire kernel in a series for use in the potential integrals is avoided. The new expression for the kernel is exact and may be used directly to determine the gradient of the wire kernel, which consists of components that are parallel and radial to the wire axis.
Calculation of the nucleon axial charge in lattice QCD
D. B. Renner; R. G. Edwards; G. Fleming; Ph. Hagler; J. W. Negele; K. Orginos; A. V. Pochinsky; D. G. Richards; W. Schroers
2006-09-01
Protons and neutrons have a rich structure in terms of their constituents, the quarks and gluons. Understanding this structure requires solving Quantum Chromodynamics (QCD). However, QCD is extremely complicated, so we must numerically solve its equations using a method known as lattice QCD. Here we describe a typical lattice QCD calculation by examining our recent computation of the nucleon axial charge.
QCD with chiral 4-fermion interactions ({chi}QCD)
Kogut, J.B.; Sinclair, D.K.
1996-10-01
Lattice QCD with staggered quarks is augmented by the addition of a chiral 4-fermion interaction. The Dirac operator is now non-singular at m_q = 0, decreasing the computing requirements for light-quark simulations by at least an order of magnitude. We present preliminary results from simulations at finite and zero temperatures for m_q = 0, with and without gauge fields. Chiral QCD enables simulations at physical u and d quark masses with at least an order of magnitude saving in CPU time. It also enables simulations with zero quark masses, which is important for determining the equation of state. A renormalization group analysis will be needed to continue to the continuum limit. 7 refs., 2 figs.
Evaluating and Interpreting the Chemical Relevance of the Linear Response Kernel for Atoms.
Boisdenghien, Zino; Van Alsenoy, Christian; De Proft, Frank; Geerlings, Paul
2013-02-12
Although a lot of work has been done on the chemical relevance of the atom-condensed linear response kernel χAB regarding inductive, mesomeric, and hyperconjugative effects as well as (anti)aromaticity of molecules, the same cannot be said about its non-condensed form χ(r,r'). Using a single Slater determinant KS type ansatz involving second order perturbation theory, we set out to investigate the linear response kernel for a number of judiciously chosen closed (sub)shell atoms throughout the periodic table and its relevance, e.g., in relation to the shell structure and polarizability. The numerical results are to the best of our knowledge the first systematic study on this non-condensed linear response function, the results for He and Be being in line with earlier work by Savin. Different graphical representations of the kernel are presented and discussed. Moreover, a frontier orbital approach has been tested illustrating the sensitivity of the non-integrated kernel to the nodal structure of the orbitals. As a test of our method, a numerical integration of the linear response kernel was performed, yielding an accuracy of 10^-4. We also compare calculated values of the polarizability tensor and their evolution throughout the periodic table to high-level values found in the literature. PMID:26588743
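The kind of numerical consistency check quoted above can be reproduced with a toy sum-over-states kernel. Since the density responds only to potential differences, integrating χ(r,r') over one variable should vanish, which the perturbation-theory expression guarantees through orbital orthogonality. The particle-in-a-box orbitals and the occupied/virtual split below are illustrative stand-ins for the atomic Kohn-Sham orbitals of the paper.

```python
import numpy as np

# Toy sum-over-states (independent-particle) linear response kernel on a
# 1D grid, using particle-in-a-box orbitals instead of atomic KS orbitals.
L, n_grid = 1.0, 400
x = np.linspace(0.0, L, n_grid)

def orbital(n):
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

def energy(n):
    return (n * np.pi / L) ** 2 / 2.0

occ, virt = [1, 2], [3, 4, 5, 6]       # hypothetical occupied/virtual split
chi = np.zeros((n_grid, n_grid))
for i in occ:
    for a in virt:
        pair = orbital(i) * orbital(a)
        chi += 4.0 * np.outer(pair, pair) / (energy(i) - energy(a))

dx = x[1] - x[0]
# Orthogonality of the orbitals forces each row integral toward zero.
residual = np.abs((chi * dx).sum(axis=1)).max()
```

The residual plays the role of the integration-accuracy figure in the abstract: on a real atomic grid it would be limited by quadrature quality rather than by exact orthogonality.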
Geiger, K.; Longacre, R.; Srivastava, D.K.
1999-02-01
VNI is a general-purpose Monte-Carlo event-generator, which includes the simulation of lepton-lepton, lepton-hadron, lepton-nucleus, hadron-hadron, hadron-nucleus, and nucleus-nucleus collisions. It uses the real-time evolution of parton cascades in conjunction with a self-consistent hadronization scheme, as well as the development of hadron cascades after hadronization. The causal evolution from a specific initial state (determined by the colliding beam particles) is followed by the time-development of the phase-space densities of partons, pre-hadronic parton clusters, and final-state hadrons, in position-space, momentum-space and color-space. The parton-evolution is described in terms of a space-time generalization of the familiar momentum-space description of multiple (semi)hard interactions in QCD, involving 2 → 2 parton collisions, 2 → 1 parton fusion processes, and 1 → 2 radiation processes. The formation of color-singlet pre-hadronic clusters and their decays into hadrons, on the other hand, is treated by using a spatial criterion motivated by confinement and a non-perturbative model for hadronization. Finally, the cascading of produced pre-hadronic clusters and of hadrons includes a multitude of 2 → n processes, and is modeled in parallel to the parton cascade description. This paper gives a brief review of the physics underlying VNI, as well as a detailed description of the program itself. The latter program description emphasizes easy-to-use pragmatism and explains how to use the program (including simple examples), annotates input and control parameters, and discusses output data provided by it.