Ji, C.R.
1985-09-01
We present a formalism for the evolution in Q^2 of multiquark systems as an application of perturbative quantum chromodynamics (QCD) to asymptotic, exclusive nuclear amplitudes. To leading terms in log Q^2 our formalism is equivalent to solving the renormalization group equations for these amplitudes. Completely antisymmetric multiquark color-singlet representations are constructed and their evolution is investigated from the one-gluon exchange kernel. We argue that the evolution equation, together with a cluster decomposition, demonstrates a transition from the traditional meson and nucleon degrees of freedom of nuclear physics to quark and gluon degrees of freedom with increasing Q^2, or at small internucleon separation. As an example, we derive an evolution equation for a completely antisymmetric six-quark distribution amplitude and solve the evolution equation for a deuteron S-wave amplitude. The leading anomalous dimension and the corresponding eigensolution are found for the deuteron in order to predict the asymptotic form of the deuteron distribution amplitude (i.e., light-cone wave function at short distances). The fact that the six-quark state is 80 percent hidden color at small transverse separation implies that the deuteron form factor cannot be described at large Q^2 by meson-nucleon degrees of freedom alone. Furthermore, since the N-N channel is very suppressed under these conditions, the effective nucleon-nucleon potential is naturally repulsive at short distances. 20 refs.
Wilson Dslash Kernel From Lattice QCD Optimization
Joo, Balint; Smelyanskiy, Mikhail; Kalamkar, Dhiraj D.; Vaidyanathan, Karthikeyan
2015-07-01
Lattice Quantum Chromodynamics (LQCD) is a numerical technique used for calculations in theoretical nuclear and high energy physics. LQCD is traditionally one of the first applications ported to many new high performance computing architectures, and indeed LQCD practitioners have been known to design and build custom LQCD computers. Lattice QCD kernels are frequently used as benchmarks (e.g. 168.wupwise in the SPEC suite) and are generally well understood, and as such are ideal to illustrate several optimization techniques. In this chapter we detail our work in optimizing the Wilson-Dslash kernels for the Intel Xeon Phi; however, as we will show, the techniques give excellent performance on regular Xeon architectures as well.
Higher-order Lipatov kernels and the QCD Pomeron
White, A.R.
1994-08-12
Three closely related topics are covered: the derivation of O(g^4) Lipatov kernels in pure-glue QCD; the significance of quarks for the physical Pomeron in QCD; and the possible interrelation of Pomeron dynamics with electroweak symmetry breaking.
QCDNUM: Fast QCD evolution and convolution
NASA Astrophysics Data System (ADS)
Botje, M.
2011-02-01
The QCDNUM program numerically solves the evolution equations for parton densities and fragmentation functions in perturbative QCD. Unpolarised parton densities can be evolved up to next-to-next-to-leading order in powers of the strong coupling constant, while polarised densities or fragmentation functions can be evolved up to next-to-leading order. Other types of evolution can be accessed by feeding alternative sets of evolution kernels into the program. A versatile convolution engine provides tools to compute parton luminosities, cross-sections in hadron-hadron scattering, and deep inelastic structure functions in the zero-mass scheme or in generalised mass schemes. Input to these calculations are either the QCDNUM evolved densities, or those read in from an external parton density repository. Included in the software distribution are packages to calculate zero-mass structure functions in unpolarised deep inelastic scattering, and heavy flavour contributions to these structure functions in the fixed flavour number scheme.
Program summary
Program title: QCDNUM version: 17.00
Catalogue identifier: AEHV_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHV_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU Public Licence
No. of lines in distributed program, including test data, etc.: 45 736
No. of bytes in distributed program, including test data, etc.: 911 569
Distribution format: tar.gz
Programming language: Fortran-77
Computer: All
Operating system: All
RAM: Typically 3 Mbytes
Classification: 11.5
Nature of problem: Evolution of the strong coupling constant and parton densities, up to next-to-next-to-leading order in perturbative QCD. Computation of observable quantities by Mellin convolution of the evolved densities with partonic cross-sections.
Solution method: Parametrisation of the parton densities as linear or quadratic splines on a discrete grid, and evolution of the spline
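As an aside, the Mellin convolution that the engine above computes can be illustrated with a minimal numerical sketch. This is not QCDNUM's spline-based implementation; the quadrature rule, the toy kernel, and all function names here are invented for illustration only.

```python
def mellin_convolution(P, f, x, n=2000):
    """Numerically evaluate (P (x) f)(x) = int_x^1 (dz/z) P(z) f(x/z)
    with a simple midpoint rule (toy quadrature, not a spline method)."""
    total = 0.0
    dz = (1.0 - x) / n
    for i in range(n):
        z = x + (i + 0.5) * dz
        total += P(z) * f(x / z) / z * dz
    return total

# Toy check: with kernel P(z) = 1 and density f(x) = x the integral is
# analytic, (P (x) f)(x) = x * int_x^1 dz/z^2 = 1 - x.
x = 0.3
approx = mellin_convolution(lambda z: 1.0, lambda y: y, x)
exact = 1.0 - x
```

In a real evolution code the density f would itself be tabulated on a grid (QCDNUM uses splines precisely so that such convolutions reduce to fast linear algebra).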
Jet quenching from QCD evolution
NASA Astrophysics Data System (ADS)
Chien, Yang-Ting; Emerman, Alexander; Kang, Zhong-Bo; Ovanesyan, Grigory; Vitev, Ivan
2016-04-01
Recent advances in soft-collinear effective theory with Glauber gluons have led to the development of a new method that gives a unified description of inclusive hadron production in reactions with nucleons and heavy nuclei. We show how this approach, based on the generalization of the DGLAP evolution equations to include final-state medium-induced parton shower corrections for large Q^2 processes, can be combined with initial-state effects for applications to jet quenching phenomenology. We demonstrate that the traditional parton energy loss calculations can be regarded as a special soft-gluon emission limit of the general QCD evolution framework. We present a phenomenological comparison of the SCET_G-based results on the suppression of inclusive charged hadron and neutral pion production in √s_NN = 2.76 TeV lead-lead collisions at the Large Hadron Collider to experimental data. We also show theoretical predictions for the upcoming √s_NN ≈ 5.1 TeV Pb+Pb run at the LHC.
QCD Evolution of Helicity and Transversity TMDs
Prokudin, Alexei
2014-01-01
We examine the QCD evolution of the helicity and transversity parton distribution functions when including also their dependence on transverse momentum. Using an appropriate definition of these polarized transverse momentum distributions (TMDs), we describe their dependence on the factorization scale and rapidity cutoff, which is essential for phenomenological applications.
QCD Evolution of Naive-Time-Reversal-Odd Quark-Gluon Correlation Functions
NASA Astrophysics Data System (ADS)
Kang, Zhong-Bo; Qiu, Jian-Wei
In this talk, we examine the existing calculations of QCD evolution kernels for the scale dependence of two sets of twist-3 quark-gluon correlation functions, Tq,F(x, x) and Tq,F^(σ)(x, x), which are the first transverse-momentum moments of the naive-time-reversal-odd Sivers and Boer-Mulders functions, respectively. The evolution kernels at the leading order in the strong coupling constant αs were derived by several groups with apparent differences. We identify the sources of the discrepancies and are able to reconcile the results from the various groups.
R evolution: Improving perturbative QCD
Hoang, Andre H.; Jain, Ambar; Stewart, Iain W.; Scimemi, Ignazio
2010-07-01
Perturbative QCD results in the MS scheme can be dramatically improved by switching to a scheme that accounts for the dominant power law dependence on the factorization scale in the operator product expansion. We introduce the "MSR scheme" which achieves this in a Lorentz and gauge invariant way and has a very simple relation to MS. Results in MSR depend on a cutoff parameter R, in addition to the μ of MS. R variations can be used to independently estimate (i) the size of power corrections, and (ii) higher-order perturbative corrections (much like μ in MS). We give two examples at three-loop order: the ratio of mass splittings in the B*-B and D*-D systems, and the Ellis-Jaffe sum rule as a function of momentum transfer Q in deep inelastic scattering. Comparing to data, the perturbative MSR results work well even for Q ≈ 1 GeV, and power corrections are reduced compared to MS.
The QCD evolution of TMD in the covariant approach
NASA Astrophysics Data System (ADS)
Efremov, A. V.; Teryaev, O. V.; Zavada, P.
2016-02-01
The procedure for calculation of the QCD evolution of transverse momentum dependent distributions within the covariant approach is suggested. The standard collinear QCD evolution together with the requirements of relativistic invariance and rotational symmetry of the nucleon in its rest frame represent the basic ingredients of our approach. The obtained results are compared with the predictions of some other approaches.
QCD EVOLUTION AND TMD/SPIN EXPERIMENTS
Jian-Ping Chen
2012-12-01
Transverse Spin and Transverse Momentum Dependent (TMD) distribution studies have been one of the main focuses of hadron physics in recent years. The initial exploratory Semi-Inclusive Deep-Inelastic Scattering (SIDIS) experiments with transversely polarized proton and deuteron targets from HERMES and COMPASS attracted great attention and led to very active efforts in both experiment and theory. QCD factorization has been carefully studied. A SIDIS experiment on the neutron with a polarized 3He target was performed at JLab. Recently published results will be shown. Precision TMD experiments are planned at JLab after the 12 GeV energy upgrade. The approved experiments with a new SoLID spectrometer on both the proton and neutron will be presented. Proper QCD evolution treatments beyond the collinear case become crucial for the precision study of the TMDs. Experimentally, Q^2 evolution and higher-twist effects are often closely related. The experience of studying higher-twist effects in the case of moments of the spin structure functions will be discussed.
Evolution of fluctuations near QCD critical point
Stephanov, M. A.
2010-03-01
We propose to describe the time evolution of quasistationary fluctuations near QCD critical point by a system of stochastic Boltzmann-Langevin-Vlasov-type equations. We derive the equations and study the system analytically in the linearized regime. Known results for equilibrium stationary fluctuations as well as the critical scaling of diffusion coefficient are reproduced. We apply the approach to the long-standing question of the fate of the critical point fluctuations during the hadronic rescattering stage of the heavy-ion collision after chemical freeze-out. We find that if conserved particle number fluctuations survive the rescattering, so do, under a certain additional condition, the fluctuations of nonconserved quantities, such as mean transverse momentum. We derive a simple analytical formula for the magnitude of this memory effect.
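As a generic illustration of the Langevin description of fluctuations invoked above (a plain Ornstein-Uhlenbeck toy, not the paper's Boltzmann-Langevin-Vlasov system; all parameter values are invented), one can check numerically that stochastic evolution reproduces the expected equilibrium stationary fluctuations:

```python
# Ornstein-Uhlenbeck process dx = -gamma*x dt + sqrt(2D) dW: the stationary
# distribution is Gaussian with variance D/gamma, mirroring how equilibrium
# fluctuation results are recovered from a stochastic evolution equation.
import math
import random

def simulate_ou(gamma=1.0, D=0.5, dt=0.01, steps=200000, seed=1):
    rng = random.Random(seed)
    x, acc, count = 0.0, 0.0, 0
    for i in range(steps):
        x += -gamma * x * dt + math.sqrt(2.0 * D * dt) * rng.gauss(0.0, 1.0)
        if i > steps // 2:        # discard the relaxation transient
            acc += x * x
            count += 1
    return acc / count            # sample variance; stationary value is D/gamma

variance = simulate_ou()          # should be close to D/gamma = 0.5
```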
Hierarchically Organized Iterative Solutions of the Evolution Equations in QCD
NASA Astrophysics Data System (ADS)
Jadach, S.; Skrzypek, M.; Was, Z.
2008-04-01
The task of Monte Carlo simulation of the evolution of the parton distributions in QCD, and of constructing new parton shower Monte Carlo algorithms, requires a new way of organizing solutions of the QCD evolution equations, in which quark-gluon transitions on the one hand and quark-quark or gluon-gluon transitions (pure gluonstrahlung) on the other hand are treated separately and differently. This requires a certain reorganization of the iterative solutions of the QCD evolution equations and leads to what we refer to as hierarchic iterative solutions of the evolution equations. We present three formal derivations of such a solution. The results presented here are already used in other recent works to formulate new MC algorithms for parton-shower-like implementations of the QCD evolution equations. They are primarily of the non-Markovian type; however, such a solution can be used for Markovian-type MCs as well. We also comment briefly on the relation of the presented formalism to similar methods used in other branches of physics.
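The elementary building block of the Markovian-type MCs mentioned above can be sketched in a few lines (this is textbook Sudakov-style sampling with a constant toy kernel, not the hierarchic solution of the paper; the kernel value and names are invented for illustration):

```python
# With a constant branching kernel lam, the probability of no emission
# between t0 and t is the Sudakov-like factor exp(-lam*(t - t0)), so the
# next emission "evolution time" is sampled by inverse transform.
import math
import random

def next_emission_time(t0, lam, rng):
    # 1 - rng.random() lies in (0, 1], so the log is always defined
    return t0 - math.log(1.0 - rng.random()) / lam

rng = random.Random(7)
lam, n = 2.0, 50000
mean_step = sum(next_emission_time(0.0, lam, rng) for _ in range(n)) / n
# the mean waiting time should approach 1/lam = 0.5
```

Realistic showers replace the constant kernel by z-dependent splitting functions and use veto algorithms, but the sampling idea is the same.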
QCD Evolution of the Transverse Momentum Dependent Correlations
Zhou, Jian; Liang, Zuo-Tang; Yuan, Feng
2008-12-10
We study the QCD evolution of the twist-three quark-gluon correlation functions associated with the transverse-momentum-odd quark distributions. Unlike the case of the leading-twist quark distributions, these evolution equations involve more general twist-three functions beyond the correlation functions themselves. They provide important information on nucleon structure, and can be studied in semi-inclusive hadron production in deep inelastic scattering and in Drell-Yan lepton pair production in pp scattering.
Note on the QCD evolution of generalized form factors
Broniowski, Wojciech; Arriola, Enrique Ruiz
2009-03-01
Generalized form factors of hadrons are objects appearing in moments of the generalized parton distributions. Their leading-order DGLAP-ERBL QCD evolution is exceedingly simple, and the solution is given in terms of triangular matrix systems of linear equations in which the coefficients are the evolution ratios. We point out that this solution has a practical importance in analyses where the generalized form factors are the basic objects, e.g., lattice gauge studies or models. It also displays general features of their evolution.
The chaotic effects in a nonlinear QCD evolution equation
NASA Astrophysics Data System (ADS)
Zhu, Wei; Shen, Zhenqi; Ruan, Jianhong
2016-10-01
The corrections of gluon fusion to the DGLAP and BFKL equations are discussed in a unified partonic framework. The resulting nonlinear evolution equations are the well-known GLR-MQ-ZRS equation and a new evolution equation. Using the available saturation models as input, we find that the new evolution equation has chaotic solutions with positive Lyapunov exponents in the perturbative range. We predict a new kind of shadowing caused by chaos, which blocks the QCD evolution in a critical small-x range. The blocking effect in the evolution equation may explain the Abelian gluon assumption and may even influence expectations for the projected Large Hadron Electron Collider (LHeC), Very Large Hadron Collider (VLHC) and the upgrade (CppC) in a circular e+e- collider (SppC).
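For readers unfamiliar with the diagnostic used above, a positive Lyapunov exponent can be computed for a standard toy system (this is generic chaos methodology, not the paper's nonlinear evolution equation; the logistic map is chosen purely as an illustration with a known analytic answer):

```python
# For the logistic map x -> r*x*(1-x) at r = 4, the Lyapunov exponent is
# lambda = ln 2 > 0: nearby trajectories separate exponentially, the
# hallmark of a chaotic solution.
import math

def lyapunov_logistic(r=4.0, x0=0.2, n=100000, burn=1000):
    x, acc = x0, 0.0
    for i in range(n + burn):
        if i >= burn:
            # log of the local stretching factor |f'(x)| = |r*(1 - 2x)|
            acc += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return acc / n

lam = lyapunov_logistic()   # analytic value for r = 4 is ln 2 ≈ 0.693
```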
Analytic evolution of singular distribution amplitudes in QCD
NASA Astrophysics Data System (ADS)
Radyushkin, A. V.; Tandogan, A.
2014-04-01
We describe a method of analytic evolution of distribution amplitudes (DAs) that have singularities, such as nonzero values at the end points of the support region, jumps at some points inside the support region and cusps. We illustrate the method by applying it to the evolution of a flat (constant) DA and antisymmetric flat DA, and then use the method for evolution of the two-photon generalized distribution amplitude. Our approach has advantages over the standard method of expansion in Gegenbauer polynomials, which requires an infinite number of terms in order to accurately reproduce functions in the vicinity of singular points, and over a straightforward iteration of an initial distribution with evolution kernel. The latter produces logarithmically divergent terms at each iteration, while in our method the logarithmic singularities are summed from the start, which immediately produces a continuous curve, with only one or two iterations needed afterwards in order to get rather precise results.
Analytic Evolution of Singular Distribution Amplitudes in QCD
Radyushkin, Anatoly V.; Tandogan Kunkel, Asli
2014-03-01
We describe a method of analytic evolution of distribution amplitudes (DAs) that have singularities, such as non-zero values at the end points of the support region, jumps at some points inside the support region, and cusps. We illustrate the method by applying it to the evolution of a flat (constant) DA and an antisymmetric flat DA, and then use it for the evolution of the two-photon generalized distribution amplitude. Our approach has advantages over the standard method of expansion in Gegenbauer polynomials, which requires an infinite number of terms in order to accurately reproduce functions in the vicinity of singular points, and over a straightforward iteration of an initial distribution with the evolution kernel. The latter produces logarithmically divergent terms at each iteration, while in our method the logarithmic singularities are summed from the start, which immediately produces a continuous curve, with only one or two iterations needed afterwards in order to get rather precise results.
Analytic evolution of singular distribution amplitudes in QCD
NASA Astrophysics Data System (ADS)
Tandogan, Asli
Distribution amplitudes (DAs) are the basic functions that contain information about the quark momentum. DAs are necessary to describe hard exclusive processes in quantum chromodynamics. We describe a method of analytic evolution of DAs that have singularities such as nonzero values at the end points of the support region, jumps at some points inside the support region, and cusps. We illustrate the method by applying it to the evolution of a flat (constant) DA and an antisymmetric flat DA, and then use the method for the evolution of the two-photon generalized distribution amplitude. Our approach to DA evolution has advantages over the standard method of expansion in Gegenbauer polynomials [1, 2] and over a straightforward iteration of an initial distribution with the evolution kernel. Expansion in Gegenbauer polynomials requires an infinite number of terms in order to accurately reproduce functions in the vicinity of singular points. Straightforward iteration of an initial distribution produces logarithmically divergent terms at each iteration. In our method the logarithmic singularities are summed from the start, which immediately produces a continuous curve. Afterwards, in order to get precise results, only one or two iterations are needed.
NASA Astrophysics Data System (ADS)
Binol, Hamidullah; Bal, Abdullah; Cukur, Huseyin
2015-10-01
The performance of kernel-based techniques depends on the selection of kernel parameters; hence, suitable parameter selection is an important problem for many kernel-based techniques. This article presents a novel technique to learn the kernel parameters in the kernel Fukunaga-Koontz Transform (KFKT) based classifier. The proposed approach determines appropriate values of the kernel parameters by optimizing an objective function constructed from the discrimination ability of the KFKT. For this purpose we utilize the differential evolution algorithm (DEA). The new technique overcomes some disadvantages of the traditional cross-validation method, such as its high time consumption, and it can be applied to any type of data. Experiments on target detection applications with hyperspectral images verify the effectiveness of the proposed method.
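A minimal differential evolution sketch for a one-dimensional kernel-parameter search is given below. The objective is a stand-in for the KFKT discrimination measure described in the abstract (which is not reproduced here); the toy function, its optimum at sigma = 2, and all control parameters are invented for illustration.

```python
import random

def toy_objective(sigma):
    # stand-in for a validation / discrimination score; minimum at sigma = 2
    return (sigma - 2.0) ** 2 + 1.0

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=100, seed=3):
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [lo + rng.random() * (hi - lo) for _ in range(pop_size)]
    for _ in range(generations):
        for i in range(pop_size):
            # DE/rand/1 mutation from three distinct population members
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            mutant = min(max(a + F * (b - c), lo), hi)
            # crossover degenerates to a coin flip in one dimension
            trial = mutant if rng.random() < CR else pop[i]
            if f(trial) <= f(pop[i]):       # greedy selection
                pop[i] = trial
    return min(pop, key=f)

best_sigma = differential_evolution(toy_objective, bounds=(0.0, 10.0))
```

Unlike grid search via cross-validation, each generation costs only pop_size objective evaluations, which is the efficiency argument the abstract makes.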
Iterative filtering decomposition based on local spectral evolution kernel.
Wang, Yang; Wei, Guo-Wei; Yang, Siyang
2012-03-01
Synthesizing information, achieving understanding, and deriving insight from increasingly massive, time-varying, noisy and possibly conflicting data sets are some of the most challenging tasks in the present information age. Traditional technologies, such as the Fourier transform and wavelet multi-resolution analysis, are inadequate to handle all of the above-mentioned tasks. The empirical mode decomposition (EMD) has emerged as a new powerful tool for resolving many challenging problems in data processing and analysis. Recently, an iterative filtering decomposition (IFD) has been introduced to address the stability and efficiency problems of the EMD. Another data analysis technique is the local spectral evolution kernel (LSEK), which provides a near-perfect low-pass filter with desirable time-frequency localizations. The present work utilizes the LSEK to further stabilize the IFD, and offers an efficient, flexible and robust scheme for information extraction, complexity reduction, and signal and image understanding. The performance of the present LSEK based IFD is intensively validated over a wide range of data processing tasks, including mode decomposition, analysis of time-varying data, information extraction from nonlinear dynamic systems, etc. The utility, robustness and usefulness of the proposed LSEK based IFD are demonstrated via a large number of applications, such as the analysis of stock market data, the decomposition of ocean wave magnitudes, the understanding of physiologic signals and information recovery from noisy images. The performance of the proposed method is compared with that of existing methods in the literature. Our results indicate that the LSEK based IFD improves both the efficiency and the stability of conventional EMD algorithms. PMID:22350559
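The core iteration of filtering-based decomposition can be sketched with a simple low-pass filter standing in for the LSEK filter of the paper (the moving-average filter, window sizes, iteration count and test signal here are all invented for illustration; the LSEK itself is a far better-behaved filter):

```python
# Iterative filtering: the first "mode" is obtained by repeatedly
# subtracting a low-passed copy of the signal from itself.
import math

def moving_average(x, w=9):
    half, n = w // 2, len(x)
    out = []
    for i in range(n):
        seg = x[max(0, i - half): min(n, i + half + 1)]
        out.append(sum(seg) / len(seg))
    return out

def low_pass(x):
    # applying the average twice gives a triangular filter whose frequency
    # response is nonnegative, which keeps the subtraction iteration stable
    return moving_average(moving_average(x))

def first_mode(x, iterations=5):
    mode = list(x)
    for _ in range(iterations):
        smooth = low_pass(mode)
        mode = [m - s for m, s in zip(mode, smooth)]
    return mode

# fast oscillation + slow trend: the extracted mode should keep the
# oscillation and remove the trend (away from the boundaries)
signal = [math.sin(2 * math.pi * i / 8) + 0.05 * i for i in range(200)]
mode = first_mode(signal)
```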
On the dependence of QCD splitting functions on the choice of the evolution variable
NASA Astrophysics Data System (ADS)
Jadach, S.; Kusina, A.; Placzek, W.; Skrzypek, M.
2016-08-01
We show that already at the NLO level the DGLAP evolution kernel P_qq starts to depend on the choice of the evolution variable. We give an explicit example of such a variable, namely the maximum of the transverse momenta of the emitted partons, and we identify a class of evolution variables that leave the NLO P_qq kernel unchanged with respect to the known standard MS-bar results. The kernels are calculated using a modified Curci-Furmanski-Petronzio method which is based on a direct Feynman-graph calculation.
Non-Markovian Quantum Evolution: Time-Local Generators and Memory Kernels
NASA Astrophysics Data System (ADS)
Chruściński, Dariusz; Należyty, Paweł
2016-06-01
In this paper we provide a basic introduction to the topic of quantum non-Markovian evolution, presenting both the time-local and the memory kernel approach to the evolution of open quantum systems. We start with the standard notion of a classical Markovian stochastic process and generalize it to classical Markovian stochastic evolution, which in turn becomes the starting point of the quantum setting. Our approach is based on the notion of P-divisible and CP-divisible maps and their refinement to k-divisible maps. Basic methods enabling one to detect non-Markovianity of a quantum evolution are also presented. Our analysis is illustrated by several simple examples.
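The classical starting point described above can be made concrete: a family of stochastic matrices T(t, 0) is P-divisible if every intermediate map V(t2, t1) = T(t2, 0) T(t1, 0)^{-1} is again stochastic. The two-state relaxation model below is an invented toy example, not one of the paper's examples:

```python
# Two-state classical relaxation: T(t) has off-diagonal element
# p(t) = (1 - e^{-t})/2. This family is a semigroup, so it is P-divisible.
import math

def T(t):
    p = 0.5 * (1.0 - math.exp(-t))
    return [[1.0 - p, p], [p, 1.0 - p]]

def intermediate_map(t2, t1):
    # V(t2, t1) = T(t2) T(t1)^{-1}, using the closed-form 2x2 inverse
    a, b = T(t1)[0]
    det = a * a - b * b
    inv = [[a / det, -b / det], [-b / det, a / det]]
    M = T(t2)
    return [[sum(M[i][k] * inv[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def is_stochastic(M, eps=1e-9):
    nonneg = all(M[i][j] >= -eps for i in range(2) for j in range(2))
    rows = all(abs(sum(M[i]) - 1.0) < eps for i in range(2))
    return nonneg and rows

V = intermediate_map(2.0, 1.0)
markovian = is_stochastic(V)   # True for this memoryless relaxation
```

Non-Markovian families would yield an intermediate V with negative entries, which is exactly the kind of divisibility test the quantum (CP-divisible) case generalizes.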
COLLINEAR SPLITTING, PARTON EVOLUTION AND THE STRANGE-QUARK ASYMMETRY OF THE NUCLEON IN NNLO QCD.
Rodrigo, G.; Catani, S.; de Florian, D.; Vogelsang, W.
2004-04-25
We consider the collinear limit of QCD amplitudes at one-loop order, and their factorization properties directly in color space. These results apply to the multiple collinear limit of an arbitrary number of QCD partons, and are a basic ingredient in many higher-order computations. In particular, we discuss the triple collinear limit and its relation to flavor asymmetries in the QCD evolution of parton densities at three loops. As a phenomenological consequence of this new effect, and of the fact that the nucleon has non-vanishing quark valence densities, we study the perturbative generation of a strange-antistrange asymmetry s(x) - s̄(x) in the nucleon's sea.
Method of Analytic Evolution of Flat Distribution Amplitudes in QCD
NASA Astrophysics Data System (ADS)
Tandogan, Asli; Radyushkin, Anatoly V.
A new analytical method of performing ERBL evolution is described. The main goal is to develop an approach that works for distribution amplitudes that do not vanish at the end points, for which the standard method of expansion in Gegenbauer polynomials is inefficient. Two cases of the initial DA are considered: a purely flat DA, given by the same constant for all x, and an antisymmetric DA given by opposite constants for x < 1/2 or x > 1/2. For a purely flat DA, the evolution is governed by an overall (x(1-x))^t dependence on the evolution parameter t times a factor that was calculated as an expansion in t. For an antisymmetric flat DA, an extra overall factor |1-2x|^{2t} appears due to a jump at x = 1/2. Good convergence was observed in the t ≲ 1/2 region. For larger t, one can use the standard method of the Gegenbauer expansion.
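Keeping only the overall (x(1-x))^t factor quoted above (i.e. dropping the additional factor expanded in t, so this is a truncation for illustration, not the full result of the paper), one can see numerically how the flat DA develops end-point suppression as t grows:

```python
# Normalized profile phi_t(x) proportional to (x(1-x))^t; the normalization
# integral is done with a simple midpoint rule.
def phi(x, t, n=10000):
    norm = sum((((i + 0.5) / n) * (1 - (i + 0.5) / n)) ** t
               for i in range(n)) / n
    return (x * (1 - x)) ** t / norm

# t = 0 reproduces the flat DA, phi = 1 everywhere; at t = 1 the normalized
# profile is 6x(1-x), the asymptotic DA, giving 1.5 at the midpoint x = 1/2.
flat = phi(0.5, 0.0)
evolved = phi(0.5, 1.0)
```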
QCD evolution of naive-time-reversal-odd parton distribution functions
NASA Astrophysics Data System (ADS)
Kang, Zhong-Bo; Qiu, Jian-Wei
2012-07-01
We reexamine the derivation of the leading order QCD evolution equations of the twist-3 quark-gluon correlation functions, Tq,F(x, x) and Tq,F^(σ)(x, x), which are the first transverse-momentum moments of the naive-time-reversal-odd parton distribution functions - the Sivers and Boer-Mulders functions, respectively. The evolution equations were derived by several groups with apparent differences. We identify the sources that are responsible for the differences, and are able to reconcile the results from the various groups.
Method of Analytic Evolution of Flat Distribution Amplitudes in QCD
Asli Tandogan, Anatoly V. Radyushkin
2011-11-01
A new analytical method of performing ERBL evolution is described. The main goal is to develop an approach that works for distribution amplitudes that do not vanish at the end points, for which the standard method of expansion in Gegenbauer polynomials is inefficient. Two cases of the initial DA are considered: a purely flat DA, given by the same constant for all x, and an antisymmetric DA given by opposite constants for x < 1/2 or x > 1/2. For a purely flat DA, the evolution is governed by an overall (x(1-x))^t dependence on the evolution parameter t times a factor that was calculated as an expansion in t. For an antisymmetric flat DA, an extra overall factor |1-2x|^{2t} appears due to a jump at x = 1/2. A good convergence was observed in the t ≲ 1/2 region. For larger t, one can use the standard method of the Gegenbauer expansion.
Renormalization group evolution of multi-gluon correlators in high energy QCD
NASA Astrophysics Data System (ADS)
Dumitru, A.; Jalilian-Marian, J.; Lappi, T.; Schenke, B.; Venugopalan, R.
2011-12-01
Many-body QCD in leading high energy Regge asymptotics is described by the Balitsky-JIMWLK hierarchy of renormalization group equations for the x evolution of multi-point Wilson line correlators. These correlators are universal and ubiquitous in final states in deeply inelastic scattering and hadronic collisions. For instance, recently measured di-hadron correlations at forward rapidity in deuteron-gold collisions at the Relativistic Heavy Ion Collider (RHIC) are sensitive to four and six point correlators of Wilson lines in the small x color fields of the dense nuclear target. We evaluate these correlators numerically by solving the functional Langevin equation that describes the Balitsky-JIMWLK hierarchy. We compare the results to mean-field Gaussian and large Nc approximations used in previous phenomenological studies. We comment on the implications of our results for quantitative studies of multi-gluon final states in high energy QCD.
Statistical physics in QCD evolution towards high energies
NASA Astrophysics Data System (ADS)
Munier, Stéphane
2015-08-01
The concepts and methods used for the study of disordered systems have proven useful in the analysis of the evolution equations of quantum chromodynamics in the high-energy regime: Indeed, parton branching in the semi-classical approximation relevant at high energies and at a fixed impact parameter is a peculiar branching-diffusion process, and parton branching supplemented by saturation effects (such as gluon recombination) is a reaction-diffusion process. In this review article, we first introduce the basic concepts in the context of simple toy models, we study the properties of the latter, and show how the results obtained for the simple models may be taken over to quantum chromodynamics.
Real time evolution of non-Gaussian cumulants in the QCD critical regime
Mukherjee, Swagato; Venugopalan, Raju; Yin, Yi
2015-09-23
In this study, we derive a coupled set of equations that describe the nonequilibrium evolution of cumulants of critical fluctuations for spacetime trajectories on the crossover side of the QCD phase diagram. In particular, novel expressions are obtained for the nonequilibrium evolution of non-Gaussian skewness and kurtosis cumulants. By utilizing a simple model of the spacetime evolution of a heavy-ion collision, we demonstrate that, depending on the relaxation rate of critical fluctuations, skewness and kurtosis can differ significantly in magnitude as well as in sign from equilibrium expectations. Memory effects are important and shown to persist even for trajectories that skirt the edge of the critical regime. We use phenomenologically motivated parametrizations of freeze-out curves and of the beam-energy dependence of the net baryon chemical potential to explore the implications of our model study for the critical-point search in heavy-ion collisions.
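The memory effect described above can be illustrated with a single relaxation equation, dc/dt = -Γ (c - c_eq(t)): when the relaxation rate Γ is small, the cumulant lags behind its sliding equilibrium value. This is a one-variable toy with an invented equilibrium curve, not the paper's coupled cumulant equations:

```python
import math

def evolve(gamma, t_end=5.0, dt=1e-3):
    c_eq = lambda t: math.exp(-t)   # toy equilibrium curve, decaying in time
    c, t = c_eq(0.0), 0.0
    while t < t_end:
        c += -gamma * (c - c_eq(t)) * dt   # explicit Euler relaxation step
        t += dt
    return c

slow = evolve(gamma=0.2)    # strong memory: stays well above c_eq(t_end)
fast = evolve(gamma=50.0)   # near-instant relaxation: tracks c_eq closely
target = math.exp(-5.0)     # equilibrium value at "freeze-out"
```

The slow case retains memory of the earlier, larger equilibrium values, which is the qualitative mechanism behind the nonequilibrium skewness and kurtosis discussed in the abstract.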
The Boer-Mulders Transverse Momentum Distribution in the Pion and its Evolution in Lattice QCD
NASA Astrophysics Data System (ADS)
Engelhardt, M.; Musch, B.; Hägler, P.; Schäfer, A.; Negele, J.
2015-02-01
Starting from a definition of transverse momentum-dependent parton distributions (TMDs) in terms of hadronic matrix elements of a quark bilocal operator containing a staple-shaped gauge link, selected TMD observables can be evaluated within Lattice QCD. A TMD ratio describing the Boer-Mulders effect in the pion is investigated, with a particular emphasis on its evolution as a function of a Collins-Soper-type parameter which quantifies the proximity of the staple-shaped gauge links to the light cone.
Studies of Analytic Evolution of Two-Photon Generalized Distribution Amplitude in QCD
NASA Astrophysics Data System (ADS)
Tandogan, Asli; Radyushkin, Anatoly V.
2014-01-01
We extend our method of analytic ERBL evolution to the case of distribution amplitudes that have jumps at some points x = ζi inside the support region 0 < x < 1. As an application of the method, we use it for the evolution of the two-photon generalized distribution amplitude. Our approach has advantages over the standard method of expansion in Gegenbauer polynomials, which requires an infinite number of terms to accurately reproduce functions in the vicinity of singular points, and over the method of straightforward iteration of the initial distribution with the evolution kernel, which produces logarithmically divergent terms at each iteration. In our method, the logarithmic singularities are summed from the start, which immediately produces a continuous curve, with only one or two iterations needed afterwards in order to get precise results.
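The convergence contrast described above can be illustrated numerically. The sketch below is an independent toy (not the paper's analytic method): it expands a step-like amplitude and a smooth one in the Gegenbauer C_n^{3/2} eigenbasis of ERBL evolution and compares truncation errors near the jump; grid sizes, the truncation order, and the test functions are illustrative choices.

```python
import numpy as np
from scipy.special import eval_gegenbauer

# ERBL evolution diagonalizes in Gegenbauer polynomials C_n^{3/2}(xi), xi = 2x - 1,
# orthogonal on [-1, 1] with weight (1 - xi^2).
LAM = 1.5
xi = np.linspace(-1.0, 1.0, 20001)          # quadrature grid
w = 1.0 - xi**2                              # Gegenbauer weight

def integral(g):
    # trapezoidal rule on the uniform xi grid
    return (g[:-1] + g[1:]).sum() * (xi[1] - xi[0]) / 2.0

def coeffs(f, nmax):
    out = []
    for n in range(nmax + 1):
        hn = (n + 1) * (n + 2) / (n + 1.5)   # norm: int w [C_n^{3/2}]^2 over [-1, 1]
        out.append(integral(w * f(xi) * eval_gegenbauer(n, LAM, xi)) / hn)
    return np.array(out)

def partial_sum(cs, x):
    return sum(c * eval_gegenbauer(n, LAM, x) for n, c in enumerate(cs))

step = lambda x: np.where(x < 0.0, 0.0, 1.0)  # amplitude with a jump at xi = 0
smooth = lambda x: 1.0 - x**2                  # smooth comparison function

N = 15
xg = np.linspace(-0.95, 0.95, 1901)            # evaluation grid
err_jump = float(np.max(np.abs(partial_sum(coeffs(step, N), xg) - step(xg))))
err_smooth = float(np.max(np.abs(partial_sum(coeffs(smooth, N), xg) - smooth(xg))))
```

The smooth function is reproduced essentially exactly at this order, while the truncated series for the discontinuous amplitude retains an O(1) error near the jump, which is the slow convergence the analytic summation is designed to avoid.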
Studies of Analytic Evolution of Two-Photon Generalized Distribution Amplitude in QCD
Tandogan Kunkel, Asli; Radyushkin, Anatoly V.
2014-01-01
We extend our method of analytic ERBL evolution to the case of distribution amplitudes that have jumps at some points x = ζi inside the support region 0 < x < 1. As an application of the method, we use it for the evolution of the two-photon generalized distribution amplitude. Our approach has advantages over the standard method of expansion in Gegenbauer polynomials, which requires an infinite number of terms to accurately reproduce functions in the vicinity of singular points, and over the method of straightforward iteration of the initial distribution with the evolution kernel, which produces logarithmically divergent terms at each iteration. In our method, the logarithmic singularities are summed from the start, which immediately produces a continuous curve, with only one or two iterations needed afterwards in order to get precise results.
Generalized parton distributions of the pion in chiral quark models and their QCD evolution
Broniowski, Wojciech; Ruiz Arriola, Enrique; Golec-Biernat, Krzysztof
2008-02-01
We evaluate generalized parton distributions of the pion in two chiral quark models: the spectral quark model and the Nambu-Jona-Lasinio model with a Pauli-Villars regularization. We proceed by the evaluation of double distributions through the use of a manifestly covariant calculation based on the α representation of propagators. As a result, polynomiality is incorporated automatically and the calculations become simple. In addition, positivity and normalization constraints, sum rules, and soft-pion theorems are fulfilled. We obtain explicit formulas, holding at the low-energy quark-model scale. The expressions exhibit no factorization in the t-dependence. The QCD evolution of those parton distributions is carried out to experimentally or lattice accessible scales. We argue for the need for evolution by comparing the parton distribution function and the parton distribution amplitude of the pion to the available experimental and lattice data, and confirm that the quark-model scale is low, about 320 MeV.
Integrated Model of Multiple Kernel Learning and Differential Evolution for EUR/USD Trading
Deng, Shangkun; Sakurai, Akito
2014-01-01
Currency trading is an important area for individual investors, government policy decisions, and organization investments. In this study, we propose a hybrid approach referred to as MKL-DE, which combines multiple kernel learning (MKL) with differential evolution (DE) for trading a currency pair. MKL is used to learn a model that predicts changes in the target currency pair, whereas DE is used to generate the buy and sell signals for the target currency pair based on the relative strength index (RSI), which is also combined with the MKL output as a trading signal. The new hybrid implementation is applied to EUR/USD trading, which is the most traded foreign exchange (FX) currency pair. MKL is essential for utilizing information from multiple information sources and DE is essential for formulating a trading rule based on a mixture of discrete structures and continuous parameters. Initially, the prediction model optimized by MKL predicts the returns based on a technical indicator called the moving average convergence divergence (MACD). Next, a combined trading signal is optimized by DE using the inputs from the prediction model and the technical indicator RSI obtained from multiple timeframes. The experimental results showed that trading using the prediction learned by MKL yielded consistent profits. PMID:25097891
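For readers unfamiliar with differential evolution, the following minimal DE/rand/1/bin optimizer sketches the DE ingredient only; the sphere objective stands in for the actual trading-rule fitness, and the population size, mutation factor F, and crossover rate CR are illustrative values, not those of MKL-DE.

```python
import numpy as np

def differential_evolution(f, bounds, pop=20, gens=100, F=0.8, CR=0.9, seed=0):
    """Minimize f over box bounds with the classic DE/rand/1/bin scheme."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    d = len(bounds)
    X = rng.uniform(lo, hi, size=(pop, d))          # initial population
    fit = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            # mutation: three distinct members other than i
            a, b, c = X[rng.choice([j for j in range(pop) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # binomial crossover, guaranteeing at least one mutant gene
            cross = rng.random(d) < CR
            cross[rng.integers(d)] = True
            trial = np.where(cross, mutant, X[i])
            # greedy selection
            ft = f(trial)
            if ft <= fit[i]:
                X[i], fit[i] = trial, ft
    i = int(np.argmin(fit))
    return X[i], float(fit[i])

sphere = lambda x: float(np.sum(x**2))              # stand-in for a trading fitness
best_x, best_f = differential_evolution(sphere, [(-5, 5)] * 3)
```

In MKL-DE the continuous parameters would be signal thresholds and mixing weights, and the fitness would be backtested profit rather than this toy objective.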
NASA Astrophysics Data System (ADS)
Odorico, R.
1981-06-01
A Monte Carlo method is presented for the calculation of the QCD evolution of structure functions. Its application is discussed in detail in the framework of the LLA, but it can also be used with modified parton decay probability functions including higher-order effects. For heavy quark production, threshold constraints can be correctly taken into account, and one obtains results which at low Q^2 are consistent with those of the photon-gluon fusion model.
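The flavor of such a Monte Carlo treatment can be conveyed with a toy parton-branching model (an illustration, not the paper's algorithm): at each step in evolution "time" a parton radiates with some probability and keeps a momentum fraction z drawn from a 1/(1-z)-type splitting density, so the x distribution softens as evolution proceeds. All probabilities and cutoffs below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_z(n, zmin=0.5, zmax=0.99):
    # inverse-transform sampling of the retained momentum fraction z
    # from a toy splitting density P(z) ~ 1/(1 - z) on [zmin, zmax]
    u = rng.random(n)
    return 1.0 - (1.0 - zmin) * ((1.0 - zmax) / (1.0 - zmin)) ** u

def evolve(x0, steps, emit_prob=0.3):
    # each step in ln Q^2: a parton radiates with probability emit_prob
    # and keeps a fraction z of its light-cone momentum
    x = x0.copy()
    for _ in range(steps):
        emit = rng.random(x.size) < emit_prob
        x = np.where(emit, x * sample_z(x.size), x)
    return x

x0 = np.full(10000, 0.5)       # ensemble of partons, all starting at x = 0.5
xf = evolve(x0, steps=20)      # after evolution the ensemble shifts to lower x
```

Threshold constraints for heavy quarks would enter such a simulation as vetoes on kinematically forbidden branchings, which is the feature the abstract highlights.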
Exclusive, Hard Diffraction in QCD
NASA Astrophysics Data System (ADS)
Freund, Andreas
1999-03-01
In the first chapter we give an introduction to hard diffractive scattering in QCD to introduce basic concepts and terminology. In the second chapter we make predictions for the evolution of skewed parton distributions in a proton in the LLA. We calculate the DGLAP-type evolution kernels in the LLA and solve the skewed GLAP evolution equations with a modified version of the CTEQ package. In the third chapter, we discuss the algorithms used in the LO evolution program for skewed parton distributions in the DGLAP region, discuss the stability of the code, and reproduce the LO diagonal evolution within less than 0.5% of the original CTEQ code. In chapter 4, we show that factorization holds for the deeply virtual Compton scattering amplitude in QCD, up to power-suppressed terms, to all orders in perturbation theory. In chapter 5, we demonstrate that perturbative QCD allows one to calculate the absolute cross section of diffractive, exclusive production of photons (DVCS) at large Q^2 at HERA, while the aligned jet model allows one to estimate the cross section for intermediate Q^2 ~ 2 GeV^2. We find a significant DVCS counting rate for the current generation of experiments at HERA and a large azimuthal angle asymmetry for HERA kinematics. In the last chapter, we propose a new methodology for obtaining shape fits to skewed parton distributions and, for the first time, for determining the ratio of the real to imaginary part of the DIS amplitude. We do this by using several recent fits to F_2(x,Q^2) to compute the asymmetry A for the combined DVCS and Bethe-Heitler cross section. In the appendix, we give an application of distributional methods as discussed abstractly in chapter 4.
Collins, John; Rogers, Ted
2015-04-01
There is considerable controversy about the size and importance of non-perturbative contributions to the evolution of transverse momentum dependent (TMD) parton distribution functions. Standard fits to relatively high-energy Drell-Yan data give evolution that when taken to lower Q is too rapid to be consistent with recent data in semi-inclusive deeply inelastic scattering. Some authors provide very different forms for TMD evolution, even arguing that non-perturbative contributions at large transverse distance bT are not needed or are irrelevant. Here, we systematically analyze the issues, both perturbative and non-perturbative. We make a motivated proposal for the parameterization of the non-perturbative part of the TMD evolution kernel that could give consistency: with the variety of apparently conflicting data, with theoretical perturbative calculations where they are applicable, and with general theoretical non-perturbative constraints on correlation functions at large distances. We propose and use a scheme- and scale-independent function A(bT) that gives a tool to compare and diagnose different proposals for TMD evolution. We also advocate for phenomenological studies of A(bT) as a probe of TMD evolution. The results are important generally for applications of TMD factorization. In particular, they are important to making predictions for proposed polarized Drell-Yan experiments to measure the Sivers function.
Bornyakov, V.G.
2005-06-01
Possibilities that are provided by a lattice regularization of QCD for studying nonperturbative properties of QCD are discussed. A review of some recent results obtained from computer calculations in lattice QCD is given. In particular, the results for the QCD vacuum structure, the hadron mass spectrum, and the strong coupling constant are considered.
Albacete, Javier L.; Armesto, Nestor; Salgado, Carlos A.; Milhano, Jose Guilherme
2009-08-01
We perform a global fit to the structure function F_2 measured in lepton-proton experiments at small values of Bjorken-x, x ≤ 0.01, for all experimentally available values of Q^2, 0.045 GeV^2 ≤ Q^2 ≤ 800 GeV^2. We show that the recent improvements resulting from the inclusion of running coupling corrections allow for a description of data in terms of nonlinear QCD evolution equations. In this approach F_2 is calculated within the dipole model with all Bjorken-x dependence described by the running coupling Balitsky-Kovchegov equation. Two different initial conditions for the evolution are used, both yielding good fits to data with χ^2/d.o.f. < 1.1. The proton longitudinal structure function F_L, not included in the fits, is also well described. Our analysis allows us to perform a first-principles extrapolation of the proton-dipole scattering amplitude once the initial condition has been fitted to presently available data. We provide predictions for F_2 and F_L in the kinematical regions of interest for future colliders and ultra-high energy cosmic rays. A numerical implementation of our results down to x = 10^-12 is released as a computer code for public use.
Small-x Evolution of Structure Functions in the Next-to-Leading Order
Giovanni Antonio Chirilli
2009-12-01
The high-energy behavior of amplitudes in gauge theories can be reformulated in terms of the evolution of Wilson-line operators. In the leading order this evolution is governed by the nonlinear Balitsky-Kovchegov (BK) equation. The NLO corrections define the scale of the running coupling constant in the BK equation and in QCD, its kernel has both conformal and non-conformal parts. To separate the conformally invariant effects from the running-coupling effects, we calculate the NLO evolution of the color dipoles in the conformal N = 4 SYM theory, then we define the "composite dipole operator" with the rapidity cutoff preserving conformal invariance, and the resulting Möbius invariant kernel for this operator agrees with the forward NLO BFKL calculation. In QCD, the NLO kernel for the composite operators resolves into a sum of the conformal part and the running-coupling part.
Small-x Evolution of Structure Functions in the Next-to-Leading Order
Chirilli, Giovanni A.
2009-12-17
The high-energy behavior of amplitudes in gauge theories can be reformulated in terms of the evolution of Wilson-line operators. In the leading order this evolution is governed by the nonlinear Balitsky-Kovchegov (BK) equation. The NLO corrections define the scale of the running-coupling constant in the BK equation and in QCD, its kernel has both conformal and non-conformal parts. To separate the conformally invariant effects from the running-coupling effects, we calculate the NLO evolution of the color dipoles in the conformal N = 4 SYM theory, then we define the 'composite dipole operator' with the rapidity cutoff preserving conformal invariance, and the resulting Möbius invariant kernel for this operator agrees with the forward NLO BFKL calculation. In QCD, the NLO kernel for the composite operators resolves into a sum of the conformal part and the running-coupling part.
Extraction of quark transversity distribution and Collins fragmentation functions with QCD evolution
Kang, Zhong-Bo; Prokudin, Alexei; Sun, Peng; Yuan, Feng
2016-01-13
In this paper, we study the transverse momentum dependent (TMD) evolution of the Collins azimuthal asymmetries in e+e- annihilations and semi-inclusive hadron production in deep inelastic scattering (SIDIS) processes. All the relevant coefficients are calculated up to the next-to-leading logarithmic (NLL) order accuracy. By applying the TMD evolution at the approximate NLL order in the Collins-Soper-Sterman (CSS) formalism, we extract transversity distributions for u and d quarks and Collins fragmentation functions from current experimental data by a global analysis of the Collins asymmetries in back-to-back di-hadron productions in e+e- annihilations measured by the BELLE and BABAR Collaborations and SIDIS data from the HERMES, COMPASS, and JLab HALL A experiments. The impact of the evolution effects and the relevant theoretical uncertainties are discussed. We further discuss the TMD interpretation of our results, and illustrate the unpolarized quark distribution, transversity distribution, unpolarized quark fragmentation and Collins fragmentation functions depending on the transverse momentum and the hard momentum scale. Finally, we give predictions and discuss the impact of future experiments.
Extraction of quark transversity distribution and Collins fragmentation functions with QCD evolution
NASA Astrophysics Data System (ADS)
Kang, Zhong-Bo; Prokudin, Alexei; Sun, Peng; Yuan, Feng
2016-01-01
We study the transverse-momentum-dependent (TMD) evolution of the Collins azimuthal asymmetries in e+e- annihilations and semi-inclusive hadron production in deep inelastic scattering processes. All the relevant coefficients are calculated up to the next-to-leading-logarithmic-order accuracy. By applying the TMD evolution at the approximate next-to-leading-logarithmic order in the Collins-Soper-Sterman formalism, we extract transversity distributions for u and d quarks and Collins fragmentation functions from current experimental data by a global analysis of the Collins asymmetries in back-to-back dihadron productions in e+e- annihilations measured by BELLE and BABAR collaborations and semi-inclusive hadron production in deep inelastic scattering data from HERMES, COMPASS, and JLab HALL A experiments. The impact of the evolution effects and the relevant theoretical uncertainties are discussed. We further discuss the TMD interpretation for our results and illustrate the unpolarized quark distribution, transversity distribution, unpolarized quark fragmentation, and Collins fragmentation functions depending on the transverse momentum and the hard momentum scale. We make detailed predictions for future experiments and discuss their impact.
Small-x evolution of structure functions in the next-to-leading order
Giovanni A. Chirilli
2010-01-01
The high-energy behavior of amplitudes in gauge theories can be reformulated in terms of the evolution of Wilson-line operators. In the leading order this evolution is governed by the non-linear Balitsky-Kovchegov (BK) equation. In QCD the NLO kernel has both conformal and non-conformal parts. To separate the conformally invariant effects from the running-coupling effects, we calculate the NLO evolution of the color dipoles in the conformal N = 4 SYM theory, then we define the "composite dipole operator", and the resulting Möbius invariant kernel for this operator agrees with the forward NLO BFKL calculation.
Two-loop conformal generators for leading-twist operators in QCD
NASA Astrophysics Data System (ADS)
Braun, V. M.; Manashov, A. N.; Moch, S.; Strohmaier, M.
2016-03-01
QCD evolution equations in minimal subtraction schemes have a hidden symmetry: one can construct three operators that commute with the evolution kernel and form an SL(2) algebra, i.e. they satisfy (exactly) the SL(2) commutation relations. In this paper we find explicit expressions for these operators to two-loop accuracy going over to QCD in non-integer d = 4 - 2ɛ space-time dimensions at the intermediate stage. In this way conformal symmetry of QCD is restored on quantum level at the specially chosen (critical) value of the coupling, and at the same time the theory is regularized allowing one to use the standard renormalization procedure for the relevant Feynman diagrams. Quantum corrections to conformal generators in d = 4 - 2ɛ effectively correspond to the conformal symmetry breaking in the physical theory in four dimensions and the SL(2) commutation relations lead to nontrivial constraints on the renormalization group equations for composite operators. This approach is valid to all orders in perturbation theory and the result includes automatically all terms that can be identified as due to a nonvanishing QCD β-function (in the physical theory in four dimensions). Our result can be used to derive three-loop evolution equations for flavor-nonsinglet quark-antiquark operators including mixing with the operators containing total derivatives. These equations govern, e.g., the scale dependence of generalized hadron parton distributions and light-cone meson distribution amplitudes.
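The SL(2) commutation relations quoted above are easy to verify in the classical (tree-level) representation, where the collinear generators act as first-order differential operators on a field of conformal spin j. The sympy check below is a sketch of that textbook representation only, not of the two-loop generators constructed in the paper; the value of j is arbitrary for the algebra.

```python
import sympy as sp

z = sp.symbols('z')
j = sp.Rational(3, 2)                # conformal spin; any value satisfies the algebra
f = sp.Function('f')(z)

# canonical collinear SL(2) generators acting on a field of spin j
S_minus = lambda g: -sp.diff(g, z)
S_plus  = lambda g: z**2 * sp.diff(g, z) + 2*j*z*g
S_zero  = lambda g: z * sp.diff(g, z) + j*g

comm = lambda A, B, g: sp.expand(A(B(g)) - B(A(g)))

c1 = sp.expand(comm(S_zero, S_plus, f) - S_plus(f))      # [S0, S+] = +S+
c2 = sp.expand(comm(S_zero, S_minus, f) + S_minus(f))    # [S0, S-] = -S-
c3 = sp.expand(comm(S_plus, S_minus, f) - 2*S_zero(f))   # [S+, S-] = 2 S0
```

Each residual expands to zero, confirming the exact SL(2) relations; the content of the paper is that suitably corrected generators obey the same relations order by order in the coupling.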
Exclusive, hard diffraction in QCD
NASA Astrophysics Data System (ADS)
Freund, Andreas
In the first chapter we give an introduction to hard diffractive scattering in QCD to introduce basic concepts and terminology, thus setting the stage for the following chapters. In the second chapter we make predictions for nondiagonal parton distributions in a proton in the LLA. We calculate the DGLAP-type evolution kernels in the LLA, solve the nondiagonal GLAP evolution equations with a modified version of the CTEQ package and comment on the range of applicability of the LLA in the asymmetric regime. We show that the nondiagonal gluon distribution g(x_1,x_2,t,μ^2) can be well approximated at small x by the conventional gluon density xG(x,μ^2). In the third chapter, we discuss the algorithms used in the LO evolution program for nondiagonal parton distributions in the DGLAP region and discuss the stability of the code. Furthermore, we demonstrate that we can reproduce the case of the LO diagonal evolution within less than 0.5% of the original code as developed by the CTEQ collaboration. In chapter 4, we show that factorization holds for the deeply virtual Compton scattering amplitude in QCD, up to power-suppressed terms, to all orders in perturbation theory. Furthermore, we show that the virtuality of the produced photon does not influence the general theorem. In chapter 5, we demonstrate that perturbative QCD allows one to calculate the absolute cross section of diffractive exclusive production of photons at large Q^2 at HERA, while the aligned jet model allows one to estimate the cross section for intermediate Q^2 ~ 2 GeV^2. Furthermore, we find that the imaginary part of the amplitude for the production of real photons is larger than the imaginary part of the corresponding DIS amplitude, leading to predictions of a significant counting rate for the current generation of experiments at HERA. We also find a large azimuthal angle asymmetry in ep scattering for HERA kinematics which allows one to directly measure the real part of the DVCS amplitude and hence the
The Chroma Software System for Lattice QCD
Robert Edwards; Balint Joo
2004-06-01
We describe aspects of the Chroma software system for lattice QCD calculations. Chroma is an open source C++ based software system developed using the software infrastructure of the US SciDAC initiative. Chroma interfaces with output from the BAGEL assembly generator for optimized lattice fermion kernels on some architectures. It can be run on workstations, clusters and the QCDOC supercomputer.
Hess, Peter O.
2006-09-25
A review is presented of the contributions of Mexican scientists to QCD phenomenology. These contributions range from constituent quark models (CQM) with a fixed number of quarks (antiquarks) to those where the number of quarks is not conserved. Glueball spectra were also treated with phenomenological models. Several other approaches are mentioned.
QCD at nonzero chemical potential: Recent progress on the lattice
NASA Astrophysics Data System (ADS)
Aarts, Gert; Attanasio, Felipe; Jäger, Benjamin; Seiler, Erhard; Sexty, Dénes; Stamatescu, Ion-Olimpiu
2016-01-01
We summarise recent progress in simulating QCD at nonzero baryon density using complex Langevin dynamics. After a brief outline of the main idea, we discuss gauge cooling as a means to control the evolution. Subsequently we present a status report for heavy dense QCD and its phase structure, full QCD with staggered quarks, and full QCD with Wilson quarks, both directly and using the hopping parameter expansion to all orders.
Initial-state splitting kernels in cold nuclear matter
NASA Astrophysics Data System (ADS)
Ovanesyan, Grigory; Ringer, Felix; Vitev, Ivan
2016-09-01
We derive medium-induced splitting kernels for energetic partons that undergo interactions in dense QCD matter before a hard-scattering event at large momentum transfer Q^2. Working in the framework of the effective theory SCET_G, we compute the splitting kernels beyond the soft gluon approximation. We present numerical studies that compare our new results with previous findings. We expect the full medium-induced splitting kernels to be most relevant for the extension of initial-state cold nuclear matter energy loss phenomenology in both p+A and A+A collisions.
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable to large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be computed and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide a solid theoretical analysis of why the proposed approximation model works for kernel competitive learning and, furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallel approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle to using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. The proposed methods also achieve more effective clustering performance in terms of clustering precision compared with related approximate clustering approaches.
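A minimal sketch of the subspace-via-sampling idea (an illustration, not the authors' AKCL algorithm): prototypes are kept as coefficient vectors over m sampled landmark points, so only an N x m kernel slice is ever formed instead of the full N x N matrix. The landmark count, learning rate, initialization heuristic, and toy data are all assumptions of this sketch.

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def akcl_sketch(X, k=2, m=20, epochs=20, eta=0.5, seed=0):
    """Kernel competitive learning restricted to the span of m sampled landmarks."""
    rng = np.random.default_rng(seed)
    S = X[rng.choice(len(X), m, replace=False)]        # landmark sample
    Kss = rbf(S, S)
    Kss_inv = np.linalg.pinv(Kss)
    Kxs = rbf(X, S)                                    # only an N x m slice, never N x N
    # initialize prototypes at two extreme points (avoids dead units in this toy)
    seeds = [int(np.argmin(X.sum(1))), int(np.argmax(X.sum(1)))]
    W = np.array([Kss_inv @ Kxs[i] for i in seeds[:k]])
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            alpha = Kss_inv @ Kxs[i]                   # projection of phi(x_i) on the span
            quad = np.einsum('cm,mn,cn->c', W, Kss, W)
            d = quad - 2.0 * (W @ Kxs[i])              # feature-space distance, k(x,x) dropped
            win = int(np.argmin(d))
            W[win] += eta * (alpha - W[win])           # the winner moves toward the projection
    quad = np.einsum('cm,mn,cn->c', W, Kss, W)
    return np.argmin(quad[None, :] - 2.0 * (Kxs @ W.T), axis=1)

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(-2.0, 0.3, (100, 2)), rng.normal(2.0, 0.3, (100, 2))])
labels = akcl_sketch(X)
truth = np.r_[np.zeros(100, int), np.ones(100, int)]
agree = float(np.mean(labels == truth))
accuracy = max(agree, 1.0 - agree)                     # label permutation is arbitrary
```

On two well-separated blobs this subspace version recovers the clustering while storing only the m x m and N x m kernel blocks, which is the complexity reduction the abstract claims.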
Quark-hadron duality: Pinched kernel approach
NASA Astrophysics Data System (ADS)
Dominguez, C. A.; Hernandez, L. A.; Schilcher, K.; Spiesberger, H.
2016-08-01
Hadronic spectral functions measured by the ALEPH collaboration in the vector and axial-vector channels are used to study potential quark-hadron duality violations (DV). This is done entirely in the framework of pinched-kernel finite energy sum rules (FESR), i.e. in a model-independent fashion. The kinematical range of the ALEPH data is effectively extended up to s = 10 GeV^2 by using an appropriate kernel, and assuming that in this region the spectral functions are given by perturbative QCD. Support for this assumption is obtained by using e+e- annihilation data in the vector channel. Results in both channels show a good saturation of the pinched FESR, without further need of explicit models of DV.
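The benefit of pinching can be demonstrated with a toy duality-violation model: a kernel that vanishes at s = s0 suppresses the sum rule's sensitivity to the region where the contour meets the cut, so the extracted moment becomes more stable under variations of s0. The numeric sketch below uses an illustrative damped-oscillation DV term and the simplest pinched kernel, 1 - s/s0; none of the parameters are those of the ALEPH analysis.

```python
import numpy as np

# toy duality-violating part of the spectral function: e^{-gamma s} sin(alpha s)
GAMMA, ALPHA = 0.5, 3.0
dv = lambda s: np.exp(-GAMMA * s) * np.sin(ALPHA * s)

def fesr_residual(s0, pinched):
    """DV contamination of an FESR moment, with or without the pinch factor."""
    s = np.linspace(0.0, s0, 2001)
    kern = (1.0 - s / s0) if pinched else np.ones_like(s)
    g = kern * dv(s)
    return float((g[:-1] + g[1:]).sum() * (s[1] - s[0]) / 2.0)  # trapezoid rule

# scan the duality point s0 and measure how much the residual wanders
s0_scan = np.linspace(2.0, 4.0, 81)
plain = np.array([fesr_residual(s0, pinched=False) for s0 in s0_scan])
pinch = np.array([fesr_residual(s0, pinched=True) for s0 in s0_scan])
spread_plain = float(plain.max() - plain.min())
spread_pinch = float(pinch.max() - pinch.min())
```

The pinched moment's spread over the s0 window is markedly smaller, which is the model-independent stabilization that pinched kernels provide.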
Random walk through recent CDF QCD results
C. Mesropian
2003-04-09
We present recent results on jet fragmentation, jet evolution in jet and minimum bias events, and underlying event studies. The results presented in this talk address significant questions relevant to QCD and, in particular, to jet studies. One topic discussed is jet fragmentation and the possibility of describing it down to very small momentum scales in terms of pQCD. Another topic is the studies of underlying event energy originating from fragmentation of partons not associated with the hard scattering.
Modeling QCD for Hadron Physics
Tandy, P. C.
2011-10-24
We review the approach to modeling soft hadron physics observables based on the Dyson-Schwinger equations of QCD. The focus is on light quark mesons and in particular the pseudoscalar and vector ground states, their decays and electromagnetic couplings. We detail the wide variety of observables that can be correlated by a ladder-rainbow kernel with one infrared parameter fixed to the chiral quark condensate. A recently proposed novel perspective in which the quark condensate is contained within hadrons and not the vacuum is mentioned. The valence quark parton distributions, in the pion and kaon, as measured in the Drell Yan process, are investigated with the same ladder-rainbow truncation of the Dyson-Schwinger and Bethe-Salpeter equations.
Modeling QCD for Hadron Physics
NASA Astrophysics Data System (ADS)
Tandy, P. C.
2011-10-01
We review the approach to modeling soft hadron physics observables based on the Dyson-Schwinger equations of QCD. The focus is on light quark mesons and in particular the pseudoscalar and vector ground states, their decays and electromagnetic couplings. We detail the wide variety of observables that can be correlated by a ladder-rainbow kernel with one infrared parameter fixed to the chiral quark condensate. A recently proposed novel perspective in which the quark condensate is contained within hadrons and not the vacuum is mentioned. The valence quark parton distributions, in the pion and kaon, as measured in the Drell Yan process, are investigated with the same ladder-rainbow truncation of the Dyson-Schwinger and Bethe-Salpeter equations.
Hatsuda, Tetsuo
2012-11-12
Dynamics of hadrons and nuclei are governed by the fundamental theory of the strong interaction, quantum chromodynamics (QCD). The current status of QCD and its applications to nuclear physics are reviewed.
Small-x Evolution in the Next-to-Leading Order
Ian Balitsky
2009-10-01
The high-energy behavior of amplitudes in gauge theories can be reformulated in terms of the evolution of Wilson-line operators. In the leading order this evolution is governed by the non-linear BK equation. The NLO corrections define the scale of the running-coupling constant in the BK equation and in QCD, its kernel has both conformal and non-conformal parts. To separate the conformally invariant effects from the running-coupling effects, we calculate the NLO evolution of the color dipoles in the conformal N=4 SYM theory, then we define the 'composite dipole operator' with the rapidity cutoff preserving conformal invariance, and the resulting Möbius invariant kernel for this operator agrees with the forward NLO BFKL calculation.
Nuclear reactions from lattice QCD
Briceño, Raúl A.; Davoudi, Zohreh; Luu, Thomas C.
2015-01-13
One of the overarching goals of nuclear physics is to rigorously compute properties of hadronic systems directly from the fundamental theory of strong interactions, Quantum Chromodynamics (QCD). In particular, the hope is to perform reliable calculations of nuclear reactions which will impact our understanding of environments that occur during big bang nucleosynthesis, the evolution of stars and supernovae, and within nuclear reactors and high energy/density facilities. Such calculations, being truly ab initio, would include all two-nucleon and three-nucleon (and higher) interactions in a consistent manner. Currently, lattice QCD provides the only reliable option for performing calculations of some of the low-energy hadronic observables. With the aim of bridging the gap between lattice QCD and nuclear many-body physics, the Institute for Nuclear Theory held a workshop on Nuclear Reactions from Lattice QCD in March 2013. In this review article, we report on the topics discussed in this workshop and the path planned to move forward in the upcoming years.
NASA Astrophysics Data System (ADS)
Kuramashi, Yoshinobu
2007-12-01
Preface -- Fixed point actions, symmetries and symmetry transformations on the lattice / P. Hasenfratz -- Algorithms for dynamical fermions / A. D. Kennedy -- Applications of chiral perturbation theory to lattice QCD / Stephen R. Sharpe -- Lattice QCD with a chiral twist / S. Sint -- Non-perturbative QCD: renormalization, O(A) - Improvement and matching to Heavy Quark effective theory / Rainer Sommer.
Strange quark condensate from QCD sum rules to five loops
NASA Astrophysics Data System (ADS)
Dominguez, Cesareo A.; Nasrallah, Nasrallah F.; Schilcher, Karl
2008-02-01
It is argued that it is valid to use QCD sum rules to determine the scalar and pseudoscalar two-point functions at zero momentum, which in turn determine the ratio of the strange to non-strange quark condensates R_su = ⟨s̄s⟩/⟨q̄q⟩ (q = u, d). This is done in the framework of a new set of QCD Finite Energy Sum Rules (FESR) that involve, as integration kernel, a second-degree polynomial tuned to reduce considerably the systematic uncertainties in the hadronic spectral functions. As a result, the parameters limiting the precision of this determination are Λ_QCD and, to a major extent, the strange quark mass. From the positivity of R_su there follows an upper bound on the latter: m̄_s(2 GeV) ≤ 121 (105) MeV, for Λ_QCD = 330 (420) MeV.
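Schematically, a Finite Energy Sum Rule of the type described above equates a weighted integral of the hadronic spectral function to a contour integral evaluated in QCD; with a polynomial integration kernel P(s) and duality radius s₀ it takes the generic textbook form (stated here for orientation, not quoted from the paper):

```latex
\frac{1}{\pi}\int_0^{s_0} ds\, P(s)\,\operatorname{Im}\psi(s)\Big|_{\rm HAD}
  \;=\; -\,\frac{1}{2\pi i}\oint_{|s|=s_0} ds\, P(s)\,\psi(s)\Big|_{\rm QCD}
```

Tuning the second-degree P(s) to suppress the region where the hadronic spectral function is poorly known is what reduces the systematic uncertainty.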
Duff, I.
1994-12-31
This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: 'Current status of user level sparse BLAS'; 'Current status of the sparse BLAS toolkit'; and 'Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit'.
Analog forecasting with dynamics-adapted kernels
NASA Astrophysics Data System (ADS)
Zhao, Zhizhen; Giannakis, Dimitrios
2016-09-01
Analog forecasting is a nonparametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from kernel methods developed in harmonic analysis and machine learning and state-space reconstruction for dynamical systems. A key ingredient of our approach is to replace single-analog forecasting with weighted ensembles of analogs constructed using local similarity kernels. The kernels used here employ a number of dynamics-dependent features designed to improve forecast skill, including Takens’ delay-coordinate maps (to recover information in the initial data lost through partial observations) and a directional dependence on the dynamical vector field generating the data. Mathematically, our approach is closely related to kernel methods for out-of-sample extension of functions, and we discuss alternative strategies based on the Nyström method and the multiscale Laplacian pyramids technique. We illustrate these techniques in applications to forecasting in a low-order deterministic model for atmospheric dynamics with chaotic metastability, and interannual-scale forecasting in the North Pacific sector of a comprehensive climate model. We find that forecasts based on kernel-weighted ensembles have significantly higher skill than the conventional approach following a single analog.
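The weighted-ensemble idea above can be sketched in a few lines; this is a minimal illustration under simplifying assumptions (a scalar time series, a plain Gaussian similarity kernel on Takens delay coordinates), with function names of our own choosing rather than from the paper:

```python
import numpy as np

def delay_embed(x, q):
    """Takens delay-coordinate embedding: row j is (x[j], ..., x[j+q-1])."""
    return np.stack([x[i:len(x) - q + i + 1] for i in range(q)], axis=1)

def kernel_analog_forecast(history, initial, lead, q=3, eps=1.0):
    """Forecast `lead` steps ahead with a kernel-weighted ensemble of analogs.

    history : 1-D array, the historical record of the observable
    initial : 1-D array (length >= q), the current initial data
    eps     : bandwidth of the Gaussian similarity kernel
    """
    emb = delay_embed(np.asarray(history), q)
    valid = len(emb) - lead                 # analogs whose future is on record
    query = np.asarray(initial)[-q:]
    d2 = np.sum((emb[:valid] - query) ** 2, axis=1)
    w = np.exp(-d2 / eps)                   # local similarity kernel weights
    w /= w.sum()
    # each analog's value `lead` steps after the end of its window
    future = history[q - 1 + lead : q - 1 + lead + valid]
    return float(np.dot(w, future))
```

With a sharp kernel bandwidth this reduces to single-analog (nearest-neighbor) forecasting; widening eps blends more analogs into the ensemble.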
Inheritance of Kernel Color in Corn: Explanations and Investigations.
ERIC Educational Resources Information Center
Ford, Rosemary H.
2000-01-01
Offers a new perspective on traditional problems in genetics on kernel color in corn, including information about genetic regulation, metabolic pathways, and evolution of genes. (Contains 15 references.) (ASK)
NASA Astrophysics Data System (ADS)
Kuboyama, Tetsuji; Hirata, Kouichi; Kashima, Hisashi; Aoki-Kinoshita, Kiyoko F.; Yasuda, Hiroshi
Learning from tree-structured data has received increasing interest with the rapid growth of tree-encodable data in the World Wide Web, in biology, and in other areas. Our kernel function measures the similarity between two trees by counting the number of shared sub-patterns called tree q-grams, and runs, in effect, in linear time with respect to the number of tree nodes. We apply our kernel function with a support vector machine (SVM) to classify biological data, the glycans of several blood components. The experimental results show that our kernel function performs as well as one exclusively tailored to glycan properties.
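The counting underlying such a kernel can be illustrated on plain sequences (the paper's q-grams are small tree sub-patterns; this sketch, with hypothetical function names, uses contiguous substrings instead):

```python
from collections import Counter

def qgrams(seq, q):
    """Multiset of contiguous q-grams of a sequence."""
    return Counter(tuple(seq[i:i + q]) for i in range(len(seq) - q + 1))

def qgram_kernel(s, t, q=2):
    """Similarity = number of shared q-grams (multiset intersection).

    For trees, as in the paper, the q-grams would be small sub-patterns
    obtained from the tree structure; here we count on plain sequences.
    """
    a, b = qgrams(s, q), qgrams(t, q)
    return sum(min(a[g], b[g]) for g in a)
```

The resulting Gram matrix of pairwise similarities is positive semi-definite and can be fed directly to an SVM, as done for the glycan classification task.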
QCD dynamics in mesons at soft and hard scales
Nguyen, T.; Souchlas, N. A.; Tandy, P. C.
2010-07-27
Using a ladder-rainbow kernel previously established for the soft scale of light-quark hadrons, we explore, within a Dyson-Schwinger approach, phenomena that mix soft and hard scales of QCD. The difference between vector and axial-vector current correlators is examined to estimate the four-quark chiral condensate and the leading distance scale for the onset of non-perturbative phenomena in QCD. The valence quark distributions in the pion and kaon, defined in deep inelastic scattering and measured in the Drell-Yan process, are investigated with the same ladder-rainbow truncation of the Dyson-Schwinger and Bethe-Salpeter equations.
Robotic Intelligence Kernel: Communications
Walton, Mike C.
2009-09-16
The INL Robotic Intelligence Kernel-Comms is the communication server that transmits information between one or more robots using the RIK and one or more user interfaces. It supports event handling and multiple hardware communication protocols.
NASA Astrophysics Data System (ADS)
Wilczek, Frank
Introduction Symmetry and the Phenomena of QCD Apparent and Actual Symmetries Asymptotic Freedom Confinement Chiral Symmetry Breaking Chiral Anomalies and Instantons High Temperature QCD: Asymptotic Properties Significance of High Temperature QCD Numerical Indications for Quasi-Free Behavior Ideas About Quark-Gluon Plasma Screening Versus Confinement Models of Chiral Symmetry Breaking More Refined Numerical Experiments High-Temperature QCD: Phase Transitions Yoga of Phase Transitions and Order Parameters Application to Glue Theories Application to Chiral Transitions Close Up on Two Flavors A Genuine Critical Point! (?) High-Density QCD: Methods Hopes, Doubts, and Fruition Another Renormalization Group Pairing Theory Taming the Magnetic Singularity High-Density QCD: Color-Flavor Locking and Quark-Hadron Continuity Gauge Symmetry (Non)Breaking Symmetry Accounting Elementary Excitations A Modified Photon Quark-Hadron Continuity Remembrance of Things Past More Quarks Fewer Quarks and Reality
Robotic Intelligence Kernel: Driver
2009-09-16
The INL Robotic Intelligence Kernel-Driver is built on top of the RIK-A and implements a dynamic autonomy structure. The RIK-D is used to orchestrate hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a single cognitive behavior kernel that provides intrinsic intelligence for a wide variety of unmanned ground vehicle systems.
Urban, Federico R.; Zhitnitsky, Ariel R.
2010-08-30
We review two mechanisms rooted in the infrared sector of QCD which, by exploiting the properties of the QCD ghost, as introduced by Veneziano, provide new insight into the cosmological dark energy problem: first, in the form of a Casimir-like energy from quantising QCD in a box, and second, in the form of additional, time-dependent vacuum energy density in an expanding universe. Based on [1, 2].
Linearized Kernel Dictionary Learning
NASA Astrophysics Data System (ADS)
Golts, Alona; Elad, Michael
2016-06-01
In this paper we present a new approach to incorporating kernels into dictionary learning. The kernel K-SVD algorithm (KKSVD), which has been introduced recently, shows an improvement in classification performance relative to its linear counterpart K-SVD. However, this algorithm requires the storage and handling of a very large kernel matrix, which leads to high computational cost, while also limiting its use to setups with a small number of training examples. We address these problems by combining two ideas: first, we approximate the kernel matrix using a cleverly sampled subset of its columns via the Nyström method; second, as we wish to avoid using this matrix altogether, we decompose it by SVD to form new "virtual samples," on which any linear dictionary learning can be employed. Our method, termed "Linearized Kernel Dictionary Learning" (LKDL), can be seamlessly applied as a pre-processing stage on top of any efficient off-the-shelf dictionary learning scheme, effectively "kernelizing" it. We demonstrate the effectiveness of our method on several tasks of both supervised and unsupervised classification and show the efficiency of the proposed scheme, its easy integration and performance-boosting properties.
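A minimal sketch of the combined idea, under the assumption of a Gaussian kernel (names and API are ours; the landmark block is eigendecomposed here, a standard way to realize the Nyström factorization, and the paper's pipeline would feed the resulting virtual samples to an off-the-shelf scheme such as K-SVD):

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row-sample sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def virtual_samples(X, n_landmarks, gamma=1.0, seed=0):
    """Nystrom-style virtual samples F with F.T @ F approximating K.

    Sample landmark columns C of the kernel matrix, eigendecompose the
    small landmark block W, and map every point through W^{-1/2}; any
    linear dictionary-learning scheme can then run on the columns of F.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=n_landmarks, replace=False)
    C = rbf(X, X[idx], gamma)                  # n x m sampled columns
    W = rbf(X[idx], X[idx], gamma)             # m x m landmark block
    vals, vecs = np.linalg.eigh(W)
    vals = np.clip(vals, 1e-12, None)          # guard tiny/negative eigenvalues
    W_inv_half = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return (C @ W_inv_half).T                  # m x n virtual samples
```

When the number of landmarks equals the number of samples the factorization reproduces the full kernel matrix exactly; in practice m ≪ n keeps both memory and cost linear in n.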
None
2016-07-12
Modern QCD - Lecture 3 We will introduce processes with initial-state hadrons and discuss parton distributions, sum rules, as well as the need for a factorization scale once radiative corrections are taken into account. We will then discuss the DGLAP equation, the evolution of parton densities, as well as ways in which parton densities are extracted from data.
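For reference, the DGLAP equation discussed in the lecture has the standard form, for parton densities f_i(x, μ²) and splitting functions P_ij (a textbook statement, not taken from this abstract):

```latex
\frac{\partial f_i(x,\mu^2)}{\partial\ln\mu^2}
  = \sum_j \frac{\alpha_s(\mu^2)}{2\pi}
    \int_x^1 \frac{dz}{z}\, P_{ij}(z)\, f_j\!\left(\frac{x}{z},\mu^2\right)
```

The factorization scale μ separates the radiation absorbed into the parton densities from the hard matrix element, which is why the densities evolve with μ².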
Continuous Advances in QCD 2008
NASA Astrophysics Data System (ADS)
Peloso, Marco M.
2008-12-01
1. High-order calculations in QCD and in general gauge theories. NLO evolution of color dipoles / I. Balitsky. Recent perturbative results on heavy quark decays / J. H. Piclum, M. Dowling, A. Pak. Leading and non-leading singularities in gauge theory hard scattering / G. Sterman. The space-cone gauge, Lorentz invariance and on-shell recursion for one-loop Yang-Mills amplitudes / D. Vaman, Y.-P. Yao -- 2. Heavy flavor physics. Exotic cc¯ mesons / E. Braaten. Search for new physics in B[symbol]-mixing / A. J. Lenz. Implications of D[symbol]-D[symbol] mixing for new physics / A. A. Petrov. Precise determinations of the charm quark mass / M. Steinhauser -- 3. Quark-gluon dynamics at high density and/or high temperature. Crystalline condensate in the chiral Gross-Neveu model / G. V. Dunne, G. Basar. The strong coupling constant at low and high energies / J. H. Kühn. Quarkyonic matter and the phase diagram of QCD / L. McLerran. Statistical QCD with non-positive measure / J. C. Osborn, K. Splittorff, J. J. M. Verbaarschot. From equilibrium to transport properties of strongly correlated fermi liquids / T. Schäfer. Lessons from random matrix theory for QCD at finite density / K. Splittorff, J. J. M. Verbaarschot -- 4. Methods and models of holographic correspondence. Soft-wall dynamics in AdS/QCD / B. Batell. Holographic QCD / N. Evans, E. Threlfall. QCD glueball sum rules and vacuum topology / H. Forkel. The pion form factor in AdS/QCD / H. J. Kwee, R. F. Lebed. The fast life of holographic mesons / R. C. Myers, A. Sinha. Properties of Baryons from D-branes and instantons / S. Sugimoto. The master space of N = 1 quiver gauge theories: counting BPS operators / A. Zaffaroni. Topological field congurations. Skyrmions in theories with massless adjoint quarks / R. Auzzi. Domain walls, localization and confinement: what binds strings inside walls / S. Bolognesi. Static interactions of non-abelian vortices / M. Eto. Vortices which do not abelianize dynamically: semi
Norniella, Olga; /Barcelona, IFAE
2005-01-01
Recent QCD measurements from the CDF collaboration at the Tevatron are presented, together with future prospects as the luminosity increases. The measured inclusive jet cross section is compared to pQCD NLO predictions. Precise measurements on jet shapes and hadronic energy flows are compared to different phenomenological models that describe gluon emissions and the underlying event in hadron-hadron interactions.
Wilson loops and QCD/string scattering amplitudes
Makeenko, Yuri; Olesen, Poul
2009-07-15
We generalize modern ideas about the duality between Wilson loops and scattering amplitudes in N=4 super Yang-Mills theory to large N QCD by deriving a general relation between QCD meson scattering amplitudes and Wilson loops. We then investigate properties of the open-string disk amplitude integrated over reparametrizations. When the Wilson loop is approximated by the area behavior, we find that the QCD scattering amplitude is a convolution of the standard Koba-Nielsen integrand and a kernel. As usual, poles originate from the first factor, whereas no (momentum-dependent) poles can arise from the kernel. We show that the kernel becomes a constant when the number of external particles becomes large. The usual Veneziano amplitude then emerges in the kinematical regime, where the Wilson loop can be reliably approximated by the area behavior. In this case, we obtain a direct duality between Wilson loops and scattering amplitudes when spatial variables and momenta are interchanged, in analogy with the N=4 super Yang-Mills theory case.
LeFebvre, W.
1994-08-01
For many years, the popular program top has aided system administrators in examination of process resource usage on their machines. Yet few are familiar with the techniques involved in obtaining this information. Most of what is displayed by top is available only in the dark recesses of kernel memory. Extracting this information requires familiarity not only with how bytes are read from the kernel, but also with what data needs to be read. The wide variety of systems and variants of the Unix operating system in today's marketplace makes writing such a program very challenging. This paper explores the tremendous diversity in kernel information across the many platforms and the solutions employed by top to achieve and maintain ease of portability in the presence of such divergent systems.
Lepton asymmetry and the cosmic QCD transition
Schwarz, Dominik J.; Stuke, Maik E-mail: mstuke@physik.uni-bielefeld.de
2009-11-01
We study the influence of lepton asymmetries on the evolution of the early Universe. The lepton asymmetry l is poorly constrained by observations and might be orders of magnitude larger than the observed baryon asymmetry b ≅ 10⁻¹⁰, with |l|/b ≤ 2 × 10⁸. We find that lepton asymmetries that are large compared to the tiny baryon asymmetry can influence the dynamics of the QCD phase transition significantly. The cosmic trajectory in the μ_B-T phase diagram of strongly interacting matter becomes a function of lepton (flavour) asymmetry. For tiny or vanishing baryon and lepton asymmetries, lattice QCD simulations show that the cosmic QCD transition is a rapid crossover. However, for large lepton asymmetry, the order of the cosmic transition remains unknown.
Precision QCD measurements in DIS at HERA
NASA Astrophysics Data System (ADS)
Britzger, Daniel
2016-08-01
New and recent results on QCD measurements from the H1 and ZEUS experiments at the HERA ep collider are reviewed. The final results on the combined deep-inelastic neutral and charged current cross-sections are presented and their role in the extraction of parton distribution functions (PDFs) is studied. The PDF fits give insight into the compatibility of QCD evolution and heavy flavor schemes with the data as a function of kinematic variables such as the scale Q². Measurements of jet production cross-sections in ep collisions provide direct tests of QCD, and extractions of the strong coupling constant are performed. Charm and beauty cross-section measurements are used for the determination of the heavy quark masses, and their role in PDF fits is investigated. In the regime of diffractive DIS and photoproduction, dijet and prompt photon production cross-sections provide insights into the process of factorization and the nature of the diffractive exchange.
NLO evolution of color dipoles in N=4 SYM
Balitsky, Ian; Chirilli, Giovanni
2009-01-01
High-energy behavior of amplitudes in a gauge theory can be reformulated in terms of the evolution of Wilson-line operators. In the leading logarithmic approximation it is given by the conformally invariant BK equation for the evolution of color dipoles. In QCD, the next-to-leading order BK equation has both conformal and non-conformal parts, the latter providing the running of the coupling constant. To separate the conformally invariant effects from the running-coupling effects, we calculate the NLO evolution of the color dipoles in the conformal N=4 SYM theory. We define the "composite dipole operator" with the rapidity cutoff preserving conformal invariance. The resulting Möbius-invariant kernel agrees with the forward NLO BFKL calculation of Ref. 1.
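For orientation, the leading-order BK equation referred to above reads, in a common convention for the dipole amplitude 𝒰 with rapidity η (quoted from the standard small-x literature, not from this paper):

```latex
\frac{\partial\,\mathcal{U}(x,y)}{\partial\eta}
 = \frac{\alpha_s N_c}{2\pi^2}\int d^2z\,
   \frac{(x-y)^2}{(x-z)^2(z-y)^2}
   \big[\mathcal{U}(x,z)+\mathcal{U}(z,y)-\mathcal{U}(x,y)
        -\mathcal{U}(x,z)\,\mathcal{U}(z,y)\big]
```

The linear terms reproduce BFKL evolution of the dipole, while the quadratic term provides the nonlinear damping; it is this equation whose NLO generalization is studied in the paper.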
Calculates Thermal Neutron Scattering Kernel.
1989-11-10
Version 00 THRUSH computes the thermal neutron scattering kernel by the phonon expansion method for both coherent and incoherent scattering processes. The calculation of the coherent part is suitable only for calculating the scattering kernel for heavy water.
Robotic Intelligence Kernel: Visualization
2009-09-16
The INL Robotic Intelligence Kernel-Visualization is the software that supports the user interface. It uses the RIK-C software to communicate information to and from the robot. The RIK-V illustrates the data in a 3D display and provides an operating picture wherein the user can task the robot.
Robotic Intelligence Kernel: Architecture
2009-09-16
The INL Robotic Intelligence Kernel Architecture (RIK-A) is a multi-level architecture that supports a dynamic autonomy structure. The RIK-A is used to coalesce hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a framework that can be used to create behaviors for humans to interact with the robot.
QCD String in the Schwinger-Dyson Approach to Heavy-Light Quarkonia
Nefediev, A.V.
2005-03-01
The kernel of the Schwinger-Dyson equation for a heavy-light quarkonium is studied in the limit of potential quark dynamics, and the string correction to the quark-antiquark potential is derived in agreement with the results of the quantum-mechanical QCD string approach. Possible ways of further improvement of the method are outlined and discussed.
Brodsky, Stanley J.; de Teramond, Guy F.; /Costa Rica U.
2012-02-16
-front QCD Hamiltonian 'Light-Front Holography'. Light-Front Holography is in fact one of the most remarkable features of the AdS/CFT correspondence. The Hamiltonian equation of motion in the light-front (LF) is frame independent and has a structure similar to eigenmode equations in AdS space. This makes a direct connection of QCD with AdS/CFT methods possible. Remarkably, the AdS equations correspond to the kinetic energy terms of the partons inside a hadron, whereas the interaction terms build confinement and correspond to the truncation of AdS space in an effective dual gravity approximation. One can also study the gauge/gravity duality starting from the bound-state structure of hadrons in QCD quantized on the light front. The LF Lorentz-invariant Hamiltonian equation for the relativistic bound-state system is P_μ P^μ |ψ(P)⟩ = (P⁺P⁻ − P⊥²)|ψ(P)⟩ = M²|ψ(P)⟩, with P^± = P⁰ ± P³, where the LF time evolution operator P⁻ is determined canonically from the QCD Lagrangian. To a first semiclassical approximation, where quantum loops and quark masses are not included, this leads to a LF Hamiltonian equation which describes the bound-state dynamics of light hadrons in terms of an invariant impact variable ζ which measures the separation of the partons within the hadron at equal light-front time τ = x⁰ + x³. This allows us to identify the holographic variable z in AdS space with the impact variable ζ. The resulting Lorentz-invariant Schrödinger equation for general spin incorporates color confinement and is systematically improvable. Light-front holographic methods were originally introduced by matching the electromagnetic current matrix elements in AdS space with the corresponding expression using LF theory in physical space time.
It was also shown that one obtains identical holographic mapping using the matrix elements of the energy-momentum tensor by perturbing
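In the light-front holographic literature, the semiclassical bound-state equation described in the abstract above takes the form of a one-dimensional eigenvalue problem in the invariant impact variable ζ, with L the LF orbital angular momentum and U(ζ) the effective confining potential (stated from the standard literature, not extracted from this excerpt):

```latex
\left(-\frac{d^2}{d\zeta^2}+\frac{1-4L^2}{4\zeta^2}+U(\zeta)\right)\phi(\zeta)
  = M^2\,\phi(\zeta)
```

The kinetic and centrifugal terms map onto the AdS wave equation, while U(ζ) encodes the truncation of AdS space mentioned above.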
NASA Astrophysics Data System (ADS)
Lutz, Matthias F. M.; Lange, Jens Sören; Pennington, Michael; Bettoni, Diego; Brambilla, Nora; Crede, Volker; Eidelman, Simon; Gillitzer, Albrecht; Gradl, Wolfgang; Lang, Christian B.; Metag, Volker; Nakano, Takashi; Nieves, Juan; Neubert, Sebastian; Oka, Makoto; Olsen, Stephen L.; Pappagallo, Marco; Paul, Stephan; Pelizäus, Marc; Pilloni, Alessandro; Prencipe, Elisabetta; Ritman, Jim; Ryan, Sinead; Thoma, Ulrike; Uwer, Ulrich; Weise, Wolfram
2016-04-01
We report on the EMMI Rapid Reaction Task Force meeting 'Resonances in QCD', which took place at GSI October 12-14, 2015. A group of 26 people met to discuss the physics of resonances in QCD. The aim of the meeting was defined by the following three key questions: What is needed to understand the physics of resonances in QCD? Where does QCD lead us to expect resonances with exotic quantum numbers? What experimental efforts are required to arrive at a coherent picture? For light mesons and baryons only those with up, down and strange quark content were considered. For heavy-light and heavy-heavy meson systems, those with charm quarks were the focus. This document summarizes the discussions by the participants, which in turn led to the coherent conclusions we present here.
NASA Astrophysics Data System (ADS)
Deur, Alexandre; Brodsky, Stanley J.; de Téramond, Guy F.
2016-09-01
We review the present theoretical and empirical knowledge of αs, the fundamental coupling underlying the interactions of quarks and gluons in Quantum Chromodynamics (QCD). The dependence of αs(Q²) on momentum transfer Q encodes the underlying dynamics of hadron physics, from color confinement in the infrared domain to asymptotic freedom at short distances. We review constraints on αs(Q²) at high Q², as predicted by perturbative QCD, and its analytic behavior at small Q², based on models of nonperturbative dynamics. In the introductory part of this review, we explain the phenomenological meaning of the coupling, the reason for its running, and the challenges facing a complete understanding of its analytic behavior in the infrared domain. In the second, more technical, part of the review, we discuss the behavior of αs(Q²) in the high momentum transfer domain of QCD. We review how αs is defined, including its renormalization-scheme dependence, the definition of its renormalization scale, the utility of effective charges, as well as "Commensurate Scale Relations," which connect the various definitions of the QCD coupling without renormalization-scale ambiguity. We also report recent significant measurements and advanced theoretical analyses which have led to precise QCD predictions at high energy. As an example of an important optimization procedure, we discuss the "Principle of Maximum Conformality," which enhances QCD's predictive power by removing the dependence of the predictions for physical observables on the choice of theoretical conventions such as the renormalization scheme. In the last part of the review, we discuss the challenge of understanding the analytic behavior of αs(Q²) in the low momentum transfer domain. We survey various theoretical models for the nonperturbative strongly coupled regime, such as the light-front holographic approach to QCD. This new framework predicts the form of the quark-confinement potential underlying hadron spectroscopy and
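At one loop in the perturbative domain, the running discussed in this review is governed by the familiar expression (with n_f active quark flavors; a textbook formula, included for orientation):

```latex
\alpha_s(Q^2)=\frac{4\pi}{\beta_0\,\ln\!\left(Q^2/\Lambda^2_{\rm QCD}\right)},
\qquad \beta_0 = 11-\tfrac{2}{3}\,n_f
```

Since β₀ > 0 for n_f ≤ 16, the coupling decreases logarithmically with Q², which is the asymptotic freedom referred to above; the breakdown of this formula as Q² → Λ²_QCD marks the infrared domain whose analytic behavior the review addresses.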
Skands, Peter Z.; /Fermilab
2005-07-01
Recent developments in QCD phenomenology have spurred on several improved approaches to Monte Carlo event generation, relative to the post-LEP state of the art. In this brief review, the emphasis is placed on approaches for (1) consistently merging fixed-order matrix element calculations with parton shower descriptions of QCD radiation, (2) improving the parton shower algorithms themselves, and (3) improving the description of the underlying event in hadron collisions.
MC Kernel: Broadband Waveform Sensitivity Kernels for Seismic Tomography
NASA Astrophysics Data System (ADS)
Stähler, Simon C.; van Driel, Martin; Auer, Ludwig; Hosseini, Kasra; Sigloch, Karin; Nissen-Meyer, Tarje
2016-04-01
We present MC Kernel, a software implementation to calculate seismic sensitivity kernels on arbitrary tetrahedral or hexahedral grids across the whole observable seismic frequency band. Seismic sensitivity kernels are the basis for seismic tomography, since they map measurements to model perturbations. Their calculation over the whole frequency range was so far only possible with approximate methods (Dahlen et al. 2000); fully numerical methods were restricted to the lower frequency range (usually below 0.05 Hz, Tromp et al. 2005). With our implementation, it is possible to compute accurate sensitivity kernels for global tomography across the observable seismic frequency band. These kernels rely on wavefield databases computed via AxiSEM (www.axisem.info), and thus on spherically symmetric models. The advantage is that frequencies up to 0.2 Hz and higher can be accessed. Since the usage of irregular, adapted grids is an integral part of regularisation in seismic tomography, MC Kernel works in an inversion-grid-centred fashion: a Monte-Carlo integration method is used to project the kernel onto each basis function, which allows one to control the desired precision of the kernel estimation. It also means that the code concentrates calculation effort on regions of interest without prior assumptions on the kernel shape. The code makes extensive use of redundancies in calculating kernels for different receivers or frequency-pass-bands for one earthquake, to facilitate its usage in large-scale global seismic tomography.
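The inversion-grid-centred Monte-Carlo projection can be sketched as follows; this is an illustrative stand-alone routine (names and stopping rule are our assumptions, not MC Kernel's actual API), integrating a kernel over a single grid cell until a target standard error is met:

```python
import random

def mc_project(kernel, cell_bounds, tol=1e-3, batch=1000, max_samples=10**6):
    """Monte-Carlo estimate of the integral of `kernel` over one grid cell.

    Draws uniform samples in batches until the standard error of the
    estimate drops below `tol`, so computational effort concentrates on
    cells where the kernel is hard to integrate.
    cell_bounds is a list of (lo, hi) pairs, one per dimension.
    """
    vol = 1.0
    for lo, hi in cell_bounds:
        vol *= hi - lo
    n = 0
    s = s2 = 0.0
    while n < max_samples:
        for _ in range(batch):
            x = [random.uniform(lo, hi) for lo, hi in cell_bounds]
            v = kernel(x)
            s += v
            s2 += v * v
        n += batch
        mean = s / n
        var = max(s2 / n - mean * mean, 0.0)
        stderr = vol * (var / n) ** 0.5    # standard error of the estimate
        if stderr < tol:
            break
    return vol * mean, stderr
```

Running one such estimate per basis function of the inversion grid yields the projected kernel with a per-cell precision guarantee, mirroring the control described in the abstract.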
Guo, Yi; Gao, Junbin; Kwan, Paul W
2008-08-01
In most existing dimensionality reduction algorithms, the main objective is to preserve relational structure among objects of the input space in a low dimensional embedding space. This is achieved by minimizing the inconsistency between two similarity/dissimilarity measures, one for the input data and the other for the embedded data, via a separate matching objective function. Based on this idea, a new dimensionality reduction method called Twin Kernel Embedding (TKE) is proposed. TKE addresses the problem of visualizing non-vectorial data that is difficult for conventional methods in practice due to the lack of efficient vectorial representation. TKE solves this problem by minimizing the inconsistency between the similarity measures captured respectively by their kernel Gram matrices in the two spaces. In the implementation, by optimizing a nonlinear objective function using the gradient descent algorithm, a local minimum can be reached. The results obtained include both the optimal similarity preserving embedding and the appropriate values for the hyperparameters of the kernel. Experimental evaluation on real non-vectorial datasets confirmed the effectiveness of TKE. TKE can be applied to other types of data beyond those mentioned in this paper whenever suitable measures of similarity/dissimilarity can be defined on the input data. PMID:18566501
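The core idea, matching similarity structure between the two spaces by gradient descent, can be sketched as follows (a deliberately simplified stand-alone version: the embedding-space similarity is taken as the plain linear Gram matrix, and the paper's kernel-hyperparameter learning is omitted; names are ours):

```python
import numpy as np

def twin_kernel_embedding(K_in, dim=2, lr=0.01, iters=3000, seed=0):
    """Minimal TKE-flavored sketch: gradient descent on embedded points Y so
    that their linear Gram matrix Y @ Y.T matches the input-space kernel
    matrix K_in, minimizing the squared Frobenius mismatch.
    """
    rng = np.random.default_rng(seed)
    n = K_in.shape[0]
    Y = 0.1 * rng.standard_normal((n, dim))   # small random initialization
    for _ in range(iters):
        G = Y @ Y.T
        # gradient of ||G - K||_F^2 w.r.t. Y is 4 (G - K) Y for symmetric K
        Y -= lr * 4.0 * (G - K_in) @ Y
    return Y
```

The returned coordinates are determined only up to rotation, since Y @ Y.T is invariant under orthogonal transformations of the embedding.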
FOREWORD: Extreme QCD 2012 (xQCD)
NASA Astrophysics Data System (ADS)
Alexandru, Andrei; Bazavov, Alexei; Liu, Keh-Fei
2013-04-01
The Extreme QCD 2012 conference, held at the George Washington University in August 2012, celebrated the 10th event in the series. It has been held annually since 2003 at different locations: San Carlos (2011), Bad Honnef (2010), Seoul (2009), Raleigh (2008), Rome (2007), Brookhaven (2006), Swansea (2005), Argonne (2004), and Nara (2003). As usual, it was a very productive and inspiring meeting that brought together experts in the field of finite-temperature QCD, both theoretical and experimental. On the experimental side, we heard about recent results from major experiments, such as PHENIX and STAR at Brookhaven National Laboratory, ALICE and CMS at CERN, and also about the constraints on the QCD phase diagram coming from astronomical observations of one of the largest laboratories one can imagine, neutron stars. The theoretical contributions covered a wide range of topics, including QCD thermodynamics at zero and finite chemical potential, new ideas to overcome the sign problem in the latter case, fluctuations of conserved charges and how they allow one to connect calculations in lattice QCD with experimentally measured quantities, finite-temperature behavior of theories with many flavors of fermions, properties and the fate of heavy quarkonium states in the quark-gluon plasma, and many others. The participants took the time to write up and revise their contributions and submit them for publication in these proceedings. Thanks to their efforts, we have now a good record of the ideas presented and discussed during the workshop. We hope that this will serve both as a reminder and as a reference for the participants and for other researchers interested in the physics of nuclear matter at high temperatures and density. To preserve the atmosphere of the event the contributions are ordered in the same way as the talks at the conference. We are honored to have helped organize the 10th meeting in this series, a milestone that reflects the lasting interest in this
NASA Astrophysics Data System (ADS)
Geiger, Klaus
1997-08-01
VNI is a general-purpose Monte Carlo event generator, which includes the simulation of lepton-lepton, lepton-hadron, lepton-nucleus, hadron-hadron, hadron-nucleus, and nucleus-nucleus collisions. On the basis of renormalization-group improved parton description and quantum-kinetic theory, it uses the real-time evolution of parton cascades in conjunction with a self-consistent hadronization scheme that is governed by the dynamics itself. The causal evolution from a specific initial state (determined by the colliding beam particles) is followed by the time development of the phase-space densities of partons, pre-hadronic parton clusters, and final-state hadrons, in position space, momentum space and color space. The parton evolution is described in terms of a space-time generalization of the familiar momentum-space description of multiple (semi) hard interactions in QCD, involving 2 → 2 parton collisions, 2 → 1 parton fusion processes, and 1 → 2 radiation processes. The formation of color-singlet pre-hadronic clusters and their decays into hadrons, on the other hand, is treated by using a spatial criterion motivated by confinement and a non-perturbative model for hadronization. This article gives a brief review of the physics underlying VNI, which is followed by a detailed description of the program itself. The latter program description emphasizes easy-to-use pragmatism and explains how to use the program (including a simple example), annotates input and control parameters, and discusses output data provided by it.
Ultrahigh energy neutrinos and nonlinear QCD dynamics
Machado, Magno V.T.
2004-09-01
The ultrahigh energy neutrino-nucleon cross sections are computed taking into account different phenomenological implementations of the nonlinear QCD dynamics. Based on the color dipole framework, the results for the saturation model supplemented by the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) evolution as well as for the Balitskii-Fadin-Kuraev-Lipatov (BFKL) formalism in the geometric scaling regime are presented. They are contrasted with recent calculations using next-to-leading order DGLAP and unified BFKL-DGLAP formalisms.
Harris, R.
1992-05-01
We present measurements of jet production and isolated prompt photon production in pp̄ collisions at √s = 1.8 TeV from the 1988-89 run of the Collider Detector at Fermilab (CDF). To test QCD with jets, the inclusive jet cross section (pp̄ → J + X) and two-jet angular distributions (pp̄ → JJ + X) are compared to QCD predictions and are used to search for composite quarks. The ratio of the scaled jet cross sections at two Tevatron collision energies (√s = 546 and 1800 GeV) is compared to QCD predictions for X_T scaling violations. Also, we present the first evidence for QCD interference effects (color coherence) in third-jet production (pp̄ → JJJ + X). To test QCD with photons, we present measurements of the transverse momentum spectrum of single isolated prompt photon production (pp̄ → γ + X), double isolated prompt photon production (pp̄ → γγ + X), and the angular distribution of photon-jet events (pp̄ → γJ + X). We have also measured the isolated production ratio of η and π⁰ mesons (pp̄ → η + X)/(pp̄ → π⁰ + X) = 1.02 ± .15(stat) ± .23(sys).
Heavy quarkonium production at collider energies: Factorization and evolution
NASA Astrophysics Data System (ADS)
Kang, Zhong-Bo; Ma, Yan-Qing; Qiu, Jian-Wei; Sterman, George
2014-08-01
We present a perturbative QCD factorization formalism for inclusive production of heavy quarkonia of large transverse momentum p_T at collider energies, including both the leading power (LP) and next-to-leading power (NLP) behavior in p_T. We demonstrate that both LP and NLP contributions can be factorized in terms of perturbatively calculable short-distance partonic coefficient functions and universal nonperturbative fragmentation functions, and derive the evolution equations that are implied by the factorization. We identify projection operators for all channels of the factorized LP and NLP infrared-safe short-distance partonic hard parts, and corresponding operator definitions of fragmentation functions. For the NLP, we focus on the contributions involving the production of a heavy quark pair, a necessary condition for producing a heavy quarkonium. We evaluate the first nontrivial order of the evolution kernels for all relevant fragmentation functions, and discuss the role of NLP contributions.
Deur, Alexandre; Brodsky, Stanley J.; de Téramond, Guy F.
2016-05-09
Here, we review present knowledge of α_s, the Quantum Chromodynamics (QCD) running coupling. The dependence of α_s(Q²) on momentum transfer Q encodes the underlying dynamics of hadron physics, from color confinement in the infrared domain to asymptotic freedom at short distances. We survey our present theoretical and empirical knowledge of α_s(Q²), including constraints at high Q² predicted by perturbative QCD and constraints at small Q² based on models of nonperturbative dynamics. In the first, introductory part of this review, we explain the phenomenological meaning of the coupling, the reason for its running, and the challenges facing a complete understanding of its analytic behavior in the infrared domain. In the second, more technical part, we discuss α_s(Q²) in the high-momentum-transfer domain of QCD. We review how α_s is defined, including its renormalization-scheme dependence, the definition of its renormalization scale, the utility of effective charges, and the "Commensurate Scale Relations" which connect the various definitions of the QCD coupling without renormalization-scale ambiguity. We also report recent important experimental measurements and advanced theoretical analyses which have led to precise QCD predictions at high energy. As an example of an important optimization procedure, we discuss the "Principle of Maximum Conformality", which enhances QCD's predictive power by removing the dependence of predictions for physical observables on the choice of gauge and renormalization scheme. In the last part of the review, we discuss α_s(Q²) in the low-momentum-transfer domain, where there has been no consensus on how to define α_s(Q²) or its analytic behavior. We discuss the various approaches used for low-energy calculations, among them the light-front holographic approach to QCD in the strongly coupled regime.
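As a concrete anchor for the high-Q² part of the discussion, here is a minimal sketch of the one-loop perturbative running of the coupling. The Λ value and flavor number below are illustrative assumptions (roughly MS-bar-like for n_f = 5); the review itself treats higher orders, scheme dependence, and scale setting in detail.

```python
import math

# One-loop running coupling:
#   alpha_s(Q^2) = 4*pi / (beta_0 * ln(Q^2 / Lambda^2)),  beta_0 = 11 - 2*n_f/3.
# Lambda ~ 0.21 GeV is an illustrative value, not a fitted result.

def alpha_s_one_loop(q2, lam=0.21, nf=5):
    """Leading-order alpha_s at momentum transfer squared q2 (GeV^2)."""
    beta0 = 11.0 - 2.0 * nf / 3.0
    return 4.0 * math.pi / (beta0 * math.log(q2 / (lam * lam)))
```

The formula makes asymptotic freedom explicit: the coupling falls logarithmically as Q² grows, and blows up as Q² approaches Λ², which is precisely the infrared region where the review notes no consensus definition exists.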
Brodsky, Stanley J.; /SLAC
2007-07-06
I discuss a number of novel topics in QCD, including the use of the AdS/CFT correspondence between Anti-de Sitter space and conformal gauge theories to obtain an analytically tractable approximation to QCD in the regime where the QCD coupling is large and constant. In particular, there is an exact correspondence between the fifth-dimension coordinate z of AdS space and a specific impact variable {zeta} which measures the separation of the quark constituents within the hadron in ordinary space-time. This connection allows one to compute the analytic form of the frame-independent light-front wavefunctions of mesons and baryons, the fundamental entities which encode hadron properties and allow the computation of exclusive scattering amplitudes. I also discuss a number of novel phenomenological features of QCD. Initial- and final-state interactions from gluon-exchange, normally neglected in the parton model, have a profound effect in QCD hard-scattering reactions, leading to leading-twist single-spin asymmetries, diffractive deep inelastic scattering, diffractive hard hadronic reactions, the breakdown of the Lam Tung relation in Drell-Yan reactions, and nuclear shadowing and non-universal antishadowing--leading-twist physics not incorporated in the light-front wavefunctions of the target computed in isolation. I also discuss tests of hidden color in nuclear wavefunctions, the use of diffraction to materialize the Fock states of a hadronic projectile and test QCD color transparency, and anomalous heavy quark effects. The presence of direct higher-twist processes where a proton is produced in the hard subprocess can explain the large proton-to-pion ratio seen in high centrality heavy ion collisions.
Halasz, M.A.; Verbaarschot, J.J.; Jackson, A.D.; Shrock, R.E.; Stephanov, M.A.
1998-11-01
We analyze the phase diagram of QCD with two massless quark flavors in the space of temperature T and chemical potential of the baryon charge μ using available experimental knowledge of QCD, insights gained from various models, as well as general and model-independent arguments including continuity, universality, and thermodynamic relations. A random matrix model is used to describe the chiral symmetry restoration phase transition at finite T and μ. In agreement with general arguments, this model predicts a tricritical point in the (T, μ) plane. Certain critical properties at such a point are universal and can be relevant to heavy ion collision experiments.
Small-x evolution in the next-to-leading order
Giovanni Antonio Chirilli
2009-12-01
After a brief introduction to Deep Inelastic Scattering in the Bjorken limit and in the Regge limit, we discuss the operator product expansion in terms of nonlocal string operators and in terms of Wilson lines. We show how the high-energy behavior of amplitudes in gauge theories can be reformulated in terms of the evolution of Wilson-line operators. In the leading order this evolution is governed by the non-linear Balitsky-Kovchegov (BK) equation. In order to see if this equation is relevant for existing or future deep inelastic scattering (DIS) accelerators (like the Electron Ion Collider (EIC) or the Large Hadron electron Collider (LHeC)), one needs to know the next-to-leading order (NLO) corrections. In addition, the NLO corrections define the scale of the running coupling constant in the BK equation and therefore determine the magnitude of the leading-order cross sections. In Quantum Chromodynamics (QCD), the next-to-leading order BK equation has both conformal and non-conformal parts. The NLO kernel for the composite operators resolves into a sum of the conformal part and the running-coupling part. The NLO kernel of the BK equation in QCD is presented.
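For reference, the leading-order BK equation mentioned above has the standard form for the dipole scattering amplitude N(x, y; Y) in transverse coordinates x, y at rapidity Y (with ᾱ_s = α_s N_c/π):

```latex
\frac{\partial N(x,y;Y)}{\partial Y}
= \frac{\bar{\alpha}_s}{2\pi}\int d^2z\,
\frac{(x-y)^2}{(x-z)^2\,(z-y)^2}
\Big[\, N(x,z;Y) + N(z,y;Y) - N(x,y;Y) - N(x,z;Y)\,N(z,y;Y) \,\Big]
```

The linear terms reproduce BFKL evolution; the quadratic term is the nonlinearity that unitarizes the amplitude, and it is the running of ᾱ_s in this kernel that the NLO corrections fix.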
Kernel Phase and Kernel Amplitude in Fizeau Imaging
NASA Astrophysics Data System (ADS)
Pope, Benjamin J. S.
2016-09-01
Kernel phase interferometry is an approach to high angular resolution imaging which enhances the performance of speckle imaging with adaptive optics. Kernel phases are self-calibrating observables that generalize the idea of closure phases from non-redundant arrays to telescopes with arbitrarily shaped pupils, by considering a matrix-based approximation to the diffraction problem. In this paper I discuss the recent history of kernel phase, in particular in the matrix-based study of sparse arrays, and propose an analogous generalization of the closure amplitude to kernel amplitudes. This new approach can self-calibrate throughput and scintillation errors in optical imaging, which extends the power of kernel-phase-like methods to symmetric targets where amplitude and not phase calibration can be a significant limitation, and will enable further developments in high angular resolution astronomy.
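The matrix-based idea behind kernel phase can be sketched in a few lines. In the linearized regime, pupil-plane phase errors φ propagate to measured Fourier phases as Φ = A·φ; rows K spanning the left null space of A satisfy K·A = 0, so the combinations K·Φ are immune to those errors, generalizing closure phases. The transfer matrix below is a toy example, not a real pupil model.

```python
import numpy as np

def kernel_operator(A, tol=1e-10):
    """Return K whose rows span the left null space of A, i.e. K @ A = 0."""
    u, s, vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return u[:, rank:].T  # left singular vectors beyond the rank

# Toy phase-transfer matrix: 4 measured Fourier phases driven by
# 2 pupil-plane aberration modes (illustrative numbers only).
A = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0,  1.0],
              [1.0, -1.0]])
K = kernel_operator(A)

phi = np.array([0.3, -0.7])   # arbitrary instrumental phase errors
measured = A @ phi            # corrupted Fourier phases
# K @ measured vanishes for any phi: the kernel phases self-calibrate.
```

With a rank-2 transfer matrix acting on 4 measured phases, two independent kernel-phase combinations survive; the paper's kernel amplitudes apply the same null-space construction to amplitude, rather than phase, errors.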
Plunkett, R.; The CDF Collaboration
1991-10-01
Results are presented for hadronic jet and direct photon production at √s = 1800 GeV. The data are compared with next-to-leading-order QCD calculations. A new limit on the scale of possible composite structure of the quarks is also reported. 12 refs., 4 figs.
Devlin, T.; CDF Collaboration
1996-10-01
The CDF collaboration is engaged in a broad program of QCD measurements at the Fermilab Tevatron Collider. I will discuss inclusive jet production at center-of-mass energies of 1800 GeV and 630 GeV, properties of events with very high total transverse energy and dijet angular distributions.
Brodsky, Stanley J.; Deshpande, Abhay L.; Gao, Haiyan; McKeown, Robert D.; Meyer, Curtis A.; Meziani, Zein-Eddine; Milner, Richard G.; Qiu, Jianwei; Richards, David G.; Roberts, Craig D.
2015-02-26
This White Paper presents the recommendations and scientific conclusions from the Town Meeting on QCD and Hadronic Physics that took place in the period 13-15 September 2014 at Temple University as part of the NSAC 2014 Long Range Planning process. The meeting was held in coordination with the Town Meeting on Phases of QCD and included a full day of joint plenary sessions of the two meetings. The goals of the meeting were to report and highlight progress in hadron physics in the seven years since the 2007 Long Range Plan (LRP07), and present a vision for the future by identifying the key questions and plausible paths to solutions which should define the next decade. The introductory summary details the recommendations and their supporting rationales, as determined at the Town Meeting on QCD and Hadron Physics, and the endorsements that were voted upon. The larger document is organized as follows. Section 2 highlights major progress since the 2007 LRP. It is followed, in Section 3, by a brief overview of the physics program planned for the immediate future. Finally, Section 4 provides an overview of the physics motivations and goals associated with the next QCD frontier: the Electron-Ion-Collider.
NASA Astrophysics Data System (ADS)
Ellis, Stephen D.; Soper, Davison E.
2013-06-01
An essential element of the development of the strong interaction component of the Standard Model of particle physics, QCD, has been the evolving understanding of the "jets" of particles that appear in the final states of high energy particle collisions. In this chapter we provide a historical outline of those developments...
Andreas S. Kronfeld
2002-09-30
After reviewing some of the mathematical foundations and numerical difficulties facing lattice QCD, I review the status of several calculations relevant to experimental high-energy physics. The topics considered are moments of structure functions, which may prove relevant to the search for new phenomena at the LHC, and several aspects of flavor physics, which are relevant to understanding CP and flavor violation.
Nathan Isgur
1997-03-01
The author presents an idiosyncratic view of baryons which calls for a marriage between quark-based and hadronic models of QCD. He advocates a treatment based on valence-quark-plus-glue dominance of hadron structure, with the sea of quark-antiquark pairs (in the form of virtual hadron pairs) as important corrections.
Lincoln, Don
2016-07-12
The strongest force in the universe is the strong nuclear force, and it governs the behavior of quarks and gluons inside protons and neutrons. The name of the theory that governs this force is quantum chromodynamics, or QCD. In this video, Fermilab's Dr. Don Lincoln explains the intricacies of this dominant component of the Standard Model.
NASA Astrophysics Data System (ADS)
Brodsky, Stanley J.
2011-04-01
I review a number of topics where conventional wisdom in hadron physics has been challenged. For example, hadrons can be produced at large transverse momentum directly within a hard QCD subprocess, rather than from jet fragmentation. Such "direct" higher-twist processes can explain the deviations from perturbative QCD predictions in measurements of inclusive hadron cross sections at fixed x_T = 2p_T/√s, as well as the "baryon anomaly", the anomalously large proton-to-pion ratio seen in high-centrality heavy ion collisions. Initial-state and final-state interactions of the struck quark, and the soft-gluon rescattering associated with its Wilson line, lead to Bjorken-scaling single-spin asymmetries, diffractive deep inelastic scattering, the breakdown of the Lam-Tung relation in Drell-Yan reactions, as well as nuclear shadowing and antishadowing. The Gribov-Glauber theory predicts that antishadowing of nuclear structure functions is not universal, but instead depends on the flavor quantum numbers of each quark and antiquark, thus explaining the anomalous nuclear dependence measured in deep-inelastic neutrino scattering. Since shadowing and antishadowing arise from the physics of leading-twist diffractive deep inelastic scattering, one cannot attribute such phenomena to the structure of the nucleus itself. It is thus important to distinguish "static" structure functions, the probability distributions computed from the square of the target light-front wavefunctions, versus "dynamical" structure functions which include the effects of the final-state rescattering of the struck quark. The importance of the J = 0 photon-quark QCD contact interaction in deeply virtual Compton scattering is also emphasized. The scheme-independent BLM method for setting the renormalization scale is discussed. The elimination of the renormalization scale ambiguity would greatly improve the precision of QCD predictions and increase the sensitivity of searches for new physics at the LHC. Other novel
Brodsky, Stanley J.; /SLAC /Southern Denmark U., CP3-Origins
2011-08-12
I review a number of topics where conventional wisdom in hadron physics has been challenged. For example, hadrons can be produced at large transverse momentum directly within a hard higher-twist QCD subprocess, rather than from jet fragmentation. Such 'direct' processes can explain the deviations from perturbative QCD predictions in measurements of inclusive hadron cross sections at fixed x_T = 2p_T/√s, as well as the 'baryon anomaly', the anomalously large proton-to-pion ratio seen in high-centrality heavy ion collisions. Initial-state and final-state interactions of the struck quark, the soft-gluon rescattering associated with its Wilson line, lead to Bjorken-scaling single-spin asymmetries, diffractive deep inelastic scattering, the breakdown of the Lam-Tung relation in Drell-Yan reactions, as well as nuclear shadowing and antishadowing. The Gribov-Glauber theory predicts that antishadowing of nuclear structure functions is not universal, but instead depends on the flavor quantum numbers of each quark and antiquark, thus explaining the anomalous nuclear dependence measured in deep-inelastic neutrino scattering. Since shadowing and antishadowing arise from the physics of leading-twist diffractive deep inelastic scattering, one cannot attribute such phenomena to the structure of the nucleus itself. It is thus important to distinguish 'static' structure functions, the probability distributions computed from the square of the target light-front wavefunctions, versus 'dynamical' structure functions which include the effects of the final-state rescattering of the struck quark. The importance of the J = 0 photon-quark QCD contact interaction in deeply virtual Compton scattering is also emphasized. The scheme-independent BLM method for setting the renormalization scale is discussed. Eliminating the renormalization scale ambiguity greatly improves the precision of QCD predictions and increases the sensitivity of searches for new physics at the LHC.
Non-perturbative QCD Modeling and Meson Physics
Nguyen, T.; Souchlas, N. A.; Tandy, P. C.
2009-04-20
Using a ladder-rainbow kernel previously established for light quark hadron physics, we explore the extension to masses and electroweak decay constants of ground state pseudoscalar and vector quarkonia and heavy-light mesons in the c- and b-quark regions. We make a systematic study of the effectiveness of a constituent mass concept as a replacement for a heavy quark dressed propagator for such states. The difference between vector and axial vector current correlators is explored within the same model to provide an estimate of the four quark chiral condensate and the leading distance scale for the onset of non-perturbative phenomena in QCD.
Soft and Hard Scale QCD Dynamics in Mesons
NASA Astrophysics Data System (ADS)
Nguyen, T.; Souchlas, N. A.; Tandy, P. C.
2011-09-01
Using a ladder-rainbow kernel previously established for light quark hadron physics, we explore the extension to masses and electroweak decay constants of ground state pseudoscalar and vector quarkonia and heavy-light mesons in the c- and b-quark regions. We make a systematic study of the effectiveness of a constituent mass concept as a replacement for a heavy quark dressed propagator for such states. The difference between vector and axial vector current correlators is explored within the same model to provide an estimate of the four quark chiral condensate and the leading distance scale for the onset of non-perturbative phenomena in QCD.
Magnetic Fields from QCD Phase Transitions
Tevzadze, Alexander G.; Kisslinger, Leonard; Kahniashvili, Tina; Brandenburg, Axel
2012-11-01
We study the evolution of QCD phase transition-generated magnetic fields (MFs) in freely decaying MHD turbulence of the expanding universe. We consider an MF generation model that starts from basic non-perturbative QCD theory and predicts stochastic MFs with an amplitude of the order of 0.02 μG and small magnetic helicity. We employ direct numerical simulations to model the MHD turbulence decay and identify two different regimes: a 'weakly helical' turbulence regime, when magnetic helicity increases during decay, and 'fully helical' turbulence, when maximal magnetic helicity is reached and an inverse cascade develops. The results of our analysis show that in the most optimistic scenario the magnetic correlation length in the comoving frame can reach 10 kpc with the amplitude of the effective MF being 0.007 nG. We demonstrate that the considered model of magnetogenesis can provide the seed MF for galaxies and clusters.
Bruemmer, David J.
2009-11-17
A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors that incorporate robot attributes, and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between operator intervention and robot initiative and may include multiple levels, with at least a teleoperation mode configured to maximize operator intervention and minimize robot initiative, and an autonomous mode configured to minimize operator intervention and maximize robot initiative. Within the RIK, at least the cognitive level includes the dynamic autonomy structure.
Nowicki, Dimitri; Siegelmann, Hava
2010-06-11
This paper introduces a new model of associative memory, capable of both binary and continuous-valued inputs. Based on kernel theory, the memory model is, on one hand, a generalization of Radial Basis Function networks and, on the other, analogous in feature space to a Hopfield network. Attractors can be added, deleted, and updated on-line simply, without harming existing memories, and the number of attractors is independent of input dimension. Input vectors do not have to adhere to a fixed or bounded dimensionality; they can increase and decrease it without relearning previous memories. A memory consolidation process enables the network to generalize concepts and form clusters of input data, which outperforms many unsupervised clustering techniques; this process is demonstrated on handwritten digits from MNIST. Another process, reminiscent of memory reconsolidation, is introduced, in which existing memories are refreshed and tuned with new inputs; this process is demonstrated on a series of morphed faces.
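The flavor of an RBF-based associative memory can be conveyed with a toy recall rule: stored patterns act as attractors, and a noisy input is iteratively pulled toward the nearest one by a kernel-weighted average. This is an illustrative sketch only; the recall rule, bandwidth, and data here are invented for demonstration and are not the authors' model.

```python
import numpy as np

def rbf(x, c, gamma=4.0):
    """Gaussian radial basis function centered at pattern c (gamma assumed)."""
    return np.exp(-gamma * np.sum((x - c) ** 2))

def recall(x, patterns, steps=20):
    """Iterate a kernel-weighted average of stored patterns; the nearest
    stored pattern dominates the weights, so x converges to that attractor."""
    for _ in range(steps):
        w = np.array([rbf(x, p) for p in patterns])
        x = (w[:, None] * patterns).sum(axis=0) / w.sum()
    return x

patterns = np.array([[1.0, 1.0], [-1.0, -1.0]])  # two stored memories
noisy = np.array([0.9, 0.8])                     # corrupted version of the first
restored = recall(noisy, patterns)               # converges toward [1, 1]
```

Note how adding a new attractor is just appending a row to `patterns`, with no retraining of the existing memories, which is the on-line property the abstract emphasizes.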
Non-perturbative effects for the BFKL equation in QCD and in N = 4 SUSY
NASA Astrophysics Data System (ADS)
Lipatov, L. N.
2015-04-01
We discuss BFKL equations for composite states of Reggeized gluons in the color singlet and adjoint representations in the next-to-leading logarithmic approximation (NLLA). In accordance with the non-Fredholm properties of the corresponding kernels, their eigenvalues in N = 4 SUSY are calculated for arbitrary couplings at large values of the anomalous dimensions. The Green function for the BFKL equation in the LLA with running coupling in QCD is expressed in terms of non-perturbative phases of the eigenfunctions. This allows one to construct the spectrum of Pomerons for two different models of the QCD dynamics at large distances.
Kovacs, E.; CDF Collaboration
1996-02-01
We present results for the inclusive jet cross section and the dijet mass distribution. The inclusive cross section and dijet mass both exhibit significant deviations from the predictions of NLO QCD for jets with E_T > 200 GeV, or dijet masses > 400 GeV/c². We show that it is possible, within a global QCD analysis that includes the CDF inclusive jet data, to modify the gluon distribution at high x. The resulting increase in the jet cross-section predictions is 25-35%. Owing to the presence of k_T smearing effects, the direct photon data do not provide as strong a constraint on the gluon distribution as previously thought. A comparison of the CDF and UA2 jet data, which have a common range in x, is plagued by theoretical and experimental uncertainties, and cannot at present confirm the CDF excess or the modified gluon distribution.
NASA Astrophysics Data System (ADS)
Dudek, Jozef J.
2016-03-01
I describe how hadron-hadron scattering amplitudes are related to the eigenstates of QCD in a finite cubic volume. The discrete spectrum of such eigenstates can be determined from correlation functions computed using lattice QCD, and the corresponding scattering amplitudes extracted. I review results from the Hadron Spectrum Collaboration who have used these finite volume methods to study ππ elastic scattering, including the ρ resonance, as well as coupled-channel πK, ηK scattering. The very recent extension to the case where an external current acts is also presented, considering the reaction πγ* → ππ, from which the unstable ρ → πγ transition form factor is extracted. Ongoing calculations are advertised and the outlook for finite volume approaches is presented.
Dudek, Jozef J.; Edwards, Robert G.
2012-03-21
We present the first comprehensive study of hybrid baryons using lattice QCD methods. Using a large basis of composite QCD interpolating fields we extract an extensive spectrum of baryon states and isolate those of hybrid character using their relatively large overlap onto operators which sample gluonic excitations. We consider the spectrum of Nucleon and Delta states at several quark masses, finding a set of positive-parity hybrid baryons with quantum numbers $N_{1/2^+}$, $N_{1/2^+}$, $N_{3/2^+}$, $N_{3/2^+}$, $N_{5/2^+}$, and $\Delta_{1/2^+}$, $\Delta_{3/2^+}$ at an energy scale above the first band of 'conventional' excited positive-parity baryons. This pattern of states is compatible with a color-octet gluonic excitation having $J^P = 1^+$ as previously reported in the hybrid meson sector, and with a comparable energy scale for the excitation, suggesting a common bound-state construction for hybrid mesons and baryons.
Gupta, R.
1998-12-31
The goal of the lectures on lattice QCD (LQCD) is to provide an overview of both the technical issues and the progress made so far in obtaining phenomenologically useful numbers. The lectures consist of three parts. The author's charter is to provide an introduction to LQCD and outline the scope of LQCD calculations. In the second set of lectures, Guido Martinelli will discuss the progress made so far in obtaining results and their impact on Standard Model phenomenology. Finally, Martin Luescher will discuss the topical subjects of chiral symmetry, improved formulations of lattice QCD, and the impact these improvements will have on the quality of results expected from the next generation of simulations.
Giannetti, P.
1991-05-01
Recent analyses of jet data taken at the Fermilab Tevatron Collider at √s = 1.8 TeV are presented. Inclusive jet, dijet, trijet, and direct photon measurements are compared to QCD parton-level calculations at orders α_s³ or α_s². The large total transverse energy events are well described by the HERWIG shower Monte Carlo. 19 refs., 20 figs., 1 tab.
Roberts, C.D.
1994-09-01
The Dyson-Schwinger equations (DSEs) are a tower of coupled integral equations that relate the Green functions of QCD to one another. Solving these equations provides the solution of QCD. This tower includes the equation for the quark self-energy, which is the analogue of the gap equation in superconductivity, and the Bethe-Salpeter equation, whose solution is the quark-antiquark bound-state amplitude in QCD. The application of this approach to solving Abelian and non-Abelian gauge theories is reviewed. The nonperturbative DSE approach is being developed as both (1) a computationally less intensive alternative to, and (2) a complement of, numerical simulations of the lattice action of QCD. In recent years, significant progress has been made with the DSE approach, so that it is now possible to make sensible and direct comparisons between quantities calculated using this approach and the results of numerical simulations of Abelian gauge theories. Herein the application of the DSE approach to the calculation of pion observables is described: the π-π scattering lengths (a_0^0, a_0^2, a_1^1, a_2^2) and associated partial-wave amplitudes; the π⁰ → γγ decay width; and the charged pion form factor, F_π(q²). Since this approach provides a straightforward, microscopic description of dynamical chiral symmetry breaking (DχSB) and confinement, the calculation of pion observables is a simple and elegant illustrative example of its power and efficacy. The relevant DSEs are discussed in the calculation of pion observables and concluding remarks are presented.
Kernel Methods on Riemannian Manifolds with Gaussian RBF Kernels.
Jayasumana, Sadeep; Hartley, Richard; Salzmann, Mathieu; Li, Hongdong; Harandi, Mehrtash
2015-12-01
In this paper, we develop an approach to exploiting kernel methods with manifold-valued data. In many computer vision problems, the data can be naturally represented as points on a Riemannian manifold. Due to the non-Euclidean geometry of Riemannian manifolds, usual Euclidean computer vision and machine learning algorithms yield inferior results on such data. In this paper, we define Gaussian radial basis function (RBF)-based positive definite kernels on manifolds that permit us to embed a given manifold with a corresponding metric in a high dimensional reproducing kernel Hilbert space. These kernels make it possible to utilize algorithms developed for linear spaces on nonlinear manifold-valued data. Since the Gaussian RBF defined with any given metric is not always positive definite, we present a unified framework for analyzing the positive definiteness of the Gaussian RBF on a generic metric space. We then use the proposed framework to identify positive definite kernels on two specific manifolds commonly encountered in computer vision: the Riemannian manifold of symmetric positive definite matrices and the Grassmann manifold, i.e., the Riemannian manifold of linear subspaces of a Euclidean space. We show that many popular algorithms designed for Euclidean spaces, such as support vector machines, discriminant analysis and principal component analysis can be generalized to Riemannian manifolds with the help of such positive definite Gaussian kernels.
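For the SPD-matrix case, a Gaussian RBF built on the log-Euclidean metric is one of the constructions known to yield a positive definite kernel. The sketch below illustrates that case under that assumption; σ is a free bandwidth parameter and the matrices are toy data, not the paper's experiments.

```python
import numpy as np

# Gaussian RBF kernel on the manifold of symmetric positive definite (SPD)
# matrices with the log-Euclidean metric:
#   k(X, Y) = exp(-d(X, Y)^2 / (2 sigma^2)),
#   d(X, Y) = || logm(X) - logm(Y) ||_F.

def spd_log(X):
    """Matrix logarithm of an SPD matrix via its eigendecomposition."""
    w, v = np.linalg.eigh(X)
    return (v * np.log(w)) @ v.T

def spd_rbf_kernel(X, Y, sigma=1.0):
    """Log-Euclidean Gaussian RBF; equals 1 iff X == Y, and is symmetric."""
    d = np.linalg.norm(spd_log(X) - spd_log(Y), ord="fro")
    return np.exp(-d * d / (2.0 * sigma * sigma))

X = np.array([[2.0, 0.3],
              [0.3, 1.0]])   # an SPD matrix (e.g. a covariance descriptor)
Y = np.eye(2)
k = spd_rbf_kernel(X, Y)     # a similarity in (0, 1]
```

Because this kernel is positive definite, the resulting Gram matrix can be handed directly to any kernel machine (an SVM, kernel PCA, etc.) without modification, which is exactly the point of the embedding argument in the paper.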
ERIC Educational Resources Information Center
Mayr, Ernst
1978-01-01
Traces the history of evolution theory from Lamarck and Darwin to the present. Discusses natural selection in detail. Suggests that, besides biological evolution, there is also a cultural evolution which is more rapid than the former. (MA)
Hadronic Resonances from Lattice QCD
Lichtl, Adam C.; Bulava, John; Morningstar, Colin; Edwards, Robert; Mathur, Nilmani; Richards, David; Fleming, George; Juge, K. Jimmy; Wallace, Stephen J.
2007-10-26
The determination of the pattern of hadronic resonances as predicted by Quantum Chromodynamics requires the use of non-perturbative techniques. Lattice QCD has emerged as the dominant tool for such calculations, and has produced many QCD predictions which can be directly compared to experiment. The concepts underlying lattice QCD are outlined, methods for calculating excited states are discussed, and results from an exploratory Nucleon and Delta baryon spectrum study are presented.
Baryon Interactions from Lattice QCD
Aoki, Sinya
2010-05-12
We report on a new attempt to investigate baryon interactions in lattice QCD. From the Bethe-Salpeter (BS) wave function, we have successfully extracted the nucleon-nucleon (NN) potentials in quenched QCD simulations, which reproduce qualitative features of modern NN potentials. The method has been extended to obtain the tensor potential as well as the central potential and also applied to the hyperon-nucleon (YN) interactions, in both quenched and full QCD.
Renormalization of Extended QCD2
NASA Astrophysics Data System (ADS)
Fukaya, Hidenori; Yamamura, Ryo
2015-10-01
Extended QCD (XQCD), proposed by Kaplan [D. B. Kaplan, arXiv:1306.5818], is an interesting reformulation of QCD with additional bosonic auxiliary fields. While its partition function is kept exactly the same as that of original QCD, XQCD naturally contains properties of low-energy hadronic models. We analyze the renormalization group flow of 2D (X)QCD, which is solvable in the limit of a large number of colors N_c, to understand what kind of roles the auxiliary degrees of freedom play and how the hadronic picture emerges in the low-energy region.
NASA Astrophysics Data System (ADS)
Dominguez, C. A.
2013-08-01
A general, and very basic introduction to QCD sum rules is presented, with emphasis on recent issues to be described at length in other papers in this issue. Collectively, these papers constitute the proceedings of the International Workshop on Determination of the Fundamental Parameters of QCD, Singapore, March 2013.
Learning With Jensen-Tsallis Kernels.
Ghoshdastidar, Debarghya; Adsul, Ajay P; Dukkipati, Ambedkar
2016-10-01
Jensen-type [Jensen-Shannon (JS) and Jensen-Tsallis] kernels were first proposed by Martins et al. (2009). These kernels are based on JS divergences originating in information theory. In this paper, we extend the Jensen-type kernels on probability measures to define positive-definite kernels on Euclidean space. We show that the special cases of these kernels include dot-product kernels. Since Jensen-type divergences are multidistribution divergences, we propose their multipoint variants, and study spectral clustering and kernel methods based on these. We also provide experimental studies on a benchmark image database and a gene-expression database that show the benefits of the proposed kernels compared with the existing kernels. The experiments on clustering also demonstrate the use of constructing multipoint similarities.
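The basic building block referred to here can be sketched in a few lines: the Jensen-Shannon divergence between two discrete distributions, and the exponentiated kernel built from it (a minimal sketch of the JS case only; function names and the bandwidth parameter t are mine):

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence (log base 2) between discrete distributions."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0                      # 0 * log 0 = 0 by convention
        return float(np.sum(a[mask] * np.log2(a[mask] / b[mask])))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def js_kernel(p, q, t=1.0):
    """Exponentiated Jensen-Shannon kernel k(p, q) = exp(-t * JS(p, q))."""
    return np.exp(-t * js_divergence(p, q))
```

With base-2 logarithms the JS divergence is bounded in [0, 1], so the kernel values lie in [exp(-t), 1].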
Cool QCD: Hadronic Physics and QCD in Nuclei
NASA Astrophysics Data System (ADS)
Cates, Gordon
2015-10-01
QCD is the only strongly-coupled theory given to us by Nature, and it gives rise to a host of striking phenomena. Two examples in hadronic physics include the dynamic generation of mass and the confinement of quarks. Indeed, the vast majority of the mass of visible matter is due to the kinetic and potential energy of the massless gluons and the essentially massless quarks. QCD also gives rise to the force that binds protons and neutrons into nuclei, including subtle effects that have historically been difficult to understand. Describing these phenomena in terms of QCD has represented a daunting task, but remarkable progress has been achieved in both theory and experiment. Both CEBAF at Jefferson Lab and RHIC at Brookhaven National Lab have provided unprecedented experimental tools for investigating QCD, and upgrades at both facilities promise even greater opportunities in the future. Also important are programs at FermiLab as well as the LHC at CERN. Looking further ahead, an electron ion collider (EIC) has the potential to answer whole new sets of questions regarding the role of gluons in nuclear matter, an issue that lies at the heart of the generation of mass. On the theoretical side, rapid progress in supercomputers is enabling stunning progress in Lattice QCD calculations, and approximate forms of QCD are also providing deep new physical insight. In this talk I will describe both recent advances in Cool QCD as well as the exciting scientific opportunities that exist for the future.
Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampap, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G
2007-04-11
The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results.
Dudek, Jozef J.; Edwards, Robert G.
2012-03-21
In this study, we present the first comprehensive study of hybrid baryons using lattice QCD methods. Using a large basis of composite QCD interpolating fields we extract an extensive spectrum of baryon states and isolate those of hybrid character using their relatively large overlap onto operators which sample gluonic excitations. We consider the spectrum of Nucleon and Delta states at several quark masses, finding a set of positive parity hybrid baryons with quantum numbers $N_{1/2^+}, N_{1/2^+}, N_{3/2^+}, N_{3/2^+}, N_{5/2^+}$ and $\Delta_{1/2^+}, \Delta_{3/2^+}$ at an energy scale above the first band of 'conventional' excited positive parity baryons. This pattern of states is compatible with a color octet gluonic excitation having $J^{P}=1^{+}$ as previously reported in the hybrid meson sector and with a comparable energy scale for the excitation, suggesting a common bound-state construction for hybrid mesons and baryons.
High energy asymptotics of scattering processes in QCD
Enberg, R.; Golec-Biernat, K.; Munier, S.
2005-10-01
High energy scattering in the QCD parton model was recently shown to be a reaction-diffusion process and, thus, to lie in the universality class of the stochastic Fisher-Kolmogorov-Petrovsky-Piscounov equation. We recall that the latter appears naturally in the context of the parton model. We provide a thorough numerical analysis of the mean-field approximation, given in QCD by the Balitsky-Kovchegov equation. In the framework of a simple stochastic toy model that captures the relevant features of QCD, we discuss and illustrate the universal properties of such stochastic models. We investigate, in particular, the validity of the mean-field approximation and how it is broken by fluctuations. We find that the mean-field approximation is a good approximation in the initial stages of the evolution in rapidity.
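The universality class invoked here is that of the FKPP equation, whose hallmark is a traveling front moving at asymptotic speed 2√D. A toy finite-difference integration illustrating that front (this is the plain deterministic FKPP equation, not the BK equation or the stochastic model of the paper; grid and step sizes are my choices):

```python
import numpy as np

def fkpp_evolve(u, steps, dt, dx, D=1.0):
    """Explicit-Euler integration of the FKPP equation u_t = D u_xx + u(1 - u),
    with the left boundary pinned at u = 1 (saturated) and the right at u = 0."""
    for _ in range(steps):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        u = u + dt * (D * lap + u * (1.0 - u))
        u[0], u[-1] = 1.0, 0.0
    return u

def front_position(u, x):
    """First grid point where the profile has dropped below u = 1/2."""
    return x[np.argmax(u < 0.5)]

x = np.linspace(0.0, 100.0, 1001)
u0 = (x < 20.0).astype(float)                          # sharp initial front
u1 = fkpp_evolve(u0, steps=2500, dt=0.004, dx=0.1)     # evolve to t = 10
u2 = fkpp_evolve(u1, steps=2500, dt=0.004, dx=0.1)     # evolve to t = 20
speed = (front_position(u2, x) - front_position(u1, x)) / 10.0
```

The explicit scheme is stable here because dt·D/dx² = 0.4 < 1/2, and the measured front speed approaches 2√D = 2 from below (the Bramson logarithmic correction makes it slightly smaller at finite times).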
Dynamics for QCD on an Infinite Lattice
NASA Astrophysics Data System (ADS)
Grundling, Hendrik; Rudolph, Gerd
2016-08-01
We prove the existence of the dynamics automorphism group for Hamiltonian QCD on an infinite lattice in R^3, and this is done in a C*-algebraic context. The existence of ground states is also obtained. Starting with the finite lattice model for Hamiltonian QCD developed by Kijowski, Rudolph (cf. J Math Phys 43:1796-1808 [15], J Math Phys 46:032303 [16]), we state its field algebra and a natural representation. We then generalize this representation to the infinite lattice, and construct a Hilbert space which has represented on it all the local algebras (i.e., kinematics algebras associated with finite connected sublattices) equipped with the correct graded commutation relations. On a suitably large C*-algebra acting on this Hilbert space, and containing all the local algebras, we prove that there is a one parameter automorphism group, which is the pointwise norm limit of the local time evolutions along a sequence of finite sublattices, increasing to the full lattice. This is our global time evolution. We then take as our field algebra the C*-algebra generated by all the orbits of the local algebras w.r.t. the global time evolution. Thus the time evolution creates the field algebra. The time evolution is strongly continuous on this choice of field algebra, though not on the original larger C*-algebra. We define the gauge transformations, explain how to enforce the Gauss law constraint, show that the dynamics automorphism group descends to the algebra of physical observables and prove that gauge invariant ground states exist.
None
2016-07-12
Modern QCD - Lecture 1. Starting from the QCD Lagrangian we will revisit some basic QCD concepts, derive fundamental properties like gauge invariance and isospin symmetry, and discuss the Feynman rules of the theory. We will then focus on the gauge group of QCD, derive the Casimirs C_F and C_A, and obtain some useful color identities.
Aligning Biomolecular Networks Using Modular Graph Kernels
NASA Astrophysics Data System (ADS)
Towfic, Fadi; Greenlee, M. Heather West; Honavar, Vasant
Comparative analysis of biomolecular networks constructed using measurements from different conditions, tissues, and organisms offer a powerful approach to understanding the structure, function, dynamics, and evolution of complex biological systems. We explore a class of algorithms for aligning large biomolecular networks by breaking down such networks into subgraphs and computing the alignment of the networks based on the alignment of their subgraphs. The resulting subnetworks are compared using graph kernels as scoring functions. We provide implementations of the resulting algorithms as part of BiNA, an open source biomolecular network alignment toolkit. Our experiments using Drosophila melanogaster, Saccharomyces cerevisiae, Mus musculus and Homo sapiens protein-protein interaction networks extracted from the DIP repository of protein-protein interaction data demonstrate that the performance of the proposed algorithms (as measured by % GO term enrichment of subnetworks identified by the alignment) is competitive with some of the state-of-the-art algorithms for pair-wise alignment of large protein-protein interaction networks. Our results also show that the inter-species similarity scores computed based on graph kernels can be used to cluster the species into a species tree that is consistent with the known phylogenetic relationships among the species.
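The abstract uses graph kernels as subnetwork scoring functions. A generic example of such a kernel is the geometric random-walk kernel, which counts common walks in the direct-product graph (this is a standard textbook kernel, sketched here for illustration; it is not claimed to be BiNA's specific implementation, and the decay parameter lam is mine):

```python
import numpy as np

def random_walk_kernel(A1, A2, lam=0.05):
    """Geometric random-walk graph kernel: total weight of matching walks,
    computed as the entry sum of (I - lam * Ax)^{-1}, where Ax is the
    adjacency matrix of the direct-product graph. The series converges
    when lam is smaller than 1 / spectral_radius(Ax)."""
    Ax = np.kron(A1, A2)                 # direct-product adjacency
    n = Ax.shape[0]
    return float(np.linalg.inv(np.eye(n) - lam * Ax).sum())
```

The Kronecker product makes this kernel O((n1·n2)³) when computed naively, which is why practical toolkits decompose large networks into subgraphs first, as the abstract describes.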
RTOS kernel in portable electrocardiograph
NASA Astrophysics Data System (ADS)
Centeno, C. A.; Voos, J. A.; Riva, G. G.; Zerbini, C.; Gonzalez, E. A.
2011-12-01
This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which an uCOS-II RTOS can be embedded. The decision associated with the kernel use is based on its benefits, the license for educational use and its intrinsic time control and peripherals management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements due to the kernel structure. The kernel's own tools were used for time estimation and evaluation of resources used by each process. After this feasibility analysis, the migration from cyclic code to a structure based on separate processes or tasks able to synchronize events is used; resulting in an electrocardiograph running on one Central Processing Unit (CPU) based on RTOS.
Density Estimation with Mercer Kernels
NASA Technical Reports Server (NTRS)
Macready, William G.
2003-01-01
We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As is usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.
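For context, the simplest kernel-based density estimator, which the feature-space mixture described above generalizes, is the plain Gaussian kernel density estimate (this sketch shows only that baseline, not the paper's EM-in-feature-space method; names and the bandwidth are mine):

```python
import numpy as np

def gaussian_kde(samples, x, h):
    """1-D Gaussian kernel density estimate at query points x with bandwidth h:
    p(x) = (1 / (n h)) * sum_i phi((x - x_i) / h)."""
    z = (np.asarray(x)[:, None] - np.asarray(samples)[None, :]) / h
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(samples) * h * np.sqrt(2.0 * np.pi))
```

Each sample contributes one normalized Gaussian bump, so the estimate integrates to one by construction.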
NLO Hierarchy of Wilson Lines Evolution
Balitsky, Ian
2015-03-01
The high-energy behavior of QCD amplitudes can be described in terms of the rapidity evolution of Wilson lines. I present the hierarchy of evolution equations for Wilson lines in the next-to-leading order.
Sekhar Chivukula
2016-07-12
The symmetries of a quantum field theory can be realized in a variety of ways. Symmetries can be realized explicitly, approximately, through spontaneous symmetry breaking or, via an anomaly, quantum effects can dynamically eliminate a symmetry of the theory that was present at the classical level. Quantum Chromodynamics (QCD), the modern theory of the strong interactions, exemplifies each of these possibilities. The interplay of these effects determines the spectrum of particles that we observe and, ultimately, accounts for 99% of the mass of ordinary matter.
The NAS kernel benchmark program
NASA Technical Reports Server (NTRS)
Bailey, D. H.; Barton, J. T.
1985-01-01
A collection of benchmark test kernels that measure supercomputer performance has been developed for the use of the NAS (Numerical Aerodynamic Simulation) program at the NASA Ames Research Center. This benchmark program is described in detail and the specific ground rules are given for running the program as a performance test.
Adaptive wiener image restoration kernel
Yuan, Ding
2007-06-05
A method and device for restoration of electro-optical image data using an adaptive Wiener filter begins with constructing imaging system Optical Transfer Function, and the Fourier Transformations of the noise and the image. A spatial representation of the imaged object is restored by spatial convolution of the image using a Wiener restoration kernel.
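The non-adaptive core of the restoration step described above can be sketched directly: form the Wiener kernel in the frequency domain from the optical transfer function (OTF) and a noise-to-signal ratio, then apply it by spatial convolution via FFTs. This sketch uses a single scalar NSR for simplicity, whereas the patented method constructs the noise and image spectra adaptively; all function names are mine:

```python
import numpy as np

def wiener_restore(image, otf, nsr):
    """Frequency-domain Wiener restoration:
    W = conj(H) / (|H|^2 + NSR), restored = ifft2(W * fft2(g))."""
    G = np.fft.fft2(image)
    W = np.conj(otf) / (np.abs(otf) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * G))
```

When the NSR is small and the OTF has no zeros, the filter approaches exact inverse filtering; the NSR term regularizes frequencies where the OTF is weak and noise would otherwise be amplified.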
QCD analogy for quantum gravity
NASA Astrophysics Data System (ADS)
Holdom, Bob; Ren, Jing
2016-06-01
Quadratic gravity presents us with a renormalizable, asymptotically free theory of quantum gravity. When its couplings grow strong at some scale, as in QCD, then this strong scale sets the Planck mass. QCD has a gluon that does not appear in the physical spectrum. Quadratic gravity has a spin-2 ghost that we conjecture does not appear in the physical spectrum. We discuss how the QCD analogy leads to this conjecture and to the possible emergence of general relativity. Certain aspects of the QCD path integral and its measure are also similar for quadratic gravity. With the addition of the Einstein-Hilbert term, quadratic gravity has a dimensionful parameter that seems to control a quantum phase transition and the size of a mass gap in the strong phase.
PHENOMENOLOGICAL STUDIES IN QCD RESUMMATION.
KULESZA,A.; STERMAN,G.; VOGELSANG,W.
2002-09-01
We study applications of QCD soft-gluon resummations to electroweak annihilation cross sections. We focus on a formalism that allows one to resum logarithmic corrections arising near partonic threshold and at small transverse momentum simultaneously.
Excited Baryons in Holographic QCD
de Teramond, Guy F.; Brodsky, Stanley J.; /SLAC /Southern Denmark U., CP3-Origins
2011-11-08
The light-front holographic QCD approach is used to describe baryon spectroscopy and the systematics of nucleon transition form factors. Baryon spectroscopy and the excitation dynamics of nucleon resonances encoded in the nucleon transition form factors can provide fundamental insight into the strong-coupling dynamics of QCD. The transition from the hard-scattering perturbative domain to the non-perturbative region is sensitive to the detailed dynamics of confined quarks and gluons. Computations of such phenomena from first principles in QCD are clearly very challenging. The most successful theoretical approach thus far has been to quantize QCD on discrete lattices in Euclidean space-time; however, dynamical observables in Minkowski space-time, such as the time-like hadronic form factors are not amenable to Euclidean numerical lattice computations.
Local Observed-Score Kernel Equating
ERIC Educational Resources Information Center
Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.
2014-01-01
Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias, as defined by Lord's criterion of equity, and percent relative error. The local kernel item response…
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2014 CFR
2014-01-01
... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2013 CFR
2013-01-01
... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2011 CFR
2011-01-01
... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2012 CFR
2012-01-01
... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2010 CFR
2010-01-01
... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...
The Emergence of Hadrons from QCD Color
NASA Astrophysics Data System (ADS)
Brooks, Will
2013-10-01
The propagation of colored quarks through strongly interacting systems, and their subsequent evolution into color-singlet hadrons, are phenomena that showcase unique facets of Quantum Chromodynamics (QCD). Medium-stimulated gluon bremsstrahlung, a fundamental QCD process, induces broadening of the transverse momentum of the parton, and creates partonic energy loss manifesting itself in experimental observables that are accessible in high energy interactions in hot and cold systems. The formation of hadrons, which is the dynamical enforcement of the QCD confinement principle, is very poorly understood on the basis of fundamental theory, although detailed models such as the Lund string model or cluster hadronization models can generally be tuned to capture the main features of hadronic final states. With the advent of the technical capability to study hadronic final states with good particle identification and at high luminosity, a new opportunity has appeared. Study of the characteristics of parton propagation and hadron formation as they unfold within atomic nuclei are now being used to understand the coherence and spatial features of these processes and to refine new experimental tools that will be used in future experiments. Fixed-target data on nuclei with lepton and hadron beams, and collider experiments involving nuclei, all make essential contact with these topics and they elucidate different aspects of these same themes. In this talk, a survey of the most relevant recent data and its potential interpretation will be followed by descriptions of feasible experiments at an electron-ion collider, in the context of existing measurements as well as the experiments performed following the upgrade of Jefferson Lab to 12 GeV.
Wigner functions defined with Laplace transform kernels.
Oh, Se Baek; Petruccelli, Jonathan C; Tian, Lei; Barbastathis, George
2011-10-24
We propose a new Wigner-type phase-space function using Laplace transform kernels: the Laplace kernel Wigner function. Whereas momentum variables are real in the traditional Wigner function, the Laplace kernel Wigner function may have complex momentum variables. Due to the properties of the Laplace transform, a broader range of signals can be represented in complex phase space. We show that the Laplace kernel Wigner function exhibits similar properties in the marginals as the traditional Wigner function. As an example, we use the Laplace kernel Wigner function to analyze evanescent waves supported by surface plasmon polariton.
NASA Astrophysics Data System (ADS)
Milana, Joseph Philip
Two investigations in quantum chromodynamics are presented. The first, entitled "Factorization at low x", is a study in perturbative QCD of high energy hadron-hadron collisions using the double log approximation to probe new, hitherto unstudied, kinematic regions. The reaction proceeds via a parton from hadron one (with a fraction of the hadron's momentum x_1, and probability of being in the hadron, F_1) colliding with a parton from hadron two (with momentum fraction x_2 and structure function F_2). The new regions of study are those in which one momentum fraction is much larger than the other (i.e. x_1 ≫ x_2, or x_2 ≫ x_1). New processes, involving soft gluons, are identified and an estimate for their contribution to the cross-section is given. Although involving soft gluons, it is seen these processes nevertheless preserve factorization, as they can be incorporated into a redefinition of one of the structure functions (F_2 or F_1, respectively). The second study, "Gluons in the Chiral Bag", is a perturbative QCD calculation, using cavity quantum chromodynamics, of gluon exchange corrections to the cranking moment of inertia of the chiral bag model (CBM). Cranking (the introduction of a slowly rotating, quantized collective motion) is needed to construct the nucleon and delta states in the CBM. By fitting the empirical Δ-N mass splitting, a value of the effective strong coupling is extracted. It is found that when the bag is small (R < 0.5 fm), the nucleon-delta mass splitting is adequately described without including any gluon corrections. For larger bag radii (i.e. R = 1 fm), the size of the coupling constant (α_c = 0.6) thus extracted compares favorably with the MIT coupling (α_c = 0.55), but represents no true improvement. Since a large fraction of the energy splitting between the nucleon and delta states in the CBM may be attributed to rotational energy of the meson cloud, this is unexpected.
QCD measurements at the Tevatron
Bandurin, Dmitry; /Florida State U.
2011-12-01
Selected quantum chromodynamics (QCD) measurements performed at the Fermilab Run II Tevatron p p̄ collider running at √s = 1.96 TeV by the CDF and D0 Collaborations are presented. The inclusive jet, dijet production and three-jet cross section measurements are used to test perturbative QCD calculations, constrain parton distribution function (PDF) determinations, and extract a precise value of the strong coupling constant, α_s(m_Z) = 0.1161 +0.0041/−0.0048. Inclusive photon production cross-section measurements reveal an inability of next-to-leading-order (NLO) perturbative QCD (pQCD) calculations to describe low-energy photons arising directly in the hard scatter. The diphoton production cross-sections check the validity of the NLO pQCD predictions, soft-gluon resummation methods implemented in theoretical calculations, and contributions from the parton-to-photon fragmentation diagrams. Events with W/Z+jets production are used to measure many kinematic distributions allowing extensive tests and tunes of predictions from pQCD NLO and Monte-Carlo (MC) event generators. The charged-particle transverse momentum (p_T) and multiplicity distributions in the inclusive minimum bias events are used to tune non-perturbative QCD models, including those describing the multiple parton interactions (MPI). Events with inclusive production of γ and 2 or 3 jets are used to study the increasingly important MPI phenomenon at high p_T, measure an effective interaction cross section, σ_eff = 16.4 ± 2.3 mb, and limit existing MPI models.
The Emergence of Hadrons from QCD Color
NASA Astrophysics Data System (ADS)
Brooks, William; Color Dynamics in Cold Matter (CDCM) Collaboration
2015-10-01
The formation of hadrons from energetic quarks, the dynamical enforcement of QCD confinement, is not well understood at a fundamental level. In Deep Inelastic Scattering, modifications of the distributions of identified hadrons emerging from nuclei of different sizes reveal a rich variety of spatial and temporal characteristics of the hadronization process, including its dependence on spin, flavor, energy, and hadron mass and structure. The EIC will feature a wide range of kinematics, allowing a complete investigation of medium-induced gluon bremsstrahlung by the propagating quarks, leading to partonic energy loss. This fundamental process, which is also at the heart of jet quenching in heavy ion collisions, can be studied for light and heavy quarks at the EIC through observables quantifying hadron "attenuation" for a variety of hadron species. Transverse momentum broadening of hadrons, which is sensitive to the nuclear gluonic field, will also be accessible, and can be used to test our understanding from pQCD of how this quantity evolves with pathlength, as well as its connection to partonic energy loss. The evolution of the forming hadrons in the medium will shed new light on the dynamical origins of the forces between hadrons, and thus ultimately on the nuclear force. Supported by the Comision Nacional de Investigacion Cientifica y Tecnologica (CONICYT) of Chile.
Gupta, R.
1994-12-31
This talk contains an analysis of quenched chiral perturbation theory and its consequences. The chiral behavior of a number of quantities such as the pion mass m_π², the Bernard-Golterman ratios R and X, the masses of nucleons, and the kaon B-parameter are examined to see if the singular terms induced by the additional Goldstone boson, η′, are visible in present data. The overall conclusion (different from that presented at the lattice meeting) of this analysis is that even though there are some caveats attached to the indications of the extra terms induced by η′ loops, the standard expressions break down when extrapolating the quenched data with m_q < m_s/2 to physical light quarks. I then show that due to the single and double poles in the quenched η′, the axial charge of the proton cannot be calculated using the Adler-Bell-Jackiw anomaly condition. I conclude with a review of the status of the calculation of light quark masses from lattice QCD.
Andersen, Jens O.; Leganger, Lars E.; Strickland, Michael; Su, Nan
2011-10-15
In this brief report we compare the predictions of a recent next-to-next-to-leading order hard-thermal-loop perturbation theory (HTLpt) calculation of the QCD trace anomaly to available lattice data. We focus on the trace anomaly scaled by T² in two cases: N_f = 0 and N_f = 3. When using the canonical value of μ = 2πT for the renormalization scale, we find that for Yang-Mills theory (N_f = 0) agreement between HTLpt and lattice data for the T²-scaled trace anomaly begins at temperatures on the order of 8T_c, while treating the subtracted piece as an interaction term when including quarks (N_f = 3) agreement begins already at temperatures above 2T_c. In both cases we find that at very high temperatures the T²-scaled trace anomaly increases with temperature in accordance with the predictions of HTLpt.
Kernel Near Principal Component Analysis
MARTIN, SHAWN B.
2002-07-01
We propose a novel algorithm based on Principal Component Analysis (PCA). First, we present an interesting approximation of PCA using Gram-Schmidt orthonormalization. Next, we combine our approximation with the kernel functions from Support Vector Machines (SVMs) to provide a nonlinear generalization of PCA. After benchmarking our algorithm in the linear case, we explore its use in both the linear and nonlinear cases. We include applications to face data analysis, handwritten digit recognition, and fluid flow.
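The nonlinear generalization mentioned above follows the standard kernel-PCA pattern: form a kernel Gram matrix, center it in feature space, and project onto its leading eigenvectors. A minimal sketch of that standard formulation (this parallels, but is not, the paper's Gram-Schmidt-based approximation; names and the RBF choice are mine):

```python
import numpy as np

def kernel_pca(X, n_components, gamma=1.0):
    """Kernel PCA with a Gaussian RBF kernel, centered in feature space.
    Returns the projections of the training points onto the top components."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T      # pairwise squared distances
    K = np.exp(-gamma * d2)
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one          # double centering
    w, V = np.linalg.eigh(Kc)
    idx = np.argsort(w)[::-1][:n_components]            # largest eigenvalues first
    alphas = V[:, idx] / np.sqrt(np.maximum(w[idx], 1e-12))
    return Kc @ alphas
```

With a linear kernel this reduces to ordinary PCA on centered data, which is the sense in which kernel functions "generalize" PCA to the nonlinear case.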
Magnetically induced QCD Kondo effect
NASA Astrophysics Data System (ADS)
Ozaki, Sho; Itakura, Kazunori; Kuramoto, Yoshio
2016-10-01
The "QCD Kondo effect" stems from the color exchange interaction in QCD with its non-Abelian property, and can be realized in a high-density quark matter containing heavy-quark impurities. We propose a novel type of QCD Kondo effect induced by a strong magnetic field. In addition to the fact that the magnetic field does not affect the color degrees of freedom, two properties caused by the Landau quantization in a strong magnetic field are essential for the "magnetically induced QCD Kondo effect": (1) dimensional reduction to 1+1 dimensions, and (2) finiteness of the density of states for lowest energy quarks. We demonstrate that, in a strong magnetic field B, the scattering amplitude of a massless quark off a heavy quark impurity indeed shows a characteristic behavior of the Kondo effect. The resulting Kondo scale is estimated as Λ_K ≃ √(e_q B) α_s^{1/3} exp[−4π/(N_c α_s log(4π/α_s))], where α_s and N_c are the fine structure constant of the strong interaction and the number of colors in QCD, and e_q is the electric charge of the light quarks.
Kenneth Wilson and Lattice QCD
NASA Astrophysics Data System (ADS)
Ukawa, Akira
2015-09-01
We discuss the physics and computation of lattice QCD, a space-time lattice formulation of quantum chromodynamics, and Kenneth Wilson's seminal role in its development. We start with the fundamental issue of confinement of quarks in the theory of the strong interactions, and discuss how lattice QCD provides a framework for understanding this phenomenon. A conceptual issue with lattice QCD is a conflict of space-time lattice with chiral symmetry of quarks. We discuss how this problem is resolved. Since lattice QCD is a non-linear quantum dynamical system with infinite degrees of freedom, quantities which are analytically calculable are limited. On the other hand, it provides an ideal case of massively parallel numerical computations. We review the long and distinguished history of parallel-architecture supercomputers designed and built for lattice QCD. We discuss algorithmic developments, in particular the difficulties posed by the fermionic nature of quarks, and their resolution. The triad of efforts toward better understanding of physics, better algorithms, and more powerful supercomputers have produced major breakthroughs in our understanding of the strong interactions. We review the salient results of this effort in understanding the hadron spectrum, the Cabibbo-Kobayashi-Maskawa matrix elements and CP violation, and quark-gluon plasma at high temperatures. We conclude with a brief summary and a future perspective.
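The "massively parallel numerical computation" at the heart of lattice QCD is Monte Carlo evaluation of a discretized path integral. As a toy illustration only (2-D compact U(1) gauge theory with the Wilson plaquette action, not 4-D SU(3) QCD; lattice size, coupling, and proposal width are my choices), a minimal Metropolis sketch:

```python
import numpy as np

def u1_metropolis(L=4, beta=2.0, sweeps=120, seed=3):
    """Metropolis simulation of 2-D compact U(1) lattice gauge theory with the
    Wilson plaquette action S = -beta * sum_p cos(theta_p). Returns the average
    plaquette <cos theta_p> measured over the second half of the run."""
    rng = np.random.default_rng(seed)
    th = np.zeros((2, L, L))          # link angles; index 0 = x-links, 1 = y-links

    def plaquette(x, y):
        return (th[0, x, y] + th[1, (x + 1) % L, y]
                - th[0, x, (y + 1) % L] - th[1, x, y])

    def action():
        return -beta * sum(np.cos(plaquette(x, y))
                           for x in range(L) for y in range(L))

    history = []
    for sweep in range(sweeps):
        for mu in range(2):
            for x in range(L):
                for y in range(L):
                    old, s_old = th[mu, x, y], action()
                    th[mu, x, y] = old + rng.uniform(-1.0, 1.0)   # propose
                    # accept with probability min(1, exp(-(S_new - S_old)))
                    if rng.random() >= np.exp(min(0.0, s_old - action())):
                        th[mu, x, y] = old                        # reject
        if sweep >= sweeps // 2:
            history.append(np.mean([np.cos(plaquette(x, y))
                                    for x in range(L) for y in range(L)]))
    return float(np.mean(history))
```

Recomputing the full action per update is wasteful (production codes use the local staple instead), but it keeps the sketch short and obviously correct; the measured average plaquette thermalizes from the cold-start value of 1 down toward its equilibrium value.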
Recent QCD results from the Tevatron
Pickarz, Henryk; CDF and DO collaboration
1997-02-01
Recent QCD results from the CDF and D0 detectors at the Tevatron proton-antiproton collider are presented. An outlook for future QCD tests at the Tevatron collider is also briefly discussed. 27 refs., 11 figs.
Two-body non-leptonic heavy-to-heavy decays at NNLO in QCD factorization
NASA Astrophysics Data System (ADS)
Huber, Tobias; Kränkl, Susanne; Li, Xin-Qiang
2016-09-01
We evaluate in the framework of QCD factorization the two-loop vertex corrections to the decays B̄_(s) → D_(s)^(*)+ L^− and Λ_b → Λ_c^+ L^−, where L is a light meson from the set {π, ρ, K^(*), a_1}. These decays are paradigms of the QCD factorization approach since only the colour-allowed tree amplitude contributes at leading power. Hence they are sensitive to the size of power corrections once their leading-power perturbative expansion is under control. Here we compute the two-loop O(α_s^2) correction to the leading-power hard scattering kernels, and give the results for the convoluted kernels almost completely analytically. Our newly computed contribution amounts to a positive shift of the magnitude of the tree amplitude by ~2%. We then perform an extensive phenomenological analysis to NNLO in QCD factorization, using the most recent values for non-perturbative input parameters. Given the fact that the NNLO perturbative correction and updated values for form factors increase the theory prediction for branching ratios, while experimental central values have at the same time decreased, we reanalyze the role and potential size of power corrections by means of appropriately chosen ratios of decay channels.
Threefold Complementary Approach to Holographic QCD
Brodsky, Stanley J.; de Teramond, Guy F.; Dosch, Hans Gunter
2013-12-27
A complementary approach, derived from (a) higher-dimensional anti-de Sitter (AdS) space, (b) light-front quantization and (c) the invariance properties of the full conformal group in one dimension leads to a nonperturbative relativistic light-front wave equation which incorporates essential spectroscopic and dynamical features of hadron physics. The fundamental conformal symmetry of the classical QCD Lagrangian in the limit of massless quarks is encoded in the resulting effective theory. The mass scale for confinement emerges from the isomorphism between the conformal group and SO(2,1). This scale appears in the light-front Hamiltonian by mapping to the evolution operator in the formalism of de Alfaro, Fubini and Furlan, which retains the conformal invariance of the action. Remarkably, the specific form of the confinement interaction and the corresponding modification of AdS space are uniquely determined in this procedure.
Nonlinear projection trick in kernel methods: an alternative to the kernel trick.
Kwak, Nojun
2013-12-01
In kernel methods such as kernel principal component analysis (PCA) and support vector machines, the so called kernel trick is used to avoid direct calculations in a high (virtually infinite) dimensional kernel space. In this brief, based on the fact that the effective dimensionality of a kernel space is less than the number of training samples, we propose an alternative to the kernel trick that explicitly maps the input data into a reduced dimensional kernel space. This is easily obtained by the eigenvalue decomposition of the kernel matrix. The proposed method is named as the nonlinear projection trick in contrast to the kernel trick. With this technique, the applicability of the kernel methods is widened to arbitrary algorithms that do not use the dot product. The equivalence between the kernel trick and the nonlinear projection trick is shown for several conventional kernel methods. In addition, we extend PCA-L1, which uses L1-norm instead of L2-norm (or dot product), into a kernel version and show the effectiveness of the proposed approach.
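A minimal numpy sketch of the projection trick described above (hedged: the kernel is left uncentered and the RBF kernel is an arbitrary choice; the paper also treats centering in detail). The training data are mapped explicitly into an r-dimensional space via the eigendecomposition of the kernel matrix, so that ordinary dot products there reproduce the kernel:

```python
import numpy as np

def rbf_kernel(X, Z, gamma=0.5):
    # Pairwise squared distances -> Gaussian (RBF) kernel matrix
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nonlinear_projection(K, tol=1e-10):
    """Explicit low-dimensional map from a kernel matrix.

    With K = U diag(lam) U^T, the training images are
    Y = diag(sqrt(lam_r)) U_r^T (r x n), so that Y^T Y reproduces K
    on the positive eigenspace; r is the effective dimensionality.
    """
    lam, U = np.linalg.eigh(K)
    keep = lam > tol
    lam_r, U_r = lam[keep], U[:, keep]
    Y = np.sqrt(lam_r)[:, None] * U_r.T   # column i is the image of x_i
    return Y, lam_r, U_r

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
K = rbf_kernel(X, X)
Y, lam_r, U_r = nonlinear_projection(K)

# Dot products in the reduced space reproduce the kernel matrix
assert np.allclose(Y.T @ Y, K, atol=1e-8)

# A new point is mapped via y = diag(lam_r)^(-1/2) U_r^T k(x)
k_new = rbf_kernel(X, rng.normal(size=(1, 3)))[:, 0]
y_new = (U_r.T @ k_new) / np.sqrt(lam_r)
```

Because the mapped points Y live in an ordinary Euclidean space, any algorithm can now be run on them, whether or not it is expressible through dot products.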
The supercritical pomeron in QCD.
White, A. R.
1998-06-29
Deep-inelastic diffractive scaling violations have provided fundamental insight into the QCD pomeron, suggesting a single-gluon inner structure rather than that of a perturbative two-gluon bound state. This talk outlines a derivation of a high-energy, transverse-momentum cut-off, confining solution of QCD. The pomeron, in first approximation, is a single reggeized gluon plus a "wee parton" component that compensates for the color and particle properties of the gluon. This solution corresponds to a supercritical phase of Reggeon Field Theory.
NASA Astrophysics Data System (ADS)
Schröder, York
2016-05-01
When heated and/or compressed, strongly interacting matter exhibits a rich phase structure. In this talk, I will concentrate on its behavior under variations of the temperature, which is most relevant for phenomenological applications such as in cosmology, heavy-ion collisions, and astrophysics. In particular, effective field theory methods can be used to combine lattice and continuum calculations, in order to obtain high-precision results for the relevant thermodynamic quantities such as the QCD pressure and equation of state. I will discuss the current status of this systematic approach to QCD thermodynamics, and point out the remaining (technical) problems.
Renormalization in Coulomb gauge QCD
Andrasi, A.; Taylor, John C.
2011-04-15
Research Highlights: > The Hamiltonian in the Coulomb gauge of QCD contains a non-linear Christ-Lee term. > We investigate the UV divergences from higher order graphs. > We find that they cannot be absorbed by renormalization of the Christ-Lee term. - Abstract: In the Coulomb gauge of QCD, the Hamiltonian contains a non-linear Christ-Lee term, which may alternatively be derived from a careful treatment of ambiguous Feynman integrals at 2-loop order. We investigate how and if UV divergences from higher order graphs can be consistently absorbed by renormalization of the Christ-Lee term. We find that they cannot.
QCD inequalities for hadron interactions.
Detmold, William
2015-06-01
We derive generalizations of the Weingarten-Witten QCD mass inequalities for particular multihadron systems. For systems of any number of identical pseudoscalar mesons of maximal isospin, these inequalities prove that near-threshold interactions between the constituent mesons must be repulsive and that no bound states can form in these channels. Similar constraints in less symmetric systems are also extracted. These results are compatible with experimental results (where known) and recent lattice QCD calculations, and also lead to a more stringent bound on the nucleon mass than previously derived, m_N ≥ (3/2) m_π. PMID:26196617
QCD corrections to triboson production
NASA Astrophysics Data System (ADS)
Lazopoulos, Achilleas; Melnikov, Kirill; Petriello, Frank
2007-07-01
We present a computation of the next-to-leading order QCD corrections to the production of three Z bosons at the Large Hadron Collider. We calculate these corrections using a completely numerical method that combines sector decomposition to extract infrared singularities with contour deformation of the Feynman parameter integrals to avoid internal loop thresholds. The NLO QCD corrections to pp→ZZZ are approximately 50% and are badly underestimated by the leading order scale dependence. However, the kinematic dependence of the corrections is minimal in phase space regions accessible at leading order.
Stem kernels for RNA sequence analyses.
Sakakibara, Yasubumi; Popendorf, Kris; Ogawa, Nana; Asai, Kiyoshi; Sato, Kengo
2007-10-01
Several computational methods based on stochastic context-free grammars have been developed for modeling and analyzing functional RNA sequences. These grammatical methods have succeeded in modeling typical secondary structures of RNA, and are used for structural alignment of RNA sequences. However, such stochastic models cannot sufficiently discriminate member sequences of an RNA family from nonmembers and hence detect noncoding RNA regions from genome sequences. A novel kernel function, the stem kernel, for the discrimination and detection of functional RNA sequences using support vector machines (SVMs) is proposed. The stem kernel is a natural extension of the string kernel, specifically the all-subsequences kernel, and is tailored to measure the similarity of two RNA sequences from the viewpoint of secondary structures. The stem kernel examines all possible common base pairs and stem structures of arbitrary lengths, including pseudoknots between two RNA sequences, and calculates the inner product of common stem structure counts. An efficient algorithm is developed to calculate the stem kernels based on dynamic programming. The stem kernels are then applied to discriminate members of an RNA family from nonmembers using SVMs. The study indicates that the discrimination ability of the stem kernel is strong compared with conventional methods. Furthermore, the potential application of the stem kernel is demonstrated by the detection of remotely homologous RNA families in terms of secondary structures, motivated by the fact that the string kernel has proven effective for remote homology detection of protein sequences. These experimental results have convinced us to apply the stem kernel to the discovery of novel RNA families from genome sequences. PMID:17933013
Predicting Protein Function Using Multiple Kernels.
Yu, Guoxian; Rangwala, Huzefa; Domeniconi, Carlotta; Zhang, Guoji; Zhang, Zili
2015-01-01
High-throughput experimental techniques provide a wide variety of heterogeneous proteomic data sources. To exploit the information spread across multiple sources for protein function prediction, these data sources are transformed into kernels and then integrated into a composite kernel. Several methods first optimize the weights on these kernels to produce a composite kernel, and then train a classifier on the composite kernel. As such, these approaches result in an optimal composite kernel, but not necessarily in an optimal classifier. On the other hand, some approaches optimize the loss of binary classifiers and learn weights for the different kernels iteratively. For multi-class or multi-label data, these methods have to solve the problem of optimizing weights on these kernels for each of the labels, which is computationally expensive and ignores the correlation among labels. In this paper, we propose a method called Predicting Protein Function using Multiple Kernels (ProMK). ProMK iteratively alternates between learning optimal kernel weights and reducing the empirical loss of the multi-label classifier for all labels simultaneously. ProMK can integrate kernels selectively and downgrade the weights on noisy kernels. We investigate the performance of ProMK on several publicly available protein function prediction benchmarks and synthetic datasets. We show that the proposed approach performs better than previously proposed protein function prediction approaches that integrate multiple data sources and multi-label multiple kernel learning methods. The code of our proposed method is available at https://sites.google.com/site/guoxian85/promk.
Valence QCD: Connecting QCD to the quark model
Liu, K.F.; Dong, S.J.; Draper, T.; Sloan, J.; Leinweber, D.; Wilcox, W.; Woloshyn, R.M.
1999-06-01
A valence QCD theory is developed to study the valence quark properties of hadrons. To keep only the valence degrees of freedom, the pair creation through the Z graphs is deleted in the connected insertions, whereas the sea quarks are eliminated in the disconnected insertions. This is achieved with a new "valence QCD" Lagrangian where the action in the time direction is modified so that the particle and antiparticle decouple. It is shown in this valence version of QCD that the ratios of isovector to isoscalar matrix elements (e.g., F_A/D_A and F_S/D_S ratios) in the nucleon reproduce the SU(6) quark model predictions in a lattice QCD calculation. We also consider how the hadron masses are affected on the lattice and discover new insights into the origin of dynamical mass generation. It is found that, within statistical errors, the nucleon and the Δ become degenerate for the quark masses we have studied (ranging from 1 to 4 times the strange mass). The π and ρ become nearly degenerate in this range. It is shown that valence QCD has the C, P, T symmetries. The lattice version is reflection positive. It also has the vector and axial symmetries. The latter leads to a modified partially conserved axial Ward identity. As a result, the theory has a U(2N_F) symmetry in the particle-antiparticle space. Through lattice simulation, it appears that this is dynamically broken down to U_q(N_F) × U_q̄(N_F). Furthermore, the lattice simulation reveals spin degeneracy in the hadron masses and various matrix elements. This leads to an approximate U_q(2N_F) × U_q̄(2N_F) symmetry which is the basis for the valence quark model. In addition, we find that the masses of N, Δ, ρ, π, a_1, and a_0 all drop precipitously compared to their counterparts in the quenched QCD calculation. This is interpreted as due to the
J.J. Sakurai Prize for Theoretical Particle Physics: 40 Years of Lattice QCD
NASA Astrophysics Data System (ADS)
Lepage, Peter
2016-03-01
Lattice QCD was invented in 1973-74 by Ken Wilson, who passed away in 2013. This talk will describe the evolution of lattice QCD through the past 40 years with particular emphasis on its first years, and on the past decade, when lattice QCD simulations finally came of age. Thanks to theoretical breakthroughs in the late 1990s and early 2000s, lattice QCD simulations now produce the most accurate theoretical calculations in the history of strong-interaction physics. They play an essential role in high-precision experimental studies of physics within and beyond the Standard Model of Particle Physics. The talk will include a non-technical review of the conceptual ideas behind this revolutionary development in (highly) nonlinear quantum physics, together with a survey of its current impact on theoretical and experimental particle physics, and prospects for the future. Work supported by the National Science Foundation.
A Framework for Lattice QCD Calculations on GPUs
Winter, Frank; Clark, M A; Edwards, Robert G; Joo, Balint
2014-08-01
Computing platforms equipped with accelerators like GPUs have proven to provide great computational power. However, exploiting such platforms for existing scientific applications is not a trivial task. Current GPU programming frameworks such as CUDA C/C++ require low-level programming from the developer in order to achieve high performance code. As a result, porting of applications to GPUs is typically limited to time-dominant algorithms and routines, leaving the remainder unaccelerated, which can open a serious Amdahl's law issue. The lattice QCD application Chroma allows us to explore a different porting strategy. The layered structure of the software architecture logically separates the data-parallel layer from the application layer. The QCD Data-Parallel software layer provides data types and expressions with stencil-like operations suitable for lattice field theory, and Chroma implements algorithms in terms of this high-level interface. Thus by porting the low-level layer one can effectively move the whole application in one swing to a different platform. The QDP-JIT/PTX library, the reimplementation of the low-level layer, provides a framework for lattice QCD calculations for the CUDA architecture. The complete software interface is supported and thus applications can be run unaltered on GPU-based parallel computers. This reimplementation was possible due to the availability of a JIT compiler (part of the NVIDIA Linux kernel driver) which translates an assembly-like language (PTX) to GPU code. The expression template technique is used to build PTX code generators and a software cache manages the GPU memory. This reimplementation allows us to deploy an efficient implementation of the full gauge-generation program with dynamical fermions on large-scale GPU-based machines such as Titan and Blue Waters, which accelerates the algorithm by more than an order of magnitude.
Kernel earth mover's distance for EEG classification.
Daliri, Mohammad Reza
2013-07-01
Here, we propose a new kernel approach based on the earth mover's distance (EMD) for electroencephalography (EEG) signal classification. The EEG time series are first transformed into histograms in this approach. The distance between these histograms is then computed using the EMD in a pair-wise manner. We bring the distances into a kernel form called kernel EMD. The support vector classifier can then be used for the classification of EEG signals. The experimental results on the real EEG data show that the new kernel method is very effective, and can classify the data with higher accuracy than traditional methods.
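The pipeline described here (histograms, pairwise EMD, kernel form, then an SVM) can be sketched for 1-D histograms, where the EMD coincides with the Wasserstein-1 distance. A hedged toy version (the histogram binning, the Gaussian-of-EMD kernel form exp(-γ·EMD), and the synthetic "EEG" data are illustrative assumptions, and exp(-γ·EMD) is not guaranteed to be positive semi-definite in general):

```python
import numpy as np
from scipy.stats import wasserstein_distance

def emd_kernel_matrix(hists, centers, gamma=1.0):
    """K[i, j] = exp(-gamma * EMD(h_i, h_j)).

    For 1-D histograms the EMD is the Wasserstein-1 distance, with the
    bin centers as support points and the bin counts as weights.
    """
    n = len(hists)
    K = np.empty((n, n))
    for i in range(n):
        for j in range(i, n):
            d = wasserstein_distance(centers, centers, hists[i], hists[j])
            K[i, j] = K[j, i] = np.exp(-gamma * d)
    return K

rng = np.random.default_rng(1)
edges = np.linspace(-8.0, 8.0, 17)
centers = 0.5 * (edges[:-1] + edges[1:])

# Toy "EEG" segments: two classes with different amplitude distributions
signals = [rng.normal(0.0, s, 256) for s in (1.0,) * 10 + (2.0,) * 10]
hists = [np.histogram(s, bins=edges)[0].astype(float) + 1e-9 for s in signals]

K = emd_kernel_matrix(hists, centers)
assert np.allclose(K, K.T) and np.allclose(np.diag(K), 1.0)
```

The resulting matrix K can then be fed to a support vector classifier with a precomputed kernel (e.g. `sklearn.svm.SVC(kernel="precomputed")`).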
Molecular Hydrodynamics from Memory Kernels.
Lesnicki, Dominika; Vuilleumier, Rodolphe; Carof, Antoine; Rotenberg, Benjamin
2016-04-01
The memory kernel for a tagged particle in a fluid, computed from molecular dynamics simulations, decays algebraically as t^{-3/2}. We show how the hydrodynamic Basset-Boussinesq force naturally emerges from this long-time tail and generalize the concept of hydrodynamic added mass. This mass term is negative in the present case of a molecular solute, which is at odds with incompressible hydrodynamics predictions. Lastly, we discuss the various contributions to the friction, the associated time scales, and the crossover between the molecular and hydrodynamic regimes upon increasing the solute radius. PMID:27104730
Lattice QCD in Background Fields
William Detmold, Brian Tiburzi, Andre Walker-Loud
2009-06-01
Electromagnetic properties of hadrons can be computed by lattice simulations of QCD in background fields. We demonstrate new techniques for the investigation of charged hadron properties in electric fields. Our current calculations employ large electric fields, motivating us to analyze chiral dynamics in strong QED backgrounds, and subsequently uncover surprising non-perturbative effects present at finite volume.
Experimenting with Langevin lattice QCD
Gavai, R.V.; Potvin, J.; Sanielevici, S.
1987-05-01
We report on the status of our investigations of the effects of systematic errors upon the practical merits of Langevin updating in full lattice QCD. We formulate some rules for the safe use of this updating procedure and some observations on problems which may be common to all approximate fermion algorithms.
Spin physics through QCD instantons
NASA Astrophysics Data System (ADS)
Qian, Yachao; Zahed, Ismail
2016-11-01
We review some aspects of spin physics where QCD instantons play an important role, in particular their large contributions to semi-inclusive deep-inelastic scattering and polarized proton-proton scattering. We also review their possible contribution to the P-odd pion azimuthal charge correlations in peripheral AA collisions at collider energies.
Basics of QCD perturbation theory
Soper, D.E.
1997-06-01
This is an introduction to the use of QCD perturbation theory, emphasizing generic features of the theory that enable one to separate short-time and long-time effects. The author also covers some important classes of applications: electron-positron annihilation to hadrons, deeply inelastic scattering, and hard processes in hadron-hadron collisions. 31 refs., 38 figs.
Seven topics in perturbative QCD
Buras, A.J.
1980-09-01
The following topics of perturbative QCD are discussed: (1) deep inelastic scattering; (2) higher order corrections to e^+ e^- annihilation, to photon structure functions and to quarkonia decays; (3) higher order corrections to fragmentation functions and to various semi-inclusive processes; (4) higher twist contributions; (5) exclusive processes; (6) transverse momentum effects; (7) jet and photon physics.
QCD Phase Transitions, Volume 15
Schaefer, T.; Shuryak, E.
1999-03-20
The title of the workshop, ''The QCD Phase Transitions'', in fact happened to be too narrow for its real contents. It would be more accurate to say that it was devoted to different phases of QCD and QCD-related gauge theories, with strong emphasis on discussion of the underlying non-perturbative mechanisms which manifest themselves as all those phases. Before we go to specifics, let us emphasize one important aspect of the present status of non-perturbative Quantum Field Theory in general. It remains true that its studies do not get attention proportional to the intellectual challenge they deserve, and that the theorists working on it remain very fragmented. The efforts to create a Theory of Everything, including Quantum Gravity, have attracted the lion's share of attention and young talent. Nevertheless, in the last few years there was also tremendous progress and even some shift of attention toward emphasis on the unity of non-perturbative phenomena. For example, we have seen some efforts to connect the lessons from recent progress in supersymmetric theories with those in QCD, as derived from phenomenology and lattice. Another example is the Maldacena conjecture and related developments, which connect three things together: string theory, supergravity, and the (N=4) supersymmetric gauge theory. Although the progress mentioned is remarkable by itself, if we listened to each other more we might have a chance to strengthen the field and reach a better understanding of the spectacular non-perturbative physics.
Protoribosome by quantum kernel energy method.
Huang, Lulu; Krupkin, Miri; Bashan, Anat; Yonath, Ada; Massa, Lou
2013-09-10
Experimental evidence suggests the existence of an RNA molecular prebiotic entity, called by us the "protoribosome," which may have evolved in the RNA world before the evolution of the genetic code and proteins. This vestige of the RNA world, which possesses all of the capabilities required for peptide bond formation, seems to be still functioning at the heart of all contemporary ribosomes. Within the modern ribosome this remnant includes the peptidyl transferase center. Its highly conserved nucleotide sequence is suggestive of its robustness under diverse environmental conditions, and hence of its prebiotic origin. Its twofold pseudosymmetry suggests that this entity could have been a dimer of self-folding RNA units that formed a pocket within which two activated amino acids might be accommodated, similar to the binding mode of modern tRNA molecules that carry amino acids or peptidyl moieties. Using quantum mechanics and crystal coordinates, this work studies the question of whether the putative protoribosome has properties necessary to function as an evolutionary precursor to the modern ribosome. The quantum model used in the calculations is density functional theory (B3LYP/3-21G*), implemented using the kernel energy method to make the computations practical and efficient. It turns out that the necessary conditions that would characterize a practicable protoribosome, namely (i) energetic structural stability and (ii) energetically stable attachment to substrates, are both well satisfied.
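For context, the kernel energy method mentioned above assembles the energy of a large molecule from quantum calculations on small fragments ("kernels") and their pairs; in its standard double-kernel form (a generic sketch from the KEM literature, not the specific setup of this protoribosome calculation):

```latex
E_{\mathrm{total}} \;\approx\; \sum_{a=1}^{n-1}\,\sum_{b=a+1}^{n} E_{ab} \;-\; (n-2)\sum_{a=1}^{n} E_{a}
```

where E_a is the energy of kernel a, E_ab the energy of the fused kernel pair ab, and n the number of kernels; the subtraction removes the (n−2)-fold overcounting of single-kernel energies in the pair sum.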
Kernel spectral clustering with memory effect
NASA Astrophysics Data System (ADS)
Langone, Rocco; Alzate, Carlos; Suykens, Johan A. K.
2013-05-01
Evolving graphs describe many natural phenomena changing over time, such as social relationships, trade markets, metabolic networks, etc. In this framework, performing community detection and analyzing the cluster evolution represents a critical task. Here we propose a new model for this purpose, where the smoothness of the clustering results over time can be considered as valid prior knowledge. It is based on a constrained optimization formulation typical of Least Squares Support Vector Machines (LS-SVM), where the objective function is designed to explicitly incorporate temporal smoothness. The latter allows the model to cluster the current data well and to be consistent with the recent history. We also propose new model selection criteria in order to carefully choose the hyper-parameters of our model, which is a crucial issue to achieve good performance. We successfully test the model on four toy problems and on a real world network. We also compare our model with Evolutionary Spectral Clustering, which is a state-of-the-art algorithm for community detection of evolving networks, illustrating that kernel spectral clustering with memory effect can achieve better or equal performance.
Improving the Bandwidth Selection in Kernel Equating
ERIC Educational Resources Information Center
Andersson, Björn; von Davier, Alina A.
2014-01-01
We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…
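Silverman's rule of thumb, proposed above as the bandwidth selector, has a compact closed form; a sketch follows (this is the generic density-estimation version with a hypothetical score sample, whereas kernel equating applies Gaussian kernels to continuize discrete score distributions):

```python
import numpy as np

def silverman_bandwidth(x):
    """Rule-of-thumb bandwidth for a Gaussian kernel:
    h = 0.9 * min(sample std, IQR / 1.34) * n^(-1/5)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    iqr = np.subtract(*np.percentile(x, [75, 25]))  # p75 - p25
    sigma = min(x.std(ddof=1), iqr / 1.34)
    return 0.9 * sigma * n ** (-0.2)

rng = np.random.default_rng(2)
scores = rng.normal(50.0, 10.0, size=400)   # hypothetical test-score sample
h = silverman_bandwidth(scores)
```

Unlike minimizing a penalty function, the rule is a one-line computation from sample statistics, which is part of its appeal for operational equating.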
The context-tree kernel for strings.
Cuturi, Marco; Vert, Jean-Philippe
2005-10-01
We propose a new kernel for strings which borrows ideas and techniques from information theory and data compression. This kernel can be used in combination with any kernel method, in particular Support Vector Machines for string classification, with notable applications in proteomics. By using a Bayesian averaging framework with conjugate priors on a class of Markovian models known as probabilistic suffix trees or context-trees, we compute the value of this kernel in linear time and space while only using the information contained in the spectrum of the considered strings. This is ensured through an adaptation of a compression method known as the context-tree weighting algorithm. Encouraging classification results are reported on a standard protein homology detection experiment, showing that the context-tree kernel performs well with respect to other state-of-the-art methods while using no biological prior knowledge.
Sufficient conditions for a memory-kernel master equation
NASA Astrophysics Data System (ADS)
Chruściński, Dariusz; Kossakowski, Andrzej
2016-08-01
We derive sufficient conditions on the memory kernel governing a nonlocal master equation which guarantee a legitimate (completely positive and trace-preserving) dynamical map. It turns out that these conditions provide natural parametrizations of the dynamical map generalizing the Markovian semigroup. This parametrization is defined by the so-called legitimate pair (a monotonic quantum operation and a completely positive map), and it is shown that such a class of maps covers almost all known examples, from the Markovian semigroup and the semi-Markov evolution up to collision models and their generalizations.
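For reference, the nonlocal master equation in question has the generic memory-kernel (Nakajima-Zwanzig) form (notation assumed here, not quoted from the paper):

```latex
\frac{d\rho(t)}{dt} \;=\; \int_{0}^{t} K(t-\tau)\,[\rho(\tau)]\; d\tau
```

where K is a superoperator-valued memory kernel acting on the density matrix ρ; the sufficient conditions constrain K so that the induced map ρ(0) ↦ ρ(t) is completely positive and trace-preserving at all times.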
Bayesian Kernel Mixtures for Counts
Canale, Antonio; Dunson, David B.
2011-01-01
Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviations from the Poisson. As a broad class of alternative models, we propose to use nonparametric mixtures of rounded continuous kernels. An efficient Gibbs sampler is developed for posterior computation, and a simulation study is performed to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. The methods are illustrated through applications to a developmental toxicity study and marketing data. This article has supplementary material online. PMID:22523437
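The key point of the rounded-kernel construction can be sketched generatively (a toy simulation: the kernel locations and scales below are arbitrary illustrative choices, whereas the paper places a nonparametric prior on them):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50_000

# One rounded-Gaussian kernel: draw a latent continuous value and round it
# to the nearest non-negative integer to obtain a count.
latent = rng.normal(5.0, 0.4, size=n)
counts = np.maximum(np.rint(latent), 0).astype(int)

# The resulting counts are underdispersed (variance < mean), a regime no
# Poisson mixture can reach, since mixing Poissons always gives
# variance >= mean by the law of total variance.
assert counts.var() < counts.mean()

# A mixture of rounded kernels adds multimodality on top of that flexibility.
comp = rng.choice(2, size=n, p=[0.6, 0.4])
latent_mix = rng.normal(np.array([2.0, 9.0])[comp], 0.4)
counts_mix = np.maximum(np.rint(latent_mix), 0).astype(int)
```

This illustrates why rounding continuous kernels escapes the variance-at-least-mean constraint that limits Poisson-based mixtures.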
MULTIVARIATE KERNEL PARTITION PROCESS MIXTURES
Dunson, David B.
2013-01-01
Mixtures provide a useful approach for relaxing parametric assumptions. Discrete mixture models induce clusters, typically with the same cluster allocation for each parameter in multivariate cases. As a more flexible approach that facilitates sparse nonparametric modeling of multivariate random effects distributions, this article proposes a kernel partition process (KPP) in which the cluster allocation varies for different parameters. The KPP is shown to be the driving measure for a multivariate ordered Chinese restaurant process that induces a highly-flexible dependence structure in local clustering. This structure allows the relative locations of the random effects to inform the clustering process, with spatially-proximal random effects likely to be assigned the same cluster index. An exact block Gibbs sampler is developed for posterior computation, avoiding truncation of the infinite measure. The methods are applied to hormone curve data, and a dependent KPP is proposed for classification from functional predictors. PMID:24478563
anQCD: Fortran programs for couplings at complex momenta in various analytic QCD models
NASA Astrophysics Data System (ADS)
Ayala, César; Cvetič, Gorazd
2016-02-01
We provide three Fortran programs which evaluate the QCD analytic (holomorphic) couplings A_ν(Q^2) for complex or real squared momenta Q^2. These couplings are holomorphic analogs of the powers a(Q^2)^ν of the underlying perturbative QCD (pQCD) coupling a(Q^2) ≡ α_s(Q^2)/π, in three analytic QCD models (anQCD): Fractional Analytic Perturbation Theory (FAPT), Two-delta analytic QCD (2δanQCD), and Massive Perturbation Theory (MPT). The index ν can be noninteger. The provided programs do basically the same job as the Mathematica package anQCD.m published by us previously (Ayala and Cvetič, 2015), but are now written in Fortran.
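A one-loop sketch of what "analytic (holomorphic) coupling" means in the FAPT approach (this is not the published Fortran code, which works to higher loop order; the one-loop β0 convention for a = α_s/π, n_f = 3, and the illustrative Λ² = 0.16 GeV² are assumptions):

```python
import math

BETA0 = (11 - 2 * 3 / 3) / 4   # one-loop coefficient for a = alpha_s/pi, n_f = 3

def a_pt(Q2, Lam2=0.16):
    """One-loop pQCD coupling a(Q^2) = 1 / (beta0 * ln(Q^2/Lam2));
    it has an unphysical Landau pole at Q^2 = Lam2."""
    return 1.0 / (BETA0 * math.log(Q2 / Lam2))

def a_apt(Q2, Lam2=0.16):
    """One-loop analytic (APT) coupling,
    A_1(Q^2) = (1/beta0) * [1/ln z - 1/(z - 1)], z = Q^2/Lam2:
    the pole at z = 1 cancels and the coupling is finite for all Q^2 > 0."""
    z = Q2 / Lam2
    if abs(z - 1.0) < 1e-12:
        return 0.5 / BETA0        # limiting value at the would-be Landau pole
    return (1.0 / math.log(z) - 1.0 / (z - 1.0)) / BETA0

# At large Q^2 the analytic and perturbative couplings merge;
# at low Q^2 the analytic coupling stays finite, freezing toward 1/beta0.
assert abs(a_apt(1e4) - a_pt(1e4)) < 1e-3
assert 0.0 < a_apt(1e-6) < 1.0 / BETA0
```

The subtracted 1/(z−1) term is exactly what "removing the Landau cut contribution" amounts to at one loop; the published programs extend this to noninteger powers ν and higher loops.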
LATTICE QCD AT FINITE DENSITY.
SCHMIDT, C.
2006-07-23
I discuss different approaches to finite density lattice QCD. In particular, I focus on the structure of the phase diagram and discuss attempts to determine the location of the critical end-point. Recent results on the transition line as a function of the chemical potential, T_c(μ_q), are reviewed. Along the transition line, hadronic fluctuations have been calculated, which can be used to characterize properties of the quark-gluon plasma and eventually may also help to identify the location of the critical end-point in the QCD phase diagram on the lattice and in heavy-ion experiments. Furthermore, I comment on the structure of the phase diagram at large μ_q.
Nuclear forces from lattice QCD
Ishii, Noriyoshi
2011-05-06
Lattice QCD construction of nuclear forces is reviewed. In this method, the nuclear potentials are constructed by solving the Schroedinger equation, where equal-time Nambu-Bethe-Salpeter (NBS) wave functions are regarded as quantum mechanical wave functions. Since the long-distance behavior of equal-time NBS wave functions is controlled by the scattering phase, in exactly the same way as for scattering wave functions in quantum mechanics, the resulting potentials are faithful to the NN scattering data. The derivative expansion of this potential leads to the central and the tensor potentials at the leading order. Some numerical results for these two potentials are shown, based on quenched QCD.
QCD with rooted staggered fermions
NASA Astrophysics Data System (ADS)
Goltermann, M.
In this talk, I will give an overview of the theoretical status of staggered Lattice QCD with the “fourth-root trick.” In this regularization of QCD, a separate staggered quark field is used for each physical flavor, and the inherent four-fold multiplicity that comes with the use of staggered fermions is removed by taking the fourth root of the staggered determinant for each flavor. At nonzero lattice spacing, the resulting theory is nonlocal and not unitary, but there are now strong arguments that this disease is cured in the continuum limit. In addition, the approach to the continuum limit can be understood in detail in the framework of effective field theories such as staggered chiral perturbation theory.
Dru Renner
2012-04-01
Precision computation of hadronic physics with lattice QCD is becoming feasible. The last decade has seen percent-level calculations of many simple properties of mesons, and the last few years have seen calculations of baryon masses, including the nucleon mass, accurate to a few percent. As computational power increases and algorithms advance, the precise calculation of a variety of more demanding hadronic properties will become realistic. With this in mind, I discuss the current lattice QCD calculations of generalized parton distributions with an emphasis on the prospects for well-controlled calculations for these observables as well. I will do this by way of several examples: the pion and nucleon form factors and moments of the nucleon parton and generalized-parton distributions.
Huston, J.; CDF Collaboration
1994-01-01
CDF has recently concluded a very successful 1992-93 data run in which an integrated luminosity of 21.3 pb⁻¹ was written to tape. The large data sample allows for a greater discovery potential for new phenomena and for better statistical and systematic precision in analysis of conventional physics. This paper summarizes some of the new results from QCD analyses for this run.
Yamamoto, Arata
2016-07-29
We propose the lattice QCD calculation of the Berry phase, which is defined by the ground state of a single fermion. We perform the ground-state projection of a single-fermion propagator, construct the Berry link variable on a momentum-space lattice, and calculate the Berry phase. As the first application, the first Chern number of the (2+1)-dimensional Wilson fermion is calculated by the Monte Carlo simulation. PMID:27517766
DeGrand, T.
1997-06-01
These lectures provide an introduction to lattice methods for nonperturbative studies of Quantum Chromodynamics. Lecture 1: basic techniques for QCD and results for hadron spectroscopy using the simplest discretizations; lecture 2: improved actions, what they are and how well they work; lecture 3: SLAC physics from the lattice: structure functions, the mass of the glueball, heavy quarks and α_s(M_Z), and B-B̄ mixing. 67 refs., 36 figs.
The status of perturbative QCD
Ellis, R.K.
1988-10-01
The advances in perturbative QCD are reviewed. The status of determinations of the coupling constant α_s and the parton distribution functions is presented. New theoretical results on the spin-dependent structure functions of the proton are also reviewed. The theoretical description of the production of vector bosons, jets and heavy quarks is outlined with special emphasis on new results. Expected rates for top quark production at hadronic colliders are presented. 111 refs., 8 figs.
Brodsky, Stanley J.; de Teramond, Guy F. (Costa Rica U.; SLAC)
2007-02-21
The AdS/CFT correspondence between string theory in AdS space and conformal field theories in physical spacetime leads to an analytic, semi-classical model for strongly-coupled QCD which has scale invariance and dimensional counting at short distances and color confinement at large distances. Although QCD is not conformally invariant, one can nevertheless use the mathematical representation of the conformal group in five-dimensional anti-de Sitter space to construct a first approximation to the theory. The AdS/CFT correspondence also provides insights into the inherently non-perturbative aspects of QCD, such as the orbital and radial spectra of hadrons and the form of hadronic wavefunctions. In particular, we show that there is an exact correspondence between the fifth-dimensional coordinate of AdS space z and a specific impact variable ζ which measures the separation of the quark and gluonic constituents within the hadron in ordinary space-time. This connection allows one to compute the analytic form of the frame-independent light-front wavefunctions, the fundamental entities which encode hadron properties and allow the computation of decay constants, form factors, and other exclusive scattering amplitudes. New relativistic light-front equations in ordinary space-time are found which reproduce the results obtained using the 5-dimensional theory. The effective light-front equations possess remarkable algebraic structures and integrability properties. Since they are complete and orthonormal, the AdS/CFT model wavefunctions can also be used as a basis for the diagonalization of the full light-front QCD Hamiltonian, thus systematically improving the AdS/CFT approximation.
Putting Priors in Mixture Density Mercer Kernels
NASA Technical Reports Server (NTRS)
Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd
2004-01-01
This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite-dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn kernel functions directly from data, rather than using predefined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.
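A minimal sketch of the idea behind a mixture-density kernel: the posterior probabilities over mixture components form a feature vector, and their inner product is symmetric positive semidefinite, hence a valid Mercer kernel. The 1-D equal-weight Gaussian mixture with fixed means below is a toy assumption; in the method above, the mixture is learned from data in a Bayesian ensemble:

```python
import math

def responsibilities(x, means, var=1.0):
    """Posterior P(component m | x) for a toy 1-D equal-weight Gaussian mixture."""
    w = [math.exp(-(x - mu) ** 2 / (2.0 * var)) for mu in means]
    s = sum(w)
    return [wi / s for wi in w]

def mixture_density_kernel(x, y, means):
    """K(x, y) = sum_m P(m|x) P(m|y): an inner product of posterior vectors,
    hence a symmetric positive semidefinite (Mercer) kernel."""
    rx, ry = responsibilities(x, means), responsibilities(y, means)
    return sum(a * b for a, b in zip(rx, ry))

means = [-2.0, 2.0]
# points that fall in the same mixture component get a larger kernel value
same = mixture_density_kernel(-2.1, -1.9, means)
diff = mixture_density_kernel(-2.1, 1.9, means)
```

By construction, `same > diff`: the kernel encodes the clustering structure of the density model.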
Ideal regularization for learning kernels from labels.
Pan, Binbin; Lai, Jianhuang; Shen, Lixin
2014-08-01
In this paper, we propose a new form of regularization that is able to utilize the label information of a data set for learning kernels. The proposed regularization, referred to as ideal regularization, is a linear function of the kernel matrix to be learned. The ideal regularization allows us to develop efficient algorithms to exploit labels. Three applications of the ideal regularization are considered. Firstly, we use the ideal regularization to incorporate the labels into a standard kernel, making the resulting kernel more appropriate for learning tasks. Next, we employ the ideal regularization to learn a data-dependent kernel matrix from an initial kernel matrix (which contains prior similarity information, geometric structures, and labels of the data). Finally, we incorporate the ideal regularization to some state-of-the-art kernel learning problems. With this regularization, these learning problems can be formulated as simpler ones which permit more efficient solvers. Empirical results show that the ideal regularization exploits the labels effectively and efficiently.
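A hedged sketch of the first application: a common form of "ideal" kernel sets K*_ij = 1 when labels agree and 0 otherwise, and adding a multiple of it to a base kernel pulls same-label points together in feature space. The additive combination and the parameter `lam` below are illustrative assumptions, not the paper's exact formulation:

```python
def ideal_kernel(labels):
    """The 'ideal' kernel: K*_ij = 1 if labels i and j agree, else 0."""
    n = len(labels)
    return [[1.0 if labels[i] == labels[j] else 0.0 for j in range(n)]
            for i in range(n)]

def label_regularized_kernel(K, labels, lam=0.5):
    """Shift a base kernel toward the ideal kernel, K + lam * K*,
    making same-label points more similar under the resulting kernel."""
    Ks = ideal_kernel(labels)
    n = len(labels)
    return [[K[i][j] + lam * Ks[i][j] for j in range(n)] for i in range(n)]
```

With an identity base kernel and labels `[0, 0, 1]`, the (0,1) entry rises to 0.5 while the (0,2) entry stays 0.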
FermiQCD: A tool kit for parallel lattice QCD applications
Di Pierro, M.
2002-03-01
We present here the most recent version of FermiQCD, a collection of C++ classes, functions and parallel algorithms for lattice QCD, based on Matrix Distributed Processing. FermiQCD allows fast development of parallel lattice applications and includes some SSE2 optimizations for clusters of Pentium 4 PCs.
2016-07-12
Modern QCD - Lecture 2 We will start discussing the matter content of the theory and revisit the experimental measurements that led to the discovery of quarks. We will then consider a classic QCD observable, the R-ratio, and use it to illustrate the appearance of UV divergences and the need to renormalize the coupling constant of QCD. We will then discuss asymptotic freedom and confinement. Finally, we will examine a case where soft and collinear infrared divergences appear, will discuss the soft approximation in QCD and will introduce the concept of infrared safe jets.
Kernel score statistic for dependent data.
Malzahn, Dörthe; Friedrichs, Stefanie; Rosenberger, Albert; Bickeböller, Heike
2014-01-01
The kernel score statistic is a global covariance component test over a set of genetic markers. It provides a flexible modeling framework and does not collapse marker information. We generalize the kernel score statistic to allow for familial dependencies and to adjust for random confounder effects. With this extension, we adjust our analysis of real and simulated baseline systolic blood pressure for polygenic familial background. We find that the kernel score test gains appreciably in power through the use of sequencing compared to tag-single-nucleotide polymorphisms for very rare single nucleotide polymorphisms with <1% minor allele frequency.
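The core quantity can be sketched as a quadratic form in the residuals, Q = rᵀKr with a linear genotype kernel K = GGᵀ (rows: subjects, columns: markers). This is the generic variance-component score statistic; the familial-dependence and confounder adjustments described above are not included in this toy version:

```python
def kernel_score_statistic(y, y_hat, G):
    """Score statistic Q = r^T K r with r = y - y_hat and linear kernel K = G G^T.

    Computed as |r^T G|^2 to avoid forming the n x n kernel matrix explicitly.
    """
    r = [yi - yh for yi, yh in zip(y, y_hat)]
    m = len(G[0])
    rG = [sum(r[i] * G[i][j] for i in range(len(r))) for j in range(m)]
    return sum(v * v for v in rG)
```

A perfect null fit gives Q = 0; residuals aligned with a marker column inflate Q.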
NASA Astrophysics Data System (ADS)
Niemi, H.; Eskola, K. J.; Paatelainen, R.
2016-02-01
We introduce an event-by-event perturbative-QCD + saturation + hydro ("EKRT") framework for ultrarelativistic heavy-ion collisions, where we compute the produced fluctuating QCD-matter energy densities from next-to-leading-order perturbative QCD using a saturation conjecture to control soft-particle production and describe the space-time evolution of the QCD matter with dissipative fluid dynamics, event by event. We perform a simultaneous comparison of the centrality dependence of hadronic multiplicities, transverse momentum spectra, and flow coefficients of the azimuth-angle asymmetries against the LHC and RHIC measurements. We compare also the computed event-by-event probability distributions of relative fluctuations of elliptic flow and event-plane angle correlations with the experimental data from Pb +Pb collisions at the LHC. We show how such a systematic multienergy and multiobservable analysis tests the initial-state calculation and the applicability region of hydrodynamics and, in particular, how it constrains the temperature dependence of the shear viscosity-to-entropy ratio of QCD matter in its different phases in a remarkably consistent manner.
Geiger, K.; Longacre, R.; Srivastava, D.K.
1999-02-01
VNI is a general-purpose Monte-Carlo event-generator, which includes the simulation of lepton-lepton, lepton-hadron, lepton-nucleus, hadron-hadron, hadron-nucleus, and nucleus-nucleus collisions. It uses the real-time evolution of parton cascades in conjunction with a self-consistent hadronization scheme, as well as the development of hadron cascades after hadronization. The causal evolution from a specific initial state (determined by the colliding beam particles) is followed by the time-development of the phase-space densities of partons, pre-hadronic parton clusters, and final-state hadrons, in position-space, momentum-space and color-space. The parton-evolution is described in terms of a space-time generalization of the familiar momentum-space description of multiple (semi)hard interactions in QCD, involving 2 → 2 parton collisions, 2 → 1 parton fusion processes, and 1 → 2 radiation processes. The formation of color-singlet pre-hadronic clusters and their decays into hadrons, on the other hand, is treated by using a spatial criterion motivated by confinement and a non-perturbative model for hadronization. Finally, the cascading of produced pre-hadronic clusters and of hadrons includes a multitude of 2 → n processes, and is modeled in parallel to the parton cascade description. This paper gives a brief review of the physics underlying VNI, as well as a detailed description of the program itself. The latter program description emphasizes easy-to-use pragmatism and explains how to use the program (including simple examples), annotates input and control parameters, and discusses output data provided by it.
Constructing perturbation theory kernels for large-scale structure in generalized cosmologies
NASA Astrophysics Data System (ADS)
Taruya, Atsushi
2016-07-01
We present a simple numerical scheme for perturbation theory (PT) calculations of large-scale structure. Solving the evolution equations for perturbations numerically, we construct the PT kernels as building blocks of statistical calculations, from which the power spectrum and/or correlation function can be systematically computed. The scheme is especially applicable to generalized structure formation including modified gravity, in which the analytic construction of PT kernels is intractable. As an illustration, we show several examples for power spectrum calculations in f(R) gravity and ΛCDM models.
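For orientation, in standard gravity the second-order PT density kernel has a well-known analytic Einstein-de Sitter form, which a numerical kernel construction should recover in that limit (the function below is this textbook formula, not the paper's general scheme):

```python
def F2_eds(k1, k2, mu):
    """Analytic second-order density kernel F2 in the Einstein-de Sitter limit.

    k1, k2 are the magnitudes of the two wavevectors and mu = cos(theta)
    is the cosine of the angle between them.
    """
    return 5.0 / 7.0 + 0.5 * mu * (k1 / k2 + k2 / k1) + (2.0 / 7.0) * mu ** 2
```

Standard checks: for aligned equal wavevectors (mu = 1, k1 = k2) the kernel equals 2, and for orthogonal wavevectors of equal magnitude it reduces to 5/7.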
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2014 CFR
2014-01-01
... weight of delivery 10,000 10,000 2. Percent of edible kernel weight 53.0 84.0 3. Less weight loss in... 7 Agriculture 8 2014-01-01 2014-01-01 false Adjusted kernel weight. 981.401 Section 981.401... Administrative Rules and Regulations § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel...
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2012 CFR
2012-01-01
... weight of delivery 10,000 10,000 2. Percent of edible kernel weight 53.0 84.0 3. Less weight loss in... 7 Agriculture 8 2012-01-01 2012-01-01 false Adjusted kernel weight. 981.401 Section 981.401... Administrative Rules and Regulations § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel...
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2013 CFR
2013-01-01
... weight of delivery 10,000 10,000 2. Percent of edible kernel weight 53.0 84.0 3. Less weight loss in... 7 Agriculture 8 2013-01-01 2013-01-01 false Adjusted kernel weight. 981.401 Section 981.401... Administrative Rules and Regulations § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel...
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2011 CFR
2011-01-01
... weight of delivery 10,000 10,000 2. Percent of edible kernel weight 53.0 84.0 3. Less weight loss in... 7 Agriculture 8 2011-01-01 2011-01-01 false Adjusted kernel weight. 981.401 Section 981.401... Administrative Rules and Regulations § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel...
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2010 CFR
2010-01-01
... weight of delivery 10,000 10,000 2. Percent of edible kernel weight 53.0 84.0 3. Less weight loss in... 7 Agriculture 8 2010-01-01 2010-01-01 false Adjusted kernel weight. 981.401 Section 981.401... Administrative Rules and Regulations § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel...
7 CFR 51.2125 - Split or broken kernels.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Split or broken kernels. 51.2125 Section 51.2125... STANDARDS) United States Standards for Grades of Shelled Almonds Definitions § 51.2125 Split or broken kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will...
7 CFR 51.2125 - Split or broken kernels.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Split or broken kernels. 51.2125 Section 51.2125... STANDARDS) United States Standards for Grades of Shelled Almonds Definitions § 51.2125 Split or broken kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will...
7 CFR 51.2125 - Split or broken kernels.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Split or broken kernels. 51.2125 Section 51.2125... STANDARDS) United States Standards for Grades of Shelled Almonds Definitions § 51.2125 Split or broken kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will...
7 CFR 51.1403 - Kernel color classification.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the...
7 CFR 51.1403 - Kernel color classification.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the...
7 CFR 51.1403 - Kernel color classification.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Kernel color classification. 51.1403 Section 51.1403... Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color classifications provided in this section. When the color of kernels in a...
7 CFR 51.1403 - Kernel color classification.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Kernel color classification. 51.1403 Section 51.1403... Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color classifications provided in this section. When the color of kernels in a...
7 CFR 51.1403 - Kernel color classification.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the...
KITTEN Lightweight Kernel 0.1 Beta
2007-12-12
The Kitten Lightweight Kernel is a simplified OS (operating system) kernel that is intended to manage a compute node's hardware resources. It provides a set of mechanisms to user-level applications for utilizing hardware resources (e.g., allocating memory, creating processes, accessing the network). Kitten is much simpler than general-purpose OS kernels, such as Linux or Windows, but includes all of the essential functionality needed to support HPC (high-performance computing) MPI, PGAS and OpenMP applications. Kitten provides unique capabilities such as physically contiguous application memory, transparent large page support, and noise-free tick-less operation, which enable HPC applications to obtain greater efficiency and scalability than with general-purpose OS kernels.
Quantum kernel applications in medicinal chemistry.
Huang, Lulu; Massa, Lou
2012-07-01
Progress in the quantum mechanics of biological molecules is being driven by computational advances. The notion of quantum kernels can be introduced to simplify the formalism of quantum mechanics, making it especially suitable for parallel computation of very large biological molecules. The essential idea is to mathematically break large biological molecules into smaller kernels that are calculationally tractable, and then to represent the full molecule by a summation over the kernels. The accuracy of the kernel energy method (KEM) is shown by systematic application to a great variety of molecular types found in biology. These include peptides, proteins, DNA and RNA. Examples are given that explore the KEM across a variety of chemical models, and to the outer limits of energy accuracy and molecular size. KEM represents an advance in quantum biology applicable to problems in medicine and drug design. PMID:22857535
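The summation over kernels can be sketched with the standard KEM double-kernel formula, E ≈ Σ_{a<b} E_ab − (n−2) Σ_a E_a, here applied to made-up energies (a real application would take the single-kernel energies E_a and joined-pair energies E_ab from quantum-chemistry calculations):

```python
def kem_energy(E_single, E_pair):
    """Kernel energy method estimate for n kernels:
    E ≈ sum over pairs E_ab  -  (n - 2) * sum over singles E_a.

    E_single: list of single-kernel energies E_a.
    E_pair:   dict mapping (a, b) with a < b to the joined-pair energy E_ab.
    """
    n = len(E_single)
    pair_sum = sum(E_pair[(a, b)] for a in range(n) for b in range(a + 1, n))
    return pair_sum - (n - 2) * sum(E_single)
```

Sanity check of the double-counting correction: if there is no interaction, so E_ab = E_a + E_b exactly, the estimate collapses to Σ_a E_a.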
A QCD Analysis of Average Transverse Momentum in Jet Fragmentation
NASA Astrophysics Data System (ADS)
Iguchi, K.; Nakkagawa, H.; Niégawa, A.
1981-08-01
The generalized Altarelli-Parisi equations for the full fragmentation functions of partons are solved within the LLA. The analysis of the average transverse momentum ⟨k_T⟩_z of hadrons produced inside a jet in e+e- annihilation shows that the LLA calculations of QCD give a satisfactory description of the data if we correctly take into account the kinematical restrictions on the evolution of jets. Discussions on the use of the LLA in phenomenological analyses are also given.
Transverse Momentum-Dependent Parton Distributions From Lattice QCD
Engelhardt, Michael; Musch, Bernhard; Haegler, Philipp; Schaefer, Andreas
2012-12-01
Starting from a definition of transverse momentum-dependent parton distributions for semi-inclusive deep inelastic scattering and the Drell-Yan process, given in terms of matrix elements of a quark bilocal operator containing a staple-shaped Wilson connection, a scheme to determine such observables in lattice QCD is developed and explored. Parametrizing the aforementioned matrix elements in terms of invariant amplitudes permits a simple transformation of the problem to a Lorentz frame suited for the lattice calculation. Results for the Sivers and Boer-Mulders transverse momentum shifts are presented, focusing in particular on their dependence on the staple extent and the Collins-Soper evolution parameter.
Variational Dirichlet Blur Kernel Estimation.
Zhou, Xu; Mateos, Javier; Zhou, Fugen; Molina, Rafael; Katsaggelos, Aggelos K
2015-12-01
Blind image deconvolution involves two key objectives: 1) latent image and 2) blur estimation. For latent image estimation, we propose a fast deconvolution algorithm, which uses an image prior of nondimensional Gaussianity measure to enforce sparsity and an undetermined boundary condition methodology to reduce boundary artifacts. For blur estimation, a linear inverse problem with normalization and nonnegative constraints must be solved. However, the normalization constraint is ignored in many blind image deblurring methods, mainly because it makes the problem less tractable. In this paper, we show that the normalization constraint can be very naturally incorporated into the estimation process by using a Dirichlet distribution to approximate the posterior distribution of the blur. Making use of variational Dirichlet approximation, we provide a blur posterior approximation that considers the uncertainty of the estimate and removes noise in the estimated kernel. Experiments with synthetic and real data demonstrate that the proposed method is highly competitive with state-of-the-art blind image restoration methods. PMID:26390458
Weighted Bergman Kernels and Quantization
NASA Astrophysics Data System (ADS)
Engliš, Miroslav
Let Ω be a bounded pseudoconvex domain in ℂ^N, let φ, ψ be two positive functions on Ω such that −log ψ and −log φ are plurisubharmonic, and let z ∈ Ω be a point at which −log φ is smooth and strictly plurisubharmonic. We show that as k → ∞, the Bergman kernels with respect to the weights φ^k ψ have an asymptotic expansion.
TICK: Transparent Incremental Checkpointing at Kernel Level
Petrini, Fabrizio; Gioiosa, Roberto
2004-10-25
TICK is a software package implemented in Linux 2.6 that allows user processes to be saved and restored, without any change to the user code or binary. With TICK, a process can be suspended by the Linux kernel upon receiving an interrupt and saved to a file. This file can later be thawed on another computer running Linux (potentially the same computer). TICK is implemented as a Linux kernel module for Linux version 2.6.5.
A kernel autoassociator approach to pattern classification.
Zhang, Haihong; Huang, Weimin; Huang, Zhiyong; Zhang, Bailing
2005-06-01
Autoassociators are a special type of neural networks which, by learning to reproduce a given set of patterns, grasp the underlying concept that is useful for pattern classification. In this paper, we present a novel nonlinear model referred to as kernel autoassociators based on kernel methods. While conventional nonlinear autoassociation models emphasize searching for the nonlinear representations of input patterns, a kernel autoassociator takes a kernel feature space as the nonlinear manifold, and places emphasis on the reconstruction of input patterns from the kernel feature space. Two methods are proposed to address the reconstruction problem, using linear and multivariate polynomial functions, respectively. We apply the proposed model to novelty detection with or without novelty examples and study it on the promoter detection and sonar target recognition problems. We also apply the model to multiclass classification problems including wine recognition, glass recognition, handwritten digit recognition, and face recognition. The experimental results show that, compared with conventional autoassociators and other recognition systems, kernel autoassociators can provide better or comparable performance for concept learning and recognition in various domains. PMID:15971928
PET Image Reconstruction Using Kernel Method
Wang, Guobao; Qi, Jinyi
2014-01-01
Image reconstruction from low-count PET projection data is challenging because the inverse problem is ill-posed. Prior information can be used to improve image quality. Inspired by the kernel methods in machine learning, this paper proposes a kernel based method that models PET image intensity in each pixel as a function of a set of features obtained from prior information. The kernel-based image model is incorporated into the forward model of PET projection data and the coefficients can be readily estimated by the maximum likelihood (ML) or penalized likelihood image reconstruction. A kernelized expectation-maximization (EM) algorithm is presented to obtain the ML estimate. Computer simulations show that the proposed approach can achieve better bias versus variance trade-off and higher contrast recovery for dynamic PET image reconstruction than the conventional maximum likelihood method with and without post-reconstruction denoising. Compared with other regularization-based methods, the kernel method is easier to implement and provides better image quality for low-count data. Application of the proposed kernel method to a 4D dynamic PET patient dataset showed promising results. PMID:25095249
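One kernelized EM update can be sketched as follows, with the image modeled as x = Kα and, purely for illustration, the PET system matrix taken as the identity (the actual method folds K into the full projection model, so this is a simplified sketch, not the paper's implementation):

```python
def matvec(M, v):
    """Dense matrix-vector product on nested lists."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def transpose(M):
    return [list(row) for row in zip(*M)]

def kernelized_em_step(alpha, K, y):
    """One kernelized MLEM update for the model x = K alpha (system matrix = I):

        alpha <- alpha * K^T (y / (K alpha)) / (K^T 1)
    """
    Kt = transpose(K)
    x = matvec(K, alpha)                       # current image estimate
    ratio = [yi / max(xi, 1e-12) for yi, xi in zip(y, x)]
    back = matvec(Kt, ratio)                   # backproject the data/model ratio
    sens = matvec(Kt, [1.0] * len(y))          # sensitivity term K^T 1
    return [a * b / s for a, b, s in zip(alpha, back, sens)]
```

With K the identity this reduces to the classic MLEM fixed point: starting from α = [1, 1] with data y = [2, 3], a single step lands on α = [2, 3].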
Solving QCD via multi-Regge theory.
White, A. R.
1998-11-04
A high-energy, transverse-momentum cut-off solution of QCD is outlined. Regge pole and single gluon properties of the pomeron are directly related to the confinement and chiral symmetry breaking properties of the hadron spectrum. This solution, which corresponds to a supercritical phase of Reggeon Field Theory, may only be applicable to QCD with a very special quark content.
Solvable models and hidden symmetries in QCD
Yepez-Martinez, Tochtli; Hess, P. O.; Civitarese, O.; Lerma H., S.
2010-12-23
We show that QCD Hamiltonians at low energy exhibit an SU(2) structure when only a few orbital levels are considered. When many orbital levels are taken into account, we also find a semi-analytic solution for the energy levels of the dominant part of the QCD Hamiltonian. The findings are important for proposing the structure of phenomenological models.
The Excited State Spectrum of QCD
Robert Edwards
2010-08-01
The determination of the highly excited state spectrum of baryons within QCD is a major theoretical and experimental challenge. I will present recent results from lattice QCD that give some indications on the structure of these highly excited states, and outline on-going and future work needed for a full determination of the spectrum, including strong decays.
Recent results on lattice QCD thermodynamics
NASA Astrophysics Data System (ADS)
Ratti, Claudia
2016-08-01
I review recent results on QCD thermodynamics from lattice simulations. In particular, I will focus on the QCD equation of state at zero and finite chemical potential, the curvature of the phase diagram and fluctuations of conserved charges. The latter are compared to experimental data, to the purpose of extracting the chemical freeze-out temperature and chemical potential from first principles.
QCD tests in electron-positron scattering
Maruyama, T.
1995-11-01
Recent results on QCD tests at the Z{sup 0} resonance are described. Measurements of color factor ratios and studies of final-state photon radiation are performed by the LEP experiments. QCD tests using a longitudinally polarized beam are reported by the SLD experiment.
Kenneth Wilson — Renormalization and QCD
NASA Astrophysics Data System (ADS)
Wegner, Franz J.
2014-07-01
Kenneth Wilson had an enormous impact on field theory, in particular on the renormalization group and critical phenomena, and on QCD. I had the great pleasure to work in three fields to which he contributed essentially: Critical phenomena, gauge-invariance in duality and QCD, and flow equations and similarity renormalization.
Consistent Perturbative Fixed Point Calculations in QCD and Supersymmetric QCD.
Ryttov, Thomas A
2016-08-12
We suggest how to consistently calculate the anomalous dimension γ* of the ψ̄ψ operator in finite order perturbation theory at an infrared fixed point for asymptotically free theories. If the n+1 loop beta function and n loop anomalous dimension are known, then γ* can be calculated exactly and fully scheme independently in a Banks-Zaks expansion through O(Δ_f^n), where Δ_f = N̄_f − N_f, N_f is the number of flavors, and N̄_f is the number of flavors above which asymptotic freedom is lost. For a supersymmetric theory, the calculation preserves supersymmetry order by order in Δ_f. We then compute γ* through O(Δ_f^2) for supersymmetric QCD in the dimensional reduction scheme and find that it matches the exact known result. We find that γ* is astonishingly well described in perturbation theory already at the few loops level throughout the entire conformal window. We finally compute γ* through O(Δ_f^3) for QCD and a variety of other nonsupersymmetric fermionic gauge theories. Small values of γ* are observed for a large range of flavors. PMID:27563948
Consistent Perturbative Fixed Point Calculations in QCD and Supersymmetric QCD
NASA Astrophysics Data System (ADS)
Ryttov, Thomas A.
2016-08-01
We suggest how to consistently calculate the anomalous dimension γ* of the ψ̄ψ operator in finite order perturbation theory at an infrared fixed point for asymptotically free theories. If the n+1 loop beta function and n loop anomalous dimension are known, then γ* can be calculated exactly and fully scheme independently in a Banks-Zaks expansion through O(Δ_f^n), where Δ_f = N̄_f − N_f, N_f is the number of flavors, and N̄_f is the number of flavors above which asymptotic freedom is lost. For a supersymmetric theory, the calculation preserves supersymmetry order by order in Δ_f. We then compute γ* through O(Δ_f^2) for supersymmetric QCD in the dimensional reduction scheme and find that it matches the exact known result. We find that γ* is astonishingly well described in perturbation theory already at the few loops level throughout the entire conformal window. We finally compute γ* through O(Δ_f^3) for QCD and a variety of other nonsupersymmetric fermionic gauge theories. Small values of γ* are observed for a large range of flavors.
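The flavor threshold N̄_f in the abstract follows from the one-loop beta function: for SU(N_c) with N_f fundamental flavors, b0 = (11 N_c − 2 N_f)/3 vanishes at N_f = 11 N_c/2, which is 16.5 for QCD. A minimal sketch of the expansion parameter:

```python
def nf_bar(nc):
    """Flavor count above which asymptotic freedom is lost: the one-loop
    coefficient b0 = (11*nc - 2*nf)/3 vanishes at nf = 11*nc/2."""
    return 11 * nc / 2

def delta_f(nc, nf):
    """Banks-Zaks expansion parameter Delta_f = nf_bar - nf."""
    return nf_bar(nc) - nf

# QCD (nc = 3): asymptotic freedom is lost above 16.5 flavors, so e.g.
# nf = 12 sits below the threshold with Delta_f = 4.5.
```

Small Δ_f means a weakly coupled infrared fixed point, which is what makes the perturbative expansion in Δ_f quoted in the abstract controlled.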
"Quantum Field Theory and QCD"
Jaffe, Arthur M.
2006-02-25
This grant partially funded a meeting, "QFT & QCD: Past, Present and Future" held at Harvard University, Cambridge, MA on March 18-19, 2005. The participants ranged from senior scientists (including at least 9 Nobel Prize winners, and 1 Fields medalist) to graduate students and undergraduates. There were several hundred persons in attendance at each lecture. The lectures ranged from superlative reviews of past progress, lists of important, unsolved questions, to provocative hypotheses for future discovery. The project generated a great deal of interest on the internet, raising awareness and interest in the open questions of theoretical physics.
Kaon Condensation with Lattice QCD
Detmold, William; Savage, Martin; Walker-Loud, Andre; Orginos, Konstantinos; Torok, Aaron
2008-09-01
doi: http://dx.doi.org/10.1103/PhysRevD.78.054514
Kaon condensation may play an important role in the structure of hadronic matter at densities greater than that of nuclear matter, as exist in the interior of neutron stars. We present the results of the first lattice QCD calculation of kaon condensation obtained by studying systems containing up to twelve charged kaons. Surprisingly, the equation of state of the condensate is remarkably well reproduced by leading order chiral perturbation theory. We determine the three-kaon interaction from the multi-kaon systems and update our results for pion condensates.
Nuclear Physics from Lattice QCD
William Detmold, Silas Beane, Konstantinos Orginos, Martin Savage
2011-01-01
We review recent progress toward establishing lattice Quantum Chromodynamics as a predictive calculational framework for nuclear physics. A survey of the current techniques that are used to extract low-energy hadronic scattering amplitudes and interactions is followed by a review of recent two-body and few-body calculations by the NPLQCD collaboration and others. An outline of the nuclear physics that is expected to be accomplished with Lattice QCD in the next decade, along with estimates of the required computational resources, is presented.
Single transverse-spin asymmetry in QCD
NASA Astrophysics Data System (ADS)
Koike, Yuji
2014-09-01
So far, large single transverse-spin asymmetries (SSAs) have been observed in many high-energy processes such as semi-inclusive deep inelastic scattering and proton-proton collisions. Since the conventional parton model and perturbative QCD cannot accommodate such large SSAs, the framework for QCD hard processes had to be extended to understand the mechanism of SSA. In this extended framework of QCD, the intrinsic transverse momentum of partons and the multi-parton (quark-gluon and pure-gluonic) correlations in the hadrons, which were absent in the conventional framework, play a crucial role in causing SSAs, and a well-defined formulation of these effects has been a big challenge for QCD theorists. Study of these effects has greatly promoted our understanding of QCD dynamics and hadron structure. In this talk, I will present an overview of this theoretical activity, emphasizing the important role of the Drell-Yan process.
QCD Phase Transition in DGP Brane Cosmology
NASA Astrophysics Data System (ADS)
Atazadeh, K.; Ghezelbash, A. M.; Sepangi, H. R.
2012-08-01
In the standard picture of cosmology it is predicted that a phase transition, associated with chiral symmetry breaking after the electroweak transition, occurred at approximately 10 microseconds after the Big Bang to convert a plasma of free quarks and gluons into hadrons. We consider the quark-hadron phase transition in a Dvali, Gabadadze and Porrati (DGP) brane world scenario within an effective model of QCD. We study the evolution of the physical quantities useful for the study of the early universe, namely the energy density, temperature and scale factor before, during and after the phase transition. Also, due to the high energy density in the early universe, we consider the quadratic energy density term that appears in the Friedmann equation. In DGP brane models such a term corresponds to the negative branch (ɛ = -1) of the Friedmann equation when the Hubble radius is much smaller than the crossover length in the 4D and 5D regimes. We show that for different values of the cosmological constant on the brane, λ, the phase transition occurs and results in a decrease of the effective temperature of the quark-gluon plasma and of the hadronic fluid. We then consider the quark-hadron transition in the smooth crossover regime at high and low temperatures and show that such a transition also occurs along with a decrease of the effective temperature of the quark-gluon plasma during the phase transition.
Up- and down-quark masses from finite-energy QCD sum rules to five loops
Dominguez, C. A.; Nasrallah, N. F.; Roentsch, R. H.; Schilcher, K.
2009-01-01
The up- and down-quark masses are determined from an optimized QCD finite-energy sum rule involving the correlator of axial-vector divergences, to five-loop order in perturbative QCD, and including leading nonperturbative QCD and higher order quark-mass corrections. This finite-energy sum rule is designed to reduce considerably the systematic uncertainties arising from the (unmeasured) hadronic resonance sector, which in this framework contributes less than 3-4% to the quark mass. This is achieved by introducing an integration kernel in the form of a second degree polynomial, restricted to vanish at the peak of the two lowest lying resonances. The driving hadronic contribution is then the pion pole, with parameters well known from experiment. The determination is done in the framework of contour improved perturbation theory, which exhibits a very good convergence, leading to a remarkably stable result in the unusually wide window s{sub 0}=1.0-4.0 GeV{sup 2}, where s{sub 0} is the radius of the integration contour in the complex energy (squared) plane. The results are m{sub u}(Q=2 GeV)=2.9{+-}0.2 MeV, m{sub d}(Q=2 GeV)=5.3{+-}0.4 MeV, and (m{sub u}+m{sub d})/2=4.1{+-}0.2 MeV (at a scale Q=2 GeV)
Up- and down-quark masses from finite-energy QCD sum rules to five loops
NASA Astrophysics Data System (ADS)
Dominguez, C. A.; Nasrallah, N. F.; Röntsch, R. H.; Schilcher, K.
2009-01-01
The up- and down-quark masses are determined from an optimized QCD finite-energy sum rule involving the correlator of axial-vector divergences, to five-loop order in perturbative QCD, and including leading nonperturbative QCD and higher order quark-mass corrections. This finite-energy sum rule is designed to reduce considerably the systematic uncertainties arising from the (unmeasured) hadronic resonance sector, which in this framework contributes less than 3-4% to the quark mass. This is achieved by introducing an integration kernel in the form of a second degree polynomial, restricted to vanish at the peak of the two lowest lying resonances. The driving hadronic contribution is then the pion pole, with parameters well known from experiment. The determination is done in the framework of contour improved perturbation theory, which exhibits a very good convergence, leading to a remarkably stable result in the unusually wide window s0=1.0-4.0GeV2, where s0 is the radius of the integration contour in the complex energy (squared) plane. The results are mu(Q=2GeV)=2.9±0.2MeV, md(Q=2GeV)=5.3±0.4MeV, and (mu+md)/2=4.1±0.2MeV (at a scale Q=2GeV).
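The "second degree polynomial restricted to vanish at the peak of the two lowest lying resonances" can be built by solving a 2x2 linear system for the polynomial's coefficients. A minimal sketch; the resonance masses below (1.3 and 1.8 GeV) are illustrative stand-ins for the two lowest pseudoscalar resonances, not values taken from the paper:

```python
import numpy as np

def integration_kernel(s1, s2):
    """Return P(s) = 1 - a0*s - a1*s**2 with P(s1) = P(s2) = 0,
    where s1, s2 are the squared masses of the two resonance peaks."""
    A = np.array([[s1, s1**2],
                  [s2, s2**2]])
    a0, a1 = np.linalg.solve(A, np.array([1.0, 1.0]))
    return lambda s: 1.0 - a0 * s - a1 * s**2

# Illustrative peaks at sqrt(s) = 1.3 GeV and 1.8 GeV (assumed for the demo)
P = integration_kernel(1.3**2, 1.8**2)
```

Weighting the hadronic spectral integral with such a kernel suppresses the poorly known resonance region while leaving the well-measured pion pole dominant, which is the mechanism the abstract describes.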
The QCD/SM working group: Summary report
W. Giele et al.
2004-01-12
Quantum Chromo-Dynamics (QCD), and more generally the physics of the Standard Model (SM), enter in many ways into high energy processes at TeV colliders, and especially at hadron colliders (the Tevatron at Fermilab and the forthcoming LHC at CERN). First of all, at hadron colliders, QCD controls the parton luminosity, which rules the production rates of any particle or system with large invariant mass and/or large transverse momentum. Accurate predictions for any signal of possible ''New Physics'' sought at hadron colliders, as well as for the corresponding backgrounds, require an improvement in the control of uncertainties on the determination of PDFs and of the propagation of these uncertainties into the predictions. Furthermore, to fully exploit these new types of PDFs with uncertainties, uniform tools (computer interfaces, standardization of the PDF evolution codes used by the various groups fitting PDFs) need to be proposed and developed. The dynamics of colour also affects, both in normalization and shape, various observables of the signals of any possible ''New Physics'' sought at the TeV scale, such as, e.g., the production rate or the distributions in transverse momentum of the Higgs boson. Last, but not least, QCD governs many backgrounds to the searches for this ''New Physics''. Large and important QCD corrections may come from extra hard parton emission (and the corresponding virtual corrections), involving multi-leg and/or multi-loop amplitudes. This requires complex higher order calculations, and new methods have to be designed to compute the required multi-leg and/or multi-loop corrections in a tractable form. In the case of semi-inclusive observables, logarithmically enhanced contributions coming from multiple soft and collinear gluon emission require sophisticated QCD resummation techniques. Resummation is a catch-all name for efforts to extend the predictive power of QCD by summing the large logarithmic corrections to all orders in perturbation theory. In
Smith, W.H.
1997-06-01
These lectures describe QCD physics studies over the period 1992--1996 from data taken with collisions of 27 GeV electrons and positrons with 820 GeV protons at the HERA collider at DESY by the two general-purpose detectors H1 and ZEUS. The focus of these lectures is on structure functions and jet production in deep inelastic scattering, photoproduction, and diffraction. The topics covered start with a general introduction to HERA and ep scattering. Structure functions are discussed. This includes the parton model, scaling violation, and the extraction of F{sub 2}, which is used to determine the gluon momentum distribution. Both low and high Q{sup 2} regimes are discussed. The low Q{sup 2} transition from perturbative QCD to soft hadronic physics is examined. Jet production in deep inelastic scattering to measure {alpha}{sub s}, and in photoproduction to study resolved and direct photoproduction, is also presented. This is followed by a discussion of diffraction that begins with a general introduction to diffraction in hadronic collisions and its relation to ep collisions, and moves on to deep inelastic scattering, where the structure of diffractive exchange is studied, and in photoproduction, where dijet production provides insights into the structure of the Pomeron. 95 refs., 39 figs.
Nuclear Physics and Lattice QCD
Beane, Silas
2003-11-01
Impressive progress is currently being made in computing properties and interactions of the low-lying hadrons using lattice QCD. However, cost limitations will, for the foreseeable future, necessitate the use of quark masses, Mq, that are significantly larger than those of nature, lattice spacings, a, that are not significantly smaller than the physical scale of interest, and lattice sizes, L, that are not significantly larger than the physical scale of interest. Extrapolations in the quark masses, lattice spacing and lattice volume are therefore required. The hierarchy of mass scales is L{sup -1} << Mq << {Lambda}{sub {chi}} << a{sup -1}. The appropriate EFT for incorporating the light quark masses, the finite lattice spacing and the lattice size into hadronic observables is chiral perturbation theory ({chi}-PT), which provides systematic expansions in the small parameters e{sup -m{sub {pi}}L}, 1/(L{Lambda}{sub {chi}}), p/{Lambda}{sub {chi}}, Mq/{Lambda}{sub {chi}} and a{Lambda}{sub {chi}}. The lattice introduces other unphysical scales as well. Lattice QCD quarks will increasingly be artificially separated
Vranas, P
2007-06-18
Quantum Chromodynamics is the theory of nuclear and sub-nuclear physics. It is a celebrated theory, and one of its inventors, F. Wilczek, has termed it '... our most perfect physical theory'. Part of this is related to the fact that QCD can be numerically simulated from first principles using the methods of lattice gauge theory. The computational demands of QCD are enormous and have not only played a role in the history of supercomputers but are also helping define their future. Here I will discuss the intimate relation of QCD and massively parallel supercomputers, with focus on the Blue Gene supercomputer and QCD thermodynamics. I will present results on the performance of QCD on the Blue Gene as well as physics simulation results of QCD at temperatures high enough that sub-nuclear matter transitions to a plasma state of elementary particles, the quark gluon plasma. This state of matter is thought to have existed at around 10 microseconds after the big bang. Current heavy-ion experiments are on a quest to reproduce it for the first time since then. And numerical simulations of QCD on the Blue Gene systems are calculating the theoretical values of fundamental parameters so that comparisons of experiment and theory can be made.
Adaptive kernels for multi-fiber reconstruction.
Barmpoutis, Angelos; Jian, Bing; Vemuri, Baba C
2009-01-01
In this paper we present a novel method for multi-fiber reconstruction given a diffusion-weighted MRI dataset. There are several existing methods that employ various spherical deconvolution kernels for achieving this task. However the kernels in all of the existing methods rely on certain assumptions regarding the properties of the underlying fibers, which introduce inaccuracies and unnatural limitations in them. Our model is a nontrivial generalization of the spherical deconvolution model, which unlike the existing methods does not make use of a fixed-shape kernel. Instead, the shape of the kernel is estimated simultaneously with the rest of the unknown parameters by employing a general adaptive model that can theoretically approximate any spherical deconvolution kernel. The performance of our model is demonstrated using simulated and real diffusion-weighted MR datasets and compared quantitatively with several existing techniques in the literature. The results obtained indicate that our model has superior performance that is close to the theoretic limit of the best possible achievable result.
Brodsky, Stanley J.; Cao, Fu-Guang; de Teramond, Guy F.; /Costa Rica U.
2011-11-04
The QCD evolution of the pion distribution amplitude (DA) {phi}{sub {pi}} (x, Q{sup 2}) is computed for several commonly used models. Our analysis includes the nonperturbative form predicted by light-front holographic QCD, thus combining the nonperturbative bound state dynamics of the pion with the perturbative ERBL evolution of the pion distribution amplitude. We calculate the meson-photon transition form factors for the {pi}{sup 0}, {eta} and {eta}' using the hard-scattering formalism. We point out that a widely-used approximation of replacing {phi} (x; (1 - x)Q) with {phi} (x;Q) in the calculations will unjustifiably reduce the predictions for the meson-photon transition form factors. It is found that the four models of the pion DA discussed give very different predictions for the Q{sup 2} dependence of the meson-photon transition form factors in the region of Q{sup 2} > 30 GeV{sup 2}. More accurate measurements of these transition form factors in the large Q{sup 2} region will be able to distinguish the four models of the pion DA. The rapid growth of the large Q{sup 2} data for the pion-photon transition form factor reported by the BABAR Collaboration is difficult to explain within the current framework of QCD. If the BABAR data for the meson-photon transition form factor for the {pi}{sup 0} is confirmed, it could indicate physics beyond the Standard Model, such as a weakly-coupled elementary C = + axial vector or pseudoscalar z{sup 0} in the few GeV domain, an elementary field which would provide the coupling {gamma}{sup *}{gamma} {yields} z{sup 0} {yields} {pi}{sup 0} at leading twist. Our analysis thus indicates the importance of additional measurements of the pion-photon transition form factor at large Q{sup 2}.
The QCD vacuum, hadrons and superdense matter
Shuryak, E.
1986-01-01
This is probably the only textbook available that gathers QCD, many-body theory and phase transitions in one volume. The presentation is pedagogical and readable. Contents: The QCD Vacuum: Introduction; QCD on the Lattice; Topological Effects in Gauge Theories. Correlation Functions and Microscopic Excitations: Introduction; Operator Product Expansion; The Sum Rules beyond OPE; Nonpower Contributions to Correlators and Instantons; Hadronic Spectroscopy on the Lattice. Dense Matter: Hadronic Matter; Asymptotically Dense Quark-Gluon Plasma; Instantons in Matter; Lattice Calculations at Finite Temperature; Phase Transitions. Macroscopic Excitations and Experiments: General Properties of High Energy Collisions; ''Barometers'', ''Thermometers'', Interferometric ''Microscope''; Experimental Perspectives.
Shape of mesons in holographic QCD
Torabian, Mahdi; Yee, Ho-Ung
2009-10-15
Based on the expectation that the constituent quark model may capture the right physics in the large N limit, we point out that the orbital angular momentum of the quark-antiquark pair inside light mesons of low spins in the constituent quark model may provide a clue for the holographic dual string model of large N QCD. Our discussion, relying on a few suggestive assumptions, leads to a necessity of world-sheet fermions in the bulk of dual strings that can incorporate intrinsic spins of fundamental QCD degrees of freedom. We also comment on the interesting issue of the size of mesons in holographic QCD.
Death to perturbative QCD in exclusive processes?
Eckardt, R.; Hansper, J.; Gari, M.F.
1994-04-01
The authors discuss the question of whether perturbative QCD is applicable in calculations of exclusive processes at available momentum transfers. They show that the currently used method of determining hadronic quark distribution amplitudes from QCD sum rules yields wave functions which are completely undetermined because the polynomial expansion diverges. Because of the indeterminacy of the wave functions no statement can be made at present as to whether perturbative QCD is valid. The authors emphasize the necessity of a rigorous discussion of the subject and the importance of experimental data in the range of interest.
Excited light isoscalar mesons from lattice QCD
Christopher Thomas
2011-07-01
I report a recent lattice QCD calculation of an excited spectrum of light isoscalar mesons, something that has up to now proved challenging for lattice QCD. With novel techniques we extract an extensive spectrum with high statistical precision, including spin-four states and, for the first time, light isoscalars with exotic quantum numbers. In addition, the hidden flavour content of these mesons is determined, providing a window on annihilation dynamics in QCD. I comment on future prospects including applications to the study of resonances.
Overcoming Unix kernel deficiencies in a portable, distributed storage system
Gary, M.
1990-01-01
The LINCS Storage System at Lawrence Livermore National Laboratory was designed to provide an efficient, portable, distributed file and directory system capable of running on a variety of hardware platforms, consistent with the IEEE Mass Storage System Reference Model. Our intent was to meet these requirements with a storage system running atop standard, unmodified versions of the Unix operating system. Most of the system components run as ordinary user processes. However, for those components that were implemented in the kernel to improve performance, Unix presented a number of hurdles. These included the lack of a lightweight tasking facility in the kernel; process-blocked I/O; inefficient data transfer; and the lack of optimized drivers for storage devices. How we overcame these difficulties is the subject of this paper. Ideally, future evolution of Unix by vendors will provide the missing facilities; until then, however, data centers adopting Unix operating systems for large-scale distributed computing will have to provide similar solutions. 11 refs., 5 figs.
Fast generation of sparse random kernel graphs
Hagberg, Aric; Lemons, Nathan; Du, Wen -Bo
2015-09-10
The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most ο(n(logn)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.
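As a concrete reference point, a random kernel graph can be sampled naively in O(n^2): assign each vertex a type x_i in (0, 1) and connect each pair independently with probability kappa(x_i, x_j)/n. The paper's contribution is precisely to beat this quadratic cost; the quadratic sketch below (with an illustrative vertex-type grid) only pins down the model being sampled:

```python
import random

def sample_kernel_graph(n, kappa, seed=None):
    """Naive O(n^2) sampler for an inhomogeneous random kernel graph.

    Vertex i gets type x_i = (i + 0.5)/n; edge {i, j} is present
    independently with probability min(kappa(x_i, x_j)/n, 1).
    (The paper's algorithm achieves roughly O(n (log n)^2) via edge
    skipping; this quadratic version is only a reference implementation.)
    """
    rng = random.Random(seed)
    x = [(i + 0.5) / n for i in range(n)]
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < min(kappa(x[i], x[j]) / n, 1.0):
                edges.append((i, j))
    return edges

# Two degenerate kernels make the behavior easy to check:
edges_full = sample_kernel_graph(10, lambda u, v: 10.0, seed=1)   # prob 1 per pair
edges_empty = sample_kernel_graph(10, lambda u, v: 0.0, seed=1)   # prob 0 per pair
```

Choosing kappa(u, v) to diverge for small u, v is the standard way to obtain power-law degree sequences, which is the tunable-degree-distribution example mentioned in the abstract.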
Kernel bandwidth estimation for nonparametric modeling.
Bors, Adrian G; Nasios, Nikolaos
2009-12-01
Kernel density estimation is a nonparametric procedure for probability density modeling, which has found several applications in various fields. The smoothness and modeling ability of the functional approximation are controlled by the kernel bandwidth. In this paper, we describe a Bayesian estimation method for finding the bandwidth from a given data set. The proposed bandwidth estimation method is applied in three different computational-intelligence methods that rely on kernel density estimation: 1) scale space; 2) mean shift; and 3) quantum clustering. The third method is a novel approach that relies on the principles of quantum mechanics. This method is based on the analogy between data samples and quantum particles and uses the Schrödinger potential as a cost function. The proposed methodology is used for blind-source separation of modulated signals and for terrain segmentation based on topography information.
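To make the bandwidth's role concrete, here is a plain Gaussian kernel density estimate with the classical rule-of-thumb bandwidth h = 1.06 σ n^(-1/5). This is a common baseline, not the paper's Bayesian estimator; the data below are illustrative:

```python
import math

def rule_of_thumb_bandwidth(data):
    """Classical bandwidth h = 1.06 * sigma * n^(-1/5) for a Gaussian kernel
    (a simple baseline; the paper itself estimates h with a Bayesian method)."""
    n = len(data)
    mean = sum(data) / n
    sigma = math.sqrt(sum((v - mean) ** 2 for v in data) / (n - 1))
    return 1.06 * sigma * n ** (-0.2)

def gaussian_kde(data, h):
    """Return the kernel density estimate f(t) with bandwidth h."""
    n = len(data)
    norm = n * h * math.sqrt(2 * math.pi)
    def f(t):
        return sum(math.exp(-0.5 * ((t - v) / h) ** 2) for v in data) / norm
    return f

data = [1.0, 2.0, 2.5, 3.0, 4.0]
f = gaussian_kde(data, rule_of_thumb_bandwidth(data))
```

A small h produces a spiky, overfit estimate while a large h oversmooths, which is why data-driven bandwidth selection matters for the three downstream methods listed in the abstract.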
Experimental study of turbulent flame kernel propagation
Mansour, Mohy; Peters, Norbert; Schrader, Lars-Uve
2008-07-15
Flame kernels in spark ignited combustion systems dominate the flame propagation and combustion stability and performance. They are likely controlled by the spark energy, flow field and mixing field. The aim of the present work is to experimentally investigate the structure and propagation of the flame kernel in turbulent premixed methane flow using advanced laser-based techniques. The spark is generated using a pulsed Nd:YAG laser with 20 mJ pulse energy in order to avoid the effect of the electrodes on the flame kernel structure and the variation of spark energy from shot-to-shot. Four flames have been investigated at equivalence ratios, {phi}{sub j}, of 0.8 and 1.0 and jet velocities, U{sub j}, of 6 and 12 m/s. A combined two-dimensional Rayleigh and LIPF-OH technique has been applied. The flame kernel structure has been collected at several time intervals from the laser ignition between 10 {mu}s and 2 ms. The data show that the flame kernel structure starts with a spherical shape, changes gradually to peanut-like, then to mushroom-like, and is finally disturbed by the turbulence. The mushroom-like structure lasts longer in the stoichiometric flames and at the slower jet velocity. The growth rate of the average flame kernel radius is divided into two linear relations; the first one, during the first 100 {mu}s, is almost three times faster than that at the later stage between 100 and 2000 {mu}s. The flame propagation is slightly faster in leaner flames. The trends of the flame propagation, flame radius, flame cross-sectional area and mean flame temperature are related to the jet velocity and equivalence ratio. The relations obtained in the present work allow the prediction of any of these parameters at different conditions. (author)
QCD tests with polarized beams
Maruyama, Takashi; SLD Collaboration
1996-09-01
The authors present three QCD studies performed by the SLD experiment at SLAC, utilizing the highly polarized SLC electron beam. They examined particle production differences in light quark and antiquark hemispheres, and observed more high momentum baryons and K{sup {minus}}'s than antibaryons and K{sup +}'s in quark hemispheres, consistent with the leading particle hypothesis. They performed a search for jet handedness in light q- and {anti q}-jets. Assuming Standard Model values of quark polarization in Z{sup 0} decays, they have set an improved upper limit on the analyzing power of the handedness method. They studied the correlation between the Z{sup 0} spin and the event-plane orientation in polarized Z{sup 0} decays into three jets.
Gluonic transversity from lattice QCD
NASA Astrophysics Data System (ADS)
Detmold, W.; Shanahan, P. E.
2016-07-01
We present an exploratory study of the gluonic structure of the ϕ meson using lattice QCD (LQCD). This includes the first investigation of gluonic transversity via the leading moment of the twist-2 double-helicity-flip gluonic structure function Δ(x, Q{sup 2}). This structure function only exists for targets of spin J ≥ 1 and does not mix with quark distributions at leading twist, thereby providing a particularly clean probe of gluonic degrees of freedom. We also explore the gluonic analogue of the Soffer bound which relates the helicity flip and nonflip gluonic distributions, finding it to be saturated at the level of 80%. This work sets the stage for more complex LQCD studies of gluonic structure in the nucleon and in light nuclei where Δ(x, Q{sup 2}) is an "exotic glue" observable probing gluons in a nucleus not associated with individual nucleons.
Volatile compound formation during argan kernel roasting.
El Monfalouti, Hanae; Charrouf, Zoubida; Giordano, Manuela; Guillaume, Dominique; Kartah, Badreddine; Harhar, Hicham; Gharby, Saïd; Denhez, Clément; Zeppa, Giuseppe
2013-01-01
Virgin edible argan oil is prepared by cold-pressing argan kernels previously roasted at 110 degrees C for up to 25 minutes. The concentration of 40 volatile compounds in virgin edible argan oil was determined as a function of argan kernel roasting time. Most of the volatile compounds begin to be formed after 15 to 25 minutes of roasting. This suggests that a strictly controlled roasting time should allow the modulation of argan oil taste and thus satisfy different types of consumers. This could be of major importance considering the present booming use of edible argan oil.
Reduced multiple empirical kernel learning machine.
Wang, Zhe; Lu, MingZhe; Gao, Daqi
2015-02-01
Multiple kernel learning (MKL) is demonstrated to be flexible and effective in depicting heterogeneous data sources, since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL incurs a high time and space complexity in contrast to single kernel learning, which is not acceptable in real-world applications. Meanwhile, it is known that the kernel mappings of MKL generally take two forms, implicit kernel mapping and empirical kernel mapping (EKM), where the latter has attracted less attention. In this paper, we focus on MKL with the EKM, and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, it is the first to reduce both the time and space complexity of MKL with EKM. Different from existing MKL, the proposed RMEKLM adopts the Gauss Elimination technique to extract a set of feature vectors, and it is validated that doing so does not lose much information of the original feature space. RMEKLM then adopts the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of its geometric structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, which means that the dot product of two vectors in the original feature space is equal to that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM brings a simpler computation and meanwhile needs less storage space, especially in the processing of testing. Finally, the experimental results show that RMEKLM achieves an efficient and effective performance in terms of both complexity and classification. The contributions of this paper can be given as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper first reduces both the time and space complexity of the EKM-based MKL; (3
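Empirical kernel mapping, the ingredient the abstract builds on, sends each sample to an explicit finite-dimensional vector whose dot products reproduce the kernel matrix. A minimal sketch of plain EKM via eigendecomposition; the RBF kernel and random data are illustrative, and this does not include the paper's Gauss Elimination reduction:

```python
import numpy as np

def empirical_kernel_map(X, kernel):
    """Map training samples to explicit vectors Phi with Phi @ Phi.T == K,
    using the eigendecomposition K = V diag(lam) V^T, Phi = V diag(sqrt(lam))."""
    m = len(X)
    K = np.array([[kernel(X[i], X[j]) for j in range(m)] for i in range(m)])
    lam, V = np.linalg.eigh(K)
    lam = np.clip(lam, 0.0, None)   # guard against tiny negative eigenvalues
    Phi = V * np.sqrt(lam)          # scale column j by sqrt(lam_j)
    return K, Phi

rbf = lambda a, b: np.exp(-np.sum((a - b) ** 2))   # illustrative kernel
X = np.random.default_rng(0).normal(size=(6, 3))    # illustrative data
K, Phi = empirical_kernel_map(X, rbf)
```

Because the mapped vectors are explicit, ordinary linear methods (and reductions such as the one proposed) can operate on them directly, which is the practical appeal of EKM over implicit mappings.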
Utilizing Kernelized Advection Schemes in Ocean Models
NASA Astrophysics Data System (ADS)
Zadeh, N.; Balaji, V.
2008-12-01
There has been a recent effort in the ocean model community to use a set of generic FORTRAN library routines for advection of scalar tracers in the ocean. In a collaborative project called Hybrid Ocean Model Environment (HOME), vastly different advection schemes (space-differencing schemes for the advection equation) become available to modelers in the form of subroutine calls (kernels). In this talk we explore the possibility of utilizing ESMF data structures in wrapping these kernels so that they can be readily used in ESMF gridded components.
Kernel abortion in maize. II. Distribution of ¹⁴C among kernel carbohydrates
Hanft, J.M.; Jones, R.J.
1986-06-01
This study was designed to compare the uptake and distribution of ¹⁴C among fructose, glucose, sucrose, and starch in the cob, pedicel, and endosperm tissues of maize (Zea mays L.) kernels induced to abort by high temperature with those that develop normally. Kernels cultured in vitro at 30 and 35°C were transferred to (¹⁴C)sucrose media 10 days after pollination. Kernels cultured at 35°C aborted prior to the onset of linear dry matter accumulation. Significant uptake into the cob, pedicel, and endosperm of radioactivity associated with the soluble and starch fractions of the tissues was detected after 24 hours in culture on labeled media. After 8 days in culture on (¹⁴C)sucrose media, 48 and 40% of the radioactivity associated with the cob carbohydrates was found in the reducing sugars at 30 and 35°C, respectively. Of the total carbohydrates, a higher percentage of label was associated with sucrose and a lower percentage with fructose and glucose in pedicel tissue of kernels cultured at 35°C compared to kernels cultured at 30°C. These results indicate that sucrose was not cleaved to fructose and glucose as rapidly during the unloading process in the pedicel of kernels induced to abort by high temperature. Kernels cultured at 35°C had a much lower proportion of label associated with endosperm starch (29%) than did kernels cultured at 30°C (89%). Kernels cultured at 35°C had a correspondingly higher proportion of ¹⁴C in endosperm fructose, glucose, and sucrose.
Modern QCD - Lecture 4
None
2016-07-12
We will consider some processes of interest at the LHC and will discuss the main elements of their cross-section calculations. We will also summarize the current status of higher order calculations.
Heavy Quarks, QCD, and Effective Field Theory
Thomas Mehen
2012-10-09
The research supported by this OJI award is in the area of heavy quark and quarkonium production, especially the application of Soft-Collinear Effective Theory (SCET) to the hadronic production of quarkonia. SCET is an effective theory which allows one to derive factorization theorems and perform all-order resummations for QCD processes. Factorization theorems allow one to separate the various scales entering a QCD process, and in particular, separate perturbative scales from nonperturbative scales. The perturbative physics can then be calculated using QCD perturbation theory. Universal functions with precise field-theoretic definitions describe the nonperturbative physics. In addition, higher order perturbative QCD corrections that are enhanced by large logarithms can be resummed using the renormalization group equations of SCET. This research applies SCET to the physics of heavy quarks, heavy quarkonium, and similar particles.
Simplifying Multi-Jet QCD Computation
Peskin, Michael E.; /SLAC
2011-11-04
These lectures give a pedagogical discussion of the computation of QCD tree amplitudes for collider physics. The tools reviewed are spinor products, color ordering, MHV amplitudes, and the Britto-Cachazo-Feng-Witten recursion formula.
Excited light meson spectroscopy from lattice QCD
Christopher Thomas, Hadron Spectrum Collaboration
2012-04-01
I report on recent progress in calculating excited meson spectra using lattice QCD, emphasizing results and phenomenology. With novel techniques we can now extract extensive spectra of excited mesons with high statistical precision, including spin-four states and those with exotic quantum numbers. As well as isovector meson spectra, I will present new calculations of the spectrum of excited light isoscalar mesons, something that has up to now been a challenge for lattice QCD. I show determinations of the flavor content of these mesons, including the eta-eta' mixing angle, providing a window on annihilation dynamics in QCD. I will also discuss recent work on using lattice QCD to map out the energy-dependent phase shift in pi-pi scattering and future applications of the methodology to the study of resonances and decays.
Deep Sequencing of RNA from Ancient Maize Kernels
Rasmussen, Morten; Cappellini, Enrico; Romero-Navarro, J. Alberto; Wales, Nathan; Alquezar-Planas, David E.; Penfield, Steven; Brown, Terence A.; Vielle-Calzada, Jean-Philippe; Montiel, Rafael; Jørgensen, Tina; Odegaard, Nancy; Jacobs, Michael; Arriaza, Bernardo; Higham, Thomas F. G.; Ramsey, Christopher Bronk; Willerslev, Eske; Gilbert, M. Thomas P.
2013-01-01
The characterization of biomolecules from ancient samples can shed otherwise unobtainable insights into the past. Despite the fundamental role of transcriptomal change in evolution, the potential of ancient RNA remains unexploited – perhaps due to dogma associated with the fragility of RNA. We hypothesize that seeds offer a plausible refuge for long-term RNA survival, due to the fundamental role of RNA during seed germination. Using RNA-Seq on cDNA synthesized from nucleic acid extracts, we validate this hypothesis through demonstration of partial transcriptomal recovery from two sources of ancient maize kernels. The results suggest that ancient seed transcriptomics may offer a powerful new tool with which to study plant domestication. PMID:23326310
Accuracy of Reduced and Extended Thin-Wire Kernels
Burke, G J
2008-11-24
Some results are presented comparing the accuracy of the reduced thin-wire kernel and an extended kernel with exact integration of the 1/R term of the Green's function, with results shown for simple wire structures.
Novel QCD effects in nuclear collisions
Brodsky, S.J.
1991-12-01
Heavy ion collisions can provide a novel environment for testing fundamental dynamical processes in QCD, including minijet formation and interactions, formation zone phenomena, color filtering, coherent co-mover interactions, and new higher twist mechanisms which could account for the observed excess production and anomalous nuclear target dependence of heavy flavor production. The possibility of using light-cone thermodynamics and a corresponding covariant temperature to describe the QCD phases of the nuclear fragmentation region is also briefly discussed.
Lattice QCD and the Jefferson Laboratory Program
Jozef Dudek, Robert Edwards, David Richards, Konstantinos Orginos
2011-06-01
Lattice gauge theory provides our only means of performing ab initio calculations in the non-perturbative regime. It has thus become an increasingly important component of the Jefferson Laboratory physics program. In this paper, we describe the contributions of lattice QCD to our understanding of hadronic and nuclear physics, focusing on the structure of hadrons, the calculation of the spectrum and properties of resonances, and finally on deriving an understanding of the QCD origin of nuclear forces.
QCD and hard diffraction at the LHC
Albrow, Michael G.; /Fermilab
2005-09-01
As an introduction to QCD at the LHC the author gives an overview of QCD at the Tevatron, emphasizing the high-Q² frontier which will be taken over by the LHC. After describing briefly the LHC detectors the author discusses high mass diffraction, in particular central exclusive production of Higgs and vector boson pairs. The author introduces the FP420 project to measure the scattered protons 420 m downstream of ATLAS and CMS.
Recent QCD Studies at the Tevatron
Group, Robert Craig
2008-04-01
Since the beginning of Run II at the Fermilab Tevatron the QCD physics groups of the CDF and D0 experiments have worked to reach unprecedented levels of precision for many QCD observables. Thanks to the large dataset, over 3 fb⁻¹ of integrated luminosity recorded by each experiment, important new measurements have recently been made public and will be summarized in this paper.
Lattice and Phase Diagram in QCD
Lombardo, Maria Paola
2008-10-13
Model calculations have produced a number of very interesting expectations for the QCD phase diagram, and the task of lattice calculations is to put these studies on quantitative grounds. I will give an overview of the current status of the lattice analysis of the QCD phase diagram, from the quantitative results of mature calculations at zero and small baryochemical potential, to the exploratory studies of the colder, denser phase.
Some new/old approaches to QCD
Gross, D.J.
1992-11-01
In this lecture I shall discuss some recent attempts to revive some old ideas to address the problem of solving QCD. I believe that it is timely to return to this problem, which has been woefully neglected for the last decade. QCD is a permanent part of the theoretical landscape and eventually we will have to develop analytic tools for dealing with the theory in the infrared. Lattice techniques are useful but they have not yet lived up to their promise. Even if one manages to derive the hadronic spectrum numerically, to an accuracy of 10% or even 1%, we will not be truly satisfied unless we have some analytic understanding of the results. Also, lattice Monte-Carlo methods can only be used to answer a small set of questions. Many issues of great conceptual and practical interest, in particular the calculation of scattering amplitudes, are thus far beyond lattice control. Any progress in controlling QCD in an explicit, analytic fashion would be of great conceptual value. It would also be of great practical aid to experimentalists, who must use rather ad-hoc and primitive models of QCD scattering amplitudes to estimate the backgrounds to interesting new physics. I will discuss an attempt to derive a string representation of QCD and a revival of the large-N approach to QCD. Both of these ideas have a long history; many theorist-years have been devoted to their pursuit, so far with little success. I believe that it is time to try again. In part this is because of the progress in the last few years in string theory. Our increased understanding of string theory should make the attempt to discover a stringy representation of QCD easier, and the methods explored in matrix models might be employed to study the large-N limit of QCD.
The structure of gluon radiation in QCD
Parke, S.; Mangano, M.
1989-08-01
For massless QCD the hard scattering amplitudes are naturally written in terms of the dual color expansion. Here I present this expansion for purely gluonic processes and processes involving quark-antiquark pairs and gluons. The properties of the sub-amplitudes as well as explicit algebraic expressions are given for a number of these processes. Also, I demonstrate how to recover massless QED amplitudes from the dual expansion of massless QCD. 16 refs., 3 figs., 1 tab.
Kernel Partial Least Squares for Nonlinear Regression and Discrimination
NASA Technical Reports Server (NTRS)
Rosipal, Roman; Clancy, Daniel (Technical Monitor)
2002-01-01
This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate usefulness of the method.
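The kernel PLS model summarized above extracts score vectors directly from a Gram matrix in the RKHS. A minimal sketch of one common NIPALS-style component extraction on the centered kernel matrix; this is our own simplified illustration (function and variable names are assumptions), not necessarily the paper's exact algorithm:

```python
import numpy as np

def kernel_pls_scores(K, Y, n_components=2, max_iter=500, tol=1e-10):
    """Extract orthonormal score vectors from a Gram matrix K.
    Simplified NIPALS-style kernel PLS sketch, for illustration only."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n    # centering matrix
    K = H @ K @ H                          # center kernel in feature space
    Y = Y - Y.mean(axis=0)
    T = np.zeros((n, n_components))
    for i in range(n_components):
        u = Y[:, 0].copy()                 # initialize from a response column
        for _ in range(max_iter):
            t = K @ u
            t /= np.linalg.norm(t)
            u_new = Y @ (Y.T @ t)          # project responses onto the score
            u_new /= np.linalg.norm(u_new)
            if np.linalg.norm(u_new - u) < tol:
                u = u_new
                break
            u = u_new
        t = K @ u
        t /= np.linalg.norm(t)
        P = np.eye(n) - np.outer(t, t)     # deflate the extracted direction
        K = P @ K @ P
        Y = Y - np.outer(t, t @ Y)
        T[:, i] = t
    return T
```

Because each deflation projects K onto the orthogonal complement of the extracted score, successive scores come out mutually orthonormal; they can then be used for regression or, as in the paper, for discrimination.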
Fabrication of Uranium Oxycarbide Kernels for HTR Fuel
Charles Barnes; Clay Richardson; Scott Nagley; John Hunn; Eric Shaber
2010-10-01
Babcock and Wilcox (B&W) has been producing high quality uranium oxycarbide (UCO) kernels for Advanced Gas Reactor (AGR) fuel tests at the Idaho National Laboratory. In 2005, 350-µm, 19.7% 235U-enriched UCO kernels were produced for the AGR-1 test fuel. Following coating of these kernels and forming the coated particles into compacts, this fuel was irradiated in the Advanced Test Reactor (ATR) from December 2006 until November 2009. B&W produced 425-µm, 14% enriched UCO kernels in 2008, and these kernels were used to produce fuel for the AGR-2 experiment that was inserted in ATR in 2010. B&W also produced 500-µm, 9.6% enriched UO2 kernels for the AGR-2 experiments. Kernels of the same size and enrichment as AGR-1 were also produced for the AGR-3/4 experiment. In addition to fabricating enriched UCO and UO2 kernels, B&W has produced more than 100 kg of natural uranium UCO kernels which are being used in coating development tests. Successive lots of kernels have demonstrated consistently high quality and have also allowed for fabrication process improvements. Improvements in kernel forming were made subsequent to AGR-1 kernel production. Following fabrication of AGR-2 kernels, incremental increases in sintering furnace charge size have been demonstrated. Recently, small-scale sintering tests using a development furnace equipped with a residual gas analyzer (RGA) have increased understanding of how kernel sintering parameters affect sintered kernel properties. The steps taken to increase throughput and process knowledge have reduced kernel production costs. Studies have been performed of additional modifications toward the goal of increasing the capacity of the current fabrication line for production of first core fuel for the Next Generation Nuclear Plant (NGNP) and providing a basis for the design of a full-scale fuel fabrication facility.
Windows on the axion. [quantum chromodynamics (QCD)]
NASA Technical Reports Server (NTRS)
Turner, Michael S.
1989-01-01
Peccei-Quinn symmetry with its attendant axion is a most compelling, and perhaps the most minimal, extension of the standard model, as it provides a very elegant solution to the nagging strong CP-problem associated with the theta vacuum structure of QCD. However, particle physics gives little guidance as to the axion mass; a priori, the plausible values span the range 10⁻¹² eV ≲ mₐ ≲ 10⁶ eV, some 18 orders of magnitude. Laboratory experiments have excluded masses greater than 10⁴ eV, leaving some 16 orders of magnitude unprobed. Axions have a host of interesting astrophysical and cosmological effects, including modifying the evolution of stars of all types (our sun, red giants, white dwarfs, and neutron stars), contributing significantly to the mass density of the Universe today, and producing detectable line radiation through the decays of relic axions. Consideration of these effects has probed 14 orders of magnitude in axion mass and has left open only two windows for further exploration: 10⁻⁶ eV ≲ mₐ ≲ 10⁻³ eV and 1 eV ≲ mₐ ≲ 5 eV (hadronic axions only). Both windows are accessible to experiment, and a variety of very interesting experiments, all of which involve heavenly axions, are being planned or are underway.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946... the separated half of a kernel with not more than one-eighth broken off....
Kernel Temporal Differences for Neural Decoding
Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.
2015-01-01
We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatial-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural state to action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain machine interfaces. PMID:25866504
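The core idea, combining a kernel representation of the state with the temporal-difference error, can be sketched in a few lines. This is our simplified illustration assuming a Gaussian kernel and a grow-by-one center set; it omits eligibility traces and is not the authors' exact KTD(λ) algorithm:

```python
import numpy as np

def gauss_kernel(x, centers, sigma=1.0):
    """Gaussian kernel between a state x and an array of stored centers."""
    return np.exp(-np.sum((x - centers) ** 2, axis=-1) / (2 * sigma**2))

class KernelTD:
    """Toy kernel TD(0) value estimator: each update adds a kernel center
    weighted by the learning rate times the TD error."""
    def __init__(self, gamma=0.9, lr=0.1):
        self.gamma, self.lr = gamma, lr
        self.centers, self.weights = [], []

    def value(self, x):
        if not self.centers:
            return 0.0
        k = gauss_kernel(np.asarray(x, dtype=float), np.asarray(self.centers))
        return float(np.dot(self.weights, k))

    def update(self, x, reward, x_next):
        # TD error: delta = r + gamma * V(x') - V(x)
        delta = reward + self.gamma * self.value(x_next) - self.value(x)
        self.centers.append(np.asarray(x, dtype=float))
        self.weights.append(self.lr * delta)
        return delta
```

Repeated updates on the same transition shrink the TD error, i.e. the value estimate converges toward a fixed point of the Bellman equation for that transition.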
7 CFR 981.8 - Inedible kernel.
Code of Federal Regulations, 2013 CFR
2013-01-01
... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...
7 CFR 981.8 - Inedible kernel.
Code of Federal Regulations, 2011 CFR
2011-01-01
... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...
7 CFR 981.8 - Inedible kernel.
Code of Federal Regulations, 2010 CFR
2010-01-01
... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...
7 CFR 981.8 - Inedible kernel.
Code of Federal Regulations, 2012 CFR
2012-01-01
... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...
7 CFR 981.8 - Inedible kernel.
Code of Federal Regulations, 2014 CFR
2014-01-01
... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...
21 CFR 176.350 - Tamarind seed kernel powder.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 3 2010-04-01 2009-04-01 true Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in...
21 CFR 176.350 - Tamarind seed kernel powder.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 3 2011-04-01 2011-04-01 false Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in...
21 CFR 176.350 - Tamarind seed kernel powder.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 3 2012-04-01 2012-04-01 false Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in...
21 CFR 176.350 - Tamarind seed kernel powder.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 3 2013-04-01 2013-04-01 false Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in...
21 CFR 176.350 - Tamarind seed kernel powder.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 3 2014-04-01 2014-04-01 false Tamarind seed kernel powder. 176.350 Section 176... Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a..., packaging, transporting, or holding food, subject to the provisions of this section. (a) Tamarind...
7 CFR 868.254 - Broken kernels determination.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 7 2010-01-01 2010-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall...
7 CFR 868.304 - Broken kernels determination.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 7 2013-01-01 2013-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the...
7 CFR 868.304 - Broken kernels determination.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 7 2012-01-01 2012-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the...
7 CFR 868.254 - Broken kernels determination.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 7 2013-01-01 2013-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall...
7 CFR 868.304 - Broken kernels determination.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 7 2014-01-01 2014-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the...
7 CFR 868.254 - Broken kernels determination.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 7 2012-01-01 2012-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall...
7 CFR 868.254 - Broken kernels determination.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 7 2011-01-01 2011-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall...
7 CFR 51.2125 - Split or broken kernels.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Split or broken kernels. 51.2125 Section 51.2125 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... § 51.2125 Split or broken kernels. Split or broken kernels means seven-eighths or less of...
7 CFR 868.304 - Broken kernels determination.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 7 2011-01-01 2011-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the...
7 CFR 51.2125 - Split or broken kernels.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Split or broken kernels. 51.2125 Section 51.2125 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... § 51.2125 Split or broken kernels. Split or broken kernels means seven-eighths or less of...
7 CFR 868.254 - Broken kernels determination.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 7 2014-01-01 2014-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall...
7 CFR 868.304 - Broken kernels determination.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 7 2010-01-01 2010-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the...
Standard model group, QCD subgroup - dynamics isolating and testing the elementary QCD subprocess
Tannenbaum, M.J.
1982-01-01
QCD to an experimentalist is the theory of interactions of quarks and gluons. Experimentalists like QCD because QCD is analogous to QED. Thus, following Drell and others who have for many years studied the validity of QED, one has a ready-made menu for tests of QCD. There are the static and long distance tests. These topics are covered by Peter LePage in the static properties group. In this report, dynamic and short distance tests of QCD will be discussed, primarily via reactions with large transverse momenta. This report is an introduction and overview of the subject, to serve as a framework for other reports from the subgroup. In the last two sections, the author has taken the opportunity to discuss his own ideas and opinions.
QCD as a topologically ordered system
Zhitnitsky, Ariel R.
2013-09-15
We argue that QCD belongs to a topologically ordered phase similar to many well-known condensed matter systems with a gap, such as topological insulators or superconductors. Our arguments are based on an analysis of the so-called “deformed QCD”, which is a weakly coupled gauge theory but nevertheless preserves all the crucial elements of strongly interacting QCD, including confinement, nontrivial θ dependence, degeneracy of the topological sectors, etc. Specifically, we construct the so-called topological “BF” action which reproduces the well-known infrared features of the theory, such as the non-dispersive contribution to the topological susceptibility which cannot be associated with any propagating degrees of freedom. Furthermore, we interpret the well-known resolution of the celebrated U(1)_A problem, in which the would-be η′ Goldstone boson generates its mass as a result of mixing of the Goldstone field with a topological auxiliary field characterizing the system. We then identify the non-propagating auxiliary topological field of the BF formulation in deformed QCD with the Veneziano ghost (which plays the crucial role in the resolution of the U(1)_A problem). Finally, we elaborate on the relation between “string-net” condensation in topologically ordered condensed matter systems and long range coherent configurations, the “skeletons”, studied in QCD lattice simulations. -- Highlights: • QCD may belong to a topologically ordered phase similar to condensed matter (CM) systems. • We identify the non-propagating topological field in deformed QCD with the Veneziano ghost. • The relation between “string-net” condensates in CM systems and the “skeletons” in QCD lattice simulations is studied.
Hadronic and nuclear interactions in QCD
Not Available
1982-01-01
Despite the evidence that QCD - or something close to it - gives a correct description of the structure of hadrons and their interactions, it seems paradoxical that the theory has thus far had very little impact in nuclear physics. One reason for this is that the application of QCD to distances larger than 1 fm involves coherent, non-perturbative dynamics which is beyond present calculational techniques. For example, in QCD the nuclear force can evidently be ascribed to quark interchange and gluon exchange processes. These, however, are as complicated to analyze from a fundamental point of view as is the analogous covalent bond in molecular physics. Since a detailed description of quark-quark interactions and the structure of hadronic wavefunctions is not yet well understood in QCD, it is evident that a quantitative first-principles description of the nuclear force will require a great deal of theoretical effort. Another reason for the limited impact of QCD in nuclear physics has been the conventional assumption that nuclear interactions can for the most part be analyzed in terms of an effective meson-nucleon field theory or potential model in isolation from the details of the short distance quark and gluon structure of hadrons. These lectures argue that this view is untenable: in fact, there is no correspondence principle which yields traditional nuclear physics as a rigorous large-distance or non-relativistic limit of QCD dynamics. On the other hand, the distinctions between standard nuclear physics dynamics and QCD at nuclear dimensions are extremely interesting and illuminating for both particle and nuclear physics.
Carbothermic Synthesis of ~820-μm UN Kernels: Investigation of Process Variables
Lindemer, Terrence; Silva, Chinthaka M; Henry, Jr, John James; McMurray, Jake W; Jolly, Brian C; Hunt, Rodney Dale; Terrani, Kurt A
2015-06-01
This report details the continued investigation of process variables involved in converting sol-gel-derived, urania-carbon microspheres to ~820-μm-dia. UN fuel kernels in flow-through, vertical refractory-metal crucibles at temperatures up to 2123 K. Experiments included calcining of air-dried UO_{3}-H_{2}O-C microspheres in Ar and H_{2}-containing gases, conversion of the resulting UO_{2}-C kernels to dense UO_{2}:2UC in the same gases and vacuum, and its conversion in N_{2} to UC_{1-x}N_{x}. The thermodynamics of the relevant reactions were applied extensively to interpret and control the process variables. Producing the precursor UO_{2}:2UC kernel of ~96% theoretical density was required, but its subsequent conversion to UC_{1-x}N_{x} at 2123 K was not accompanied by sintering and resulted in ~83-86% of theoretical density. Decreasing the UC_{1-x}N_{x} kernel carbide component via HCN evolution was shown to be quantitatively consistent with present and past experiments and the only useful application of H_{2} in the entire process.
Chare kernel: A runtime support system for parallel computations
Shu, W.; Kale, L.V.
1991-03-01
This paper presents the chare kernel system, which supports parallel computations with irregular structure. The chare kernel is a collection of primitive functions that manage chares, manipulate messages, invoke atomic computations, and coordinate concurrent activities. Programs written in the chare kernel language can be executed on different parallel machines without change. Users writing such programs concern themselves with the creation of parallel actions but not with assigning them to specific processors. The authors describe the design and implementation of the chare kernel. Performance of chare kernel programs on two hypercube machines, the Intel iPSC/2 and the NCUBE, is also given.
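The message-driven execution model described in the abstract can be illustrated with a toy single-process sketch. This is a hypothetical stand-in for exposition only, not the actual Chare Kernel API: chares are small objects activated only by messages, and a scheduler delivers queued messages as atomic computations.

```python
from collections import deque

class Chare:
    """Illustrative stand-in for a chare: an object activated only by
    messages (hypothetical API, not the real Chare Kernel interface)."""
    def __init__(self, scheduler):
        self.scheduler = scheduler

    def receive(self, msg):
        raise NotImplementedError

class Scheduler:
    """Single-process stand-in for the runtime: queues messages and invokes
    the target chare's handler for each one as an atomic computation."""
    def __init__(self):
        self.queue = deque()

    def send(self, chare, msg):
        self.queue.append((chare, msg))

    def run(self):
        while self.queue:
            chare, msg = self.queue.popleft()
            chare.receive(msg)

class Counter(Chare):
    """Example chare that accumulates the integers it is sent."""
    def __init__(self, scheduler):
        super().__init__(scheduler)
        self.total = 0

    def receive(self, msg):
        self.total += msg
```

In the real system the scheduler would dispatch messages across processors without the user assigning chares to specific nodes; here everything runs in one process purely to show the control flow.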
Kernel weights optimization for error diffusion halftoning method
NASA Astrophysics Data System (ADS)
Fedoseev, Victor
2015-02-01
This paper describes a study to find the best error-diffusion kernel for digital halftoning under various restrictions on the number of non-zero kernel coefficients and their set of values. WSNR was used as an objective measure of quality. The multidimensional optimization problem was solved numerically using several well-known algorithms: Nelder-Mead, BFGS, and others. The study found a kernel that provides a quality gain of about 5% over the best of the commonly used kernels, the one introduced by Floyd and Steinberg. Other kernels obtained allow the computational complexity of the halftoning process to be reduced significantly without loss of quality.
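The baseline Floyd-Steinberg kernel against which the optimized kernels are compared is simple to state in code. The following is an illustrative sketch of classic error diffusion (not the paper's optimized kernels): each pixel is thresholded and its quantization error is pushed onto the unprocessed right and lower neighbours with the standard 7/16, 3/16, 5/16, 1/16 weights.

```python
import numpy as np

# Floyd-Steinberg kernel: (row offset, column offset, weight) for the
# neighbours that receive a share of the quantization error.
FS_KERNEL = [(0, 1, 7 / 16), (1, -1, 3 / 16), (1, 0, 5 / 16), (1, 1, 1 / 16)]

def error_diffusion(image, kernel=FS_KERNEL, threshold=0.5):
    """Binarize a grayscale image with values in [0, 1] by error diffusion."""
    img = image.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if img[y, x] >= threshold else 0.0
            err = img[y, x] - out[y, x]
            # Diffuse the error to neighbours still to be visited.
            for dy, dx, weight in kernel:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    img[ny, nx] += err * weight
    return out
```

Because the error is redistributed rather than discarded, the local mean intensity of the binary output tracks the input; the kernels found in the paper keep this property while changing the weight set.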
Online kernel principal component analysis: a reduced-order model.
Honeine, Paul
2012-09-01
Kernel principal component analysis (kernel-PCA) is an elegant nonlinear extension of one of the most widely used data analysis and dimensionality reduction techniques, principal component analysis. In this paper, we propose an online algorithm for kernel-PCA. To this end, we examine a kernel-based version of Oja's rule, initially put forward to extract a linear principal axis. As with most kernel-based machines, the model order equals the number of available observations. To provide an online scheme, we propose to control the model order. We discuss theoretical results, such as an upper bound on the error of approximating the principal functions with the reduced-order model. We derive a recursive algorithm to discover the first principal axis, and extend it to multiple axes. Experimental results demonstrate the effectiveness of the proposed approach, both on a synthetic data set and on images of handwritten digits, with comparison to classical kernel-PCA and iterative kernel-PCA.
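The linear form of Oja's rule that the paper kernelizes can be sketched as a simple stochastic iteration. This is a minimal illustration of the linear rule only, not the reduced-order kernel algorithm of the paper; the function name, learning rate, and epoch count are illustrative choices.

```python
import numpy as np

def oja_first_axis(X, lr=0.005, epochs=50, seed=0):
    """Estimate the first principal axis of centered data X (n x d) with
    Oja's rule: w <- w + lr * y * (x - y * w), where y = w . x."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            x = X[i]
            y = w @ x
            # Hebbian term y*x pulls w toward x; -y^2*w keeps ||w|| near 1.
            w += lr * y * (x - y * w)
        w /= np.linalg.norm(w)  # explicit renormalization for stability
    return w
```

The kernel version in the paper replaces the explicit weight vector by an expansion over observations, which is why controlling the model order becomes the central issue.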
QCD tests in $p\\bar{p}$ collisions
Huth, John E.; Mangano, Michelangelo L.
1993-02-01
We review the status of QCD tests in high energy p-pbar collisions. Contents: i) Introduction ii) QCD in Hadronic Collisions iii) Jet Production iv) Heavy Flavour Production v) W and Z Production vi) Direct Photons.
A Novel Framework for Learning Geometry-Aware Kernels.
Pan, Binbin; Chen, Wen-Sheng; Xu, Chen; Chen, Bo
2016-05-01
Data from the real world usually have a nonlinear geometric structure, and are often assumed to lie on or close to a low-dimensional manifold in a high-dimensional space. Detecting this nonlinear geometric structure of the data is important for learning algorithms. Recently, there has been a surge of interest in utilizing kernels to exploit the manifold structure of the data. Such kernels are called geometry-aware kernels and are widely used in machine learning algorithms. The performance of these algorithms critically relies on the choice of the geometry-aware kernels. Intuitively, a good geometry-aware kernel should utilize additional information beyond the geometric information. In many applications, it is required to compute the out-of-sample data directly. However, most geometry-aware kernel methods are restricted to the available data given beforehand, with no straightforward extension to out-of-sample data. In this paper, we propose a framework for more general geometry-aware kernel learning. The proposed framework integrates multiple sources of information and enables us to develop flexible and effective kernel matrices. Then, we theoretically show how the learned kernel matrices are extended to the corresponding kernel functions, with which the out-of-sample data can be computed directly. Under our framework, a novel family of geometry-aware kernels is developed. Notably, some existing geometry-aware kernels can be viewed as instances of our framework. The performance of the kernels is evaluated on dimensionality reduction, classification, and clustering tasks. The empirical results show that our kernels significantly improve the performance.
Searching and Indexing Genomic Databases via Kernelization
Gagie, Travis; Puglisi, Simon J.
2015-01-01
The rapid advance of DNA sequencing technologies has yielded databases of thousands of genomes. To search and index these databases effectively, it is important that we take advantage of the similarity between those genomes. Several authors have recently suggested searching or indexing only one reference genome and the parts of the other genomes where they differ. In this paper, we survey the 20-year history of this idea and discuss its relation to kernelization in parameterized complexity. PMID:25710001
Antiangular Ordering of Gluon Radiation in QCD Media
Mehtar-Tani, Yacine; Salgado, Carlos A.; Tywoniuk, Konrad
2011-03-25
We investigate angular and energy distributions of medium-induced gluon emission off a quark-antiquark antenna in the framework of perturbative QCD as an attempt toward understanding, from first principles, jet evolution inside the quark-gluon plasma. In-medium color coherence between emitters, neglected in all previous calculations, leads to a novel mechanism of soft-gluon radiation. The structure of the corresponding spectrum, in contrast with known medium-induced radiation, i.e., off a single emitter, retains some properties of the vacuum case; in particular, it exhibits a soft divergence. However, as opposed to the vacuum, the collinear singularity is regulated by the pair opening angle, leading to a strict angular separation between vacuum and medium-induced radiation, denoted as antiangular ordering. We comment on the possible consequences of this new contribution for jet observables in heavy-ion collisions.
Transverse momentum-dependent parton distribution functions from lattice QCD
Michael Engelhardt, Philipp Haegler, Bernhard Musch, John Negele, Andreas Schaefer
2012-12-01
Transverse momentum-dependent parton distributions (TMDs) relevant for semi-inclusive deep inelastic scattering (SIDIS) and the Drell-Yan process can be defined in terms of matrix elements of a quark bilocal operator containing a staple-shaped Wilson connection. Starting from such a definition, a scheme to determine TMDs in lattice QCD is developed and explored. Parametrizing the aforementioned matrix elements in terms of invariant amplitudes permits a simple transformation of the problem to a Lorentz frame suited for the lattice calculation. Results for the Sivers and Boer-Mulders transverse momentum shifts are obtained using ensembles at the pion masses 369 MeV and 518 MeV, focusing in particular on the dependence of these shifts on the staple extent and a Collins-Soper-type evolution parameter quantifying proximity of the staples to the light cone.
A Fast Reduced Kernel Extreme Learning Machine.
Deng, Wan-Yu; Ong, Yew-Soon; Zheng, Qing-Hua
2016-04-01
In this paper, we present a fast and accurate kernel-based supervised algorithm referred to as the Reduced Kernel Extreme Learning Machine (RKELM). In contrast to the work on the Support Vector Machine (SVM) or Least Square SVM (LS-SVM), which identifies the support vectors or weight vectors iteratively, the proposed RKELM randomly selects a subset of the available data samples as support vectors (or mapping samples). By avoiding the iterative steps of SVM, significant cost savings in the training process can be readily attained, especially on big datasets. RKELM is established based on a rigorous proof of universal learning involving reduced kernel-based single-hidden-layer feedforward networks (SLFNs). In particular, we prove that RKELM can approximate any nonlinear function accurately under the condition of support-vector sufficiency. Experimental results on a wide variety of real-world small- and large-instance-size applications in the context of binary classification, multi-class problems and regression are then reported to show that RKELM can perform at a competitive level of generalization performance as the SVM/LS-SVM at only a fraction of the computational effort incurred.
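The core idea, randomly selecting a subset of samples as kernel centers and then solving a regularized least-squares problem for the output weights in one shot, can be sketched as follows. This is an illustrative reconstruction of the scheme as described in the abstract, not the authors' code; function names, the RBF kernel choice, and all defaults are assumptions.

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    """Gaussian RBF kernel matrix between row sets A (n x d) and B (m x d)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def rkelm_train(X, Y, n_support=20, reg=1e-3, gamma=1.0, seed=0):
    """Reduced kernel ELM sketch: pick random support vectors, then solve a
    ridge-regularized least-squares problem for the output weights."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=n_support, replace=False)
    S = X[idx]                                # randomly chosen mapping samples
    K = rbf(X, S, gamma)                      # n x m hidden-layer output
    beta = np.linalg.solve(K.T @ K + reg * np.eye(n_support), K.T @ Y)
    return S, beta

def rkelm_predict(Xnew, S, beta, gamma=1.0):
    return rbf(Xnew, S, gamma) @ beta
```

The contrast with SVM is visible directly: there is no iterative selection of support vectors, only one random draw followed by a single linear solve.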
Semi-Supervised Kernel Mean Shift Clustering.
Anand, Saket; Mittal, Sushil; Tuzel, Oncel; Meer, Peter
2014-06-01
Mean shift clustering is a powerful nonparametric technique that does not require prior knowledge of the number of clusters and does not constrain the shape of the clusters. However, being completely unsupervised, its performance suffers when the original distance metric fails to capture the underlying cluster structure. Despite recent advances in semi-supervised clustering methods, there has been little effort towards incorporating supervision into mean shift. We propose a semi-supervised framework for kernel mean shift clustering (SKMS) that uses only pairwise constraints to guide the clustering procedure. The points are first mapped to a high-dimensional kernel space where the constraints are imposed by a linear transformation of the mapped points. This is achieved by modifying the initial kernel matrix by minimizing a log det divergence-based objective function. We show the advantages of SKMS by evaluating its performance on various synthetic and real datasets while comparing with state-of-the-art semi-supervised clustering algorithms. PMID:26353281
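The unsupervised Gaussian-kernel mean shift iteration that SKMS builds on can be sketched as follows. This shows only the base algorithm, without the pairwise constraints or the log det divergence kernel update of the paper; names and parameters are illustrative.

```python
import numpy as np

def mean_shift(X, bandwidth=1.0, iters=50, tol=1e-5):
    """Gaussian-kernel mean shift: move every point toward the kernel-weighted
    mean of the data until convergence; converged points sit at density modes."""
    pts = X.astype(float).copy()
    for _ in range(iters):
        # Squared distances from each current point to every original sample.
        d2 = ((pts[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2 * bandwidth ** 2))
        new = (w[:, :, None] * X[None, :, :]).sum(1) / w.sum(1, keepdims=True)
        if np.abs(new - pts).max() < tol:
            pts = new
            break
        pts = new
    return pts
```

Points that converge to the same mode form one cluster, which is why no cluster count is needed; SKMS keeps this iteration but first maps the points so that must-link and cannot-link constraints reshape the distances the kernel sees.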
Kernel methods for phenotyping complex plant architecture.
Kawamura, Koji; Hibrand-Saint Oyant, Laurence; Foucher, Fabrice; Thouroude, Tatiana; Loustau, Sébastien
2014-02-01
Quantitative Trait Loci (QTL) mapping of plant architecture is a critical step for understanding the genetic determinism of plant architecture. Previous studies adopted simple measurements, such as plant height, stem diameter and branching intensity, for QTL mapping of plant architecture. Many of these quantitative traits are generally correlated with each other, which gives rise to statistical problems in the detection of QTL. We aim to test the applicability of kernel methods to phenotyping inflorescence architecture and its QTL mapping. We first test Kernel Principal Component Analysis (KPCA) and Support Vector Machines (SVM) on an artificial dataset of simulated inflorescences with different types of flower distribution, coded as a sequence of flower number per node along a shoot. The ability of SVM and KPCA to discriminate the different inflorescence types is illustrated. We then apply the KPCA representation to a real dataset of rose inflorescence shoots (n=1460) obtained from a mapping population of 98 F1 hybrids. We find kernel principal components with high heritability (>0.7), and the QTL analysis identifies a new QTL, which was not detected by a trait-by-trait analysis of simple architectural measurements. The main tools developed in this paper could be used to tackle the general problem of QTL mapping of complex (sequences, 3D structures, graphs) phenotypic traits.
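The KPCA step used here, double-centering a precomputed Gram matrix and projecting onto its leading eigenvectors, can be sketched generically. This is a standard KPCA implementation for illustration, not the authors' pipeline; the sequence kernel they use on inflorescence codes would simply supply the matrix `K`.

```python
import numpy as np

def kernel_pca(K, n_components=2):
    """Project data onto the leading kernel principal components, given a
    precomputed kernel (Gram) matrix K of shape (n, n)."""
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    # Double centering: center the implicit feature map in feature space.
    Kc = K - one @ K - K @ one + one @ K @ one
    vals, vecs = np.linalg.eigh(Kc)          # eigh returns ascending order
    vals, vecs = vals[::-1], vecs[:, ::-1]
    # Scale eigenvectors so each projection is alpha_i * sqrt(lambda_i).
    top = np.clip(vals[:n_components], 0, None)
    return vecs[:, :n_components] * np.sqrt(top)
```

With a linear kernel K = XXᵀ this reduces exactly to ordinary PCA scores, which is a convenient sanity check before swapping in a structured kernel on sequences or graphs.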
NASA Astrophysics Data System (ADS)
Pope, Benjamin; Tuthill, Peter; Hinkley, Sasha; Ireland, Michael J.; Greenbaum, Alexandra; Latyshev, Alexey; Monnier, John D.; Martinache, Frantz
2016-01-01
At present, the principal limitation on the resolution and contrast of astronomical imaging instruments comes from aberrations in the optical path, which may be imposed by the Earth's turbulent atmosphere or by variations in the alignment and shape of the telescope optics. These errors can be corrected physically, with active and adaptive optics, and in post-processing of the resulting image. A recently developed adaptive optics post-processing technique, called kernel-phase interferometry, uses linear combinations of phases that are self-calibrating with respect to small errors, with the goal of constructing observables that are robust against the residual optical aberrations in otherwise well-corrected imaging systems. Here, we present a direct comparison between kernel phase and the more established competing techniques, aperture masking interferometry, point spread function (PSF) fitting and bispectral analysis. We resolve the α Ophiuchi binary system near periastron, using the Palomar 200-Inch Telescope. This is the first case in which kernel phase has been used with a full aperture to resolve a system close to the diffraction limit with ground-based extreme adaptive optics observations. Excellent agreement in astrometric quantities is found between kernel phase and masking, and kernel phase significantly outperforms PSF fitting and bispectral analysis, demonstrating its viability as an alternative to conventional non-redundant masking under appropriate conditions.
New View of the QCD Phase Diagram
McLerran,L.
2009-07-09
Quarkyonic matter is confining but can have densities much larger than Λ_{QCD}^{3}. Its existence is argued in the large N_{c} limit of QCD and implies that there are at least three phases of QCD with greatly different bulk properties. These are a Confined Phase of hadrons, a Deconfined Phase of quarks and gluons, and the Quarkyonic Phase. In the Quarkyonic Phase, the baryon density is accounted for by a quasi-free gas of quarks, and the antiquarks and gluons are confined into mesons and glueballs. Quarks near the Fermi surface are also treated as baryons. (In addition to these phases, there is a color superconducting phase that has vastly different transport properties than the above, but with bulk properties, such as pressure and energy density, that are not greatly different from those of Quarkyonic Matter.)
Tagging the pion quark structure in QCD
Bakulev, A.P.; Mikhailov, S.V.; Stefanis, N.G.
2006-03-01
We combine the constraints on the pion quark structure available from perturbative QCD and nonperturbative QCD (nonlocal QCD sum rules and light-cone sum rules) with the analysis of current data on F_{πγγ*}(Q^{2}), including recent high-precision lattice calculations of the second moment of the pion's distribution amplitude. We supplement these constraints with those extracted from the renormalon approach by means of the twist-four contributions to the pion distribution amplitude in order to further increase stability with respect to related theoretical uncertainties. We show in which regions of the space of the first two nontrivial Gegenbauer coefficients a_{2} and a_{4} all these constraints overlap, tagging in this way the pion structure to the highest degree possible at present.
Quarkonium states in an anisotropic QCD plasma
Dumitru, Adrian; Guo Yun; Mocsy, Agnes; Strickland, Michael
2009-03-01
We consider quarkonium in a hot quantum chromodynamics (QCD) plasma which, due to expansion and nonzero viscosity, exhibits a local anisotropy in momentum space. At short distances the heavy-quark potential is known at tree level from the hard-thermal loop resummed gluon propagator in anisotropic perturbative QCD. The potential at long distances is modeled as a QCD string which is screened at the same scale as the Coulomb field. At asymptotic separation the potential energy is nonzero and inversely proportional to the temperature. We obtain numerical solutions of the three-dimensional Schroedinger equation for this potential. We find that quarkonium binding is stronger at nonvanishing viscosity and expansion rate, and that the anisotropy leads to polarization of the P-wave states.
QCD sign problem for small chemical potential
Splittorff, K.; Verbaarschot, J. J. M.
2007-06-01
The expectation value of the complex phase factor of the fermion determinant is computed in the microscopic domain of QCD at nonzero chemical potential. We find that the average phase factor is nonvanishing below a critical value of the chemical potential equal to half the pion mass and vanishes exponentially in the volume for larger values of the chemical potential. This holds for QCD with dynamical quarks as well as for quenched and phase quenched QCD. The average phase factor has an essential singularity for zero chemical potential and cannot be obtained by analytic continuation from imaginary chemical potential or by means of a Taylor expansion. The leading order correction in the p-expansion of the chiral Lagrangian is calculated as well.
NASA Astrophysics Data System (ADS)
Alfaro, Jorge; Andrianov, Alexander; Labraña, Pedro
2004-07-01
We study an extended QCD model in (1+1) dimensions obtained from QCD in 4D by compactifying two spatial dimensions and projecting onto the zero-mode subspace. We work out this model in the large Nc limit and using light cone gauge but keeping the equal-time quantization. This system is found to induce a dynamical mass for transverse gluons (adjoint scalars in QCD2), and to undergo a chiral symmetry breaking with the full quark propagators yielding non-tachyonic, dynamical quark masses, even in the chiral limit. We study quark-antiquark bound states which can be classified in this model by their properties under Lorentz transformations inherited from 4D. The scalar and pseudoscalar sectors of the theory are examined and in the chiral limit a massless ground state for pseudoscalars is revealed with a wave function generalizing the so-called 't Hooft pion solution.
Phase diagram of chirally imbalanced QCD matter
Chernodub, M. N.; Nedelin, A. S.
2011-05-15
We compute the QCD phase diagram in the plane of the chiral chemical potential and temperature using the linear sigma model coupled to quarks and to the Polyakov loop. The chiral chemical potential accounts for effects of imbalanced chirality due to QCD sphaleron transitions which may emerge in heavy-ion collisions. We find three effects caused by the chiral chemical potential: the imbalanced chirality (i) tightens the link between the deconfinement and chiral phase transitions; (ii) lowers the common critical temperature; (iii) strengthens the order of the phase transition, converting the crossover into a strong first-order phase transition via a second-order end point. Since the fermionic determinant with the chiral chemical potential has no sign problem, chirally imbalanced QCD matter can be studied in numerical lattice simulations.
Holographic models and the QCD trace anomaly
Jose L. Goity, Roberto C. Trinchero
2012-08-01
Five dimensional dilaton models are considered as possible holographic duals of the pure gauge QCD vacuum. In the framework of these models, the QCD trace anomaly equation is considered. Each quantity appearing in that equation is computed by holographic means. Two exact solutions for different dilaton potentials corresponding to perturbative and non-perturbative β-functions are studied. It is shown that in the perturbative case, where the β-function is the QCD one at leading order, the resulting space is not asymptotically AdS. In the non-perturbative case, the model considered presents confinement of static quarks and leads to a non-vanishing gluon condensate, although it does not correspond to an asymptotically free theory. In both cases analyses based on the trace anomaly and on Wilson loops are carried out.
Equation of State from Lattice QCD Calculations
Gupta, Rajan
2011-01-01
We provide a status report on the calculation of the Equation of State (EoS) of QCD at finite temperature using lattice QCD. Most of the discussion will focus on comparison of recent results obtained by the HotQCD and Wuppertal-Budapest collaborations. We will show that very significant progress has been made towards obtaining high precision results over the temperature range of T = 150-700 MeV. The various sources of systematic uncertainties will be discussed and the differences between the two calculations highlighted. Our final conclusion is that these lattice results of EoS are precise enough to be used in the phenomenological analysis of heavy ion experiments at RHIC and LHC.
QCD flux tubes and anomaly inflow
NASA Astrophysics Data System (ADS)
Xiong, Chi
2013-07-01
We apply the Callan-Harvey anomaly-inflow mechanism to the study of QCD (chromoelectric) flux tubes, quark (pair) creation, and the chiral magnetic effect, using new variables from the Cho-Faddeev-Niemi decomposition of the gauge potential. A phenomenological description of chromoelectric flux tubes is obtained by studying a gauged Nambu-Jona-Lasinio effective Lagrangian, derived from the original QCD Lagrangian. At the quantum level, quark condensates in the QCD vacuum may form a vortexlike structure in a chromoelectric flux tube. Quark zero modes trapped in the vortex are chiral and lead to a two-dimensional gauge anomaly. To cancel it, an effective Chern-Simons coupling is needed and, hence, a topological charge density term naturally appears.
Soltz, R A
2009-08-13
We present results from recent calculations of the QCD equation of state by the HotQCD Collaboration and review the implications for hydrodynamic modeling. The equation of state of QCD at zero baryon density was calculated on a lattice of dimensions 32^{3} × 8 with m_{l} = 0.1 m_{s} (corresponding to a pion mass of ≈220 MeV) using two improved staggered fermion actions, p4 and asqtad. Calculations were performed along lines of constant physics using more than 100M cpu-hours on BG/L supercomputers at LLNL, NYBlue, and SDSC. We present parameterizations of the equation of state suitable for input into hydrodynamic models of heavy-ion collisions.
Exploring hyperons and hypernuclei with lattice QCD
Beane, S.R.; Bedaque, P.F.; Parreno, A.; Savage, M.J.
2003-01-01
In this work we outline a program for lattice QCD that would provide a first step toward understanding the strong and weak interactions of strange baryons. The study of hypernuclear physics has provided a significant amount of information regarding the structure and weak decays of light nuclei containing one or two Lambdas and Sigmas. From a theoretical standpoint, little is known about the hyperon-nucleon interaction, which is required input for systematic calculations of hypernuclear structure. Furthermore, the long-standing discrepancies in the P-wave amplitudes for nonleptonic hyperon decays remain to be understood, and their resolution is central to a better understanding of the weak decays of hypernuclei. We present a framework that utilizes Luscher's finite-volume techniques in lattice QCD to extract the scattering length and effective range for Lambda-N scattering in both QCD and partially-quenched QCD. The effective theory describing the nonleptonic decays of hyperons using isospin symmetry alone, appropriate for lattice calculations, is constructed.
Brodsky, Stanley J.; de Teramond, Guy F.; /SLAC /Southern Denmark U., CP3-Origins /Costa Rica U.
2011-01-10
AdS/QCD, the correspondence between theories in a dilaton-modified five-dimensional anti-de Sitter space and confining field theories in physical space-time, provides a remarkable semiclassical model for hadron physics. Light-front holography allows hadronic amplitudes in the AdS fifth dimension to be mapped to frame-independent light-front wavefunctions of hadrons in physical space-time. The result is a single-variable light-front Schroedinger equation which determines the eigenspectrum and the light-front wavefunctions of hadrons for general spin and orbital angular momentum. The coordinate z in AdS space is uniquely identified with a Lorentz-invariant coordinate ζ which measures the separation of the constituents within a hadron at equal light-front time and determines the off-shell dynamics of the bound state wavefunctions as a function of the invariant mass of the constituents. The hadron eigenstates generally have components with different orbital angular momentum; e.g., the proton eigenstate in AdS/QCD with massless quarks has L = 0 and L = 1 light-front Fock components with equal probability. Higher Fock states with extra quark-antiquark pairs also arise. The soft-wall model also predicts the form of the nonperturbative effective coupling and its β-function. The AdS/QCD model can be systematically improved by using its complete orthonormal solutions to diagonalize the full QCD light-front Hamiltonian or by applying the Lippmann-Schwinger method to systematically include QCD interaction terms. Some novel features of QCD are discussed, including the consequences of confinement for quark and gluon condensates. A method for computing the hadronization of quark and gluon jets at the amplitude level is outlined.
QCD unitarity constraints on Reggeon Field Theory
NASA Astrophysics Data System (ADS)
Kovner, Alex; Levin, Eugene; Lublinsky, Michael
2016-08-01
We point out that the s-channel unitarity of QCD imposes meaningful constraints on a possible form of the QCD Reggeon Field Theory. We show that neither the BFKL nor JIMWLK nor Braun's Hamiltonian satisfy the said constraints. In a toy, zero transverse dimensional case we construct a model that satisfies the analogous constraint and show that at infinite energy it indeed tends to a "black disk limit" as opposed to the model with triple Pomeron vertex only, routinely used as a toy model in the literature.
Novel Aspects of Hard Diffraction in QCD
Brodsky, Stanley J.; /SLAC
2005-12-14
Initial- and final-state interactions from gluon-exchange, normally neglected in the parton model have a profound effect in QCD hard-scattering reactions, leading to leading-twist single-spin asymmetries, diffractive deep inelastic scattering, diffractive hard hadronic reactions, and nuclear shadowing and antishadowing--leading-twist physics not incorporated in the light-front wavefunctions of the target computed in isolation. I also discuss the use of diffraction to materialize the Fock states of a hadronic projectile and test QCD color transparency.
Cascade Baryon Spectrum from Lattice QCD
Mathur, Nilmani; Bulava, John; Edwards, Robert; Engelson, Eric; Joo, Balint; Lichtl, Adam; Lin, Huey-Wen; Morningstar, Colin; Richards, David; Wallace, Stephen
2008-12-01
A comprehensive study of the cascade baryon spectrum using lattice QCD affords the prospect of predicting the masses of states not yet discovered experimentally, and determining the spin and parity of those states for which the quantum numbers are not yet known. The study of the cascades, containing two strange quarks, is particularly attractive for lattice QCD in that the chiral effects are reduced compared to states composed only of u/d quarks, and the states are typically narrow. We report preliminary results for the cascade spectrum obtained by using anisotropic N_{f} = 2 Wilson lattices with temporal lattice spacing a_{t}^{-1} = 5.56 GeV.
String breaking in four dimensional lattice QCD
Duncan, A.; Eichten, E.; Thacker, H.
2001-06-01
Virtual quark pair screening leads to breaking of the string between fundamental representation quarks in QCD. For unquenched four dimensional lattice QCD, this (so far elusive) phenomenon is studied using the recently developed truncated determinant algorithm (TDA). The dynamical configurations were generated on a 650 MHz PC. Quark eigenmodes up to 420 MeV are included exactly in these TDA studies performed at low quark mass on large coarse [but O(a^{2}) improved] lattices. A study of Wilson line correlators in Coulomb gauge extracted from an ensemble of 1000 two-flavor dynamical configurations reveals evidence for flattening of the string tension at distances R ≳ 1 fm.
QCD subgroup on diffractive and forward physics
Albrow, M.G.; Baker, W.; Bhatti, A.
1997-09-01
Over the last few years, there has been a resurgence of interest in small-x or diffractive physics. This has been due to the realization that perturbative QCD techniques may be applicable to what was previously thought of as a non-perturbative problem and to the opening up of new energy regimes at HERA and the Tevatron collider. The goal is to understand the pomeron, and hence the behavior of total cross sections, elastic scattering and diffractive excitation, in terms of the underlying theory, QCD. This paper is divided into experiments of hadron-hadron colliders and electron-proton colliders.
Is fractional electric charge problematic for QCD?
Slansky, R.
1982-01-01
A model of broken QCD is described here; SU(3)^{c} is broken to SO(3)^{g} (g for glow) such that color triplets become glow triplets. With this breaking pattern, there should exist low-mass, fractionally-charged diquark states that are not strongly bound to nuclei, but are rarely produced at present accelerator facilities. The breaking of QCD can be done with a 27^{c}, in which case this strong-interaction theory is easily embedded in unified models such as those based on SU(5), SO(10), or E_{6}.
Hadron scattering and resonances in QCD
NASA Astrophysics Data System (ADS)
Dudek, Jozef J.
2016-05-01
I describe how hadron-hadron scattering amplitudes are related to the eigenstates of QCD in a finite cubic volume. The discrete spectrum of such eigenstates can be determined from correlation functions computed using lattice QCD, and the corresponding scattering amplitudes extracted. I review results from the Hadron Spectrum Collaboration who have used these finite volume methods to study ππ elastic scattering, including the ρ resonance, as well as coupled-channel πK, ηK scattering. Ongoing calculations are advertised and the outlook for finite volume approaches is presented.
Exclusive hadronic and nuclear processes in QCD
Brodsky, S.J.
1985-12-01
Hadronic and nuclear processes are covered, in which all final particles are measured at large invariant masses compared with each other, i.e., large momentum transfer exclusive reactions. Hadronic wave functions in QCD and QCD sum rule constraints on hadron wave functions are discussed. The question of the range of applicability of the factorization formula and perturbation theory for exclusive processes is considered. Some consequences of quark and gluon degrees of freedom in nuclei are discussed which are outside the usual domain of traditional nuclear physics. 44 refs., 7 figs. (LEW)
Perturbative QCD at Finite Temperature and Density
NASA Astrophysics Data System (ADS)
Niégawa, A.
This is a comprehensive review of perturbative hot QCD, including recent developments. The main body of the review concentrates on physical quantities such as reaction rates. Contents: S1. Introduction, S2. Perturbative thermal field theory: Feynman rules, S3. Reaction-rate formula, S4. Hard-thermal-loop resummation scheme in hot QCD, S5. Effective action, S6. Hard modes with |P²| ≤ O(g²T²), S7. Application to the computation of physical quantities, S8. Beyond the hard-thermal-loop resummation scheme, S9. Conclusions.
Hadron interaction at high energies in QCD
NASA Astrophysics Data System (ADS)
Ryskin, M. G.
1990-01-01
The interaction radius for processes with all transverse momenta q_ti ≥ Q₀ is calculated in the LLA of perturbative QCD. The slope of the elastic cross section, B, increases with energy as B = R²/2 ∼ √(α_s ln(s/Q₀²)). The role of absorption corrections and the difference between nonenhanced (eikonal) and semienhanced (fan) screening corrections are discussed. In the last section, high-E_t jet production in diffraction dissociation processes is considered. The predictions of the QCD LLA agree well with the data of the UA-8 experiment.
The odderon intercept in perturbative QCD
NASA Astrophysics Data System (ADS)
Gauron, P.; Lipatov, L. N.; Nicolescu, B.
1994-06-01
We construct, in the framework of QCD, the conformally invariant functional whose maximal value gives the J-plane location of the leading singularity of the t-channel partial waves in LLA for diagrams with n reggeized gluons. In the case of the odderon the wave function in the impact-parameter space depends on only one anharmonic ratio and the corresponding functional is significantly simplified. By using a variational method with conformal techniques we show that the odderon J-plane singularity in the LLA approximation of QCD lies above 1.
Collimation of average multiplicity in QCD jets
NASA Astrophysics Data System (ADS)
Arleo, François; Pérez Ramos, Redamy
2009-11-01
The collimation of average multiplicity inside quark and gluon jets is investigated in perturbative QCD in the modified leading logarithmic approximation (MLLA). The role of higher order corrections accounting for energy conservation and the running of the coupling constant leads to smaller multiplicity collimation as compared to leading logarithmic approximation (LLA) results. The collimation of jets produced in heavy-ion collisions has also been explored by using medium-modified splitting functions enhanced in the infrared sector. As compared to elementary collisions, the angular distribution of the jet multiplicity is found to broaden in QCD media at all energy scales.
Conformal properties of the odderon in QCD
NASA Astrophysics Data System (ADS)
Gauron, Pierre; Lipatov, Lev; Nicolescu, Basarab
1991-05-01
We construct, in the framework of QCD, the conformally invariant functional whose maximal value gives the J-plane location of the leading singularity of the t-channel partial waves in LLA for diagrams with n reggeized gluons. In the case of the odderon the wave function in impact-parameter space depends on only one anharmonic ratio and the corresponding functional is significantly simplified. We discuss in the variational approach the relation between the odderon and the pomeron in QCD. A semiquantitative argument is given that the intercept of the odderon in LLA is probably bigger than 1.
Hyperon-Nucleon Interactions from QCD
NASA Astrophysics Data System (ADS)
Savage, Martin
2012-10-01
Low-energy neutron-Σ⁻ interactions determine, in part, the role of the strange quark in dense matter, such as that found in astrophysical environments. The scattering phase shifts for this system are obtained from Lattice QCD calculations, performed at a pion mass of 389 MeV in two large lattice volumes and at one lattice spacing, and are extrapolated to the physical pion mass using effective field theory. The interactions determined from QCD are consistent with those extracted from hyperon-nucleon experimental data within uncertainties.
The instanton liquid model of QCD
Blotz, A.
1998-12-31
Within a microscopic model for the non-perturbative vacuum of QCD, hadronic correlation functions are calculated. In the model the vacuum is a statistical, interacting ensemble of instantons and anti-instantons at the scale of {Lambda}{sub QCD}. Hadronic two-point as well as three-point correlation functions are evaluated and compared with phenomenological information about the spectra, couplings and form factors. In particular, the electromagnetic form factor of the pion is obtained, and new predictions for the charm contribution to DIS structure functions are made.
Experimental Study of Nucleon Structure and QCD
Jian-Ping Chen
2012-03-01
Overview of Experimental Study of Nucleon Structure and QCD, with focus on the spin structure. Nucleon (spin) Structure provides valuable information on QCD dynamics. A decade of experiments from JLab yields these exciting results: (1) valence spin structure, duality; (2) spin sum rules and polarizabilities; (3) precision measurements of g{sub 2} - high-twist; and (4) first neutron transverse spin results - Collins/Sivers/A{sub LT}. There is a bright future as the 12 GeV Upgrade will greatly enhance our capability: (1) Precision determination of the valence quark spin structure flavor separation; and (2) Precision extraction of transversity/tensor charge/TMDs.
Recent QCD Results from the Tevatron
Vellidis, Costas
2015-10-10
Four years after the shutdown of the Tevatron proton-antiproton collider, the two Tevatron experiments, CDF and DZero, continue producing important results that test the theory of the strong interaction, Quantum Chromodynamics (QCD). The experiments exploit the advantages of the data sample acquired during the Tevatron Run II, stemming from the unique proton-antiproton initial state, the clean environment at the relatively low Tevatron instantaneous luminosities, and the good understanding of the data sample after many years of calibrations and optimizations. A summary of results using the full integrated luminosity is presented, focusing on measurements of prompt photon production, weak boson production associated with jets, and non-perturbative QCD processes.
Finite volume QCD at fixed topological charge
Aoki, Sinya; Fukaya, Hidenori; Hashimoto, Shoji; Onogi, Tetsuya
2007-09-01
In finite volume the partition function of QCD with a given {theta} is a sum of different topological sectors with a weight primarily determined by the topological susceptibility. If a physical observable is evaluated only in a fixed topological sector, the result deviates from the true expectation value by an amount proportional to the inverse space-time volume 1/V. Using the saddle point expansion, we derive formulas to express the correction due to the fixed topological charge in terms of a 1/V expansion. Applying this formula, we propose a class of methods to determine the topological susceptibility in QCD from various correlation functions calculated in a fixed topological sector.
Light mesons in QCD and unquenching effects from the 3PI effective action
NASA Astrophysics Data System (ADS)
Williams, Richard; Fischer, Christian S.; Heupel, Walter
2016-02-01
We investigate the impact of unquenching effects on QCD Green's functions, in the form of quark-loop contributions to both the gluon propagator and three-gluon vertex, in a three-loop inspired truncation of the three-particle irreducible (3PI) effective action. The fully coupled system of Dyson-Schwinger equations for the quark-gluon, ghost-gluon and three-gluon vertices, together with the quark propagator, is solved self-consistently; our only inputs are the ghost and gluon propagators themselves, which are constrained by calculations within lattice QCD. We find that the two different unquenching effects have roughly equal, but opposite, impact on the quark-gluon vertex and quark propagator, with an overall negative impact on the latter. By taking further derivatives of the 3PI effective action, we construct the corresponding quark-antiquark kernel of the Bethe-Salpeter equation for mesons. The leading component is gluon exchange between two fully dressed quark-gluon vertices, thus introducing for the first time an obvious scalar-scalar component to the binding. We gain access to time-like properties of bound states by analytically continuing the coupled system of Dyson-Schwinger equations to the complex plane. We observe that the vector axial-vector splitting is in accord with experiment and that the lightest quark-antiquark scalar meson is above 1 GeV in mass.
Chiral logarithms in quenched QCD
Y. Chen; S. J. Dong; T. Draper; I. Horvath; F. X. Lee; K. F. Liu; N. Mathur; and J. B. Zhang
2004-08-01
The quenched chiral logarithms are examined on a 16³ × 28 lattice with Iwasaki gauge action and overlap fermions. The pion decay constant f_π is used to set the lattice spacing, a = 0.200(3) fm. With pion masses as low as ≈180 MeV, we see the quenched chiral logarithms clearly in m_π²/m and in f_P, the pseudoscalar decay constant. We analyze the data to determine how low the pion mass needs to be in order for quenched one-loop chiral perturbation theory (χPT) to apply. With the constrained curve-fitting method, we are able to extract the quenched chiral logarithmic parameter δ together with other low-energy parameters. Only for m_π ≤ 300 MeV do we obtain a consistent and stable fit with a constant δ, which we determine to be 0.24(3)(4) (at the chiral scale Λ_χ = 0.8 GeV). By comparing to a 12³ × 28 lattice, we estimate the finite-volume effect to be about 2.7% for the smallest pion mass. We also fit the pion mass to the form for the re-summed cactus diagrams and find that its applicable region extends farther than that of the one-loop formula, perhaps up to m_π ≈ 500-600 MeV. The scale-independent δ is determined to be 0.20(3) in this case. We study the quenched non-analytic terms in the nucleon mass and find that the coefficient C_{1/2} in the nucleon mass is consistent with the prediction of one-loop χPT. We also obtain the low-energy constant L_5 from f_π. We conclude from this study that it is imperative to restrict the analysis to pion masses below ≈300 MeV in order to examine the chiral behavior of the hadron masses and decay constants in quenched QCD and match them with quenched one-loop χPT.
Dzierba, A.R.
1995-10-01
One of the open questions in non-perturbative QCD has to do with the existence of meson states predicted by the theory other than q{anti q} states. These include four-quark states (q{sup 2}{anti q}{sup 2} or molecules like K{anti K}), states of pure glue (glueballs: gg or ggg) and mixed or hybrid states (q{anti q}g). The prima facie candidate for a non-q{anti q} state would be one possessing exotic quantum numbers J{sup PC} not consistent with a q{anti q} combination. Examples include J{sup PC} = 0{sup +-}, 0{sup --}, 1{sup -+}, ... Remarkably, states with exotic quantum numbers have not been found despite intensive searches. The case for a possible sighting of an exotic J{sup PC} = 1{sup -+} state decaying into {eta}{pi}{sup 0}, made a few years ago, seems to be dissolving. Yet the evidence for non-q{anti q} states is clearly present: conventional q{anti q} nonets are over-subscribed, and states have been found with decay modes or production characteristics peculiar for q{anti q}. The experimental lesson we have learned is that information from a number of complementary processes must be brought together in order to understand the meson spectrum. Information has come from e{sup +}e{sup -}, {gamma}{gamma}, and pp collisions, from vector meson decays, and from peripheral and central hadroproduction. This talk will review the status of the experimental search. I will especially point out how new technology is being brought to bear on the re-visit of the light-quark sector. New instrumentation allows for sophisticated and selective triggers, and the recent explosion in computing power allows us to analyze data with unprecedented statistics. Preliminary results from a recently completed, ultra-high-statistics experiment using the Multiparticle Spectrometer at Brookhaven Lab will be presented. I will also describe the extension of the search to CEBAF, where an approved experiment will study the sub-structure of scalar mesons via radiative decays.
Small convolution kernels for high-fidelity image restoration
NASA Technical Reports Server (NTRS)
Reichenbach, Stephen E.; Park, Stephen K.
1991-01-01
An algorithm is developed for computing the mean-square-optimal values for small, image-restoration kernels. The algorithm is based on a comprehensive, end-to-end imaging system model that accounts for the important components of the imaging process: the statistics of the scene, the point-spread function of the image-gathering device, sampling effects, noise, and display reconstruction. Subject to constraints on the spatial support of the kernel, the algorithm generates the kernel values that restore the image with maximum fidelity, that is, the kernel minimizes the expected mean-square restoration error. The algorithm is consistent with the derivation of the spatially unconstrained Wiener filter, but leads to a small, spatially constrained kernel that, unlike the unconstrained filter, can be efficiently implemented by convolution. Simulation experiments demonstrate that for a wide range of imaging systems these small kernels can restore images with fidelity comparable to images restored with the unconstrained Wiener filter.
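The constrained fit described above can be sketched numerically. The following is a minimal 1-D illustration, not the paper's algorithm: a small 5-tap kernel is fit by least squares so that its frequency response approximates the unconstrained Wiener filter, with an assumed Gaussian transfer function and noise-to-signal ratio standing in for the full end-to-end system model.

```python
import numpy as np

# Hedged 1-D sketch: fit a small, spatially constrained kernel whose
# frequency response approximates the unconstrained Wiener filter.
# The Gaussian PSF and noise-to-signal ratio are illustrative assumptions.

N = 64                                    # number of frequency samples
u = np.fft.fftfreq(N)                     # normalized spatial frequencies
H = np.exp(-2 * (np.pi * u * 2.0) ** 2)   # assumed Gaussian system transfer function
nsr = 0.01                                # assumed noise-to-signal power ratio

wiener = np.conj(H) / (np.abs(H) ** 2 + nsr)    # unconstrained Wiener filter

support = np.arange(-2, 3)                      # 5-tap spatial support constraint
E = np.exp(-2j * np.pi * np.outer(u, support))  # Fourier basis of the taps

# Least-squares fit of the small kernel's response to the Wiener response
k, *_ = np.linalg.lstsq(E, wiener, rcond=None)
k = k.real                                # kernel is real for a symmetric PSF

print(k)   # small convolution kernel, applied directly in the spatial domain
```

Because the restricted Fourier basis here has orthogonal columns, the fit reduces to a truncated inverse transform of the Wiener response; the resulting kernel can then be applied by ordinary convolution, as the abstract emphasizes.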
Multiple kernel learning for sparse representation-based classification.
Shrivastava, Ashish; Patel, Vishal M; Chellappa, Rama
2014-07-01
In this paper, we propose a multiple kernel learning (MKL) algorithm that is based on the sparse representation-based classification (SRC) method. Taking advantage of the nonlinear kernel SRC in efficiently representing the nonlinearities in the high-dimensional feature space, we propose an MKL method based on the kernel alignment criteria. Our method uses a two-step training procedure to learn the kernel weights and sparse codes. At each iteration, the sparse codes are updated first while fixing the kernel mixing coefficients, and then the kernel mixing coefficients are updated while fixing the sparse codes. These two steps are repeated until a stopping criterion is met. The effectiveness of the proposed method is demonstrated using several publicly available image classification databases and it is shown that this method can perform significantly better than many competitive image classification algorithms. PMID:24835226
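The alternating two-step training loop can be sketched as follows. This is a toy illustration of the idea, not the authors' implementation: the base kernels, the ISTA-style sparse-code update, and the trace-based weight update are all assumptions made for the sketch.

```python
import numpy as np

# Hedged sketch of an alternating MKL-SRC loop: step 1 updates sparse codes
# with the kernel mixture fixed; step 2 updates the kernel mixing weights
# with the codes fixed. Data, kernels, and update rules are illustrative.

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))              # 20 toy samples, 5 features

def linear_kernel(A, B):
    return A @ B.T

def rbf_kernel(A, B, g=0.5):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-g * d)

kernels = [linear_kernel(X, X), rbf_kernel(X, X)]   # base kernel matrices
w = np.ones(len(kernels)) / len(kernels)            # kernel mixing coefficients
Z = np.zeros((20, 20))                              # sparse codes, one column per sample
lam = 0.1                                           # l1 penalty strength

for _ in range(10):
    K = sum(wi * Ki for wi, Ki in zip(w, kernels))  # combined kernel matrix
    step = 1.0 / np.linalg.norm(K, 2)               # ISTA step size (1/Lipschitz)
    # Step 1: sparse codes via ISTA on 0.5*z'Kz - k_i'z + lam*|z|_1, w fixed
    for _ in range(20):
        v = Z - step * (K @ Z - K)                  # gradient step on smooth part
        Z = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)  # soft threshold
    # Step 2: kernel weights from alignment-like scores, codes fixed
    scores = np.array([np.trace(Z.T @ Ki @ Z) for Ki in kernels])
    w = np.maximum(scores, 1e-12)
    w /= w.sum()

print(w)   # learned mixing coefficients, nonnegative and summing to 1
```

The two updates each decrease a piece of the same objective, which is what makes the alternation converge in practice; the actual paper uses kernel alignment criteria for the weight step.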
Visualization of nonlinear kernel models in neuroimaging by sensitivity maps.
Rasmussen, Peter Mondrup; Madsen, Kristoffer Hougaard; Lund, Torben Ellegaard; Hansen, Lars Kai
2011-04-01
There is significant current interest in decoding mental states from neuroimages. In this context kernel methods, e.g., support vector machines (SVM), are frequently adopted to learn statistical relations between patterns of brain activation and experimental conditions. In this paper we focus on visualization of such nonlinear kernel models. Specifically, we investigate the sensitivity map as a technique for generation of global summary maps of kernel classification models. We illustrate the performance of the sensitivity map on functional magnetic resonance imaging (fMRI) data based on visual stimuli. We show that the performance of linear models is reduced for certain scan labelings/categorizations in this data set, while the nonlinear models provide more flexibility. We show that the sensitivity map can be used to visualize nonlinear versions of kernel logistic regression, the kernel Fisher discriminant, and the SVM, and conclude that the sensitivity map is a versatile and computationally efficient tool for visualization of nonlinear kernel models in neuroimaging.
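As a sketch of the idea (not the authors' code), the sensitivity map of a kernel model f(x) = Σᵢ aᵢ k(xᵢ, x) can be estimated as the average squared partial derivative of the model output with respect to each input dimension; the toy data, RBF width, and dual coefficients below are all illustrative assumptions.

```python
import numpy as np

# Hedged sketch: sensitivity map s_j = E[(df/dx_j)^2] for an RBF kernel
# expansion, estimated over the sample. All data here is a toy stand-in
# for voxel patterns and trained dual coefficients.

rng = np.random.default_rng(1)
Xtr = rng.normal(size=(30, 4))     # training patterns (e.g. voxel features)
a = rng.normal(size=30)            # dual coefficients of a trained kernel model
gamma = 0.5                        # assumed RBF width parameter

def grad_f(x):
    """Analytic gradient of f(x) = sum_i a_i exp(-gamma*|x - x_i|^2) at x."""
    diff = x - Xtr                                # (30, 4) displacements
    k = np.exp(-gamma * (diff ** 2).sum(1))       # kernel values k(x_i, x)
    return (-2 * gamma) * (a * k) @ diff          # sum_i a_i k_i * (-2g)(x - x_i)

# Sensitivity map: mean squared gradient component over the data sample
S = np.mean([grad_f(x) ** 2 for x in Xtr], axis=0)
print(S)   # one nonnegative sensitivity per input dimension
```

In the neuroimaging setting each component of S would be mapped back onto its voxel, giving the global summary map the abstract describes.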
Monte Carlo Code System for Electron (Positron) Dose Kernel Calculations.
CHIBANI, OMAR
1999-05-12
Version 00 KERNEL performs dose kernel calculations for an electron (positron) isotropic point source in an infinite homogeneous medium. First, the auxiliary code PRELIM is used to prepare cross section data for the considered medium. Then the KERNEL code simulates the transport of electrons and bremsstrahlung photons through the medium until all particles reach their cutoff energies. The deposited energy is scored in concentric spherical shells at a radial distance ranging from zero to twice the source particle range.
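The scoring geometry can be illustrated with a toy Monte Carlo. This is a hedged sketch only: straight tracks, exponential step lengths, and a fixed energy loss per step stand in for the real electron and bremsstrahlung-photon transport that KERNEL simulates.

```python
import numpy as np

# Toy illustration of dose-kernel scoring: particles from an isotropic point
# source deposit energy along random tracks, tallied in concentric spherical
# shells out to twice an assumed source-particle range of 1.0 (arbitrary units).

rng = np.random.default_rng(2)
r_max, n_shells = 2.0, 20
edges = np.linspace(0.0, r_max, n_shells + 1)     # shell boundaries
tally = np.zeros(n_shells)                        # deposited energy per shell
n_particles = 5000

for _ in range(n_particles):
    # sample an isotropic direction
    mu, phi = rng.uniform(-1, 1), rng.uniform(0, 2 * np.pi)
    d = np.array([np.sqrt(1 - mu**2) * np.cos(phi),
                  np.sqrt(1 - mu**2) * np.sin(phi), mu])
    pos, energy = np.zeros(3), 1.0
    while energy > 0.0:
        pos = pos + d * rng.exponential(0.05)     # toy step length
        dep = min(0.1, energy)                    # toy energy deposit per step
        i = np.searchsorted(edges, np.linalg.norm(pos)) - 1
        if 0 <= i < n_shells:                     # score if inside the shells
            tally[i] += dep
        energy -= dep                             # follow to cutoff (here zero)

print(tally / n_particles)   # mean energy deposited per source particle, per shell
```

Dividing each shell tally by its volume would turn this into the radial dose kernel itself.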
Pomeron and Odderon in QCD and a two dimensional conformal field theory
NASA Astrophysics Data System (ADS)
Lipatov, L. N.
1991-02-01
The problem of solving the Bethe-Salpeter equations in LLA for t-channel partial waves corresponding to Feynman diagrams with many reggeized gluons is simplified significantly by using their conformal invariance in the impact parameter representation and the separability property of their integral kernels. In particular, for the three-gluon system with the odderon quantum numbers we obtain a one-dimensional integral equation. It is known [1] that in the leading logarithmic approximation (LLA) the gluon production amplitudes at large energies √s have the multi-Regge form and are expressed in terms of the reggeized gluon trajectory j = 1 + ω, ω ˜ g², and the Reggeon-Reggeon-particle vertex γ ˜ g, where g is the QCD coupling constant. Higher-order radiative corrections to these quantities and many-Reggeon vertices can be calculated using a dispersive approach [2]. The hadron scattering amplitudes in LLA are expressed through the solution of the Bethe-Salpeter equation for t-channel partial waves f_ω(k, k', q) describing the pomeron built from two reggeized gluons [1, 3]. Further, the analogous equation for the three-gluon compound state with the odderon quantum numbers (P = C = -1) is constructed with the use of the integral kernels for pairlike gluon interactions, which are proportional to the pomeron kernel [4]. It is convenient to perform the Fourier transform of the function f_ω(k_i, k_i') depending on the transverse components k_i, k_i' of the virtual gluon momenta, and to pass to the impact parameter representation f_ω(ρ_j, ρ_j') (here ρ_j and ρ_j' are the transverse coordinates of the initial and final gluons in the t-channel). The Bethe-Salpeter equations in this representation are conformally invariant and their solutions f_ω(ρ_i, ρ_i') can be interpreted as the Green functions of a two-dimensional Euclidean field theory [5].
Varelas, N.; D0 Collaboration
1997-10-01
We present recent results on jet production, dijet angular distributions, W+ Jets, and color coherence from p{anti p} collisions at {radical}s = 1.8 TeV at the Fermilab Tevatron Collider using the D0 detector. The data are compared to perturbative QCD calculations or to predictions of parton shower based Monte Carlo models.
QCD in hadron-hadron collisions
Albrow, M.
1997-03-01
Quantum Chromodynamics provides a good description of many aspects of high energy hadron-hadron collisions, and this will be described, along with some aspects that are not yet understood in QCD. Topics include high E{sub T} jet production, direct photon, W, Z and heavy flavor production, rapidity gaps and hard diffraction.
Heavy quark masses from lattice QCD
NASA Astrophysics Data System (ADS)
Lytle, Andrew T.
2016-07-01
Progress in quark mass determinations from lattice QCD is reviewed, focusing on results for charm and bottom mass. These are of particular interest for precision Higgs studies. Recent determinations have achieved percent-level uncertainties with controlled systematics. Future prospects for these calculations are also discussed.
Exploring Hyperons and Hypernuclei with Lattice QCD
S.R. Beane; P.F. Bedaque; A. Parreno; M.J. Savage
2005-01-01
In this work we outline a program for lattice QCD that would provide a first step toward understanding the strong and weak interactions of strange baryons. The study of hypernuclear physics has provided a significant amount of information regarding the structure and weak decays of light nuclei containing one or two Lambdas and Sigmas. From a theoretical standpoint, little is known about the hyperon-nucleon interaction, which is required input for systematic calculations of hypernuclear structure. Furthermore, the long-standing discrepancies in the P-wave amplitudes for nonleptonic hyperon decays remain to be understood, and their resolution is central to a better understanding of the weak decays of hypernuclei. We present a framework that utilizes Luscher's finite-volume techniques in lattice QCD to extract the scattering length and effective range for Lambda-N scattering in both QCD and partially-quenched QCD. The effective theory describing the nonleptonic decays of hyperons using isospin symmetry alone, appropriate for lattice calculations, is constructed.
THE TOP QUARK, QCD, AND NEW PHYSICS.
DAWSON,S.
2002-06-01
The role of the top quark in completing the Standard Model quark sector is reviewed, along with a discussion of production, decay, and theoretical restrictions on the top quark properties. Particular attention is paid to the top quark as a laboratory for perturbative QCD. As examples of the relevance of QCD corrections in the top quark sector, the calculation of e{sup +}e{sup -} {yields} t{bar t} at next-to-leading-order QCD using the phase space slicing algorithm and the implications of a precision measurement of the top quark mass are discussed in detail. The associated production of a t{bar t} pair and a Higgs boson in either e{sup +}e{sup -} or hadronic collisions is presented at next-to-leading-order QCD and its importance for a measurement of the top quark Yukawa coupling emphasized. Implications of the heavy top quark mass for model builders are briefly examined, with the minimal supersymmetric Standard Model and topcolor discussed as specific examples.
Schvellinger, Martin
2008-07-28
We briefly review one of the current applications of the AdS/CFT correspondence known as AdS/QCD and discuss the calculation of four-point quark-flavour current correlation functions and their application to observables related to neutral kaon decays and neutral kaon mixing processes.
From continuum QCD to hadron observables
NASA Astrophysics Data System (ADS)
Binosi, Daniele
2016-03-01
We show that the form of the renormalization group invariant quark-gluon interaction predicted by a refined nonperturbative analysis of the QCD gauge sector is in quantitative agreement with the one required for describing a wide range of hadron observables using sophisticated truncation schemes of the Schwinger-Dyson equations relevant in the matter sector.
The CKM Matrix from Lattice QCD
Mackenzie, Paul B.; /Fermilab
2009-07-01
Lattice QCD plays an essential role in testing and determining the parameters of the CKM theory of flavor mixing and CP violation. Very high precisions are required for lattice calculations analyzing CKM data; I discuss the prospects for achieving them. Lattice calculations will also play a role in investigating flavor mixing and CP violation beyond the Standard Model.
On-Shell Methods in Perturbative QCD
Bern, Zvi; Dixon, Lance J.; Kosower, David A.
2007-04-25
We review on-shell methods for computing multi-parton scattering amplitudes in perturbative QCD, utilizing their unitarity and factorization properties. We focus on aspects which are useful for the construction of one-loop amplitudes needed for phenomenological studies at the Large Hadron Collider.
QCD PHASE TRANSITIONS-VOLUME 15.
SCHAFER,T.
1998-11-04
The title of the workshop, ''The QCD Phase Transitions'', in fact happened to be too narrow for its real contents. It would be more accurate to say that it was devoted to the different phases of QCD and QCD-related gauge theories, with strong emphasis on discussion of the underlying non-perturbative mechanisms which manifest themselves in all those phases. Before we go to specifics, let us emphasize one important aspect of the present status of non-perturbative Quantum Field Theory in general. It remains true that its study does not get attention proportional to the intellectual challenge it deserves, and that the theorists working on it remain very fragmented. The efforts to create a Theory of Everything, including Quantum Gravity, have attracted the lion's share of attention and young talent. Nevertheless, in the last few years there has also been tremendous progress and even some shift of attention toward emphasis on the unity of non-perturbative phenomena. For example, we have seen efforts to connect the lessons from recent progress in supersymmetric theories with those in QCD, as derived from phenomenology and the lattice. Another example is the Maldacena conjecture and related developments, which connect string theory, supergravity, and N=4 supersymmetric gauge theory. Although the progress mentioned is remarkable by itself, if we listened to each other more we might have a chance to strengthen the field and reach a better understanding of this spectacular non-perturbative physics.
Visualization Tools for Lattice QCD - Final Report
Massimo Di Pierro
2012-03-15
Our research project is about the development of visualization tools for Lattice QCD. We developed various tools by extending existing libraries, adding new algorithms, exposing new APIs, and creating web interfaces (including the new NERSC gauge connection web site). Our tools cover the full stack of operations from automating download of data, to generating VTK files (topological charge, plaquette, Polyakov lines, quark and meson propagators, currents), to turning the VTK files into images, movies, and web pages. Some of the tools have their own web interfaces. Some lattice QCD visualization tools have been created in the past but, to our knowledge, ours are the only ones of their kind, since they are general purpose, customizable, and relatively easy to use. We believe they will be valuable to physicists working in the field. They can be used to better teach Lattice QCD concepts to new graduate students; to observe the changes in topological charge density and detect possible sources of bias in computations; to observe the convergence of the algorithms at a local level and determine possible problems; to probe heavy-light mesons with currents and determine their spatial distribution; and to detect corrupted gauge configurations. There are some indirect results of this grant that will benefit a broader audience than Lattice QCD physicists.
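As an illustration of one stage of such a pipeline, the short Python sketch below writes a lattice scalar field to a legacy ASCII VTK file that standard VTK-based viewers (e.g. ParaView) can render. The field here is random stand-in data rather than a real gauge configuration, and the function name is ours, not part of the project's API.

```python
import numpy as np

# Hedged sketch: dump a 3-D lattice scalar field (a toy stand-in for a
# topological charge density slice) as a legacy ASCII VTK structured-points
# file. The grid spacing and origin are illustrative defaults.

def write_vtk_scalar(filename, field, name="topological_charge"):
    nx, ny, nz = field.shape
    with open(filename, "w") as f:
        f.write("# vtk DataFile Version 3.0\n")
        f.write("lattice scalar field\nASCII\n")
        f.write("DATASET STRUCTURED_POINTS\n")
        f.write(f"DIMENSIONS {nx} {ny} {nz}\n")
        f.write("ORIGIN 0 0 0\nSPACING 1 1 1\n")
        f.write(f"POINT_DATA {nx * ny * nz}\n")
        f.write(f"SCALARS {name} float 1\nLOOKUP_TABLE default\n")
        # the legacy VTK format expects x to vary fastest: Fortran order
        for v in field.flatten(order="F"):
            f.write(f"{v:.6e}\n")

field = np.random.default_rng(3).normal(size=(4, 4, 8))   # toy lattice slice
write_vtk_scalar("charge.vtk", field)
```

The resulting file can be opened directly in ParaView, or converted to images and movies as the pipeline above describes.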
QCD parton model at collider energies
Ellis, R.K.
1984-09-01
Using the example of vector boson production, the application of the QCD-improved parton model at collider energies is reviewed. The reliability of the extrapolation to SSC energies is assessed. Predictions at √s = 0.54 TeV are compared with data. 21 references.
Nonperturbative QCD corrections to electroweak observables
Dru B Renner, Xu Feng, Karl Jansen, Marcus Petschlies
2011-12-01
Nonperturbative QCD corrections are important to many low-energy electroweak observables, for example the muon magnetic moment. However, hadronic corrections also play a significant role at much higher energies due to their impact on the running of standard model parameters, such as the electromagnetic coupling. Currently, these hadronic contributions are accounted for by a combination of experimental measurements and phenomenological modeling but ideally should be calculated from first principles. Recent developments indicate that many of the most important hadronic corrections may be feasibly calculated using lattice QCD methods. To illustrate this, we will examine the lattice computation of the leading-order QCD corrections to the muon magnetic moment, paying particular attention to a recently developed method but also reviewing the results from other calculations. We will then continue with several examples that demonstrate the potential impact of the new approach: the leading-order corrections to the electron and tau magnetic moments, the running of the electromagnetic coupling, and a class of the next-to-leading-order corrections for the muon magnetic moment. Along the way, we will mention applications to the Adler function, the determination of the strong coupling constant, and QCD corrections to muonic hydrogen.
Lattice QCD on a Beowulf Cluster
NASA Astrophysics Data System (ADS)
Kima, Seyong
Using commodity component personal computers based on Alpha processor and commodity network devices and a switch, we built an 8-node parallel computer. GNU/Linux is chosen as an operating system and message passing libraries such as PVM, LAM, and MPICH have been tested as a parallel programming environment. We discuss our lattice QCD project for a heavy quark system on this computer.
Pluto collaboration
1981-02-01
Results obtained with the PLUTO detector at PETRA are presented. Multihadron final states have been analysed with respect to clustering, energy-energy correlations and transverse momenta in jets. QCD predictions for hard gluon emission and soft gluon-quark cascades are discussed. Results on α{sub s} and the gluon spin are given.
Quark screening lengths in finite temperature QCD
Gocksch, A. (California Univ., Santa Barbara, CA. Inst. for Theoretical Physics)
1990-11-01
We have computed Landau gauge quark propagators in both the confined and deconfined phase of QCD. I discuss the magnitude of the resulting screening lengths as well as aspects of chiral symmetry relevant to the quark propagator. 12 refs., 1 fig., 1 tab.
Gauged Axions and their QCD Interactions
Coriano, Claudio; Mariano, Antonio; Guzzi, Marco
2010-12-22
We present a brief overview of axion models associated with anomalous abelian (gauge) symmetries, discussing their main phenomenological features. Among these is the mechanism of vacuum misalignment, introduced at the QCD and electroweak phase transitions with the appearance of periodic potentials, which is responsible for the generation of a mass for these types of axions.
Dual condensate and QCD phase transition
Zhang Bo; Bruckmann, Falk; Fodor, Zoltan; Szabo, Kalman K.; Gattringer, Christof
2011-05-23
The dual condensate is a new QCD phase transition order parameter, which connects confinement and chiral symmetry breaking as different mass limits. We discuss the relation between the fermion spectrum at general boundary conditions and the dual condensate, and show numerical results for the latter from unquenched SU(3) lattice configurations.
Marking up lattice QCD configurations and ensembles
P.Coddington; B.Joo; C.M.Maynard; D.Pleiter; T.Yoshie
2007-10-01
QCDml is an XML-based markup language designed for sharing QCD configurations and ensembles world-wide via the International Lattice Data Grid (ILDG). Based on the latest release, we present key ingredients of QCDml in order to provide some starting points for colleagues in this community to mark up valuable configurations and submit them to the ILDG.
The hadron spectrum from lattice QCD
Peardon, Mike
2010-08-05
Lattice spectroscopy is becoming increasingly sophisticated. This review will introduce the methodology and describe progress made recently probing the spectrum of excitations of QCD. The focus will be on describing new developments that enable excited states, exotic quantum numbers and resonances to be explored.
The Top Quark, QCD, And New Physics.
DOE R&D Accomplishments Database
Dawson, S.
2002-06-01
The role of the top quark in completing the Standard Model quark sector is reviewed, along with a discussion of production, decay, and theoretical restrictions on the top quark properties. Particular attention is paid to the top quark as a laboratory for perturbative QCD. As examples of the relevance of QCD corrections in the top quark sector, the calculation of e{sup +}e{sup -} {yields} t{bar t} at next-to-leading-order QCD using the phase space slicing algorithm and the implications of a precision measurement of the top quark mass are discussed in detail. The associated production of a t{bar t} pair and a Higgs boson in either e{sup +}e{sup -} or hadronic collisions is presented at next-to-leading-order QCD and its importance for a measurement of the top quark Yukawa coupling emphasized. Implications of the heavy top quark mass for model builders are briefly examined, with the minimal supersymmetric Standard Model and topcolor discussed as specific examples.
QCD subgroup on diffractive and forward physics
Albrow, M.G.; Baker, W.; Bhatti, A.
1996-10-01
The goal is to understand the pomeron, and hence the behavior of total cross sections, elastic scattering and diffractive excitation, in terms of the underlying theory, QCD. A description of the basic ideas and phenomenology is followed by a discussion of hadron-hadron and electron-proton experiments. An appendix lists recommended diffractive-physics terms and definitions. 44 refs., 6 figs.
A Kernel-based Account of Bibliometric Measures
NASA Astrophysics Data System (ADS)
Ito, Takahiko; Shimbo, Masashi; Kudo, Taku; Matsumoto, Yuji
The application of kernel methods to citation analysis is explored. We show that a family of kernels on graphs provides a unified perspective on the three bibliometric measures that have been discussed independently: relatedness between documents, global importance of individual documents, and importance of documents relative to one or more (root) documents (relative importance). The framework provided by the kernels establishes relative importance as an intermediate between relatedness and global importance, in which the degree of `relativity,' or the bias between relatedness and importance, is naturally controlled by a parameter characterizing individual kernels in the family.
Embedded real-time operating system micro kernel design
NASA Astrophysics Data System (ADS)
Cheng, Xiao-hui; Li, Ming-qiang; Wang, Xin-zheng
2005-12-01
Embedded systems usually require real-time behavior. Based on an 8051 microcontroller, an embedded real-time operating system micro kernel is proposed consisting of six parts: critical section processing, task scheduling, interrupt handling, semaphore and message mailbox communication, clock management and memory management. CPU time and other resources are distributed among tasks rationally according to their importance and urgency. The design proposed here provides the position, definition, function and principle of the micro kernel. The kernel runs on the platform of an ATMEL AT89C51 microcontroller. Simulation results show that the designed micro kernel is stable and reliable and responds quickly while operating in an application system.
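The priority-driven allocation of CPU time among tasks can be sketched as follows. This is an illustrative model in Python, not the authors' 8051 implementation; the task names and priority values are hypothetical.

```python
import heapq

def schedule(tasks):
    """Dispatch ready tasks in priority order (lower number = more urgent)."""
    heap = [(priority, name) for name, priority in tasks.items()]
    heapq.heapify(heap)
    order = []
    while heap:
        _, name = heapq.heappop(heap)  # always run the most urgent ready task
        order.append(name)
    return order

# hypothetical task set: clock tick is most urgent, UART service least
print(schedule({"uart_isr": 2, "clock_tick": 0, "mailbox": 1}))
```

A real micro kernel would of course preempt and resume tasks rather than run them to completion, but the heap captures the "importance and urgency" ordering.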
Robust visual tracking via speedup multiple kernel ridge regression
NASA Astrophysics Data System (ADS)
Qian, Cheng; Breckon, Toby P.; Li, Hui
2015-09-01
Most tracking methods attempt to build up feature spaces to represent the appearance of a target. However, limited by the complex structure of the distribution of features, feature spaces constructed in a linear manner cannot characterize the nonlinear structure well. We propose an appearance model based on kernel ridge regression for visual tracking. Dense sampling is performed around the target image patches to collect the training samples. In order to obtain a kernel space suited to describing the target appearance, multiple kernel learning is introduced into the selection of kernels. Under this framework, instead of a single kernel, a linear combination of kernels is learned from the training samples to create a kernel space. Exploiting the circulant property of a kernel matrix, a fast iterative interpolation algorithm is developed to seek the coefficients assigned to these kernels so as to give an optimal combination. After the regression function is learned, all candidate image patches gathered are taken as the input of the function, and the candidate with the maximal response is regarded as the object image patch. Extensive experimental results demonstrate that the proposed method outperforms other state-of-the-art tracking methods.
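A minimal sketch of the kernel ridge regression step at the core of such trackers, assuming a Gaussian (RBF) kernel. The bandwidth and regularization values are illustrative, and the fast circulant-matrix machinery described in the abstract is replaced here by a direct linear solve.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def krr_fit(X, y, lam=1e-2, gamma=0.5):
    """Solve (K + lam*I) alpha = y for the dual coefficients alpha."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(X_train, alpha, X_new, gamma=0.5):
    """Response of each candidate patch; a tracker would take the argmax."""
    return rbf_kernel(X_new, X_train, gamma) @ alpha
```

With a tiny regularizer the fit interpolates the training responses, which is the degenerate sanity check; in tracking, `lam` trades that off against smoothness of the response map.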
Robust kernel collaborative representation for face recognition
NASA Astrophysics Data System (ADS)
Huang, Wei; Wang, Xiaohui; Ma, Yanbo; Jiang, Yuzheng; Zhu, Yinghui; Jin, Zhong
2015-05-01
One of the greatest challenges of representation-based face recognition is that the training samples are usually insufficient. In other words, the training set usually does not include enough samples to show the varieties of high-dimensional face images caused by illuminations, facial expressions, and postures. When the test sample is significantly different from the training samples of the same subject, the recognition performance is sharply reduced. We propose a robust kernel collaborative representation based on virtual samples for face recognition. We argue that the virtual training set conveys some reasonable and possible variations of the original training samples. Hence, we design a new objective function to more closely match the representation coefficients generated from the original and virtual training sets. In order to further improve the robustness, we implement the corresponding representation-based face recognition in kernel space. It is noteworthy that any kind of virtual training samples can be used in our method. We use noised face images to obtain virtual face samples. The noise can be approximately viewed as a reflection of the varieties of illuminations, facial expressions, and postures. Our work provides a simple and feasible way to obtain virtual face samples: imposing Gaussian noise (or other types of noise) on the original training samples to obtain possible variations of those samples. Experimental results on the FERET, Georgia Tech, and ORL face databases show that the proposed method is more robust than two state-of-the-art face recognition methods, CRC and kernel CRC.
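The noise-based augmentation can be sketched as below. This is a hedged illustration; the noise level `sigma` and the number of copies are chosen arbitrarily and the paper's actual noise model may differ.

```python
import numpy as np

def make_virtual_samples(X, n_copies=2, sigma=0.05, seed=0):
    """Augment a training set (rows = flattened face images) with noised
    copies, the 'virtual samples' standing in for unseen variations."""
    rng = np.random.default_rng(seed)
    virtual = [X + sigma * rng.standard_normal(X.shape) for _ in range(n_copies)]
    return np.vstack([X] + virtual)
```

The augmented matrix keeps the originals in the first rows, so the same labels can be tiled across the noised copies.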
LFK. Livermore FORTRAN Kernel Computer Test
McMahon, F.H.
1990-05-01
LFK, the Livermore FORTRAN Kernels, is a computer performance test that measures a realistic floating-point performance range for FORTRAN applications. Informally known as the Livermore Loops test, the LFK test may be used as a computer performance test, as a test of compiler accuracy (via checksums) and efficiency, or as a hardware endurance test. The LFK test, which focuses on FORTRAN as used in computational physics, measures the joint performance of the computer CPU, the compiler, and the computational structures in units of Megaflops/sec or Mflops. A C language version of subroutine KERNEL is also included which executes 24 samples of C numerical computation. The 24 kernels are a hydrodynamics code fragment, a fragment from an incomplete Cholesky conjugate gradient code, the standard inner product function of linear algebra, a fragment from a banded linear equations routine, a segment of a tridiagonal elimination routine, an example of a general linear recurrence equation, an equation of state fragment, part of an alternating direction implicit integration code, an integrate predictor code, a difference predictor code, a first sum, a first difference, a fragment from a two-dimensional particle-in-cell code, a part of a one-dimensional particle-in-cell code, an example of how casually FORTRAN can be written, a Monte Carlo search loop, an example of an implicit conditional computation, a fragment of a two-dimensional explicit hydrodynamics code, a general linear recurrence equation, part of a discrete ordinates transport program, a simple matrix calculation, a segment of a Planckian distribution procedure, a two-dimensional implicit hydrodynamics fragment, and determination of the location of the first minimum in an array.
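For illustration, two of the simpler kernels listed above, the standard inner product and a first difference, might look like this. The originals are FORTRAN (with a C translation), so this Python version is only a sketch of the computational structure, not the benchmark code itself.

```python
def kernel3_inner_product(z, x):
    """Sketch of LFK kernel 3: the standard inner product of linear algebra."""
    q = 0.0
    for k in range(len(z)):
        q += z[k] * x[k]
    return q

def kernel12_first_difference(y):
    """Sketch of LFK kernel 12: first difference, x[k] = y[k+1] - y[k]."""
    return [y[k + 1] - y[k] for k in range(len(y) - 1)]
```

In the benchmark each loop body is timed over many repetitions and reported in Mflops; the loop structure, not the arithmetic, is what stresses the compiler.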
Topological Charge Evolution in the Markov-Chain of QCD
Derek Leinweber; Anthony Williams; Jian-bo Zhang; Frank Lee
2004-04-01
The topological charge is studied on lattices of large physical volume and fine lattice spacing. We illustrate how a parity transformation on the SU(3) link-variables of lattice gauge configurations reverses the sign of the topological charge and leaves the action invariant. Random applications of the parity transformation are proposed to traverse from one topological charge sign to the other. The transformation provides an improved unbiased estimator of the ensemble average and is essential in improving the ergodicity of the Markov chain process.
Big bang nucleosynthesis and ΛQCD
NASA Astrophysics Data System (ADS)
Kneller, James P.; McLaughlin, Gail C.
2003-11-01
Big bang nucleosynthesis (BBN) has increasingly become the tool of choice for investigating the permitted variation of fundamental constants during the earliest epochs of the Universe. Here we present a BBN calculation that has been modified to permit changes in the QCD scale, ΛQCD. The primary effects of changing the QCD scale upon BBN are through the deuteron binding energy BD and the neutron-proton mass difference δmnp, which both play crucial roles in determining the primordial abundances. In this paper we show how a simplified BBN calculation allows us to restrict the nuclear data we need to just BD and δmnp yet still gives useful results so that any variation in ΛQCD may be constrained via the corresponding shifts in BD and δmnp by using the current estimates of the primordial deuterium abundance and helium mass fraction. The simplification predicts the helium-4 and deuterium abundances to within 1% and 50%, respectively, when compared with the results of a standard BBN code. But ΛQCD also affects much of the remaining required nuclear input so this method introduces a systematic error into the calculation and we find a degeneracy between BD and δmnp. We show how increased understanding of the relationship of the pion mass and/or BD to other nuclear parameters, such as the binding energy of tritium and the cross section of T+D→4He+n, would yield constraints upon any change in BD and δmnp at the 10% level.
Light-cone quantization and QCD phenomenology
Brodsky, S.J.; Robertson, D.G.
1995-12-31
In principle, quantum chromodynamics provides a fundamental description of hadronic and nuclear structure and dynamics in terms of their elementary quark and gluon degrees of freedom. In practice, the direct application of QCD to reactions involving the structure of hadrons is extremely complex because of the interplay of nonperturbative effects such as color confinement and multi-quark coherence. A crucial tool in analyzing such phenomena is the use of relativistic light-cone quantum mechanics and Fock state methods to provide tractable and consistent treatments of relativistic many-body systems. In this article we present an overview of this formalism applied to QCD, focusing in particular on applications to the final states in deep inelastic lepton scattering that will be relevant for the proposed European Laboratory for Electrons (ELFE), HERMES, HERA, SLAC, and CEBAF. We begin with a brief introduction to light-cone field theory, stressing how it may allow the derivation of a constituent picture, analogous to the constituent quark model, from QCD. We then discuss several applications of the light-cone Fock state formalism to QCD phenomenology. The Fock state representation includes all quantum fluctuations of the hadron wavefunction, including far off-shell configurations such as intrinsic charm and, in the case of nuclei, hidden color. In some applications, such as exclusive processes at large momentum transfer, one can make first-principle predictions using factorization theorems which separate the hard perturbative dynamics from the nonperturbative physics associated with hadron binding. The Fock state components of the hadron with small transverse size, which dominate hard exclusive reactions, have small color dipole moments and thus diminished hadronic interactions. Thus QCD predicts minimal absorptive corrections, i.e., color transparency, for quasi-elastic exclusive reactions in nuclear targets at large momentum transfer.
Oil point pressure of Indian almond kernels
NASA Astrophysics Data System (ADS)
Aregbesola, O.; Olatunde, G.; Esuola, S.; Owolarafe, O.
2012-07-01
The effect of preprocessing conditions such as moisture content, heating temperature, heating time and particle size on the oil point pressure of Indian almond kernel was investigated. Results showed that oil point pressure was significantly (P < 0.05) affected by the above-mentioned parameters. It was also observed that oil point pressure decreased with increasing heating temperature and heating time for both coarse and fine particles. Furthermore, an increase in moisture content resulted in increased oil point pressure for coarse particles, while oil point pressure decreased with increasing moisture content for fine particles.
Verification of Chare-kernel programs
Bhansali, S.; Kale, L.V.
1989-01-01
Experience with concurrent programming has shown that concurrent programs can conceal bugs even after extensive testing. Thus, there is a need for practical techniques which can establish the correctness of parallel programs. This paper proposes a method for showing how to prove the partial correctness of programs written in the Chare-kernel language, which is a language designed to support the parallel execution of computation with irregular structures. The proof is based on the lattice proof technique and is divided into two parts. The first part is concerned with the program behavior within a single chare instance, whereas the second part captures the inter-chare interaction.
Prediction of kernel density of corn using single-kernel near infrared spectroscopy
Technology Transfer Automated Retrieval System (TEKTRAN)
Corn hardness is an important property for dry- and wet-millers, food processors and corn breeders developing hybrids for specific markets. Of the several methods used to measure hardness, kernel density measurements are one of the more repeatable methods to quantify hardness. Near infrared spec...
New thresholds for Primordial Black Hole formation during the QCD phase transition
NASA Astrophysics Data System (ADS)
Sobrinho, J. L. G.; Augusto, P.; Gonçalves, A. L.
2016-08-01
Primordial Black Holes (PBHs) might have formed in the early Universe as a consequence of the collapse of density fluctuations with an amplitude above a critical value δc: the formation threshold. Although for a radiation-dominated Universe δc remains constant, if the Universe experiences some dust-like phases (e.g. phase transitions) δc might decrease, improving the chances of PBH formation. We studied the evolution of δc during the QCD phase transition epoch within three different models: Bag Model (BM), Lattice Fit Model (LFM), and Crossover Model (CM). We found that the reduction on the background value of δc can be as high as 77% (BM), which might imply a ˜10-10 probability of PBHs forming at the QCD epoch.
QCD Processes and Hadron Production in High Energy Electron-Positron Annihilation.
NASA Astrophysics Data System (ADS)
Burrows, Philip Nicholas
Available from UMI in association with The British Library. Requires signed TDF. A study is presented of general features of the reaction e{sup +}e{sup -} {yields} hadrons. The data are interpreted in terms of current models of the underlying QCD and hadronisation processes. These models are outlined in detail and their predictions are compared with most of the available experimental data collected between 12.0 and 46.8 GeV mean centre of mass energies. The models' arbitrary parameters were optimised to give a generally good description of the global properties of the large hadronic event sample accumulated by the TASSO detector at 35 GeV: the Lund O(alpha{sub s}{sup 2}) model describes properties in the event plane very well, but is deficient in the properties transverse to this plane. The Webber LLA model gives a good description of the transverse observables, but overestimates those quantities in the plane. The Lund LLA + O(alpha{sub s}) model provides a good representation of the transverse properties but underestimates some quantities in the plane, though the discrepancy is much smaller than for the LLA model. The evolution of the observables as a function of c.m. energy between 12.0 and 41.5 GeV is generally well described, the Lund LLA + O(alpha{sub s}) model representing the data best. It is concluded that this model is successful in reproducing accurately most features of the data because it includes QCD calculations of both hard and multiple soft gluon emission processes. The model predictions are extended up to W = 200 GeV, where the two parton cascade models give similar predictions of the event properties which differ significantly from those of the O(alpha{sub s}{sup 2}) model. Top quark production is simulated at W = 200 GeV for a top mass of 60 GeV/c{sup 2} and the distributions of thrust, aplanarity, p{sub Tin}, p{sub Tout} and rapidity are found to be most sensitive to its presence. The data at 35 GeV are also analysed in terms of explicit multijet final states and compared with the QCD
KOVCHEGOV,Y.V.
2000-04-25
The authors derive an equation determining the small-x evolution of the F{sub 2} structure function of a large nucleus which resums a cascade of gluons in the leading logarithmic approximation using Mueller's color dipole model. In the traditional language it corresponds to resummation of the pomeron fan diagrams, originally conjectured in the GLR equation. The authors show that the solution of the equation describes the physics of structure functions at high partonic densities, thus allowing them to gain some understanding of the most interesting and challenging phenomena in small-x physics: saturation.
Linear and kernel methods for multi- and hypervariate change detection
NASA Astrophysics Data System (ADS)
Nielsen, Allan A.; Canty, Morton J.
2010-10-01
The iteratively re-weighted multivariate alteration detection (IR-MAD) algorithm may be used both for unsupervised change detection in multi- and hyperspectral remote sensing imagery and for automatic radiometric normalization of multi- or hypervariate multitemporal image sequences. Principal component analysis (PCA) as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (which are nonlinear), may further enhance change signals relative to no-change background. The kernel versions are based on a dual formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced by inner products between nonlinear mappings into higher-dimensional feature space. Via kernel substitution, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function, and all quantities needed in the analysis are expressed in terms of the kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel PCA, kernel MAF and kernel MNF analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In image analysis the Gram matrix is often prohibitively large (its size is the number of pixels in the image squared). In this case we may sub-sample the image and carry out the kernel eigenvalue analysis on a set of training data samples only. To obtain a transformed version of the entire image we then project all pixels, which we call the test data, mapped nonlinearly onto the primal eigenvectors. IDL (Interactive Data Language) implementations of IR-MAD, automatic radiometric normalization and kernel PCA/MAF/MNF transformations have been written.
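The kernel-trick pipeline described here (Gram matrix, centering, eigenvalue analysis) can be sketched as follows for the kernel PCA case. This assumes an RBF kernel; the `gamma` value is illustrative and the sub-sampling step for large images is omitted.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=0.1):
    """Kernel PCA sketch: eigendecompose the doubly-centered RBF Gram matrix
    and return the projections of the training points."""
    sq = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
    K = np.exp(-gamma * sq)                      # Gram matrix of inner products
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                               # center in feature space
    vals, vecs = np.linalg.eigh(Kc)              # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n_components]  # keep the leading components
    # scale eigenvectors so columns are projections onto principal axes
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))
```

Projecting held-out "test" pixels would use the kernel between test and training points against the same eigenvectors, exactly as the abstract describes.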
Fructan metabolism in developing wheat (Triticum aestivum L.) kernels.
Verspreet, Joran; Cimini, Sara; Vergauwen, Rudy; Dornez, Emmie; Locato, Vittoria; Le Roy, Katrien; De Gara, Laura; Van den Ende, Wim; Delcour, Jan A; Courtin, Christophe M
2013-12-01
Although fructans play a crucial role in wheat kernel development, their metabolism during kernel maturation is far from being understood. In this study, all major fructan-metabolizing enzymes together with fructan content, fructan degree of polymerization and the presence of fructan oligosaccharides were examined in developing wheat kernels (Triticum aestivum L. var. Homeros) from anthesis until maturity. Fructan accumulation occurred mainly in the first 2 weeks after anthesis, and a maximal fructan concentration of 2.5 ± 0.3 mg fructan per kernel was reached at 16 days after anthesis (DAA). Fructan synthesis was catalyzed by 1-SST (sucrose:sucrose 1-fructosyltransferase) and 6-SFT (sucrose:fructan 6-fructosyltransferase), and to a lesser extent by 1-FFT (fructan:fructan 1-fructosyltransferase). Despite the presence of 6G-kestotriose in wheat kernel extracts, the measured 6G-FFT (fructan:fructan 6G-fructosyltransferase) activity levels were low. During kernel filling, which lasted from 2 to 6 weeks after anthesis, kernel fructan content decreased from 2.5 ± 0.3 to 1.31 ± 0.12 mg fructan per kernel (42 DAA) and the average fructan degree of polymerization decreased from 7.3 ± 0.4 (14 DAA) to 4.4 ± 0.1 (42 DAA). FEH (fructan exohydrolase) reached maximal activity between 20 and 28 DAA. No fructan-metabolizing enzyme activities were registered during the final phase of kernel maturation, and fructan content and structure remained unchanged. This study provides insight into the complex metabolism of fructans during wheat kernel development and relates fructan turnover to the general phases of kernel development.
Bergman kernel, balanced metrics and black holes
NASA Astrophysics Data System (ADS)
Klevtsov, Semyon
In this thesis we explore the connections between the Kahler geometry and Landau levels on compact manifolds. We rederive the expansion of the Bergman kernel on Kahler manifolds developed by Tian, Yau, Zelditch, Lu and Catlin, using path integral and perturbation theory. The physics interpretation of this result is as an expansion of the projector of wavefunctions on the lowest Landau level, in the special case that the magnetic field is proportional to the Kahler form. This is a geometric expansion, somewhat similar to the DeWitt-Seeley-Gilkey short time expansion for the heat kernel, but in this case describing the long time limit, without depending on supersymmetry. We also generalize this expansion to supersymmetric quantum mechanics and more general magnetic fields, and explore its applications. These include the quantum Hall effect in curved space, the balanced metrics and Kahler gravity. In particular, we conjecture that for a probe in a BPS black hole in type II strings compactified on Calabi-Yau manifolds, the moduli space metric is the balanced metric.
Delimiting Areas of Endemism through Kernel Interpolation
Oliveira, Ubirajara; Brescovit, Antonio D.; Santos, Adalberto J.
2015-01-01
We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. The approach estimates the overlap between the distributions of species through a kernel interpolation of the centroids of species distributions, with areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified through each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes may be responsible for the origin and maintenance of these biogeographic units. PMID:25611971
Pareto-path multitask multiple kernel learning.
Li, Cong; Georgiopoulos, Michael; Anagnostopoulos, Georgios C
2015-01-01
A traditional and intuitively appealing Multitask Multiple Kernel Learning (MT-MKL) method is to optimize the sum (thus, the average) of objective functions with (partially) shared kernel function, which allows information sharing among the tasks. We point out that the obtained solution corresponds to a single point on the Pareto Front (PF) of a multiobjective optimization problem, which considers the concurrent optimization of all task objectives involved in the Multitask Learning (MTL) problem. Motivated by this last observation and arguing that the former approach is heuristic, we propose a novel support vector machine MT-MKL framework that considers an implicitly defined set of conic combinations of task objectives. We show that solving our framework produces solutions along a path on the aforementioned PF and that it subsumes the optimization of the average of objective functions as a special case. Using the algorithms we derived, we demonstrate through a series of experimental results that the framework is capable of achieving a better classification performance, when compared with other similar MTL approaches. PMID:25532155
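The fixed-combination baseline that this framework generalizes, a shared kernel built as a convex combination of base kernel matrices, can be sketched as below. The weights here are fixed by hand; in MT-MKL they would be learned.

```python
import numpy as np

def combined_kernel(kernels, betas):
    """Convex combination K = sum_m beta_m * K_m, with the betas
    normalized onto the simplex (nonnegative, summing to one)."""
    betas = np.asarray(betas, dtype=float)
    betas = betas / betas.sum()
    return sum(b * K for b, K in zip(betas, kernels))
```

Averaging the objectives across tasks corresponds to one particular choice of such a combination; the paper's point is that this is a single point on the Pareto front of the multiobjective problem.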
Scientific Computing Kernels on the Cell Processor
Williams, Samuel W.; Shalf, John; Oliker, Leonid; Kamil, Shoaib; Husbands, Parry; Yelick, Katherine
2007-04-04
The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the recently-released STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. First, we introduce a performance model for Cell and apply it to several key scientific computing kernels: dense matrix multiply, sparse matrix vector multiply, stencil computations, and 1D/2D FFTs. The difficulty of programming Cell, which requires assembly level intrinsics for the best performance, makes this model useful as an initial step in algorithm design and evaluation. Next, we validate the accuracy of our model by comparing results against published hardware results, as well as our own implementations on a 3.2GHz Cell blade. Additionally, we compare Cell performance to benchmarks run on leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1E) architectures. Our work also explores several different mappings of the kernels and demonstrates a simple and effective programming model for Cell's unique architecture. Finally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.
Stable Local Volatility Calibration Using Kernel Splines
NASA Astrophysics Data System (ADS)
Coleman, Thomas F.; Li, Yuying; Wang, Cheng
2010-09-01
We propose an optimization formulation using L1 norm to ensure accuracy and stability in calibrating a local volatility function for option pricing. Using a regularization parameter, the proposed objective function balances the calibration accuracy with the model complexity. Motivated by the support vector machine learning, the unknown local volatility function is represented by a kernel function generating splines and the model complexity is controlled by minimizing the 1-norm of the kernel coefficient vector. In the context of the support vector regression for function estimation based on a finite set of observations, this corresponds to minimizing the number of support vectors for predictability. We illustrate the ability of the proposed approach to reconstruct the local volatility function in a synthetic market. In addition, based on S&P 500 market index option data, we demonstrate that the calibrated local volatility surface is simple and resembles the observed implied volatility surface in shape. Stability is illustrated by calibrating local volatility functions using market option data from different dates.
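A minimal sketch of the 1-norm-penalized kernel fit described above, using iterative soft-thresholding (ISTA) as a stand-in for the authors' optimizer; the regularization weight, step size and iteration count are illustrative.

```python
import numpy as np

def l1_kernel_fit(K, y, lam=0.1, step=None, n_iter=500):
    """ISTA for min_a 0.5*||K a - y||^2 + lam*||a||_1, yielding a sparse
    kernel coefficient vector (few 'support vectors')."""
    if step is None:
        step = 1.0 / np.linalg.norm(K, 2) ** 2   # 1/L with L the Lipschitz const
    a = np.zeros(len(y))
    for _ in range(n_iter):
        grad = K.T @ (K @ a - y)                 # gradient of the smooth term
        z = a - step * grad
        a = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return a
```

The soft-thresholding step is what drives small coefficients exactly to zero, which is the mechanism behind the "minimizing the number of support vectors" interpretation in the abstract.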
Transcriptome analysis of Ginkgo biloba kernels
He, Bing; Gu, Yincong; Xu, Meng; Wang, Jianwen; Cao, Fuliang; Xu, Li-an
2015-01-01
Ginkgo biloba is a dioecious species native to China with medicinally and phylogenetically important characteristics; however, genomic resources for this species are limited. In this study, we performed the first transcriptome sequencing for Ginkgo kernels at five time points using Illumina paired-end sequencing. Approximately 25.08-Gb clean reads were obtained, and 68,547 unigenes with an average length of 870 bp were generated by de novo assembly. Of these unigenes, 29,987 (43.74%) were annotated in publicly available plant protein database. A total of 3,869 genes were identified as significantly differentially expressed, and enrichment analysis was conducted at different time points. Furthermore, metabolic pathway analysis revealed that 66 unigenes were responsible for terpenoid backbone biosynthesis, with up to 12 up-regulated unigenes involved in the biosynthesis of ginkgolide and bilobalide. Differential gene expression analysis together with real-time PCR experiments indicated that the synthesis of bilobalide may have interfered with the ginkgolide synthesis process in the kernel. These data can remarkably expand the existing transcriptome resources of Ginkgo, and provide a valuable platform to reveal more on developmental and metabolic mechanisms of this species. PMID:26500663
Understanding QCD at high density from a Z3-symmetric QCD-like theory
NASA Astrophysics Data System (ADS)
Kouno, Hiroaki; Kashiwa, Kouji; Takahashi, Junichi; Misumi, Tatsuhiro; Yahiro, Masanobu
2016-03-01
We investigate QCD at large μ/T by using Z3-symmetric SU(3) gauge theory, where μ is the quark-number chemical potential and T is temperature. We impose the flavor-dependent twist boundary condition on quarks in QCD. This QCD-like theory has the twist angle θ as a parameter, agrees with QCD when θ = 0, and becomes Z3 symmetric when θ = 2π/3. For both QCD and the Z3-symmetric SU(3) gauge theory, the phase diagram is drawn in the μ-T plane with the Polyakov-loop extended Nambu-Jona-Lasinio model. In the Z3-symmetric SU(3) gauge theory, the Polyakov loop φ is zero in the confined phase appearing at T ≲ 200 MeV and μ ≲ 300 MeV. The perfectly confined phase never coexists with the color superconducting (CSC) phase, since a finite diquark condensate in the CSC phase breaks Z3 symmetry and thereby makes φ finite. When μ ≳ 300 MeV, the CSC phase is more stable than the perfectly confined phase at T ≲ 100 MeV. Meanwhile, chiral symmetry can be broken in the perfectly confined phase, since the chiral condensate is Z3 invariant. Consequently, the perfectly confined phase is divided into a perfectly confined phase without chiral symmetry restoration in the region μ ≲ 300 MeV and T ≲ 200 MeV, and a perfectly confined phase with chiral symmetry restoration in the region μ ≳ 300 MeV and 100 MeV ≲ T ≲ 200 MeV. At low temperature, the basic phase structure of the Z3-symmetric QCD-like theory remains in QCD. Properties of the sign problem in the Z3-symmetric theory are also discussed. We discuss a numerical framework to evaluate observables at θ = 0 from those at θ = 2π/3.
Technology Transfer Automated Retrieval System (TEKTRAN)
Maize kernel density impacts milling quality of the grain due to kernel hardness. Harder kernels are correlated with higher test weight and are more resistant to breakage during harvest and transport. Softer kernels, in addition to being susceptible to mechanical damage, are also prone to pathogen ...
Community detection using Kernel Spectral Clustering with memory
NASA Astrophysics Data System (ADS)
Langone, Rocco; Suykens, Johan A. K.
2013-02-01
This work addresses the problem of community detection in dynamic scenarios, which arises for instance in the segmentation of moving objects, clustering of telephone traffic data, time-series micro-array data, etc. A desirable feature of a clustering model that has to capture the evolution of communities over time is temporal smoothness between clusters in successive time-steps. In this way the model is able to track the long-term trend while smoothing out short-term variation due to noise. We use Kernel Spectral Clustering with Memory effect (MKSC), which allows prediction of cluster memberships of new nodes via out-of-sample extension and has a proper model selection scheme. It is based on a constrained optimization formulation typical of Least Squares Support Vector Machines (LS-SVM), where the objective function is designed to explicitly incorporate temporal smoothness as valid prior knowledge. This allows the model to cluster the current data well while remaining consistent with the recent history. Here we propose a generalization of the MKSC model with arbitrary memory, not only one time-step in the past. The experiments conducted on toy problems confirm our expectations: the more memory we add to the model, the smoother the clustering results are over time. We also compare with the Evolutionary Spectral Clustering (ESC) algorithm, a state-of-the-art method, and obtain comparable or better results.
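The MKSC model itself is an LS-SVM formulation with a dedicated model-selection scheme; the sketch below only illustrates the temporal-smoothness idea in its simplest form, by blending successive similarity matrices before a plain two-way spectral partition. The data, blending weight, and graph sizes are invented for illustration.

```python
import numpy as np

def spectral_labels(K):
    """Two-way spectral partition: sign of the Fiedler vector of the
    normalized Laplacian built from similarity matrix K."""
    d = K.sum(axis=1)
    L = np.eye(len(K)) - K / np.sqrt(np.outer(d, d))
    w, v = np.linalg.eigh(L)          # eigenvalues in ascending order
    return (v[:, 1] > 0).astype(int)  # eigenvector of 2nd-smallest eigenvalue

def smoothed_kernel(K_now, K_prev, nu=0.3):
    """Temporal smoothing: blend the current similarity with the previous
    snapshot so the partition tracks the trend instead of per-step noise."""
    return (1 - nu) * K_now + nu * K_prev

# Two noisy snapshots of the same 2-community graph (6 nodes).
rng = np.random.default_rng(1)
block = np.block([[np.full((3, 3), .9), np.full((3, 3), .1)],
                  [np.full((3, 3), .1), np.full((3, 3), .9)]])
K_prev = block + 0.05 * rng.random((6, 6))
K_now = block + 0.05 * rng.random((6, 6))
K_prev = (K_prev + K_prev.T) / 2
K_now = (K_now + K_now.T) / 2
labels = spectral_labels(smoothed_kernel(K_now, K_prev))
print(labels)
```

A longer memory, as in the generalized MKSC, would simply blend more than one past snapshot into the current kernel.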
Comparison of Kernel Equating and Item Response Theory Equating Methods
ERIC Educational Resources Information Center
Meng, Yu
2012-01-01
The kernel method of test equating is a unified approach to test equating with some advantages over traditional equating methods. Therefore, it is important to comprehensively evaluate the usefulness and appropriateness of the kernel equating (KE) method, as well as its advantages and disadvantages compared with several popular item…
Evidence-based kernels: fundamental units of behavioral influence.
Embry, Dennis D; Biglan, Anthony
2008-09-01
This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior-influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of its components would render it inert. Existing evidence shows that a variety of kernels can influence behavior in context, and some evidence suggests that frequent use or sufficient use of some kernels may produce longer lasting behavioral shifts. The analysis of kernels could contribute to an empirically based theory of behavioral influence, augment existing prevention or treatment efforts, facilitate the dissemination of effective prevention and treatment practices, clarify the active ingredients in existing interventions, and contribute to efficiently developing interventions that are more effective. Kernels involve one or more of the following mechanisms of behavior influence: reinforcement, altering antecedents, changing verbal relational responding, or changing physiological states directly. The paper describes 52 of these kernels, and details practical, theoretical, and research implications, including calling for a national database of kernels that influence human behavior.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of...
Code of Federal Regulations, 2011 CFR
2011-01-01
... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of...
Evidence-Based Kernels: Fundamental Units of Behavioral Influence
ERIC Educational Resources Information Center
Embry, Dennis D.; Biglan, Anthony
2008-01-01
This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior-influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of…
Optimal Bandwidth Selection in Observed-Score Kernel Equating
ERIC Educational Resources Information Center
Häggström, Jenny; Wiberg, Marie
2014-01-01
The selection of bandwidth in kernel equating is important because it has a direct impact on the equated test scores. The aim of this article is to examine the use of double smoothing when selecting bandwidths in kernel equating and to compare double smoothing with the commonly used penalty method. This comparison was made using both an equivalent…
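Kernel equating continuizes each discrete score distribution with a Gaussian kernel whose bandwidth is the quantity being selected. Below is a minimal sketch of that continuization step, omitting the mean- and variance-preserving adjustment of the full KE framework; the score range and probabilities are made up.

```python
import numpy as np

def gaussian_pdf(z):
    """Standard normal density."""
    return np.exp(-0.5 * z * z) / np.sqrt(2 * np.pi)

def continuized_density(scores, probs, h):
    """Continuize a discrete score distribution with a Gaussian kernel of
    bandwidth h: the smoothing step that bandwidth selection tunes."""
    def f(x):
        return np.sum(probs * gaussian_pdf((x - scores) / h)) / h
    return f

scores = np.arange(0, 6)                       # possible test scores 0..5
probs = np.array([.05, .15, .30, .30, .15, .05])
f = continuized_density(scores, probs, h=0.6)

grid = np.linspace(-3, 9, 2001)
vals = np.array([f(x) for x in grid])
area = vals.sum() * (grid[1] - grid[0])        # density integrates to ~1
print(round(area, 3))
```

A small h keeps the density spiky near the discrete scores; a large h oversmooths. Both the penalty method and double smoothing are criteria for picking h between those extremes.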
Sugar uptake into kernels of tunicate tassel-seed maize
Thomas, P.A.; Felker, F.C.; Crawford, C.G.
1990-05-01
A maize (Zea mays L.) strain expressing both the tassel-seed (Ts-5) and tunicate (Tu) characters was developed which produces glume-covered kernels on the tassel, often borne on 7-10 mm pedicels. Vigorous plants produce up to 100 such kernels interspersed with additional sessile kernels. This floral unit provides a potentially valuable experimental system for studying sugar uptake into developing maize seeds. When detached kernels (with glumes and pedicel intact) are placed in incubation solution, fluid flows up the pedicel and into the glumes, entering the pedicel apoplast near the kernel base. The unusual anatomical features of this maize strain permit experimental access to the pedicel apoplast with much less possibility of kernel base tissue damage than with kernels excised from the cob. {sup 14}C-fructose incorporation into soluble and insoluble fractions of endosperm increased for 8 days. Endosperm uptake of sucrose, fructose, and D-glucose was significantly greater than that of L-glucose. Fructose uptake was significantly inhibited by CCCP, DNP, and PCMBS. These results suggest the presence of an active, non-diffusive component of sugar transport in maize kernels.
Introduction to Kernel Methods: Classification of Multivariate Data
NASA Astrophysics Data System (ADS)
Fauvel, M.
2016-05-01
In this chapter, kernel methods are presented for the classification of multivariate data. An introductory example is given to illustrate the main idea of kernel methods. Emphasis is then placed on the Support Vector Machine. Structural risk minimization is presented, and linear and non-linear SVMs are described. Finally, a full example of SVM classification is given on simulated hyperspectral data.
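The "main idea of kernel methods" referred to above is that a non-linear classifier can be trained using only kernel evaluations, without ever computing the feature map. A minimal illustration with a dual (kernel) perceptron and an RBF kernel on the XOR problem, as a toy stand-in for the chapter's SVM treatment rather than its own example:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix: k(x, y) = exp(-gamma * ||x - y||^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_perceptron(X, y, gamma=1.0, epochs=20):
    """Dual perceptron: the decision function is a weighted sum of kernel
    evaluations, so the non-linear map is never formed explicitly."""
    K = rbf_kernel(X, X, gamma)
    alpha = np.zeros(len(X))
    for _ in range(epochs):
        for i in range(len(X)):
            if y[i] * ((alpha * y) @ K[:, i]) <= 0:  # misclassified point
                alpha[i] += 1
    return alpha

# XOR: not linearly separable in input space, separable with an RBF kernel.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1, 1, 1, -1])
alpha = kernel_perceptron(X, y, gamma=2.0)
pred = np.sign((alpha * y) @ rbf_kernel(X, X, 2.0))
print(pred)   # matches y
```

An SVM replaces the perceptron's mistake-driven updates with structural risk minimization (maximizing the margin), but the kernel trick is exactly the same.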
7 CFR 981.60 - Determination of kernel weight.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...
7 CFR 981.60 - Determination of kernel weight.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...
7 CFR 981.60 - Determination of kernel weight.
Code of Federal Regulations, 2014 CFR
2014-01-01
... AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...
7 CFR 981.60 - Determination of kernel weight.
Code of Federal Regulations, 2013 CFR
2013-01-01
... AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...
7 CFR 981.60 - Determination of kernel weight.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...
Mapping the QCD Phase Transition with Accreting Compact Stars
Blaschke, D.; Poghosyan, G.; Grigorian, H.
2008-10-29
We discuss an idea for how accreting millisecond pulsars could contribute to the understanding of the QCD phase transition in the high-density nuclear matter equation of state (EoS). It is based on two ingredients, the first one being a ''phase diagram'' of rapidly rotating compact star configurations in the plane of spin frequency and mass, determined with state-of-the-art hybrid equations of state, allowing for a transition to color superconducting quark matter. The second is the study of spin-up and accretion evolution in this phase diagram. We show that the quark matter phase transition leads to a characteristic line in the {omega}-M plane, the phase border between neutron stars and hybrid stars with a quark matter core. Along this line a drop in the pulsar's moment of inertia entails a waiting point phenomenon in the accreting millisecond pulsar (AMXP) evolution: most of these objects should therefore be found along the phase border in the {omega}-M plane, which may be viewed as the AMXP analog of the main sequence in the Hertzsprung-Russell diagram for normal stars. In order to prove the existence of a high-density phase transition in the cores of compact stars we need population statistics for AMXPs with sufficiently accurate determination of their masses, spin frequencies and magnetic fields.
Characterization of factors underlying the metabolic shifts in developing kernels of colored maize
Hu, Chaoyang; Li, Quanlin; Shen, Xuefang; Quan, Sheng; Lin, Hong; Duan, Lei; Wang, Yifa; Luo, Qian; Qu, Guorun; Han, Qing; Lu, Yuan; Zhang, Dabing; Yuan, Zheng; Shi, Jianxin
2016-01-01
Elucidation of the metabolic pathways determining pigmentation, and their underlying regulatory mechanisms, in maize kernels is of high importance in attempts to improve the nutritional composition of our food. In this study, we compared dynamics in the transcriptome and metabolome between colored SW93 and white SW48 by integrating RNA-Seq and non-targeted metabolomics. Our data revealed that expression of enzyme-coding genes and levels of primary metabolites decreased gradually from 11 to 21 DAP, corresponding well with the physiological change of developing maize kernels from differentiation through reserve accumulation to maturation, which was cultivar independent. A remarkable up-regulation of the anthocyanin and phlobaphene pathways distinguished SW93 from SW48, in which anthocyanin-regulating transcription factors (R1 and C1), enzyme-encoding genes involved in both pathways, and the corresponding metabolic intermediates were up-regulated concurrently in SW93 but not in SW48. The shift from the shikimate pathway of primary metabolism to the flavonoid pathway of secondary metabolism, however, appears to be under posttranscriptional regulation. This study revealed the link between primary metabolism and kernel coloration, which facilitates further studies exploring fundamental questions regarding the evolution of seed metabolic capabilities as well as their potential applications in maize improvement regarding both staple and functional foods. PMID:27739524
Biochemical and molecular characterization of Avena indolines and their role in kernel texture.
Gazza, Laura; Taddei, Federica; Conti, Salvatore; Gazzelloni, Gloria; Muccilli, Vera; Janni, Michela; D'Ovidio, Renato; Alfieri, Michela; Redaelli, Rita; Pogna, Norberto E
2015-02-01
Among cereals, Avena sativa is characterized by an extremely soft endosperm texture, which leads to some negative agronomic and technological traits. On the basis of the well-known softening effect of puroindolines in wheat kernel texture, in this study, indolines and their encoding genes are investigated in Avena species at different ploidy levels. Three novel 14 kDa proteins, showing a central hydrophobic domain with four tryptophan residues and here named vromindoline (VIN)-1,2 and 3, were identified. Each VIN protein in diploid oat species was found to be synthesized by a single Vin gene whereas, in hexaploid A. sativa, three Vin-1, three Vin-2 and two Vin-3 genes coding for VIN-1, VIN-2 and VIN-3, respectively, were described and assigned to the A, C or D genomes based on similarity to their counterparts in diploid species. Expression of oat vromindoline transgenes in the extra-hard durum wheat led to accumulation of vromindolines in the endosperm and caused an approximate 50 % reduction of grain hardness, suggesting a central role for vromindolines in causing the extra-soft texture of oat grain. Further, hexaploid oats showed three orthologous genes coding for avenoindolines A and B, with five or three tryptophan residues, respectively, but very low amounts of avenoindolines were found in mature kernels. The present results identify a novel protein family affecting cereal kernel texture and would further elucidate the phylogenetic evolution of Avena genus.
Accumulation of storage products in oat during kernel development.
Banaś, A; Dahlqvist, A; Debski, H; Gummeson, P O; Stymne, S
2000-12-01
Lipids, proteins and starch are the main storage products in oat seeds. As a first step in elucidating the regulatory mechanisms behind the deposition of these compounds, two different oat varieties, 'Freja' and 'Matilda', were analysed during kernel development. In both cultivars, the majority of the lipids accumulated at a very early stage of development, but Matilda accumulated about twice the amount of lipids compared to Freja. Accumulation of proteins and starch also started in the early stage of kernel development but, in contrast to lipids, continued over a considerably longer period. The high-oil variety Matilda also accumulated higher amounts of proteins than Freja. The starch content in Freja kernels was higher than in Matilda kernels, and the difference was most pronounced during the early stage of development, when oil synthesis was most active. Oleosin accumulation continued during the whole period of kernel development.
Anatomically-aided PET reconstruction using the kernel method
NASA Astrophysics Data System (ADS)
Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi
2016-09-01
This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.
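The kernel method for PET represents the image as x = Kα, with the kernel matrix K built from anatomical features, and runs the ML-EM update on the coefficients α rather than on the voxels directly. A toy 1D sketch of that scheme follows; the system matrix, feature image, iteration count, and all sizes are invented for illustration and this is not the authors' implementation.

```python
import numpy as np

def kernel_from_features(f, sigma=1.0):
    """Kernel matrix from anatomical feature values: image x = K @ alpha."""
    d2 = (f[:, None] - f[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def kernelized_mlem(A, y, K, n_iter=1000):
    """ML-EM iteration on the kernel coefficients alpha, so anatomical
    structure encoded in K regularizes the reconstructed image K @ alpha."""
    AK = A @ K
    sens = AK.sum(axis=0)          # sensitivity term (A K)^T 1
    alpha = np.ones(K.shape[1])
    for _ in range(n_iter):
        proj = AK @ alpha          # forward projection
        alpha *= (AK.T @ (y / proj)) / sens
    return K @ alpha

# Toy 1D "scanner": 12 detectors viewing an 8-pixel image.
rng = np.random.default_rng(2)
A = rng.random((12, 8))
x_true = np.array([1., 1., 5., 5., 5., 1., 1., 1.])    # hot region, pixels 2-4
features = x_true + 0.01 * rng.normal(size=8)          # anatomical prior image
K = kernel_from_features(features)
y = A @ x_true                                          # noiseless data
x_rec = kernelized_mlem(A, y, K)
print(np.round(x_rec, 1))
```

Because the update stays multiplicative, nonnegativity is preserved exactly as in conventional ML-EM, which is why the method remains amenable to ordered subsets.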
Direct Measurement of Wave Kernels in Time-Distance Helioseismology
NASA Technical Reports Server (NTRS)
Duvall, T. L., Jr.
2006-01-01
Solar f-mode waves are surface-gravity waves which propagate horizontally in a thin layer near the photosphere with a dispersion relation approximately that of deep water waves. At the power maximum near 3 mHz, the wavelength of 5 Mm is large enough for various wave scattering properties to be observable. Gizon and Birch (2002, ApJ, 571, 966) have calculated kernels, in the Born approximation, for the sensitivity of wave travel times to local changes in damping rate and source strength. In this work, using isolated small magnetic features as approximate point-source scatterers, such a kernel has been measured. The observed kernel contains features similar to those of a theoretical damping kernel but not of a source kernel. A full understanding of the effect of small magnetic features on the waves will require more detailed modeling.
OSKI: A Library of Automatically Tuned Sparse Matrix Kernels
Vuduc, R; Demmel, J W; Yelick, K A
2005-07-19
The Optimized Sparse Kernel Interface (OSKI) is a collection of low-level primitives that provide automatically tuned computational kernels on sparse matrices, for use by solver libraries and applications. These kernels include sparse matrix-vector multiply and sparse triangular solve, among others. The primary aim of this interface is to hide the complex decision-making process needed to tune the performance of a kernel implementation for a particular user's sparse matrix and machine, while also exposing the steps and potentially non-trivial costs of tuning at run-time. This paper provides an overview of OSKI, which is based on our research on automatically tuned sparse kernels for modern cache-based superscalar machines.
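The central kernel that libraries like OSKI tune is sparse matrix-vector multiply over a compressed storage format. A plain CSR SpMV in Python makes the data structure concrete; OSKI itself is a C library, so this is only an illustration of the kernel's structure, not its API.

```python
import numpy as np

def csr_spmv(indptr, indices, data, x):
    """y = A @ x with A stored in Compressed Sparse Row form.
    This inner loop is what OSKI-style autotuning optimizes
    (register blocking, reordering, cache blocking, etc.)."""
    y = np.zeros(len(indptr) - 1)
    for row in range(len(y)):
        start, end = indptr[row], indptr[row + 1]
        y[row] = data[start:end] @ x[indices[start:end]]
    return y

# A = [[2, 0, 1],
#      [0, 3, 0],
#      [4, 0, 5]] stored as CSR: row pointers, column indices, values.
indptr = np.array([0, 2, 3, 5])
indices = np.array([0, 2, 1, 0, 2])
data = np.array([2., 1., 3., 4., 5.])
x = np.array([1., 2., 3.])
print(csr_spmv(indptr, indices, data, x))   # 5, 6, 19
```

The irregular, data-dependent access to `x` is why the best implementation depends on the particular matrix and machine, which is the decision OSKI hides behind its interface.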
Exploring Three Nucleon Forces in Lattice QCD
Doi, Takumi
2011-10-21
We study the three-nucleon force in N{sub f} = 2 dynamical clover fermion lattice QCD, utilizing the Nambu-Bethe-Salpeter wave function of the three-nucleon system. Since parity-odd two-nucleon potentials are not yet available in lattice QCD, we develop a new formulation to extract the genuine three-nucleon force which requires only the information of parity-even two-nucleon potentials. In order to handle the extremely expensive computational cost, we consider a specific three-dimensional coordinate configuration for the three nucleons. We find that the linear setup is advantageous, where the nucleons are aligned linearly with equal spacings. The lattice calculation is performed with 16{sup 3}x32 configurations at {beta} = 1.95, m{sub {pi}} = 1.13 GeV generated by the CP-PACS Collaboration, and the result for the three-nucleon force in the triton channel is presented.
CHARMONIUM EXCITED STATES FROM LATTICE QCD
Jozef Dudek; Robert Edwards; Nilmani Mathur; David Richards
2007-11-20
We apply the variational method with a large basis of interpolating operators to demonstrate the feasibility of extracting multiple excited states in charmonium from lattice QCD. The calculation is performed in the quenched approximation to QCD, using the clover fermion action on an anisotropic lattice. A crucial element of our approach is a knowledge of the continuum limit of the interpolating operators, providing important additional information on the spin assignment of the states, even at a single value of the lattice spacing. Though we find excited-state masses that are systematically high with respect to the quark potential model, and to the experimental masses where known, we attribute this most likely to an artifact of the quenched approximation.
The {Lambda}(1405) in Full QCD
Menadue, Benjamin J.; Kamleh, Waseem; Leinweber, Derek B.; Mahbub, M. Selim
2011-12-14
At 1405.1 MeV, the lowest-lying negative-parity state of the {Lambda} baryon lies surprisingly low. Indeed, this is lower than the lowest negative-parity state of the nucleon, even though the {Lambda}(1405) possesses a valence strange quark. However, previous Lattice QCD studies have been unable to identify such a low-lying state. Using the PACS-CS (2+1)-flavour full-QCD ensembles, available through the ILDG, we utilise a variational analysis with source and sink smearing to isolate this elusive state. We find three low-lying odd-parity states, and for the first time reproduce the correct level ordering with respect to the nearby scattering thresholds.
Nucleon Structure from Dynamical Lattice QCD
Huey-Wen Lin
2007-06-01
We present lattice QCD numerical calculations of hadronic structure functions and form factors from full-QCD lattices, with a chirally symmetric fermion action, domain-wall fermions, for the sea and valence quarks. The lattice spacing is about 0.12 fm with physical volume approximately (2 fm){sup 3} for RBC 2-flavor ensembles and (3 fm){sup 3} for RBC/UKQCD 2+1-flavor dynamical ones. The lightest sea quark mass is about 1/2 the strange quark mass for the former ensembles and 1/4 for the latter ones. Our calculations include: isovector vector- and axial-charge form factors and the first few moments of the polarized and unpolarized structure functions of the nucleon. Nonperturbative renormalization in the RI/MOM scheme is applied.
Connecting physical resonant amplitudes and lattice QCD
NASA Astrophysics Data System (ADS)
Bolton, Daniel R.; Briceño, Raúl A.; Wilson, David J.
2016-06-01
We present a determination of the isovector, P-wave ππ scattering phase shift obtained by extrapolating recent lattice QCD results from the Hadron Spectrum Collaboration using mπ = 236 MeV. The finite volume spectra are described using extensions of Lüscher's method to determine the infinite volume Unitarized Chiral Perturbation Theory scattering amplitude. We exploit the pion mass dependence of this effective theory to obtain the scattering amplitude at mπ = 140 MeV. The scattering phase shift is found to agree with experiment up to center of mass energies of 1.2 GeV. The analytic continuation of the scattering amplitude to the complex plane yields a ρ-resonance pole at Eρ = [755(2)(1)(+20/−02) − i/2 129(3)(1)(+7/−1)] MeV. The techniques presented illustrate a possible pathway towards connecting lattice QCD observables of few-body, strongly interacting systems to experimentally accessible quantities.
eta and eta' Mesons from Lattice QCD
Christ, N.H.; Izubuchi, T.; Dawson, C.; Jung, C.; Liu, Q.; Mawhinney, R.D.; Sachrajda, C.T.; Soni, A.; Zhou, R.
2010-12-08
The large mass of the ninth pseudoscalar meson, the {eta}{prime}, is believed to arise from the combined effects of the axial anomaly and the gauge field topology present in QCD. We report a realistic, 2+1-flavor, lattice QCD calculation of the {eta} and {eta}{prime} masses and mixing which confirms this picture. The physical eigenstates show small octet-singlet mixing with a mixing angle of {theta} = -14.1(2.8){sup o}. Extrapolation to the physical light quark mass gives, with statistical errors only, m{sub {eta}} = 573(6) MeV and m{sub {eta}{prime}} = 947(142) MeV, consistent with the experimental values of 548 and 958 MeV.
Pomeron intercept and slope: A QCD connection
Goulianos, Konstantin
2009-12-01
The ratio r of intercept to slope of the Pomeron trajectory is derived in a QCD-inspired parton model approach to diffraction based on a (re)normalization of the pp/{bar p}p single-diffractive cross section designed to enforce unitarity constraints by eliminating overlapping rapidity gaps. As the collision energy increases, the renormalized single-diffractive cross section tends to a constant which depends on the ratio r. Identifying the constant as the {sigma}{sub o} of the total cross section, {sigma}={sigma}{sub o}{center_dot}s{sup {epsilon}}, yields the ratio r in terms of measured parameters that can be phenomenologically expressed in terms of the pion mass and QCD color factors. The result agrees with the measured value of r.
Phase transitions in QCD and string theory
NASA Astrophysics Data System (ADS)
Campbell, Bruce A.; Ellis, John; Kalara, S.; Nanopoulos, D. V.; Olive, Keith A.
1991-02-01
We develop a unified effective field theory approach to the high-temperature phase transitions in QCD and string theory, incorporating winding modes (time-like Polyakov loops, vortices) as well as low-mass states (pseudoscalar mesons and glueballs, matter and dilaton supermultiplets). Anomalous scale invariance and the Z3 structure of the centre of SU(3) decree a first-order phase transition with simultaneous deconfinement and Polyakov loop condensation in QCD, whereas string vortex condensation is a second-order phase transition breaking a Z2 symmetry. We argue that vortex condensation is accompanied by a dilaton phase transition to a strong coupling regime, and comment on the possible role of soliton degrees of freedom in the high-temperature string phase.
Kinetic Rate Kernels via Hierarchical Liouville-Space Projection Operator Approach.
Zhang, Hou-Dao; Yan, YiJing
2016-05-19
Kinetic rate kernels in general multisite systems are formulated on the basis of a nonperturbative quantum dissipation theory, the hierarchical equations of motion (HEOM) formalism, together with the Nakajima-Zwanzig projection operator technique. The present approach exploits the HEOM-space linear algebra. The quantum non-Markovian site-to-site transfer rate can be faithfully evaluated via projected HEOM dynamics. The developed method is exact, as evidenced by comparison to direct HEOM evaluation results for the population evolution. PMID:26757138
Solving QCD using multi-regge theory.
White, A. R.
1998-07-13
This talk outlines the derivation of a high-energy, transverse-momentum cut-off solution of QCD in which the Regge pole and ''single gluon'' properties of the pomeron are directly related to the confinement and chiral symmetry breaking properties of the hadron spectrum. In first approximation, the pomeron is a single reggeized gluon plus a ''wee parton'' component that compensates for the color and particle properties of the gluon. This solution corresponds to a supercritical phase of Reggeon Field Theory.
Diffraction theory in QCD and beyond
White, A.R.
1987-12-11
A study of the Pomeron in QCD is briefly outlined. Implications for the production of W⁺W⁻ and Z⁰Z⁰ pairs are described, and the possibility that the electroweak scale is a major strong-interaction threshold is discussed. The application of Pomeron phase-transition theory to SU(5) dynamical symmetry breaking is suggested, and the related "strong-interaction" properties of the photon are briefly mentioned.
Excited charmonium physics from lattice QCD
Jozef Dudek
2009-12-01
Properties of excited mesons are studied using a lattice QCD simulation of a system comparable to charmonium. We extract a spectrum of states, including those with manifestly exotic quantum numbers. Radiative transition form factors are also computed, in particular for the transition from the exotic η_c1 to J/ψ γ, which is found to be large on the usual scale of magnetic dipole transitions.
Ab initio Hadron structure from lattice QCD
J.D. Bratt; R.G. Edwards; M. Engelhardt; G.T. Fleming; Ph. Hägler; B. Musch; J.W. Negele; K. Orginos; A.V. Pochinsky; D.B. Renner; D.G. Richards; W. Schroers
2007-06-01
Early scattering experiments revealed that the proton was not a point particle but a bound state of many quarks and gluons. Deep inelastic scattering (DIS) experiments have accurately determined the probability of struck quarks carrying a fraction of the proton's momentum. The current generation of experiments and Lattice QCD calculations will provide detailed multi-dimensional pictures of the distributions of quarks and gluons inside the proton.
QCD equation of state from the lattice
Borsanyi, Sz.; Jakovac, A.; Ratti, C.; Szabo, K. K.; Endrődi, G.; Katz, S. D.; Fodor, Z.; Krieg, S.
2011-05-23
We calculate the QCD equation of state with 2+1 flavors of staggered lattice quarks and a physical pion mass. We present precision data on the trace anomaly and pressure based on simulations at N_t = 6, 8 and 10. These results are confirmed by N_t = 12 simulations at three temperatures. Detailed results can be found in [arXiv:1007.2580v2].
BB Potentials in Quenched Lattice QCD
William Detmold; Kostas Orginos; Martin J. Savage
2007-12-01
The potentials between two B-mesons are computed in the heavy-quark limit using quenched lattice QCD at m_π ∼ 400 MeV. Non-zero central potentials are clearly evident in all four spin-isospin channels, (I, s_l) = (0,0), (0,1), (1,0), (1,1), where s_l is the total spin of the light degrees of freedom. At short distance, we find repulsion in the I…
Fluctuations and the QCD phase diagram
Schaefer, B.-J.
2012-06-15
In this contribution the role of quantum fluctuations for the QCD phase diagram is discussed. This concerns in particular the importance of the matter back-reaction to the gluonic sector. The impact of these fluctuations on the location of the confinement/deconfinement and the chiral transition lines as well as their interrelation are investigated. Consequences of our findings for the size of a possible quarkyonic phase and location of a critical endpoint in the phase diagram are drawn.
Neutrino-Nucleon Interactions and Lattice QCD
NASA Astrophysics Data System (ADS)
Hill, Richard; Kronfeld, Andreas; Meyer, Aaron
2016-03-01
We address techniques to make the theoretical underpinning of neutrino-nucleon scattering more robust. We see this foundation as a necessary step to disentangle fundamental physics (such as neutrino oscillation parameters) from nuclear effects. We address a reanalysis of old experiments with elementary targets, model-independent parametrizations of nucleon form factors based on analyticity, and lattice QCD calculations of the form factors.
S.R. Beane; U. van Kolck
2005-06-01
We show that existing data suggest a simple scenario in which the nucleon and the Delta and Roper resonances act as chiral partners in a reducible representation of the full QCD chiral symmetry group. We discuss the peculiar interpretation of this scenario using spin-flavour symmetries of the naive constituent quark model, as well as the consistency of the scenario with large-Nc expectations.
Andreas S. Kronfeld
2000-10-17
Computational and theoretical developments in lattice QCD calculations of B and D mesons are surveyed. Several topical examples are given: new ideas for calculating the HQET parameters Λ̄ and λ₁; form factors needed to determine |V_cb| and |V_ub|; bag parameters for the mass differences of the B mesons; and decay constants. Prospects for removing the quenched approximation are discussed.
Technology Transfer Automated Retrieval System (TEKTRAN)
The current US corn grading system accounts for the portion of damaged kernels, which is measured by time-consuming and inaccurate visual inspection. Near infrared spectroscopy (NIRS), a non-destructive and fast analytical method, was tested as a tool for discriminating corn kernels with heat and f...
NASA Astrophysics Data System (ADS)
Dong, Yadong; Jiao, Ziti; Zhang, Hu; Bai, Dongni; Zhang, Xiaoning; Li, Yang; He, Dandan
2016-10-01
The semi-empirical, kernel-driven Bidirectional Reflectance Distribution Function (BRDF) model has been widely used in many areas of remote sensing. As the kernel-driven model develops, there is a need to further assess the performance of newly developed kernels. Visualization tools can facilitate the analysis of model results and the assessment of newly developed kernels. However, the current version of the kernel-driven model does not contain a visualization function. In this study, a user-friendly visualization tool, named MaKeMAT, was developed specifically for the kernel-driven model. The POLDER-3 and CAR BRDF datasets were used to demonstrate the applicability of MaKeMAT. Visualization of the input multi-angle measurements enhances understanding of the measurements and allows selection of measurements with good representativeness. Visualization of the modeling results facilitates the assessment of newly developed kernels. The study shows that the visualization tool MaKeMAT can promote wider application of the kernel-driven model.
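Because the kernel-driven model is linear in its weights, retrieving the isotropic, volumetric, and geometric coefficients from multi-angle measurements reduces to ordinary least squares. A minimal sketch: the kernel values `k_vol` and `k_geo` would normally come from Ross-Thick/Li-Sparse expressions evaluated at the sun/view geometry; here they are synthetic placeholders generated from known weights.

```python
import numpy as np

def fit_brdf_weights(k_vol, k_geo, reflectance):
    """Fit (f_iso, f_vol, f_geo) of the linear kernel-driven BRDF model
    R = f_iso + f_vol*K_vol + f_geo*K_geo by ordinary least squares."""
    A = np.column_stack([np.ones_like(k_vol), k_vol, k_geo])
    weights, *_ = np.linalg.lstsq(A, reflectance, rcond=None)
    return weights

# Synthetic multi-angle observations generated from known weights
# (0.30, 0.10, 0.05); real kernel values come from the viewing geometry.
rng = np.random.default_rng(0)
k_vol = rng.uniform(-0.1, 0.6, 20)   # placeholder volumetric kernel values
k_geo = rng.uniform(-1.5, 0.0, 20)   # placeholder geometric kernel values
refl = 0.30 + 0.10 * k_vol + 0.05 * k_geo
f_iso, f_vol, f_geo = fit_brdf_weights(k_vol, k_geo, refl)
```

With noiseless synthetic data the fit recovers the generating weights exactly, which is a useful sanity check before applying the same machinery to real POLDER- or CAR-style measurements.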
Electromagnetic polarizabilities: Lattice QCD in background fields
W. Detmold, B.C. Tiburzi, A. Walker-Loud
2012-04-01
Chiral perturbation theory makes definitive predictions for the extrinsic behavior of hadrons in external electric and magnetic fields. Near the chiral limit, the electric and magnetic polarizabilities of pions, kaons, and nucleons are determined in terms of a few well-known parameters. In this limit, hadrons become quantum mechanically diffuse as polarizabilities scale with the inverse square-root of the quark mass. In some cases, however, such predictions from chiral perturbation theory have not compared well with experimental data. Ultimately we must turn to first-principles numerical simulations of QCD to determine properties of hadrons, and confront the predictions of chiral perturbation theory. To address the electromagnetic polarizabilities, we utilize the background field technique. Restricting our attention to calculations in background electric fields, we demonstrate new techniques to determine electric polarizabilities and baryon magnetic moments for both charged and neutral states. As we can study the quark mass dependence of observables with lattice QCD, the lattice will provide a crucial test of our understanding of low-energy QCD, which will be timely in light of ongoing experiments, such as those at COMPASS and HIγS.
QCD, Tevatron results and LHC prospects
Elvira, V.Daniel; /Fermilab
2008-08-01
We present a summary of the most recent measurements relevant to Quantum Chromodynamics (QCD) delivered by the D0 and CDF Tevatron experiments as of May 2008. CDF and D0 are moving toward precision measurements of QCD based on data samples in excess of 1 fb⁻¹. The inclusive jet cross sections have been extended to forward rapidity regions and measured with unprecedented precision following improvements in the jet energy calibration. Results on dijet mass distributions, bb̄ dijet production using tracker-based triggers, the underlying event in dijet and Drell-Yan samples, and inclusive photon and diphoton cross sections complete the list of measurements included in this paper. Good agreement with pQCD within errors is observed for jet production measurements. An improved and consistent theoretical description is needed for photon+jets processes. Collisions at the LHC are scheduled for early fall 2008, opening an era of discoveries at the new energy frontier, 5-7 times higher than that of the Tevatron.
QCD in heavy quark production and decay
Wiss, J.
1997-06-01
The author discusses how QCD is used to understand the physics of heavy quark production and decay dynamics. His discussion of production dynamics primarily concentrates on charm photoproduction data which are compared to perturbative QCD calculations which incorporate fragmentation effects. He begins his discussion of heavy quark decay by reviewing data on charm and beauty lifetimes. Present data on fully leptonic and semileptonic charm decay are then reviewed. Measurements of the hadronic weak current form factors are compared to the nonperturbative QCD-based predictions of Lattice Gauge Theories. He next discusses polarization phenomena present in charmed baryon decay. Heavy Quark Effective Theory predicts that the daughter baryon will recoil from the charmed parent with nearly 100% left-handed polarization, which is in excellent agreement with present data. He concludes by discussing nonleptonic charm decay which is traditionally analyzed in a factorization framework applicable to two-body and quasi-two-body nonleptonic decays. This discussion emphasizes the important role of final state interactions in influencing both the observed decay width of various two-body final states as well as modifying the interference between interfering resonance channels which contribute to specific multibody decays. 50 refs., 77 figs.
Full CKM matrix with lattice QCD
Okamoto, Masataka; /Fermilab
2004-12-01
The authors show that it is now possible to fully determine the CKM matrix, for the first time, using lattice QCD. |V_cd|, |V_cs|, |V_ub|, |V_cb| and |V_us| are, respectively, directly determined with the lattice results for form factors of semileptonic D → πlν, D → Klν, B → πlν, B → Dlν and K → πlν decays. The error from the quenched approximation is removed by using the MILC unquenched lattice gauge configurations, where the effect of u, d and s quarks is included. The error from the "chiral" extrapolation (m_l → m_ud) is greatly reduced by using improved staggered quarks. The accuracy is comparable to that of the Particle Data Group averages. In addition, |V_ud|, |V_tb|, |V_ts| and |V_td| are determined by using unitarity of the CKM matrix and the experimental result for sin(2β). In this way, they obtain all 9 CKM matrix elements, where the only theoretical input is lattice QCD. They also obtain all the Wolfenstein parameters, for the first time, using lattice QCD.
Lattice QCD thermodynamics on the Grid
NASA Astrophysics Data System (ADS)
Mościcki, Jakub T.; Woś, Maciej; Lamanna, Massimo; de Forcrand, Philippe; Philipsen, Owe
2010-10-01
We describe how we used O(10) nodes of the EGEE Grid simultaneously, accumulating ca. 300 CPU-years in 2-3 months, to determine an important property of Quantum Chromodynamics. We explain how Grid resources were exploited efficiently and with ease, using a user-level overlay based on the Ganga and DIANE tools above the standard Grid software stack. Application-specific scheduling and resource selection based on simple but powerful heuristics allowed us to improve processing efficiency and obtain the desired scientific results by a specified deadline. This is also a demonstration of the combined use of supercomputers, to calculate the initial state of the QCD system, and Grids, to perform the subsequent massively distributed simulations. The QCD simulation was performed on a 16×4 lattice. Keeping the strange quark mass at its physical value, we reduced the masses of the up and down quarks until, under an increase of temperature, the system underwent a second-order phase transition to a quark-gluon plasma. Then we measured the response of this system to an increase in the quark density. We find that the transition is smoothened rather than sharpened. If confirmed on a finer lattice, this finding makes it unlikely for ongoing experimental searches to find a QCD critical point at small chemical potential.
Confined magnetic monopoles in dense QCD
Gorsky, A.; Shifman, M.; Yung, A.
2011-04-15
Non-Abelian strings exist in the color-flavor locked phase of dense QCD. We show that kinks appearing in the world-sheet theory on these strings, in the form of kink-antikink bound pairs, are magnetic monopoles: descendants of the 't Hooft-Polyakov monopoles surviving in this special form in dense QCD. Our consideration is heavily based on analogies and inspiration coming from certain supersymmetric non-Abelian theories. This is the first analytic demonstration that objects unambiguously identifiable as magnetic monopoles are native to non-Abelian Yang-Mills theories (albeit our analysis extends only to the phase of monopole confinement and has nothing to say about their condensation). Technically, our demonstration becomes possible because the low-energy dynamics of the non-Abelian strings in dense QCD is that of the orientational zero modes. It is described by an effective two-dimensional CP(2) model on the string world sheet. The kinks in this model, representing confined magnetic monopoles, are in a highly quantum regime.
Theoretical overview: Hot and dense QCD in equilibrium
Hatsuda, Tetsuo.
1991-11-01
Static and dynamical properties of QCD at finite temperature and density are reviewed. Non-perturbative aspects of the QCD plasma and modification of the hadron properties associated with the chiral transition are discussed on the basis of lattice data, effective theories and QCD sum rules. Special emphasis is laid on the importance of the finite baryon density to see the effects of the restoration of chiral symmetry in experiment.
Summary of low-energy aspects of QCD and medium-energy hadron parallel sessions
McClelland, J.B.
1991-01-01
Two sessions were organized dealing with low energy aspects of QCD. The first dealt with the issue of QCD dibaryons. The second session centered on mostly low-energy tests of QCD. This report discusses experiments dealing with these sessions.
Privacy preserving RBF kernel support vector machine.
Li, Haoran; Xiong, Li; Ohno-Machado, Lucila; Jiang, Xiaoqian
2014-01-01
Data sharing is challenging but important for healthcare research. Methods for privacy-preserving data dissemination based on the rigorous differential privacy standard have been developed, but they did not consider the characteristics of biomedical data or make full use of the available information. This often results in too much noise in the final outputs. We hypothesized that this situation can be alleviated by leveraging a small portion of open-consented data to improve utility without sacrificing privacy. We developed a hybrid privacy-preserving differentially private support vector machine (SVM) model that uses public data and private data together. Our model leverages the RBF kernel and can handle nonlinearly separable cases. Experiments showed that this approach outperforms two baselines: (1) SVMs that only use public data, and (2) differentially private SVMs that are built from private data. Our method demonstrated performance metrics very close to those of nonprivate SVMs trained on the private data. PMID:25013805
Point-Kernel Shielding Code System.
1982-02-17
Version 00 QAD-BSA is a three-dimensional, point-kernel shielding code system based upon the CCC-48/QAD series. It is designed to calculate photon dose rates and heating rates using exponential attenuation and infinite-medium buildup factors. Calculational provisions include estimates of fast-neutron penetration using data computed by the moments method. Included geometry routines can describe complicated source and shield geometries. An internal library contains data for many frequently used structural and shielding materials, enabling the code to solve most problems with only source strengths and problem geometry required as input. This code system adapts especially well to problems requiring multiple sources and sources with asymmetrical geometry. In addition to being edited separately, the total interaction rates from many sources may be edited at each detector point. Calculated photon interaction rates agree closely with those obtained using QAD-P5A.
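The heart of any point-kernel shielding code is an attenuated inverse-square sum over point sources, scaled by a buildup factor. A minimal sketch, assuming a single photon energy, a uniform attenuation coefficient `mu`, and a deliberately crude linear buildup factor B = 1 + μr (production codes such as QAD interpolate tabulated infinite-medium buildup data instead):

```python
import math

def point_kernel_flux(sources, detector, mu):
    """Point-kernel sum: for each point source, attenuate exponentially
    over the source-detector distance, apply a buildup factor, and
    spread over the 4*pi*r^2 sphere. Sources are (x, y, z, strength)."""
    total = 0.0
    for (x, y, z, strength) in sources:
        dx, dy, dz = x - detector[0], y - detector[1], z - detector[2]
        r = math.sqrt(dx * dx + dy * dy + dz * dz)
        mfp = mu * r                 # optical thickness in mean free paths
        buildup = 1.0 + mfp          # crude linear buildup (illustrative only)
        total += strength * buildup * math.exp(-mfp) / (4.0 * math.pi * r * r)
    return total

# One 1e9 photon/s source at the origin, detector 100 cm away, mu = 0.05 /cm.
flux = point_kernel_flux([(0.0, 0.0, 0.0, 1e9)], (100.0, 0.0, 0.0), mu=0.05)
```

Because the kernel is a simple sum over sources, multiple and asymmetric source geometries (the case the abstract highlights) cost nothing extra: they are just more terms in the loop.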
Kernel density estimation using graphical processing unit
NASA Astrophysics Data System (ADS)
Sunarko, Su'ud, Zaki
2015-09-01
Kernel density estimation for particles distributed over a 2-dimensional space is calculated using a single graphical processing unit (GTX 660Ti GPU) and the CUDA-C language. Calculations are parallelized for particles having a bivariate normal distribution by assigning the calculation for each equally spaced node point to a scalar processor in the GPU. The numbers of particles, blocks and threads are varied to identify a favorable configuration. Comparisons are obtained by performing the same calculation using 1, 2 and 4 processors on a 3.0 GHz CPU using MPICH 2.0 routines. Speedups attained with the GPU range from 88 to 349 times compared to the multiprocessor CPU. Blocks of 128 threads are found to be the optimum configuration for this case.
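The per-node independence that makes this problem GPU-friendly is easy to see in a serial sketch: each node point's density is a sum over all particles, with no coupling between nodes, so one thread per node parallelizes trivially. A NumPy version (bandwidth and node points chosen arbitrarily for illustration):

```python
import numpy as np

def kde2d(particles, nodes, bandwidth):
    """Gaussian kernel density estimate at each 2-D node point.
    Each row of the kernel matrix (one node's sum over all particles)
    is independent -- the unit of work a GPU thread would own."""
    d2 = ((nodes[:, None, :] - particles[None, :, :]) ** 2).sum(axis=2)
    h2 = bandwidth ** 2
    k = np.exp(-0.5 * d2 / h2) / (2.0 * np.pi * h2)   # 2-D Gaussian kernel
    return k.mean(axis=1)

rng = np.random.default_rng(1)
pts = rng.standard_normal((500, 2))         # bivariate normal particles
grid = np.array([[0.0, 0.0], [3.0, 3.0]])   # two example node points
density = kde2d(pts, grid, bandwidth=0.5)
```

For a standard bivariate normal the estimate at the origin should be near the true peak density 1/(2π) ≈ 0.16 and far larger than the estimate in the tail at (3, 3).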
The flare kernel in the impulsive phase
NASA Technical Reports Server (NTRS)
Dejager, C.
1986-01-01
The impulsive phase of a flare is characterized by impulsive bursts of X-ray and microwave radiation, related to impulsive footpoint heating up to 50 or 60 MK, by upward gas velocities (150 to 400 km/sec), and by a gradual increase of the flare's thermal energy content. These phenomena, as well as non-thermal effects, are all related to the impulsive energy injection into the flare. The available observations are also quantitatively consistent with a model in which energy is injected into the flare by beams of energetic electrons, causing ablation of chromospheric gas, followed by convective rise of gas. Thus, a hole is burned into the chromosphere; at the end of the impulsive phase of an average flare the lower part of that hole is situated about 1800 km above the photosphere. Hα and other optical and UV line emission is radiated by a thin layer (approx. 20 km) at the bottom of the flare kernel. The upward-rising and outward-streaming gas cools down by conduction in about 45 s. The non-thermal effects in the initial phase are due to curtailing of the energy distribution function by escape of energetic electrons. The single-flux-tube model of a flare does not fit these observations; instead we propose the spaghetti-bundle model. Microwave and gamma-ray observations suggest the occurrence of dense flare knots of approx. 800 km diameter and high temperature. Future observations should concentrate on locating the microwave/gamma-ray sources, and on determining the kernel's fine structure and the related multi-loop structure of the flaring area.
Labeled Graph Kernel for Behavior Analysis.
Zhao, Ruiqi; Martinez, Aleix M
2016-08-01
Automatic behavior analysis from video is a major topic in many areas of research, including computer vision, multimedia, robotics, biology, cognitive science, social psychology, psychiatry, and linguistics. Two major problems are of interest when analyzing behavior. First, we wish to automatically categorize observed behaviors into a discrete set of classes (i.e., classification). For example, to determine word production from video sequences in sign language. Second, we wish to understand the relevance of each behavioral feature in achieving this classification (i.e., decoding). For instance, to know which behavior variables are used to discriminate between the words apple and onion in American Sign Language (ASL). The present paper proposes to model behavior using a labeled graph, where the nodes define behavioral features and the edges are labels specifying their order (e.g., before, overlaps, start). In this approach, classification reduces to a simple labeled graph matching. Unfortunately, the complexity of labeled graph matching grows exponentially with the number of categories we wish to represent. Here, we derive a graph kernel to quickly and accurately compute this graph similarity. This approach is very general and can be plugged into any kernel-based classifier. Specifically, we derive a Labeled Graph Support Vector Machine (LGSVM) and a Labeled Graph Logistic Regressor (LGLR) that can be readily employed to discriminate between many actions (e.g., sign language concepts). The derived approach can be readily used for decoding too, yielding invaluable information for the understanding of a problem (e.g., to know how to teach a sign language). The derived algorithms allow us to achieve higher accuracy results than those of state-of-the-art algorithms in a fraction of the time. We show experimental results on a variety of problems and datasets, including multimodal data.
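The idea of reducing labeled-graph comparison to a kernel can be illustrated with a much simpler member of the same family: count matching (node label, edge label, node label) triples, i.e. length-1 labeled walks, symmetrized over direction. This is an inner product of feature-count vectors, hence a valid positive semi-definite kernel that plugs into any kernel classifier; the behavioral labels below are invented for illustration and are not from the paper.

```python
from collections import Counter

def walk1_kernel(g1, g2):
    """Count matching (node label, edge label, node label) triples
    between two labeled graphs, symmetrized over edge direction.
    Graphs are lists of (label_u, edge_label, label_v) edges."""
    def features(g):
        return Counter(g) + Counter((v, e, u) for (u, e, v) in g)
    c1, c2 = features(g1), features(g2)
    return sum(c1[t] * c2[t] for t in c1)

# Hypothetical behavior graphs: nodes are behavioral features, edge
# labels are temporal relations (before, overlaps), as in the text.
sign_a = [("hand_up", "before", "hand_twist"), ("hand_twist", "overlaps", "blink")]
sign_b = [("hand_up", "before", "hand_twist"), ("hand_twist", "before", "nod")]
```

`walk1_kernel(sign_a, sign_a)` scores higher than `walk1_kernel(sign_a, sign_b)`, reflecting that a graph is most similar to itself; the paper's kernel generalizes this counting idea to richer walk structures while keeping the same plug-into-SVM property.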
Non-perturbative aspects of hadron structure in QCD
Thomas, Anthony W.
2012-09-26
We review recent developments in the understanding of hadron structure in the context of QCD. These developments build on the success of lattice QCD and discoveries in chiral perturbation theory. We focus particularly on tests of QCD through the strangeness content of the nucleon, the investigation of excited states of the nucleon, where lattice QCD, experiment and phenomenology meet. Lastly, we discuss the implications of these developments in hadron structure for our understanding of nuclear structure and the equation of state of dense matter.
Nucleon QCD sum rules in the instanton medium
Ryskin, M. G.; Drukarev, E. G. Sadovnikova, V. A.
2015-09-15
We try to find grounds for the standard nucleon QCD sum rules, based on a more detailed description of the QCD vacuum. We calculate the polarization operator of the nucleon current in the instanton medium. The medium (QCD vacuum) is assumed to be a composition of small-size instantons and some long-wave gluon fluctuations. We solve the corresponding QCD sum rule equations and demonstrate that there is a solution with the value of the nucleon mass close to the physical one if the fraction of the small-size instanton contribution is w_s ≈ 2/3.
Hua, Wen-Yu; Ghosh, Debashis
2015-09-01
Associating genetic markers with a multidimensional phenotype is an important yet challenging problem. In this work, we establish the equivalence between two popular methods: kernel-machine regression (KMR) and kernel distance covariance (KDC). KMR is a semiparametric regression framework that models covariate effects parametrically and genetic markers non-parametrically, while KDC represents a class of methods that include distance covariance (DC) and the Hilbert-Schmidt independence criterion (HSIC), which are nonparametric tests of independence. We show that the equivalence between the score test of KMR and the KDC statistic under certain conditions can lead to a novel generalization of the KDC test that incorporates covariates. Our contributions are 3-fold: (1) establishing the equivalence between KMR and KDC; (2) showing that the principles of KMR can be applied to the interpretation of KDC; (3) the development of a broader class of KDC statistics, where the class members are statistics corresponding to different kernel combinations. Finally, we perform simulation studies and an analysis of real data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) study. The ADNI analysis suggests that SNPs of FLJ16124 exhibit pairwise interaction effects that are strongly correlated to the changes of brain region volumes. PMID:25939365
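The KDC side of the equivalence is concrete enough to sketch: sample distance covariance double-centres the pairwise distance matrices of the two variables and averages their elementwise product (the V-statistic of Székely et al.), and normalizing gives distance correlation. A minimal univariate version, with synthetic data in place of the genetic markers and phenotypes:

```python
import numpy as np

def _double_centre(a):
    """Double-centre the pairwise absolute-distance matrix of a 1-D sample."""
    d = np.abs(a[:, None] - a[None, :])
    return d - d.mean(axis=0) - d.mean(axis=1, keepdims=True) + d.mean()

def distance_correlation(x, y):
    """Sample distance correlation (V-statistic): dCov normalized by the
    geometric mean of the two distance variances."""
    A = _double_centre(np.asarray(x, dtype=float))
    B = _double_centre(np.asarray(y, dtype=float))
    dcov2 = (A * B).mean()
    dvar = np.sqrt((A * A).mean() * (B * B).mean())
    return float(np.sqrt(max(dcov2, 0.0) / dvar))

rng = np.random.default_rng(0)
x = rng.standard_normal(200)
r_dep = distance_correlation(x, 2.0 * x)                 # exact linear dependence
r_ind = distance_correlation(x, rng.standard_normal(200))  # independent sample
```

Exact linear dependence gives distance correlation 1, while independent samples give a value near 0; the paper's generalization effectively adjusts such statistics for covariates.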
Probability-confidence-kernel-based localized multiple kernel learning with lp norm.
Han, Yina; Liu, Guizhong
2012-06-01
Localized multiple kernel learning (LMKL) is an attractive strategy for combining multiple heterogeneous features in terms of their discriminative power for each individual sample. However, models that fit excessively to a specific sample hinder extension to unseen data, while a more general form is often insufficient for characterizing diverse localities. Hence, learning sample-specific local models for each training datum and extending the learned models to unseen test data should be equally addressed in designing an LMKL algorithm. In this paper, for an integrative solution, we propose a probability confidence kernel (PCK), which measures per-sample similarity with respect to a probabilistic-prediction-based class attribute: the class-attribute similarity complements the spatial-similarity-based base kernels for more reasonable locality characterization, and the predefined form of the involved class probability density function facilitates the extension to the whole input space and ensures its statistical meaning. Incorporating PCK into a support-vector-machine-based LMKL framework, we propose a new PCK-LMKL with an arbitrary l_p-norm constraint implied in the definition of PCKs, where both the parameters in PCK and the final classifier can be efficiently optimized in a joint manner. Evaluations of PCK-LMKL on both benchmark machine learning data sets (ten University of California Irvine (UCI) data sets) and challenging computer vision data sets (the 15-scene and Caltech-101 data sets) have shown state-of-the-art performance.
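The structural idea behind LMKL, gating each base kernel by per-sample weights on both sides, can be sketched independently of the SVM training. The softmax gating model `V` below is an assumption for this sketch (the paper learns the gating jointly with the classifier); by the Schur product theorem the combined kernel stays positive semi-definite.

```python
import numpy as np

def rbf(X, gamma):
    """RBF base kernel on the rows of X."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def localized_kernel(X, gammas, V):
    """LMKL-style combination K = sum_m (eta_m eta_m^T) * K_m, where
    eta_m(x) are softmax gating weights from a linear model V
    (a hypothetical gating parameterization chosen for this sketch)."""
    scores = X @ V                                   # (n, M) gating scores
    eta = np.exp(scores - scores.max(axis=1, keepdims=True))
    eta /= eta.sum(axis=1, keepdims=True)            # per-sample softmax
    K = np.zeros((len(X), len(X)))
    for m, gamma in enumerate(gammas):
        K += np.outer(eta[:, m], eta[:, m]) * rbf(X, gamma)
    return K

rng = np.random.default_rng(0)
X = rng.standard_normal((12, 3))
V = rng.standard_normal((3, 2))        # one gating column per base kernel
K = localized_kernel(X, gammas=(0.5, 2.0), V=V)
```

Each term is a rank-one PSD gating matrix Hadamard-multiplied with a PSD base kernel, so the sum is symmetric and PSD and can be handed to any standard SVM solver.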
QCD and Light-Front Holography
Brodsky, Stanley J.; de Teramond, Guy F.; /Costa Rica U.
2010-10-27
The soft-wall AdS/QCD model, modified by a positive-sign dilaton metric, leads to a remarkable one-parameter description of nonperturbative hadron dynamics. The model predicts a zero-mass pion for zero-mass quarks and a Regge spectrum of linear trajectories with the same slope in the leading orbital angular momentum L of hadrons and the radial quantum number N. Light-Front Holography maps the amplitudes which are functions of the fifth-dimension variable z of anti-de Sitter space to a corresponding hadron theory quantized on the light front. The resulting Lorentz-invariant relativistic light-front wave equations are functions of an invariant impact variable ζ which measures the separation of the quark and gluonic constituents within the hadron at equal light-front time. The result is a semi-classical, frame-independent first approximation to the spectra and light-front wavefunctions of meson and baryon light-quark bound states, which in turn predict the behavior of the pion and nucleon form factors. The theory implements chiral symmetry in a novel way: the effects of chiral symmetry breaking increase as one goes toward large interquark separation, consistent with spectroscopic data, and the hadron eigenstates generally have components with different orbital angular momentum; e.g., the proton eigenstate in AdS/QCD with massless quarks has L = 0 and L = 1 light-front Fock components with equal probability. The soft-wall model also predicts the form of the non-perturbative effective coupling α_s^AdS(Q) and its β-function, which agrees with the effective coupling α_g1 extracted from the Bjorken sum rule. The AdS/QCD model can be systematically improved by using its complete orthonormal solutions to diagonalize the full QCD light-front Hamiltonian or by applying the Lippmann-Schwinger method in order to systematically include the QCD interaction terms. A new perspective on quark and gluon condensates is also reviewed.
Forward and small-x QCD physics results from CMS experiment at LHC
NASA Astrophysics Data System (ADS)
Cerci, Deniz Sunar
2016-03-01
The Compact Muon Solenoid (CMS) is one of the two large, multi-purpose experiments at the Large Hadron Collider (LHC) at CERN. During Run I a large pp collision dataset was collected, and the CMS collaboration has explored measurements that shed light on a new era. Forward and small-x quantum chromodynamics (QCD) physics measurements with the CMS experiment cover a wide range of physics subjects. Some highlights are presented, in terms of testing very low-x QCD, underlying-event and multiple-interaction characteristics, photon-mediated processes, jets with large rapidity separation at high pseudo-rapidities, and the inelastic proton-proton cross section dominated by diffractive interactions. Results are compared to Monte Carlo (MC) models with different parameter tunes for the description of the underlying event and to perturbative QCD calculations. The prominent role of multi-parton interactions has been confirmed in the semihard sector, but no clear deviation from standard DGLAP parton evolution due to BFKL has been observed. An outlook to the prospects at 13 TeV is given.
Effects of sample size on KERNEL home range estimates
Seaman, D.E.; Millspaugh, J.J.; Kernohan, Brian J.; Brundige, Gary C.; Raedeke, Kenneth J.; Gitzen, Robert A.
1999-01-01
Kernel methods for estimating home range are being used increasingly in wildlife research, but the effect of sample size on their accuracy is not known. We used computer simulations of 10-200 points/home range and compared the accuracy of home range estimates produced by fixed and adaptive kernels with the reference (REF) and least-squares cross-validation (LSCV) methods for determining the amount of smoothing. Simulated home ranges varied from simple to complex shapes created by mixing bivariate normal distributions. We used the size of the 95% home range area and the relative mean squared error of the surface fit to assess the accuracy of the kernel home range estimates. For both measures, the bias and variance approached an asymptote at about 50 observations/home range. The fixed kernel with smoothing selected by LSCV provided the least-biased estimates of the 95% home range area. All kernel methods produced similar surface fits for most simulations, but the fixed kernel with LSCV had the lowest frequency and magnitude of very poor estimates. We reviewed 101 papers published in The Journal of Wildlife Management (JWM) between 1980 and 1997 that estimated animal home ranges. A minority of these papers used nonparametric utilization distribution (UD) estimators, and most did not adequately report sample sizes. We recommend that home range studies using kernel estimates use LSCV to determine the amount of smoothing, obtain a minimum of 30 observations per animal (but preferably ≥50), and report sample sizes in published results.
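The LSCV recommendation is easy to make concrete: for a fixed 2-D Gaussian kernel, the least-squares cross-validation score has a closed form in the pairwise distances, and the chosen bandwidth is its minimizer over a grid. A sketch on simulated relocation fixes (sample size, grid range, and the bivariate-normal home range are arbitrary choices for illustration):

```python
import numpy as np

def lscv_score(points, h):
    """LSCV(h) = integral of fhat^2 - (2/n) * sum_i fhat_{-i}(x_i) for a
    fixed 2-D Gaussian kernel; both terms reduce to sums of Gaussians of
    the pairwise squared distances."""
    n = len(points)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    # integral of fhat^2: convolution of the kernel with itself (variance 2h^2)
    term1 = np.exp(-d2 / (4.0 * h * h)).sum() / (4.0 * np.pi * h * h * n * n)
    # leave-one-out density at each fix: off-diagonal pairs only
    off = d2[~np.eye(n, dtype=bool)]
    term2 = 2.0 * np.exp(-off / (2.0 * h * h)).sum() / (2.0 * np.pi * h * h * n * (n - 1))
    return term1 - term2

rng = np.random.default_rng(2)
fixes = rng.standard_normal((100, 2))        # simulated relocation fixes
grid = np.linspace(0.1, 1.5, 15)             # candidate smoothing parameters
scores = np.array([lscv_score(fixes, h) for h in grid])
h_lscv = grid[scores.argmin()]
```

The 95% home range contour would then be drawn from the kernel density surface evaluated with `h_lscv`; the paper's point is that such estimates only stabilize once roughly 30-50 fixes per animal are available.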
Gaussian kernel width optimization for sparse Bayesian learning.
Mohsenzadeh, Yalda; Sheikhzadeh, Hamid
2015-04-01
Sparse kernel methods have been widely used in regression and classification applications. The performance and the sparsity of these methods are dependent on the appropriate choice of the corresponding kernel functions and their parameters. Typically, the kernel parameters are selected using a cross-validation approach. In this paper, a learning method that is an extension of the relevance vector machine (RVM) is presented. The proposed method can find the optimal values of the kernel parameters during the training procedure. This algorithm uses an expectation-maximization approach for updating kernel parameters as well as other model parameters; therefore, the speed of convergence and computational complexity of the proposed method are the same as the standard RVM. To control the convergence of this fully parameterized model, the optimization with respect to the kernel parameters is performed using a constraint on these parameters. The proposed method is compared with the typical RVM and other competing methods to analyze the performance. The experimental results on the commonly used synthetic data, as well as benchmark data sets, demonstrate the effectiveness of the proposed method in reducing the performance dependency on the initial choice of the kernel parameters. PMID:25794377
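The sensitivity to kernel width that motivates the paper can be illustrated with a minimal sketch. This is not the proposed EM-based RVM extension; it is a plain ridge-regularized fit with a Gaussian basis on synthetic data (all names and parameters are illustrative), showing how badly a mis-chosen width degrades the fit.

```python
# Illustrative sketch (not the paper's algorithm): a Gaussian-kernel
# regressor fit with two different kernel widths, showing the sensitivity
# to the width parameter that an automatic update is meant to remove.
import numpy as np

def gaussian_design(X, centers, width):
    """N x M design matrix of Gaussian basis functions."""
    d2 = (X[:, None] - centers[None, :]) ** 2
    return np.exp(-d2 / (2.0 * width ** 2))

rng = np.random.default_rng(1)
X = np.linspace(0, 10, 80)
y = np.sinc(X - 5) + rng.normal(0, 0.05, X.size)  # noisy sinc, a common RVM test

def fit_predict(width, ridge=1e-3):
    Phi = gaussian_design(X, X, width)            # centers at the data points
    w = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(X.size), Phi.T @ y)
    return Phi @ w

true = np.sinc(X - 5)
err_good = np.mean((fit_predict(0.3) - true) ** 2)   # width resolves the lobes
err_bad = np.mean((fit_predict(5.0) - true) ** 2)    # width far too broad
print(err_good, err_bad)   # the broad width typically fits much worse
```

In the paper's method the width is updated jointly with the other RVM hyperparameters inside the expectation-maximization loop, so this manual choice disappears.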
Yao, H; Hruska, Z; Kincaid, R; Brown, R; Cleveland, T; Bhatnagar, D
2010-05-01
The objective of this study was to examine the relationship between fluorescence emissions of corn kernels inoculated with Aspergillus flavus and aflatoxin contamination levels within the kernels. Aflatoxin contamination in corn has been a long-standing problem plaguing the grain industry with potentially devastating consequences to corn growers. In this study, aflatoxin-contaminated corn kernels were produced through artificial inoculation of corn ears in the field with toxigenic A. flavus spores. The kernel fluorescence emission data were taken with a fluorescence hyperspectral imaging system when corn kernels were excited with ultraviolet light. Raw fluorescence image data were preprocessed and regions of interest in each image were created for all kernels. The regions of interest were used to extract spectral signatures and statistical information. The aflatoxin contamination level of single corn kernels was then chemically measured using affinity column chromatography. A fluorescence peak shift phenomenon was noted among different groups of kernels with different aflatoxin contamination levels. The fluorescence peak shift was found to move more toward the longer wavelength in the blue region for the highly contaminated kernels and toward the shorter wavelengths for the clean kernels. Highly contaminated kernels were also found to have a lower fluorescence peak magnitude compared with the less contaminated kernels. It was also noted that a general negative correlation exists between measured aflatoxin and the fluorescence image bands in the blue and green regions. The coefficient of determination, r², was 0.72 for the multiple linear regression model. The multivariate analysis of variance found that the fluorescence means of four aflatoxin groups, <1, 1-20, 20-100, and ≥100 ng g⁻¹ (parts per billion), were significantly different from each other at the 0.01 level of alpha. Classification accuracy under a two-class schema ranged from 0.84 to
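The multiple linear regression step reported above (regressing measured aflatoxin on fluorescence band intensities and quoting r²) can be sketched on synthetic stand-in data. The band values and coefficients below are invented for illustration; only the procedure mirrors the abstract.

```python
# Hedged sketch of a multiple linear regression with r^2, on synthetic data
# standing in for the study's fluorescence bands and aflatoxin measurements.
import numpy as np

rng = np.random.default_rng(3)
n_kernels, n_bands = 120, 4
bands = rng.normal(size=(n_kernels, n_bands))       # band intensities (synthetic)
# Synthetic "aflatoxin" with a negative dependence on two bands, echoing the
# negative correlation the study reports for the blue and green regions.
aflatoxin = (5.0 - 1.5 * bands[:, 0] - 0.8 * bands[:, 1]
             + rng.normal(0, 1.0, n_kernels))

X = np.column_stack([np.ones(n_kernels), bands])    # design matrix + intercept
coef, *_ = np.linalg.lstsq(X, aflatoxin, rcond=None)
pred = X @ coef
ss_res = np.sum((aflatoxin - pred) ** 2)
ss_tot = np.sum((aflatoxin - aflatoxin.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"r^2 = {r2:.2f}")
```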
Bridging the gap between the KERNEL and RT-11
Hendra, R.G.
1981-06-01
A software package is proposed to allow users of the PL-11 language, and the LSI-11 KERNEL in general, to use their PL-11 programs under RT-11. Further, some general-purpose extensions to the KERNEL are proposed that facilitate number conversions and string manipulations. A Floating Point Package of procedures is proposed to allow full use of the hardware floating-point capability of the LSI-11 computers. Extensions to the KERNEL that allow a user to read, write, and delete disc files in the manner of RT-11 are also proposed. A device directory listing routine is also included.
Spectrophotometric method for determination of phosphine residues in cashew kernels.
Rangaswamy, J R
1988-01-01
A spectrophotometric method reported for determination of phosphine (PH3) residues in wheat has been extended for determination of these residues in cashew kernels. Unlike the spectrum for wheat, the spectrum of PH3 residue-AgNO3 chromophore from cashew kernels does not show an absorption maximum at 400 nm; nevertheless, reading the absorbance at 400 nm afforded good recoveries of 90-98%. No interference occurred from crop materials, and crop controls showed low absorbance; the method can be applied for determinations as low as 0.01 ppm PH3 residue in cashew kernels.
Kernel simplex growing algorithm for hyperspectral endmember extraction
NASA Astrophysics Data System (ADS)
Zhao, Liaoying; Zheng, Junpeng; Li, Xiaorun; Wang, Lijiao
2014-01-01
In order to effectively extract endmembers from hyperspectral imagery where the linear mixing model may not be appropriate due to multiple scattering effects, this paper extends the simplex growing algorithm (SGA) to its kernel version. A new simplex volume formula without dimension reduction is used in SGA to form a new simplex growing algorithm (NSGA). The original data are nonlinearly mapped into a high-dimensional space where the scattering effects can be ignored. To avoid determining the complex nonlinear mapping explicitly, a kernel function is used to extend the NSGA to a kernel NSGA (KNSGA). Experimental results on simulated and real data prove that the proposed KNSGA approach outperforms SGA and NSGA.
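The kernel trick at the core of this approach can be sketched as follows. This is an assumed formulation, not the paper's code: the volume of a simplex whose vertices are the nonlinearly mapped pixels φ(x_i) is computable purely from kernel evaluations, via the Gram matrix of the edge vectors, so the mapping itself never needs to be constructed.

```python
# Kernel-space simplex volume from kernel evaluations only (illustrative).
# RBF kernel and pixel values are made-up examples.
import numpy as np
from math import factorial

def rbf(a, b, gamma=0.5):
    return np.exp(-gamma * np.sum((a - b) ** 2))

def kernel_simplex_volume(vertices, k=rbf):
    """Volume of the simplex spanned by phi(v_0..v_p) in feature space."""
    v0, rest = vertices[0], vertices[1:]
    p = len(rest)
    B = np.empty((p, p))
    for i, vi in enumerate(rest):
        for j, vj in enumerate(rest):
            # <phi(vi)-phi(v0), phi(vj)-phi(v0)> expanded into kernel calls
            B[i, j] = k(vi, vj) - k(vi, v0) - k(v0, vj) + k(v0, v0)
    return np.sqrt(max(np.linalg.det(B), 0.0)) / factorial(p)

# One greedy growth step: pick the pixel that maximizes the simplex volume.
pixels = [np.array([0.0, 0.0]), np.array([1.0, 0.0]),
          np.array([0.0, 1.0]), np.array([0.2, 0.1])]
base = pixels[:2]
best = max(pixels[2:], key=lambda x: kernel_simplex_volume(base + [x]))
print(best)   # the pixel far from the current simplex wins
```

With a linear kernel k(a, b) = a·b this reduces to the ordinary simplex volume, which is a quick sanity check on the formula.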
Multitasking kernel for the C and Fortran programming languages
Brooks, E.D. III
1984-09-01
A multitasking kernel for the C and Fortran programming languages which runs on the Unix operating system is presented. The kernel provides a multitasking environment which serves two purposes. The first is to provide an efficient portable environment for the coding, debugging and execution of production multiprocessor programs. The second is to provide a means of evaluating the performance of a multitasking program on model multiprocessors. The performance evaluation features require no changes in the source code of the application and are implemented as a set of compile and run time options in the kernel.
Monte Carlo Code System for Electron (Positron) Dose Kernel Calculations.
1999-05-12
Version 00 KERNEL performs dose kernel calculations for an electron (positron) isotropic point source in an infinite homogeneous medium. First, the auxiliary code PRELIM is used to prepare cross section data for the considered medium. Then the KERNEL code simulates the transport of electrons and bremsstrahlung photons through the medium until all particles reach their cutoff energies. The deposited energy is scored in concentric spherical shells at a radial distance ranging from zero to twice the source particle range.
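The scoring geometry described above can be illustrated with a toy Monte Carlo. This sketch is not the KERNEL code's transport physics (the step lengths and energy-loss model below are invented); it only shows the bookkeeping of binning deposited energy into concentric spherical shells around an isotropic point source.

```python
# Toy sketch of point-kernel scoring: energy deposited along random-walk
# particle histories is binned into concentric spherical shells.
import numpy as np

rng = np.random.default_rng(2)
N_HIST = 5000
R_MAX, N_SHELLS = 2.0, 20            # score out to twice a nominal "range"
edges = np.linspace(0.0, R_MAX, N_SHELLS + 1)
deposit = np.zeros(N_SHELLS)

for _ in range(N_HIST):              # particle histories
    pos = np.zeros(3)
    energy = 1.0
    while energy > 0.01:             # follow until the cutoff energy
        # isotropic direction, exponential step length (toy transport model)
        u = rng.normal(size=3)
        u /= np.linalg.norm(u)
        pos += rng.exponential(0.1) * u
        de = 0.2 * energy            # deposit a fraction at each collision
        energy -= de
        shell = np.searchsorted(edges, np.linalg.norm(pos)) - 1
        if 0 <= shell < N_SHELLS:
            deposit[shell] += de

# Dose kernel per shell: deposited energy per source particle per unit volume.
vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
kernel = deposit / (N_HIST * vol)
print(kernel[:3])                    # highest near the source, falling with r
```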
Pion Form Factor in Chiral Limit of Hard-Wall AdS/QCD Model
Anatoly Radyushkin; Hovhannes Grigoryan
2007-12-01
We develop a formalism to calculate the form factor and charge density distribution of the pion in the chiral limit using the holographic dual model of QCD with a hard-wall cutoff. We introduce two conjugate pion wave functions and present analytic expressions for these functions and for the pion form factor. They allow one to relate observables such as the pion decay constant and the pion charge electric radius to the values of the chiral condensate and the hard-wall cutoff scale. The evolution of the pion form factor to large values of the momentum transfer is discussed, and results are compared to existing experimental data.
Advances in Light-Front QCD and New Perspectives for QCD from AdS/CFT
Brodsky, Stanley J.; de Teramond, Guy F.; /Costa Rica U.
2005-10-26
The light-front quantization of gauge theories in light-cone gauge provides a frame-independent wavefunction representation of relativistic bound states, simple forms for current matrix elements, explicit unitarity, and a Fock space built on a trivial vacuum. The AdS/CFT correspondence has led to important insights into the properties of quantum chromodynamics even though QCD is a broken conformal theory. We have recently shown how a model based on a truncated AdS space can be used to obtain the hadronic spectrum of qq̄, qqq and gg bound states, as well as their respective light-front wavefunctions. Specific hadrons are identified by the correspondence of string modes with the dimension of the interpolating operator of the hadron's valence Fock state, including orbital angular momentum excitations. The predicted mass spectrum is linear, M ∝ L, at high orbital angular momentum, in contrast to the quadratic dependence M² ∝ L found in the description of spinning strings. Since only one parameter, the QCD scale Λ_QCD, is introduced, the agreement with the pattern of physical states is remarkable. In particular, the ratio of Δ to nucleon trajectories is determined by the ratio of zeros of Bessel functions. As a specific application of QCD dynamics from AdS/CFT duality, we describe a computation of the proton magnetic form factor in both the space-like and time-like regions. The extended AdS/CFT space-time theory also provides an analytic model for hadronic light-front wavefunctions, thus providing a relativistic description of hadrons in QCD at the amplitude level. The model wavefunctions display confinement at large inter-quark separation and conformal symmetry at short distances. In particular, the scaling and conformal properties of the LFWFs at high relative momenta agree with perturbative QCD. These AdS/CFT model wavefunctions could be used as an initial ansatz for a variational treatment of the light-front QCD Hamiltonian.
LATTICE QCD AT FINITE TEMPERATURE AND DENSITY.
BLUM,T.; CREUTZ,M.; PETRECZKY,P.
2004-02-24
With the operation of the RHIC heavy ion program, the theoretical understanding of QCD at finite temperature and density has become increasingly important. Though QCD at finite temperature has been extensively studied using lattice Monte Carlo simulations over the past twenty years, most physical questions relevant for RHIC (and future) heavy ion experiments remain open. In lattice QCD at finite temperature and density there have been at least two major advances in recent years. First, for the first time calculations of real-time quantities, like meson spectral functions, have become available. Second, the lattice study of the QCD phase diagram and equation of state has been extended to finite baryon density by several groups. Both issues were extensively discussed in the course of the workshop. A real highlight was the study of the QCD phase diagram in the (T, μ)-plane by Z. Fodor and S. Katz and the determination of the critical end-point for the physical value of the pion mass. This was the first time such lattice calculations at the physical pion mass have been performed. Results by Z. Fodor and S. Katz were obtained using a multi-parameter re-weighting method. Other determinations of the critical end point were also presented, in particular using a Taylor expansion around μ = 0 (Bielefeld group, Ejiri et al.) and using analytic continuation from imaginary chemical potential (Ph. de Forcrand and O. Philipsen). The result based on Taylor expansion agrees within errors with the new prediction of Z. Fodor and S. Katz, while methods based on analytic continuation still predict a higher value for the critical baryon density. Most of the thermodynamics studies in full QCD (including those presented at this workshop) have been performed using quite coarse lattices, a = 0.2-0.3 fm. Therefore one may worry about cutoff effects in different thermodynamic quantities, like the transition temperature T_tr. At the workshop U. Heller presented a study of the transition
QCD and Top-Quark Results from the Tevatron
Zielinski, Marek; /Rochester U.
2006-10-01
Selected recent QCD and top-quark results from the Tevatron are reviewed, aiming to illustrate progression from basic studies of QCD processes to verification of perturbative calculations and Monte Carlo simulation tools, and to their applications in more novel and complex cases, like top-quark studies and searches for new physics.
The static force from lattice QCD with two dynamical quarks
Leder, B.; Knechtli, F.
2011-05-23
We report on the measurement of the static force from HYP-smeared Wilson loops in two-flavour QCD. We analyse the quark mass dependence of the force at three lattice spacings. The QCD static force around the distance r₀ is compared with the force obtained from pure gauge theory, potential models and perturbation theory.
QCD Phase Diagram Using Dyson-Schwinger Equations
Liu Yuxin; Qin Sixue; Chang Lei; Roberts, Craig D.
2011-05-24
We describe briefly the Dyson-Schwinger equation approach of QCD and the study of the QCD phase diagram in this approach. The phase diagram in terms of the temperature and chemical potential, and that in the space of coupling strength and current-quark mass are given.
Renormalization group analysis in nonrelativistic QCD for colored scalars
Hoang, Andre H.; Ruiz-Femenia, Pedro
2006-01-01
The velocity nonrelativistic QCD Lagrangian for colored heavy scalar fields in the fundamental representation of QCD and the renormalization group analysis of the corresponding operators are presented. The results are an important ingredient for renormalization group improved computations of scalar-antiscalar bound state energies and production rates at next-to-next-to-leading-logarithmic (NNLL) order.
Continuing Progress on a Lattice QCD Software Infrastructure
Joo, Balint
2008-11-01
We report on the progress of the software effort in the QCD Application Area of SciDAC. In particular, we discuss how the software developed under SciDAC enabled the aggressive exploitation of leadership computers, and we report on progress in the area of QCD software for multi-core architectures.