Wilson Dslash Kernel From Lattice QCD Optimization
Joo, Balint; Smelyanskiy, Mikhail; Kalamkar, Dhiraj D.; Vaidyanathan, Karthikeyan
2015-07-01
Lattice Quantum Chromodynamics (LQCD) is a numerical technique used for calculations in Theoretical Nuclear and High Energy Physics. LQCD is traditionally one of the first applications ported to many new high performance computing architectures, and indeed LQCD practitioners have been known to design and build custom LQCD computers. Lattice QCD kernels are frequently used as benchmarks (e.g. 168.wupwise in the SPEC suite) and are generally well understood, and as such are ideal to illustrate several optimization techniques. In this chapter we detail our work in optimizing the Wilson-Dslash kernels for the Intel Xeon Phi; however, as we will show, the techniques give excellent performance on regular Xeon architecture as well.
NASA Astrophysics Data System (ADS)
These are the proceedings of the QCD Evolution 2015 Workshop, which was held 26-30 May, 2015 at Jefferson Lab, Newport News, Virginia, USA. The workshop is a continuation of a series of workshops held in 2011, 2012, and 2013 at Jefferson Lab, and in 2014 in Santa Fe, NM. With the rapid developments in our understanding of the evolution of parton distributions, including low-x, TMDs, GPDs, higher-twist correlation functions, and the associated progress in perturbative QCD, lattice QCD and effective field theory techniques, the 2015 meeting was anticipated with great enthusiasm. Special attention was also paid to the participation of experimentalists, as the topics discussed are of immediate importance for the JLab 12 GeV experimental program and a future Electron Ion Collider.
QCDNUM: Fast QCD evolution and convolution
NASA Astrophysics Data System (ADS)
Botje, M.
2011-02-01
The QCDNUM program numerically solves the evolution equations for parton densities and fragmentation functions in perturbative QCD. Un-polarised parton densities can be evolved up to next-to-next-to-leading order in powers of the strong coupling constant, while polarised densities or fragmentation functions can be evolved up to next-to-leading order. Other types of evolution can be accessed by feeding alternative sets of evolution kernels into the program. A versatile convolution engine provides tools to compute parton luminosities, cross-sections in hadron-hadron scattering, and deep inelastic structure functions in the zero-mass scheme or in generalised mass schemes. Input to these calculations are either the QCDNUM evolved densities, or those read in from an external parton density repository. Included in the software distribution are packages to calculate zero-mass structure functions in un-polarised deep inelastic scattering, and heavy flavour contributions to these structure functions in the fixed flavour number scheme.
Program summary
Program title: QCDNUM
Version: 17.00
Catalogue identifier: AEHV_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHV_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU Public Licence
No. of lines in distributed program, including test data, etc.: 45 736
No. of bytes in distributed program, including test data, etc.: 911 569
Distribution format: tar.gz
Programming language: Fortran-77
Computer: All
Operating system: All
RAM: Typically 3 Mbytes
Classification: 11.5
Nature of problem: Evolution of the strong coupling constant and parton densities, up to next-to-next-to-leading order in perturbative QCD. Computation of observable quantities by Mellin convolution of the evolved densities with partonic cross-sections.
Solution method: Parametrisation of the parton densities as linear or quadratic splines on a discrete grid, and evolution of the spline
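To illustrate the kind of computation QCDNUM performs, here is a deliberately simplified sketch: leading-order non-singlet DGLAP evolution on a discrete x-grid, with linear interpolation standing in for QCDNUM's spline machinery. The grid, input shape, fixed coupling, and step count are our own toy choices, not QCDNUM's actual algorithm or defaults.

```python
import numpy as np

CF = 4.0 / 3.0

def dglap_rhs(x_grid, q, alpha_s, nz=400):
    """dq/dt for LO non-singlet DGLAP, t = ln Q^2.  The plus distribution
    in P_qq is handled by explicit subtraction of q(x), plus the
    analytically integrated remainder and the delta-function term."""
    rhs = np.zeros_like(q)
    for i, x in enumerate(x_grid):
        # midpoint nodes in z on (x, 1), staying away from the z = 1 endpoint
        z = x + (1.0 - x) * (np.arange(nz) + 0.5) / nz
        dz = (1.0 - x) / nz
        q_xz = np.interp(x / z, x_grid, q)      # linear interpolation of q(x/z)
        integral = np.sum((1.0 + z**2) / (1.0 - z) * (q_xz / z - q[i])) * dz
        # analytic remainder of the plus prescription + (3/2) delta term
        local = (2.0 * np.log(1.0 - x) + x + 0.5 * x**2 + 1.5) * q[i]
        rhs[i] = alpha_s * CF / (2.0 * np.pi) * (integral + local)
    return rhs

x_grid = np.linspace(1e-3, 0.999, 200)
q = x_grid**0.5 * (1.0 - x_grid)**3      # toy valence-like input at the start scale
alpha_s, dt = 0.2, 0.02
for _ in range(25):                      # evolve over Delta(ln Q^2) = 0.5
    q = q + dt * dglap_rhs(x_grid, q, alpha_s)
```

A quick sanity check of any such implementation is that the quark number integral stays (approximately) constant while partons migrate from large to small x.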
Sivers Asymmetry with QCD Evolution
NASA Astrophysics Data System (ADS)
Echevarria, Miguel G.; Idilbi, Ahmad; Kang, Zhong-Bo; Vitev, Ivan
2015-02-01
We analyze the Sivers asymmetry in both Drell-Yan (DY) production and semi-inclusive deep inelastic scattering (SIDIS), while considering properly defined transverse momentum dependent parton distribution and fragmentation functions and their QCD evolution. After finding a universal non-perturbative spin-independent Sudakov factor that can describe reasonably well the world's data on SIDIS, DY lepton pair, and W/Z production in unpolarized scattering, we perform a global fit of all the experimental data on the Sivers asymmetry in SIDIS from HERMES, COMPASS and Jefferson Lab. We then make predictions for the asymmetry in DY lepton pair and W boson production, which can be compared to future experimental data in order to test the sign change of the Sivers function.
Jet quenching from QCD evolution
NASA Astrophysics Data System (ADS)
Chien, Yang-Ting; Emerman, Alexander; Kang, Zhong-Bo; Ovanesyan, Grigory; Vitev, Ivan
2016-04-01
Recent advances in soft-collinear effective theory with Glauber gluons have led to the development of a new method that gives a unified description of inclusive hadron production in reactions with nucleons and heavy nuclei. We show how this approach, based on the generalization of the DGLAP evolution equations to include final-state medium-induced parton shower corrections for large Q² processes, can be combined with initial-state effects for applications to jet quenching phenomenology. We demonstrate that the traditional parton energy loss calculations can be regarded as a special soft-gluon emission limit of the general QCD evolution framework. We present a phenomenological comparison of the SCET_G-based results on the suppression of inclusive charged hadron and neutral pion production in √s_NN = 2.76 TeV lead-lead collisions at the Large Hadron Collider to experimental data. We also show theoretical predictions for the upcoming √s_NN ≃ 5.1 TeV Pb+Pb run at the LHC.
QCD Evolution of Helicity and Transversity TMDs
Prokudin, Alexei
2014-01-01
We examine the QCD evolution of the helicity and transversity parton distribution functions when including also their dependence on transverse momentum. Using an appropriate definition of these polarized transverse momentum distributions (TMDs), we describe their dependence on the factorization scale and rapidity cutoff, which is essential for phenomenological applications.
R evolution: Improving perturbative QCD
Hoang, Andre H.; Jain, Ambar; Stewart, Iain W.; Scimemi, Ignazio
2010-07-01
Perturbative QCD results in the MS¯ scheme can be dramatically improved by switching to a scheme that accounts for the dominant power law dependence on the factorization scale in the operator product expansion. We introduce the "MSR scheme" which achieves this in a Lorentz and gauge invariant way and has a very simple relation to MS¯. Results in MSR depend on a cutoff parameter R, in addition to the μ of MS¯. R variations can be used to independently estimate (i) the size of power corrections, and (ii) higher-order perturbative corrections (much like μ in MS¯). We give two examples at three-loop order, the ratio of mass splittings in the B*-B and D*-D systems, and the Ellis-Jaffe sum rule as a function of momentum transfer Q in deep inelastic scattering. Comparing to data, the perturbative MSR results work well even for Q ~ 1 GeV, and power corrections are reduced compared to MS¯.
Resumming double logarithms in the QCD evolution of color dipoles
NASA Astrophysics Data System (ADS)
Iancu, E.; Madrigal, J. D.; Mueller, A. H.; Soyez, G.; Triantafyllopoulos, D. N.
2015-05-01
The higher-order perturbative corrections, beyond leading logarithmic accuracy, to the BFKL evolution in QCD at high energy are well known to suffer from a severe lack-of-convergence problem, due to radiative corrections enhanced by double collinear logarithms. Via an explicit calculation of Feynman graphs in light cone (time-ordered) perturbation theory, we show that the corrections enhanced by double logarithms (either energy-collinear, or double collinear) are associated with soft gluon emissions which are strictly ordered in lifetime. These corrections can be resummed to all orders by solving an evolution equation which is non-local in rapidity. This equation can be equivalently rewritten in local form, but with modified kernel and initial conditions, which resum double collinear logs to all orders. We extend this resummation to the next-to-leading order BFKL and BK equations. The first numerical studies of the collinearly-improved BK equation demonstrate the essential role of the resummation in both stabilizing and slowing down the evolution.
The QCD evolution of TMD in the covariant approach
NASA Astrophysics Data System (ADS)
Efremov, A. V.; Teryaev, O. V.; Zavada, P.
2016-02-01
The procedure for calculation of the QCD evolution of transverse momentum dependent distributions within the covariant approach is suggested. The standard collinear QCD evolution together with the requirements of relativistic invariance and rotational symmetry of the nucleon in its rest frame represent the basic ingredients of our approach. The obtained results are compared with the predictions of some other approaches.
QCD EVOLUTION AND TMD/SPIN EXPERIMENTS
Jian-Ping Chen
2012-12-01
The study of transverse spin and transverse momentum dependent (TMD) distributions has been one of the main focuses of hadron physics in recent years. The initial exploratory semi-inclusive deep inelastic scattering (SIDIS) experiments with transversely polarized proton and deuteron targets from HERMES and COMPASS attracted great attention and led to very active efforts in both experiment and theory. QCD factorization has been carefully studied. A SIDIS experiment on the neutron with a polarized 3He target was performed at JLab; recently published results will be shown. Precision TMD experiments are planned at JLab after the 12 GeV energy upgrade. The approved experiments with a new SoLID spectrometer on both the proton and the neutron will be presented. Proper QCD evolution treatments beyond the collinear case become crucial for the precision study of the TMDs. Experimentally, Q² evolution and higher-twist effects are often closely related. Experience from studying higher-twist effects in the case of moments of the spin structure functions will be discussed.
QCD evolution of the Sivers asymmetry
NASA Astrophysics Data System (ADS)
Echevarria, Miguel G.; Idilbi, Ahmad; Kang, Zhong-Bo; Vitev, Ivan
2014-04-01
We study the QCD evolution of the Sivers effect in both semi-inclusive deep inelastic scattering (SIDIS) and Drell-Yan production (DY). We pay close attention to the nonperturbative spin-independent Sudakov factor in the evolution formalism and find a universal form which can describe reasonably well the experimental data on the transverse momentum distributions in SIDIS, DY lepton pair and W/Z production. With this Sudakov factor at hand, we perform a global fitting of all the experimental data on the Sivers asymmetry in SIDIS from HERMES, COMPASS and Jefferson Lab. We then make predictions for the Sivers asymmetry in DY lepton pair and W production that can be compared to the future experimental measurements to test the sign change of the Sivers functions between SIDIS and DY processes and constrain the sea quark Sivers functions.
Correlations and discreteness in nonlinear QCD evolution
Armesto, N.; Milhano, J.
2006-06-01
We consider modifications of the standard nonlinear QCD evolution in an attempt to account for some of the missing ingredients discussed recently, such as correlations, discreteness in gluon emission and Pomeron loops. The evolution is numerically performed using the Balitsky-Kovchegov equation on individual configurations defined by a given initial value of the saturation scale, for reduced rapidities y = (α_s N_c/π) Y < 10. We consider the effects of averaging over configurations as a way to implement correlations, using three types of Gaussian averaging around a mean saturation scale. Further, we heuristically mimic discreteness in gluon emission by considering a modified evolution in which the tails of the gluon distributions are cut off. The approach to scaling and the behavior of the saturation scale with rapidity in these modified evolutions are studied and compared with the standard mean-field results. For the large but finite values of rapidity explored, no strong quantitative difference in scaling for transverse momenta around the saturation scale is observed. At larger transverse momenta, the influence of the modifications in the evolution seems most noticeable in the first steps of the evolution. No influence on the rapidity behavior of the saturation scale due to the averaging procedure is found. In the cutoff evolution the rapidity evolution of the saturation scale is slowed down and strongly depends on the value of the cutoff. Our results stress the need to go beyond simple modifications of evolution by developing proper theoretical tools that implement such recently discussed ingredients.
Evolution of fluctuations near QCD critical point
Stephanov, M. A.
2010-03-01
We propose to describe the time evolution of quasistationary fluctuations near QCD critical point by a system of stochastic Boltzmann-Langevin-Vlasov-type equations. We derive the equations and study the system analytically in the linearized regime. Known results for equilibrium stationary fluctuations as well as the critical scaling of diffusion coefficient are reproduced. We apply the approach to the long-standing question of the fate of the critical point fluctuations during the hadronic rescattering stage of the heavy-ion collision after chemical freeze-out. We find that if conserved particle number fluctuations survive the rescattering, so do, under a certain additional condition, the fluctuations of nonconserved quantities, such as mean transverse momentum. We derive a simple analytical formula for the magnitude of this memory effect.
QCD Evolution of the Transverse Momentum Dependent Correlations
Zhou, Jian; Liang, Zuo-Tang; Yuan, Feng
2008-12-10
We study the QCD evolution for the twist-three quark-gluon correlation functions associated with the transverse momentum odd quark distributions. Different from that for the leading twist quark distributions, these evolution equations involve more general twist-three functions beyond the correlation functions themselves. They provide important information on nucleon structure, and can be studied in the semi-inclusive hadron production in deep inelastic scattering and Drell-Yan lepton pair production in pp scattering process.
NASA Astrophysics Data System (ADS)
Fomin, Fedor V.
Preprocessing (data reduction or kernelization) as a strategy of coping with hard problems is universally used in almost every implementation. The history of preprocessing, like applying reduction rules simplifying truth functions, can be traced back to the 1950's [6]. A natural question in this regard is how to measure the quality of preprocessing rules proposed for a specific problem. For a long time the mathematical analysis of polynomial time preprocessing algorithms was neglected. The basic reason for this anomaly was that if we start with an instance I of an NP-hard problem and can show that in polynomial time we can replace this with an equivalent instance I' with |I'| < |I| then that would imply P=NP in classical complexity.
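A concrete example of the kernelization measure discussed here is the classic Buss kernel for VERTEX COVER: any vertex of degree above the remaining budget must belong to every small cover, and after all such vertices are forced, a yes-instance can retain at most k² edges. This sketch is our own illustration, not taken from the text:

```python
def vc_kernelize(edge_list, k):
    """Buss kernelization for VERTEX COVER, parameterized by cover size k.
    Returns (kernel_edges, forced_vertices, remaining_budget), or None when
    the instance provably has no vertex cover of size <= k."""
    edges = {frozenset(e) for e in edge_list}
    forced = set()                       # vertices every small cover must contain
    changed = True
    while changed:
        changed = False
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        budget = k - len(forced)
        if budget < 0:
            return None
        for v, d in deg.items():
            if d > budget:
                # v touches more edges than the remaining budget could cover
                # one at a time, so v is in any cover of size <= k
                forced.add(v)
                edges = {e for e in edges if v not in e}
                changed = True
                break
    budget = k - len(forced)
    if budget < 0 or len(edges) > budget * budget:
        return None                      # every kernel vertex has degree <= budget
    return edges, forced, budget
```

Note that this shrinks the instance only as a function of the parameter k, which is exactly why parameterized kernel bounds avoid the P = NP obstruction mentioned above.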
Analytic Evolution of Singular Distribution Amplitudes in QCD
Radyushkin, Anatoly V.; Tandogan Kunkel, Asli
2014-03-01
We describe a method of analytic evolution of distribution amplitudes (DAs) that have singularities, such as non-zero values at the end-points of the support region, jumps at some points inside the support region, and cusps. We illustrate the method by applying it to the evolution of a flat (constant) DA and an antisymmetric flat DA, and then use it for the evolution of the two-photon generalized distribution amplitude. Our approach has advantages over the standard method of expansion in Gegenbauer polynomials, which requires an infinite number of terms in order to accurately reproduce functions in the vicinity of singular points, and over a straightforward iteration of an initial distribution with the evolution kernel. The latter produces logarithmically divergent terms at each iteration, while in our method the logarithmic singularities are summed from the start, which immediately produces a continuous curve, with only one or two iterations needed afterwards in order to get rather precise results.
Iterative filtering decomposition based on local spectral evolution kernel
Wang, Yang; Wei, Guo-Wei; Yang, Siyang
2011-01-01
Synthesizing information, achieving understanding, and deriving insight from increasingly massive, time-varying, noisy and possibly conflicting data sets are some of the most challenging tasks in the present information age. Traditional technologies, such as the Fourier transform and wavelet multi-resolution analysis, are inadequate to handle all of the above-mentioned tasks. The empirical mode decomposition (EMD) has emerged as a new powerful tool for resolving many challenging problems in data processing and analysis. Recently, an iterative filtering decomposition (IFD) has been introduced to address the stability and efficiency problems of the EMD. Another data analysis technique is the local spectral evolution kernel (LSEK), which provides a near perfect low pass filter with desirable time-frequency localizations. The present work utilizes the LSEK to further stabilize the IFD, and offers an efficient, flexible and robust scheme for information extraction, complexity reduction, and signal and image understanding. The performance of the present LSEK-based IFD is intensively validated over a wide range of data processing tasks, including mode decomposition, analysis of time-varying data, information extraction from nonlinear dynamic systems, etc. The utility, robustness and usefulness of the proposed LSEK-based IFD are demonstrated via a large number of applications, such as the analysis of stock market data, the decomposition of ocean wave magnitudes, the understanding of physiologic signals and information recovery from noisy images. The performance of the proposed method is compared with that of existing methods in the literature. Our results indicate that the LSEK-based IFD improves both the efficiency and the stability of conventional EMD algorithms. PMID:22350559
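A toy version of the iterative filtering idea can be sketched in a few lines, with a simple triangular moving average standing in for the LSEK low-pass filter; the window length, iteration count, and test signal are all our own choices:

```python
import numpy as np

def iterative_filter(signal, window=31, n_iter=10):
    """Extract one oscillatory mode by repeatedly subtracting a local mean.
    A triangular (Bartlett) window is used because its frequency response is
    non-negative, which keeps the iteration stable; the instability of cruder
    filters is precisely the issue a kernel like the LSEK addresses."""
    kernel = np.bartlett(window)
    kernel = kernel / kernel.sum()
    mode = np.asarray(signal, dtype=float).copy()
    for _ in range(n_iter):
        local_mean = np.convolve(mode, kernel, mode="same")
        mode = mode - local_mean          # keep what the low-pass filter removes
    return mode

t = np.linspace(0.0, 1.0, 1000)
sig = np.sin(2 * np.pi * 40 * t) + 0.5 * np.sin(2 * np.pi * 3 * t)
fast = iterative_filter(sig)              # ~ the 40 Hz component (rescaled)
slow = sig - fast                         # ~ the 3 Hz trend
```

After a few iterations the slow trend is almost entirely removed from the extracted mode, while the fast oscillation survives up to an overall rescaling.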
NASA Astrophysics Data System (ADS)
Echevarria, Miguel G.; Idilbi, Ahmad; Scimemi, Ignazio
2014-07-01
By considering semi-inclusive deep-inelastic scattering and the (complementary) q_T-spectrum for Drell-Yan lepton pair production we derive the QCD evolution for all the leading-twist transverse momentum dependent distribution and fragmentation functions. We argue that all of those functions evolve with Q² following a single evolution kernel. This kernel is independent of the underlying kinematics and it is also spin independent. These features hold, in impact-parameter space, for all values of b_T. The evolution kernel presented has all of its large logarithms resummed up to next-to-next-to-leading logarithmic accuracy, which is the highest possible accuracy given the existing perturbative calculations. As a study case we apply this kernel to investigate the evolution of the Collins function, one of the ingredients that have recently attracted much attention within the phenomenological studies of spin asymmetries. Our analysis can be readily implemented to revisit previously obtained fits that involve data at different scales for other spin-dependent functions. Such improved fits are important to get better predictions—with the correct evolution kernel—for certain upcoming experiments aiming to measure the Sivers function, Collins function, transversity, and other spin-dependent functions as well.
Non-Markovian Quantum Evolution: Time-Local Generators and Memory Kernels
NASA Astrophysics Data System (ADS)
Chruściński, Dariusz; Należyty, Paweł
2016-06-01
In this paper we provide a basic introduction to the topic of non-Markovian quantum evolution, presenting both the time-local and the memory kernel approach to the evolution of open quantum systems. We start with the standard notion of a classical Markovian stochastic process and generalize it to classical Markovian stochastic evolution, which in turn becomes a starting point for the quantum setting. Our approach is based on the notion of P-divisible and CP-divisible maps and their refinement to k-divisible maps. Basic methods enabling one to detect non-Markovianity of a quantum evolution are also presented. Our analysis is illustrated by several simple examples.
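The divisibility notions above rest on complete positivity, which is mechanically checkable via Choi's theorem: a linear map is CP if and only if its Choi matrix is positive semidefinite. A minimal qubit-sized sketch (the depolarizing example is our own choice of illustration):

```python
import numpy as np

def choi_matrix(channel, dim=2):
    """Choi matrix C = sum_ij |i><j| (x) channel(|i><j|) of a linear map
    given as a function acting on dim x dim matrices."""
    C = np.zeros((dim * dim, dim * dim), dtype=complex)
    for i in range(dim):
        for j in range(dim):
            Eij = np.zeros((dim, dim), dtype=complex)
            Eij[i, j] = 1.0
            C[i * dim:(i + 1) * dim, j * dim:(j + 1) * dim] = channel(Eij)
    return C

def is_completely_positive(channel, dim=2, tol=1e-10):
    """Choi's theorem: the map is CP iff its Choi matrix is PSD."""
    eigenvalues = np.linalg.eigvalsh(choi_matrix(channel, dim))
    return bool(np.all(eigenvalues >= -tol))

def depolarizing(p):
    """Qubit depolarizing channel, CP for 0 <= p <= 1."""
    return lambda rho: (1 - p) * rho + p * np.trace(rho) * np.eye(2) / 2.0
```

The transpose map is the standard counterexample: it is positive but not completely positive, and its Choi matrix (the SWAP operator) has a negative eigenvalue.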
NASA Astrophysics Data System (ADS)
Fleming, Sean
In this talk I review recent experimental and theoretical results in QCD. Since the topic is too vast to cover within given time constraints I choose to highlight some of the subjects that I find particularly exciting. On the experimental side I focus on measurements made at the Tevatron. Specifically jet production rates, and the cross section for B meson production. In addition I discuss an interesting measurement made by the Belle collaboration of double exclusive charmonium production. On the theory side I quickly review recent advances in computing hadronic cross sections at subleading order in perturbation theory. I then move on to soft-collinear effective theory. After a lightning review of the formalism I discuss recently published results on color-suppressed B → D decays.
How to impose initial conditions for QCD evolution of double parton distributions?
NASA Astrophysics Data System (ADS)
Golec-Biernat, Krzysztof; Lewandowska, Emilia
2014-07-01
Double parton distribution functions are used in the QCD description of double parton scattering. The double parton distributions evolve with hard scales through QCD evolution equations which obey nontrivial momentum and valence quark number sum rules. We describe an attempt to construct initial conditions for the evolution equations which exactly fulfill these sum rules and discuss its shortcomings. We also discuss the factorization of the double parton distributions into a product of two single parton distribution functions at small values of the parton momentum fractions.
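For orientation, the two sum rules referred to above have the schematic Gaunt-Stirling form written below; we quote them from memory and normalization conventions differ between papers, so the δ-terms should be treated as indicative rather than definitive:

```latex
% momentum sum rule: the second parton carries what the first leaves over
\sum_{j_1}\int_0^{1-x_2}\!\mathrm{d}x_1\,x_1\,D_{j_1 j_2}(x_1,x_2;\mu)
   = (1-x_2)\,f_{j_2}(x_2;\mu),
\qquad
% valence number sum rule for the combination j_{1v} = j_1 - \bar\jmath_1
\int_0^{1-x_2}\!\mathrm{d}x_1\,D_{j_{1v}\,j_2}(x_1,x_2;\mu)
   = \bigl(N_{j_{1v}} + \delta_{j_2\bar\jmath_1} - \delta_{j_2 j_1}\bigr)\,f_{j_2}(x_2;\mu).
```

The difficulty the abstract alludes to is that an initial condition must satisfy both constraints simultaneously for all flavors, which a naive factorized ansatz does not.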
COLLINEAR SPLITTING, PARTON EVOLUTION AND THE STRANGE-QUARK ASYMMETRY OF THE NUCLEON IN NNLO QCD.
Rodrigo, G.; Catani, S.; de Florian, D.; Vogelsang, W.
2004-04-25
We consider the collinear limit of QCD amplitudes at one-loop order, and their factorization properties directly in color space. These results apply to the multiple collinear limit of an arbitrary number of QCD partons, and are a basic ingredient in many higher-order computations. In particular, we discuss the triple collinear limit and its relation to flavor asymmetries in the QCD evolution of parton densities at three loops. As a phenomenological consequence of this new effect, and of the fact that the nucleon has non-vanishing quark valence densities, we study the perturbative generation of a strange-antistrange asymmetry s(x) − s̄(x) in the nucleon's sea.
Efficient evolution of unpolarized and polarized parton distributions with QCD-PEGASUS
NASA Astrophysics Data System (ADS)
Vogt, A.
2005-07-01
The FORTRAN package QCD-PEGASUS is presented. This program provides fast, flexible and accurate solutions of the evolution equations for unpolarized and polarized parton distributions of hadrons in perturbative QCD. The evolution is performed using the symbolic moment-space solutions on a one-fits-all Mellin inversion contour. User options include the order of the evolution including the next-to-next-to-leading order in the unpolarized case, the type of the evolution including an emulation of brute-force solutions, the evolution with a fixed number n of flavors or in the variable-n scheme, and the evolution with a renormalization scale unequal to the factorization scale. The initial distributions are needed in a form facilitating the computation of the complex Mellin moments.
Program summary
Title of program: QCD-PEGASUS
Version: 1.0
Catalogue identifier: ADVN
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVN
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
License: GNU Public License
Computers: all
Operating systems: all
Program language: FORTRAN 77 (using the common compiler extension of procedure names with more than six characters)
Memory required to execute: negligible (<1 MB)
Other programs called: none
External files needed: none
Number of lines in distributed program, including test data, etc.: 8157
Number of bytes in distributed program, including test data, etc.: 240 578
Distribution format: tar.gz
Nature of the physical problem: Solution of the evolution equations for the unpolarized and polarized parton distributions of hadrons at leading order (LO), next-to-leading order and next-to-next-to-leading order of perturbative QCD. Evolution performed either with a fixed number n of effectively massless quark flavors or in the variable-n scheme. The calculation of observables from the parton distributions is not part of the present package.
Method of solution: Analytic solution in Mellin space (beyond LO in
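The moment-space strategy can be sketched in a few lines: at leading order the non-singlet evolution is diagonal in Mellin N, so each moment evolves with a power of the coupling ratio. The snippet below is our own toy illustration with arbitrary input values, not PEGASUS itself (which in addition performs the inverse Mellin transform along a contour):

```python
import numpy as np

CF, NF = 4.0 / 3.0, 4
B0 = (33 - 2 * NF) / (12.0 * np.pi)      # LO beta-function coefficient

def gamma_ns(N):
    """N-th Mellin moment of the LO splitting function P_qq (integer N)."""
    h_nm1 = sum(1.0 / k for k in range(1, N))        # harmonic number H_{N-1}
    h_np1 = h_nm1 + 1.0 / N + 1.0 / (N + 1)          # harmonic number H_{N+1}
    return CF * (1.5 - h_nm1 - h_np1)

def alpha_s(Q2, alpha0=0.35, Q02=2.0):
    """One-loop running coupling with input alpha_s(Q0^2) = alpha0."""
    return alpha0 / (1.0 + B0 * alpha0 * np.log(Q2 / Q02))

def evolve_moment(qN0, N, Q2, alpha0=0.35, Q02=2.0):
    """Exact LO non-singlet solution, diagonal in moment space:
    q_N(Q^2) = q_N(Q0^2) * [alpha_s(Q^2)/alpha_s(Q0^2)]^(-gamma_N/(2 pi b0))."""
    ratio = alpha_s(Q2, alpha0, Q02) / alpha0
    return qN0 * ratio ** (-gamma_ns(N) / (2.0 * np.pi * B0))
```

Two classic checks: the N = 1 moment is exactly conserved (quark number), and higher moments shrink as Q² grows.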
Method of Analytic Evolution of Flat Distribution Amplitudes in QCD
Asli Tandogan, Anatoly V. Radyushkin
2011-11-01
A new analytical method of performing ERBL evolution is described. The main goal is to develop an approach that works for distribution amplitudes that do not vanish at the end points, for which the standard method of expansion in Gegenbauer polynomials is inefficient. Two cases of the initial DA are considered: a purely flat DA, given by the same constant for all x, and an antisymmetric DA given by opposite constants for x < 1/2 or x > 1/2. For a purely flat DA, the evolution is governed by an overall (x(1-x))^t dependence on the evolution parameter t times a factor that was calculated as an expansion in t. For an antisymmetric flat DA, an extra overall factor |1-2x|^(2t) appears due to the jump at x = 1/2. Good convergence was observed in the region t ≲ 1/2. For larger t, one can use the standard method of the Gegenbauer expansion.
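Schematically, the evolved shapes described in this abstract take the form below (t is the evolution parameter; normalization factors are omitted; this is our paraphrase of the stated result, not a derivation):

```latex
\varphi_{\text{flat}}(x;t) \sim \left[x(1-x)\right]^{t}\bigl[1+\mathcal{O}(t)\bigr],
\qquad
\varphi_{\text{anti}}(x;t) \sim \left[x(1-x)\right]^{t}\,|1-2x|^{2t}\bigl[1+\mathcal{O}(t)\bigr].
```

The overall power factors resum the logarithmic end-point (and, for the antisymmetric case, mid-point) singularities that a term-by-term Gegenbauer expansion reproduces only slowly.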
Renormalization group evolution of multi-gluon correlators in high energy QCD
Dumitru, A.; Venugopalan, R.; Jalilian-Marian, J.; Lappi, T.; Schenke, B.
2011-11-06
Many-body QCD in leading high energy Regge asymptotics is described by the Balitsky-JIMWLK hierarchy of renormalization group equations for the x evolution of multi-point Wilson line correlators. These correlators are universal and ubiquitous in final states in deeply inelastic scattering and hadronic collisions. For instance, recently measured di-hadron correlations at forward rapidity in deuteron-gold collisions at the Relativistic Heavy Ion Collider (RHIC) are sensitive to four and six point correlators of Wilson lines in the small x color fields of the dense nuclear target. We evaluate these correlators numerically by solving the functional Langevin equation that describes the Balitsky-JIMWLK hierarchy. We compare the results to mean-field Gaussian and large Nc approximations used in previous phenomenological studies. We comment on the implications of our results for quantitative studies of multi-gluon final states in high energy QCD.
Statistical physics in QCD evolution towards high energies
NASA Astrophysics Data System (ADS)
Munier, Stéphane
2015-08-01
The concepts and methods used for the study of disordered systems have proven useful in the analysis of the evolution equations of quantum chromodynamics in the high-energy regime: Indeed, parton branching in the semi-classical approximation relevant at high energies and at a fixed impact parameter is a peculiar branching-diffusion process, and parton branching supplemented by saturation effects (such as gluon recombination) is a reaction-diffusion process. In this review article, we first introduce the basic concepts in the context of simple toy models, we study the properties of the latter, and show how the results obtained for the simple models may be taken over to quantum chromodynamics.
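The reaction-diffusion analogy can be made concrete with the FKPP equation, the classic mean-field limit of such branching-diffusion-recombination processes; in the QCD dictionary u plays the role of the scattering amplitude and the traveling wave is the advance of the saturation front. A minimal explicit finite-difference sketch, with grid and time step chosen by us for stability:

```python
import numpy as np

def fkpp_step(u, dt, dx):
    """One explicit Euler step of the FKPP equation u_t = u_xx + u - u^2
    (periodic boundaries via np.roll; dt <= dx^2/2 for stability)."""
    lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2
    return u + dt * (lap + u - u * u)

nx, dx, dt, nsteps = 800, 0.5, 0.05, 1000
x = np.arange(nx) * dx
u = np.where(x < 20.0, 1.0, 0.0)         # saturated region invading empty space
front0 = int(np.argmax(u < 0.5))         # first grid point ahead of the front
for _ in range(nsteps):
    u = fkpp_step(u, dt, dx)
front1 = int(np.argmax(u < 0.5))
```

The front relaxes toward the universal FKPP speed v = 2 in these units, approached from below with slowly decaying logarithmic corrections, which is the toy-model counterpart of the universal growth of the saturation scale.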
Real time evolution of non-Gaussian cumulants in the QCD critical regime
Mukherjee, Swagato; Venugopalan, Raju; Yin, Yi
2015-09-23
In this study, we derive a coupled set of equations that describe the nonequilibrium evolution of cumulants of critical fluctuations for spacetime trajectories on the crossover side of the QCD phase diagram. In particular, novel expressions are obtained for the nonequilibrium evolution of non-Gaussian skewness and kurtosis cumulants. By utilizing a simple model of the spacetime evolution of a heavy-ion collision, we demonstrate that, depending on the relaxation rate of critical fluctuations, skewness and kurtosis can differ significantly in magnitude as well as in sign from equilibrium expectations. Memory effects are important and shown to persist even for trajectories that skirt the edge of the critical regime. We use phenomenologically motivated parametrizations of freeze-out curves and of the beam-energy dependence of the net baryon chemical potential to explore the implications of our model study for the critical-point search in heavy-ion collisions.
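The memory effect described here can be caricatured with a single relaxation-rate equation for a cumulant κ driven toward a time-dependent equilibrium value; the Gaussian "critical bump" and the two relaxation times below are arbitrary toy inputs, not the paper's model:

```python
import numpy as np

def relax_cumulant(kappa_eq, tau_rel, dt):
    """Euler solution of d(kappa)/d(tau) = -(kappa - kappa_eq(tau)) / tau_rel:
    the cumulant chases its instantaneous equilibrium value with lag tau_rel."""
    kappa = np.empty_like(kappa_eq)
    kappa[0] = kappa_eq[0]
    for n in range(1, len(kappa_eq)):
        kappa[n] = kappa[n - 1] + dt * (kappa_eq[n - 1] - kappa[n - 1]) / tau_rel
    return kappa

tau = np.linspace(0.0, 10.0, 1000)
dt = tau[1] - tau[0]
# equilibrium cumulant: baseline 1 plus a bump as the trajectory nears criticality
kappa_eq = 1.0 + 5.0 * np.exp(-(((tau - 5.0) / 0.8) ** 2))
slow = relax_cumulant(kappa_eq, tau_rel=2.0, dt=dt)    # strong memory
fast = relax_cumulant(kappa_eq, tau_rel=0.05, dt=dt)   # near-equilibrium
```

With slow relaxation the value at the end of the trajectory still remembers the critical enhancement long after the equilibrium curve has returned to its baseline, which is the qualitative point about freeze-out made above.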
Transverse momentum dependent parton distribution and fragmentation functions with QCD evolution
NASA Astrophysics Data System (ADS)
Aybat, S. Mert; Rogers, Ted C.
2011-06-01
We assess the current phenomenological status of transverse momentum dependent (TMD) parton distribution functions (PDFs) and fragmentation functions (FFs) and study the effect of consistently including perturbative QCD (pQCD) evolution. Our goal is to initiate the process of establishing reliable, QCD-evolved parametrizations for the TMD PDFs and TMD FFs that can be used both to test TMD factorization and to search for evidence of the breakdown of TMD factorization that is expected for certain processes. In this article, we focus on spin-independent processes because they provide the simplest illustration of the basic steps and can already be used in direct tests of TMD factorization. Our calculations are based on the Collins-Soper-Sterman (CSS) formalism, supplemented by recent theoretical developments which have clarified the precise definitions of the TMD PDFs and TMD FFs needed for a valid TMD-factorization theorem. Starting with these definitions, we numerically generate evolved TMD PDFs and TMD FFs using as input existing parametrizations for the collinear PDFs, collinear FFs, nonperturbative factors in the CSS factorization formalism, and recent fixed-scale fits. We confirm that evolution has important consequences, both qualitatively and quantitatively, and argue that it should be included in future phenomenological studies of TMD functions. Our analysis is also suggestive of extensions to processes that involve spin-dependent functions such as the Boer-Mulders, Sivers, or Collins functions, which we intend to pursue in future publications. At our website [http://projects.hepforge.org/tmd/], we have made available the tables and calculations needed to obtain the TMD parametrizations presented herein.
Linear vs non-linear QCD evolution in the neutrino-nucleon cross section
NASA Astrophysics Data System (ADS)
Albacete, Javier L.; Illana, José I.; Soto-Ontoso, Alba
2016-03-01
Evidence for an extraterrestrial flux of ultra-high-energy neutrinos, with energies of the order of a PeV, has opened a new era in Neutrino Astronomy. An essential ingredient for the determination of neutrino fluxes from the number of observed events is the precise knowledge of the neutrino-nucleon cross section. In this work, based on [1], we present a quantitative study of σνN in the neutrino energy range 10^4 < Eν < 10^14 GeV within two complementary QCD approaches: NLO DGLAP evolution using different sets of PDFs, and BK small-x evolution with running coupling and kinematical corrections. Further, we translate this theoretical uncertainty into upper bounds on the ultra-high-energy neutrino flux for different experiments.
Markovian Monte Carlo program EvolFMC v.2 for solving QCD evolution equations
NASA Astrophysics Data System (ADS)
Jadach, S.; Płaczek, W.; Skrzypek, M.; Stokłosa, P.
2010-02-01
We present the program EvolFMC v.2 that solves the evolution equations in QCD for the parton momentum distributions by means of the Monte Carlo technique based on the Markovian process. The program solves the DGLAP-type evolution as well as modified-DGLAP ones. In both cases the evolution can be performed in the LO or NLO approximation. The quarks are treated as massless. The overall technical precision of the code has been established at 5×10. This way, for the first time ever, we demonstrate that with the Monte Carlo method one can solve the evolution equations with precision comparable to the other numerical methods. New version program summary Program title: EvolFMC v.2 Catalogue identifier: AEFN_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFN_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including binary test data, etc.: 66 456 (7407 lines of C++ code) No. of bytes in distributed program, including test data, etc.: 412 752 Distribution format: tar.gz Programming language: C++ Computer: PC, Mac Operating system: Linux, Mac OS X RAM: Less than 256 MB Classification: 11.5 External routines: ROOT ( http://root.cern.ch/drupal/) Nature of problem: Solution of the QCD evolution equations for the parton momentum distributions of the DGLAP- and modified-DGLAP-type in the LO and NLO approximations. Solution method: Monte Carlo simulation of the Markovian process of a multiple emission of partons. Restrictions: Limited to the case of massless partons. Implemented in the LO and NLO approximations only. Weighted events only. Unusual features: Modified-DGLAP evolutions included up to the NLO level. Additional comments: Technical precision established at 5×10. Running time: For 10^6 events at 100 GeV: DGLAP NLO: 27s; C-type modified DGLAP NLO: 150s (MacBook Pro with Mac OS X v.10
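The Markovian technique can be illustrated in a few lines: emission "times" in the evolution variable are generated one after another from an exponential (Sudakov-like) distribution, and each emission degrades the momentum fraction x. The constant emission rate and the flat z distribution below are toy choices, not EvolFMC's DGLAP kernels.

```python
import random

def markovian_evolution(t_max, rate=1.0, n_events=20000, seed=42):
    """Toy Markovian evolution: emissions occur as a Poisson process in the
    evolution 'time' t (a stand-in for the DGLAP evolution variable), each
    step drawn from an exponential Sudakov-like distribution; every emission
    rescales the momentum fraction x by a random z in (0.5, 1).
    Returns (mean number of emissions, mean final x)."""
    random.seed(seed)
    total_n, total_x = 0, 0.0
    for _ in range(n_events):
        t, x = 0.0, 1.0
        while True:
            t += random.expovariate(rate)    # next emission time
            if t > t_max:
                break
            x *= random.uniform(0.5, 1.0)    # momentum fraction after emission
            total_n += 1
        total_x += x
    return total_n / n_events, total_x / n_events

mean_n, mean_x = markovian_evolution(t_max=3.0)
```

For this toy model the expected emission count is rate*t_max = 3 and the expected final x is exp(-3*(1 - 0.75)) ≈ 0.47, so the Monte Carlo averages can be checked against closed-form values, in the same spirit as the precision benchmarks quoted above.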
Collins, John; Rogers, Ted
2015-04-01
There is considerable controversy about the size and importance of non-perturbative contributions to the evolution of transverse momentum dependent (TMD) parton distribution functions. Standard fits to relatively high-energy Drell-Yan data give evolution that when taken to lower Q is too rapid to be consistent with recent data in semi-inclusive deeply inelastic scattering. Some authors provide very different forms for TMD evolution, even arguing that non-perturbative contributions at large transverse distance bT are not needed or are irrelevant. Here, we systematically analyze the issues, both perturbative and non-perturbative. We make a motivated proposal for the parameterization of the non-perturbative part of the TMD evolution kernel that could give consistency: with the variety of apparently conflicting data, with theoretical perturbative calculations where they are applicable, and with general theoretical non-perturbative constraints on correlation functions at large distances. We propose and use a scheme- and scale-independent function A(bT) that gives a tool to compare and diagnose different proposals for TMD evolution. We also advocate for phenomenological studies of A(bT) as a probe of TMD evolution. The results are important generally for applications of TMD factorization. In particular, they are important to making predictions for proposed polarized Drell-Yan experiments to measure the Sivers function.
QCD evolution of (un)polarized gluon TMDPDFs and the Higgs qT-distribution
NASA Astrophysics Data System (ADS)
Echevarria, Miguel G.; Kasemets, Tomas; Mulders, Piet J.; Pisano, Cristian
2015-07-01
We provide the proper definition of all the leading-twist (un)polarized gluon transverse momentum dependent parton distribution functions (TMDPDFs), by considering the Higgs boson transverse momentum distribution in hadron-hadron collisions and deriving the factorization theorem in terms of them. We show that the evolution of all the (un)polarized gluon TMDPDFs is driven by a universal evolution kernel, which can be resummed up to next-to-next-to-leading-logarithmic accuracy. Considering the proper definition of gluon TMDPDFs, we perform an explicit next-to-leading-order calculation of the unpolarized (f_1^g), linearly polarized (h_1^{⊥g}) and helicity (g_{1L}^g) gluon TMDPDFs, and show that, as expected, they are free from rapidity divergences. As a byproduct, we obtain the Wilson coefficients of the refactorization of these TMDPDFs at large transverse momentum. In particular, the coefficient of g_{1L}^g, which has never been calculated before, constitutes a new and necessary ingredient for a reliable phenomenological extraction of this quantity, for instance at RHIC or the future AFTER@LHC or Electron-Ion Collider. The coefficients of f_1^g and h_1^{⊥g} have never been calculated in the present formalism, although they could be obtained by carefully collecting and recasting previous results in the new TMD formalism. We apply these results to analyze the contribution of linearly polarized gluons at different scales, relevant, for instance, for the inclusive production of the Higgs boson and the C-even pseudoscalar bottomonium state η_b. Applying our resummation scheme we finally provide predictions for the Higgs boson qT-distribution at the LHC.
Dumitru, Adrian; Jalilian-Marian, Jamal
2010-10-01
Present knowledge of QCD n-point functions of Wilson lines at high energies is rather limited. In practical applications, it is therefore customary to factorize higher n-point functions into products of two-point functions (dipoles) which satisfy the Balitsky-Kovchegov evolution equation. We employ the Jalilian-Marian-Iancu-McLerran-Weigert-Leonidov-Kovner formalism to derive explicit evolution equations for the 4- and 6-point functions of fundamental Wilson lines and show that if the Gaussian approximation is carried out before the rapidity evolution step is taken, then many leading order N_c contributions are missed. Our evolution equations could specifically be used to improve calculations of forward dijet angular correlations, recently measured by the STAR Collaboration in deuteron-gold collisions at the RHIC collider. Forward dijets in proton-proton collisions at the LHC probe QCD evolution at even smaller light-cone momentum fractions. Such correlations may provide insight into genuine differences between the Jalilian-Marian-Iancu-McLerran-Weigert-Leonidov-Kovner and Balitsky-Kovchegov approaches.
Analytic solution to leading order coupled DGLAP evolution equations: A new perturbative QCD tool
NASA Astrophysics Data System (ADS)
Block, Martin M.; Durand, Loyal; Ha, Phuoc; McKay, Douglas W.
2011-03-01
We have analytically solved the LO perturbative QCD singlet DGLAP equations [V. N. Gribov and L. N. Lipatov, Sov. J. Nucl. Phys. 15, 438 (1972); G. Altarelli and G. Parisi, Nucl. Phys. B126, 298 (1977); Y. L. Dokshitzer, Sov. Phys. JETP 46, 641 (1977)] using Laplace transform techniques. Newly developed, highly accurate, numerical inverse Laplace transform algorithms [M. M. Block, Eur. Phys. J. C 65, 1 (2010); M. M. Block, Eur. Phys. J. C 68, 683 (2010)] allow us to write fully decoupled solutions for the singlet structure function F_s(x,Q^2) and G(x,Q^2) as F_s(x,Q^2) = F_s(F_{s0}(x_0), G_0(x_0)) and G(x,Q^2) = G(F_{s0}(x_0), G_0(x_0)), where the x_0 are the Bjorken x values at Q_0^2. Here F_s and G are known functions—found using LO DGLAP splitting functions—of the initial boundary conditions F_{s0}(x) ≡ F_s(x,Q_0^2) and G_0(x) ≡ G(x,Q_0^2), i.e., the chosen starting functions at the virtuality Q_0^2. For both G(x) and F_s(x), we are able to either devolve or evolve each separately and rapidly, with very high numerical accuracy—a computational fractional precision of O(10^-9). Armed with this powerful new tool in the perturbative QCD arsenal, we compare our numerical results from the above equations with the published MSTW2008 and CTEQ6L LO gluon and singlet F_s distributions [A. D. Martin, W. J. Stirling, R. S. Thorne, and G. Watt, Eur. Phys. J. C 63, 189 (2009)], starting from their initial values at Q_0^2 = 1 GeV^2 and 1.69 GeV^2, respectively, using their choice of α_s(Q^2). This allows an important independent check on the accuracies of their evolution codes and, therefore, the computational accuracies of their published parton distributions. Our method completely decouples the two LO distributions, at the same time guaranteeing that both G and F_s satisfy the singlet coupled DGLAP equations. It also allows one to easily obtain the effects of
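The decoupling idea can be sketched in transform space, where the coupled singlet equations reduce to a 2×2 linear system solved exactly by a matrix exponential. The anomalous-dimension matrix below is a hypothetical constant matrix used only for illustration; the physical matrix depends on the Laplace/Mellin variable and on α_s.

```python
import numpy as np

def evolve_moments(f0, g0, gamma, t):
    """Evolve the moment pair v = (F_s, G) with dv/dt = gamma @ v, where t
    plays the role of the evolution variable: diagonalize gamma and apply
    the matrix exponential, which decouples the two eigen-combinations."""
    w, V = np.linalg.eig(gamma)
    expm = V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V)   # matrix exponential
    return expm @ np.array([f0, g0])

# hypothetical constant anomalous-dimension matrix (illustration only)
gamma = np.array([[-0.3, 0.2],
                  [ 0.4, -0.1]])
v = evolve_moments(1.0, 2.0, gamma, t=1.0)
```

Because the solution is an exact exponential, evolving forward and "devolving" backward (t → -t) are equally straightforward, mirroring the devolve/evolve capability described in the abstract.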
Small-x Evolution of Structure Functions in the Next-to-Leading Order
Chirilli, Giovanni A.
2009-12-17
The high-energy behavior of amplitudes in gauge theories can be reformulated in terms of the evolution of Wilson-line operators. In the leading order this evolution is governed by the nonlinear Balitsky-Kovchegov (BK) equation. The NLO corrections define the scale of the running-coupling constant in the BK equation and in QCD, its kernel has both conformal and non-conformal parts. To separate the conformally invariant effects from the running-coupling effects, we calculate the NLO evolution of the color dipoles in the conformal N = 4 SYM theory, then we define the 'composite dipole operator' with the rapidity cutoff preserving conformal invariance, and the resulting Möbius invariant kernel for this operator agrees with the forward NLO BFKL calculation. In QCD, the NLO kernel for the composite operators resolves in a sum of the conformal part and the running-coupling part.
A new approach to parton recombination in the QCD evolution equations
NASA Astrophysics Data System (ADS)
Wei Zhu
1999-06-01
Parton recombination is reconsidered in perturbation theory without using the AGK cutting rules in the leading order of the recombination. We use time-ordered perturbation theory to sum the cut diagrams, which are neglected in the GLR evolution equation. We present a set of new evolution equations including parton recombination.
Bornyakov, V.G.
2005-06-01
Possibilities that are provided by a lattice regularization of QCD for studying nonperturbative properties of QCD are discussed. A review of some recent results obtained from computer calculations in lattice QCD is given. In particular, the results for the QCD vacuum structure, the hadron mass spectrum, and the strong coupling constant are considered.
Small-x evolution of structure functions in the next-to-leading order
Giovanni A. Chirilli
2010-01-01
The high-energy behavior of amplitudes in gauge theories can be reformulated in terms of the evolution of Wilson-line operators. In the leading order this evolution is governed by the non-linear Balitsky-Kovchegov (BK) equation. In QCD the NLO kernel has both conformal and non-conformal parts. To separate the conformally invariant effects from the running-coupling effects, we calculate the NLO evolution of the color dipoles in the conformal N = 4 SYM theory, then we define the "composite dipole operator", and the resulting Möbius invariant kernel for this operator agrees with the forward NLO BFKL calculation.
Extraction of quark transversity distribution and Collins fragmentation functions with QCD evolution
Kang, Zhong-Bo; Prokudin, Alexei; Sun, Peng; Yuan, Feng
2016-01-13
In this paper, we study the transverse momentum dependent (TMD) evolution of the Collins azimuthal asymmetries in e+e- annihilations and semi-inclusive hadron production in deep inelastic scattering (SIDIS) processes. All the relevant coefficients are calculated up to the next-to-leading logarithmic (NLL) order accuracy. By applying the TMD evolution at the approximate NLL order in the Collins-Soper-Sterman (CSS) formalism, we extract transversity distributions for u and d quarks and Collins fragmentation functions from current experimental data by a global analysis of the Collins asymmetries in back-to-back di-hadron productions in e+e- annihilations measured by BELLE and BABAR Collaborations and SIDIS data from HERMES, COMPASS, and JLab HALL A experiments. The impact of the evolution effects and the relevant theoretical uncertainties are discussed. We further discuss the TMD interpretation for our results, and illustrate the unpolarized quark distribution, transversity distribution, unpolarized quark fragmentation and Collins fragmentation functions depending on the transverse momentum and the hard momentum scale. Finally, we give predictions and discuss the impact of future experiments.
Two-loop conformal generators for leading-twist operators in QCD
NASA Astrophysics Data System (ADS)
Braun, V. M.; Manashov, A. N.; Moch, S.; Strohmaier, M.
2016-03-01
QCD evolution equations in minimal subtraction schemes have a hidden symmetry: one can construct three operators that commute with the evolution kernel and form an SL(2) algebra, i.e. they satisfy (exactly) the SL(2) commutation relations. In this paper we find explicit expressions for these operators to two-loop accuracy going over to QCD in non-integer d = 4 - 2ɛ space-time dimensions at the intermediate stage. In this way conformal symmetry of QCD is restored on quantum level at the specially chosen (critical) value of the coupling, and at the same time the theory is regularized allowing one to use the standard renormalization procedure for the relevant Feynman diagrams. Quantum corrections to conformal generators in d = 4 - 2ɛ effectively correspond to the conformal symmetry breaking in the physical theory in four dimensions and the SL(2) commutation relations lead to nontrivial constraints on the renormalization group equations for composite operators. This approach is valid to all orders in perturbation theory and the result includes automatically all terms that can be identified as due to a nonvanishing QCD β-function (in the physical theory in four dimensions). Our result can be used to derive three-loop evolution equations for flavor-nonsinglet quark-antiquark operators including mixing with the operators containing total derivatives. These equations govern, e.g., the scale dependence of generalized hadron parton distributions and light-cone meson distribution amplitudes.
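The SL(2) commutation relations referred to here can be checked directly in a finite-dimensional matrix representation; the two-dimensional (spin-1/2) representation below is the smallest nontrivial example.

```python
import numpy as np

# Minimal check of the SL(2) algebra that the paper's three operators must
# satisfy: [S0, S+] = +S+, [S0, S-] = -S-, [S+, S-] = 2 S0,
# verified here in the two-dimensional (spin-1/2) matrix representation.
S0 = np.array([[0.5, 0.0], [0.0, -0.5]])
Sp = np.array([[0.0, 1.0], [0.0, 0.0]])
Sm = np.array([[0.0, 0.0], [1.0, 0.0]])

def comm(a, b):
    return a @ b - b @ a

assert np.allclose(comm(S0, Sp), Sp)
assert np.allclose(comm(S0, Sm), -Sm)
assert np.allclose(comm(Sp, Sm), 2 * S0)
print("SL(2) commutation relations verified")
```

In the paper the generators are differential/integral operators acting on light-ray operators rather than finite matrices, but they obey exactly these relations.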
The Chroma Software System for Lattice QCD
Robert Edwards; Balint Joo
2004-06-01
We describe aspects of the Chroma software system for lattice QCD calculations. Chroma is an open source C++ based software system developed using the software infrastructure of the US SciDAC initiative. Chroma interfaces with output from the BAGEL assembly generator for optimized lattice fermion kernels on some architectures. It can be run on workstations, clusters and the QCDOC supercomputer.
Caprio, Michael A; Martinez, Jeannette C; Porter, Patrick A; Bynum, Ed
2016-02-01
Seeds or kernels on hybrid plants are primarily F2 tissue and will segregate for heterozygous alleles present in the parental F1 hybrids. In the case of plants expressing Bt-toxins, the F2 tissue in the kernels will express toxins as they would segregate in any F2 tissue. In the case of plants expressing two unlinked toxins, the kernels on a Bt plant fertilized by another Bt plant would express anywhere from 0 to 2 toxins. Larvae of corn earworm [Helicoverpa zea (Boddie)] feed on a number of kernels during development and would therefore be exposed to local habitats (kernels) that varied in their toxin expression. Three models were developed for plants expressing two Bt-toxins: one where the traits are unlinked, a second where the traits were linked, and a third model assuming that maternal traits were expressed in all kernels as well as paternally inherited traits. Results suggest that increasing larval movement rates off of expressing kernels tended to increase durability while increasing movement rates off of nonexpressing kernels always decreased durability. An ideal block refuge (no pollen flow between blocks and refuges) was more durable than a seed blend because the refuge expressed no toxins, while pollen contamination from plants expressing toxins in a seed blend reduced durability. A linked-trait model in an ideal refuge model predicted the longest durability. The results suggest that using a seed-blend strategy for a kernel feeding insect on a hybrid crop could dramatically reduce durability through the loss of refuge due to extensive cross-pollination. PMID:26527792
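The segregation arithmetic behind the unlinked-trait model can be reproduced by enumerating the 16 equally likely gamete combinations of a two-locus heterozygous (Aa Bb) self-cross, with a toxin expressed whenever at least one dominant allele is inherited at its locus. This is a minimal sketch of the Mendelian bookkeeping only, not the authors' durability simulation.

```python
from itertools import product

def kernel_toxin_distribution():
    """Fraction of F2 kernels expressing 0, 1 or 2 Bt toxins when a hybrid
    heterozygous at two unlinked loci (Aa Bb) pollinates itself; a toxin is
    expressed whenever at least one dominant allele is present at its locus."""
    gametes = list(product("Aa", "Bb"))           # four gamete types per parent
    counts = {0: 0, 1: 0, 2: 0}
    for mom, dad in product(gametes, repeat=2):   # 16 equally likely kernels
        toxins = ("A" in (mom[0], dad[0])) + ("B" in (mom[1], dad[1]))
        counts[toxins] += 1
    return {k: v / 16 for k, v in counts.items()}

dist = kernel_toxin_distribution()
# → {0: 0.0625, 1: 0.375, 2: 0.5625}, i.e. 1/16, 6/16, 9/16 of kernels
```

So on a two-toxin Bt plant fertilized by another Bt plant, only 9/16 of kernels express both toxins and 1/16 express none, which is why larvae moving among kernels encounter a mosaic of toxin doses.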
Initial-state splitting kernels in cold nuclear matter
NASA Astrophysics Data System (ADS)
Ovanesyan, Grigory; Ringer, Felix; Vitev, Ivan
2016-09-01
We derive medium-induced splitting kernels for energetic partons that undergo interactions in dense QCD matter before a hard-scattering event at large momentum transfer Q2. Working in the framework of the effective theory SCET_G, we compute the splitting kernels beyond the soft gluon approximation. We present numerical studies that compare our new results with previous findings. We expect the full medium-induced splitting kernels to be most relevant for the extension of initial-state cold nuclear matter energy loss phenomenology in both p+A and A+A collisions.
Hess, Peter O.
2006-09-25
A review is presented of the contributions of Mexican scientists to QCD phenomenology. These contributions range from constituent quark models (CQM) with a fixed number of quarks (antiquarks) to those where the number of quarks is not conserved. Glueball spectra were also treated with phenomenological models. Several other approaches are mentioned.
QCD at nonzero chemical potential: Recent progress on the lattice
NASA Astrophysics Data System (ADS)
Aarts, Gert; Attanasio, Felipe; Jäger, Benjamin; Seiler, Erhard; Sexty, Dénes; Stamatescu, Ion-Olimpiu
2016-01-01
We summarise recent progress in simulating QCD at nonzero baryon density using complex Langevin dynamics. After a brief outline of the main idea, we discuss gauge cooling as a means to control the evolution. Subsequently we present a status report for heavy dense QCD and its phase structure, full QCD with staggered quarks, and full QCD with Wilson quarks, both directly and using the hopping parameter expansion to all orders.
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be computed and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL can perform comparably to KCL, with a large reduction on computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. PMID:25528318
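The subspace-via-sampling idea can be sketched as follows: prototypes are constrained to the span of a few sampled landmark points, so only a small coefficient matrix is stored rather than the full kernel matrix. This is a schematic reconstruction of that idea, not the authors' exact AKCL update rules; the RBF kernel, landmark count, and learning rate are illustrative choices.

```python
import math, random

def rbf(x, y, s=1.0):
    return math.exp(-((x[0] - y[0])**2 + (x[1] - y[1])**2) / (2 * s * s))

def akcl_sketch(data, n_landmarks=10, epochs=20, lr=0.2, seed=0):
    """Competitive learning in a sampled kernel subspace: each of two
    prototypes is a combination of phi(landmark_i), so only an n_landmarks
    coefficient vector per prototype is kept, never the full kernel matrix."""
    rng = random.Random(seed)
    land = rng.sample(data, n_landmarks)
    d2 = lambda a, b: (a[0] - b[0])**2 + (a[1] - b[1])**2
    # start the two prototypes on the most distant landmark pair
    i0, j0 = max(((i, j) for i in range(n_landmarks)
                  for j in range(i + 1, n_landmarks)),
                 key=lambda p: d2(land[p[0]], land[p[1]]))
    A = [[float(i == i0) for i in range(n_landmarks)],
         [float(i == j0) for i in range(n_landmarks)]]
    for _ in range(epochs):
        for x in data:
            kx = [rbf(x, l) for l in land]        # phi(x) seen through landmarks
            sims = [sum(a * b for a, b in zip(Aj, kx)) for Aj in A]
            w = sims.index(max(sims))             # winner-take-all
            A[w] = [(1 - lr) * a + lr * b for a, b in zip(A[w], kx)]
    return lambda x: max((0, 1), key=lambda j: sum(
        a * b for a, b in zip(A[j], [rbf(x, l) for l in land])))

rng = random.Random(1)
pts = [(rng.gauss(0, 0.3), rng.gauss(0, 0.3)) for _ in range(40)] + \
      [(rng.gauss(5, 0.3), rng.gauss(5, 0.3)) for _ in range(40)]
cluster = akcl_sketch(pts)
```

On this toy two-blob dataset the two prototypes settle into separate blobs while storing only a 2 × 10 coefficient matrix, instead of the 80 × 80 kernel matrix full KCL would require.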
NASA Astrophysics Data System (ADS)
Cao, Shanshan; Qin, Guang-You; Bass, Steffen A.
2014-12-01
We study heavy flavor evolution and hadronization in relativistic heavy-ion collisions. The in-medium evolution of heavy quarks is described using our modified Langevin framework that incorporates both collisional and radiative energy loss mechanisms. The subsequent hadronization process for heavy quarks is calculated with a fragmentation plus recombination model. We find significant contribution from gluon radiation to heavy quark energy loss at high pT; the recombination mechanism can greatly enhance the D meson production at medium pT. Our calculation provides a good description of the D meson nuclear modification at the LHC. In addition, we explore the angular correlation functions of heavy flavor pairs, which may provide a candidate observable for distinguishing different energy loss mechanisms of heavy quarks inside the QGP.
Small-x Evolution in the Next-to-Leading Order
Ian Balitsky
2009-10-01
The high-energy behavior of amplitudes in gauge theories can be reformulated in terms of the evolution of Wilson-line operators. In the leading order this evolution is governed by the non-linear BK equation. The NLO corrections define the scale of the running-coupling constant in the BK equation and in QCD, its kernel has both conformal and non-conformal parts. To separate the conformally invariant effects from the running-coupling effects, we calculate the NLO evolution of the color dipoles in the conformal N=4 SYM theory, then we define the 'composite dipole operator' with the rapidity cutoff preserving conformal invariance, and the resulting Möbius invariant kernel for this operator agrees with the forward NLO BFKL calculation.
Melacci, Stefano; Gori, Marco
2013-11-01
Supervised examples and prior knowledge on regions of the input space have been profitably integrated in kernel machines to improve the performance of classifiers in different real-world contexts. The proposed solutions, which rely on the unified supervision of points and sets, have been mostly based on specific optimization schemes in which, as usual, the kernel function operates on points only. In this paper, arguments from variational calculus are used to support the choice of a special class of kernels, referred to as box kernels, which emerges directly from the choice of the kernel function associated with a regularization operator. It is proven that there is no need to search for kernels to incorporate the structure deriving from the supervision of regions of the input space, because the optimal kernel arises as a consequence of the chosen regularization operator. Although most of the given results hold for sets, we focus attention on boxes, whose labeling is associated with their propositional description. Based on different assumptions, some representer theorems are given that dictate the structure of the solution in terms of box kernel expansion. Successful results are given for problems of medical diagnosis, image, and text categorization. PMID:24051728
Sparse representation with kernels.
Gao, Shenghua; Tsang, Ivor Wai-Hung; Chia, Liang-Tien
2013-02-01
Recent research has shown the initial success of sparse coding (Sc) in solving many computer vision tasks. Motivated by the fact that the kernel trick can capture the nonlinear similarity of features, which helps in finding a sparse representation of nonlinear features, we propose kernel sparse representation (KSR). Essentially, KSR is a sparse coding technique in a high dimensional feature space mapped by an implicit mapping function. We apply KSR to feature coding in image classification, face recognition, and kernel matrix approximation. More specifically, by incorporating KSR into spatial pyramid matching (SPM), we develop KSRSPM, which achieves good performance for image classification. Moreover, KSR-based feature coding can be viewed as a generalization of the efficient match kernel and an extension of Sc-based SPM. We further show that our proposed KSR using a histogram intersection kernel (HIK) can be considered a soft assignment extension of HIK-based feature quantization in the feature coding process. Beyond feature coding, KSR learns more discriminative sparse codes than sparse coding and achieves higher accuracy for face recognition. KSR can also be applied to kernel matrix approximation in large scale learning tasks, where it proves robust, especially when only a small fraction of the data is used. Extensive experiments demonstrate promising results for KSR in image classification, face recognition, and kernel matrix approximation, confirming its effectiveness in computer vision and machine learning tasks. PMID:23014744
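The kernelized sparse-coding step can be made concrete: since ||φ(y) − φ(D)c||² expands to k(y,y) − 2kᵀc + cᵀKc using only kernel evaluations, the sparse code c can be found with a standard proximal-gradient (ISTA) loop. The sketch below is a minimal NumPy illustration of this idea, assuming an RBF kernel; it is not the authors' implementation.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gram matrix of the implicit feature map: k(x, y) = exp(-gamma ||x - y||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_sparse_code(D, y, lam=0.1, gamma=1.0, n_iter=200):
    """Sparse code of phi(y) over dictionary phi(D) via ISTA on the
    kernelized objective  c^T K c - 2 k^T c + lam * ||c||_1."""
    K = rbf_kernel(D, D, gamma)                 # (n_atoms, n_atoms)
    k = rbf_kernel(D, y[None, :], gamma)[:, 0]  # similarities k(d_i, y)
    step = 1.0 / (2 * np.linalg.norm(K, 2))     # 1 / Lipschitz const of gradient
    c = np.zeros(len(D))
    for _ in range(n_iter):
        grad = 2 * (K @ c - k)
        z = c - step * grad
        c = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return c
```

When y coincides with a dictionary atom, the recovered code concentrates on that atom, as expected of a sparse representation.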
Duff, I.
1994-12-31
This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: "Current status of user level sparse BLAS"; "Current status of the sparse BLAS toolkit"; and "Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit".
Long range two-particle rapidity correlations in A+A collisions from high energy QCD evolution
NASA Astrophysics Data System (ADS)
Dusling, Kevin; Gelis, François; Lappi, Tuomas; Venugopalan, Raju
2010-05-01
Long range rapidity correlations in A+A collisions are sensitive to strong color field dynamics at early times after the collision. These can be computed in a factorization formalism (Gelis, Lappi and Venugopalan (2009) [1]) which expresses the n-gluon inclusive spectrum at arbitrary rapidity separations in terms of the multi-parton correlations in the nuclear wavefunctions. This formalism includes all radiative and rescattering contributions, to leading accuracy in α_s ΔY, where ΔY is the rapidity separation between either one of the measured gluons and a projectile, or between the measured gluons themselves. In this paper, we use a mean field approximation for the evolution of the nuclear wavefunctions to obtain a compact result for inclusive two gluon correlations in terms of the unintegrated gluon distributions in the nuclear projectiles. The unintegrated gluon distributions satisfy the Balitsky-Kovchegov equation, which we solve with running coupling and with initial conditions constrained by existing data on electron-nucleus collisions. Our results are valid for arbitrary rapidity separations between measured gluons having transverse momenta p, q ≳ Q, where Q is the saturation scale in the nuclear wavefunctions. We compare our results to data on long range rapidity correlations observed in the near-side ridge at RHIC and make predictions for similar long range rapidity correlations at the LHC.
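For reference, the fixed-coupling, leading-order form of the Balitsky-Kovchegov equation for the dipole amplitude N (the abstract above refers to the running-coupling version) can be written as:

```latex
\frac{\partial N(\mathbf{r}_{01}, Y)}{\partial Y}
  = \frac{\bar{\alpha}_s}{2\pi} \int d^2\mathbf{r}_2\,
    \frac{r_{01}^{2}}{r_{02}^{2}\, r_{12}^{2}}
    \left[ N(\mathbf{r}_{02}, Y) + N(\mathbf{r}_{12}, Y)
         - N(\mathbf{r}_{01}, Y)
         - N(\mathbf{r}_{02}, Y)\, N(\mathbf{r}_{12}, Y) \right]
```

with ᾱ_s = α_s N_c/π and r_ij = r_i − r_j; the nonlinear term tames the linear (BFKL) growth and generates the saturation scale Q referred to in the abstract.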
Random walk through recent CDF QCD results
C. Mesropian
2003-04-09
We present recent results on jet fragmentation, jet evolution in jet and minimum bias events, and underlying event studies. The results presented in this talk address significant questions relevant to QCD and, in particular, to jet studies. One topic discussed is jet fragmentation and the possibility of describing it down to very small momentum scales in terms of pQCD. Another topic is the studies of underlying event energy originating from fragmentation of partons not associated with the hard scattering.
Probing QCD at high energy via correlations
Jalilian-Marian, Jamal
2011-04-26
A hadron or nucleus at high energy or small x_Bj contains many gluons and may be described as a Color Glass Condensate. Angular and rapidity correlations of two particles produced in high energy hadron-hadron collisions are a sensitive probe of the high gluon density regime of QCD. Evolution equations which describe the rapidity dependence of these correlation functions are derived from a QCD effective action.
Analog forecasting with dynamics-adapted kernels
NASA Astrophysics Data System (ADS)
Zhao, Zhizhen; Giannakis, Dimitrios
2016-09-01
Analog forecasting is a nonparametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from kernel methods developed in harmonic analysis and machine learning and state-space reconstruction for dynamical systems. A key ingredient of our approach is to replace single-analog forecasting with weighted ensembles of analogs constructed using local similarity kernels. The kernels used here employ a number of dynamics-dependent features designed to improve forecast skill, including Takens’ delay-coordinate maps (to recover information in the initial data lost through partial observations) and a directional dependence on the dynamical vector field generating the data. Mathematically, our approach is closely related to kernel methods for out-of-sample extension of functions, and we discuss alternative strategies based on the Nyström method and the multiscale Laplacian pyramids technique. We illustrate these techniques in applications to forecasting in a low-order deterministic model for atmospheric dynamics with chaotic metastability, and interannual-scale forecasting in the North Pacific sector of a comprehensive climate model. We find that forecasts based on kernel-weighted ensembles have significantly higher skill than the conventional approach following a single analog.
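The core replacement of a single best analog by a kernel-weighted ensemble can be sketched in a few lines. The sketch below is a simplified stand-in using plain Gaussian weights on raw states; it omits the delay-coordinate embedding and vector-field dependence that the paper's kernels employ.

```python
import numpy as np

def kernel_analog_forecast(history, x0, lead=1, epsilon=1.0, n_analogs=10):
    """Forecast `lead` steps ahead from state x0 by a kernel-weighted
    ensemble of historical analogs (instead of Lorenz's single best analog)."""
    past = history[:-lead]                      # states with known successors
    d2 = ((past - x0) ** 2).sum(axis=1)         # squared distances to x0
    idx = np.argsort(d2)[:n_analogs]            # nearest analogs in the record
    w = np.exp(-d2[idx] / epsilon)              # Gaussian similarity kernel
    w /= w.sum()
    return (w[:, None] * history[idx + lead]).sum(axis=0)  # weighted successors
```

On a deterministic record the weighted ensemble mean tracks the true successor of the initial state.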
Inheritance of Kernel Color in Corn: Explanations and Investigations.
ERIC Educational Resources Information Center
Ford, Rosemary H.
2000-01-01
Offers a new perspective on traditional problems in genetics on kernel color in corn, including information about genetic regulation, metabolic pathways, and evolution of genes. (Contains 15 references.) (ASK)
Modeling QCD for Hadron Physics
NASA Astrophysics Data System (ADS)
Tandy, P. C.
2011-10-01
We review the approach to modeling soft hadron physics observables based on the Dyson-Schwinger equations of QCD. The focus is on light quark mesons and in particular the pseudoscalar and vector ground states, their decays and electromagnetic couplings. We detail the wide variety of observables that can be correlated by a ladder-rainbow kernel with one infrared parameter fixed to the chiral quark condensate. A recently proposed novel perspective in which the quark condensate is contained within hadrons and not the vacuum is mentioned. The valence quark parton distributions, in the pion and kaon, as measured in the Drell Yan process, are investigated with the same ladder-rainbow truncation of the Dyson-Schwinger and Bethe-Salpeter equations.
Robotic Intelligence Kernel: Communications
Walton, Mike C.
2009-09-16
The INL Robotic Intelligence Kernel-Comms is the communication server that transmits information between one or more robots using the RIK and one or more user interfaces. It supports event handling and multiple hardware communication protocols.
Robotic Intelligence Kernel: Driver
2009-09-16
The INL Robotic Intelligence Kernel-Driver is built on top of the RIK-A and implements a dynamic autonomy structure. The RIK-D is used to orchestrate hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a single cognitive behavior kernel that provides intrinsic intelligence for a wide variety of unmanned ground vehicle systems.
Nuclear reactions from lattice QCD
Briceño, Raúl A.; Davoudi, Zohreh; Luu, Thomas C.
2015-01-13
One of the overarching goals of nuclear physics is to rigorously compute properties of hadronic systems directly from the fundamental theory of strong interactions, Quantum Chromodynamics (QCD). In particular, the hope is to perform reliable calculations of nuclear reactions which will impact our understanding of environments that occur during big bang nucleosynthesis, the evolution of stars and supernovae, and within nuclear reactors and high energy/density facilities. Such calculations, being truly ab initio, would include all two-nucleon and three-nucleon (and higher) interactions in a consistent manner. Currently, lattice QCD provides the only reliable option for performing calculations of some of the low-energy hadronic observables. With the aim of bridging the gap between lattice QCD and nuclear many-body physics, the Institute for Nuclear Theory held a workshop on Nuclear Reactions from Lattice QCD in March 2013. In this review article, we report on the topics discussed in this workshop and the path planned to move forward in the upcoming years.
Linearized Kernel Dictionary Learning
NASA Astrophysics Data System (ADS)
Golts, Alona; Elad, Michael
2016-06-01
In this paper we present a new approach to incorporating kernels into dictionary learning. The kernel K-SVD algorithm (KKSVD), which has been introduced recently, shows an improvement in classification performance relative to its linear counterpart K-SVD. However, this algorithm requires the storage and handling of a very large kernel matrix, which leads to high computational cost, while also limiting its use to setups with a small number of training examples. We address these problems by combining two ideas: first, we approximate the kernel matrix from a cleverly sampled subset of its columns using the Nyström method; second, as we wish to avoid using this matrix altogether, we decompose it by SVD to form new "virtual samples," on which any linear dictionary learning can be employed. Our method, termed "Linearized Kernel Dictionary Learning" (LKDL), can be seamlessly applied as a pre-processing stage on top of any efficient off-the-shelf dictionary learning scheme, effectively "kernelizing" it. We demonstrate the effectiveness of our method on several tasks of both supervised and unsupervised classification and show the efficiency of the proposed scheme, its easy integration and performance boosting properties.
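The two steps above can be sketched in NumPy: landmark columns give the Nyström factorization K ≈ C W⁻¹ Cᵀ, and an eigendecomposition of W turns it into explicit "virtual samples" F with FᵀF ≈ K. This is a minimal illustration under an assumed RBF kernel with uniform column sampling, not the authors' code.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def virtual_samples(X, m, gamma=1.0, rng=None):
    """Nystrom 'virtual samples' F (m x n) with F.T @ F approximating the
    full kernel matrix, so any linear dictionary-learning code can run on F."""
    rng = np.random.default_rng(rng)
    sel = rng.choice(len(X), size=m, replace=False)  # sampled landmark points
    C = rbf_kernel(X, X[sel], gamma)                 # (n, m) sampled columns
    W = C[sel]                                       # (m, m) landmark block
    evals, U = np.linalg.eigh(W)
    evals = np.maximum(evals, 1e-12)                 # guard tiny negatives
    F = (U / np.sqrt(evals)).T @ C.T                 # Lambda^{-1/2} U^T C^T
    return F
```

With m equal to the number of samples the approximation becomes exact; with m much smaller, F is a cheap surrogate on which linear K-SVD-style training can proceed.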
Foundations of Perturbative QCD
NASA Astrophysics Data System (ADS)
Collins, John
2011-04-01
1. Introduction; 2. Why QCD?; 3. Basics of QCD; 4. Infra-red safety and non-safety; 5. Libby-Sterman analysis and power counting; 6. Parton model to parton theory I; 7. Parton model to parton theory II; 8. Factorization; 9. Corrections to the parton model in QCD; 10. Factorization and subtractions; 11. DIS in QCD; 12. Fragmentation; 13. TMD factorization; 14. Hadron-hadron collisions; 15. More advanced topics; Appendices; References; Index.
Supersymmetric QCD and high energy cosmic rays: Fragmentation functions of supersymmetric QCD
NASA Astrophysics Data System (ADS)
Corianò, Claudio; Faraggi, Alon E.
2002-04-01
The supersymmetric evolution of the fragmentation functions (or timelike evolution) within N=1 QCD is discussed and predictions for the fragmentation functions of the theory (into final protons) are given. We use a backward running of the supersymmetric DGLAP equations, using a method developed in previous works. We start from the usual QCD parametrizations at low energy and run the DGLAP equations backward up to an intermediate scale, assumed to be supersymmetric, where we switch on supersymmetry. From there on we assume the applicability of an N=1 supersymmetric evolution (ESAP). We elaborate on the possible application of these results to high energy cosmic rays near the GZK cutoff.
QCD dynamics in mesons at soft and hard scales
Nguyen, T.; Souchlas, N. A.; Tandy, P. C.
2010-07-27
Using a ladder-rainbow kernel previously established for the soft scale of light quark hadrons, we explore, within a Dyson-Schwinger approach, phenomena that mix soft and hard scales of QCD. The difference between vector and axial vector current correlators is examined to estimate the four quark chiral condensate and the leading distance scale for the onset of non-perturbative phenomena in QCD. The valence quark distributions, in the pion and kaon, defined in deep inelastic scattering, and measured in the Drell Yan process, are investigated with the same ladder-rainbow truncation of the Dyson-Schwinger and Bethe-Salpeter equations.
LeFebvre, W.
1994-08-01
For many years, the popular program top has aided system administrators in examining process resource usage on their machines. Yet few are familiar with the techniques involved in obtaining this information. Most of what is displayed by top is available only in the dark recesses of kernel memory. Extracting this information requires familiarity not only with how bytes are read from the kernel, but also with what data needs to be read. The wide variety of systems and variants of the Unix operating system in today's marketplace makes writing such a program very challenging. This paper explores the tremendous diversity in kernel information across the many platforms and the solutions employed by top to achieve and maintain ease of portability in the presence of such divergent systems.
Calculates Thermal Neutron Scattering Kernel.
Energy Science and Technology Software Center (ESTSC)
1989-11-10
Version 00 THRUSH computes the thermal neutron scattering kernel by the phonon expansion method for both coherent and incoherent scattering processes. The calculation of the coherent part is suitable only for calculating the scattering kernel for heavy water.
Robotic Intelligence Kernel: Architecture
Energy Science and Technology Software Center (ESTSC)
2009-09-16
The INL Robotic Intelligence Kernel Architecture (RIK-A) is a multi-level architecture that supports a dynamic autonomy structure. The RIK-A is used to coalesce hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a framework that can be used to create behaviors for humans to interact with the robot.
NASA Technical Reports Server (NTRS)
Spafford, Eugene H.; Mckendry, Martin S.
1986-01-01
An overview of the internal structure of the Clouds kernel was presented. An indication of how these structures will interact in the prototype Clouds implementation is given. Many specific details have yet to be determined and await experimentation with an actual working system.
Robotic Intelligence Kernel: Visualization
Energy Science and Technology Software Center (ESTSC)
2009-09-16
The INL Robotic Intelligence Kernel-Visualization is the software that supports the user interface. It uses the RIK-C software to communicate information to and from the robot. The RIK-V illustrates the data in a 3D display and provides an operating picture wherein the user can task the robot.
NLO evolution of color dipoles in N=4 SYM
Balitsky, Ian; Chirilli, Giovanni
2009-01-01
High-energy behavior of amplitudes in a gauge theory can be reformulated in terms of the evolution of Wilson-line operators. In the leading logarithmic approximation it is given by the conformally invariant BK equation for the evolution of color dipoles. In QCD, the next-to-leading order BK equation has both conformal and non-conformal parts, the latter providing the running of the coupling constant. To separate the conformally invariant effects from the running-coupling effects, we calculate the NLO evolution of the color dipoles in the conformal N=4 SYM theory. We define the "composite dipole operator" with the rapidity cutoff preserving conformal invariance. The resulting Möbius invariant kernel agrees with the forward NLO BFKL calculation of Ref. 1.
NASA Astrophysics Data System (ADS)
Wilczek, Frank
Introduction; Symmetry and the Phenomena of QCD; Apparent and Actual Symmetries; Asymptotic Freedom; Confinement; Chiral Symmetry Breaking; Chiral Anomalies and Instantons; High Temperature QCD: Asymptotic Properties; Significance of High Temperature QCD; Numerical Indications for Quasi-Free Behavior; Ideas About Quark-Gluon Plasma; Screening Versus Confinement; Models of Chiral Symmetry Breaking; More Refined Numerical Experiments; High-Temperature QCD: Phase Transitions; Yoga of Phase Transitions and Order Parameters; Application to Glue Theories; Application to Chiral Transitions; Close Up on Two Flavors; A Genuine Critical Point! (?); High-Density QCD: Methods; Hopes, Doubts, and Fruition; Another Renormalization Group; Pairing Theory; Taming the Magnetic Singularity; High-Density QCD: Color-Flavor Locking and Quark-Hadron Continuity; Gauge Symmetry (Non)Breaking; Symmetry Accounting; Elementary Excitations; A Modified Photon; Quark-Hadron Continuity; Remembrance of Things Past; More Quarks; Fewer Quarks and Reality
None
2011-10-06
Modern QCD - Lecture 3 We will introduce processes with initial-state hadrons and discuss parton distributions, sum rules, as well as the need for a factorization scale once radiative corrections are taken into account. We will then discuss the DGLAP equation, the evolution of parton densities, as well as ways in which parton densities are extracted from data.
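For orientation, the leading-order non-singlet DGLAP equation discussed in this lecture reads (schematically; the full equation couples quark and gluon densities through a matrix of splitting functions):

```latex
\frac{\partial q(x, \mu^2)}{\partial \ln \mu^2}
  = \frac{\alpha_s(\mu^2)}{2\pi}
    \int_x^1 \frac{dz}{z}\, P_{qq}(z)\, q\!\left(\frac{x}{z}, \mu^2\right)
```

Here μ is the factorization scale and P_qq(z) is the quark-to-quark splitting function; parton densities are extracted from data at one scale and evolved to another with this equation.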
Urban, Federico R.; Zhitnitsky, Ariel R.
2010-08-30
We review two mechanisms rooted in the infrared sector of QCD which, by exploiting the properties of the QCD ghost, as introduced by Veneziano, provide new insight into the cosmological dark energy problem, first, in the form of a Casimir-like energy from quantising QCD in a box, and second, in the form of additional, time-dependent, vacuum energy density in an expanding universe. Based on [1, 2].
Wilson loops and QCD/string scattering amplitudes
Makeenko, Yuri; Olesen, Poul
2009-07-15
We generalize modern ideas about the duality between Wilson loops and scattering amplitudes in N=4 super Yang-Mills theory to large N QCD by deriving a general relation between QCD meson scattering amplitudes and Wilson loops. We then investigate properties of the open-string disk amplitude integrated over reparametrizations. When the Wilson loop is approximated by the area behavior, we find that the QCD scattering amplitude is a convolution of the standard Koba-Nielsen integrand and a kernel. As usual, poles originate from the first factor, whereas no (momentum-dependent) poles can arise from the kernel. We show that the kernel becomes a constant when the number of external particles becomes large. The usual Veneziano amplitude then emerges in the kinematical regime where the Wilson loop can be reliably approximated by the area behavior. In this case, we obtain a direct duality between Wilson loops and scattering amplitudes when spatial variables and momenta are interchanged, in analogy with the N=4 super Yang-Mills theory case.
Kernel optimization in discriminant analysis.
You, Di; Hamsici, Onur C; Martinez, Aleix M
2011-03-01
Kernel mapping is one of the most used approaches to intrinsically derive nonlinear classifiers. The idea is to use a kernel function which maps the original nonlinearly separable problem to a space of intrinsically larger dimensionality where the classes are linearly separable. A major problem in the design of kernel methods is to find the kernel parameters that make the problem linear in the mapped representation. This paper derives the first criterion that specifically aims to find a kernel representation where the Bayes classifier becomes linear. We illustrate how this result can be successfully applied in several kernel discriminant analysis algorithms. Experimental results, using a large number of databases and classifiers, demonstrate the utility of the proposed approach. The paper also shows (theoretically and experimentally) that a kernel version of Subclass Discriminant Analysis yields the highest recognition rates. PMID:20820072
MC Kernel: Broadband Waveform Sensitivity Kernels for Seismic Tomography
NASA Astrophysics Data System (ADS)
Stähler, Simon C.; van Driel, Martin; Auer, Ludwig; Hosseini, Kasra; Sigloch, Karin; Nissen-Meyer, Tarje
2016-04-01
We present MC Kernel, a software implementation to calculate seismic sensitivity kernels on arbitrary tetrahedral or hexahedral grids across the whole observable seismic frequency band. Seismic sensitivity kernels are the basis for seismic tomography, since they map measurements to model perturbations. Their calculation over the whole frequency range was so far only possible with approximate methods (Dahlen et al. 2000); fully numerical methods were restricted to the lower frequency range (usually below 0.05 Hz, Tromp et al. 2005). With our implementation, it is possible to compute accurate sensitivity kernels for global tomography across the observable seismic frequency band. These kernels rely on wavefield databases computed via AxiSEM (www.axisem.info), and thus on spherically symmetric models. The advantage is that frequencies up to 0.2 Hz and higher can be accessed. Since the usage of irregular, adapted grids is an integral part of regularisation in seismic tomography, MC Kernel works in an inversion-grid-centred fashion: a Monte-Carlo integration method is used to project the kernel onto each basis function, which allows control of the desired precision of the kernel estimation. It also means that the code concentrates calculation effort on regions of interest without prior assumptions on the kernel shape. The code makes extensive use of redundancies in calculating kernels for different receivers or frequency-pass-bands for one earthquake, to facilitate its usage in large-scale global seismic tomography.
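The inversion-grid-centred Monte Carlo projection can be illustrated with a toy version: integrate a pointwise kernel over one basis-function cell, drawing samples until the running standard error meets a requested precision. This is a schematic sketch, not the MC Kernel code, which evaluates wavefield-based kernels on tetrahedral or hexahedral cells.

```python
import numpy as np

def mc_project(kernel, lo, hi, tol=1e-3, batch=1000, max_samples=10**6, rng=0):
    """Monte Carlo projection of a pointwise kernel function onto the
    indicator basis function of the box [lo, hi]: estimates V * E[kernel(x)].
    Sampling continues until the standard error drops below `tol`."""
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    vol = np.prod(hi - lo)
    vals = np.empty(0)
    while len(vals) < max_samples:
        x = rng.uniform(lo, hi, size=(batch, len(lo)))  # uniform points in cell
        vals = np.concatenate([vals, kernel(x)])
        err = vol * vals.std(ddof=1) / np.sqrt(len(vals))  # running std error
        if err < tol:
            break
    return vol * vals.mean(), err
```

Because the precision is controlled per basis function, effort automatically concentrates where the kernel varies strongly, mirroring the design choice described in the abstract.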
Lee, Myung Hee; Liu, Yufeng
2013-12-01
The continuum regression technique provides an appealing regression framework connecting ordinary least squares, partial least squares and principal component regression in one family. It offers some insight into the underlying regression model for a given application and helps to provide a deep understanding of various regression techniques. Despite this useful framework, however, continuum regression has so far been developed only for linear regression. In many applications, nonlinear regression is necessary. We consider the extension of continuum regression from linear models to nonlinear models using kernel learning. The proposed kernel continuum regression technique is quite general and can handle very flexible regression model estimation. An efficient algorithm is developed for fast implementation. Numerical examples demonstrate the usefulness of the proposed technique. PMID:24058224
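The kernelization recipe used here, replacing the data matrix by a Gram matrix and solving in the dual, can be illustrated with its simplest relative, kernel ridge regression. This is a hedged sketch of the general recipe, not the continuum-regression estimator itself, which interpolates between OLS, PLS and PCR.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_ridge_fit(X, y, lam=1e-3, gamma=1.0):
    # Dual coefficients alpha solving (K + lam I) alpha = y
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def kernel_ridge_predict(X_train, alpha, X_new, gamma=1.0):
    # Prediction is a kernel expansion over the training points
    return rbf_kernel(X_new, X_train, gamma) @ alpha
```

With a small ridge parameter the fit interpolates smooth nonlinear targets, which a linear-only framework cannot capture.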
Norniella, Olga; /Barcelona, IFAE
2005-01-01
Recent QCD measurements from the CDF collaboration at the Tevatron are presented, together with future prospects as the luminosity increases. The measured inclusive jet cross section is compared to pQCD NLO predictions. Precise measurements on jet shapes and hadronic energy flows are compared to different phenomenological models that describe gluon emissions and the underlying event in hadron-hadron interactions.
Lattice QCD in rotating frames.
Yamamoto, Arata; Hirono, Yuji
2013-08-23
We formulate lattice QCD in rotating frames to study the physics of QCD matter under rotation. We construct the lattice QCD action with the rotational metric and apply it to the Monte Carlo simulation. As the first application, we calculate the angular momenta of gluons and quarks in the rotating QCD vacuum. This new framework is useful to analyze various rotation-related phenomena in QCD. PMID:24010426
Exclusive QCD processes, quark-hadron duality, and the transition to perturbative QCD
NASA Astrophysics Data System (ADS)
Corianò, Claudio; Li, Hsiang-nan; Savkli, Cetin
1998-07-01
Experiments at CEBAF will scan the intermediate-energy region of the QCD dynamics for the nucleon form factors and for Compton Scattering. These experiments will definitely clarify the role of resummed perturbation theory and of quark-hadron duality (QCD sum rules) in this regime. With this perspective in mind, we review the factorization theorem of perturbative QCD for exclusive processes at intermediate energy scales, which embodies the transverse degrees of freedom of a parton and the Sudakov resummation of the corresponding large logarithms. We concentrate on the pion and proton electromagnetic form factors and on pion Compton scattering. New ingredients, such as the evolution of the pion wave function and the complete two-loop expression of the Sudakov factor, are included. The sensitivity of our predictions to the infrared cutoff for the Sudakov evolution is discussed. We also elaborate on QCD sum rule methods for Compton Scattering, which provide an alternative description of this process. We show that, by comparing the local duality analysis to resummed perturbation theory, it is possible to describe the transition of exclusive processes to perturbative QCD.
Heavy quarkonium production at collider energies: Factorization and evolution
NASA Astrophysics Data System (ADS)
Kang, Zhong-Bo; Ma, Yan-Qing; Qiu, Jian-Wei; Sterman, George
2014-08-01
We present a perturbative QCD factorization formalism for inclusive production of heavy quarkonia of large transverse momentum p_T at collider energies, including both leading power (LP) and next-to-leading power (NLP) behavior in p_T. We demonstrate that both LP and NLP contributions can be factorized in terms of perturbatively calculable short-distance partonic coefficient functions and universal nonperturbative fragmentation functions, and derive the evolution equations that are implied by the factorization. We identify projection operators for all channels of the factorized LP and NLP infrared safe short-distance partonic hard parts, and corresponding operator definitions of fragmentation functions. For the NLP, we focus on the contributions involving the production of a heavy quark pair, a necessary condition for producing a heavy quarkonium. We evaluate the first nontrivial order of evolution kernels for all relevant fragmentation functions, and discuss the role of NLP contributions.
Brodsky, Stanley J.; de Teramond, Guy F.; /Costa Rica U.
2012-02-16
-front QCD Hamiltonian 'Light-Front Holography'. Light-Front Holography is in fact one of the most remarkable features of the AdS/CFT correspondence. The Hamiltonian equation of motion in the light-front (LF) is frame independent and has a structure similar to eigenmode equations in AdS space. This makes a direct connection of QCD with AdS/CFT methods possible. Remarkably, the AdS equations correspond to the kinetic energy terms of the partons inside a hadron, whereas the interaction terms build confinement and correspond to the truncation of AdS space in an effective dual gravity approximation. One can also study the gauge/gravity duality starting from the bound-state structure of hadrons in QCD quantized in the light-front. The LF Lorentz-invariant Hamiltonian equation for the relativistic bound-state system is P_μ P^μ |ψ(P)⟩ = (P^+ P^- - P_⊥^2)|ψ(P)⟩ = M^2 |ψ(P)⟩, with P^± = P^0 ± P^3, where the LF time evolution operator P^- is determined canonically from the QCD Lagrangian. To a first semiclassical approximation, where quantum loops and quark masses are not included, this leads to a LF Hamiltonian equation which describes the bound-state dynamics of light hadrons in terms of an invariant impact variable ζ which measures the separation of the partons within the hadron at equal light-front time τ = x^0 + x^3. This allows us to identify the holographic variable z in AdS space with the impact variable ζ. The resulting Lorentz-invariant Schroedinger equation for general spin incorporates color confinement and is systematically improvable. Light-front holographic methods were originally introduced by matching the electromagnetic current matrix elements in AdS space with the corresponding expression using LF theory in physical space time.
It was also shown that one obtains identical holographic mapping using the matrix elements of the energy-momentum tensor by perturbing
Kernel Phase and Kernel Amplitude in Fizeau Imaging
NASA Astrophysics Data System (ADS)
Pope, Benjamin J. S.
2016-09-01
Kernel phase interferometry is an approach to high angular resolution imaging which enhances the performance of speckle imaging with adaptive optics. Kernel phases are self-calibrating observables that generalize the idea of closure phases from non-redundant arrays to telescopes with arbitrarily shaped pupils, by considering a matrix-based approximation to the diffraction problem. In this paper I discuss the recent history of kernel phase, in particular in the matrix-based study of sparse arrays, and propose an analogous generalization of the closure amplitude to kernel amplitudes. This new approach can self-calibrate throughput and scintillation errors in optical imaging, which extends the power of kernel phase-like methods to symmetric targets where amplitude and not phase calibration can be a significant limitation, and will enable further developments in high angular resolution astronomy.
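The matrix-based construction can be sketched directly: if instrumental pupil phases φ enter the measured Fourier phases as Φ ≈ Φ₀ + Aφ, then any operator K with KA = 0 yields self-calibrating observables KΦ. A minimal sketch, with a hypothetical transfer matrix A standing in for the real pupil model, obtains K from the SVD:

```python
import numpy as np

def kernel_operator(A, tol=1e-10):
    """Rows spanning the left null space of the phase-transfer matrix A,
    so that K @ A = 0: observables K @ Phi are immune to pupil-plane
    phase errors phi that enter the Fourier phases as A @ phi."""
    U, s, Vt = np.linalg.svd(A)
    rank = np.sum(s > tol)
    return U[:, rank:].T  # (m - rank, m) kernel operator
```

For an m-baseline model of rank r this yields m − r kernel phases, the analogue of closure phases for an arbitrarily shaped pupil.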
NASA Astrophysics Data System (ADS)
Lutz, Matthias F. M.; Lange, Jens Sören; Pennington, Michael; Bettoni, Diego; Brambilla, Nora; Crede, Volker; Eidelman, Simon; Gillitzer, Albrecht; Gradl, Wolfgang; Lang, Christian B.; Metag, Volker; Nakano, Takashi; Nieves, Juan; Neubert, Sebastian; Oka, Makoto; Olsen, Stephen L.; Pappagallo, Marco; Paul, Stephan; Pelizäus, Marc; Pilloni, Alessandro; Prencipe, Elisabetta; Ritman, Jim; Ryan, Sinead; Thoma, Ulrike; Uwer, Ulrich; Weise, Wolfram
2016-04-01
We report on the EMMI Rapid Reaction Task Force meeting 'Resonances in QCD', which took place at GSI October 12-14, 2015. A group of 26 people met to discuss the physics of resonances in QCD. The aim of the meeting was defined by the following three key questions: What is needed to understand the physics of resonances in QCD? Where does QCD lead us to expect resonances with exotic quantum numbers? What experimental efforts are required to arrive at a coherent picture? For light mesons and baryons only those with up, down and strange quark content were considered. For heavy-light and heavy-heavy meson systems, those with charm quarks were the focus. This document summarizes the discussions by the participants, which in turn led to the coherent conclusions we present here.
NASA Astrophysics Data System (ADS)
Geiger, Klaus
1997-08-01
VNI is a general-purpose Monte Carlo event generator, which includes the simulation of lepton-lepton, lepton-hadron, lepton-nucleus, hadron-hadron, hadron-nucleus, and nucleus-nucleus collisions. On the basis of renormalization-group improved parton description and quantum-kinetic theory, it uses the real-time evolution of parton cascades in conjunction with a self-consistent hadronization scheme that is governed by the dynamics itself. The causal evolution from a specific initial state (determined by the colliding beam particles) is followed by the time development of the phase-space densities of partons, pre-hadronic parton clusters, and final-state hadrons, in position space, momentum space and color space. The parton evolution is described in terms of a space-time generalization of the familiar momentum-space description of multiple (semi) hard interactions in QCD, involving 2 → 2 parton collisions, 2 → 1 parton fusion processes, and 1 → 2 radiation processes. The formation of color-singlet pre-hadronic clusters and their decays into hadrons, on the other hand, is treated by using a spatial criterion motivated by confinement and a non-perturbative model for hadronization. This article gives a brief review of the physics underlying VNI, which is followed by a detailed description of the program itself. The latter program description emphasizes easy-to-use pragmatism and explains how to use the program (including a simple example), annotates input and control parameters, and discusses output data provided by it.
NASA Astrophysics Data System (ADS)
Deur, Alexandre; Brodsky, Stanley J.; de Téramond, Guy F.
2016-09-01
We review the present theoretical and empirical knowledge for αs, the fundamental coupling underlying the interactions of quarks and gluons in Quantum Chromodynamics (QCD). The dependence of αs(Q2) on momentum transfer Q encodes the underlying dynamics of hadron physics, from color confinement in the infrared domain to asymptotic freedom at short distances. We review constraints on αs(Q2) at high Q2, as predicted by perturbative QCD, and its analytic behavior at small Q2, based on models of nonperturbative dynamics. In the introductory part of this review, we explain the phenomenological meaning of the coupling, the reason for its running, and the challenges facing a complete understanding of its analytic behavior in the infrared domain. In the second, more technical, part of the review, we discuss the behavior of αs(Q2) in the high momentum transfer domain of QCD. We review how αs is defined, including its renormalization scheme dependence, the definition of its renormalization scale, the utility of effective charges, as well as "Commensurate Scale Relations" which connect the various definitions of the QCD coupling without renormalization-scale ambiguity. We also report recent significant measurements and advanced theoretical analyses which have led to precise QCD predictions at high energy. As an example of an important optimization procedure, we discuss the "Principle of Maximum Conformality", which enhances QCD's predictive power by removing the dependence of the predictions for physical observables on the choice of theoretical conventions such as the renormalization scheme. In the last part of the review, we discuss the challenge of understanding the analytic behavior of αs(Q2) in the low momentum transfer domain. We survey various theoretical models for the nonperturbative strongly coupled regime, such as the light-front holographic approach to QCD. This new framework predicts the form of the quark-confinement potential underlying hadron spectroscopy and
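The perturbative running discussed in this review is dominated at one loop by the β0 coefficient. A minimal sketch of the standard one-loop solution αs(Q²) = 4π / (β0 ln(Q²/Λ²)) with nf active flavors (the Λ value below is illustrative, not a fitted number from the review):

```python
import math

def alpha_s_one_loop(Q, Lambda_QCD=0.2, nf=5):
    """One-loop QCD running coupling; Q and Lambda_QCD in GeV."""
    beta0 = 11.0 - 2.0 * nf / 3.0            # leading beta-function coefficient
    return 4.0 * math.pi / (beta0 * math.log(Q**2 / Lambda_QCD**2))

# Asymptotic freedom: the coupling decreases as the momentum transfer grows.
assert alpha_s_one_loop(91.19) < alpha_s_one_loop(10.0) < alpha_s_one_loop(2.0)
```

The formula also makes visible the infrared problem the review emphasizes: as Q² approaches Λ², the perturbative expression diverges, which is precisely where nonperturbative definitions of the coupling are needed.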
Skands, Peter Z.; /Fermilab
2005-07-01
Recent developments in QCD phenomenology have spurred on several improved approaches to Monte Carlo event generation, relative to the post-LEP state of the art. In this brief review, the emphasis is placed on approaches for (1) consistently merging fixed-order matrix element calculations with parton shower descriptions of QCD radiation, (2) improving the parton shower algorithms themselves, and (3) improving the description of the underlying event in hadron collisions.
Small-x evolution in the next-to-leading order
Giovanni Antonio Chirilli
2009-12-01
After a brief introduction to deep inelastic scattering in the Bjorken limit and in the Regge limit, we discuss the operator product expansion in terms of non-local string operators and in terms of Wilson lines. We show how the high-energy behavior of amplitudes in gauge theories can be reformulated in terms of the evolution of Wilson-line operators. In the leading order this evolution is governed by the non-linear Balitsky-Kovchegov (BK) equation. In order to see whether this equation is relevant for existing or future deep inelastic scattering (DIS) accelerators (such as the Electron Ion Collider (EIC) or the Large Hadron electron Collider (LHeC)) one needs to know the next-to-leading order (NLO) corrections. In addition, the NLO corrections define the scale of the running coupling constant in the BK equation and therefore determine the magnitude of the leading-order cross sections. In Quantum Chromodynamics (QCD), the next-to-leading order BK equation has both conformal and non-conformal parts. The NLO kernel for the composite operators resolves into a sum of the conformal part and the running-coupling part. The QCD kernel of the BK equation is presented.
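For reference, a sketch of the leading-order BK equation mentioned above, written for the dipole amplitude N(x, y) in coordinate space (η is the rapidity evolution variable; the notation follows the standard literature rather than any specific convention of this thesis):

```latex
\frac{\partial N(x,y)}{\partial \eta}
= \frac{\alpha_s N_c}{2\pi^2} \int d^2 z \,
\frac{(x-y)^2}{(x-z)^2 \, (z-y)^2}
\left[ N(x,z) + N(z,y) - N(x,y) - N(x,z)\, N(z,y) \right]
```

The linear terms reproduce BFKL evolution, while the quadratic term N(x,z)N(z,y) is the non-linearity responsible for saturation; the NLO corrections discussed in the text fix the scale of αs appearing in this kernel.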
Bruemmer, David J.
2009-11-17
A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors, that incorporate robot attributes and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between an operator intervention and a robot initiative and may include multiple levels with at least a teleoperation mode configured to maximize the operator intervention and minimize the robot initiative and an autonomous mode configured to minimize the operator intervention and maximize the robot initiative. Within the RIK at least the cognitive level includes the dynamic autonomy structure.
Ultrahigh energy neutrinos and nonlinear QCD dynamics
Machado, Magno V.T.
2004-09-01
The ultrahigh energy neutrino-nucleon cross sections are computed taking into account different phenomenological implementations of the nonlinear QCD dynamics. Based on the color dipole framework, the results for the saturation model supplemented by the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) evolution as well as for the Balitskii-Fadin-Kuraev-Lipatov (BFKL) formalism in the geometric scaling regime are presented. They are contrasted with recent calculations using next-to-leading order DGLAP and unified BFKL-DGLAP formalisms.
Tracking flame base movement and interaction with ignition kernels using topological methods
NASA Astrophysics Data System (ADS)
Mascarenhas, A.; Grout, R. W.; Yoo, C. S.; Chen, J. H.
2009-07-01
We segment the stabilization region in a simulation of a lifted jet flame based on its topology induced by the Y_OH field. Our segmentation method yields regions that correspond to the flame base and to potential auto-ignition kernels. We apply a region-overlap-based tracking method to follow the flame base and the kernels over time, to study the evolution of kernels, and to detect when the kernels merge with the flame. The combination of our segmentation and tracking methods allows us to observe flame stabilization via merging between the flame base and kernels; we also obtain Y_CH2O histories inside the kernels and detect a distinct decrease in radical concentration during transition to a developed flame.
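The region-overlap tracking step described above can be sketched as follows: given labeled segmentations at consecutive time steps, match each region to the successor region with which it shares the most pixels, so that persistence and merge events fall out of the mapping. The label arrays below are tiny synthetic examples, not simulation data:

```python
import numpy as np

def overlap_tracking(labels_t0, labels_t1):
    """Map each region id at t0 to the t1 region id with maximal pixel overlap."""
    mapping = {}
    for rid in np.unique(labels_t0):
        if rid == 0:                       # 0 = background
            continue
        succ = labels_t1[labels_t0 == rid]
        succ = succ[succ != 0]
        if succ.size:
            ids, counts = np.unique(succ, return_counts=True)
            mapping[int(rid)] = int(ids[np.argmax(counts)])
    return mapping

# Two kernels at t0; at t1 region 2 has merged into region 1 (a merge event).
t0 = np.array([[1, 1, 0, 2, 2],
               [1, 1, 0, 2, 2]])
t1 = np.array([[1, 1, 1, 1, 1],
               [1, 1, 0, 0, 0]])
assert overlap_tracking(t0, t1) == {1: 1, 2: 1}
```

A merge with the flame base is detected exactly when two distinct t0 ids map to the same t1 id, as in this example.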
FOREWORD: Extreme QCD 2012 (xQCD)
NASA Astrophysics Data System (ADS)
Alexandru, Andrei; Bazavov, Alexei; Liu, Keh-Fei
2013-04-01
The Extreme QCD 2012 conference, held at the George Washington University in August 2012, celebrated the 10th event in the series. It has been held annually since 2003 at different locations: San Carlos (2011), Bad Honnef (2010), Seoul (2009), Raleigh (2008), Rome (2007), Brookhaven (2006), Swansea (2005), Argonne (2004), and Nara (2003). As usual, it was a very productive and inspiring meeting that brought together experts in the field of finite-temperature QCD, both theoretical and experimental. On the experimental side, we heard about recent results from major experiments, such as PHENIX and STAR at Brookhaven National Laboratory, ALICE and CMS at CERN, and also about the constraints on the QCD phase diagram coming from astronomical observations of one of the largest laboratories one can imagine, neutron stars. The theoretical contributions covered a wide range of topics, including QCD thermodynamics at zero and finite chemical potential, new ideas to overcome the sign problem in the latter case, fluctuations of conserved charges and how they allow one to connect calculations in lattice QCD with experimentally measured quantities, finite-temperature behavior of theories with many flavors of fermions, properties and the fate of heavy quarkonium states in the quark-gluon plasma, and many others. The participants took the time to write up and revise their contributions and submit them for publication in these proceedings. Thanks to their efforts, we now have a good record of the ideas presented and discussed during the workshop. We hope that this will serve both as a reminder and as a reference for the participants and for other researchers interested in the physics of nuclear matter at high temperatures and densities. To preserve the atmosphere of the event the contributions are ordered in the same way as the talks at the conference. We are honored to have helped organize the 10th meeting in this series, a milestone that reflects the lasting interest in this
ERIC Educational Resources Information Center
Mayr, Ernst
1978-01-01
Traces the history of evolution theory from Lamarck and Darwin to the present. Discusses natural selection in detail. Suggests that, besides biological evolution, there is also a cultural evolution which is more rapid than the former. (MA)
Harris, R.
1992-05-01
We present measurements of jet production and isolated prompt photon production in p̄p collisions at √s = 1.8 TeV from the 1988-89 run of the Collider Detector at Fermilab (CDF). To test QCD with jets, the inclusive jet cross section (p̄p → J + X) and two-jet angular distributions (p̄p → JJ + X) are compared to QCD predictions and are used to search for composite quarks. The ratio of the scaled jet cross sections at two Tevatron collision energies (√s = 546 and 1800 GeV) is compared to QCD predictions for X_T scaling violations. Also, we present the first evidence for QCD interference effects (color coherence) in third-jet production (p̄p → JJJ + X). To test QCD with photons, we present measurements of the transverse momentum spectrum of single isolated prompt photon production (p̄p → γ + X), double isolated prompt photon production (p̄p → γγ + X), and the angular distribution of photon-jet events (p̄p → γJ + X). We have also measured the isolated production ratio of η and π⁰ mesons (p̄p → η + X)/(p̄p → π⁰ + X) = 1.02 ± 0.15 (stat) ± 0.23 (sys).
Blazey, G.C.
1995-05-01
Selected recent Quantum Chromodynamics (QCD) results from the D0 and CDF experiments at the Fermilab Tevatron are presented and discussed. The inclusive jet and inclusive triple differential dijet cross sections are compared to next-to-leading order QCD calculations. The sensitivity of the dijet cross section to parton distribution functions (for hadron momentum fractions ≈ 0.01 to ≈ 0.4) will constrain the gluon distribution of the proton. Two analyses of dijet production at large rapidity separation are presented. The first analysis tests the contributions of higher order processes to dijet production and can be considered a test of BFKL or GLAP parton evolution. The second analysis yields a strong rapidity gap signal consistent with colorless exchange between the scattered partons. The prompt photon inclusive cross section is consistent with next-to-leading order QCD only at the highest transverse momenta. The discrepancy at lower momenta may be indicative of higher order processes imparting a transverse momentum or "k_T" to the partonic interaction. The first measurement of the strong coupling constant from the Tevatron is also presented. The coupling constant can be determined from the ratio of W + 1 jet to W + 0 jet cross sections and a next-to-leading order QCD calculation.
Electroweak symmetry breaking via QCD.
Kubo, Jisuke; Lim, Kher Sham; Lindner, Manfred
2014-08-29
We propose a new mechanism to generate the electroweak scale within the framework of QCD, which is extended to include conformally invariant scalar degrees of freedom belonging to a larger irreducible representation of SU(3)c. The electroweak symmetry breaking is triggered dynamically via the Higgs portal by the condensation of the colored scalar field around 1 TeV. The mass of the colored boson is restricted to be 350 GeV ≲ m_S ≲ 3 TeV, with the upper bound obtained from perturbative renormalization group evolution. This implies that the colored boson can be produced at the LHC. If the colored boson is electrically charged, the branching fraction of the Higgs boson decaying into two photons can slightly increase, and moreover, it can be produced at future linear colliders. Our idea of nonperturbative electroweak scale generation can serve as a new starting point for more realistic model building in solving the hierarchy problem. PMID:25215976
Kernel Methods on Riemannian Manifolds with Gaussian RBF Kernels.
Jayasumana, Sadeep; Hartley, Richard; Salzmann, Mathieu; Li, Hongdong; Harandi, Mehrtash
2015-12-01
In this paper, we develop an approach to exploiting kernel methods with manifold-valued data. In many computer vision problems, the data can be naturally represented as points on a Riemannian manifold. Due to the non-Euclidean geometry of Riemannian manifolds, usual Euclidean computer vision and machine learning algorithms yield inferior results on such data. In this paper, we define Gaussian radial basis function (RBF)-based positive definite kernels on manifolds that permit us to embed a given manifold with a corresponding metric in a high dimensional reproducing kernel Hilbert space. These kernels make it possible to utilize algorithms developed for linear spaces on nonlinear manifold-valued data. Since the Gaussian RBF defined with any given metric is not always positive definite, we present a unified framework for analyzing the positive definiteness of the Gaussian RBF on a generic metric space. We then use the proposed framework to identify positive definite kernels on two specific manifolds commonly encountered in computer vision: the Riemannian manifold of symmetric positive definite matrices and the Grassmann manifold, i.e., the Riemannian manifold of linear subspaces of a Euclidean space. We show that many popular algorithms designed for Euclidean spaces, such as support vector machines, discriminant analysis and principal component analysis can be generalized to Riemannian manifolds with the help of such positive definite Gaussian kernels. PMID:26539851
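As a concrete sketch of the idea, one can build a Gaussian RBF Gram matrix on the manifold of SPD matrices using the log-Euclidean metric (one choice of metric analyzed in such frameworks) and check positive semi-definiteness for a small random sample. The matrices and the value of gamma below are illustrative, not taken from the paper:

```python
import numpy as np

def spd_logm(S):
    """Matrix logarithm of a symmetric positive definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def log_euclidean_rbf(mats, gamma=0.5):
    """Gaussian RBF Gram matrix with the log-Euclidean distance on SPD matrices."""
    logs = [spd_logm(S) for S in mats]
    n = len(logs)
    G = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            d2 = np.sum((logs[i] - logs[j]) ** 2)   # squared Frobenius distance of logs
            G[i, j] = np.exp(-gamma * d2)
    return G

rng = np.random.default_rng(1)
mats = []
for _ in range(5):
    A = rng.normal(size=(3, 3))
    mats.append(A @ A.T + 3 * np.eye(3))            # random SPD matrices

G = log_euclidean_rbf(mats)
assert np.min(np.linalg.eigvalsh(G)) > -1e-10       # Gram matrix is positive semi-definite
```

The log-Euclidean case is positive definite because the matrix logarithm embeds SPD matrices into a vector space; the paper's framework addresses exactly the harder question of when a Gaussian RBF with a *generic* metric stays positive definite.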
Deur, Alexandre; Brodsky, Stanley J.; de Téramond, Guy F.
2016-05-09
Here, we review present knowledge on $\alpha_s$, the Quantum Chromodynamics (QCD) running coupling. The dependence of $\alpha_s(Q^2)$ on momentum transfer $Q$ encodes the underlying dynamics of hadron physics, from color confinement in the infrared domain to asymptotic freedom at short distances. We will survey our present theoretical and empirical knowledge of $\alpha_s(Q^2)$, including constraints at high $Q^2$ predicted by perturbative QCD, and constraints at small $Q^2$ based on models of nonperturbative dynamics. In the first, introductory, part of this review, we explain the phenomenological meaning of the coupling, the reason for its running, and the challenges facing a complete understanding of its analytic behavior in the infrared domain. In the second, more technical, part of the review, we discuss $\alpha_s(Q^2)$ in the high momentum transfer domain of QCD. We review how $\alpha_s$ is defined, including its renormalization scheme dependence, the definition of its renormalization scale, the utility of effective charges, as well as "Commensurate Scale Relations" which connect the various definitions of the QCD coupling without renormalization-scale ambiguity. We also report recent important experimental measurements and advanced theoretical analyses which have led to precise QCD predictions at high energy. As an example of an important optimization procedure, we discuss the "Principle of Maximum Conformality", which enhances QCD's predictive power by removing the dependence of the predictions for physical observables on the choice of the gauge and renormalization scheme. In the last part of the review, we discuss $\alpha_s(Q^2)$ in the low momentum transfer domain, where there has been no consensus on how to define $\alpha_s(Q^2)$ or describe its analytic behavior. We will discuss the various approaches used for low-energy calculations. Among them, we will discuss the light-front holographic approach to QCD in the strongly coupled
Mixing state of bi-component mixtures under aggregation with a product kernel
NASA Astrophysics Data System (ADS)
Fernández-Díaz, J. M.; Gómez-García, G. J.
2010-05-01
We analyze the aggregation of a two-component system with a product kernel, to determine its evolution in time during progressive mixing. The evolution is governed by the Smoluchowski equation, yielding gelation from a certain time onward. In the past, equilibrium (or asymptotic) solutions have been used to study the mixing of bi-component mixtures for non-gelling kernels. In this letter we show that asymptotic solutions are invalid for describing the mixing behavior in the product kernel case (even before gelation). Moreover, an equilibrium concentration is never reached: particles of every composition exist at all times.
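The discrete Smoluchowski equation with product kernel K(i, j) = i·j can be integrated with a simple explicit Euler sketch; for a mono-disperse initial condition the gel time is t_g = 1, and before it the total mass is conserved. For brevity this sketch is one-component (composition dropped) and the truncation size, time step, and horizon are illustrative choices, not values from the letter:

```python
import numpy as np

def smoluchowski_product_step(n, dt):
    """One explicit Euler step of dn_k/dt for the product kernel K(i, j) = i*j.

    n[k] is the concentration of k-mers (n[0] unused).
    """
    kmax = len(n) - 1
    sizes = np.arange(kmax + 1, dtype=float)
    m1 = np.sum(sizes * n)                 # first moment: total tracked mass
    dn = np.zeros_like(n)
    for k in range(1, kmax + 1):
        gain = 0.5 * sum(i * (k - i) * n[i] * n[k - i] for i in range(1, k))
        loss = n[k] * k * m1               # K(k, j) summed against j*n_j gives k*m1
        dn[k] = gain - loss
    return n + dt * dn

# Mono-disperse start: n_1 = 1; integrate to t = 0.2, well before gelation at t = 1.
n = np.zeros(64)
n[1] = 1.0
for _ in range(200):
    n = smoluchowski_product_step(n, 1e-3)

mass = np.sum(np.arange(64) * n)
assert abs(mass - 1.0) < 0.02              # mass conserved (up to truncation error)
```

Past the gel point the same scheme shows mass leaking from the tracked sizes into the gel, which is the regime where the letter argues asymptotic solutions break down.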
Brodsky, Stanley J.; /SLAC
2007-07-06
I discuss a number of novel topics in QCD, including the use of the AdS/CFT correspondence between Anti-de Sitter space and conformal gauge theories to obtain an analytically tractable approximation to QCD in the regime where the QCD coupling is large and constant. In particular, there is an exact correspondence between the fifth-dimension coordinate z of AdS space and a specific impact variable {zeta} which measures the separation of the quark constituents within the hadron in ordinary space-time. This connection allows one to compute the analytic form of the frame-independent light-front wavefunctions of mesons and baryons, the fundamental entities which encode hadron properties and allow the computation of exclusive scattering amplitudes. I also discuss a number of novel phenomenological features of QCD. Initial- and final-state interactions from gluon-exchange, normally neglected in the parton model, have a profound effect in QCD hard-scattering reactions, leading to leading-twist single-spin asymmetries, diffractive deep inelastic scattering, diffractive hard hadronic reactions, the breakdown of the Lam Tung relation in Drell-Yan reactions, and nuclear shadowing and non-universal antishadowing--leading-twist physics not incorporated in the light-front wavefunctions of the target computed in isolation. I also discuss tests of hidden color in nuclear wavefunctions, the use of diffraction to materialize the Fock states of a hadronic projectile and test QCD color transparency, and anomalous heavy quark effects. The presence of direct higher-twist processes where a proton is produced in the hard subprocess can explain the large proton-to-pion ratio seen in high centrality heavy ion collisions.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 8 2011-01-01 2011-01-01 false Kernel weight. 981.9 Section 981.9 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels,...
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Kernel weight. 981.9 Section 981.9 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels,...
Cusp Kernels for Velocity-Changing Collisions
NASA Astrophysics Data System (ADS)
McGuyer, B. H.; Marsland, R., III; Olsen, B. A.; Happer, W.
2012-05-01
We introduce an analytical kernel, the “cusp” kernel, to model the effects of velocity-changing collisions on optically pumped atoms in low-pressure buffer gases. Like the widely used Keilson-Storer kernel [J. Keilson and J. E. Storer, Q. Appl. Math. 10, 243 (1952)], cusp kernels are characterized by a single parameter and preserve a Maxwellian velocity distribution. Cusp kernels and their superpositions are more useful than Keilson-Storer kernels, because they are more similar to real kernels inferred from measurements or theory and are easier to invert to find steady-state velocity distributions.
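The Keilson-Storer kernel referenced above has, in one dimension, the form K(v′ → v) ∝ exp[−(v − αv′)² / ((1 − α²)u²)], with a single memory parameter α. A quick numerical sketch verifying the Maxwellian-preserving property the abstract mentions (the grid and parameter values are illustrative):

```python
import numpy as np

alpha, u = 0.7, 1.0                        # memory parameter and thermal speed
v = np.linspace(-8, 8, 801)
dv = v[1] - v[0]

# Keilson-Storer collision kernel K[i, j] = K(v_j -> v_i), normalized over outgoing v_i.
K = np.exp(-(v[:, None] - alpha * v[None, :]) ** 2 / ((1 - alpha**2) * u**2))
K /= K.sum(axis=0, keepdims=True) * dv

maxwellian = np.exp(-v**2 / u**2)
maxwellian /= maxwellian.sum() * dv

# The Maxwellian is a fixed point: applying the kernel to it returns it.
out = (K * maxwellian[None, :]).sum(axis=1) * dv
assert np.allclose(out, maxwellian, atol=1e-6)
```

The same fixed-point test is the natural sanity check for the cusp kernels introduced in the paper, since they are constructed to share this property.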
QCD Phase Diagram and the Constant Mass Approximation
NASA Astrophysics Data System (ADS)
Ahmad, A.; Ayala, A.; Bashir, A.; Gutiérrez, E.; Raya, A.
2015-11-01
Dynamical generation of quark masses in the infrared region of QCD plays an important role in understanding the peculiar nature of the physics of hadrons. As is known, the solution of the QCD gap equation for the quark mass function is flat at low momentum, but smoothly evolves to the perturbative behavior at high momentum. In this work, we use an effective truncation of the QCD gap equation valid up to 1 GeV, and implement it at finite temperature and chemical potential to understand the QCD phase diagram for the chiral symmetry breaking-restoration and confinement-deconfinement phase transitions from the Schwinger-Dyson equations point of view. Our effective kernel contains a gluon dressing function with two light quark flavors N_f = 2, with current quark mass 0.0035 GeV. An effective coupling, adjusted to reproduce the behavior of the chiral condensate at finite T, complements our truncation. We find the critical end point of the phase diagram located at temperature T_E = 0.1245 GeV and baryonic chemical potential μ_B^E = 0.211 GeV.
Lattice QCD for parallel computers
NASA Astrophysics Data System (ADS)
Quadling, Henley Sean
Lattice QCD is an important tool in the investigation of Quantum Chromodynamics (QCD). This is particularly true at lower energies where traditional perturbative techniques fail, and where other non-perturbative theoretical efforts are not entirely satisfactory. Important features of QCD such as confinement and the masses of the low lying hadronic states have been demonstrated and calculated in lattice QCD simulations. In calculations such as these, non-lattice techniques in QCD have failed. However, despite the incredible advances in computer technology, a full solution of lattice QCD may still be in the too-distant future. Much effort is being expended in the search for ways to reduce the computational burden so that an adequate solution of lattice QCD is possible in the near future. There has been considerable progress in recent years, especially in the research of improved lattice actions. In this thesis, a new approach to lattice QCD algorithms is introduced, which results in very significant efficiency improvements. The new approach is explained in detail, evaluated and verified by comparing physics results with current lattice QCD simulations. The new sub-lattice layout methodology has been specifically designed for current and future hardware. Together with concurrent research into improved lattice actions and more efficient numerical algorithms, the very significant efficiency improvements demonstrated in this thesis can play an important role in allowing lattice QCD researchers access to much more realistic simulations. The techniques presented in this thesis also allow ambitious QCD simulations to be performed on cheap clusters of commodity computers.
Quark-gluon vertex model and lattice-QCD data
Bhagwat, M.S.; Tandy, P.C.
2004-11-01
A model for the dressed-quark-gluon vertex, at zero gluon momentum, is formed from a nonperturbative extension of the two Feynman diagrams that contribute at one loop in perturbation theory. The required input is an existing ladder-rainbow model Bethe-Salpeter kernel from an approach based on the Dyson-Schwinger equations; no new parameters are introduced. The model includes an Ansatz for the triple-gluon vertex. Two of the three vertex amplitudes from the model provide a pointwise description of the recent quenched-lattice-QCD data. An estimate of the effects of quenching is made.
Soft and Hard Scale QCD Dynamics in Mesons
NASA Astrophysics Data System (ADS)
Nguyen, T.; Souchlas, N. A.; Tandy, P. C.
2011-09-01
Using a ladder-rainbow kernel previously established for light quark hadron physics, we explore the extension to masses and electroweak decay constants of ground state pseudoscalar and vector quarkonia and heavy-light mesons in the c- and b-quark regions. We make a systematic study of the effectiveness of a constituent mass concept as a replacement for a heavy quark dressed propagator for such states. The difference between vector and axial vector current correlators is explored within the same model to provide an estimate of the four quark chiral condensate and the leading distance scale for the onset of non-perturbative phenomena in QCD.
Non-perturbative QCD Modeling and Meson Physics
NASA Astrophysics Data System (ADS)
Nguyen, T.; Souchlas, N. A.; Tandy, P. C.
2009-04-01
Using a ladder-rainbow kernel previously established for light quark hadron physics, we explore the extension to masses and electroweak decay constants of ground state pseudoscalar and vector quarkonia and heavy-light mesons in the c- and b-quark regions. We make a systematic study of the effectiveness of a constituent mass concept as a replacement for a heavy quark dressed propagator for such states. The difference between vector and axial vector current correlators is explored within the same model to provide an estimate of the four quark chiral condensate and the leading distance scale for the onset of non-perturbative phenomena in QCD.
Radyushkin, Anatoly V.; Efremov, Anatoly Vasilievich; Ginzburg, Ilya F.
2013-04-01
We discuss some problems concerning the application of perturbative QCD to high energy soft processes. We show that summing the contributions of the lowest twist operators for the non-singlet $t$-channel leads to a Regge-like amplitude. The singlet case is also discussed.
Brodsky, Stanley J.; Deshpande, Abhay L.; Gao, Haiyan; McKeown, Robert D.; Meyer, Curtis A.; Meziani, Zein-Eddine; Milner, Richard G.; Qiu, Jianwei; Richards, David G.; Roberts, Craig D.
2015-02-26
This White Paper presents the recommendations and scientific conclusions from the Town Meeting on QCD and Hadronic Physics that took place in the period 13-15 September 2014 at Temple University as part of the NSAC 2014 Long Range Planning process. The meeting was held in coordination with the Town Meeting on Phases of QCD and included a full day of joint plenary sessions of the two meetings. The goals of the meeting were to report and highlight progress in hadron physics in the seven years since the 2007 Long Range Plan (LRP07), and present a vision for the future by identifying the key questions and plausible paths to solutions which should define the next decade. The introductory summary details the recommendations and their supporting rationales, as determined at the Town Meeting on QCD and Hadron Physics, and the endorsements that were voted upon. The larger document is organized as follows. Section 2 highlights major progress since the 2007 LRP. It is followed, in Section 3, by a brief overview of the physics program planned for the immediate future. Finally, Section 4 provides an overview of the physics motivations and goals associated with the next QCD frontier: the Electron-Ion-Collider.
Andreas S. Kronfeld
2002-09-30
After reviewing some of the mathematical foundations and numerical difficulties facing lattice QCD, I review the status of several calculations relevant to experimental high-energy physics. The topics considered are moments of structure functions, which may prove relevant to search for new phenomena at the LHC, and several aspects of flavor physics, which are relevant to understanding CP and flavor violation.
Lincoln, Don
2016-06-28
The strongest force in the universe is the strong nuclear force, and it governs the behavior of quarks and gluons inside protons and neutrons. The theory that governs this force is quantum chromodynamics, or QCD. In this video, Fermilab's Dr. Don Lincoln explains the intricacies of this dominant component of the Standard Model.
Devlin, T.; CDF Collaboration
1996-10-01
The CDF collaboration is engaged in a broad program of QCD measurements at the Fermilab Tevatron Collider. I will discuss inclusive jet production at center-of-mass energies of 1800 GeV and 630 GeV, properties of events with very high total transverse energy and dijet angular distributions.
Plunkett, R.; The CDF Collaboration
1991-10-01
Results are presented for hadronic jet and direct photon production at √s = 1800 GeV. The data are compared with next-to-leading-order QCD calculations. A new limit on the scale of possible composite structure of the quarks is also reported.
Nathan Isgur
1997-03-01
The author presents an idiosyncratic view of baryons which calls for a marriage between quark-based and hadronic models of QCD. He advocates a treatment based on valence quark plus glue dominance of hadron structure, with the sea of quark-antiquark pairs (in the form of virtual hadron pairs) as important corrections.
Nawa, Kanabu; Suganuma, Hideo; Kojo, Toru
2007-04-15
We study baryons in holographic QCD with the D4/D8/D̄8 multi-D-brane system. In holographic QCD, the baryon appears as a topologically nontrivial chiral soliton in a four-dimensional effective theory of mesons. We call this topological soliton a brane-induced Skyrmion. Some review of D4/D8/D̄8 holographic QCD is presented from the viewpoints of recent hadron physics and QCD phenomenology. A four-dimensional effective theory with pions and ρ mesons is uniquely derived from the non-Abelian Dirac-Born-Infeld (DBI) action of the D8 brane with the D4 supergravity background at the leading order of large N_c, without small-amplitude expansion of the meson fields to discuss chiral solitons. For the hedgehog configuration of the pion and ρ-meson fields, we derive the energy functional and the Euler-Lagrange equation of the brane-induced Skyrmion from the meson effective action induced by holographic QCD. Performing the numerical calculation, we obtain the soliton solution and figure out the pion profile F(r) and the ρ-meson profile G̃(r) of the brane-induced Skyrmion with its total energy, energy density distribution, and root-mean-square radius. These results are compared with the experimental quantities of baryons and also with the profiles of the standard Skyrmion without ρ mesons. We analyze interaction terms of pions and ρ mesons in the brane-induced Skyrmion, and find a significant ρ-meson component appearing in the core region of a baryon.
Brodsky, Stanley J.; /SLAC /Southern Denmark U., CP3-Origins
2011-08-12
I review a number of topics where conventional wisdom in hadron physics has been challenged. For example, hadrons can be produced at large transverse momentum directly within a hard higher-twist QCD subprocess, rather than from jet fragmentation. Such 'direct' processes can explain the deviations from perturbative QCD predictions in measurements of inclusive hadron cross sections at fixed x_T = 2p_T/√s, as well as the 'baryon anomaly', the anomalously large proton-to-pion ratio seen in high-centrality heavy-ion collisions. Initial-state and final-state interactions of the struck quark, the soft-gluon rescattering associated with its Wilson line, lead to Bjorken-scaling single-spin asymmetries, diffractive deep inelastic scattering, the breakdown of the Lam-Tung relation in Drell-Yan reactions, as well as nuclear shadowing and antishadowing. The Gribov-Glauber theory predicts that antishadowing of nuclear structure functions is not universal, but instead depends on the flavor quantum numbers of each quark and antiquark, thus explaining the anomalous nuclear dependence measured in deep-inelastic neutrino scattering. Since shadowing and antishadowing arise from the physics of leading-twist diffractive deep inelastic scattering, one cannot attribute such phenomena to the structure of the nucleus itself. It is thus important to distinguish 'static' structure functions, the probability distributions computed from the square of the target light-front wavefunctions, versus 'dynamical' structure functions which include the effects of the final-state rescattering of the struck quark. The importance of the J = 0 photon-quark QCD contact interaction in deeply virtual Compton scattering is also emphasized. The scheme-independent BLM method for setting the renormalization scale is discussed. Eliminating the renormalization scale ambiguity greatly improves the precision of QCD predictions and increases the sensitivity of searches for new physics at the LHC.
QCD with many fermions and QCD topology
NASA Astrophysics Data System (ADS)
Shuryak, Edward
2013-04-01
Major nonperturbative phenomena in QCD - confinement and chiral symmetry breaking - are known to be related to certain topological objects. Recent lattice advances into the domain of many Nf = O(10) fermion flavors have shown that both phase transitions shift in this case to much stronger coupling. We discuss confinement in terms of monopole Bose condensation, and discuss how it is affected by fermions "riding" on the monopoles, ending with the Nf dependence of the critical line. Chiral symmetry breaking is discussed in terms of the (anti)self-dual dyons, the instanton constituents. The fermionic zero modes of those have a different meaning and lead to strong interaction between dyons and antidyons. We report some qualitative consequences of this theory and also some information about our first direct numerical study of the dyonic ensemble, with respect to both chiral symmetry breaking and confinement (via back reaction to the holonomy potential).
Domain transfer multiple kernel learning.
Duan, Lixin; Tsang, Ivor W; Xu, Dong
2012-03-01
Cross-domain learning methods have shown promising results by leveraging labeled patterns from the auxiliary domain to learn a robust classifier for the target domain which has only a limited number of labeled samples. To cope with the considerable change between feature distributions of different domains, we propose a new cross-domain kernel learning framework into which many existing kernel methods can be readily incorporated. Our framework, referred to as Domain Transfer Multiple Kernel Learning (DTMKL), simultaneously learns a kernel function and a robust classifier by minimizing both the structural risk functional and the distribution mismatch between the labeled and unlabeled samples from the auxiliary and target domains. Under the DTMKL framework, we also propose two novel methods by using SVM and prelearned classifiers, respectively. Comprehensive experiments on three domain adaptation data sets (i.e., TRECVID, 20 Newsgroups, and email spam data sets) demonstrate that DTMKL-based methods outperform existing cross-domain learning and multiple kernel learning methods. PMID:21646679
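The distribution mismatch that DTMKL penalizes is, in this line of work, commonly measured by the Maximum Mean Discrepancy (MMD) between auxiliary- and target-domain samples in the kernel-induced feature space. Below is a minimal sketch of the biased empirical squared MMD under an RBF kernel; the data and function names are illustrative, not the authors' implementation.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between row-sample matrices X and Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def mmd2(X_aux, X_tgt, gamma=1.0):
    """Biased empirical squared Maximum Mean Discrepancy between two samples."""
    k_aa = rbf_kernel(X_aux, X_aux, gamma).mean()
    k_tt = rbf_kernel(X_tgt, X_tgt, gamma).mean()
    k_at = rbf_kernel(X_aux, X_tgt, gamma).mean()
    return k_aa + k_tt - 2.0 * k_at

# Illustrative data: the mismatch grows with the shift between domains.
rng = np.random.default_rng(1)
aux = rng.normal(0.0, 1.0, size=(200, 5))   # auxiliary-domain features
near = rng.normal(0.1, 1.0, size=(200, 5))  # target domain, mild shift
far = rng.normal(2.0, 1.0, size=(200, 5))   # target domain, large shift
```

Minimizing such a term jointly with the structural risk, over a combination of base kernels, is the core idea of the framework.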
Aligning Biomolecular Networks Using Modular Graph Kernels
NASA Astrophysics Data System (ADS)
Towfic, Fadi; Greenlee, M. Heather West; Honavar, Vasant
Comparative analysis of biomolecular networks constructed using measurements from different conditions, tissues, and organisms offers a powerful approach to understanding the structure, function, dynamics, and evolution of complex biological systems. We explore a class of algorithms for aligning large biomolecular networks by breaking down such networks into subgraphs and computing the alignment of the networks based on the alignment of their subgraphs. The resulting subnetworks are compared using graph kernels as scoring functions. We provide implementations of the resulting algorithms as part of BiNA, an open source biomolecular network alignment toolkit. Our experiments using Drosophila melanogaster, Saccharomyces cerevisiae, Mus musculus and Homo sapiens protein-protein interaction networks extracted from the DIP repository of protein-protein interaction data demonstrate that the performance of the proposed algorithms (as measured by % GO term enrichment of subnetworks identified by the alignment) is competitive with some of the state-of-the-art algorithms for pair-wise alignment of large protein-protein interaction networks. Our results also show that the inter-species similarity scores computed based on graph kernels can be used to cluster the species into a species tree that is consistent with the known phylogenetic relationships among the species.
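To illustrate the idea of a graph kernel as a scoring function between subgraphs, here is a minimal node-label histogram kernel: the dot product of the two subgraphs' label-count vectors. The subgraphs and labels are invented toy examples; the kernels actually used in BiNA (e.g. on walks or paths) are richer.

```python
from collections import Counter

def label_histogram_kernel(g1, g2):
    """Dot product of node-label count vectors of two labeled subgraphs."""
    h1, h2 = Counter(g1.values()), Counter(g2.values())
    return sum(h1[label] * h2[label] for label in h1)

# Subgraphs as {node: label} maps; labels stand in for protein annotations.
fly_sub = {"a": "kinase", "b": "kinase", "c": "phosphatase"}
yeast_sub = {"x": "kinase", "y": "phosphatase", "z": "phosphatase"}
worm_sub = {"u": "ligase", "v": "ligase"}

score_close = label_histogram_kernel(fly_sub, yeast_sub)  # shared labels -> 4
score_far = label_histogram_kernel(fly_sub, worm_sub)     # no shared labels -> 0
```

Higher kernel values indicate more similar subgraphs, and such pairwise scores can then drive both the alignment and the inter-species similarity used for clustering.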
RTOS kernel in portable electrocardiograph
NASA Astrophysics Data System (ADS)
Centeno, C. A.; Voos, J. A.; Riva, G. G.; Zerbini, C.; Gonzalez, E. A.
2011-12-01
This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which the uC/OS-II RTOS can be embedded. The decision to use the kernel rests on its benefits: its license for educational use and its intrinsic time control and peripheral management. The feasibility of its use on the electrocardiograph is evaluated against the minimum memory requirements imposed by the kernel structure. The kernel's own tools were used for time estimation and for evaluating the resources used by each process. After this feasibility analysis, the cyclic code is migrated to a structure based on separate processes, or tasks, able to synchronize on events, resulting in an electrocardiograph running on a single Central Processing Unit (CPU) under an RTOS.
NASA Astrophysics Data System (ADS)
Dudek, Jozef J.
2016-03-01
I describe how hadron-hadron scattering amplitudes are related to the eigenstates of QCD in a finite cubic volume. The discrete spectrum of such eigenstates can be determined from correlation functions computed using lattice QCD, and the corresponding scattering amplitudes extracted. I review results from the Hadron Spectrum Collaboration who have used these finite volume methods to study ππ elastic scattering, including the ρ resonance, as well as coupled-channel πK, ηK scattering. The very recent extension to the case where an external current acts is also presented, considering the reaction πγ* → ππ, from which the unstable ρ → πγ transition form factor is extracted. Ongoing calculations are advertised and the outlook for finite volume approaches is presented.
Exponentially modified QCD coupling
Cvetic, Gorazd; Valenzuela, Cristian
2008-04-01
We present a specific class of models for an infrared-finite analytic QCD coupling, such that at large spacelike energy scales the coupling differs from the perturbative one by less than any inverse power of the energy scale. This condition is motivated by the ITEP (Institute for Theoretical and Experimental Physics) operator product expansion philosophy. Allowed by the ambiguity in the analytization of the perturbative coupling, the proposed class of couplings has three parameters. In the intermediate energy region, the proposed coupling has low loop-level and renormalization scheme dependence. The present modification of perturbative QCD must be considered as a phenomenological attempt, with the aim of enlarging the applicability range of the theory of the strong interactions at low energies.
Dudek, Jozef J.; Edwards, Robert G.
2012-03-21
In this study, we present the first comprehensive study of hybrid baryons using lattice QCD methods. Using a large basis of composite QCD interpolating fields we extract an extensive spectrum of baryon states and isolate those of hybrid character using their relatively large overlap onto operators which sample gluonic excitations. We consider the spectrum of Nucleon and Delta states at several quark masses, finding a set of positive-parity hybrid baryons with quantum numbers $N_{1/2^+}, N_{1/2^+}, N_{3/2^+}, N_{3/2^+}, N_{5/2^+}$ and $\Delta_{1/2^+}, \Delta_{3/2^+}$ at an energy scale above the first band of 'conventional' excited positive-parity baryons. This pattern of states is compatible with a color-octet gluonic excitation having $J^{P}=1^{+}$, as previously reported in the hybrid meson sector, and with a comparable energy scale for the excitation, suggesting a common bound-state construction for hybrid mesons and baryons.
Gupta, R.
1998-12-31
The goal of the lectures on lattice QCD (LQCD) is to provide an overview of both the technical issues and the progress made so far in obtaining phenomenologically useful numbers. The lectures consist of three parts. The author's charter is to provide an introduction to LQCD and outline the scope of LQCD calculations. In the second set of lectures, Guido Martinelli will discuss the progress they have made so far in obtaining results, and their impact on Standard Model phenomenology. Finally, Martin Luescher will discuss the topical subjects of chiral symmetry, improved formulation of lattice QCD, and the impact these improvements will have on the quality of results expected from the next generation of simulations.
Kovacs, E.; CDF Collaboration
1996-02-01
We present results for the inclusive jet cross section and the dijet mass distribution. The inclusive cross section and dijet mass both exhibit significant deviations from the predictions of NLO QCD for jets with E_T > 200 GeV, or dijet masses > 400 GeV/c². We show that it is possible, within a global QCD analysis that includes the CDF inclusive jet data, to modify the gluon distribution at high x. The resulting increase in the jet cross-section predictions is 25-35%. Owing to the presence of k_T smearing effects, the direct photon data do not provide as strong a constraint on the gluon distribution as previously thought. A comparison of the CDF and UA2 jet data, which have a common range in x, is plagued by theoretical and experimental uncertainties, and cannot at present confirm the CDF excess or the modified gluon distribution.
Density Estimation with Mercer Kernels
NASA Technical Reports Server (NTRS)
Macready, William G.
2003-01-01
We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As is usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.
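The baseline being modified is the standard EM algorithm for a Gaussian mixture. A minimal 1-D sketch of that baseline follows; the quantile initialization and test data are illustrative choices, not taken from the paper, and the kernelized feature-space version is not reproduced here.

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=50):
    """Standard EM for a 1-D Gaussian mixture: the baseline algorithm that
    the Mercer-kernel method adapts to a feature space."""
    # Deterministic initialization at evenly spaced quantiles (illustrative)
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
               / np.sqrt(2.0 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights, means, and variances
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

# Two well-separated clusters; EM should recover means near -2 and 3.
rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(-2.0, 0.5, 300), rng.normal(3.0, 0.8, 300)])
pi, mu, var = em_gmm_1d(x)
```

Replacing the Euclidean distances in the E-step with kernel-induced distances is, roughly, where the Mercer-kernel construction enters.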
Bjorken, J.D.
1996-10-01
New directions for exploring QCD at future high-energy colliders are sketched. These include jets within jets, BFKL dynamics, soft and hard diffraction, searches for disoriented chiral condensate, and doing a better job on minimum-bias physics. The new experimental opportunities include electron-ion collisions at HERA, a new collider detector at the C0 region of the Tevatron, and the FELIX initiative at the LHC.
Kronfeld, A.S.; Allison, I.F.; Aubin, C.; Bernard, C.; Davies, C.T.H.; DeTar, C.; Di Pierro, M.; Freeland, E.D.; Gottlieb, Steven; Gray, A.; Gregor, E.; Heller, U.M.; Hetrick, J.E.; El-Khadra, Aida X.; Levkova, L.; Mackenzie, P.B.; Maresca, F.; Menscher, D.; Nobes, M.; Okamoto, M.; Oktay, M.B.; /Fermilab /Glasgow U. /Columbia U. /Washington U., St. Louis /Utah U. /DePaul U. /Art Inst. of Chicago /Indiana U. /Ohio State U. /Arizona U. /APS, New York /U. Pacific, Stockton /Illinois U., Urbana /Cornell U., LEPP /Simon Fraser U. /UC, Santa Barbara
2005-09-01
In the past year, we calculated with lattice QCD three quantities that were unknown or poorly known. They are the q² dependence of the form factor in semileptonic D → Kℓν decay, the decay constant of the D meson, and the mass of the B_c meson. In this talk, we summarize these calculations, with emphasis on their (subsequent) confirmation by experiments.
Giannetti, P. )
1991-05-01
Recent analyses of jet data taken at the Fermilab Tevatron Collider at √s = 1.8 TeV are presented. Inclusive jet, dijet, trijet and direct photon measurements are compared to QCD parton-level calculations at orders α_s³ or α_s². The large total transverse energy events are well described by the HERWIG shower Monte Carlo.
Roberts, C.D.
1994-09-01
The Dyson-Schwinger equations (DSEs) are a tower of coupled integral equations that relate the Green functions of QCD to one another. Solving these equations provides the solution of QCD. This tower of equations includes the equation for the quark self-energy, which is the analogue of the gap equation in superconductivity, and the Bethe-Salpeter equation, the solution of which is the quark-antiquark bound state amplitude in QCD. The application of this approach to solving Abelian and non-Abelian gauge theories is reviewed. The nonperturbative DSE approach is being developed as both: (1) a computationally less intensive alternative and (2) a complement to numerical simulations of the lattice action of QCD. In recent years, significant progress has been made with the DSE approach, so that it is now possible to make sensible and direct comparisons between quantities calculated using this approach and the results of numerical simulations of Abelian gauge theories. Herein the application of the DSE approach to the calculation of pion observables is described: the π-π scattering lengths (a_0^0, a_0^2, a_1^1, a_2^2) and associated partial-wave amplitudes; the π⁰ → γγ decay width; and the charged pion form factor, F_π(q²). Since this approach provides a straightforward, microscopic description of dynamical chiral symmetry breaking (DχSB) and confinement, the calculation of pion observables is a simple and elegant illustrative example of its power and efficacy. The relevant DSEs are discussed in the calculation of pion observables and concluding remarks are presented.
Hadronic Resonances from Lattice QCD
Lichtl, Adam C.; Bulava, John; Morningstar, Colin; Edwards, Robert; Mathur, Nilmani; Richards, David; Fleming, George; Juge, K. Jimmy; Wallace, Stephen J.
2007-10-26
The determination of the pattern of hadronic resonances as predicted by Quantum Chromodynamics requires the use of non-perturbative techniques. Lattice QCD has emerged as the dominant tool for such calculations, and has produced many QCD predictions which can be directly compared to experiment. The concepts underlying lattice QCD are outlined, methods for calculating excited states are discussed, and results from an exploratory Nucleon and Delta baryon spectrum study are presented.
Hadronic Resonances from Lattice QCD
John Bulava; Robert Edwards; George Fleming; K. Jimmy Juge; Adam C. Lichtl; Nilmani Mathur; Colin Morningstar; David Richards; Stephen J. Wallace
2007-06-16
The determination of the pattern of hadronic resonances as predicted by Quantum Chromodynamics requires the use of non-perturbative techniques. Lattice QCD has emerged as the dominant tool for such calculations, and has produced many QCD predictions which can be directly compared to experiment. The concepts underlying lattice QCD are outlined, methods for calculating excited states are discussed, and results from an exploratory Nucleon and Delta baryon spectrum study are presented.
Resource Letter QCD-1: Quantum chromodynamics
NASA Astrophysics Data System (ADS)
Kronfeld, Andreas S.; Quigg, Chris
2010-11-01
This Resource Letter provides a guide to the literature on quantum chromodynamics (QCD), the relativistic quantum field theory of the strong interactions. Journal articles, books, and other documents are cited for the following topics: Quarks and color, the parton model, Yang-Mills theory, experimental evidence for color, QCD as a color gauge theory, asymptotic freedom, QCD for heavy hadrons, QCD on the lattice, the QCD vacuum, pictures of quark confinement, early and modern applications of perturbative QCD, the determination of the strong coupling and quark masses, QCD and the hadron spectrum, hadron decays, the quark-gluon plasma, the strong nuclear interaction, and QCD's role in nuclear physics.
Technology Transfer Automated Retrieval System (TEKTRAN)
Oat (Avena sativa L.) kernels appear to contain much higher polar lipid concentrations than other plant tissues. We have extracted, identified, and quantified polar lipids from 18 oat genotypes grown in replicated plots in three environments in order to determine genotypic or environmental variation...
Accelerating the Original Profile Kernel
Hamp, Tobias; Goldberg, Tatyana; Rost, Burkhard
2013-01-01
One of the most accurate multi-class protein classification systems continues to be the profile-based SVM kernel introduced by the Leslie group. Unfortunately, its CPU requirements render it too slow for practical applications of large-scale classification tasks. Here, we introduce several software improvements that enable significant acceleration. Using various non-redundant data sets, we demonstrate that our new implementation reaches a maximal speed-up as high as 14-fold for calculating the same kernel matrix. Some predictions are over 200 times faster, possibly making the kernel the top contender when trading prediction speed against performance. Additionally, we explain how to parallelize various computations and provide an integrative program that reduces creating a production-quality classifier to a single program call. The new implementation is available as a Debian package under a free academic license and does not depend on commercial software. For non-Debian based distributions, the source package ships with a traditional Makefile-based installer. Download and installation instructions can be found at https://rostlab.org/owiki/index.php/Fast_Profile_Kernel. Bugs and other issues may be reported at https://rostlab.org/bugzilla3/enter_bug.cgi?product=fastprofkernel. PMID:23825697
Adaptive wiener image restoration kernel
Yuan, Ding
2007-06-05
A method and device for the restoration of electro-optical image data using an adaptive Wiener filter begins by constructing the imaging system's Optical Transfer Function (OTF) and the Fourier transforms of the noise and the image. A spatial representation of the imaged object is restored by spatial convolution of the image with a Wiener restoration kernel.
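A minimal sketch of the frequency-domain construction described above, assuming a known OTF and a scalar noise-to-signal power ratio; the function name, OTF shape, and test scene are illustrative, not the patented device's implementation.

```python
import numpy as np

def wiener_restore(image, otf, nsr):
    """Restore a degraded image with a Wiener kernel built in the Fourier domain.

    image : 2-D array, degraded (blurred) image
    otf   : 2-D array, imaging-system Optical Transfer Function (same shape)
    nsr   : scalar noise-to-signal power ratio acting as a regularizer
    """
    G = np.fft.fft2(image)
    # Wiener restoration kernel: conj(H) / (|H|^2 + NSR)
    W = np.conj(otf) / (np.abs(otf) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * G))

# Illustration: blur a simple scene with a known Gaussian low-pass OTF,
# then restore it; the restored image should be closer to the scene.
scene = np.zeros((64, 64))
scene[24:40, 24:40] = 1.0
fy = np.fft.fftfreq(64)[:, None]
fx = np.fft.fftfreq(64)[None, :]
otf = np.exp(-(fx ** 2 + fy ** 2) / (2 * 0.05 ** 2))
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * otf))
restored = wiener_restore(blurred, otf, nsr=1e-3)
```

The NSR term keeps the kernel bounded where the OTF is near zero; the adaptive variant in the abstract would estimate this term from the data rather than fix it.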
Local Observed-Score Kernel Equating
ERIC Educational Resources Information Center
Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.
2014-01-01
Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…
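In the spirit of the kernel equating framework above, observed-score equating continuizes each form's discrete score distribution by Gaussian-kernel smoothing and then maps scores by equipercentile equating, e_Y(x) = F_Y^{-1}(F_X(x)). The following sketch is a generic illustration, assuming invented data and a fixed bandwidth, not one of the proposed local methods.

```python
import numpy as np
from math import erf, sqrt

def normal_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def smoothed_cdf(scores, h):
    """Continuized (Gaussian-kernel-smoothed) CDF of a discrete score sample."""
    scores = [float(s) for s in scores]
    return lambda x: sum(normal_cdf((x - s) / h) for s in scores) / len(scores)

def equate(x, scores_x, scores_y, h=0.6):
    """Map a score x on form X to the form-Y scale: e_Y(x) = F_Y^{-1}(F_X(x))."""
    F_x = smoothed_cdf(scores_x, h)
    F_y = smoothed_cdf(scores_y, h)
    p = F_x(x)
    # Invert F_Y numerically on a fine grid over the Y score range
    grid = np.linspace(min(scores_y) - 3.0, max(scores_y) + 3.0, 801)
    cdf_vals = np.array([F_y(g) for g in grid])
    return float(np.interp(p, cdf_vals, grid))

# Illustration: form Y scores sit exactly 2 points below form X scores,
# so a score of 20 on X should equate to about 18 on Y.
rng = np.random.default_rng(7)
scores_x = rng.normal(20.0, 4.0, 500).round()
scores_y = scores_x - 2.0
equated = equate(20.0, scores_x, scores_y)
```

Local equating methods then condition this mapping on additional information (e.g. an anchor score), rather than using the single marginal distributions shown here.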
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2013 CFR
2013-01-01
... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2010 CFR
2010-01-01
... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2014 CFR
2014-01-01
... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2011 CFR
2011-01-01
... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2012 CFR
2012-01-01
... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...
NLO Hierarchy of Wilson Lines Evolution
Balitsky, Ian
2015-03-01
The high-energy behavior of QCD amplitudes can be described in terms of the rapidity evolution of Wilson lines. I present the hierarchy of evolution equations for Wilson lines at next-to-leading order.
Confronting QCD with the experimental hadronic spectral functions from tau decay
Dominguez, C. A.; Nasrallah, N. F.; Schilcher, K.
2009-09-01
The (nonstrange) vector and axial-vector spectral functions extracted from τ decay by the ALEPH Collaboration are confronted with QCD in the framework of a finite energy sum rule involving a polynomial kernel tuned to suppress the region beyond the kinematical end point where there is no longer data. This effectively allows a QCD finite energy sum rule analysis to be performed beyond the region of the existing data. Results show excellent agreement between data and perturbative QCD in the remarkably wide energy range s = 3-10 GeV², leaving room for a dimension d = 4 vacuum condensate consistent with values in the literature. A hypothetical dimension d = 2 term in the operator product expansion is found to be extremely small, consistent with zero. Fixed-order and contour-improved perturbation theory are used, with both leading to similar results within errors. Full consistency is found between vector and axial-vector channel results.
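The pinched-kernel construction described above can be sketched schematically (this is the generic form, not the specific kernel of the analysis): by Cauchy's theorem, a polynomial kernel P(s) relates the integral of the measured spectral function to a contour integral evaluated with the QCD expression,

```latex
\int_0^{s_0} ds\, P(s)\, \frac{1}{\pi}\,\operatorname{Im}\Pi(s)\Big|_{\rm data}
  \;=\; -\,\frac{1}{2\pi i}\oint_{|s|=s_0} ds\, P(s)\, \Pi(s)\Big|_{\rm QCD},
```

and choosing P(s) to vanish near s = s_0 suppresses the contribution of the region near and beyond the end point of the data, which is what extends the reach of the analysis.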
NASA Astrophysics Data System (ADS)
Bartels, Jochen
2006-06-01
I summarize the present status of the AGK cutting rules in perturbative QCD. Particular attention is given to the application of the AGK analysis to diffraction and multiple scattering in DIS at HERA and to pp collisions at the LHC. I also discuss the bootstrap conditions which appear in pQCD.
QCD: Questions, challenges, and dilemmas
Bjorken, J.
1996-11-01
An introduction to some outstanding issues in QCD is presented, with emphasis on work by Diakonov and co-workers on the influence of the instanton vacuum on low-energy QCD observables. This includes the calculation of input valence-parton distributions for deep-inelastic scattering.
QCD coupling constants and VDM
Erkol, G.; Ozpineci, A.; Zamiralov, V. S.
2012-10-23
QCD sum rules for the coupling constants of vector mesons with baryons are constructed. The corresponding QCD sum rules for electric charges and magnetic moments are also derived and, with the use of the vector-meson-dominance (VDM) model, related to the coupling constants. The role of VDM as a criterion of the mutual validity of the sum rules is considered.
Sekhar Chivukula
2010-01-08
The symmetries of a quantum field theory can be realized in a variety of ways. Symmetries can be realized explicitly or approximately, through spontaneous symmetry breaking, or, via an anomaly, quantum effects can dynamically eliminate a symmetry of the theory that was present at the classical level. Quantum Chromodynamics (QCD), the modern theory of the strong interactions, exemplifies each of these possibilities. The interplay of these effects determines the spectrum of particles that we observe and, ultimately, accounts for 99% of the mass of ordinary matter.
Sakai, Tadakatsu; Sugimoto, Shigeki
2005-12-02
We propose a holographic dual of QCD with massless flavors on the basis of a D4/D8-brane configuration within a probe approximation. We are led to a five-dimensional Yang-Mills theory on a curved space-time along with a Chern-Simons five-form on it, both of which provide us with a unifying framework to study the massless pion and an infinite number of massive vector mesons. We make sample computations of the physical quantities that involve the mesons and compare them with the experimental data. It is found that most of the results of this model are compatible with the experiments.
NASA Astrophysics Data System (ADS)
Sakai, Tadakatsu; Sugimoto, Shigeki
2005-12-01
We propose a holographic dual of QCD with massless flavors on the basis of a D4/D8-brane configuration within a probe approximation. We are led to a five-dimensional Yang-Mills theory on a curved space-time along with a Chern-Simons five-form on it, both of which provide us with a unifying framework to study the massless pion and an infinite number of massive vector mesons. We make sample computations of the physical quantities that involve the mesons and compare them with the experimental data. It is found that most of the results of this model are compatible with the experiments.
Cool QCD: Hadronic Physics and QCD in Nuclei
NASA Astrophysics Data System (ADS)
Cates, Gordon
2015-10-01
QCD is the only strongly-coupled theory given to us by Nature, and it gives rise to a host of striking phenomena. Two examples in hadronic physics include the dynamic generation of mass and the confinement of quarks. Indeed, the vast majority of the mass of visible matter is due to the kinetic and potential energy of the massless gluons and the essentially massless quarks. QCD also gives rise to the force that binds protons and neutrons into nuclei, including subtle effects that have historically been difficult to understand. Describing these phenomena in terms of QCD has represented a daunting task, but remarkable progress has been achieved in both theory and experiment. Both CEBAF at Jefferson Lab and RHIC at Brookhaven National Lab have provided unprecedented experimental tools for investigating QCD, and upgrades at both facilities promise even greater opportunities in the future. Also important are programs at Fermilab as well as the LHC at CERN. Looking further ahead, an electron ion collider (EIC) has the potential to answer whole new sets of questions regarding the role of gluons in nuclear matter, an issue that lies at the heart of the generation of mass. On the theoretical side, rapid progress in supercomputers is enabling stunning progress in Lattice QCD calculations, and approximate forms of QCD are also providing deep new physical insight. In this talk I will describe both recent advances in Cool QCD as well as the exciting scientific opportunities that exist for the future.
Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampap, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G
2007-04-11
The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results.
Nonperturbative QCD Calculations
NASA Astrophysics Data System (ADS)
Dellby, Niklas
1995-01-01
The research described in this thesis is an exact transformation of the Yang-Mills quantum chromodynamics (QCD) Lagrangian into a form that is suitable for nonperturbative calculations. The conventional Yang-Mills Lagrangian has proven to be an excellent basis for perturbative calculations, but in nonperturbative calculations it is difficult to separate gauge problems from physical properties. To mitigate this problem, I develop a new equivalent Lagrangian that is not only expressed completely in terms of the field strengths of the gauge field but is also manifestly Lorentz and gauge invariant. The new Lagrangian is quadratic in derivatives, with non-linear local couplings; thus it is ideally suited for a numerical calculation. The field-strength Lagrangian is of such a form that it is possible to do a straightforward numerical stationary-path expansion and find the fundamental QCD properties. This thesis examines several approximations analytically, investigating different ways to utilize the new Lagrangian. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)
Dudek, Jozef J.; Edwards, Robert G.
2012-03-21
In this study, we present the first comprehensive study of hybrid baryons using lattice QCD methods. Using a large basis of composite QCD interpolating fields we extract an extensive spectrum of baryon states and isolate those of hybrid character using their relatively large overlap onto operators which sample gluonic excitations. We consider the spectrum of Nucleon and Delta states at several quark masses, finding a set of positive-parity hybrid baryons with quantum numbers $N_{1/2^+}, N_{1/2^+}, N_{3/2^+}, N_{3/2^+}, N_{5/2^+}$ and $\Delta_{1/2^+}, \Delta_{3/2^+}$ at an energy scale above the first band of `conventional' excited positive-parity baryons. This pattern of states is compatible with a color-octet gluonic excitation having $J^{P}=1^{+}$, as previously reported in the hybrid meson sector, and with a comparable energy scale for the excitation, suggesting a common bound-state construction for hybrid mesons and baryons.
None
2011-10-06
Modern QCD - Lecture 1. Starting from the QCD Lagrangian we will revisit some basic QCD concepts, derive fundamental properties like gauge invariance and isospin symmetry, and discuss the Feynman rules of the theory. We will then focus on the gauge group of QCD and derive the Casimirs CF and CA and some useful color identities.
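The quadratic Casimirs mentioned in the lecture abstract can be checked numerically. A minimal sketch (using SU(2) for brevity rather than the SU(3) of QCD, with fundamental generators T^a = sigma^a/2 and adjoint generators built from the structure constants f^{abc} = epsilon^{abc}) that recovers C_F = (N^2-1)/(2N) = 3/4 and C_A = N = 2:

```python
import numpy as np

# Fundamental generators of SU(2): T^a = sigma^a / 2 (Pauli matrices)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
T = [s / 2 for s in sigma]

# Quadratic Casimir in the fundamental: sum_a T^a T^a = C_F * identity
CF = sum(t @ t for t in T)[0, 0].real   # (N^2 - 1)/(2N) = 3/4 for N = 2

# Adjoint generators from the structure constants, (T_adj^a)_{bc} = -i f^{abc};
# for SU(2) the structure constants are the Levi-Civita symbol
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1.0, -1.0
Tadj = [-1j * eps[a] for a in range(3)]

# Quadratic Casimir in the adjoint: sum_a T_adj^a T_adj^a = C_A * identity
CA = sum(t @ t for t in Tadj)[0, 0].real  # C_A = N = 2
```

The same construction with the eight Gell-Mann matrices yields the QCD values C_F = 4/3 and C_A = 3.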
Kernel Near Principal Component Analysis
MARTIN, SHAWN B.
2002-07-01
We propose a novel algorithm based on Principal Component Analysis (PCA). First, we present an interesting approximation of PCA using Gram-Schmidt orthonormalization. Next, we combine our approximation with the kernel functions from Support Vector Machines (SVMs) to provide a nonlinear generalization of PCA. After benchmarking our algorithm in the linear case, we explore its use in both the linear and nonlinear cases. We include applications to face data analysis, handwritten digit recognition, and fluid flow.
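The combination described, PCA made nonlinear through SVM-style kernel functions, is essentially kernel PCA. The paper's Gram-Schmidt approximation is not reproduced here; the following is a minimal sketch of standard kernel PCA with an RBF kernel (all parameter choices are illustrative):

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=0.5):
    # RBF (Gaussian) kernel matrix
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    # Center in feature space: K' = K - 1K - K1 + 1K1
    n = len(X)
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one
    # Leading eigenpairs of the centered kernel matrix
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    # Projected coordinates of the training data
    return Kc @ alphas
```

The nonlinearity enters only through the choice of kernel; replacing the RBF with a linear kernel reduces this to ordinary PCA on centered data.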
Derivation of aerodynamic kernel functions
NASA Technical Reports Server (NTRS)
Dowell, E. H.; Ventres, C. S.
1973-01-01
The method of Fourier transforms is used to determine the kernel function which relates the pressure on a lifting surface to the prescribed downwash within the framework of Dowell's (1971) shear flow model. This model is intended to improve upon the potential flow aerodynamic model by allowing for the aerodynamic boundary layer effects neglected in the potential flow model. For simplicity, incompressible, steady flow is considered. The proposed method is illustrated by deriving known results from potential flow theory.
QCD Factorization and PDFs from Lattice QCD Calculation
NASA Astrophysics Data System (ADS)
Ma, Yan-Qing; Qiu, Jian-Wei
2015-02-01
In this talk, we review a QCD-factorization-based approach to extract parton distribution and correlation functions from lattice QCD calculations of single-hadron matrix elements of quark-gluon operators. We argue that although the lattice QCD calculations are done in Euclidean space, the nonperturbative collinear behavior of the matrix elements is the same as in Minkowski space, and could be systematically factorized into parton distribution functions with infrared-safe matching coefficients. The matching coefficients can be calculated perturbatively by applying the factorization formalism to asymptotic partonic states.
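Schematically, the factorization described above takes the form (a sketch of the generic structure, not the authors' precise conventions):

```latex
\tilde{q}(x, P_z) \;=\; \int_{-1}^{1} \frac{dy}{|y|}\;
  C\!\left(\frac{x}{y}, \frac{\mu}{P_z}\right) q(y, \mu)
  \;+\; \mathcal{O}\!\left(\frac{\Lambda_{\rm QCD}^2}{P_z^2}\right),
```

where $\tilde{q}$ is the Euclidean (lattice-calculable) matrix element at hadron momentum $P_z$, $q(y,\mu)$ is the light-cone parton distribution, and $C$ is the infrared-safe matching coefficient computed perturbatively on asymptotic partonic states.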
Kernel CMAC with improved capability.
Horváth, Gábor; Szabó, Tamás
2007-02-01
The cerebellar model articulation controller (CMAC) has some attractive features, namely fast learning capability and the possibility of efficient digital hardware implementation. Although CMAC was proposed many years ago, several questions remain open even today. The most important ones concern its modeling and generalization capabilities. The limits of its modeling capability were addressed in the literature, and recently, certain questions of its generalization property were also investigated. This paper deals with both the modeling and the generalization properties of CMAC. First, a new interpolation model is introduced. Then, a detailed analysis of the generalization error is given, and an analytical expression of this error for some special cases is presented. It is shown that this generalization error can be rather significant, and a simple regularized training algorithm to reduce this error is proposed. The results related to the modeling capability show that there are differences between the one-dimensional (1-D) and the multidimensional versions of CMAC. This paper discusses the reasons for this difference and suggests a new kernel-based interpretation of CMAC. The kernel interpretation gives a unified framework. Applying this approach, both the 1-D and the multidimensional CMACs can be constructed with similar modeling capability. Finally, this paper shows that the regularized training algorithm can be applied to the kernel interpretations too, which results in a network with significantly improved approximation capabilities. PMID:17278566
RKRD: Runtime Kernel Rootkit Detection
NASA Astrophysics Data System (ADS)
Grover, Satyajit; Khosravi, Hormuzd; Kolar, Divya; Moffat, Samuel; Kounavis, Michael E.
In this paper we address the problem of protecting computer systems against stealth malware. The problem is important because the number of known types of stealth malware increases exponentially. Existing approaches have some advantages for ensuring system integrity but sophisticated techniques utilized by stealthy malware can thwart them. We propose Runtime Kernel Rootkit Detection (RKRD), a hardware-based, event-driven, secure and inclusionary approach to kernel integrity that addresses some of the limitations of the state of the art. Our solution is based on the principles of using virtualization hardware for isolation, verifying signatures coming from trusted code as opposed to malware for scalability and performing system checks driven by events. Our RKRD implementation is guided by our goals of strong isolation, no modifications to target guest OS kernels, easy deployment, minimal infrastructure impact, and minimal performance overhead. We developed a system prototype and conducted a number of experiments which show that the performance impact of our solution is negligible.
NASA Astrophysics Data System (ADS)
Ulmschneider, Peter
When we are looking for intelligent life outside the Earth, there is a fundamental question: Assuming that life has formed on an extraterrestrial planet, will it also develop toward intelligence? As this is hotly debated, we will now describe the development of life on Earth in more detail in order to show that there are good reasons why evolution should culminate in intelligent beings.
Visualizing and Interacting with Kernelized Data.
Barbosa, A; Paulovich, F V; Paiva, A; Goldenstein, S; Petronetto, F; Nonato, L G
2016-03-01
Kernel-based methods have experienced substantial progress in recent years, turning out to be an essential mechanism for data classification, clustering and pattern recognition. The effectiveness of kernel-based techniques, though, depends largely on the capability of the underlying kernel to properly embed data in the feature space associated with the kernel. However, visualizing how a kernel embeds the data in a feature space is not so straightforward, as the embedding map and the feature space are implicitly defined by the kernel. In this work, we present a novel technique to visualize the action of a kernel, that is, how the kernel embeds data into a high-dimensional feature space. The proposed methodology relies on a solid mathematical formulation to map kernelized data onto a visual space. Our approach is faster and more accurate than most existing methods while still allowing interactive manipulation of the projection layout, a game-changing trait that other kernel-based projection techniques do not have. PMID:26829242
Lattice QCD and Nuclear Physics
Konstantinos Orginos
2007-03-01
A steady stream of developments in Lattice QCD has made it possible today to begin to address the question of how nuclear physics emerges from the underlying theory of strong interactions. Both the effective field theory description of nuclear forces and the ability to perform accurate non-perturbative calculations in low-energy QCD play a central role in this understanding. Here I present some recent results that attempt to extract important low-energy constants of the effective field theory of nuclear forces from lattice QCD.
Hadron physics in holographic QCD
NASA Astrophysics Data System (ADS)
Santra, A. B.; Lombardo, U.; Bonanno, A.
2012-07-01
Hadron physics deals with the study of strongly interacting subatomic particles such as neutrons, protons, pions and others, collectively known as baryons and mesons. The physics of the strong interaction is difficult, and there are several approaches to understanding it. In recent years, however, an approach called holographic QCD, based on string theory (or gauge-gravity duality), has become popular, providing an alternative description of strong interaction physics. In this article, we aim to discuss the development of strong interaction physics through QCD and string theory, leading to holographic QCD.
Non-perturbative QCD effects in q T spectra of Drell-Yan and Z-boson production
NASA Astrophysics Data System (ADS)
D'Alesio, Umberto; Echevarria, Miguel G.; Melis, Stefano; Scimemi, Ignazio
2014-11-01
The factorization theorems for transverse momentum distributions of dilepton/boson production, recently formulated by Collins and Echevarria-Idilbi-Scimemi in terms of well-defined transverse momentum dependent distributions (TMDs), allow for a systematic and quantitative analysis of non-perturbative QCD effects in the cross sections involving these quantities. In this paper we perform a global fit using all currently available data for Drell-Yan and Z-boson production at hadron colliders within this framework. The perturbatively calculable pieces of our estimates are included using a complete resummation at next-to-next-to-leading-logarithmic accuracy. Performing the matching of transverse momentum distributions onto the standard collinear parton distribution functions and recalling that the corresponding matching coefficient can be partially exponentiated, we find that this exponentiated part is spin-independent and resummable. We argue that the inclusion of higher-order perturbative pieces is necessary when data from lower energy scales are analyzed. We consider non-perturbative corrections both to the intrinsic nucleon structure and to the evolution kernel and find that the non-perturbative part of the TMDs can be parametrized in terms of a minimal set of parameters (namely 2-3). When all corrections are included, the global fit gives a χ²/d.o.f. ≲ 1 and a very precise prediction for vector boson production at the Large Hadron Collider (LHC).
QCD Corrections and New Physics - Proceedings of the International Symposium
NASA Astrophysics Data System (ADS)
Kodaira, Jiro; Onogi, Tetsuya; Sasaki, Ken
1998-09-01
The Table of Contents for the full book PDF is as follows: * Preface * Opening Address * Top Quark Physics * Threshold Resummation of Soft Gluons in Hadronic Reactions - An Introduction * Recent Results from CDF * Top Quark Physics: Overview * Complete Description of Polarization Effects in Top Quark Decays Including Higher Order QCD Corrections * Top Pair Production in e+e- and γγ Processes * Structure Functions I * Highlights of Physics at HERA * Some Aspects of the BFKL Evolution * Structure Functions II * New Result from SMC on g_{1}^ρ * Studies of the Nucleon Spin Structure by HERMES * Recent Developments in Perturbative QCD: Q2 Evolution of Chiral-Odd Distributions h1(x,Q2) and hL(x,Q2) * The Small x Behavior of g1 in the Resummed Approach * Jet Physics * QCD Results from LEP1 and LEP2 * Twenty Years of Jet Physics : Old and New * Multi-Parton Loop Amplitudes and Next-to-Leading Order Jet Cross-Sections * Heavy Meson * PQCD Analysis of Inclusive Heavy Hadrons Decays * Strong Coupling Constant from Lattice QCD * Heavy-Light Decay Constant from Lattice NRQCD * Concluding Remarks * Program * Organizing Committee * List of Participants
Nonlinear projection trick in kernel methods: an alternative to the kernel trick.
Kwak, Nojun
2013-12-01
In kernel methods such as kernel principal component analysis (PCA) and support vector machines, the so-called kernel trick is used to avoid direct calculations in a high (virtually infinite) dimensional kernel space. In this brief, based on the fact that the effective dimensionality of a kernel space is less than the number of training samples, we propose an alternative to the kernel trick that explicitly maps the input data into a reduced-dimensional kernel space. This is easily obtained by the eigenvalue decomposition of the kernel matrix. The proposed method is named the nonlinear projection trick in contrast to the kernel trick. With this technique, the applicability of kernel methods is widened to arbitrary algorithms that do not use the dot product. The equivalence between the kernel trick and the nonlinear projection trick is shown for several conventional kernel methods. In addition, we extend PCA-L1, which uses the L1-norm instead of the L2-norm (or dot product), into a kernel version and show the effectiveness of the proposed approach. PMID:24805227
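The brief's construction, mapping inputs into a reduced-dimensional kernel space via eigendecomposition of the kernel matrix, can be sketched as follows (RBF kernel and all parameter choices are illustrative, not the paper's exact setup):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nonlinear_projection(X_train, X_test, gamma=0.5, tol=1e-10):
    # Eigendecompose K = V L V^T and keep the numerically nonzero part:
    # the effective dimensionality is at most the number of training samples
    K = rbf_kernel(X_train, X_train, gamma)
    vals, vecs = np.linalg.eigh(K)
    keep = vals > tol
    vals, vecs = vals[keep], vecs[:, keep]
    # Explicit coordinates: training sample i -> row i of V L^{1/2}
    Y_train = vecs * np.sqrt(vals)
    # New point x -> L^{-1/2} V^T k(x), with k(x) the kernel vector against training data
    Y_test = rbf_kernel(X_test, X_train, gamma) @ vecs / np.sqrt(vals)
    return Y_train, Y_test
```

By construction `Y_train @ Y_train.T` reproduces the kernel matrix, so any algorithm operating on these explicit coordinates, not just dot-product-based ones, sees the same geometry as the implicit kernel space.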
Image texture analysis of crushed wheat kernels
NASA Astrophysics Data System (ADS)
Zayas, Inna Y.; Martin, C. R.; Steele, James L.; Dempster, Richard E.
1992-03-01
The development of new approaches for wheat hardness assessment may impact the grain industry in marketing, milling, and breeding. This study used image texture features for wheat hardness evaluation. Application of digital imaging to grain for grading purposes is principally based on morphometrical (shape and size) characteristics of the kernels. A composite sample of 320 kernels from 17 wheat varieties was collected after testing and crushing with a single-kernel hardness characterization meter. Six wheat classes were represented: HRW, HRS, SRW, SWW, Durum, and Club. In this study, parameters which characterize the texture, or spatial distribution of gray levels, of an image were determined and used to classify images of crushed wheat kernels. The texture parameters of crushed wheat kernel images differed depending on the class, hardness, and variety of the wheat. Image texture analysis of crushed wheat kernels showed promise for use in class, hardness, milling quality, and variety discrimination.
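A common way to obtain gray-level spatial-distribution parameters of the kind described (the paper's exact feature set is not specified here) is the gray-level co-occurrence matrix (GLCM). A minimal sketch:

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    # Quantize gray values into `levels` bins
    q = (img.astype(float) / (img.max() + 1e-12) * (levels - 1)).astype(int)
    # Co-occurrence counts for the pixel displacement (dy, dx)
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            glcm[q[i, j], q[i + dy, j + dx]] += 1
    p = glcm / glcm.sum()
    ii, jj = np.meshgrid(np.arange(levels), np.arange(levels), indexing="ij")
    # Classic Haralick-style statistics of the normalized GLCM
    return {
        "contrast": float(((ii - jj) ** 2 * p).sum()),
        "energy": float((p ** 2).sum()),
        "homogeneity": float((p / (1.0 + np.abs(ii - jj))).sum()),
    }
```

A perfectly uniform image has zero contrast and unit energy; a crushed-kernel image of a harder wheat would be expected to shift these statistics, which is what makes them usable as classification features.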
Excited Baryons in Holographic QCD
de Teramond, Guy F.; Brodsky, Stanley J.; /SLAC /Southern Denmark U., CP3-Origins
2011-11-08
The light-front holographic QCD approach is used to describe baryon spectroscopy and the systematics of nucleon transition form factors. Baryon spectroscopy and the excitation dynamics of nucleon resonances encoded in the nucleon transition form factors can provide fundamental insight into the strong-coupling dynamics of QCD. The transition from the hard-scattering perturbative domain to the non-perturbative region is sensitive to the detailed dynamics of confined quarks and gluons. Computations of such phenomena from first principles in QCD are clearly very challenging. The most successful theoretical approach thus far has been to quantize QCD on discrete lattices in Euclidean space-time; however, dynamical observables in Minkowski space-time, such as the time-like hadronic form factors are not amenable to Euclidean numerical lattice computations.
QCD analogy for quantum gravity
NASA Astrophysics Data System (ADS)
Holdom, Bob; Ren, Jing
2016-06-01
Quadratic gravity presents us with a renormalizable, asymptotically free theory of quantum gravity. When its couplings grow strong at some scale, as in QCD, then this strong scale sets the Planck mass. QCD has a gluon that does not appear in the physical spectrum. Quadratic gravity has a spin-2 ghost that we conjecture does not appear in the physical spectrum. We discuss how the QCD analogy leads to this conjecture and to the possible emergence of general relativity. Certain aspects of the QCD path integral and its measure are also similar for quadratic gravity. With the addition of the Einstein-Hilbert term, quadratic gravity has a dimensionful parameter that seems to control a quantum phase transition and the size of a mass gap in the strong phase.
Molecular Hydrodynamics from Memory Kernels.
Lesnicki, Dominika; Vuilleumier, Rodolphe; Carof, Antoine; Rotenberg, Benjamin
2016-04-01
The memory kernel for a tagged particle in a fluid, computed from molecular dynamics simulations, decays algebraically as t^{-3/2}. We show how the hydrodynamic Basset-Boussinesq force naturally emerges from this long-time tail and generalize the concept of hydrodynamic added mass. This mass term is negative in the present case of a molecular solute, which is at odds with incompressible hydrodynamics predictions. Lastly, we discuss the various contributions to the friction, the associated time scales, and the crossover between the molecular and hydrodynamic regimes upon increasing the solute radius. PMID:27104730
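Extracting a memory kernel like the one discussed typically means inverting the Volterra equation dC/dt = -∫₀ᵗ K(s) C(t-s) ds for the velocity autocorrelation C. A minimal trapezoid-rule inversion (an illustrative discretization, not the authors' procedure):

```python
import numpy as np

def invert_memory_kernel(C, dt):
    """Recover K from dC/dt = -int_0^t K(s) C(t-s) ds on a uniform grid,
    stepping forward through a trapezoid discretization of the convolution."""
    n = len(C)
    dC = np.gradient(C, dt)
    K = np.zeros(n)
    # K(0) = -C''(0)/C(0); with C'(0) = 0, C''(0) ~ 2 (C[1] - C[0]) / dt^2
    K[0] = -2.0 * (C[1] - C[0]) / (dt ** 2 * C[0])
    for i in range(1, n):
        # trapezoid sum over the already-known kernel values
        conv = 0.5 * K[0] * C[i] + np.dot(K[1:i], C[i - 1:0:-1])
        K[i] = (-dC[i] / dt - conv) / (0.5 * C[0])
    return K
```

As a sanity check, C(t) = cos(ωt) solves the equation with a constant kernel K(t) = ω², so the inversion should return a flat kernel for an undamped oscillator.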
KERNEL PHASE IN FIZEAU INTERFEROMETRY
Martinache, Frantz
2010-11-20
The detection of high contrast companions at small angular separation appears feasible in conventional direct images using the self-calibration properties of interferometric observable quantities. The friendly notion of closure phase, which is key to the recent observational successes of non-redundant aperture masking interferometry used with adaptive optics, appears to be one example of a wide family of observable quantities that are not contaminated by phase noise. In the high-Strehl regime, soon to be available thanks to the coming generation of extreme adaptive optics systems on ground-based telescopes, and already available from space, closure phase like information can be extracted from any direct image, even taken with a redundant aperture. These new phase-noise immune observable quantities, called kernel phases, are determined a priori from the knowledge of the geometry of the pupil only. Re-analysis of archive data acquired with the Hubble Space Telescope NICMOS instrument using this new kernel-phase algorithm demonstrates the power of the method as it clearly detects and locates with milliarcsecond precision a known companion to a star at angular separation less than the diffraction limit.
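The "determined a priori from the geometry of the pupil" step amounts to finding the left null space of the matrix A that propagates pupil-plane phase errors into measured baseline phases: any row vector k with k·A = 0 yields a phase observable immune to pupil phase noise. A generic sketch (A here is a random stand-in, not a real pupil model):

```python
import numpy as np

def kernel_phase_operator(A, tol=1e-10):
    """Rows of K span the left null space of A (K @ A = 0), so K applied to the
    measured baseline phases cancels any contribution of pupil-plane phase errors."""
    U, s, Vt = np.linalg.svd(A)
    rank = int((s > tol * s.max()).sum())
    return U[:, rank:].T.conj()
```

For m baseline phases driven by n < m pupil phases (full column rank), this yields m - n kernel phases, analogous to closure phases but available for redundant apertures.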
Nuclear chromodynamics: applications of QCD to relativistic multiquark systems
Brodsky, S.J.; Ji, C.R.
1984-07-01
We review the applications of quantum chromodynamics to nuclear multiquark systems. In particular, predictions are given for the deuteron reduced form factor in the high momentum transfer region, hidden color components in nuclear wavefunctions, and the short distance effective force between nucleons. A new antisymmetrization technique is presented which allows a basis for relativistic multiquark wavefunctions and solutions to their evolution to short distances. Areas in which conventional nuclear theory conflicts with QCD are also briefly reviewed. 48 references.
QCD measurements at the Tevatron
Bandurin, Dmitry; /Florida State U.
2011-12-01
Selected quantum chromodynamics (QCD) measurements performed at the Fermilab Run II Tevatron pp̄ collider running at √s = 1.96 TeV by the CDF and D0 Collaborations are presented. The inclusive jet, dijet production and three-jet cross section measurements are used to test perturbative QCD calculations, constrain parton distribution function (PDF) determinations, and extract a precise value of the strong coupling constant, α_s(m_Z) = 0.1161^{+0.0041}_{-0.0048}. Inclusive photon production cross-section measurements reveal an inability of next-to-leading-order (NLO) perturbative QCD (pQCD) calculations to describe low-energy photons arising directly in the hard scatter. The diphoton production cross-sections check the validity of the NLO pQCD predictions, soft-gluon resummation methods implemented in theoretical calculations, and contributions from the parton-to-photon fragmentation diagrams. Events with W/Z+jets production are used to measure many kinematic distributions allowing extensive tests and tunes of predictions from NLO pQCD and Monte-Carlo (MC) event generators. The charged-particle transverse momentum (p_T) and multiplicity distributions in inclusive minimum-bias events are used to tune non-perturbative QCD models, including those describing the multiple parton interactions (MPI). Events with inclusive production of γ and 2 or 3 jets are used to study the increasingly important MPI phenomenon at high p_T, measure an effective interaction cross section, σ_eff = 16.4 ± 2.3 mb, and limit existing MPI models.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume...
Hairpin Vortex Dynamics in a Kernel Experiment
NASA Astrophysics Data System (ADS)
Meng, H.; Yang, W.; Sheng, J.
1998-11-01
A surface-mounted trapezoidal tab is known to shed hairpin-like vortices and generate a pair of counter-rotating vortices in its wake. Such a flow serves as a kernel experiment for studying the dynamics of these vortex structures. Created by and scaled with the tab, the vortex structures are more orderly and larger than those in natural wall turbulence and thus suitable for measurement by Particle Image Velocimetry (PIV) and visualization by Planar Laser Induced Fluorescence (PLIF). Time-series PIV provides insight into the evolution, self-enhancement, regeneration, and interaction of hairpin vortices, as well as interactions of the hairpins with the pressure-induced counter-rotating vortex pair (CVP). The topology of the wake structure indicates that the hairpin "heads" are formed from lifted shear-layer instability and "legs" from stretching by the CVP, which passes energy to the hairpins. The CVP diminishes after one tab height, while the hairpins persist until 10-20 tab heights downstream. It is concluded that the lift-up of the near-surface viscous fluids is the key to hairpin vortex dynamics. Whether from the pumping action of the CVP or the ejection by an existing hairpin, the 3D lift-up of near-surface vorticity contributes to the increase of hairpin vortex strength and the creation of secondary hairpins. http://www.mne.ksu.edu/meng/labhome.html
Kernel spectral clustering with memory effect
NASA Astrophysics Data System (ADS)
Langone, Rocco; Alzate, Carlos; Suykens, Johan A. K.
2013-05-01
Evolving graphs describe many natural phenomena changing over time, such as social relationships, trade markets, metabolic networks, etc. In this framework, performing community detection and analyzing the cluster evolution represent a critical task. Here we propose a new model for this purpose, where the smoothness of the clustering results over time can be considered as valid prior knowledge. It is based on a constrained optimization formulation typical of Least Squares Support Vector Machines (LS-SVM), where the objective function is designed to explicitly incorporate temporal smoothness. The latter allows the model to cluster the current data well and to be consistent with the recent history. We also propose new model selection criteria in order to carefully choose the hyper-parameters of our model, which is a crucial issue for achieving good performance. We successfully test the model on four toy problems and on a real-world network. We also compare our model with Evolutionary Spectral Clustering, which is a state-of-the-art algorithm for community detection in evolving networks, illustrating that kernel spectral clustering with memory effect can achieve better or equal performance.
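The LS-SVM formulation itself is involved; as an illustrative stand-in, temporal smoothness can be grafted onto plain kernel spectral clustering by blending the current kernel matrix with the previous snapshot's (the blending parameter ν and the block-structured test below are assumptions of this sketch, not the paper's model):

```python
import numpy as np

def smooth_spectral_embed(K_curr, K_prev, nu=0.3, k=2):
    # Temporal smoothness prior: blend the current kernel matrix with the
    # previous snapshot's before the usual spectral embedding
    K = (1.0 - nu) * K_curr + nu * K_prev
    d = K.sum(axis=1)
    L = K / np.sqrt(np.outer(d, d))   # normalized affinity D^{-1/2} K D^{-1/2}
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, -k:]               # top-k eigenvectors; cluster the rows (e.g. k-means)
```

Rows of the embedding belonging to the same community coincide for an idealized block-structured kernel, so any downstream clusterer recovers the communities while the ν-blend discourages abrupt membership changes between snapshots.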
Corn kernel oil and corn fiber oil
Technology Transfer Automated Retrieval System (TEKTRAN)
Unlike most edible plant oils that are obtained directly from oil-rich seeds by either pressing or solvent extraction, corn seeds (kernels) have low levels of oil (4%) and commercial corn oil is obtained from the corn germ (embryo) which is an oil-rich portion of the kernel. Commercial corn oil cou...
Andersen, Jens O.; Leganger, Lars E.; Strickland, Michael; Su, Nan
2011-10-15
In this brief report we compare the predictions of a recent next-to-next-to-leading order hard-thermal-loop perturbation theory (HTLpt) calculation of the QCD trace anomaly to available lattice data. We focus on the trace anomaly scaled by T² in two cases: N_f = 0 and N_f = 3. When using the canonical value of μ = 2πT for the renormalization scale, we find that for Yang-Mills theory (N_f = 0) agreement between HTLpt and lattice data for the T²-scaled trace anomaly begins at temperatures on the order of 8T_c, while treating the subtracted piece as an interaction term when including quarks (N_f = 3) agreement begins already at temperatures above 2T_c. In both cases we find that at very high temperatures the T²-scaled trace anomaly increases with temperature in accordance with the predictions of HTLpt.
Recent QCD results from the Tevatron
Pickarz, Henryk; CDF and DO collaboration
1997-02-01
Recent QCD results from the CDF and D0 detectors at the Tevatron proton-antiproton collider are presented. An outlook for future QCD tests at the Tevatron collider is also briefly discussed. 27 refs., 11 figs.
Kenneth Wilson and Lattice QCD
NASA Astrophysics Data System (ADS)
Ukawa, Akira
2015-09-01
We discuss the physics and computation of lattice QCD, a space-time lattice formulation of quantum chromodynamics, and Kenneth Wilson's seminal role in its development. We start with the fundamental issue of confinement of quarks in the theory of the strong interactions, and discuss how lattice QCD provides a framework for understanding this phenomenon. A conceptual issue with lattice QCD is a conflict of space-time lattice with chiral symmetry of quarks. We discuss how this problem is resolved. Since lattice QCD is a non-linear quantum dynamical system with infinite degrees of freedom, quantities which are analytically calculable are limited. On the other hand, it provides an ideal case of massively parallel numerical computations. We review the long and distinguished history of parallel-architecture supercomputers designed and built for lattice QCD. We discuss algorithmic developments, in particular the difficulties posed by the fermionic nature of quarks, and their resolution. The triad of efforts toward better understanding of physics, better algorithms, and more powerful supercomputers have produced major breakthroughs in our understanding of the strong interactions. We review the salient results of this effort in understanding the hadron spectrum, the Cabibbo-Kobayashi-Maskawa matrix elements and CP violation, and quark-gluon plasma at high temperatures. We conclude with a brief summary and a future perspective.
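The elementary object in Wilson's lattice formulation is the plaquette, the 1×1 Wilson loop of link variables, whose trace enters the gauge action. A toy sketch (SU(2) links on a tiny 4D lattice, purely illustrative; real lattice QCD uses SU(3) and importance-sampled configurations):

```python
import numpy as np

def random_su2(rng):
    # SU(2) matrix from a random unit 4-vector a: a0*1 + i a.sigma
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    return np.array([[a[0] + 1j * a[3],  a[2] + 1j * a[1]],
                     [-a[2] + 1j * a[1], a[0] - 1j * a[3]]])

def avg_plaquette(U):
    # U[x0, x1, x2, x3, mu] is the SU(2) link leaving site x in direction mu
    L = U.shape[0]
    total, count = 0.0, 0
    for x in np.ndindex((L,) * 4):
        for mu in range(4):
            for nu in range(mu + 1, 4):
                xmu = tuple((x[d] + (d == mu)) % L for d in range(4))
                xnu = tuple((x[d] + (d == nu)) % L for d in range(4))
                # P = U_mu(x) U_nu(x+mu) U_mu(x+nu)^dag U_nu(x)^dag
                P = (U[x + (mu,)] @ U[xmu + (nu,)]
                     @ U[xnu + (mu,)].conj().T @ U[x + (nu,)].conj().T)
                total += P.trace().real / 2.0
                count += 1
    return total / count
```

For unit links (zero field strength) every plaquette equals 1, while for uncorrelated random links the average trace vanishes; a Monte Carlo simulation at finite coupling interpolates between these limits.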
Threefold Complementary Approach to Holographic QCD
Brodsky, Stanley J.; de Teramond, Guy F.; Dosch, Hans Gunter
2013-12-27
A complementary approach, derived from (a) higher-dimensional anti-de Sitter (AdS) space, (b) light-front quantization and (c) the invariance properties of the full conformal group in one dimension leads to a nonperturbative relativistic light-front wave equation which incorporates essential spectroscopic and dynamical features of hadron physics. The fundamental conformal symmetry of the classical QCD Lagrangian in the limit of massless quarks is encoded in the resulting effective theory. The mass scale for confinement emerges from the isomorphism between the conformal group and SO(2,1). This scale appears in the light-front Hamiltonian by mapping to the evolution operator in the formalism of de Alfaro, Fubini and Furlan, which retains the conformal invariance of the action. Remarkably, the specific form of the confinement interaction and the corresponding modification of AdS space are uniquely determined in this procedure.
Bayesian Kernel Mixtures for Counts
Canale, Antonio; Dunson, David B.
2011-01-01
Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviations from the Poisson. As a broad class of alternative models, we propose to use nonparametric mixtures of rounded continuous kernels. An efficient Gibbs sampler is developed for posterior computation, and a simulation study is performed to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. The methods are illustrated through applications to a developmental toxicity study and marketing data. This article has supplementary material online. PMID:22523437
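The rounded-kernel idea is simple to state concretely: a continuous density is pushed to counts by integrating between integer thresholds. A sketch for a single rounded Gaussian kernel (parameter values are made up), illustrating that, unlike a Poisson, it can have variance below the mean:

```python
import math

def rounded_gaussian_pmf(j, mu, sigma):
    # P(y = j) = Phi((j+1 - mu)/sigma) - Phi((j - mu)/sigma), integer thresholds,
    # with all mass below 0 folded into y = 0 so counts are nonnegative
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    hi = Phi((j + 1 - mu) / sigma)
    lo = 0.0 if j == 0 else Phi((j - mu) / sigma)
    return hi - lo

# Moments of y for a kernel centered at mu = 5 with sigma = 0.3
probs = [rounded_gaussian_pmf(j, 5.0, 0.3) for j in range(50)]
mean = sum(j * p for j, p in enumerate(probs))
var = sum(j * j * p for j, p in enumerate(probs)) - mean ** 2
```

A Dirichlet-process mixture of such kernels then yields a flexible count distribution; the underdispersion shown here (variance well below the mean) is exactly what a mixture of Poissons cannot produce.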
QCD sum rules on the complex Borel plane
NASA Astrophysics Data System (ADS)
Araki, Ken-Ji; Ohtani, Keisuke; Gubler, Philipp; Oka, Makoto
2014-07-01
Borel-transformed QCD sum rules conventionally use a real-valued parameter (the Borel mass) for specifying the exponential weight over which hadronic spectral functions are averaged. In this paper, it is shown that the Borel mass can be generalized to have complex values and that new classes of sum rules can be derived from the resulting averages over the spectral functions. The real and imaginary parts of these novel sum rules turn out to have damped oscillating kernels and potentially contain a larger amount of information on the hadronic spectrum than the real-valued QCD sum rules. As a first practical test, we have formulated complex Borel sum rules for the φ-meson channel and have analyzed them using the maximum entropy method, by which we can extract the most probable spectral function from the sum rules without strong assumptions on its functional form. As a result, it is demonstrated that, compared to earlier studies, the complex-valued sum rules allow us to extract the spectral function with a significantly improved resolution and thus to study more detailed structures of the hadronic spectrum than previously possible.
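The damped oscillating kernels can be made explicit with a short sketch (notation mine, not necessarily the authors'): the Borel sum rule averages the spectral function ρ(s) with the weight e^{-s/M²}, and letting the Borel parameter take complex values splits this into damped oscillating real and imaginary parts.

```latex
% Borel average of the spectral function \rho(s) with Borel mass M:
\int_0^\infty e^{-s/M^2}\,\rho(s)\,ds ,
\qquad \frac{1}{M^2} = a + i\,b , \quad a > 0 .
% A complex Borel mass yields damped oscillating kernels:
e^{-s/M^2} \;=\; e^{-a s}\cos(b s)\;-\;i\,e^{-a s}\sin(b s) .
```

The real and imaginary parts of the sum rule thus probe the spectrum with the kernels e^{-as}cos(bs) and e^{-as}sin(bs) instead of a single monotone exponential.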
LATTICE QCD THERMODYNAMICS WITH WILSON QUARKS.
EJIRI,S.
2007-11-20
We review studies of QCD thermodynamics by lattice QCD simulations with dynamical Wilson quarks. After explaining the basic properties of QCD with Wilson quarks at finite temperature including the phase structure and the scaling properties around the chiral phase transition, we discuss the critical temperature, the equation of state and heavy-quark free energies.
Lattice QCD input for axion cosmology
NASA Astrophysics Data System (ADS)
Berkowitz, Evan; Buchoff, Michael I.; Rinaldi, Enrico
2015-08-01
One intriguing beyond-the-Standard-Model particle is the QCD axion, which could simultaneously provide a solution to the Strong CP Problem and account for some, if not all, of the dark matter density in the Universe. This particle is a pseudo-Nambu-Goldstone boson of the conjectured Peccei-Quinn symmetry of the Standard Model. Its mass and interactions are suppressed by a heavy symmetry-breaking scale, f_a, the value of which is roughly greater than 10^9 GeV (or, conversely, the axion mass, m_a, is roughly less than 10^4 μeV). The density of axions in the Universe, which cannot exceed the relic dark matter density and is a quantity of great interest in axion experiments like ADMX, is a result of the early-Universe interplay between cosmological evolution and the axion mass as a function of temperature. The latter quantity is proportional to the second derivative of the temperature-dependent QCD free energy with respect to the CP-violating phase, θ. However, this quantity is generically nonperturbative, and previous calculations have only employed instanton models at the high temperatures of interest (roughly 1 GeV). In this and future works, we aim to calculate the temperature-dependent axion mass at small θ from first-principles lattice calculations, with controlled statistical and systematic errors. Once calculated, this temperature-dependent axion mass is input for the classical evolution equations of the axion density of the Universe, which is required to be less than or equal to the dark matter density. Due to a variety of lattice systematic effects at the very high temperatures required, we perform a calculation of the leading small-θ cumulant of the theta vacua on large-volume lattices for SU(3) Yang-Mills with high statistics as a first proof of concept, before attempting a full QCD calculation in the future. From these pure-glue results, the misalignment mechanism yields the axion mass bound m_a ≥ (14.6 ± 0.1) μeV when Peccei-Quinn breaking occurs
A Framework for Lattice QCD Calculations on GPUs
Winter, Frank; Clark, M A; Edwards, Robert G; Joo, Balint
2014-08-01
Computing platforms equipped with accelerators like GPUs have proven to provide great computational power. However, exploiting such platforms for existing scientific applications is not a trivial task. Current GPU programming frameworks such as CUDA C/C++ require low-level programming from the developer in order to achieve high-performance code. As a result, porting of applications to GPUs is typically limited to time-dominant algorithms and routines, leaving the remainder unaccelerated, which can open a serious Amdahl's-law issue. The lattice QCD application Chroma allows us to explore a different porting strategy. The layered structure of the software architecture logically separates the data-parallel layer from the application layer. The QCD Data-Parallel software layer provides data types and expressions with stencil-like operations suitable for lattice field theory, and Chroma implements algorithms in terms of this high-level interface. Thus, by porting the low-level layer one can effectively move the whole application in one swing to a different platform. The QDP-JIT/PTX library, the reimplementation of the low-level layer, provides a framework for lattice QCD calculations on the CUDA architecture. The complete software interface is supported, and thus applications can run unaltered on GPU-based parallel computers. This reimplementation was possible due to the availability of a JIT compiler (part of the NVIDIA Linux kernel driver) which translates an assembly-like language (PTX) to GPU code. The expression template technique is used to build PTX code generators, and a software cache manages the GPU memory. This reimplementation allows us to deploy an efficient implementation of the full gauge-generation program with dynamical fermions on large-scale GPU-based machines such as Titan and Blue Waters, accelerating the algorithm by more than an order of magnitude.
J.J. Sakurai Prize for Theoretical Particle Physics: 40 Years of Lattice QCD
NASA Astrophysics Data System (ADS)
Lepage, Peter
2016-03-01
Lattice QCD was invented in 1973-74 by Ken Wilson, who passed away in 2013. This talk will describe the evolution of lattice QCD through the past 40 years with particular emphasis on its first years, and on the past decade, when lattice QCD simulations finally came of age. Thanks to theoretical breakthroughs in the late 1990s and early 2000s, lattice QCD simulations now produce the most accurate theoretical calculations in the history of strong-interaction physics. They play an essential role in high-precision experimental studies of physics within and beyond the Standard Model of Particle Physics. The talk will include a non-technical review of the conceptual ideas behind this revolutionary development in (highly) nonlinear quantum physics, together with a survey of its current impact on theoretical and experimental particle physics, and prospects for the future. Work supported by the National Science Foundation.
Neutron star structure from QCD
NASA Astrophysics Data System (ADS)
Fraga, Eduardo S.; Kurkela, Aleksi; Vuorinen, Aleksi
2016-03-01
In this review article, we argue that our current understanding of the thermodynamic properties of cold QCD matter, originating from first principles calculations at high and low densities, can be used to efficiently constrain the macroscopic properties of neutron stars. In particular, we demonstrate that combining state-of-the-art results from Chiral Effective Theory and perturbative QCD with the current bounds on neutron star masses, the Equation of State of neutron star matter can be obtained to an accuracy better than 30% at all densities.
The supercritical pomeron in QCD.
White, A. R.
1998-06-29
Deep-inelastic diffractive scaling violations have provided fundamental insight into the QCD pomeron, suggesting a single gluon inner structure rather than that of a perturbative two-gluon bound state. This talk outlines a derivation of a high-energy, transverse momentum cut-off, confining solution of QCD. The pomeron, in first approximation, is a single reggeized gluon plus a ''wee parton'' component that compensates for the color and particle properties of the gluon. This solution corresponds to a super-critical phase of Reggeon Field Theory.
QCD inequalities for hadron interactions.
Detmold, William
2015-06-01
We derive generalizations of the Weingarten-Witten QCD mass inequalities for particular multihadron systems. For systems of any number of identical pseudoscalar mesons of maximal isospin, these inequalities prove that near-threshold interactions between the constituent mesons must be repulsive and that no bound states can form in these channels. Similar constraints in less symmetric systems are also extracted. These results are compatible with experimental results (where known) and recent lattice QCD calculations, and also lead to a more stringent bound on the nucleon mass than previously derived, m_N ≥ (3/2) m_π. PMID:26196617
Yun, J.C.
1990-10-10
In this paper we report recent QCD analysis with the new data taken from the CDF detector. CDF recorded an integrated luminosity of 4.4 nb⁻¹ during the 1988--1989 run at a center of mass system (CMS) energy of 1.8 TeV. The major topics of this report are inclusive jet, dijet, trijet and direct photon analysis. These measurements are compared with QCD predictions. For the inclusive jet and dijet analyses, tests of quark compositeness are emphasized. 11 refs., 6 figs.
QCD corrections to triboson production
NASA Astrophysics Data System (ADS)
Lazopoulos, Achilleas; Melnikov, Kirill; Petriello, Frank
2007-07-01
We present a computation of the next-to-leading order QCD corrections to the production of three Z bosons at the Large Hadron Collider. We calculate these corrections using a completely numerical method that combines sector decomposition to extract infrared singularities with contour deformation of the Feynman parameter integrals to avoid internal loop thresholds. The NLO QCD corrections to pp→ZZZ are approximately 50% and are badly underestimated by the leading order scale dependence. However, the kinematic dependence of the corrections is minimal in phase space regions accessible at leading order.
Lattice QCD clusters at Fermilab
Holmgren, D.; Mackenzie, Paul B.; Singh, Anitoj; Simone, Jim; /Fermilab
2004-12-01
As part of the DOE SciDAC ''National Infrastructure for Lattice Gauge Computing'' project, Fermilab builds and operates production clusters for lattice QCD simulations. This paper will describe these clusters. The design of lattice QCD clusters requires careful attention to balancing memory bandwidth, floating point throughput, and network performance. We will discuss our investigations of various commodity processors, including Pentium 4E, Xeon, Opteron, and PPC970. We will also discuss our early experiences with the emerging Infiniband and PCI Express architectures. Finally, we will present our predictions and plans for future clusters.
Glueball decay in holographic QCD
Hashimoto, Koji; Tan, C.-I; Terashima, Seiji
2008-04-15
Using holographic QCD based on D4-branes and D8-anti-D8-branes, we have computed couplings of glueballs to light mesons. We describe glueball decay by explicitly calculating its decay widths and branching ratios. Interestingly, while glueballs remain less well understood both theoretically and experimentally, our results are found to be consistent with the experimental data for the scalar glueball candidate f₀(1500). More generally, holographic QCD predicts that decay of any glueball to 4π⁰ is suppressed, and that mixing of the lightest glueball with quark-antiquark mesons is small.
QCD: Challenges for the future
Burrows, P.; Dawson, S.; Orr, L.; Smith, W.H.
1997-01-13
Despite many experimental verifications of the correctness of our basic understanding of QCD, there remain numerous open questions in strong interaction physics, and we focus on the role of future colliders in addressing these questions. We discuss possible advances in the measurement of α_s, in the study of parton distribution functions, and in the understanding of low-x physics at present colliders and potential new facilities. We also touch briefly on the role of spin physics in advancing our understanding of QCD.
Nucleon Structure from Lattice QCD
David Richards
2007-09-05
Recent advances in lattice field theory, in computer technology and in chiral perturbation theory have enabled lattice QCD to emerge as a powerful quantitative tool in understanding hadron structure. I describe recent progress in the computation of the nucleon form factors and moments of parton distribution functions, before proceeding to describe lattice studies of the Generalized Parton Distributions (GPDs). In particular, I show how lattice studies of GPDs contribute to building a three-dimensional picture of the proton. I conclude by describing the prospects for studying the structure of resonances from lattice QCD.
Putting Priors in Mixture Density Mercer Kernels
NASA Technical Reports Server (NTRS)
Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd
2004-01-01
This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite-dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn kernel functions directly from data, rather than using predefined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms, like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allow AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.
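One common way to build a Mercer kernel from a mixture density, sketched here with illustrative fixed parameters (not fitted to SDSS data, and not necessarily the paper's exact construction), is to take the inner product of posterior membership vectors: K(x, y) = Σ_m P(m|x) P(m|y). Being an inner product, such a kernel is automatically symmetric and positive semi-definite.

```python
import numpy as np

# Illustrative fixed 1-D Gaussian mixture (NOT fitted to SDSS data).
weights = np.array([0.5, 0.5])
means = np.array([-2.0, 2.0])
sds = np.array([1.0, 1.0])

def posterior(x):
    # P(component m | x): rows are posterior membership vectors summing to 1.
    x = np.asarray(x, dtype=float)[:, None]
    dens = weights * np.exp(-0.5 * ((x - means) / sds) ** 2) \
        / (sds * np.sqrt(2.0 * np.pi))
    return dens / dens.sum(axis=1, keepdims=True)

def mixture_kernel(x, y):
    # K(x_i, y_j) = sum_m P(m | x_i) P(m | y_j): an inner product of
    # posterior vectors, hence a valid (Mercer) kernel.
    return posterior(x) @ posterior(y).T

rng = np.random.default_rng(1)
X = rng.normal(size=20)
K = mixture_kernel(X, X)
```

Prior knowledge enters through the choice of mixture (its components and Bayesian priors), which shapes which points the kernel regards as similar.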
Huang, Lulu; Massa, Lou
2010-01-01
The Kernel Energy Method (KEM) provides a way to calculate the ab initio energy of very large biological molecules. The results are accurate, and the computational time is reduced. However, by using a list of double kernel interactions, a significant additional reduction of computational effort may be achieved while still retaining ab initio accuracy. A numerical comparison of the indices that name the known double interactions in question allows one to list higher-order interactions having the property of topological continuity within the full molecule of interest. When that list of interactions is unpacked as a kernel expansion, which weights the relative importance of each kernel in an expression for the total molecular energy, the result is high accuracy and a further significant reduction in computational effort. A KEM molecular energy calculation based upon the HF/STO-3G chemical model is applied to the protein insulin as an illustration. PMID:21243065
NASA Astrophysics Data System (ADS)
Boz, Tamer; Giudice, Pietro; Hands, Simon; Skullerud, Jon-Ivar; Williams, Anthony G.
2016-01-01
QCD at high chemical potential has interesting properties such as deconfinement of quarks. Two-color QCD, which enables numerical simulations on the lattice, constitutes a laboratory to study QCD at high chemical potential. Among the interesting properties of two-color QCD at high density is the diquark condensation, for which we present recent results obtained on a finer lattice compared to previous studies. The quark propagator in two-color QCD at non-zero chemical potential is referred to as the Gor'kov propagator. We express the Gor'kov propagator in terms of form factors and present recent lattice simulation results.
Kernel map compression for speeding the execution of kernel-based methods.
Arif, Omar; Vela, Patricio A
2011-06-01
The use of Mercer kernel methods in statistical learning theory provides for strong learning capabilities, as seen in kernel principal component analysis and support vector machines. Unfortunately, after learning, the computational complexity of execution through a kernel is of the order of the size of the training set, which is quite large for many applications. This paper proposes a two-step procedure for arriving at a compact and computationally efficient execution procedure. After learning in the kernel space, the proposed extension exploits the universal approximation capabilities of generalized radial basis function neural networks to efficiently approximate and replace the projections onto the empirical kernel map used during execution. Sample applications demonstrate significant compression of the kernel representation with graceful performance loss. PMID:21550884
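The compression step can be sketched as follows. This is a hedged toy version, not the paper's exact RBF-network training: the full kernel expansion over all N training points is replaced by a smaller expansion over M centers, whose coefficients are refit by plain least squares so the two agree on the training inputs.

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    # Gaussian RBF Gram matrix between row-vector sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 2))    # stand-in "training set"
alpha = rng.normal(size=200)     # stand-in learned expansion coefficients

# Full kernel expansion f(x) = sum_i alpha_i k(x_i, x), evaluated on X.
f_full = rbf(X, X) @ alpha

# Compress: keep M centers and refit coefficients so the small expansion
# reproduces the full one on the training inputs (least-squares fit).
M = 20
centers = X[:M]
beta, *_ = np.linalg.lstsq(rbf(X, centers), f_full, rcond=None)
f_small = rbf(X, centers) @ beta
```

Execution cost then scales with M rather than the training-set size N, at the price of whatever approximation error the reduced expansion incurs.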
Constructing perturbation theory kernels for large-scale structure in generalized cosmologies
NASA Astrophysics Data System (ADS)
Taruya, Atsushi
2016-07-01
We present a simple numerical scheme for perturbation theory (PT) calculations of large-scale structure. Solving the evolution equations for perturbations numerically, we construct the PT kernels as building blocks of statistical calculations, from which the power spectrum and/or correlation function can be systematically computed. The scheme is especially applicable to the generalized structure formation including modified gravity, in which the analytic construction of PT kernels is intractable. As an illustration, we show several examples for power spectrum calculations in f(R) gravity and ΛCDM models.
Renormalization in Coulomb gauge QCD
NASA Astrophysics Data System (ADS)
Andraši, A.; Taylor, John C.
2011-04-01
In the Coulomb gauge of QCD, the Hamiltonian contains a non-linear Christ-Lee term, which may alternatively be derived from a careful treatment of ambiguous Feynman integrals at 2-loop order. We investigate how and if UV divergences from higher order graphs can be consistently absorbed by renormalization of the Christ-Lee term. We find that they cannot.
QCD Phase Transitions, Volume 15
Schaefer, T.; Shuryak, E.
1999-03-20
The title of the workshop, ''The QCD Phase Transitions'', in fact happened to be too narrow for its real contents. It would be more accurate to say that it was devoted to different phases of QCD and QCD-related gauge theories, with strong emphasis on discussion of the underlying non-perturbative mechanisms which manifest themselves as all those phases. Before we go to specifics, let us emphasize one important aspect of the present status of non-perturbative Quantum Field Theory in general. It remains true that its studies do not get attention proportional to the intellectual challenge they deserve, and that the theorists working on it remain very fragmented. The efforts to create a Theory of Everything, including Quantum Gravity, have attracted the lion's share of attention and young talent. Nevertheless, in the last few years there has also been tremendous progress and even some shift of attention toward emphasis on the unity of non-perturbative phenomena. For example, we have seen some efforts to connect the lessons from recent progress in supersymmetric theories with those in QCD, as derived from phenomenology and the lattice. Another example is the Maldacena conjecture and related developments, which connect three things together: string theory, supergravity, and the (N=4) supersymmetric gauge theory. Although the progress mentioned is remarkable by itself, if we listened to each other more we might have a chance to strengthen the field and reach a better understanding of the spectacular non-perturbative physics.
Lattice QCD in Background Fields
William Detmold, Brian Tiburzi, Andre Walker-Loud
2009-06-01
Electromagnetic properties of hadrons can be computed by lattice simulations of QCD in background fields. We demonstrate new techniques for the investigation of charged hadron properties in electric fields. Our current calculations employ large electric fields, motivating us to analyze chiral dynamics in strong QED backgrounds, and subsequently uncover surprising non-perturbative effects present at finite volume.
Basics of QCD perturbation theory
Soper, D.E.
1997-06-01
This is an introduction to the use of QCD perturbation theory, emphasizing generic features of the theory that enable one to separate short-time and long-time effects. The author also covers some important classes of applications: electron-positron annihilation to hadrons, deeply inelastic scattering, and hard processes in hadron-hadron collisions. 31 refs., 38 figs.
Experimenting with Langevin lattice QCD
Gavai, R.V.; Potvin, J.; Sanielevici, S.
1987-05-01
We report on the status of our investigations of the effects of systematic errors upon the practical merits of Langevin updating in full lattice QCD. We formulate some rules for the safe use of this updating procedure and some observations on problems which may be common to all approximate fermion algorithms.
Seven topics in perturbative QCD
Buras, A.J.
1980-09-01
The following topics of perturbative QCD are discussed: (1) deep inelastic scattering; (2) higher order corrections to e⁺e⁻ annihilation, to photon structure functions and to quarkonia decays; (3) higher order corrections to fragmentation functions and to various semi-inclusive processes; (4) higher twist contributions; (5) exclusive processes; (6) transverse momentum effects; (7) jet and photon physics.
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2013 CFR
2013-01-01
... weight of delivery 10,000 10,000 2. Percent of edible kernel weight 53.0 84.0 3. Less weight loss in... 7 Agriculture 8 2013-01-01 2013-01-01 false Adjusted kernel weight. 981.401 Section 981.401... Administrative Rules and Regulations § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel...
7 CFR 51.2296 - Three-fourths half kernel.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Three-fourths half kernel. 51.2296 Section 51.2296... STANDARDS) United States Standards for Shelled English Walnuts (Juglans Regia) Definitions § 51.2296 Three-fourths half kernel. Three-fourths half kernel means a portion of a half of a kernel which has more...
UPDATE OF GRAY KERNEL DISEASE OF MACADAMIA - 2006
Technology Transfer Automated Retrieval System (TEKTRAN)
Gray kernel is an important disease of macadamia that affects the quality of kernels with gray discoloration and a permeating, foul odor that can render entire batches of nuts unmarketable. We report on the successful production of gray kernel in raw macadamia kernels artificially inoculated with s...
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2011 CFR
2011-01-01
... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams;...
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2010 CFR
2010-01-01
... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams;...
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2014 CFR
2014-01-01
... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams;...
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2012 CFR
2012-01-01
... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams;...
7 CFR 51.2125 - Split or broken kernels.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Split or broken kernels. 51.2125 Section 51.2125... STANDARDS) United States Standards for Grades of Shelled Almonds Definitions § 51.2125 Split or broken kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will...
7 CFR 51.2125 - Split or broken kernels.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Split or broken kernels. 51.2125 Section 51.2125... STANDARDS) United States Standards for Grades of Shelled Almonds Definitions § 51.2125 Split or broken kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will...
KITTEN Lightweight Kernel 0.1 Beta
Energy Science and Technology Software Center (ESTSC)
2007-12-12
The Kitten Lightweight Kernel is a simplified OS (operating system) kernel that is intended to manage a compute node's hardware resources. It provides a set of mechanisms to user-level applications for utilizing hardware resources (e.g., allocating memory, creating processes, accessing the network). Kitten is much simpler than general-purpose OS kernels, such as Linux or Windows, but includes all of the essential functionality needed to support HPC (high-performance computing) MPI, PGAS and OpenMP applications. Kitten provides unique capabilities such as physically contiguous application memory, transparent large page support, and noise-free tick-less operation, which enable HPC applications to obtain greater efficiency and scalability than with general-purpose OS kernels.
Biological sequence classification with multivariate string kernels.
Kuksa, Pavel P
2013-01-01
String kernel-based machine learning methods have yielded great success in practical tasks of structured/sequential data analysis. They often exhibit state-of-the-art performance on many practical tasks of sequence analysis such as biological sequence classification, remote homology detection, or protein superfamily and fold prediction. However, typical string kernel methods rely on the analysis of discrete 1D string data (e.g., DNA or amino acid sequences). In this paper, we address the multiclass biological sequence classification problems using multivariate representations in the form of sequences of feature vectors (as in biological sequence profiles, or sequences of individual amino acid physicochemical descriptors) and a class of multivariate string kernels that exploit these representations. On three protein sequence classification tasks, the proposed multivariate representations and kernels show significant 15-20 percent improvements compared to existing state-of-the-art sequence classification methods. PMID:24384708
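A multivariate string kernel over feature-vector sequences can be sketched as follows (a toy version under my own assumptions, not the authors' exact construction): compare every pair of length-k windows of two sequences with an RBF similarity and sum. Since this is a sum of inner products in the RBF feature space, it is a valid positive semi-definite kernel.

```python
import numpy as np

def mv_string_kernel(S1, S2, k=3, gamma=0.5):
    # Sum an RBF similarity over every pair of length-k windows of the two
    # feature-vector sequences (arrays of shape (length, n_features)).
    total = 0.0
    for i in range(len(S1) - k + 1):
        w1 = S1[i:i + k].ravel()
        for j in range(len(S2) - k + 1):
            w2 = S2[j:j + k].ravel()
            total += np.exp(-gamma * ((w1 - w2) ** 2).sum())
    return total

rng = np.random.default_rng(3)
A = rng.normal(size=(10, 4))  # e.g. 10 residues x 4 physico-chemical features
B = rng.normal(size=(12, 4))
kAB = mv_string_kernel(A, B)
```

Replacing the discrete k-mer match of a classical spectrum kernel with a soft vector similarity is what lets the kernel operate on profiles or per-residue descriptors rather than raw symbols.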
Biological Sequence Analysis with Multivariate String Kernels.
Kuksa, Pavel P
2013-03-01
String kernel-based machine learning methods have yielded great success in practical tasks of structured/sequential data analysis. They often exhibit state-of-the-art performance on many practical tasks of sequence analysis such as biological sequence classification, remote homology detection, or protein superfamily and fold prediction. However, typical string kernel methods rely on analysis of discrete one-dimensional (1D) string data (e.g., DNA or amino acid sequences). In this work we address the multi-class biological sequence classification problems using multivariate representations in the form of sequences of feature vectors (as in biological sequence profiles, or sequences of individual amino acid physico-chemical descriptors) and a class of multivariate string kernels that exploit these representations. On a number of protein sequence classification tasks, the proposed multivariate representations and kernels show significant 15-20% improvements compared to existing state-of-the-art sequence classification methods. PMID:23509193
Variational Dirichlet Blur Kernel Estimation.
Zhou, Xu; Mateos, Javier; Zhou, Fugen; Molina, Rafael; Katsaggelos, Aggelos K
2015-12-01
Blind image deconvolution involves two key objectives: 1) latent image estimation and 2) blur estimation. For latent image estimation, we propose a fast deconvolution algorithm, which uses an image prior of nondimensional Gaussianity measure to enforce sparsity and an undetermined boundary condition methodology to reduce boundary artifacts. For blur estimation, a linear inverse problem with normalization and nonnegative constraints must be solved. However, the normalization constraint is ignored in many blind image deblurring methods, mainly because it makes the problem less tractable. In this paper, we show that the normalization constraint can be very naturally incorporated into the estimation process by using a Dirichlet distribution to approximate the posterior distribution of the blur. Making use of the variational Dirichlet approximation, we provide a blur posterior approximation that considers the uncertainty of the estimate and removes noise in the estimated kernel. Experiments with synthetic and real data demonstrate that the proposed method is highly competitive with state-of-the-art blind image restoration methods. PMID:26390458
Weighted Bergman Kernels and Quantization
NASA Astrophysics Data System (ADS)
Engliš, Miroslav
Let Ω be a bounded pseudoconvex domain in C^N, φ, ψ two positive functions on Ω such that -log ψ, -log φ are plurisubharmonic, and z ∈ Ω a point at which -log φ is smooth and strictly plurisubharmonic. We show that as k → ∞, the Bergman kernels with respect to the weights φ^k ψ have an asymptotic expansion
Axion cosmology, lattice QCD and the dilute instanton gas
NASA Astrophysics Data System (ADS)
Borsanyi, Sz.; Dierigl, M.; Fodor, Z.; Katz, S. D.; Mages, S. W.; Nogradi, D.; Redondo, J.; Ringwald, A.; Szabo, K. K.
2016-01-01
Axions are one of the most attractive dark matter candidates. The evolution of their number density in the early universe can be determined by calculating the topological susceptibility χ(T) of QCD as a function of the temperature. Lattice QCD provides an ab initio technique to carry out such a calculation. A full result needs two ingredients: physical quark masses and a controlled continuum extrapolation from non-vanishing to zero lattice spacings. We determine χ(T) in the quenched framework (infinitely large quark masses) and extrapolate its values to the continuum limit. The results are compared with the prediction of the dilute instanton gas approximation (DIGA). A nice agreement is found for the temperature dependence, whereas the overall normalization of the DIGA result still differs from the non-perturbative continuum extrapolated lattice results by a factor of order ten. We discuss the consequences of our findings for the prediction of the amount of axion dark matter.
TICK: Transparent Incremental Checkpointing at Kernel Level
Energy Science and Technology Software Center (ESTSC)
2004-10-25
TICK is a software package implemented in Linux 2.6 that allows user processes to be saved and restored, without any change to the user code or binary. With TICK a process can be suspended by the Linux kernel upon receiving an interrupt and saved in a file. This file can later be thawed on another computer running Linux (potentially the same computer). TICK is implemented as a Linux kernel module, in the Linux version 2.6.5
PET Image Reconstruction Using Kernel Method
Wang, Guobao; Qi, Jinyi
2014-01-01
Image reconstruction from low-count PET projection data is challenging because the inverse problem is ill-posed. Prior information can be used to improve image quality. Inspired by the kernel methods in machine learning, this paper proposes a kernel based method that models PET image intensity in each pixel as a function of a set of features obtained from prior information. The kernel-based image model is incorporated into the forward model of PET projection data and the coefficients can be readily estimated by the maximum likelihood (ML) or penalized likelihood image reconstruction. A kernelized expectation-maximization (EM) algorithm is presented to obtain the ML estimate. Computer simulations show that the proposed approach can achieve better bias versus variance trade-off and higher contrast recovery for dynamic PET image reconstruction than the conventional maximum likelihood method with and without post-reconstruction denoising. Compared with other regularization-based methods, the kernel method is easier to implement and provides better image quality for low-count data. Application of the proposed kernel method to a 4D dynamic PET patient dataset showed promising results. PMID:25095249
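The kernelized EM iteration described above admits a compact sketch. The following Python implementation is illustrative only, not the authors' code; the system matrix P, kernel matrix K, and sinogram y are placeholder names:

```python
import numpy as np

def kernel_em(P, K, y, n_iter=50, eps=1e-12):
    """Kernelized MLEM: image x = K @ alpha, data y ~ Poisson(P @ x).

    P : (n_bins, n_pixels) system matrix
    K : (n_pixels, n_pixels) kernel matrix built from prior features
    y : measured projection (sinogram) counts
    """
    alpha = np.ones(K.shape[1])
    sens = K.T @ (P.T @ np.ones(P.shape[0]))      # sensitivity term
    for _ in range(n_iter):
        ybar = P @ (K @ alpha) + eps              # expected counts
        alpha *= (K.T @ (P.T @ (y / ybar))) / (sens + eps)
    return K @ alpha                              # reconstructed image
```

Setting K to the identity recovers the conventional MLEM update, which is what makes the kernelized variant easy to retrofit into an existing EM reconstruction code.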
On the interface between perturbative and nonperturbative QCD
NASA Astrophysics Data System (ADS)
Deur, Alexandre; Brodsky, Stanley J.; de Téramond, Guy F.
2016-06-01
The QCD running coupling αs(Q²) sets the strength of the interactions of quarks and gluons as a function of the momentum transfer Q. The Q² dependence of the coupling is required to describe hadronic interactions at both long and short distances. In this article we adopt the light-front holographic approach to strongly-coupled QCD, a formalism which incorporates confinement, predicts the spectroscopy of hadrons composed of light quarks, and describes the low-Q² analytic behavior of the strong coupling αs(Q²). The high-Q² dependence of the coupling αs(Q²) is specified by perturbative QCD and its renormalization group equation. The matching of the high and low Q² regimes of αs(Q²) then determines the scale Q0 which sets the interface between perturbative and nonperturbative hadron dynamics. The value of Q0 can be used to set the factorization scale for DGLAP evolution of hadronic structure functions and the ERBL evolution of distribution amplitudes. We discuss the scheme-dependence of the value of Q0 and the infrared fixed point of the QCD coupling. Our analysis is carried out for the MS-bar, g1, MOM and V renormalization schemes. Our results show that the discrepancies in the value of αs at large distance seen in the literature can be explained by different choices of renormalization scheme. We also provide the formulae to compute αs(Q²) over the entire range of space-like momentum transfer for the different renormalization schemes discussed in this article.
Evaluating the Gradient of the Thin Wire Kernel
NASA Technical Reports Server (NTRS)
Wilton, Donald R.; Champagne, Nathan J.
2008-01-01
Recently, a formulation for evaluating the thin wire kernel was developed that employed a change of variable to smooth the kernel integrand, canceling the singularity in the integrand. Hence, the typical expansion of the wire kernel in a series for use in the potential integrals is avoided. The new expression for the kernel is exact and may be used directly to determine the gradient of the wire kernel, which consists of components that are parallel and radial to the wire axis.
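The effect of a singularity-cancelling change of variable can be illustrated on a model thin-wire integrand. This is a schematic example, not the authors' exact kernel or gradient formulation: for ∫ dz/√(z² + a²) the substitution z = a·sinh(u) turns the nearly singular integrand into a constant.

```python
import math

def wire_integral_naive(L, a, n=1000):
    """Midpoint rule applied directly to the nearly singular integrand
    1/sqrt(z^2 + a^2) over [-L, L]; it peaks sharply as the radius a -> 0."""
    h = 2 * L / n
    return sum(h / math.sqrt((-L + (k + 0.5) * h) ** 2 + a ** 2)
               for k in range(n))

def wire_integral_smoothed(L, a):
    """Change of variable z = a*sinh(u): the integrand becomes identically 1,
    so the integral equals the length of the u-interval, 2*asinh(L/a)."""
    return 2 * math.asinh(L / a)
```

The smoothed form is exact and trivially differentiable, which mirrors the abstract's point that the exact kernel expression may be used directly to obtain its gradient.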
Calculation of the nucleon axial charge in lattice QCD
D. B. Renner; R. G. Edwards; G. Fleming; Ph. Hagler; J. W. Negele; K. Orginos; A. V. Pochinsky; D. G. Richards; W. Schroers
2006-09-01
Protons and neutrons have a rich structure in terms of their constituents, the quarks and gluons. Understanding this structure requires solving Quantum Chromodynamics (QCD). However QCD is extremely complicated, so we must numerically solve the equations of QCD using a method known as lattice QCD. Here we describe a typical lattice QCD calculation by examining our recent computation of the nucleon axial charge.
QCD with chiral 4-fermion interactions (χQCD)
Kogut, J.B.; Sinclair, D.K.
1996-10-01
Lattice QCD with staggered quarks is augmented by the addition of a chiral 4-fermion interaction. The Dirac operator is now non-singular at m_q = 0, decreasing the computing requirements for light quark simulations by at least an order of magnitude. We present preliminary results from simulations at finite and zero temperatures for m_q = 0, with and without gauge fields. Chiral QCD enables simulations at physical u and d quark masses with at least an order of magnitude saving in CPU time. It also enables simulations with zero quark masses, which is important for determining the equation of state. A renormalization group analysis will be needed to continue to the continuum limit. 7 refs., 2 figs.
Evaluating and Interpreting the Chemical Relevance of the Linear Response Kernel for Atoms.
Boisdenghien, Zino; Van Alsenoy, Christian; De Proft, Frank; Geerlings, Paul
2013-02-12
Although a lot of work has been done on the chemical relevance of the atom-condensed linear response kernel χAB regarding inductive, mesomeric, and hyperconjugative effects as well as (anti)aromaticity of molecules, the same cannot be said about its not condensed form χ(r,r'). Using a single Slater determinant KS type ansatz involving second order perturbation theory, we set out to investigate the linear response kernel for a number of judiciously chosen closed (sub)shell atoms throughout the periodic table and its relevance, e.g., in relation to the shell structure and polarizability. The numerical results are to the best of our knowledge the first systematic study on this noncondensed linear response function, the results for He and Be being in line with earlier work by Savin. Different graphical representations of the kernel are presented and discussed. Moreover, a frontier orbital approach has been tested illustrating the sensitivity of the nonintegrated kernel to the nodal structure of the orbitals. As a test of our method, a numerical integration of the linear response kernel was performed, yielding an accuracy of 10(-4). We also compare calculated values of the polarizability tensor and their evolution throughout the periodic table to high-level values found in the literature. PMID:26588743
Geiger, K.; Longacre, R.; Srivastava, D.K.
1999-02-01
VNI is a general-purpose Monte-Carlo event-generator, which includes the simulation of lepton-lepton, lepton-hadron, lepton-nucleus, hadron-hadron, hadron-nucleus, and nucleus-nucleus collisions. It uses the real-time evolution of parton cascades in conjunction with a self-consistent hadronization scheme, as well as the development of hadron cascades after hadronization. The causal evolution from a specific initial state (determined by the colliding beam particles) is followed by the time-development of the phase-space densities of partons, pre-hadronic parton clusters, and final-state hadrons, in position-space, momentum-space and color-space. The parton-evolution is described in terms of a space-time generalization of the familiar momentum-space description of multiple (semi)hard interactions in QCD, involving 2 → 2 parton collisions, 2 → 1 parton fusion processes, and 1 → 2 radiation processes. The formation of color-singlet pre-hadronic clusters and their decays into hadrons, on the other hand, is treated by using a spatial criterion motivated by confinement and a non-perturbative model for hadronization. Finally, the cascading of produced prehadronic clusters and of hadrons includes a multitude of 2 → n processes, and is modeled in parallel to the parton cascade description. This paper gives a brief review of the physics underlying VNI, as well as a detailed description of the program itself. The latter program description emphasizes easy-to-use pragmatism and explains how to use the program (including simple examples), annotates input and control parameters, and discusses output data provided by it.
anQCD: Fortran programs for couplings at complex momenta in various analytic QCD models
NASA Astrophysics Data System (ADS)
Ayala, César; Cvetič, Gorazd
2016-02-01
We provide three Fortran programs which evaluate the QCD analytic (holomorphic) couplings A_ν(Q²) for complex or real squared momenta Q². These couplings are holomorphic analogs of the powers a(Q²)^ν of the underlying perturbative QCD (pQCD) coupling a(Q²) ≡ αs(Q²)/π, in three analytic QCD models (anQCD): Fractional Analytic Perturbation Theory (FAPT), Two-delta analytic QCD (2δanQCD), and Massive Perturbation Theory (MPT). The index ν can be noninteger. The provided programs do basically the same job as the Mathematica package anQCD.m published by us previously (Ayala and Cvetič, 2015), but are now written in Fortran.
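For orientation, the underlying pQCD coupling a(Q²) = αs(Q²)/π at one loop has the closed form a(Q²) = 1/(β₀ ln(Q²/Λ²)), which is already well defined for complex Q² away from the cut; the analytic couplings A_ν of anQCD are far more elaborate constructions. A minimal sketch, with illustrative values for Λ² and nf:

```python
import cmath

def a_one_loop(Q2, Lambda2=0.1, nf=3):
    """One-loop pQCD coupling a(Q^2) = alpha_s(Q^2)/pi for real or complex
    squared momenta Q^2 (same units as Lambda2, e.g. GeV^2).
    beta0 = (33 - 2*nf)/12 in the a = alpha_s/pi normalization."""
    beta0 = (33 - 2 * nf) / 12.0
    return 1.0 / (beta0 * cmath.log(Q2 / Lambda2))
```

Evaluating at negative or complex Q² yields a complex coupling, which is the regime the Fortran programs are designed to handle with holomorphic (cut-free) behavior.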
Dru Renner
2012-04-01
Precision computation of hadronic physics with lattice QCD is becoming feasible. The last decade has seen percent-level calculations of many simple properties of mesons, and the last few years have seen calculations of baryon masses, including the nucleon mass, accurate to a few percent. As computational power increases and algorithms advance, the precise calculation of a variety of more demanding hadronic properties will become realistic. With this in mind, I discuss the current lattice QCD calculations of generalized parton distributions with an emphasis on the prospects for well-controlled calculations for these observables as well. I will do this by way of several examples: the pion and nucleon form factors and moments of the nucleon parton and generalized-parton distributions.
Quark eigenmodes and lattice QCD
NASA Astrophysics Data System (ADS)
Liu, Guofeng
In this thesis, we study a number of topics in lattice QCD through the low-lying quark eigenmodes in the domain wall fermion (DWF) formulation in the quenched approximation. Specifically, we present results for the chiral condensate measured from these eigenmodes; we investigate the QCD vacuum structure by looking at the correlation between the magnitude of the chirality density, |ψ†(x)γ₅ψ(x)|, and the normal density, ψ†(x)ψ(x), for these states; we study the behavior of the DWF formulation at large quark masses by investigating the mass dependence of the eigenvalues of the physical four-dimensional states as well as the bulk, five-dimensional states.
LATTICE QCD AT FINITE DENSITY.
SCHMIDT, C.
2006-07-23
I discuss different approaches to finite density lattice QCD. In particular, I focus on the structure of the phase diagram and discuss attempts to determine the location of the critical end-point. Recent results on the transition line as a function of the chemical potential (T_c(μ_q)) are reviewed. Along the transition line, hadronic fluctuations have been calculated, which can be used to characterize properties of the Quark Gluon Plasma and eventually can also help to identify the location of the critical end-point in the QCD phase diagram on the lattice and in heavy ion experiments. Furthermore, I comment on the structure of the phase diagram at large μ_q.
Innovations in Lattice QCD Algorithms
Konstantinos Orginos
2006-06-25
Lattice QCD calculations demand a substantial amount of computing power in order to achieve the high precision results needed to better understand the nature of strong interactions, assist experiment to discover new physics, and predict the behavior of a diverse set of physical systems ranging from the proton itself to astrophysical objects such as neutron stars. However, computer power alone is clearly not enough to tackle the calculations we need to be doing today. A steady stream of recent algorithmic developments has made an important impact on the kinds of calculations we can currently perform. In this talk I review these algorithms and their impact on the nature of lattice QCD calculations performed today.
Sudakov safety in perturbative QCD
NASA Astrophysics Data System (ADS)
Larkoski, Andrew J.; Marzani, Simone; Thaler, Jesse
2015-06-01
Traditional calculations in perturbative quantum chromodynamics (pQCD) are based on an order-by-order expansion in the strong coupling αs. Observables that are calculable in this way are known as "safe." Recently, a class of unsafe observables was discovered that do not have a valid αs expansion but are nevertheless calculable in pQCD using all-orders resummation. These observables are called "Sudakov safe" since singularities at each αs order are regulated by an all-orders Sudakov form factor. In this paper, we give a concrete definition of Sudakov safety based on conditional probability distributions, and we study a one-parameter family of momentum sharing observables that interpolate between the safe and unsafe regimes. The boundary between these regimes is particularly interesting, as the resulting distribution can be understood as the ultraviolet fixed point of a generalized fragmentation function, yielding a leading behavior that is independent of αs.
Huston, J.; CDF Collaboration
1994-01-01
CDF has recently concluded a very successful 1992-93 data run in which an integrated luminosity of 21.3 pb⁻¹ was written to tape. The large data sample allows for a greater discovery potential for new phenomena and for better statistical and systematic precision in analysis of conventional physics. This paper summarizes some of the new results from QCD analyses for this run.
Brodsky, Stanley J.; de Teramond, Guy F.
2007-02-21
The AdS/CFT correspondence between string theory in AdS space and conformal field theories in physical spacetime leads to an analytic, semi-classical model for strongly-coupled QCD which has scale invariance and dimensional counting at short distances and color confinement at large distances. Although QCD is not conformally invariant, one can nevertheless use the mathematical representation of the conformal group in five-dimensional anti-de Sitter space to construct a first approximation to the theory. The AdS/CFT correspondence also provides insights into the inherently non-perturbative aspects of QCD, such as the orbital and radial spectra of hadrons and the form of hadronic wavefunctions. In particular, we show that there is an exact correspondence between the fifth-dimensional coordinate of AdS space z and a specific impact variable ζ which measures the separation of the quark and gluonic constituents within the hadron in ordinary space-time. This connection allows one to compute the analytic form of the frame-independent light-front wavefunctions, the fundamental entities which encode hadron properties and allow the computation of decay constants, form factors, and other exclusive scattering amplitudes. New relativistic light-front equations in ordinary space-time are found which reproduce the results obtained using the 5-dimensional theory. The effective light-front equations possess remarkable algebraic structures and integrability properties. Since they are complete and orthonormal, the AdS/CFT model wavefunctions can also be used as a basis for the diagonalization of the full light-front QCD Hamiltonian, thus systematically improving the AdS/CFT approximation.
Yamamoto, Arata
2016-07-29
We propose the lattice QCD calculation of the Berry phase, which is defined by the ground state of a single fermion. We perform the ground-state projection of a single-fermion propagator, construct the Berry link variable on a momentum-space lattice, and calculate the Berry phase. As the first application, the first Chern number of the (2+1)-dimensional Wilson fermion is calculated by the Monte Carlo simulation. PMID:27517766
DeGrand, T.
1997-06-01
These lectures provide an introduction to lattice methods for nonperturbative studies of Quantum Chromodynamics. Lecture 1: basic techniques for QCD and results for hadron spectroscopy using the simplest discretizations. Lecture 2: improved actions, what they are and how well they work. Lecture 3: SLAC physics from the lattice: structure functions, the mass of the glueball, heavy quarks and α_s(M_Z), and B-B̄ mixing. 67 refs., 36 figs.
FermiQCD: A tool kit for parallel lattice QCD applications
Di Pierro, M.
2002-03-01
We present here the most recent version of FermiQCD, a collection of C++ classes, functions and parallel algorithms for lattice QCD, based on Matrix Distributed Processing. FermiQCD allows fast development of parallel lattice applications and includes some SSE2 optimizations for clusters of Pentium 4 PCs.
Online Sequential Extreme Learning Machine With Kernels.
Scardapane, Simone; Comminiello, Danilo; Scarpiniti, Michele; Uncini, Aurelio
2015-09-01
The extreme learning machine (ELM) was recently proposed as a unifying framework for different families of learning algorithms. The classical ELM model consists of a linear combination of a fixed number of nonlinear expansions of the input vector. Learning in ELM is hence equivalent to finding the optimal weights that minimize the error on a dataset. The update works in batch mode, either with explicit feature mappings or with implicit mappings defined by kernels. Although an online version has been proposed for the former, no work has been done up to this point for the latter, and whether an efficient learning algorithm for online kernel-based ELM exists remains an open problem. By explicating some connections between nonlinear adaptive filtering and ELM theory, in this brief, we present an algorithm for this task. In particular, we propose a straightforward extension of the well-known kernel recursive least-squares, belonging to the kernel adaptive filtering (KAF) family, to the ELM framework. We call the resulting algorithm the kernel online sequential ELM (KOS-ELM). Moreover, we consider two different criteria used in the KAF field to obtain sparse filters and extend them to our context. We show that KOS-ELM, with the integration of these criteria, can result in a highly efficient algorithm, both in terms of generalization error and training time. Empirical evaluations demonstrate interesting results on some benchmarking datasets. PMID:25561597
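The connection to kernel adaptive filtering can be illustrated with a deliberately naive online kernel least-squares learner. This sketch grows the dictionary with every sample and re-solves the regularized system from scratch, whereas KOS-ELM uses a true recursive update plus the sparsification criteria mentioned above; all names and parameters here are illustrative:

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """Gaussian (RBF) kernel between two vectors."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

class OnlineKRLS:
    """Naive online kernel regularized least squares: every new sample is
    added to the dictionary and the regularized kernel system is re-solved.
    A practical KRLS/KOS-ELM replaces this with a rank-one recursive update
    and a dictionary-admission (sparsification) test."""

    def __init__(self, kernel=rbf, reg=1e-4):
        self.kernel, self.reg = kernel, reg
        self.X, self.y, self.alpha = [], [], None

    def update(self, x, y):
        self.X.append(np.asarray(x, float))
        self.y.append(float(y))
        n = len(self.X)
        K = np.array([[self.kernel(a, b) for b in self.X] for a in self.X])
        self.alpha = np.linalg.solve(K + self.reg * np.eye(n),
                                     np.array(self.y))

    def predict(self, x):
        x = np.asarray(x, float)
        return sum(a * self.kernel(xi, x)
                   for a, xi in zip(self.alpha, self.X))
```

The O(n³) re-solve is what the recursive least-squares machinery avoids; the sparsification criteria additionally bound the dictionary size so the model does not grow with the stream.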
NASA Astrophysics Data System (ADS)
Niemi, H.; Eskola, K. J.; Paatelainen, R.
2016-02-01
We introduce an event-by-event perturbative-QCD + saturation + hydro ("EKRT") framework for ultrarelativistic heavy-ion collisions, where we compute the produced fluctuating QCD-matter energy densities from next-to-leading-order perturbative QCD using a saturation conjecture to control soft-particle production and describe the space-time evolution of the QCD matter with dissipative fluid dynamics, event by event. We perform a simultaneous comparison of the centrality dependence of hadronic multiplicities, transverse momentum spectra, and flow coefficients of the azimuth-angle asymmetries against the LHC and RHIC measurements. We compare also the computed event-by-event probability distributions of relative fluctuations of elliptic flow and event-plane angle correlations with the experimental data from Pb +Pb collisions at the LHC. We show how such a systematic multienergy and multiobservable analysis tests the initial-state calculation and the applicability region of hydrodynamics and, in particular, how it constrains the temperature dependence of the shear viscosity-to-entropy ratio of QCD matter in its different phases in a remarkably consistent manner.
None
2011-10-06
Modern QCD - Lecture 2. We will start by discussing the matter content of the theory and revisit the experimental measurements that led to the discovery of quarks. We will then consider a classic QCD observable, the R-ratio, and use it to illustrate the appearance of UV divergences and the need to renormalize the coupling constant of QCD. We will then discuss asymptotic freedom and confinement. Finally, we will examine a case where soft and collinear infrared divergences appear, will discuss the soft approximation in QCD and will introduce the concept of infrared safe jets.
Transport Processes in High Temperature QCD Plasmas
NASA Astrophysics Data System (ADS)
Hong, Juhee
The transport properties of high temperature QCD plasmas can be described by kinetic theory based on the Boltzmann equation. At a leading-log approximation, the Boltzmann equation is reformulated as a Fokker-Planck equation. First, we compute the spectral densities of
Nonparametric entropy estimation using kernel densities.
Lake, Douglas E
2009-01-01
The entropy of experimental data from the biological and medical sciences provides additional information over summary statistics. Calculating entropy involves estimates of probability density functions, which can be effectively accomplished using kernel density methods. Kernel density estimation has been widely studied and a univariate implementation is readily available in MATLAB. The traditional definition of Shannon entropy is part of a larger family of statistics, called Renyi entropy, which are useful in applications that require a measure of the Gaussianity of data. Of particular note is the quadratic entropy which is related to the Friedman-Tukey (FT) index, a widely used measure in the statistical community. One application where quadratic entropy is very useful is the detection of abnormal cardiac rhythms, such as atrial fibrillation (AF). Asymptotic and exact small-sample results for optimal bandwidth and kernel selection to estimate the FT index are presented and lead to improved methods for entropy estimation. PMID:19897106
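For a Gaussian kernel density estimate, the quadratic Renyi entropy needs no numerical integration: the "information potential" ∫ p̂(x)² dx reduces to a double sum over pairwise Gaussians of variance 2h². A minimal 1-D sketch; the Silverman-type bandwidth default is an assumption, not the paper's optimal choice:

```python
import numpy as np

def renyi_quadratic_entropy(x, h=None):
    """Quadratic Renyi entropy H2 = -log(integral of p^2) of 1-D data via the
    closed form for a Gaussian KDE: the information potential equals
    (1/N^2) * sum_ij N(x_i - x_j; 0, 2 h^2)."""
    x = np.asarray(x, float)
    n = x.size
    if h is None:
        h = 1.06 * x.std() * n ** (-1 / 5)        # Silverman-type rule
    d = x[:, None] - x[None, :]                   # all pairwise differences
    pot = np.mean(np.exp(-d ** 2 / (4 * h ** 2))) / np.sqrt(4 * np.pi * h ** 2)
    return -np.log(pot)
```

Because the information potential is itself the quantity underlying the Friedman-Tukey index, the same double sum serves both the entropy estimate and FT-index style applications such as rhythm discrimination.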
Tile-Compressed FITS Kernel for IRAF
NASA Astrophysics Data System (ADS)
Seaman, R.
2011-07-01
The Flexible Image Transport System (FITS) is a ubiquitously supported standard of the astronomical community. Similarly, the Image Reduction and Analysis Facility (IRAF), developed by the National Optical Astronomy Observatory, is a widely used astronomical data reduction package. IRAF supplies compatibility with FITS format data through numerous tools and interfaces. The most integrated of these is IRAF's FITS image kernel that provides access to FITS from any IRAF task that uses the basic IMIO interface. The original FITS kernel is a complex interface of purpose-built procedures that presents growing maintenance issues and lacks recent FITS innovations. A new FITS kernel is being developed at NOAO that is layered on the CFITSIO library from the NASA Goddard Space Flight Center. The simplified interface will minimize maintenance headaches as well as add important new features such as support for the FITS tile-compressed (fpack) format.
Fast generation of sparse random kernel graphs
Hagberg, Aric; Lemons, Nathan; Du, Wen-Bo
2015-09-10
The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most 𝒪(n(log n)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.
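The model itself is compact: vertices i and j are joined independently with probability min(1, κ(x_i, x_j)/n). The following naive O(n²) sampler illustrates the model only; the paper's contribution is a generation algorithm that runs in roughly 𝒪(n(log n)²) for many kernels, so this sketch should not be read as their method:

```python
import random

def random_kernel_graph(n, kappa, seed=None):
    """Naive O(n^2) sampler for an inhomogeneous random graph in which
    vertices i < j are joined with probability min(1, kappa(x_i, x_j)/n),
    using equally spaced vertex 'positions' x_i = (i+1)/n in (0, 1].
    Returns the edge list."""
    rng = random.Random(seed)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            p = min(1.0, kappa((i + 1) / n, (j + 1) / n) / n)
            if rng.random() < p:
                edges.append((i, j))
    return edges
```

A constant kernel κ ≡ c reproduces a sparse Erdős–Rényi graph with mean degree about c; non-constant kernels (e.g. κ(x, y) ∝ (xy)^(-β)) give heavy-tailed degree sequences, which is the regime where the fast algorithm matters.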
Experimental study of turbulent flame kernel propagation
Mansour, Mohy; Peters, Norbert; Schrader, Lars-Uve
2008-07-15
Flame kernels in spark ignited combustion systems dominate the flame propagation and combustion stability and performance. They are likely controlled by the spark energy, flow field and mixing field. The aim of the present work is to experimentally investigate the structure and propagation of the flame kernel in turbulent premixed methane flow using advanced laser-based techniques. The spark is generated using a pulsed Nd:YAG laser with 20 mJ pulse energy in order to avoid the effect of the electrodes on the flame kernel structure and the variation of spark energy from shot to shot. Four flames have been investigated at equivalence ratios, φ_j, of 0.8 and 1.0 and jet velocities, U_j, of 6 and 12 m/s. A combined two-dimensional Rayleigh and LIPF-OH technique has been applied. The flame kernel structure has been collected at several time intervals from the laser ignition between 10 μs and 2 ms. The data show that the flame kernel structure starts with a spherical shape, changes gradually to peanut-like, then to mushroom-like, and is finally disturbed by the turbulence. The mushroom-like structure lasts longer in the stoichiometric and slower jet velocity cases. The growth rate of the average flame kernel radius is divided into two linear relations; the first one, during the first 100 μs, is almost three times faster than that at the later stage between 100 and 2000 μs. The flame propagation is slightly faster in leaner flames. The trends of the flame propagation, flame radius, flame cross-sectional area and mean flame temperature are related to the jet velocity and equivalence ratio. The relations obtained in the present work allow the prediction of any of these parameters at different conditions. (author)
Transverse Momentum-Dependent Parton Distributions From Lattice QCD
Michael Engelhardt; Bernhard Musch; Philipp Haegler; Andreas Schaefer
2012-12-01
Starting from a definition of transverse momentum-dependent parton distributions for semi-inclusive deep inelastic scattering and the Drell-Yan process, given in terms of matrix elements of a quark bilocal operator containing a staple-shaped Wilson connection, a scheme to determine such observables in lattice QCD is developed and explored. Parametrizing the aforementioned matrix elements in terms of invariant amplitudes permits a simple transformation of the problem to a Lorentz frame suited for the lattice calculation. Results for the Sivers and Boer-Mulders transverse momentum shifts are presented, focusing in particular on their dependence on the staple extent and the Collins-Soper evolution parameter.
Full Waveform Inversion Using Waveform Sensitivity Kernels
NASA Astrophysics Data System (ADS)
Schumacher, Florian; Friederich, Wolfgang
2013-04-01
We present a full waveform inversion concept for applications ranging from seismological to engineering contexts, in which the steps of forward simulation, computation of sensitivity kernels, and the actual inversion are kept separate from each other. We derive waveform sensitivity kernels from Born scattering theory, which for unit material perturbations are identical to the Born integrand for the considered path between source and receiver. The evaluation of such a kernel requires the calculation of Green functions and their strains for single forces at the receiver position, as well as displacement fields and strains originating at the seismic source. We compute these quantities in the frequency domain using the 3D spectral element code SPECFEM3D (Tromp, Komatitsch and Liu, 2008) and the 1D semi-analytical code GEMINI (Friederich and Dalkolmo, 1995) in both Cartesian and spherical frameworks. We developed and implemented the modularized software package ASKI (Analysis of Sensitivity and Kernel Inversion) to compute waveform sensitivity kernels from wavefields generated by any of the above methods (support for more methods is planned); some examples will be shown. As the kernels can be computed independently of any data values, this approach allows a sensitivity and resolution analysis to be done first, without inverting any data. In the context of active seismic experiments, this property may be used to investigate optimal acquisition geometry and expectable resolution before actually collecting any data, assuming the background model is known sufficiently well. The actual inversion step can then be repeated at relatively low cost with different (sub)sets of data, adding different smoothing conditions. Using the sensitivity kernels, we expect the waveform inversion to have better convergence properties compared with strategies that use gradients of a misfit function. Also the propagation of the forward wavefield and the backward propagation from the receiver
Volatile compound formation during argan kernel roasting.
El Monfalouti, Hanae; Charrouf, Zoubida; Giordano, Manuela; Guillaume, Dominique; Kartah, Badreddine; Harhar, Hicham; Gharby, Saïd; Denhez, Clément; Zeppa, Giuseppe
2013-01-01
Virgin edible argan oil is prepared by cold-pressing argan kernels previously roasted at 110 degrees C for up to 25 minutes. The concentration of 40 volatile compounds in virgin edible argan oil was determined as a function of argan kernel roasting time. Most of the volatile compounds begin to be formed after 15 to 25 minutes of roasting. This suggests that a strictly controlled roasting time should allow the modulation of argan oil taste and thus satisfy different types of consumers. This could be of major importance considering the present booming use of edible argan oil. PMID:23472454
Modified wavelet kernel methods for hyperspectral image classification
NASA Astrophysics Data System (ADS)
Hsu, Pai-Hui; Huang, Xiu-Man
2015-10-01
Hyperspectral images have the capability of acquiring images of the earth surface with several hundred spectral bands. Such abundant spectral data should improve the ability to classify land use/cover type. However, due to the high dimensionality of hyperspectral data, traditional classification methods are not suitable for hyperspectral data classification. The common way to address this problem is dimensionality reduction by feature extraction before classification. Kernel methods such as the support vector machine (SVM) and multiple kernel learning (MKL) have been successfully applied to hyperspectral image classification. In kernel method applications, the selection of the kernel function plays an important role. The wavelet kernel, built from multidimensional wavelet functions, can find the optimal approximation of data in feature space for classification. The SVM with wavelet kernels (called WSVM) has also been applied to hyperspectral data and improves classification accuracy. In this study, a wavelet kernel method combining the multiple kernel learning algorithm and wavelet kernels is proposed for hyperspectral image classification. After the appropriate selection of a linear combination of kernel functions, the hyperspectral data are transformed to the wavelet feature space, which should have the optimal data distribution for kernel learning and classification. Finally, the proposed methods were compared with existing methods. A real hyperspectral data set was used to analyze the performance of the wavelet kernel method. According to the results, the proposed wavelet kernel methods perform well and would be an appropriate tool for hyperspectral image classification.
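The wavelet-kernel idea summarized above can be sketched by plugging a product-form wavelet kernel into a standard SVM. This is a minimal illustration, not the authors' implementation: the kernel form h(u) = cos(1.75u)·exp(−u²/2), the dilation a, and the synthetic stand-in data are all assumptions of the sketch.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

def wavelet_kernel(X, Y, a=4.0):
    """Product-form wavelet kernel K(x, y) = prod_d h((x_d - y_d)/a),
    with the mother wavelet h(u) = cos(1.75*u) * exp(-u**2 / 2)."""
    diff = (X[:, None, :] - Y[None, :, :]) / a
    return np.prod(np.cos(1.75 * diff) * np.exp(-diff**2 / 2.0), axis=2)

# Stand-in data: a real study would use per-pixel spectral bands and labels.
X, y = make_classification(n_samples=200, n_features=20,
                           n_informative=10, random_state=0)

# scikit-learn's SVC accepts a callable that returns the Gram matrix.
clf = SVC(kernel=lambda A, B: wavelet_kernel(A, B, a=4.0)).fit(X, y)
print(clf.score(X, y))
```

A multiple-kernel variant, as in the study, would instead learn a weighted combination of several such kernels at different dilations a.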
Kernel abortion in maize. II. Distribution of ¹⁴C among kernel carbohydrates
Hanft, J.M.; Jones, R.J.
1986-06-01
This study was designed to compare the uptake and distribution of ¹⁴C among fructose, glucose, sucrose, and starch in the cob, pedicel, and endosperm tissues of maize (Zea mays L.) kernels induced to abort by high temperature with those that develop normally. Kernels cultured in vitro at 30 and 35°C were transferred to [¹⁴C]sucrose media 10 days after pollination. Kernels cultured at 35°C aborted prior to the onset of linear dry matter accumulation. Significant uptake into the cob, pedicel, and endosperm of radioactivity associated with the soluble and starch fractions of the tissues was detected after 24 hours in culture on labeled media. After 8 days in culture on [¹⁴C]sucrose media, 48 and 40% of the radioactivity associated with the cob carbohydrates was found in the reducing sugars at 30 and 35°C, respectively. Of the total carbohydrates, a higher percentage of label was associated with sucrose and a lower percentage with fructose and glucose in pedicel tissue of kernels cultured at 35°C compared to kernels cultured at 30°C. These results indicate that sucrose was not cleaved to fructose and glucose as rapidly during the unloading process in the pedicel of kernels induced to abort by high temperature. Kernels cultured at 35°C had a much lower proportion of label associated with endosperm starch (29%) than did kernels cultured at 30°C (89%). Kernels cultured at 35°C had a correspondingly higher proportion of ¹⁴C in endosperm fructose, glucose, and sucrose.
Charm quark energy loss in infinite QCD matter using a parton cascade model
NASA Astrophysics Data System (ADS)
Younus, Mohammed; Coleman-Smith, Christopher E.; Bass, Steffen A.; Srivastava, Dinesh K.
2015-02-01
We utilize the parton cascade model to study the evolution of charm quarks propagating through a thermal brick of QCD matter. We determine the energy loss and the transport coefficient q̂ for charm quarks. The calculations are done at a constant temperature of 350 MeV and the results are compared to analytical calculations of heavy-quark energy loss in order to validate the applicability of using a parton cascade model for the study of heavy-quark dynamics in hot and dense QCD matter.
Deep Sequencing of RNA from Ancient Maize Kernels
Rasmussen, Morten; Cappellini, Enrico; Romero-Navarro, J. Alberto; Wales, Nathan; Alquezar-Planas, David E.; Penfield, Steven; Brown, Terence A.; Vielle-Calzada, Jean-Philippe; Montiel, Rafael; Jørgensen, Tina; Odegaard, Nancy; Jacobs, Michael; Arriaza, Bernardo; Higham, Thomas F. G.; Ramsey, Christopher Bronk; Willerslev, Eske; Gilbert, M. Thomas P.
2013-01-01
The characterization of biomolecules from ancient samples can shed otherwise unobtainable insights into the past. Despite the fundamental role of transcriptomal change in evolution, the potential of ancient RNA remains unexploited – perhaps due to dogma associated with the fragility of RNA. We hypothesize that seeds offer a plausible refuge for long-term RNA survival, due to the fundamental role of RNA during seed germination. Using RNA-Seq on cDNA synthesized from nucleic acid extracts, we validate this hypothesis through demonstration of partial transcriptomal recovery from two sources of ancient maize kernels. The results suggest that ancient seed transcriptomics may offer a powerful new tool with which to study plant domestication. PMID:23326310
Accuracy of Reduced and Extended Thin-Wire Kernels
Burke, G J
2008-11-24
Results are presented comparing the accuracy of the reduced thin-wire kernel with that of an extended kernel that exactly integrates the 1/R term of the Green's function, for simple wire structures.
QCD tests in electron-positron scattering
Maruyama, T.
1995-11-01
Recent results on QCD tests at the Z{sup o} resonance are described. Measurements of Color factor ratios, and studies of final state photon radiation are performed by the LEP experiments. QCD tests using a longitudinally polarized beam are reported by the SLD experiment.
Lattice QCD and High Baryon Density State
Nagata, Keitaro; Nakamura, Atsushi; Motoki, Shinji; Nakagawa, Yoshiyuki; Saito, Takuya
2011-10-21
We report our recent studies on the finite density QCD obtained from lattice QCD simulation with clover-improved Wilson fermions of two flavor and RG-improved gauge action. We approach the subject from two paths, i.e., the imaginary and chemical potentials.
Quantum properties of QCD string fragmentation
NASA Astrophysics Data System (ADS)
Todorova-Nová, Šárka
2016-07-01
A simple quantization concept for a 3-dim QCD string is used to derive properties of QCD flux tube from the mass spectrum of light mesons and to predict observable quantum effects in correlations between adjacent hadrons. The quantized fragmentation model is presented and compared with experimental observations.
Solvable models and hidden symmetries in QCD
Yepez-Martinez, Tochtli; Hess, P. O.; Civitarese, O.; Lerma H., S.
2010-12-23
We show that QCD Hamiltonians at low energy exhibit an SU(2) structure when only a few orbital levels are considered. When many orbital levels are taken into account, we also find a semi-analytic solution for the energy levels of the dominant part of the QCD Hamiltonian. The findings are important for proposing the structure of phenomenological models.
Up- and down-quark masses from finite-energy QCD sum rules to five loops
Dominguez, C. A.; Nasrallah, N. F.; Roentsch, R. H.; Schilcher, K.
2009-01-01
The up- and down-quark masses are determined from an optimized QCD finite-energy sum rule involving the correlator of axial-vector divergences, to five-loop order in perturbative QCD, and including leading nonperturbative QCD and higher order quark-mass corrections. This finite-energy sum rule is designed to reduce considerably the systematic uncertainties arising from the (unmeasured) hadronic resonance sector, which in this framework contributes less than 3-4% to the quark mass. This is achieved by introducing an integration kernel in the form of a second degree polynomial, restricted to vanish at the peak of the two lowest lying resonances. The driving hadronic contribution is then the pion pole, with parameters well known from experiment. The determination is done in the framework of contour improved perturbation theory, which exhibits a very good convergence, leading to a remarkably stable result in the unusually wide window s_0 = 1.0-4.0 GeV², where s_0 is the radius of the integration contour in the complex energy (squared) plane. The results are m_u(Q = 2 GeV) = 2.9 ± 0.2 MeV, m_d(Q = 2 GeV) = 5.3 ± 0.4 MeV, and (m_u + m_d)/2 = 4.1 ± 0.2 MeV (at a scale Q = 2 GeV)
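The constrained integration kernel described above is straightforward to construct numerically: a second-degree polynomial with unit constant term, forced to vanish at the two resonance peaks. The resonance masses below are illustrative assumptions for the sketch, not values quoted from the paper.

```python
import numpy as np

# Second-degree integration kernel p(s) = 1 + a*s + b*s**2, constrained
# to vanish at the peaks of the two lowest-lying resonances.
M1, M2 = 1.300, 1.813          # illustrative resonance masses in GeV (assumed)
s1, s2 = M1**2, M2**2

# Solve the 2x2 linear system p(s1) = p(s2) = 0 for (a, b).
A = np.array([[s1, s1**2],
              [s2, s2**2]])
a, b = np.linalg.solve(A, [-1.0, -1.0])

p = lambda s: 1.0 + a * s + b * s**2
print(p(s1), p(s2))            # both vanish by construction
```

Weighting the hadronic spectral integral with p(s) then suppresses the poorly known resonance region, leaving the pion pole as the dominant hadronic input.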
Fabrication of Uranium Oxycarbide Kernels for HTR Fuel
Charles Barnes; Clay Richardson; Scott Nagley; John Hunn; Eric Shaber
2010-10-01
Babcock and Wilcox (B&W) has been producing high quality uranium oxycarbide (UCO) kernels for Advanced Gas Reactor (AGR) fuel tests at the Idaho National Laboratory. In 2005, 350-µm, 19.7% 235U-enriched UCO kernels were produced for the AGR-1 test fuel. Following coating of these kernels and forming the coated particles into compacts, this fuel was irradiated in the Advanced Test Reactor (ATR) from December 2006 until November 2009. B&W produced 425-µm, 14% enriched UCO kernels in 2008, and these kernels were used to produce fuel for the AGR-2 experiment that was inserted in ATR in 2010. B&W also produced 500-µm, 9.6% enriched UO2 kernels for the AGR-2 experiments. Kernels of the same size and enrichment as AGR-1 were also produced for the AGR-3/4 experiment. In addition to fabricating enriched UCO and UO2 kernels, B&W has produced more than 100 kg of natural uranium UCO kernels which are being used in coating development tests. Successive lots of kernels have demonstrated consistent high quality and have also allowed for fabrication process improvements. Improvements in kernel forming were made subsequent to AGR-1 kernel production. Following fabrication of AGR-2 kernels, incremental increases in sintering furnace charge size have been demonstrated. Recently, small-scale sintering tests using a development furnace equipped with a residual gas analyzer (RGA) have increased understanding of how kernel sintering parameters affect sintered kernel properties. The steps taken to increase throughput and process knowledge have reduced kernel production costs. Additional modifications have been studied with the goal of increasing the capacity of the current fabrication line for production of first core fuel for the Next Generation Nuclear Plant (NGNP) and providing a basis for the design of a full scale fuel fabrication facility.
Kernel Partial Least Squares for Nonlinear Regression and Discrimination
NASA Technical Reports Server (NTRS)
Rosipal, Roman; Clancy, Daniel (Technical Monitor)
2002-01-01
This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate the usefulness of the method.
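A minimal NIPALS-style kernel-PLS regression sketch in the spirit of the method summarized above, following the standard dual formulation (score extraction from a centered Gram matrix, deflation, and a dual regression coefficient). The RBF kernel, toy data, and component count are assumptions; this is not the authors' code.

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :])**2).sum(-1)
    return np.exp(-gamma * d2)

def kpls_fit(K, Y, n_comp=5, n_iter=100):
    """Kernel PLS via NIPALS-style iterations.
    K: centered Gram matrix (n x n), Y: centered targets (n x m).
    Returns dual coefficients B so that training predictions are K @ B."""
    n = K.shape[0]
    T, U = [], []
    Kd, Yd = K.copy(), Y.copy()
    for _ in range(n_comp):
        u = Yd[:, [0]]
        for _ in range(n_iter):
            t = Kd @ u
            t /= np.linalg.norm(t)        # latent score in feature space
            c = Yd.T @ t
            u = Yd @ c
            u /= np.linalg.norm(u)
        T.append(t); U.append(u)
        P = np.eye(n) - t @ t.T           # deflate extracted direction
        Kd = P @ Kd @ P
        Yd = Yd - t @ (t.T @ Yd)
    T, U = np.hstack(T), np.hstack(U)
    return U @ np.linalg.solve(T.T @ K @ U, T.T @ Y)

# Toy 1-D regression problem.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (60, 1))
Y = np.sin(X)

n = len(X)
C = np.eye(n) - np.ones((n, n)) / n       # centering matrix
Kc = C @ rbf(X, X) @ C
Yc = Y - Y.mean(0)

B = kpls_fit(Kc, Yc)
Yhat = Kc @ B + Y.mean(0)
print(np.mean((Yhat - Y)**2))             # small training MSE
```

The discrimination variant in the paper replaces the continuous targets Y with class-indicator columns.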
Consistent Perturbative Fixed Point Calculations in QCD and Supersymmetric QCD
NASA Astrophysics Data System (ADS)
Ryttov, Thomas A.
2016-08-01
We suggest how to consistently calculate the anomalous dimension γ* of the ψ̄ψ operator in finite order perturbation theory at an infrared fixed point for asymptotically free theories. If the n+1 loop beta function and n loop anomalous dimension are known, then γ* can be calculated exactly and fully scheme independently in a Banks-Zaks expansion through O(Δ_f^n), where Δ_f = N̄_f − N_f, N_f is the number of flavors, and N̄_f is the number of flavors above which asymptotic freedom is lost. For a supersymmetric theory, the calculation preserves supersymmetry order by order in Δ_f. We then compute γ* through O(Δ_f²) for supersymmetric QCD in the dimensional reduction scheme and find that it matches the exact known result. We find that γ* is astonishingly well described in perturbation theory already at the few loops level throughout the entire conformal window. We finally compute γ* through O(Δ_f³) for QCD and a variety of other nonsupersymmetric fermionic gauge theories. Small values of γ* are observed for a large range of flavors.
QCD Collisional Energy Loss Reexamined
NASA Astrophysics Data System (ADS)
Peshier, A.
2006-11-01
It is shown that at large temperature and E → ∞ the QCD collisional energy loss reads dE/dx ∼ α(m_D²)T². Compared to previous approaches, which led to dE_B/dx ∼ α²T² ln(ET/m_D²), similar to the Bethe-Bloch formula in QED, we take into account the running of the strong coupling. As one significant consequence, due to asymptotic freedom, dE/dx becomes E independent for large parton energies. Some implications with regard to heavy ion collisions are pointed out.
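The key point above — evaluating the coupling at the Debye scale m_D² rather than using a fixed α — can be illustrated numerically with a one-loop running coupling and a self-consistently determined Debye mass. The QCD scale Λ, the flavor number, and the overall normalization of dE/dx are assumptions of the sketch.

```python
import math

LAMBDA = 0.2   # QCD scale in GeV (assumed for illustration)
NF = 3         # number of light flavors

def alpha_s(Q2):
    """One-loop running coupling alpha(Q^2)."""
    return 12 * math.pi / ((33 - 2 * NF) * math.log(Q2 / LAMBDA**2))

def debye_mass_sq(T, n_iter=50):
    """Self-consistent Debye mass: m_D^2 = 4*pi*alpha(m_D^2)*(1 + NF/6)*T^2."""
    m2 = (2 * math.pi * T)**2              # start from a thermal scale
    for _ in range(n_iter):
        m2 = 4 * math.pi * alpha_s(m2) * (1 + NF / 6) * T**2
    return m2

T = 0.350                                  # temperature in GeV
m2 = debye_mass_sq(T)
dEdx = alpha_s(m2) * T**2                  # up to an overall color/kinematic factor
print(alpha_s(m2), m2, dEdx)
```

Because α is frozen at the thermal scale m_D² ∼ T², the resulting dE/dx carries no residual dependence on the parton energy E, unlike the Bethe-Bloch-type logarithm.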
"Quantum Field Theory and QCD"
Jaffe, Arthur M.
2006-02-25
This grant partially funded a meeting, "QFT & QCD: Past, Present and Future" held at Harvard University, Cambridge, MA on March 18-19, 2005. The participants ranged from senior scientists (including at least 9 Nobel Prize winners, and 1 Fields medalist) to graduate students and undergraduates. There were several hundred persons in attendance at each lecture. The lectures ranged from superlative reviews of past progress, lists of important, unsolved questions, to provocative hypotheses for future discovery. The project generated a great deal of interest on the internet, raising awareness and interest in the open questions of theoretical physics.
Nucleon Structure from Lattice QCD
Haegler, Philipp
2011-10-24
Hadron structure calculations in lattice QCD have seen substantial progress during recent years. We illustrate the achievements that have been made by discussing latest lattice results for a limited number of important observables related to nucleon form factors and generalized parton distributions. A particular focus is placed on the decomposition of the nucleon spin 1/2 in terms of quark spin and orbital angular momentum contributions. Results and limitations of the necessary chiral extrapolations based on ChPT will be briefly discussed.
Spectral continuity in dense QCD
Hatsuda, Tetsuo; Yamamoto, Naoki; Tachibana, Motoi
2008-07-01
The vector mesons in three-flavor quark matter with chiral and diquark condensates are studied using the in-medium QCD sum rules. The diquark condensate leads to a mass splitting between the flavor-octet and flavor-singlet channels. At high density, the singlet vector meson disappears from the low-energy spectrum, while the octet vector mesons survive as light excitations with a mass comparable to the fermion gap. A possible connection between the light gluonic modes and the flavor-octet vector mesons at high density is also discussed.
Nuclear Physics from Lattice QCD
William Detmold, Silas Beane, Konstantinos Orginos, Martin Savage
2011-01-01
We review recent progress toward establishing lattice Quantum Chromodynamics as a predictive calculational framework for nuclear physics. A survey of the current techniques that are used to extract low-energy hadronic scattering amplitudes and interactions is followed by a review of recent two-body and few-body calculations by the NPLQCD collaboration and others. An outline of the nuclear physics that is expected to be accomplished with Lattice QCD in the next decade, along with estimates of the required computational resources, is presented.
Single transverse-spin asymmetry in QCD
NASA Astrophysics Data System (ADS)
Koike, Yuji
2014-09-01
So far, large single transverse-spin asymmetries (SSA) have been observed in many high-energy processes such as semi-inclusive deep inelastic scattering and proton-proton collisions. Since the conventional parton model and perturbative QCD cannot accommodate such large SSAs, the framework for QCD hard processes had to be extended to understand the mechanism of SSA. In these extended frameworks of QCD, the intrinsic transverse momentum of partons and the multi-parton (quark-gluon and pure-gluonic) correlations in the hadrons, which were absent in the conventional framework, play a crucial role in causing SSAs, and a well-defined formulation of these effects has been a big challenge for QCD theorists. The study of these effects has greatly advanced our understanding of QCD dynamics and hadron structure. In this talk, I will present an overview of this theoretical activity, emphasizing the important role of the Drell-Yan process.
Kernel Temporal Differences for Neural Decoding
Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.
2015-01-01
We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatial-temporal information at reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural state to action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain machine interfaces. PMID:25866504
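A stripped-down sketch of the core idea — a kernel expansion for the value function, grown by TD updates — applied to policy evaluation on a toy chain. The Gaussian kernel width, step size, and task are assumptions; the full KTD(λ) algorithm additionally includes eligibility traces and sparsification of the center set, which are omitted here.

```python
import numpy as np

class KTD:
    """KTD(0)-style value estimator: V(x) = sum_i alpha_i * k(c_i, x),
    where each TD update appends a new kernel center at the visited state."""
    def __init__(self, eta=0.3, sigma=0.5, gamma=0.9):
        self.eta, self.sigma, self.gamma = eta, sigma, gamma
        self.centers, self.alphas = [], []

    def k(self, a, b):
        return np.exp(-(a - b)**2 / (2 * self.sigma**2))

    def value(self, x):
        return sum(a * self.k(c, x) for c, a in zip(self.centers, self.alphas))

    def update(self, x, r, x_next, terminal):
        target = r if terminal else r + self.gamma * self.value(x_next)
        delta = target - self.value(x)        # TD error
        self.centers.append(x)                # grow the representation
        self.alphas.append(self.eta * delta)

# Policy evaluation on a 5-state chain with reward 1 on reaching the right end.
ktd = KTD()
for _ in range(200):
    for s in range(4):                        # fixed policy: always step right
        r = 1.0 if s + 1 == 4 else 0.0
        ktd.update(float(s), r, float(s + 1), terminal=(s + 1 == 4))

print([round(ktd.value(float(s)), 2) for s in range(4)])
```

After training, the estimated values decrease with distance from the rewarded end, approximating the discounted returns γ^(3−s).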
Kernel method and linear recurrence system
NASA Astrophysics Data System (ADS)
Hou, Qing-Hu; Mansour, Toufik
2008-06-01
Based on the kernel method, we present systematic methods to solve equation systems on generating functions of two variables. Using these methods, we get the generating functions for the number of permutations which avoid 1234 and 12k(k-1)...3 and permutations which avoid 1243 and 12...k.
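The kernel method itself manipulates generating functions symbolically, but the counting sequences it produces can be checked by brute-force enumeration. The sketch below implements a generic pattern-containment test and sanity-checks it against the classical result that permutations avoiding any single length-3 pattern are counted by the Catalan numbers; the same `count_avoiders` call works for pairs such as 1234 and 1243.

```python
from itertools import combinations, permutations

def contains(perm, patt):
    """True if perm contains patt as a classical pattern."""
    k = len(patt)
    for idx in combinations(range(len(perm)), k):
        sub = [perm[i] for i in idx]
        # Reduce the subsequence to its relative-order pattern.
        order = sorted(range(k), key=lambda j: sub[j])
        rank = [0] * k
        for r, j in enumerate(order):
            rank[j] = r + 1
        if tuple(rank) == patt:
            return True
    return False

def count_avoiders(n, patterns):
    """Number of permutations of 1..n avoiding every pattern in `patterns`."""
    return sum(1 for p in permutations(range(1, n + 1))
               if not any(contains(p, t) for t in patterns))

# Sanity check: avoiding a single length-3 pattern gives Catalan numbers.
print([count_avoiders(n, [(1, 2, 3)]) for n in range(1, 6)])  # 1, 2, 5, 14, 42
```

For the pattern pairs treated in the paper, such enumeration for small n provides the initial terms against which the kernel-method generating functions can be verified.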
7 CFR 981.8 - Inedible kernel.
Code of Federal Regulations, 2010 CFR
2010-01-01
... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...
7 CFR 981.8 - Inedible kernel.
Code of Federal Regulations, 2011 CFR
2011-01-01
... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...
7 CFR 981.8 - Inedible kernel.
Code of Federal Regulations, 2014 CFR
2014-01-01
... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...
7 CFR 981.8 - Inedible kernel.
Code of Federal Regulations, 2012 CFR
2012-01-01
... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...
7 CFR 981.8 - Inedible kernel.
Code of Federal Regulations, 2013 CFR
2013-01-01
... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...
INTACT OR UNIT-KERNEL SWEET CORN
This report evaluates process and product modifications in canned and frozen sweet corn manufacture with the objective of reducing the total effluent produced in processing. In particular it evaluates the proposed replacement of process steps that yield cut or whole kernel corn w...
Arbitrary-resolution global sensitivity kernels
NASA Astrophysics Data System (ADS)
Nissen-Meyer, T.; Fournier, A.; Dahlen, F.
2007-12-01
Extracting observables out of any part of a seismogram (e.g. including diffracted phases such as Pdiff) necessitates the knowledge of 3-D time-space wavefields for the Green functions that form the backbone of Fréchet sensitivity kernels. While known for a while, this idea is still computationally intractable in 3-D, facing major simulation and storage issues when high-frequency wavefields are considered at the global scale. We recently developed a new "collapsed-dimension" spectral-element method that solves the 3-D system of elastodynamic equations in a 2-D space, based on exploiting symmetry considerations of the seismic-wave radiation patterns. We will present the technical background on the computation of waveform kernels, various examples of time- and frequency-dependent sensitivity kernels and subsequently extracted time-window kernels (e.g. banana-doughnuts). Given the computationally lightweight 2-D nature, we will explore some crucial parameters such as excitation type, source time functions, frequency, azimuth, discontinuity locations, and phase type, i.e. an a priori view into how, when, and where seismograms carry 3-D Earth signature. A once-and-for-all database of 2-D waveforms for various source depths shall then serve as a complete set of global time-space sensitivities for a given spherically symmetric background model, thereby allowing for tomographic inversions with arbitrary frequencies, observables, and phases.
Application of the matrix exponential kernel
NASA Technical Reports Server (NTRS)
Rohach, A. F.
1972-01-01
A point matrix kernel for radiation transport, developed by the transmission matrix method, has been used to compute buildup factors and energy spectra through slab layers of different materials for a point isotropic source. Combinations of lead-water slabs were chosen as examples because of the extreme differences in shielding properties of these two materials.
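The point-kernel idea referenced above — uncollided attenuation from a point isotropic source, corrected by a buildup factor for scattered radiation — can be sketched as follows. The linear buildup form and its coefficient are placeholders for illustration, not values from the report.

```python
import math

def point_kernel_flux(S, mu, r, buildup):
    """Point-kernel flux at distance r from a point isotropic source:
    phi(r) = B(mu*r) * S * exp(-mu*r) / (4*pi*r^2),
    where B accounts for scattered (buildup) radiation."""
    return buildup(mu * r) * S * math.exp(-mu * r) / (4 * math.pi * r**2)

# Illustrative linear buildup factor B(x) = 1 + a*x (coefficient assumed);
# tabulated or fitted buildup data would be used in practice.
B = lambda x: 1.0 + 0.8 * x

mu = 0.7                         # attenuation coefficient, 1/cm (assumed)
phi_near = point_kernel_flux(S=1.0, mu=mu, r=10.0, buildup=B)
phi_far  = point_kernel_flux(S=1.0, mu=mu, r=20.0, buildup=B)
print(phi_near, phi_far)
```

Layered slabs of different materials are handled by summing the mu*r contributions of each layer in the exponent and using a buildup factor appropriate to the layer combination.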
7 CFR 868.254 - Broken kernels determination.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 7 2010-01-01 2010-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall...
7 CFR 868.304 - Broken kernels determination.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 7 2010-01-01 2010-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the...
Applying Single Kernel Sorting Technology to Developing Scab Resistant Lines
Technology Transfer Automated Retrieval System (TEKTRAN)
We are using automated single-kernel near-infrared (SKNIR) spectroscopy instrumentation to sort fusarium head blight (FHB) infected kernels from healthy kernels, and to sort segregating populations by hardness to enhance the development of scab resistant hard and soft wheat varieties. We sorted 3 r...
21 CFR 176.350 - Tamarind seed kernel powder.
Code of Federal Regulations, 2012 CFR
2012-04-01
... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing... 21 Food and Drugs 3 2012-04-01 2012-04-01 false Tamarind seed kernel powder. 176.350 Section...
21 CFR 176.350 - Tamarind seed kernel powder.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 3 2014-04-01 2014-04-01 false Tamarind seed kernel powder. 176.350 Section 176...) INDIRECT FOOD ADDITIVES: PAPER AND PAPERBOARD COMPONENTS Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as...
21 CFR 176.350 - Tamarind seed kernel powder.
Code of Federal Regulations, 2013 CFR
2013-04-01
... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing... 21 Food and Drugs 3 2013-04-01 2013-04-01 false Tamarind seed kernel powder. 176.350 Section...
21 CFR 176.350 - Tamarind seed kernel powder.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 3 2010-04-01 2009-04-01 true Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in...
21 CFR 176.350 - Tamarind seed kernel powder.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 3 2011-04-01 2011-04-01 false Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in...
Thermomechanical property of rice kernels studied by DMA
Technology Transfer Automated Retrieval System (TEKTRAN)
The thermomechanical property of the rice kernels was investigated using a dynamic mechanical analyzer (DMA). The length change of rice kernel with a loaded constant force along the major axis direction was detected during temperature scanning. The thermomechanical transition occurred in rice kernel...
7 CFR 51.2125 - Split or broken kernels.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Split or broken kernels. 51.2125 Section 51.2125 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... § 51.2125 Split or broken kernels. Split or broken kernels means seven-eighths or less of...
7 CFR 51.2125 - Split or broken kernels.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Split or broken kernels. 51.2125 Section 51.2125 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... § 51.2125 Split or broken kernels. Split or broken kernels means seven-eighths or less of...
7 CFR 868.304 - Broken kernels determination.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 7 2011-01-01 2011-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the...
7 CFR 868.304 - Broken kernels determination.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 7 2014-01-01 2014-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the...
7 CFR 868.254 - Broken kernels determination.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 7 2014-01-01 2014-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall...
7 CFR 868.254 - Broken kernels determination.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 7 2011-01-01 2011-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall...
7 CFR 868.304 - Broken kernels determination.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 7 2012-01-01 2012-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the...
7 CFR 868.254 - Broken kernels determination.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 7 2012-01-01 2012-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall...
The QCD/SM working group: Summary report
W. Giele et al.
2004-01-12
Quantum Chromo-Dynamics (QCD), and more generally the physics of the Standard Model (SM), enter in many ways in high energy processes at TeV colliders, and especially in hadron colliders (the Tevatron at Fermilab and the forthcoming LHC at CERN). First of all, at hadron colliders, QCD controls the parton luminosity, which rules the production rates of any particle or system with large invariant mass and/or large transverse momentum. Accurate predictions for any signal of possible "New Physics" sought at hadron colliders, as well as the corresponding backgrounds, require an improvement in the control of uncertainties on the determination of PDFs and of the propagation of these uncertainties in the predictions. Furthermore, to fully exploit these new types of PDFs with uncertainties, uniform tools (computer interfaces, standardization of the PDF evolution codes used by the various groups fitting PDFs) need to be proposed and developed. The dynamics of colour also affects, both in normalization and shape, various observables of the signals of any possible "New Physics" sought at the TeV scale, such as, e.g. the production rate, or the distributions in transverse momentum of the Higgs boson. Last, but not least, QCD governs many backgrounds to the searches for this "New Physics". Large and important QCD corrections may come from extra hard parton emission (and the corresponding virtual corrections), involving multi-leg and/or multi-loop amplitudes. This requires complex higher order calculations, and new methods have to be designed to compute the required multi-leg and/or multi-loop corrections in a tractable form. In the case of semi-inclusive observables, logarithmically enhanced contributions coming from multiple soft and collinear gluon emission require sophisticated QCD resummation techniques. Resummation is a catch-all name for efforts to extend the predictive power of QCD by summing the large logarithmic corrections to all orders in perturbation theory. In
Adam P. Szczepaniak; Eric S. Swanson
2000-12-12
Here we will discuss how the nonabelian Coulomb kernel exhibits confinement already at the mean field level. In the heavy quark limit residual interactions between heavy quarks and transverse gluons are spin dependent i.e., relativistic and can be calculated using the Foldy-Wouthuysen transformation. This makes the Coulomb gauge suitable for studying the nonrelativistic limit. Finally it is possible to use standard mean field techniques to define quasiparticle excitations, which, as we discuss below, have similar properties to what is usually assumed about constituent quarks in the light quark sector.
Nuclear Physics and Lattice QCD
Beane, Silas
2003-11-01
Impressive progress is currently being made in computing properties and interactions of the low-lying hadrons using lattice QCD. However, cost limitations will, for the foreseeable future, necessitate the use of quark masses, Mq, that are significantly larger than those of nature, lattice spacings, a, that are not significantly smaller than the physical scale of interest, and lattice sizes, L, that are not significantly larger than the physical scale of interest. Extrapolations in the quark masses, lattice spacing and lattice volume are therefore required. The hierarchy of mass scales is: L{sup -1} << Mq << {Lambda}{sub {chi}} << a{sup -1}. The appropriate EFT for incorporating the light quark masses, the finite lattice spacing and the lattice size into hadronic observables is {chi}PT, which provides systematic expansions in the small parameters e{sup -m{sub {pi}}L}, 1/(L{Lambda}{sub {chi}}), p/{Lambda}{sub {chi}}, Mq/{Lambda}{sub {chi}} and a{Lambda}{sub {chi}}. The lattice introduces other unphysical scales as well. Lattice QCD quarks will increasingly be artificially separated
Smith, W.H.
1997-06-01
These lectures describe QCD physics studies over the period 1992--1996 from data taken with collisions of 27 GeV electrons and positrons with 820 GeV protons at the HERA collider at DESY by the two general-purpose detectors H1 and ZEUS. The focus of these lectures is on structure functions and jet production in deep inelastic scattering, photoproduction, and diffraction. The topics covered start with a general introduction to HERA and ep scattering. Structure functions are discussed. This includes the parton model, scaling violation, and the extraction of F{sub 2}, which is used to determine the gluon momentum distribution. Both low and high Q{sup 2} regimes are discussed. The low Q{sup 2} transition from perturbative QCD to soft hadronic physics is examined. Jet production in deep inelastic scattering to measure {alpha}{sub s}, and in photoproduction to study resolved and direct photoproduction, is also presented. This is followed by a discussion of diffraction that begins with a general introduction to diffraction in hadronic collisions and its relation to ep collisions, and moves on to deep inelastic scattering, where the structure of diffractive exchange is studied, and in photoproduction, where dijet production provides insights into the structure of the Pomeron. 95 refs., 39 figs.
Brodsky, Stanley J.; Cao, Fu-Guang; de Teramond, Guy F.; /Costa Rica U.
2011-11-04
The QCD evolution of the pion distribution amplitude (DA) {phi}{sub {pi}} (x, Q{sup 2}) is computed for several commonly used models. Our analysis includes the nonperturbative form predicted by lightfront holographic QCD, thus combining the nonperturbative bound state dynamics of the pion with the perturbative ERBL evolution of the pion distribution amplitude. We calculate the meson-photon transition form factors for the {pi}{sup 0}, {eta} and {eta}' using the hard-scattering formalism. We point out that a widely-used approximation of replacing {phi} (x; (1 - x)Q) with {phi} (x;Q) in the calculations will unjustifiably reduce the predictions for the meson-photon transition form factors. It is found that the four models of the pion DA discussed give very different predictions for the Q{sup 2} dependence of the meson-photon transition form factors in the region of Q{sup 2} > 30 GeV{sup 2}. More accurate measurements of these transition form factors at the large Q{sup 2} region will be able to distinguish the four models of the pion DA. The rapid growth of the large Q{sup 2} data for the pion-photon transition form factor reported by the BABAR Collaboration is difficult to explain within the current framework of QCD. If the BABAR data for the meson-photon transition form factor for the {pi}{sup 0} is confirmed, it could indicate physics beyond-the-standard model, such as a weakly-coupled elementary C = + axial vector or pseudoscalar z{sup 0} in the few GeV domain, an elementary field which would provide the coupling {gamma}{sup *}{gamma} {yields} z{sup 0} {yields} {pi}{sup 0} at leading twist. Our analysis thus indicates the importance of additional measurements of the pion-photon transition form factor at large Q{sup 2}.
Carbothermic Synthesis of ~820-μm UN Kernels. Investigation of Process Variables
Lindemer, Terrence; Silva, Chinthaka M; Henry, Jr, John James; McMurray, Jake W; Jolly, Brian C; Hunt, Rodney Dale; Terrani, Kurt A
2015-06-01
This report details the continued investigation of process variables involved in converting sol-gel-derived, urania-carbon microspheres to ~820-μm-dia. UN fuel kernels in flow-through, vertical refractory-metal crucibles at temperatures up to 2123 K. Experiments included calcining of air-dried UO_{3}-H_{2}O-C microspheres in Ar and H_{2}-containing gases, conversion of the resulting UO_{2}-C kernels to dense UO_{2}:2UC in the same gases and vacuum, and its conversion in N_{2} to UC_{1-x}N_{x}. The thermodynamics of the relevant reactions were applied extensively to interpret and control the process variables. Producing the precursor UO_{2}:2UC kernel of ~96% theoretical density was required, but its subsequent conversion to UC_{1-x}N_{x} at 2123 K was not accompanied by sintering and resulted in ~83-86% of theoretical density. Decreasing the UC_{1-x}N_{x} kernel carbide component via HCN evolution was shown to be quantitatively consistent with present and past experiments and the only useful application of H_{2} in the entire process.
Kernel weights optimization for error diffusion halftoning method
NASA Astrophysics Data System (ADS)
Fedoseev, Victor
2015-02-01
This paper describes a study to find the best error diffusion kernel for digital halftoning under various restrictions on the number of non-zero kernel coefficients and their set of values. As an objective measure of quality, WSNR was used. The problem of multidimensional optimization was solved numerically using several well-known algorithms: Nelder-Mead, BFGS, and others. The study found a kernel function that provides a quality gain of about 5% in comparison with the best of the commonly used kernels, the one introduced by Floyd and Steinberg. Other kernels obtained make it possible to significantly reduce the computational complexity of the halftoning process without reducing its quality.
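For illustration, a minimal error-diffusion halftoner using the classic Floyd-Steinberg weights (7/16, 3/16, 5/16, 1/16) might look like the sketch below; the kernel is exactly the set of weights such a study optimizes over. Function and parameter names are our own, not the paper's.

```python
import numpy as np

# Floyd-Steinberg error diffusion: threshold each pixel, then push the
# quantization error onto unprocessed neighbours with fixed weights.
# Offsets are (row, col) relative to the current pixel.
FS_KERNEL = {(0, 1): 7 / 16, (1, -1): 3 / 16, (1, 0): 5 / 16, (1, 1): 1 / 16}

def error_diffuse(img, kernel=FS_KERNEL):
    """Halftone a grayscale image (values in [0, 1]) to binary {0, 1}."""
    work = img.astype(float).copy()
    h, w = work.shape
    out = np.zeros_like(work)
    for y in range(h):
        for x in range(w):
            old = work[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            for (dy, dx), wgt in kernel.items():
                yy, xx = y + dy, x + dx
                if 0 <= yy < h and 0 <= xx < w:
                    work[yy, xx] += err * wgt
    return out
```

Because the error is diffused rather than discarded, the average intensity of the output tracks that of the input, which is the property any optimized weight set must preserve.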
Chare kernel; A runtime support system for parallel computations
Shu, W.; Kale, L.V.
1991-03-01
This paper presents the chare kernel system, which supports parallel computations with irregular structure. The chare kernel is a collection of primitive functions that manage chares, manipulate messages, invoke atomic computations, and coordinate concurrent activities. Programs written in the chare kernel language can be executed on different parallel machines without change. Users writing such programs concern themselves with the creation of parallel actions but not with assigning them to specific processors. The authors describe the design and implementation of the chare kernel. Performance of chare kernel programs on two hypercube machines, the Intel iPSC/2 and the NCUBE, is also given.
Vranas, P
2007-06-18
Quantum Chromodynamics is the theory of nuclear and sub-nuclear physics. It is a celebrated theory, and one of its inventors, F. Wilczek, has termed it '... our most perfect physical theory'. Part of this is related to the fact that QCD can be numerically simulated from first principles using the methods of lattice gauge theory. The computational demands of QCD are enormous and have not only played a role in the history of supercomputers but are also helping define their future. Here I will discuss the intimate relation of QCD and massively parallel supercomputers with a focus on the Blue Gene supercomputer and QCD thermodynamics. I will present results on the performance of QCD on the Blue Gene as well as physics simulation results of QCD at temperatures high enough that sub-nuclear matter transitions to a plasma state of elementary particles, the quark gluon plasma. This state of matter is thought to have existed at around 10 microseconds after the big bang. Current heavy-ion experiments are on a quest to reproduce it for the first time since then. And numerical simulations of QCD on the Blue Gene systems are calculating the theoretical values of fundamental parameters so that comparisons of experiment and theory can be made.
Saturation and universality in QCD at small x
NASA Astrophysics Data System (ADS)
Iancu, E.; McLerran, L.
2001-06-01
We find approximate solutions to the renormalization group equation which governs the quantum evolution of the effective theory for the Color Glass Condensate. This is a functional Fokker-Planck equation which generates in particular the non-linear evolution equations previously derived by Balitsky and Kovchegov within perturbative QCD. In the limit where the transverse momentum of the external probe is large compared to the saturation momentum, our approximations yield the Gaussian ansatz for the effective action of the McLerran-Venugopalan model. In the opposite limit, of a small external momentum, we find that the effective theory is governed by a scale-invariant universal action which has the correct properties to describe gluon saturation.
Difference image analysis: automatic kernel design using information criteria
NASA Astrophysics Data System (ADS)
Bramich, D. M.; Horne, Keith; Alsubai, K. A.; Bachelet, E.; Mislis, D.; Parley, N.
2016-03-01
We present a selection of methods for automatically constructing an optimal kernel model for difference image analysis which require very few external parameters to control the kernel design. Each method consists of two components; namely, a kernel design algorithm to generate a set of candidate kernel models, and a model selection criterion to select the simplest kernel model from the candidate models that provides a sufficiently good fit to the target image. We restricted our attention to the case of solving for a spatially invariant convolution kernel composed of delta basis functions, and we considered 19 different kernel solution methods including six employing kernel regularization. We tested these kernel solution methods by performing a comprehensive set of image simulations and investigating how their performance in terms of model error, fit quality, and photometric accuracy depends on the properties of the reference and target images. We find that the irregular kernel design algorithm employing unregularized delta basis functions, combined with either the Akaike or Takeuchi information criterion, is the best kernel solution method in terms of photometric accuracy. Our results are validated by tests performed on two independent sets of real data. Finally, we provide some important recommendations for software implementations of difference image analysis.
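The role of the information criterion can be shown with a toy stand-in for kernel fitting (polynomial fits replace the delta-basis kernel models; names and the RSS floor are our own): each candidate model is scored by AIC = n ln(RSS/n) + 2k, and the lowest score wins, trading fit quality against model complexity.

```python
import numpy as np

# Toy illustration of information-criterion model selection: candidate
# models of increasing complexity are fit by least squares, and the
# Akaike information criterion picks the simplest adequate one.
def aic(rss, n, k):
    """AIC for a least-squares fit with residual sum of squares rss,
    n data points and k free parameters."""
    return n * np.log(rss / n) + 2 * k

def select_degree(x, y, max_degree=5):
    """Return the polynomial degree chosen by AIC."""
    best_score, best_deg = None, None
    for deg in range(max_degree + 1):
        coeffs = np.polyfit(x, y, deg)
        resid = y - np.polyval(coeffs, x)
        rss = max(float(resid @ resid), 1e-12)  # guard against log(0)
        score = aic(rss, len(x), deg + 1)
        if best_score is None or score < best_score:
            best_score, best_deg = score, deg
    return best_deg
```

On exactly linear data, every degree >= 1 fits perfectly, so the 2k penalty makes AIC select degree 1 — the same "simplest sufficiently good model" logic applied above to kernel design.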
The QCD vacuum, hadrons and superdense matter
Shuryak, E.
1986-01-01
This is probably the only textbook available that gathers QCD, many-body theory and phase transitions in one volume. The presentation is pedagogical and readable. Contents: The QCD Vacuum: Introduction; QCD on the Lattice; Topological Effects in Gauge Theories. Correlation Functions and Microscopic Excitations: Introduction; Operator Product Expansion; The Sum Rules beyond OPE; Nonpower Contributions to Correlators and Instantons; Hadronic Spectroscopy on the Lattice. Dense Matter: Hadronic Matter; Asymptotically Dense Quark-Gluon Plasma; Instantons in Matter; Lattice Calculations at Finite Temperature; Phase Transitions; Macroscopic Excitations and Experiments: General Properties of High Energy Collisions; ''Barometers'', ''Thermometers'', Interferometric ''Microscope''; Experimental Perspectives.
Excited light isoscalar mesons from lattice QCD
Christopher Thomas
2011-07-01
I report a recent lattice QCD calculation of an excited spectrum of light isoscalar mesons, something that has up to now proved challenging for lattice QCD. With novel techniques we extract an extensive spectrum with high statistical precision, including spin-four states and, for the first time, light isoscalars with exotic quantum numbers. In addition, the hidden flavour content of these mesons is determined, providing a window on annihilation dynamics in QCD. I comment on future prospects including applications to the study of resonances.
QCD thermodynamics and missing hadron states
NASA Astrophysics Data System (ADS)
Petreczky, Peter
2016-03-01
Equation of State and fluctuations of conserved charges in hot strongly interacting matter are being calculated with increasing accuracy in lattice QCD, and continuum results at physical quark masses become available. At sufficiently low temperature the thermodynamic quantities can be understood in terms of hadron resonance gas model that includes known hadrons and hadronic resonances from Particle Data Book. However, for some quantities it is necessary to include undiscovered hadronic resonances (missing states) that are, however, predicted by quark model and lattice QCD study of hadron spectrum. Thus, QCD thermodynamics can provide indications for the existence of yet undiscovered hadron states.
Death to perturbative QCD in exclusive processes?
Eckardt, R.; Hansper, J.; Gari, M.F.
1994-04-01
The authors discuss the question of whether perturbative QCD is applicable in calculations of exclusive processes at available momentum transfers. They show that the currently used method of determining hadronic quark distribution amplitudes from QCD sum rules yields wave functions which are completely undetermined because the polynomial expansion diverges. Because of the indeterminacy of the wave functions no statement can be made at present as to whether perturbative QCD is valid. The authors emphasize the necessity of a rigorous discussion of the subject and the importance of experimental data in the range of interest.
Shape of mesons in holographic QCD
Torabian, Mahdi; Yee, Ho-Ung
2009-10-15
Based on the expectation that the constituent quark model may capture the right physics in the large N limit, we point out that the orbital angular momentum of the quark-antiquark pair inside light mesons of low spins in the constituent quark model may provide a clue for the holographic dual string model of large N QCD. Our discussion, relying on a few suggestive assumptions, leads to a necessity of world-sheet fermions in the bulk of dual strings that can incorporate intrinsic spins of fundamental QCD degrees of freedom. We also comment on the interesting issue of the size of mesons in holographic QCD.
Towards the chiral limit in QCD
Shailesh Chandrasekharan
2006-02-28
Computing hadronic observables by solving QCD from first principles with realistic quark masses is an important challenge in fundamental nuclear and particle physics research. Although lattice QCD provides a rigorous framework for such calculations, many difficulties arise. Firstly, there are no good algorithms to solve lattice QCD with realistically light quark masses. Secondly, due to critical slowing down, Monte Carlo algorithms are able to access only small lattice sizes on coarse lattices. Finally, due to sign problems it is almost impossible to study the physics of finite baryon density. Lattice QCD contains roughly three mass scales: the cutoff (or inverse lattice spacing) a{sup -1}, the confinement scale {Lambda}{sub QCD}, and the pion mass m{sub {pi}}. Most conventional Monte Carlo algorithms for QCD become inefficient in two regimes: when {Lambda}{sub QCD} becomes small compared to a{sup -1} and when m{sub {pi}} becomes small compared to {Lambda}{sub QCD}. The former can be largely controlled by perturbation theory thanks to asymptotic freedom. The latter is more difficult since chiral extrapolations are typically non-analytic and can be unreliable if the calculations are not done at sufficiently small quark masses. For this reason it has been difficult to compute quantities close to the chiral limit. The essential goal behind this proposal was to develop a new approach towards understanding QCD and QCD-like theories with sufficiently light quarks. The proposal was based on a novel cluster algorithm discovered in the strong coupling limit with staggered fermions [1]. This algorithm allowed us to explore the physics of exactly massless quarks as well as light quarks. Thus, the hope was that this discovery would lead to the complete solution of at least a few strongly coupled QCD-like theories. Such solutions would be far better than those achievable through conventional methods and thus would be able to shed light on the chiral physics from a new
A meshfree unification: reproducing kernel peridynamics
NASA Astrophysics Data System (ADS)
Bessa, M. A.; Foster, J. T.; Belytschko, T.; Liu, Wing Kam
2014-06-01
This paper is the first investigation establishing the link between the meshfree state-based peridynamics method and other meshfree methods, in particular with the moving least squares reproducing kernel particle method (RKPM). It is concluded that the discretization of state-based peridynamics leads directly to an approximation of the derivatives that can be obtained from RKPM. However, state-based peridynamics obtains the same result at a significantly lower computational cost which motivates its use in large-scale computations. In light of the findings of this study, an update to the method is proposed such that the limitations regarding application of boundary conditions and the use of non-uniform grids are corrected by using the reproducing kernel approximation.
Searching and Indexing Genomic Databases via Kernelization
Gagie, Travis; Puglisi, Simon J.
2015-01-01
The rapid advance of DNA sequencing technologies has yielded databases of thousands of genomes. To search and index these databases effectively, it is important that we take advantage of the similarity between those genomes. Several authors have recently suggested searching or indexing only one reference genome and the parts of the other genomes where they differ. In this paper, we survey the 20-year history of this idea and discuss its relation to kernelization in parameterized complexity. PMID:25710001
From QCD to physical resonances
NASA Astrophysics Data System (ADS)
Bolton, Daniel R.; Briceño, Raúl A.; Wilson, David J.
2016-05-01
In this talk, we present the first chiral extrapolation of a resonant scattering amplitude obtained from lattice QCD. Finite-volume spectra, determined by the Hadron Spectrum Collaboration at mπ = 236 MeV [1], for the isotriplet ππ channel are analyzed using the Lüscher method to determine the infinite-volume scattering amplitude. Unitarized Chiral Perturbation Theory is then used to extrapolate the scattering amplitude to the physical light quark masses. The viability of this procedure is demonstrated by its agreement with the experimentally determined scattering phase shift up to center-of-mass energies of 1.2 GeV. Finally, we analytically continue the amplitude to the complex plane to obtain the ρ pole at [755(2)(1)(02)(20) - (i/2) 129(3)(1)(1)(7)] MeV.
QCD tests with polarized beams
Maruyama, Takashi; SLD Collaboration
1996-09-01
The authors present three QCD studies performed by the SLD experiment at SLAC, utilizing the highly polarized SLC electron beam. They examined particle production differences in light quark and antiquark hemispheres, and observed more high momentum baryons and K{sup {minus}}`s than antibaryons and K{sup +}`s in quark hemispheres, consistent with the leading particle hypothesis. They performed a search for jet handedness in light q- and {anti q}-jets. Assuming Standard Model values of quark polarization in Z{sup 0} decays, they have set an improved upper limit on the analyzing power of the handedness method. They studied the correlation between the Z{sup 0} spin and the event-plane orientation in polarized Z{sup 0} decays into three jets.
Gluonic transversity from lattice QCD
NASA Astrophysics Data System (ADS)
Detmold, W.; Shanahan, P. E.
2016-07-01
We present an exploratory study of the gluonic structure of the ϕ meson using lattice QCD (LQCD). This includes the first investigation of gluonic transversity via the leading moment of the twist-2 double-helicity-flip gluonic structure function Δ(x, Q²). This structure function only exists for targets of spin J ≥ 1 and does not mix with quark distributions at leading twist, thereby providing a particularly clean probe of gluonic degrees of freedom. We also explore the gluonic analogue of the Soffer bound which relates the helicity flip and nonflip gluonic distributions, finding it to be saturated at the level of 80%. This work sets the stage for more complex LQCD studies of gluonic structure in the nucleon and in light nuclei where Δ(x, Q²) is an "exotic glue" observable probing gluons in a nucleus not associated with individual nucleons.
Lattice QCD Beyond Ground States
Huey-Wen Lin; Saul D. Cohen
2007-09-11
In this work, we apply black box methods (methods not requiring input) to find excited-state energies. A variety of such methods for lattice QCD were introduced at the 3rd iteration of the numerical workshop series. We first review a selection of approaches that have been used in lattice calculations to determine multiple energy states: multiple correlator fits, the variational method and Bayesian fitting. In the second half, we will focus on a black box method, the multi-effective mass. We demonstrate the approach on a toy model, as well as on real lattice data, extracting multiple states from single correlators. Without complicated operator construction or specialized fitting programs, the black box method shows good consistency with the traditional approaches.
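As a concrete example of the simplest such observable, the effective mass of a Euclidean correlator, m_eff(t) = ln[C(t)/C(t+1)], plateaus at the ground-state energy, and excited states show up as deviations at small t. The sketch below runs on synthetic two-state data; it illustrates the idea only and is not the workshop's code.

```python
import numpy as np

# Effective mass from a correlator: m_eff(t) = ln(C(t)/C(t+1)).
# For C(t) = sum_n A_n exp(-E_n t), m_eff(t) -> E_0 at large t.
def effective_mass(corr):
    corr = np.asarray(corr, dtype=float)
    return np.log(corr[:-1] / corr[1:])

# Synthetic two-state correlator: ground state E0 = 0.5, excited E1 = 1.2.
t = np.arange(32)
C = 1.0 * np.exp(-0.5 * t) + 0.3 * np.exp(-1.2 * t)
meff = effective_mass(C)
```

At small t the excited state pulls m_eff above 0.5; by large t it has decayed away and the plateau sits at E0. The multi-effective-mass approach discussed above generalizes this single-state extraction to several states from one correlator.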
Nuclear Force from Lattice QCD
Ishii, N.; Aoki, S.; Hatsuda, T.
2007-07-13
The nucleon-nucleon (NN) potential is studied by lattice QCD simulations in the quenched approximation, using the plaquette gauge action and the Wilson quark action on a 32{sup 4} [{approx_equal}(4.4 fm){sup 4}] lattice. A NN potential V{sub NN}(r) is defined from the equal-time Bethe-Salpeter amplitude with a local interpolating operator for the nucleon. By studying the NN interaction in the {sup 1}S{sub 0} and {sup 3}S{sub 1} channels, we show that the central part of V{sub NN}(r) has a strong repulsive core of a few hundred MeV at short distances (r < or approx. 0.5 fm) surrounded by an attractive well at medium and long distances. These features are consistent with the known phenomenological features of the nuclear force.
Hormuzdiar, J.N.; Hsu, S.D.
1999-02-01
We describe a class of pionic breather solutions (PBS) which appear in the chiral Lagrangian description of low-energy QCD. These configurations are long lived, with lifetimes greater than 10{sup 3} fm/c, and could arise as remnants of disoriented chiral condensate (DCC) formation at RHIC. We show that the chiral Lagrangian equations of motion for a uniformly isospin-polarized domain reduce to those of the sine-Gordon model. Consequently, our solutions are directly related to the breather solutions of sine-Gordon theory in 3+1 dimensions. We investigate the possibility of PBS formation from multiple domains of DCC, and show that the probability of formation is non-negligible. {copyright} {ital 1999} {ital The American Physical Society}
Multiple kernel learning for dimensionality reduction.
Lin, Yen-Yu; Liu, Tyng-Luh; Fuh, Chiou-Shann
2011-06-01
In solving complex visual learning tasks, adopting multiple descriptors to more precisely characterize the data has been a feasible way for improving performance. The resulting data representations are typically high-dimensional and assume diverse forms. Hence, finding a way of transforming them into a unified space of lower dimension generally facilitates the underlying tasks such as object recognition or clustering. To this end, the proposed approach (termed MKL-DR) generalizes the framework of multiple kernel learning for dimensionality reduction, and distinguishes itself with the following three main contributions. First, our method provides the convenience of using diverse image descriptors to describe useful characteristics of various aspects of the underlying data. Second, it extends a broad set of existing dimensionality reduction techniques to consider multiple kernel learning, and consequently improves their effectiveness. Third, by focusing on the techniques pertaining to dimensionality reduction, the formulation introduces a new class of applications with the multiple kernel learning framework to address not only the supervised learning problems but also the unsupervised and semi-supervised ones. PMID:20921580
A Fast Reduced Kernel Extreme Learning Machine.
Deng, Wan-Yu; Ong, Yew-Soon; Zheng, Qing-Hua
2016-04-01
In this paper, we present a fast and accurate kernel-based supervised algorithm referred to as the Reduced Kernel Extreme Learning Machine (RKELM). In contrast to the work on Support Vector Machine (SVM) or Least Square SVM (LS-SVM), which identifies the support vectors or weight vectors iteratively, the proposed RKELM randomly selects a subset of the available data samples as support vectors (or mapping samples). By avoiding the iterative steps of SVM, significant cost savings in the training process can be readily attained, especially on big data sets. RKELM is established based on a rigorous proof of universal learning involving a reduced kernel-based SLFN. In particular, we prove that RKELM can approximate any nonlinear function accurately given sufficiently many support vectors. Experimental results on a wide variety of real-world small and large instance size applications, in the context of binary classification, multi-class problems and regression, are then reported to show that RKELM can perform at a competitive level of generalization performance to the SVM/LS-SVM at only a fraction of the computational effort incurred. PMID:26829605
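The non-iterative character of the approach can be sketched in a few lines: pick a random subset of the data as kernel centers, then obtain the output weights from a single regularized least-squares solve. This is our own minimal rendering of the idea; the RBF kernel, ridge term and all names here are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    """Pairwise RBF kernel between the rows of A and the rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class ReducedKernelELM:
    """Sketch of a reduced-kernel ELM: support vectors are a random data
    subset; output weights come from one ridge-regularized solve."""
    def __init__(self, n_support=20, gamma=1.0, reg=1e-3, seed=0):
        self.n_support, self.gamma, self.reg = n_support, gamma, reg
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        idx = self.rng.choice(len(X), size=min(self.n_support, len(X)),
                              replace=False)
        self.support = X[idx]
        K = rbf(X, self.support, self.gamma)
        # beta = (K^T K + reg*I)^{-1} K^T y  -- no iteration needed.
        A = K.T @ K + self.reg * np.eye(K.shape[1])
        self.beta = np.linalg.solve(A, K.T @ y)
        return self

    def predict(self, X):
        return rbf(X, self.support, self.gamma) @ self.beta
```

The single linear solve over a small m x m system (m = number of support vectors) is where the claimed cost savings over iterative SVM training come from.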
A Kernel Classification Framework for Metric Learning.
Wang, Faqiang; Zuo, Wangmeng; Zhang, Lei; Meng, Deyu; Zhang, David
2015-09-01
Learning a distance metric from the given training samples plays a crucial role in many machine learning tasks, and various models and optimization algorithms have been proposed in the past decade. In this paper, we generalize several state-of-the-art metric learning methods, such as large margin nearest neighbor (LMNN) and information theoretic metric learning (ITML), into a kernel classification framework. First, doublets and triplets are constructed from the training samples, and a family of degree-2 polynomial kernel functions is proposed for pairs of doublets or triplets. Then, a kernel classification framework is established to generalize many popular metric learning methods such as LMNN and ITML. The proposed framework can also suggest new metric learning methods, which can be efficiently implemented, interestingly, using the standard support vector machine (SVM) solvers. Two novel metric learning methods, namely, doublet-SVM and triplet-SVM, are then developed under the proposed framework. Experimental results show that doublet-SVM and triplet-SVM achieve competitive classification accuracies with state-of-the-art metric learning methods but with significantly less training time. PMID:25347887
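The degree-2 polynomial kernel on doublets can be made concrete. For doublets z = (x_i, x_j) and z' = (x_k, x_l), the value ((x_i - x_j) . (x_k - x_l))² equals the Frobenius inner product of the outer products of the difference vectors, which is the identity that lets a Mahalanobis-type metric be learned with a standard SVM solver. The snippet below is a numerical check of that identity under our own naming; the paper's exact kernel family may differ in details.

```python
import numpy as np

# Check: ((x_i - x_j) . (x_k - x_l))^2 == <d1 d1^T, d2 d2^T>_Frobenius,
# where d1, d2 are the doublet difference vectors.
def doublet_kernel(z1, z2):
    d1, d2 = z1[0] - z1[1], z2[0] - z2[1]
    return float(d1 @ d2) ** 2

rng = np.random.default_rng(1)
x = rng.standard_normal((4, 3))
z1, z2 = (x[0], x[1]), (x[2], x[3])
d1, d2 = x[0] - x[1], x[2] - x[3]
frobenius = float(np.trace(np.outer(d1, d1) @ np.outer(d2, d2)))
```

Since tr(d1 d1^T d2 d2^T) = (d1 . d2)², the kernel is linear in the rank-one matrices d d^T, so the learned decision function corresponds to a positive semidefinite metric matrix.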
Semi-Supervised Kernel Mean Shift Clustering.
Anand, Saket; Mittal, Sushil; Tuzel, Oncel; Meer, Peter
2014-06-01
Mean shift clustering is a powerful nonparametric technique that does not require prior knowledge of the number of clusters and does not constrain the shape of the clusters. However, being completely unsupervised, its performance suffers when the original distance metric fails to capture the underlying cluster structure. Despite recent advances in semi-supervised clustering methods, there has been little effort towards incorporating supervision into mean shift. We propose a semi-supervised framework for kernel mean shift clustering (SKMS) that uses only pairwise constraints to guide the clustering procedure. The points are first mapped to a high-dimensional kernel space where the constraints are imposed by a linear transformation of the mapped points. This is achieved by modifying the initial kernel matrix by minimizing a log det divergence-based objective function. We show the advantages of SKMS by evaluating its performance on various synthetic and real datasets while comparing with state-of-the-art semi-supervised clustering algorithms. PMID:26353281
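The unconstrained baseline that SKMS builds on is compact to state: each point iteratively moves to the kernel-weighted mean of the data until it settles on a mode. Below is a plain Gaussian-kernel sketch of that baseline (parameter names ours), without the pairwise-constraint machinery the paper adds.

```python
import numpy as np

def mean_shift(X, bandwidth=1.0, n_iter=50):
    """Plain mean shift with a Gaussian kernel: every point drifts to the
    weighted mean of all data points until it reaches a density mode."""
    modes = X.astype(float).copy()
    for _ in range(n_iter):
        d2 = ((modes[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2 * bandwidth ** 2))
        modes = (w @ X) / w.sum(1, keepdims=True)
    return modes
```

Points belonging to the same cluster converge to the same mode, so no cluster count is needed up front; SKMS keeps this property while using constraints to reshape the kernel space when the original metric is uninformative.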
NASA Astrophysics Data System (ADS)
Pope, Benjamin; Tuthill, Peter; Hinkley, Sasha; Ireland, Michael J.; Greenbaum, Alexandra; Latyshev, Alexey; Monnier, John D.; Martinache, Frantz
2016-01-01
At present, the principal limitation on the resolution and contrast of astronomical imaging instruments comes from aberrations in the optical path, which may be imposed by the Earth's turbulent atmosphere or by variations in the alignment and shape of the telescope optics. These errors can be corrected physically, with active and adaptive optics, and in post-processing of the resulting image. A recently developed adaptive optics post-processing technique, called kernel-phase interferometry, uses linear combinations of phases that are self-calibrating with respect to small errors, with the goal of constructing observables that are robust against the residual optical aberrations in otherwise well-corrected imaging systems. Here, we present a direct comparison between kernel phase and the more established competing techniques, aperture masking interferometry, point spread function (PSF) fitting and bispectral analysis. We resolve the α Ophiuchi binary system near periastron, using the Palomar 200-Inch Telescope. This is the first case in which kernel phase has been used with a full aperture to resolve a system close to the diffraction limit with ground-based extreme adaptive optics observations. Excellent agreement in astrometric quantities is found between kernel phase and masking, and kernel phase significantly outperforms PSF fitting and bispectral analysis, demonstrating its viability as an alternative to conventional non-redundant masking under appropriate conditions.
Parton distributions from lattice QCD: an update
Detmold, W; Melnitchouk, W; Thomas, A W
2004-04-01
We review the extraction of parton distributions from their moments calculated in lattice QCD, focusing in particular on their extrapolation to the physical region. As examples, we consider both the unpolarized and polarized isovector parton distributions of the nucleon.
Opportunities, challenges, and fantasies in lattice QCD
NASA Astrophysics Data System (ADS)
Wilczek, Frank
2003-05-01
Some important problems in quantitative QCD will certainly yield to hard work and adequate investment of resources, others appear difficult but may be accessible, and still others will require essentially new ideas. Here I identify several examples in each class.
Heavy Quarks, QCD, and Effective Field Theory
Thomas Mehen
2012-10-09
The research supported by this OJI award is in the area of heavy quark and quarkonium production, especially the application of Soft-Collinear Effective Theory (SCET) to the hadronic production of quarkonia. SCET is an effective theory which allows one to derive factorization theorems and perform all-order resummations for QCD processes. Factorization theorems allow one to separate the various scales entering a QCD process and, in particular, to separate perturbative scales from nonperturbative scales. The perturbative physics can then be calculated using QCD perturbation theory. Universal functions with precise field-theoretic definitions describe the nonperturbative physics. In addition, higher-order perturbative QCD corrections that are enhanced by large logarithms can be resummed using the renormalization group equations of SCET. The project applies SCET to the physics of heavy quarks, heavy quarkonium, and similar particles.
None
2011-10-06
Modern QCD - Lecture 4 We will consider some processes of interest at the LHC and will discuss the main elements of their cross-section calculations. We will also summarize the current status of higher order calculations.
Strange Baryon Physics in Full Lattice QCD
Huey-Wen Lin
2007-11-01
Strange baryon spectra and form factors are key probes to study excited nuclear matter. The use of lattice QCD allows us to test the strength of the Standard Model by calculating strange baryon quantities from first principles.
Excited light meson spectroscopy from lattice QCD
Christopher Thomas, Hadron Spectrum Collaboration
2012-04-01
I report on recent progress in calculating excited meson spectra using lattice QCD, emphasizing results and phenomenology. With novel techniques we can now extract extensive spectra of excited mesons with high statistical precision, including spin-four states and those with exotic quantum numbers. As well as isovector meson spectra, I will present new calculations of the spectrum of excited light isoscalar mesons, something that has up to now been a challenge for lattice QCD. I show determinations of the flavor content of these mesons, including the eta-eta' mixing angle, providing a window on annihilation dynamics in QCD. I will also discuss recent work on using lattice QCD to map out the energy-dependent phase shift in pi-pi scattering and future applications of the methodology to the study of resonances and decays.
Simplifying Multi-Jet QCD Computation
Peskin, Michael E.; /SLAC
2011-11-04
These lectures give a pedagogical discussion of the computation of QCD tree amplitudes for collider physics. The tools reviewed are spinor products, color ordering, MHV amplitudes, and the Britto-Cachazo-Feng-Witten recursion formula.
QCD mechanisms for heavy particle production
Brodsky, S.J.
1985-09-01
For very large pair mass, the production of heavy quarks and supersymmetric particles is expected to be governed by QCD fusion subprocesses. At lower mass scales other QCD mechanisms such as prebinding distortion and intrinsic heavy particle Fock states can become important, possibly accounting for the anomalies observed for charm hadroproduction. We emphasize the importance of final-state Coulomb interactions at low relative velocity in QCD and predict the existence of heavy narrow four-quark resonances (c c-bar u u-bar) and (cc c-bar c-bar) in gamma gamma reactions. Coherent QCD contributions are discussed as a contribution to the non-additivity of nuclear structure functions and heavy particle production cross sections. We also predict a new type of amplitude zero for exclusive heavy meson pair production which follows from the tree-graph structure of QCD. 35 refs., 8 figs., 1 tab.
Recent QCD Studies at the Tevatron
Group, Robert Craig
2008-04-01
Since the beginning of Run II at the Fermilab Tevatron the QCD physics groups of the CDF and D0 experiments have worked to reach unprecedented levels of precision for many QCD observables. Thanks to the large dataset, over 3 fb⁻¹ of integrated luminosity recorded by each experiment, important new measurements have recently been made public and will be summarized in this paper.
QCD and hard diffraction at the LHC
Albrow, Michael G.; /Fermilab
2005-09-01
As an introduction to QCD at the LHC the author gives an overview of QCD at the Tevatron, emphasizing the high Q² frontier which will be taken over by the LHC. After describing briefly the LHC detectors the author discusses high mass diffraction, in particular central exclusive production of Higgs and vector boson pairs. The author introduces the FP420 project to measure the scattered protons 420 m downstream of ATLAS and CMS.
Novel QCD effects in nuclear collisions
Brodsky, S.J.
1991-12-01
Heavy ion collisions can provide a novel environment for testing fundamental dynamical processes in QCD, including minijet formation and interactions, formation zone phenomena, color filtering, coherent co-mover interactions, and new higher twist mechanisms which could account for the observed excess production and anomalous nuclear target dependence of heavy flavor production. The possibility of using light-cone thermodynamics and a corresponding covariant temperature to describe the QCD phases of the nuclear fragmentation region is also briefly discussed.
Lattice and Phase Diagram in QCD
Lombardo, Maria Paola
2008-10-13
Model calculations have produced a number of very interesting expectations for the QCD Phase Diagram, and the task of lattice calculations is to put these studies on quantitative grounds. I will give an overview of the current status of the lattice analysis of the QCD phase diagram, from the quantitative results of mature calculations at zero and small baryochemical potential, to the exploratory studies of the colder, denser phase.
Precision lattice QCD: challenges and prospects
NASA Astrophysics Data System (ADS)
Hashimoto, Shoji
2013-04-01
With petaflops-scale computational resources, lattice QCD simulation has recently reached one of its primary goals, i.e. reproducing the low-lying hadron spectrum starting from the QCD Lagrangian. Applications to various other phenomenological quantities, for which no other method of precise theoretical calculation is available, would become the next milestone. In this talk I will provide a brief overview of the field and summarize the remaining problems to be solved before achieving the precision calculations.
Soft and hard contributions to QCD processes
Slavnov, D.A.; Bakulina, E.N.
1995-06-01
QCD corrections of order α_s for deep inelastic lepton scattering and the Drell-Yan process are considered. The common soft part of these corrections is found. This result makes it possible to determine the modified parton distribution functions unambiguously beyond the leading logarithmic approximation. These distribution functions are used to obtain QCD corrections that are free of infrared and collinear ambiguities. 6 refs., 2 figs.
Some new/old approaches to QCD
Gross, D.J.
1992-11-01
In this lecture I shall discuss some recent attempts to revive some old ideas to address the problem of solving QCD. I believe that it is timely to return to this problem, which has been woefully neglected for the last decade. QCD is a permanent part of the theoretical landscape and eventually we will have to develop analytic tools for dealing with the theory in the infra-red. Lattice techniques are useful but they have not yet lived up to their promise. Even if one manages to derive the hadronic spectrum numerically, to an accuracy of 10% or even 1%, we will not be truly satisfied unless we have some analytic understanding of the results. Also, lattice Monte-Carlo methods can only be used to answer a small set of questions. Many issues of great conceptual and practical interest, in particular the calculation of scattering amplitudes, are thus far beyond lattice control. Any progress in controlling QCD in an explicit, analytic fashion would be of great conceptual value. It would also be of great practical aid to experimentalists, who must use rather ad-hoc and primitive models of QCD scattering amplitudes to estimate the backgrounds to interesting new physics. I will discuss an attempt to derive a string representation of QCD and a revival of the large N approach to QCD. Both of these ideas have a long history; many theorist-years have been devoted to their pursuit, so far with little success. I believe that it is time to try again. In part this is because of the progress in the last few years in string theory. Our increased understanding of string theory should make the attempt to discover a stringy representation of QCD easier, and the methods explored in matrix models might be employed to study the large N limit of QCD.
Lattice QCD and the Jefferson Laboratory Program
Jozef Dudek, Robert Edwards, David Richards, Konstantinos Orginos
2011-06-01
Lattice gauge theory provides our only means of performing ab initio calculations in the non-perturbative regime. It has thus become an increasingly important component of the Jefferson Laboratory physics program. In this paper, we describe the contributions of lattice QCD to our understanding of hadronic and nuclear physics, focusing on the structure of hadrons, the calculation of the spectrum and properties of resonances, and finally on deriving an understanding of the QCD origin of nuclear forces.
Protein interaction sentence detection using multiple semantic kernels
2011-01-01
Background Detection of sentences that describe protein-protein interactions (PPIs) in biomedical publications is a challenging and unresolved pattern recognition problem. Many state-of-the-art approaches for this task employ kernel classification methods, in particular support vector machines (SVMs). In this work we propose a novel data integration approach that utilises semantic kernels and a kernel classification method that is a probabilistic analogue to SVMs. Semantic kernels are created from statistical information gathered from large amounts of unlabelled text using lexical semantic models. Several semantic kernels are then fused into an overall composite classification space. In this initial study, we use simple features in order to examine whether the use of combinations of kernels constructed using word-based semantic models can improve PPI sentence detection. Results We show that combinations of semantic kernels lead to statistically significant improvements in recognition rates and receiver operating characteristic (ROC) scores over the plain Gaussian kernel, when applied to a well-known labelled collection of abstracts. The proposed kernel composition method also allows us to automatically infer the most discriminative kernels. Conclusions The results from this paper indicate that using semantic information from unlabelled text, and combinations of such information, can be valuable for classification of short texts such as PPI sentences. This study, however, is only a first step in evaluation of semantic kernels and probabilistic multiple kernel learning in the context of PPI detection. The method described herein is modular, and can be applied with a variety of feature types, kernels, and semantic models, in order to facilitate full extraction of interacting proteins. PMID:21569604
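The fusion step described above, several semantic kernels combined into one composite classification space, can be illustrated with a toy weighted sum of Gram matrices: a nonnegative combination of positive semidefinite kernels is itself a valid kernel. The feature matrices and weights below are placeholders, not the paper's lexical semantic models.

```python
import numpy as np

rng = np.random.default_rng(1)

def linear_kernel(X):
    # Gram matrix of one hypothetical "semantic" feature space
    return X @ X.T

# Two invented semantic representations of the same 5 sentences
X1 = rng.normal(size=(5, 8))
X2 = rng.normal(size=(5, 3))
K1, K2 = linear_kernel(X1), linear_kernel(X2)

weights = np.array([0.7, 0.3])            # assumed mixing weights
K = weights[0] * K1 + weights[1] * K2     # composite kernel

# A nonnegative combination of PSD kernels is PSD, hence a valid kernel
eigvals = np.linalg.eigvalsh(K)
```

In a multiple-kernel-learning setting the weights themselves would be learned; here they are fixed just to show that the composite space remains a legitimate kernel.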
Windows on the axion. [quantum chromodynamics (QCD)
NASA Technical Reports Server (NTRS)
Turner, Michael S.
1989-01-01
Peccei-Quinn symmetry with attendant axion is a most compelling, and perhaps the most minimal, extension of the standard model, as it provides a very elegant solution to the nagging strong CP-problem associated with the theta vacuum structure of QCD. However, particle physics gives little guidance as to the axion mass; a priori, the plausible values span the range 10^-12 eV ≲ m(a) ≲ 10^6 eV, some 18 orders of magnitude. Laboratory experiments have excluded masses greater than 10^4 eV, leaving unprobed some 16 orders of magnitude. Axions have a host of interesting astrophysical and cosmological effects, including modifying the evolution of stars of all types (our sun, red giants, white dwarfs, and neutron stars), contributing significantly to the mass density of the Universe today, and producing detectable line radiation through the decays of relic axions. Consideration of these effects has probed 14 orders of magnitude in axion mass, and has left open only two windows for further exploration: 10^-6 eV ≲ m(a) ≲ 10^-3 eV and 1 eV ≲ m(a) ≲ 5 eV (hadronic axions only). Both these windows are accessible to experiment, and a variety of very interesting experiments, all of which involve heavenly axions, are being planned or are underway.
QCD: results from lattice quantum chromodynamics
Kronfeld, Andreas S.; /Fermilab
2006-10-01
Quantum chromodynamics (QCD) is the modern theory of the strong force. In this theory, the main objects are quarks and gluons, which are bound by the strong force into protons, neutrons, and other particles called hadrons. In the framework of QCD, the strong nuclear force binding protons and neutrons together into nuclei is actually only a residue of the much stronger forces acting between quarks and gluons. In fact, inside the proton, even the concept of force is not very useful. Within every hadron there is a swirl of gluons being exchanged back and forth as a manifestation of the strong force. To make matters worse, gluons can split into two, and then rejoin, or they can split into a quark-antiquark pair. Even the simplest hadron is a complex system hosting constantly interacting components. Despite this complexity, QCD is well established experimentally. This is because at short distances (or high energies), the coupling between the particles is effectively small and particles move around with relative freedom. This is called asymptotic freedom and QCD is amenable to the traditional methods of quantum field theory in this regime. High-energy experiments have tested and confirmed QCD in this realm, which led to the 2004 Nobel Prize in Physics for Drs. David Gross, David Politzer, and Frank Wilczek, the theorists who provided the theory for short-range QCD and asymptotic freedom.
Multiple kernel learning for sparse representation-based classification.
Shrivastava, Ashish; Patel, Vishal M; Chellappa, Rama
2014-07-01
In this paper, we propose a multiple kernel learning (MKL) algorithm that is based on the sparse representation-based classification (SRC) method. Taking advantage of the nonlinear kernel SRC in efficiently representing the nonlinearities in the high-dimensional feature space, we propose an MKL method based on the kernel alignment criteria. Our method uses a two-step training method to learn the kernel weights and sparse codes. At each iteration, the sparse codes are updated first while fixing the kernel mixing coefficients, and then the kernel mixing coefficients are updated while fixing the sparse codes. These two steps are repeated until a stopping criterion is met. The effectiveness of the proposed method is demonstrated using several publicly available image classification databases and it is shown that this method can perform significantly better than many competitive image classification algorithms. PMID:24835226
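The two-step alternation can be sketched in a few lines: fix the kernel weights and take a proximal (soft-thresholded) gradient step on the sparse code, then fix the code and renormalize the weights from a crude per-kernel score. This is a toy illustration of the alternating structure only; the actual SRC objective and kernel-alignment criterion of the paper are more involved, and every quantity below is invented.

```python
import numpy as np

rng = np.random.default_rng(2)
n_atoms = 6

# Two hypothetical base kernels over a dictionary D (rows 0..5 of F)
# and a test sample y (last row of F), built from random features
K_list = []
for _ in range(2):
    F = rng.normal(size=(n_atoms + 1, 4))
    K_list.append(F @ F.T)

w = np.full(2, 0.5)        # kernel mixing weights
x = np.zeros(n_atoms)      # sparse code

def soft(v, t):
    # soft-thresholding (proximal operator of the l1 penalty)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

for _ in range(50):
    # combined kernel under the current weights
    K = sum(wi * Ki for wi, Ki in zip(w, K_list))
    K_DD, k_Dy = K[:n_atoms, :n_atoms], K[:n_atoms, -1]
    # step 1: one ISTA step on the code (weights fixed)
    L = np.linalg.eigvalsh(K_DD).max()    # Lipschitz constant of the gradient
    grad = K_DD @ x - k_Dy
    x = soft(x - grad / L, 0.01 / L)
    # step 2: weights from a crude per-kernel score (code fixed), normalized
    score = np.array([x @ Ki[:n_atoms, -1] for Ki in K_list])
    score = np.maximum(score, 1e-12)
    w = score / score.sum()
```

The point is purely structural: each half-step optimizes one block of variables while the other is held fixed, which is the pattern the abstract describes.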
Small convolution kernels for high-fidelity image restoration
NASA Technical Reports Server (NTRS)
Reichenbach, Stephen E.; Park, Stephen K.
1991-01-01
An algorithm is developed for computing the mean-square-optimal values for small, image-restoration kernels. The algorithm is based on a comprehensive, end-to-end imaging system model that accounts for the important components of the imaging process: the statistics of the scene, the point-spread function of the image-gathering device, sampling effects, noise, and display reconstruction. Subject to constraints on the spatial support of the kernel, the algorithm generates the kernel values that restore the image with maximum fidelity, that is, the kernel minimizes the expected mean-square restoration error. The algorithm is consistent with the derivation of the spatially unconstrained Wiener filter, but leads to a small, spatially constrained kernel that, unlike the unconstrained filter, can be efficiently implemented by convolution. Simulation experiments demonstrate that for a wide range of imaging systems these small kernels can restore images with fidelity comparable to images restored with the unconstrained Wiener filter.
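The spatially constrained analogue of the Wiener filter can be demonstrated in one dimension: fit the few free kernel taps by least squares so that convolving the observed signal best reproduces the scene, mirroring the mean-square-optimal derivation. The scene statistics, PSF, and noise level below are invented for the demo and are not the paper's imaging-system model.

```python
import numpy as np

rng = np.random.default_rng(3)
n, taps = 512, 5                            # signal length, small-kernel support

scene = np.cumsum(rng.normal(size=n))       # toy 1-D "scene" (random walk)
psf = np.array([0.25, 0.5, 0.25])           # assumed image-gathering PSF
blurred = np.convolve(scene, psf, mode="same")
observed = blurred + 0.05 * rng.normal(size=n)

# Least-squares system: find the 5-tap kernel h minimizing
# mean((scene - h * observed)^2), the spatially constrained
# counterpart of the unconstrained Wiener filter.
pad = taps // 2
cols = [np.roll(observed, k) for k in range(-pad, pad + 1)]
X = np.stack(cols, axis=1)[pad:-pad]        # drop wrap-around rows
y = scene[pad:-pad]
h, *_ = np.linalg.lstsq(X, y, rcond=None)

restored = X @ h                            # restoration by small convolution
```

Because the identity (delta) kernel is one admissible 5-tap filter, the fitted kernel can never do worse in mean-square error than leaving the observed signal untouched.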
NASA Astrophysics Data System (ADS)
Kawamura, Hiroyuki; Tanaka, Kazuhiro
2010-06-01
The B-meson distribution amplitude (DA) is defined as the matrix element of a quark-antiquark bilocal light-cone operator in the heavy-quark effective theory, corresponding to a long-distance component in the factorization formula for exclusive B-meson decays. The evolution equation for the B-meson DA is governed by the cusp anomalous dimension as well as the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi-type anomalous dimension, and these anomalous dimensions give the “quasilocal” kernel in the coordinate-space representation. We show that this evolution equation can be solved analytically in the coordinate space, accomplishing the relevant Sudakov resummation at the next-to-leading logarithmic accuracy. The quasilocal nature leads to a quite simple form of our solution which determines the B-meson DA with a quark-antiquark light-cone separation t in terms of the DA at a lower renormalization scale μ with smaller interquark separations zt (z≤1). This formula allows us to present a rigorous calculation of the B-meson DA at the factorization scale ∼√(m_b Λ_QCD) for t less than ∼1 GeV⁻¹, using the recently obtained operator product expansion of the DA as the input at μ∼1 GeV. We also derive the master formula, which reexpresses the integrals of the DA at μ∼√(m_b Λ_QCD) for the factorization formula by the compact integrals of the DA at μ∼1 GeV.
QCD structure of nuclear interactions
NASA Astrophysics Data System (ADS)
Granados, Carlos G.
The research presented in this dissertation investigated selected processes involving baryons and nuclei in hard scattering reactions. These processes are characterized by the production of particles with large energies and transverse momenta. Through these processes, this work explored both the constituent (quark) structure of baryons (specifically nucleons and Delta-isobars), and the mechanisms through which the interactions between these constituents ultimately control the selected reactions. The first of such reactions is hard nucleon-nucleon elastic scattering, which was studied here considering the quark exchange between the nucleons to be the dominant mechanism of interaction in the constituent picture. In particular, it was found that an angular asymmetry exhibited by proton-neutron elastic scattering data is explained within this framework if a quark-diquark picture dominates the nucleon's structure instead of a more traditional SU(6) three-quark picture. The latter yields an asymmetry around 90° center-of-mass scattering with a sign opposite to what is experimentally observed. The second process is the hard breakup by a photon of a nucleon-nucleon system in light nuclei. Proton-proton (pp) and proton-neutron (pn) breakup in 3He, and DeltaDelta-isobar production in deuteron breakup were analyzed in the hard rescattering model (HRM), which in conjunction with the quark interchange mechanism provides a Quantum Chromodynamics (QCD) description of the reaction. Through the HRM, cross sections for both channels in 3He photodisintegration were computed without the need of a fitting parameter. The results presented here for pp breakup show excellent agreement with recent experimental data. In DeltaDelta-isobar production in deuteron breakup, HRM angular distributions for the two DeltaDelta channels were compared to the pn channel and to each other. An important prediction from this study is that the Delta++Delta- channel consistently dominates Delta+Delta0
Monte Carlo Code System for Electron (Positron) Dose Kernel Calculations.
CHIBANI, OMAR
1999-05-12
Version 00 KERNEL performs dose kernel calculations for an electron (positron) isotropic point source in an infinite homogeneous medium. First, the auxiliary code PRELIM is used to prepare cross section data for the considered medium. Then the KERNEL code simulates the transport of electrons and bremsstrahlung photons through the medium until all particles reach their cutoff energies. The deposited energy is scored in concentric spherical shells at a radial distance ranging from zero to twice the source particle range.
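The scoring geometry is easy to sketch: histogram each history's energy-deposition radius into concentric spherical shells spanning zero to twice the source-particle range. The "transport" below is a one-line exponential stand-in for the real electron/bremsstrahlung simulation, so only the tallying structure reflects the code described; all parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
n_particles, n_shells = 100_000, 20
csda_range = 1.0                           # assumed source-particle range (cm)
r_max = 2.0 * csda_range                   # shells span 0 .. twice the range
edges = np.linspace(0.0, r_max, n_shells + 1)

# Toy "transport": each history deposits its whole energy at one radial
# depth drawn from an assumed exponential attenuation law.
r = rng.exponential(scale=0.3 * csda_range, size=n_particles)
energy = np.ones(n_particles)              # one arbitrary unit per history

deposited, _ = np.histogram(r, bins=edges, weights=energy)
deposited = deposited / n_particles        # fraction of source energy per shell
```

A real kernel calculation would replace the exponential draw with condensed-history electron steps and bremsstrahlung photon tracking down to the cutoff energies, but the radial-shell tally would look the same.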
Scale-invariant Lipatov kernels from t-channel unitarity
Coriano, C.; White, A.R.
1994-11-14
The Lipatov equation can be regarded as a reggeon Bethe-Salpeter equation in which higher-order reggeon interactions give higher-order kernels. Infra-red singular contributions in a general kernel are produced by t-channel nonsense states and the allowed kinematic forms are determined by unitarity. Ward identity and infra-red finiteness gauge invariance constraints then determine the corresponding scale-invariant part of a general higher-order kernel.
Simple analytic QCD model with perturbative QCD behavior at high momenta
Contreras, Carlos; Espinosa, Olivier; Cvetic, Gorazd; Martinez, Hector E.
2010-10-01
Analytic QCD models are those where the QCD running coupling has the physically correct analytic behavior, i.e., no Landau singularities in the Euclidean regime. We present a simple analytic QCD model in which the discontinuity function of the running coupling at high momentum scales is the same as in perturbative QCD (just like in the analytic QCD model of Shirkov and Solovtsov), but at low scales it is replaced by a delta function which parametrizes the unknown behavior there. We require that the running coupling agree to a high degree with the perturbative coupling at high energies, which reduces the number of free parameters of the model from four to one. The remaining parameter is fixed by requiring the reproduction of the correct value of the semihadronic tau decay ratio.
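Schematically, such models are written through a dispersive (spectral) representation of the running coupling. The sketch below uses M_0^2 (the matching scale) and f_delta (the delta-function strength) as assumed notation rather than the paper's own symbols:

```latex
\mathcal{A}(Q^2) \;=\; \frac{1}{\pi}\int_0^{\infty}
  \frac{\rho(\sigma)}{\sigma+Q^2}\,d\sigma ,
\qquad
\rho(\sigma) \;=\; \pi f_\delta\,\delta\!\left(\sigma-M_0^2\right)
  \;+\; \Theta\!\left(\sigma-M_0^2\right)\rho_{\mathrm{pt}}(\sigma) ,
```

where ρ_pt is the perturbative discontinuity. Because the integral runs only over positive σ, the coupling is analytic in the Euclidean region (no Landau singularities), while the high-σ part of ρ guarantees agreement with perturbative QCD at large momenta.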
Nucleon spin structure and perturbative QCD frontier on the move
NASA Astrophysics Data System (ADS)
Pasechnik, Roman S.; Shirkov, Dmitry V.; Teryaev, Oleg V.; Solovtsova, Olga P.; Khandramai, Vyacheslav L.
2010-01-01
We discuss the interplay between higher orders of the perturbative QCD expansion and higher-twist contributions in the analysis of recent Jefferson Lab data on the lowest moments of spin-dependent proton and neutron structure functions Γ1p,n(Q2) and Bjorken sum rule function Γ1p-n(Q2) at 0.05
The Excited-state Spectrum of QCD through Lattice Gauge Theory Calculations
David Richards
2012-12-01
I describe recent progress in understanding the excited-state spectrum of QCD through lattice gauge theory calculations. I begin by outlining the evolution of the lattice effort at JLab. I detail the impact of recent lattice calculations on the present and upcoming experimental programs, and in particular that of the 12 GeV upgrade of Jefferson Laboratory. I conclude with the prospects for future calculations.
Robust kernel collaborative representation for face recognition
NASA Astrophysics Data System (ADS)
Huang, Wei; Wang, Xiaohui; Ma, Yanbo; Jiang, Yuzheng; Zhu, Yinghui; Jin, Zhong
2015-05-01
One of the greatest challenges of representation-based face recognition is that the training samples are usually insufficient. In other words, the training set usually does not include enough samples to show varieties of high-dimensional face images caused by illuminations, facial expressions, and postures. When the test sample is significantly different from the training samples of the same subject, the recognition performance will be sharply reduced. We propose a robust kernel collaborative representation based on virtual samples for face recognition. We think that the virtual training set conveys some reasonable and possible variations of the original training samples. Hence, we design a new objective function to more closely match the representation coefficients generated from the original and virtual training sets. In order to further improve the robustness, we implement the corresponding representation-based face recognition in kernel space. It is noteworthy that any kind of virtual training samples can be used in our method. We use noised face images to obtain virtual face samples. The noise can be approximately viewed as a reflection of the varieties of illuminations, facial expressions, and postures. Our work offers a simple and feasible way to obtain virtual face samples: we impose Gaussian noise (and other types of noise) on the original training samples to obtain possible variations of them. Experimental results on the FERET, Georgia Tech, and ORL face databases show that the proposed method is more robust than two state-of-the-art face recognition methods, CRC and kernel CRC.
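The virtual-sample idea plus kernel-space collaborative representation can be condensed into a toy two-class demo: augment the training set with Gaussian-noised copies, code a test sample over all training samples with a ridge-regularized kernel system, and assign the class whose coefficients reconstruct it best. The data, kernel, and regularization constant are invented; this follows the general collaborative-representation recipe, not the paper's exact objective function.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy data: two classes of 8-dim "face" vectors, 3 originals each
X = np.vstack([rng.normal(0, 1, (3, 8)), rng.normal(3, 1, (3, 8))])
labels = np.array([0, 0, 0, 1, 1, 1])

# Virtual samples: Gaussian-noised copies of the originals
X_aug = np.vstack([X, X + 0.1 * rng.normal(size=X.shape)])
y_aug = np.concatenate([labels, labels])

def rbf(A, B, gamma=0.05):
    # assumed Gaussian kernel
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def classify(z):
    # collaborative representation: ridge-regularized coding of the
    # test sample over ALL training samples, done in kernel space
    K = rbf(X_aug, X_aug)
    k = rbf(X_aug, z[None, :])[:, 0]
    alpha = np.linalg.solve(K + 1e-2 * np.eye(len(K)), k)
    resid = []
    for c in (0, 1):
        a = np.where(y_aug == c, alpha, 0.0)    # keep class-c coefficients
        # kernel-space residual, up to the constant k(z, z)
        resid.append(a @ K @ a - 2 * a @ k)
    return int(np.argmin(resid))

pred = classify(X[0] + 0.05 * rng.normal(size=8))
```

With well-separated class means the noisy probe lands on its own class; the noised copies simply enlarge the span of each class's coefficients.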
Influence of wheat kernel physical properties on the pulverizing process.
Dziki, Dariusz; Cacak-Pietrzak, Grażyna; Miś, Antoni; Jończyk, Krzysztof; Gawlik-Dziki, Urszula
2014-10-01
The physical properties of wheat kernels were determined and related to pulverizing performance by correlation analysis. Nineteen samples of wheat cultivars with a similar level of protein content (11.2-12.8 % w.b.), obtained from an organic farming system, were used for analysis. The kernels (moisture content 10 % w.b.) were pulverized using a laboratory hammer mill equipped with a 1.0 mm round-hole screen. The specific grinding energy ranged from 120 kJ·kg⁻¹ to 159 kJ·kg⁻¹. On the basis of the data obtained, many significant correlations (p < 0.05) were found between wheat kernel physical properties and the pulverizing process; in particular, the wheat kernel hardness index (obtained from the Single Kernel Characterization System) and vitreousness correlated significantly and positively with the grinding energy indices and the mass fraction of coarse particles (> 0.5 mm). Among the kernel mechanical properties determined by the uniaxial compression test, only the rupture force was correlated with the impact grinding results. The results also showed positive and significant relationships between kernel ash content and grinding energy requirements. On the basis of the wheat physical properties, a multiple linear regression was proposed for predicting the average particle size of the pulverized kernel. PMID:25328207
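The final step, a multiple linear regression predicting mean particle size from kernel properties, can be sketched on synthetic data. The predictor names follow the abstract, but every number below is fabricated for illustration and implies nothing about the study's actual coefficients.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 19                                     # 19 wheat samples, as in the study

# Hypothetical predictors: hardness index, vitreousness, ash content
hardness = rng.uniform(20, 80, n)
vitreous = rng.uniform(10, 90, n)
ash = rng.uniform(1.5, 2.2, n)
noise = rng.normal(0, 1, n)
# Synthetic response standing in for mean particle size (micrometres)
size = 200 - 0.8 * hardness - 0.3 * vitreous + 20 * ash + noise

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(n), hardness, vitreous, ash])
beta, *_ = np.linalg.lstsq(X, size, rcond=None)
predicted = X @ beta
r2 = 1 - ((size - predicted) ** 2).sum() / ((size - size.mean()) ** 2).sum()
```

With real measurements one would also report per-coefficient significance, which is what the correlation analysis in the paper screens for before building the regression.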
A short-time Beltrami kernel for smoothing images and manifolds.
Spira, Alon; Kimmel, Ron; Sochen, Nir
2007-06-01
We introduce a short-time kernel for the Beltrami image enhancing flow. The flow is implemented by "convolving" the image with a space dependent kernel in a similar fashion to the solution of the heat equation by a convolution with a Gaussian kernel. The kernel is appropriate for smoothing regular (flat) 2-D images, for smoothing images painted on manifolds, and for simultaneously smoothing images and the manifolds they are painted on. The kernel combines the geometry of the image and that of the manifold into one metric tensor, thus enabling a natural unified approach for the manipulation of both. Additionally, the derivation of the kernel gives a better geometrical understanding of the Beltrami flow and shows that the bilateral filter is a Euclidean approximation of it. On a practical level, the use of the kernel allows arbitrarily large time steps as opposed to the existing explicit numerical schemes for the Beltrami flow. In addition, the kernel works with equal ease on regular 2-D images and on images painted on parametric or triangulated manifolds. We demonstrate the denoising properties of the kernel by applying it to various types of images and manifolds. PMID:17547140
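The remark that the bilateral filter is a Euclidean approximation of the Beltrami kernel suggests a compact illustration: a space-dependent kernel that multiplies spatial closeness by intensity closeness, applied here to a noisy 1-D step signal. The parameters are arbitrary choices for the demo, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
signal = np.where(np.arange(n) < n // 2, 0.0, 1.0)   # clean step edge
noisy = signal + 0.05 * rng.normal(size=n)

def bilateral_1d(u, sigma_s=3.0, sigma_r=0.2, radius=9):
    # Space-dependent kernel: spatial Gaussian times intensity Gaussian,
    # the Euclidean approximation of the Beltrami short-time kernel
    out = np.empty_like(u)
    idx = np.arange(-radius, radius + 1)
    spatial = np.exp(-idx ** 2 / (2 * sigma_s ** 2))
    for i in range(len(u)):
        j = np.clip(i + idx, 0, len(u) - 1)
        w = spatial * np.exp(-(u[j] - u[i]) ** 2 / (2 * sigma_r ** 2))
        out[i] = (w * u[j]).sum() / w.sum()
    return out

smoothed = bilateral_1d(noisy)
```

Because the intensity term suppresses weights across the step, the noise on each flat side is averaged away while the edge itself survives, the behavior the kernel formulation makes explicit.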
Isolation of bacterial endophytes from germinated maize kernels.
Rijavec, Tomaz; Lapanje, Ales; Dermastia, Marina; Rupnik, Maja
2007-06-01
The germination of surface-sterilized maize kernels under aseptic conditions proved to be a suitable method for isolation of kernel-associated bacterial endophytes. Bacterial strains identified by partial 16S rRNA gene sequencing as Pantoea sp., Microbacterium sp., Frigoribacterium sp., Bacillus sp., Paenibacillus sp., and Sphingomonas sp. were isolated from kernels of 4 different maize cultivars. Genus Pantoea was associated with a specific maize cultivar. The kernels of this cultivar were often overgrown with the fungus Lecanicillium aphanocladii; however, those exhibiting Pantoea growth were never colonized with it. Furthermore, the isolated bacterium strain inhibited fungal growth in vitro. PMID:17668041
A Kernel-based Account of Bibliometric Measures
NASA Astrophysics Data System (ADS)
Ito, Takahiko; Shimbo, Masashi; Kudo, Taku; Matsumoto, Yuji
The application of kernel methods to citation analysis is explored. We show that a family of kernels on graphs provides a unified perspective on the three bibliometric measures that have been discussed independently: relatedness between documents, global importance of individual documents, and importance of documents relative to one or more (root) documents (relative importance). The framework provided by the kernels establishes relative importance as an intermediate between relatedness and global importance, in which the degree of `relativity,' or the bias between relatedness and importance, is naturally controlled by a parameter characterizing individual kernels in the family.
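As a concrete illustration of how a single graph-kernel parameter can interpolate between relatedness and importance, here is a Neumann-style kernel on a toy citation matrix. The specific kernel form, the matrix `A`, and the parameter values are hypothetical stand-ins, not the authors' exact family:

```python
import numpy as np

def neumann_kernel(A, gamma):
    """Neumann-style kernel on a citation graph.
    B = A^T A counts co-citations; K = B (I - gamma*B)^(-1) sums walks
    of all lengths with geometric weight gamma."""
    B = A.T @ A
    n = B.shape[0]
    return B @ np.linalg.inv(np.eye(n) - gamma * B)

# toy citation graph: A[i, j] = 1 if document i cites document j
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], float)

# tiny gamma ~ plain co-citation relatedness; gamma near 1/lambda_max
# lets long walks dominate, pulling the kernel toward global importance
K_rel = neumann_kernel(A, 1e-6)
lam_max = np.linalg.eigvalsh(A.T @ A).max()
K_imp = neumann_kernel(A, 0.9 / lam_max)
```

The parameter `gamma` plays the role of the "relativity" dial described in the abstract: it biases the measure between document-to-document relatedness and global importance.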
QCD as a topologically ordered system
Zhitnitsky, Ariel R.
2013-09-15
We argue that QCD belongs to a topologically ordered phase similar to many well-known gapped condensed matter systems such as topological insulators or superconductors. Our arguments are based on an analysis of the so-called “deformed QCD”, which is a weakly coupled gauge theory that nevertheless preserves all the crucial elements of strongly interacting QCD, including confinement, nontrivial θ dependence, degeneracy of the topological sectors, etc. Specifically, we construct the so-called topological “BF” action which reproduces the well-known infrared features of the theory, such as the non-dispersive contribution to the topological susceptibility which cannot be associated with any propagating degrees of freedom. Furthermore, we interpret the well-known resolution of the celebrated U(1)_A problem, in which the would-be η′ Goldstone boson acquires its mass as a result of mixing of the Goldstone field with a topological auxiliary field characterizing the system. We then identify the non-propagating auxiliary topological field of the BF formulation in deformed QCD with the Veneziano ghost (which plays the crucial role in the resolution of the U(1)_A problem). Finally, we elaborate on the relation between “string-net” condensation in topologically ordered condensed matter systems and the long-range coherent configurations, the “skeletons”, studied in QCD lattice simulations. -- Highlights: •QCD may belong to a topologically ordered phase similar to condensed matter (CM) systems. •We identify the non-propagating topological field in deformed QCD with the Veneziano ghost. •The relation between “string-net” condensates in CM systems and the “skeletons” in QCD lattice simulations is studied.
Hadronic and nuclear interactions in QCD
Not Available
1982-01-01
Despite the evidence that QCD - or something close to it - gives a correct description of the structure of hadrons and their interactions, it seems paradoxical that the theory has thus far had very little impact in nuclear physics. One reason for this is that the application of QCD to distances larger than 1 fm involves coherent, non-perturbative dynamics which is beyond present calculational techniques. For example, in QCD the nuclear force can evidently be ascribed to quark interchange and gluon exchange processes. These, however, are as complicated to analyze from a fundamental point of view as is the analogous covalent bond in molecular physics. Since a detailed description of quark-quark interactions and the structure of hadronic wavefunctions is not yet well understood in QCD, it is evident that a quantitative first-principles description of the nuclear force will require a great deal of theoretical effort. Another reason for the limited impact of QCD in nuclear physics has been the conventional assumption that nuclear interactions can for the most part be analyzed in terms of an effective meson-nucleon field theory or potential model in isolation from the details of the short-distance quark and gluon structure of hadrons. These lectures argue that this view is untenable: in fact, there is no correspondence principle which yields traditional nuclear physics as a rigorous large-distance or non-relativistic limit of QCD dynamics. On the other hand, the distinctions between standard nuclear physics dynamics and QCD at nuclear dimensions are extremely interesting and illuminating for both particle and nuclear physics.
Optimized Derivative Kernels for Gamma Ray Spectroscopy
Vlachos, D. S.; Kosmas, O. T.; Simos, T. E.
2007-12-26
In gamma ray spectroscopy, the photon detectors measure the number of photons whose energy lies in an interval called a channel. This accumulation of counts produces a measured function whose deviation from the ideal one may introduce high noise in the unfolded spectrum. To deal with this problem, the ideal accumulation function is interpolated with the use of specially designed derivative kernels. Simulation results are presented which show that this approach is very effective, even in spectra with low statistics.
Oil point pressure of Indian almond kernels
NASA Astrophysics Data System (ADS)
Aregbesola, O.; Olatunde, G.; Esuola, S.; Owolarafe, O.
2012-07-01
The effect of preprocessing conditions such as moisture content, heating temperature, heating time and particle size on the oil point pressure of Indian almond kernels was investigated. Results showed that oil point pressure was significantly (P < 0.05) affected by all of the above parameters. It was also observed that oil point pressure decreased with increasing heating temperature and heating time for both coarse and fine particles. Furthermore, an increase in moisture content raised the oil point pressure for coarse particles, while it lowered the oil point pressure for fine particles.
Verification of Chare-kernel programs
Bhansali, S.; Kale, L.V. )
1989-01-01
Experience with concurrent programming has shown that concurrent programs can conceal bugs even after extensive testing. Thus, there is a need for practical techniques that can establish the correctness of parallel programs. This paper proposes a method for proving the partial correctness of programs written in the Chare-kernel language, a language designed to support the parallel execution of computations with irregular structures. The proof is based on the lattice proof technique and is divided into two parts. The first part is concerned with program behavior within a single chare instance, whereas the second part captures the inter-chare interaction.
TMDs: Evolution, modeling, precision
NASA Astrophysics Data System (ADS)
D'Alesio, Umberto; Echevarría, Miguel G.; Melis, Stefano; Scimemi, Ignazio
2015-01-01
The factorization theorem for qT spectra in Drell-Yan processes, boson production and semi-inclusive deep inelastic scattering allows for the determination of the non-perturbative parts of transverse momentum dependent parton distribution functions. Here we discuss the fit of Drell-Yan and Z-production data using the transverse momentum dependent formalism and the resummation of the evolution kernel. We find a good theoretical stability of the results and a final χ2/points ≲ 1. We show how the fixing of the non-perturbative pieces of the evolution can be used to make predictions at present and future colliders.
Prediction of kernel density of corn using single-kernel near infrared spectroscopy
Technology Transfer Automated Retrieval System (TEKTRAN)
Corn hardness is an important property for dry- and wet-millers, food processors and corn breeders developing hybrids for specific markets. Of the several methods used to measure hardness, kernel density measurements are one of the more repeatable methods to quantify hardness. Near infrared spec...
Two flavor QCD and confinement
D'Elia, Massimo; Di Giacomo, Adriano; Pica, Claudio
2005-12-01
We argue that the order of the chiral transition for N_f=2 is a sensitive probe of the QCD vacuum, in particular of the mechanism of color confinement. A strategy is developed to investigate the order of the transition by use of finite size scaling analysis. An in-depth numerical investigation is performed with staggered fermions on lattices with L_t=4 and L_s=12, 16, 20, 24, 32 and quark masses am_q ranging from 0.01335 to 0.307036. The specific heat and a number of susceptibilities are measured and compared with the expectations of an O(4) second order and of a first order phase transition. A detailed comparison with previous works, which all use techniques similar to ours, is performed. A second order transition in the O(4) and O(2) universality classes is incompatible with our data, which seem to prefer a first order transition. However, we have L_t=4 and an unimproved action, so a check with improved techniques (algorithm and action) and possibly larger L_t will be needed to settle this issue on a firm basis.
Linear and kernel methods for multi- and hypervariate change detection
NASA Astrophysics Data System (ADS)
Nielsen, Allan A.; Canty, Morton J.
2010-10-01
The iteratively re-weighted multivariate alteration detection (IR-MAD) algorithm may be used both for unsupervised change detection in multi- and hyperspectral remote sensing imagery as well as for automatic radiometric normalization of multi- or hypervariate multitemporal image sequences. Principal component analysis (PCA) as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (which are nonlinear), may further enhance change signals relative to no-change background. The kernel versions are based on a dual formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of the kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel PCA, kernel MAF and kernel MNF analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In image analysis the Gram matrix is often prohibitively large (its size is the number of pixels in the image squared). In this case we may sub-sample the image and carry out the kernel eigenvalue analysis on a set of training data samples only. To obtain a transformed version of the entire image we then project all pixels, which we call the test data, mapped nonlinearly onto the primal eigenvectors. IDL (Interactive Data Language) implementations of IR-MAD, automatic radiometric normalization and kernel PCA/MAF/MNF transformations have been written.
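The train/test scheme described above (eigendecompose the Gram matrix of a subsample, then project all pixels via the kernel trick) can be sketched in a few lines of numpy. This is a generic kernel-PCA sketch with an assumed RBF kernel, not the authors' IDL implementation; `gamma` and the centering convention are standard textbook choices:

```python
import numpy as np

def kernel_pca_subsampled(X_train, X_all, n_comp=2, gamma=0.5):
    """Kernel PCA on a training subsample (the full Gram matrix over
    all pixels would be prohibitively large), followed by projection
    of all 'test' samples onto the primal eigenvectors."""
    def rbf(U, V):
        d2 = ((U[:, None, :] - V[None, :, :])**2).sum(-1)
        return np.exp(-gamma * d2)

    K = rbf(X_train, X_train)
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    Kc = J @ K @ J
    w, V = np.linalg.eigh(Kc)                    # ascending eigenvalues
    idx = np.argsort(w)[::-1][:n_comp]
    w, V = w[idx], V[:, idx]
    alpha = V / np.sqrt(np.maximum(w, 1e-12))    # normalized dual coefficients

    # kernel between all pixels and the training subsample, centered
    # consistently with the training Gram matrix
    K_test = rbf(X_all, X_train)
    one_n = np.ones((n, n)) / n
    one_mn = np.ones((K_test.shape[0], n)) / n
    K_test_c = K_test - one_mn @ K - K_test @ one_n + one_mn @ K @ one_n
    return K_test_c @ alpha
```

The nonlinear mapping never appears explicitly: every quantity is expressed through kernel evaluations against the training samples, exactly as the dual (Q-mode) formulation requires.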
Scientific Computing Kernels on the Cell Processor
Williams, Samuel W.; Shalf, John; Oliker, Leonid; Kamil, Shoaib; Husbands, Parry; Yelick, Katherine
2007-04-04
The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the recently-released STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. First, we introduce a performance model for Cell and apply it to several key scientific computing kernels: dense matrix multiply, sparse matrix vector multiply, stencil computations, and 1D/2D FFTs. The difficulty of programming Cell, which requires assembly level intrinsics for the best performance, makes this model useful as an initial step in algorithm design and evaluation. Next, we validate the accuracy of our model by comparing results against published hardware results, as well as our own implementations on a 3.2GHz Cell blade. Additionally, we compare Cell performance to benchmarks run on leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1E) architectures. Our work also explores several different mappings of the kernels and demonstrates a simple and effective programming model for Cell's unique architecture. Finally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.
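For reference, one of the kernels named above, sparse matrix-vector multiply, in its plain CSR baseline form. The Cell implementations discussed in the paper are heavily tuned with SIMD intrinsics; this is only the textbook algorithm the tuned versions implement:

```python
import numpy as np

def spmv_csr(values, col_idx, row_ptr, x):
    """Sparse matrix-vector multiply y = A x with A in compressed
    sparse row (CSR) format: values holds the nonzeros row by row,
    col_idx their column indices, and row_ptr[i]:row_ptr[i+1] the
    slice of nonzeros belonging to row i."""
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y
```

The irregular, indirect access `x[col_idx[k]]` is what makes SpMV memory-bound on most architectures, which is why it is a standard probe of memory-system performance.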
Stable Local Volatility Calibration Using Kernel Splines
NASA Astrophysics Data System (ADS)
Coleman, Thomas F.; Li, Yuying; Wang, Cheng
2010-09-01
We propose an optimization formulation using L1 norm to ensure accuracy and stability in calibrating a local volatility function for option pricing. Using a regularization parameter, the proposed objective function balances the calibration accuracy with the model complexity. Motivated by the support vector machine learning, the unknown local volatility function is represented by a kernel function generating splines and the model complexity is controlled by minimizing the 1-norm of the kernel coefficient vector. In the context of the support vector regression for function estimation based on a finite set of observations, this corresponds to minimizing the number of support vectors for predictability. We illustrate the ability of the proposed approach to reconstruct the local volatility function in a synthetic market. In addition, based on S&P 500 market index option data, we demonstrate that the calibrated local volatility surface is simple and resembles the observed implied volatility surface in shape. Stability is illustrated by calibrating local volatility functions using market option data from different dates.
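The core optimization idea (kernel expansion plus a 1-norm penalty on the coefficients) can be sketched with stand-in pieces. Here an RBF kernel stands in for the paper's spline-generating kernel, the data are synthetic, and ISTA (proximal gradient) stands in for whatever solver the authors use; `lam` and `gamma` are illustrative values:

```python
import numpy as np

def fit_l1_kernel_coeffs(x, y, lam=0.05, gamma=20.0, iters=5000):
    """Represent the unknown function as f(s) = sum_j c_j k(s, x_j)
    and minimize 0.5*||K c - y||^2 + lam*||c||_1 by proximal gradient
    descent (ISTA); the 1-norm drives many coefficients to zero,
    analogous to minimizing the number of support vectors."""
    K = np.exp(-gamma * (x[:, None] - x[None, :])**2)
    c = np.zeros(len(x))
    step = 1.0 / np.linalg.norm(K, 2)**2          # 1 / Lipschitz constant
    for _ in range(iters):
        z = c - step * K.T @ (K @ c - y)          # gradient step on the quadratic
        c = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return K, c
```

The regularization parameter `lam` plays the role described in the abstract: it trades calibration accuracy against model complexity, measured here by the 1-norm of the kernel coefficient vector.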
Transcriptome analysis of Ginkgo biloba kernels
He, Bing; Gu, Yincong; Xu, Meng; Wang, Jianwen; Cao, Fuliang; Xu, Li-an
2015-01-01
Ginkgo biloba is a dioecious species native to China with medicinally and phylogenetically important characteristics; however, genomic resources for this species are limited. In this study, we performed the first transcriptome sequencing for Ginkgo kernels at five time points using Illumina paired-end sequencing. Approximately 25.08 Gb of clean reads were obtained, and 68,547 unigenes with an average length of 870 bp were generated by de novo assembly. Of these unigenes, 29,987 (43.74%) were annotated in publicly available plant protein databases. A total of 3,869 genes were identified as significantly differentially expressed, and enrichment analysis was conducted at different time points. Furthermore, metabolic pathway analysis revealed that 66 unigenes were responsible for terpenoid backbone biosynthesis, with up to 12 up-regulated unigenes involved in the biosynthesis of ginkgolide and bilobalide. Differential gene expression analysis together with real-time PCR experiments indicated that the synthesis of bilobalide may have interfered with the ginkgolide synthesis process in the kernel. These data markedly expand the existing transcriptome resources of Ginkgo and provide a valuable platform for revealing more about the developmental and metabolic mechanisms of this species. PMID:26500663
Delimiting Areas of Endemism through Kernel Interpolation
Oliveira, Ubirajara; Brescovit, Antonio D.; Santos, Adalberto J.
2015-01-01
We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. It estimates the overlap between species distributions through a kernel interpolation of the centroids of species distributions, with areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified through each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes may be responsible for the origin and maintenance of these biogeographic units. PMID:25611971
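The centroid-plus-bandwidth construction described above can be sketched directly. This is an illustrative reading of the method, not the authors' implementation: one Gaussian kernel per species, centered on its occurrence centroid, with a bandwidth set from the centroid-to-farthest-occurrence distance (the function name and `bandwidth_scale` are assumptions):

```python
import numpy as np

def gie_surface(occurrences, grid, bandwidth_scale=1.0):
    """Sketch of Geographical Interpolation of Endemism: sum one
    Gaussian kernel per species over the evaluation grid. occurrences
    is a list of (n_i, 2) coordinate arrays, one per species; grid is
    an (m, 2) array of evaluation points."""
    surface = np.zeros(len(grid))
    for pts in occurrences:
        pts = np.asarray(pts, float)
        centroid = pts.mean(axis=0)
        reach = np.linalg.norm(pts - centroid, axis=1).max()
        h = max(bandwidth_scale * reach, 1e-6)   # bandwidth from area of influence
        d2 = ((grid - centroid)**2).sum(axis=1)
        surface += np.exp(-d2 / (2 * h**2))
    return surface   # high values ~ overlap of many species ranges
```

Peaks of the resulting surface mark regions where many species' areas of influence overlap, i.e. candidate areas of endemism, without any reference to grid cells for the species data themselves.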
Antiangular Ordering of Gluon Radiation in QCD Media
Mehtar-Tani, Yacine; Salgado, Carlos A.; Tywoniuk, Konrad
2011-03-25
We investigate angular and energy distributions of medium-induced gluon emission off a quark-antiquark antenna in the framework of perturbative QCD as an attempt toward understanding, from first principles, jet evolution inside the quark-gluon plasma. In-medium color coherence between emitters, neglected in all previous calculations, leads to a novel mechanism of soft-gluon radiation. The structure of the corresponding spectrum, in contrast with known medium-induced radiation, i.e., off a single emitter, retains some properties of the vacuum case; in particular, it exhibits a soft divergence. However, as opposed to the vacuum, the collinear singularity is regulated by the pair opening angle, leading to a strict angular separation between vacuum and medium-induced radiation, denoted as antiangular ordering. We comment on the possible consequences of this new contribution for jet observables in heavy-ion collisions.
Transverse momentum-dependent parton distribution functions from lattice QCD
Michael Engelhardt, Philipp Haegler, Bernhard Musch, John Negele, Andreas Schaefer
2012-12-01
Transverse momentum-dependent parton distributions (TMDs) relevant for semi-inclusive deep inelastic scattering (SIDIS) and the Drell-Yan process can be defined in terms of matrix elements of a quark bilocal operator containing a staple-shaped Wilson connection. Starting from such a definition, a scheme to determine TMDs in lattice QCD is developed and explored. Parametrizing the aforementioned matrix elements in terms of invariant amplitudes permits a simple transformation of the problem to a Lorentz frame suited for the lattice calculation. Results for the Sivers and Boer-Mulders transverse momentum shifts are obtained using ensembles at the pion masses 369 MeV and 518 MeV, focusing in particular on the dependence of these shifts on the staple extent and a Collins-Soper-type evolution parameter quantifying proximity of the staples to the light cone.
Transient anomalous charge production in strong-field QCD
NASA Astrophysics Data System (ADS)
Tanji, Naoto; Mueller, Niklas; Berges, Jürgen
2016-04-01
We investigate axial charge production in two-color QCD out of equilibrium. We compute the real-time evolution starting with spatially homogeneous strong gauge fields, while the fermions are in vacuum. The idealized class of initial conditions is motivated by glasma flux tubes in the context of heavy-ion collisions. We focus on axial charge production at early times, where important aspects of the anomalous dynamics can be derived analytically. This is compared to real-time lattice simulations. Quark production at early times leading to anomalous charge generation is investigated using Wilson fermions. Our results indicate that coherent gauge fields can transiently produce significant amounts of axial charge density, while part of the induced charges persist to be present even well beyond characteristic decoherence times. The comparisons to analytic results provide stringent tests of real-time representations of the axial anomaly on the lattice.
Technology Transfer Automated Retrieval System (TEKTRAN)
Maize kernel density impacts milling quality of the grain due to kernel hardness. Harder kernels are correlated with higher test weight and are more resistant to breakage during harvest and transport. Softer kernels, in addition to being susceptible to mechanical damage, are also prone to pathogen ...
Light mesons in QCD and unquenching effects from the 3PI effective action
NASA Astrophysics Data System (ADS)
Williams, Richard; Fischer, Christian S.; Heupel, Walter
2016-02-01
We investigate the impact of unquenching effects on QCD Green's functions, in the form of quark-loop contributions to both the gluon propagator and the three-gluon vertex, in a three-loop inspired truncation of the three-particle irreducible (3PI) effective action. The fully coupled system of Dyson-Schwinger equations for the quark-gluon, ghost-gluon and three-gluon vertices, together with the quark propagator, is solved self-consistently; our only inputs are the ghost and gluon propagators themselves, which are constrained by calculations within lattice QCD. We find that the two different unquenching effects have roughly equal, but opposite, impact on the quark-gluon vertex and quark propagator, with an overall negative impact on the latter. By taking further derivatives of the 3PI effective action, we construct the corresponding quark-antiquark kernel of the Bethe-Salpeter equation for mesons. The leading component is gluon exchange between two fully dressed quark-gluon vertices, thus introducing for the first time an obvious scalar-scalar component to the binding. We gain access to time-like properties of bound states by analytically continuing the coupled system of Dyson-Schwinger equations to the complex plane. We observe that the vector axial-vector splitting is in accord with experiment and that the lightest quark-antiquark scalar meson is above 1 GeV in mass.
Community detection using Kernel Spectral Clustering with memory
NASA Astrophysics Data System (ADS)
Langone, Rocco; Suykens, Johan A. K.
2013-02-01
This work is related to the problem of community detection in dynamic scenarios, which arises for instance in the segmentation of moving objects, the clustering of telephone traffic data, time-series micro-array data, etc. A desirable feature of a clustering model which has to capture the evolution of communities over time is temporal smoothness between clusters in successive time-steps. In this way the model is able to track the long-term trend while at the same time smoothing out short-term variation due to noise. We use Kernel Spectral Clustering with Memory effect (MKSC), which allows cluster memberships of new nodes to be predicted via out-of-sample extension and has a proper model selection scheme. It is based on a constrained optimization formulation typical of Least Squares Support Vector Machines (LS-SVM), where the objective function is designed to explicitly incorporate temporal smoothness as a valid prior knowledge. The latter, in fact, allows the model to cluster the current data well and to be consistent with the recent history. Here we propose a generalization of the MKSC model with arbitrary memory, not only one time-step in the past. The experiments conducted on toy problems confirm our expectations: the more memory we add to the model, the smoother the clustering results are over time. We also compare with the Evolutionary Spectral Clustering (ESC) algorithm, which is a state-of-the-art method, and obtain comparable or better results.
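The two ingredients of the idea, a spectral clustering step and a memory term blending kernels from past time-steps, can be sketched in simplified form. This is not the LS-SVM formulation of MKSC; the geometric memory weights and the two-cluster Fiedler-vector step are generic stand-ins:

```python
import numpy as np

def memory_kernel(K_list, nu=0.5):
    """Blend the current kernel matrix K_list[0] with kernels from
    previous time-steps using geometric memory weights, a simplified
    stand-in for the temporal-smoothness term in MKSC."""
    w = nu ** np.arange(len(K_list), dtype=float)
    w = w / w.sum()
    return sum(wi * Ki for wi, Ki in zip(w, K_list))

def two_way_spectral(K):
    """Two-cluster spectral assignment: sign of the Fiedler vector
    (second eigenvector) of the normalized Laplacian built from K."""
    d = K.sum(axis=1)
    Dm = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(K)) - Dm @ K @ Dm
    _, V = np.linalg.eigh(L)          # ascending eigenvalues
    return (V[:, 1] > 0).astype(int)
```

Clustering `memory_kernel([K_t, K_tm1, ...])` instead of `K_t` alone makes the assignments at time t consistent with recent history, which is the smoothing effect the abstract describes.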
Kernel Method Based Human Model for Enhancing Interactive Evolutionary Optimization
Zhao, Qiangfu; Liu, Yong
2015-01-01
A fitness landscape presents the relationship between an individual and its reproductive success in evolutionary computation (EC). However, a discrete and approximate landscape in an original search space may not supply enough accurate information for EC search, especially in interactive EC (IEC). The fitness landscape of human subjective evaluation in IEC is very difficult, if not impossible, to model, even with a hypothesis of what its definition might be. In this paper, we propose a method to establish a human model in a projected high-dimensional search space by kernel classification for enhancing IEC search. Because bivalent logic is the simplest perceptual paradigm, the human model is established on this principle. In feature space, we design a linear classifier as a human model to obtain user preference knowledge, which cannot be captured linearly in the original discrete search space. The human model established by this method predicts potential perceptual knowledge of humans. With the human model, we design an evolution control method to enhance IEC search. Experimental evaluation results with a pseudo-IEC user show that our proposed model and method can enhance IEC search significantly. PMID:25879050
Holographic models and the QCD trace anomaly
Jose L. Goity, Roberto C. Trinchero
2012-08-01
Five dimensional dilaton models are considered as possible holographic duals of the pure gauge QCD vacuum. In the framework of these models, the QCD trace anomaly equation is considered. Each quantity appearing in that equation is computed by holographic means. Two exact solutions for different dilaton potentials, corresponding to perturbative and non-perturbative β-functions, are studied. It is shown that in the perturbative case, where the β-function is the QCD one at leading order, the resulting space is not asymptotically AdS. In the non-perturbative case, the model considered exhibits confinement of static quarks and leads to a non-vanishing gluon condensate, although it does not correspond to an asymptotically free theory. In both cases analyses based on the trace anomaly and on Wilson loops are carried out.
QCD sign problem for small chemical potential
Splittorff, K.; Verbaarschot, J. J. M.
2007-06-01
The expectation value of the complex phase factor of the fermion determinant is computed in the microscopic domain of QCD at nonzero chemical potential. We find that the average phase factor is nonvanishing below a critical value of the chemical potential equal to half the pion mass and vanishes exponentially in the volume for larger values of the chemical potential. This holds for QCD with dynamical quarks as well as for quenched and phase quenched QCD. The average phase factor has an essential singularity for zero chemical potential and cannot be obtained by analytic continuation from imaginary chemical potential or by means of a Taylor expansion. The leading order correction in the p-expansion of the chiral Lagrangian is calculated as well.
Quarkonium states in an anisotropic QCD plasma
Dumitru, Adrian; Guo Yun; Mocsy, Agnes; Strickland, Michael
2009-03-01
We consider quarkonium in a hot quantum chromodynamics (QCD) plasma which, due to expansion and nonzero viscosity, exhibits a local anisotropy in momentum space. At short distances the heavy-quark potential is known at tree level from the hard-thermal loop resummed gluon propagator in anisotropic perturbative QCD. The potential at long distances is modeled as a QCD string which is screened at the same scale as the Coulomb field. At asymptotic separation the potential energy is nonzero and inversely proportional to the temperature. We obtain numerical solutions of the three-dimensional Schroedinger equation for this potential. We find that quarkonium binding is stronger at nonvanishing viscosity and expansion rate, and that the anisotropy leads to polarization of the P-wave states.
Exploring hyperons and hypernuclei with lattice QCD
Beane, S.R.; Bedaque, P.F.; Parreno, A.; Savage, M.J.
2003-01-01
In this work we outline a program for lattice QCD that would provide a first step toward understanding the strong and weak interactions of strange baryons. The study of hypernuclear physics has provided a significant amount of information regarding the structure and weak decays of light nuclei containing one or two Lambdas and Sigmas. From a theoretical standpoint, little is known about the hyperon-nucleon interaction, which is required input for systematic calculations of hypernuclear structure. Furthermore, the long-standing discrepancies in the P-wave amplitudes for nonleptonic hyperon decays remain to be understood, and their resolution is central to a better understanding of the weak decays of hypernuclei. We present a framework that utilizes Luscher's finite-volume techniques in lattice QCD to extract the scattering length and effective range for Lambda-N scattering in both QCD and partially-quenched QCD. The effective theory describing the nonleptonic decays of hyperons using isospin symmetry alone, appropriate for lattice calculations, is constructed.
Introduction to Kernel Methods: Classification of Multivariate Data
NASA Astrophysics Data System (ADS)
Fauvel, M.
2016-05-01
In this chapter, kernel methods are presented for the classification of multivariate data. An introductory example is given to illustrate the main idea of kernel methods. Emphasis is then placed on the Support Vector Machine. Structural risk minimization is presented, and linear and non-linear SVMs are described. Finally, a full example of SVM classification is given on simulated hyperspectral data.
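The structural-risk-minimization objective behind the SVM (margin regularizer plus hinge loss) can be made concrete with a minimal trainer. This sketch uses a Pegasos-style stochastic subgradient method on the linear SVM primal, with toy data; the hyperparameters `lam` and `epochs` are illustrative:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200):
    """Minimal linear SVM: stochastic subgradient descent on the
    regularized hinge loss lam/2 * ||w||^2 + mean(max(0, 1 - y f(x))),
    Pegasos-style step sizes; labels y must be in {-1, +1}."""
    rng = np.random.default_rng(0)
    w = np.zeros(X.shape[1])
    b = 0.0
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            t += 1
            eta = 1.0 / (lam * t)                 # decaying step size
            w *= (1.0 - eta * lam)                # shrink: margin regularizer
            if y[i] * (X[i] @ w + b) < 1:         # hinge is active
                w += eta * y[i] * X[i]
                b += eta * y[i]
    return w, b
```

A non-linear SVM replaces the inner products implicit in `X[i] @ w` with kernel evaluations, which is the step the chapter's kernel-trick discussion motivates.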
Comparison of Kernel Equating and Item Response Theory Equating Methods
ERIC Educational Resources Information Center
Meng, Yu
2012-01-01
The kernel method of test equating is a unified approach to test equating with some advantages over traditional equating methods. Therefore, it is important to evaluate in a comprehensive way the usefulness and appropriateness of the Kernel equating (KE) method, as well as its advantages and disadvantages compared with several popular item…
High speed sorting of Fusarium-damaged wheat kernels
Technology Transfer Automated Retrieval System (TEKTRAN)
Recent studies have found that resistance to Fusarium fungal infection can be inherited in wheat from one generation to another. However, no cost-effective method is yet available to separate Fusarium-damaged wheat kernels from undamaged kernels so that wheat breeders can take advantage of...
Covariant Perturbation Expansion of Off-Diagonal Heat Kernel
NASA Astrophysics Data System (ADS)
Gou, Yu-Zi; Li, Wen-Du; Zhang, Ping; Dai, Wu-Sheng
2016-07-01
Covariant perturbation expansion is an important method in quantum field theory. In this paper, an expansion up to arbitrary order for off-diagonal heat kernels in flat space, based on the covariant perturbation expansion, is given. In the literature, only diagonal heat kernels have been calculated using the covariant perturbation expansion.
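For orientation, the objects involved can be recalled in standard (non-covariant) form; these are textbook conventions assumed here, not the paper's covariant expansion itself. The free off-diagonal heat kernel in flat d-dimensional space is

```latex
K_0(x,y;t) = (4\pi t)^{-d/2}\,
             \exp\!\left(-\frac{|x-y|^{2}}{4t}\right),
\qquad
(\partial_t - \Delta_x)\,K_0(x,y;t) = 0,
```

and for a perturbed operator $H = -\Delta + V$ the off-diagonal kernel of $e^{-tH}$ obeys the Duhamel iteration, whose expansion in powers of $V$ the covariant method reorganizes order by order:

```latex
K(x,y;t) = K_0(x,y;t)
 - \int_0^{t}\! ds \int\! d^{d}z\;
   K_0(x,z;t-s)\, V(z)\, K_0(z,y;s)
 + \mathcal{O}(V^{2}).
```

Setting $y = x$ recovers the diagonal heat kernel that previous covariant-expansion calculations were restricted to.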
Evidence-Based Kernels: Fundamental Units of Behavioral Influence
ERIC Educational Resources Information Center
Embry, Dennis D.; Biglan, Anthony
2008-01-01
This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior-influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of…
7 CFR 981.60 - Determination of kernel weight.
Code of Federal Regulations, 2013 CFR
2013-01-01
... AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...
7 CFR 981.60 - Determination of kernel weight.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...
7 CFR 981.60 - Determination of kernel weight.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...
7 CFR 981.60 - Determination of kernel weight.
Code of Federal Regulations, 2014 CFR
2014-01-01
... AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...
7 CFR 981.60 - Determination of kernel weight.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...
Integrating the Gradient of the Thin Wire Kernel
NASA Technical Reports Server (NTRS)
Champagne, Nathan J.; Wilton, Donald R.
2008-01-01
A formulation for integrating the gradient of the thin wire kernel is presented. This approach employs a new expression for the gradient of the thin wire kernel derived from a recent technique for numerically evaluating the exact thin wire kernel. This approach should provide essentially arbitrary accuracy and may be used with higher-order elements and basis functions using the procedure described in [4]. When the source and observation points are close, the potential integrals over wire segments involving the wire kernel are split into parts to handle the singular behavior of the integrand [1]. The singularity characteristics of the gradient of the wire kernel are different from those of the wire kernel, and the axial and radial components have different singularities. The characteristics of the gradient of the wire kernel are discussed in [2]. To evaluate the near electric and magnetic fields of a wire, the gradient of the wire kernel must be integrated over the source wire. Since the vector bases for current have constant direction on linear wire segments, these integrals reduce to integrals of the form
Polynomial Kernels for Hard Problems on Disk Graphs
NASA Astrophysics Data System (ADS)
Jansen, Bart
Kernelization is a powerful tool to obtain fixed-parameter tractable algorithms. Recent breakthroughs show that many graph problems admit small polynomial kernels when restricted to sparse graph classes such as planar graphs, bounded-genus graphs or H-minor-free graphs. We consider the intersection graphs of (unit) disks in the plane, which can be arbitrarily dense but do exhibit some geometric structure. We give the first kernelization results on these dense graph classes. Connected Vertex Cover has a kernel with 12k vertices on unit-disk graphs and with 3k² + 7k vertices on disk graphs with arbitrary radii. Red-Blue Dominating Set parameterized by the size of the smallest color class has a linear-vertex kernel on planar graphs, a quadratic-vertex kernel on unit-disk graphs and a quartic-vertex kernel on disk graphs. Finally we prove that H-Matching on unit-disk graphs has a linear-vertex kernel for every fixed graph H.
Optimal Bandwidth Selection in Observed-Score Kernel Equating
ERIC Educational Resources Information Center
Häggström, Jenny; Wiberg, Marie
2014-01-01
The selection of bandwidth in kernel equating is important because it has a direct impact on the equated test scores. The aim of this article is to examine the use of double smoothing when selecting bandwidths in kernel equating and to compare double smoothing with the commonly used penalty method. This comparison was made using both an equivalent…
Evidence-based Kernels: Fundamental Units of Behavioral Influence
Biglan, Anthony
2008-01-01
This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior–influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of its components would render it inert. Existing evidence shows that a variety of kernels can influence behavior in context, and some evidence suggests that frequent use or sufficient use of some kernels may produce longer lasting behavioral shifts. The analysis of kernels could contribute to an empirically based theory of behavioral influence, augment existing prevention or treatment efforts, facilitate the dissemination of effective prevention and treatment practices, clarify the active ingredients in existing interventions, and contribute to efficiently developing interventions that are more effective. Kernels involve one or more of the following mechanisms of behavior influence: reinforcement, altering antecedents, changing verbal relational responding, or changing physiological states directly. The paper describes 52 of these kernels, and details practical, theoretical, and research implications, including calling for a national database of kernels that influence human behavior. PMID:18712600
Sugar uptake into kernels of tunicate tassel-seed maize
Thomas, P.A.; Felker, F.C.; Crawford, C.G. )
1990-05-01
A maize (Zea mays L.) strain expressing both the tassel-seed (Ts-5) and tunicate (Tu) characters was developed which produces glume-covered kernels on the tassel, often borne on 7-10 mm pedicels. Vigorous plants produce up to 100 such kernels interspersed with additional sessile kernels. This floral unit provides a potentially valuable experimental system for studying sugar uptake into developing maize seeds. When detached kernels (with glumes and pedicel intact) are placed in incubation solution, fluid flows up the pedicel and into the glumes, entering the pedicel apoplast near the kernel base. The unusual anatomical features of this maize strain permit experimental access to the pedicel apoplast with much less possibility of kernel base tissue damage than with kernels excised from the cob. [¹⁴C]Fructose incorporation into soluble and insoluble fractions of endosperm increased for 8 days. Endosperm uptake of sucrose, fructose, and D-glucose was significantly greater than that of L-glucose. Fructose uptake was significantly inhibited by CCCP, DNP, and PCMBS. These results suggest the presence of an active, non-diffusion component of sugar transport in maize kernels.
Brodsky, Stanley J.; de Teramond, Guy F.; /SLAC /Southern Denmark U., CP3-Origins /Costa Rica U.
2011-01-10
AdS/QCD, the correspondence between theories in a dilaton-modified five-dimensional anti-de Sitter space and confining field theories in physical space-time, provides a remarkable semiclassical model for hadron physics. Light-front holography allows hadronic amplitudes in the AdS fifth dimension to be mapped to frame-independent light-front wavefunctions of hadrons in physical space-time. The result is a single-variable light-front Schroedinger equation which determines the eigenspectrum and the light-front wavefunctions of hadrons for general spin and orbital angular momentum. The coordinate z in AdS space is uniquely identified with a Lorentz-invariant coordinate ζ which measures the separation of the constituents within a hadron at equal light-front time and determines the off-shell dynamics of the bound state wavefunctions as a function of the invariant mass of the constituents. The hadron eigenstates generally have components with different orbital angular momentum; e.g., the proton eigenstate in AdS/QCD with massless quarks has L = 0 and L = 1 light-front Fock components with equal probability. Higher Fock states with extra quark-antiquark pairs also arise. The soft-wall model also predicts the form of the nonperturbative effective coupling and its β-function. The AdS/QCD model can be systematically improved by using its complete orthonormal solutions to diagonalize the full QCD light-front Hamiltonian or by applying the Lippmann-Schwinger method to systematically include QCD interaction terms. Some novel features of QCD are discussed, including the consequences of confinement for quark and gluon condensates. A method for computing the hadronization of quark and gluon jets at the amplitude level is outlined.
Direct Measurement of Wave Kernels in Time-Distance Helioseismology
NASA Technical Reports Server (NTRS)
Duvall, T. L., Jr.
2006-01-01
Solar f-mode waves are surface-gravity waves which propagate horizontally in a thin layer near the photosphere with a dispersion relation approximately that of deep water waves. At the power maximum near 3 mHz, the wavelength of 5 Mm is large enough for various wave scattering properties to be observable. Gizon and Birch (2002, ApJ, 571, 966) have calculated kernels, in the Born approximation, for the sensitivity of wave travel times to local changes in damping rate and source strength. In this work, using isolated small magnetic features as approximate point-source scatterers, such a kernel has been measured. The observed kernel contains features similar to those of a theoretical damping kernel but not of a source kernel. A full understanding of the effect of small magnetic features on the waves will require more detailed modeling.
A Robustness Testing Campaign for IMA-SP Partitioning Kernels
NASA Astrophysics Data System (ADS)
Grixti, Stephen; Lopez Trecastro, Jorge; Sammut, Nicholas; Zammit-Mangion, David
2015-09-01
With time and space partitioned architectures becoming increasingly appealing to the European space sector, the dependability of partitioning kernel technology is a key factor to its applicability in European Space Agency projects. This paper explores the potential of the data type fault model, which injects faults through the Application Program Interface, in partitioning kernel robustness testing. This fault injection methodology has been tailored to investigate its relevance in uncovering vulnerabilities within partitioning kernels and potentially contributing towards fault removal campaigns within this domain. This is demonstrated through a robustness testing case study of the XtratuM partitioning kernel for SPARC LEON3 processors. The robustness campaign exposed a number of vulnerabilities in XtratuM, exhibiting the potential benefits of using such a methodology for the robustness assessment of partitioning kernels.
OSKI: A Library of Automatically Tuned Sparse Matrix Kernels
Vuduc, R; Demmel, J W; Yelick, K A
2005-07-19
The Optimized Sparse Kernel Interface (OSKI) is a collection of low-level primitives that provide automatically tuned computational kernels on sparse matrices, for use by solver libraries and applications. These kernels include sparse matrix-vector multiply and sparse triangular solve, among others. The primary aim of this interface is to hide the complex decision-making process needed to tune the performance of a kernel implementation for a particular user's sparse matrix and machine, while also exposing the steps and potentially non-trivial costs of tuning at run-time. This paper provides an overview of OSKI, which is based on our research on automatically tuned sparse kernels for modern cache-based superscalar machines.
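The central kernel OSKI tunes can be sketched in plain Python. The following is a minimal, untuned CSR-format sparse matrix-vector multiply, shown only to illustrate the computation being optimized; OSKI's actual C API and its run-time tuning heuristics are not reproduced here:

```python
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """y = A @ x for a sparse matrix A stored in CSR format.

    data/indices/indptr are the standard CSR arrays. This naive
    loop is the kind of kernel a library like OSKI specializes
    (e.g. via register blocking) for a given matrix and machine.
    """
    n_rows = len(indptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        # nonzeros of row i live in data[indptr[i]:indptr[i+1]]
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

# 3x3 example matrix: [[2, 0, 1], [0, 3, 0], [4, 0, 5]]
data = np.array([2.0, 1.0, 3.0, 4.0, 5.0])
indices = np.array([0, 2, 1, 0, 2])
indptr = np.array([0, 2, 3, 5])
x = np.array([1.0, 1.0, 1.0])
print(csr_matvec(data, indices, indptr, x))  # → [3. 3. 9.]
```

The point of auto-tuning is that the best blocking of this loop depends on both the sparsity pattern and the cache hierarchy, which is why OSKI defers the decision to run time.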
QCD unitarity constraints on Reggeon Field Theory
NASA Astrophysics Data System (ADS)
Kovner, Alex; Levin, Eugene; Lublinsky, Michael
2016-08-01
We point out that the s-channel unitarity of QCD imposes meaningful constraints on a possible form of the QCD Reggeon Field Theory. We show that neither the BFKL nor JIMWLK nor Braun's Hamiltonian satisfy the said constraints. In a toy, zero transverse dimensional case we construct a model that satisfies the analogous constraint and show that at infinite energy it indeed tends to a "black disk limit" as opposed to the model with triple Pomeron vertex only, routinely used as a toy model in the literature.
QCD subgroup on diffractive and forward physics
Albrow, M.G.; Baker, W.; Bhatti, A.
1997-09-01
Over the last few years, there has been a resurgence of interest in small-x or diffractive physics. This has been due to the realization that perturbative QCD techniques may be applicable to what was previously thought of as a non-perturbative problem and to the opening up of new energy regimes at HERA and the Tevatron collider. The goal is to understand the pomeron, and hence the behavior of total cross sections, elastic scattering and diffractive excitation, in terms of the underlying theory, QCD. This paper is divided into experiments of hadron-hadron colliders and electron-proton colliders.
Experimental Study of Nucleon Structure and QCD
Jian-Ping Chen
2012-03-01
Overview of Experimental Study of Nucleon Structure and QCD, with focus on the spin structure. Nucleon (spin) structure provides valuable information on QCD dynamics. A decade of experiments from JLab yields these exciting results: (1) valence spin structure, duality; (2) spin sum rules and polarizabilities; (3) precision measurements of g₂ - high-twist; and (4) first neutron transverse spin results - Collins/Sivers/A_LT. There is a bright future as the 12 GeV Upgrade will greatly enhance our capability: (1) precision determination of the valence quark spin structure flavor separation; and (2) precision extraction of transversity/tensor charge/TMDs.
Chiral symmetry restoration in holographic noncommutative QCD
NASA Astrophysics Data System (ADS)
Nakajima, Tadahito; Ohtake, Yukiko; Suzuki, Kenji
2011-09-01
We consider the noncommutative deformation of the Sakai-Sugimoto model at finite temperature and finite baryon chemical potential. The space noncommutativity can influence the flavor dynamics of QCD. The critical temperature and critical value of the chemical potential are modified by the space noncommutativity. This influence on the flavor dynamics of QCD arises from the Wess-Zumino term in the effective action of the D8-branes. The intermediate-temperature phase, in which the gluons deconfine but the chiral symmetry remains broken, is readily realized in some region of the noncommutativity parameter.
Hadron scattering and resonances in QCD
NASA Astrophysics Data System (ADS)
Dudek, Jozef J.
2016-05-01
I describe how hadron-hadron scattering amplitudes are related to the eigenstates of QCD in a finite cubic volume. The discrete spectrum of such eigenstates can be determined from correlation functions computed using lattice QCD, and the corresponding scattering amplitudes extracted. I review results from the Hadron Spectrum Collaboration who have used these finite volume methods to study ππ elastic scattering, including the ρ resonance, as well as coupled-channel πK, ηK scattering. Ongoing calculations are advertised and the outlook for finite volume approaches is presented.
Exclusive hadronic and nuclear processes in QCD
Brodsky, S.J.
1985-12-01
Hadronic and nuclear processes are covered, in which all final particles are measured at large invariant masses compared with each other, i.e., large momentum transfer exclusive reactions. Hadronic wave functions in QCD and QCD sum rule constraints on hadron wave functions are discussed. The question of the range of applicability of the factorization formula and perturbation theory for exclusive processes is considered. Some consequences of quark and gluon degrees of freedom in nuclei are discussed which are outside the usual domain of traditional nuclear physics. 44 refs., 7 figs. (LEW)
QCD resummation for hadronic final states
NASA Astrophysics Data System (ADS)
Luisoni, Gionata; Marzani, Simone
2015-10-01
We review the basic concepts of all-order calculations in quantum chromodynamics (QCD) and their application to collider phenomenology. We start by discussing the factorization properties of QCD amplitudes and cross-sections in the soft and collinear limits and their resulting all-order exponentiation. We then discuss several applications of this formalism to observables which are of great interest at particle colliders. In this context, we describe the all-order resummation of event-shape distributions, as well as observables that probe the internal structure of hadronic jets.
String breaking in four dimensional lattice QCD
Duncan, A.; Eichten, E.; Thacker, H.
2001-06-01
Virtual quark pair screening leads to breaking of the string between fundamental representation quarks in QCD. For unquenched four dimensional lattice QCD, this (so far elusive) phenomenon is studied using the recently developed truncated determinant algorithm (TDA). The dynamical configurations were generated on a 650 MHz PC. Quark eigenmodes up to 420 MeV are included exactly in these TDA studies performed at low quark mass on large coarse [but O(a²) improved] lattices. A study of Wilson line correlators in Coulomb gauge extracted from an ensemble of 1000 two-flavor dynamical configurations reveals evidence for flattening of the string tension at distances R ≳ 1 fm.
Kinetic Rate Kernels via Hierarchical Liouville-Space Projection Operator Approach.
Zhang, Hou-Dao; Yan, YiJing
2016-05-19
Kinetic rate kernels in general multisite systems are formulated on the basis of a nonperturbative quantum dissipation theory, the hierarchical equations of motion (HEOM) formalism, together with the Nakajima-Zwanzig projection operator technique. The present approach exploits the HEOM-space linear algebra. The quantum non-Markovian site-to-site transfer rate can be faithfully evaluated via projected HEOM dynamics. The developed method is exact, as evident by the comparison to the direct HEOM evaluation results on the population evolution. PMID:26757138
Topological Charge Evolution in the Markov-Chain of QCD
Derek Leinweber; Anthony Williams; Jian-bo Zhang; Frank Lee
2004-04-01
The topological charge is studied on lattices of large physical volume and fine lattice spacing. We illustrate how a parity transformation on the SU(3) link-variables of lattice gauge configurations reverses the sign of the topological charge and leaves the action invariant. Random applications of the parity transformation are proposed to traverse from one topological charge sign to the other. The transformation provides an improved unbiased estimator of the ensemble average and is essential in improving the ergodicity of the Markov chain process.
Technology Transfer Automated Retrieval System (TEKTRAN)
The current US corn grading system accounts for the portion of damaged kernels, which is measured by time-consuming and inaccurate visual inspection. Near infrared spectroscopy (NIRS), a non-destructive and fast analytical method, was tested as a tool for discriminating corn kernels with heat and f...
Technology Transfer Automated Retrieval System (TEKTRAN)
The objective of this study was to examine the relationship between fluorescence emissions of corn kernels inoculated with Aspergillus flavus and aflatoxin contamination levels within the kernels. The choice of methodology was based on the principle that many biological materials exhibit fluorescenc...
Modified kernel-based nonlinear feature extraction.
Ma, J.; Perkins, S. J.; Theiler, J. P.; Ahalt, S.
2002-01-01
Feature Extraction (FE) techniques are widely used in many applications to pre-process data in order to reduce the complexity of subsequent processes. A group of kernel-based nonlinear FE (KFE) algorithms has attracted much attention due to their high performance. However, a serious limitation inherent in these algorithms -- the maximal number of features extracted by them is limited by the number of classes involved -- dramatically degrades their flexibility. Here we propose a modified version of those KFE algorithms (MKFE). This algorithm is developed from a special form of scatter-matrix, whose rank is not determined by the number of classes involved, and thus breaks the inherent limitation in those KFE algorithms. Experimental results suggest that the MKFE algorithm is especially useful when the training set is small.
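The class-count limitation the abstract refers to can be seen directly in the between-class scatter matrix used by discriminant-style feature extraction: its rank, and hence the number of extractable features, is at most one less than the number of classes. A minimal NumPy illustration of that rank bound (not the paper's MKFE algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 10))   # 10-dimensional data
y = np.repeat([0, 1, 2], 30)    # 3 classes

# Between-class scatter matrix S_b: a weighted sum of rank-1 terms,
# one per class, whose mean deviations sum to zero -- so its rank
# (the number of usable discriminant features) is at most c - 1.
mean = X.mean(axis=0)
S_b = np.zeros((10, 10))
for c in np.unique(y):
    Xc = X[y == c]
    d = (Xc.mean(axis=0) - mean)[:, None]
    S_b += len(Xc) * (d @ d.T)

print(np.linalg.matrix_rank(S_b))  # → 2, i.e. n_classes - 1, not 10
```

MKFE's contribution, per the abstract, is to replace this scatter matrix with one whose rank is not tied to the number of classes.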
Privacy preserving RBF kernel support vector machine.
Li, Haoran; Xiong, Li; Ohno-Machado, Lucila; Jiang, Xiaoqian
2014-01-01
Data sharing is challenging but important for healthcare research. Methods for privacy-preserving data dissemination based on the rigorous differential privacy standard have been developed but they did not consider the characteristics of biomedical data and make full use of the available information. This often results in too much noise in the final outputs. We hypothesized that this situation can be alleviated by leveraging a small portion of open-consented data to improve utility without sacrificing privacy. We developed a hybrid privacy-preserving differentially private support vector machine (SVM) model that uses public data and private data together. Our model leverages the RBF kernel and can handle nonlinearly separable cases. Experiments showed that this approach outperforms two baselines: (1) SVMs that only use public data, and (2) differentially private SVMs that are built from private data. Our method demonstrated very close performance metrics compared to nonprivate SVMs trained on the private data. PMID:25013805
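A minimal sketch of the RBF (Gaussian) kernel the model builds on, in plain NumPy; this illustrates only the kernel computation that enables nonlinearly separable cases, not the paper's differentially private training procedure:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - y_j||^2).

    Mapping data through this kernel lets an SVM separate classes
    that are not linearly separable in the input space.
    """
    sq = (np.sum(X**2, axis=1)[:, None]
          + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    # clamp tiny negative values from floating-point cancellation
    return np.exp(-gamma * np.maximum(sq, 0.0))

X = np.array([[0.0, 0.0], [1.0, 0.0]])
K = rbf_kernel(X, X)
print(K[0, 0], round(K[0, 1], 4))  # → 1.0 0.3679
```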
Point-Kernel Shielding Code System.
Energy Science and Technology Software Center (ESTSC)
1982-02-17
Version 00 QAD-BSA is a three-dimensional, point-kernel shielding code system based upon the CCC-48/QAD series. It is designed to calculate photon dose rates and heating rates using exponential attenuation and infinite medium buildup factors. Calculational provisions include estimates of fast neutron penetration using data computed by the moments method. Included geometry routines can describe complicated source and shield geometries. An internal library contains data for many frequently used structural and shielding materials, enabling the code to solve most problems with only source strengths and problem geometry required as input. This code system adapts especially well to problems requiring multiple sources and sources with asymmetrical geometry. In addition to being edited separately, the total interaction rates from many sources may be edited at each detector point. Calculated photon interaction rates agree closely with those obtained using QAD-P5A.
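The point-kernel method such codes are built on reduces, for a single isotropic point source, to exponential attenuation scaled by a buildup factor and a geometric spreading term. A minimal sketch with made-up illustrative values (not QAD-BSA's actual implementation):

```python
import math

def point_kernel_flux(S, mu, r, B=1.0):
    """Photon flux at distance r (cm) from an isotropic point source
    of strength S (photons/s) through a shield with attenuation
    coefficient mu (1/cm): exponential attenuation times the
    1/(4*pi*r^2) geometric factor, scaled by a buildup factor B
    that accounts for scattered photons (B = 1: uncollided only).
    """
    return S * B * math.exp(-mu * r) / (4.0 * math.pi * r**2)

# Illustrative numbers: 1e9 photons/s, mu = 0.1 /cm, r = 50 cm
phi = point_kernel_flux(1e9, 0.1, 50.0)
print(f"{phi:.3e} photons/cm^2/s")
```

A full code sums this kernel over many source points and looks up B from material-dependent buildup-factor data, which is what the internal library described above supplies.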
Kernel density estimation using graphical processing unit
NASA Astrophysics Data System (ADS)
Sunarko, Su'ud, Zaki
2015-09-01
Kernel density estimation for particles distributed over a 2-dimensional space is calculated using a single graphical processing unit (GTX 660Ti GPU) and CUDA-C language. Parallel calculations are done for particles having bivariate normal distribution and by assigning calculations for equally-spaced node points to each scalar processor in the GPU. The number of particles, blocks and threads are varied to identify a favorable configuration. Comparisons are obtained by performing the same calculation using 1, 2 and 4 processors on a 3.0 GHz CPU using MPICH 2.0 routines. Speedups attained with the GPU are in the range of 88 to 349 times compared to the multiprocessor CPU. Blocks of 128 threads are found to be the optimum configuration for this case.
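The per-node computation being parallelized can be sketched on the CPU with NumPy: each node point's density estimate is an independent sum of Gaussian kernel contributions from all particles, which is what makes the one-node-per-scalar-processor GPU mapping natural. This is a sketch of the calculation, not the CUDA-C code used in the paper:

```python
import numpy as np

def kde_grid(particles, nodes, h=0.5):
    """Gaussian kernel density estimate at each node point.

    Each node's value is an independent sum over all particles --
    exactly the per-node work assigned to one GPU scalar processor
    in the paper; here it is simply vectorized on the CPU.
    """
    # (n_nodes, n_particles) matrix of squared distances
    d2 = ((nodes[:, None, :] - particles[None, :, :]) ** 2).sum(axis=-1)
    norm = 1.0 / (2.0 * np.pi * h**2 * len(particles))
    return norm * np.exp(-d2 / (2.0 * h**2)).sum(axis=1)

rng = np.random.default_rng(1)
particles = rng.normal(size=(1000, 2))            # bivariate normal sample
xs = np.linspace(-2.0, 2.0, 5)
nodes = np.array([[x, y] for x in xs for y in xs])  # 25 equally-spaced nodes
density = kde_grid(particles, nodes)
print(density.shape)  # → (25,)
```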
Light-like Wilson line in QCD without path ordering
NASA Astrophysics Data System (ADS)
Nayak, Gouranga C.
2016-07-01
Unlike the Wilson line in QED, the Wilson line in QCD contains path ordering. In this paper we get rid of the path ordering in the light-like Wilson line in QCD by simplifying all the infinite number of noncommuting terms in the SU(3) pure gauge. We prove that the light-like Wilson line in QCD naturally emerges when the path integral formulation of QCD is used to prove factorization of soft and collinear divergences at all orders in the coupling constant in QCD processes at high energy colliders.
Labeled Graph Kernel for Behavior Analysis.
Zhao, Ruiqi; Martinez, Aleix M
2016-08-01
Automatic behavior analysis from video is a major topic in many areas of research, including computer vision, multimedia, robotics, biology, cognitive science, social psychology, psychiatry, and linguistics. Two major problems are of interest when analyzing behavior. First, we wish to automatically categorize observed behaviors into a discrete set of classes (i.e., classification). For example, to determine word production from video sequences in sign language. Second, we wish to understand the relevance of each behavioral feature in achieving this classification (i.e., decoding). For instance, to know which behavior variables are used to discriminate between the words apple and onion in American Sign Language (ASL). The present paper proposes to model behavior using a labeled graph, where the nodes define behavioral features and the edges are labels specifying their order (e.g., before, overlaps, start). In this approach, classification reduces to a simple labeled graph matching. Unfortunately, the complexity of labeled graph matching grows exponentially with the number of categories we wish to represent. Here, we derive a graph kernel to quickly and accurately compute this graph similarity. This approach is very general and can be plugged into any kernel-based classifier. Specifically, we derive a Labeled Graph Support Vector Machine (LGSVM) and a Labeled Graph Logistic Regressor (LGLR) that can be readily employed to discriminate between many actions (e.g., sign language concepts). The derived approach can be readily used for decoding too, yielding invaluable information for the understanding of a problem (e.g., to know how to teach a sign language). The derived algorithms allow us to achieve higher accuracy results than those of state-of-the-art algorithms in a fraction of the time. We show experimental results on a variety of problems and datasets, including multimodal data. PMID:26415154
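The idea of turning labeled-graph comparison into a kernel evaluation can be illustrated with a toy R-convolution-style kernel that counts matching (node label, edge label, node label) triples. This is a hypothetical simplification for illustration, not the paper's actual graph kernel or LGSVM:

```python
from collections import Counter

def edge_label_kernel(g1, g2):
    """Toy labeled-graph kernel: dot product of the two graphs'
    histograms of (node label, edge label, node label) triples.
    Any kernel-based classifier (e.g. an SVM) can consume the
    resulting similarity values directly.
    """
    def triples(g):
        labels, edges = g
        c = Counter()
        for (u, v, e) in edges:
            # canonical ordering so undirected edges match
            a, b = sorted((labels[u], labels[v]))
            c[(a, e, b)] += 1
        return c
    c1, c2 = triples(g1), triples(g2)
    return sum(c1[k] * c2[k] for k in c1)

# Two tiny behavior graphs: node labels + temporally labeled edges
g1 = ({0: "hand", 1: "head"}, [(0, 1, "before")])
g2 = ({0: "hand", 1: "head", 2: "eye"},
      [(0, 1, "before"), (1, 2, "overlaps")])
print(edge_label_kernel(g1, g2))  # → 1
```

The exponential cost of exact labeled graph matching is what motivates replacing it with such a kernel, which here runs in time linear in the number of edges.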
The flare kernel in the impulsive phase
NASA Technical Reports Server (NTRS)
Dejager, C.
1986-01-01
The impulsive phase of a flare is characterized by impulsive bursts of X-ray and microwave radiation, related to impulsive footpoint heating up to 50 or 60 MK, by upward gas velocities (150 to 400 km/sec) and by a gradual increase of the flare's thermal energy content. These phenomena, as well as non-thermal effects, are all related to the impulsive energy injection into the flare. The available observations are also quantitatively consistent with a model in which energy is injected into the flare by beams of energetic electrons, causing ablation of chromospheric gas, followed by convective rise of gas. Thus, a hole is burned into the chromosphere; at the end of the impulsive phase of an average flare the lower part of that hole is situated about 1800 km above the photosphere. H alpha and other optical and UV line emission is radiated by a thin layer (approx. 20 km) at the bottom of the flare kernel. The upward rising and outward streaming gas cools down by conduction in about 45 s. The non-thermal effects in the initial phase are due to curtailing of the energy distribution function by escape of energetic electrons. The single flux tube model of a flare does not fit these observations; instead we propose the spaghetti-bundle model. Microwave and gamma-ray observations suggest the occurrence of dense flare knots of approx. 800 km diameter and of high temperature. Future observations should concentrate on locating the microwave/gamma-ray sources, and on determining the kernel's fine structure and the related multi-loop structure of the flaring area.
Chiral logarithms in quenched QCD
Y. Chen; S. J. Dong; T. Draper; I. Horvath; F. X. Lee; K. F. Liu; N. Mathur; J. B. Zhang
2004-08-01
The quenched chiral logarithms are examined on a 16³×28 lattice with Iwasaki gauge action and overlap fermions. The pion decay constant f_π is used to set the lattice spacing, a = 0.200(3) fm. With pion mass as low as ≈180 MeV, we see the quenched chiral logarithms clearly in m_π²/m and f_P, the pseudoscalar decay constant. The authors analyze the data to determine how low the pion mass needs to be in order for the quenched one-loop chiral perturbation theory (χPT) to apply. With the constrained curve-fitting method, they are able to extract the quenched chiral logarithmic parameter δ together with other low-energy parameters. Only for m_π ≤ 300 MeV do we obtain a consistent and stable fit with a constant δ which they determine to be 0.24(3)(4) (at the chiral scale Λ_χ = 0.8 GeV). By comparing to the 12³×28 lattice, they estimate the finite volume effect to be about 2.7% for the smallest pion mass. They also fitted the pion mass to the form for the re-summed cactus diagrams and found that its applicable region is extended farther than the range for the one-loop formula, perhaps up to m_π ≈ 500-600 MeV. The scale-independent δ is determined to be 0.20(3) in this case. The authors study the quenched non-analytic terms in the nucleon mass and find that the coefficient C_{1/2} in the nucleon mass is consistent with the prediction of one-loop χPT. They also obtain the low energy constant L_5 from f_π. They conclude from this study that it is imperative to cover only the range of data with the pion mass less than ≈300 MeV in order to examine the chiral behavior of the hadron masses and decay constants in quenched QCD and match them with quenched one-loop χPT.
NASA Astrophysics Data System (ADS)
Brodsky, Stanley J.; de Téramond, Guy F.; Deur, Alexandre; Dosch, Hans Günter
2015-09-01
The valence Fock-state wavefunctions of the light-front (LF) QCD Hamiltonian satisfy a relativistic equation of motion, analogous to the nonrelativistic radial Schrödinger equation, with an effective confining potential U which systematically incorporates the effects of higher quark and gluon Fock states. If one requires that the effective action which underlies the QCD Lagrangian remains conformally invariant and extends the formalism of de Alfaro, Fubini and Furlan to LF Hamiltonian theory, the potential U has a unique form of a harmonic oscillator potential, and a mass gap arises. The result is a nonperturbative relativistic LF quantum mechanical wave equation which incorporates color confinement and other essential spectroscopic and dynamical features of hadron physics, including a massless pion for zero quark mass and linear Regge trajectories with the same slope in the radial quantum number n and orbital angular momentum L. Only one mass parameter κ appears. The corresponding LF Dirac equation provides a dynamical and spectroscopic model of nucleons. The same LF equations arise from the holographic mapping of the soft-wall model modification of AdS5 space with a unique dilaton profile to QCD (3+1) at fixed LF time. LF holography thus provides a precise relation between the bound-state amplitudes in the fifth dimension of Anti-de Sitter (AdS) space and the boost-invariant LFWFs describing the internal structure of hadrons in physical space-time. We also show how the mass scale underlying confinement and the masses of light-quark hadrons determines the scale controlling the evolution of the perturbative QCD coupling. The relation between scales is obtained by matching the nonperturbative dynamics, as described by an effective conformal theory mapped to the LF and its embedding in AdS space, to the perturbative QCD regime computed to four-loop order. The data for the effective coupling defined from the Bjorken sum rule are remarkably consistent with the
QCD Physics at the Tevatron Collider
Messina, Andrea
2005-10-12
In this contribution, some of the prominent QCD physics results from the CDF and D0 experiments in Run II are presented. The cross sections and the properties of jets are discussed for both the inclusive and the b-jet production. Results on the associated production of light and heavy flavour jets together with vector bosons are also reported.
Large Scale Commodity Clusters for Lattice QCD
A. Pochinsky; W. Akers; R. Brower; J. Chen; P. Dreher; R. Edwards; S. Gottlieb; D. Holmgren; P. Mackenzie; J. Negele; D. Richards; J. Simone; W. Watson
2002-06-01
We describe the construction of large scale clusters for lattice QCD computing being developed under the umbrella of the U.S. DoE SciDAC initiative. We discuss the study of floating point and network performance that drove the design of the cluster, and present our plans for future multi-Terascale facilities.
BRST invariance in Coulomb gauge QCD
NASA Astrophysics Data System (ADS)
Andraši, A.; Taylor, J. C.
2015-12-01
In the Coulomb gauge, the Hamiltonian of QCD contains terms of order ħ², identified by Christ and Lee, which are non-local but instantaneous. We address the question of how these terms fit in with BRST invariance. Our discussion is confined to the simplest, O(g⁴), example.
Toward lattice QCD simulation on AP1000
NASA Astrophysics Data System (ADS)
Ohta, Shigemi
AP1000 is Fujitsu Laboratory's experimental parallel computer, consisting of up to 1024 microcomputers called cells. It is found that each AP1000 cell can sustain a computational speed of two to three megaflops for full QCD lattice numerical simulations in IEEE 64-bit floating point format.
Phase structure of QCD for heavy quarks
NASA Astrophysics Data System (ADS)
Fischer, Christian S.; Luecker, Jan; Pawlowski, Jan M.
2015-01-01
We investigate the nature of the deconfinement and Roberge-Weiss transition in the heavy quark regime for finite real and imaginary chemical potential within the functional approach to continuum QCD. We extract the critical phase boundary between the first-order and crossover regions and also explore tricritical scaling. Our results confirm previous ones from finite volume lattice studies.
QCD PHASE TRANSITIONS-VOLUME 15.
SCHAFER,T.
1998-11-04
The title of the workshop, "The QCD Phase Transitions", in fact happened to be too narrow for its real contents. It would be more accurate to say that it was devoted to different phases of QCD and QCD-related gauge theories, with strong emphasis on discussion of the underlying non-perturbative mechanisms which manifest themselves as all those phases. Before we go to specifics, let us emphasize one important aspect of the present status of non-perturbative Quantum Field Theory in general. It remains true that its studies do not get attention proportional to the intellectual challenge they deserve, and that the theorists working on it remain very fragmented. The efforts to create a Theory of Everything including Quantum Gravity have attracted the lion's share of attention and young talent. Nevertheless, in the last few years there has also been tremendous progress and even some shift of attention toward emphasis on the unity of non-perturbative phenomena. For example, we have seen some efforts to connect the lessons from recent progress in supersymmetric theories with those in QCD, as derived from phenomenology and the lattice. Another example is the Maldacena conjecture and related developments, which connect three things: string theory, supergravity and N=4 supersymmetric gauge theory. Although the progress mentioned is remarkable in itself, if we listened to each other more we might have a chance to strengthen the field and reach a better understanding of the spectacular non-perturbative physics.
Nonperturbative QCD corrections to electroweak observables
Dru B Renner, Xu Feng, Karl Jansen, Marcus Petschlies
2011-12-01
Nonperturbative QCD corrections are important to many low-energy electroweak observables, for example the muon magnetic moment. However, hadronic corrections also play a significant role at much higher energies due to their impact on the running of standard model parameters, such as the electromagnetic coupling. Currently, these hadronic contributions are accounted for by a combination of experimental measurements and phenomenological modeling but ideally should be calculated from first principles. Recent developments indicate that many of the most important hadronic corrections may be feasibly calculated using lattice QCD methods. To illustrate this, we will examine the lattice computation of the leading-order QCD corrections to the muon magnetic moment, paying particular attention to a recently developed method but also reviewing the results from other calculations. We will then continue with several examples that demonstrate the potential impact of the new approach: the leading-order corrections to the electron and tau magnetic moments, the running of the electromagnetic coupling, and a class of the next-to-leading-order corrections for the muon magnetic moment. Along the way, we will mention applications to the Adler function, the determination of the strong coupling constant, and QCD corrections to muonic hydrogen.
The Top Quark, QCD, And New Physics.
DOE R&D Accomplishments Database
Dawson, S.
2002-06-01
The role of the top quark in completing the Standard Model quark sector is reviewed, along with a discussion of production, decay, and theoretical restrictions on the top quark properties. Particular attention is paid to the top quark as a laboratory for perturbative QCD. As examples of the relevance of QCD corrections in the top quark sector, the calculation of e{sup +}e{sup -} → t{bar t} at next-to-leading-order QCD using the phase space slicing algorithm and the implications of a precision measurement of the top quark mass are discussed in detail. The associated production of a t{bar t} pair and a Higgs boson in either e{sup +}e{sup -} or hadronic collisions is presented at next-to-leading-order QCD and its importance for a measurement of the top quark Yukawa coupling emphasized. Implications of the heavy top quark mass for model builders are briefly examined, with the minimal supersymmetric Standard Model and topcolor discussed as specific examples.
Quark screening lengths in finite temperature QCD
Gocksch, A. (California Univ., Santa Barbara, CA. Inst. for Theoretical Physics)
1990-11-01
We have computed Landau gauge quark propagators in both the confined and deconfined phase of QCD. I discuss the magnitude of the resulting screening lengths as well as aspects of chiral symmetry relevant to the quark propagator. 12 refs., 1 fig., 1 tab.
The CKM Matrix from Lattice QCD
Mackenzie, Paul B.; /Fermilab
2009-07-01
Lattice QCD plays an essential role in testing and determining the parameters of the CKM theory of flavor mixing and CP violation. Very high precisions are required for lattice calculations analyzing CKM data; I discuss the prospects for achieving them. Lattice calculations will also play a role in investigating flavor mixing and CP violation beyond the Standard Model.
Exact Adler Function in Supersymmetric QCD
NASA Astrophysics Data System (ADS)
Shifman, M.; Stepanyantz, K.
2015-02-01
The Adler function D is found exactly in supersymmetric QCD. Our exact formula relates D (Q2) to the anomalous dimension of the matter superfields γ (αs(Q2)) . En route we prove another theorem: the absence of the so-called singlet contribution to D . While such singlet contributions are present in individual supergraphs, they cancel in the sum.
Marking up lattice QCD configurations and ensembles
P. Coddington; B. Joo; C. M. Maynard; D. Pleiter; T. Yoshie
2007-10-01
QCDml is an XML-based markup language designed for sharing QCD configurations and ensembles world-wide via the International Lattice Data Grid (ILDG). Based on the latest release, we present key ingredients of QCDml in order to provide some starting points for colleagues in this community to mark up valuable configurations and submit them to the ILDG.
On-Shell Methods in Perturbative QCD
Bern, Zvi; Dixon, Lance J.; Kosower, David A.
2007-04-25
We review on-shell methods for computing multi-parton scattering amplitudes in perturbative QCD, utilizing their unitarity and factorization properties. We focus on aspects which are useful for the construction of one-loop amplitudes needed for phenomenological studies at the Large Hadron Collider.
Exploring Hyperons and Hypernuclei with Lattice QCD
S.R. Beane; P.F. Bedaque; A. Parreno; M.J. Savage
2005-01-01
In this work we outline a program for lattice QCD that would provide a first step toward understanding the strong and weak interactions of strange baryons. The study of hypernuclear physics has provided a significant amount of information regarding the structure and weak decays of light nuclei containing one or two Lambdas and Sigmas. From a theoretical standpoint, little is known about the hyperon-nucleon interaction, which is required input for systematic calculations of hypernuclear structure. Furthermore, the long-standing discrepancies in the P-wave amplitudes for nonleptonic hyperon decays remain to be understood, and their resolution is central to a better understanding of the weak decays of hypernuclei. We present a framework that utilizes Luscher's finite-volume techniques in lattice QCD to extract the scattering length and effective range for Lambda-N scattering in both QCD and partially-quenched QCD. The effective theory describing the nonleptonic decays of hyperons using isospin symmetry alone, appropriate for lattice calculations, is constructed.
Effective charges and expansion parameters in QCD
Braaten, E.; Leveille, J.P.
1981-10-01
The momentum subtraction scheme MOM has been empirically successful in producing small QCD corrections to physical quantities at one loop order. By explicit calculations, we show that with a suitable shift in the renormalization scale, the minimal subtraction scheme coupling constant α{sub MS} coincides with typical momentum scheme coupling constants at both one and two loop order.
Pluto collaboration
1981-02-01
Results obtained with the PLUTO detector at PETRA are presented. Multihadron final states have been analysed with respect to clustering, energy-energy correlations and transverse momenta in jets. QCD predictions for hard gluon emission and soft gluon-quark cascades are discussed. Results on α{sub s} and the gluon spin are given.
Schvellinger, Martin
2008-07-28
We briefly review one of the current applications of the AdS/CFT correspondence known as AdS/QCD and discuss the calculation of four-point quark-flavour current correlation functions and their applications to the calculation of observables related to neutral kaon decays and neutral kaon mixing processes.
Varelas, N.; D0 Collaboration
1997-10-01
We present recent results on jet production, dijet angular distributions, W+ Jets, and color coherence from p{anti p} collisions at {radical}s = 1.8 TeV at the Fermilab Tevatron Collider using the D0 detector. The data are compared to perturbative QCD calculations or to predictions of parton shower based Monte Carlo models.
QCD in hadron-hadron collisions
Albrow, M.
1997-03-01
Quantum Chromodynamics provides a good description of many aspects of high energy hadron-hadron collisions, and this will be described, along with some aspects that are not yet understood in QCD. Topics include high E{sub T} jet production, direct photon, W, Z and heavy flavor production, rapidity gaps and hard diffraction.
Factorization and other novel effects in QCD
Brodsky, S.J.
1983-09-01
Recent progress in proving the validity of factorization for inclusive reactions in QCD is reviewed. A new necessary condition involving the target length is emphasized. We also discuss a number of novel effects in gauge theory including null zone phenomena, color transparency, formation zone conditions, and possible heavy quark Fock states components in ordinary hadrons. 36 references.
Dual condensate and QCD phase transition
Zhang Bo; Bruckmann, Falk; Fodor, Zoltan; Szabo, Kalman K.; Gattringer, Christof
2011-05-23
The dual condensate is a new QCD phase transition order parameter, which connects confinement and chiral symmetry breaking as different mass limits. We discuss the relation between the fermion spectrum at general boundary conditions and the dual condensate, and show numerical results for the latter from unquenched SU(3) lattice configurations.
Visualization Tools for Lattice QCD - Final Report
Massimo Di Pierro
2012-03-15
Our research project is about the development of visualization tools for Lattice QCD. We developed various tools by extending existing libraries, adding new algorithms, exposing new APIs, and creating web interfaces (including the new NERSC gauge connection web site). Our tools cover the full stack of operations from automating download of data, to generating VTK files (topological charge, plaquette, Polyakov lines, quark and meson propagators, currents), to turning the VTK files into images, movies, and web pages. Some of the tools have their own web interfaces. Some Lattice QCD visualizations have been created in the past but, to our knowledge, our tools are the only ones of their kind since they are general purpose, customizable, and relatively easy to use. We believe they will be valuable to physicists working in the field. They can be used to better teach Lattice QCD concepts to new graduate students; they can be used to observe the changes in topological charge density and detect possible sources of bias in computations; they can be used to observe the convergence of the algorithms at a local level and determine possible problems; they can be used to probe heavy-light mesons with currents and determine their spatial distribution; they can be used to detect corrupted gauge configurations. There are some indirect results of this grant that will benefit a broader audience than Lattice QCD physicists.
Heavy quark masses from lattice QCD
NASA Astrophysics Data System (ADS)
Lytle, Andrew T.
2016-07-01
Progress in quark mass determinations from lattice QCD is reviewed, focusing on results for charm and bottom mass. These are of particular interest for precision Higgs studies. Recent determinations have achieved percent-level uncertainties with controlled systematics. Future prospects for these calculations are also discussed.
Gaussian kernel width optimization for sparse Bayesian learning.
Mohsenzadeh, Yalda; Sheikhzadeh, Hamid
2015-04-01
Sparse kernel methods have been widely used in regression and classification applications. The performance and the sparsity of these methods are dependent on the appropriate choice of the corresponding kernel functions and their parameters. Typically, the kernel parameters are selected using a cross-validation approach. In this paper, a learning method that is an extension of the relevance vector machine (RVM) is presented. The proposed method can find the optimal values of the kernel parameters during the training procedure. This algorithm uses an expectation-maximization approach for updating kernel parameters as well as other model parameters; therefore, the speed of convergence and computational complexity of the proposed method are the same as the standard RVM. To control the convergence of this fully parameterized model, the optimization with respect to the kernel parameters is performed using a constraint on these parameters. The proposed method is compared with the typical RVM and other competing methods to analyze the performance. The experimental results on the commonly used synthetic data, as well as benchmark data sets, demonstrate the effectiveness of the proposed method in reducing the performance dependency on the initial choice of the kernel parameters. PMID:25794377
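The EM-based kernel-parameter update of the proposed RVM extension is not reproduced here. As a minimal numpy sketch of the underlying issue (performance depends strongly on the Gaussian kernel width), the conventional baseline the paper improves on can be illustrated by selecting the width on a held-out validation set; all function names, data, and parameter values below are our own illustrative assumptions:

```python
import numpy as np

def gaussian_kernel(X, Y, width):
    # Pairwise Gaussian (RBF) kernel matrix: k(x, y) = exp(-||x - y||^2 / (2 w^2))
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_predict(X_tr, y_tr, X_te, width, lam=1e-3):
    # Kernel ridge regression stand-in for a sparse kernel model:
    # alpha = (K + lam I)^-1 y, predictions are kernel expansions
    K = gaussian_kernel(X_tr, X_tr, width)
    alpha = np.linalg.solve(K + lam * np.eye(len(X_tr)), y_tr)
    return gaussian_kernel(X_te, X_tr, width) @ alpha

def select_width(X_tr, y_tr, X_va, y_va, widths):
    # Pick the kernel width with the lowest held-out validation error
    errs = [np.mean((fit_predict(X_tr, y_tr, X_va, w) - y_va) ** 2) for w in widths]
    return widths[int(np.argmin(errs))]

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(120, 1))
y = np.sinc(X[:, 0]) + 0.05 * rng.standard_normal(120)
best = select_width(X[:80], y[:80], X[80:], y[80:], widths=[0.01, 0.3, 1.0, 10.0])
```

Too small a width overfits (the kernel sees every point as novel) and too large a width oversmooths; the paper's contribution is to remove this manual search by learning the width inside the RVM training loop.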
Classification of maize kernels using NIR hyperspectral imaging.
Williams, Paul J; Kucheryavskiy, Sergey
2016-10-15
NIR hyperspectral imaging was evaluated to classify maize kernels of three hardness categories: hard, medium and soft. Two approaches, pixel-wise and object-wise, were investigated to group kernels according to hardness. The pixel-wise classification assigned a class to every pixel from individual kernels and did not give acceptable results because of high misclassification. However, by using a predefined threshold and classifying entire kernels based on the number of correctly predicted pixels, improved results were achieved (sensitivity and specificity of 0.75 and 0.97). Object-wise classification was performed using two methods for feature extraction, score histograms and mean spectra. The model based on score histograms performed better for hard kernel classification (sensitivity and specificity of 0.93 and 0.97), while that based on mean spectra gave better results for medium kernels (sensitivity and specificity of 0.95 and 0.93). Both feature extraction methods can be recommended for classification of maize kernels on a production scale. PMID:27173544
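The object-wise decision rule described above (classify a whole kernel from its pixel predictions, accepting the majority class only when enough pixels agree) can be sketched as follows; the threshold value and function name are our own assumptions, not the paper's:

```python
import numpy as np

def classify_object(pixel_preds, n_classes=3, min_fraction=0.6):
    # pixel_preds: predicted class index for every pixel of one kernel.
    # Accept the majority class only if its pixel fraction clears the
    # threshold; otherwise flag the kernel as undecided (-1).
    counts = np.bincount(pixel_preds, minlength=n_classes)
    top = int(np.argmax(counts))
    frac = counts[top] / counts.sum()
    return top if frac >= min_fraction else -1

noisy = np.array([0] * 70 + [1] * 20 + [2] * 10)   # 70% of pixels vote class 0
label = classify_object(noisy)                      # accepted: fraction 0.7 >= 0.6
```

This thresholding is what turned the unacceptable pixel-wise results into the reported object-wise sensitivity and specificity.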
Effects of sample size on KERNEL home range estimates
Seaman, D.E.; Millspaugh, J.J.; Kernohan, Brian J.; Brundige, Gary C.; Raedeke, Kenneth J.; Gitzen, Robert A.
1999-01-01
Kernel methods for estimating home range are being used increasingly in wildlife research, but the effect of sample size on their accuracy is not known. We used computer simulations of 10-200 points/home range and compared accuracy of home range estimates produced by fixed and adaptive kernels with the reference (REF) and least-squares cross-validation (LSCV) methods for determining the amount of smoothing. Simulated home ranges varied from simple to complex shapes created by mixing bivariate normal distributions. We used the size of the 95% home range area and the relative mean squared error of the surface fit to assess the accuracy of the kernel home range estimates. For both measures, the bias and variance approached an asymptote at about 50 observations/home range. The fixed kernel with smoothing selected by LSCV provided the least-biased estimates of the 95% home range area. All kernel methods produced similar surface fit for most simulations, but the fixed kernel with LSCV had the lowest frequency and magnitude of very poor estimates. We reviewed 101 papers published in The Journal of Wildlife Management (JWM) between 1980 and 1997 that estimated animal home ranges. A minority of these papers used nonparametric utilization distribution (UD) estimators, and most did not adequately report sample sizes. We recommend that home range studies using kernel estimates use LSCV to determine the amount of smoothing, obtain a minimum of 30 observations per animal (but preferably ≥50), and report sample sizes in published results.
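A fixed-kernel home range estimate of the kind evaluated above can be sketched as a bivariate Gaussian kernel density evaluated on a grid, with the 95% home range taken as the smallest set of densest cells holding 95% of the utilization-distribution mass. The bandwidth here is fixed by hand rather than selected by LSCV, and all names and data are illustrative:

```python
import numpy as np

def kde2d(points, grid_xy, h):
    # Fixed bivariate Gaussian kernel density estimate with bandwidth h
    d2 = ((grid_xy[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * h * h)).sum(1) / (len(points) * 2 * np.pi * h * h)

def home_range_area(points, h, n=80, pad=3.0):
    # Evaluate the UD on a grid, then accumulate the densest cells
    # until 95% of the probability mass is covered
    lo, hi = points.min(0) - pad, points.max(0) + pad
    xs = np.linspace(lo[0], hi[0], n)
    ys = np.linspace(lo[1], hi[1], n)
    gx, gy = np.meshgrid(xs, ys)
    dens = kde2d(points, np.column_stack([gx.ravel(), gy.ravel()]), h)
    cell = (xs[1] - xs[0]) * (ys[1] - ys[0])
    order = np.argsort(dens)[::-1]              # densest cells first
    mass = np.cumsum(dens[order]) * cell
    k = np.searchsorted(mass, 0.95) + 1         # cells needed for 95% of mass
    return k * cell

rng = np.random.default_rng(1)
pts = rng.standard_normal((200, 2))             # simulated animal relocations
area = home_range_area(pts, h=0.5)
```

The sample-size effect studied in the paper shows up here directly: with few relocations the estimated 95% area becomes both biased and highly variable.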
New thresholds for Primordial Black Hole formation during the QCD phase transition
NASA Astrophysics Data System (ADS)
Sobrinho, J. L. G.; Augusto, P.; Gonçalves, A. L.
2016-08-01
Primordial Black Holes (PBHs) might have formed in the early Universe as a consequence of the collapse of density fluctuations with an amplitude above a critical value δc: the formation threshold. Although for a radiation-dominated Universe δc remains constant, if the Universe experiences some dust-like phases (e.g. phase transitions) δc might decrease, improving the chances of PBH formation. We studied the evolution of δc during the QCD phase transition epoch within three different models: Bag Model (BM), Lattice Fit Model (LFM), and Crossover Model (CM). We found that the reduction on the background value of δc can be as high as 77% (BM), which might imply a ~10⁻¹⁰ probability of PBHs forming at the QCD epoch.
Big bang nucleosynthesis and ΛQCD
NASA Astrophysics Data System (ADS)
Kneller, James P.; McLaughlin, Gail C.
2003-11-01
Big bang nucleosynthesis (BBN) has increasingly become the tool of choice for investigating the permitted variation of fundamental constants during the earliest epochs of the Universe. Here we present a BBN calculation that has been modified to permit changes in the QCD scale, ΛQCD. The primary effects of changing the QCD scale upon BBN are through the deuteron binding energy BD and the neutron-proton mass difference δmnp, which both play crucial roles in determining the primordial abundances. In this paper we show how a simplified BBN calculation allows us to restrict the nuclear data we need to just BD and δmnp yet still gives useful results so that any variation in ΛQCD may be constrained via the corresponding shifts in BD and δmnp by using the current estimates of the primordial deuterium abundance and helium mass fraction. The simplification predicts the helium-4 and deuterium abundances to within 1% and 50%, respectively, when compared with the results of a standard BBN code. But ΛQCD also affects much of the remaining required nuclear input so this method introduces a systematic error into the calculation and we find a degeneracy between BD and δmnp. We show how increased understanding of the relationship of the pion mass and/or BD to other nuclear parameters, such as the binding energy of tritium and the cross section of T+D→4He+n, would yield constraints upon any change in BD and δmnp at the 10% level.
Machine learning algorithms for damage detection: Kernel-based approaches
NASA Astrophysics Data System (ADS)
Santos, Adam; Figueiredo, Eloi; Silva, M. F. M.; Sales, C. S.; Costa, J. C. W. A.
2016-02-01
This paper presents four kernel-based algorithms for damage detection under varying operational and environmental conditions, based on the one-class support vector machine, support vector data description, kernel principal component analysis and greedy kernel principal component analysis. Acceleration time-series from an array of accelerometers were obtained from a laboratory structure and used for performance comparison. The main contributions of this study are the applicability of the proposed algorithms for damage detection and the comparison of their classification performance with that of four other algorithms already considered reliable approaches in the literature. All proposed algorithms proved to have better classification performance than the previous ones.
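Of the four algorithms compared, kernel principal component analysis admits a compact sketch: a test point's damage score is its feature-space reconstruction error after projecting onto the principal subspace learned from baseline (healthy) data. The numpy implementation below is a generic illustration of this standard construction, not the authors' code; kernel, gamma, and component count are assumptions:

```python
import numpy as np

def rbf(X, Y, g):
    # Gaussian kernel matrix between row sets X and Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-g * d2)

class KPCANovelty:
    # Kernel PCA novelty detector: score = squared feature-space
    # reconstruction error after projecting on the top components.
    def __init__(self, gamma=0.5, n_comp=5):
        self.g, self.m = gamma, n_comp

    def fit(self, X):
        self.X = X
        n = len(X)
        K = rbf(X, X, self.g)
        self.K_mean, self.K_col = K.mean(), K.mean(0)
        one = np.full((n, n), 1.0 / n)
        Kc = K - one @ K - K @ one + one @ K @ one      # double centering
        w, V = np.linalg.eigh(Kc)
        w, V = w[::-1][: self.m], V[:, ::-1][:, : self.m]
        self.A = V / np.sqrt(np.maximum(w, 1e-12))      # orthonormal directions
        return self

    def score(self, Z):
        kz = rbf(Z, self.X, self.g)
        kz_c = kz - kz.mean(1, keepdims=True) - self.K_col + self.K_mean
        p = kz_c @ self.A                               # projections on subspace
        kzz_c = 1.0 - 2 * kz.mean(1) + self.K_mean      # centered k(z, z)
        return kzz_c - (p ** 2).sum(1)                  # reconstruction error

rng = np.random.default_rng(2)
train = 0.3 * rng.standard_normal((100, 2))             # "healthy" condition data
model = KPCANovelty().fit(train)
scores = model.score(np.array([[0.0, 0.1], [4.0, 4.0]]))
```

A point resembling the healthy data is reconstructed well (low score), while an off-manifold point, such as one from a damaged condition, scores high; thresholding the score gives the detector.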
Monte Carlo Code System for Electron (Positron) Dose Kernel Calculations.
Energy Science and Technology Software Center (ESTSC)
1999-05-12
Version 00 KERNEL performs dose kernel calculations for an electron (positron) isotropic point source in an infinite homogeneous medium. First, the auxiliary code PRELIM is used to prepare cross section data for the considered medium. Then the KERNEL code simulates the transport of electrons and bremsstrahlung photons through the medium until all particles reach their cutoff energies. The deposited energy is scored in concentric spherical shells at a radial distance ranging from zero to twice the source particle range.
Bridging the gap between the KERNEL and RT-11
Hendra, R.G.
1981-06-01
A software package is proposed to allow users of the PL-11 language, and the LSI-11 KERNEL in general, to use their PL-11 programs under RT-11. Further, some general purpose extensions to the KERNEL are proposed that facilitate some number conversions and string manipulations. A Floating Point Package of procedures to allow full use of the hardware floating point capability of the LSI-11 computers is proposed. Extensions to the KERNEL that allow a user to read, write and delete disc files in the manner of RT-11 are also proposed. A device directory listing routine is also included.
Kernel simplex growing algorithm for hyperspectral endmember extraction
NASA Astrophysics Data System (ADS)
Zhao, Liaoying; Zheng, Junpeng; Li, Xiaorun; Wang, Lijiao
2014-01-01
In order to effectively extract endmembers for hyperspectral imagery where the linear mixing model may not be appropriate due to multiple scattering effects, this paper extends the simplex growing algorithm (SGA) to its kernel version. A new simplex volume formula without dimension reduction is used in SGA to form a new simplex growing algorithm (NSGA). The original data are nonlinearly mapped into a high-dimensional space where the scattering effects can be ignored. To avoid determining the complex nonlinear mapping explicitly, a kernel function is used to extend the NSGA to the kernel NSGA (KNSGA). Experimental results on simulated and real data prove that the proposed KNSGA approach outperforms SGA and NSGA.
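The key ingredient of such a kernelization is that simplex volumes in the implicit feature space can be computed from the kernel alone, via pairwise feature-space distances and the Cayley-Menger determinant. The sketch below illustrates that generic idea (it is not the paper's specific volume formula); with a linear kernel it reduces to the ordinary simplex volume:

```python
import numpy as np
from math import factorial

def kernel_dist2(X, k):
    # Pairwise squared feature-space distances via the kernel trick:
    # ||phi(xi) - phi(xj)||^2 = k(xi,xi) - 2 k(xi,xj) + k(xj,xj)
    K = np.array([[k(a, b) for b in X] for a in X])
    d = np.diag(K)
    return d[:, None] - 2 * K + d[None, :]

def simplex_volume(X, k):
    # Cayley-Menger determinant: volume of the simplex spanned by the
    # mapped points, from kernel distances alone (no explicit mapping)
    m = len(X) - 1
    D = kernel_dist2(X, k)
    CM = np.ones((m + 2, m + 2))
    CM[0, 0] = 0.0
    CM[1:, 1:] = D
    coef = (-1) ** (m + 1) / (2 ** m * factorial(m) ** 2)
    return np.sqrt(max(coef * np.linalg.det(CM), 0.0))

linear = lambda a, b: float(np.dot(a, b))
tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
area = simplex_volume(tri, linear)   # with a linear kernel: plain triangle area
```

A simplex-growing endmember search then repeatedly adds the candidate pixel that maximizes this volume, with `k` replaced by a nonlinear kernel such as the Gaussian.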
KOVCHEGOV,Y.V.
2000-04-25
The authors derive an equation determining the small-x evolution of the F{sub 2} structure function of a large nucleus which resums a cascade of gluons in the leading logarithmic approximation using Mueller's color dipole model. In the traditional language it corresponds to resummation of the pomeron fan diagrams, originally conjectured in the GLR equation. The authors show that the solution of the equation describes the physics of structure functions at high partonic densities, thus allowing them to gain some understanding of the most interesting and challenging phenomena in small-x physics--saturation.
Understanding QCD at high density from a Z3 -symmetric QCD-like theory
NASA Astrophysics Data System (ADS)
Kouno, Hiroaki; Kashiwa, Kouji; Takahashi, Junichi; Misumi, Tatsuhiro; Yahiro, Masanobu
2016-03-01
We investigate QCD at large μ/T by using a Z3-symmetric SU(3) gauge theory, where μ is the quark-number chemical potential and T is temperature. We impose the flavor-dependent twist boundary condition on quarks in QCD. This QCD-like theory has the twist angle θ as a parameter; it agrees with QCD when θ = 0 and becomes Z3-symmetric when θ = 2π/3. For both QCD and the Z3-symmetric SU(3) gauge theory, the phase diagram is drawn in the μ-T plane with the Polyakov-loop extended Nambu-Jona-Lasinio model. In the Z3-symmetric SU(3) gauge theory, the Polyakov loop φ is zero in the confined phase appearing at T ≲ 200 MeV and μ ≲ 300 MeV. The perfectly confined phase never coexists with the color superconducting (CSC) phase, since a finite diquark condensate in the CSC phase breaks Z3 symmetry and then makes φ finite. When μ ≳ 300 MeV, the CSC phase is more stable than the perfectly confined phase at T ≲ 100 MeV. Meanwhile, the chiral symmetry can be broken in the perfectly confined phase, since the chiral condensate is Z3 invariant. Consequently, the perfectly confined phase is divided into the perfectly confined phase without chiral symmetry restoration in a region of μ ≲ 300 MeV and T ≲ 200 MeV, and the perfectly confined phase with chiral symmetry restoration in a region of μ ≳ 300 MeV and 100 ≲ T ≲ 200 MeV. At low temperature, the basic phase structure of the Z3-symmetric QCD-like theory remains in QCD. Properties of the sign problem in the Z3-symmetric theory are also discussed. We discuss a numerical framework to evaluate observables at θ = 0 from those at θ = 2π/3.
Mapping the QCD Phase Transition with Accreting Compact Stars
Blaschke, D.; Poghosyan, G.; Grigorian, H.
2008-10-29
We discuss an idea for how accreting millisecond pulsars could contribute to the understanding of the QCD phase transition in the high-density nuclear matter equation of state (EoS). It is based on two ingredients, the first one being a ''phase diagram'' of rapidly rotating compact star configurations in the plane of spin frequency and mass, determined with state-of-the-art hybrid equations of state, allowing for a transition to color superconducting quark matter. The second is the study of spin-up and accretion evolution in this phase diagram. We show that the quark matter phase transition leads to a characteristic line in the {omega}-M plane, the phase border between neutron stars and hybrid stars with a quark matter core. Along this line a drop in the pulsar's moment of inertia entails a waiting point phenomenon in the accreting millisecond pulsar (AMXP) evolution: most of these objects should therefore be found along the phase border in the {omega}-M plane, which may be viewed as the AMXP analog of the main sequence in the Hertzsprung-Russell diagram for normal stars. In order to prove the existence of a high-density phase transition in the cores of compact stars we need population statistics for AMXPs with sufficiently accurate determination of their masses, spin frequencies and magnetic fields.
QCD and Asymptotic Freedom:. Perspectives and Prospects
NASA Astrophysics Data System (ADS)
Wilczek, Frank
QCD is now a mature theory, and it is possible to begin to view its place in the conceptual universe of physics with an appropriate perspective. There is a certain irony in the achievements of QCD. For the problems which initially drove its development — specifically, the desire to understand in detail the force that holds atomic nuclei together, and later the desire to calculate the spectrum of hadrons and their interactions — only limited insight has been achieved. However, I shall argue that QCD is actually more special and important a theory than one had any right to anticipate. In many ways, the importance of the solution transcends that of the original motivating problems. After elaborating on these quasiphilosophical remarks, I discuss two current frontiers of physics that illustrate the continuing vitality of the ideas. The recent wealth of beautiful precision experiments measuring the parameters of the standard model have made it possible to consider the unification of couplings in unprecedented quantitative detail. One central result emerging from these developments is a tantalizing hint of virtual supersymmetry. The possibility of phase transitions in matter at temperatures of order ~10² MeV, governed by QCD dynamics, is of interest from several points of view. Besides having a certain intrinsic grandeur, the question “Does the nature of matter change qualitatively, as it is radically heated?” is important for cosmology, relevant to planned high-energy heavy-ion collision experiments, and provides a promising arena for numerical simulations of QCD. Recent numerical work seems to be consistent with expectations suggested by renormalization group analysis of the potential universality classes of the QCD chiral phase transition; specifically, that the transition is second-order for two species of massless quarks but first order otherwise. There is an interesting possibility of long-range correlations in heavy ion collisions due to the creation of
Bilinear analysis for kernel selection and nonlinear feature extraction.
Yang, Shu; Yan, Shuicheng; Zhang, Chao; Tang, Xiaoou
2007-09-01
This paper presents a unified criterion, Fisher + kernel criterion (FKC), for feature extraction and recognition. This new criterion is intended to extract the most discriminant features in different nonlinear spaces and then fuse these features under a unified measurement. Thus, FKC can simultaneously achieve nonlinear discriminant analysis and kernel selection. In addition, we present an efficient algorithm, Fisher + kernel analysis (FKA), which utilizes bilinear analysis to optimize the new criterion. The FKA algorithm can alleviate the ill-posed problem that exists in traditional kernel discriminant analysis (KDA) and usually has no singularity problem. The effectiveness of our proposed algorithm is validated by a series of face-recognition experiments on several different databases. PMID:18220192
A kernel adaptive algorithm for quaternion-valued inputs.
Paul, Thomas K; Ogunfunmi, Tokunbo
2015-10-01
The use of quaternion data can provide benefits in applications like robotics and image recognition, particularly for performing transforms in 3-D space. Here, we describe a kernel adaptive algorithm for quaternions. A least mean square (LMS)-based method was used, resulting in the derivation of the quaternion kernel LMS (Quat-KLMS) algorithm. Deriving this algorithm required describing the idea of a quaternion reproducing kernel Hilbert space (RKHS), as well as kernel functions suitable for quaternions. A modified HR calculus for Hilbert spaces was used to find the gradient of cost functions defined on a quaternion RKHS. In addition, the use of widely linear (or augmented) filtering is proposed to improve performance. The benefit of the Quat-KLMS and widely linear forms in learning nonlinear transformations of quaternion data is illustrated with simulations. PMID:25594982
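As a real-valued simplification of the kernel LMS idea underlying Quat-KLMS (the quaternion version additionally needs the HR-calculus machinery described above), the basic algorithm can be sketched as follows; the class and parameter names are illustrative, not from the paper:

```python
import numpy as np

def gauss_kernel(x, y, sigma=1.0):
    """Gaussian kernel between two sample vectors."""
    return np.exp(-np.sum((np.asarray(x) - np.asarray(y)) ** 2) / (2 * sigma ** 2))

class KLMS:
    """Kernel LMS: ordinary LMS performed in the RKHS induced by the kernel.

    Each training sample becomes a dictionary center whose weight is the
    step size times the instantaneous prediction error.
    """
    def __init__(self, eta=0.2, sigma=1.0):
        self.eta, self.sigma = eta, sigma
        self.centers, self.weights = [], []

    def predict(self, x):
        return sum(w * gauss_kernel(c, x, self.sigma)
                   for c, w in zip(self.centers, self.weights))

    def update(self, x, d):
        e = d - self.predict(x)               # a-priori prediction error
        self.centers.append(np.asarray(x, dtype=float))
        self.weights.append(self.eta * e)     # dictionary grows by one center
        return e

# learn the nonlinear map d = x^2 by cycling over a few samples
f = KLMS(eta=0.2, sigma=1.0)
errs = [f.update([x], x ** 2) for _ in range(60) for x in (0.5, 1.0, 1.5, 0.0)]
```

The linearly growing dictionary shown here is the naive variant; practical implementations prune or sparsify it.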
Intelligent classification methods of grain kernels using computer vision analysis
NASA Astrophysics Data System (ADS)
Lee, Choon Young; Yan, Lei; Wang, Tianfeng; Lee, Sang Ryong; Park, Cheol Woo
2011-06-01
In this paper, a digital image analysis method was developed to classify seven kinds of individual grain kernels (common rice, glutinous rice, rough rice, brown rice, buckwheat, common barley and glutinous barley) widely planted in Korea. A total of 2800 color images of individual grain kernels were acquired as a data set. Seven color and ten morphological features were extracted and processed by linear discriminant analysis to improve the efficiency of the identification process. The output features from linear discriminant analysis were used as input to the four-layer back-propagation network to classify different grain kernel varieties. The data set was divided into three groups: 70% for training, 20% for validation, and 10% for testing the network. The classification experimental results show that the proposed method is able to classify the grain kernel varieties efficiently.
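The LDA-then-classify stage of the pipeline above can be sketched in plain numpy; this is a hedged illustration (the paper feeds the LDA outputs into a four-layer back-propagation network, for which a nearest-class-mean classifier stands in here, and the synthetic features are stand-ins for the 17 colour/morphology features):

```python
import numpy as np

def fit_lda(X, y, n_components=2):
    """Multi-class Fisher LDA: maximise between-class over within-class scatter."""
    classes = np.unique(y)
    mean = X.mean(axis=0)
    Sw = np.zeros((X.shape[1], X.shape[1]))   # within-class scatter
    Sb = np.zeros_like(Sw)                    # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        d = (mc - mean)[:, None]
        Sb += len(Xc) * (d @ d.T)
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(evals.real)[::-1]      # keep the most discriminant axes
    return evecs.real[:, order[:n_components]]

def nearest_mean_predict(Z, Ztrain, ytrain):
    """Assign each projected sample to the closest class centroid."""
    classes = np.unique(ytrain)
    cents = np.array([Ztrain[ytrain == c].mean(axis=0) for c in classes])
    d = ((Z[:, None, :] - cents[None, :, :]) ** 2).sum(axis=2)
    return classes[np.argmin(d, axis=1)]

# synthetic stand-in: 3 grain classes, 17 features per kernel image
rng = np.random.default_rng(1)
means = rng.normal(scale=4.0, size=(3, 17))
X = np.vstack([rng.normal(m, 1.0, size=(60, 17)) for m in means])
y = np.repeat([0, 1, 2], 60)
W = fit_lda(X, y, n_components=2)
pred = nearest_mean_predict(X @ W, X @ W, y)
acc = (pred == y).mean()
```

On well-separated synthetic classes the projected features classify almost perfectly, which is the efficiency gain the LDA preprocessing step is meant to provide.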
Kernel-based Linux emulation for Plan 9.
Minnich, Ronald G.
2010-09-01
CNKemu is a kernel-based system for the 9k variant of the Plan 9 kernel. It is designed to provide transparent binary support for programs compiled for IBM's Compute Node Kernel (CNK) on the Blue Gene series of supercomputers. This support allows users to build applications with the standard Blue Gene toolchain, including C++ and Fortran compilers. While the CNK is not Linux, IBM designed the CNK so that the user interface has much in common with the Linux 2.0 system call interface. The Plan 9 CNK emulator hence provides the foundation of kernel-based Linux system call support on Plan 9. In this paper we discuss CNKemu's implementation and some of its more interesting features, such as the ability to easily intermix Plan 9 and Linux system calls.
Constructing Bayesian formulations of sparse kernel learning methods.
Cawley, Gavin C; Talbot, Nicola L C
2005-01-01
We present here a simple technique that simplifies the construction of Bayesian treatments of a variety of sparse kernel learning algorithms. An incomplete Cholesky factorisation is employed to modify the dual parameter space, such that the Gaussian prior over the dual model parameters is whitened. The regularisation term then corresponds to the usual weight-decay regulariser, allowing the Bayesian analysis to proceed via the evidence framework of MacKay. There is, in addition, a useful by-product of the incomplete Cholesky factorisation algorithm: it also identifies a subset of the training data forming an approximate basis for the entire dataset in the kernel-induced feature space, resulting in a sparse model. Bayesian treatments of the kernel ridge regression (KRR) algorithm, with both constant and heteroscedastic (input-dependent) variance structures, and of kernel logistic regression (KLR) are provided as illustrative examples of the proposed method, which we hope will be more widely applicable. PMID:16085387
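The incomplete (pivoted) Cholesky factorisation at the heart of this construction can be sketched as follows; this is a generic greedy implementation, not the authors' code, and the tolerance value is illustrative:

```python
import numpy as np

def pivoted_cholesky(K, tol=1e-8, max_rank=None):
    """Greedy pivoted Cholesky of a PSD Gram matrix: K ~= L @ L.T.

    The selected pivot indices are the by-product mentioned in the
    abstract: a subset of training points forming an approximate basis
    for the dataset in the kernel-induced feature space.
    """
    n = K.shape[0]
    d = np.diag(K).astype(float).copy()        # residual diagonal
    L = np.zeros((n, 0))
    pivots = []
    for _ in range(max_rank if max_rank is not None else n):
        j = int(np.argmax(d))
        if d[j] <= tol:
            break                              # residual negligible: stop early
        col = (K[:, j] - L @ L[j, :]) / np.sqrt(d[j])
        L = np.column_stack([L, col])
        d -= col ** 2
        pivots.append(j)
    return L, pivots

# a rank-2 Gram matrix is recovered exactly after two pivots
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 2))
K = A @ A.T
L, pivots = pivoted_cholesky(K)
```

With K ≈ L Lᵀ in hand, reparametrising the dual coefficients through L is what whitens the Gaussian prior, so the regulariser reduces to ordinary weight decay.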
Kernel MAD Algorithm for Relative Radiometric Normalization
NASA Astrophysics Data System (ADS)
Bai, Yang; Tang, Ping; Hu, Changmiao
2016-06-01
The multivariate alteration detection (MAD) algorithm is commonly used in relative radiometric normalization. This algorithm is based on linear canonical correlation analysis (CCA), which can analyze only linear relationships among bands. Therefore, we first introduce a new version of MAD in this study based on the established method known as kernel canonical correlation analysis (KCCA). The proposed method effectively extracts the non-linear and complex relationships among variables. We then conduct relative radiometric normalization experiments with both the linear CCA and the KCCA version of the MAD algorithm, using Landsat-8 data of Beijing, China, and Gaofen-1 (GF-1) data derived from South China. Finally, we analyze the difference between the two methods. Results show that the KCCA-based MAD can be satisfactorily applied to relative radiometric normalization, as it describes the nonlinear relationship between multi-temporal images well. This work is the first attempt to apply a KCCA-based MAD algorithm to relative radiometric normalization.
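The linear-CCA baseline of MAD can be sketched in a few lines of numpy; this is a generic textbook formulation (SVD of the whitened cross-covariance), not the authors' implementation, and the synthetic "images" are illustrative:

```python
import numpy as np

def inv_sqrt(C, eps=1e-10):
    """Inverse symmetric square root of a covariance matrix."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(1.0 / np.sqrt(np.maximum(w, eps))) @ V.T

def linear_mad(X, Y):
    """MAD variates of two multi-band images (rows = pixels, cols = bands).

    Canonical variates come from an SVD of the whitened cross-covariance;
    the i-th MAD variate is the difference U_i - V_i of paired canonical
    variates, here ordered by decreasing canonical correlation.
    """
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx, Cyy = Xc.T @ Xc / n, Yc.T @ Yc / n
    Cxy = Xc.T @ Yc / n
    Wx, Wy = inv_sqrt(Cxx), inv_sqrt(Cyy)
    U, rho, Vt = np.linalg.svd(Wx @ Cxy @ Wy)
    a, b = Wx @ U, Wy @ Vt.T          # canonical coefficient vectors
    return Xc @ a - Yc @ b, rho

# two "acquisitions" related by a near-identity band mixing plus noise
rng = np.random.default_rng(2)
X = rng.standard_normal((500, 3))
M = np.eye(3) + 0.1 * rng.standard_normal((3, 3))
Y = X @ M + 0.01 * rng.standard_normal((500, 3))
mad, rho = linear_mad(X, Y)
```

When the two scenes differ only by a linear band mixing, the canonical correlations approach one and the MAD variates shrink toward zero; the KCCA variant replaces the covariances above with kernel Gram matrices to capture nonlinear band relationships.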
SCAP. Point Kernel Single or Albedo Scatter
Disney, R.K.; Bevan, S.E.
1982-08-05
SCAP solves for radiation transport in complex geometries using the single-scatter or albedo-scatter point kernel method. The program is designed to calculate the neutron or gamma-ray radiation level at detector points located within or outside a complex radiation scatter source geometry or a user-specified discrete scattering volume. The geometry is described by zones bounded by intersecting quadratic surfaces with an arbitrary maximum number of boundary surfaces per zone. The anisotropic point sources are described as point-wise energy-dependent distributions of polar angles on a meridian; isotropic point sources may also be specified. The attenuation function for gamma rays is an exponential function on the primary source leg and the scatter leg, with a buildup factor approximation to account for multiple scatter on the scatter leg. The neutron attenuation function is an exponential function using neutron removal cross sections on the primary source leg and scatter leg. Line or volumetric sources can be represented as distributions of isotropic point sources, with uncollided line-of-sight attenuation and buildup calculated between each source point and the detector point.
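The gamma-ray attenuation kernel described above (exponential attenuation with a buildup factor) can be illustrated with a minimal sketch; the linear buildup form and the coefficient value are illustrative assumptions, not SCAP's tabulated buildup data:

```python
import math

def point_kernel_flux(S, mu, r, a=1.0):
    """Point-kernel flux from an isotropic point source with buildup.

    S  : source strength (photons/s)                       [illustrative]
    mu : total linear attenuation coefficient (1/cm)       [illustrative]
    r  : source-detector distance (cm)
    a  : linear buildup coefficient, B(x) = 1 + a*x        [assumed form]

    flux = S * B(mu*r) * exp(-mu*r) / (4*pi*r^2)
    """
    mfp = mu * r                      # optical thickness in mean free paths
    buildup = 1.0 + a * mfp           # accounts crudely for multiple scatter
    return S * buildup * math.exp(-mfp) / (4.0 * math.pi * r ** 2)

flux_near = point_kernel_flux(S=1e9, mu=0.1, r=10.0)
flux_far  = point_kernel_flux(S=1e9, mu=0.1, r=100.0)
```

Summing this kernel over a distribution of point sources is exactly how the abstract says line and volumetric sources are represented.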
Local Kernel for Brains Classification in Schizophrenia
NASA Astrophysics Data System (ADS)
Castellani, U.; Rossato, E.; Murino, V.; Bellani, M.; Rambaldelli, G.; Tansella, M.; Brambilla, P.
In this paper a novel framework for brain classification is proposed in the context of mental health research. A learning-by-example method is introduced by combining local measurements with a nonlinear Support Vector Machine. Instead of considering a voxel-by-voxel comparison between patients and controls, we focus on landmark points which are characterized by local region descriptors, namely the Scale Invariant Feature Transform (SIFT). Matching is then obtained by introducing a local kernel for which the samples are represented by unordered sets of features. Moreover, a new weighting approach is proposed to take into account the discriminative relevance of the detected groups of features. Experiments have been performed on a set of 54 patients with schizophrenia and 54 normal controls, on which regions of interest (ROIs) have been manually traced by experts. Preliminary results on the Dorso-lateral PreFrontal Cortex (DLPFC) region are promising: a successful classification rate of up to 75% has been obtained with this technique, and the performance improves up to 85% when the subjects are stratified by sex.
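A common way to build a kernel over unordered sets of local descriptors, as the abstract describes, is a normalised summation match kernel; the sketch below uses a Gaussian local kernel and is a generic illustration rather than the authors' weighted variant:

```python
import numpy as np

def match_kernel(X, Y, sigma=1.0):
    """Normalised summation match kernel between two unordered sets of
    local descriptors (one descriptor per row of X and Y).

    K(X, Y) = mean over all pairs (x, y) of exp(-||x - y||^2 / (2 sigma^2)),
    so the sets may have different cardinalities and no ordering is assumed.
    """
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2)).mean()

# two "images" with different numbers of 8-D descriptors
rng = np.random.default_rng(3)
A = rng.standard_normal((5, 8))
B = rng.standard_normal((7, 8))
```

The resulting Gram matrix can be fed directly to a nonlinear SVM; the paper's weighting scheme would additionally scale each pair's contribution by the discriminative relevance of its feature group.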
Temporal-kernel recurrent neural networks.
Sutskever, Ilya; Hinton, Geoffrey
2010-03-01
A Recurrent Neural Network (RNN) is a powerful connectionist model that can be applied to many challenging sequential problems, including problems that naturally arise in language and speech. However, RNNs are extremely hard to train on problems that have long-term dependencies, where it is necessary to remember events for many timesteps before using them to make a prediction. In this paper we consider the problem of training RNNs to predict sequences that exhibit significant long-term dependencies, focusing on a serial recall task where the RNN needs to remember a sequence of characters for a large number of steps before reconstructing it. We introduce the Temporal-Kernel Recurrent Neural Network (TKRNN), which is a variant of the RNN that can cope with long-term dependencies much more easily than a standard RNN, and show that the TKRNN develops short-term memory that successfully solves the serial recall task by representing the input string with a stable state of its hidden units. PMID:19932002
Phoneme recognition with kernel learning algorithms
NASA Astrophysics Data System (ADS)
Namarvar, Hassan H.; Berger, Theodore W.
2004-10-01
An isolated phoneme recognition system is proposed using time-frequency domain analysis and support vector machines (SVMs). The TIMIT corpus, which contains a total of 6300 sentences, ten sentences spoken by each of 630 speakers from eight major dialect regions of the United States, was used in this experiment. The provided time-aligned phonetic transcription was used to extract phonemes from speech samples. A 55-output classifier system was designed, corresponding to 55 classes of phonemes, and trained with the kernel learning algorithms. The training dataset was extracted from clean training samples; a portion of the database, i.e., 65,338 samples, was used to train the system. The performance of the system on the training dataset was 76.4%. The whole test dataset of the TIMIT corpus, i.e., 55,655 samples, was used to test the generalization of the system. The performance of the system on the test dataset was 45.3%. This approach is currently under development to extend the algorithm to continuous phoneme recognition. [Work supported in part by grants from DARPA, NASA, and ONR.]
Nonlinear stochastic system identification of skin using volterra kernels.
Chen, Yi; Hunter, Ian W
2013-04-01
Volterra kernel stochastic system identification is a technique that can be used to capture and model nonlinear dynamics in biological systems, including the nonlinear properties of skin during indentation. A high-bandwidth, high-stroke Lorentz force linear actuator system was developed and used to test the mechanical properties of bulk skin and underlying tissue in vivo, using a non-white input force and measuring an output position. These short tests (5 s) were conducted in an indentation configuration normal to the skin surface and in an extension configuration tangent to the skin surface. Volterra kernel solution methods were used, including a fast least-squares procedure and an orthogonalization solution method. Practical modifications necessary for working with low-pass filtered inputs, such as frequency-domain filtering, are also described. A simple linear stochastic system identification technique had a variance accounted for (VAF) of less than 75%. Representations using the first and second Volterra kernels had a much higher VAF (90-97%) as well as a lower Akaike information criterion (AICc), indicating that the Volterra kernel models were more efficient. The experimental second Volterra kernel matches well with results from a dynamic-parameter nonlinearity model with fixed mass as a function of depth, as well as stiffness and damping that increase with depth into the skin. A study with 16 subjects showed that the kernel peak values have mean coefficients of variation (CV) ranging from 3 to 8%, and that the kernel principal components were correlated with location on the body, subject mass, body mass index (BMI), and gender. These fast and robust methods for Volterra kernel stochastic system identification can be applied to the characterization of biological tissues, the diagnosis of skin diseases, and the determination of consumer product efficacy. PMID:23264003
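Second-order Volterra identification by least squares, the core of the approach above, can be sketched in plain numpy; this ordinary-least-squares version is illustrative (the paper uses a fast least-squares procedure and an orthogonalization method), and the simulated system is a made-up example:

```python
import numpy as np

def volterra_design(u, M):
    """Regressor matrix for a second-order Volterra series with memory M.

    Columns: a constant, the first-order lags u[n-i], and the
    second-order products u[n-i]*u[n-j] for i <= j.
    """
    rows = []
    for n in range(M - 1, len(u)):
        lag = u[n - M + 1:n + 1][::-1]            # u[n], u[n-1], ..., u[n-M+1]
        quad = [lag[i] * lag[j] for i in range(M) for j in range(i, M)]
        rows.append(np.concatenate(([1.0], lag, quad)))
    return np.array(rows)

# simulate a known quadratic system, then recover it by least squares
rng = np.random.default_rng(0)
u = rng.standard_normal(2000)                     # non-white in the paper; white here
u1 = np.r_[0.0, u[:-1]]                           # u delayed by one sample
y = 0.5 * u - 0.2 * u1 + 0.3 * u * u1             # true second-order Volterra system
M = 3
X = volterra_design(u, M)
theta, *_ = np.linalg.lstsq(X, y[M - 1:], rcond=None)
yhat = X @ theta
vaf = 1.0 - np.var(y[M - 1:] - yhat) / np.var(y[M - 1:])  # variance accounted for
```

Because the simulated system lies inside the model class, the VAF here is essentially one; on real skin data the abstract reports 90-97% for the second-order model versus under 75% for a linear one.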